
Storage streams use 'complete' event for end of stream but built-in streams use 'finish'  #362

Closed
@ryanseys

Description


As raised in #340, I looked into why there was a discrepancy between what the developer expected and what actually happens. There was a suggestion to update our docs, but the docs aren't the issue here. A snippet from the tests shows the problem:

Using 'finish' event:

file.createReadStream()
.pipe(fs.createWriteStream(tmpFilePath))
.on('error', done)
.on('finish', function() {
  file.delete(function(err) {
    assert.ifError(err);

    fs.readFile(tmpFilePath, function(err, data) {
      assert.equal(data, fileContent);
      done();
    });
  });
});

Using 'complete' event:

var file = bucket.file(filenames[0]);
fs.createReadStream(files.logo.path)
  .pipe(file.createWriteStream())
  .on('error', done)
  .on('complete', function() {
    file.copy(filenames[1], function(err, copiedFile) {
      assert.ifError(err);
      copiedFile.copy(filenames[2], done);
    });
  });

It seems the only difference is the type of stream being piped to: in the first case it's a regular write stream from fs, and in the second it's our implementation of the storage file write stream.

So my question is: should we use a consistent 'finish' event everywhere, or is this by design or otherwise okay?

Metadata

Labels

api: storage (Issues related to the Cloud Storage API)
type: question (Request for information or clarification. Not an issue.)
