Resumable uploads aren't tested #825

Closed
@stephenplusplus

Description

We kind of are, but not correctly. We need to actually destroy the stream mid-upload. Our test should look more like this:

// Assumes helpers from the surrounding system test file:
// fs, assert, through (through2), bucket, and the files.big fixture.
it('should resume an upload after an interruption', function(done) {
  fs.stat(files.big.path, function(err, metadata) {
    assert.ifError(err);

    // Use a random name to force an empty ConfigStore cache.
    var file = bucket.file('LargeFile' + Date.now());
    var fileSize = metadata.size;

    upload({ interrupt: true }, function(err) {
      assert.strictEqual(err.message, 'Interrupted.');

      upload({ interrupt: false }, function(err) {
        assert.ifError(err);

        assert.equal(file.metadata.size, fileSize);
        file.delete(done);
      });
    });

    function upload(opts, callback) {
      var ws = file.createWriteStream();
      var sizeStreamed = 0;

      fs.createReadStream(files.big.path)
        .pipe(through(function(chunk, enc, next) {
          sizeStreamed += chunk.length;

          if (opts.interrupt && sizeStreamed >= fileSize / 2) {
            // Stop sending data halfway through: push the final chunk,
            // then tear down both streams without calling next().
            this.push(chunk);
            this.destroy();
            ws.destroy(new Error('Interrupted.'));
          } else {
            this.push(chunk);
            next();
          }
        }))
        .pipe(ws)
        .on('error', callback)
        .on('finish', callback);
    }
  });
});

But more importantly, the file we're using is apparently too small. Either the chunks aren't actually being emitted by request (I think they are, though, because they're supposed to be drained in 16 KB chunks), or GCS doesn't recognize storing any data for a resumable upload until it reaches some size threshold.
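
One quick way to check the first theory is to log the size of each chunk as it is drained from the pipeline. A minimal sketch (the file path here is hypothetical, only for illustration):

var fs = require('fs');
var through = require('through2');

// Hypothetical local file, used only to observe chunk sizes.
fs.createReadStream('large-file.dat')
  .pipe(through(function(chunk, enc, next) {
    console.log(chunk.length + ' bytes');
    next(null, chunk); // pass the chunk along unchanged
  }))
  .resume(); // keep the stream flowing so chunks are drained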

Our test file is 3 MB, and the upload cuts off halfway through. I tried with 10 MB, 20 MB, 31 MB, and 42 MB files; 42 MB and higher were the only ones that got a response from the API call that checks where the upload left off:

PUT {resumable_uri}
Content-Length: 0
Content-Range: bytes */*

(Note that the amount actually stored by the API would have been half the file size, so about 21 MB when the 42 MB file worked.)
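
For reference, here is a sketch of that status check using the request module. resumableUri stands in for the upload's session URI; a 308 "Resume Incomplete" response reports the stored byte range in its Range header:

var request = require('request');

request.put({
  uri: resumableUri, // the session URI returned when the upload began
  headers: {
    'Content-Length': 0,
    'Content-Range': 'bytes */*'
  }
}, function(err, resp) {
  if (err) {
    throw err;
  }

  // e.g. 308 'bytes=0-22020095' if ~21 MB were persisted.
  console.log(resp.statusCode, resp.headers.range);
});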

// @jgeewax

Labels

api: storage (Issues related to the Cloud Storage API)
