Vertx 3 - Proxy fileupload to S3


Patrick

Nov 27, 2015, 11:24:13 AM
to vert.x
Hi, 

I am trying to create a non-blocking, low-memory solution to proxy (rather large, >5 GB) files from an HttpServerRequest to S3 (using the SuperS3t code, https://github.com/spartango/SuperS3t). It all works fine if I buffer the file in memory and write it in one go, but it fails when using chunked writes. When the endHandler executes, S3 responds with an HTTP 501 complaining about the Transfer-Encoding header:

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NotImplemented</Code><Message>A header you provided implies functionality that is not implemented</Message><Header>Transfer-Encoding</Header><RequestId>4B2C926CF0A17F5C</RequestId><HostId>z+6pGNjvut3a6wUaZ5ILw3+0GqiLmj4+5KnfdzqR2+FzzJ6SBgKKPHwEthtORDEx1QTt2N4lg7U=</HostId></Error>


Here's (part of) the code I'm using:


HttpServerRequest req = rc.request();
req.setExpectMultipart(true);

Handler<HttpClientResponse> h = new Handler<HttpClientResponse>() {
  @Override
  public void handle(HttpClientResponse hcr) {
    System.out.println("S3 status code: " + hcr.statusCode());
    System.out.println("S3 status message: " + hcr.statusMessage());
    hcr.bodyHandler(new Handler<Buffer>() {
      @Override
      public void handle(Buffer buffer) {
        System.out.println("Response (" + buffer.length() + "): ");
        System.out.println(buffer.getString(0, buffer.length()));
      }
    });
  }
};

S3Client s3 = new S3Client("key", "key");
S3ClientRequest putRequest = s3.createPutRequest("bazana-demo-datasets", "test.csv", h);
putRequest.setChunked(true);
System.out.println("created the request");

req.uploadHandler(upload -> {
  upload.handler(chunk -> {
    System.out.println("Received a chunk of the upload of length " + chunk.length());
    putRequest.write(chunk);
  });
});

req.endHandler(request -> {
  putRequest.end();
});


Any ideas?


Thanks,

Patrick

               



Nat

Nov 27, 2015, 5:27:47 PM
to vert.x
S3's PUT operation does not support chunked upload. It requires you to know the content length beforehand.
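For illustration (plain Java, not the SuperS3t API): with Transfer-Encoding: chunked, each chunk is framed with its own hex-encoded size and the body ends with a zero-length chunk, so the receiver never learns a total Content-Length up front. That is the header S3's PUT rejects with the 501 above. A minimal sketch of the framing:

```java
import java.nio.charset.StandardCharsets;

public class ChunkedFraming {
    // Each chunk: size in hex, CRLF, the data, CRLF.
    static String frameChunk(byte[] data) {
        return Integer.toHexString(data.length) + "\r\n"
                + new String(data, StandardCharsets.UTF_8) + "\r\n";
    }

    // The body is terminated by a zero-length chunk.
    static String endOfBody() {
        return "0\r\n\r\n";
    }

    public static void main(String[] args) {
        String body = frameChunk("hello ".getBytes(StandardCharsets.UTF_8))
                + frameChunk("world".getBytes(StandardCharsets.UTF_8))
                + endOfBody();
        // Prints: 6\r\nhello \r\n5\r\nworld\r\n0\r\n\r\n
        System.out.println(body.replace("\r\n", "\\r\\n"));
    }
}
```

Since the total size is only known once the last chunk arrives, a proxy that streams chunks straight through cannot supply the Content-Length that a plain S3 PUT demands.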

Patrick Conway

Jun 27, 2016, 9:06:51 AM
to vert.x
Have a look at S3's multipart upload.

I'm trying to do the same thing at the moment.
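In case it helps: multipart upload sidesteps the Content-Length problem because each part is sized independently. You initiate the upload, PUT parts (each at least 5 MiB except the last, at most 10,000 parts per upload), then complete it. A minimal sketch of the part-size arithmetic only (the class and method names are mine; the limits are S3's documented ones):

```java
public class PartMath {
    static final long MIN_PART = 5L * 1024 * 1024; // 5 MiB minimum per part (except the last)
    static final int MAX_PARTS = 10_000;           // S3's per-upload part limit

    // How many parts a multipart upload needs for a given object and part size.
    static long partCount(long totalBytes, long partSize) {
        if (partSize < MIN_PART) throw new IllegalArgumentException("part size below 5 MiB");
        long parts = (totalBytes + partSize - 1) / partSize; // ceiling division
        if (parts > MAX_PARTS) throw new IllegalArgumentException("too many parts; increase part size");
        return parts;
    }

    public static void main(String[] args) {
        long fiveGiB = 5L * 1024 * 1024 * 1024;
        // 5 GiB split into 5 MiB parts -> 1024 parts
        System.out.println(partCount(fiveGiB, MIN_PART));
    }
}
```

A proxy only needs to buffer one part at a time (e.g. 5 MiB) rather than the whole 5 GB object, since each part PUT carries its own Content-Length.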



 

Paulo Lopes

Jun 27, 2016, 10:42:49 AM
to vert.x
As an (untested) alternative, you can set the Content-Length header manually and pump between the two streams.

Doug Galligan

Jun 27, 2016, 5:51:20 PM
to vert.x
To Paulo's point: as a proxy you should be able to read the length from your client's request and pass it on to S3. Here's an example that works for me when uploading with cURL (lightly cleaned-up code):

router.post("/os").handler(ctx -> {
    ctx.response().setChunked(true);

    String accessKey = "<Your Key Goes Here>";
    String secretKey = "<Your Secret Goes Here>";
    String bucket = "<Your Bucket Goes Here>";
    String endpoint = "<Your Endpoint Goes Here>";

    HttpServerRequest req = ctx.request();
    String contentLength = req.getHeader("Content-Length");

    // Pause reading the body of the request until the callbacks are hooked up in the Pump
    req.pause();

    S3Client s3c = new S3Client(vertx, accessKey, secretKey, null, endpoint);

    S3ClientRequest s3cr = s3c.createPutRequest(bucket, "filename", resp2 -> {
       ctx.response().end("All Done.\n");
    });
    s3cr.putHeader("Content-Length", contentLength);
    Pump pump = Pump.pump(req, s3cr);
    // End the connection of our request to the data store after our client is finished sending.
    req.endHandler(v -> s3cr.end());
    // Turn the spigot, let it flow, let it flow, don't hold it back anymore.....
    pump.start();
    req.resume();
});
