Working client-server example of 'Expect: 100-continue'?

andreas....@gmail.com

Mar 12, 2015, 12:34:17 PM
to mojol...@googlegroups.com
Recently we have been trying to teach our server to deal with client 'Expect: 100-continue' request headers correctly. We would like to respond with either a 100 or a 4xx status, depending on various aspects of the request headers available at that moment. A 100 would then require that we later send a 200 as well, while a 4xx would have to finish the ongoing request and discard any content already gathered. I'm sure there are plenty of problems to solve here, but we could not even get the very basics working.

Is there an example available of how this can or should be done?

Thanks,
--
andreas

sri

Mar 12, 2015, 1:05:59 PM
to mojol...@googlegroups.com
> Is there an example available of how this can or should be done?

It's not officially supported anymore. It didn't even make it into the 1.0 release, since the code was too complicated and literally nobody used it.

There used to be a continue_handler in the app class, which you could define to send response codes other than 100, and which the server would call if it existed.

And I don't think there's an elegant way to make it work today; you'd have to hook into request parser events (possibly "body" in Mojo::Content) and write your 100 response directly to the stream with Mojo::IOLoop.

    # Completely untested
    use Scalar::Util;
    hook after_build_tx => sub {
      my ($tx, $app) = @_;
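      # Weaken $tx so the closure below doesn't create a reference cycle with it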
      Scalar::Util::weaken $tx;
      $tx->req->content->on(body => sub {
        return unless my $expect = $tx->req->headers->expect;
        ...
        if (...) {
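          # The headers look acceptable, write the interim response straight to the stream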
          Mojo::IOLoop->stream($tx->connection)->write("HTTP/1.1 100 Continue\r\n\r\n");
        }
        else {
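          # Bail out of parsing the rest of the request by setting a parser error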
          $tx->req->error({message => 'We do not like this request'});
        }
      });
    };
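
For poking at it from the client side, here's a rough, equally untested sketch (host, port, and the /upload path are made up, and it assumes the hook above is wired into an app with a matching route). It uses a plain blocking socket and only sends the body after it has seen the interim response.

    # Completely untested as well, host, port, and path are placeholders
    use strict;
    use warnings;
    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new(
      PeerAddr => '127.0.0.1',
      PeerPort => 3000,
      Proto    => 'tcp'
    ) or die "Connect failed: $!";

    # Send only the headers first
    my $body = 'x' x 1024;
    print $sock "POST /upload HTTP/1.1\r\n"
      . "Host: 127.0.0.1\r\n"
      . "Content-Length: @{[length $body]}\r\n"
      . "Expect: 100-continue\r\n"
      . "Connection: close\r\n\r\n";

    # Send the body only if the server told us to continue
    my $line = <$sock>;
    print $sock $body if defined $line && $line =~ m!^HTTP/1\.\d 100!;

    # Dump whatever final response the server sends
    print while <$sock>;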

--
sebastian

sri

Mar 12, 2015, 1:33:17 PM
to mojol...@googlegroups.com
One more thing that might not have been totally clear: Mojolicious is completely event loop based, and we are always watching whether sockets become readable, even while we are still writing. So if you have a Mojolicious client and server on both sides, this whole topic becomes kind of moot, since you can bail out of parsing a huge request at any time by setting a parser error like I've done above. There would be no need to use 100 Continue, which is only useful for clients that can't read and write at the same time.
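
As a completely untested sketch of that alternative (the size limit, route, and error message are made up), you could reject an oversized upload as soon as the request headers have been parsed, without ever touching 100 Continue:

    use Mojolicious::Lite;
    use Scalar::Util;

    # Check the Content-Length as soon as the request headers have been
    # parsed and bail out by setting a parser error, no 100 Continue needed
    hook after_build_tx => sub {
      my ($tx, $app) = @_;
      Scalar::Util::weaken $tx;
      $tx->req->content->once(body => sub {
        my $len = $tx->req->headers->content_length // 0;
        $tx->req->error({message => 'Upload too large'}) if $len > 10_485_760;
      });
    };

    post '/upload' => sub { shift->render(text => 'Thanks for the upload') };

    app->start;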

--
sebastian

andreas....@gmail.com

Mar 12, 2015, 4:51:03 PM
to mojol...@googlegroups.com
We got the proposal from a curl programmer, and we had the impression that our routing happens pretty late for uploads, so we thought we should probably reject big uploads earlier when they might end up being rejected anyway. As I understand it, that is indeed the motivation behind the 100-continue idea.

Thanks for the code anyway; it does not look inelegant to me. We'll poke around with it and see if it fits the immediate need.

--
andreas

sri

Mar 12, 2015, 5:11:59 PM
to mojol...@googlegroups.com
> There would be no need to use 100 Continue, which is only useful for clients that can't read and write at the same time.

That statement is not entirely correct. 100 Continue is a bit of an idealistic approach: *if* it works correctly, you can prevent a huge upload *and* reuse the connection afterwards if it was keep-alive. But this is very, very hard to get right, and timeouts are involved that can and will cause things to fail randomly in fun ways.

--
sebastian