JSONRPC 2.0 over HTTP and Authorization Error codes


Victor Efimov

Nov 18, 2018, 3:20:29 PM
to JSON-RPC
Hello!
We use JSON-RPC 2.0 over HTTP(S), and we decided to implement authorization via the HTTP
Authorization header (the "Bearer" scheme).

I read the thread https://groups.google.com/forum/m/#!topic/json-rpc/PN462g49yL8 and understand that using the transport level for authorization is totally fine.

The question now: should an unauthorized event (bad bearer token) return HTTP 401 (with a WWW-Authenticate header), or
should it be HTTP 200 with an Error object ( https://www.jsonrpc.org/specification#error_object )?

If it's HTTP 401, should the content type be application/json with an error object too, or should it be completely unrelated to JSON-RPC (like text/plain with the text "Unauthorized")?

What if authorization is done by a reverse proxy in front of the JSON-RPC endpoint, which knows nothing about the fact that it is a JSON-RPC endpoint?

There is a page http://www.simple-is-better.org/json-rpc/transport_http.html describing JSON-RPC 2.0 over HTTP, but I can't tell whether it's an official specification, and it does not mention HTTP 401.

Nathan Fischer

Nov 20, 2018, 8:06:28 PM
to JSON-RPC
Ultimately, it's up to you. But I'd do something like
  • Use 401 or 403 depending on what's appropriate (a good explanation of the difference)
  • I wouldn't bother with the WWW-Authenticate header; it's really meant to inform a browser how to proceed with basic authentication, which you're not using
  • Return the JSON-RPC response object in the body
  • If your reverse proxy is doing authorization, then it's not really a reverse proxy. It's more of an API gateway, and you might just have to inform it what kind of response it should be returning
    • If your proxy is only doing authentication, then I would either have it pass the request on to your RPC endpoint with a "public" authentication context and let the RPC server build the response, or just have the proxy return 401 with a non-JSON-RPC body, under the assumption that it should have been obvious to the client that they needed to authenticate for RPC routes.
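For illustration, a 401 that still carries a JSON-RPC response object might look like the following sketch. The -32001 code is an arbitrary value from the implementation-defined server-error range (-32000 to -32099); the spec does not mandate a particular code for this.

```http
HTTP/1.1 401 Unauthorized
Content-Type: application/json

{"jsonrpc": "2.0", "id": 1, "error": {"code": -32001, "message": "Unauthorized: invalid bearer token"}}
```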

Martin Barker

Nov 22, 2018, 6:58:45 AM
to json...@googlegroups.com
We use authentication on our QA system that's under the control of our web server. If the client is unauthorized to access, you should send the Unauthorized status (401). The difference between Unauthorized and Forbidden is that Forbidden means the user is authenticated but does not have permission, as per the link Nathan has given. However, while Nathan is correct about most of this, he is wrong about the WWW-Authenticate header being only for basic authentication: as you have said, you're using OAuth-style authentication with a Bearer token, and sending it is perfectly fine and acceptable over an HTTPS connection, as most OAuth services do.

To answer your question, though: you should use both the Unauthorized status and the Error object in the response. Headers are sent before the content, so the status line and the Error object together make up the server's whole payload to the requesting client.



Victor Efimov

Nov 22, 2018, 7:21:19 AM
to JSON-RPC
Yes, WWW-Authenticate is a MUST in both RFC 6750 and RFC 7235.

I am trying to understand: why return a JSON-RPC Error object in the 401 response body?

Using 401 as the response for bad authentication means we encapsulate JSON-RPC into HTTP, i.e. we use HTTP rules for things not related to JSON-RPC.
Why then expose a JSON-RPC Error in the HTTP 401 response?

Will a client who receives that response try to parse the JSON and handle it as a JSON-RPC error? If yes, then why, if it relates to the transport protocol only?
What if the JSON-RPC server is behind nginx (as an nginx upstream server) and the server worker died? Nginx then returns HTTP 502, which does not contain a JSON Error object.
So the client should parse JSON from the 401 body but not from the 502 body?

Martin Barker

Nov 22, 2018, 8:13:41 AM
to json...@googlegroups.com
It should.

You should configure all your error handlers to return an Error object by setting the handler files; you can look at https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-to-use-custom-error-pages-on-ubuntu-14-04 to see how to configure error handler pages in nginx.

So the only error that does not also carry an Error object is a server-offline error, a.k.a. a connection timeout. The reasoning is: if the client ignores the status, you still want to give them an error that they can parse with a JSON decoder.

Victor Efimov

Nov 22, 2018, 8:31:08 AM
to JSON-RPC


On Thursday, November 22, 2018 at 4:13:41 PM UTC+3, Martin Barker wrote:
It should.

You should configure all your error handlers to return an Error object by setting the handler files; you can look at https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-to-use-custom-error-pages-on-ubuntu-14-04 to see how to configure error handler pages in nginx.


OK. What if on my server some URLs are JSON-RPC endpoints and others are REST endpoints (both with the same authentication)? Should I maintain a list of JSON-RPC URLs in nginx to show the JSON-RPC error page for JSON-RPC endpoints only?
 
So the only error that does not also carry an Error object is a server-offline error, a.k.a. a connection timeout. The reasoning is: if the client ignores the status, you still want to give them an error that they can parse with a JSON decoder.


Another case: an SSL connection and a bad certificate.

Why not instead just state that the client should analyze the transport error code (the HTTP status code), and if it's 200, proceed to the JSON-RPC codes; otherwise something is wrong with the transport?
That's how encapsulation works, no?

Or another case: with JSON-RPC over Unix sockets, authentication would be Unix file permissions, and if permission to the socket is denied, there will be a Unix errno "permission denied" and no JSON-RPC answer.

Martin Barker

Nov 22, 2018, 8:59:12 AM
to json...@googlegroups.com
OK. What if on my server some URLs are JSON-RPC endpoints and others are REST endpoints (both with the same authentication)? Should I maintain a list of JSON-RPC URLs in nginx to show the JSON-RPC error page for JSON-RPC endpoints only?
I'm sure you can do it in nginx, but in Apache you can set up error handling for specific paths. See: https://serverfault.com/questions/836899/ignore-nginx-server-error-page-config-for-for-one-specific-location-directive

Another case: an SSL connection and a bad certificate.
An SSL error is still a failure to connect, so no transmission will happen from the server to the client, because the client does not trust the server to respond correctly.


Or another case: with JSON-RPC over Unix sockets, authentication would be Unix file permissions, and if permission to the socket is denied, there will be a Unix errno "permission denied" and no JSON-RPC answer.
In the case of authentication over Unix sockets, you would never be using HTTP status codes, and the permission-denied error is out of your control, so there is nothing you can do. My answers were based on your service being served over HTTP, so the connection method would be different, and this is really outside the remit of what you asked: you were asking specifically about HTTP. However, it is also up to the client implementor to know how they are connecting and how to handle file-system errors correctly. As a rule of thumb, you should always provide as much detail as will help with debugging, without compromising server security; this aids at every stage of development.

Nathan Fischer

Nov 26, 2018, 3:50:16 PM
to json...@googlegroups.com
Victor,

Another way you might think about this: yes, as you mentioned, the HTTP status code is a transport-level concern, while the JSON-RPC error is an application-logic concern. Your application logic could probably function just fine regardless of what status code you used, as long as you return JSON-RPC responses. However, returning correct status codes will help the transport layer function correctly.
For example, intermediate HTTP caching servers and other middleware will probably never cache a 403 response, but might cache 200 responses.

In nginx you should be able to define different error pages for each route, like
location /rpc {
    error_page ...
    ...
}
location /rest {
    error_page ...
    ...
}

And as Martin discussed, SSL errors and Unix-socket errors aren't really JSON-RPC concerns, since no connection would be established and therefore no JSON-RPC requests or responses would be sent.
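A filled-in version of that per-location idea might look like the following sketch. The upstream names, the JSON body, and the -32001 code are made up for illustration, and proxy_intercept_errors is needed so nginx replaces an upstream 401 body with its own error_page.

```nginx
# Sketch: serve a JSON-RPC error body for 401s on RPC routes only.
location /rpc {
    proxy_pass http://rpc_backend;
    proxy_intercept_errors on;           # let error_page handle upstream errors
    error_page 401 = @rpc_unauthorized;
}

location @rpc_unauthorized {
    default_type application/json;
    return 401 '{"jsonrpc": "2.0", "id": null, "error": {"code": -32001, "message": "Unauthorized"}}';
}

location /rest {
    proxy_pass http://rest_backend;
    # REST routes keep their own (non JSON-RPC) error pages
}
```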

Nathan Fischer

Software Engineer, Lightform

lightform.com | blog | @lightforminc

Victor Efimov

Nov 29, 2018, 5:33:05 PM
to JSON-RPC


On Monday, November 26, 2018 at 11:50:16 PM UTC+3, Nathan Fischer wrote:
Victor,

Another way you might think about this is that yes, as you mentioned the HTTP status code is a transport level concern, while the JSON-RPC error is an application logic concern. Your application logic could probably function just fine regardless of what status code you used as long as you return JSON-RPC responses.

yes
 
However, returning correct status codes will help that transport layer function correctly.

yes
 
For example, intermediate HTTP caching servers and other middleware will probably never cache a 403 response, but might cache 200 responses.

In nginx you should be able to define different error pages for each route, like
location /rpc {
    error_page ...
    ...
}
location /rest {
    error_page ...
    ...
}

So, two cases:

1) 401 and a JSON Error object

pros: it's easier to write the application, because it does not need an extra "if status == 401" branch to handle a transport-level error.

cons: we need to inject an error_page into the nginx proxy for each RPC endpoint, i.e. keep a list of RPC endpoints
in nginx, which sits above the transport in the stack. I think it's a leaky abstraction (https://en.wikipedia.org/wiki/Leaky_abstraction), because
nginx should know nothing about the internals of the upstream server.

2) 401 and a non-JSON body

pros: no list of RPC endpoints in nginx, no leaky abstraction

cons: the application needs to handle HTTP 401 as a special case (an extra "if" branch)

 

And as Martin discussed, SSL errors and unix sockets aren't really JSON-RPC concerns since no connection would be established and therefore no JSON-RPC requests or responses would be sent.


That part relates to why I think the cons of (2) are not a problem at all.
The client application should anyway have a centralized place where it creates the connection to the server and handles errors, because there are cases like bad SSL, Unix socket errors, and
broken connections. The same place would host this single "if 401" branch, so on the client side "if 401" appears in only one place.
That is why the cons of (2) are much better than the cons of (1).

Also, I don't see a difference between "connection not established" for an SSL problem and HTTP 401.
Both cases are equivalent: whether the physical connection is not established or the authentication layer answered with HTTP 401, in both cases the JSON-RPC layer on the server is not called (and should not be called),
because these are different layers. The only important concern here is that the physical/SSL/authentication layer(s) differ from the JSON-RPC layer; it doesn't matter that authentication differs from SSL, which differs from the physical layer.
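Option (2), treating any non-200 status as a transport-level failure and parsing JSON-RPC only from 200 responses, can be sketched client-side like this (a hypothetical helper, not from the thread):

```python
import json

def classify_reply(status, body):
    """Classify an HTTP reply to a JSON-RPC call.

    Any non-200 status (401 from the auth layer, 502 from nginx, ...)
    is treated as a transport problem; only 200 bodies are parsed as
    JSON-RPC responses.
    """
    if status != 200:
        # Transport/auth layer failed: don't assume the body is JSON-RPC.
        return ("transport", status)
    msg = json.loads(body)
    if "error" in msg:
        return ("rpc_error", msg["error"]["code"])
    return ("result", msg["result"])
```

With this shape, the one "if 401" branch collapses into the single `status != 200` check in the centralized connection code.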

Martin Barker

Nov 30, 2018, 8:46:36 AM
to json...@googlegroups.com
pros: it's easier to write the application, because it does not need an extra "if status == 401" branch to handle a transport-level error.

This is not a transport-level error; this is a case where the data transmitted through the transport layer was incorrect for this request. The JSON body then tells why the error occurred, much like in a web browser: the 401 status is sent, then the HTML body explains why it failed.
 
cons: we need to inject an error_page into the nginx proxy for each RPC endpoint, i.e. keep a list of RPC endpoints
in nginx, which sits above the transport in the stack. I think it's a leaky abstraction (https://en.wikipedia.org/wiki/Leaky_abstraction), because
nginx should know nothing about the internals of the upstream server.

No, you can do all error handling inside your server-side application if it was architected to support this (see the example below). This would mean you don't require custom error handlers for most errors, if not all. And you don't need to separate endpoints by directory: a web server works with directories, but you can effectively hack the server into saying "this file is actually here"; this is rewriting. The server core does not understand that, so the rules for your errors sit in the server core, and you can use a wildcard path to cover all endpoints:

location = /rpc {
    ("=" means an exact match)

location ~ /rpc {
    ("~" means a regex pattern)

So you could use location ~ /rpc/.*

E.g. in PHP I would use a try/catch around the bootstrap of an MVC framework. Now the only error that is not handled by my application is a PHP parse error triggering a 500, but that is a failure to compile, so the server application is not working; all others I handle through the application framework, and I would have an nginx or Apache default error page specified.

Notation code showing how the different errors can be implemented and handled in the application logic (normally these would not all be in one place):

class JSONRPC_Exception extends Exception {}
class User_Authentication_Exception extends JSONRPC_Exception {}

function handle_user_request() {
    if (user_logged_in()) {
        // construct and return the user response object
        return build_user_response();
    } else {
        throw new User_Authentication_Exception();
    }
}

try {
    handle_user_request();
} catch (User_Authentication_Exception $jx) {  // user not authenticated
    // output the given Error ($jx) for the unauthenticated request
} catch (JSONRPC_Exception $jx) {              // other JSON-RPC error caught
    // output the given Error ($jx) for the exception
} catch (Exception $e) {                       // unhandled exception
    // create and output an Error for a server application error
}


That part relates to why I think the cons of (2) are not a problem at all.
The client application should anyway have a centralized place where it creates the connection to the server and handles errors, because there are cases like bad SSL, Unix socket errors, and
broken connections. The same place would host this single "if 401" branch, so on the client side "if 401" appears in only one place.
That is why the cons of (2) are much better than the cons of (1).

You have one thing wrong here that might be breaking your understanding: "and broken connection. The same place would be used for this one 'if 401'". No, your 401 handling would never be in the transport layer; it is above that. The transport layer has not errored; it has transported an error back from the server.

It goes:

Client (Request) -> Transport Layer -> Server (Request then Response) -> Transport Layer -> Client (Response).
If you get any response from a server, it is not a transport-layer error; it is a response containing an error.

Application logic uses a transport layer
Application -> Client (request) -> Transport Layer -> Server (request) -> Server Application -> Server (response) -> Transport Layer -> Client (Response)  -> Application

Your server application sending an error is a valid response that will be delivered to the client application so it can handle it; otherwise the client application would never know about the error. All transport-layer errors should close the transport layer. HTTP and HTTPS are not a transport layer; they are standards of communication over a transport layer.

Victor Efimov

Nov 30, 2018, 9:24:51 AM
to JSON-RPC


On Friday, November 30, 2018 at 4:46:36 PM UTC+3, Martin Barker wrote:
pros: it's easier to write the application, because it does not need an extra "if status == 401" branch to handle a transport-level error.

This is not a transport-level error; this is a case where the data transmitted through the transport layer was incorrect for this request. The JSON body then tells why the error occurred, much like in a web browser: the 401 status is sent, then the HTML body explains why it failed.

HTTP (or a Unix socket) is the transport for JSON-RPC, so by "transport layer" I mean HTTP, in the case of JSON-RPC over HTTP. It is the transport layer in relation to JSON-RPC.
 
cons: we need to inject an error_page into the nginx proxy for each RPC endpoint, i.e. keep a list of RPC endpoints
in nginx, which sits above the transport in the stack. I think it's a leaky abstraction (https://en.wikipedia.org/wiki/Leaky_abstraction), because
nginx should know nothing about the internals of the upstream server.

No, you can do all error handling inside your server-side application if it was architected to support this (see the example below). This would mean you don't require custom error handlers for most errors, if not all. And

I understand that, but we are talking about the case where nginx is a reverse proxy which handles authentication; that's why we have nginx in this discussion at all.
 
you don't need to separate endpoints by directory: a web server works with directories, but you can effectively hack the server into saying "this file is actually here"; this is rewriting. The server core does not understand that, so the rules for your errors sit in the server core

"Hack" the server means working around the problem, but the root of the problem is that it's a leaky abstraction.
Again, I meant not the OSI Transport Layer, but the layer of HTTP (and below), which is the transport for JSON-RPC. Sorry for the confusion.