Improving SSL Support


Yarin

Sep 20, 2012, 10:27:27 AM
to
A proposal for improving SSL support in web2py 

For authenticated web applications, there are two "grades" of SSL implementation: forcing SSL on login only, vs. forcing SSL on the entire authenticated session.

In the first case, HTTPS is forced on login/registration but reverts back to HTTP upon authentication. This prevents passwords from being sent unencrypted, but won't prevent session hijacking, as the session cookie can still be compromised on subsequent HTTP requests (see Firesheep for details). Nonetheless, many sites choose this approach for performance reasons, as SSL-delivered content is not cached by browsers as efficiently (discussed on the 37signals blog).

In the second case, the entire authenticated session is secured by forcing all traffic to go over HTTPS while a user is logged in, and by securing the session cookie so that the browser will only send it over HTTPS.


web2py should make it easier to deal with these scenarios. I just implemented a case-1 type solution and it took quite a bit of work.

Moreover, web2py currently provides two SSL-control functions which, taken on their own, can lead to problems for the uninitiated:
  • session.secure() ensures that the session cookie is only transmitted over HTTPS, but doesn't force HTTPS, so any subsequent requests made over HTTP will simply not have access to the auth session; this is not obvious (correct me if I'm wrong)
  • request.requires_https() (undocumented?) is a misnomer, because it forces HTTPS but then assumes a case-2 scenario and secures the session cookie
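To make the session.secure() pitfall concrete, here is a tiny pure-Python sketch of the browser rule involved (the `session_cookie_sent` helper and its arguments are made up for illustration; this is not web2py API):

```python
# Sketch only: models the browser behaviour behind session.secure().
# A cookie flagged "Secure" is attached by the browser to HTTPS requests
# only, so a plain-HTTP request arrives with no session cookie at all.

def session_cookie_sent(url_scheme, cookie_is_secure):
    """Would the browser attach the session cookie to this request?"""
    if cookie_is_secure:
        return url_scheme.lower() == 'https'
    return True  # non-Secure cookies travel over both schemes

# After session.secure(), HTTP requests silently lose the auth session:
print(session_cookie_sent('http', cookie_is_secure=True))   # False
print(session_cookie_sent('https', cookie_is_secure=True))  # True
```

This is why calling session.secure() without also forcing HTTPS produces the confusing "logged out over HTTP" behaviour.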

Proposals:
  • SSL auth settings
    • auth.settings.force_ssl_login - Forces HTTPS for login/registration
    • auth.settings.force_ssl_session - Forces HTTPS throughout an authenticated session and secures the session cookie (if True, force_ssl_login is not necessary)
  • Other more granular controls
    • @requires_https() - decorator for controller functions that forces HTTPS for that function only
    • 'secure=True' option on forms ensures submission over HTTPS
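As a rough illustration of the proposed @requires_https() decorator, here is a minimal sketch using hand-rolled stubs (the `HTTPSRedirect` exception and the dict-based request stand in for web2py's redirect() and request objects; none of this is existing web2py API):

```python
class HTTPSRedirect(Exception):
    """Stub for web2py's redirect() to the https:// version of the URL."""

def requires_https(func):
    """Proposed decorator: force HTTPS for this controller function only."""
    def wrapper(request, *args, **kwargs):
        if request.get('wsgi_url_scheme', '').lower() != 'https':
            # Real code would redirect(URL(scheme='https', args=..., vars=...))
            raise HTTPSRedirect()
        return func(request, *args, **kwargs)
    return wrapper

@requires_https
def checkout(request):
    return 'rendered over HTTPS'

# HTTP requests get bounced, HTTPS requests are served:
try:
    checkout({'wsgi_url_scheme': 'http'})
    outcome = 'served'
except HTTPSRedirect:
    outcome = 'redirected'
print(outcome)                                 # redirected
print(checkout({'wsgi_url_scheme': 'https'}))  # rendered over HTTPS
```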


Niphlod

Sep 20, 2012, 10:47:22 AM
to web...@googlegroups.com
OK, I get the point about having a hard time implementing case-1 with web2py.
But please humour me: what I don't get about those kinds of requirements is that, basically, case-1 adds 0% security.
If there is a man in the middle, he can "scoop" the cookie just as he would scoop the password if the site had no HTTPS at all (so user A gets access to whatever B can see).
Is it just to "reassure" the majority of dumb users with a nice padlock on the login page, or does it give some kind of actual protection?

PS: leave aside the fact that an SSL certificate allows your site to be "trustworthy" (user A knows that the site at httpS://example.com is managed by Yarin).
If you are trustworthy for only some of the URLs (you publish something at httpS://example.com and something at http://example.com), then users need to know in advance that your login is on HTTPS, and pay attention before hitting "login" to avoid a simple DNS poisoning attack (i.e. for their computer http://example.com points to the site managed by a very bad Niphlod, who can fake whatever is on Yarin's httpS://example.com/login at Niphlod's http://example.com/login).
That is a nice "theoretical" security gain (Niphlod can't have a working httpS://example.com/login because he doesn't know the private key of the certificate, and users are at least "warned" that the certificate is not right), but with "normal" users (click everything shiny? yes! install this toolbar? yes! format the C: drive? yes!) it means more or less 0% achieved security (given that they hardly "know in advance" that a padlock is "required" to be there on the page).

Am I missing something ?

Yarin

Sep 20, 2012, 11:57:21 AM
to
Was wrestling with these points myself...

With respect to case-1 adding 0% security:

- Is a hijacked session the same as an exposed password? A hijacked session compromises a single session on a single system, while a stolen password constitutes a cross-session and (because passwords are re-used) often a cross-system breach. At the very least, it's probably better to keep the damage temporary and in-house rather than be the site that compromises everyone else.
- Also, consider OAuth: it issues tokens in lieu of passwords to allow third-party access, which provides the safety of limiting access and its duration, and of invalidating the token if required. To me, a session cookie is somewhere between a token and a password in that respect.
- Many major sites implement case-1 security, Facebook for example. There's got to be some reason for that.
- The login-only vs. all-the-time SSL options aren't my idea; I took them from WordPress (http://codex.wordpress.org/Administration_Over_SSL). Again, assuming there must be a reason.

As for your PS, I don't see how the scenario you describe would be any different on a site that is all SSL, all the time. The user has to start at an HTTPS login screen somehow. Are you saying the whole HTTPS concept is BS?

Feel free to keep this going- I'm no security expert, just trying to get a handle on it all--

Massimo Di Pierro

Sep 20, 2012, 12:52:22 PM
to web...@googlegroups.com
I think we should do something like this. 

I think we should have auth.settings.force_ssl_login and auth.settings.force_ssl_session.
We could add secure=True option to existing requires validators.

This should not be enforced from localhost.

Niphlod

Sep 20, 2012, 12:56:50 PM
to web...@googlegroups.com
Well, this list is often used to discuss "good behaviours", so, let's brainstorm.

 
- Is a hijacked session the same as an exposed password? a hijacked session compromises a single session on a single system, while a stolen password constitutes a cross-session and (because passwords are re-used) often a cross-system breach. At the very least, probably better to keep the damage temporary and in-house rather than be the site that compromises everyone else?

The only valid point here is a sort of "netiquette" among web developers (users have a few username-password pairs and use those on each and every site): a sort of "if only all sites gave secured SSL access"... but no gain in your own site's security at all. Session expiration is another beast to tackle (users will feel "good" not re-entering the same username-password if they authenticated "some time ago"... but the "time window" is very different from site to site; just think of the different requirements of admin dashboards, banking accounts, data entry, blogs, forums, etc.).

- Many major sites implement case-1 security- Facebook for example. There's got to be some reason for that?

Yep, saving CPU :P. User and data security was never a "level A" goal for Facebook (and, respectfully, the data on Facebook is not "superprivate" anyhow). They have a default method for "non-concerned citizens", but you can tune the preferences to allow "only HTTPS navigation" as well, if I remember correctly (I was never a fan of Facebook). Probably the aforementioned "netiquette" works for Facebook "big time": leaking passwords for billions of users to a man in the middle is "a worse thing" than leaking 1k usernames and passwords.
 
- The login-only vs all-the-time SSL options aren't my idea, I took it from WordPress (http://codex.wordpress.org/Administration_Over_SSL). Again, assuming there must be a reason.

Quote:

Which Should I Use?

FORCE_SSL_LOGIN is for when you want to secure logins so that passwords are not sent in the clear, but you still want to allow non-SSL admin sessions (since SSL can be slow).

FORCE_SSL_ADMIN is for when you want to secure logins and the admin area so that both passwords and cookies are never sent in the clear. This is the most secure option.

This probably has to do with WordPress being installed on every shared hosting facility in the world... Speed vs. security just to avoid leaking the password (while leaking the cookie) is not a dealbreaker for us in 2012 (at least until Python shared hosting improves its coverage). An addendum to the 37signals link you posted before: it is dated 2008! Even IE8 now caches correctly by default if cache headers are set.
 
As for your PS, I don't see how the scenario you describe would be any different from a site that is all SSL all the time? The user has to start at an HTTPS login screen somehow- are you saying the whole HTTPS concept is BS?

BS? What am I, mad? Absolutely not! The obvious big +1 is that no man in the middle can ever see the data exchanged between Yarin's server and user A.
As for "user concerns" and DNS poisoning: if every page of http://example.com "normally" redirects to httpS://example.com, then users know that every page of Yarin's site provides a padlock (and at least a good 70% spot the difference).

Phishing is always around the corner (e.g. https://exampl3.com), but if you are concerned you can buy "similar" domains and manage those yourself.
 

Feel free to keep this going- I'm no security expert, just trying to get a handle on in it all--

Neither am I, but what's the point in 2012 of using SSL only partially on your site? If the data is private, it needs to be secured ("data" as in, e.g., personal information, like phone number or address). When a page like http://example.com/profile shows you your address, your man in the middle will read it.
Should I really go buy a relatively expensive SSL certificate just to provide the "sort of netiquette" mentioned before? Maybe yes, but otherwise you're confirming that security on your site is not a "level A" goal.
If the reply here is "browsers don't cache SSL resources well", I can assure you that the situation has changed a lot in the last few years (mostly since big sites like Google, Yahoo, Twitter & co. allowed "complete HTTPS navigation" on their domains).
If the reply is "SSL encryption and handshaking is hurting my CPU", I'd say that in this "realm" too the situation has changed a lot: you can get a hell of a lot more raw power per buck than 4 years ago (while bandwidth-related costs tend to be more "static").

Yarin

Sep 20, 2012, 2:56:55 PM
to web...@googlegroups.com
@Massimo - that'd be great. 

One more kink to throw in is recognizing proxied SSL calls. This requires knowing whether you can trust the traffic headers (e.g. having Apache locked down to all traffic except your load balancer), so maybe we need a trust_proxied_ssl or is_proxied setting somewhere?

if request.env.http_x_forwarded_for and \
        request.env.http_x_forwarded_proto in ['https', 'HTTPS'] and \
        auth.settings.is_proxied:

Yarin

Sep 20, 2012, 3:00:19 PM
to web...@googlegroups.com
@Niphlod

1) "only valid point here is a sort of "netiquette" among web developers"

Wrong- this is a matter of protecting the user. I may have a site that doesn't deal with anything important. Let's say it allows users to see their friends' baby pictures. They need to log in so that we know which babies to show. Session gets hijacked? Big deal, you can see someone else's babies. But half the users are gonna use the same password they use for their PayPal account. Should they? No. But you don't want to be the vector by which that stuff happens.

2) It's a cpu issue

I can't follow your conclusion here. You say it's not important in 2012, and yet Facebook still defaults to it and WordPress continues to offer it. It may not be a good trade-off in most cases, but it's certainly common practice. I think we're still missing something here...

3) Caching issue outdated

This is good to know- was not aware.

4) Mixed content issue

Mixed content is not always a choice. If you pull images hosted on other HTTP sites, boom, you're stuck with mixed content, and some browsers don't handle it elegantly. IE throws a pop-up in your face. Chrome shows a warning indicator on all your HTTPS pages in a session if just one page has mixed content.

This is why we are making all-SSL an option but not the default on our site: we interface with Facebook, pull thumbnail images from outside pages, etc., and don't want the pop-ups or warnings. We don't deal with sensitive data, so the trade-off makes sense for us, but we're certainly not gonna be passing around our users' passwords unprotected.

5) Still don't understand your PS. Can't tell if you're talking about user perception or actual DNS poisoning, but the first point is out of scope, I think: my concern is that WE know what is secure; I'm not counting on the user to know or care. As for the latter, I still don't see how the scenario is any different when both case-1 and case-2 require a user to be redirected to https://example.com/login if they type in http://example.com/login. Lost me on this one.

Niphlod

Sep 20, 2012, 3:49:35 PM
to web...@googlegroups.com
Wrong- this is a matter of protecting the user. I may have a site that doesn't deal with anything important. Let's say it allows users to see their friends' baby pictures. They need to login so that we know what babies to show. Session gets hijacked? Big deal, you can see someone else's babies. But half the users are gonna use the same password they use for their paypal account. Should they have? no. But you don't want to be the vector by which that stuff happens.

That's the kind of "netiquette" I was talking about. Web developers should "apply the netiquette" so that it becomes less probable that the password gets leaked (and users are happy).
 
2) It's a cpu issue

I can't follow your conclusion here. You say it's not important in 2012, and yet Facebook still defaults to it and WordPress continues to offer it. It may not be a good trade off in most cases, but it's certainly common practice. I think we're still missing something here...


Facebook runs how many servers? Given that content on Facebook is not so precious, and that privacy was never a big issue for them, they save some bucks with plain navigation vs. SSL.
WordPress is not recommending that; it's giving you a choice. WordPress runs, 80% of the time, as a relatively small site in a supercrowded shared-hosting scenario. SSL navigation shouldn't be slower, but crowded servers are the majority of the cases where WordPress runs, so it's common to see a certain "limitation" when using those providers together with an SSL certificate. I think the point you're missing from my previous post relates to the "lack" of Python shared-hosting coverage, which could pose the same problem in terms of "slower" response times.
Given that web2py apps normally run on a VPS in the worst-case scenario, the CPU time "wasted" on SSL protection is negligible.

4) Mixed content issue

Mixed content is not always a choice. If you pull images hosted on other HTTP sites, boom, you're stuck with mixed content, and some browsers don't handle it elegantly. IE throws a pop up in your face. Chrome shows a warning indicator on all your HTTPS pages in a session if just one page has mixed content. 
 
Yep. This is kind of frustrating, but it has a logic: a URL "with padlock" doesn't necessarily mean that all the data the user sees (and exchanges, e.g. by posting a form) is secured. We (web developers) know that ajax, iframes, images and, recently, websockets can exchange data "out of band" with regard to SSL protection; "normal" users don't. Some browsers alert the user to this fact, some don't. Some sites "proxy" the content they serve to "avoid" the itchy browser "popups", some sites don't. It's just a matter of "visions".

5) Still don't understand your PS. Can't tell if you're talking about user perception or actual DNS poisoning, but the first point is out of scope I think- my concern is that WE know what is secure- I'm not counting on the user to know or care. As for the latter, I still don't see how the scenario is any different if both case-1 and case-2 require a user to be redirected to https://example.com/login if they type in http://example.com/login. lost me on this one.
 
Well, both. Your point is "don't let the users give away their passwords on my site": OK, I get that point.
But consider DNS poisoning: your pages show up in the Google search results. Users click on "https://example.com/yarin/baby_pictures (login to see Yarin's baby pictures)" and are instead connected to an evil Niphlod-hosted https://example.com/yarin/baby_pictures. When browsing to Niphlod's copy, they get a warning about a certificate mismatch, while on Yarin's everything goes fine. Without SSL, your users are giving their passwords away to Niphlod.
User perception: "hey, it had a padlock all the time, why is there no padlock this time?". This has more impact on users' minds than checking whether HTTPS is enabled only on the login page.

Explicit redirection makes your site "protected" without any care needed on the web2py side, if you are on a "production" webserver.
Note that you can force redirection to HTTPS for only some "sections" of your site (e.g. the login page, assuming you're still fine with that). If you run web2py behind a webserver with that "restriction", your code doesn't have to deal with checking whether SSL is enabled or not.
Another fine point is that in modern browsers (and on phones and tablets), if you type "example.com" you are "pointed" to http://example.com. Having to type https:// is a waste of time for desktop users (see all the madness regarding short URLs) and a little cumbersome on the "touch keyboards" of small devices.

@Yarin and all: at the end of the "speech", Yarin's suggestion to make case-1 more achievable in web2py is still a good one.

Massimo Di Pierro

Sep 21, 2012, 8:40:35 AM
to web...@googlegroups.com
Can you suggest a way to detect that?

Yarin

Sep 21, 2012, 12:38:49 PM
to
The completely naive approach would be to do:

if request.env.http_x_forwarded_for and \
        request.env.http_x_forwarded_proto in ['https', 'HTTPS']:
    # Is HTTPS...

But you cannot detect whether proxied traffic is real, because headers are unreliable. Instead you must securely set up the server behind a proxy and set the .is_proxied flag explicitly.

Example:
We put our app server behind an SSL-terminating load balancer in the cloud. The domain app.example.com points to the load balancer, so we configure the app server's Apache to allow traffic from the load balancer only and block any direct outside traffic. Then we set auth.settings.is_proxied to tell web2py "this proxy traffic is legit".

HTTPS/443 requests will hit the load balancer and be transformed into HTTP/80 traffic with the http_x_forwarded_for and http_x_forwarded_proto headers set. Now we can confidently check:

if auth.settings.is_proxied and \
        request.env.http_x_forwarded_proto in ['https', 'HTTPS']:
    # Is HTTPS...

In other words, the http_x_forwarded_for header is useless, and you can't mix direct and proxied traffic. To be able to handle proxy-terminated SSL, we need to know that all the traffic comes via a trusted proxy.
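The whole decision collapses into one small predicate. Here is a sketch over a plain dict of environment variables (`is_https` and its `is_proxied` flag are illustrative; is_proxied is the proposed setting, not something web2py currently ships):

```python
def is_https(env, is_proxied=False):
    """True if the request arrived over SSL, either directly or via a
    trusted SSL-terminating proxy. Trust the forwarded header only when
    the deployment guarantees ALL traffic comes through the proxy."""
    if env.get('wsgi_url_scheme', '').lower() == 'https':
        return True
    return is_proxied and env.get('http_x_forwarded_proto', '').lower() == 'https'

# Direct HTTPS hit:
print(is_https({'wsgi_url_scheme': 'https'}))  # True
# Proxy-terminated SSL forwarded as plain HTTP, proxy trusted:
print(is_https({'wsgi_url_scheme': 'http',
                'http_x_forwarded_proto': 'https'}, is_proxied=True))  # True
# Same headers, but proxy NOT trusted (the header could be spoofed):
print(is_https({'wsgi_url_scheme': 'http',
                'http_x_forwarded_proto': 'https'}))  # False
```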

Yarin

Sep 21, 2012, 11:43:16 AM
to
FYI, this is the enforcer function we wrote for our implementation; basically a rewrite of request.requires_https():

def force_https(trust_proxy=False):
    """ Enforces HTTPS in appropriate environments

    Args:
        trust_proxy: Can we trust the proxy header 'http_x_forwarded_proto'
        to determine SSL? (Set this only if ALL your traffic comes via a
        trusted proxy.)
    """

    # If cronjob or scheduler, exit:
    cronjob = request.global_settings.cronjob
    cmd_options = request.global_settings.cmd_options
    if cronjob or (cmd_options and cmd_options.scheduler):
        return

    # If localhost, exit:
    if request.env.remote_addr == "127.0.0.1":
        return

    # If already HTTPS, exit:
    if request.env.wsgi_url_scheme in ['https', 'HTTPS']:
        return

    # If HTTPS request forwarded over HTTP via an SSL-terminating proxy, exit:
    if trust_proxy and request.env.http_x_forwarded_proto in ['https', 'HTTPS']:
        return

    # Redirect to HTTPS:
    redirect(URL(scheme='https', args=request.args, vars=request.vars))







Massimo Di Pierro

Sep 21, 2012, 12:05:56 PM
to web...@googlegroups.com
Yes, but how do you detect is_proxied reliably?


Yarin

Sep 21, 2012, 12:35:37 PM
to web...@googlegroups.com
You can't detect this- it must be a setting. Please see my previous answer:

"you cannot detect whether proxied traffic is real because headers are unreliable. Instead you must securely set up a server behind a proxy and set the .is_proxied flag explicitly."

"you can't mix direct and proxied traffic. To be able to handle proxy-terminated SSL, we need to know that all the traffic is via a trusted proxy."

Yarin

Sep 21, 2012, 2:05:35 PM
to
Here's a complete example of our own implementation (simplified, untested) using the proposed auth settings:

In our model:

def force_https(trust_proxy=False, secure_session=False):
    """ Enforces HTTPS in appropriate environments

    Args:
        trust_proxy: Can we trust the proxy header 'http_x_forwarded_proto'
        to determine SSL? (Set this only if ALL your traffic comes via a
        trusted proxy.)
        secure_session: Secure the session as well.
        (Do this only when enforcing SSL throughout the session.)
    """

    # If cronjob or scheduler, exit:
    cronjob = request.global_settings.cronjob
    cmd_options = request.global_settings.cmd_options
    if cronjob or (cmd_options and cmd_options.scheduler):
        return

    # If localhost, exit:
    if request.env.remote_addr == "127.0.0.1":
        return

    # If already HTTPS, exit:
    if request.env.wsgi_url_scheme in ['https', 'HTTPS']:
        if secure_session:
            current.session.secure()
        return

    # If HTTPS request forwarded over HTTP via an SSL-terminating proxy, exit:
    if trust_proxy and request.env.http_x_forwarded_proto in ['https', 'HTTPS']:
        if secure_session:
            current.session.secure()
        return

    # Redirect to HTTPS:
    redirect(URL(scheme='https', args=request.args, vars=request.vars))

# If a login function, force SSL:
if request.controller == 'default' and request.function == 'user' and \
        (auth.settings.force_ssl_login or auth.settings.force_ssl_session):
    force_https(trust_proxy=auth.settings.is_proxied,
                secure_session=auth.settings.force_ssl_session)
# If user is logged in and we're enforcing a full SSL session:
elif auth.is_logged_in() and auth.settings.force_ssl_session:
    force_https(trust_proxy=auth.settings.is_proxied, secure_session=True)

def on_login(form):
    """ Post-login redirection """

    # If we're enforcing SSL on login only, redirect from HTTPS back to HTTP
    # immediately after login:
    if auth.settings.force_ssl_login is True and auth.settings.force_ssl_session is False:
        if request.env.wsgi_url_scheme in ['https', 'HTTPS'] or \
                request.env.http_x_forwarded_proto in ['https', 'HTTPS']:
            # Extract the post-login url value from auth
            # (hack - look at the end of the login() function in tools.py.
            # This belongs in Auth itself.):
            login_next_path = auth.next or auth.settings.login_next
            # Build an absolute, HTTP url from it:
            login_next_url = URL(scheme='http', c='default', f='index') + login_next_path[1:]
            # Redirect to the HTTP URL:
            redirect(login_next_url)

auth.settings.login_onaccept = on_login


Massimo Di Pierro

Sep 21, 2012, 2:05:41 PM
to web...@googlegroups.com
Yarin, please open an issue on Google Code as a suggested enhancement so it does not get lost. Also feel free to move the discussion to web2py-developers.



Yarin

Sep 21, 2012, 2:26:36 PM
to web...@googlegroups.com

Yarin

Oct 4, 2012, 8:05:17 AM
to web...@googlegroups.com
I'm revising my stance on this. After further digging around, I'm gonna go with Niphlod's position that securing only the login traffic without securing the entire session is for the most part pretty worthless. While it might have value for some sites that have to deal with mixed content, the complexity it introduces isn't worth it.

I'm also taking back my recommendation that we need a setting to explicitly trust proxied SSL traffic. I think it's fine to just check the headers for forwarded SSL traffic and trust them. Yes, headers can be spoofed, but I can't think of how this could be exploited on the user end-

So that leaves only two recommended changes:
  • When checking whether a request is HTTPS, also check for forwarded SSL headers with request.env.http_x_forwarded_proto in ['https', 'HTTPS']
  • Add an auth.secure = True convenience setting, which would call requires_https() while the user is logged in, and on all login/registration methods.
I'll update the ticket

Massimo Di Pierro

Oct 4, 2012, 12:38:34 PM
to web...@googlegroups.com
So... would replacing this in gluon/main.py:

is_https = env.wsgi_url_scheme in ['https', 'HTTPS'] or env.https == 'on'

with

is_https = env.wsgi_url_scheme in ['https', 'HTTPS'] or env.https == 'on' \
    or request.env.http_x_forwarded_proto in ['https', 'HTTPS']

address the first issue?

Massimo

Yarin

Oct 4, 2012, 2:01:51 PM
to web...@googlegroups.com
Yes exactly

Massimo Di Pierro

Oct 4, 2012, 4:13:07 PM
to web...@googlegroups.com
OK. check trunk. Auth(db,secure=True). 

Yarin Kessler

Oct 4, 2012, 4:44:28 PM
to web...@googlegroups.com
Awesome thanks Massimo- will test tonight


Yarin

Oct 5, 2012, 12:08:13 AM
to web...@googlegroups.com
Massimo,

The current Auth(secure=True) implementation redirects to http instead of https, resulting in a redirect loop. An easy fix is to just call request.requires_https() when secure:

gluon/tools.py:

Instead of:

    if secure and not request.is_https:
        session.secure()
        redirect(URL(args=request.args, vars=request.vars, scheme='http'))

Do:

    if secure:
        request.requires_https()


This fixes the redirect bug.

However, as for the behavior, I had in mind that setting Auth(secure=True) would only enforce HTTPS during an authenticated session; that is, HTTPS enforcement would only kick in once the user logged in.

The intention was to provide a convenient way to protect the session of a logged-in user, but not necessarily require HTTPS for non-logged-in requests, which is what it's doing now.
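Under the intended semantics, the check reduces to something like this sketch (a stub predicate written for illustration; `needs_https_redirect` is not the actual Auth implementation):

```python
def needs_https_redirect(logged_in, url_scheme, secure=True):
    """Intended Auth(secure=True) behaviour: force HTTPS only for
    authenticated requests; anonymous browsing may stay on plain HTTP."""
    return secure and logged_in and url_scheme.lower() != 'https'

print(needs_https_redirect(logged_in=True, url_scheme='http'))    # True
print(needs_https_redirect(logged_in=False, url_scheme='http'))   # False
print(needs_https_redirect(logged_in=True, url_scheme='https'))   # False
```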

(Sorry I'd help with implementation but no time right now..)