Friday, February 20, 2009

Compromising HTTP to HTTPS redirects

Update [3/22/2011]:
A great solution to properly address this problem is to use HTTP Strict Transport Security. This effectively instructs the browser to only interact with the webpage over HTTPS. The browser will prevent a user from sending any requests to the site over HTTP.
Content Security Policy (CSP Blog Post)
OWASP TLS Cheat Sheet
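For reference, the HSTS policy mentioned above is delivered as a single response header. A minimal sketch of what a site would send (`build_response` is a hypothetical helper, not any framework's real API; the max-age value is just an example):

```python
# Sketch: adding the HSTS header to a site's HTTPS responses.
# build_response is a hypothetical helper, not a real framework API.

def build_response(headers=None):
    """Return response headers with Strict-Transport-Security added.

    max-age=31536000 (one year) and includeSubDomains are example
    values. Browsers only honor this header over HTTPS, so it must
    be sent on the HTTPS responses, not on the HTTP redirect."""
    headers = dict(headers or {})
    headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return headers

print(build_response({"Content-Type": "text/html"}))
```

Once the browser has seen this header, it rewrites future http:// requests for the domain to https:// before they ever leave the machine.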

Many sites do their best to secure the login page. They post to HTTPS and even use an HTTPS landing page to accept the username and password. These steps are great and exactly what should be happening. The next step is often to disable HTTP access to the login page or to automatically redirect HTTP to HTTPS, just in case a user types in http instead of https.

Here is where we hit a problem. Once a user makes an HTTP request, they have lost. From a security perspective we must assume there is always an active man-in-the-middle; assuming otherwise places false trust in the infrastructure. So the MitM simply takes the HTTP request, intercepts the response (a 404, or a 3xx redirect), and inserts their own content. The easiest attack? Simply change the response to a 200 and insert the real HTML from the site's HTTPS login page.

Now the victim sees what they expect: a login page on their bank's website. The only problem is that the browser was never redirected to HTTPS. So the user enters the username and password, and it's transmitted in the clear. The MitM happily sniffs that info and passes it along to the actual site. The user never knows their creds have been stolen.
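The downgrade above can be sketched in a few lines. This is a simplified simulation of the idea, not SSLStrip itself; `downgrade` and `fetch_https` are illustrative names standing in for the attacker's proxy logic:

```python
# Simplified simulation of the downgrade attack described above.
# fetch_https stands in for the attacker fetching the real login
# page over HTTPS on the victim's behalf.

def downgrade(upstream_response, fetch_https):
    """If the site tried to redirect the victim to HTTPS, swallow
    the redirect and answer 200 with the real page content, so the
    victim's browser stays on plain HTTP."""
    status, headers, body = upstream_response
    location = headers.get("Location", "")
    if status in (301, 302) and location.startswith("https://"):
        page = fetch_https(location)           # attacker fetches the real page
        return (200, {"Content-Type": "text/html"}, page)
    return upstream_response                   # pass everything else through

# Victim requests http://bank.example/login; the site answers 302.
redirect = (302, {"Location": "https://bank.example/login"}, b"")
result = downgrade(redirect, lambda url: b"<html>real login form</html>")
print(result[0])   # 200 -- the browser never sees the redirect
```

The victim's browser gets a perfectly ordinary 200 over HTTP, which is exactly why "just redirect to HTTPS" can't save a request that started in the clear.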

The above scenario is a simplified version of SSLStrip, a new attack tool recently unveiled at Black Hat. SSLStrip actually does a bit more, modifying the post location to a similar-looking URL using punycode. However, that is just icing on the cake; the fundamental issue above is large enough on its own.

How do we protect ourselves?
We can say that we need to keep educating our users to look for HTTPS, but realistically that's not the best defense. We should have technical controls that protect the user. What can we do? Aside from hoping the user always types in https directly, the answer is nothing. And that hope isn't realistic: users see that their site automatically redirects them, so they happily type the least number of characters, the bare domain name.

Our only hope is to begin modifying the browser itself. I propose we add a setting to the browser that allows a user to register their bank's website. The browser will then never access a page from that domain unless it is over HTTPS.
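The proposed setting might look something like this sketch. `HTTPS_ONLY` and `enforce` are hypothetical names; a real browser would hook this check into its network stack before any connection is opened:

```python
from urllib.parse import urlparse

# Sketch of the proposed browser setting: a user-maintained set of
# domains the browser must never contact over plain HTTP.
# HTTPS_ONLY is a hypothetical user configuration.

HTTPS_ONLY = {"bank.example"}

def enforce(url):
    """Return the URL the browser is allowed to request.

    Plain-HTTP URLs for protected domains are upgraded to HTTPS
    before any bytes hit the wire."""
    parts = urlparse(url)
    if parts.scheme == "http" and parts.hostname in HTTPS_ONLY:
        return url.replace("http://", "https://", 1)
    return url

print(enforce("http://bank.example/login"))   # https://bank.example/login
```

Because the upgrade happens client-side, there is no cleartext request for a man-in-the-middle to intercept in the first place.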

Will this protect the user? Yes. Will it also break other things or be a burden to the site? Probably. But what's it going to be? Are we going to adapt or continue to let our users get compromised?

-Michael Coates


  1. How about a more realistic change?

    When the user types "any.url" into the location bar, the browser assumes https rather than http, and falls back to trying http if there is no response on the SSL port.

  2. But then we enter a situation where the user may again place trust in the browser defaulting to HTTPS. What about those scenarios where the browser does "fall back"? If it is my bank, I wouldn't want that to happen at all. I prefer no access over HTTP access.

    I think we need to move to a scenario where the user is secure by default or no access is given. The days of allowing the user to accept the risk by clicking yes to a popup are over. The users can't intelligently evaluate the situation. So I believe the browser should make the secure decision for them. Again, this is for that "secure" mode which would be applied to bank websites (or any other critical site) specified by the user.
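The behavior this exchange describes could be sketched as follows. `resolve_scheme`, `STRICT_SITES`, and `probe_https` are illustrative stand-ins: try HTTPS first, and fall back to HTTP only for sites the user has not marked as critical:

```python
# Sketch of the behavior discussed above: HTTPS-first with fallback,
# except for a user-defined strict set where fallback is refused.
# probe_https stands in for attempting an SSL connection to the host.

STRICT_SITES = {"bank.example"}   # hypothetical user configuration

def resolve_scheme(host, probe_https):
    if probe_https(host):
        return "https://" + host
    if host in STRICT_SITES:
        # Secure by default: no access rather than HTTP access.
        raise ConnectionError("refusing plain HTTP for " + host)
    return "http://" + host       # permissive fallback for other sites

# An ordinary site with no HTTPS still works over HTTP:
print(resolve_scheme("blog.example", lambda h: False))
```

The key design choice is that the fallback decision is never shown to the user as a yes/no popup; for strict sites the browser simply refuses.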

  3. There are several proposed solutions to this problem.

    1. ForceHTTPS, work done by Barth and Jackson at Stanford. It is a Firefox plugin that forces this behavior via a user-defined configuration, and allows a server to specify the persistent setting via a cookie.

    2. NoScript supports this user-defined behavior. Users can specify which sites should only do HTTPS.

    Totally agreed that users and sites should be able to specify this as a default setting that persists.

  4. The work done at Stanford is excellent. I've taken a quick read of the ForceHTTPS plugin and will experiment with it more. This is great stuff! Thanks for pointing it out.


  5. Or just use HTTPS only, and redirect all attempts on port 80 to 443.
    I wonder how the browser reacts if it gets an SSL handshake on port 80.

  6. Web robots look for a robots.txt file to find out whether some parts of the site are not to be crawled.

    The same idea should be applied to web browsers. They would look for e.g. /https.txt to find out which parts of the domain are supposed to be HTTPS-only. The https.txt would contain data similar to this:

    disallow http:

    Or it could just contain one slash to define the whole site as HTTPS only.

    The browser could cache this file for some period of time to allow faster requests.

    There is one potential problem: the MITM will modify the data that the browser gets when it sends the HTTP request to /https.txt

    To prevent that, the browser will use HTTPS to request that https.txt. If the file does not exist or HTTPS is not possible, the browser defaults to HTTP for the whole website. Otherwise, the browser applies the filters in the text file.
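A sketch of how a browser might consume such a file, assuming the "disallow http:" format proposed above. None of this is a real standard; the file name, format, and both function names are the commenter's proposal:

```python
# Sketch of the proposed https.txt mechanism. The format is the
# commenter's idea, not a real standard: "disallow http: <prefix>"
# lines, or a bare "/" meaning the whole site is HTTPS-only.

def parse_https_txt(text):
    """Return the list of path prefixes that must be HTTPS-only."""
    prefixes = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("disallow http:"):
            path = line[len("disallow http:"):].strip() or "/"
            prefixes.append(path)
        elif line == "/":
            prefixes.append("/")
    return prefixes

def must_use_https(path, prefixes):
    return any(path.startswith(p) for p in prefixes)

# The file itself would have been fetched over HTTPS, per the above.
rules = parse_https_txt("disallow http: /login\n")
print(must_use_https("/login", rules))   # True
```

As the commenter notes, fetching https.txt itself over HTTPS is what keeps the MITM from simply rewriting the policy file.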

    This kind of behavior I am expecting from future web browsers - most of today's web browsers are very insecure and I am very disappointed at them. I hate to see text similar to "Surf web safely". Bollocks.

