IDSOWFT, but that is the way I understand it.
--
------------------------------------ personal: http://www.cameronkaiser.com/ --
Cameron Kaiser * Floodgap Systems * www.floodgap.com * cka...@floodgap.com
-- Roger Waters, public health officer: "Careful with that pox, Eugene!" ------
No. The process generates a user access token along with a new "child" app
key in one step. There is no additional xAuth step, and I suspect Twitter
won't want xAuth-enabled app keys to be "childed" in any case. Like any user
token, it does not expire until the user revokes it, which I assume in this
case will probably never occur since it can only ever be used by the app key
"child" instance they themselves generated.
--
------------------------------------ personal: http://www.cameronkaiser.com/ --
Cameron Kaiser * Floodgap Systems * www.floodgap.com * cka...@floodgap.com
-- Put down your guns, it's Weasel Stomping Day! ------------------------------
That works fine with binaries, but may not work fine with apps written in
scripting languages.
--
------------------------------------ personal: http://www.cameronkaiser.com/ --
Cameron Kaiser * Floodgap Systems * www.floodgap.com * cka...@floodgap.com
-- I went to San Francisco. I found someone's heart. Now what? ----------------
One thing I wish were easier for desktop apps and OAuth is if more API providers made it possible to have multiple consumer keys and secrets active for the same app at the same time. You could then rotate new ones into your builds constantly, and if one key is discovered, extracted, abused, and revoked, the other versions of your app wouldn't be affected. It's something we do with SSL client certificates against our API when we ship a new build (even each of our nightly builds has its own certificate). If someone extracts one and tries to use it, we can blacklist that one certificate and it doesn't take down all the versions of our apps.
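To make that concrete, here is a toy sketch of the kind of per-build key table and revocation check I mean (nothing any provider actually offers for OAuth consumer keys today; all names and values are made up):

    # Each build ships its own consumer key/secret pair; the provider can then
    # revoke a single leaked pair without touching the other builds.
    BUILD_KEYS = {
        "build-2010-06-01":         ("ck_release_123", "cs_release_123"),
        "build-2010-06-14-nightly": ("ck_nightly_456", "cs_nightly_456"),
    }

    REVOKED_BUILDS = {"build-2010-05-30"}   # pairs known to be extracted/abused

    def credentials_for(build_id: str):
        """Pick the key pair baked into this particular build, refusing any
        build whose pair has been revoked."""
        if build_id in REVOKED_BUILDS:
            raise PermissionError("this build's key was revoked; please upgrade")
        return BUILD_KEYS[build_id]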
Zac Bowling
@zbowling
> Yes, that is a problem with any app that you distribute that has any
> embedded keys. Unfortunately, you ultimately can't really entirely
> secure anything you ship that a user can run on their own machine.
> You can however take a few steps to make that extremely difficult by
> encrypting and obfuscating your consumer keys/secrets in your app
> package before you distribute. Nothing is impossible to reverse
> engineer if you can get your hands on it (look at iTunes), but you
> can make it take so long and be so hard that almost everyone
> gives up (look at iTunes 9).
An important question:
secure against what?
Against posting tweets when the user is not who they say they are?
You can't secure against that. Desktop machines are left unattended.
Mobile phones are borrowed and stolen.
Sure you can make it harder to just grab the key/secret pair of open
source application A and implement application B, pretending to be A.
But what does that buy you? What does that protect against?
--
Bernd Stramm
<bernd....@gmail.com>
If it's a problem with the way the credentials are transmitted, maybe a
different or alternative way of transmitting them? E-mail them at the
same time, perhaps? A callback URL?
--
------------------------------------ personal: http://www.cameronkaiser.com/ --
Cameron Kaiser * Floodgap Systems * www.floodgap.com * cka...@floodgap.com
-- If a seagull flies over the sea, what flies over the bay? ------------------
It does from a web app perspective, which is the primary design goal of OAuth, since there would be no distribution of your secret in that scenario.
With OAuth, the issue is that if you are distributing secrets embedded in your app, then even with all the measures you can take to encrypt and obfuscate them, they can still be extracted at some point if someone has time. The issue is compounded because the app uses the same key universally in every version you ship, so you are screwed if someone does yank your key. All the versions you shipped are at risk then. Your only recourse is to rev your secret and force all your users to upgrade their apps to get new keys. In practice this isn't that bad, since Twitter isn't hosting credit card data or anything of major risk, and you basically devolve into the same app-identity issue we had with basic auth and passing clear-text source ids (except that maybe now all your apps are crippled).
I've been pondering how you could solve this, drawing on my experience with solving these issues in SSL/TLS. One idea is having a sort of delegation chain where I could generate a new delegated secret for each copy of my app I distribute, rather than using my same static secret directly in all my apps, and then the client could pass the authentication chain up when it goes to Twitter to get an access token.
This is similar to the idea of having the ability to issue multiple secrets against a single app, like I was suggesting earlier, which could work with the OAuth spec today. However, a delegation system would be even better, since I could issue delegated secrets at will without going back to Twitter, although that would probably require extending the OAuth spec to handle passing signed delegation chains of some kind.
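To sketch the delegation idea a bit (this is not part of OAuth 1.0a and not anything Twitter offers; the derivation scheme below is just one way it could work):

    import hmac, hashlib

    MASTER_SECRET = b"vendor-master-secret"   # known only to the vendor and the provider

    def derive_copy_secret(copy_id: str) -> str:
        """Vendor side: mint a per-copy secret at build/download time without
        having to go back to the provider for each one."""
        return hmac.new(MASTER_SECRET, copy_id.encode(), hashlib.sha256).hexdigest()

    def provider_verify(copy_id: str, presented_secret: str) -> bool:
        """Provider side: recompute from the same master secret and compare.
        A single abused copy can then be blacklisted by its copy_id without
        revoking the master secret or the other copies."""
        return hmac.compare_digest(derive_copy_secret(copy_id), presented_secret)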
I'm hoping OAuth 2/WRAP allows this somehow, since it builds on SSL/TLS instead of reinventing the wheel. There is a lot OAuth could learn from SSL/TLS, and I'm hoping OAuth 2/WRAP takes full advantage of it. :-)
Right now, though, one solution if you are ultra-paranoid about distributing software is to proxy the calls from your own software through your own web service (which would render the ease of use you get from xAuth moot, but you are sacrificing usability for security).
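A bare-bones sketch of that proxy idea, where the consumer key/secret only ever live on your own web service and the desktop client just sends its user token plus the payload (the framework and endpoint names here are mine, purely for illustration):

    from flask import Flask, request, jsonify
    import requests
    from requests_oauthlib import OAuth1

    app = Flask(__name__)

    # Never shipped to users; only this server knows them.
    CONSUMER_KEY = "stored-server-side-only"
    CONSUMER_SECRET = "stored-server-side-only"

    @app.route("/proxy/update", methods=["POST"])
    def proxy_update():
        # The client supplies its user token/secret and the tweet text; the
        # server adds the consumer credentials and signs the real request.
        auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET,
                      request.form["user_token"], request.form["user_secret"])
        r = requests.post("https://api.twitter.com/1/statuses/update.json",
                          data={"status": request.form["status"]}, auth=auth)
        return jsonify(r.json()), r.status_code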
Zac Bowling
@zbowling
That sounds good to me too. That could also semi-automate the process.
Taylor and Raffi?
--
------------------------------------ personal: http://www.cameronkaiser.com/ --
Cameron Kaiser * Floodgap Systems * www.floodgap.com * cka...@floodgap.com
-- My Pink Floyd code: v1.2a s BO 1/0/pw tinG 0? 0 Relics 2 8 <6mar98> --------
> On Jun 12, 2010, at 11:57 AM, Jef Poskanzer wrote:
> > Application authors are being asked to devote substantial resources
> > to the OAuth conversion, but OAuth provides no security for
> > application authors!
>
> It does from a web app perspective, which is the primary design goal
> of OAuth, since there would be no distribution of your secret in that
> scenario.
Precisely. OAuth is designed for a 3-party security deal: Twitter
running on some host, the App running on another host, and the User
running on a third.
OAuth is pretty useless for a 2-party situation, as is present in a
desktop/mobile app on the user's device. This isn't all that surprising,
since it is designed for something different.
> ... shipped are at risk then. Your only recourse is to rev your secret
> and force all your users to upgrade their apps to get new keys.
Another recourse might be to design a security approach for the
app-on-user-device scenario, rather than trying to shoehorn a 3-party
scheme into this.
> .. this isn't that bad, since Twitter isn't hosting credit card
> data or anything of major risk, and you basically devolve into the
> same app-identity issue we had with basic auth and passing
> clear-text source ids (except that maybe now all your apps are
> crippled).
Right. As long as the app doesn't do any side business with a third
party. If it does, then it should not use twitter's authentication for
that purpose.
> I've been pondering how you could solve this from my experience with
> solving these issues with SSL/TLS. One idea is having a sort of
> delegation chain where I could generate a new delegated secret for
> each copy of my app I distribute, rather than using my same static
> secret directly in all my apps and then the client could pass the
> authentication chain up when it goes to Twitter to get an access
> token.
The question is also - why do you care which copy of your app it is?
People using your app will post silly things, engage in slander of
other people, commit crimes, plot revolutions.
Are you responsible for these things?
Is Twitter responsible for broadcasting the content?
While we're at it, let's go after the phone manufacturer and the network
bandwidth provider.
> ...
> Right now, though, one solution if you are ultra-paranoid about
> distributing software is to proxy the calls from your own
> software through your own web service (which would render the ease of
> use you get from xAuth moot, but you are sacrificing usability for
> security).
You mean something like Microsoft authorization codes.
Or make them download a UUID with the code, and send that UUID to
twitter for each individual downloaded copy. Of course once the
download count goes into the millions, twitter will love to store all
those IDs. And of course every application developer has a website that
handles all the downloads, none of them use google code, sourceforge,
github, ... oh wait.
Oh well, why bother.
--
Bernd Stramm
<bernd....@gmail.com>
Yes, the reason I care is so that when a token/secret is blocked or revoked, it doesn't take down all the clients using that same key in my app. Currently I get one consumer token/secret, so if Twitter needs to block one bad user running around with the key they reverse-engineered out of my compiled/obfuscated app, it may take down all my users if they block the entire token/secret (hopefully Twitter investigates, warns me, and blocks the offending IPs rather than the entire token/secret, to give me some time to rev a new key and figure out a deployment, but that is asking a lot from them).
With the ability to issue multiple consumer tokens/secrets per app, or with delegated chaining (like in SSL), I would have some way to mitigate the issue and give Twitter the ability to block a much smaller subset of my users if a key were extracted and used abusively.
OAuth 1.0a isn't well designed for desktop/mobile apps, and it's more than just the usability issues that the Twitter gang are trying to tackle with things like xAuth: it wasn't designed with the thought that keys embedded inside apps could be compromised by third parties. I can only hope it's fixed in OAuth 2.0.
Just ideas. :-)
Zac Bowling
@zbowling
> Yeah, it's really the step of manually getting that long string of
> seemingly-random characters from one app to another. A callback URL
> makes sense for web-based apps.
>
> Something like PIN auth that would allow a desktop/mobile app to make
> an HTTP call and recover the string programmatically would be good, I
> think. Typing 4-6 characters is much, much easier than copying and
> pasting that long string.
Yes! Can we get a PIN workflow for end users? That would be perfect!
That's what I'm using now.
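For reference, the out-of-band PIN flow against Twitter's OAuth
endpoints looks roughly like this (sketched with the requests_oauthlib
library; any OAuth 1.0a client works the same way):

    from requests_oauthlib import OAuth1Session

    CONSUMER_KEY = "your-consumer-key"
    CONSUMER_SECRET = "your-consumer-secret"

    # "oob" (out-of-band) tells Twitter to show the user a PIN instead of
    # redirecting to a callback URL.
    oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                          callback_uri="oob")
    oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")
    print("Open this URL and authorize the app:")
    print(oauth.authorization_url("https://api.twitter.com/oauth/authorize"))

    pin = input("Enter the PIN shown by Twitter: ")
    tokens = oauth.fetch_access_token("https://api.twitter.com/oauth/access_token",
                                      verifier=pin)
    print(tokens["oauth_token"], tokens["oauth_token_secret"])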
But for a desktop/mobile standalone application, there is no single
client entity. What is called the "consumer" is not an entity. It is a
program running on a device, not a company.
And to re-quote them:
> For example, if the client is a
> desktop application with freely available source code or an
> executable binary, an attacker may be able to download a copy for
> analysis.
This borders on being silly - why bother with analysis, when the
attacker can just run the program.
The oauth system comes from client/server concepts and client/server
thinking. In that scenario, the authentication is between one client
and two servers. That is not the case with most desktop/mobile apps.
--
Bernd Stramm
<bernd....@gmail.com>
Their idea is that if you can embed a browser and get the user to authenticate through it, you can inspect the URL of the embedded browser, detect when it hits login_success.html, and take the access_token fragment and store it.
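The client side of that URL-watching trick is roughly the following (the navigation hook and the storage helper are placeholders, not any real SDK call):

    from urllib.parse import urlparse, parse_qs

    SUCCESS_PAGE = "https://www.facebook.com/connect/login_success.html"

    def on_url_changed(url, store_token):
        """Hook this up to the embedded browser's navigation event
        (that part is toolkit-specific and not shown here)."""
        if url.startswith(SUCCESS_PAGE):
            # The token comes back in the URL fragment, e.g.
            # ...login_success.html#access_token=...&expires_in=...
            params = parse_qs(urlparse(url).fragment)
            token = params.get("access_token", [None])[0]
            if token:
                store_token(token)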
However, what is interesting about that is that I can embed client_ids I stole from other desktop apps (and possibly other web apps, if they don't protect against it) and generate valid access_tokens against those other ids in my own desktop app. The user may notice that the app they authorize isn't the one they are using, because Facebook identifies the app by its name and icon on the authorize page. But if I'm being evil, I could social-engineer the user somehow, say by naming my app the same as the one I'm stealing from and using the same icon, and then I can get access tokens as if I'm that app.
Basically, when it comes to desktop apps, Facebook can't tell for sure the difference between my desktop app and an illegitimate one. If Facebook blocks entire apps or rate-limits by them, then I can still DOS an app by using its client_id. It doesn't offer any more application-identity protection than just embedding a secret and using the OAuth 1.0a flow.
Facebook probably realizes this. Since you can mark your app as a desktop app rather than a web app in your app settings, they probably know that you can't always trust desktop clients, so why even bother with secrets (it's probably good that they ask for your app type upfront for this reason, so you don't get a false sense of security from having a secret at all). From an operations perspective, though, it gives FB fewer options for safely blacklisting desktop apps without taking out legitimate ones.
Zac Bowling
@zbowling
On Mon, 14 Jun 2010 10:51:34 -0700
Zac Bowling <zbow...@gmail.com> wrote:
> In facebook's desktop authflow, rather than giving you an
> ...
> Basically, when it comes to desktop apps, Facebook can't tell for sure
> the difference between my desktop app and an illegitimate one.
Not only that, they (or anyone) cannot tell a legitimate desktop from an
illegitimate one. An illegitimate person can take a desktop with a
bunch of legitimate apps and do illegitimate things with the whole
collection.
And then we should not forget that a mobile phone is the same as a
desktop, from the point of view of the web server. Phones are usually
not protected very well, both in terms of authenticating users and in
physical terms.
What is it that makes an app illegitimate? Basically, that it
impersonates the user and does things the user doesn't want done.
Unless of course the app does business on behalf of a third party with
both the user and the server (twitter, facebook, ...). Collecting data
is "doing business" in this sense. Then the app is an agent for that
third party.
But for a lot of apps, this is not the case, they act entirely as an
agent for the user. They are no different than browsers in this respect.
--
Bernd Stramm
<bernd....@gmail.com>
Any update or ETA on this? I have an app that I'm eager to test out.
(I notice that if you open http://dev.twitter.com/apps/key_exchange
with a valid oauth_consumer_key, instead of a 404 there is a page that
says "Sorry, key exchange is not permitted for this application." Does
this mean the answer is "soon"?)
--
things change.
dec...@red-bean.com
On another thread, Taylor said "No". So, basically, you will have to
let your secret "leak" so your users can use your app.
--
Julio Biason <julio....@gmail.com>
Twitter: http://twitter.com/juliobiason