Essentially, we believe that mechanism 3 of https://wiki.mozilla.org/Thunderbird:Autoconfiguration
does not meet our requirements and could introduce security problems.
Mechanism 2 is what we would like to pursue. So, I'm looking for a
way to prevent mechanism 3 from happening for any of the domains we
host.
> We have had an unauthorized user publish settings for our mail
> service. This person was well intentioned, and mostly got it right.
> However, we are worried about the possibility for a malicious person
> publishing false settings in order to steal end-user credentials. How
> can we prevent this from happening for our domains? We currently have
> 271 domains.
>
> Essentially, we believe that mechanism 3 of https://wiki.mozilla.org/Thunderbird:Autoconfiguration
> does not meet our requirements and could introduce security problems.
> Mechanism 2 is what we would like to pursue. So, I'm looking for a
> way to prevent mechanism 3 from happening for any of the domains we
> host.
Hi Jesse --
In the immediate, I think best is to file a bug on ISPDB, ccing me and
blake winton (bwi...@latte.ca), and we can talk about your specific
needs off-list, in a confidential bug if necessary.
While you're here, though, it'd be really good for us to understand
a few things better, so we can improve the system:
a) how did the person get it wrong? (and if you know which bug in
bugzilla was involved in publishing the wrong data, we clearly need to
do a post-mortem to figure out how our review process failed, and
adjust our tools/processes to fit)
b) can you be specific in your issues w/ mechanism 3, so that we can
learn from them and address them? Before answering that, also:
c) Note that that wiki page is probably not fully up to date. In
particular, we have been working with some ISPs (who control thousands
of domains) so that they can effectively self-manage their domain's
configurations by publishing appropriate DNS records. I don't have
the bug# handy, but our hope is that by letting organizations who
control the domain publish configuration data, they can in effect make
the central lookup service work for them. It would be good to know
if
1) that system would let you deliver configurations that work for
your users/security concerns
2) we need to incorporate your input into that system (which
hasn't been deployed yet) to let people explicitly opt-out rather than
provide configuration data.
d) We'd like it fine if we could provide the service to Thunderbird
users with no central lookup, but we're currently unable to do the
appropriate DNS queries from Thunderbird itself due to platform
limitations. I'm not sure whether that would work for you anyway, as
I don't yet understand the specific issues.
Cheers, and thanks for pointing out the problem.
--david
Thanks for your response. See inline.
On 1/8/2010 11:39 AM, David Ascher wrote:
> On Jan 8, 8:57 am, Jesse Thompson<jesserthomp...@gmail.com> wrote:
>
>> We have had an unauthorized user publish settings for our mail
>> service. This person was well intentioned, and mostly got it right.
>> However, we are worried about the possibility for a malicious person
>> publishing false settings in order to steal end-user credentials. How
>> can we prevent this from happening for our domains? We currently have
>> 271 domains.
>>
>> Essentially, we believe that mechanism 3 of https://wiki.mozilla.org/Thunderbird:Autoconfiguration
>> does not meet our requirements and could introduce security problems.
>> Mechanism 2 is what we would like to pursue. So, I'm looking for a
>> way to prevent mechanism 3 from happening for any of the domains we
>> host.
>
> Hi Jesse --
>
> In the immediate, I think best is to file a bug on ISPDB, ccing me and
> blake winton (bwi...@latte.ca), and we can talk about your specific
> needs off-list, in a confidential bug if necessary.
>
> While you're here, though, it'd be really good for us to understand
> a few things better, so we can improve the system:
>
> a) how did the person get it wrong? (and if you know which bug in
> bugzilla was involved in publishing the wrong data, we clearly need to
> do a post-mortem to figure out how our review process failed, and
> adjust our tools/processes to fit)
It was only a minor issue with the username format. I am using it as an
illustration that the information provided by non-authoritative people
can be incorrect, and the review process failed to detect the mistake.
We can't assume that phishers will not try to forge this information.
I can't assume that the review process will protect against this type of
threat.
Ultimately I, as an email service administrator, am responsible for the
security of the service I maintain. I therefore can't rely on a 3rd
party to accurately review this information.
We have 271 domains, and we are a university, not an ISP. I find it
hard to believe that the review process will scale in a way that will be
secure.
> b) can you be specific in your issues w/ mechanism 3, so that we can
> learn from them and address them? Before answering that, also:
It allows for unverified users to submit the information. See above.
It is a pain to manage if you host lots of domains (we host 271 domains.)
It makes Mozilla a dependency, which leads to concerns about their
reliability and longevity.
How can we ensure that this mozilla-centric model won't interfere with
other clients that want to implement autoconfiguration? If we have to
backfill this information for 270 domains, I don't want to do it again
for each client that chooses to implement their own centralized
database. I think that it is safe to assume that Microsoft, Apple and
Google will not build their clients to rely on Mozilla in this way.
> c) Note that that wiki page is probably not fully up to date. In
> particular, we have been working with some ISPs (who control thousands
> of domains) so that they can effectively self-manage their domain's
> configurations by publishing appropriate DNS records. I don't have
> the bug# handy, but our hope is that by letting organizations who
> control the domain publish configuration data, they can in effect make
> the central lookup service work for them. It would be good to know
> if
You might be referring to this bug
https://bugzilla.mozilla.org/show_bug.cgi?id=342242
> 1) that system would let you deliver configurations that work for
> your users/security concerns
Yes, the mechanisms in section 2 seem to be on the right track for
providing the level of control that would work for us.
But that does not mean that mechanism 3 is no longer a concern. If there
are no "mechanism 2" records for a domain, or if there is an error
retrieving or parsing them, should mechanism 3 be a fallback, and hence
a possible security loophole?
At the very least, I would hope that the central database should defer
to the "mechanism 2" entries if they exist.
We ran into a similar problem with our XMPP service and Google Apps. We
had our chat service running for a long time, and we had SRV records
published in DNS that defined that our servers hosted the service. One
day, Google decided to open up Google Apps so that any end user (defined
by having an email address in the domain) of our domain could enable
Google Apps without authorization from the domain administrators. One
of the Apps that was enabled was Google Talk (which is an XMPP chat
service.) Google made the mistake of neglecting to read the SRV records
that we had published. As a result, Google then assumed that the domain
was locally hosted by Google, and would no longer route traffic to our
service. Google has since corrected their mistake by always checking
for the existence of published SRV records prior to enabling Talk for a
domain. However, I think that a lot of lessons about domain ownership
can be learned from this example.
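(For what it's worth, the fix on their end boils down to something like
this sketch; I'm assuming the dnspython library and a made-up domain, and
this obviously isn't Google's actual code:)

import dns.resolver

def domain_already_hosts_xmpp(domain):
    # Look for client SRV records the domain owner may have published.
    try:
        answers = dns.resolver.resolve("_xmpp-client._tcp." + domain, "SRV")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    # Per RFC 2782, a single record with target "." means "service not
    # offered"; anything else means servers have already been designated.
    return any(str(r.target) != "." for r in answers)

# A provider should run a check like this before assuming it can take
# over chat for a domain:
print(domain_already_hosts_xmpp("example.edu"))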
> 2) we need to incorporate your input into that system (which
> hasn't been deployed yet) to let people explicitly opt-out rather than
> provide configuration data.
I think that it is wrong to assume that centralized configuration data
should be used for a domain unless the service administrator has provided it.
> d) We'd like it fine if we could provide the service to Thunderbird
> users with no central lookup, but we're currently unable to do the
> appropriate DNS queries from Thunderbird itself due to platform
> limitations. I'm not sure whether that would work for you anyway, as
> I don't yet understand the specific issues.
Keep in mind that you are not the first ones to struggle with this idea.
XMPP is farther down this road (relies on DNS SRV records) but they
are still working out some of the domain assertion issues. You can
learn a lot from their successes and mistakes.
Take a look at:
http://tools.ietf.org/html/draft-hildebrand-dna-00
Jesse
How about this idea.
If the client is not yet able to do the queries directly, but is able to
query the ISPDB, what if you just set up the ISPDB to do the DNS
queries? This way, the DNS would still be authoritative, and you would
be able to work around the limitations of Thunderbird. In the future,
if and when Thunderbird is able to do the queries directly, nothing
would need to change with the configuration in DNS.
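Roughly what I'm picturing, as a sketch only (the lookup path mimics the
existing ISPDB URL shape, and everything on the server side here is
invented for illustration):

from wsgiref.simple_server import make_server
import dns.resolver

def app(environ, start_response):
    # Thunderbird keeps querying the central service the way it does now;
    # the service answers from DNS instead of from hand-reviewed files.
    domain = environ.get("PATH_INFO", "").rsplit("/", 1)[-1]
    try:
        answers = dns.resolver.resolve("mozautoconfig." + domain, "TXT")
        # A real version would dereference whatever the record points at
        # rather than returning it raw; this just shows where the
        # authoritative data comes from.
        body = b"".join(answers[0].strings)
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"no configuration published for " + domain.encode()]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()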
Jesse
Right, that's what I was trying to explain in my c) bit.
I need to find the bug that describes what we're trying to do.
(reply to your other email in progress)
--david
I agree that you can't assume it, but maybe we can convince you that we
can get there. In particular, AFAICT, the risk from a phishing POV is
that a domain not managed by the service provider effectively
hijacks a domain. We have built in tests that flag any domain that
doesn't match fairly strict rules.
In practice, you can't be sure that the review processes that we use for
our code are secure either. However, we seem to have evolved a set of
procedures, cultural norms, etc., that make our code secure enough for
many. I'm going to claim that we should be able to get there as well
for data like this. In particular, the process as I'm seeing it involves:
1) code-supported checks to make sure that foo.com email addresses
don't map to bar.com domains (see the sketch after this list)
2) requiring independent review of any user-submitted configuration by
people with some (still TBD) training
3) requiring that the configuration data be available to the reviewer
on a public site clearly published by the owner of the domain (and not a
third-party site)
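For illustration only, here's a rough sketch of the kind of check I mean
in (1); it's not the actual ISPDB tooling, the hostnames are made up, it
assumes the dnspython library, and a real version would use the public
suffix list rather than my crude two-label rule:

import dns.resolver

def registrable_suffix(hostname, labels=2):
    # Crude approximation of the "organizational" domain (foo.com from
    # imap.foo.com); a real check would consult the public suffix list.
    return ".".join(hostname.rstrip(".").split(".")[-labels:]).lower()

def mx_domains(email_domain):
    # Domains that the email domain's own MX records already point at.
    try:
        answers = dns.resolver.resolve(email_domain, "MX")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return set()
    return {registrable_suffix(str(r.exchange)) for r in answers}

def flag_suspicious_hosts(email_domain, config_hosts):
    # Flag any submitted server hostname whose domain is neither the email
    # domain itself nor a domain its MX records point at.
    allowed = {registrable_suffix(email_domain)} | mx_domains(email_domain)
    return [h for h in config_hosts if registrable_suffix(h) not in allowed]

# Hypothetical submission: foo.com addresses pointing at bar.com servers.
print(flag_suspicious_hosts("foo.com", ["imap.bar.com", "smtp.foo.com"]))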
I'd love input on other ways we can make this review process as
bulletproof as possible.
> We have 271 domains, and we are a university, not an ISP. I find it
> hard to believe that the review process will scale in a way that will
> be secure.
I don't think we need to support all domains in this process, but it'd
be good to support the "head" of domains (large domains, etc., where
getting the participation of the mail admins in _anything_ is likely
impossible (think hotmail and equivalents ;-).
>> b) can you be specific in your issues w/ mechanism 3, so that we can
>> learn from them and address them? Before answering that, also:
>
> It allows for unverified users to submit the information. See above.
Just to confirm: the submitting of the information isn't the problem,
but publishing erroneous information is, right? (IOW, it's the accuracy
of the data that's the problem, not the source of the original data).
> It is a pain to manage if you host lots of domains (we host 271 domains.)
Agreed, and we do want to come up with ways to help people like
yourselves to self-manage large groups of domains.
> It makes Mozilla a dependency, which leads to concerns about their
> reliability and longevity.
Their? (you mean Mozilla there?) Thunderbird is dependent on Mozilla
today, so I don't see a real change. Can you clarify?
> How can we ensure that this mozilla-centric model won't interfere with
> other clients that want to implement autoconfiguration?
How could it interfere with other clients? I'm not understanding.
> If we have to backfill this information for 270 domains, I don't want
> to do it again for each client that chooses to implement their own
> centralized database. I think that it is safe to assume that
> Microsoft, Apple and Google will not build their clients to rely on
> Mozilla in this way.
I can't speak for any third parties. We're doing this because email
configuration is too hard for way too many people. Our APIs are public,
and I would have no problem if other apps wanted to work with us on a
public-service API. I don't know for sure because their code isn't
public AFAIK, but Postbox seems to be copying our service (& data) in
their next beta. That's fine with me, as I think anything that makes
setting up email accounts easier is a good thing. We have been in conversation with
some other email client vendors who end up building the same kinds of
systems because they feel the same pain, and I'd be more than happy to
collaborate on a client-neutral system in the long term. (Or happy to
scrap it all if evolving internet standards address the issue)
In the meantime, however, inaction would be doing a disservice to our users.
As I said, though, I don't think you _should_ be submitting 270 domains
(or thousands for others) -- we should instead work on teaching the
central lookup system to intelligently delegate to domain owners.
>
>> c) Note that that wiki page is probably not fully up to date. In
>> particular, we have been working with some ISPs (who control thousands
>> of domains) so that they can effectively self-manage their domain's
>> configurations by publishing appropriate DNS records. I don't have
>> the bug# handy, but our hope is that by letting organizations who
>> control the domain publish configuration data, they can in effect make
>> the central lookup service work for them. It would be good to know
>> if
>
> You might be referring to this bug
> https://bugzilla.mozilla.org/show_bug.cgi?id=342242
No, although that bug is related. I'm referring to work that Rob
Mueller is leading -- Rob, are you on this thread?
The draft spec:
Given a domain xyz.com to look up, it
does these steps:
1. If xyz.com is a "known domain" (based on a stat into a directory of
files), return the content of that file, finish
2. If xyz.com is a "cached domain" (based on a stat into a directory of
files), return the content of that file, finish
3. Do a DNS lookup: TXT records for mozautoconfig.xyz.com and MX records
for xyz.com
4. If the TXT record is present, and begins "v=mozautoconfig1 ", look at
the remainder of the record:
   a) If it's "domain:abc.com", repeat the whole process for abc.com
   b) If it's "url:https?://def.com/something":
      i) Replace %%DOMAIN%% in the url with the original domain
         requested
      ii) Fetch the resulting URL. Check the content response for sanity
      iii) Cache the response and return the content, finish
5. If the MX record is present, find the lowest-priority MX record. If
it's from a known list (currently a hardcoded hash in the code),
find the corresponding domain, and repeat the whole process for
that domain
---
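To make the steps above concrete, here's my own rough reading of that
flow in Python; this is NOT Rob's actual code, and the directory paths,
the known-MX table, and the sanity check are all placeholders (it also
assumes the dnspython library):

import os
import urllib.request
import dns.resolver

KNOWN_DIR, CACHE_DIR = "/srv/ispdb/known", "/srv/ispdb/cache"  # placeholders
KNOWN_MX = {"google.com": "gmail.com"}                         # placeholder table

def lookup(domain, depth=0):
    if depth > 5:                        # guard against referral loops
        return None
    for d in (KNOWN_DIR, CACHE_DIR):     # steps 1 and 2: known, then cached
        path = os.path.join(d, domain)
        if os.path.exists(path):
            with open(path) as f:
                return f.read()
    try:                                 # steps 3 and 4: the TXT record
        answers = dns.resolver.resolve("mozautoconfig." + domain, "TXT")
        record = b"".join(answers[0].strings).decode()
        if record.startswith("v=mozautoconfig1 "):
            rest = record[len("v=mozautoconfig1 "):]
            if rest.startswith("domain:"):
                return lookup(rest[len("domain:"):], depth + 1)
            if rest.startswith("url:"):
                url = rest[len("url:"):].replace("%%DOMAIN%%", domain)
                body = urllib.request.urlopen(url).read().decode()
                if "<clientConfig" in body:          # placeholder sanity check
                    with open(os.path.join(CACHE_DIR, domain), "w") as f:
                        f.write(body)
                    return body
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass
    try:                                 # step 5: fall back to the MX record
        mx = min(dns.resolver.resolve(domain, "MX"), key=lambda r: r.preference)
        mx_domain = ".".join(str(mx.exchange).rstrip(".").split(".")[-2:])
        if mx_domain in KNOWN_MX:
            return lookup(KNOWN_MX[mx_domain], depth + 1)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        pass
    return None

A domain that wanted to opt in would then publish something like a TXT
record at mozautoconfig.xyz.com containing
"v=mozautoconfig1 url:https://autoconfig.xyz.com/%%DOMAIN%%" (the URL is
made up, but the record format matches the draft above).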
Feedback on whether the above is the right protocol for you would be
super helpful.
> At the very least, I would hope that the central database should defer
> to the "mechanism 2" entries if they exist.
> We ran into a similar problem with our XMPP service and Google Apps.
> We had our chat service running for a long time, and we had SRV
> records published in DNS that defined that our servers hosted the
> service. One day, Google decided to open up Google Apps so that any
> end user (defined by having an email address in the domain) of our
> domain could enable Google Apps without authorization from the domain
> administrators. One of the Apps that was enabled was Google Talk
> (which is an XMPP chat service.) Google made the mistake of
> neglecting to read the SRV records that we had published. As a
> result, Google then assumed that the domain was locally hosted by
> Google, and would no longer route traffic to our service. Google has
> since corrected their mistake by always checking for the existence of
> published SRV records prior to enabling Talk for a domain. However, I
> think that a lot of lessons about domain ownership can be learned from
> this example.
Great lesson that we should learn from, I agree.
> Keep in mind that you are not the first ones to struggle with this
> idea. XMPP is farther down this road (relies on DNS SRV records) but
> they are still working out some of the domain assertion issues. You
> can learn a lot from their successes and mistakes.
>
> Take a look at:
> http://tools.ietf.org/html/draft-hildebrand-dna-00
Thanks, will look!
--david
That applies to most of our domains. It probably applies to most email
domains in general.
> 2) requiring independent review of any user-submitted configuration by
> people with some (still TBD) training
This isn't a training issue, or a trust issue with Mozilla. I wouldn't
even be surprised if our own help desk agents got something wrong.
Without intimate knowledge of the state of the particular domain, a
reviewer can't verify the settings; only the domain owners and the
service administrators know for sure what is correct.
> 3) requiring that the configuration data be available to the reviewer on
> a public site clearly published by the owner of the domain (and not a
> third-party site)
Most of our hosted domains use our documentation. So, all of these are
on a third-party site.
There are lots of email hosting providers. It would be pretty easy to
put up a fake web site that made it look like shadyhosting.com is a
legitimate hosting provider. Then they would just need to add some
documentation to that web site with instructions for configuring
innocentdomain.org to connect to shadyhosting.com.
There is absolutely no way for you to verify that an email domain is
hosted by any particular hosting provider. So, if you can't be sure
that the configuration data is correct, then you can't be sure that the
configuration data isn't malicious.
> I'd love input on other ways we can make this review process as
> bulletproof as possible.
I think that your review process makes assumptions it shouldn't be making.
If you are exercising as much due diligence as you claim, none of our domains would
pass your review process. If your goal is to make this
autoconfiguration work for most email services, then you will have to
compromise the review process, which will make all domains susceptible
to hijacking.
>> We have 271 domains, and we are a university, not an ISP. I find it
>> hard to believe that the review process will scale in a way that will
>> be secure.
>
> I don't think we need to support all domains in this process, but it'd
> be good to support the "head" of domains (large domains, etc., where
> getting the participation of the mail admins in _anything_ is likely
> impossible (think hotmail and equivalents ;-).
How do you define "large domains"? Our main domain has 85,000 users; is
that large or small? Will you not accept any user submitted
configuration information (and depend only on DNS) for "small domains"?
I suppose that this could solve the hijacking issue as long as this
"head" distinction is clearly defined.
But otherwise, what you say sounds discriminatory to the small domains.
We have some domains with fewer than 10 users in them. But I don't
consider them less important, especially when it comes to their security.
>>> b) can you be specific in your issues w/ mechanism 3, so that we can
>>> learn from them and address them? Before answering that, also:
>>
>> It allows for unverified users to submit the information. See above.
>
> Just to confirm: the submitting of the information isn't the problem,
> but publishing erroneous information is, right? (IOW, it's the accuracy
> of the data that's the problem, not the source of the original data).
No. The only way to guarantee accuracy is to get the information from
the correct source.
>> It is a pain to manage if you host lots of domains (we host 271 domains.)
>
> Agreed, and we do want to come up with ways to help people like
> yourselves to self-manage large groups of domains.
Yes, this is something that we would love to pursue. Having the central
service query DNS "in some standard way" and parse the data on behalf of
Thunderbird seems like a reasonable solution.
>> It makes Mozilla a dependency, which leads to concerns about their
>> reliability and longevity.
>
> Their? (you mean Mozilla there?) Thunderbird is dependent on Mozilla
> today, so I don't see a real change. Can you clarify?
As much as I admire Mozilla, I can't predict their longevity. If the
ispdb were disbanded for whatever reason, Thunderbird would still exist
on computers and would continue to try to autoconfigure. Heck, we still
have Eudora users who haven't upgraded in years.
As far as reliability, what happens to the user experience when the
ispdb isn't available? Suppose you get DDOS'ed, or you have a
misconfiguration, or there is a database corruption...
>> How can we ensure that this mozilla-centric model won't interfere with
>> other clients that want to implement autoconfiguration?
>
> How could it interfere with other clients? I'm not understanding.
I phrased that poorly. I'm just worried about duplication here.
Microsoft already invented an arcane autoconfiguration system. Now,
Mozilla has a different one. Why didn't Mozilla use the Microsoft
system? That's probably the same reason why Microsoft wouldn't use
Mozilla's system. What if another client (Apple Mail perhaps) decides
to implement a third type of autoconfiguration system?
>> If we have to backfill this information for 270 domains, I don't want
>> to do it again for each client that chooses to implement their own
>> centralized database. I think that it is safe to assume that
>> Microsoft, Apple and Google will not build their clients to rely on
>> Mozilla in this way.
>
> I can't speak for any third parties. We're doing this because email
> configuration is too hard for way too many people. Our APIs are public,
> and I would have no problem if other apps wanted to work with us on a
> public-service API. I don't know for sure because their code isn't
> public AFAIK, but Postbox seems to be copying our service (& data) in
> their next beta. That's fine with me, as I think anything that makes
> setting up email accounts easier is a good thing. We have been in conversation with some
> other email client vendors who end up building the same kinds of systems
> because they feel the same pain, and I'd be more than happy to
> collaborate on a client-neutral system in the long term. (Or happy to
> scrap it all if evolving internet standards address the issue)
I doubt that many non-Thunderbird-based clients would implement a
non-standard system.
However, I do feel optimistic that the ispdb is in a good position to
spark the creation of a standard, if it is able to proxy the "mechanism
2" settings from DNS to the client.
> In the meantime, however, inaction would be doing a disservice to our users.
Please understand that I sympathize with your motivation to make
autoconfiguration easier. I love how XMPP clients are able to
autoconfigure, and I would love it if email clients worked similarly. I
agree with your 'end', I just see some problems with your 'means'.
> As I said, though, I don't think you _should_ be submitting 270 domains
> (or thousands for others) -- we should instead work on teaching the
> central lookup system to intelligently delegate to domain owners.
That sounds great, but that's not what seems to be happening.
Are you just hoping that domains like this don't bother to submit?
Autoconfiguration is a very appealing concept, so I can imagine that
this could catch on.
I think that this needs some work, but you're heading in the right
direction. I'm willing to help vet this out. Again, this might be an
area where we can learn from XMPP.
I'd skip looking at MX records, since they are used for incoming email,
not client configurations. It's only coincidental that some email
services use the same hostname for inbound mail and client
configurations, but it's a bad assumption to make in general.
>> At the very least, I would hope that the central database should defer
>> to the "mechanism 2" entries if they exist.
>
>> We ran into a similar problem with our XMPP service and Google Apps.
>> We had our chat service running for a long time, and we had SRV
>> records published in DNS that defined that our servers hosted the
>> service. One day, Google decided to open up Google Apps so that any
>> end user (defined by having an email address in the domain) of our
>> domain could enable Google Apps without authorization from the domain
>> administrators. One of the Apps that was enabled was Google Talk
>> (which is an XMPP chat service.) Google made the mistake of neglecting
>> to read the SRV records that we had published. As a result, Google
>> then assumed that the domain was locally hosted by Google, and would
>> no longer route traffic to our service. Google has since corrected
>> their mistake by always checking for the existence of published SRV
>> records prior to enabling Talk for a domain. However, I think that a
>> lot of lessons about domain ownership can be learned from this example.
>
> Great lesson that we should learn from, I agree.
>
>> Keep in mind that you are not the first ones to struggle with this
>> idea. XMPP is farther down this road (relies on DNS SRV records) but
>> they are still working out some of the domain assertion issues. You
>> can learn a lot from their successes and mistakes.
>>
>> Take a look at:
>> http://tools.ietf.org/html/draft-hildebrand-dna-00
>
> Thanks, will look!
Overall, I think that what you guys are doing is a good thing. Client
autoconfiguration is much needed for email. Although I feel that a
decentralized system is essential, I think that your centralized
approach might be a good stepping stone to a standards-based system. My
main concerns at this point have to do with security from configuration
poisoning, maintainability for email service administrators, and
usability for end users in the fringe cases.
Jesse
> --david
>
clarification: it is the mismatch that applies
Jesse
weird coincidence. I just got a notification saying that they are
moving this forward. It will be revised and resubmitted as
draft-ietf-xmpp-dna-00
By the way, it looks like the authors are intending this to be relevant
to SMTP and IMAP:
" The DNA mechanism can be used for multiple different protocols. In
particular, client-to-server XMPP and server-to-server XMPP are
discussed herein, but the general approach could be used for non-XMPP
protocols such as SMTP or IMAP."
Jesse
This brings to mind that we could also do the exchange-style lookup
centrally, along with the DNS-based, XMPP-DNA style, google apps, etc.
lookups, so that if providers are already publishing their data using
any system, we can increase our chances of avoiding duplication of
effort on behalf of the administrator.
My understanding is that the autoconfig format doesn't map
trivially to the MS format, but we may be able to support a decent
percentage of Exchange installations (or not, don't know!). In general,
gathering data about which kinds of configurations exist in the wild
would be very useful in understanding how to move the specs along. If
things settle down and clear winners evolve, then it would make sense to
do the heavy lifting engineering required to do more in the client.
--david
Yeah that particular technique wouldn't map. But there are other
lessons to be learned. The domain assertion draft had some interesting
methods for determining the validity of the information in DNS.
>> I'd skip looking at MX records, since they are used for incoming
>> email, not client configurations. It's only coincidental that some
>> email services use the same hostname for inbound mail and client
>> configurations, but it's a bad assumption to make in general.
>
> No, I want to leave this because it's damn useful for all those "google
> apps for domains" users (and in our case, all the FastMail
> family/business users). Basically they can just enter their username
> (which is in their own domain) and password and things "just work",
> because we detect the MX records point to google/fastmail, and thus
> return the standard gmail/fastmail config data. This would be something
> added to the code by mozilla people to "make common hosting providers
> work" without the hosting provider having to do anything.
What happens when the MX records point to an email service that is only
smarthosting the email for the domain?
http://en.wikipedia.org/wiki/Smart_host
> If particular providers want to override this, they can by specifying a
> TXT record, but it's still a nice fallback to make providers "just work"
> where possible.
Fair enough.
But I think that there is still a risk of configuration hijacking for
domains that don't publish in DNS.
Jesse
Yes, but you are in the position of picking the winner. Pick wisely. :-)
Jesse
I will be happy to help test the DNS protocol.
How do you define "major provider?"
What do the "non-major providers" do to prevent configuration hijacking
in the meantime? Currently the only method is to preemptively submit
configurations, which contradicts the desire to keep the ispdb small.
Jesse
Gozer's on the hook to look at the code, but he's just got back from
vacation and is pushing 3.0.1 out now. It's on his todo list though! I
suggest finding an appropriate bug (or creating a new one) and attaching
the code to it so that more people can review/test the code.
On the topic of which providers to put in ISPDB or not, there's clearly a
tension between wanting to provide a good experience for providers who
aren't interested, and letting the mail admins who want to self-manage
the system do so without us screwing it up.
What do y'all think of:
a) we go ahead with code like Rob's to allow leading-edge MSPs to drive
the process that works for them. The details need figuring out, but the
general approach seems agreeable to all (it also provides a
significantly better user experience for users of these ISPs, which I
think of as a win-win-win for users, ISPs, and Mozilla ;-).
b) we keep ISPDB for domains for which adding the configuration is a
clear and safe win -- in particular, ISPs who have demonstrably simple
configurations and clearly published configurations, or for ISPs who
explicitly reach out to us. I'm happy to be fairly conservative about
which domains we add to the ISPDB until we understand the impact & risks
better.
c) we maybe tweak ispdb to let us store an optional email contact for
each domain, so that we can communicate more effectively with the domain
owner in the future
d) we look at the logs of autoconfig usage in the wild to find out more
about which domains Thunderbird users actually care about, and reach out
to them to figure out what the highest-value work is (where I define
high-value as "largest number of users helped per hour of work"). It
may result in changes to the DNS-based system, or changes to ISPDB, or...
?
--david
This also seems to be the point of view of others (Ben Bucksch)
https://bugzilla.mozilla.org/show_bug.cgi?id=534722
but no one is able to define it.
It makes sense to limit it. The whole idea of non-ISP-sanctioned
configuration settings only works for those providers that are "too big"
to publish their settings in a standard way; i.e. they don't actually
want their customers to use a client. Any other sensible ISP that
encourages client use, like us, would not risk letting this happen.
> I'll pick an arbitrary value, top 20.
There are a lot more than 20 in
https://live.mozillamessaging.com/autoconfig/
I now see that our primary domain's configuration (wisc.edu) was
submitted to the ispdb, but it is not in that list. Does that mean:
- it hasn't been reviewed?
- it doesn't qualify for the ispdb?
- other?
I would prefer that there were some kind of guarantee that our domains
will not be allowed to be part of the centralized ispdb. Allowing them
to be submitted, with nothing stating that they will or will not be
published, just makes us worry about the possibility of hijacking.
>> What do the "non-major providers" do to prevent configuration hijacking
>> in the meantime? Currently the only method is to preemptively submit
>> configurations, which contradicts the desire to keep the ispdb small.
>
> My recommendation in general would be ispdb shouldn't accept arbitrary
> outside submissions. As noted, it requires a bunch of work by Mozilla to
> check the entries, and vet them, and ensure no hijacking is occurring.
> Instead I think Mozilla should fill it with the top 20 providers or so,
I'll buy that. But mozilla is going to have a hard time defining "top
20 providers or so" in a way that includes the ones that they want to
include, and provides assurance to domains that don't want arbitrary
submissions published.
> and then leave it at that. That'll get 75% of personal account holders
> "just working" out of the box with TB3. Everyone else should use DNS.
I always assumed that the majority of Thunderbird users aren't the ones
that use the big freemail providers.
> In general I think that'll work well, because most smaller providers
> differentiate based on "better service", so it'll be in their interest
> to setup the DNS when their customers start complaining to them.
Wait. Our customers weren't complaining before, and now they are
confused and may start complaining (because mechanisms 3 and 4 don't
work for most email services, and mechanism 2 isn't yet implemented.)
So, mozilla is introducing a feature that causes our customers to
complain to us?
I think that's backwards thinking. The DNS settings should be an
incentive for ISPs to make the customer experience better; instead,
mozilla is making the not-yet-available DNS mechanism a requirement just
to keep the customer experience from getting worse.
ISPs like us encourage our users to use Thunderbird, and we support its
usage. The "big ISPs" that this project is catering to don't encourage
their users to use a client, and they don't support it. That's making
it harder for us to offer "better service."
Jesse
It seems OK on the face of it. But that all depends on how you define
"leading-edge MSPs."
> b) we keep ISPDB for domains for which adding the configuration is a
> clear and safe win -- in particular, ISPs who have demonstrably simple
> configurations and clearly published configurations, or for ISPs who
> explicitly reach out to us. I'm happy to be fairly conservative about
> which domains we add to the ISPDB until we understand the impact & risks
> better.
>
> c) we maybe tweak ispdb to let us store an optional email contact for
> each domain, so that we can communicate more effectively with the domain
> owner in the future
How are you authenticating this email contact address as authoritative?
Hint: postmaster@domain should be the only correct answer.
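i.e. verification could be as simple as this sketch (the sender address
and token handling are made up; the point is just that the mail goes to
postmaster@ the domain itself):

import secrets
import smtplib
from email.message import EmailMessage

def send_contact_verification(domain, smtp_host="localhost"):
    # RFC 2142 reserves postmaster@<domain> for whoever actually runs the
    # mail service, so only they can complete this loop.
    token = secrets.token_urlsafe(16)
    msg = EmailMessage()
    msg["From"] = "ispdb-verify@example.org"      # hypothetical sender
    msg["To"] = "postmaster@" + domain
    msg["Subject"] = "Confirm the ISPDB contact for " + domain
    msg.set_content("Reply with this token to confirm the contact: " + token)
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(msg)
    return token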
Jesse
I was just referring to his comment: "big ISPs - including hotmail.com -
which don't set up a config server"
No one has really defined "big ISPs" yet. I think that if you are able
to provide this definition, it will answer the concern I am bringing up
about the potential for configuration hijacking of "normal ISPs"
> For the DNS lookup, it is still crappy.
>
> 5.2.) the domain of the email address that the user entered does not
> match any of the domain names in the cert,
>
> Is just not feasible for email service providers.
>
> Anyway my implementation is similar to
> Thunderbird:Autoconfiguration:DNSBasedLookup, but with a few extra
> options, and a few reduced requirements (SSL). I could add the
Yeah, I wonder if it is possible for you to implement some technique
that would avoid the DNS poisoning threat without depending on DNSSEC?
Publishing TXT records in DNS would be by far the easiest. But DNSSEC
isn't ubiquitous. And any technique that relies on the hosting provider
to have a matching certificate for the domain is essentially a show stopper.
Is your implementation strategy published somewhere?
> https://autoconfig.emailaddressdomain lookup to the server code as well
> pretty easily I think.
Is it possible to also query https://emailaddressdomain/something?
This might be more tolerable to those domains that don't want to shell
out for an additional certificate.
Additionally, how feasible would it be to support a redirect? i.e. so that
https://emailaddressdomain/something or
https://autoconfig.emailaddressdomain
has content that instructs your code to fetch the config from
https://autoconfig.hostingproviderdomain or
https://hostingproviderdomain/something
Or something like that. I'm not completely sure of the feasibility or
implications of these ideas. But it seems like it would be easier for
domains/hosting providers to set up, which might help spur adoption.
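As a sketch of what I mean (all of the paths and hostnames here are made
up, and I don't know what your fetching code actually looks like):

import urllib.request

def fetch_config(email_domain):
    # Hypothetical locations on the email domain itself; either one could
    # simply redirect to the hosting provider's copy of the config.
    candidates = [
        "https://autoconfig." + email_domain + "/mail/config.xml",
        "https://" + email_domain + "/autoconfig/mail/config.xml",
    ]
    for url in candidates:
        try:
            # urlopen follows HTTP redirects by default, so a 302 to
            # https://autoconfig.hostingproviderdomain/... would "just work".
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode()
        except OSError:
            continue
    return None

Whether a redirect off the email domain should be trusted at all is, of
course, part of the security question I keep raising.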
Jesse
I posted some comments.
Jesse