How about putting the PSL in DNS?


AJ ONeal

Feb 3, 2017, 10:24:39 PM
to psl-discuss
I like how Let's Encrypt is using DNS records for validation and adding new types like CAA. I think DNS is a much underutilized resource on the net.

Obviously as we continue to create peer-to-peer systems and the facilitators of those systems release domains into the PSL, the list will become longer and longer and longer. To me it would seem like a good move to provide a standard for browsers moving into the future whereby they look at some sort of DNS record to learn the domain policy for a domain (the HSTS Preload list needs this too).
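
Just to illustrate the kind of lookup I have in mind (the "_domain-policy" record name and the TXT contents below are purely made up; nothing like this is standardized today), a client-side check using the dnspython library might look roughly like this:

import dns.exception
import dns.resolver

def lookup_domain_policy(domain):
    """Return the TXT strings of a hypothetical per-domain policy record, if any."""
    try:
        answer = dns.resolver.resolve(f"_domain-policy.{domain}", "TXT", lifetime=2.0)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        return []  # no declared policy, or DNS unavailable
    return [b"".join(rdata.strings).decode("ascii") for rdata in answer]

# e.g. lookup_domain_policy("example.com") might return ["public-suffix=1 hsts=preload"]
# if such a convention existed.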

Is there a way by which I can make a formal proposal for this?

Ryan Sleevi

Feb 3, 2017, 10:37:58 PM
to AJ ONeal, psl-discuss
It's been proposed a number of times and in a number of ways for almost a decade. That's not to say it's a bad idea, just that no one has figured out how to make it work reliably and fit for purpose.

Browsers have so far indicated a lack of interest in that, for a variety of reasons. At least for Chrome, the latency impact and the unreliability of non-A/AAAA records are probably two critical blockers that make us unlikely to support a DNS-based solution that's client-discovered. It's been impractical for years and remains so.

Rather than rehash it all here, there's the IETF group DBOUND which has been exploring this for some time - https://datatracker.ietf.org/wg/dbound/charter/

Ryan Sleevi

Feb 3, 2017, 10:40:49 PM
to AJ ONeal, psl-discuss
Oh, and I should mention - it may very well be that for things like Let's Encrypt / ACME, DBOUND is a perfect fit. I didn't want to come across as too browser-specific, just that for the use cases of browsers so far, the PSL, warts and all, has some advantages that no one has been able to replicate or replace with DNS.

Peter Thomassen

Jun 10, 2019, 6:59:02 PM
to psl-discuss
Hi Ryan,


On Saturday, February 4, 2017 at 4:37:58 AM UTC+1, Ryan Sleevi wrote:
On Fri, Feb 3, 2017 at 7:24 PM, AJ ONeal <cool...@gmail.com> wrote:
I like how Let's Encrypt is using DNS records for validation and adding new types like CAA. I think DNS is a much underutilized resource on the net.

Obviously as we continue to create peer-to-peer systems and the facilitators of those systems release domains into the PSL, the list will become longer and longer and longer. To me it would seem like a good move to provide a standard for browsers moving into the future whereby they look at some sort of DNS record to learn the domain policy for a domain (the HSTS Preload list needs this too).

Is there a way by which I can make a formal proposal for this?

It's been proposed a number of times and in a number of ways for almost a decade. That's not to say it's a bad idea, just that no one has figured out how to make it work reliably and fit for purpose.
[...] 
Rather than rehash it all here, there's the IETF group DBOUND which has been exploring this for some time - https://datatracker.ietf.org/wg/dbound/charter/

I've just discovered this thread. As you know, I recently came up with a DNS implementation of this, not to thwart any other approaches, but because I needed a service like this and was unaware of other ongoing efforts.

Ryan, do you know whether there is continued interest in this by the IETF DBOUND group? If so, is there anything I should know / take into account when I talk to them regarding whether/how to streamline the efforts?

Best,
Peter

-- 
Dr. Peter Thomassen
Senior Security Expert


SSE Secure Systems Engineering GmbH | Am Sandwerder 21 | 14109 Berlin

Dave Crocker

Jun 11, 2019, 9:52:19 AM
to psl-discuss
On 6/10/2019 3:59 PM, Peter Thomassen wrote:
> Ryan, do you know whether there is continued interest in this by the
> IETF DBOUND group? If so, is there anything I should know / take into
> account when I talk to them regarding whether/how to streamline the efforts?


Hi. DBound stopped serious work quite a while ago.

It had competing proposals and didn't find the energy to resolve things.
A major challenge was trying to satisfy some very different needs.
One is to mark authority transitions along a DNS branch. Another is to
declare a relationship between different branches.

There has been a small amount of recent discussion, prompted by some new
proposals, which has caused folk to think there is new energy but the
discussion has been limited and, so far IMO, shows no signs of being
productive.

This seems to be a topic on which people find it difficult to have a
serious discussion of architecture, operation, tradeoffs and merits,
rather than just resorting to aggressive dismissal of alternatives that
are not one's own...

Full disclosure: my own proposal is one of the recent ones:

DNS Perimeter Overlay
https://datatracker.ietf.org/doc/draft-dcrocker-dns-perimeter/

There's been a claim that it is essentially the same as one of the
earlier proposals. It isn't.

Most successful IETF work is driven by a core of people who are clear
about the basic functional and operational goals -- that is, they are
clear about what problem they want to solve and roughly what its
operational characteristics need to be -- and merely have to haggle over
the technical details. (No, I don't really mean that's trivial, but
rather that the haggling over technical details is guided by a shared
sense of the problem being solved.)

This topic has lacked that core energy, which I think has produced too
much theory and not enough pragmatics. You folk would make a natural
source of the energy, but it takes time and... energy.

This doesn't mean you have to create the technical specification but it
means you have to engage in the discussions trying to do that and
provide feedback about functional and operational goals. What does it
really need to do and what constitutes acceptable administration and
user activities to achieve it?

If a specification is developed and published, you folk would be one of
the earliest and most important adopters. That makes it natural that
you should provide guidance about real utility during its development.


d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net

Jothan Frakes

Jun 11, 2019, 2:30:31 PM
to Dave Crocker, Peter Thomassen, psl-discuss, John Levine
Hi-

This comes up from time to time, and has across the past decade. I'm not saying it is a bad idea, but there's a lot to consider, and the institutional background on the prior discussions is not always present each time it is raised. I don't want to prejudice or bias the dialog, because there MIGHT be some advantages for SOME of the use-cases out there.

The argument exists that DNS moved a static list called HOSTS.TXT to a distributed and more efficient lookup model back in the 1980s, and the static-list model used in the PSL, or other static lists like it, carries some of the architectural pros and cons of a system that 'evolved' a number of decades ago.

It seems that there may have been a number of benefits of a local file that were overlooked at the time, and which may be hard to leverage DNS for.

Where we have hit a wall in the past with something like this is where we're making assumptions about how everyone is using the list.

There is a segment of the user/integrator base, and their use-cases, for which moving away from the static file to DNS makes sense.

There is also a significant segment of that base for which it is absolutely not going to work. An example (and not the only one) would be "omnibox" browser implementations that use local logic to determine whether DNS gets consulted at all or the user's entry gets sent to search instead. To DNS-ify the list lookups would increase DNS traffic and introduce latency. That is not a good thing, as browsers compete on how responsive they are for page loads.
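
To make the latency point concrete, here is a deliberately simplified sketch (plain Python, ignoring the wildcard "*." and exception "!" rules the real algorithm handles) of today's hot path: with the list parsed into memory, finding the registrable domain is a handful of set lookups and no network traffic. A DNS-based equivalent would put one or more round trips on that same path.

def registrable_domain(hostname, suffixes):
    """Longest-suffix match against an in-memory set of public suffixes (simplified)."""
    labels = hostname.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in suffixes:
            # The registrable domain is the matched suffix plus one more label.
            return ".".join(labels[i - 1:]) if i > 0 else None
    return None

# e.g. registrable_domain("www.example.co.uk", {"uk", "co.uk"}) -> "example.co.uk"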

That is just one example of X many, and we don't know X without an expedition.  

Unfortunately, without such an expedition, many of the IETF proposals end up being solutions in search of a problem. Maybe better stated: solutions coming from the perspective of the problem set self-identified by the proposing expert(s), absent knowledge of the full set of X, at least until such time as we determine a more comprehensive universe of use-cases to solve for X.

The "X-pedition" (sorry, Marvel mutant nerd here, couldn't resist) challenge is that there is not a global list of users or use cases to allow those who might seek to come up with a standard within the IETF process to propose something that would/could suffice.  Unfortunately there exists no means to poll people who implement/use PSL via libraries (or directly) into their code, and there's a thinly stretched layer of volunteers that MUST focus on the core _stuff_.

I've invested time in dialog with a number of those proposing solutions in search of problems, attempting to define a large segment of the use cases, but these efforts will still fail to define "X" entirely, and we're still in a place where there would need to be a large expedition effort to gain more awareness of how the PSL is being used by everyone. There is no PSL login or user list. It is freely given to the world by the volunteers and the great people at Mozilla who help cover the CDN costs.

Assuming we had a means of communicating with all the users and implementers/integrators (other than perhaps fiddling with the #ICANN section header comment or adding another one, which may break some people's code), and assuming some poll was assembled and a polling period run in which we might get answers, would we still have a full list of use-cases? Likely not. We have made changes to the headers or structure of the file over the years and have rolled them back due to furious feedback from the community over the code breakage those changes caused.

But let's, for a moment, assume we might get 100% of the use-cases, so that we could potentially replace the current PSL process with a DNS-based solution that may in fact be a viable replacement. We already know of a few use-cases for which it wouldn't work, and it is statistically unlikely we would get all the use-cases reported to us, but let's ignore those factors for this hypothetical scenario and assume we receive a solution from the IETF process that works for all of them.

Once that expensive expedition concluded, and setting aside for the sake of argument whether we would know the result to be a viable solution, there is still the matter of the labor, time and architecture, as well as the costs involved in upgrading the systems to support it.

We have a lot of very burdened volunteers that are investing night and weekend personal hours in managing the current setup - with no spare cycles to feed into a process like those described above, which has the allure of stepping in front of a fast moving bus.

As a rule of thumb, we as volunteers want to support a robust system, but we skip the expedition-type activities out of a forced pragmatism that stems from our available cycles being dedicated to upkeep of this important internet resource.

We also apply that pragmatism as a full stop on DNS solutions for the time being, because we know at least that X>1, so the expedition amounts to disposable labor. The expedition's level of effort (LOE) also exceeds the current LOE of basic (and growing) upkeep, and the "upgrade" LOE likewise exceeds the current LOE. And knowing X>1 means there would need to be parallel evolution and backward compatibility, which expands the scope.

Some of us have invested time (as has been pointed out) in IETF process(es) that have not yet provided a viable path forward (John's most recent update to DBOUND is a little promising), and we have volunteered more of our time in being as inclusive as possible about the known use cases being considered in the solution proposals.

Each iteration of these "let's DNS this" extracts volunteer time, which is something that we all try to approach with care.

John Levine made a recent re-proposal to DBOUND, and, as Dave mentioned, he has recently proposed something as well. There are some areas where these could deliver some benefits using TXT records in DNS, but largely areas where the maintainers could pull self-identified records from authoritative zones via the DNS and incorporate them into the static PSL list. What I just described does not exist and would need to be developed - again, more volunteer-time extraction, which makes it a likely non-starter.
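
For the sake of discussion, the maintainer-side piece might be shaped something like the sketch below. To be clear: this tooling does not exist, and the "_psl-declare" record name and "is-public-suffix" token are invented for illustration. Even in this form, humans would still need to review the resulting diff.

import dns.exception
import dns.resolver

def declared_as_public_suffix(domain):
    """Check a hypothetical self-declaration TXT record in the domain's authoritative zone."""
    try:
        answer = dns.resolver.resolve(f"_psl-declare.{domain}", "TXT", lifetime=5.0)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        return False
    return any(b"is-public-suffix" in b"".join(r.strings) for r in answer)

def refresh_candidates(candidates, current_list):
    """Fold DNS-declared candidates into the static list, pending human review."""
    return set(current_list) | {d for d in candidates if declared_as_public_suffix(d)}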

If only those updates and changes would appear magically in a cornfield. Or, more realistically, if there were a patron who would cover the costs and resourcing of the expedition, as well as the additional labor, people, time, etc. of implementing those changes or the surrounding expeditions, it could potentially be a different outcome.

In the meantime, we try to limit the number of frisbees that we chase.

-Jothan


Dave Crocker

Jun 11, 2019, 5:04:09 PM
to Jothan Frakes, Peter Thomassen, psl-discuss, John Levine
On 6/11/2019 11:30 AM, Jothan Frakes wrote:
> We have a lot of very burdened volunteers that are investing night and
> weekend personal hours in managing the current setup - with no spare
> cycles to feed into a process like those described above, which has the
> allure of stepping in front of a fast moving bus.


I'll make two brief comments on just this very narrow, very basic concern:

One is that any involvement by this community in an effort to create a
DNS-based mechanism needs to be extremely attentive to your time. Of course.

The other is that any operation that is running in this kind of overload
is not sustainable for the long term, no matter how long it has been
done. This isn't about dedication. A situation of such overload only
happens when the volunteers are massively dedicated. It's about human
limits. People burn out. Then they go away. Then the work doesn't get
done.

So, now I'll add a more general comment:

I see the main goals of using the DNS for the PSL as permitting more
timely information, with less effort by the core team.

The key to doing this is to have the DNS-based raw data maintained
by the individual folk who own each domain name. That requires:

* being very clear about the required semantics

* being practical about who creates and maintains the entries

* defining a method of finding entries that is operationally
acceptable (see the sketch below).
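
To make that last item concrete, here is a rough sketch of the naive discovery method -- walk up the name and query at each level. The "_boundary" record name is invented purely for illustration. It shows why operational acceptability is the hard part: a single hostname can trigger several queries, each with its own latency and failure modes.

import dns.exception
import dns.resolver

def find_boundary_records(hostname):
    """Query a hypothetical _boundary TXT record at each ancestor of hostname."""
    labels = hostname.rstrip(".").split(".")
    found = {}
    for i in range(len(labels) - 1):  # stop before the bare TLD
        zone = ".".join(labels[i:])
        try:
            answer = dns.resolver.resolve(f"_boundary.{zone}", "TXT", lifetime=2.0)
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
            continue
        found[zone] = [b"".join(r.strings).decode("ascii") for r in answer]
    return found

# find_boundary_records("www.example.co.uk") queries _boundary at
# www.example.co.uk, example.co.uk, and co.uk -- three lookups for one name.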

Jothan Frakes

Jun 11, 2019, 6:29:24 PM
to Dave Crocker, Peter Thomassen, psl-discuss, John Levine
I look forward to where the IETF drafts take things. Each of them has elements that could introduce some benefit to the process, but few spare cycles exist in the PSL pool of volunteers to really aid in moving them forward.

The overload itself is not as much of an issue as it might sound. What often happens is that things which need a lot of work to become practical do get some attention, but they need effort from the volunteers to evolve them to where they MIGHT be helpful.

Not sure whether list participants are generally familiar with the story of 'Stone Soup', but it's about someone putting a kettle of boiling water in the middle of a village, dropping a stone into it, and then stirring it in the hopes that villagers will leap in to add the appropriate ingredients that give it taste or nutritional value as a meal. The fact that volunteers don't come and start adding ingredients to the stone and water that someone stirs in the center of the village is not, by itself, a sign of overload.

There likely exist a number of places where DNS could improve certain use cases. It could also break others. A great amount of care is taken not to break stuff, along with pragmatism about where time gets invested to the best benefit. Each time the "help" comes, it requires village ingredients, a.k.a. volunteer hours, that were not otherwise a requirement.

I think an aversion to stone-soup kinds of things, or not having cycles for that stuff, is common to a lot of organizations and volunteer groups. Labeling that as overload is not really fair or appropriate.

Pivoting from the stone soup to the frisbee-throwing example... The challenge in even tossing the DNS frisbee here is that there's no target other than the desire to throw the frisbee, or a particular type of frisbee with acronyms that begin with a giant letter "i" on it. The target of the frisbee should be defined in a manner that serves the existing problem space the PSL serves a need for. The problem-space definition right now seems to be 'It is a problem that this resource is not in DNS', and that's not going to work. IF someone takes the time to identify the universe of use cases, and then determines IF they're solved by use of DNS, that is an entirely different direction to take this.

It IS the case that DNS is being used for SOME things. For example, in order to increase the security and authority validation of a patch, DNS currently contributes a validation stage in the self-update mechanism at GitHub, connecting the authority of the submitting party's change with the resource they are editing.
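
Roughly, the shape of that check is: the requester publishes a TXT record at _psl.<domain> that points at the pull request, and tooling verifies the record and the PR agree before the change lands. The sketch below illustrates the concept rather than the actual implementation; the record name and contents reflect my understanding of the convention, not a spec citation.

import dns.exception
import dns.resolver

def pr_is_authorized(domain, pr_url):
    """Check that the domain publishes a _psl TXT record referencing the pull request."""
    try:
        answer = dns.resolver.resolve(f"_psl.{domain}", "TXT", lifetime=5.0)
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout):
        return False
    return any(pr_url in b"".join(r.strings).decode("ascii") for r in answer)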

A future project that incorporated sourcing components of record updates from data held within the DNS would require development and testing, which need resourcing.

I agree that clear semantics, authority, and a method of detection (and its acceptance) are important should that path begin, and I believe that the IETF could be a potential source for those definitions. OR those might come from the community, or other sources. Wherever those definitions get vetted and produced, it still creates a need to resource those discussions and then the subsequent changes.

The resource for those changes, at the current point in time, would be the volunteers. No need to point out that this doesn't scale, and it also gets exacerbated by the frequency of discussions about changes.

I am but one voice here, and I hope my email was not suggesting "no soup for you" to anyone.  Just bring more ingredients to the village.
-j