Idea: DNS-over-HTTPS transport


David Fifield

Mar 21, 2018, 7:31:06 PM
to traff...@googlegroups.com
https://en.wikipedia.org/wiki/DNS_over_HTTPS
https://github.com/curl/curl/wiki/DNS-over-HTTPS
https://datatracker.ietf.org/doc/draft-ietf-doh-dns-over-https/?include_text=1

DNS over HTTPS allows you to communicate with a DNS server over HTTPS,
rather than UDP or TCP. How it works is you make a GET or POST request
that contains an encoding of your query. The server sends back an
encoding of the DNS response. For example, you as a client do either:
GET /query?ct&dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB HTTP/1.1
Accept: application/dns-udpwireformat
or
POST /query
Accept: application/dns-udpwireformat
Content-Type: application/dns-udpwireformat
Content-Length: 33

\x00\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x03www\x07example\x03com\x00\x00\x01\x00\x01
and the server replies with
HTTP/1.1 200 OK
Content-Type: application/dns-udpwireformat
Content-Length: 64

\x00\x00\x81\x80\x00\x01\x00\x01\x00\x00\x00\x00\x03www\x07example\x03com
\x00\x00\x01\x00\x01\x03www\x07example\x03com\x00\x00\x01\x00\x01\x00\x00
\x00\x80\x00\x04\xc0\x00\x02\x01
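
Here's a minimal Python sketch of the GET form, just to make the encoding
concrete (the resolver URL is a placeholder, not a real endpoint):

import base64
import urllib.request

# The same 33-byte query for www.example.com (type A) as above.
query = (b"\x00\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
         b"\x03www\x07example\x03com\x00\x00\x01\x00\x01")
# dns= is the unpadded base64url encoding of the raw query; this
# reproduces the AAABAAAB... string in the GET example above.
dns = base64.urlsafe_b64encode(query).rstrip(b"=").decode()
# Placeholder URL; substitute a real DNS-over-HTTPS endpoint.
req = urllib.request.Request(
    "https://doh.example/query?ct&dns=" + dns,
    headers={"Accept": "application/dns-udpwireformat"})
print(urllib.request.urlopen(req).read().hex())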

The circumvention idea is to take any existing DNS tunneling
scheme and send it through DNS over HTTPS. To be a bit more specific:
you send recursive DNS queries (encoding your upstream traffic) to the
DNS-over-HTTPS server, which then forwards the queries to another
specialized server that decodes them and proxies the data they contain.

Advantages:
* A large centralized DNS-over-HTTPS server is effectively your proxy;
blocking it results in high collateral damage.
* Concerns about the detection of DNS-based tunnels mostly don't
apply, because everything is encrypted.
* No out-of-band distribution needed, just hardcode a few domain names.
Disadvantages:
* High overhead (even with a persistent HTTPS connection, there's an
HTTP header for each DNS query and response).
* Potentially subject to rate limits by the DNS server(?). Sending
tons of uncacheable queries may not be so nice to the service.
* Even though encrypted, traffic volume may be unusual(?).
* Still need to worry about TLS fingerprint.

Even if not a general-purpose transport, DNS-over-HTTPS could be an
ideal rendezvous mechanism for a system like Snowflake or Moat. One
where you only need to send/receive a small amount of very hard-to-block
data in order to bootstrap a connection.

It looks like DNS-over-HTTPS is progressing towards deployment. So it
may soon be that such traffic is common. Google and Cloudflare are
running servers. Firefox was talking about doing a Nightly experiment of
having some users use the Cloudflare server.
https://github.com/curl/curl/wiki/DNS-over-HTTPS#servers
https://groups.google.com/forum/#!topic/mozilla.dev.platform/_8OAKUHso0c

In addition to the IETF draft protocol, Google runs a JSON-based API.
https://developers.google.com/speed/public-dns/docs/dns-over-https#api_specification
https://dns.google.com/query?name=example&type=A&dnssec=true

It seems like it would be pretty easy to hack together a prototype of a
DNS-over-HTTPS tunnel using, say, dnscat2 or iodine.
https://github.com/iagox86/dnscat2
https://github.com/yarrick/iodine
https://trac.torproject.org/projects/tor/wiki/doc/DnsPluggableTransport
https://bugs.torproject.org/15213

(Because a lot of blocking is based only on DNS, /me wonders if browsers
deploying DNS-over-HTTPS would suddenly unblock a lot of sites, at least
temporarily, even without any special circumvention effort.)

Martin Johnson

Mar 22, 2018, 3:08:24 AM
to Network Traffic Obfuscation
Thanks for sharing, this looks very interesting.

I'm especially interested in the collateral aspects. "A large centralized DNS-over-HTTPS server is effectively your proxy; blocking it results in high collateral damage." - the main advantage of such a service would be to evade censorship, and with any significant number of users China would block it right away. I don't see any collateral damage. Even if this ended up replacing the current DNS model as a new standard, China would just run its own domestic DNS services and block the foreign, uncensored ones.

Google has no collateral value in China. Cloudflare looks very interesting though. However, all the Cloudflare DNS-over-HTTPS links I found are broken, and I can't find anything searching for it either. If Cloudflare or any other major CDN offered a service where encrypted DNS lookups could be sent through any of their IP addresses, that could be very useful.

Vinicius Fortuna [vee-NEE-see.oos]

Mar 22, 2018, 1:50:23 PM
to mar...@greatfire.org, Network Traffic Obfuscation
You can see it from a different perspective: every web server could become a DNS provider. Then it becomes a lot harder to block :-)

There are also performance benefits down the line (not part of the standard). With HTTP/2 you may be able to push DNS records to the client if you know they'll be needed. For example, a web search results page could also send DNS records for each result. Ideally DNSSEC-signed.



Justin Henck

Mar 22, 2018, 2:23:24 PM
to Network Traffic Obfuscation
> Cloudflare looks very interesting though. However, all the Cloudflare DNS-over-HTTPS links I found are broken, and I can't find anything searching for it either.

It's possible that the Cloudflare endpoint returns a 404 without the correct encoding.  A few people started putting together client implementations as part of the IETF hackathon:


Martin Johnson

Mar 23, 2018, 2:22:22 AM
to Network Traffic Obfuscation
That sounds pretty great. Then you could just choose a service with high collateral value and use it as your DNS service.

Martin Johnson

Mar 23, 2018, 2:23:25 AM
to Network Traffic Obfuscation
I created an issue at https://github.com/IETF-Hackathon/ietf101-project-presentations/issues/3. Hope to get Cloudflare to work ASAP.

David Fifield

Mar 23, 2018, 3:09:32 PM
to Network Traffic Obfuscation
On Thu, Mar 22, 2018 at 12:08:24AM -0700, Martin Johnson wrote:
> I'm especially interested in the collateral aspects. "A large centralized
> DNS-over-HTTPS server is effectively your proxy; blocking it results in high
> collateral damage." - the main advantage of such a service would be to evade
> censorship, and with any significant number of users China would block it right
> away. I don't see any collateral damage. Even if this ended up replacing the
> current DNS model as a new standard, China would just run its own domestic DNS
> services and block the foreign, uncensored ones.

You have a point. Let me explain more what I mean. It depends on how the
technology develops. There's no collateral damage if DNS over HTTPS is
something that individual users have to configure manually. But it's
different if DNS over HTTPS becomes a default for a significant fraction
of users of a certain browser (like Firefox was planning to test, as an
experiment)--a default that requires technical knowledge to disable--or
if it becomes a default for Android phones, or something like that. A
censor could try to block the DNS over HTTPS servers, and require all
users to reconfigure their DNS settings--but the fact that censors have
not yet succeeded in compelling people to install a MITM root CA
suggests that social steps like that may not be so easy to achieve.

I'm aware that conditions in China are different from those in other places, and
the logic does not apply as strongly. The GFW could ask UCBrowser to use
domestic DNS servers by default, for example, and that would undermine
any defaults of other programs.

Having large, centralized DNS servers used by many people could be an
advantage for circumvention, but it could be bad for privacy for other
reasons. It's not good for one group to have so much visibility of name
lookups. Again it depends on how it develops--who knows but DNS over
HTTPS may be a net negative overall for security and privacy, even if it
provides confidentiality against some adversaries.

Justin Henck

Mar 23, 2018, 3:15:10 PM
to traff...@googlegroups.com
> Again it depends on how it develops--who knows but DNS over
> HTTPS may be a net negative overall for security and privacy, even if it
> provides confidentiality against some adversaries.

This is one of the reasons some people are interested in pushing any DNSSEC-signed records referenced by the current resource. It is both performant and exceptionally private.



Adam Fisk

Mar 23, 2018, 6:39:17 PM
to Justin Henck, traff...@googlegroups.com
Dumb question: don’t browsers typically respect the DNS servers set at the OS level? Is Firefox considering overriding that, or does it possibly do so already?

--
President
Brave New Software Project, Inc.
https://www.getlantern.org
A998 2B6E EF1C 373E 723F A813 045D A255 901A FD89

Tom Ritter

Mar 24, 2018, 9:46:53 AM
to Adam Fisk, Justin Henck, Network Traffic Obfuscation
Yes. We're planning an experiment on Nightly that will override it.

Tom Ritter

Mar 24, 2018, 9:48:22 AM
to Adam Fisk, Justin Henck, Network Traffic Obfuscation
Well actually, we won't override it - we'll only use the data from the OS DNS. But in the experiment we'll also query a DNS-over-HTTPS server. Actual deployment of this is dependent on a lot of things.

Cecylia Bocovich

Apr 1, 2018, 3:34:41 PM
to traff...@googlegroups.com
Thanks, this is really interesting! I'm curious about how Snowflake and
Moat are typically used. Could a censor also perform some sort of rate
limiting on connections that move a suspiciously high volume of
traffic through the DNS-over-HTTPS server?

Also, in addition to the privacy concern, how easily could a centralized
server be compromised by the policies of the jurisdiction it resides in?
We're dealing with the FairPlay stuff in Canada right now and it's cause
for worry.

David Fifield

Apr 1, 2018, 6:05:53 PM
to traff...@googlegroups.com
On Wed, Mar 21, 2018 at 04:31:00PM -0700, David Fifield wrote:
> It looks like DNS-over-HTTPS is progressing towards deployment. So it
> may soon be that such traffic is common. Google and Cloudflare are
> running servers. Firefox was talking about doing a Nightly experiment of
> having some users use the Cloudflare server.
> https://github.com/curl/curl/wiki/DNS-over-HTTPS#servers
> https://groups.google.com/forum/#!topic/mozilla.dev.platform/_8OAKUHso0c
>
> In addition to the IETF draft protocol, Google runs a JSON-based API.
> https://developers.google.com/speed/public-dns/docs/dns-over-https#api_specification
> https://dns.google.com/query?name=example&type=A&dnssec=true

Cloudflare's just-announced 1.1.1.1 public DNS supports DNS over HTTPS.
https://developers.cloudflare.com/1.1.1.1/dns-over-https/wireformat/
https://developers.cloudflare.com/1.1.1.1/dns-over-https/json-format/

Example (using application/dns-json):
curl 'https://1.1.1.1/dns-query?ct=application/dns-json&name=example.com&type=A'

Also DNS over TLS (RFC 7858):
https://developers.cloudflare.com/1.1.1.1/dns-over-tls/

David Fifield

Apr 1, 2018, 8:04:01 PM
to traff...@googlegroups.com
On Sun, Apr 01, 2018 at 03:34:34PM -0400, Cecylia Bocovich wrote:
> Thanks, this is really interesting! I'm curious about how Snowflake and
> Moat are typically used, could a censor also perform some sort of rate
> limiting for connections that are moving a suspiciously high volume of
> traffic though the DNS-over-HTTPS server?

Snowflake rendezvous and Moat aren't high-volume. Snowflake just needs
to send/receive one message at the start to rendezvous; after that, bulk
traffic goes over WebRTC. Currently we use a domain-fronted HTTP request
for rendezvous, but a DNS request would work as well (in flash proxy we
also used email). Moat is an interface to Tor BridgeDB to make it easier
for users to discover a non-default obfs4 bridge. That will be a little
more awkward because Moat relies on a tunneled TLS connection to the
BridgeDB server, but it's still probably within the number of DNS
requests of a typical web page.

https://bugs.torproject.org/25594
https://trac.torproject.org/24689

> Also, in addition to the privacy concern, how easily could a centralized
> server be compromised by the policies of the jurisdiction it resides in?
> We're dealing with the fairplay stuff in Canada right now and it's cause
> for worry.

I don't know. Easily, I guess? I would say it's prudent to assume that
the DNS-over-HTTPS provider knowingly or unknowingly has a PRISM tap or
similar installed and that they log everything. There needs to be
end-to-end security between the client and the proxy server (which is
the upstream DNS server from the DNS-over-HTTPS server's point of view)
so that at least you don't reveal more than metadata. Unfortunately the
DNS-over-HTTPS provider is still in a position to make a list of all
circumventors' IP addresses and do timing correlation if the eventual
destination is the same entity. (E.g. if you use Cloudflare DNS over
HTTPS as a tunnel to reach a Cloudflare-hosted site.)

David Fifield

May 15, 2018, 5:03:54 PM
to traff...@googlegroups.com
I've been thinking more about a DNS-over-HTTPS transport. My thinking is
to try it out first as a Snowflake rendezvous, and then see if it works
as a general transport. To that end, I did a little survey of existing
DNS tunnels (dnscat2, TUNS, OzymanDNS, iodine) in order to understand
the design parameters.

https://trac.torproject.org/projects/tor/wiki/doc/DnsPluggableTransport/Survey

In the upstream direction, there is not much flexibility: you only get
one DNS name (<255 bytes), so you cram as much data as you can into it
using hexadecimal or base32 encoding. Downstream, you have a choice of
resource record types with differing carrying capacity: I don't see a
reason to use any type other than TXT, which offers plenty of capacity
and easy encoding. With EDNS, you can easily get about 4 KB of payload
in a single response. (But even with EDNS, query payloads are still
limited to about 140 bytes.)

Of the ones I looked at, I think I like the dnscat2 protocol the best,
for its simplicity, full-duplex nature, and the fact that it deals with
reliability and retransmission (iodine and TUNS operate at a lower level
and rely on the kernel to provide reliability). I would use base32
rather than hexadecimal (25% more dense). I think OzymanDNS makes the
right call in only supporting TXT records. TUNS is quite clean, though I
would want it to be full duplex. iodine strikes me as needlessly
complex.


----

Survey of techniques to encode data in DNS messages.

Summary
* [[#dnscat2]]
  * Uses CNAME, MX, TXT (randomly selected) by default. Also supports A and AAAA.
  * Uses hexadecimal encoding everywhere.
  * Full duplex: responses to data can themselves have data.
  * Doesn't use the Additional section in responses.
  * Doesn't use multiple strings in TXT responses (maximum length < 255 bytes).
  * Doesn't use EDNS.
* [[#TUNS]]
  * Uses CNAME exclusively.
  * Encodes with base32, using `'0'` as a padding character instead of `'='`.
  * Half duplex: client either sends a `d` data-carrying query and gets back an empty `l` length response; or sends an empty `r` request and gets back a `d` data-carrying response.
  * `l` responses encode the number of server-queued messages.
  * Doesn't use EDNS.
* [[#OzymanDNS]]
  * Uses TXT exclusively.
  * Encodes with base32 in queries and base64 in responses.
  * Half duplex: each query–response pair either sends a piece of data (`up`) or receives a piece of data (`down`), not both.
  * Supports long TXT records (multiple 255-byte-long strings concatenated in one resource record). Seems hardcoded to use exactly two TXT strings in a record.
  * Responses encode the number of server-queued messages as the first byte of a `pending` A record in the Additional section.
  * Doesn't use EDNS in my tests, though maybe a different version does.
* [[#iodine]]
  * Supports A, CNAME, MX, SRV, TXT, NULL, and PRIVATE resource records. Does autodetection of which are supported on the DNS path.
  * Supports a number of encodings (also autodetected): base32, base64, base64u, base128.
  * Full duplex: server can send data in response to data; client can also solicit data without sending anything, using a `p` ping packet.
  * Supports EDNS for long replies.
  * Complicated.


= General observations =

DNS tunnels work either at the transport layer (netcat-like) or at the network layer (VPN-like). The network-layer ones have the implementation advantage that they don't have to worry about reliability, because retransmissions etc. are handled by lower-level kernel procedures.

Upstream queries are more squeezed for bandwidth than downstream responses. Although the protocol [https://tools.ietf.org/html/rfc1035#section-4.1.1 syntactically permits] sending any number of entries in the Question section, in practice the only supported value is QDCOUNT=1.
* https://stackoverflow.com/questions/4082081/requesting-a-and-aaaa-records-in-single-dns-query#4083071
* https://groups.google.com/forum/?hl=en#!topic/comp.protocols.dns.bind/uOWxNkm7AVg
* http://maradns.samiam.org/multiple.qdcount.html

A single DNS name has a maximum length of 255 bytes ([https://tools.ietf.org/html/rfc1035#section-2.3.4 RFC 1035 §2.3.4]). However, a name is composed of labels, each of which is a maximum of 63 bytes (or 64 bytes if including the length prefix). And the name's null terminator byte is counted in the limit as well. So it's really only 250 usable bytes. And from that you have to subtract the length of the name suffix that the DNS server is actually authoritative for.

Technically, [https://tools.ietf.org/html/rfc2181#section-11 any 8-bit value] can be part of a name label. However RFC 1035 [https://tools.ietf.org/html/rfc1035#section-2.3.1 recommends] `[A-Za-z][A-Za-z0-9-]*` for compatibility. Additionally, you can't rely on uppercase/lowercase being preserved ([https://developers.google.com/speed/public-dns/docs/security?hl=en#randomize_case Google Public DNS, among others, may randomize letter case]). So you have to encode somehow; base32 or hexadecimal suffices but better efficiency may be possible.

Supposing base32 encoding and 20 bytes for the length of the server's domain name suffix, the most raw data you can send in a single query is about 140 bytes. ''(Even if upstream queries may contain only one entry in the Question section, it may be possible to stuff extra data into the other sections. EDNS queries have an [https://tools.ietf.org/html/rfc6891#section-6 OPT resource record] in the Additional section—but that one only has hop-by-hop meaning, and isn't forwarded by a recursive server to an authoritative server. But perhaps some possibilities exist.)''
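
For a sanity check on that estimate, the arithmetic (assuming the 253-character presentation limit, 63-character labels, and the 20-byte suffix from above):

{{{#!python
def query_capacity(suffix_len=20, name_max=253, label_max=63):
    """Rough upper bound on raw bytes encodable in one base32 query name."""
    chars = name_max - suffix_len        # room left for the data labels
    full = chars // (label_max + 1)      # each full label costs a '.' too
    data = full * label_max
    rem = chars - full * (label_max + 1)
    if rem > 1:
        data += rem - 1                  # partial final label, minus its dot
    return data * 5 // 8                 # base32 carries 5 bits per character

print(query_capacity())                  # 143, i.e. about 140 bytes
}}}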

It's easier to pack data into responses than into queries. Responses have to echo the original question, but they can contain multiple resource records for that query in the Answer section, not to mention the Additional section. You can have multiple resource records of the same or differing types, but the order of resource records within a section is not guaranteed. Resource records can be of any type ([https://tools.ietf.org/html/rfc1035#section-3.4.1 A], [https://tools.ietf.org/html/rfc3596#section-2 AAAA], [https://tools.ietf.org/html/rfc1035#section-3.3.9 MX], [https://tools.ietf.org/html/rfc1035#section-3.3.1 CNAME], etc.). A resource record has a maximum data length of 65535 bytes, but most of them have a fixed format that's shorter than that; e.g. A is 4 bytes and CNAME is up to 255 bytes. The [https://tools.ietf.org/html/rfc1035#section-3.3.14 TXT] type seems practical: it's a byte sequence of any length up to the 64K limit. (The [https://github.com/iagox86/dnscat2/blob/master/doc/protocol.md#dns-record-type dnscat2 docs] say that the Windows DNS client can't handle `'\0'` bytes in TXT records, but that shouldn't be a problem if you use a custom client.) The [https://tools.ietf.org/html/rfc1035#section-3.3.10 NULL] type is marginally more efficient than TXT, but is marked "experimental".

The overall length of a DNS response message cannot be more than 65535 bytes. This holds whether using UDP with EDNS (the [https://tools.ietf.org/html/rfc6891#section-6.1.2 UDP payload size] field is 16 bits, and anyway that's the largest IP datagram size); or TCP (messages are preceded by a [https://tools.ietf.org/html/rfc1035#section-4.2.2 16-bit length field]). In the absence of EDNS, UDP responses are supposed to be [https://tools.ietf.org/html/rfc1035#section-4.2.1 limited to 512 bytes]. But as EDNS has good support, the [https://en.wikipedia.org/wiki/Maximum_transmission_unit MTU] is likely a bigger consideration than any DNS-imposed length limitations. A common value for the EDNS length field is 4096.
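
As a sketch of how a server might pack tunneled data into a TXT answer (this uses the dnspython library; the name and payload are just illustrative):

{{{#!python
import dns.message
import dns.rrset

# Pretend the client asked for a TXT record at an encoded name.
query = dns.message.make_query("a1b2c3.t.example.com", "TXT")
resp = dns.message.make_response(query)
# One TXT resource record holding two <=255-byte strings, per the
# observations above.
resp.answer.append(dns.rrset.from_text(
    "a1b2c3.t.example.com.", 0, "IN", "TXT",
    '"first-chunk-up-to-255-bytes" "second-chunk"'))
print(resp.to_wire().hex())
}}}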


= dnscat2 =

[https://github.com/iagox86/dnscat2/blob/master/doc/protocol.md dnscat2] is a DNS tunnel. It works at the socket layer. Despite its name, it doesn't provide a simple netcat-like interface; it overlays a lot of pentesting-oriented functionality like session management and high-level commands like "send a file". But the underlying transport protocol is separable from all that.

* Uses hexadecimal encoding everywhere (even in TXT records).
* Uses CNAME, MX, TXT (randomly selected) by default. Also supports A and AAAA (use `type=A,AAAA` with the `--dns` option).
* Full duplex: responses to data can themselves have data.
* Prepends a 2-byte `packet_id` to each query to prevent caching.
* Doesn't use the Additional section in responses.
* Only sends back one RR in the Answer section, except for A (max 64) or AAAA (max 16). Prepends a sequence byte to each A and AAAA record.
* Doesn't use multiple strings in TXT responses (maximum length < 255 bytes).
* Implements TCP-like reliability and virtual connections with 16-bit session IDs, SYN/FIN messages, and SEQ/ACK numbers.
* Doesn't use EDNS.

MESSAGE_TYPE_SYN::
  packet_id u16
  message_type u8 (= 0x00)
  session_id u16
  initial sequence number u16
  options u16
  session name null-terminated string

MESSAGE_TYPE_MSG::
  packet_id u16
  message_type u8 (= 0x01)
  session_id u16
  seq u16
  ack u16
  data []u8

MESSAGE_TYPE_FIN::
  packet_id u16
  message_type u8 (= 0x02)
  session_id u16
  reason null-terminated string
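
The framing is plain big-endian struct packing. A minimal sketch of MESSAGE_TYPE_MSG, which reproduces one of the idle-exchange queries in the sample conversation below:

{{{#!python
import struct

def encode_msg(packet_id, session_id, seq, ack, data=b""):
    # packet_id u16 | type u8 (0x01) | session_id u16 | seq u16 | ack u16 | data
    return struct.pack(">HBHHH", packet_id, 0x01, session_id, seq, ack) + data

# Matches the idle MESSAGE_TYPE_MSG below (hex-encoded into the query name):
print(encode_msg(0xf458, 0x48ac, 0x9037, 0x7643).hex())  # f4580148ac90377643
}}}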

== dnscat2 sample conversation ==

https://trac.torproject.org/projects/tor/attachment/wiki/doc/DnsPluggableTransport/Survey/dnscat2.pcapng

Starts with a MESSAGE_TYPE_SYN exchange. The session ID is 48ac. The client and server choose initial sequence numbers of 9037 and 7643 respectively. The client additionally sends a session name of !command (happy).

Transaction ID: 0xfe47
Flags: 0x0100 Standard query
Queries
26b80048ac90370021636f6d6d616e64202868617070792900.example.com: type TXT, class IN

> Transaction ID: 0xfe47
> Flags: 0x8180 Standard query response, No error
> Queries
> 26b80048ac90370021636f6d6d616e64202868617070792900.example.com: type TXT, class IN
> Answers
> 26b80048ac90370021636f6d6d616e64202868617070792900.example.com: type TXT, class IN
> TXT: 3da50048ac76430000

Client and server exchange empty MESSAGE_TYPE_MSG packets while the connection is idle. (Notice SEQ and ACK don't change.)

Transaction ID: 0xc641
Flags: 0x0100 Standard query
Queries
f4580148ac90377643.example.com: type TXT, class IN

> Transaction ID: 0xc641
> Flags: 0x8180 Standard query response, No error
> Queries
> f4580148ac90377643.example.com: type TXT, class IN
> Answers
> f4580148ac90377643.example.com: type TXT, class IN
> TXT: 41e80148ac76439037

Now the server has 17 bytes to send. (It happens to be a command to send the contents of the file "dnscat.c", but the details aren't important.) The client has randomly chosen the CNAME response record type, so the server has to format its data as a domain name.

Transaction ID: 0xf33d
Flags: 0x0100 Standard query
Queries
ea430148ac90377643.example.com: type CNAME, class IN

> Transaction ID: 0xf33d
> Flags: 0x8180 Standard query response, No error
> Queries
> ea430148ac90377643.example.com: type CNAME, class IN
> Answers
> ea430148ac90377643.example.com: type CNAME, class IN
> CNAME: 84000148ac764390370000000d00010003646e736361742e6300.example.com

The client sends 101 bytes. Notice the client's ACK has advanced 17 bytes from 7643 to 7654. The data is long enough that the client has to use multiple labels in the DNS name. The server's response advances the ACK from 9037 to 909c.

Transaction ID: 0x6272
Flags: 0x0100 Standard query
Queries
11d50148ac903776540000461c800100032f2a20646e736361742e630a20.2a204372656\
1746564204d617263682f323031330a202a20427920526f6e.20426f7765730a202a0a20\
2a20536565204c4943454e53452e6d640a202a.2f0a23696e636c756465203c617373657\
2742e68.example.com: type TXT, class IN

> Transaction ID: 0x6272
> Flags: 0x8180 Standard query response, No error
> Queries
> 11d50148ac903776540000461c800100032f2a20646e736361742e630a20.2a2043\
> 726561746564204d617263682f323031330a202a20427920526f6e.20426f776573\
> 0a202a0a202a20536565204c4943454e53452e6d640a202a.2f0a23696e636c7564\
> 65203c6173736572742e68.example.com: type TXT, class IN
> Answers
> 11d50148ac903776540000461c800100032f2a20646e736361742e630a20.2a2043\
> 726561746564204d617263682f323031330a202a20427920526f6e.20426f776573\
> 0a202a0a202a20536565204c4943454e53452e6d640a202a.2f0a23696e636c7564\
> 65203c6173736572742e68.example.com: type TXT, class IN
> TXT: 018b0148ac7654909c


= TUNS =

[https://members.loria.fr/LNussbaum/tuns.html TUNS] is an IP-layer (VPN-like) tunnel described in the paper [https://members.loria.fr/LNussbaum/files/tuns-sec09-article.pdf "On Robust Covert Channels Inside DNS"] by Lucas Nussbaum, Pierre Neyron and Olivier Richard. Compared to other tunnels, TUNS aims for better compatibility with existing servers, at the expense of efficiency. (The paper claims that characters such as `'_'` and `'/'`, as used by [[#iodine]] and dns2tcp, are not standards-compliant, but that's not quite true: they are just outside of the maximum-compatibility set recommended by RFC 1035. The authors acknowledge this in the "Future Work" section.)

* Uses CNAME exclusively.
* Encodes with base32, using `'0'` as a padding character instead of `'='` (see the sketch after this list).
* Half duplex: client either sends a `d` data-carrying query and gets back an empty `l` length response; or sends an empty `r` request and gets back a `d` data-carrying response.
* `l` responses encode the number of server-queued messages.
* Fakes a low MTU (140 bytes) in order to avoid having to split IP datagrams across multiple DNS messages.
* Eschews EDNS.
* Caches responses so that duplicated queries get identical responses.
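
The `'0'`-padded base32 is a one-line tweak to a standard encoder; a sketch:

{{{#!python
import base64

def tuns_b32(data: bytes) -> str:
    # Standard base32, but with '0' standing in for the usual '=' padding.
    return base64.b32encode(data).decode().replace("=", "0")

print(tuns_b32(b"hi"))  # NBUQ0000
}}}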

== TUNS sample conversation ==

Taken from [https://members.loria.fr/LNussbaum/files/tuns-sec09-article.pdf#page6 Fig. 2] of the paper.

* message type: `d` for data, `l` for server queue length; `r` for data request
* request counter, prevents caching
* size of server queue
* data

The client sends `d` data. The server responds with an `l` indicating that it has 4 responses in its queue.

Flags: 0x0100 Standard query
Queries
dIUAAAVAAABAAAQABJ5K4BKBVAHAKQNICBAAAOS5TD4ASKPSQIJEM7VABAAEASC.MRTGQ2TMNY0.example.com: type CNAME, class IN

> Flags: 0x8180 Standard query response, No error
> Queries
> dIUAAAVAAABAAAQABJ5K4BKBVAHAKQNICBAAAOS5TD4ASKPSQIJEM7VABAAEASC.MRTGQ2TMNY0.example.com: type CNAME, class IN
> Answers
> dIUAAAVAAABAAAQABJ5K4BKBVAHAKQNICBAAAOS5TD4ASKPSQIJEM7VABAAEASC.MRTGQ2TMNY0.example.com: type CNAME, class IN
> CNAME: l4.example.com

The client sends `r` data requests to drain the server's queue. The server sends back `d` data responses.

Flags: 0x0100 Standard query
Queries
r882.example.com: type CNAME, class IN

> Flags: 0x8180 Standard query response, No error
> Queries
> r882.example.com: type CNAME, class IN
> Answers
> r882.example.com: type CNAME, class IN
> CNAME: dIUAAAVCWIUAAAQABHVCY2DMO2HQ7EAQSEIZEEUTCOKBJFIVSYLJOF4YDC.MRTGQ2TMNY0.example.com

When the server's queue is empty, it sends back the pseudo-data `zero`. (Distinguishable from actual base32 data by its lack of padding, I suppose?)

Flags: 0x0100 Standard query
Queries
r993.example.com: type CNAME, class IN

> Flags: 0x8180 Standard query response, No error
> Queries
> r993.example.com: type CNAME, class IN
> Answers
> r993.example.com: type CNAME, class IN
> CNAME: dzero.example.com


= OzymanDNS =

[https://dankaminsky.com/2004/07/29/51/ OzymanDNS] is a suite of DNS tools. droute.pl (client) and nomde.pl (server) implement a tunnel. It implements more of a netcat-like interface, but the code is old and crufty and doesn't work out of the box. Described in the talk [https://events.ccc.de/congress/2004/fahrplan/files/297-black-ops-of-dns-slides.pdf "Black Ops of DNS"] from 2004 (pages 12–17).

* Uses TXT exclusively.
* Uses base32 (without padding) in queries; base64 (padded, with newline breaks) in responses.
* Supports long TXT records (multiple 255-byte-long strings concatenated in one resource record). Seems hardcoded to use exactly two TXT strings in a record.
* For serializing messages, allows only one query or response in flight ([https://www.bamsoftware.com/papers/fronting/#sec:deploy-tor like meek]).
* Sort-of has SEQ and ACK fields, but they seem advisory and not really used.
* Half duplex: each query–response pair either sends a piece of data (`up`) or receives a piece of data (`down`), not both.
* Responses encode the number of server-queued messages as the first byte of a `pending` A record in the Additional section.
* Doesn't use EDNS in my tests, though [https://events.ccc.de/congress/2004/fahrplan/files/297-black-ops-of-dns-slides.pdf#page7 the slides] say Hi Bandwidth Payloads using EDNS are "Under Development". The [[#TUNS]] paper claims OzymanDNS uses EDNS.

== OzymanDNS sample conversation ==

https://trac.torproject.org/projects/tor/attachment/wiki/doc/DnsPluggableTransport/Survey/ozymandns.pcap

The nomde.pl server supports more than just DNS tunneling, so you have to scope your tunnel requests under a distinguished subdomain (here `myservice`).

* function (`up` or `down`)
* nonce, prevents caching
* session ID
* number of bytes sent (like a SEQ field, only in the client—though it is bugged and always remains 0)
* number of bytes received (like an ACK field, only in the client)
* size of server queue
* data
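
Putting the fields together, an upstream query name can be rebuilt like this (the payload of the 18-byte `up` query below decodes to `hello from client\n`; the 60-character label split is a guess from the traces):

{{{#!python
import base64

def up_query(data, nonce, sent, session, domain="myservice.example.com"):
    # Unpadded, lowercased base32, split into labels (60 characters per
    # label is a guess from the traces; anything <= 63 is legal).
    enc = base64.b32encode(data).decode().lower().rstrip("=")
    labels = [enc[i:i+60] for i in range(0, len(enc), 60)]
    return ".".join(labels) + ".%d-%d.id-%d.up.%s" % (nonce, sent, session, domain)

# Reproduces the 18-byte upstream query below:
print(up_query(b"hello from client\n", 12010, 0, 35765))
# nbswy3dpebthe33nebrwy2lfnz2au.12010-0.id-35765.up.myservice.example.com
}}}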

The session is using session ID 35765. Here the client has sent no data and the server has sent back 12 bytes in response. The first byte 0 of the `pending` A record indicates that the server has no further data queued at the moment.

Transaction ID: 0x5aa8
Flags: 0x0100 Standard query
Queries
0-27778.id-35765.down.myservice.example.com: type TXT, class IN

> Transaction ID: 0x5aa8
> Flags: 0x8500 Standard query response, No error
> Queries
> 0-27778.id-35765.down.myservice.example.com: type TXT, class IN
> Answers
> 0-27778.id-35765.down.myservice.example.com: type TXT, class IN
> TXT: aGVsbG8gd29ybGQK\n
> TXT:
> Additional records
> pending.0-27778.id-35765.down.myservice.example.com: type A, class IN
> Address: 0.0.0.0

Now the client has 18 bytes to send. It encodes them into an A query. The first byte `18` of the server's answer just echoes the number of bytes received.

Transaction ID: 0x9d9a
Flags: 0x0100 Standard query
Queries
nbswy3dpebthe33nebrwy2lfnz2au.12010-0.id-35765.up.myservice.example.com: type A, class IN

> Transaction ID: 0x9d9a
> Flags: 0x8500 Standard query response, No error
> Queries
> nbswy3dpebthe33nebrwy2lfnz2au.12010-0.id-35765.up.myservice.example.com: type A, class IN
> Answers
> nbswy3dpebthe33nebrwy2lfnz2au.12010-0.id-35765.up.myservice.example.com: type A, class IN
> Address: 18.0.0.0

Now the client wants to send 273 bytes, which it splits up into 110+110+53.

Transaction ID: 0xa5d2
Flags: 0x0100 Standard query
Queries
pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.pj5hu6t2pj5h\
u6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.19517-0.id-35765.up.my\
service.example.com: type A, class IN

> Transaction ID: 0xa5d2
> Flags: 0x8500 Standard query response, No error
> Queries
> pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
> 5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.pj5hu6t2pj5h\
> u6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.19517-0.id-35765.up.my\
> service.example.com: type A, class IN
> Answers
> pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
> 5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.pj5hu6t2pj5h\
> u6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.19517-0.id-35765.up.my\
> service.example.com: type A, class IN
> Address: 110.0.0.0

Transaction ID: 0x067f
Flags: 0x0100 Standard query
Queries
pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.pj5hu6t2pj5h\
u6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.15796-0.id-35765.up.my\
service.example.com: type A, class IN

> Transaction ID: 0x067f
> Flags: 0x8500 Standard query response, No error
> Queries
> pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
> 5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.pj5hu6t2pj5h\
> u6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.15796-0.id-35765.up.my\
> service.example.com: type A, class IN
> Answers
> pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
> 5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.pj5hu6t2pj5h\
> u6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2.15796-0.id-35765.up.my\
> service.example.com: type A, class IN
> Address: 110.0.0.0

Transaction ID: 0x663f
Flags: 0x0100 Standard query
Queries
pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj5h\
u6t2pj5hu6t2pj5au.34064-0.id-35765.up.myservice.example.com: type A, class IN

> Transaction ID: 0x663f
> Flags: 0x8500 Standard query response, No error
> Queries
> pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
> 5hu6t2pj5hu6t2pj5au.34064-0.id-35765.up.myservice.example.com: type A, class IN
> Answers
> pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5hu6t2pj5h.u6t2pj\
> 5hu6t2pj5hu6t2pj5au.34064-0.id-35765.up.myservice.example.com: type A, class IN
> Address: 53.0.0.0

Now the server has 298 bytes to send, which it splits up as 220+78. After the first send, the client increments its count of received bytes from 12 to 232 and the server indicates that it has 1 further response queued.

Transaction ID: 0x6afa
Flags: 0x0100 Standard query
Queries
12-63032.id-35765.down.myservice.example.com: type TXT, class IN

> Transaction ID: 0x6afa
> Flags: 0x8500 Standard query response, No error
> Queries
> 12-63032.id-35765.down.myservice.example.com: type TXT, class IN
> Answers
> 12-63032.id-35765.down.myservice.example.com: type TXT, class IN
> TXT: eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4\n\
> eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHg=\n
> TXT: eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4\n\
> eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHg=\n
> Additional records
> pending.12-63032.id-35765.down.myservice.example.com: type A, class IN
> Address: 1.0.0.0

Transaction ID: 0xa06c
Flags: 0x0100 Standard query
Queries
232-7106.id-35765.down.myservice.example.com: type TXT, class IN

> Transaction ID: 0xa06c
> Flags: 0x8500 Standard query response, No error
> Queries
> 232-7106.id-35765.down.myservice.example.com: type TXT, class IN
> Answers
> 232-7106.id-35765.down.myservice.example.com: type TXT, class IN
> TXT: eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4eHh4\n\
> eHh4eHh4eHh4eHh4eHh4eHh4eHgK\n
> TXT:
> Additional records
> pending.232-7106.id-35765.down.myservice.example.com: type A, class IN
> Address: 0.0.0.0


= iodine =

[https://code.kryo.se/iodine/ iodine] is a network-layer DNS tunnel.
It has a lot of features, like password authentication, compression, and auto-probing of features like server case sensitivity and the MTU.

[https://code.kryo.se/iodine/README.html README]
[https://github.com/yarrick/iodine/blob/master/doc/proto_00000502.txt Protocol documentation]

* Supports all of A, CNAME, MX, SRV, TXT, NULL, and PRIVATE resource records. NULL is like TXT but with slightly less overhead. PRIVATE uses a code in the [https://tools.ietf.org/html/rfc6895#section-3.1 private-use range], relying on other servers [https://tools.ietf.org/html/rfc3597#section-3 not to understand it and to leave it alone].
* Queries can be encoded with any of the following (autodetected; a sketch of the base32 variant follows this list):
  * base32 (but a mutant base32 with a different alphabet: `abcdefghijklmnopqrstuvwxyz012345`)
  * base64 (again mutant with a different order and `'-'` and `'+'` in place of `'+'` and `'/'`) if the server is case-preserving
  * base64u (same as base64 but with `'_'` in place of `'+'`)
  * base128 (uses alphabet `a-zA-Z0-9\xbc-\xfd`) if the DNS path preserves case and the high bit
* Full duplex: server can send data in response to data; client can also solicit data without sending anything, using a `p` ping packet.
* Supports EDNS for long replies.
* Caches responses so that duplicated queries get identical responses.
* Complicated.
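
If the mutant base32 is a position-for-position relabeling of the standard alphabet (which the listing above suggests, but I haven't verified against the source), it is a simple translation:

{{{#!python
import base64

STD = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"   # RFC 4648 base32 alphabet
IOD = "abcdefghijklmnopqrstuvwxyz012345"   # alphabet from the list above

def iodine_b32(data: bytes) -> str:
    # Assumes iodine's alphabet maps position-for-position onto the
    # standard one; padding is dropped.
    enc = base64.b32encode(data).decode().rstrip("=")
    return enc.translate(str.maketrans(STD, IOD))

print(iodine_b32(b"ping"))  # obuw2zy
}}}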

== iodine sample conversation ==

{{{
#!comment
iodined -f -l 192.168.0.64 -P password 10.0.0.1 example.com
iodine -f -r -P password -I 50 192.168.0.64 example.com
ping 10.0.0.1
}}}

[[attachment:iodine.pcap]]

The iodine trace is more complicated than I want to walk through fully.

The first byte of queries indicates the type: `v` means version, `l` is login, `s` is switch codec, etc. Hex digits `0-9a-f` represent a user ID and indicate an actual data packet (which means there can only be 16 simultaneous users of the tunnel).

Here is a sample data transaction. The client and server have negotiated the base128 codec and the NULL record type. The client with user ID `0` sends a data packet and the server responds with no data (just a two-byte data header `\xa0\x20`). Note the client signals support for EDNS in the Additional section.

Transaction ID: 0x5607
Flags: 0x0100 Standard query
Queries
0iabb82\xca2hb\xbe\xeeY\xd6bp\xc9\xcdNb\xde\xdeQFzWUdb\xc2\xe0dB\
\xc6\xdeS\xdbO\xe3\xdagdqP\xf9\xcc\xefGiHLoy\xdeLaWfcHW.\xc2Qtl\
\xc2s\xc7u\xd8Jv\xc7\xe3\xf0fI\xc7S\xcfQ\xd2\xe8\xf3g\xd1L\xd4\
\xe6\xf369\xbc\xbe\xe6YBmM\xd1z\xda\xdei\xc98B\xf9.example.com: type NULL, class IN
Additional records
<Root>: type OPT
UDP payload size: 4096
Higher bits in extended RCODE: 0x00
EDNS0 version: 0
Z: 0x8000

> Transaction ID: 0x5607
> Flags: 0x8400 Standard query response, No error
> Queries
> 0iabb82\xca2hb\xbe\xeeY\xd6bp\xc9\xcdNb\xde\xdeQFzWUdb\xc2\xe0dB\
> \xc6\xdeS\xdbO\xe3\xdagdqP\xf9\xcc\xefGiHLoy\xdeLaWfcHW.\xc2Qtl\
> \xc2s\xc7u\xd8Jv\xc7\xe3\xf0fI\xc7S\xcfQ\xd2\xe8\xf3g\xd1L\xd4\
> \xe6\xf369\xbc\xbe\xe6YBmM\xd1z\xda\xdei\xc98B\xf9.example.com: type NULL, class IN
> Answers
> 0iabb82\xca2hb\xbe\xeeY\xd6bp\xc9\xcdNb\xde\xdeQFzWUdb\xc2\xe0dB\
> \xc6\xdeS\xdbO\xe3\xdagdqP\xf9\xcc\xefGiHLoy\xdeLaWfcHW.\xc2Qtl\
> \xc2s\xc7u\xd8Jv\xc7\xe3\xf0fI\xc7S\xcfQ\xd2\xe8\xf3g\xd1L\xd4\
> \xe6\xf369\xbc\xbe\xe6YBmM\xd1z\xda\xdei\xc98B\xf9.example.com: type NULL, class IN
> Null (data): \xa0\x20

Here the client sends a `p` ping message. The server responds with a blob of unencoded data.

Transaction ID: 0x7436
Flags: 0x0100 Standard query
Queries
paaifhjy.example.com: type NULL, class IN
Additional records
<Root>: type OPT
UDP payload size: 4096
Higher bits in extended RCODE: 0x00
EDNS0 version: 0
Z: 0x8000

> Transaction ID: 0x7436
> Flags: 0x8400 Standard query response, No error
> Queries
> paaifhjy.example.com: type NULL, class IN
> Answers
> paaifhjy.example.com: type NULL, class IN
> Null (data): \xb0Ax\xdac`\xe0`pe`\x089\xbf\x8f\x81\xc1\x81q\xda\
> \x0b.\x06\x06F fb`\xe8\xba\xa6-\xc0\xc0\xc4%\xf7;\
> \x8a\x01\x08\x9aO3\x81(\x06\x01A!a\x11Q1q\tI)i\x19\
> Y9y\x05E%e\x15U5u\rM-m\x1d]=}\x03C#c\x13S3s\x00\
> \x84\xa9\r\xfb


= Others =

* [http://thomer.com/howtos/nstx.html NSTX] (predecessor of [[#iodine]]?)
* According to the [[#TUNS]] paper, uses TXT and base64 (`[A-Za-z0-9_-]`).
* [https://tools.kali.org/maintaining-access/dns2tcp dns2tcp]
* According to the [[#TUNS]] paper, uses TXT and base64 (`[A-Za-z0-9/-]`).
* [http://tadek.pietraszek.org/projects/DNScat/index.html DNScat] by Tadek Pietraszek – ''different'' than the similarly named [https://wiki.skullsecurity.org/Dnscat dnscat] by Ron Bowes that was the predecessor of [[#dnscat2]]

Tom Ritter

May 15, 2018, 5:51:16 PM
to Network Traffic Obfuscation
Do you have any observations about what sorts of speeds one can
achieve with these? The uplink seems so limited that I immediately
started imagining optimization tricks, but then realized they'd all be
very limited when you're sending TLS data instead of HTTP.

-tom

David Fifield

May 15, 2018, 6:43:29 PM
to Network Traffic Obfuscation
On Tue, May 15, 2018 at 04:50:53PM -0500, Tom Ritter wrote:
> Do you have any observations about what sorts of speeds one can
> achieve with these? The uplink seems so limited that I immediately
> started imagining optimization tricks; but then realized they'd all be
> very limited when you're sending TLS data instead of HTTP.

The TUNS paper has some figures that show 50–200 Kbps up/down at 50 ms
RTT; 15–80 Kbps at 200 ms RTT.
https://members.loria.fr/LNussbaum/files/tuns-sec09-article.pdf#page=8

The iodine README has a section on performance that reports 50–350 Kbps
over the Internet.
https://code.kryo.se/iodine/README.html

I'm not too concerned about speed at this point. DNS is certainly
sufficient for rendezvous (which is not all that different from what DNS
ordinarily does). Maybe it will be fast enough for general transport, or
maybe not. I suspect that the polling nature of DNS and dealing with
packet loss will matter as much as or more than the limited upload
capacity.

Some quick estimations. At the DNS layer, you can represent ~140 bytes
in a ~280-byte DNS message, for about 50% efficiency. Tack on a fixed
HTTP or HTTP/2 header per message, call it 100–300 additional bytes,
gives an efficiency of 25–35% per message. (I haven't actually tried to
measure header size.) Add in a little bit of TLS record overhead. So not
terribly efficient, but perhaps workable. You can pipeline your queries
(because you have to do reassembly/reordering anyway) and that's really
efficient over HTTP/2.
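
For concreteness, the arithmetic (the header sizes are the guesses above):

payload, dns_msg = 140, 280
for http_hdr in (100, 300):
    print(http_hdr, "->", "%.0f%%" % (100.0 * payload / (dns_msg + http_hdr)))
# 100 -> 37%, 300 -> 24%, i.e. roughly 25-35% before TLS record overhead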

Tom Ritter

May 16, 2018, 12:30:47 PM
to Network Traffic Obfuscation
On 15 May 2018 at 17:43, David Fifield <da...@bamsoftware.com> wrote:
> On Tue, May 15, 2018 at 04:50:53PM -0500, Tom Ritter wrote:
>> Do you have any observations about what sorts of speeds one can
>> achieve with these? The uplink seems so limited that I immediately
>> started imagining optimization tricks; but then realized they'd all be
>> very limited when you're sending TLS data instead of HTTP.
>
> The TUNS paper has some figures that show 50–200 Kbps up/down at 50 ms
> RTT; 15–80 Kbps at 200 ms RTT.
> https://members.loria.fr/LNussbaum/files/tuns-sec09-article.pdf#page=8
>
> The iodine README has a section on performance that reports 50–350 Kbps
> over the Internet.
> https://code.kryo.se/iodine/README.html
>
> I'm not too concerned about speed at this point. DNS is certainly
> sufficient for rendezvous (which is not all that different from what DNS
> ordinarily does).

Ah okay; I was imagining this as a full PT; not just for rendezvous.
For rendezvous that's definitely sufficient.

-tom