Dust: A Blocking-Resistant Internet Transport Protocol (a new anti-protocol-detection technique proposed by researchers)


Max贝立.NoGFW审查

Aug 29, 2013, 9:53:38 AM
to tahrir-de...@googlegroups.com
Dust: A Blocking-Resistant Internet Transport Protocol (a new anti-protocol-detection technique proposed by researchers)

Brandon Wiley

School of Information, University of Texas at Austin

1616 Guadalupe #5.202


Austin, TX 78701-1213

Abstract. Censorship of information on the Internet has been an increasing problem as the methods have become more sophisticated and increasing resources have been allocated to censor more content. A number of approaches to counteract Internet censorship have been implemented, from censorship-resistant publishing systems to anonymizing proxies. A prerequisite for these systems to function against real attackers is that they also offer blocking resistance. Dust is proposed as a blocking-resistant Internet protocol designed to be used alone or in conjunction with existing systems to resist a number of attacks currently in active use to censor Internet communication. Unlike previous work in censorship resistance, it does not seek to provide anonymity in terms of unlinkability of sender and receiver. Instead it provides blocking resistance against the most common packet filtering techniques currently in use to impose Internet censorship.

Keywords: censorship resistance, blocking resistance


1   Introduction

Censorship of information on the Internet has been implemented using increasingly sophisticated techniques. Shallow packet filtering, which can be circumvented by anonymizing proxies, has been replaced by deep packet inspection technology which can filter out specific Internet protocols. This has resulted in censorship-resistant services being entirely blocked or partially blocked through bandwidth throttling. Traditional approaches to censorship resistance are not effective unless they also incorporate blocking resistance so that users can communicate with the censorship circumvention services.

Dust is an Internet protocol designed to resist a number of attacks currently in active use to censor Internet communication. Dust uses a novel technique for establishing a secure, blocking-resistant channel for communication over a filtered channel. Once a channel has been established, Dust packets are indistinguishable from random packets and so cannot be filtered by normal techniques. Unlike other encrypted protocols such as SSL/TLS, there is no plaintext handshake which would allow the protocol to be fingerprinted and therefore blocked or throttled. This solves a principal weakness of current censorship-resistant systems, which are vulnerable to deep packet inspection filtering attacks.
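The distinguishability gap described above can be sketched in a few lines. The two-byte check below is an invented, simplified stand-in for a real DPI signature (a TLS record does begin with content type 0x16 and a 0x03 version byte, but real filters use richer rules); the point is that a fully-encrypted, Dust-style packet offers no fixed bytes for any such rule to match.

```python
import os
import secrets

def looks_like_tls_handshake(packet: bytes) -> bool:
    # Static-string test: a TLS record begins with content type 0x16
    # (handshake) followed by the 0x03 protocol-version byte.
    return len(packet) >= 2 and packet[0] == 0x16 and packet[1] == 0x03

# A TLS ClientHello carries a recognizable plaintext record header
# in front of its (here randomized, illustrative) body.
tls_client_hello = bytes([0x16, 0x03, 0x01]) + os.urandom(64)

# A Dust-style packet is ciphertext only: every byte looks uniformly
# random, so there is no fixed marker for a static-string filter.
dust_packet = secrets.token_bytes(67)

print(looks_like_tls_handshake(tls_client_hello))  # True: fingerprinted
```

A filter applying this test to the Dust-style packet matches only by chance (here about 1 in 65,536), which is why fingerprinting must fall back on side channels such as length and timing, discussed below.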

1.1   Problem

Traditionally, Internet traffic has been filtered using “shallow packet inspection” (SPI). With SPI, only packet headers are examined. Since packet headers must be examined anyway in order to route the packets, this form of filtering has minimal impact on the scalability of the filtering process, allowing for its widespread use. The primary means of determining “bad” packets with SPI is to compare the source and destination IP addresses and ports to IP and port blacklists. The blacklists must be updated as new target IPs and ports are discovered. Circumvention technology, such as anonymous proxies, bypasses this filtering by providing new IPs and ports not in the blacklist which proxy connections to blacklisted IPs. As the IPs of proxies are discovered, they are added to the blacklist, so a fresh set of proxy IPs must be made available and communicated to users periodically. As port blacklists are used to block certain protocols, such as BitTorrent, regardless of IP, clients use port randomization to find ports which are not on the blacklist.
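The SPI decision procedure above amounts to a pair of set lookups on header fields. A minimal sketch, with blacklists invented purely for illustration:

```python
# Minimal sketch of shallow packet inspection (SPI): only the header
# fields already needed for routing are consulted. The blacklist
# entries below are illustrative, not real.

ip_blacklist = {"203.0.113.7"}   # e.g. IPs of discovered proxies
port_blacklist = {6881}          # e.g. a default BitTorrent port

def spi_allows(src_ip, dst_ip, src_port, dst_port):
    if src_ip in ip_blacklist or dst_ip in ip_blacklist:
        return False
    if src_port in port_blacklist or dst_port in port_blacklist:
        return False
    return True

# A blacklisted destination IP or port is dropped; port randomization
# evades the port test by picking a port outside the blacklist.
print(spi_allows("198.51.100.2", "203.0.113.7", 40000, 443))   # False
print(spi_allows("198.51.100.2", "198.51.100.9", 40000, 6881)) # False
print(spi_allows("198.51.100.2", "198.51.100.9", 40001, 443))  # True
```

This also makes the arms race visible: each evasion (a fresh proxy IP, a randomized port) only changes an input to the lookup, so the censor responds by growing the sets.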

Recently, “deep packet inspection” (DPI) techniques have been deployed which can successfully block or throttle most censorship circumvention solutions [14]. DPI filters packets by examining the packet payload. DPI can achieve suitable scalability through random sampling of packets. Another technique in use is to initially send packets through, but also send them to a background process for analysis. When a bad packet is found, further packets in that packet stream can be blocked, or the IPs of participants added to the blacklist. The primary tests that DPI filters apply to packets are packet length comparison and static string matching, although timing-based fingerprints are also possible. DPI can not only filter content, but also fingerprint and filter specific protocols, even encrypted protocols such as SSL/TLS. Encrypted protocols are vulnerable to fingerprinting based on packet length, timing, and static string matching of the unencrypted handshake that precedes encrypted communication. For instance, SSL/TLS uses an unencrypted handshake for cipher negotiation and key exchange.
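The two primary DPI tests named above, static string matching and packet length comparison, can be combined into one illustrative classifier. The signatures and the length set here are assumptions chosen for the example (the BitTorrent handshake really does contain the string "BitTorrent protocol"; the fixed length is invented):

```python
# Illustrative DPI classifier: flag a payload if it contains a known
# protocol signature or has a suspicious fixed length. Rule values are
# examples, not a real filter's rule set.

signatures = [
    b"\x16\x03",                # TLS handshake record prefix
    b"BitTorrent protocol",     # BitTorrent handshake string
]
suspicious_lengths = {68}       # hypothetical fixed handshake size

def dpi_flags(payload: bytes) -> bool:
    if any(sig in payload for sig in signatures):
        return True             # static string match
    if len(payload) in suspicious_lengths:
        return True             # packet length comparison
    return False

print(dpi_flags(b"\x13BitTorrent protocol" + bytes(48)))  # True
print(dpi_flags(b"hello"))                                # False
```

Against such a filter, removing plaintext markers defeats only the first test, which is why Dust must also avoid a recognizable length or timing profile.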

The goal of Dust is to provide a transport protocol which cannot be filtered with DPI. To accomplish this goal it must not be vulnerable to fingerprinting using static string matching, packet length comparison, or timing profiling. Other attacks such as IP address matching and coercion of operators are outside of the scope and are best addressed by use of existing systems such as anonymizing proxies and censorship-resistant publishing systems running on top of a Dust transport layer.

2   Related Work

Censorship resistance is often discussed in connection with other related concepts such as anonymity, unlinkability, and unobservability. These terms are sometimes used interchangeably and sometimes assumed to have specific technical definitions. Pfitzmann proposed a standardized terminology that defines and relates these terms [13]. Unlinkability is defined as the indistinguishability of two objects within an anonymity set. Anonymity is defined as unlinkability between a given object and a known object of interest. Unobservability is defined as unlinkability of a given object and a randomly chosen object.

Defining properties such as anonymity and unobservability in terms of unlinkability opens the way for an information-theoretic approach. Hevia offers such an approach by defining levels of anonymity in terms of what information is leaked from the system to the attacker [8]. Unlinkability requires the least protection, hiding only the message contents. Unobservability requires that no information is leaked whatsoever. Of particular interest is that an anonymous system of any type can be taken up to the next level of anonymity by adding one of two system design primitives: encryption and cover traffic.


--
Max贝立

Please help retweet the Weibo post below, thanks:
#nogfw I believe we can rally a million people online against GFW censorship: please respond by emailing nogfw+s...@googlegroups.com; as the appeals accumulate, groups.google.com/group/nogfw will tally the count automatically. #GFW, we are not pleased! Please RT; success is at your fingertips.