Having a paper translated from Chinese to English ("Towards Measuring Unobservability in Anonymous Communication Systems" 2015)

David Fifield

Apr 4, 2019, 3:17:48 PM4/4/19
to traff...@googlegroups.com
In 2015, Tan Qingfeng et al. published a paper on quantifying the
unobservability of circumvention systems. It was only available in
Chinese, and at the time I read a machine translation.
http://crad.ict.ac.cn/EN/10.7544/issn1000-1239.2015.20150562

Lately I've been interested in translation and so, as an experiment, I
paid to have the article professionally translated. Here is the result:
https://www.bamsoftware.com/papers-of-others/Tan2015a/Tan2015a.en.html

The translation seems good for a specialized technical topic, according
to bilingual people I've asked. I had no idea how to select a
translation service, except that I've heard there are a lot of
low-quality ones online, so I picked one with a local office. They
quoted me a price of $0.18 per word, plus a 10% project fee, which for
the estimated 3,000 words came out to about $600. After we signed the
paperwork, they got back to me and said that they underestimated the
word count and it should have been more than double what it was, but
they honored the original quoted price. It only took one week (proposal
approved March 6, translation delivered March 13).

What's on your wishlist of papers to have translated? For me it includes:
* http://cdmd.cnki.com.cn/Article/CDMD-10004-1016120870.htm
Master's thesis, "Research and Realization of Tor Anonymous
Communication Identification Method Based on Meek"
* https://archive.org/details/CN109286576A
Patent application, something about using ML to detect Shadowsocks
and SSH.

Below is my summary of the Tan Qingfeng et al. paper.

----

The paper proposes a method of quantifying the unobservability of a
circumvention system that imitates a target protocol. Conceptually, they
look at how much the circumvention protocol diverges from an ideal
implementation of the target protocol, compared to how much an actual
implementation of the target protocol diverges from the ideal.

The paper gives a lot of definitions for an abstract computational model
of how a network protocol runs, but doesn't use the model in any
essential way. The important part is that the output of running a
protocol in a certain way (which they call a "state") is a sequence of N
packets. From each packet they extract M features, resulting in an N×M
feature vector. The feature vectors are used to compute divergence.
Concretely, they look at the 6 states
S = {C2S Handshake, C2S Idle, C2S Data Transmission,
S2C Handshake, S2C Idle, S2C Data Transmission}
and the M=2 features
{packet length, packet timing interval}.
They don't specify what N is, or whether it is the same across all
states.
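To make the feature extraction concrete, here is a minimal sketch in Python. The trace format and all names are my assumptions, not the paper's: each packet contributes its length and its interval since the previous packet, giving N rows of M=2 features.

```python
# Sketch of the paper's per-state feature extraction: M=2 features per
# packet (packet length, inter-packet timing interval). The trace
# format and names here are my own assumptions, not the paper's.

def feature_matrix(trace):
    """trace: list of (timestamp, length) pairs, one per packet.
    Returns N rows of (length, interval), i.e. an N x 2 matrix."""
    rows = []
    prev_t = None
    for t, length in trace:
        interval = 0.0 if prev_t is None else t - prev_t
        rows.append((length, interval))
        prev_t = t
    return rows

# Toy trace: four packets of a hypothetical C2S Handshake state.
trace = [(0.00, 517), (0.03, 1460), (0.05, 1460), (0.31, 146)]
m = feature_matrix(trace)
```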

They use the notational conventions:
p: measured distribution
q: ideal distribution
τ: target protocol
π: circumvention protocol that mimics τ
and so define probability distributions over feature vectors:
q_τ: ideal distribution of target protocol
p_τ: measured distribution of target protocol
p_π: measured distribution of circumvention protocol
The basic measure of divergence is:
D(p_π, q_τ) − D(p_τ, q_τ)
which you can understand as:
how much the circumvention protocol differs from the target protocol
minus
how much the target protocol differs from itself
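The divergence function D isn't pinned down in this summary, so purely as an illustration, assuming D is KL divergence over binned packet lengths (bins, sample data, and the probability floor are all my choices), the basic measure could be sketched as:

```python
import math
from collections import Counter

def kl(p, q):
    """KL divergence D(p || q); p and q are dicts over the same bins."""
    return sum(pi * math.log(pi / q[b]) for b, pi in p.items() if pi > 0)

def dist(samples, bins):
    """Histogram samples into bins (each sample <= max(bins)), with a
    tiny floor so every bin has nonzero probability."""
    c = Counter(min(b for b in bins if s <= b) for s in samples)
    n = len(samples)
    return {b: (c.get(b, 0) + 1e-6) / (n + 1e-6 * len(bins)) for b in bins}

bins = [100, 600, 1500]  # crude packet-length bins, my choice
q_tau = dist([90, 520, 1460, 1460, 540], bins)  # "ideal" target protocol
p_tau = dist([80, 510, 1460, 1460, 530], bins)  # measured target protocol
p_pi  = dist([200, 200, 580, 580, 580], bins)   # measured circumvention

# D(p_pi, q_tau) - D(p_tau, q_tau): positive when the circumvention
# protocol diverges from the ideal more than the target itself does.
score = kl(p_pi, q_tau) - kl(p_tau, q_tau)
```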

I don't really understand the reasoning behind the formulas they use to
get an overall unobservability score. It seems ad hoc, without a
theoretical motivation. To get the overall divergence of a protocol,
they average the divergence of each of its states (C2S Handshake,
S2C Handshake, etc.) (Eq. (4)). They then divide that average by the
maximum divergence of any single state (Eq. (5)), which means the
outcome depends on how you slice up the state space. The unobservability
score d is supposed to lie within [0, 1] but technically it can be
negative—perhaps the difference of divergence should be taken in
absolute value. In all tests, the target protocol τ is Firefox speaking
HTTPS, which makes sense when they evaluate meek, but not when they
evaluate the vanilla Tor protocol. How do they get q_τ, the "ideal"
distribution of Firefox HTTPS? They don't say, but I suspect they run
Firefox twice, and call one distribution "ideal" and one "measured."

The summary unobservability scores lead to the conclusion that meek is
more unobservable than the vanilla Tor protocol, but less unobservable
than Firefox HTTPS itself. I managed to reproduce their final scores
using the data in Table 1 and Table 2—except that 1 of the 4 values was
a little off. The way to do it is, for example, to take the 12 values in
the Meek–Windows rows in both tables (that's your D(p_π, q_τ)) and
compare them to the 12 values in the HTTPS–Windows rows (that's
D(p_τ, q_τ)). Example in R:
> D_p_π <- c(3.56, 1.20, 18.27, 18.14, 10.35, 5.19, 17.57, 4.76, 20.91, 20.20, 19.89, 10.86)
> D_p_τ <- c(1.67, 2.05, 2.56, 4.64, 1.96, 0.77, 0, 0, 0, 0, 0, 0.77)
> mean(D_p_π - D_p_τ) / max(D_p_π)
[1] 0.5439184
then repeat for Meek–Ubuntu, Bridge–Windows, and Bridge–Ubuntu. Here are
the paper's numbers, and the ones that I get:
Meek–Windows 0.54 0.5439184
Meek–Ubuntu 0.36 0.342749 * not equal
Bridge–Windows 0.72 0.7160888
Bridge–Ubuntu 0.53 0.5324601

klzgrad

Jun 5, 2019, 7:58:35 AM6/5/19
to traff...@googlegroups.com
Summary of CN109286576A

The patent improves feature engineering for website fingerprinting
classifiers. This is finer-grained than detecting Shadowsocks against
background traffic: the goal is to infer, for example, which URLs are
visited over Shadowsocks traffic.

Scenario: Each web page generates several requests, and each request
forms a flow.

Step 1: Remove uninformative tuples of (direction, packet length, TCP
flags) by a tf-idf threshold 0.00001.

Consider web pages as documents and the tuples as words, compute the
tf-idf metric for each word, and remove the words that appear in too
many documents.
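A sketch of what step 1 might look like, treating pages as documents and tuples as words. The 0.00001 threshold is from the patent; everything else, including the exact tf-idf formula, is my assumption:

```python
import math
from collections import Counter

def tfidf_filter(pages, threshold=0.00001):
    """pages: one list of (direction, length, flags) tuples per web page.
    Returns the set of tuples ("words") kept by the tf-idf threshold."""
    n_docs = len(pages)
    df = Counter()                       # document frequency per word
    for page in pages:
        df.update(set(page))
    kept = set()
    for page in pages:
        tf = Counter(page)
        for word, count in tf.items():
            tfidf = (count / len(page)) * math.log(n_docs / df[word])
            if tfidf >= threshold:
                kept.add(word)
    return kept

# A word that appears in every document gets idf = log(1) = 0 and is
# dropped, which matches "remove the words that appear in too many
# documents".
pages = [[('up', 100, 'S'), ('down', 1460, 'A')],
         [('up', 100, 'S'), ('down', 640, 'A')]]
kept = tfidf_filter(pages)
```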

Step 2: Generate clusters from tuples of (first L7 packet length in
flow, time to first L7 byte in flow).

Do k-Means on Euclidean distance for the number of clusters from 2 to
3*number of web pages.

Step 3: Determine the optimal number of clusters with the minimum
residual error.
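Steps 2 and 3 could be sketched like this, with a toy k-means in plain NumPy (data, seed, and iteration count are my choices). Note that the summed residual error normally only decreases as k grows, so taking the literal minimum tends to select the largest k; the patent presumably intends something like an elbow criterion.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Toy k-means on Euclidean distance; returns (centroids, labels,
    summed squared residual error)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    # Final assignment against the converged centroids.
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    residual = float(((points - centroids[labels]) ** 2).sum())
    return centroids, labels, residual

# One (first L7 packet length, time to first L7 byte) point per flow.
flows = np.array([[200., 0.010], [210., 0.012], [1400., 0.20], [1420., 0.21]])
# Try k from 2 up to 3 * number of web pages (toy: one page, so k in
# {2, 3}) and keep the clustering with the minimum residual error.
centroids, labels, residual = min(
    (kmeans(flows, k) for k in range(2, 4)), key=lambda r: r[2])
```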

Step 4: Form a feature vector per flow for later classifiers.

The first part of the vector is the "statistics of all data packets
that pass step 1 (max, min, mean, ..., var)" [which didn't make sense
to me because this part would be the same for every flow].

The second part of the vector is the distances from all cluster centroids.
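The centroid-distance part might look like this (toy numbers standing in for the output of steps 2 and 3; the concatenation with the packet statistics from the first part is implied):

```python
import numpy as np

# Sketch of step 4's second part: one Euclidean distance per cluster
# centroid. Centroids here are made-up values for illustration.
centroids = np.array([[205., 0.011], [1410., 0.205]])
flow_point = np.array([210., 0.012])   # this flow's (length, time) point
centroid_distances = np.linalg.norm(centroids - flow_point, axis=1)
```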

Comments:

[0079] to [0084] report various accuracy and precision-recall
numbers, none of which are high.

Anyway, this patent seems moot to me, because Shadowsocks is
basically obsolete technology. All these flow-based approaches have an
obvious answer: connection multiplexing.

Justin Henck

Jun 6, 2019, 12:54:09 PM6/6/19
to Network Traffic Obfuscation
> Anyway, this patent seems moot to me, because Shadowsocks is
> basically obsolete technology.

My understanding is that Shadowsocks continues to be widely deployed, with limited protocol-based blocking (rather than [blocking all traffic going from SOURCE to DEST and vice versa]). Are there alternatives that are similarly low-overhead but perhaps more resilient to detection and symmetric key compromise?