Yes, we are aware of PCC as well. The PCC paper describes some nice
improvements relative to loss-based congestion control.
There are some similarities between PCC, as described in that paper,
and BBR. But there are also big differences.
In my mind, the biggest similarities are an overall preference for rate-based control mechanisms, an emphasis on probing the network to inform control decisions, the incorporation of goodput as a core measurement to guide these control decisions, and a goal of being resilient to incidental packet losses that are not indicative of a persistently full bottleneck.
In my mind, the biggest differences revolve around the core framework for modeling the system and controlling behavior:
o PCC treats the network as a black box: the sender runs randomized
  rate-change experiments and adjusts its sending rate according to a
  utility function that incorporates both throughput and packet loss,
  but is largely "loss-based" (in its own terms; see section 2.3).
  As PCC is described in the NSDI paper, there does not seem to be
  any bound on the amount of data in flight.
o BBR tries to build an explicit model of the network path, with
estimates of the bandwidth available to the flow and the round-trip
propagation time, to inform its control decisions about pacing rate
and a bound for in-flight data.
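To make the BBR side of that contrast concrete, here is a minimal
sketch, in Python, of the kind of explicit path model described in
the bullet above. This is an illustration under simplifying
assumptions, not BBR's actual code: the real algorithm uses windowed
max/min filters, gain cycling, and a state machine that are all
omitted here, and the gain values below are just placeholders.

  # Minimal sketch (not BBR's actual code) of a model-based sender:
  # estimate the bottleneck bandwidth and round-trip propagation time,
  # then derive a pacing rate and a cap on the amount of data in flight.
  class PathModel:
      def __init__(self):
          self.btl_bw = 0.0            # estimated bottleneck bw (bytes/sec)
          self.rt_prop = float('inf')  # estimated min round-trip time (sec)

      def on_ack(self, delivery_rate, rtt):
          # Plain max/min stands in for the windowed filters used in
          # practice.
          self.btl_bw = max(self.btl_bw, delivery_rate)
          self.rt_prop = min(self.rt_prop, rtt)

      def pacing_rate(self, pacing_gain=1.0):
          return pacing_gain * self.btl_bw

      def inflight_cap(self, cwnd_gain=2.0):
          # Bound data in flight to a small multiple of the estimated
          # bandwidth-delay product.
          bdp = self.btl_bw * self.rt_prop
          return cwnd_gain * bdp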
Other specific thoughts about PCC:
o The PCC NSDI paper does not seem to mention any bound on the
  amount of data in flight (cwnd, or a similar mechanism). This
  suggests that there is no such bound, which implies that if the
  bandwidth available to a flow decreases significantly then a
  substantial queue would rapidly build, causing high delays and
  packet loss (see the queue-growth sketch after this list).
o The PCC utility function incorporates throughput and loss rate, but
  not delay. This fact, together with the apparent lack of any bound
  on in-flight data, suggests that the algorithm is likely vulnerable
  to unknowingly operating with persistently big queues and high
  queueing delays, with no apparent mechanism for exiting this
  operating regime (see the utility sketch after this list).
o The shape of the PCC utility function seems likely to drive the
  system to operate near a full queue. In multi-flow scenarios the
  throughput a flow gets is roughly proportional to its share of
  queue slots, which means that a flow's throughput grows with its
  sending rate. Given that the PCC utility function rewards
  throughput but does not penalize delay, the dynamics of the PCC
  algorithm described in that paper would tend to drive the system to
  oscillate around a full bottleneck buffer, with delay near the
  maximum that the path can offer, and with periodic loss finally
  causing the largely "loss-based" PCC utility function to conclude
  that sending faster greatly increases loss without increasing
  throughput.
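To put rough numbers on the first point above, about the lack of a
bound on data in flight: the figures below are made up purely for
illustration, but they show how quickly a queue builds once the
sending rate exceeds the available bandwidth and nothing caps the
data in flight.

  # Made-up numbers, illustrating queue growth with no in-flight cap:
  send_rate   = 100e6 / 8   # sender keeps pacing at 100 Mbit/s (bytes/sec)
  new_bw      =  20e6 / 8   # available bandwidth drops to 20 Mbit/s
  buffer_size =   5e6       # 5 MB drop-tail bottleneck buffer (bytes)

  fill_rate = send_rate - new_bw          # ~10 MB/sec of queue growth
  time_to_fill = buffer_size / fill_rate  # buffer is full after ~0.5 sec
  delay_at_full = buffer_size / new_bw    # ~2 sec of standing queueing delay

  print(f"buffer full after {time_to_fill:.2f}s; "
        f"queueing delay then ~{delay_at_full:.1f}s, plus packet loss")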
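And here is a caricature of a PCC-style utility function, relevant to
the second and third points above. It is not the exact function from
the NSDI paper, and the loss-penalty weight is arbitrary, but it
shares the property at issue: it rewards throughput and penalizes
loss, while delay appears nowhere in the objective.

  # Caricature of a PCC-style utility (not the exact NSDI formula):
  # reward goodput, penalize loss, include no delay term at all.
  def pcc_style_utility(send_rate, loss_rate, loss_penalty=10.0):
      goodput = send_rate * (1.0 - loss_rate)
      return goodput - loss_penalty * send_rate * loss_rate

  # Because queueing delay never enters this objective, rate
  # experiments that keep the bottleneck buffer persistently full can
  # still score well, right up until the buffer overflows and loss
  # appears.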
That said, I suspect that there are important details about PCC that
are not captured in that NSDI paper. Furthermore, I gather the authors
of PCC have continued to work in this area, so I suspect PCC has
evolved well past the state described in that paper, much as we are
continuing to evolve BBR.
But I hope that helps sketch out some similarities and differences.
cheers,
neal