My intention was to compare stages of a "grammar" as defined in the derivatives paper with a sequence of Earley sets similar in syntax to the ones in yours, yes. A more mathy way to present the former sequence would perhaps be:
&start = (A + B) U (A + &start + B)
Which then gets transformed through
-A : B U (&start + B)
-A : 0 U ((B U (&start + B)) + B)
=> (B U (&start + B)) + B
-B : (ε U (0 + B)) + B ∵ -B. &start = (0 + B) U (0 + &start + B) => 0
=> B
(I gloss over some invariants not stated in the original paper, having to do with the derivation step mapping from the node set {sym, &ref, U, +} to one that additionally contains 0 and ε, and with the compaction step removing those again; my question is about the high-level correspondence with Marpa, which itself does not admit nullable or void rules)
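For what it's worth, the derivation steps above can be sketched as a tiny recognizer. This is my own illustrative reconstruction, not code from the paper: `Ref` stands in for &start-style references, `compact` applies the 0/ε simplification rules used in the `=>` steps, and `nullable` is a one-pass least-fixpoint approximation that suffices because this grammar has no nullable rules.

```python
# A minimal parsing-with-derivatives recognizer for
#   &start = (A + B) U (A + &start + B)
# Illustrative sketch only; names and structure are mine, not the paper's.

class Empty: pass                  # 0, the empty language
class Eps: pass                    # epsilon
class Sym:
    def __init__(self, c): self.c = c
class Cat:                         # "+" in the notation above
    def __init__(self, l, r): self.l, self.r = l, r
class Alt:                         # "U" in the notation above
    def __init__(self, l, r): self.l, self.r = l, r
class Ref:                         # lazy reference, so &start can be recursive
    def __init__(self): self.target = None

def nullable(n, path=frozenset()):
    # One-pass least fixpoint: a cycle counts as non-nullable.
    # Adequate here because the grammar has no nullable rules.
    if id(n) in path: return False
    path = path | {id(n)}
    if isinstance(n, Eps): return True
    if isinstance(n, (Empty, Sym)): return False
    if isinstance(n, Ref): return nullable(n.target, path)
    if isinstance(n, Alt): return nullable(n.l, path) or nullable(n.r, path)
    return nullable(n.l, path) and nullable(n.r, path)   # Cat

def derive(n, c, memo):
    # memo maps (id(node), c) -> derived node; a placeholder is installed
    # before recursing so that cycles through Ref terminate.
    key = (id(n), c)
    if key in memo: return memo[key]
    if isinstance(n, (Empty, Eps)):
        memo[key] = Empty(); return memo[key]
    if isinstance(n, Sym):
        memo[key] = Eps() if n.c == c else Empty(); return memo[key]
    if isinstance(n, Ref):
        res = Ref(); memo[key] = res
        res.target = derive(n.target, c, memo)
        return res
    if isinstance(n, Alt):
        res = Alt(None, None); memo[key] = res
        res.l, res.r = derive(n.l, c, memo), derive(n.r, c, memo)
        return res
    # Cat: D_c(l + r) = (D_c l) + r, plus D_c r when l is nullable
    if nullable(n.l):
        res = Alt(None, None); memo[key] = res
        res.l = Cat(derive(n.l, c, memo), n.r)
        res.r = derive(n.r, c, memo)
    else:
        res = Cat(None, n.r); memo[key] = res
        res.l = derive(n.l, c, memo)
    return res

def compact(n, seen=None):
    # The simplification used in the "=>" steps: 0 U x => x, eps + x => x,
    # 0 + x => 0.  `seen` guards against looping on Ref cycles.
    if seen is None: seen = set()
    if id(n) in seen: return n
    seen.add(id(n))
    if isinstance(n, Ref):
        n.target = compact(n.target, seen)
        return n
    if isinstance(n, Alt):
        l, r = compact(n.l, seen), compact(n.r, seen)
        if isinstance(l, Empty): return r
        if isinstance(r, Empty): return l
        return Alt(l, r)
    if isinstance(n, Cat):
        l, r = compact(n.l, seen), compact(n.r, seen)
        if isinstance(l, Empty) or isinstance(r, Empty): return Empty()
        if isinstance(l, Eps): return r
        if isinstance(r, Eps): return l
        return Cat(l, r)
    return n

def matches(lang, s):
    for c in s:
        lang = compact(derive(lang, c, {}))
    return nullable(lang)

A, B = Sym("A"), Sym("B")
start = Ref()
start.target = Alt(Cat(A, B), Cat(A, Cat(start, B)))
```

So `matches(start, "AABB")` replays exactly the -A, -A, -B, -B sequence above, and the final `nullable` check plays the role of the terminal `=> B` collapsing to ε on the last derivative.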
The fundamental correspondence, as it appears to me, is that a derivative-style grammar formally keeps only (the language of) the remaining input, but keeps all of it in a single data structure; compare
0. start: A + B ; A + &start + B
1. -A : B ; &start + B # start_1(g) := (g + B)
2. -A : start_1(B ; &start + B) # "predict"
3. -B : start_1(ε ; 0) => ε + B => B # "reduce"
to the Marpa side:
0. start: •A + B; •A + &start + B
1. -A : (A+)•B ; (A+)•&start + B
; start_1:•A + B; •A + &start + B
2. -A : start_1: (A+)•B; (A+)•&start + B
; start_2: (etc. unused)
3. -B : start_1: (A+B)• => start(_0) : (A+&start+)•B
The main difference seems to be that the explicit tree sharing leaves the rule names around to inspect, while the Might-Darais approach has to stick them into separate tagged εs.
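To make the Marpa-style trace above concrete, here is a plain Earley recognizer (predict/scan/complete over sets of dotted items) for the same grammar. Again this is my own sketch, not anything resembling libmarpa's internals; items are (lhs, rhs, dot, origin) tuples, and since the grammar has no nullable rules, the usual nullable-completion subtlety does not arise.

```python
# A plain Earley recognizer for the same grammar,
#   start -> A B | A start B
# sets[i] is the Earley set at input position i; each item is a dotted rule
# (lhs, rhs, dot, origin) like the "start_1: (A+)•B" entries above.

GRAMMAR = {"start": [("A", "B"), ("A", "start", "B")]}

def earley_recognize(grammar, start, tokens):
    sets = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        sets[0].add((start, rhs, 0, 0))
    for i in range(len(sets)):
        work = list(sets[i])
        while work:
            lhs, rhs, dot, origin = work.pop()
            if dot < len(rhs):
                nxt = rhs[dot]
                if nxt in grammar:                           # predict
                    for prod in grammar[nxt]:
                        item = (nxt, prod, 0, i)
                        if item not in sets[i]:
                            sets[i].add(item); work.append(item)
                elif i < len(tokens) and tokens[i] == nxt:   # scan
                    sets[i + 1].add((lhs, rhs, dot + 1, origin))
            else:                                            # complete
                # advance every item in the origin set waiting on this lhs
                for plhs, prhs, pdot, porig in list(sets[origin]):
                    if pdot < len(prhs) and prhs[pdot] == lhs:
                        item = (plhs, prhs, pdot + 1, porig)
                        if item not in sets[i]:
                            sets[i].add(item); work.append(item)
    return any(lhs == start and dot == len(rhs) and origin == 0
               for (lhs, rhs, dot, origin) in sets[len(tokens)])
```

Running it on "AABB" reproduces step 3 of the trace: the completed `start_1: (A+B)•` in set 3 reaches back to its origin set and advances the parent item to `(A+&start+)•B`, which then scans the final B.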
NB. Regardless of the existence or fictitiousness of such an equivalence, writing these emails has helped my understanding of Earley tremendously, to the point where I would almost be comfortable trying to write an implementation; or possibly taking a deeper look at libmarpa, whose interface docs I had skimmed and bounced off of. Thank you for being around to answer questions :3