Maybe it would be more precise to use different terms entirely.
I would personally call SAGE/Springfield a concolic execution engine. The technique it uses is symbolic execution starting from concrete inputs. SAGE looks very closely at the program under test and tries to model the effect of each instruction that touches input data. It then uses a constraint solver to generate new inputs that take the program down paths it has not explored yet. Because it examines the program under test so deeply, you could call it "white-box". On the other hand, SAGE does not require source code AFAIK.
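To make the constraint-solving step more concrete, here is a minimal sketch in Python. It uses the z3-solver package purely as an illustration of the idea (SAGE's actual engine works on x86 execution traces and is far more involved); the toy program and the concrete input 7 are made up for the example:

    from z3 import BitVec, Solver, sat

    def program(x):
        # Toy program under test: one branch that depends on the input.
        if x * 3 == 42:
            return "interesting path"
        return "boring path"

    # 1. Run on a concrete input; the "boring" branch is taken.
    print(program(7))                       # -> boring path

    # 2. Model the missed branch condition symbolically and ask the
    #    solver for an input that satisfies it.
    x = BitVec("x", 32)
    s = Solver()
    s.add(x * 3 == 42)                      # condition of the unexplored branch
    if s.check() == sat:
        new_input = s.model()[x].as_long()  # 14
        print(program(new_input))           # -> interesting path

A real concolic engine does this for every branch along a recorded trace, negating one condition at a time while keeping the prefix of the path constraint fixed.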
AFL is a fuzzer because randomness is key to how it explores the program. It also looks at the program under test, but it gathers only limited information -- mostly whether an input causes some edge in the control-flow graph to be executed or not. I'd call it a coverage-guided fuzzer for that reason.
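As a contrast, here is a toy coverage-guided fuzzing loop in Python. The target and its hand-written "edge" bookkeeping are invented for the example; real AFL instruments the program (at compile time or via QEMU) and records edge hits in a shared bitmap, but the feedback loop looks roughly like this:

    import random

    def target(data):
        # Toy target; "edges" are recorded by hand instead of by instrumentation.
        edges = set()
        if len(data) > 0 and data[0] == ord('F'):
            edges.add("a")
            if len(data) > 1 and data[1] == ord('U'):
                edges.add("b")
                if len(data) > 2 and data[2] == ord('Z'):
                    edges.add("c")
                    raise RuntimeError("crash")
        return edges

    def mutate(data):
        # Overwrite one random byte, AFL's "havoc" stage in miniature.
        data = bytearray(data)
        if data:
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    corpus = [b"AAAA"]
    seen_edges = set()
    for _ in range(100000):
        candidate = mutate(random.choice(corpus))
        try:
            edges = target(candidate)
        except RuntimeError:
            print("crash with input", candidate)
            break
        if edges - seen_edges:            # new coverage -> keep this input
            seen_edges |= edges
            corpus.append(candidate)

Note that there is no reasoning about why a branch was or was not taken; the only feedback is which edges an input happened to cover, and randomness does the rest.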
To me, fuzz testing is strongly linked to randomness. AFL relies very heavily on randomness, while SAGE/Springfield tries to use a constraint solver instead. In SAGE/Springfield, randomness is more of a fallback for cases where the constraint solving is not precise enough.
Caveat: my understanding of SAGE is based on research papers by Ella Bounimova, Patrice Godefroid and others, which are five years old or more. So this might not reflect the current state of Springfield very well.
Hope this helps,
Jonas