I think it is largely a difference in terminology -- Theano implements "automatic differentiation", but it refers to its own types as "symbolic" tensors (as opposed to "numeric" arrays), and so the operation Theano's gradient performs is called "symbolic differentiation".
But the truth is a little ambiguous. Broadly speaking, "symbolic differentiation" refers to the manipulation of mathematical expressions: a symbolic differentiator knows the rules for taking the derivative of each basic operation in an expression. "Automatic differentiation" refers to the manipulation of a computer program: an automatic differentiator knows the rules for taking the derivative of each basic function call or control-flow statement in a program. These two things are not the same, though they are sometimes very similar.

Symbolic differentiation requires the repeated application of differentiation operators, and the resulting expressions often grow very large (a problem known as expression swell). Reverse-mode automatic differentiation, on the other hand, replaces a chain of functions with a chain of derivative functions, so the result is usually much more manageable. That's how Theano works: it builds up a computational chain, then passes derivatives backwards through the chain's gradient functions (think backprop).
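To make the reverse-mode idea concrete, here is a minimal sketch in plain Python (this is my own toy illustration, not Theano's API): each variable records the local derivative rules of the operation that produced it, and a backward pass pushes the output gradient back along that chain.

```python
import math

# Toy reverse-mode AD (a sketch, not Theano's implementation).
# Each Var stores its value plus (parent, local partial) pairs, i.e.
# the derivative rules for the one operation that produced it.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent Var, local partial derivative)
        self.grad = 0.0

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(x):
    # d(sin a)/da = cos a
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

def backward(out):
    # Pass derivatives backwards through the chain of gradient
    # functions (think backprop). A chain-shaped graph is assumed;
    # a full implementation would visit nodes in topological order.
    out.grad = 1.0
    stack = [out]
    while stack:
        v = stack.pop()
        for parent, local in v.parents:
            parent.grad += v.grad * local
            stack.append(parent)

x = Var(2.0)
y = sin(x * x)   # y = sin(x**2)
backward(y)
# x.grad now holds dy/dx = 2*x*cos(x**2) = 4*cos(4)
```

The point of the sketch: no large symbolic expression for dy/dx is ever constructed -- the derivative emerges from composing the small local rules in reverse.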
However, for very simple expressions the two approaches can produce identical results, which is why telling them apart can be confusing. For control flow like "if" statements and loops, they can be wildly different -- AD will usually give far more efficient answers by exploiting its knowledge of the program's structure.
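The control-flow point can be illustrated with forward-mode AD via dual numbers (again a toy stdlib sketch of mine, not any library's API): the derivative simply rides along whatever execution path the loop and branches actually take, so no expanded symbolic expression is ever needed.

```python
# Toy forward-mode AD with dual numbers (an illustrative sketch).
# A Dual carries a value and its derivative with respect to the input.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __mul__(self, other):
        # Product rule: (ab)' = a'b + ab'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

def power(x, n):
    # Ordinary square-and-multiply loop with an "if" inside;
    # AD just differentiates the path the program actually takes.
    result = Dual(1.0)
    while n > 0:
        if n % 2 == 1:
            result = result * x
        x = x * x
        n //= 2
    return result

x = Dual(3.0, 1.0)   # seed dx/dx = 1
y = power(x, 5)      # y = x**5
# y.value = 243.0, y.deriv = 405.0 (= 5 * 3**4)
```

A symbolic approach would first have to unroll the loop into an explicit expression before differentiating it; here the loop and branch are handled for free.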