Not to nitpick over a vague description, but it's the other way around. CPUs are good at intensive operations that GPUs aren't: picture an incredibly complicated algorithm, say a chess engine where a position is broken down into hundreds of features. Each of those features is quantified based on the values of other features, and then they're all fed into a big branching formula that yields an evaluation. Now do that a hundred thousand times a second. You want a CPU for that.
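To make that concrete, here's a rough Python sketch of what I mean by a branchy, step-by-step evaluation. The piece values, weights, and features are all made up; the point is that each step depends on earlier ones and branches on their results, so it's hard to split into thousands of identical independent jobs:

```python
from dataclasses import dataclass

# Hypothetical, simplified position evaluator. All names and weights are
# invented for illustration; what matters is the branching and the data
# dependencies between steps.

PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

@dataclass
class Position:
    pieces: list                 # e.g. ["P", "P", "N", "Q", ...]
    legal_move_count: int
    king_exposed: bool
    attackers_near_king: int

def evaluate(pos: Position) -> float:
    score = 0.0

    # Step 1: raw material count.
    material = sum(PIECE_VALUES[p] for p in pos.pieces)
    score += material

    # Step 2: mobility, weighted differently depending on step 1.
    if material < 1500:          # "endgame": mobility matters more
        score += 0.5 * pos.legal_move_count
    else:
        score += 0.1 * pos.legal_move_count

    # Step 3: king safety, again conditional on step 1.
    if pos.king_exposed and material >= 1500:
        score -= 25.0 * pos.attackers_near_king

    return score

if __name__ == "__main__":
    pos = Position(pieces=["P"] * 8 + ["N", "B", "R", "Q"],
                   legal_move_count=34,
                   king_exposed=True,
                   attackers_near_king=2)
    print(evaluate(pos))
```

An engine runs something like this (only far hairier) at every node of a search tree, which is why a fast single core with good branch handling is what you want.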
GPUs, on the other hand, are good at massively parallel but relatively simple operations. Think of 3D graphics: rotating an object in three-dimensional space, represented as coordinates, is simple. Well, relatively simple; mathematically inclined people can do it in their head. Now add about a hundred objects to the scene, plus light refraction (not difficult mathematically, but if you're simulating a fully lit scene, that's a LOT of light rays), light absorption, shadows, and so on, and a GPU shows its worth: lots and lots of simple, independent operations, ideally fast enough that the scene runs in real time. A CPU, which can do far more complicated things if you're talking about a single algorithm, would melt.
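Here's a rough sketch of that kind of workload: the same small rotation applied independently to a million vertices. NumPy stands in here for what a GPU would do with one thread per vertex; the sizes and angle are arbitrary:

```python
import numpy as np

# One simple operation (a rotation about the z axis) applied independently
# to a huge batch of vertices. Every vertex is independent of every other,
# which is exactly the shape of work a GPU parallelizes well.

def rotation_z(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A million vertices, one (x, y, z) row each.
vertices = np.random.rand(1_000_000, 3).astype(np.float32)

# One matrix multiply rotates all of them; on a GPU each vertex would be
# handled by its own thread, every frame, for every object in the scene.
rotated = vertices @ rotation_z(np.pi / 6).T

print(rotated.shape)  # (1000000, 3)
```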
NNs are built around linear algebra, which is the math behind a lot of what happens in 3D graphics. I just wrote a long (if perhaps not completely apt) explanation of why neural nets do operations similar to computer graphics, and my computer ate it, but as a partial explanation I think this is okay.
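The short version is that a dense neural-net layer is basically a big matrix multiply plus a simple per-element function, i.e. the same "many independent multiply-adds" shape of work as the vertex rotation above. A minimal sketch, with arbitrary sizes and random weights:

```python
import numpy as np

# One dense layer: inputs times a weight matrix, plus a bias, through a
# ReLU. Every output element is an independent dot product, which is why
# the same hardware that pushes vertices and pixels also pushes NN layers.

rng = np.random.default_rng(0)

batch = rng.standard_normal((256, 784)).astype(np.float32)    # 256 inputs
weights = rng.standard_normal((784, 512)).astype(np.float32)  # one layer
bias = np.zeros(512, dtype=np.float32)

activations = np.maximum(batch @ weights + bias, 0.0)  # ReLU

print(activations.shape)  # (256, 512)
```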