Hi Vinicius,
Actually, you're using numpy in both cases :) What matters is not the evaluation function, but the primitive set, and in both cases you are using numpy functions when you do something like pset.addPrimitive(numpy.add, 2, name="vadd").
The difference is that when you use eval(), you're relying on global variables and global functions. Hence, you must define vadd and the others by explicitly importing them (from numpy import add as vadd), and you must define your inputs in the global scope under predefined names (in your case, x, y, and z). When using compile(), you instead receive a standalone function to which you can pass any arbitrary arguments.
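To make the difference concrete, here is a minimal sketch of the two mechanisms. It uses operator.add as a stand-in primitive and illustrative names (vadd, x, y, z); it is not DEAP's actual code, just the underlying idea:

```python
import operator

# The primitive "vadd" must live in the global scope for eval() to find it.
vadd = operator.add

# A string representation of a GP tree, as eval() would see it.
expr = "vadd(x, vadd(y, z))"

# eval() route: the inputs must also exist as globals with fixed names.
x, y, z = 1, 2, 3
result_eval = eval(expr)          # looks up vadd, x, y, z in the global scope

# compile-to-function route: wrap the expression in a lambda, so the
# resulting function accepts any arguments, regardless of global state.
func = eval("lambda x, y, z: " + expr)
result_func = func(10, 20, 30)    # no need to touch the globals x, y, z
```

With func in hand, evaluating a new dataset is just another call, e.g. func(x_test, y_test, z_test), with no global redefinition required.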
Not only is this more troublesome, it is also error-prone. For instance, if you have a test dataset on which you want to assess the performance of the best individual at the end of the evolution, you must redefine x, y, and z in the global scope before calling its evaluation. With compile(), you simply receive a function that can evaluate any arbitrary input data.
That said, there is indeed a slight difference in timing between eval() and compile() (in favor of the former). However, this difference is quite small (about 0.1 seconds on my computer for the whole evolution) and does not depend on the dataset size, so it does not really matter. It is interesting from a Python point of view (why is accessing global variables faster than calling a function?) but changes little in practical cases. In particular, note that your individuals do not take 1 second to be evaluated; the whole evolution process takes 1 second to complete, which is quite a different thing :)
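If you are curious about that micro-difference, you can measure it yourself with a sketch like the one below (names are illustrative; absolute timings will vary by machine and Python version, so no particular winner is asserted here):

```python
import timeit

# Illustrative primitive and expression, mimicking a small GP tree.
def vadd(a, b):
    return a + b

expr = "vadd(x, vadd(y, z))"
x, y, z = 1.0, 2.0, 3.0                    # globals required by the eval() route

code = compile(expr, "<expr>", "eval")     # precompiled code object for eval()
func = eval("lambda x, y, z: " + expr)     # standalone function

t_eval = timeit.timeit(lambda: eval(code), number=100_000)
t_func = timeit.timeit(lambda: func(x, y, z), number=100_000)

print(f"eval(): {t_eval:.3f}s   func(): {t_func:.3f}s")
```

Both routes compute the same value; only the name-lookup and call overhead differ, which is why the gap stays constant regardless of how large the evaluated arrays are.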
Do not hesitate if you have any remaining questions,
Marc-André