>
> Another example is the algorithms outputting random group elements that are
> supposed to be nearly uniformly distributed in the group.
If they are selected by integers which have themselves been tested to
be uniformly distributed (e.g. Python's generator), then I don't think
you have to test the SymPy output. But perhaps I am misunderstanding.
On 23.06.2012 17:38, Aleksandar Makelov wrote:
> We want to make sure that the right thing is done with the output from the
> RNGs, so we manually supply as an additional argument to a given function
> some particular choice for all the variables inside the function that come
> from RNGs.
Ah, I see.
I'm not convinced that this is the best way to design such a thing.
Adding parameters to a function purely for testing purposes is going to
confuse people who aren't into testing. It also contradicts the "keep
interfaces as narrow as possible" principle - a narrow interface means
fewer things that need to be remembered by programmers, fewer things
that need to be set up by the caller, and fewer things that might be
misinterpreted.
Also, it would add code to the functions. That means adding bugs, which
may affect the function when it's running in production - which kind of
defeats the purpose of testing in the first place.
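To sketch the narrower alternative: a test can make the randomness reproducible by seeding the RNG from the outside, rather than widening every function's signature with test-only arguments. `random_exponent` below is a hypothetical stand-in for a function that consumes the module-level RNG:

```python
import random

def random_exponent(order):
    # Hypothetical function that consumes the module-level RNG,
    # standing in for code that draws random group elements.
    return random.randrange(order)

# In a test, seed the global RNG instead of threading a test-only
# parameter through the function's signature:
random.seed(1234)
first = [random_exponent(60) for _ in range(5)]
random.seed(1234)
second = [random_exponent(60) for _ in range(5)]
assert first == second  # reproducible, without widening the interface
```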
> The reason that we use certain precomputed values is that doing
> the test with some randomly generated set of values as an additional
> argument is essentially going to have to repeat the calculations in the
> function itself (which we want to test) - whereas for concrete values we
> know the answer right away. Does that make sense?
Not very much, I fear.
As Stefan said, repeating a calculation in test code isn't a useful unit
test, even if you place the unit test in another module, or if you do
the calculation by hand - unless those calculations have been done by
experts in the field and verified by other experts in the field, of
course.
Expanding on Stefan's example: assume you're testing a matrix-inversion
routine.
We agree on the worst approach to test it: repeat the inversion
algorithm in the test and see whether it gives the same result as the
code in SymPy.
Actually, this kind of test isn't entirely pointless - if the test code
remains stable while the SymPy code evolves through optimizations, it
could serve a useful purpose. On the other hand, you still wouldn't
write this kind of test code until you actually do the optimization.
The other approach would be to add an "expected result" parameter, and
fail if the result isn't the expected one.
This has two problems:
a) It adds an unwanted dependency on the testing modules, at least if
you want to give better diagnostics than just throwing an exception (for
example, you may want to test internal workings that throw exceptions
which get caught).
b) You're supplying precomputed results. You'd still need to explain why
the results are correct. Somebody has to verify that they are, indeed,
correct.
My approach for that would be to test the defining property of the function:
(matrix_inv(A) * A).is_unit_matrix()
(sorry for ad-hoc invention of matrix functions)
I.e. you're testing the purpose of the function, not its inner workings.
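A minimal self-contained sketch of such a property test (the 2x2 helpers are invented for illustration, not SymPy's API):

```python
import random

def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # 2x2 inverse via the adjugate formula (the "function under test")
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def is_unit_matrix(M, tol=1e-6):
    # True if M is the identity up to floating-point tolerance
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(2) for j in range(2))

# Property test: for random well-conditioned A, inv(A) * A should be I.
rng = random.Random(42)  # seeded, so failures are reproducible
for _ in range(100):
    A = [[rng.uniform(-10, 10) for _ in range(2)] for _ in range(2)]
    if abs(A[0][0] * A[1][1] - A[0][1] * A[1][0]) < 1e-3:
        continue  # skip near-singular samples
    assert is_unit_matrix(mat_mul(mat_inv(A), A))
```

Note that the test never repeats the inversion algorithm: it only checks the property that defines a correct result.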
On Jun 23, 2012, at 10:32 AM, Joachim Durchholz <j...@durchholz.org> wrote:
> I'm not convinced that it's the best way to design such a thing. Adding parameters to a function that are purely there for testing purposes is going to confuse people who aren't into testing. It's also in contradiction to the "keep interfaces as narrow as possible" principle - a narrow interface means less things that need to be remembered by programmers, less things that need to be set up by the caller, less things that might be misinterpreted.
> Also, it'd adding code to the functions. Which means adding bugs - which may affect the function if it's running in production. Which kind of defeats the purpose of testing in the first place.
This is fixed by the ideas of this pull request:
https://github.com/sympy/sympy/pull/1375.
On 24 June 2012, Sunday, 04:22:56 UTC+3, Aaron Meurer wrote:
> This is fixed by the ideas of this pull request:
> https://github.com/sympy/sympy/pull/1375.
I'm confused about this - how does the PR avoid the use of an additional
parameter?
--
You received this message because you are subscribed to the Google Groups "sympy" group.
To view this discussion on the web visit https://groups.google.com/d/msg/sympy/-/3QN8h68PtfMJ.
To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
On Jul 4, 2012, at 2:20 PM, Aleksandar Makelov <amak...@college.harvard.edu> wrote:
> I'm confused about this - how does the PR avoid the use of an
> additional parameter?
You still have an additional parameter, but there's no more code
duplication.

Aaron Meurer