I got back to h1d and started fixing it so that Python works
again. It took me a good 6 hours and I am still not done.
Currently, the following tests fail in master:
17 - example-laplace_bc_newton2 (SEGFAULT)
19 - example-system_car_model (Failed)
21 - example-system_exp (Failed)
18 - example-neutronics (Failed)
22 - example-system_neutronics_eigenvalue (Failed)
23 - example-system_neutronics_fixedsrc (Failed)
24 - example-system_neutronics_fixedsrc2 (Failed)
20 - example-system_chaotic (SEGFAULT)
32 - benchmark-first_order_general_adapt_ftr (SEGFAULT)
Those are unrelated to Python. Does anyone know why they fail? Last
time I touched h1d, all tests passed, so somebody must have broken it.
Unless someone is willing to work on that, I am going to disable them.
Ondrej
I do not agree. By disabling them you would put them out of sight
and they might never be looked at again. I did not work on H1D,
so I guess someone else broke the tests, and this should be
easy to find in the git history.
Pavel
--
Pavel Solin
University of Nevada, Reno
Home page: http://hpfem.org/~pavel
FEMTEC 2011: http://hpfem.org/events/femtec-2011/
Hermes: http://hpfem.org/
FEMhub: http://femhub.org/
In the topic "tests statistics" that Zhonghua started on Nov 29, he said that 100% tests passed.
On Sat, Dec 4, 2010 at 3:12 AM, Zhonghua Ma <mazhon...@gmail.com> wrote:
> Hello Lukas,
>
> I created the new tests for the regular examples and benchmarks this week,
> according to what we have in H1D, and some of them failed, as Ondrej
> reported.
>
> On Sat, Dec 4, 2010 at 6:12 PM, Lukas Korous <lukas....@gmail.com> wrote:
>>
>> In the topic "tests statistics" that Zhonghua started on Nov 29, he said
>> that 100% tests passed.
>
>
> Yes, 100% of the tests passed, but that did not include the new tests.
Hi Zhonghua,
So if I understand correctly, you have tried to create new tests, but
they are still unfinished and don't pass?
In that case, we should disable them until they are finished. This is
not something that used to work and is now broken; it has never
worked.
The problem is that with failing tests it's impossible to do any
development, because I need to know whether my changes broke something
or not. So I'll go ahead and disable the failing tests for now, until
they get fixed.
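One way to do that while keeping them easy to re-enable would be to
hide them behind a CMake option, roughly like this (the option name
and the paths are only illustrative, not the actual h1d layout):

    # hypothetical flag guarding the new, not-yet-passing tests
    option(WITH_UNFINISHED_TESTS "Build the new tests that do not pass yet" OFF)

    if(WITH_UNFINISHED_TESTS)
      # illustrative paths; adjust to wherever the new examples actually live
      add_subdirectory(examples/neutronics)
      add_subdirectory(examples/system_car_model)
    endif()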
Ondrej
Ondrej:
Please mark them as "supposed to fail"; do not disable them completely.
Otherwise we will not see them and will never return to them. It is
very important that we can see that something there is not OK.
Exactly. I am now working on that. For some reason, the new CMake
stopped saying "supposed to fail"; it just says "Passed" if the test
fails and has the WILL_FAIL marker. So it's not so easy to see which
tests have this problem. But I still think it's a better solution than
disabling them.
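For reference, the marker is set in the CMakeLists roughly like this
(the add_test line is only an illustration, not the exact one we use):

    # register the example as a test; name taken from the failure list above
    add_test(example-neutronics neutronics)
    # CTest inverts the pass/fail status based on the exit code
    set_tests_properties(example-neutronics PROPERTIES WILL_FAIL TRUE)

With WILL_FAIL set, a run that exits with a nonzero code is reported
as passed, which is exactly the behaviour I described above.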
I will send a pull request with all my changes in h1d later today.
Ondrej