tasks for 1.2 alpha


Winston Wolff

Jun 21, 2012, 8:32:21 PM
to pyglet...@googlegroups.com
I'm going to spend another day tomorrow working on the release. My to do list is:

- apply Txema's work on documentation
- apply patch in issue 580 from M Paola for DDS_RGBA_DXT1_LOAD test failures
- go through issues to see if there are other patches that look useful and safe (I don't want to add too much which might delay this release)
- try to fix other test failures
- improve output format of test runner so we can get better data on how Pyglet is working.

Anything else?

Winston Wolff
Stratolab - Games for Learning
tel: (917) 543 8852
web: www.stratolab.com

Richard Jones

Jun 21, 2012, 8:58:02 PM
to pyglet...@googlegroups.com
On 22 June 2012 10:32, Winston Wolff <winsto...@gmail.com> wrote:
> I'm going to spend another day tomorrow working on the release. My to do list is:
>
> - apply Txema's work on documentation
> - apply patch in issue 580 from M Paola for DDS_RGBA_DXT1_LOAD test failures
> - go through issues to see if there are other patches that look useful and safe (I don't want to add too much which might delay this release)
> - try to fix other test failures

Awesome, thanks! I'm sorry I've been so quiet this week. I think after
applying the few fixes that've come up during testing we should push
out the release, with the intention of releasing the alpha "early" and
releasing fixes often.


> - improve output format of test runner so we can get better data on how Pyglet is working.

Just some thoughts on this. Pie in the sky stuff. It would be useful
if this could include some thinking about what to do with test
reports. Having people email the mailing list - or even creating
individual issue tracker items - mostly generates noise. If the test
run output could be generated in a structured manner we could feed it
into a collation system (which wouldn't take a significant effort to
develop). The run output would need to include the OpenGL info dump,
repository version and testing log. The testing log would ideally be
able to be captured over multiple partial runs to allow for complete
system failures in individual tests.
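One possible shape for that structured output, sketched as a simple JSON document (the field names here are hypothetical, not an agreed schema):

```python
import json
import platform
import sys

def build_report(gl_info, hg_id, test_log):
    """Assemble one structured test-run report (hypothetical schema)."""
    return {
        "platform": platform.platform(),
        "python": sys.version.split()[0],
        "gl_info": gl_info,    # e.g. vendor/renderer/version strings
        "revision": hg_id,     # repository version, or None for tarball runs
        "log": test_log,       # per-test results; can be appended across partial runs
    }

report = build_report(
    gl_info={"vendor": "Example", "version": "2.1"},
    hg_id="abc123",
    test_log=[{"test": "DDS_RGBA_DXT1_LOAD", "result": "pass"}],
)
print(json.dumps(report, indent=2))
```

Because the log is a plain list, a report from a run that died partway through can be merged with the log of a follow-up run before submission.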

I'd be happy to install a web service to collate the above information
if someone were to write it. Doesn't have to be fancy. If no-one steps
up I'll try to write something myself. If we do get something done
it'd be good to have from the earliest alpha releases.


Richard

Martin Di Paola

Jun 21, 2012, 11:43:42 PM
to pyglet...@googlegroups.com

I think the most useful thing would be to have at least one system to organize the reports. Even if it cannot interpret the output of the reports, the system could at least classify them and determine which platform / OS / GDM / Python versions were tested.

With respect to the TODO list for the next release, maybe you can add the fix for issue 394.
(I am posting this here because the Google issue system says that responses to the issue are reported, but to whom? The owner? Can I add another person as CC? Hmm, too many questions.)




On Thursday, June 21, 2012 at 9:58:02 PM UTC-3, Richard Jones wrote:

Richard Jones

Jun 22, 2012, 2:11:18 AM
to pyglet...@googlegroups.com
On 22 June 2012 13:43, Martin Di Paola <petete...@gmail.com> wrote:
> With respect to the TODO list for the next release, maybe you can add the fix
> for issue 394.
> (I am posting this because the Google issue system says that responses
> to the issue are reported, but to whom? The owner? Can I add another person as
> CC? Hmm, too many questions.)

That is odd - I didn't get a notification for the patches being added.

The patch looks great, thanks! I had to add one fix in the setting of
the "width" attribute on the font.Text object so it would modify the
wrapping flags in the case where a width was not previously specified.


Richard

Winston Wolff

Jun 22, 2012, 12:06:02 PM
to pyglet...@googlegroups.com

On Jun 21, 2012, at 8:43 PM, Martin Di Paola wrote:

> I think the most useful thing would be to have at least one system to organize the reports. Even if it cannot interpret the output of the reports, the system could at least classify them and determine which platform / OS / GDM / Python versions were tested.

Yes, I agree. I'm hoping to find a pre-made service like Google Forms that we could submit our test data to.

>
> With respect to the TODO list for the next release, maybe you can add the fix for issue 394.
> (I am posting this because the Google issue system says that responses to the issue are reported, but to whom? The owner? Can I add another person as CC? Hmm, too many questions.)

Yes.

Winston Wolff

Jun 22, 2012, 8:50:46 PM
to pyglet...@googlegroups.com
Here's the status after today's work. Hopefully it's enough to make an alpha-1 release? I agree with releasing early and often:

- Sphinx documentation is in place. Thanks, Txema. It produces one error on Mac related to pyglet.com, which is Windows-only. Epydoc is removed, and Sphinx is in the tools/ folder. Run it from the pyglet folder with:

./make.py docs

- Deleted about 15 really old issues. I added comments thanking the authors for the issue, explaining that we are closing it because it is so old and we need to clean up the list, and asking them to repost the issue if it's still relevant. I had hoped to bring the number of issues to fewer than 100 so they all fit on one page. Didn't quite make it: 103.

- Applied a small handful of patches, mostly documentation change requests.

Didn't get to any test failures or testing output.

Regarding collating test output, my thoughts:
- Use google forms to collect data into a google docs spreadsheet.
- Form would include:
- Date
- platform information, python bit size, version, etc
- pyglet version
- OpenGL info dump
- hg id, if any (i.e., handle the case where people are running from a tarball)
- # tests run
- # tests failed
- list of failures - try to make it short and readable
- complete test output

- Modify tests.py so that at the end it asks the user whether we may upload their test results. If so, submit via HTTP POST.
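The ask-then-upload step could look something like the sketch below. The endpoint URL and payload format are assumptions (whatever form service is chosen would dictate them), and the sender and prompt are injectable so the logic is testable offline; a pyglet 1.x tests.py targeting Python 2 would use urllib2 rather than urllib.request.

```python
import json
import urllib.request  # on Python 2 this would be urllib2

RESULTS_URL = "http://example.com/pyglet-test-results"  # hypothetical endpoint

def maybe_submit(results, post=None, ask=input):
    """Ask the user for permission, then POST the results as JSON.

    `post` and `ask` are injectable so the consent/upload logic can be
    exercised without a network connection or a real prompt.
    """
    if ask("May we upload your test results? [y/N] ").strip().lower() != "y":
        return False
    payload = json.dumps(results).encode("utf-8")
    if post is None:
        def post(data):
            req = urllib.request.Request(
                RESULTS_URL, data=data,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)
    post(payload)
    return True
```

Defaulting to "no" on an empty answer keeps the upload strictly opt-in.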

Nathan

Jun 22, 2012, 10:42:03 PM
to pyglet...@googlegroups.com
On Fri, Jun 22, 2012 at 6:50 PM, Winston Wolff <winsto...@gmail.com> wrote:
> Here's the status after today's work. Hopefully it's enough to make an alpha-1 release? I agree with the release early and often:
[snip]

Nice work!

I especially like the streamlined-submitting-test-results stuff. I
hope the actual testing can be streamlined as well -- I've never made
it through them all without a fatal crash that terminated the tests.

~ Nathan

Jonathan Hartley

Jun 23, 2012, 1:03:26 PM
to pyglet...@googlegroups.com
On Saturday, June 23, 2012 3:42:03 AM UTC+1, Nathan wrote:
> On Fri, Jun 22, 2012 at 6:50 PM, Winston Wolff <win...@gmail.com> wrote:
> > Here's the status after today's work. Hopefully it's enough to make an alpha-1 release? I agree with the release early and often:
> [snip]
>
> Nice work!
>
> I especially like the streamlined-submitting-test-results stuff. I
> hope the actual testing can be streamlined as well -- I've never made
> it through them all without a fatal crash that terminated the tests.
>
> ~ Nathan



I think some of the manual tests could be automated. For example, the font ones could look for font-colored pixels in the output to establish the approximate rectangle that the text has been rendered in.
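A minimal sketch of that idea: scan a captured pixel buffer for pixels matching the font color and report the bounding rectangle they occupy. The buffer layout and the capture step are assumptions, not part of pyglet's actual test harness.

```python
def text_bounding_box(pixels, width, height, is_text_pixel):
    """Return (min_x, min_y, max_x, max_y) of pixels matching the predicate,
    or None if nothing matches. `pixels[y][x]` is an (r, g, b, a) tuple."""
    xs, ys = [], []
    for y in range(height):
        for x in range(width):
            if is_text_pixel(pixels[y][x]):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

# Toy usage: a 4x3 black buffer with two white "text" pixels on row 1.
BLANK = (0, 0, 0, 255)
INK = (255, 255, 255, 255)
buf = [[BLANK] * 4 for _ in range(3)]
buf[1][1] = INK
buf[1][2] = INK
box = text_bounding_box(buf, 4, 3, lambda p: p == INK)
```

A real test would then assert the box sits roughly where the layout engine placed the text, with some slack for anti-aliased edges.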

If fatal crashes terminating the test run early are a persistent problem, an idea for the future might be to convert tests that fire up a window to do so in a new process, so that the process under test can crash without terminating the test run.
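The process-isolation idea can be sketched with the standard library: run each windowed test in a child interpreter and classify the outcome. Module names here are examples rather than pyglet's actual test layout, and subprocess.run is modern Python 3 API (the 2012 codebase would have used subprocess.Popen).

```python
import subprocess
import sys

def run_isolated(test_module, timeout=60):
    """Run one test module in a child interpreter; a hard crash (segfault,
    driver lock-up) then kills only that test, not the whole run."""
    try:
        proc = subprocess.run(
            [sys.executable, "-m", test_module],
            stdin=subprocess.DEVNULL, capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "hung", ""
    if proc.returncode < 0:  # negative on POSIX: terminated by a signal, e.g. SIGSEGV
        return "crashed", proc.stderr.decode(errors="replace")
    return ("passed" if proc.returncode == 0 else "failed",
            proc.stderr.decode(errors="replace"))
```

The parent can log "crashed" or "hung" into the structured report and simply move on to the next test, which would also help with the partial-run problem mentioned earlier in the thread.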

Winston Wolff

Jun 23, 2012, 2:58:15 PM
to pyglet...@googlegroups.com
I was thinking along the same lines--both testing pixels for specific colors and having tests be in a separate process so they can be killed.

-ww

greenmoss

Jun 23, 2012, 7:20:24 PM
to pyglet...@googlegroups.com
So regarding pixel testing, are you referring to true "ignorant" pixel testing (e.g. reading frame-buffer values)? I was recently talking to a friend who is a very good QA tester. He has done pixel-level UI testing and says it has certain inherent difficulties that are not obvious at first glance, such as the RGB values for *all* pixels in the frame buffer changing even though the test inputs are identical and the results look identical to human eyes. FWIW, I asked him if there was any free way to do this kind of testing, and he pointed out "T-Plan Robot" (I'm unassociated with the project): http://sourceforge.net/projects/tplanrobot/.

Then again, if you're reading OpenGL memory instead, that would sidestep these kinds of problems entirely. If this is so, I apologize for interjecting an ignorant comment :)

Richard Jones

Jun 23, 2012, 7:24:03 PM
to pyglet...@googlegroups.com
On 24 June 2012 09:20, greenmoss <kyo...@gmail.com> wrote:
> So regarding pixel testing, are you referring to true "ignorant" pixel
> testing (eg reading frame buffer values)? I was recently talking to a friend
> who is a very good QA tester. He has done pixel-level UI testing, and says
> it has certain inherent difficulties which are not at first glance obvious.

pyglet initially did have this kind of testing, but it was removed
because of the reasons you indicate, and the lack of time or
motivation needed to produce a tool that could do the testing.


Richard

Jonathan Hartley

Jun 28, 2012, 1:08:10 PM
to pyglet...@googlegroups.com

Thanks - that's interesting to hear.

I was envisioning naively grabbing the screen buffer and testing for approximately colored pixels. I can see that anti-aliasing will result in many unexpected colors being present, which I could believe aren't tightly controlled by the specification, but I am surprised to hear that it's not easy to reproducibly control the color of unaliased drawing.

You may be right that it's more difficult than I'm imagining, then.

Richard Jones

Jun 28, 2012, 9:48:08 PM
to pyglet...@googlegroups.com
On 29 June 2012 03:08, Jonathan Hartley <tar...@tartley.com> wrote:
> I was envisioning naively grabbing the screen buffer, and testing for
> approximate colored pixels. I can see that anti-aliasing will result in many
> unexpected colors being present, which I could believe aren't tightly
> controlled by the specification, but I am surprised to hear that it's not
> easy to reproducibly control the color of unaliased drawing.

Most of the image-based tests would involve texturing or blending in
some way. On a given device running a specific driver version it's
more likely to be consistent. Comparing between devices or driver
versions is definitely out.
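On a single device/driver combination, one workable approach is comparing a capture against a reference image taken on that same machine, with a small per-channel tolerance to absorb blending and texturing wobble. This is a sketch of the general technique, not anything pyglet shipped:

```python
def images_match(expected, actual, tolerance=8):
    """Compare two same-sized pixel buffers (flat lists of (r, g, b) tuples),
    allowing each channel to differ by up to `tolerance`. The reference must
    come from the same device and driver version; cross-device comparison
    is still out."""
    if len(expected) != len(actual):
        return False
    return all(
        abs(e - a) <= tolerance
        for pe, pa in zip(expected, actual)
        for e, a in zip(pe, pa)
    )
```

The tolerance value itself would have to be tuned per test; too tight and blending noise causes false failures, too loose and real regressions slip through.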


Richard