Test Generator

Tom Davis

Aug 11, 2009, 9:26:06 AM
to libc...@googlegroups.com
I had what I perceived as a possible good idea: automated test
generation. Basically, to use the MockHttp tests at this stage you
mostly just copy and paste stuff, e.g. make a request, copy the
output, write a def. What if you could just say up front, "for every
command which returns a response, capture the response and generate a
skeleton test case." Then, whenever you're ready, you can access the
generated code which will contain your mock test cases.

This would have the effect of allowing you to test live online and get
free offline tests (or most of them) for your troubles, automatically.
Given that we already have base classes for drivers and responses,
this would be a seamless and backwards compatible addition. Example:

>>> driver = MyNodeDriver(...)
>>> driver.track_actions()
>>> driver.list_nodes()
[<FooNode ...>]
>>> driver.generate_tests(cls=True)
class FooMockHttp(MockHttp):
def list_nodes(self, ...):
# unparsed body from response,
# status code, etc.
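
To flesh that out a little, here's a rough sketch of how the capture half might work; none of this exists yet, and every name in it (ResponseRecorder, record, generate_tests, the path-based method naming) is just a placeholder:

# Hypothetical sketch: record live responses, then emit skeleton MockHttp
# methods. The handler signature and return shape are assumptions about
# what the mock tests expect, not the real libcloud API.
class ResponseRecorder(object):

    def __init__(self):
        self.captured = []

    def record(self, method, path, status, headers, body):
        # Called wherever the driver actually issues its HTTP request.
        self.captured.append((method, path, status, headers, body))

    def generate_tests(self, class_name='FooMockHttp'):
        # Emit a skeleton MockHttp subclass as a string, naming each
        # method after the request path it was captured from.
        lines = ['class %s(MockHttp):' % class_name]
        for method, path, status, headers, body in self.captured:
            name = '_' + path.strip('/').replace('/', '_').replace('.', '_')
            lines.append('    def %s(self, method, url, body, headers):' % name)
            lines.append('        # captured from a live %s %s' % (method, path))
            lines.append('        body = %r' % body)
            lines.append('        return (%d, body, %r, "")' % (status, headers))
            lines.append('')
        return '\n'.join(lines)

Then something like recorder.record('GET', '/nodes.xml', 200, {}, '<nodes/>') followed by printing recorder.generate_tests() would spit out a stub you could paste into the test module and tweak.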

Well, you get the idea. This would of course support multiple
responses so you could feed bad data to test errors, too. There might
be some minor edits to make to the generated code, but the main goal
would be to drastically reduce test creation effort. Thoughts?

Alex Polvi

Aug 11, 2009, 1:03:22 PM
to libc...@googlegroups.com
Tom,

On Tue, Aug 11, 2009 at 6:26 AM, Tom Davis <t...@dislocatedday.com> wrote:
>
> I had what I perceived as a possible good idea: automated test
> generation. Basically, to use the MockHttp tests at this stage you
> mostly just copy and paste stuff, e.g. make a request, copy the
> output, write a def. What if you could just say up front, "for every
> command which returns a response, capture the response and generate a
> skeleton test case." Then, whenever you're ready, you can access the
> generated code which will contain your mock test cases.
>
> This would have the effect of allowing you to test live online and get
> free offline tests (or most of them) for your troubles, automatically.
> Given that we already have base classes for drivers and responses,
> this would be a seamless and backwards compatible addition. Example:


This definitely sounds sweet. The place where I have the most trouble
right now is getting different responses from the same function. On
Slicehost, for example, you GET and POST on /slices.xml for various
things, so there is a lot of conditional logic around what responses
to give. Also, the @multipleresponse decorator has not really been
that helpful, as tests easily get broken and out of sync.
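
For reference, this is the kind of branching I end up writing; the handler name, signature, and return tuple below are assumed from the path-based MockHttp convention, and the bodies are made-up placeholders rather than real Slicehost XML:

import httplib  # Python 2's httplib, matching the codebase at the time

LIST_SLICES_BODY = '<slices><slice><id>1</id></slice></slices>'
CREATE_SLICE_BODY = '<slice><id>2</id></slice>'

class SlicehostMockHttp(object):  # would subclass the real MockHttp in the tests
    def _slices_xml(self, method, url, body, headers):
        # GET /slices.xml lists slices, POST /slices.xml creates one, so
        # a single handler has to branch on the HTTP method.
        if method == 'GET':
            return (httplib.OK, LIST_SLICES_BODY, {},
                    httplib.responses[httplib.OK])
        return (httplib.CREATED, CREATE_SLICE_BODY, {},
                httplib.responses[httplib.CREATED])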

Anyway, if you can pull off something that just sniffs and writes to a
file or something with the correct response status, text and headers,
that'd be awesome!
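
Even something dumb that dumps each captured response to a couple of files would be a start, e.g. (a made-up helper, not anything in the tree):

import json
import os

def save_fixture(dirname, name, status, headers, body):
    # Write the raw body plus a small JSON sidecar with status and headers,
    # so a generator (or a human) can turn it into a mock later.
    if not os.path.isdir(dirname):
        os.makedirs(dirname)
    open(os.path.join(dirname, name + '.body'), 'w').write(body)
    meta = {'status': status, 'headers': dict(headers)}
    open(os.path.join(dirname, name + '.meta.json'), 'w').write(
        json.dumps(meta, indent=2))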

-Alex

--
co-founder, cloudkick.com
twitter.com/cloudkick
541 231 0624

Tom Davis

Aug 11, 2009, 3:31:46 PM
to libc...@googlegroups.com
> Also, the @multipleresponse decorator has not really been
> that helpful, as tests easily get broken and out of sync.

How so? Even if a test fails, it should still advance to the next
response. I guess maybe I don't understand the issue?

Alex Polvi

Aug 11, 2009, 4:05:25 PM
to libc...@googlegroups.com
On Tue, Aug 11, 2009 at 12:31 PM, Tom Davis <t...@dislocatedday.com> wrote:
>
>> Also, the @multipleresponse decorator has not really been
>> that helpful, as tests easily get broken and out of sync.
>
> How so? Even if a test fails, it should still advance to the next
> response. I guess maybe I don't understand the issue?

For instance, say I run test_foo, which calls list_nodes twice, and then
test_bar, which calls list_nodes once. If I run test_foo or test_bar by
itself instead, the expected responses get screwed up, because the counters
start from different numbers than they would in a full run.
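
A toy version of the counter shows what I mean (this is not the actual decorator code, just the same effect):

# Stand-in for the shared counter behind @multipleresponse: which canned
# response a test sees depends on how many calls earlier tests already made.
RESPONSES = {'list_nodes': ['first.xml', 'second.xml', 'third.xml']}
COUNTERS = {'list_nodes': 0}

def next_response(name):
    body = RESPONSES[name][COUNTERS[name] % len(RESPONSES[name])]
    COUNTERS[name] += 1
    return body

def test_foo():
    next_response('list_nodes')   # first.xml
    next_response('list_nodes')   # second.xml

def test_bar():
    # After test_foo this returns third.xml; run on its own it returns
    # first.xml, so assertions written for one ordering break in the other.
    # Resetting COUNTERS in each test's setUp would make them independent.
    next_response('list_nodes')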

-Alex