Example of report


jonathan

Oct 7, 2009, 5:27:18 PM
to django-lean
Hi, I am trying to figure out if this is something that I could use.
But I am not sure that I can get my head around the statistical stuff.
Could you make an example of a report available? Basically just so
that I can see if the reports contain any information that I would be
able to make use of.

If this is something that I could use, I would need to port it to
Google App Engine, which probably just involves a different set of
models and a way to run the jobs/store the results via a GAE cron job.

thanks
j

Erik Wright

Oct 7, 2009, 7:39:48 PM
to djang...@googlegroups.com, django-lean
There is a link on the wiki to a slideshare deck which includes one
screenshot from a conversion experiment report. I'm on my phone right
now but tomorrow I'll post the link along with another shot of the
engagement experiment reports.

Can you tell us more about the experiment you want to run? If so I
could give you some ideas on whether or not this would work.

-erik

jonathan

Oct 8, 2009, 1:32:30 AM
to django-lean
I want to do some experiments (5-10) around my static landing pages,
testing how to make users more likely to sign up. I really like the
idea of the A/B testing and the lean startup idea and being driven by
quantifiable results. So this will be for anonymous users.

j

On Oct 8, 10:39 am, Erik Wright <e...@erikwright.com> wrote:
> There is a link on the wiki to a slideshare deck which includes one  
> screenshot from a conversion experiment report. I'm on my phone right
> now but tomorrow I'll post the link along with another shot of the  
> engagement experiment reports.
>
> Can you tell us more about the experiment you want to run? If so I  
> could give you some ideas on whether or not this would work.
>
> -erik
>

Erik Wright

Oct 8, 2009, 9:56:23 AM
to django-lean
Hi Jonathan,

django-lean can definitely do that, and is well suited if your static
landing pages are already rendered by Django.

An example report is shown on slide 11 of this prez:
http://www.slideshare.net/erikwright/djangolean-akohas-opensource-ab-experimentation-framework-montreal-python-9

In addition to that report, a similar report is shown per day of the
experiment.

Cheers,

Erik

jonathan

Oct 12, 2009, 6:46:06 PM
to django-lean
I guess my question is more about: how do I make decisions based on
the experiment results? I can't really tell from the example report
how I would make a decision. Do the results say (to someone like me)
"this experiment was a success/failure"? or are they more subtle?

j

On Oct 9, 12:56 am, Erik Wright <e...@erikwright.com> wrote:
> Hi Jonathan,
>
> django-lean can definitely do that, and is well suited if your static
> landing pages are already rendered by Django.
>
> An example report is shown on slide 11 of this prez:http://www.slideshare.net/erikwright/djangolean-akohas-opensource-ab-...

Jeremy Dunck

Oct 13, 2009, 7:52:05 PM
to djang...@googlegroups.com
Jonathan, see below.

On Mon, Oct 12, 2009 at 5:46 PM, jonathan <jrick...@gmail.com> wrote:
>
> I guess my question is more about: how do I make decisions based on
> the experiment results? I can't really tell from the example report
> how I would make a decision. Do the results say (to someone like me)
> "this experiment was a success/failure"? or are they more subtle?
>
> j
>
> On Oct 9, 12:56 am, Erik Wright <e...@erikwright.com> wrote:

...


>> An example report is shown on slide 11 of this prez:http://www.slideshare.net/erikwright/djangolean-akohas-opensource-ab-...
>>
>> In addition to that report, a similar report is shown per day of the
>> experiment.

You may need to take the time to "get your head around the
statistics". Quantitative results without understanding what the
numbers mean can be dangerous or misleading.

In a report, as shown in his slide 11, for a given experiment, there
are two numbers that need to factor into your decision: improvement
and confidence. Improvement is the observed difference in the
measured metric. Confidence is how certain you can be that the
observed difference is not down to random chance. Basically, for
small differences, you need more observed data to be as confident as
you would be in a large observed difference.

For a thorough introduction to the math, see here:
http://elem.com/~btilly/effective-ab-testing/
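To make the improvement/confidence pair concrete, here is a minimal Python sketch (not django-lean's actual code; the function name and structure are invented for illustration) that computes both numbers from raw visitor and conversion counts, using a two-tailed chi-square test with one degree of freedom on the 2x2 conversion table:

```python
import math

def chi_square_confidence(control_visitors, control_conversions,
                          test_visitors, test_conversions):
    """Return (improvement, confidence) for a 2x2 conversion table.

    improvement is the relative change in conversion rate;
    confidence is 1 minus the two-tailed chi-square p-value (1 d.f.).
    """
    p_control = control_conversions / control_visitors
    p_test = test_conversions / test_visitors

    # Pooled conversion rate under the null hypothesis (no difference).
    total = control_visitors + test_visitors
    pooled = (control_conversions + test_conversions) / total

    # Chi-square statistic: sum of (observed - expected)^2 / expected
    # over all four cells (converted / not converted, per group).
    chi2 = 0.0
    for visitors, conversions in ((control_visitors, control_conversions),
                                  (test_visitors, test_conversions)):
        for observed, expected in ((conversions, visitors * pooled),
                                   (visitors - conversions,
                                    visitors * (1 - pooled))):
            chi2 += (observed - expected) ** 2 / expected

    # Survival function of chi-square with 1 d.f. is erfc(sqrt(x/2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    improvement = (p_test - p_control) / p_control
    return improvement, 1 - p_value
```

For example, with 1000 visitors per group and 100 vs. 130 conversions, this reports a 30% improvement at roughly 96% confidence, which clears the usual 95% bar Jeremy alludes to.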

jonathan

Oct 13, 2009, 11:19:30 PM
to django-lean
I read that, and to be honest a lot of the math went over my head.

The linked presentation has 'x's where the interesting stuff is, and I
wasn't sure what it would actually look like. I guess I will just give
it a go when I get the chance and see what the reports look like.

thanks guys
Jonathan

Erik Wright

Oct 14, 2009, 12:45:59 AM
to djang...@googlegroups.com, django-lean
Hi,

The Xs are there to hide confidential information. The screenshot is
from an actual experiment run by Akoha.

The report shows the number of users in each test / control group.

For each conversion goal you will see:

1) the absolute number of users who hit it one or more times, per
test/control group
2) that number, as a percentage of the number of users in the group
3) the ratio between the performance of the test and control groups
4) the likelihood that the difference is "significant" as calculated
using a two-tailed chi-square test.

You will also see those values for "any" goal (the count of users having
achieved at least one goal one or more times).

Those numbers are shown for every day the experiment has been active.

With regards to confidence, the simple explanation is that 95% is
good. But the number should only be trusted once you have at least 10
conversions in both your test and control groups. At some point the
tool itself should be modified to reflect that.
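The four quantities Erik lists, plus the ten-conversion minimum he suggests, could be assembled into one report row along these lines. This is a hypothetical sketch for illustration only; the function name, dict keys, and layout are invented and are not django-lean's actual report code:

```python
def report_row(goal, control_visitors, control_hits,
               test_visitors, test_hits):
    """Build one report row from raw counts for a single conversion goal.

    control_hits / test_hits are the number of users who hit the goal
    one or more times in each group.
    """
    # (2) hits as a percentage of the users in each group.
    control_pct = 100.0 * control_hits / control_visitors
    test_pct = 100.0 * test_hits / test_visitors

    # (3) ratio between test and control performance.
    ratio = test_pct / control_pct if control_pct else float("inf")

    # Erik's rule of thumb: only trust the confidence number once both
    # groups have at least 10 conversions.
    trustworthy = control_hits >= 10 and test_hits >= 10

    return {
        "goal": goal,
        "control": (control_hits, round(control_pct, 2)),  # (1) and (2)
        "test": (test_hits, round(test_pct, 2)),
        "ratio": round(ratio, 3),
        "trustworthy": trustworthy,
    }
```

The confidence column itself (item 4) would come from the two-tailed chi-square test Erik mentions; here only the counting and the trust guard are shown.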

Hope that helps,

Erik
