Each user who is exposed to a given experiment is enrolled in a group
for that experiment independently of any other experiments.
In other words, if you have two experiments, A and B, the control
group of B should be equally divided amongst control and test users in
A (assuming that B is equally exposed to both groups).
On the other hand, if, for example, B is only visible to users in
Control(A), or if it is harder to find B when a user is in Test(A),
the population of B could be drawn exclusively (or predominantly)
from Control(A).
If you had enough traffic, you could reduce the impact of that kind of
cross-experiment bias by running each test on only a small subset of
users. You would need to build that feature, however.
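In case it helps to picture the mechanics, here is a minimal sketch of
independent, per-experiment bucketing with a configurable exposure
fraction. The names here (assign, exposure) are mine, purely for
illustration; this is not how Django-Lean actually assigns users.

    import hashlib

    def assign(user_id, experiment, exposure=1.0):
        # Hashing (user_id, experiment) together makes each experiment's
        # assignment effectively independent of every other experiment's.
        # `exposure` is the fraction of users who get enrolled at all.
        digest = hashlib.md5(f"{user_id}:{experiment}".encode()).hexdigest()
        bucket = (int(digest, 16) % 10000) / 10000.0  # roughly uniform in [0, 1)
        if bucket >= exposure:
            return None  # user never sees this experiment
        return "test" if bucket < exposure / 2 else "control"

    # Enroll only 10% of users in experiment "B":
    print(assign("user-42", "B", exposure=0.1))

Because the hash depends on both the user and the experiment name, the
test/control split for B stays statistically balanced across Control(A)
and Test(A), provided B is equally exposed to both.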
Assuming that your conversion rate is 5% (the minimum to be above the
10 user threshold I mentioned), you have 10 conversions in your
Control group and ~15 in your Test group. I think it's likely that
these numbers are correct, and that your users were indeed assigned to
(and exposed to) the appropriate groups. That said, my immediate
impression is that these numbers are too low to be significant at 98%
confidence. If there is a bug, it is most likely in the confidence
calculation.
I did a quick check using this site:
http://abtester.com/calculator/
I entered 10 conversions out of 198 for Control and 16 out of 209 for
Test, and it gave me a confidence of 86%. I don't know why that differs
from the confidence reported by Django-Lean. Perhaps it uses a
different formula, or perhaps there is a bug in Django-Lean.
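For what it's worth, a one-sided two-proportion z-test on those same
counts comes out to roughly 86%, so that may be what the calculator is
doing; that is only a guess on my part. A quick sketch of that
calculation:

    from math import sqrt
    from statistics import NormalDist

    def one_sided_confidence(conv_a, total_a, conv_b, total_b):
        # Confidence that B's conversion rate exceeds A's, using a
        # pooled two-proportion z-test (one-sided).
        p_a, p_b = conv_a / total_a, conv_b / total_b
        pooled = (conv_a + conv_b) / (total_a + total_b)
        se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
        return NormalDist().cdf((p_b - p_a) / se)

    # 10/198 (Control) vs 16/209 (Test) -> about 0.86
    print(one_sided_confidence(10, 198, 16, 209))

A two-sided test on the same numbers gives a noticeably lower
confidence (around 72%), which is one more reason to suspect the
discrepancy comes down to which formula each tool uses.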
-e