Combining Odds Ratios


Martin P. Holt

Jun 7, 2005, 2:18:05 PM
to MedS...@googlegroups.com
I'd appreciate some help with this. It feels as if it should be
simple..........

In a population, 3 polymorphisms (A,B,C) have odds ratios (a,b,c
respectively) expressing their risk of causing a disease.

How would you go about combining the odds ratios to get one overall measure
of risk ? My thoughts so far have been to convert the individual odds ratios
to individual probabilities, sum these, and then convert back to an odds
ratio, but (a) is that OK ?, (b) is there a way of doing it on the odds
ratios themselves ?

TIA,
Martin

Ted Harding

Jun 7, 2005, 4:25:50 PM
to MedS...@googlegroups.com
On 07-Jun-05 Martin P. Holt wrote:
>
> I'd appreciate some help with this. It feels as if it should be
> simple..........

It isn't!

> In a population, 3 polymorphisms (A,B,C) have odds ratios (a,b,c
> respectively) expressing their risk of causing a disease.

> How would you go about combining the odds ratios to get one overall
> measure of risk ? My thoughts so far have been to convert the
> individual odds ratios to individual probabilities,

You can't!

> sum these,

Not strictly so -- see below!

> and then convert back to an odds ratio,

This you could do, provided you could get this far.

> but (a) is that OK ?,

No!

> (b) is there a way of doing it on the odds ratios themselves ?

No!

1. Only if the probabilities are small can you get *approximate*
risk ratios from odds ratios: say for polymorphism A

Ra = (pa1/(1 - pa1))/(pa0/(1 - pa0))

where Ra is the odds ratio for "A", and pa1 is the probability
of disease if A is present, pa0 the probability if A is absent.

You have one equation in two unknowns, so no solution. However,
if pa1 and pa0 are both small, then you can approximate by
putting (1 - pa0) = (1 - pa1) = 1, and then you get the
approximate result

Ra = pa1/pa0

which is the risk ratio. You still can't get an absolute
probability out of this without further information, such
as the prevalence of the disease along with data on the
frequency of A.
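Ted's point 1 is easy to check numerically. A minimal Python sketch (the probabilities pa0 and pa1 are made-up values, chosen only for illustration) shows the odds ratio tracking the risk ratio closely when the risks are small, and diverging as they grow:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def odds_ratio(pa1, pa0):
    """Odds ratio for disease with vs without the polymorphism."""
    return odds(pa1) / odds(pa0)

def risk_ratio(pa1, pa0):
    """Risk (probability) ratio for the same comparison."""
    return pa1 / pa0

# With a true risk ratio of 2, the OR stays near 2 only for small risks:
for pa0 in (0.01, 0.05, 0.20):
    pa1 = 2 * pa0
    print(f"pa0={pa0}: RR={risk_ratio(pa1, pa0)}, OR={odds_ratio(pa1, pa0):.3f}")
```

At pa0 = 0.01 the OR is about 2.02; at pa0 = 0.20 it is already about 2.67, so Ra ≈ pa1/pa0 is only a safe reading for rare outcomes.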

2. There is an implicit assumption in what you say, that
the effects of A, B and C are independent of each other,
and also that the incidences of A, B and C are independent
(e.g. if possessing A and B always implied possessing C then
the information about C would add nothing to what you can
say about a person who has A and B).

The independence of effect means that the proportion of
B's in the whole population who have the disease is the
same as the proportion who have the disease among those
who are both A's and B's, and so on for all subsets of (A,B,C).

Assuming independence, the correct calculation in the first
place is that (e.g. amongst AB's)

P(AB not diseased) = P(A not diseased)*P(B not diseased)

(i.e. the individual must remain unscathed by both assaults),

so

1 - P(AB diseased) = (1 - P(A diseased))*(1 - P(B diseased))

Now, again if the probabilities of disease are small, you can
approximate by neglecting P(A diseased)*P(B diseased) and get

pa1b1 = P(AB diseased) ~= P(A diseased) + P(B diseased)

to that degree of approximation.
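The escape-probability argument in point 2 can be sketched directly; the probabilities below are arbitrary small values, chosen only to show how close the additive approximation comes:

```python
def combined_risk(pa, pb):
    """Exact combined disease probability under independence of effects:
    the individual must escape both assaults."""
    return 1 - (1 - pa) * (1 - pb)

def combined_risk_approx(pa, pb):
    """Small-probability approximation: neglect the pa*pb cross term."""
    return pa + pb

pa, pb = 0.01, 0.02
print(combined_risk(pa, pb))         # exact: ~0.0298
print(combined_risk_approx(pa, pb))  # approximate: ~0.03
```

The exact value is always slightly below the sum, since the cross term pa*pb is subtracted; for small risks the difference is negligible.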

3. If you could get this far, then (in this example)

Rab = (pa1b1/(1 - pa1b1))/(pa0b0/(1 - pa0b0))

as the odds ratio for the disease when both A and B are present
relative to cases where A and B are both absent; and so on for
other combinations.

4. But your fundamental problem is that -- even if you assume
odds ratios can be taken as risk ratios -- you can't get to
the absolute probabilities. I don't think you can combine the
information in risk ratios in the way you want without also
having prevalence/incidence information as well.

However, if you do have such information (and also make the
other assumptions about independence) then you can get the
absolute probabilities straightforwardly, and simply use
these to get your desired odds ratios.

If you can't assume independence, then it gets more complicated!
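Putting points 1-4 together: given a baseline probability p0 (e.g. derived from prevalence data) and the individual odds ratios, one can recover the absolute probabilities exactly, combine them under the independence-of-effects model above, and form the combined odds ratio. A sketch under those assumptions, with all numbers invented for illustration:

```python
def odds(p):
    """Probability to odds."""
    return p / (1 - p)

def prob_from_odds(o):
    """Odds back to probability."""
    return o / (1 + o)

def prob_given_or(or_x, p0):
    """Absolute disease probability for carriers of one polymorphism,
    recovered exactly from its odds ratio and the baseline risk p0."""
    return prob_from_odds(or_x * odds(p0))

def combined_or(ors, p0):
    """Combined odds ratio for carrying all the polymorphisms at once,
    assuming independence of effects (escape probabilities multiply)."""
    escape = 1.0
    for o in ors:
        escape *= 1 - prob_given_or(o, p0)
    p_all = 1 - escape
    return odds(p_all) / odds(p0)

# Hypothetical: baseline risk 1%, individual odds ratios 2 and 3.
print(combined_or([2.0, 3.0], 0.01))  # ~5.06, not the naive product 2*3 = 6
```

Note that even at a 1% baseline the combined OR falls noticeably short of the naive product of the individual ORs.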

Hoping this helps, and all best wishes,
Ted.


--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.H...@nessie.mcc.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 07-Jun-05 Time: 20:44:52
------------------------------ XFMail ------------------------------

John Whittington

Jun 8, 2005, 9:45:18 AM
to MedS...@googlegroups.com
At 21:25 07/06/05 +0100, Ted Harding wrote (in small part):

> > I'd appreciate some help with this. It feels as if it should be
> > simple..........
>
>It isn't!
>
> > In a population, 3 polymorphisms (A,B,C) have odds ratios (a,b,c
> > respectively) expressing their risk of causing a disease.
>
>[very big snip]
>2. There is an implicit assumption in what you say, that
>the effects of A, B and C are independent of each other,
>and also that the incidences of A, B and C are independent

Whilst I agree that Martin's question is far less trivial than he probably
thought, and while I'm sure Martin will address this issue himself in due
course, I suspect that the potential complication of non-independence is
less than you might fear. Since Martin speaks of polymorphism (presumably
of one thing), I would presume that A, B and C are 'mutually
exclusive'. What only Martin can tell us is whether or not (A,B,C) is an
'exhaustive' list of possibilities (i.e. the probabilities of each of them
existing sum to unity) - if that were the case, then it would obviously be
true that, for example, the ABSENCE of A and B meant that C was
present. However, I rather suspect that may not be the case.

... just my initial thoughts!

Kind Regards,


John

----------------------------------------------------------------
Dr John Whittington, Voice: +44 (0) 1296 730225
Mediscience Services Fax: +44 (0) 1296 738893
Twyford Manor, Twyford, E-mail: Joh...@mediscience.co.uk
Buckingham MK18 4EL, UK medis...@compuserve.com
----------------------------------------------------------------

Martin P. Holt

Jun 8, 2005, 12:07:44 PM
to MedS...@googlegroups.com
In my example I intended A, B and C to be independent, both in terms of
effect and in terms of incidence. For a truly exhaustive list one would
have to include the wild type, but that would not be expected to increase
the risk of disease. So A, B and C are exhaustive in the sense that they
are the variants that increase the risk of disease, each risk being
measured by a (log) odds ratio.
When I posted the original posting, and talked about converting to
probabilities, it was _odds_ that I had in mind, rather than _odds ratios_.
Ted has put me right on that one.
It is frustrating that such a common measure of risk (odds ratio) should
prove to be so intractable ! I know odds ratios are used in logistic
modelling, where their form lends them naturally to this, but what other
benefits are there in using odds ratios rather than, say, relative risk ? (I
know one can approximate relative risk from an odds ratio if the risk is
small).
There are also the risk ratio, absolute difference in risks (ARD) or
absolute risk reduction, number needed to treat (NNT), relative risk
reduction... any more measures ?

If one knows the _patient_expected_event_rate (PEER), one can convert ORs
into NNTs, hence ARDs. This is helpful. PEER is the estimate of a particular
patient's risk of disease over a period of time (obtained from research).

Best Regards,
Martin

John Whittington

Jun 8, 2005, 1:59:19 PM
to MedS...@googlegroups.com
At 17:07 08/06/05 +0100, Martin P. Holt wrote:

>In my example I intended A,B and C to be independent, both in terms of
>effect and in terms of incidence. For a mutually exhaustive list one would
>have to include the wild type, but this would not be expected to increase
>the risk of disease. So A,B and C are mutually exhaustive in the sense
>that each increases the risk of disease, and this is measured by a (log)
>odds ratio.

I take it that you are not working from 'raw data', since if you were you
could presumably calculate the odds ratio for (A OR B OR C) just as easily
as for A, B and C separately.

>When I posted the original posting, and talked about converting to
>probabilities, it was _odds_ that I had in mind, rather than _odds ratios_.

Yes, I rather assumed that :-)

>It is frustrating that such a common measure of risk (odds ratio) should
>prove to be so intractable ! I know odds ratios are used in logistic
>modelling where their form lends them naturally to this, but what other
>benefits are their in using odds ratios rather than, say, relative risk ?

The answer to that, surely, is that it is not an 'either/or' situation in
the manner you imply. Whether one can calculate an odds ratio (OR) or
relative risk (RR) depends on what data one has (or, more specifically,
'how subjects were selected') - and if one only has enough information to
calculate one, then one will not have the information required to calculate
the other.

More specifically, one can estimate RR if subjects are selected on the
basis of 'characteristics' and one then examines their outcomes (e.g. a
prospective cohort study), and one can estimate OR if subjects are selected
on the basis of outcome and one then examines 'groups' (by 'characteristic')
within those subjects (e.g. a case-control study). However, without
additional information (e.g. prevalence of the outcome in the general
population, or assumptions about that prevalence [e.g. that it is small]
which allow approximations) one cannot determine an OR in the former case
or an RR in the latter case.
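This distinction can be made concrete with a toy 2x2 table (all counts invented). The RR needs meaningful row totals, as a cohort design provides; the OR depends only on the cross-product, which is why it survives case-control sampling:

```python
# Hypothetical counts:          diseased   healthy
#   with polymorphism            a = 30     b = 70
#   without polymorphism         c = 10     d = 90
a, b, c, d = 30, 70, 10, 90

# Relative risk: ratio of row-wise disease probabilities (cohort design).
rr = (a / (a + b)) / (c / (c + d))

# Odds ratio: cross-product ratio, equally estimable from case-control data.
or_ = (a * d) / (b * c)

print(rr)   # ~3.0
print(or_)  # ~3.857
```

Here the OR (about 3.86) noticeably overstates the RR (3.0), precisely because the outcome is not rare (30% among the exposed).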

The frustration you are experiencing therefore results from a lack of
adequate information, not from the numeric indices calculated from it.

>(I know one can approximate relative risk from an odds ratio if the risk
>is small).

Indeed - that is the approximation to which I have just referred.