Many medical imaging examinations involve exposure to ionizing radiation. The dose in these examinations is very small, so small that the health risk associated with such low levels of exposure is frequently debated in scientific meetings. Nonetheless, the prevailing scientific view is that a finite (though small) risk is involved with such exposures. The risk increases with the amount of exposure, with repeated exposures, and with younger patient age. This material aims to provide a brief overview of the risk associated with medical imaging examinations that involve ionizing radiation.
The amount of radiation required to produce these deterministic effects has been derived from experimental cell-culture studies, animal studies, and human epidemiology studies. From these studies, dose thresholds have been established at which the effect is observed in 1% of a population (see Table 1). That is, these values are the amounts of radiation energy absorbed by the tissue at which, if 100 people were exposed to this level of radiation, one individual would be expected to experience the effect. The unit used for absorbed radiation dose in Table 1 is the gray (Gy), the standard international (SI) unit for absorbed radiation energy per unit mass of tissue (1 Gy = 1 J/kg). We will see later that this unit must be converted to another unit to understand the stochastic effects (i.e., genetic and cancer effects) of radiation.
Cancer induction is arguably the most important and the most feared radiation effect. Since the discovery of ionizing radiation, there has been documented evidence of radiation-induced cancer in animal and human studies. The initial human experiences all involved high radiation doses, received by people working with radiation or using it without knowledge of its potential harm. In addition, long-term follow-up studies of the Japanese survivors of the atomic bomb attacks on Hiroshima and Nagasaki, and of the early medical uses of radiation in treatment and diagnostic studies, have shown increased cancer incidence in the exposed populations.
All radiation effects have a latency period between the time of exposure and the onset of the effect, as seen with deterministic effects in Table 1. For cancer induction, the latency period is on the order of years, with leukemia having the shortest latency period (5 to 15 years) and solid tumors having the longest latency period (10 to 60 years). Therefore, it is very difficult to prove that a cancer is directly related to earlier radiation exposure, because other factors encountered during the latency period may be the actual cause of the cancer. This is particularly true when the exposures are at low radiation levels such as those received in diagnostic radiology and cardiology studies.
At low radiation exposure levels, no study to date has been comprehensive enough to demonstrate stochastic effects conclusively. But as stated above, at very high exposure levels there are good data demonstrating the induction of cancer. The risk of cancer induction at low exposure levels must therefore be extrapolated from the high-exposure data, and this is where most of the controversy concerning radiation effects exists. The most conservative estimation of risk assumes the effects of low radiation exposure are a simply scaled version of the high-exposure results (i.e., a linear, or straight-line, extrapolation from the high- to the low-exposure results). Most groups that monitor and analyze radiation exposures use this linear extrapolation model to estimate cancer induction from radiation.
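The linear extrapolation described above can be sketched in a few lines of code. The risk coefficient used here is only illustrative (it is of the same order as the roughly 5%-per-sievert lifetime figure often quoted for a general population) and should not be read as an authoritative value:

```python
# Sketch of the linear (no-threshold) extrapolation used to estimate
# stochastic risk at low doses. RISK_PER_SV is an illustrative
# assumption, not an authoritative risk coefficient.

RISK_PER_SV = 0.05  # illustrative lifetime cancer-risk coefficient (per Sv)

def linear_excess_risk(effective_dose_sv: float) -> float:
    """Excess lifetime cancer risk under a linear extrapolation."""
    return RISK_PER_SV * effective_dose_sv

# A hypothetical 10 mSv exam scales the high-dose data down linearly:
risk = linear_excess_risk(0.010)   # 10 mSv = 0.010 Sv
print(f"{risk:.4%}")               # 0.0500% excess risk
```

The key property of the linear model is visible here: halving the dose exactly halves the estimated risk, with no threshold below which the risk is assumed to vanish.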
Currently, two models are used to assess the risk of stochastic effects from radiation exposure: the absolute and relative risk models.
Absolute risk is defined as the probability that a person who is disease free at a specific age will develop the disease at a later time following exposure to a risk factor, e.g. the probability of cancer induction following exposure to radiation.
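The distinction between the two models can be sketched numerically. All rates and coefficients below are hypothetical placeholders chosen only to show the structural difference: an absolute (additive) model adds a radiation-induced excess that is independent of the baseline rate, while a relative (multiplicative) model scales the baseline:

```python
# Sketch contrasting the absolute and relative risk models.
# Baseline rates and coefficients are hypothetical placeholders.

def absolute_model(baseline: float, excess_abs: float) -> float:
    """Additive model: excess is independent of the baseline rate."""
    return baseline + excess_abs

def relative_model(baseline: float, err: float) -> float:
    """Multiplicative model: excess scales with the baseline rate."""
    return baseline * (1.0 + err)

b = 0.002   # hypothetical baseline annual cancer rate
print(absolute_model(b, 0.0005), relative_model(b, 0.25))
```

For this particular baseline the two models happen to agree, but for a population with a different baseline rate they diverge, which is why the choice of model matters when projecting risk across populations.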
These data also demonstrate that you cannot simply use the average relative risk shown in Table 2 to estimate the increased incidence of cancer due to radiation exposure. To do this analysis correctly, you need to take into consideration the ages of all individuals in the irradiated group.
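The age-weighting point can be made concrete with a short sketch. The age-specific coefficients and age distributions below are hypothetical placeholders, not values from Table 2; the point is only that the same exposure yields very different population-average risks depending on who is exposed:

```python
# Sketch of why an average relative risk cannot be applied directly:
# the excess risk must be weighted by the age distribution of the
# exposed group. All numbers are hypothetical placeholders.

# Hypothetical age-specific excess relative risks per Sv
EXCESS_RR_PER_SV = {"0-19": 2.0, "20-49": 1.0, "50+": 0.4}

def weighted_excess_rr(age_fractions: dict) -> float:
    """Excess relative risk per Sv averaged over an age distribution."""
    assert abs(sum(age_fractions.values()) - 1.0) < 1e-9
    return sum(EXCESS_RR_PER_SV[a] * f for a, f in age_fractions.items())

# A mostly pediatric cohort vs. a mostly elderly cohort:
young = weighted_excess_rr({"0-19": 0.8, "20-49": 0.2, "50+": 0.0})
old = weighted_excess_rr({"0-19": 0.0, "20-49": 0.2, "50+": 0.8})
print(young, old)  # the young cohort's weighted risk is several times higher
```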
The early development of life is a time of rapid cell division and differentiation. Radiation sensitivity is therefore high in the developing embryo/fetus, and protection from radiation needs to be considered differently than for the general public. Table 3 reviews the developmental stages and the deterministic effects that may occur in the embryo/fetus following exposure to different levels of radiation. Similar to what was shown in Table 1, no deterministic effects are found below an absorbed dose of 0.1 Gy, even in the embryo/fetus.
Although deterministic effects are not seen at low dose levels in the embryo/fetus, many studies have shown an increased incidence of cancer (i.e., stochastic effects) in children following in-utero exposure to radiation. Pre-natal radiation exposure resulted in an increased cancer rate in the offspring of the survivors of the atomic bombings of Hiroshima and Nagasaki. Other epidemiological studies have also produced good statistical evidence of an increased cancer rate in children following pre-natal radiation exposure from diagnostic radiology studies. Unfortunately, these epidemiological studies do not provide very good data on the specific absorbed dose received by the fetus or embryo, which limits the ability to accurately characterize the dose-response relationship as has been done for deterministic effects. But since the doses received in these epidemiology studies were in the diagnostic radiology range, they suggest that low levels of radiation exposure to the embryo/fetus do increase the risk of childhood cancer.
In the context of radiation risk, two types of dose quantities are of importance: dose limits and reference levels. Dose limits refer to the maximum dose that members of the general public may receive from sources other than natural background radiation, and that occupational workers may receive in the course of their work. Reference levels reflect the typical dose values expected in the majority of imaging studies. Both quantities can be described in terms of a number of dose-related metrics, including absorbed dose, effective dose, exposure, or any modality-specific dose index (e.g., CTDI for CT imaging).
Limits for exposure to radiation should be at a level below the threshold where deterministic effects occur, i.e. below 0.1 Gy. Furthermore, the limit should exclude exposures from background radiation. Because radiation is around us all of the time from the sun and naturally occurring sources, it would not make sense to try to limit radiation exposure below natural background levels.
As the main concern is stochastic effects, the effective dose, with units of sieverts (Sv) or millisieverts (mSv, 1/1000 of a sievert), is most commonly used. Using the threshold dose as a starting point, dose limits are determined using the Principles of Justification and Optimization.
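The conversion from absorbed dose (Gy) to effective dose (Sv) can be sketched briefly. Each organ's equivalent dose is multiplied by a tissue weighting factor and the results are summed; the weights below follow the ICRP 103 scheme for a few tissues but should be treated as illustrative, and the organ doses are hypothetical. For x-rays the radiation weighting factor is 1, so equivalent dose in Sv is numerically equal to absorbed dose in Gy:

```python
# Minimal sketch of effective dose: a weighted sum of per-organ
# equivalent doses. Weights follow the ICRP 103 style for a few
# tissues (illustrative subset); organ doses are hypothetical.

W_T = {"lung": 0.12, "stomach": 0.12, "thyroid": 0.04, "skin": 0.01}

def effective_dose(organ_dose_gy: dict) -> float:
    """Effective dose in Sv from per-organ absorbed doses in Gy (photons)."""
    return sum(W_T[organ] * d for organ, d in organ_dose_gy.items())

# Hypothetical chest-exam organ doses (Gy):
e = effective_dose({"lung": 0.010, "thyroid": 0.002})
print(f"{e * 1000:.2f} mSv")  # 1.28 mSv
```

The weighted sum is what makes effective dose a stochastic-risk quantity: two exams delivering the same absorbed dose to different organs yield different effective doses.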
It is important to understand that dose limits are not levels of exposure that should be considered acceptable in an occupation or that can be safely received by the general public. Rather, they are maximum limits consistent with the current state of medical practice. In general, the ALARA (as low as reasonably achievable) concept should still be used when developing radiation protection procedures and policies, and dose limits should be thought of as the maximum exposure that should be allowed in any situation. In most states, the ALARA concept requires investigation into dose-reduction methods even if only a fraction of the limit is received by any individual.
There are a number of sources for reference level values; some can be found in the Section on Information Sources and Recommended References and Citations. The American College of Radiology (ACR) Appropriateness Criteria includes a comparative scale of Relative Radiation Level (RRL) values based on effective dose (Table 5), which may be used as reference levels or for simple comparison of exams.
With an understanding of the effects of radiation and the doses for standard examinations, a physician (possibly with the help of a radiologist) can determine which examination provides the most benefit to the patient at the lowest possible dose. To do this, the physician needs to consider the following criteria:
Medical environments are full of technical jargon that the public, and even some personnel in other medical professions, cannot understand or interpret correctly. Technical information must be conveyed in simple, clear terms. In addition, care must be taken to emphasize important ideas so that they do not get lost in the discussion. In general, the following principles should be used when trying to convey technical information to the public:
Often you will see risks compared across unrelated situations. For example, Table 14 shows the odds of dying from accidental death. Note that the lifetime odds of dying from an injury for a person born in 2005 were 1 in 22 (i.e., 4.5%). This suggests the odds of dying from accidental causes are similar to those of later developing cancer from an exposure of 1 Sv (see Table 2). But latency weighs heavily on risk perception, and risks of equal magnitude but different timescales cannot be directly compared. Furthermore, an accidental injury cannot be related to a decision about a medical procedure, where not performing the procedure carries its own associated risk. Herein lies one of the main challenges in communicating risk associated with medical exposures: the exposure involves a finite stochastic risk with a very long latency period, but not doing the procedure would entail another risk with a possibly much shorter time horizon.