The rapid evolution of mathematical methods for image reconstruction in computed tomography (CT) reflects the race to produce reconstruction methods that are both efficient and accurate while keeping radiation dose to a minimum, and it has defined improvements in CT over the past decade.
The mathematical problem that CT image reconstruction must solve is to recover the attenuation coefficients along different X-ray absorption paths (ray sums) from the measured data sets (projections).
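In standard notation, each ray sum is the line integral of the linear attenuation coefficient along the ray path; discretizing the image into pixels turns the full set of projections into a linear system (the notation here is generic, not tied to any particular scanner geometry):

```latex
p_i = \int_{L_i} \mu(x, y)\, \mathrm{d}s
\qquad \Longrightarrow \qquad
\mathbf{p} = A\,\boldsymbol{\mu},
```

where $\mathbf{p}$ is the vector of ray sums, $\boldsymbol{\mu}$ the vector of pixel attenuation values, and $A$ the system (projection) matrix; reconstruction amounts to inverting this relationship.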
Image reconstruction in CT is a mathematical process that generates tomographic images from X-ray projection data acquired at many different angles around the patient. Image reconstruction has fundamental impacts on image quality and therefore on radiation dose. For a given radiation dose it is desirable to reconstruct images with the lowest possible noise without sacrificing image accuracy and spatial resolution. Reconstructions that improve image quality can be translated into a reduction of radiation dose because images of the same quality can be reconstructed at lower dose.
The selection of reconstruction kernel should be based on specific clinical applications. For example, smooth kernels are usually used in brain exams or liver tumor assessment to reduce image noise and enhance low contrast detectability, whereas sharper kernels are usually used in exams to assess bony structures due to the clinical requirement of better spatial resolution.
Another important reconstruction parameter is slice thickness, which controls the spatial resolution in the longitudinal direction, influencing the tradeoffs among resolution, noise, and radiation dose. It is the responsibility of CT users to select the most appropriate reconstruction kernel and slice thickness for each clinical application so that the radiation dose can be minimized consistent with the image quality needed for the examination.
Unlike analytical reconstruction methods, iterative reconstruction (IR) reconstructs images by iteratively optimizing an objective function, which typically consists of a data fidelity term and an edge-preserving regularization term [6]. The optimization process in IR involves iterations of forward projection and backprojection between image space and projection space. With the advances in computing technology, IR has become a very popular choice in routine CT practice because it has many advantages compared with conventional filtered backprojection (FBP) techniques. Important physical factors including focal spot and detector geometry, photon statistics, X-ray beam spectrum, and scattering can be more accurately incorporated into IR, yielding lower image noise and higher spatial resolution compared with FBP. In addition, IR can reduce image artifacts such as beam hardening, windmill, and metal artifacts.
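The objective described above can be sketched with a toy penalized least-squares example. Everything here is illustrative (the matrix, regularizer, and step size are hand-picked and do not represent any vendor's implementation); it only shows the structure of "data fidelity plus regularization" minimized by repeated forward projection and backprojection:

```python
import numpy as np

# Toy sketch of penalized least-squares IR: minimize
#     ||A x - p||^2 + lam * ||x||^2
# by gradient descent. A plays the role of the forward projector,
# A.T of the backprojector. All values are illustrative.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])          # 3 "rays" through 2 "pixels"
x_true = np.array([2.0, 1.0])       # true attenuation values
p = A @ x_true                      # noiseless projection data

lam, step = 1e-3, 0.1               # regularization weight, step size
x = np.zeros(2)                     # initial image estimate
for _ in range(500):
    grad = A.T @ (A @ x - p) + lam * x   # data-fidelity + regularizer gradient
    x -= step * grad

print(np.round(x, 3))               # close to x_true = [2, 1]
```

Real IR methods use edge-preserving (non-quadratic) regularizers and statistically weighted data terms, but the alternation between projecting the current image and backprojecting the mismatch is the same.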
Due to the intrinsic difference in data handling between FBP and iterative reconstruction, images from IR may have a different appearance (e.g., noise texture) from those using FBP reconstruction. More importantly, the spatial resolution in a local region of IR-reconstructed images is highly dependent on the contrast and noise of the surrounding structures due to the non-linear regularization term and other factors during the optimization process [7]. Measurements on different commercial IR methods have demonstrated this contrast- and noise-dependency of spatial resolution [8,9]. Because of this dependency, the amount of potential radiation dose reduction allowable by IR is dependent on the diagnostic task since the contrast of the subject and the noise of the exam vary substantially in clinical exams [10]. For low-contrast detection tasks, several phantom and human observer studies on multiple commercial IR methods demonstrated that only marginal or a small amount of radiation dose reduction can be allowed [11,12,13]. Careful clinical evaluation and reconstruction parameter optimization are required before IR can be used in routine practice [10,14,15]. Task-based image quality evaluation using model observers have been actively investigated so that image quality and dose reduction can be quantified objectively in an efficient manner [16,17,18].
Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step.[1] In recent research, scientists have shown that extremely fast computation and massive parallelism are possible for iterative reconstruction, which makes it practical for commercialization.[2]
The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to solve the inverse problem exactly. In this case, a direct algorithm has to approximate the solution, which might cause visible reconstruction artifacts in the image. Iterative algorithms approach the correct solution over multiple iteration steps, which allows a better reconstruction to be obtained at the cost of higher computation time.
There is a large variety of algorithms, but each starts with an assumed image, computes projections from that image, compares them with the original projection data, and updates the image based on the difference between the calculated and actual projections.
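The loop just described (assume an image, compute its projections, compare, correct) can be sketched with the classic algebraic reconstruction technique (ART, also known as Kaczmarz's method). The toy system and values below are illustrative only:

```python
import numpy as np

# Minimal ART (Kaczmarz) sketch of the generic iterative loop: start from
# an assumed image, compute each ray's projection, and correct the image
# by the normalized difference. Toy 2-pixel system; values illustrative.
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])          # two ray sums through 2 pixels
p = np.array([3.0, 1.0])             # measured projections (x_true = [2, 1])

x = np.zeros(2)                      # assumed starting image
for _ in range(20):                  # sweeps over all rays
    for a_i, p_i in zip(A, p):
        residual = p_i - a_i @ x             # measured minus computed
        x += residual * a_i / (a_i @ a_i)    # project onto ray's hyperplane

print(np.round(x, 3))                # → [2. 1.]
```

Because the two rows happen to be orthogonal here, ART converges in a single sweep; in general many sweeps are needed.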
The iterative Sparse Asymptotic Minimum Variance algorithm is an iterative, parameter-free superresolution tomographic reconstruction method inspired by compressed sensing, with applications in synthetic-aperture radar, computed tomography, and magnetic resonance imaging (MRI).
In learned iterative reconstruction, the updating algorithm is learned from training data using techniques from machine learning such as convolutional neural networks, while still incorporating the image formation model. This typically gives faster and higher quality reconstructions and has been applied to CT[4] and MRI reconstruction.[5]
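The structure of a learned iterative scheme can be sketched as an unrolled sequence of model-based updates with trainable parameters. In the toy sketch below the "learned" parameters are just per-iteration step sizes with hand-picked placeholder values; in a real learned method these (and a CNN-based correction applied at each step) would be fitted to training data:

```python
import numpy as np

# Sketch of an unrolled iterative scheme in the spirit of learned
# reconstruction: a fixed number of gradient-style updates, each with its
# own step size. The step sizes here are illustrative placeholders, not
# trained values; a real method would also insert a trained network.
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])           # toy forward model
p = A @ np.array([2.0, 1.0])         # toy measured data

steps = [0.2, 0.15, 0.1, 0.1, 0.1]   # "learned" parameters (illustrative)
x = np.zeros(2)
for s in steps:
    # model-based update; a trained network would refine x here as well
    x = x - s * (A.T @ (A @ x - p))

print(np.round(x, 2))
```

The key point is that the image formation model (`A` and its adjoint) stays inside the loop, so the network only has to learn the correction, not the physics.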
The advantages of the iterative approach include improved insensitivity to noise and capability of reconstructing an optimal image in the case of incomplete data. The method has been applied in emission tomography modalities like SPECT and PET, where there is significant attenuation along ray paths and noise statistics are relatively poor.
Statistical, likelihood-based approaches: statistical, likelihood-based iterative expectation-maximization algorithms[7][8] are now the preferred method of reconstruction. Such algorithms compute estimates of the likely distribution of annihilation events that led to the measured data, based on statistical principles, often providing better noise profiles and resistance to the streak artifacts common with FBP. Because the density of radioactive tracer is a function in a function space, and therefore extremely high-dimensional, methods that regularize the maximum-likelihood solution, turning it into a penalized or maximum a-posteriori estimate, can have significant advantages at low counts. Examples such as Ulf Grenander's sieve estimator[9][10] or Bayes penalty methods,[11][12] or I. J. Good's roughness method,[13][14] may yield superior performance to expectation-maximization-based methods that involve only a Poisson likelihood function.
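The core of the expectation-maximization approach for emission data is the MLEM multiplicative update, shown below on a toy system. The system matrix and activity values are invented for illustration, and the data are noiseless, but the update rule itself is the standard one derived from the Poisson likelihood:

```python
import numpy as np

# Sketch of the MLEM update for emission tomography under a Poisson
# likelihood. Detection probabilities and activities are toy values.
A = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])               # detection probabilities (toy)
lam_true = np.array([10.0, 5.0])         # true tracer activity
p = A @ lam_true                         # expected counts (noiseless here)

sens = A.T @ np.ones(len(p))             # sensitivity image, A^T 1
lam = np.ones(2)                         # strictly positive starting estimate
for _ in range(1000):
    ratio = p / (A @ lam)                # measured / predicted counts
    lam = lam / sens * (A.T @ ratio)     # multiplicative EM update

print(np.round(lam, 2))                  # ≈ [10. 5.]
```

Note that the update is multiplicative, so a nonnegative starting estimate stays nonnegative, which matches the physical constraint on tracer activity.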
As another example, iterative reconstruction is considered superior when a large set of projections is not available, when the projections are not distributed uniformly in angle, or when the projections are sparse or missing at certain orientations. These scenarios may occur in intraoperative CT, in cardiac CT, or when metal artifacts[15][16] require the exclusion of some portions of the projection data.
In Magnetic Resonance Imaging it can be used to reconstruct images from data acquired with multiple receive coils and with sampling patterns different from the conventional Cartesian grid[17] and allows the use of improved regularization techniques (e.g. total variation)[18] or an extended modeling of physical processes[19] to improve the reconstruction. For example, with iterative algorithms it is possible to reconstruct images from data acquired in a very short time as required for real-time MRI (rt-MRI).[6]
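The total-variation penalty mentioned above can be made concrete with a few lines of code. The sketch below computes the (anisotropic) TV of some toy images; it shows why TV regularization favors piecewise-constant images, suppressing noise while preserving edges:

```python
import numpy as np

# Minimal sketch of the total-variation (TV) penalty: the sum of absolute
# finite differences. Flat regions and sharp edges have low TV; noisy
# oscillation has high TV. All images below are toy examples.
def tv(img):
    dx = np.abs(np.diff(img, axis=1)).sum()   # horizontal variation
    dy = np.abs(np.diff(img, axis=0)).sum()   # vertical variation
    return dx + dy                            # anisotropic TV

flat = np.ones((4, 4))                        # constant image
edge = np.zeros((4, 4)); edge[:, 2:] = 1.0    # one sharp edge
rng = np.random.default_rng(0)
noisy = edge + 0.1 * rng.standard_normal((4, 4))

print(tv(flat), tv(edge))                     # 0.0 4.0
```

A single clean edge costs only as much TV as its height times its length, while added noise raises TV everywhere, so minimizing a TV-penalized objective removes noise without blurring the edge.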
In cryo-electron tomography, where only a limited number of projections can be acquired due to hardware limitations and to avoid damaging the biological specimen, iterative reconstruction can be used along with compressive sensing techniques or regularization functions (e.g., the Huber function) to improve the reconstruction for better interpretation.[20]
B.Z., J.Z.L., S.F.C., B.R.R. and M.S.R. conceptualized the problem and contributed to experimental design. B.Z. developed, implemented and tested the technical framework. J.Z.L. and B.Z. constructed the theoretical description. B.Z., J.Z.L., S.F.C., B.R.R. and M.S.R. wrote the manuscript.
Mean squared error (MSE) loss was minimized with stochastic gradient descent using the RMSProp algorithm and plotted here against training epoch count for: a, Cartesian Fourier encoding on IMAGENET corpus; b, spiral Fourier encoding on IMAGENET corpus; and c, Cartesian undersampled Fourier encoding on HCP brain corpus. The validation error tracks the training error without upward divergence, demonstrating a stable training regime with good bias-variance tradeoff.
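The RMSProp update referred to in the caption scales each parameter's step by a running average of its squared gradients. The sketch below applies it to a trivial quadratic loss; the hyperparameters are common defaults, not the values used in the paper:

```python
import numpy as np

# Sketch of the RMSProp update rule: per-parameter steps are normalized
# by a running average of squared gradients. Hyperparameters are typical
# defaults, chosen for illustration only.
def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    cache = decay * cache + (1 - decay) * grad**2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Minimize the toy MSE loss f(w) = ||w - t||^2 for a target t.
t = np.array([1.0, -2.0])
w = np.zeros(2)
cache = np.zeros(2)
for _ in range(3000):
    grad = 2 * (w - t)
    w, cache = rmsprop_step(w, grad, cache)

print(np.round(w, 2))                 # ≈ [ 1. -2.]
```

The gradient normalization makes the effective step size roughly uniform across parameters, which is why RMSProp is a common choice for training deep reconstruction networks.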
Reconstructing images from data, whether for medical or astronomical purposes, hinges on well-defined steps. The data sensor encodes an intermediate representation of the observed object, which is converted into an image by a mathematical operation known as the inversion of the encoding function. This inversion is often plagued by sensor imperfections and noise, requiring extra technique-specific steps to correct them. Here, Matthew Rosen and colleagues present a more unified framework termed 'automated transform by manifold approximation' (AUTOMAP). AUTOMAP tackles image reconstruction as a supervised learning task, which uses appropriate training data to link the sensor data to the output image. The authors implemented AUTOMAP with a deep neural network and tested its flexibility in learning how to reconstruct images for various magnetic resonance imaging acquisition strategies. AUTOMAP reduced artefacts and improved accuracy in images reconstructed from noisy and undersampled acquisitions. The authors expect their framework to apply to other imaging methods.