Linear Systems And Signals 3rd Edition Solutions Pdf

Tadeo Lentz

Jul 21, 2024, 4:59:50 AM
to milkroubiri

The course is designed to provide the fundamental concepts in signals and systems. By the end of the course, students should be able to apply signal transforms and system convolution, and to describe linear operations on signals and systems.



DOWNLOAD: https://tlniurl.com/2zv7ri



We draw a distinction between the fundamentals of signal modelling in the time and frequency domains, and indicate the significance of alternative descriptions. The basic concepts of Fourier series, Fourier transforms, Laplace transforms and related areas are developed. The idea of convolution for linear time-invariant systems is introduced and expanded on from a range of perspectives. The transfer function for continuous and discrete time systems is used in this context. Stability is discussed with respect to pole locations. Some elements of statistical signal description are introduced as signal comparison methods. The Discrete Fourier Transform is discussed as an evaluation of the z-transform, and its consequences are examined. Some basic filtering operations for both continuous and discrete signals are developed.
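As a quick illustration of two of these ideas, here is a minimal sketch in Python/NumPy (the course itself uses Matlab; the signals are made up for the example):

    import numpy as np

    h = np.array([1.0, 0.5, 0.25])        # impulse response of a simple FIR system
    x = np.array([1.0, 2.0, 3.0, 4.0])    # input signal

    # For an LTI system, the output is the convolution of the input with the
    # impulse response
    y = np.convolve(x, h)

    # The DFT (computed here via the FFT) evaluates the z-transform of x on the
    # unit circle, at z = exp(j*2*pi*k/N) for k = 0, ..., N-1
    N = 8
    X = np.fft.fft(x, n=N)

    print(y)            # length len(x) + len(h) - 1 = 6
    print(np.abs(X))    # magnitude spectrum at the 8 unit-circle points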

Textbook & Key References

  • "Linear Systems and Signals", B.P. Lathi, 2nd Edition, Oxford University Press (Main Textbook)
  • "Signals and Systems" , A. Oppenheim, A. Wilsky, Prentice Hall
Matlab Licence

This course includes the use of Matlab for tutorial problems. Two Matlab tutorial sessions will be given at the beginning of the course. It is important that you have a copy of Matlab installed and properly licensed under Imperial College's Licensing Scheme. You can find instructions on how to obtain Matlab here (Imperial login required). As a full-time member of the College, if you wish to install Matlab on a personally owned system, please complete the licence form available here. For installation instructions, please click here.

A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized.
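For the classical band-limited case, here is a minimal sketch of sampling and sinc (Whittaker-Shannon) reconstruction; the 3 Hz tone and 10 Hz sampling rate are illustrative assumptions:

    import numpy as np

    fs = 10.0                         # sampling rate (Hz), above Nyquist for a 3 Hz tone
    n = np.arange(20)                 # sample indices covering 2 seconds
    x_n = np.sin(2*np.pi*3*n/fs)      # samples of the 3 Hz sinusoid

    # Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n),
    # exact for signals band-limited below fs/2 (up to edge effects here)
    t = np.linspace(0, 2, 1000)
    x_rec = sum(x_n[k] * np.sinc(fs*t - k) for k in range(len(n)))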

Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires.[4][5] This idea is the basis of compressed sensing.

At first glance, compressed sensing might seem to violate the sampling theorem, because compressed sensing depends on the sparsity of the signal in question and not its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. Sparse signals with high frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling.[10]

An underdetermined system of linear equations has more unknowns than equations and generally has an infinite number of solutions. Such a system can be written as $\mathbf{y} = D\mathbf{x}$, where we want to find a solution for $\mathbf{x}$.
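As a small numerical sketch (the random $D$ and made-up sparse $\mathbf{x}$ are assumptions for illustration), the pseudoinverse picks one solution out of the infinitely many, namely the minimum-$L^2$-norm one, which is generally not sparse:

    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.standard_normal((6, 12))     # 6 equations, 12 unknowns: underdetermined
    x_true = np.zeros(12)
    x_true[[2, 7]] = [2.0, -1.0]         # a sparse "ground truth"
    y = D @ x_true

    # The pseudoinverse returns the minimum-L2-norm solution of y = D x;
    # it satisfies the equations ...
    x_l2 = np.linalg.pinv(D) @ y
    print(np.allclose(D @ x_l2, y))      # True
    # ... but it is dense: essentially all 12 entries are nonzero
    print(np.sum(np.abs(x_l2) > 1e-8))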

In order to choose a solution to such a system, one must impose extra constraints or conditions (such as smoothness) as appropriate. In compressed sensing, one adds the constraint of sparsity, allowing only solutions which have a small number of nonzero coefficients. Not all underdetermined systems of linear equations have a sparse solution. However, if there is a unique sparse solution to the underdetermined system, then the compressed sensing framework allows the recovery of that solution.

Compressed sensing typically starts with taking a weighted linear combination of samples, also called compressive measurements, in a basis different from the basis in which the signal is known to be sparse. The results found by Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho showed that the number of these compressive measurements can be small and still contain nearly all the useful information. Therefore, the task of converting the image back into the intended domain involves solving an underdetermined matrix equation, since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, adding the constraint that the initial signal is sparse enables one to solve this underdetermined system of linear equations.

To enforce the sparsity constraint when solving the underdetermined system of linear equations, one can minimize the number of nonzero components of the solution. The function counting the number of nonzero components of a vector was called the $L^0$ "norm" by David Donoho.[note 1]

Candès et al. proved that for many problems it is probable that the $L^1$ norm is equivalent to the $L^0$ norm, in a technical sense: this equivalence result allows one to solve the $L^1$ problem, which is easier than the $L^0$ problem. Finding the candidate with the smallest $L^1$ norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist.[13] When measurements may contain a finite amount of noise, basis pursuit denoising is preferred over linear programming, since it preserves sparsity in the face of noise and can be solved faster than an exact linear program.
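A minimal sketch of that linear program with SciPy, continuing the toy setup above (recovery is typical at these sizes but not guaranteed):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    D = rng.standard_normal((6, 12))
    x_true = np.zeros(12)
    x_true[[2, 7]] = [2.0, -1.0]
    y = D @ x_true

    # min ||x||_1 s.t. D x = y, via the standard split x = u - v, u, v >= 0:
    # minimize 1'(u + v) subject to [D, -D][u; v] = y
    c = np.ones(24)
    res = linprog(c, A_eq=np.hstack([D, -D]), b_eq=y, bounds=(0, None))
    x_l1 = res.x[:12] - res.x[12:]
    print(np.round(x_l1, 4))   # typically recovers the sparse x_true here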

Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). For signals in particular, total variation refers to the integral of the absolute gradient of the signal. In signal and image reconstruction, it is applied as total variation regularization, where the underlying principle is that signals with excessive detail have high total variation, and that removing this detail while retaining important information such as edges reduces the total variation of the signal and brings it closer to the original signal in the problem.
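In the discrete one-dimensional case this reduces to a sum of absolute successive differences; a small sketch with a made-up piecewise-constant signal:

    import numpy as np

    def total_variation(x):
        # Discrete 1-D total variation: sum of absolute successive differences
        return np.sum(np.abs(np.diff(x)))

    clean = np.array([0., 0., 0., 5., 5., 5., 0., 0.])   # two sharp edges
    rng = np.random.default_rng(0)
    noisy = clean + 0.3 * rng.standard_normal(clean.size)

    print(total_variation(clean))   # 10.0: only the two edges contribute
    print(total_variation(noisy))   # larger: noise adds many small variations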

Early iterations may find inaccurate sample estimates; however, the method down-samples these at a later stage to give more weight to the smaller non-zero signal estimates. One disadvantage is the need to define a valid starting point, as a global minimum might not be obtained every time due to the concavity of the function. Another disadvantage is that the method tends to penalize the image gradient uniformly, irrespective of the underlying image structures. This causes over-smoothing of edges, especially in low-contrast regions, and subsequently a loss of low-contrast information. The advantages of this method include: reduction of the sampling rate for sparse signals; reconstruction of the image while being robust to the removal of noise and other artifacts; and use of very few iterations. It can also help in recovering images with sparse gradients.
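The reweighting idea can be sketched generically as iteratively reweighted least squares for the basis-pursuit problem; this is an illustrative stand-in, not the specific edge-preserving algorithm described here, and irls_sparse is a hypothetical helper:

    import numpy as np

    def irls_sparse(D, y, iters=30, eps=1e-6):
        # Iteratively reweighted least squares for min ||x||_1 s.t. D x = y.
        # Each pass solves a weighted minimum-norm problem; entries that look
        # small get small weights, so later iterations push them toward zero.
        x = np.linalg.pinv(D) @ y                # start from the min-L2 solution
        for _ in range(iters):
            W = np.diag(np.abs(x) + eps)         # weights from current estimate
            x = W @ D.T @ np.linalg.solve(D @ W @ D.T, y)
        return x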

Some of the disadvantages of this method are the absence of smaller structures in the reconstructed image and degradation of image resolution. This edge preserving TV algorithm, however, requires fewer iterations than the conventional TV algorithm.[14] Analyzing the horizontal and vertical intensity profiles of the reconstructed images, it can be seen that there are sharp jumps at edge points and negligible, minor fluctuation at non-edge points. Thus, this method leads to low relative error and higher correlation as compared to the TV method. It also effectively suppresses and removes any form of image noise and image artifacts such as streaking.

The structure tensor obtained is convolved with a Gaussian kernel $G$ to improve the accuracy of the orientation estimate, with $\sigma$ set to high values to account for the unknown noise levels. For every pixel $(i,j)$ in the image, the structure tensor $J$ is a symmetric and positive semi-definite matrix. Convolving all the pixels in the image with $G$ gives orthonormal eigenvectors $\omega$ and $\upsilon$ of the matrix $J$. Here $\omega$ points in the direction of the dominant orientation, having the largest contrast, and $\upsilon$ points in the direction of the structure orientation, having the smallest contrast. The coarse initial estimate $\hat{d}$ of the orientation field is defined as $\hat{d} = \upsilon$. This estimate is accurate at strong edges; however, at weak edges or in regions with noise, its reliability decreases.
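A rough sketch of this pipeline, assuming SciPy's Sobel and Gaussian filters and using the closed-form eigen-directions of a symmetric 2x2 tensor (orientation_field and sigma=2.0 are illustrative choices):

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def orientation_field(img, sigma=2.0):
        # Image gradients
        Ix = sobel(img, axis=1, output=float)
        Iy = sobel(img, axis=0, output=float)
        # Structure tensor entries, each smoothed with the Gaussian kernel G
        Jxx = gaussian_filter(Ix * Ix, sigma)
        Jxy = gaussian_filter(Ix * Iy, sigma)
        Jyy = gaussian_filter(Iy * Iy, sigma)
        # For a symmetric 2x2 tensor, the dominant-contrast direction omega has
        # angle 0.5 * atan2(2*Jxy, Jxx - Jyy); upsilon is perpendicular to it
        theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
        upsilon = np.stack([-np.sin(theta), np.cos(theta)], axis=-1)
        return upsilon    # coarse orientation-field estimate d_hat = upsilon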

To overcome this drawback, a refined orientation model is defined in which the data term reduces the effect of noise and improves accuracy, while the second penalty term, with the $L^2$ norm, is a fidelity term that ensures the accuracy of the initial coarse estimation.

Based on peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics, and known ground-truth images for testing performance, it is concluded that iterative directional total variation has better reconstruction performance than the non-iterative methods in preserving edge and texture areas. The orientation field refinement model plays a major role in this improvement, as it increases the number of directionless pixels in the flat areas while enhancing the orientation field consistency in the regions with edges.
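For reference, PSNR is straightforward to compute directly, and SSIM is available off the shelf in scikit-image (the random images below are placeholders for a ground truth and a reconstruction):

    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(ref, test, max_val=1.0):
        # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)
        mse = np.mean((ref - test) ** 2)
        return 10 * np.log10(max_val ** 2 / mse)

    rng = np.random.default_rng(0)
    ref = rng.random((64, 64))
    test = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
    print(f"PSNR = {psnr(ref, test):.1f} dB")
    print(f"SSIM = {structural_similarity(ref, test, data_range=1.0):.3f}")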

The field of compressive sensing is related to several topics in signal processing and computational mathematics, such as underdetermined linear systems, group testing, heavy hitters, sparse coding, multiplexing, sparse sampling, and finite rate of innovation. Its broad scope and generality have enabled several innovative CS-enhanced approaches in signal processing and compression, the solution of inverse problems, the design of radiating systems, radar and through-the-wall imaging, and antenna characterization.[22] Imaging techniques having a strong affinity with compressive sensing include coded aperture and computational photography.
