Introduction To Numerical Linear Algebra And Optimisation Pdf


Laura N Gerard

Jul 26, 2024, 12:41:10 AM
to ATB EXECUTIVE MEMBERS

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.

Common problems in numerical linear algebra include computing matrix decompositions such as the singular value decomposition, the QR factorization, the LU factorization, and the eigendecomposition, which can then be used to answer standard linear-algebraic questions such as solving linear systems of equations, locating eigenvalues, and least-squares optimisation. Numerical linear algebra's central goal of developing algorithms that keep the error introduced by finite-precision arithmetic small when applied to real data is often achieved by iterative methods rather than direct ones.
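To make the decompositions above concrete, here is a minimal NumPy sketch; the matrix `A`, vector `b`, and the overdetermined system `M`, `y` are arbitrary illustrative data, not examples from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

# QR factorization: A = Q R with Q orthogonal, R upper triangular.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)   # solves A x = b via the factors

# Singular value decomposition and eigendecomposition.
U, s, Vt = np.linalg.svd(A)
eigvals, eigvecs = np.linalg.eig(A)

# Least-squares fit for an overdetermined system M c ~ y.
M = rng.standard_normal((6, 2))
y = rng.standard_normal(6)
coef, residuals, rank, sv = np.linalg.lstsq(M, y, rcond=None)
```

Note that NumPy itself does not expose an LU routine (that lives in `scipy.linalg.lu`), which is why this sketch solves the square system through QR instead.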

Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations.[2] The first serious attempt to minimize computer error in the application of algorithms to real data was John von Neumann and Herman Goldstine's work in 1947.[3] The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems.[2]

There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach, since the eigenvalues of a general matrix of order five or more are the roots of a polynomial with no closed-form expression. Second, noniterative algorithms for an arbitrary m × m matrix require O(m³) time, which is a surprisingly high floor given that matrices contain only m² numbers. Iterative approaches can exploit features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would be forced to perform, even though those steps are redundant given the matrix's structure.
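A toy illustration of the sparsity point, assuming a tridiagonal test matrix and a simple Jacobi iteration; both are stand-ins chosen for brevity, not methods named in the text:

```python
import numpy as np

# Tridiagonal matrix with 4 on the diagonal and -1 on the off-diagonals:
# a matrix-vector product needs only O(m) work, not O(m^2).
m = 1000
main = 4.0 * np.ones(m)
off = -1.0 * np.ones(m - 1)

def matvec(x):
    # A @ x using only the three stored diagonals.
    y = main * x
    y[:-1] += off * x[1:]
    y[1:] += off * x[:-1]
    return y

# Jacobi iteration for A x = b; it converges here because the
# matrix is strictly diagonally dominant.
b = np.ones(m)
x = np.zeros(m)
for _ in range(60):
    x = x + (b - matvec(x)) / main   # x += D^{-1} (b - A x)
```

Each sweep costs O(m) operations, while a dense direct solve of the same system would cost O(m³).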

The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower-dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices, starting in a low-dimensional space and moving to successively higher dimensions. When A is symmetric and we wish to solve the linear problem Ax = b, the classical iterative approach is the conjugate gradient method. If A is not symmetric, examples of iterative solutions to the linear problem are the generalized minimal residual method (GMRES) and CGN (the conjugate gradient method applied to the normal equations). For the eigenvalue and eigenvector problem, we can use the Lanczos algorithm when A is symmetric, and Arnoldi iteration when A is non-symmetric.
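As a sketch of these solvers in practice, SciPy exposes conjugate gradient, GMRES, and a Lanczos-based eigensolver; the tridiagonal test matrices below are illustrative stand-ins only:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, gmres, eigsh

n = 200
b = np.ones(n)

# Symmetric positive definite system: conjugate gradient.
A_sym = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
x_cg, info_cg = cg(A_sym, b)

# Nonsymmetric system: GMRES.
A_non = diags([-1.0, 2.0, -0.5], [-1, 0, 1], shape=(n, n), format="csr")
x_gm, info_gm = gmres(A_non, b)

# A few extreme eigenvalues of the symmetric matrix via Lanczos (eigsh);
# the nonsymmetric analogue, eigs, uses Arnoldi iteration.
vals = eigsh(A_sym, k=3, return_eigenvectors=False)
```

An `info` value of 0 signals that the iteration reached its convergence tolerance.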

Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like the Basic Linear Algebra Subprograms (BLAS) and LAPACK, Python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK.[5] Further libraries are catalogued in the List of numerical libraries.
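For instance, NumPy's dense solver is a thin wrapper over LAPACK; the 2×2 system here is an arbitrary example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# np.linalg.solve dispatches to LAPACK's gesv driver
# (LU factorization with partial pivoting).
x = np.linalg.solve(A, b)   # x == [2., 3.]
```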

The success of modern codes for large-scale optimization is heavily dependent on the use of effective tools of numerical linear algebra. On the other hand, many problems in numerical linear algebra lead to linear, nonlinear or semidefinite optimization problems. The purpose of the conference is to bring together researchers from both communities and to find and communicate points and topics of common interest. This Conference has been organised in cooperation with the Society for Industrial and Applied Mathematics (SIAM).

In multiobjective optimization, one considers optimization problems with several competing objective functions. For instance, in engineering, a design often has to be stable and light at the same time. A classical approach to such problems is scalarization: formulating suitable parameter-dependent single-objective replacement problems, such as a weighted sum of the objective functions. The parameters are then varied and the scalarized problems are solved iteratively.
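A minimal sketch of weighted-sum scalarization, with two toy one-dimensional objectives invented for illustration (not taken from the talk):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two competing objectives: being close to +1 vs. being close to -1.
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

weights = np.linspace(0.0, 1.0, 5)
pareto_points = []
for w in weights:
    # Scalarized single-objective replacement problem for this weight.
    res = minimize_scalar(lambda x, w=w: w * f1(x) + (1.0 - w) * f2(x))
    pareto_points.append(res.x)
```

Varying the weight traces out Pareto-optimal points; here the scalarized minimizer is x* = 2w - 1, so the points sweep from -1 to +1.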

However, many multiobjective optimization problems have a structure where a scalarization is not a suitable approach for an efficient procedure. In this talk, we give an introduction to the basic concepts and classical approaches in multiobjective optimization. Then, we present such classes of multiobjective optimization problems where it is better not to scalarize. For specific heterogeneous problems, where one of the objective functions is assumed to be an expensive black-box function while the other objectives are analytically given, we give more details on a numerical approach. That method uses the basic trust region concept by restricting the computations in every iteration to a local area. The objective functions are replaced by suitable models which reflect the heterogeneity of the objective functions.

The Incompressible Flow & Iterative Solver Software (IFISS) package contains software that can be run with MATLAB or Octave to create a computational laboratory for the interactive numerical study of incompressible flow problems. It includes algorithms for discretisation by mixed finite element methods and a posteriori error estimation of the computed solutions, together with state-of-the-art preconditioned iterative solvers for the resulting discrete linear equation systems. In this talk we give a flavour of the main features and illustrate its applicability using several case studies. We will demonstrate that the software is a valuable tool in the present era of open science and reproducible research.

In recent years, the practical importance of optimization problems on manifolds has stimulated the development of geometric optimization algorithms that exploit the differential structure of the manifold search space. In this talk, we give an overview of geometric optimization algorithms and their applications, with an emphasis on the underlying geometric concepts and on the numerical efficiency of the algorithm implementations.

Day Delegate rate:
A Day Delegate rate is also available for this Conference if you would like to attend one of the scheduled Conference days. If you would like to find out more information about our Day Delegate rate, please contact us at confe...@ima.org.uk

The IMA has booked accommodation at Edgbaston Park Hotel on hold for delegates on a first-come, first-served basis. The rate is £90 single occupancy, B&B, and rooms will be available to book until 16/05/2022.

For general conference queries please contact the Conferences Department, Institute of Mathematics and its Applications, Catherine Richards House, 16 Nelson Street, Southend-on-Sea, Essex, SS1 1EF, UK.

This course is a systematic introduction to computing (with Python and Jupyter notebooks) for science and engineering applications. Applications are drawn from a broad range of disciplines, including physical, financial, and biological-epidemiological problems. The course consists of two parts:

1. Basics: essential elements of computing, including types of variables, lists, arrays, iteration and control flow (for and while loops, if statements), definition of functions, recursion, file handling, simple plots, and plotting and visualization tools in higher dimensions.

2. Applications: development of computational skills for problem solving, including numerical and machine learning methods and their use in deterministic and stochastic approaches; examples include numerical differentiation and integration, curve fitting and error analysis, solution of simple differential equations, random numbers and stochastic sampling, and advanced methods like neural networks and simulated annealing for optimization in complex systems.

Course work consists of attending lectures and labs, weekly homework assignments, a mid-term project, and a final project; while work is developed collaboratively, coding assignments are submitted individually.
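Two of the Part 2 topics, numerical differentiation and integration, can be previewed in a few lines; the test function sin and the step sizes are arbitrary choices:

```python
import numpy as np

def central_diff(f, x, h=1e-5):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def trapezoid(f, a, b, n=1000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    xs = np.linspace(a, b, n + 1)
    ys = f(xs)
    return float(np.sum((ys[:-1] + ys[1:]) * np.diff(xs)) / 2.0)

d = central_diff(np.sin, 0.0)        # approximates cos(0) = 1
I = trapezoid(np.sin, 0.0, np.pi)    # approximates the exact value 2
```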

This course covers a combination of linear algebra and multivariate calculus with an eye towards solving systems of equations and optimization problems. Students will learn how to prove some key results, and will also implement these ideas with code. Linear algebra: matrices, vector spaces, bases and dimension, inner products, least squares problems, eigenvalues, eigenvectors, singular values, singular vectors. Multivariate calculus: partial differentiation, gradient and Hessian, critical points, Lagrange multipliers.
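As a hypothetical worked example tying the two halves of the syllabus together, a Newton step solves the linear system "Hessian times step equals minus gradient" to locate a critical point; the quadratic f below is invented for illustration:

```python
import numpy as np

# f(x, y) = x^2 + x*y + 2*y^2 - 4*x, a convex quadratic, so a single
# Newton step from any starting point lands exactly on the critical point.
def grad(p):
    x, y = p
    return np.array([2.0 * x + y - 4.0, x + 4.0 * y])

H = np.array([[2.0, 1.0],
              [1.0, 4.0]])            # constant Hessian of f

p = np.zeros(2)
p = p - np.linalg.solve(H, grad(p))   # Newton step: solve H s = -grad
```

At the resulting point the gradient vanishes, which is the first-order condition for a critical point covered in the multivariate-calculus half of the course.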
