Matrix Method Problems

Verline

Aug 3, 2024, 6:09:11 PM
to veycaldioduns

I'm learning C# and recently came across linear algebra and matrices. I'm aiming to make a class that takes a 2D matrix with values given through input and prints the elements of the matrix. With this class, I'm planning to have access to methods that do linear algebra calculations like transformations (rotation, scaling, translation) and other arithmetic operations (e.g. addition, multiplication), plus miscellaneous methods such as print.

To keep it simple, I have something working, but now I'm stuck on an approach for printing the 2D matrix. Basically, I'm creating a constructor that takes the size of the array (x and y) and then just prints the matrix inside the constructor. However, my goal is to keep the values from the matrix creation and use them in my print method; that method would take the matrix and print its elements. I'm having difficulty wrapping my head around how to return the array and its elements to a separate method of my class. It works fine inside the constructor, but the whole idea is to have a value I can return and use later (e.g. for printing, in this case).

I came across a few ideas involving ref and out, but I'm not confident enough to know how to use them properly in my case. I suspect I would use them for the arithmetic and maybe the transformations, but not for printing.

I would recommend declaring your 2D array as a field or property of your Matrices class and writing a separate method, such as ToConsole(), that prints the matrix. This method will have access to the instance fields/properties of the Matrices class.

Because constructors should return fast, reading user input while creating the object slows down construction and is considered bad practice (you can fill the matrix explicitly via a separate method).
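For illustration, here is one minimal sketch of that idea; the class name Matrices comes from the thread, while FillFromConsole, ToConsole, and the indexer are just example names, not a required API. The values live in an instance field, the constructor only allocates, and filling and printing are separate methods, so no ref or out is needed:

using System;

public class Matrices
{
    // The matrix data lives in an instance field, so every method can use it.
    private readonly double[,] values;

    public int Rows { get; }
    public int Columns { get; }

    // The constructor only allocates; it does no console I/O.
    public Matrices(int rows, int columns)
    {
        Rows = rows;
        Columns = columns;
        values = new double[rows, columns];
    }

    // Indexer so callers can read and write individual elements.
    public double this[int row, int column]
    {
        get => values[row, column];
        set => values[row, column] = value;
    }

    // Fill the matrix from user input in a separate method, not in the constructor.
    public void FillFromConsole()
    {
        for (int r = 0; r < Rows; r++)
            for (int c = 0; c < Columns; c++)
            {
                Console.Write($"Element [{r},{c}]: ");
                values[r, c] = double.Parse(Console.ReadLine());
            }
    }

    // Print the stored elements; the data is shared state held by the object.
    public void ToConsole()
    {
        for (int r = 0; r < Rows; r++)
        {
            for (int c = 0; c < Columns; c++)
                Console.Write($"{values[r, c]}\t");
            Console.WriteLine();
        }
    }
}

A caller would then write something like: var m = new Matrices(2, 2); m.FillFromConsole(); m.ToConsole();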

A new transfer-matrix method is developed for low-energy-electron diffraction and other interface electronic-structure problems. This new procedure uses the waves incident on each layer as the basis for the transfer matrix. These waves are not singular at the plane of atoms, so that transfer can be done from one atom plane to the next, further from the singularities which remain at neighboring planes than in other methods. Because of the resulting more rapid convergence, this new transfer matrix can be chosen smaller than previous transfer matrices whenever interlayer scattering is strong. The convergence is further improved for strong scattering between adjacent layers by handling this scattering without approximation in a spherical-wave basis. A similar procedure is given for treating the surface region and connecting it to the semi-infinite bulk. A new procedure is also given for efficiently obtaining the bulk boundary conditions used beneath the surface from the transfer matrix. This new boundary-condition procedure is also useful in transfer-matrix methods other than the present one.

In this paper a novel operational matrix of derivatives of a certain basis of Legendre polynomials is established. We show that this matrix is expressed in terms of the harmonic numbers. Moreover, it is utilized along with the collocation method for handling initial value problems of any order. The convergence and the error analysis of the proposed expansion are carefully investigated. Numerical examples are exhibited to confirm the reliability and the high efficiency of the proposed method.

In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero.[1] There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse, but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense.[1] The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is sometimes referred to as the sparsity of the matrix.

Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory and numerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.

When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices,[2] as they are common in the machine learning field.[3] Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing and memory are wasted on the zeros. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.

Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.

The fill-in of a matrix consists of those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.

Methods other than the Cholesky decomposition are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ between methods. Symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst-case fill-in.

Iterative methods, such as the conjugate gradient method and GMRES, utilize fast computations of matrix-vector products Ax_i, where the matrix A is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
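As a rough sketch of why this is cheap when A is sparse (the Entry type and Multiply helper below are my own illustrative names, not a standard API), a matrix-vector product only has to visit the stored non-zero entries:

using System.Collections.Generic;

public static class SparseOps
{
    // One stored non-zero entry of the matrix.
    public record struct Entry(int Row, int Col, double Value);

    // y = A * x, where A is given as a list of its non-zero entries.
    // Work is proportional to the number of non-zeros, not to rows * columns.
    public static double[] Multiply(IEnumerable<Entry> a, double[] x, int rows)
    {
        var y = new double[rows];
        foreach (var e in a)
            y[e.Row] += e.Value * x[e.Col];
        return y;
    }
}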

In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.

DOK consists of a dictionary that maps (row, column)-pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing.[4]
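A minimal dictionary-of-keys sketch in C#, assuming a hypothetical DokMatrix wrapper of my own naming rather than any library type:

using System.Collections.Generic;

public class DokMatrix
{
    // Maps (row, column) pairs to values; missing keys are implicitly zero.
    private readonly Dictionary<(int Row, int Col), double> data = new();

    public double this[int row, int col]
    {
        get => data.TryGetValue((row, col), out var v) ? v : 0.0;
        set
        {
            if (value == 0.0) data.Remove((row, col));   // store only non-zeros
            else data[(row, col)] = value;
        }
    }

    public int NonZeroCount => data.Count;
}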

LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction.[5]
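Along the same lines, a list-of-lists sketch (again with a hypothetical LilMatrix name), keeping each row's entries sorted by column index:

using System.Collections.Generic;

public class LilMatrix
{
    // One list of (column, value) pairs per row.
    private readonly List<(int Col, double Value)>[] rows;

    public LilMatrix(int rowCount)
    {
        rows = new List<(int Col, double Value)>[rowCount];
        for (int r = 0; r < rowCount; r++)
            rows[r] = new List<(int Col, double Value)>();
    }

    public void Set(int row, int col, double value)
    {
        var list = rows[row];
        // Insert while keeping the row sorted by column index.
        int i = list.FindIndex(e => e.Col >= col);
        if (i >= 0 && list[i].Col == col) list[i] = (col, value);
        else if (i >= 0) list.Insert(i, (col, value));
        else list.Add((col, value));
    }
}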

COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction.[6]
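A coordinate-list sketch is shorter still; CooMatrix and its members are again only illustrative names:

using System.Collections.Generic;

public class CooMatrix
{
    public List<(int Row, int Col, double Value)> Entries { get; } = new();

    public void Add(int row, int col, double value) => Entries.Add((row, col, value));

    // Sort by row index, then by column index, as the text suggests.
    public void SortEntries() =>
        Entries.Sort((a, b) => a.Row != b.Row ? a.Row.CompareTo(b.Row) : a.Col.CompareTo(b.Col));
}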

CSR (compressed sparse row) represents a matrix with three arrays: V holds the non-zero values in row-major order, COL_INDEX holds their column indices, and ROW_INDEX records, for each row, where that row's entries start in V (plus one final entry marking the end). For example, to extract row 1 (the second row) we set row_start = ROW_INDEX[1] = 1 and row_end = ROW_INDEX[2] = 2. Then we take the slices V[1:2] = [8] and COL_INDEX[1:2] = [1]. We now know that row 1 has one element, at column 1, with value 8.
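For concreteness, here is a small C# sketch of that row lookup; the particular V, COL_INDEX, and ROW_INDEX contents are an assumed toy example consistent with the slices quoted above, not data from the text:

using System;

public static class CsrDemo
{
    public static void Main()
    {
        // A 4x4 example with one non-zero per row (assumed for illustration).
        double[] V = { 5, 8, 3, 6 };
        int[] COL_INDEX = { 0, 1, 2, 1 };
        int[] ROW_INDEX = { 0, 1, 2, 3, 4 };   // length = rows + 1

        int row = 1;                            // the second row
        int rowStart = ROW_INDEX[row];          // 1
        int rowEnd = ROW_INDEX[row + 1];        // 2

        for (int i = rowStart; i < rowEnd; i++)
            Console.WriteLine($"row {row}, column {COL_INDEX[i]}: {V[i]}");
        // Prints: row 1, column 1: 8
    }
}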

The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines ROW_INDEX and COL_INDEX into a single array and handles the diagonal of the matrix separately.[9]

CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and col_ptr is the list of val indexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. This is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).
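To illustrate why this layout suits matrix-vector products, here is a small sketch that walks the columns of a CSC matrix; the parameter names mirror (val, row_ind, col_ptr) from the description, and everything else is an assumption for the example:

public static class CscDemo
{
    // y = A * x for an m-row matrix A stored as (val, row_ind, col_ptr).
    public static double[] Multiply(double[] val, int[] rowInd, int[] colPtr, double[] x, int m)
    {
        var y = new double[m];
        for (int col = 0; col < colPtr.Length - 1; col++)
            for (int i = colPtr[col]; i < colPtr[col + 1]; i++)
                y[rowInd[i]] += val[i] * x[col];   // scatter column 'col' scaled by x[col]
        return y;
    }
}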
