Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita,[1] it was used by Albert Einstein to develop his general theory of relativity. Unlike the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold.
Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning.
In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.
For example, in physics you start with a vector field, you decompose it with respect to the covariant basis, and that's how you get the contravariant components. For orthonormal Cartesian coordinates, the covariant and contravariant bases are identical, since the basis in that case is just the identity matrix; however, for non-affine coordinate systems such as polar or spherical coordinates, one must distinguish between decomposition with respect to the contravariant basis set and decomposition with respect to the covariant basis set when generating the components of a vector.
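As an illustrative sketch (a standard example, not drawn from the passage above): in plane polar coordinates $(r, \theta)$ with position vector $\mathbf{R} = (r\cos\theta,\; r\sin\theta)$, the covariant basis vectors are the partial derivatives of $\mathbf{R}$,

$$\mathbf{e}_r = \frac{\partial \mathbf{R}}{\partial r} = (\cos\theta,\ \sin\theta), \qquad \mathbf{e}_\theta = \frac{\partial \mathbf{R}}{\partial \theta} = (-r\sin\theta,\ r\cos\theta),$$

while the contravariant (dual) basis vectors $\mathbf{e}^r = (\cos\theta,\ \sin\theta)$ and $\mathbf{e}^\theta = (-\sin\theta/r,\ \cos\theta/r)$ are fixed by the condition $\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j$. A vector field $\mathbf{v}$ then decomposes as $\mathbf{v} = v^i \mathbf{e}_i$ with contravariant components $v^i = \mathbf{v}\cdot\mathbf{e}^i$; in orthonormal Cartesian coordinates the two bases coincide, so the distinction disappears.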
The metric tensor represents a matrix with scalar elements ($Z_{ij}$ or $Z^{ij}$) and is a tensor object which is used to raise or lower the index on another tensor object by an operation called contraction, thus allowing a covariant tensor to be converted to a contravariant tensor, and vice versa.
This means that if we take every pairing of the basis vectors, dot them against each other, and then arrange the results into a square matrix, we have a metric tensor. The caveat here is which of the two basis sets the paired vectors are drawn from; that is the distinguishing property of the covariant metric tensor in comparison with the contravariant metric tensor.
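To continue the polar-coordinate sketch from above (again an illustration rather than anything specific to this passage), dotting the covariant basis vectors against each other gives

$$Z_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j = \begin{pmatrix} \mathbf{e}_r\cdot\mathbf{e}_r & \mathbf{e}_r\cdot\mathbf{e}_\theta \\ \mathbf{e}_\theta\cdot\mathbf{e}_r & \mathbf{e}_\theta\cdot\mathbf{e}_\theta \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & r^2 \end{pmatrix},$$

whereas dotting the contravariant basis vectors against each other instead gives $\operatorname{diag}(1,\ 1/r^2)$.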
Two flavors of metric tensors exist: (1) the contravariant metric tensor ($Z^{ij}$), and (2) the covariant metric tensor ($Z_{ij}$). These two flavors of metric tensor are related by the identity $Z^{ik} Z_{kj} = \delta^i_j$; that is, each is the matrix inverse of the other.
For an orthonormal Cartesian coordinate system, the metric tensor is just the Kronecker delta $\delta_{ij}$ or $\delta^{ij}$, which is just a tensor equivalent of the identity matrix, and $\delta_{ij} = \delta^{ij} = \delta^i_j$.
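Continuing the polar example (an illustration only): the two metrics are indeed matrix inverses of one another, and contracting with them raises or lowers indices. Given covariant components $(v_r,\ v_\theta)$, for instance,

$$Z^{ik} Z_{kj} = \begin{pmatrix} 1 & 0 \\ 0 & 1/r^2 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & r^2 \end{pmatrix} = \delta^i_j, \qquad v^r = Z^{rr} v_r = v_r, \qquad v^\theta = Z^{\theta\theta} v_\theta = \frac{v_\theta}{r^2},$$

while in orthonormal Cartesian coordinates both metrics reduce to the identity and $v^i = v_i$.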
2. The $\bar{J}$ matrix, representing the change from barred to unbarred coordinates. To find $\bar{J}$, we take the "unbarred gradient", i.e. we take partial derivatives with respect to $x^i$: $\bar{J} = \nabla\,\bar{x}(x)$, with components $\bar{J}^{\,i}{}_{j} = \partial \bar{x}^i / \partial x^j$.
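As a concrete sketch (my own choice of coordinates, with Cartesian $(x, y)$ as the unbarred system and polar $(r, \theta)$ as the barred system, so $r = \sqrt{x^2 + y^2}$ and $\theta = \arctan(y/x)$):

$$\bar{J} = \begin{pmatrix} \partial r/\partial x & \partial r/\partial y \\ \partial\theta/\partial x & \partial\theta/\partial y \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta/r & \cos\theta/r \end{pmatrix},$$

while the $J$ matrix for the opposite change of coordinates is $\partial(x, y)/\partial(r, \theta)$; multiplying the two matrices gives the identity matrix, as it must.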
In contrast, for standard calculus, the gradient vector formula is dependent on the coordinate system in use (for example: the Cartesian gradient vector formula vs. the polar gradient vector formula vs. the spherical gradient vector formula, etc.). In standard calculus, each coordinate system has its own specific formula, unlike tensor calculus, which has only one gradient formula that is equivalent for all coordinate systems. This is made possible by the metric tensor, which tensor calculus makes use of.
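A minimal sketch of that single formula, in the notation used above (with $Z^{ij}$ the contravariant metric and $\mathbf{e}_i$ the covariant basis): the gradient of a scalar field $f$ can be written

$$\nabla f = Z^{ij}\,\frac{\partial f}{\partial x^j}\,\mathbf{e}_i.$$

In Cartesian coordinates $Z^{ij} = \delta^{ij}$ and this reduces to the usual formula; in the polar example, $Z^{ij} = \operatorname{diag}(1,\ 1/r^2)$ gives $\nabla f = \frac{\partial f}{\partial r}\,\mathbf{e}_r + \frac{1}{r^2}\frac{\partial f}{\partial\theta}\,\mathbf{e}_\theta$, which is the familiar polar gradient once $\mathbf{e}_\theta = r\,\hat{\boldsymbol{\theta}}$ is rewritten in terms of the unit vector $\hat{\boldsymbol{\theta}}$.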
Introductory course in modern differential geometry focusing on examples, broadly aimed at students in mathematics, the sciences, and engineering. Emphasis is on rigorously presented concepts, tools and ideas rather than on proofs. Topics covered include differentiable manifolds, tangent spaces and orientability; vector and tensor fields; differential forms; integration on manifolds and Generalized Stokes' Theorem; Riemannian metrics, Riemannian connections and geodesics. Applications to configuration and phase spaces, Maxwell equations and relativity theory will be discussed.
Students with a Bachelor's degree will be assessed the graduate-level tuition rate for this course. However, one cannot receive graduate-level credit for courses numbered below 400 at the University of Illinois.
Students currently registered in a University of Illinois Graduate Degree program will be restricted from registering in 16-week Academic Year-term NetMath courses. Matriculating UIUC Grad students will be allowed to register in Summer Session II NetMath courses.
This page has information regarding the self-paced, rolling enrollment course. If you are a UIUC student interested in taking a course during the summer, you may be interested in a Summer Session II course.
I understand a four-vector is a four-dimensional vector, which is written in the form $(ct, x, y, z)$, in the convention I am using. Sometimes, we refer to the contravariant components of the four vector $x^\alpha$. My understanding is sort of off here, though. Sometimes, we write
$$x^\alpha = \Lambda^\alpha_\beta x^\beta.$$
I don't really understand what this expression means. On the one hand, I think of this as essentially a matrix multiplication equation, where we have that the $\alpha$'th component $x^\alpha$ of a four vector $\textbf{x} = (x^0, x^1, x^2, x^3)$, is given (writing the explicit sum) as $\sum_{\beta = 0}^{3}\Lambda_\beta^\alpha x^\beta$.
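To make that first reading concrete (this is just my own example of a boost along the $x$-axis with speed $v$, writing $\gamma = 1/\sqrt{1 - v^2/c^2}$ and primes for the new components), I would interpret the equation as the matrix product

$$\begin{pmatrix} x'^0 \\ x'^1 \\ x'^2 \\ x'^3 \end{pmatrix} = \begin{pmatrix} \gamma & -\gamma v/c & 0 & 0 \\ -\gamma v/c & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix},$$

so that each $x'^\alpha$ is a single number built from the components $x^\beta$.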
I have also seen it written that $x^\alpha = (ct, x, y, z)$, which confuses me, since I understood $x^\alpha$ to be a component rather than a vector itself. Though, if we understand $x$'s with superscripts to be vectors, then what could $\Lambda_\beta^\alpha x^\beta$ possibly mean? Given that there is an implied summation over $\beta$, it doesn't make sense to me that $x^\beta$ could be a vector, and not just a component.
On the other hand, I've also heard that greek letter superscripts can be thought of as meaning "in this coordinate system", meaning $x^\alpha$ is a four vector -- not just a component -- in a coordinate system labelled $\alpha$, $x^\beta$ is the coordinates of the same vector in a coordinate system labelled $\beta$, and $\Lambda_\alpha^\beta$ actually $\textit{is}$ a matrix, and not just an entry in a matrix.
It seems like with tensors, it's the same thing, except that we write $\delta_i^j$, for some reason. I understand that superscripts are for contravariant components and subscripts are for covariant components, but I have no idea why this matters for a function which can only be either 0 or 1. Surely, no matter what $i$ and $j$ are, the end result is the same, regardless of how high up the $\delta$ we've chosen to write the indices?
I get that the matrix whose (i,j)th entry is $\delta_i^j$ would be the identity matrix, but surely the function $\delta_i^j$ isn't a matrix itself? I just get really confused when the same symbol means a bunch of different things! Also, if $\delta_i^j$ is thought of as a matrix, assuming that its subscript index tells us the column and the superscript index tells us the row, is this not assigning some sort of different contra/co variance between rows and columns?
I think you've pretty much answered the question yourself. Notations like $x^\alpha$ and $\Lambda^\alpha_\beta$ can be read either as a notation for a whole tensor or as a notation for one of its components. If you like, you can imagine this as two different notations where the translation between the two notations is trivial.
One thing that seems missing from what you said is the distinction between abstract index notation and concrete index notation. Most relativists today use Greek indices for concrete indices, which, as you say, have meaning in a certain coordinate system. But they use Latin indices as abstract indices, meaning that the expressions they're writing aren't in any particular coordinate system, and would be valid in any coordinate system.
Also, it's important in general to keep the order of indices straight, so rather than writing things like $a^j_i$, with the upper and lower index stacked directly above one another, you want to write something like ${a^j}_i$ or ${a_i}^j$, which makes the horizontal order of the indices explicit. This distinction is only irrelevant if the tensor is symmetric.
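As a quick illustration of why the order matters (my own example, not something from your post): if $a_{ij}$ is not symmetric, then raising the first slot and raising the second slot give genuinely different mixed tensors,

$${a^j}_i = g^{jk} a_{ki} \qquad \text{vs.} \qquad {a_i}^j = g^{jk} a_{ik},$$

and the two agree exactly when $a_{ij} = a_{ji}$.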
Scalar, vector, tensor - a mathematical representation of a physical entity that may be characterized by a magnitude and/or directions associated with it. Scalars, vectors and tensors are quantities that do not change if the system of coordinates is changed (e.g. between Cartesian, cylindrical, spherical).1)2)
Vectors can be analysed from the viewpoint of covariant and contravariant components, and can be transformed between various coordinate systems, including non-orthogonal ones. There are many detailed implications regarding vectors and calculations based on them, and it is best to study the relevant textbooks and literature, as well as to practice with simpler cases before performing more complex calculations.6) This article contains only the most basic information.
Mathematically, a scalar can be shown to be a tensor of rank zero, or a vector in a 1D space, so that its amplitude can be positive or negative. Scalars can also be used in spaces with more dimensions (2D, 3D, 4D). For example, rest mass is a scalar in 4D spacetime.
Vectors and vector components can be translated between different coordinate systems without changing their meaning or the value of the represented physical quantity. Such transformations are mathematically exact if performed in an analytical way.8)
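For instance (an illustrative example only): a vector with Cartesian components (1, 1) at the point (x, y) = (1, 0), i.e. r = 1, θ = 0, has polar contravariant components (v^r, v^θ) = (1, 1), obtained analytically from the partial derivatives of r and θ with respect to x and y. Its squared length computed with the polar metric, (v^r)² + r²(v^θ)² = 2, agrees exactly with the Cartesian value 1² + 1² = 2.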
In 2D space a vector has 2 components, so it is possible to perform vector calculations by using the tool of complex numbers.9) Vectors can also be used in 4D spaces10), and in any arbitrary number of dimensions, as required.
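As a small illustration of the complex-number approach (the numbers are my own example): the 2D vector (3, 4) can be written as the complex number z = 3 + 4i; rotating it by 90° amounts to multiplying by i, giving iz = -4 + 3i, i.e. the vector (-4, 3), and the magnitude |z| = 5 is unchanged.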