The space of all linear functionals \(f:V\rightarrow \mathbb{R}\), denoted \(V^{*}\)
The dual space has the same dimension as the corresponding vector space: given a space \(V\) with basis \((v_{1},\ldots,v_{n})\), there exists a dual space \(V^{*}\) with a dual basis \((v^{*}_{1},\ldots,v^{*}_{n})\).
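As a standard characterization (stated here for completeness, using the Kronecker delta defined below): the dual basis is defined by its action on the original basis vectors,
\begin{equation} v^{*}_{i}(v_{j}) = \delta_{ij} \end{equation}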
As a linear representation
A tensor can be represented as an array of any number of dimensions; it is essentially a generalization of scalars, vectors, and matrices. The specific “flavor” of a tensor (i.e. whether it is a scalar, vector, or matrix) is indicated by the tensor’s “rank”. For instance, a rank-0 tensor is a scalar, a rank-1 tensor is a vector, a rank-2 tensor is a matrix (e.g. a \(2\times2\) matrix), etc.
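A worked counting example (a standard fact, added here for illustration): a rank-\(r\) tensor over an \(n\)-dimensional space has \(n^{r}\) components, so a rank-2 tensor in three dimensions is a \(3\times3\) matrix with
\begin{equation} n^{r} = 3^{2} = 9 \text{ components} \end{equation}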
...
\begin{equation} \delta_{ij}= \begin{cases} 0 & \text{if } i \neq j\\ 1 & \text{if } i = j \end{cases} \end{equation}
a basis for an \(n\)-dimensional vector space \(V\) is any ordered set of \(n\) linearly independent vectors \((\mathbf{e}_{1}, \mathbf{e}_{2},\ldots,\mathbf{e}_{n})\)
An arbitrary vector \(\mathbf{x}\) in \(V\) can be expressed as a linear combination of the basis vectors:
\begin{equation} \mathbf{x}\,=\,\sum\limits_{i = 1}^{n} \mathbf{e}_{i}x^{i} \end{equation}
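For instance (a worked example with illustrative numbers): in \(\mathbb{R}^{2}\) with basis \((\mathbf{e}_{1},\mathbf{e}_{2})\), the vector with components \(x^{1}=3\) and \(x^{2}=2\) is
\begin{equation} \mathbf{x} = \mathbf{e}_{1}x^{1} + \mathbf{e}_{2}x^{2} = 3\,\mathbf{e}_{1} + 2\,\mathbf{e}_{2} \end{equation}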
See Bases Transformation, Coordinate Transformation
An orthonormal basis is a basis in which all the vectors are of unit length and mutually perpendicular (e.g. the standard basis of the Cartesian plane)
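Equivalently (a standard restatement in terms of the Kronecker delta above), a basis is orthonormal when
\begin{equation} \mathbf{e}_{i} \cdot \mathbf{e}_{j} = \delta_{ij} \end{equation}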
Consider two bases \((\mathbf{e}_{1},\mathbf{e}_{2})\) and \((\mathbf{\tilde{e}}_{1},\mathbf{\tilde{e}}_{2})\), where we consider the former the old basis and the latter the new basis.
Each of the vectors \(\mathbf{\tilde{e}}_{1},\mathbf{\tilde{e}}_{2}\) can be expressed as a linear combination of \((\mathbf{e}_{1},\mathbf{e}_{2})\):
\begin{equation} \mathbf{\tilde{e}}_{1} = \mathbf{e}_{1}S^{1}_{1} + \mathbf{e}_{2}S^{2}_{1} \qquad \mathbf{\tilde{e}}_{2} = \mathbf{e}_{1}S^{1}_{2} + \mathbf{e}_{2}S^{2}_{2} \tag{1.0} \end{equation}
(1.0) is the basis transformation formula, and the object \(S\) is the direct transformation \(\{S^{j}_{i},\; 1 \leq i,j \leq 2\}\) (here a \(2\times2\) matrix), which can also be written in matrix form:
\begin{equation} \begin{bmatrix} \mathbf{\tilde{e}}_{1} & \mathbf{\tilde{e}}_{2} \end{bmatrix} = \begin{bmatrix} \mathbf{e}_{1} & \mathbf{e}_{2} \end{bmatrix} \begin{bmatrix} S^{1}_{1} & S^{1}_{2}\\ S^{2}_{1} & S^{2}_{2} \end{bmatrix} \end{equation}
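As a worked example (illustrative numbers, not from the original notes): if \(\mathbf{\tilde{e}}_{1} = 2\mathbf{e}_{1}\) and \(\mathbf{\tilde{e}}_{2} = \mathbf{e}_{1} + \mathbf{e}_{2}\), then reading off the coefficients in (1.0) gives
\begin{equation} S = \begin{bmatrix} S^{1}_{1} & S^{1}_{2}\\ S^{2}_{1} & S^{2}_{2} \end{bmatrix} = \begin{bmatrix} 2 & 1\\ 0 & 1 \end{bmatrix} \end{equation}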
...
\begin{equation} \mathbf{I}(\mathbf{X})=\mathbf{X} \end{equation}
where the entries of any \(n\times n\) identity matrix are given by the Kronecker delta, i.e.
\begin{equation} \mathbf{I}_{ij}\,=\,\delta_{ij} \end{equation}
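For example (a direct instance of the definition above), the \(3\times3\) identity matrix is
\begin{equation} \mathbf{I} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \end{equation}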
The inverse of a matrix \(\mathbf{A}\) is the matrix \(\mathbf{A}^{-1}\) which, when multiplied by \(\mathbf{A}\), results in the identity matrix.
\begin{equation} \mathbf{A}\mathbf{A}^{-1} = \mathbf{I} \end{equation}
e.g.
\begin{equation} \begin{bmatrix} a & b\\ c & d \end{bmatrix} \left( \frac{1}{ad-bc} \begin{bmatrix} d & -b\\ -c & a \end{bmatrix} \right) = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \end{equation}
(provided the determinant \(ad-bc\) is nonzero)
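A quick numeric check (illustrative numbers): with \(a=2,\,b=1,\,c=1,\,d=1\), the determinant is \(ad-bc=1\), so
\begin{equation} \begin{bmatrix} 2 & 1\\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1\\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \end{equation}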
also known as a linear space
A collection of objects known as “vectors”. In Euclidean space these can be visualized as simple arrows with a direction and a length, but this analogy does not necessarily translate to all spaces.
Addition and multiplication of these objects (vectors) must adhere to a set of axioms for the set to be considered a “vector space”.
Addition (+) \begin{equation} +\,:\,V\,\times\,V\,\longrightarrow\,V \end{equation}
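For instance (an illustrative example of closure under this map): in \(\mathbb{R}^{2}\),
\begin{equation} (1, 2) + (3, 4) = (4, 6) \in \mathbb{R}^{2} \end{equation}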
...