I am working on a past Algebra exam paper and have come across a problem which requires me to write the linear operator associated to a given matrix $M$ in the standard basis of $\mathbb{R}^4$.
What does it mean to 'write a linear operator IN a given basis'?
Thank you.
NB: I have looked at this link, but I am not sure if the concept of writing a matrix in a given basis is synonymous with the concept of writing a linear operator in a given basis.
EDIT:
I think I should be more specific.
The statement of the problem I am working on is as follows:
Let $M$ be the matrix: $$\begin{bmatrix} -2 & 3 & 7 & -3 \\ -6 & 1 & 16 & 1 \\ -2 & 1 & 6 & -1 \\ -2 & -1 & 6 & 3 \\ \end{bmatrix}$$
Write the linear operator associated to $M$ in the standard basis of $\mathbb{R}^4$.
The model solution is as follows:
The linear operator associated to $M$ is the linear map given by
$$M \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \end{bmatrix} = \begin{bmatrix} -2x_1 + 3x_2 + 7x_3 -3x_4 \\ -6x_1 + x_2 + 16x_3 + x_4 \\ -2x_1 + x_2 + 6x_3 - x_4 \\ -2x_1 - x_2 + 6x_3 + 3x_4 \end{bmatrix} $$
My current issue is that I don't fully understand how the model solution is an example of the linear operator written in the standard basis of $\mathbb{R}^4$.
In particular, I'm still trying to grasp the concept of writing a linear operator IN a given basis. What does this really mean?
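As a sanity check on the model solution, here is a minimal sketch in Python/NumPy (the helper name `T` is just for illustration) that applies $M$ to an arbitrary vector and confirms it matches the componentwise formula above.

```python
import numpy as np

# The matrix M from the problem statement.
M = np.array([
    [-2,  3,  7, -3],
    [-6,  1, 16,  1],
    [-2,  1,  6, -1],
    [-2, -1,  6,  3],
])

def T(x):
    """The linear operator associated to M: x |-> M x."""
    return M @ x

# Apply T to an arbitrary vector and compare with the componentwise formula.
x1, x2, x3, x4 = 1.0, 2.0, 3.0, 4.0
expected = np.array([
    -2*x1 + 3*x2 +  7*x3 - 3*x4,
    -6*x1 +   x2 + 16*x3 +   x4,
    -2*x1 +   x2 +  6*x3 -   x4,
    -2*x1 -   x2 +  6*x3 + 3*x4,
])
assert np.allclose(T(np.array([x1, x2, x3, x4])), expected)
```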
6 Answers
A linear operator can be written as a matrix in a given basis.
For example, suppose we have the linear operator $T$ from $\mathbb{R}^2$ to $\mathbb{R}^2$ that maps $(x, y)$ to $T(x, y) = (x - y, 2y)$. Since it maps $\mathbb{R}^2$ to $\mathbb{R}^2$, it can be written as a $2 \times 2$ matrix $\begin{bmatrix}a & b \\ c & d \end{bmatrix}$. If we use the "standard basis" for $\mathbb{R}^2$, $(1, 0)$ and $(0, 1)$, then $(x, y) = x(1,0) + y(0, 1)$, so $\begin{bmatrix} x \\ y\end{bmatrix}$ is the representation in the standard basis. The operation, in matrix form, is $\begin{bmatrix}a & b \\ c & d \end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix} ax + by \\ cx + dy\end{bmatrix}$.
We want that to be $T(x, y) = \begin{bmatrix} x - y \\ 2y\end{bmatrix}$, so we must have $ax + by = x - y$ and $cx + dy = 2y$. That is two equations for the four unknowns $a$, $b$, $c$, and $d$, but remember that they must hold for all $x$ and $y$. In particular, taking $x = 1$, $y = 0$, we get $a(1) + b(0) = a = 1 - 0$, so $a = 1$, and $c(1) + d(0) = 2(0)$, so $c = 0$. Taking $x = 0$, $y = 1$, we get $a(0) + b(1) = 0 - 1$, so $b = -1$, and $c(0) + d(1) = 2(1)$, so $d = 2$. The matrix representing the linear operator $T$ in this particular basis is $\begin{bmatrix} 1 & -1 \\ 0 & 2\end{bmatrix}$. This particular choice of "$x = 1$, $y = 0$" and "$x = 0$, $y = 1$" makes the calculations especially easy, since $(1, 0)$ and $(0, 1)$ are the basis vectors. Notice that $\begin{bmatrix}a & b \\ c & d \end{bmatrix}\begin{bmatrix}1 \\ 0 \end{bmatrix} = \begin{bmatrix}a \\ c \end{bmatrix}$ and $\begin{bmatrix}a & b \\ c & d \end{bmatrix}\begin{bmatrix}0 \\ 1 \end{bmatrix} = \begin{bmatrix}b \\ d \end{bmatrix}$. That is, applying the linear operator to each basis vector in turn and writing the result as a linear combination of the basis vectors gives the coefficients that form the columns of the matrix.
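This column-by-column recipe is easy to check numerically; here is a minimal sketch in Python/NumPy (the function `T` below just encodes the example operator): applying $T$ to each standard basis vector and stacking the results as columns reproduces the matrix found above.

```python
import numpy as np

def T(v):
    """The operator T(x, y) = (x - y, 2y) from the example."""
    x, y = v
    return np.array([x - y, 2 * y])

# The columns of the matrix are the images of the standard basis vectors.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])
print(A)  # [[ 1. -1.]
          #  [ 0.  2.]]
```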
For another example, let the vector space be the set of all polynomials of degree at most 2, and let the linear operator $D$ be the differentiation operator. That is, any such "vector" can be written as $P = ax^2 + bx + c$, and $DP = 2ax + b$. If we take $\{x^2, x, 1\}$ as the basis, $ax^2 + bx + c$ will be written as $\begin{bmatrix} a \\ b \\ c\end{bmatrix}$. Applying the derivative operator to the first "basis vector", $x^2 = 1x^2 + 0x + 0$, gives $2x = 0x^2 + 2x + 0$, so the first column of the matrix representation is $\begin{bmatrix}0 \\ 2 \\ 0\end{bmatrix}$. Applying it to the second "basis vector", $x = 0x^2 + 1x + 0$, gives $1 = 0x^2 + 0x + 1$, so the second column is $\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$. Finally, applying it to the third "basis vector", $1 = 0x^2 + 0x + 1$, gives $0 = 0x^2 + 0x + 0$, so the third column is $\begin{bmatrix}0 \\ 0 \\ 0\end{bmatrix}$. The matrix representing the derivative operator in this basis is $\begin{bmatrix}0 & 0 & 0 \\ 2 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix}$.
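The same recipe can be sketched in code for the derivative example, assuming we store the polynomial $ax^2 + bx + c$ as the coefficient vector $(a, b, c)$ (a choice made here only for illustration).

```python
import numpy as np

def D(p):
    """Derivative of a*x^2 + b*x + c, stored as coefficients (a, b, c):
    d/dx (a*x^2 + b*x + c) = 2a*x + b, i.e. coefficients (0, 2a, b)."""
    a, b, c = p
    return np.array([0.0, 2 * a, b])

# Apply D to each basis "vector" x^2, x, 1 and stack the images as columns.
basis = np.eye(3)  # coefficient vectors of x^2, x, and 1
Dmat = np.column_stack([D(v) for v in basis])
print(Dmat)  # [[0. 0. 0.]
             #  [2. 0. 0.]
             #  [0. 1. 0.]]
```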
A linear operator $T$ is a transformation from a vector space $V$ to a vector space $W$, over the same field, that is "linear", in the sense that $T(ax+by)=aT(x)+bT(y)$.
If we choose a basis $\{v_i\}$ in $V$ and a basis $\{w_i\}$ in $W$, then any vector and its image can be represented by components with respect to these bases, and, expanding by linearity over the bases, we can see that the linear operator is represented by a matrix.
But, in general, a linear transformation is a geometric object that is independent of the chosen bases, and it is represented by matrices with different entries in different bases.
The link that you mentioned shows how the matrix representation changes when we change the basis.
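Concretely, if $P$ is the invertible matrix whose columns are the new basis vectors written in the old basis, the two matrix representations of the same operator $T$ are related by the standard change-of-basis formula
$$[T]_{\text{new}} = P^{-1}\,[T]_{\text{old}}\,P.$$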
If $M:\Bbb R^4 \to \Bbb R^4$ is linear, then this means that for all $v,w \in \Bbb R^4$ and all $\lambda, \mu \in \Bbb R$, $$M(\lambda v + \mu w)=\lambda M(v) + \mu M(w).$$
Now given a basis $\{b_i\}_{i \in \{1,\ldots,4\}}$, the linear transformation $M$ is fixed by its action on this basis. Since for every vector $v \in \Bbb R^4$ we have $v=\sum_i\alpha_i b_i$ (with the $\alpha_i$ unique real numbers), it follows that $M(v)=\sum_i\alpha_i M(b_i)$. So once we know how $M$ acts on the basis, we know how $M$ acts on any vector written in the coordinates of that basis. Also, because we are mapping from the space to itself, we can expand $M(b_i)=\sum_j m_{ji} b_j$, and the $m_{ji}$ are the matrix elements of $M$ in that basis.
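For instance, with the $M$ of the question and the standard basis $\{e_1, \ldots, e_4\}$, the expansion $M(e_1) = \sum_j m_{j1} e_j$ just reads off the first column of the matrix:
$$M e_1 = \begin{bmatrix} -2 \\ -6 \\ -2 \\ -2 \end{bmatrix}, \qquad \text{i.e. } m_{11} = -2,\ m_{21} = -6,\ m_{31} = -2,\ m_{41} = -2.$$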
So in this case you are given $M$ as a matrix in the standard basis, and you can just act on an arbitrary column vector to see how a vector with coordinates in the standard basis changes under $M$.
Also note that the coordinates of a basis written in terms of itself will be of the form $(1,0,\ldots,0), (0,1,0,\ldots,0), \ldots$; it just so happens that there is a natural way to treat $\Bbb R^4$ as a vector space over the field $\Bbb R$, and because $1$ and $0$ exist and are distinct in a field, it is a natural choice to take as the standard basis the vectors with a single $1$ among their coordinates and the rest zero.
My two cents on expressing vectors in different bases.
Expressing a vector in a given basis means projecting its components (for instance $x, y$ in $\mathbb{R}^2$) along the basis vectors (in $\mathbb{R}^2$, $e_0$ and $e_1$). This is done with a linear operation: the application of a linear map, that is, a matrix. An example to clarify:
In $\mathbb{R}^2$ the standard basis is $B_1 = [e_0, e_1] = [(1, 0), (0, 1)] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$. We write $(x, y) = x(1,0) + y(0, 1)$, so $\begin{bmatrix} x \\ y \end{bmatrix}$ is the coordinate vector in $B_1$.
Then we can express any vector $\begin{bmatrix} x \\ y \end{bmatrix}$ of $\mathbb{R}^2$ in any other basis, say $B_2 = [(1, 1), (1, -1)] = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$, by solving $B_2\,c = \begin{bmatrix} x \\ y \end{bmatrix}$ for the coordinate vector $c$: $$c = B_2^{-1} \begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} (x+y)/2 \\ (x-y)/2 \end{bmatrix}.$$
The point $(1,1)$ expressed in $B_1$ is $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$. In $B_2$ it becomes $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, since $(1,1) = 1\cdot(1,1) + 0\cdot(1,-1)$.
In fact the new basis is obtained from $B_1$ by a rotation of $\pi/4$, a reflection, and a scaling by $\sqrt2$ across all dimensions ($\sqrt2$ is the norm of each basis vector of $B_2$, $\sqrt{1^2 + (\pm 1)^2}$): $$\sqrt2 \begin{bmatrix} \cos\frac{\pi}{4} & -\sin\frac{\pi}{4} \\ \sin\frac{\pi}{4} & \cos\frac{\pi}{4} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \sqrt2 \begin{bmatrix} \frac{\sqrt2}{2} & \frac{\sqrt2}{2} \\ \frac{\sqrt2}{2} & -\frac{\sqrt2}{2} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$
So you can think of expressing a vector in a different basis as applying a linear transformation (here a rotation, a reflection, and a scaling), i.e. a matrix, to the reference axes of your system, and then reading off your vector's coordinates in this transformed framework (with distances preserved up to the common scale factor $\sqrt2$).
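A minimal numerical check of the $B_2$ example above (Python/NumPy; solving $B_2 c = v$ is one standard way to obtain the coordinates $c$ of $v$ in the basis $B_2$):

```python
import numpy as np

# Columns of B2 are the new basis vectors (1, 1) and (1, -1).
B2 = np.array([[1.0,  1.0],
               [1.0, -1.0]])

v = np.array([1.0, 1.0])     # the point (1, 1) in standard coordinates
c = np.linalg.solve(B2, v)   # coordinates of v in the basis B2
print(c)                     # [1. 0.], since (1,1) = 1*(1,1) + 0*(1,-1)

# B2 itself factors as sqrt(2) * (rotation by pi/4) * (a reflection):
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
F = np.diag([1.0, -1.0])     # reflection flipping the second axis
assert np.allclose(np.sqrt(2) * R @ F, B2)
```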
For part A, do I need to prove that $V$ is a basis by decomposing $V$ into the identity matrix? If that is successful, then $V$ is a basis.
Also, for part B, do we need to multiply $B$ and $V$ to get the matrix $K$?
Assume that you have the result of applying the transformation to each vector of a given basis:
$$T(e_1) = (a_1, a_2, \ldots, a_n)$$
$$T(e_2) = (b_1, b_2, \ldots, b_n)$$
and so on. From these results and the given basis you can find the transformation matrix (by arranging the results as columns). That matrix is with respect to the given basis.
Now if we change the basis and write down the result for each of the new basis vectors, we get a new set of results, different from the initial one. These can again be arranged as columns to form the new transformation matrix, which is in general different from the initial transformation matrix, even though both represent the same linear operator.
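A minimal sketch of that last point in Python/NumPy (the matrix $M$ is the one from the question; the invertible $P$ below is an arbitrary choice, just for illustration): the same operator gets a different matrix $P^{-1} M P$ in the new basis, while basis-independent quantities such as the trace and determinant are unchanged.

```python
import numpy as np

M = np.array([[-2.,  3.,  7., -3.],
              [-6.,  1., 16.,  1.],
              [-2.,  1.,  6., -1.],
              [-2., -1.,  6.,  3.]])

# Columns of P are the new basis vectors; any invertible matrix will do.
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))

M_new = np.linalg.inv(P) @ M @ P  # same operator, new basis

assert not np.allclose(M_new, M)                           # different entries...
assert np.isclose(np.trace(M_new), np.trace(M))            # ...same trace
assert np.isclose(np.linalg.det(M_new), np.linalg.det(M))  # ...same determinant
```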