The basic equation is $Ax=\lambda x$. The number $\lambda$ is an eigenvalue of $A$.
When $A$ is squared, the eigenvectors stay the same. The eigenvalues are squared.
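As a quick numerical check of both facts, here is a small sketch with a hypothetical $2\times 2$ matrix (not from the text): $A$ has eigenvalues $1$ and $3$ with eigenvectors $(1,-1)$ and $(1,1)$, and $A^2$ keeps those eigenvectors while squaring the eigenvalues.

```python
# Hypothetical 2x2 example: A has eigenvalues 1 and 3.
A = [[2, 1], [1, 2]]

def matvec(M, x):
    """Multiply a 2x2 matrix M by a vector x."""
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

def matmul(M, N):
    """Multiply two 2x2 matrices."""
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x1, lam1 = [1, -1], 1   # eigenpair of A
x2, lam2 = [1, 1], 3    # eigenpair of A

assert matvec(A, x1) == [lam1*v for v in x1]   # A x = λ x
assert matvec(A, x2) == [lam2*v for v in x2]

A2 = matmul(A, A)
assert matvec(A2, x1) == [lam1**2*v for v in x1]  # same eigenvectors,
assert matvec(A2, x2) == [lam2**2*v for v in x2]  # eigenvalues squared
```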
For projection matrices $P$, we can see when $Px$ is parallel to $x$. The eigenvectors for $\lambda = 1$ and $\lambda = 0$ fill the column space and nullspace. The column space doesn't move $(Px = x)$. The nullspace goes to zero $(Px = 0x)$.
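To make this concrete, here is a hypothetical example (not from the text): $P$ projects onto the line through $(1,1)$. A vector on that line is in the column space and is unchanged, while a vector perpendicular to it is in the nullspace and is sent to zero.

```python
# Hypothetical example: P projects onto the line through (1, 1).
P = [[0.5, 0.5], [0.5, 0.5]]

def matvec(M, x):
    """Multiply a 2x2 matrix M by a vector x."""
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

x_col = [1, 1]    # in the column space: eigenvalue 1
x_null = [1, -1]  # in the nullspace:    eigenvalue 0

assert matvec(P, x_col) == [1.0, 1.0]   # Px = x
assert matvec(P, x_null) == [0.0, 0.0]  # Px = 0x
```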
We require $\det(A - \lambda I) = 0$ because we want a nonzero solution $x$: the system $(A - \lambda I)x = 0$ has one exactly when $A - \lambda I$ is singular.
If you add a row of $A$ to another row, or exchange rows, the eigenvalues usually change.
The product $\lambda_1 \lambda_2$ and the sum $\lambda_1+\lambda_2$ can be found quickly from the matrix.
The product of the $n$ eigenvalues equals the determinant. The sum of the $n$ eigenvalues equals the sum of the $n$ diagonal entries.
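A short check of both identities, using a hypothetical $2\times 2$ matrix (not from the text): for a $2\times 2$ matrix the characteristic polynomial is $\lambda^2 - (\operatorname{trace})\lambda + \det$, so its two roots can be compared directly with the trace and determinant.

```python
import math

# Hypothetical 2x2 example: A = [[4, 1], [2, 3]].
a, b, c, d = 4.0, 1.0, 2.0, 3.0

trace = a + d        # sum of diagonal entries = 7
det = a*d - b*c      # determinant = 10

# Roots of the characteristic polynomial λ² - (trace)λ + det = 0.
disc = math.sqrt(trace**2 - 4*det)
lam1 = (trace + disc) / 2
lam2 = (trace - disc) / 2

assert math.isclose(lam1 * lam2, det)     # product = determinant
assert math.isclose(lam1 + lam2, trace)   # sum = trace
```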
Suppose that $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$. Then the $\lambda_i$ are also the roots of the characteristic polynomial, i.e.
$\begin{array}{rcl} \det (A-\lambda I)=p(\lambda)&=&(-1)^n (\lambda - \lambda_1 )(\lambda - \lambda_2)\cdots (\lambda - \lambda_n) \\ &=&(-1) (\lambda - \lambda_1 )\,(-1)(\lambda - \lambda_2)\cdots (-1)(\lambda - \lambda_n) \\ &=&(\lambda_1 - \lambda )(\lambda_2 - \lambda)\cdots (\lambda_n - \lambda) \end{array}$

The first equality follows from the factorization of a polynomial given its roots; the leading (highest-degree) coefficient $(-1)^n$ can be obtained by expanding the determinant along the diagonal.
Now, setting $\lambda = 0$ (which we may do since $\lambda$ is a variable) gives $\det(A)$ on the left side and $\lambda_1\lambda_2\cdots\lambda_n$ on the right side, so we indeed obtain the desired result $\det(A) = \lambda_1 \lambda_2\cdots\lambda_n$.
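A concrete instance of this identity (a small example not taken from the text): for $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$,

$\det(A - \lambda I) = (2-\lambda)^2 - 1 = (1 - \lambda)(3 - \lambda),$

so the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 3$, and indeed setting $\lambda = 0$ gives $\det(A) = 4 - 1 = 3 = \lambda_1 \lambda_2$.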
For every matrix $A$ there exists a nonsingular matrix $P$ such that $PAP^{-1} = J$, where $J$ is in Jordan canonical form and carries the eigenvalues of $A$ on its diagonal. Now using $tr(ABC)=tr(CAB)=tr(BCA)$ (which is true whenever the products are defined), we obtain $tr(A) = tr(P^{-1}JP) = tr(PP^{-1}J) = tr(J) = \sum_i \lambda_i$, where the $\lambda_i$ are the eigenvalues of $A$.
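The invariance of the trace under similarity can be sketched numerically. In this hypothetical example (not from the text), $J$ is a diagonal Jordan form with distinct eigenvalues $5$ and $2$, $P$ is an invertible matrix chosen by hand, and $A = P^{-1}JP$; the trace of $A$ equals the sum of the eigenvalues on the diagonal of $J$.

```python
# Hypothetical check: tr(A) = tr(J) = Σ λ_i under similarity.
J = [[5, 0], [0, 2]]        # Jordan form (diagonal, eigenvalues 5 and 2)
P = [[1, 1], [0, 1]]        # an invertible matrix chosen by hand
P_inv = [[1, -1], [0, 1]]   # its inverse

def matmul(M, N):
    """Multiply two 2x2 matrices."""
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = matmul(matmul(P_inv, J), P)   # A = P⁻¹ J P, so P A P⁻¹ = J

trace_A = A[0][0] + A[1][1]
eig_sum = J[0][0] + J[1][1]       # eigenvalues sit on the diagonal of J
assert trace_A == eig_sum          # tr(A) = Σ λ_i
```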