Linear Independence

Definition

The vectors $v_1, v_2, \dots, v_k$ are linearly independent if the equation

$$c_1 v_1 + c_2 v_2 + \cdots + c_k v_k = 0$$

has only the trivial solution $c_1 = c_2 = \cdots = c_k = 0$. Equivalently, no vector in the set can be written as a linear combination of the others.

If a non-trivial solution exists, the vectors are linearly dependent.

Intuition

Independent vectors point in genuinely different directions — none can be "explained" by the others. Geometrically, two vectors in $\mathbb{R}^2$ are dependent if and only if they are collinear (one is a rescaling of the other). Independence is the key property that makes a spanning set a basis: it guarantees unique coordinates.
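The collinearity test in $\mathbb{R}^2$ can be sketched as a $2 \times 2$ determinant check (a minimal illustration; the function name and tolerance are ours):

```python
def collinear(u, v, tol=1e-12):
    """Two vectors in R^2 are dependent iff the determinant
    of the matrix with columns u, v vanishes (up to tolerance)."""
    return abs(u[0] * v[1] - u[1] * v[0]) <= tol

# (3, 6) is a rescaling of (1, 2), so the pair is dependent.
print(collinear((1, 2), (3, 6)))  # True
print(collinear((1, 2), (3, 5)))  # False
```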

Formal Description

Example (dependent). The vectors

$$v_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 2 \\ 4 \end{pmatrix}$$

are linearly dependent since $v_2 = 2v_1$, i.e. $2v_1 - v_2 = 0$ is a non-trivial solution.

Example (independent). The standard basis vectors $e_1, \dots, e_n$ of $\mathbb{R}^n$ are linearly independent since

$$c_1 e_1 + \cdots + c_n e_n = (c_1, \dots, c_n) = 0$$

forces $c_1 = \cdots = c_n = 0$.

Algorithmic check. Place the vectors as rows of a matrix and compute the reduced row echelon form. If any row becomes all zeros, the vectors are linearly dependent.
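The RREF check above can be sketched with SymPy, assuming it is available (`Matrix.rref` returns the reduced form together with the pivot column indices):

```python
from sympy import Matrix

def independent(vectors):
    # Rows of M are the candidate vectors.
    M = Matrix(vectors)
    rref, pivots = M.rref()
    # Fewer pivots than rows means a zero row appeared: dependence.
    return len(pivots) == M.rows

print(independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
print(independent([[1, 2], [2, 4]]))                   # False: row 2 is twice row 1
```

Working over exact rationals, as SymPy does here, avoids the floating-point tolerance issues discussed under Trade-offs.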

A set of $n$ linearly independent vectors in an $n$-dimensional space forms a basis for that space.

Applications

  • Determining whether the columns of a matrix $A$ form a basis (and hence whether $A$ has full column rank).
  • Feature selection: redundant (dependent) features contribute no new information.
  • Verifying that a proposed basis is valid before using it for decompositions.
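The full-column-rank check in the first bullet can be sketched with NumPy (the matrix is an illustrative stand-in):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # 3x2: the two columns are independent

# Columns of A are linearly independent iff rank(A) equals the column count.
full_column_rank = np.linalg.matrix_rank(A) == A.shape[1]
print(full_column_rank)  # True
```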

Trade-offs

Checking independence via RREF is exact but costs $O(mn^2)$ for an $m \times n$ matrix. For large matrices, approximate methods (rank estimation via the SVD) are preferred in practice, but they can misclassify nearly dependent vectors due to floating-point tolerances.
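The tolerance sensitivity of SVD-based rank estimation can be sketched as follows (a minimal illustration, not a robust implementation; the threshold rule is relative to the largest singular value):

```python
import numpy as np

def numerical_rank(A, rtol):
    # Count singular values above a threshold relative to the largest one.
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > rtol * s[0]))

# The second column is the first plus a tiny perturbation.
A = np.array([[1.0, 1.0],
              [2.0, 2.0 + 1e-10]])
print(numerical_rank(A, rtol=1e-15))  # 2: the perturbation is resolved
print(numerical_rank(A, rtol=1e-8))   # 1: near-dependence collapsed by the looser tolerance
```

The same matrix is classified as full rank or rank-deficient depending on the tolerance, which is exactly the misclassification risk noted above.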