Singular Value Decomposition (SVD) factorises any matrix into orthogonal bases and singular values. It underpins PCA, low-rank approximation, recommender systems, and image compression.
## 1. Why SVD?
- Any rectangular matrix $A$ can be decomposed into interpretable pieces: rotations plus scalings.
- By truncating the singular values we get the best low-rank approximation in the Frobenius-norm sense (the Eckart–Young theorem).
- SVD is numerically stable and forms the backbone of many algorithms (LSA, PCA, collaborative filtering).
## 2. Definition
For $A \in \mathbb{R}^{m \times n}$:
$$A = U \Sigma V^\top$$
- $U$: left singular vectors (orthonormal columns, size $m \times m$)
- $\Sigma$: an $m \times n$ rectangular diagonal matrix holding the singular values $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$
- $V$: right singular vectors (orthonormal columns, size $n \times n$)
The leading singular values capture most of the energy of the matrix, i.e. its squared Frobenius norm $\|A\|_F^2 = \sum_i \sigma_i^2$.
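As a quick sanity check on the definition, here is a minimal sketch (a small random matrix stands in for $A$) showing the shapes and the exactness of the factorisation:

```python
import numpy as np
from scipy import linalg

A = np.random.default_rng(0).normal(size=(4, 3))
U, s, VT = linalg.svd(A)               # U: (4, 4), s: 3 values, VT: (3, 3)
Sigma = linalg.diagsvd(s, *A.shape)    # embed s into a 4x3 rectangular diagonal matrix
print(np.allclose(A, U @ Sigma @ VT))  # True: A = U Σ V^T
print(np.allclose(U.T @ U, np.eye(4)), np.allclose(VT @ VT.T, np.eye(3)))  # orthonormality
```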
## 3. Prepare an image
```python
import numpy as np
import matplotlib.pyplot as plt
import japanize_matplotlib
from scipy import linalg
from PIL import Image

# Load as greyscale, resize, and rotate so the resulting array has shape (163, 372)
img = Image.open("./sample.png").convert("L").resize((163, 372)).rotate(90, expand=True)
img  # display the image (in a notebook cell)
```
## 4. Compute the SVD
```python
X = np.asarray(img)
U, Sigma, VT = linalg.svd(X, full_matrices=True)
print(f"A: {X.shape}, U: {U.shape}, Σ: {Sigma.shape}, V^T: {VT.shape}")
```
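Note that `Sigma` comes back as a 1-D array of the $\min(m, n)$ singular values; embedding it into an $m \times n$ matrix reproduces the image exactly:

```python
Sigma_full = linalg.diagsvd(Sigma, *X.shape)  # 163x372 rectangular diagonal matrix
print(np.allclose(X, U @ Sigma_full @ VT))    # True: the factorisation is exact
```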
## 5. Low-rank approximation / compression
```python
for rank in [1, 2, 3, 4, 5, 10, 20, 50]:
    # Keep only the top `rank` singular triplets
    U_k = U[:, :rank]
    Sigma_k = np.diag(Sigma[:rank])
    VT_k = VT[:rank, :]
    approx = U_k @ Sigma_k @ VT_k  # best rank-k approximation of X
    plt.title(f"rank={rank}")
    plt.imshow(approx, cmap="gray")
    plt.show()
```
A handful of singular values already reconstructs the image reasonably well.
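To quantify this, a short sketch (reusing `U`, `Sigma`, and `VT` from above) compares the relative Frobenius error against the storage needed: a rank-$k$ factorisation stores only $k(m + n + 1)$ numbers instead of $mn$ pixels.

```python
m, n = X.shape
for rank in [5, 10, 20, 50]:
    approx = U[:, :rank] @ np.diag(Sigma[:rank]) @ VT[:rank, :]
    rel_err = linalg.norm(X - approx) / linalg.norm(X)  # relative Frobenius error
    storage = rank * (m + n + 1) / (m * n)              # fraction of the original size
    print(f"rank={rank}: error {rel_err:.3f}, storage {storage:.1%}")
```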
## 6. Inspect singular vectors
```python
total = np.zeros(X.shape)
for k in range(5):
    # Contribution of the (k+1)-th singular triplet alone: σ_k · u_k v_k^T
    component = Sigma[k] * np.outer(U[:, k], VT[k])
    total += component
    plt.figure(figsize=(5, 5))
    plt.suptitle(f"Contribution of $u_{k + 1}$")
    plt.subplot(211)
    plt.imshow(component, cmap="gray")
    plt.subplot(212)
    plt.plot(VT[k])  # the matching right singular vector
    plt.show()
plt.imshow(total, cmap="gray")  # running sum of the first five components
plt.show()
```
Each singular vector pair encodes a particular pattern in the image; adding more pairs refines the detail.
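A quick check (reusing `total` from the loop above) confirms that summing the first five rank-1 components gives exactly the rank-5 approximation:

```python
rank5 = U[:, :5] @ np.diag(Sigma[:5]) @ VT[:5, :]
print(np.allclose(total, rank5))  # True
```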
## 7. Practical tips
- Compression: choose $k$ so that the cumulative energy $\sum_{i=1}^k \sigma_i^2 / \sum_j \sigma_j^2$ reaches the desired fraction (see the sketch after this list).
- Noise reduction: dropping small singular values often acts as a denoiser.
- PCA: simply apply SVD to the centred data matrix; the right singular vectors correspond to principal axes.
- Cost: full SVD is $O(mn \min(m, n))$; use truncated or randomized SVD for large matrices.
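A minimal sketch combining the first and last tips: the rank selection reuses `Sigma` from above, and the truncated step assumes scikit-learn's `randomized_svd` is available.

```python
from sklearn.utils.extmath import randomized_svd

# Smallest rank whose cumulative squared singular values reach 95% of the energy
energy = np.cumsum(Sigma**2) / np.sum(Sigma**2)
k = int(np.searchsorted(energy, 0.95)) + 1
print(f"rank {k} retains {energy[k - 1]:.1%} of the energy")

# Truncated SVD without forming the full factorisation
U_k, S_k, VT_k = randomized_svd(np.asarray(img, dtype=float), n_components=k, random_state=0)
```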
## Summary
- SVD factorises matrices into rotations and scalings.
- Truncating the decomposition yields optimal low-rank reconstructions.
- The technique is ubiquitous—from PCA to recommendation engines and topic models.