>>> SD = linalg.clarkson_woodruff_transform(D, sketch_n_rows)  # slowest

That said, this method does perform well on dense inputs; it is just
slower on a relative scale.

References
----------
.. [1] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation
       and regression in input sparsity time. In STOC, 2013.
.. [2] David P. Woodruff. Sketching as a tool for numerical linear
       algebra. In Foundations and Trends in Theoretical Computer
       Science, 2014.

Examples
--------
Create a big dense matrix ``A`` for the example:

>>> import numpy as np
>>> from scipy import linalg
>>> n_rows, n_columns = 15000, 100
>>> rng = np.random.default_rng()
>>> A = rng.standard_normal((n_rows, n_columns))

Apply the transform to create a new matrix with 200 rows:

>>> sketch_n_rows = 200
>>> sketch = linalg.clarkson_woodruff_transform(A, sketch_n_rows, seed=rng)
>>> sketch.shape
(200, 100)

Now with high probability, the true norm is close to the sketched norm
in absolute value.

>>> linalg.norm(A)
1224.2812927123198
>>> linalg.norm(sketch)
1226.518328407333

Similarly, applying our sketch preserves the solution to the
least-squares problem :math:`\min \|Ax - b\|`.

>>> b = rng.standard_normal(n_rows)
>>> x = linalg.lstsq(A, b)[0]
>>> Ab = np.hstack((A, b.reshape(-1, 1)))
>>> SAb = linalg.clarkson_woodruff_transform(Ab, sketch_n_rows, seed=rng)
>>> SA, Sb = SAb[:, :-1], SAb[:, -1]
>>> x_sketched = linalg.lstsq(SA, Sb)[0]

As with the matrix norm example, ``linalg.norm(A @ x - b)`` is close
to ``linalg.norm(A @ x_sketched - b)`` with high probability.

>>> linalg.norm(A @ x - b)
122.83242365433877
>>> linalg.norm(A @ x_sketched - b)
166.58473879945151

Here the sketched residual exceeds the optimal residual by a factor of
roughly 1.36 (166.58 / 122.83).
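
The norm-preservation claim can also be checked empirically. The loop
below is a minimal illustrative sketch (the 10 repetitions are an
arbitrary choice, not part of the API): it draws several independent
sketches of ``A`` and records the relative error of the Frobenius norm.

>>> errs = []
>>> for _ in range(10):
...     # draw an independent sketch and compare norms
...     S = linalg.clarkson_woodruff_transform(A, sketch_n_rows, seed=rng)
...     errs.append(abs(linalg.norm(S) - linalg.norm(A)) / linalg.norm(A))

On typical runs the recorded errors are on the order of
``1 / np.sqrt(sketch_n_rows)`` or smaller, consistent with the
high-probability guarantee of [1].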
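
To make the relative-speed remark at the top of this section concrete,
here is a rough timing sketch. It is illustrative only: the 1% density
is an arbitrary choice, and wall-clock results depend on the machine
and SciPy version.

>>> import time
>>> from scipy import sparse
>>> D = sparse.random(n_rows, n_columns, density=0.01,
...                   format='csc', random_state=rng)
>>> t0 = time.perf_counter()
>>> SD_sparse = linalg.clarkson_woodruff_transform(D, sketch_n_rows,
...                                                seed=rng)  # CSC input
>>> t_sparse = time.perf_counter() - t0
>>> t0 = time.perf_counter()
>>> SD_dense = linalg.clarkson_woodruff_transform(D.toarray(), sketch_n_rows,
...                                               seed=rng)  # dense input
>>> t_dense = time.perf_counter() - t0

On typical hardware ``t_sparse`` comes out smaller than ``t_dense``,
reflecting the input-sparsity running time of [1], although the dense
path remains practical.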