Approximate Message Passing (1)

10 Mar 2021


Approximate Message Passing (AMP) is a powerful tool for characterizing the solutions of certain sparse linear models.

Background

In many cases we need to work with linear models, perhaps because we need strong interpretability (e.g., valid confidence intervals for our conclusions), in contrast to deep learning models, which are often referred to as ‘black-box’ models. Consider the simple linear regression model \(y = X\beta + \epsilon\), where \(y \in \mathbb{R}^n\) is the vector of labels, \(X \in \mathbb{R}^{n \times p}\) is the data matrix, \(\beta \in \mathbb{R}^p\) is the vector of unknown parameters, and \(\epsilon\) is random noise.

This setting becomes particularly challenging for very high-dimensional data, for example when you have 100 patients and each patient’s genetic data has 10,000 features, so that \(p \gg n\).
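
To make the setting concrete, here is a minimal simulation sketch of such a sparse high-dimensional linear model. The sparsity level, noise scale, and Gaussian design below are illustrative assumptions for this example, not part of the model definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n = 100 patients, p = 10,000 genetic features.
n, p, k = 100, 10_000, 10            # k = assumed number of nonzero coefficients

X = rng.standard_normal((n, p))      # data matrix X in R^{n x p}

beta = np.zeros(p)                   # sparse unknown parameter vector in R^p
support = rng.choice(p, size=k, replace=False)
beta[support] = rng.standard_normal(k)

epsilon = 0.1 * rng.standard_normal(n)   # random noise (assumed scale 0.1)
y = X @ beta + epsilon                   # observed labels: y = X beta + epsilon
```

Here \(p \gg n\), so ordinary least squares is not even well defined, which is the regime where sparsity assumptions and tools such as AMP become relevant.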
