The basic fact from calculus that powers the whole discussion is:

Proposition 1. For every $a > 0$,
$$\int_{-\infty}^{\infty} e^{-\pi a x^{2}}\,dx = \frac{1}{\sqrt{a}}\,.$$

The identity with $a = 1$ is proved by the familiar trick of calculating the square of the integral in polar coordinates. The general identity follows by the change of variable from $x$ to $\sqrt{a}\,x$.
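In more detail: squaring the integral for $a = 1$ and passing to polar coordinates gives
$$\Bigl(\int_{-\infty}^{\infty} e^{-\pi x^{2}}\,dx\Bigr)^{2} = \int_{\mathbb{R}^{2}} e^{-\pi(x^{2}+y^{2})}\,dx\,dy = \int_{0}^{2\pi}\!\int_{0}^{\infty} e^{-\pi r^{2}}\,r\,dr\,d\theta = 2\pi\cdot\frac{1}{2\pi} = 1,$$
and since the integrand is positive the integral itself is $1$.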
This fact generalizes to higher-dimensional integrals. Set $x = (x_{1}, \ldots, x_{n})$ and $dx = dx_{1}\,dx_{2}\cdots dx_{n}$, and let $A$ be a symmetric $n$ by $n$ matrix whose real part is positive definite.

Proposition 2. For such $A$,
$$\int_{\mathbb{R}^{n}} e^{-\pi\,{}^{t}x\,A\,x}\,dx = \det(A)^{-1/2}\,.$$
We work the calculation for the case $A$ diagonalizable: in that case there exists an orthogonal matrix $k$ (so ${}^{t}k\,k = I$) such that $kAk^{-1}$ is the diagonal matrix $D$ whose only nonzero entries are $a_{1}, \ldots, a_{n}$ along the diagonal. Then $A = k^{-1}Dk$ and
$${}^{t}x\,A\,x = {}^{t}y\,D\,y = a_{1}y_{1}^{2} + \cdots + a_{n}y_{n}^{2}\,,$$
where $y = kx$, using ${}^{t}x = {}^{t}y\,k$ and ${}^{t}k = k^{-1}$. Since $k$ is orthogonal $\det k = \pm 1$, and the change of variable from $x$ to $y = kx$ does not change the integral:
$$\int_{\mathbb{R}^{n}} e^{-\pi\,{}^{t}x\,A\,x}\,dx = \int_{\mathbb{R}^{n}} e^{-\pi\,{}^{t}y\,D\,y}\,dy = \prod_{i=1}^{n}\int_{-\infty}^{\infty} e^{-\pi a_{i}y_{i}^{2}}\,dy_{i} = \prod_{i=1}^{n} a_{i}^{-1/2} = \det(A)^{-1/2},$$
since $\det A = \det D = a_{1}\cdots a_{n}$.
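For example, if
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad k = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},$$
then $k$ is orthogonal, $kAk^{-1}$ is the diagonal matrix with entries $3$ and $1$, and
$$\int_{\mathbb{R}^{2}} e^{-\pi(2x_{1}^{2} + 2x_{1}x_{2} + 2x_{2}^{2})}\,dx_{1}\,dx_{2} = (3 \cdot 1)^{-1/2} = \det(A)^{-1/2}\,.$$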
The argument in general uses two additional observations: both sides of the equation vary continuously as functions of the entries in $A$, and any matrix with complex coefficients can be approximated to arbitrary accuracy by diagonalizable matrices.
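As a quick numerical illustration of Proposition 2 (no part of the argument), the $2$ by $2$ example above can be checked in a few lines of Python; the finite box $[-10, 10]^{2}$ replaces $\mathbb{R}^{2}$ only because the integrand is negligible outside it:

    # Numerical check of Proposition 2 for A = [[2, 1], [1, 2]], det A = 3.
    import numpy as np
    from scipy.integrate import dblquad

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    def integrand(y, x):
        v = np.array([x, y])
        return np.exp(-np.pi * (v @ A @ v))

    # The integrand is negligible outside [-10, 10]^2, so a finite box suffices.
    numerical, _ = dblquad(integrand, -10.0, 10.0, lambda x: -10.0, lambda x: 10.0)
    predicted = np.linalg.det(A) ** -0.5

    print(numerical, predicted)   # both are approximately 0.57735 = 3**(-1/2)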
The one-dimensional identity also extends to allow a linear term in the exponent:
$$\int_{-\infty}^{\infty} e^{-\pi a x^{2} + 2\pi bx}\,dx = \frac{1}{\sqrt{a}}\,e^{\pi b^{2}/a}\,.$$
This follows from Proposition 1 by completion of the square in the exponent and a change of variables.
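Explicitly,
$$-\pi a x^{2} + 2\pi bx = -\pi a\Bigl(x - \frac{b}{a}\Bigr)^{2} + \frac{\pi b^{2}}{a}\,,$$
so the change of variable $x \mapsto x + b/a$ reduces the integral to that of Proposition 1, multiplied by the constant $e^{\pi b^{2}/a}$.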
The generalization to $n$ dimensions replaces $a$ with $A$ as before and $b$ with the vector $b = (b_{1}, \ldots, b_{n})$:
$$\int_{\mathbb{R}^{n}} e^{-\pi\,{}^{t}x\,A\,x + 2\pi\,{}^{t}b\,x}\,dx = \det(A)^{-1/2}\,e^{\pi\,{}^{t}b\,A^{-1}b}\,.$$
This is proven exactly like Proposition 2.
If we write this integral as $I(A, b)$, then the integral of Proposition 2 is $I(A, 0)$, and this proposition can be rewritten as
$$I(A, b) = I(A, 0)\,e^{\pi\,{}^{t}b\,A^{-1}b}\,.$$
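The exponential factor can also be seen directly: since $A$ is symmetric,
$$-\pi\,{}^{t}x\,A\,x + 2\pi\,{}^{t}b\,x = -\pi\,{}^{t}(x - A^{-1}b)\,A\,(x - A^{-1}b) + \pi\,{}^{t}b\,A^{-1}b\,,$$
so the change of variable $x \mapsto x + A^{-1}b$ reduces $I(A, b)$ to $I(A, 0)\,e^{\pi\,{}^{t}b\,A^{-1}b}$.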