In SVM, the Gaussian kernel is defined as:

$$K(x, y) = \exp\left(-\frac{\|x - y\|^2}{2\sigma^2}\right) = \phi(x)^\top \phi(y),$$

where $x, y \in \mathbb{R}^n$. I would also like to know whether

$$\phi\left(\sum_i c_i x_i\right) = \sum_i c_i\, \phi(x_i).$$
Answers:
The explicit feature map of the Gaussian kernel is infinite-dimensional. For notational simplicity, assume $x, y \in \mathbb{R}^1$ and $\gamma > 0$:

$$
\begin{aligned}
K(x, y) &= \exp(-\gamma \|x - y\|^2) = \exp(-\gamma (x - y)^2) \\
&= \exp(-\gamma x^2 + 2\gamma x y - \gamma y^2) \\
&= \exp(-\gamma x^2)\, \exp(-\gamma y^2)\, \exp(2\gamma x y) \\
&= \exp(-\gamma x^2)\, \exp(-\gamma y^2) \sum_{k=0}^{\infty} \frac{(2\gamma x y)^k}{k!} \\
&= \sum_{k=0}^{\infty} \left( \exp(-\gamma x^2) \sqrt{\tfrac{(2\gamma)^k}{k!}}\, x^k \right) \left( \exp(-\gamma y^2) \sqrt{\tfrac{(2\gamma)^k}{k!}}\, y^k \right) \\
&= \phi(x)^\top \phi(y),
\end{aligned}
$$

where

$$\phi(x) = \exp(-\gamma x^2) \left[ 1,\ \sqrt{\tfrac{2\gamma}{1!}}\, x,\ \sqrt{\tfrac{(2\gamma)^2}{2!}}\, x^2,\ \ldots \right]^\top.$$
This is also discussed in more detail in these slides by Chih-Jen Lin of NTU (slide 11 specifically). Note that in the slides $\gamma = \frac{1}{2\sigma^2}$ is used as the kernel parameter.
The equation in the OP only holds for the linear kernel.
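The series expansion above can be checked numerically. A minimal sketch (the helper `phi`, the truncation length `n_terms`, and the test values of $x$, $y$, $\gamma$ are my own choices, not from the answer):

```python
import numpy as np
from math import factorial

def phi(x, gamma=0.5, n_terms=30):
    # Truncated explicit feature map of the 1-D Gaussian kernel:
    #   phi(x)_k = exp(-gamma * x^2) * sqrt((2*gamma)^k / k!) * x^k
    ks = np.arange(n_terms)
    facts = np.array([factorial(int(k)) for k in ks], dtype=float)
    return np.exp(-gamma * x**2) * np.sqrt((2 * gamma) ** ks / facts) * x**ks

x, y, gamma = 0.7, -1.2, 0.5
approx = phi(x, gamma) @ phi(y, gamma)          # truncated inner product
exact = np.exp(-gamma * (x - y) ** 2)           # closed-form kernel value
print(abs(approx - exact))                      # truncation error is tiny
```

Thirty terms already drive the truncation error far below floating-point noise for moderate $x$, $y$, since the series converges factorially fast.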
For any valid psd kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, there exists a feature map $\varphi : \mathcal{X} \to \mathcal{H}$ such that $k(x, y) = \langle \varphi(x), \varphi(y) \rangle_{\mathcal{H}}$. The space $\mathcal{H}$ and embedding $\varphi$ in fact need not be unique, but there is an important unique pair $(\mathcal{H}, \varphi)$ known as the reproducing kernel Hilbert space (RKHS).
The RKHS is discussed by: Steinwart, Hush and Scovel, An Explicit Description of the Reproducing Kernel Hilbert Spaces of Gaussian RBF Kernels, IEEE Transactions on Information Theory 2006 (doi, free citeseer pdf).
It's somewhat complicated, but it boils down to this: for the kernel $k_\sigma(x, y) = \exp(-\sigma^2 \|x - y\|^2)$, define $\phi_\sigma : \mathbb{R}^d \to \ell^2$ as follows.
Let $(n_i)_{i \ge 0}$ be a sequence ranging over all $d$-tuples of nonnegative integers; if $d = 3$, perhaps $n_0 = (0, 0, 0)$, $n_1 = (0, 0, 1)$, $n_2 = (0, 1, 0)$, and so on. Denote the $j$th component of the $i$th tuple by $n_{ij}$.
Then the $i$th component of $\phi_\sigma(x)$ is

$$\prod_{j=1}^{d} \sqrt{\frac{(2\sigma^2)^{n_{ij}}}{n_{ij}!}}\; x_j^{n_{ij}}\, e^{-\sigma^2 x_j^2}.$$

So $\phi_\sigma$ maps vectors in $\mathbb{R}^d$ to infinite-dimensional complex vectors.
The catch to this is that we further have to define norms for these infinite-dimensional complex vectors in a special way; see the paper for details.
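A numerical sanity check of this multi-index construction is straightforward if we truncate each index to a finite range. A sketch, assuming the kernel parameterization $k_\sigma(x, y) = \exp(-\sigma^2 \|x - y\|^2)$; the cutoff `max_deg` and test points are my own choices:

```python
import numpy as np
from math import factorial
from itertools import product

def phi(x, sigma=0.8, max_deg=25):
    # Truncated multi-index embedding: one component per d-tuple n of
    # nonnegative integers (here restricted to each n_j < max_deg):
    #   prod_j sqrt((2*sigma^2)^n_j / n_j!) * x_j^n_j * exp(-sigma^2 * x_j^2)
    comps = []
    for n in product(range(max_deg), repeat=len(x)):
        c = 1.0
        for xj, nj in zip(x, n):
            c *= (np.sqrt((2 * sigma**2) ** nj / factorial(nj))
                  * xj**nj * np.exp(-sigma**2 * xj**2))
        comps.append(c)
    return np.array(comps)

x = np.array([0.3, -0.5])
y = np.array([1.0, 0.2])
sigma = 0.8
approx = phi(x, sigma) @ phi(y, sigma)
exact = np.exp(-sigma**2 * np.sum((x - y) ** 2))
print(abs(approx - exact))  # agreement up to truncation error
```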
Steinwart et al. also give a more straightforward (to my thinking) embedding into $L_2(\mathbb{R}^d)$, the Hilbert space of square-integrable functions $\mathbb{R}^d \to \mathbb{R}$:

$$\Phi_\sigma(x) = \frac{(2\sigma)^{d/2}}{\pi^{d/4}}\, e^{-2\sigma^2 \|x - \cdot\|^2},$$

i.e. $\Phi_\sigma(x)$ is the function $z \mapsto \frac{(2\sigma)^{d/2}}{\pi^{d/4}} e^{-2\sigma^2 \|x - z\|^2}$, and $\langle \Phi_\sigma(x), \Phi_\sigma(y) \rangle_{L_2} = e^{-\sigma^2 \|x - y\|^2}$.
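This $L_2$ embedding can be verified numerically in one dimension by discretizing the integral defining the inner product. A sketch (the integration grid and test values are my own choices):

```python
import numpy as np

sigma = 0.8
z = np.linspace(-10.0, 10.0, 20001)   # discretization of the real line
dz = z[1] - z[0]

def Phi(x):
    # L2 embedding for d = 1: Phi(x) is the function
    #   z -> (2*sigma)^(1/2) / pi^(1/4) * exp(-2*sigma^2 * (x - z)^2)
    return (2 * sigma) ** 0.5 / np.pi ** 0.25 * np.exp(-2 * sigma**2 * (x - z) ** 2)

x, y = 0.4, -1.1
inner = np.sum(Phi(x) * Phi(y)) * dz              # Riemann sum for <Phi(x), Phi(y)>
exact = np.exp(-sigma**2 * (x - y) ** 2)          # the Gaussian kernel value
print(abs(inner - exact))                          # discretization error is small
```

The integrand decays like a Gaussian, so the truncated Riemann sum over $[-10, 10]$ is accurate to many digits.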
These are not the only embeddings that work.
Another is based on the Fourier transform, which the celebrated paper of Rahimi and Recht (Random Features for Large-Scale Kernel Machines, NIPS 2007) approximates to great effect.
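The Rahimi–Recht idea can be sketched in a few lines: draw frequencies from the Gaussian's Fourier transform and average cosines. A minimal version for $k(x, y) = \exp(-\gamma \|x - y\|^2)$ (the dimensions, sample sizes, and test points are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, d, D = 0.5, 3, 20000   # kernel parameter, input dim, number of random features

# Random Fourier features: with w ~ N(0, 2*gamma*I) and b ~ Uniform[0, 2*pi],
# E[2 cos(w.x + b) cos(w.y + b)] = E[cos(w.(x - y))] = exp(-gamma * ||x - y||^2).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def z(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = np.array([0.2, -0.7, 1.0])
y = np.array([-0.3, 0.5, 0.4])
approx = z(x) @ z(y)                               # Monte Carlo kernel estimate
exact = np.exp(-gamma * np.sum((x - y) ** 2))
print(abs(approx - exact))                          # O(1/sqrt(D)) error
```

The approximation error shrinks like $1/\sqrt{D}$, which is what makes this practical for large-scale kernel machines.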
You can also do it using Taylor series: effectively the infinite version of Cotter, Keshet, and Srebro, Explicit Approximations of the Gaussian Kernel, arXiv:1109.4603.
It seems to me that your second equation will only be true if $\phi$ is a linear mapping (and hence $K$ is a linear kernel). As the Gaussian kernel is non-linear, the equality will not hold (except perhaps in the limit as $\gamma$ goes to zero).
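This failure of linearity can be seen directly from kernel values, without constructing $\phi$: if $\phi$ were linear, then $K(x_1 + x_2, y) = \langle \phi(x_1) + \phi(x_2), \phi(y) \rangle = K(x_1, y) + K(x_2, y)$. A quick check (test points are my own choices) shows this identity fails for the Gaussian kernel but holds for the linear one:

```python
import numpy as np

def k_gauss(x, y, gamma=0.5):
    # Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.sum((x - y) ** 2))

x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])
y = np.array([0.5, 0.5])

# Gaussian kernel: additivity fails, so phi cannot be linear.
lhs = k_gauss(x1 + x2, y)
rhs = k_gauss(x1, y) + k_gauss(x2, y)
print(lhs, rhs)            # clearly unequal

# Linear kernel k(x, y) = <x, y>: additivity holds exactly.
lhs_lin = (x1 + x2) @ y
rhs_lin = x1 @ y + x2 @ y
print(lhs_lin, rhs_lin)    # equal
```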