Another Bias Correction Method
John Ashburner


By integrating over k, we obtain the likelihood of y_i:

  • P(y_i|μ,σ,γ,β) = Σ_k P(y_i,k|μ_k,σ_k,γ_k,β) = Σ_k P(y_i|k,μ_k,σ_k,β) P(k|γ_k)

  • The likelihood of the entire dataset is derived by assuming independence of the voxels:

  • P(y|μ,σ,γ,β) = Π_i P(y_i|μ,σ,γ,β)

    • = Π_i { Σ_k γ_k ρ_i(β) (2πσ_k²)^(-1/2) exp{-(y_i - μ_k/ρ_i(β))²/(2σ_k²)} }

  • The likelihood is maximised when the following is minimised:

  • E = -log{P(y|μ,σ,γ,β)}

  • = -Σ_i log{ Σ_k γ_k ρ_i(β) (2πσ_k²)^(-1/2) exp{-(y_i - μ_k/ρ_i(β))²/(2σ_k²)} }



  • Non-parametric Generalisation

    • Consider a simple mixture model, where the means and variances are known, and therefore fixed by the model. Also assume that the means are equally spaced and the variances are identical. The only unknown parameters are the mixing proportions (γ). By representing the density of the kth Gaussian at y_i by φ_k(y_i), the negative log-likelihood of the model is:

    • E = -Σ_i log{ Σ_k γ_k φ_k(y_i) }
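Because only the mixing proportions are unknown, this non-parametric model can be fitted by a very simple EM iteration: the fixed densities φ_k(y_i) are computed once, and only γ is updated. The sketch below assumes equally spaced means spanning the data range, with the common standard deviation set to the spacing between means; the function name and those two choices are assumptions for illustration.

```python
import numpy as np

def fit_mixing_proportions(y, K=8, n_iter=50):
    """Fit only the mixing proportions gamma of a Gaussian mixture whose
    means are fixed and equally spaced over the data range and whose
    variances are identical (a histogram-like non-parametric density).
    (Function name, mean spacing, and variance choice are assumptions.)"""
    mu = np.linspace(y.min(), y.max(), K)    # equally spaced, fixed means
    sigma = mu[1] - mu[0]                    # one shared standard deviation
    # fixed Gaussian densities phi_k(y_i), shape (N, K), computed once
    phi = np.exp(-(y[:, None] - mu) ** 2 / (2.0 * sigma ** 2)) \
          / np.sqrt(2.0 * np.pi * sigma ** 2)
    gamma = np.full(K, 1.0 / K)              # start from uniform proportions
    for _ in range(n_iter):
        # E-step: responsibilities r_ik proportional to gamma_k * phi_k(y_i)
        r = gamma * phi
        r /= r.sum(axis=1, keepdims=True)
        # M-step: each gamma_k is the average responsibility of class k
        gamma = r.mean(axis=0)
    return mu, sigma, gamma
```

Each update can only decrease E = -Σ_i log{Σ_k γ_k φ_k(y_i)}, since this is a standard EM scheme with all parameters except γ held fixed.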

