Wednesday, December 29, 2021

How to estimate the parameters of the Negative Binomial (NB) distribution?

NB models the number of trials n (e.g. coin tosses) required to get k successes (heads of the coin).

p = probability of success (the fairness of the coin) — this is the parameter to be estimated.


{

pmf Derivation

The sequence of trials ends on observing the kth success, so the nth trial is the kth success.

The first n−1 trials therefore contain k−1 successes, and those successes can be distributed in all possible ways among the n−1 positions, i.e. in

C(n−1, k−1) ways, each such pattern having probability p^(k−1) (1−p)^(n−k).

And we know that the probability of success at the nth position is p.

The first n−1 trials and the nth trial are independent, so we can multiply them to get the joint probability,

so

PMF = p · C(n−1, k−1) p^(k−1) (1−p)^(n−k)

    = C(n−1, k−1) p^k (1−p)^(n−k)
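As a numerical sanity check, the derived pmf can be coded directly; `nb_pmf` below is an illustrative helper name, not a library function:

```python
from math import comb

def nb_pmf(n, k, p):
    """P(N = n): probability that the k-th success occurs on trial n."""
    if n < k:
        return 0.0
    return comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

# Sanity check: the pmf should sum to 1 over n = k, k+1, ...
k, p = 3, 0.4
total = sum(nb_pmf(n, k, p) for n in range(k, 500))
print(round(total, 6))  # ≈ 1.0
```

Note the support starts at n = k: you cannot see the kth success in fewer than k trials.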

----------------------------------------------------------------------

Compounding probability distribution

Let's say p ~ Beta(α, β), i.e. the parameter p is itself a random variable with a Beta prior.

P(no. of trials = n, prob of success = p)

 = NBpmf(n, k, p) · Betapdf(p | α, β)

For each value of p between 0 and 1 we weight the NB pmf by the Beta density and integrate, which gets rid of the variable p. [This is a predictive distribution; when α and β are the posterior parameters, it is the posterior predictive distribution.]

Beta-compounded Negative Binomial PMF = ∫₀¹ NBpmf(n, k, p) · Betapdf(p | α, β) dp

BNBpmf = ∫₀¹ C(n−1, k−1) p^k (1−p)^(n−k) · p^(α−1) (1−p)^(β−1) / B(α, β) dp

       = C(n−1, k−1)/B(α, β) · ∫₀¹ p^(k+α−1) (1−p)^(n−k+β−1) dp

       = C(n−1, k−1) · B(k+α, n−k+β) / B(α, β)
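The closed form can be cross-checked against direct numerical integration of the integrand above; a minimal sketch (the function names `bnb_pmf` and `bnb_numeric` are made up for illustration), using `lgamma` to evaluate B(a, b) = Γ(a)Γ(b)/Γ(a+b):

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    """log B(a, b) via log-gamma, for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bnb_pmf(n, k, alpha, beta):
    """Closed form: C(n-1, k-1) * B(k+alpha, n-k+beta) / B(alpha, beta)."""
    return comb(n - 1, k - 1) * exp(log_beta(k + alpha, n - k + beta)
                                    - log_beta(alpha, beta))

def bnb_numeric(n, k, alpha, beta, steps=100000):
    """Midpoint-rule integration of NBpmf(n,k,p) * Betapdf(p|alpha,beta) over p."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) * h
        nb = comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)
        bd = p**(alpha - 1) * (1 - p)**(beta - 1) / exp(log_beta(alpha, beta))
        total += nb * bd * h
    return total

print(bnb_pmf(7, 3, 2.0, 5.0))      # closed form
print(bnb_numeric(7, 3, 2.0, 5.0))  # should agree closely
```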


}


Now assume the following sample is observed for n, i.e. the number of trials needed to achieve k successes, and we want to estimate p, the probability of success:

[n1, n2, n3, ..., nm]


pmf1 = C(n1−1, k−1) p^k (1−p)^(n1−k)


The joint distribution of all the observed samples, which are independent, is the product of the individual pmfs:


L = Π_{i=1}^{m} pmf_i

Take the log to make taking the derivative simple:

LL = Σ_{i=1}^{m} log(pmf_i)

   = Σ_{i=1}^{m} log( C(ni−1, k−1) p^k (1−p)^(ni−k) )

   = Σ_{i=1}^{m} log C(ni−1, k−1) + mk·log(p) + Σ_{i=1}^{m} (ni−k)·log(1−p)

To maximize the log-likelihood, take the derivative with respect to p and equate it to zero:

d(LL)/dp = mk/p − Σ(ni−k)/(1−p) = 0

mk(1−p) = p(Σni − mk)

mk = p·Σni

p = mk / Σni
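A quick simulation illustrates that p̂ = mk/Σni recovers the true p; `sample_nb` is an assumed helper that draws one NB sample by direct coin flipping:

```python
import random

def sample_nb(k, p, rng):
    """Draw n: number of trials until the k-th success."""
    n, successes = 0, 0
    while successes < k:
        n += 1
        if rng.random() < p:
            successes += 1
    return n

rng = random.Random(0)
k, true_p = 5, 0.3
samples = [sample_nb(k, true_p, rng) for _ in range(20000)]
m = len(samples)
p_hat = m * k / sum(samples)
print(p_hat)  # close to true_p = 0.3
```

Intuitively the estimate is (total successes) / (total trials), which is exactly the empirical success rate.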



