`P(A ∪ B) = P(A) + P(B) − P(A ∩ B)`
If A and B are mutually exclusive (disjoint), this simplifies to:
`P(A ∪ B) = P(A) + P(B)`
`P(A ∩ B) = P(A|B) * P(B)`
If A and B are independent then:
`P(A ∩ B) = P(A) * P(B)`
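A minimal sketch of the multiplication rule for independent events, using two fair coin flips (the numbers are made up for illustration):

```python
# Two fair coin flips, assumed independent.
p_a = 0.5  # P(A): first flip is heads
p_b = 0.5  # P(B): second flip is heads

# Multiplication rule: valid only because the events are independent.
p_a_and_b = p_a * p_b
print(p_a_and_b)  # 0.25
```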
The product rule can also be rearranged into Bayes’ theorem (baby version):
`P(A|B) = (P(A ∩ B)) / (P(B))`
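The baby version is just the conditional-probability formula; a quick numeric sketch (probabilities are made up for illustration):

```python
# P(A|B) = P(A ∩ B) / P(B), with made-up numbers.
p_a_and_b = 0.12  # P(A ∩ B)
p_b = 0.3         # P(B)

p_a_given_b = p_a_and_b / p_b
print(p_a_given_b)  # ≈ 0.4
```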
Two events A and B can be:
- Dependent: if A occurs it affects P(B), or vice-versa.
- Independent: if A occurs it does not affect P(B), or vice-versa.
- Mutually exclusive: A and B cannot both occur. Always dependent.
- Complementary: A and B are the only two possible (disjoint) events of the same random process.
A and B disjoint =>
` P(A ∩ B) = 0`
A and B complementary =>
`P(A) + P(B) = 1`
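A short sketch tying the two rules together, with a made-up probability:

```python
# Complementary events: P(A) + P(B) = 1, and they are disjoint.
p_a = 0.7            # made-up value
p_b = 1 - p_a        # complement rule
p_a_and_b = 0.0      # disjoint: A and B cannot both occur

# The general union rule collapses to P(A) + P(B).
p_a_or_b = p_a + p_b - p_a_and_b
print(p_a_or_b)  # 1.0
```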
Posterior probability - P(hypothesis | data)
TODO - Bayesian inference
A random variable has a binomial distribution when:
- the number of trials `n` is fixed,
- the trials are independent,
- each trial has exactly two outcomes (success / failure),
- P(success) is the same for each trial.
P(exactly k successes in n trials):
`((n),(k)) * p^k * (1 - p)^(n - k)`
n choose k
`((n),(k)) = (n!) / (k! (n-k)!)`
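In Python, `math.comb` computes "n choose k" directly; the assertion below checks it against the factorial formula above:

```python
from math import comb, factorial

n, k = 5, 2
print(comb(n, k))  # 10

# math.comb agrees with the factorial formula n! / (k! (n-k)!).
assert comb(n, k) == factorial(n) // (factorial(k) * factorial(n - k))
```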
`μ = n * p`
`σ = sqrt(n * p * (1-p))`
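The mean and standard deviation formulas as a quick sketch, for example values `n = 10`, `p = 0.5` (made up for illustration):

```python
from math import sqrt

n, p = 10, 0.5               # made-up example values
mean = n * p                 # μ = n * p
sd = sqrt(n * p * (1 - p))   # σ = sqrt(n * p * (1 - p))
print(mean, sd)
```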
When n is sufficiently large, the binomial distribution can be approximated by the normal distribution.
Rule of thumb for “sufficiently large”:
`n * p ≥ 10, n * (1 − p) ≥ 10`
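The rule of thumb can be sketched as a small check (`normal_approx_ok` is a hypothetical helper name, not from the notes):

```python
def normal_approx_ok(n, p):
    # Rule of thumb from the notes: expected successes n*p and
    # expected failures n*(1-p) should both be at least 10.
    return n * p >= 10 and n * (1 - p) >= 10

print(normal_approx_ok(100, 0.5))  # True
print(normal_approx_ok(20, 0.1))   # False: n*p = 2 < 10
```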