Random Variables and Probability Functions — 2
This article is the second part of a two-part series.
Here we will look at CDFs and PDFs, some continuous distributions, and expected values and variances.
If you landed here directly, please read the last part first by clicking here. It’ll help you understand this one better because I’ll continue right where I left off.
Cumulative Distribution Function
CDF and PDF are directly related, but we need to understand what CDF is, first.
CDF is a function which gives us the probability that a random variable X’s value is less than or equal to some value, x.
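In symbols, that is →
Fₓ(x) = P(X ≤ x)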
- CDF’s curve starts from 0 and saturates at 1.
- CDF is always increasing, with an increasing value of x.
Let’s say we have a discrete random variable X that takes 5 values →
{x₁, x₂, x₃, x₄, x₅}
each with a respective probability →
{p₁, p₂, p₃, p₄, p₅}
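For such an X, the CDF is just a running sum of the probabilities →
Fₓ(x) = Σ pᵢ, where the sum runs over every i with xᵢ ≤ x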
Then, the CDF would look something like this →
Don’t be confused by this curve; I’ve taken a very arbitrary example with uneven values of x and their respective probabilities.
However, for a discrete RV, this is what a CDF graph would typically look like.
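If you want to see such a curve for yourself, here’s a minimal sketch in Python (the values and probabilities below are made up purely for illustration, and it assumes NumPy and Matplotlib are installed):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical values and probabilities for a discrete RV X
# (made-up numbers, just for illustration).
x = np.array([1.0, 2.5, 3.0, 4.5, 7.0])
p = np.array([0.10, 0.25, 0.20, 0.30, 0.15])  # must sum to 1

cdf = np.cumsum(p)  # F_X(x_i) = p_1 + ... + p_i

# A discrete CDF is a step function: it jumps by p_i at each x_i,
# stays flat in between, and climbs from 0 up to 1.
plt.step(x, cdf, where="post")
plt.xlabel("x")
plt.ylabel("F_X(x)")
plt.title("CDF of a discrete random variable")
plt.ylim(0, 1.05)
plt.show()
```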
This will help us understand the CDF for the next part — PDFs
Probability Density Functions
Before we dive into PDFs, we must understand the concept of probability in continuous random variables.
What do you think the value of P(X = x) would be, where X is a continuous random variable?
The answer is ZERO!
Yes, the probability at any given point is always 0. Sounds fascinating, no? Even though the probability at every individual point is 0, there is still a probability for an interval.
So, the idea is that the probability exists for an interval when dealing with continuous random variables.
Can you guess why? We can explain it using the same idea I mentioned earlier for continuous RVs (infinite precision).
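One way to see it: a single point is an interval of zero width. As you shrink an interval around a point x₀ down to nothing, the probability of X landing in it also shrinks to 0 →
P(X = x₀) = lim (as ε → 0) of P(x₀ − ε ≤ X ≤ x₀ + ε) = 0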
Formal definition and relation with CDF
A continuous random variable X with CDF Fₓ(x) is said to have a PDF fₓ(x) if, for all x₀, the following holds →
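Fₓ(x₀) = ∫ from −∞ to x₀ of fₓ(x) dx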
If the above definition is confusing, here’s a simpler version:
- CDF is the integration of PDF.
- PDF is the differentiation of CDF
That’s it. That’s all you need to know to move forward.
Properties of PDF
- The integral of the PDF from −∞ to +∞ is always equal to 1.
- The area under the curve (AUC) of the PDF is also equal to 1.
- Both of these statements are equivalent, if you can figure out why :p
- The PDF is always greater than or equal to 0.
What does fₓ(x) mean?
fₓ(x) doesn’t mean that we are talking about the probability at the point x. Rather, it tells us what the probability density is AROUND x.
While writing this article, I noticed a small mistake I had been making. There’s a difference between probability and probability density. We may use the terms interchangeably; however, they are different.
So, we don’t always mention it explicitly, but if you find fₓ(x) written somewhere, it means we are discussing the probability density around x.
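A handy way to read it: for a small width Δx,
fₓ(x) · Δx ≈ P(x < X ≤ x + Δx)
so the density only turns into an actual probability once you multiply it by the width of an interval.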
Continuous Distributions
Let’s have a look at a few continuous distributions.
1) Uniform Distribution
If X ~ Uniform[a,b], then every value x in [a,b] has the same probability density.
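In other words, the PDF is flat →
fₓ(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise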
Try making a CDF curve for this. It’s easy :D
2) Exponential Distribution
X ~ Exp(λ)
Here, we have 1 parameter — λ
An exponential distribution describes the waiting time between events that occur at a constant average rate λ.
Here’s a rough calculation of CDF of a random variable X~Exp(λ)
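Starting from the PDF and integrating →
fₓ(x) = λe^(−λx) for x ≥ 0
Fₓ(x) = ∫ from 0 to x of λe^(−λt) dt = 1 − e^(−λx) for x ≥ 0
(and Fₓ(x) = 0 for x < 0)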
This would help you get an idea of how to calculate CDF from PDF.
3) Normal Distribution
This is the last distribution that I want to touch upon, in this article.
I will write a detailed article on normal distribution later on, but for now, I just want to give a brief overview of it.
This is, by far, the most important distribution you’ll ever see.
Its graph is commonly referred to as a bell curve.
X ~ N(μ,σ)
There are 2 parameters — μ is the mean and σ is the standard deviation.
Sometimes, the notation X ~ N(μ,σ²) is used.
In this case, σ² is the variance.
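For completeness, the PDF of X ~ N(μ, σ²) is →
fₓ(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²))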
The CDF of a normal distribution has no closed-form expression, so it cannot be calculated directly. There are alternatives to it, though.
We’ll talk about them in a separate article :’)
Other Distributions
There are many distributions I did not mention that you should be aware of. However, it is not necessary to memorise them.
That’s just not practical either :’)
I will list some of the distributions for you to explore when you have time —
- Poisson distribution
- Beta distribution
- Gamma distribution
- Chi-squared distribution
- Cauchy distribution
These are just a few that come in handy if you at least know about them.
Poisson and Chi-squared are especially important ones to be familiar with :’)
Expectations and Variance
I’ll try to keep this brief.
Learning about random variables and probabilities without understanding expectation and variance is not fully beneficial.
Both the expected value (or expectation) and the variance can be defined for discrete as well as continuous random variables.
All my formulae will be for discrete random variables.
You just have to convert the summation sign into an integration sign (and the probabilities into the density), and you’ll have your formulae for continuous RVs.
Expectation
You already know what the mean/average is. I mentioned it in my “The Art of Descriptive Statistics” article as well.
The expectation is a way to describe the mean of a random variable.
It is the average value of a random variable over a large number of trials.
Denoted as E[X]
It can be calculated as the summation given below, over the support of our RV.
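E[X] = Σ xᵢ · P(X = xᵢ), summed over every xᵢ in the support of X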
What if you have a function of a random variable?
Let’s say, something like Y = g(X)
Where X and Y both are random variables and Y is a function of X.
In that case
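E[Y] = E[g(X)] = Σ g(xᵢ) · P(X = xᵢ)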
Note that here, the summation is over the support of X and not Y.
Certainly, you can also express the formula completely in terms of Y, as we did before (for X).
Variance
The variance of a random variable is a measure of the spread between its possible values.
We define variance as follows —
Var(X) = E[ (X − E[X])² ]
We can simplify this further, however, I’m not going to do that here.
Feel free to attempt it yourself. All you need to do is expand the squared term.
If you observe the simplified formula, you’ll see that we are using the concept of “the expected value of a function of a random variable”.
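(If you want to check your answer, the simplified form works out to Var(X) = E[X²] − (E[X])².)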
Some points to keep in mind —
- Given Y = aX+b, E[Y] = a*E[X] + b
- E[X+Y] = E[X] + E[Y]
- Generally, Var(X+Y) ≠ Var(X) + Var(Y)
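Here’s a quick numerical sanity check of these three points in Python; the particular distributions (an exponential X and a Y deliberately built from it) are arbitrary choices, just to make the dependence between X and Y obvious:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary, made-up distributions just for the check:
# X is exponential, and Y is built from X so the two are dependent.
X = rng.exponential(scale=2.0, size=1_000_000)
Y = 3 * X + rng.normal(size=X.size)

a, b = 5.0, 2.0
print(np.mean(a * X + b), a * np.mean(X) + b)   # E[aX + b] ≈ a·E[X] + b
print(np.mean(X + Y), np.mean(X) + np.mean(Y))  # E[X + Y] = E[X] + E[Y]
print(np.var(X + Y), np.var(X) + np.var(Y))     # generally NOT equal
```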
Wrapping Up
In conclusion, I would like to express my gratitude to all of my readers for taking the time to read this article.
I hope it was informative and helpful.
I will continue to research and write about related topics in the future, bringing you up-to-date information and insights on the subject matter.
I am planning to write an article on topics from linear algebra as well. So, once I’m done with the statistics part, I would be starting with that.
I would also love to hear feedback from you all, please feel free to reach out to me with any questions, comments or suggestions. I look forward to connecting with you all in the future with more such interesting articles.
Thank you again for your support.