In probability theory, Markov’s inequality tells us that for a non-negative random variable $X$ and any $a > 0$:

$$\Pr(X \geq a) \leq \frac{\mathbb{E}[X]}{a}$$

i.e., we can construct an upper bound using only the expected value. This bound is usually quite loose, so it’s often not very useful on its own.
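As a quick sanity check (not from the original text), here is a small Monte Carlo sketch comparing the Markov bound against the empirical tail probability of an exponential random variable with mean 1; the variable names and the choice of distribution are illustrative assumptions.

```python
import random

# Sketch: empirically verify Markov's inequality P(X >= a) <= E[X]/a
# on a non-negative random variable (exponential with mean 1).
random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]
mean = sum(samples) / n

a = 3.0
empirical = sum(x >= a for x in samples) / n  # fraction of samples >= a
markov_bound = mean / a                       # E[X]/a

print(f"P(X >= {a}) ~ {empirical:.4f}, Markov bound: {markov_bound:.4f}")
```

For this distribution the true tail probability is $e^{-3} \approx 0.05$, while the Markov bound is about $1/3$, illustrating how loose the bound can be.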

Chebyshev’s inequality typically gives a tighter bound, and places no requirement on the sign of the random variable (though it does require a finite variance). It states that for any $k > 0$:

$$\Pr(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}$$

where $\mu = \mathbb{E}[X]$ and $\sigma^2 = \operatorname{Var}(X)$.

Because the inequality is stated in terms of the deviation $|X - \mu|$, we have to rewrite any probability of interest (like $\Pr(X \geq a)$) into this Chebyshev form before applying it.
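To make the rewriting step concrete, here is an illustrative sketch (not from the original text) that bounds a one-sided tail probability via Chebyshev. For $a > \mu$, we use the fact that $\Pr(X \geq a) \leq \Pr(|X - \mu| \geq a - \mu)$ and then apply the inequality with $k = (a - \mu)/\sigma$; the standard normal distribution and known $\mu$, $\sigma$ are assumptions for the demo.

```python
import random

# Sketch: bound P(X >= a) by rewriting it as a deviation probability.
# For a > mu:  P(X >= a) <= P(|X - mu| >= a - mu) <= 1/k^2,
# where k = (a - mu) / sigma.
random.seed(0)
n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]  # mu = 0, sigma = 1

mu, sigma = 0.0, 1.0
a = 2.0
k = (a - mu) / sigma            # deviation measured in standard deviations

empirical = sum(x >= a for x in samples) / n
chebyshev_bound = 1 / k**2      # here: 1/4

print(f"P(X >= {a}) ~ {empirical:.4f}, Chebyshev bound: {chebyshev_bound:.4f}")
```

For the standard normal, the true tail probability at $a = 2$ is about $0.023$, comfortably below the Chebyshev bound of $0.25$; the bound holds for any distribution with this mean and variance, which is why it is loose for any particular one.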