Slutsky’s Theorem: Definition

Slutsky’s theorem is used to explore convergence in probability distributions. It tells us that if a sequence of random variables converges in distribution and another sequence converges in probability to a constant, then the two sequences converge jointly in distribution, so their sums, products, and quotients converge as well. Basically, it allows you to carry convergence results proved for one sequence over to other, closely related sequences.

Slutsky’s theorem has no real practical applications; its use is mostly limited to theoretical mathematical statistics (specifically, asymptotic theory). For example, it extends the usefulness of the Central Limit Theorem, as the simulation sketch after this list shows. Other uses include:

  • Exploring convergence in functions of random variables.
  • Highlighting critical properties of converging random variables.
  • Calculating the convergence of any continuous function of a set of statistics, provided that the set of statistics itself converges (Davidson, 1994).
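
The classic illustration of the CLT extension is the studentized mean: the CLT gives √n(X̄n − μ)/σ → N(0, 1) in distribution, the sample standard deviation Sn converges in probability to σ, and Slutsky’s theorem justifies swapping σ for Sn. Below is a minimal simulation sketch in Python (NumPy assumed; the distribution, sample size, and seed are illustrative choices, not from the source):

    # Studentized mean: Slutsky justifies replacing sigma with the sample SD.
    import numpy as np

    rng = np.random.default_rng(42)
    mu, n, reps = 5.0, 1_000, 10_000

    # Exponential(scale=2) shifted by 3: mean 5, standard deviation 2.
    samples = rng.exponential(scale=2.0, size=(reps, n)) + 3.0

    means = samples.mean(axis=1)
    sds = samples.std(axis=1, ddof=1)  # S_n, converges in probability to sigma

    t_stats = np.sqrt(n) * (means - mu) / sds  # sigma swapped for S_n

    # For large n the studentized statistic should look standard normal.
    print("mean (expect ~0):", t_stats.mean().round(3))
    print("sd   (expect ~1):", t_stats.std().round(3))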

Formal Definition of Slutsky’s Theorem

More formally, Manoukian (1986) defines Slutsky’s theorem as follows:

Let Xi be a sequence of random variables that converges in distribution to a random variable X with distribution function F(x), and let Yi be a sequence of random variables that converges in probability to a constant c. Then:

  1. Xi + Yi is distributed asymptotically as X + c.
  2. Xi Yi is distributed asymptotically as Xc.
  3. Xi / Yi is distributed asymptotically as X / c for c ≠ 0.
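
Each of the three parts can be checked numerically. In the hypothetical sketch below (a toy check, not a proof; all distributional choices and the seed are illustrative), Xn is drawn directly from N(0, 1), so it trivially converges in distribution to X ~ N(0, 1), and Yn = c + Z/√n converges in probability to c:

    import numpy as np

    rng = np.random.default_rng(0)
    c, n, reps = 3.0, 10_000, 50_000

    x_n = rng.standard_normal(reps)                    # Xn ->D N(0, 1)
    y_n = c + rng.standard_normal(reps) / np.sqrt(n)   # Yn ->P c

    # 1. Xn + Yn is approximately N(c, 1): mean ~3, sd ~1
    print((x_n + y_n).mean().round(3), (x_n + y_n).std().round(3))
    # 2. Xn * Yn is approximately N(0, c^2): mean ~0, sd ~3
    print((x_n * y_n).mean().round(3), (x_n * y_n).std().round(3))
    # 3. Xn / Yn is approximately N(0, 1/c^2): mean ~0, sd ~0.333 (c != 0)
    print((x_n / y_n).mean().round(3), (x_n / y_n).std().round(3))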

The theorem can also be written more succinctly as (from Proschan & Shaw, 2016):

Suppose that Xn →D X, An →P A, and Bn →P B, where A and B are constants, →D denotes convergence in distribution, and →P denotes convergence in probability. Then An Xn + Bn →D AX + B.
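
A quick numerical sketch of this affine form, in the same hypothetical simulation style as above (the constants and seed are arbitrary choices): An Xn + Bn should be approximately N(B, A²) for large n.

    import numpy as np

    rng = np.random.default_rng(7)
    A, B, n, reps = 2.0, -1.0, 10_000, 50_000

    x_n = rng.standard_normal(reps)                    # Xn ->D X ~ N(0, 1)
    a_n = A + rng.standard_normal(reps) / np.sqrt(n)   # An ->P A
    b_n = B + rng.standard_normal(reps) / np.sqrt(n)   # Bn ->P B

    combo = a_n * x_n + b_n                            # ->D AX + B ~ N(B, A^2)
    print(combo.mean().round(3), combo.std().round(3)) # expect ~ -1 and ~ 2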

Simple Example

First, we need to define two functions, g and h. The family of functions gi is defined as:

  • g1({xn}) = {xn}
  • g2({xn}) = {2xn}
  • g3({xn}) = {3xn}
  • …
  • gk({xn}) = {kxn}

Suppose the underlying sequence {xn} converges in probability to a constant c = μ, so that each gi converges in probability to iμ.
And h is defined, in terms of g, as the sum:
h(g1, g2, g3, …, gk) = g1 + g2 + … + gk

With reference to these two functions, Slutsky’s theorem tells us that the limit in probability of h(g1, g2, g3, …, gk) as n approaches infinity is the sum of the individual limits, μ + 2μ + … + kμ, which simplifies to:
(k(k + 1) / 2) · μ
(Adapted from Kapadia et al., 2005)
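
The example is easy to verify numerically. In the hypothetical sketch below, {xn} is taken to be a sample mean (one illustrative way to get convergence in probability to μ; the values of μ, k, n, and the seed are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    mu, k, n = 2.0, 4, 100_000

    x_n = rng.normal(loc=mu, scale=1.0, size=n).mean()  # x_n ->P mu

    h = sum(i * x_n for i in range(1, k + 1))  # h = g1 + g2 + ... + gk

    print("h              :", round(h, 3))
    print("k(k+1)/2 * mu  :", k * (k + 1) / 2 * mu)  # predicted limit: 20.0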

References:
Davidson, J. (1994). Stochastic Limit Theory: An Introduction for Econometricians.

Kapadia et al. (2005). Mathematical Statistics With Applications.

Manoukian (1986). Mathematical Nonparametric Statistics. CRC Press.

Proschan, M. & Shaw, P. (2016). Essentials of Probability Theory for Statisticians. CRC Press.
