1 - Discrete Random Variables

Frequently, when an experiment is performed, we are interested mainly in

some function of the outcome as opposed to the actual outcome itself.

For instance, in tossing dice, we are often interested in the sum of the two dice

and are not really concerned about the separate values of each die. 

That is, we may be interested in knowing that the sum is 7 and may not be concerned over

whether the actual outcome was (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), or (6, 1).

Also, in flipping a coin, we may be interested in the total number of heads that occur and not

care at all about the actual head–tail sequence that results. 

These quantities of interest, or, more formally, these real-valued functions defined on

the sample space, are known as random variables.

Because the value of a random variable is determined by the outcome of the experiment,

we may assign probabilities to the possible values of the random variable. 

Suppose that our experiment consists of tossing 3 fair coins. 

If we let Y denote the number of heads that appear,

then Y is a random variable taking on one of the values 0, 1, 2, 

and 3 with respective probabilities

𝑃{π‘Œ=0}=𝑃{(𝑇,𝑇,𝑇)}=

𝑃{π‘Œ =1}=𝑃{(𝑇,𝑇,𝑇),(𝑇,𝐻,𝑇),(𝐻,𝑇,𝑇)}=

𝑃{π‘Œ =2}=𝑃{(𝑇,𝐻,𝐻),(𝐻,𝑇,𝐻),(𝐻,𝐻,𝑇)}=

𝑃{π‘Œ =3}=𝑃{(𝐻,𝐻,𝐻)}=

Since Y must take on one of the values 0 through 3, we must have

1 = P(⋃_{i=0}^{3} {Y = i}) = Σ_{i=0}^{3} P{Y = i} = 1/8 + 3/8 + 3/8 + 1/8,

which, of course, is in accord with the preceding probabilities.
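The three-coin distribution above can be checked by direct enumeration; a minimal Python sketch (the variable names are my own, not from the text), using exact fractions so the probabilities come out as eighths:

```python
from itertools import product
from fractions import Fraction

# Enumerate all 2^3 = 8 equally likely head/tail sequences of 3 fair coins
outcomes = list(product("HT", repeat=3))

# Y = number of heads; accumulate the probability mass of each value of Y
pmf = {}
for outcome in outcomes:
    y = outcome.count("H")
    pmf[y] = pmf.get(y, Fraction(0)) + Fraction(1, len(outcomes))

# pmf is now {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8} and its values sum to 1
```

The same enumeration idea works for any random variable defined on a small finite sample space: list the outcomes, apply the function, and tally probabilities.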

Independent trials consisting of the flipping of a coin having probability 

p of coming up heads are continually performed until either a head occurs or

a total of n flips is made. If we let X denote the number of times the coin is flipped,

then X is a random variable taking on one of the values 1, 2, 3, . . . , n with respective probabilities

P{X = 1} = P{H} = p

P{X = 2} = P{(T,H)} = (1 − p)p

P{X = 3} = P{(T,T,H)} = (1 − p)^2 p

.

.

.

P{X = n − 1} = P{(T,T,⋯,T,H)} = (1 − p)^(n−2) p

P{X = n} = P{(T,T,⋯,T,T),(T,T,⋯,T,H)} = (1 − p)^(n−1)

As a check, note that

Σ_{i=1}^{n} P{X = i} = Σ_{i=1}^{n−1} p(1 − p)^(i−1) + (1 − p)^(n−1) = [1 − (1 − p)^(n−1)] + (1 − p)^(n−1) = 1
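This check can also be run numerically; a Python sketch that builds the pmf of X from the formulas above for an assumed p and n (the values p = 1/3, n = 5 are arbitrary choices, not from the text), again with exact fractions:

```python
from fractions import Fraction

def flip_count_pmf(p, n):
    """P{X = i}: the i-th flip is the first head (i < n), or the
    first n-1 flips were all tails (i = n)."""
    pmf = {i: p * (1 - p) ** (i - 1) for i in range(1, n)}
    pmf[n] = (1 - p) ** (n - 1)  # flip n is reached iff first n-1 are tails
    return pmf

pmf = flip_count_pmf(Fraction(1, 3), 5)

# The probabilities sum to exactly 1, as the check above shows
assert sum(pmf.values()) == 1
```

Using Fraction rather than floats makes the "sums to exactly 1" assertion meaningful; with floating point the sum could be off by rounding error.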

Three balls are randomly chosen from an urn containing 3 white,

3 red, and 5 black balls. Suppose that we win $1 for each white ball selected and 

lose $1 for each red ball selected. If we let X denote our

total winnings from the experiment, then X is a random variable

taking on the possible values 0, ±1, ±2, ±3 with respective probabilities

P{X = 0} = 55/165

P{X = 1} = P{X = −1} = 39/165

P{X = 2} = P{X = −2} = 15/165

P{X = 3} = P{X = −3} = 1/165

where there are C(11, 3) = 165 equally likely selections, and the symmetry between white and red gives P{X = −i} = P{X = i}.

These probabilities are obtained, for instance, by noting that in order

for X to equal 0, either all 3 balls selected must be black or 1 ball of each

color must be selected. Similarly, the event {X = 1} occurs either if 1 white and 

2 black balls are selected or if 2 white and 1 red ball are selected. As a check, we note that

Σ_{i=−3}^{3} P{X = i} = (55 + 2(39 + 15 + 1))/165 = 165/165 = 1

The probability that we win money is given by

Σ_{i=1}^{3} P{X = i} = (39 + 15 + 1)/165 = 55/165 = 1/3
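The urn counts can be verified by brute-force enumeration of all 165 equally likely selections; a Python sketch (the names are illustrative, not from the text):

```python
from fractions import Fraction
from itertools import combinations
from collections import Counter

# Urn: 3 white (win $1 each), 3 red (lose $1 each), 5 black (worth $0)
urn = ["W"] * 3 + ["R"] * 3 + ["B"] * 5

# All C(11, 3) = 165 equally likely ways to choose 3 balls by index
selections = list(combinations(range(len(urn)), 3))

# X = total winnings: +1 per white ball, -1 per red ball
counts = Counter(
    sum(1 if urn[i] == "W" else -1 if urn[i] == "R" else 0 for i in sel)
    for sel in selections
)
pmf = {x: Fraction(c, len(selections)) for x, c in counts.items()}

# Probability of winning money: P{X >= 1}
p_win = sum(v for x, v in pmf.items() if x > 0)
```

Enumerating ball indices rather than colors keeps every selection equally likely, which is exactly the "randomly chosen" assumption of the example.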

* Cumulative distribution function: 

For a random variable X, the function F defined by

F(x) = P{X ≤ x},  −∞ < x < +∞,

is called the cumulative distribution function, or, more simply,

the distribution function, of X. Thus, the distribution function specifies,

for all real values x, the probability that the random variable is less than or equal to x.

Now, suppose that a ≤ b. Then, because the event {X ≤ a} is contained in the event {X ≤ b},

it follows that F(a), the probability of the former, is less than or equal to F(b),

the probability of the latter. In other words, F(x) is a nondecreasing function of x.
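A short Python sketch illustrating the definition, using the three-coin variable Y from earlier as the example (the helper name cdf is my own):

```python
from fractions import Fraction

# pmf of Y = number of heads in 3 fair coin tosses, from the earlier example
pmf = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}

def cdf(x):
    """F(x) = P{Y <= x}: sum the pmf over all values not exceeding x."""
    return sum(p for y, p in pmf.items() if y <= x)

# F is defined for ALL real x, not just the values Y can take:
# below the smallest value F is 0, at or above the largest it is 1,
# and between integer values it stays flat (e.g. F(1.5) = F(1)).
assert cdf(-1) == 0
assert cdf(1.5) == cdf(1) == Fraction(1, 2)
assert cdf(3) == 1

# F is nondecreasing: F(a) <= F(b) whenever a <= b
grid = [x / 2 for x in range(-2, 9)]
assert all(cdf(a) <= cdf(b) for a, b in zip(grid, grid[1:]))
```

The flat steps between the possible values of Y are what make this a step function, which is characteristic of the distribution function of any discrete random variable.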