Notes on Probability
Peter J. Cameron


Preface
Here are the course lecture notes for the course MAS108, Probability I, at Queen Mary, University of London, taken by most Mathematics students and some others in the first semester.

The description of the course is as follows: This course introduces the basic notions of probability theory and develops them to the stage where one can begin to use probabilistic ideas in statistical inference and modelling, and the study of stochastic processes. Probability axioms. Conditional probability and independence. Discrete random variables and their distributions. Continuous distributions. Joint distributions. Independence. Expectations. Mean, variance, covariance, correlation. Limiting distributions.

The syllabus is as follows:

1. Basic notions of probability. Sample spaces, events, relative frequency, probability axioms.
2. Finite sample spaces. Methods of enumeration. Combinatorial probability.
3. Conditional probability. Theorem of total probability. Bayes theorem.
4. Independence of two events. Mutual independence of n events. Sampling with and without replacement.
5. Random variables. Univariate distributions - discrete, continuous, mixed. Standard distributions - hypergeometric, binomial, geometric, Poisson, uniform, normal, exponential. Probability mass function, density function, distribution function. Probabilities of events in terms of random variables.
6. Transformations of a single random variable. Mean, variance, median, quantiles.
7. Joint distribution of two random variables. Marginal and conditional distributions. Independence.
8. Covariance, correlation. Means and variances of linear functions of random variables.
9. Limiting distributions in the Binomial case.

These course notes explain the material in the syllabus. They have been "field-tested" on the class of 2000. Many of the examples are taken from the course homework sheets or past exam papers.

Set books The notes cover only material in the Probability I course. The textbooks listed below will be useful for other courses on probability and statistics. You need at most one of the three textbooks listed below, but you will need the statistical tables.

• Probability and Statistics for Engineering and the Sciences by Jay L. Devore (fifth edition), published by Wadsworth. Chapters 2–5 of this book are very close to the material in the notes, both in order and notation. However, the lectures go into more detail at several points, especially proofs. If you find the course difficult then you are advised to buy this book, read the corresponding sections straight after the lectures, and do extra exercises from it.

Other books which you can use instead are:

• Probability and Statistics in Engineering and Management Science by W. W. Hines and D. C. Montgomery, published by Wiley, Chapters 2–8.

• Mathematical Statistics and Data Analysis by John A. Rice, published by Wadsworth, Chapters 1–4.

You should also buy a copy of

• New Cambridge Statistical Tables by D. V. Lindley and W. F. Scott, published by Cambridge University Press.

You need to become familiar with the tables in this book, which will be provided for you in examinations. All of these books will also be useful to you in the courses Statistics I and Statistical Inference.

The next book is not compulsory but introduces the ideas in a friendly way:

• Taking Chances: Winning with Probability, by John Haigh, published by Oxford University Press.

Web resources Course material for the MAS108 course is kept on the Web at the address

http://www.maths.qmw.ac.uk/˜pjc/MAS108/

This includes a preliminary version of these notes, together with coursework sheets, test and past exam papers, and some solutions. Other web pages of interest include:

http://www.dartmouth.edu/˜chance/teaching aids/books articles/probability book/pdf.html
A textbook Introduction to Probability, by Charles M. Grinstead and J. Laurie Snell, available free, with many exercises.

http://www.math.uah.edu/stat/
The Virtual Laboratories in Probability and Statistics, a set of web-based resources for students and teachers of probability and statistics, where you can run simulations etc.

http://www.newton.cam.ac.uk/wmy2kposters/july/
The Birthday Paradox (poster in the London Underground, July 2000).

http://www.combinatorics.org/Surveys/ds5/VennEJC.html
An article on Venn diagrams by Frank Ruskey, with history and many nice pictures.

Web pages for other Queen Mary maths courses can be found from the on-line version of the Maths Undergraduate Handbook.

Peter J. Cameron
December 2000


Contents
1 Basic ideas
  1.1 Sample space, events
  1.2 What is probability?
  1.3 Kolmogorov's Axioms
  1.4 Proving things from the axioms
  1.5 Inclusion-Exclusion Principle
  1.6 Other results about sets
  1.7 Sampling
  1.8 Stopping rules
  1.9 Questionnaire results
  1.10 Independence
  1.11 Mutual independence
  1.12 Properties of independence
  1.13 Worked examples

2 Conditional probability
  2.1 What is conditional probability?
  2.2 Genetics
  2.3 The Theorem of Total Probability
  2.4 Sampling revisited
  2.5 Bayes' Theorem
  2.6 Iterated conditional probability
  2.7 Worked examples

3 Random variables
  3.1 What are random variables?
  3.2 Probability mass function
  3.3 Expected value and variance
  3.4 Joint p.m.f. of two random variables
  3.5 Some discrete random variables
  3.6 Continuous random variables
  3.7 Median, quartiles, percentiles
  3.8 Some continuous random variables
  3.9 On using tables
  3.10 Worked examples

4 More on joint distribution
  4.1 Covariance and correlation
  4.2 Conditional random variables
  4.3 Joint distribution of continuous r.v.s
  4.4 Transformation of random variables
  4.5 Worked examples

A Mathematical notation

B Probability and random variables
Chapter 1 Basic ideas
In this chapter, we don’t really answer the question ‘What is probability?’ Nobody has a really good answer to this question. We take a mathematical approach, writing down some basic axioms which probability must satisfy, and making deductions from these. We also look at different kinds of sampling, and examine what it means for events to be independent.

1.1 Sample space, events

The general setting is: We perform an experiment which can have a number of different outcomes. The sample space is the set of all possible outcomes of the experiment. We usually call it S . It is important to be able to list the outcomes clearly. For example, if I plant ten bean seeds and count the number that germinate, the sample space is

S = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}.
If I toss a coin three times and record the result, the sample space is

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}, where (for example) HTH means 'heads on the first toss, then tails, then heads again'. Sometimes we can assume that all the outcomes are equally likely. (Don't assume this unless either you are told to, or there is some physical reason for assuming it. In the beans example, it is most unlikely. In the coins example, the assumption will hold if the coin is 'fair': this means that there is no physical reason for it to favour one side over the other.) If all outcomes are equally likely, then each has probability 1/|S|. (Remember that |S| is the number of elements in the set S.)


On this point, Albert Einstein wrote, in his 1905 paper On a heuristic point of view concerning the production and transformation of light (for which he was awarded the Nobel Prize),

In calculating entropy by molecular-theoretic methods, the word "probability" is often used in a sense differing from the way the word is defined in probability theory. In particular, "cases of equal probability" are often hypothetically stipulated when the theoretical methods employed are definite enough to permit a deduction rather than a stipulation.

In other words: Don't just assume that all outcomes are equally likely, especially when you are given enough information to calculate their probabilities!

An event is a subset of S. We can specify an event by listing all the outcomes that make it up. In the above example, let A be the event 'more heads than tails' and B the event 'heads on last throw'. Then

A = {HHH, HHT, HTH, THH},
B = {HHH, HTH, THH, TTH}.

The probability of an event is calculated by adding up the probabilities of all the outcomes comprising that event. So, if all outcomes are equally likely, we have

P(A) = |A|/|S|.

In our example, both A and B have probability 4/8 = 1/2.

An event is simple if it consists of just a single outcome, and is compound otherwise. In the example, A and B are compound events, while the event 'heads on every throw' is simple (as a set, it is {HHH}). If A = {a} is a simple event, then the probability of A is just the probability of the outcome a, and we usually write P(a), which is simpler to write than P({a}). (Note that a is an outcome, while {a} is an event, indeed a simple event.)

We can build new events from old ones:

• A ∪ B (read 'A union B') consists of all the outcomes in A or in B (or both!);
• A ∩ B (read 'A intersection B') consists of all the outcomes in both A and B;
• A \ B (read 'A minus B') consists of all the outcomes in A but not in B;
• A′ (read 'A complement') consists of all outcomes not in A (that is, S \ A);
• ∅ (read 'empty set') for the event which doesn't contain any outcomes.


Note the backward-sloping slash; this is not the same as either a vertical slash | or a forward slash /. In the example, A′ is the event 'more tails than heads', and A ∩ B is the event {HHH, THH, HTH}. Note that P(A ∩ B) = 3/8; this is not equal to P(A) · P(B), despite what you read in some books!
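The counting rule P(A) = |A|/|S| is easy to check by brute force. Here is a minimal Python sketch (an illustration, not part of the notes) that lists the eight outcomes of three coin tosses and counts the events A ('more heads than tails') and B ('heads on last throw') described above.

```python
from itertools import product

# Sample space: all sequences of three tosses, e.g. ('H', 'H', 'T').
S = list(product("HT", repeat=3))

A = [s for s in S if s.count("H") > s.count("T")]   # more heads than tails
B = [s for s in S if s[-1] == "H"]                  # heads on the last throw

print(len(A) / len(S))    # P(A) = 4/8 = 0.5
print(len(B) / len(S))    # P(B) = 4/8 = 0.5

# The intersection A ∩ B, counted directly:
AB = [s for s in A if s in B]
print(len(AB) / len(S))   # P(A ∩ B) = 3/8 = 0.375, which is not P(A) · P(B)
```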

1.2 What is probability?

There is really no answer to this question. Some people think of it as 'limiting frequency'. That is, to say that the probability of getting heads when a coin is tossed is 1/2 means that, if the coin is tossed many times, it is likely to come down heads about half the time. But if you toss a coin 1000 times, you are not likely to get exactly 500 heads. You wouldn't be surprised to get only 495. But what about 450, or 100?

Some people would say that you can work out probability by physical arguments, like the one we used for a fair coin. But this argument doesn't work in all cases, and it doesn't explain what probability means.

Some people say it is subjective. You say that the probability of heads in a coin toss is 1/2 because you have no reason for thinking either heads or tails more likely; you might change your view if you knew that the owner of the coin was a magician or a con man. But we can't build a theory on something subjective.

We regard probability as a mathematical construction satisfying some axioms (devised by the Russian mathematician A. N. Kolmogorov). We develop ways of doing calculations with probability, so that (for example) we can calculate how unlikely it is to get 480 or fewer heads in 1000 tosses of a fair coin. The answer agrees well with experiment.

1.3 Kolmogorov's Axioms

Remember that an event is a subset of the sample space S. A number of events, say A1, A2, . . ., are called mutually disjoint or pairwise disjoint if Ai ∩ Aj = ∅ for any two of the events Ai and Aj; that is, no two of the events overlap.

According to Kolmogorov's axioms, each event A has a probability P(A), which is a number. These numbers satisfy three axioms:

Axiom 1: For any event A, we have P(A) ≥ 0.

Axiom 2: P(S) = 1.


Axiom 3: If the events A1, A2, . . . are pairwise disjoint, then

P(A1 ∪ A2 ∪ · · ·) = P(A1) + P(A2) + · · ·

Note that in Axiom 3, we have the union of events and the sum of numbers. Don't mix these up; never write P(A1) ∪ P(A2), for example.

Sometimes we separate Axiom 3 into two parts: Axiom 3a, if there are only finitely many events A1, A2, . . . , An, so that we have

P(A1 ∪ · · · ∪ An) = P(A1) + P(A2) + · · · + P(An),

and Axiom 3b for infinitely many. We will only use Axiom 3a, but 3b is important later on.

Notice that we write ∑_{i=1}^{n} P(Ai) for P(A1) + P(A2) + · · · + P(An).

1.4 Proving things from the axioms
You can prove simple properties of probability from the axioms. That means, every step must be justified by appealing to an axiom. These properties seem obvious, just as obvious as the axioms; but the point of this game is that we assume only the axioms, and build everything else from that. Here are some examples of things proved from the axioms. There is really no difference between a theorem, a proposition, and a corollary; they all have to be proved. Usually, a theorem is a big, important statement; a proposition a rather smaller statement; and a corollary is something that follows quite easily from a theorem or proposition that came before. Proposition 1.1 If the event A contains only a finite number of outcomes, say A = {a1 , a2 , . . . , an }, then P(A) = P(a1 ) + P(a2 ) + · · · + P(an ).

To prove the proposition, we define a new event Ai containing only the outcome ai , that is, Ai = {ai }, for i = 1, . . . , n. Then A1 , . . . , An are mutually disjoint


(each contains only one element which is in none of the others), and A1 ∪ A2 ∪ · · · ∪ An = A; so by Axiom 3a, we have P(A) = P(a1) + P(a2) + · · · + P(an).

Corollary 1.2 If the sample space S is finite, say S = {a1, . . . , an}, then P(a1) + P(a2) + · · · + P(an) = 1.

For P(a1) + P(a2) + · · · + P(an) = P(S) by Proposition 1.1, and P(S) = 1 by Axiom 2.

Notice that once we have proved something, we can use it on the same basis as an axiom to prove further facts. Now we see that, if all the n outcomes are equally likely, and their probabilities sum to 1, then each has probability 1/n, that is, 1/|S|. Now going back to Proposition 1.1, we see that, if all outcomes are equally likely, then P(A) = |A|/|S|

for any event A, justifying the principle we used earlier.

Proposition 1.3 P(A′) = 1 − P(A) for any event A.

Let A1 = A and A2 = A′ (the complement of A). Then A1 ∩ A2 = ∅ (that is, the events A1 and A2 are disjoint), and A1 ∪ A2 = S. So

P(A1) + P(A2) = P(A1 ∪ A2) (Axiom 3) = P(S) = 1 (Axiom 2).

So P(A′) = P(A2) = 1 − P(A1) = 1 − P(A).

Corollary 1.4 P(A) ≤ 1 for any event A.

For 1 − P(A) = P(A′) by Proposition 1.3, and P(A′) ≥ 0 by Axiom 1; so 1 − P(A) ≥ 0, from which we get P(A) ≤ 1. Remember that if you ever calculate a probability to be less than 0 or more than 1, you have made a mistake!

Corollary 1.5 P(∅) = 0.

For ∅ = S′, so P(∅) = 1 − P(S) by Proposition 1.3; and P(S) = 1 by Axiom 2, so P(∅) = 0.


Here is another result. The notation A ⊆ B means that A is contained in B, that is, every outcome in A also belongs to B.

Proposition 1.6 If A ⊆ B, then P(A) ≤ P(B).

This time, take A1 = A, A2 = B \ A. Again we have A1 ∩ A2 = ∅ (since the elements of B \ A are, by definition, not in A), and A1 ∪ A2 = B. So by Axiom 3, P(A1) + P(A2) = P(A1 ∪ A2) = P(B). In other words, P(A) + P(B \ A) = P(B). Now P(B \ A) ≥ 0 by Axiom 1; so P(A) ≤ P(B), as we had to show.

1.5 Inclusion-Exclusion Principle

[Figure: Venn diagram of two overlapping sets A and B]

A Venn diagram for two sets A and B suggests that, to find the size of A ∪ B, we add the size of A and the size of B, but then we have included the size of A ∩ B twice, so we have to take it off. In terms of probability: Proposition 1.7 P(A ∪ B) = P(A) + P(B) − P(A ∩ B). We now prove this from the axioms, using the Venn diagram as a guide. We see that A ∪ B is made up of three parts, namely A1 = A ∩ B, A2 = A \ B, A3 = B \ A.

Indeed we do have A ∪ B = A1 ∪ A2 ∪ A3, since anything in A ∪ B is either in both A and B, or just in A, or just in B. Similarly we have A1 ∪ A2 = A and A1 ∪ A3 = B. The sets A1, A2, A3 are mutually disjoint. (We have three pairs of sets to check. Now A1 ∩ A2 = ∅, since all elements of A1 belong to B but no elements of A2 do. The arguments for the other two pairs are similar – you should do them yourself.)

So, by Axiom 3, we have

P(A) = P(A1) + P(A2),
P(B) = P(A1) + P(A3),
P(A ∪ B) = P(A1) + P(A2) + P(A3).

From this we obtain

P(A) + P(B) − P(A ∩ B) = (P(A1) + P(A2)) + (P(A1) + P(A3)) − P(A1)
= P(A1) + P(A2) + P(A3)
= P(A ∪ B)

as required.


The Inclusion-Exclusion Principle extends to more than two events, but gets more complicated. Here it is for three events; try to prove it yourself.

[Figure: Venn diagram of three overlapping sets A, B and C]

To calculate P(A ∪ B ∪ C), we first add up P(A), P(B), and P(C). The parts in common have been counted twice, so we subtract P(A ∩ B), P(A ∩ C) and P(B ∩ C). But then we find that the outcomes lying in all three sets have been taken off completely, so must be put back, that is, we add P(A ∩ B ∩ C).

Proposition 1.8 For any three events A, B, C, we have

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).

Can you extend this to any number of events?
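Proposition 1.8 can also be checked by brute-force counting. The sketch below (an illustration, not from the notes) takes three arbitrary events inside a small equally-likely sample space and compares both sides of the three-set formula.

```python
from fractions import Fraction

S = set(range(1, 13))                 # twelve equally likely outcomes
A = {n for n in S if n % 2 == 0}      # even numbers
B = {n for n in S if n % 3 == 0}      # multiples of 3
C = {n for n in S if n <= 6}          # at most 6

def P(E):
    return Fraction(len(E), len(S))

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))
print(lhs, rhs, lhs == rhs)           # both sides equal 5/6
```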

1.6 Other results about sets

There are other standard results about sets which are often useful in probability theory. Here are some examples.

Proposition 1.9 Let A, B, C be subsets of S.

Distributive laws: (A ∩ B) ∪ C = (A ∪ C) ∩ (B ∪ C) and (A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C).

De Morgan's Laws: (A ∪ B)′ = A′ ∩ B′ and (A ∩ B)′ = A′ ∪ B′.

We will not give formal proofs of these. You should draw Venn diagrams and convince yourself that they work.


1.7 Sampling
I have four pens in my desk drawer; they are red, green, blue, and purple. I draw a pen; each pen has the same chance of being selected. In this case, S = {R, G, B, P}, where R means 'red pen chosen' and so on. In this case, if A is the event 'red or green pen chosen', then

P(A) = |A|/|S| = 2/4 = 1/2.

More generally, if I have a set of n objects and choose one, with each one equally likely to be chosen, then each of the n outcomes has probability 1/n, and an event consisting of m of the outcomes has probability m/n.

What if we choose more than one pen? We have to be more careful to specify the sample space. First, we have to say whether we are

• sampling with replacement, or
• sampling without replacement.

Sampling with replacement means that we choose a pen, note its colour, put it back and shake the drawer, then choose a pen again (which may be the same pen as before or a different one), and so on until the required number of pens have been chosen. If we choose two pens with replacement, the sample space is

{RR, GR, BR, PR, RG, GG, BG, PG, RB, GB, BB, PB, RP, GP, BP, PP}

The event ‘at least one red pen’ is {RR, RG, RB, RP, GR, BR, PR}, and has probability 7/16. Sampling without replacement means that we choose a pen but do not put it back, so that our final selection cannot include two pens of the same colour. In this case, the sample space for choosing two pens is { RG, RB, RP, GR, GB, GP, BR, BG, BP, PR, PG, PB } and the event ‘at least one red pen’ is {RG, RB, RP, GR, BR, PR}, with probability 6/12 = 1/2.


Now there is another issue, depending on whether we care about the order in which the pens are chosen. We will only consider this in the case of sampling without replacement. It doesn't really matter in this case whether we choose the pens one at a time or simply take two pens out of the drawer; and we are not interested in which pen was chosen first. So in this case the sample space is

{{R, G}, {R, B}, {R, P}, {G, B}, {G, P}, {B, P}},

containing six elements. (Each element is written as a set since, in a set, we don't care which element is first, only which elements are actually present. So the sample space is a set of sets!) The event 'at least one red pen' is {{R, G}, {R, B}, {R, P}}, with probability 3/6 = 1/2. We should not be surprised that this is the same as in the previous case.

There are formulae for the sample space size in these three cases. These involve the following functions:

n! = n(n − 1)(n − 2) · · · 1
nPk = n(n − 1)(n − 2) · · · (n − k + 1)
nCk = nPk/k!

Note that n! is the product of all the whole numbers from 1 to n; and

nPk = n!/(n − k)!,

so that

nCk = n!/(k!(n − k)!).

Theorem 1.10 The number of selections of k objects from a set of n objects is given in the following table.

                      with replacement    without replacement
  ordered sample      n^k                 nPk
  unordered sample                        nCk

In fact the number that goes in the empty box is n+k−1Ck, but this is much harder to prove than the others, and you are very unlikely to need it.

Here are the proofs of the other three cases. First, for sampling with replacement and ordered sample, there are n choices for the first object, and n choices for the second, and so on; we multiply the choices for different objects. (Think of the choices as being described by a branching tree.) The product of k factors each equal to n is n^k.


For sampling without replacement and ordered sample, there are still n choices for the first object, but now only n − 1 choices for the second (since we do not replace the first), and n − 2 for the third, and so on; there are n − k + 1 choices for the kth object, since k − 1 have previously been removed and n − (k − 1) remain. As before, we multiply. This product is the formula for nPk.

For sampling without replacement and unordered sample, think first of choosing an ordered sample, which we can do in nPk ways. But each unordered sample could be obtained by drawing it in k! different orders. So we divide by k!, obtaining nPk/k! = nCk choices.

In our example with the pens, the numbers in the three boxes are 4^2 = 16, 4P2 = 12, and 4C2 = 6, in agreement with what we got when we wrote them all out.

Note that, if we use the phrase 'sampling without replacement, ordered sample', or any other combination, we are assuming that all outcomes are equally likely.

Example The names of the seven days of the week are placed in a hat. Three names are drawn out; these will be the days of the Probability I lectures. What is the probability that no lecture is scheduled at the weekend?

Here the sampling is without replacement, and we can take it to be either ordered or unordered; the answers will be the same. For ordered samples, the size of the sample space is 7P3 = 7 · 6 · 5 = 210. If A is the event 'no lectures at weekends', then A occurs precisely when all three days drawn are weekdays; so |A| = 5P3 = 5 · 4 · 3 = 60. Thus, P(A) = 60/210 = 2/7.

If we decided to use unordered samples instead, the answer would be 5C3/7C3, which is once again 2/7.

Example A six-sided die is rolled twice. What is the probability that the sum of the numbers is at least 10?

This time we are sampling with replacement, since the two numbers may be the same or different. So the number of elements in the sample space is 6^2 = 36. To obtain a sum of 10 or more, the possibilities for the two numbers are (4, 6), (5, 5), (6, 4), (5, 6), (6, 5) or (6, 6). So the probability of the event is 6/36 = 1/6.

Example A box contains 20 balls, of which 10 are red and 10 are blue. We draw ten balls from the box, and we are interested in the event that exactly 5 of the balls are red and 5 are blue. Do you think that this is more likely to occur if the draws are made with or without replacement?

Let S be the sample space, and A the event that five balls are red and five are blue.


Consider sampling with replacement. Then |S| = 20^10. What is |A|? The number of ways in which we can choose first five red balls and then five blue ones (that is, RRRRRBBBBB) is 10^5 · 10^5 = 10^10. But there are many other ways to get five red and five blue balls. In fact, the five red balls could appear in any five of the ten draws. This means that there are 10C5 = 252 different patterns of five Rs and five Bs. So we have

|A| = 252 · 10^10, and so P(A) = 252 · 10^10 / 20^10 = 0.246 . . .

Now consider sampling without replacement. If we regard the sample as being ordered, then |S| = 20P10. There are 10P5 ways of choosing five of the ten red balls, and the same for the ten blue balls, and as in the previous case there are 10C5 patterns of red and blue balls. So

|A| = (10P5)^2 · 10C5, and P(A) = (10P5)^2 · 10C5 / 20P10 = 0.343 . . .

If we regard the sample as being unordered, then |S| = 20C10. There are 10C5 choices of the five red balls and the same for the blue balls. We no longer have to count patterns since we don't care about the order of the selection. So

|A| = (10C5)^2, and P(A) = (10C5)^2 / 20C10 = 0.343 . . .

This is the same answer as in the case before, as it should be; the question doesn't care about the order of choices! So the event is more likely if we sample with replacement.

Example I have 6 gold coins, 4 silver coins and 3 bronze coins in my pocket. I take out three coins at random. What is the probability that they are all of different material? What is the probability that they are all of the same material?

In this case the sampling is without replacement and the sample is unordered. So |S| = 13C3 = 286. The event that the three coins are all of different material can occur in 6 · 4 · 3 = 72 ways, since we must have one of the six gold coins, and so on. So the probability is 72/286 = 0.252 . . ., and the event that the three coins are of the same material can occur in

6C3 + 4C3 + 3C3 = 20 + 4 + 1 = 25

ways, and the probability is 25/286 = 0.087 . . .

In a sampling problem, you should first read the question carefully and decide whether the sampling is with or without replacement. If it is without replacement, decide whether the sample is ordered (e.g. does the question say anything about the first object drawn?). If so, then use the formula for ordered samples. If not, then you can use either ordered or unordered samples, whichever is convenient; they should give the same answer. If the sample is with replacement, or if it involves throwing a die or coin several times, then use the formula for sampling with replacement.
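All of the counting in this section can be checked with the permutation and combination functions in Python's standard library. The sketch below (illustrative only, not part of the notes) reproduces the numbers from the balls example and the coins example above.

```python
from math import comb, perm

# Balls: 20 balls (10 red, 10 blue), draw 10, want exactly 5 red and 5 blue.
p_with = comb(10, 5) * 10**5 * 10**5 / 20**10           # sampling with replacement
p_without = comb(10, 5)**2 / comb(20, 10)               # without replacement, unordered
print(round(p_with, 3), round(p_without, 3))            # 0.246 and 0.343

# The ordered count gives the same answer without replacement:
p_without_ordered = perm(10, 5)**2 * comb(10, 5) / perm(20, 10)
print(round(p_without_ordered, 3))                      # 0.343 again

# Coins: 6 gold, 4 silver, 3 bronze; draw 3, unordered, without replacement.
total = comb(13, 3)                                     # |S| = 286
print(round(6 * 4 * 3 / total, 3))                      # all different: 72/286 = 0.252
print(round((comb(6, 3) + comb(4, 3) + comb(3, 3)) / total, 3))   # all same: 25/286 = 0.087
```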

1.8 Stopping rules
Suppose that you take a typing proficiency test. You are allowed to take the test up to three times. Of course, if you pass the test, you don't need to take it again. So the sample space is

S = {p, fp, ffp, fff},

where for example ffp denotes the outcome that you fail twice and pass on your third attempt.

If all outcomes were equally likely, then your chance of eventually passing the test and getting the certificate would be 3/4. But it is unreasonable here to assume that all the outcomes are equally likely. For example, you may be very likely to pass on the first attempt. Let us assume that the probability that you pass the test is 0.8. (By Proposition 1.3, your chance of failing is 0.2.) Let us further assume that, no matter how many times you have failed, your chance of passing at the next attempt is still 0.8. Then we have

P(p) = 0.8,
P(fp) = 0.2 · 0.8 = 0.16,
P(ffp) = 0.2^2 · 0.8 = 0.032,
P(fff) = 0.2^3 = 0.008.

Thus the probability that you eventually get the certificate is P({p, fp, ffp}) = 0.8 + 0.16 + 0.032 = 0.992. Alternatively, you eventually get the certificate unless you fail three times, so the probability is 1 − 0.008 = 0.992.

A stopping rule is a rule of the type described here, namely, continue the experiment until some specified occurrence happens. The experiment may potentially be infinite.


For example, if you toss a coin repeatedly until you obtain heads, the sample space is

S = {H, TH, TTH, TTTH, . . .}

since in principle you may get arbitrarily large numbers of tails before the first head. (We have to allow all possible outcomes.)

In the typing test, the rule is 'stop if either you pass or you have taken the test three times'. This ensures that the sample space is finite.

In the next chapter, we will have more to say about the 'multiplication rule' we used for calculating the probabilities. In the meantime you might like to consider whether it is a reasonable assumption for tossing a coin, or for someone taking a series of tests.

Other kinds of stopping rules are possible. For example, the number of coin tosses might be determined by some other random process such as the roll of a die; or we might toss a coin until we have obtained heads twice; and so on. We will not deal with these.

1.9 Questionnaire results

The students in the Probability I class in Autumn 2000 filled in the following questionnaire:

1. I have a hat containing 20 balls, 10 red and 10 blue. I draw 10 balls from the hat. I am interested in the event that I draw exactly five red and five blue balls. Do you think that this is more likely if I note the colour of each ball I draw and replace it in the hat, or if I don't replace the balls in the hat after drawing?
   [ ] More likely with replacement   [ ] More likely without replacement

2. What colour are your eyes?
   [ ] Blue   [ ] Brown   [ ] Green   [ ] Other

3. Do you own a mobile phone?
   [ ] Yes   [ ] No

After discarding incomplete questionnaires, the results were as follows:

                       "More likely with replacement"   "More likely without replacement"
  Eyes                 Brown      Other                 Brown      Other
  Mobile phone         35         4                     35         9
  No mobile phone      10         3                     7          1


What can we conclude? Half the class thought that, in the experiment with the coloured balls, sampling with replacement makes the result more likely. In fact, as we saw earlier in this chapter, it is actually more likely if we sample without replacement. (This doesn't matter, since the students were instructed not to think too hard about it!)

You might expect that eye colour and mobile phone ownership would have no influence on your answer. Let's test this. If true, then of the 87 people with brown eyes, half of them (i.e. 43 or 44) would answer "with replacement", whereas in fact 45 did. Also, of the 83 people with mobile phones, we would expect half (that is, 41 or 42) would answer "with replacement", whereas in fact 39 of them did. So perhaps we have demonstrated that people who own mobile phones are slightly smarter than average, whereas people with brown eyes are slightly less smart!

In fact we have shown no such thing, since our results refer only to the people who filled out the questionnaire. But they do show that these events are not independent, in a sense we will come to soon.

On the other hand, since 83 out of 104 people have mobile phones, if we think that phone ownership and eye colour are independent, we would expect that the same fraction 83/104 of the 87 brown-eyed people would have phones, i.e. (83 · 87)/104 = 69.4 people. In fact the number is 70, or as near as we can expect. So indeed it seems that eye colour and phone ownership are more-or-less independent.

1.10 Independence
Two events A and B are said to be independent if

P(A ∩ B) = P(A) · P(B).

This is the definition of independence of events. If you are asked in an exam to define independence of events, this is the correct answer. Do not say that two events are independent if one has no influence on the other; and under no circumstances say that A and B are independent if A ∩ B = ∅ (this is the statement that A and B are disjoint, which is quite a different thing!) Also, do not ever say that P(A ∩ B) = P(A) · P(B) unless you have some good reason for assuming that A and B are independent (either because this is given in the question, or as in the next-but-one paragraph).

Let us return to the questionnaire example. Suppose that a student is chosen at random from those who filled out the questionnaire. Let A be the event that this student thought that the event was more likely if we sample with replacement; B the event that the student has brown eyes; and C the event that the student has a mobile phone. Then

P(A) = 52/104 = 0.5,    P(B) = 87/104 = 0.8365,    P(C) = 83/104 = 0.7981.

Furthermore,

P(A ∩ B) = 45/104 = 0.4327,    P(A) · P(B) = 0.4183,
P(A ∩ C) = 39/104 = 0.375,     P(A) · P(C) = 0.3990,
P(B ∩ C) = 70/104 = 0.6731,    P(B) · P(C) = 0.6676.


So none of the three pairs is independent, but in a sense B and C 'come closer' than either of the others, as we noted.

In practice, if it is the case that the event A has no effect on the outcome of event B, then A and B are independent. But this does not apply in the other direction. There might be a very definite connection between A and B, but still it could happen that P(A ∩ B) = P(A) · P(B), so that A and B are independent. We will see an example shortly.

Example If we toss a coin more than once, or roll a die more than once, then you may assume that different tosses or rolls are independent. More precisely, if we roll a fair six-sided die twice, then the probability of getting 4 on the first throw and 5 on the second is 1/36, since we assume that all 36 combinations of the two throws are equally likely. But (1/36) = (1/6) · (1/6), and the separate probabilities of getting 4 on the first throw and of getting 5 on the second are both equal to 1/6. So the two events are independent. This would work just as well for any other combination.

In general, it is always OK to assume that the outcomes of different tosses of a coin, or different throws of a die, are independent. This holds even if the outcomes are not all equally likely. We will see an example later.

Example I have two red pens, one green pen, and one blue pen. I choose two pens without replacement. Let A be the event that I choose exactly one red pen, and B the event that I choose exactly one green pen. If the pens are called R1, R2, G, B, then

S = {R1 R2 , R1 G, R1 B, R2 G, R2 B, GB},
A = {R1 G, R1 B, R2 G, R2 B}, B = {R1 G, R2 G, GB}


We have P(A) = 4/6 = 2/3, P(B) = 3/6 = 1/2, P(A ∩ B) = 2/6 = 1/3 = P(A)P(B), so A and B are independent.

But before you say 'that's obvious', suppose that I have also a purple pen, and I do the same experiment. This time, if you write down the sample space and the two events and do the calculations, you will find that P(A) = 6/10 = 3/5, P(B) = 4/10 = 2/5, P(A ∩ B) = 2/10 = 1/5 ≠ P(A)P(B), so adding one more pen has made the events non-independent!

We see that it is very difficult to tell whether events are independent or not. In practice, assume that events are independent only if either you are told to assume it, or the events are the outcomes of different throws of a coin or die. (There is one other case where you can assume independence: this is the result of different draws, with replacement, from a set of objects.)

Example Consider the experiment where we toss a fair coin three times and note the results. Each of the eight possible outcomes has probability 1/8. Let A be the event 'there are more heads than tails', and B the event 'the results of the first two tosses are the same'. Then

• A = {HHH, HHT, HTH, THH}, P(A) = 1/2,
• B = {HHH, HHT, TTH, TTT}, P(B) = 1/2,
• A ∩ B = {HHH, HHT}, P(A ∩ B) = 1/4;

so A and B are independent. However, both A and B clearly involve the results of the first two tosses and it is not possible to make a convincing argument that one of these events has no influence or effect on the other. For example, let C be the event 'heads on the last toss'. Then, as we saw earlier,

• C = {HHH, HTH, THH, TTH}, P(C) = 1/2,
• A ∩ C = {HHH, HTH, THH}, P(A ∩ C) = 3/8;

so A and C are not independent. Are B and C independent?
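Whether two events are independent is a calculation, not an intuition, and a quick enumeration makes the point. The sketch below (illustrative only) redoes the pen example with four pens and then with the extra purple pen.

```python
from fractions import Fraction
from itertools import combinations

def check(pens):
    # Unordered samples of two pens, all equally likely.
    S = list(combinations(pens, 2))
    A = [s for s in S if sum(p.startswith("R") for p in s) == 1]  # exactly one red pen
    B = [s for s in S if "G" in s]                                # exactly one green pen
    AB = [s for s in A if s in B]
    P = lambda E: Fraction(len(E), len(S))
    return P(AB), P(A) * P(B)

print(check(["R1", "R2", "G", "B"]))        # (1/3, 1/3): equal, so independent
print(check(["R1", "R2", "G", "B", "P"]))   # (1/5, 6/25): not equal, so not independent
```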

1.11 Mutual independence
This section is a bit technical. You will need to know the conclusions, though the arguments we use to reach them are not so important. We saw in the coin-tossing example above that it is possible to have three events A, B,C so that A and B are independent, B and C are independent, but A and C are not independent.


If all three pairs of events happen to be independent, can we then conclude that P(A ∩ B ∩ C) = P(A) · P(B) · P(C)? At first sight this seems very reasonable; in Axiom 3, we only required all pairs of events to be exclusive in order to justify our conclusion. Unfortunately it is not true. . .

Example In the coin-tossing example, let A be the event 'first and second tosses have same result', B the event 'first and third tosses have the same result', and C the event 'second and third tosses have same result'. You should check that P(A) = P(B) = P(C) = 1/2, and that the events A ∩ B, B ∩ C, A ∩ C, and A ∩ B ∩ C are all equal to {HHH, TTT}, with probability 1/4. Thus any pair of the three events are independent, but

P(A ∩ B ∩ C) = 1/4, P(A) · P(B) · P(C) = 1/8.

So A, B, C are not mutually independent.

The correct definition and proposition run as follows. Let A1, . . . , An be events. We say that these events are mutually independent if, given any distinct indices i1, i2, . . . , ik with k ≥ 1, the events

Ai1 ∩ Ai2 ∩ · · · ∩ Aik−1 and Aik

are independent. In other words, any one of the events is independent of the intersection of any number of the other events in the set. Proposition 1.11 Let A1 , . . . , An be mutually independent. Then P(A1 ∩ A2 ∩ · · · ∩ An ) = P(A1 ) · P(A2 ) · · · P(An ).

Now all you really need to know is that the same 'physical' arguments that justify that two events (such as two tosses of a coin, or two throws of a die) are independent, also justify that any number of such events are mutually independent. So, for example, if we toss a fair coin six times, the probability of getting the sequence HHTHHT is (1/2)^6 = 1/64, and the same would apply for any other sequence. In other words, all 64 possible outcomes are equally likely.

1.12 Properties of independence

Proposition 1.12 If A and B are independent, then A and B′ are independent.


We are given that P(A ∩ B) = P(A) · P(B), and asked to prove that P(A ∩ B′) = P(A) · P(B′).

From Proposition 1.3, we know that P(B′) = 1 − P(B). Also, the events A ∩ B and A ∩ B′ are disjoint (since no outcome can be in both B and B′), and their union is A (since every outcome in A is either in B or in B′); so by Axiom 3, we have that P(A) = P(A ∩ B) + P(A ∩ B′). Thus,

P(A ∩ B′) = P(A) − P(A ∩ B)
= P(A) − P(A) · P(B) (since A and B are independent)
= P(A)(1 − P(B))
= P(A) · P(B′),

which is what we were required to prove.

Corollary 1.13 If A and B are independent, so are A′ and B′.

Apply the Proposition twice, first to A and B (to show that A and B′ are independent), and then to B′ and A (to show that B′ and A′ are independent).

More generally, if events A1, . . . , An are mutually independent, and we replace some of them by their complements, then the resulting events are mutually independent. We have to be a bit careful though. For example, A and A′ are not usually independent!

Results like the following are also true.

Proposition 1.14 Let events A, B, C be mutually independent. Then A and B ∩ C are independent, and A and B ∪ C are independent.

Example Consider the example of the typing proficiency test that we looked at earlier. You are allowed up to three attempts to pass the test. Suppose that your chance of passing the test is 0.8. Suppose also that the events of passing the test on any number of different occasions are mutually independent. Then, by Proposition 1.11, the probability of any sequence of passes and fails is the product of the probabilities of the terms in the sequence. That is,

P(p) = 0.8, P(fp) = (0.2) · (0.8), P(ffp) = (0.2)^2 · (0.8), P(fff) = (0.2)^3,

as we claimed in the earlier example. In other words, mutual independence is the condition we need to justify the argument we used in that example.

Example The electrical apparatus in the diagram works so long as current can flow from left to right. The three components are independent. The probability that component A works is 0.8; the probability that component B works is 0.9; and the probability that component C works is 0.75. Find the probability that the apparatus works.

[Figure: circuit diagram – components A and B in series on one path, in parallel with component C on the other]

At risk of some confusion, we use the letters A, B and C for the events 'component A works', 'component B works', and 'component C works', respectively.

Now the apparatus will work if either A and B are working, or C is working (or possibly both). Thus the event we are interested in is (A ∩ B) ∪ C. Now

P((A ∩ B) ∪ C) = P(A ∩ B) + P(C) − P(A ∩ B ∩ C) (by Inclusion–Exclusion)
= P(A) · P(B) + P(C) − P(A) · P(B) · P(C) (by mutual independence)
= (0.8) · (0.9) + (0.75) − (0.8) · (0.9) · (0.75)
= 0.93.

The problem can also be analysed in a different way. The apparatus will not work if both paths are blocked, that is, if C is not working and one of A and B is also not working. Thus, the event that the apparatus does not work is (A′ ∪ B′) ∩ C′. By the Distributive Law, this is equal to (A′ ∩ C′) ∪ (B′ ∩ C′). We have

P((A′ ∩ C′) ∪ (B′ ∩ C′)) = P(A′ ∩ C′) + P(B′ ∩ C′) − P(A′ ∩ B′ ∩ C′) (by Inclusion–Exclusion)
= P(A′) · P(C′) + P(B′) · P(C′) − P(A′) · P(B′) · P(C′) (by mutual independence of A′, B′, C′)
= (0.2) · (0.25) + (0.1) · (0.25) − (0.2) · (0.1) · (0.25)
= 0.07,

so the apparatus works with probability 1 − 0.07 = 0.93.

There is a trap here which you should take care to avoid. You might be tempted to say P(A′ ∩ C′) = (0.2) · (0.25) = 0.05, and P(B′ ∩ C′) = (0.1) · (0.25) = 0.025; and conclude that

P((A′ ∩ C′) ∪ (B′ ∩ C′)) = 0.05 + 0.025 − (0.05) · (0.025) = 0.07375

by the Principle of Inclusion and Exclusion. But this is not correct, since the events A′ ∩ C′ and B′ ∩ C′ are not independent!
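The two ways of analysing the apparatus are easy to compare numerically, and an exhaustive sum over the eight working/failed states gives the same answer. A sketch (illustration only, using the component probabilities given above):

```python
from itertools import product

pA, pB, pC = 0.8, 0.9, 0.75   # probabilities that each component works

# Direct route: P((A and B) or C), by Inclusion-Exclusion and mutual independence.
direct = pA * pB + pC - pA * pB * pC
print(round(direct, 2))        # 0.93

# Complement route: the apparatus fails when C fails and the A-B path fails.
fails = (1 - pA * pB) * (1 - pC)
print(round(1 - fails, 2))     # 0.93 again

# Brute force over all 2^3 combinations of working (True) / failed (False).
total = 0.0
for a, b, c in product([True, False], repeat=3):
    prob = (pA if a else 1 - pA) * (pB if b else 1 - pB) * (pC if c else 1 - pC)
    if (a and b) or c:         # current can flow from left to right
        total += prob
print(round(total, 2))         # 0.93
```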


Example We can always assume that successive tosses of a coin are mutually independent, even if it is not a fair coin. Suppose that I have a coin which has probability 0.6 of coming down heads. I toss the coin three times. What are the probabilities of getting three heads, two heads, one head, or no heads?

For three heads, since successive tosses are mutually independent, the probability is (0.6)^3 = 0.216.

The probability of tails on any toss is 1 − 0.6 = 0.4. Now the event 'two heads' can occur in three possible ways, as HHT, HTH, or THH. Each outcome has probability (0.6) · (0.6) · (0.4) = 0.144. So the probability of two heads is 3 · (0.144) = 0.432.

Similarly the probability of one head is 3 · (0.6) · (0.4)^2 = 0.288, and the probability of no heads is (0.4)^3 = 0.064.

As a check, we have 0.216 + 0.432 + 0.288 + 0.064 = 1.
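Grouping the outcomes by the number of heads, the same multiplication can be checked in a few lines. A sketch (illustrative only, using the head probability 0.6 from the example above):

```python
from math import comb

p = 0.6                                         # probability of heads on a single toss
for heads in range(3, -1, -1):
    # comb(3, heads) sequences have this many heads; each has probability p^heads * (1-p)^(3-heads).
    prob = comb(3, heads) * p**heads * (1 - p)**(3 - heads)
    print(heads, round(prob, 3))                # 3: 0.216, 2: 0.432, 1: 0.288, 0: 0.064
```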

1.13 Worked examples
Question
(a) You go to the shop to buy a toothbrush. The toothbrushes there are red, blue, green, purple and white. The probability that you buy a red toothbrush is three times the probability that you buy a green one; the probability that you buy a blue one is twice the probability that you buy a green one; the probabilities of buying green, purple, and white are all equal. You are certain to buy exactly one toothbrush. For each colour, find the probability that you buy a toothbrush of that colour.

(b) James and Simon share a flat, so it would be confusing if their toothbrushes were the same colour. On the first day of term they both go to the shop to buy a toothbrush. For each of James and Simon, the probability of buying various colours of toothbrush is as calculated in (a), and their choices are independent. Find the probability that they buy toothbrushes of the same colour.

(c) James and Simon live together for three terms. On the first day of each term they buy new toothbrushes, with probabilities as in (b), independently of what they had bought before. This is the only time that they change their toothbrushes. Find the probability that James and Simon have differently coloured toothbrushes from each other for all three terms. Is it more likely that they will have differently coloured toothbrushes from each other for all three terms or that they will sometimes have toothbrushes of the same colour?

Solution
(a) Let R, B, G, P, W be the events that you buy a red, blue, green, purple and white toothbrush respectively. Let x = P(G). We are given that

P(R) = 3x, P(B) = 2x, P(P) = P(W) = x.

Since these outcomes comprise the whole sample space, Corollary 1.2 gives 3x + 2x + x + x + x = 1, so x = 1/8. Thus, the probabilities are 3/8, 1/4, 1/8, 1/8, 1/8 respectively.

(b) Let RB denote the event 'James buys a red toothbrush and Simon buys a blue toothbrush', etc. By independence (given), we have, for example, P(RR) = (3/8) · (3/8) = 9/64. The event that the toothbrushes have the same colour consists of the five outcomes RR, BB, GG, PP, WW, so its probability is

P(RR) + P(BB) + P(GG) + P(PP) + P(WW) = 9/64 + 1/16 + 1/64 + 1/64 + 1/64 = 1/4.

(c) The event 'different coloured toothbrushes in the ith term' has probability 3/4 (from part (b)), and these events are independent. So the event 'different coloured toothbrushes in all three terms' has probability

(3/4) · (3/4) · (3/4) = 27/64.

The event 'same coloured toothbrushes in at least one term' is the complement of the above, so has probability 1 − 27/64 = 37/64. So it is more likely that they will have the same colour in at least one term.
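The arithmetic in parts (b) and (c) is easy to check by machine. A sketch (illustrative only), using the probabilities found in part (a):

```python
from fractions import Fraction

# Probabilities of each colour from part (a): red, blue, green, purple, white.
colours = [Fraction(3, 8), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8), Fraction(1, 8)]

same = sum(p * p for p in colours)      # James and Simon buy the same colour
print(same)                              # 1/4
print((1 - same) ** 3)                   # different colours in all three terms: 27/64
print(1 - (1 - same) ** 3)               # same colour in at least one term: 37/64
```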

Since these outcomes comprise the whole sample space, Corollary 2 gives 3x + 2x + x + x + x = 1, so x = 1/8. Thus, the probabilities are 3/8, 1/4, 1/8, 1/8, 1/8 respectively. (b) Let RB denote the event ‘James buys a red toothbrush and Simon buys a blue toothbrush’, etc. By independence (given), we have, for example, P(RR) = (3/8) · (3/8) = 9/64. The event that the toothbrushes have the same colour consists of the five outcomes RR, BB, GG, PP, WW , so its probability is P(RR) + P(BB) + P(GG) + P(PP) + P(WW ) 9 1 1 1 1 1 = + + + + = . 64 16 64 64 64 4 (c) The event ‘different coloured toothbrushes in the ith term’ has probability 3/4 (from part (b)), and these events are independent. So the event ‘different coloured toothbrushes in all three terms’ has probability 3 3 3 27 · · = . 4 4 4 64 The event ‘same coloured toothbrushes in at least one term’ is the complement of the above, so has probability 1 − (27/64) = (37)/(64). So it is more likely that they will have the same colour in at least one term. Question There are 24 elephants in a game reserve. The warden tags six of the elephants with small radio transmitters and returns them to the reserve. The next month, he randomly selects five elephants from the reserve. He counts how many of these elephants are tagged. Assume that no elephants leave or enter the reserve, or die or give birth, between the tagging and the selection; and that all outcomes of the selection are equally likely. Find the probability that exactly two of the selected elephants are tagged, giving the answer correct to 3 decimal places.


Solution The experiment consists of picking the five elephants, not the original choice of six elephants for tagging. Let S be the sample space. Then |S| = 24C5.

Let A be the event that two of the selected elephants are tagged. This involves choosing two of the six tagged elephants and three of the eighteen untagged ones, so |A| = 6C2 · 18C3. Thus

P(A) = (6C2 · 18C3) / 24C5 = 0.288

to 3 d.p.

Note: Should the sample be ordered or unordered? Since the answer doesn't depend on the order in which the elephants are caught, an unordered sample is preferable. If you want to use an ordered sample, the calculation is

P(A) = (6P2 · 18P3 · 5C2) / 24P5 = 0.288,
since it is necessary to multiply by the 5C2 possible patterns of tagged and untagged elephants in a sample of five with two tagged. Question A couple are planning to have a family. They decide to stop having children either when they have two boys or when they have four children. Suppose that they are successful in their plan. (a) Write down the sample space. (b) Assume that, each time that they have a child, the probability that it is a boy is 1/2, independent of all other times. Find P(E) and P(F) where E = “there are at least two girls”, F = “there are more girls than boys”. Solution (a) S = {BB, BGB, GBB, BGGB, GBGB, GGBB, BGGG, GBGG, GGBG, GGGB, GGGG}. (b) E = {BGGB, GBGB, GGBB, BGGG, GBGG, GGBG, GGGB, GGGG}, F = {BGGG, GBGG, GGBG, GGGB, GGGG}. Now we have P(BB) = 1/4, P(BGB) = 1/8, P(BGGB) = 1/16, and similarly for the other outcomes. So P(E) = 8/16 = 1/2, P(F) = 5/16.
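Both worked examples above can be verified mechanically: the elephant count with the combination function, and the family-planning probabilities by enumerating the outcomes allowed by the stopping rule. A sketch for checking (not part of the original solutions):

```python
from math import comb

# Elephants: 6 tagged out of 24, select 5, want exactly 2 tagged.
p_elephants = comb(6, 2) * comb(18, 3) / comb(24, 5)
print(round(p_elephants, 3))                      # 0.288

# Family planning: stop at two boys or at four children; each child is B or G with prob 1/2.
def outcomes(seq=""):
    if seq.count("B") == 2 or len(seq) == 4:      # stopping rule reached
        return [seq]
    return outcomes(seq + "B") + outcomes(seq + "G")

S = outcomes()                                    # the 11 outcomes listed in part (a)
P = lambda event: sum(0.5 ** len(s) for s in S if event(s))
print(P(lambda s: s.count("G") >= 2))             # P(E) = 0.5
print(P(lambda s: s.count("G") > s.count("B")))   # P(F) = 5/16 = 0.3125
```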

Chapter 2 Conditional probability
In this chapter we develop the technique of conditional probability to deal with cases where events are not independent.

2.1 What is conditional probability?

Alice and Bob are going out to dinner. They toss a fair coin ‘best of three’ to decide who pays: if there are more heads than tails in the three tosses then Alice pays, otherwise Bob pays. Clearly each has a 50% chance of paying. The sample space is

S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT},

and the events 'Alice pays' and 'Bob pays' are respectively

A = {HHH, HHT, HTH, THH},
B = {HTT, THT, TTH, TTT}.

They toss the coin once and the result is heads; call this event E. How should we now reassess their chances? We have

E = {HHH, HHT, HTH, HTT},

and if we are given the information that the result of the first toss is heads, then E now becomes the sample space of the experiment, since the outcomes not in E are no longer possible. In the new experiment, the outcomes 'Alice pays' and 'Bob pays' are

A ∩ E = {HHH, HHT, HTH},
B ∩ E = {HTT}.


Thus the new probabilities that Alice and Bob pay for dinner are 3/4 and 1/4 respectively.

In general, suppose that we are given that an event E has occurred, and we want to compute the probability that another event A occurs. In general, we can no longer count, since the outcomes may not be equally likely. The correct definition is as follows.

Let E be an event with non-zero probability, and let A be any event. The conditional probability of A given E is defined as

P(A | E) = P(A ∩ E)/P(E).

Again I emphasise that this is the definition. If you are asked for the definition of conditional probability, it is not enough to say "the probability of A given that E has occurred", although this is the best way to understand it. There is no reason why event E should occur before event A!

Note the vertical bar in the notation. This is P(A | E), not P(A/E) or P(A \ E). Note also that the definition only applies in the case where P(E) is not equal to zero, since we have to divide by it, and this would make no sense if P(E) = 0.

To check the formula in our example:

P(A | E) = P(A ∩ E)/P(E) = (3/8)/(1/2) = 3/4,
P(B | E) = P(B ∩ E)/P(E) = (1/8)/(1/2) = 1/4.

It may seem like a small matter, but you should be familiar enough with this formula that you can write it down without stopping to think about the names of the events. Thus, for example, P(A | B) = P(A ∩ B)/P(B) if P(B) ≠ 0.

Example A random car is chosen among all those passing through Trafalgar Square on a certain day. The probability that the car is yellow is 3/100; the probability that the driver is blonde is 1/5; and the probability that the car is yellow and the driver is blonde is 1/50. Find the conditional probability that the driver is blonde given that the car is yellow.


Solution: If Y is the event 'the car is yellow' and B the event 'the driver is blonde', then we are given that P(Y) = 0.03, P(B) = 0.2, and P(Y ∩ B) = 0.02. So

P(B | Y) = P(B ∩ Y)/P(Y) = 0.02/0.03 = 0.667

to 3 d.p. Note that we haven't used all the information given.

There is a connection between conditional probability and independence:

Proposition 2.1 Let A and B be events with P(B) ≠ 0. Then A and B are independent if and only if P(A | B) = P(A).

Proof The words 'if and only if' tell us that we have two jobs to do: we have to show that if A and B are independent, then P(A | B) = P(A); and that if P(A | B) = P(A), then A and B are independent.

So first suppose that A and B are independent. Remember that this means that P(A ∩ B) = P(A) · P(B). Then

P(A | B) = P(A ∩ B)/P(B) = P(A) · P(B)/P(B) = P(A),

that is, P(A | B) = P(A), as we had to prove.

Now suppose that P(A | B) = P(A). In other words,

P(A ∩ B)/P(B) = P(A),

using the definition of conditional probability. Now clearing fractions gives P(A ∩ B) = P(A) · P(B), which is just what the statement 'A and B are independent' means.

This proposition is most likely what people have in mind when they say 'A and B are independent means that B has no effect on A'.
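The definition P(A | E) = P(A ∩ E)/P(E), and the link with independence in Proposition 2.1, can both be checked by counting in the coin example at the start of this chapter. A sketch (illustration only):

```python
from fractions import Fraction
from itertools import product

S = list(product("HT", repeat=3))
P = lambda E: Fraction(len(E), len(S))

A = [s for s in S if s.count("H") > s.count("T")]   # Alice pays
E = [s for s in S if s[0] == "H"]                   # heads on the first toss

AE = [s for s in A if s in E]
print(P(AE) / P(E))          # P(A | E) = 3/4

# Proposition 2.1: A and E are independent exactly when P(A | E) = P(A).
print(P(AE) == P(A) * P(E))  # False, and indeed P(A | E) = 3/4 while P(A) = 1/2
```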

2.2 Genetics

Here is a simplified version of how genes code eye colour, assuming only two colours of eyes. Each person has two genes for eye colour. Each gene is either B or b. A child receives one gene from each of its parents. The gene it receives from its father is one of its father’s two genes, each with probability 1/2; and similarly for its mother. The genes received from father and mother are independent. If your genes are BB or Bb or bB, you have brown eyes; if your genes are bb, you have blue eyes.


Example Suppose that John has brown eyes. So do both of John's parents. His sister has blue eyes. What is the probability that John's genes are BB?

Solution John's sister has genes bb, so one b must have come from each parent. Thus each of John's parents is Bb or bB; we may assume Bb. So the possibilities for John are (writing the gene from his father first) BB, Bb, bB, bb, each with probability 1/4. (For example, John gets his father's B gene with probability 1/2 and his mother's B gene with probability 1/2, and these are independent, so the probability that he gets BB is 1/4. Similarly for the other combinations.)

Let X be the event 'John has BB genes' and Y the event 'John has brown eyes'. Then X = {BB} and Y = {BB, Bb, bB}. The question asks us to calculate P(X | Y). This is given by

P(X | Y) = P(X ∩ Y)/P(Y) = (1/4)/(3/4) = 1/3.

2.3 The Theorem of Total Probability
Sometimes we are faced with a situation where we do not know the probability of an event B, but we know what its probability would be if we were sure that some other event had occurred.

Example An ice-cream seller has to decide whether to order more stock for the Bank Holiday weekend. He estimates that, if the weather is sunny, he has a 90% chance of selling all his stock; if it is cloudy, his chance is 60%; and if it rains, his chance is only 20%. According to the weather forecast, the probability of sunshine is 30%, the probability of cloud is 45%, and the probability of rain is 25%. (We assume that these are all the possible outcomes, so that their probabilities must add up to 100%.) What is the overall probability that the salesman will sell all his stock?

This problem is answered by the Theorem of Total Probability, which we now state. First we need a definition. The events A1, A2, . . . , An form a partition of the sample space if the following two conditions hold:

(a) the events are pairwise disjoint, that is, Ai ∩ Aj = ∅ for any pair of events Ai and Aj;

(b) A1 ∪ A2 ∪ · · · ∪ An = S.


Another way of saying the same thing is that every outcome in the sample space lies in exactly one of the events A1 , A2 , . . . , An . The picture shows the idea of a partition.

[Figure: a partition of the sample space into A1, A2, . . . , An]

Now we state and prove the Theorem of Total Probability.

Theorem 2.2 Let A1, A2, . . . , An form a partition of the sample space with P(Ai) ≠ 0 for all i, and let B be any event. Then

P(B) = ∑_{i=1}^{n} P(B | Ai) · P(Ai).

Proof By definition, P(B | Ai) = P(B ∩ Ai)/P(Ai). Multiplying up, we find that

P(B ∩ Ai) = P(B | Ai) · P(Ai).

Now consider the events B ∩ A1, B ∩ A2, . . . , B ∩ An. These events are pairwise disjoint; for any outcome lying in both B ∩ Ai and B ∩ Aj would lie in both Ai and Aj, and by assumption there are no such outcomes. Moreover, the union of all these events is B, since every outcome lies in one of the Ai. So, by Axiom 3, we conclude that

∑_{i=1}^{n} P(B ∩ Ai) = P(B).

Substituting our expression for P(B ∩ Ai) gives the result.

[Figure: the event B cut into pieces B ∩ A1, B ∩ A2, . . . , B ∩ An by the partition]

Consider the ice-cream salesman at the start of this section. Let A1 be the event 'it is sunny', A2 the event 'it is cloudy', and A3 the event 'it is rainy'. Then A1, A2 and A3 form a partition of the sample space, and we are given that

P(A1) = 0.3, P(A2) = 0.45, P(A3) = 0.25.


Let B be the event ‘the salesman sells all his stock’. The other information we are given is that P(B | A1 ) = 0.9, P(B | A2 ) = 0.6, P(B | A3 ) = 0.2.

By the Theorem of Total Probability,

P(B) = (0.9 × 0.3) + (0.6 × 0.45) + (0.2 × 0.25) = 0.59.

You will now realise that the Theorem of Total Probability is really being used when you calculate probabilities by tree diagrams. It is better to get into the habit of using it directly, since it avoids any accidental assumptions of independence.

One special case of the Theorem of Total Probability is very commonly used, and is worth stating in its own right. For any event A, the events A and A′ form a partition of S. To say that both A and A′ have non-zero probability is just to say that P(A) ≠ 0, 1. Thus we have the following corollary:

Corollary 2.3 Let A and B be events, and suppose that P(A) ≠ 0, 1. Then

P(B) = P(B | A) · P(A) + P(B | A′) · P(A′).
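The theorem translates directly into a one-line computation. Here is a minimal Python sketch of the ice-cream calculation (the variable names are my own); it simply forms the sum ∑ P(B | Ai) · P(Ai).

# P(Ai) for sunny, cloudy, rainy, and P(B | Ai) for each kind of weather.
p_weather = [0.3, 0.45, 0.25]
p_sell_given_weather = [0.9, 0.6, 0.2]

p_sell = sum(pb_given_a * pa
             for pb_given_a, pa in zip(p_sell_given_weather, p_weather))
print(p_sell)   # 0.59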

2.4 Sampling revisited
We can use the notion of conditional probability to treat sampling problems involving ordered samples. Example I have two red pens, one green pen, and one blue pen. I select two pens without replacement. (a) What is the probability that the first pen chosen is red? (b) What is the probability that the second pen chosen is red? For the first pen, there are four pens of which two are red, so the chance of selecting a red pen is 2/4 = 1/2. For the second pen, we must separate cases. Let A1 be the event ‘first pen red’, A2 the event ‘first pen green’ and A3 the event ‘first pen blue’. Then P(A1 ) = 1/2, P(A2 ) = P(A3 ) = 1/4 (arguing as above). Let B be the event ‘second pen red’. If the first pen is red, then only one of the three remaining pens is red, so that P(B | A1 ) = 1/3. On the other hand, if the first pen is green or blue, then two of the remaining pens are red, so P(B | A2 ) = P(B | A3 ) = 2/3.

By the Theorem of Total Probability,

P(B) = P(B | A1)P(A1) + P(B | A2)P(A2) + P(B | A3)P(A3) = (1/3) × (1/2) + (2/3) × (1/4) + (2/3) × (1/4) = 1/2.

We have reached by a roundabout argument a conclusion which you might think to be obvious. If we have no information about the first pen, then the second pen is equally likely to be any one of the four, and the probability should be 1/2, just as for the first pen. This argument happens to be correct. But, until your ability to distinguish between correct arguments and plausible-looking false ones is very well developed, you may be safer to stick to the calculation that we did. Beware of obvious-looking arguments in probability! Many clever people have been caught out.
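If you prefer to check such conclusions by brute force, you can enumerate the equally likely ordered selections directly. The sketch below (Python, with names invented for illustration) lists all ordered pairs of distinct pens and counts those whose second pen is red.

from itertools import permutations

pens = ["red", "red", "green", "blue"]

# All ordered selections of two distinct pens are equally likely.
pairs = list(permutations(pens, 2))
second_red = sum(1 for first, second in pairs if second == "red")
print(second_red / len(pairs))   # 0.5, as the argument above predicts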

2.5 Bayes’ Theorem

There is a very big difference between P(A | B) and P(B | A). Suppose that a new test is developed to identify people who are liable to suffer from some genetic disease in later life. Of course, no test is perfect; there will be some carriers of the defective gene who test negative, and some non-carriers who test positive. So, for example, let A be the event ‘the patient is a carrier’, and B the event ‘the test result is positive’.

The scientists who develop the test are concerned with the probabilities that the test result is wrong, that is, with P(B | A′) and P(B′ | A). However, a patient who has taken the test has different concerns. If I tested positive, what is the chance that I have the disease? If I tested negative, how sure can I be that I am not a carrier? In other words, P(A | B) and P(A | B′).

These conditional probabilities are related by Bayes’ Theorem:

Theorem 2.4 Let A and B be events with non-zero probability. Then

P(A | B) = P(B | A) · P(A) / P(B).

The proof is not hard. We have

P(A | B) · P(B) = P(A ∩ B) = P(B | A) · P(A),

using the definition of conditional probability twice. (Note that we need both A and B to have non-zero probability here.) Now divide this equation by P(B) to get the result.

If P(A) ≠ 0, 1 and P(B) ≠ 0, then we can use Corollary 2.3 to write this as

P(A | B) = P(B | A) · P(A) / (P(B | A) · P(A) + P(B | A′) · P(A′)).

Bayes’ Theorem is often stated in this form.

Example Consider the ice-cream salesman from Section 2.3. Given that he sold all his stock of ice-cream, what is the probability that the weather was sunny? (This question might be asked by the warehouse manager who doesn’t know what the weather was actually like.) Using the same notation that we used before, A1 is the event ‘it is sunny’ and B the event ‘the salesman sells all his stock’. We are asked for P(A1 | B). We were given that P(B | A1) = 0.9 and that P(A1) = 0.3, and we calculated that P(B) = 0.59. So by Bayes’ Theorem,

P(A1 | B) = P(B | A1)P(A1)/P(B) = (0.9 × 0.3)/0.59 = 0.46 to 2 d.p.

Example Consider the clinical test described at the start of this section. Suppose that 1 in 1000 of the population is a carrier of the disease. Suppose also that the probability that a carrier tests negative is 1%, while the probability that a non-carrier tests positive is 5%. (A test achieving these values would be regarded as very successful.) Let A be the event ‘the patient is a carrier’, and B the event ‘the test result is positive’. We are given that P(A) = 0.001 (so that P(A′) = 0.999), and that

P(B | A) = 0.99, P(B | A′) = 0.05.

(a) A patient has just had a positive test result. What is the probability that the patient is a carrier? The answer is

P(A | B) = P(B | A)P(A) / (P(B | A)P(A) + P(B | A′)P(A′))
         = (0.99 × 0.001) / ((0.99 × 0.001) + (0.05 × 0.999))
         = 0.00099/0.05094 = 0.0194.

(b) A patient has just had a negative test result. What is the probability that the patient is a carrier? The answer is

P(A | B′) = P(B′ | A)P(A) / (P(B′ | A)P(A) + P(B′ | A′)P(A′))
          = (0.01 × 0.001) / ((0.01 × 0.001) + (0.95 × 0.999))
          = 0.00001/0.94906 = 0.00001.

So a patient with a negative test result can be reassured; but a patient with a positive test result still has less than 2% chance of being a carrier, so is likely to worry unnecessarily. Of course, these calculations assume that the patient has been selected at random from the population. If the patient has a family history of the disease, the calculations would be quite different.

Example 2% of the population have a certain blood disease in a serious form; 10% have it in a mild form; and 88% don’t have it at all. A new blood test is developed; the probability of testing positive is 9/10 if the subject has the serious form, 6/10 if the subject has the mild form, and 1/10 if the subject doesn’t have the disease. I have just tested positive. What is the probability that I have the serious form of the disease?

Let A1 be ‘has disease in serious form’, A2 be ‘has disease in mild form’, and A3 be ‘doesn’t have disease’. Let B be ‘test positive’. Then we are given that A1, A2, A3 form a partition and

P(A1) = 0.02, P(A2) = 0.1, P(A3) = 0.88,
P(B | A1) = 0.9, P(B | A2) = 0.6, P(B | A3) = 0.1.

Thus, by the Theorem of Total Probability,

P(B) = 0.9 × 0.02 + 0.6 × 0.1 + 0.1 × 0.88 = 0.166,

and then by Bayes’ Theorem,

P(A1 | B) = P(B | A1)P(A1)/P(B) = (0.9 × 0.02)/0.166 = 0.108 to 3 d.p.
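Calculations of this shape (a partition, prior probabilities, and conditional probabilities of the evidence) are all instances of the same formula, so it is worth writing the formula once. The sketch below is a hypothetical Python helper of my own, not something from the notes or a library; applied to the blood-disease example it should print roughly 0.108.

def posterior(priors, likelihoods, target):
    # Bayes' Theorem over a partition: P(A_target | B).
    total = sum(p * l for p, l in zip(priors, likelihoods))   # Theorem of Total Probability
    return priors[target] * likelihoods[target] / total

priors = [0.02, 0.1, 0.88]       # serious, mild, no disease
likelihoods = [0.9, 0.6, 0.1]    # P(test positive | each case)
print(posterior(priors, likelihoods, 0))   # about 0.108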

2.6 Iterated conditional probability

The conditional probability of C, given that both A and B have occurred, is just P(C | A ∩ B). Sometimes instead we just write P(C | A, B). It is given by

P(C | A, B) = P(C ∩ A ∩ B) / P(A ∩ B),

so

P(A ∩ B ∩ C) = P(C | A, B)P(A ∩ B).

Now we also have P(A ∩ B) = P(B | A)P(A), so finally (assuming that P(A ∩ B) ≠ 0), we have

P(A ∩ B ∩ C) = P(C | A, B)P(B | A)P(A).

This generalises to any number of events:

Proposition 2.5 Let A1, . . . , An be events. Suppose that P(A1 ∩ · · · ∩ An−1) ≠ 0. Then

P(A1 ∩ A2 ∩ · · · ∩ An) = P(An | A1, . . . , An−1) · · · P(A2 | A1)P(A1).

We apply this to the birthday paradox. The birthday paradox is the following statement:

If there are 23 or more people in a room, then the chances are better than even that two of them have the same birthday.

To simplify the analysis, we ignore 29 February, and assume that the other 365 days are all equally likely as birthdays of a random person. (This is not quite true, but the inaccuracy is not enough to have much effect on the conclusion.) Suppose that we have n people p1, p2, . . . , pn. Let A2 be the event ‘p2 has a different birthday from p1’. Then P(A2) = 1 − 1/365, since whatever p1’s birthday is, there is a 1 in 365 chance that p2 will have the same birthday.

Let A3 be the event ‘p3 has a different birthday from p1 and p2’. It is not straightforward to evaluate P(A3), since we have to consider whether p1 and p2 have the same birthday or not. (See below.) But we can calculate that P(A3 | A2) = 1 − 2/365, since if A2 occurs then p1 and p2 have birthdays on different days, and A3 will occur only if p3’s birthday is on neither of these days. So

P(A2 ∩ A3) = P(A2)P(A3 | A2) = (1 − 1/365)(1 − 2/365).

What is A2 ∩ A3? It is simply the event that all three people have birthdays on different days. Now this process extends. If Ai denotes the event ‘pi’s birthday is not on the same day as any of p1, . . . , pi−1’, then

P(Ai | A1, . . . , Ai−1) = 1 − (i − 1)/365,

and so by Proposition 2.5,

P(A1 ∩ · · · ∩ Ai) = (1 − 1/365)(1 − 2/365) · · · (1 − (i − 1)/365).

Call this number qi; it is the probability that all of the people p1, . . . , pi have their birthdays on different days. The numbers qi decrease, since at each step we multiply by a factor less than 1. So there will be some value of n such that

qn−1 > 0.5, qn ≤ 0.5;

that is, n is the smallest number of people for which the probability that they all have different birthdays is less than 1/2, that is, the probability of at least one coincidence is greater than 1/2. By calculation, we find that q22 = 0.5243, q23 = 0.4927 (to 4 d.p.); so 23 people are enough for the probability of coincidence to be greater than 1/2.

Now return to a question we left open before. What is the probability of the event A3? (This is the event that p3 has a different birthday from both p1 and p2.) If p1 and p2 have different birthdays, the probability is 1 − 2/365: this is the calculation we already did. On the other hand, if p1 and p2 have the same birthday, then the probability is 1 − 1/365. These two numbers are P(A3 | A2) and P(A3 | A2′) respectively. So, by the Theorem of Total Probability,

P(A3) = P(A3 | A2)P(A2) + P(A3 | A2′)P(A2′)
      = (1 − 2/365)(1 − 1/365) + (1 − 1/365)(1/365)
      = 0.9945 to 4 d.p.

Problem How many people would you need to pick at random to ensure that the chance of two of them being born in the same month are better than even? Assuming all months equally likely, if Bi is the event that pi is born in a different month from any of p1, . . . , pi−1, then as before we find that

P(Bi | B1, . . . , Bi−1) = 1 − (i − 1)/12,

so

P(B1 ∩ · · · ∩ Bi) = (1 − 1/12)(1 − 2/12) · · · (1 − (i − 1)/12).

We calculate that this probability is (11/12) × (10/12) × (9/12) = 0.5729

for i = 4 and

(11/12) × (10/12) × (9/12) × (8/12) = 0.3819

for i = 5. So, with five people, it is more likely that two will have the same birth month.

A true story. Some years ago, in a probability class with only ten students, the lecturer started discussing the Birthday Paradox. He said to the class, “I bet that no two people in the room have the same birthday”. He should have been on safe ground, since q11 = 0.859. (Remember that there are eleven people in the room!) However, a student in the back said “I’ll take the bet”, and after a moment all the other students realised that the lecturer would certainly lose his wager. Why? (Answer in the next chapter.)
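The products qi are easy to compute directly, which is a good way to convince yourself that the answers really are 23 for days and 5 for months. A minimal Python sketch (the function name is my own invention):

def smallest_group(categories):
    # Smallest n for which P(all n people fall in different categories) drops below 1/2.
    q = 1.0
    n = 1
    while q > 0.5:
        n += 1
        q *= 1 - (n - 1) / categories
    return n, q

print(smallest_group(365))   # (23, 0.4927...) -- the birthday paradox
print(smallest_group(12))    # (5, 0.3819...)  -- birth months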

2.7 Worked examples
Question Each person has two genes for cystic fibrosis. Each gene is either N or C. Each child receives one gene from each parent. If your genes are NN or NC or CN then you are normal; if they are CC then you have cystic fibrosis.

(a) Neither of Sally’s parents has cystic fibrosis. Nor does she. However, Sally’s sister Hannah does have cystic fibrosis. Find the probability that Sally has at least one C gene (given that she does not have cystic fibrosis).

(b) In the general population the ratio of N genes to C genes is about 49 to 1. You can assume that the two genes in a person are independent. Harry does not have cystic fibrosis. Find the probability that he has at least one C gene (given that he does not have cystic fibrosis).

(c) Harry and Sally plan to have a child. Find the probability that the child will have cystic fibrosis (given that neither Harry nor Sally has it).

Solution During this solution, we will use a number of times the following principle. Let A and B be events with A ⊆ B. Then A ∩ B = A, and so

P(A | B) = P(A ∩ B)/P(B) = P(A)/P(B).

(a) This is the same as the eye colour example discussed earlier. We are given that Sally’s sister has genes CC, and one gene must come from each parent. But

neither parent is CC, so each parent is CN or NC. Now by the basic rules of genetics, all four combinations of genes for a child of these parents, namely CC, CN, NC, NN, will have probability 1/4. If S1 is the event ‘Sally has at least one C gene’, then S1 = {CN, NC, CC}; and if S2 is the event ‘Sally does not have cystic fibrosis’, then S2 = {CN, NC, NN}. Then

P(S1 | S2) = P(S1 ∩ S2)/P(S2) = (2/4)/(3/4) = 2/3.

(b) We know nothing specific about Harry, so we assume that his genes are randomly and independently selected from the population. We are given that the probability of a random gene being C or N is 1/50 and 49/50 respectively. Then the probabilities of Harry having genes CC, CN, NC, NN are respectively (1/50)², (1/50) · (49/50), (49/50) · (1/50), and (49/50)². So, if H1 is the event ‘Harry has at least one C gene’, and H2 is the event ‘Harry does not have cystic fibrosis’, then

P(H1 | H2) = P(H1 ∩ H2)/P(H2) = ((49/2500) + (49/2500)) / ((49/2500) + (49/2500) + (2401/2500)) = 2/51.

(c) Let X be the event that Harry’s and Sally’s child has cystic fibrosis. As in (a), this can only occur if Harry and Sally both have CN or NC genes. That is, X ⊆ S3 ∩ H3, where S3 = S1 ∩ S2 and H3 = H1 ∩ H2. Now if Harry and Sally are both CN or NC, these genes pass independently to the baby, and so

P(X | S3 ∩ H3) = P(X)/P(S3 ∩ H3) = 1/4.

(Remember the principle that we started with!) We are asked to find P(X | S2 ∩ H2), in other words (since X ⊆ S3 ∩ H3 ⊆ S2 ∩ H2),

P(X) / P(S2 ∩ H2).

Now Harry’s and Sally’s genes are independent, so

P(S3 ∩ H3) = P(S3) · P(H3), P(S2 ∩ H2) = P(S2) · P(H2).

Thus,

P(X)/P(S2 ∩ H2) = (P(X)/P(S3 ∩ H3)) · (P(S3 ∩ H3)/P(S2 ∩ H2))
                = (1/4) · (P(S1 ∩ S2)/P(S2)) · (P(H1 ∩ H2)/P(H2))
                = (1/4) · P(S1 | S2) · P(H1 | H2)
                = (1/4) · (2/3) · (2/51)
                = 1/153.
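Because the conditioning here is delicate, a Monte Carlo check is reassuring. The sketch below (Python; all names are my own) draws Sally’s genes as a child of two CN parents, Harry’s genes from the population frequencies, discards trials in which either has cystic fibrosis, and estimates the probability that their child is CC. The estimate should be near 1/153 ≈ 0.0065.

import random

def sally():
    # Child of two carrier (CN) parents: one gene from each, C or N with probability 1/2.
    return [random.choice("CN"), random.choice("CN")]

def harry():
    # Two independent genes drawn from the population, where P(C) = 1/50.
    return [("C" if random.random() < 1/50 else "N") for _ in range(2)]

child_cc = 0
conditioned = 0
for _ in range(1_000_000):
    s, h = sally(), harry()
    if s == ["C", "C"] or h == ["C", "C"]:
        continue                      # condition on neither parent having cystic fibrosis
    conditioned += 1
    child = [random.choice(s), random.choice(h)]
    if child == ["C", "C"]:
        child_cc += 1

print(child_cc / conditioned)         # close to 1/153 = 0.0065...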

I thank Eduardo Mendes for pointing out a mistake in my previous solution to this problem.

Question The Land of Nod lies in the monsoon zone, and has just two seasons, Wet and Dry. The Wet season lasts for 1/3 of the year, and the Dry season for 2/3 of the year. During the Wet season, the probability that it is raining is 3/4; during the Dry season, the probability that it is raining is 1/6.

(a) I visit the capital city, Oneirabad, on a random day of the year. What is the probability that it is raining when I arrive?

(b) I visit Oneirabad on a random day, and it is raining when I arrive. Given this information, what is the probability that my visit is during the Wet season?

(c) I visit Oneirabad on a random day, and it is raining when I arrive. Given this information, what is the probability that it will be raining when I return to Oneirabad in a year’s time? (You may assume that in a year’s time the season will be the same as today but, given the season, whether or not it is raining is independent of today’s weather.)

Solution (a) Let W be the event ‘it is the wet season’, D the event ‘it is the dry season’, and R the event ‘it is raining when I arrive’. We are given that P(W) = 1/3, P(D) = 2/3, P(R | W) = 3/4, P(R | D) = 1/6. By the Theorem of Total Probability,

P(R) = P(R | W)P(W) + P(R | D)P(D) = (3/4) · (1/3) + (1/6) · (2/3) = 13/36.

(b) By Bayes’ Theorem,

P(W | R) = P(R | W)P(W)/P(R) = ((3/4) · (1/3))/(13/36) = 9/13.

(c) Let R′ be the event ‘it is raining in a year’s time’. The information we are given is that P(R ∩ R′ | W) = P(R | W)P(R′ | W), and similarly for D. Thus

P(R ∩ R′) = P(R ∩ R′ | W)P(W) + P(R ∩ R′ | D)P(D) = (3/4)² · (1/3) + (1/6)² · (2/3) = 89/432,

and so

P(R′ | R) = P(R ∩ R′)/P(R) = (89/432)/(13/36) = 89/156.
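Exact arithmetic with Python’s fractions module is a convenient way to check answers like 13/36, 9/13 and 89/156. A sketch (not part of the notes; the names are mine):

from fractions import Fraction as F

p_wet, p_dry = F(1, 3), F(2, 3)
p_rain_wet, p_rain_dry = F(3, 4), F(1, 6)

p_rain = p_rain_wet * p_wet + p_rain_dry * p_dry      # Theorem of Total Probability
p_wet_given_rain = p_rain_wet * p_wet / p_rain        # Bayes' Theorem
p_rain_both = p_rain_wet**2 * p_wet + p_rain_dry**2 * p_dry
p_rain_next_year_given_rain = p_rain_both / p_rain

print(p_rain, p_wet_given_rain, p_rain_next_year_given_rain)   # 13/36, 9/13, 89/156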

Chapter 3 Random variables
In this chapter we define random variables and some related concepts such as probability mass function, expected value, variance, and median; and look at some particularly important types of random variables including the binomial, Poisson, and normal.

3.1 What are random variables?

The Holy Roman Empire was, in the words of the historian Voltaire, “neither holy, nor Roman, nor an empire”. Similarly, a random variable is neither random nor a variable: A random variable is a function defined on a sample space. The values of the function can be anything at all, but for us they will always be numbers. The standard abbreviation for ‘random variable’ is r.v. Example I select at random a student from the class and measure his or her height in centimetres. Here, the sample space is the set of students; the random variable is ‘height’, which is a function from the set of students to the real numbers: h(S) is the height of student S in centimetres. (Remember that a function is nothing but a rule for associating with each element of its domain set an element of its target or range set. Here the domain set is the sample space S , the set of students in the class, and the target space is the set of real numbers.) Example I throw a six-sided die twice; I am interested in the sum of the two numbers. Here the sample space is

S = {(i, j) : 1 ≤ i, j ≤ 6},
and the random variable F is given by F(i, j) = i + j. The target set is the set {2, 3, . . . , 12}.

The two random variables in the above examples are representatives of the two types of random variables that we will consider. These definitions are not quite precise, but more examples should make the idea clearer.

A random variable F is discrete if the values it can take are separated by gaps. For example, F is discrete if it can take only finitely many values (as in the second example above, where the values are the integers from 2 to 12), or if the values of F are integers (for example, the number of nuclear decays which take place in a second in a sample of radioactive material – the number is an integer but we can’t easily put an upper limit on it).

A random variable is continuous if there are no gaps between its possible values. In the first example, the height of a student could in principle be any real number between certain extreme limits. A random variable whose values range over an interval of real numbers, or even over all real numbers, is continuous.

One could concoct random variables which are neither discrete nor continuous (e.g. the possible values could be 1, 2, 3, or any real number between 4 and 5), but we will not consider such random variables. We begin by considering discrete random variables.

3.2 Probability mass function
Let F be a discrete random variable. The most basic question we can ask is: given any value a in the target set of F, what is the probability that F takes the value a? In other words, if we consider the event A = {x ∈ S : F(x) = a} what is P(A)? (Remember that an event is a subset of the sample space.) Since events of this kind are so important, we simplify the notation: we write P(F = a) in place of P({x ∈ S : F(x) = a}). (There is a fairly common convention in probability and statistics that random variables are denoted by capital letters and their values by lower-case letters. In fact, it is quite common to use the same letter in lower case for a value of the random variable; thus, we would write P(F = f ) in the above example. But remember that this is only a convention, and you are not bound to it.)

The probability mass function of a discrete random variable F is the function, formula or table which gives the value of P(F = a) for each element a in the target set of F. If F takes only a few values, it is convenient to list it in a table; otherwise we should give a formula if possible. The standard abbreviation for ‘probability mass function’ is p.m.f.

Example I toss a fair coin three times. The random variable X gives the number of heads recorded. The possible values of X are 0, 1, 2, 3, and its p.m.f. is

     a           0     1     2     3
     P(X = a)   1/8   3/8   3/8   1/8

For the sample space is {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}, and each outcome is equally likely. The event X = 1, for example, when written as a set of outcomes, is equal to {HTT, THT, TTH}, and has probability 3/8.

Two random variables X and Y are said to have the same distribution if the values they take and their probability mass functions are equal. We write X ∼ Y in this case. In the above example, if Y is the number of tails recorded during the experiment, then X and Y have the same distribution, even though their actual values are different (indeed, Y = 3 − X).
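A p.m.f. obtained by counting equally likely outcomes can always be checked by enumerating the sample space. A small Python sketch for the three-coin example (the names are my own):

from itertools import product
from collections import Counter

outcomes = list(product("HT", repeat=3))          # the 8 equally likely outcomes
counts = Counter(seq.count("H") for seq in outcomes)

pmf = {k: counts[k] / len(outcomes) for k in sorted(counts)}
print(pmf)   # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}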

3.3 Expected value and variance

Let X be a discrete random variable which takes the values a1, . . . , an. The expected value or mean of X is the number E(X) given by the formula

E(X) = ∑ ai P(X = ai), the sum being over i = 1, . . . , n.

That is, we multiply each value of X by the probability that X takes that value, and sum these terms. The expected value is a kind of ‘generalised average’: if each of the values is equally likely, so that each has probability 1/n, then E(X) = (a1 + · · · + an)/n, which is just the average of the values.

There is an interpretation of the expected value in terms of mechanics. If we put a mass pi on the axis at position ai for i = 1, . . . , n, where pi = P(X = ai), then the centre of mass of all these masses is at the point E(X).

If the random variable X takes infinitely many values, say a1, a2, a3, . . ., then we define the expected value of X to be the infinite sum

E(X) = ∑ ai P(X = ai), now summing over all i ≥ 1.

Of course, now we have to worry about whether this means anything, that is, whether this infinite series is convergent. This is a question which is discussed at great length in analysis. We won’t worry about it too much. Usually, discrete random variables will only have finitely many values; in the few examples we consider where there are infinitely many values, the series will usually be a geometric series or something similar, which we know how to sum. In the proofs below, we assume that the number of values is finite.

The variance of X is the number Var(X) given by

Var(X) = E(X²) − E(X)².

Here, X² is just the random variable whose values are the squares of the values of X. Thus

E(X²) = ∑ ai² P(X = ai)

(or an infinite sum, if necessary). The next theorem shows that, if E(X) is a kind of average of the values of X, then Var(X) is a measure of how spread-out the values are around their average.

Proposition 3.1 Let X be a discrete random variable with E(X) = µ. Then

Var(X) = E((X − µ)²) = ∑ (ai − µ)² P(X = ai).

For the second term is equal to the third by definition, and the third is

∑ (ai − µ)² P(X = ai) = ∑ (ai² − 2µai + µ²) P(X = ai)
                      = ∑ ai² P(X = ai) − 2µ ∑ ai P(X = ai) + µ² ∑ P(X = ai).

(What is happening here is that the entire sum consists of n rows with three terms in each row. We add it up by columns instead of by rows, getting three parts with n terms in each part.) Continuing, we find

E((X − µ)²) = E(X²) − 2µE(X) + µ² = E(X²) − E(X)²,

and we are done. (Remember that E(X) = µ, and that ∑ P(X = ai) = 1, since the events X = ai form a partition.)

Some people take the conclusion of this proposition as the definition of variance.

Example I toss a fair coin three times; X is the number of heads. What are the expected value and variance of X?

E(X) = 0 × (1/8) + 1 × (3/8) + 2 × (3/8) + 3 × (1/8) = 3/2,
Var(X) = 0² × (1/8) + 1² × (3/8) + 2² × (3/8) + 3² × (1/8) − (3/2)² = 3/4.

If we calculate the variance using Proposition 3.1, we get

Var(X) = (0 − 3/2)² × (1/8) + (1 − 3/2)² × (3/8) + (2 − 3/2)² × (3/8) + (3 − 3/2)² × (1/8) = 3/4.

Two properties of expected value and variance can be used as a check on your calculations.

• The expected value of X always lies between the smallest and largest values of X.

• The variance of X is never negative. (For the formula in Proposition 3.1 is a sum of terms, each of the form (ai − µ)² (a square, hence non-negative) times P(X = ai) (a probability, hence non-negative).)
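These formulae translate directly into code, which also makes the two checks above easy to automate. A sketch of hypothetical Python helper functions of my own:

def expectation(pmf):
    # pmf is a dict mapping values a_i to probabilities P(X = a_i).
    return sum(a * p for a, p in pmf.items())

def variance(pmf):
    mu = expectation(pmf)
    return sum((a - mu) ** 2 * p for a, p in pmf.items())

pmf = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}   # number of heads in three tosses
print(expectation(pmf), variance(pmf))   # 1.5 and 0.75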

3.4 Joint p.m.f. of two random variables

Let X be a random variable taking the values a1 , . . . , an , and let Y be a random variable taking the values b1 , . . . , bm . We say that X and Y are independent if, for any possible values i and j, we have P(X = ai ,Y = b j ) = P(X = ai ) · P(Y = b j ). Here P(X = ai ,Y = b j ) means the probability of the event that X takes the value ai and Y takes the value b j . So we could re-state the definition as follows: The random variables X and Y are independent if, for any value ai of X and any value b j of Y , the events X = ai and Y = b j are independent (events). Note the difference between ‘independent events’ and ‘independent random variables’.

Example In Chapter 2, we saw the following: I have two red pens, one green pen, and one blue pen. I select two pens without replacement. Then the events ‘exactly one red pen selected’ and ‘exactly one green pen selected’ turned out to be independent. Let X be the number of red pens selected, and Y the number of green pens selected. Then P(X = 1,Y = 1) = P(X = 1) · P(Y = 1). Are X and Y independent random variables? No, because P(X = 2) = 1/6, P(Y = 1) = 1/2, but P(X = 2,Y = 1) = 0 (it is impossible to have two red and one green in a sample of two). On the other hand, if I roll a die twice, and X and Y are the numbers that come up on the first and second throws, then X and Y will be independent, even if the die is not fair (so that the outcomes are not all equally likely). If we have more than two random variables (for example X,Y, Z), we say that they are mutually independent if the events that the random variables take specific values (for example, X = a, Y = b, Z = c) are mutually independent. (You may want to revise the material on mutually independent events.) What about the expected values of random variables? For expected value, it is easy, but for variance it helps if the variables are independent: Theorem 3.2 Let X and Y be random variables. (a) E(X +Y ) = E(X) + E(Y ). (b) If X and Y are independent, then Var(X +Y ) = Var(X) + Var(Y ).

We will see the proof later. If two random variables X and Y are not independent, then knowing the p.m.f. of each variable does not tell the whole story. The joint probability mass function (or joint p.m.f.) of X and Y is the table giving, for each value ai of X and each value b j of Y , the probability that X = ai and Y = b j . We arrange the table so that the rows correspond to the values of X and the columns to the values of Y . Note that summing the entries in the row corresponding to the value ai gives the probability that X = ai ; that is, the row sums form the p.m.f. of X. Similarly the column sums form the p.m.f. of Y . (The row and column sums are sometimes called the marginal distributions or marginals.) In particular, X and Y are independent r.v.s if and only if each entry of the table is equal to the product of its row sum and its column sum.

Example I have two red pens, one green pen, and one blue pen, and I choose two pens without replacement. Let X be the number of red pens that I choose and Y the number of green pens. Then the joint p.m.f. of X and Y is given by the following table:

              Y = 0    Y = 1
     X = 0      0       1/6
     X = 1     1/3      1/3
     X = 2     1/6       0

The row and column sums give us the p.m.f.s for X and Y:

     a          0     1     2              b          0     1
     P(X = a)  1/6   2/3   1/6             P(Y = b)   1/2   1/2
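The marginal p.m.f.s and the independence check can be read off mechanically from the joint table. A Python sketch (invented names), using the pen example:

from fractions import Fraction as F

# joint[(x, y)] = P(X = x, Y = y) for the two-pen example
joint = {(0, 0): F(0), (0, 1): F(1, 6),
         (1, 0): F(1, 3), (1, 1): F(1, 3),
         (2, 0): F(1, 6), (2, 1): F(0)}

px = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1, 2)}   # row sums
py = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}      # column sums

print(px)   # marginal of X: 1/6, 2/3, 1/6
print(py)   # marginal of Y: 1/2, 1/2

# X and Y are independent only if every entry equals the product of its row and column sums.
print(all(joint[(x, y)] == px[x] * py[y] for x in px for y in py))   # False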

Now we give the proof of Theorem 3.2. We consider the joint p.m.f. of X and Y. The random variable X + Y takes the values ai + bj for i = 1, . . . , n and j = 1, . . . , m. Now the probability that it takes a given value ck is the sum of the probabilities P(X = ai, Y = bj) over all i and j such that ai + bj = ck. Thus,

E(X + Y) = ∑ ck P(X + Y = ck)
         = ∑∑ (ai + bj) P(X = ai, Y = bj)
         = ∑ ai (∑ P(X = ai, Y = bj)) + ∑ bj (∑ P(X = ai, Y = bj)),

where the inner sum in the first term is over j and the inner sum in the second term is over i. Now ∑ over j of P(X = ai, Y = bj) is a row sum of the joint p.m.f. table, so is equal to P(X = ai); and similarly ∑ over i of P(X = ai, Y = bj) is a column sum and is equal to P(Y = bj). So

E(X + Y) = ∑ ai P(X = ai) + ∑ bj P(Y = bj) = E(X) + E(Y).

The variance is a bit trickier. First we calculate

E((X + Y)²) = E(X² + 2XY + Y²) = E(X²) + 2E(XY) + E(Y²),

using part (a) of the Theorem. We have to consider the term E(XY). For this, we have to make the assumption that X and Y are independent, that is,

P(X = ai, Y = bj) = P(X = ai) · P(Y = bj).

As before, we have

E(XY) = ∑∑ ai bj P(X = ai, Y = bj)
      = ∑∑ ai bj P(X = ai) P(Y = bj)
      = (∑ ai P(X = ai)) · (∑ bj P(Y = bj))
      = E(X) · E(Y).

So

Var(X + Y) = E((X + Y)²) − (E(X + Y))²
           = (E(X²) + 2E(XY) + E(Y²)) − (E(X)² + 2E(X)E(Y) + E(Y)²)
           = (E(X²) − E(X)²) + 2(E(XY) − E(X)E(Y)) + (E(Y²) − E(Y)²)
           = Var(X) + Var(Y).

To finish this section, we consider constant random variables. (If the thought of a ‘constant variable’ worries you, remember that a random variable is not a variable at all but a function, and there is nothing amiss with a constant function.) Proposition 3.3 Let C be a constant random variable with value c. Let X be any random variable. (a) E(C) = c, Var(C) = 0. (b) E(X + c) = E(X) + c, Var(X + c) = Var(X). (c) E(cX) = cE(X), Var(cX) = c2 Var(X). Proof (a) The random variable C takes the single value c with P(C = c) = 1. So E(C) = c · 1 = c. Also, Var(C) = E(C2 ) − E(C)2 = c2 − c2 = 0. (For C2 is a constant random variable with value c2 .)

(b) This follows immediately from Theorem 3.2, once we observe that the constant random variable C and any random variable X are independent. (This is true because P(X = a, C = c) = P(X = a) · 1.) Then

E(X + c) = E(X) + E(C) = E(X) + c,
Var(X + c) = Var(X) + Var(C) = Var(X).

(c) If a1, . . . , an are the values of X, then ca1, . . . , can are the values of cX, and P(cX = cai) = P(X = ai). So

E(cX) = ∑ cai P(cX = cai) = c ∑ ai P(X = ai) = cE(X).

Then

Var(cX) = E(c²X²) − E(cX)²
        = c²E(X²) − (cE(X))²
        = c²(E(X²) − E(X)²)
        = c²Var(X).

3.5 Some discrete random variables

We now look at five types of discrete random variables, each depending on one or more parameters. We describe for each type the situations in which it arises, and give the p.m.f., the expected value, and the variance. If the variable is tabulated in the New Cambridge Statistical Tables, we give the table number, and some examples of using the tables. You should have a copy of the tables to follow the examples. A summary of this information is given in Appendix B. Before we begin, a comment on the New Cambridge Statistical Tables. They don’t give the probability mass function (or p.m.f.), but a closely related function called the cumulative distribution function. It is defined for a discrete random variable as follows. Let X be a random variable taking values a1 , a2 , . . . , an . We assume that these are arranged in ascending order: a1 < a2 < · · · < an . The cumulative distribution function, or c.d.f., of X is given by FX (ai ) = P(X ≤ ai ).

We see that it can be expressed in terms of the p.m.f. of X as follows:

FX(ai) = P(X = a1) + · · · + P(X = ai) = ∑ P(X = aj), summing over j = 1, . . . , i.

In the other direction, we can recover the p.m.f. from the c.d.f.:

P(X = ai) = FX(ai) − FX(ai−1).

We won’t use the c.d.f. of a discrete random variable except for looking up the tables. It is much more important for continuous random variables!

Bernoulli random variable Bernoulli(p)
A Bernoulli random variable is the simplest type of all. It only takes two values, 0 and 1. So its p.m.f. looks as follows:

     x           0    1
     P(X = x)    q    p

Here, p is the probability that X = 1; it can be any number between 0 and 1. Necessarily q (the probability that X = 0) is equal to 1 − p. So p determines everything.

For a Bernoulli random variable X, we sometimes describe the experiment as a ‘trial’, the event X = 1 as ‘success’, and the event X = 0 as ‘failure’. For example, if a biased coin has probability p of coming down heads, then the number of heads that we get when we toss the coin once is a Bernoulli(p) random variable.

More generally, let A be any event in a probability space S. With A, we associate a random variable IA (remember that a random variable is just a function on S) by the rule

IA(s) = 1 if s ∈ A, and IA(s) = 0 if s ∉ A.

The random variable IA is called the indicator variable of A, because its value indicates whether or not A occurred. It is a Bernoulli(p) random variable, where p = P(A). (The event IA = 1 is just the event A.) Some people write 1A instead of IA.

Calculation of the expected value and variance of a Bernoulli random variable is easy. Let X ∼ Bernoulli(p). (Remember that ∼ means “has the same p.m.f. as”.)

E(X) = 0 · q + 1 · p = p;
Var(X) = 0² · q + 1² · p − p² = p − p² = pq.

(Remember that q = 1 − p.)

Binomial random variable Bin(n, p)
Remember that for a Bernoulli random variable, we describe the event X = 1 as a ‘success’. Now a binomial random variable counts the number of successes in n independent trials each associated with a Bernoulli(p) random variable. For example, suppose that we have a biased coin for which the probability of heads is p. We toss the coin n times and count the number of heads obtained. This number is a Bin(n, p) random variable.

A Bin(n, p) random variable X takes the values 0, 1, 2, . . . , n, and the p.m.f. of X is given by

P(X = k) = nCk q^(n−k) p^k for k = 0, 1, 2, . . . , n,

where q = 1 − p. This is because there are nCk different ways of obtaining k heads in a sequence of n throws (the number of choices of the k positions in which the heads occur), and the probability of getting k heads and n − k tails in a particular order is q^(n−k) p^k.

Note that we have given a formula rather than a table here. For small values we could tabulate the results; for example, for Bin(4, p):

     k           0     1      2       3      4
     P(X = k)    q⁴   4q³p   6q²p²   4qp³   p⁴

Note: when we add up all the probabilities in the table, we get

∑ nCk q^(n−k) p^k = (q + p)^n = 1,

summing over k = 0, 1, . . . , n, as it should be: here we used the binomial theorem

(x + y)^n = ∑ nCk x^(n−k) y^k.

(This argument explains the name of the binomial random variable!)

If X ∼ Bin(n, p), then E(X) = np, Var(X) = npq.

There are two ways to prove this, an easy way and a harder way. The easy way only works for the binomial, but the harder way is useful for many random variables. However, you can skip it if you wish: I have set it in smaller type for this reason. Here is the easy method. We have a coin with probability p of coming down heads, and we toss it n times and count the number X of heads. Then X is our Bin(n, p) random variable. Let Xk be the random variable defined by Xk = 1 if we get heads on the kth toss, 0 if we get tails on the kth toss.

In other words, Xk is the indicator variable of the event ‘heads on the kth toss’. Now we have X = X1 + X2 + · · · + Xn (can you see why?), and X1, . . . , Xn are independent Bernoulli(p) random variables (since they are defined by different tosses of a coin). So, as we saw earlier, E(Xk) = p, Var(Xk) = pq. Then, by Theorem 3.2, since the variables are independent, we have

E(X) = p + p + · · · + p = np,
Var(X) = pq + pq + · · · + pq = npq.
The other method uses a gadget called the probability generating function. We only use it here for calculating expected values and variances, but if you learn more probability theory you will see other uses for it.

Let X be a random variable whose values are non-negative integers. (We don’t insist that it takes all possible values; this method is fine for the binomial Bin(n, p), which takes values between 0 and n.) To save space, we write pk for the probability P(X = k). Now the probability generating function of X is the power series

GX(x) = ∑ pk x^k.

(The sum is over all values k taken by X.) We use the notation [F(x)]x=1 for the result of substituting x = 1 in the series F(x).

Proposition 3.4 Let GX(x) be the probability generating function of a random variable X. Then

(a) [GX(x)]x=1 = 1;
(b) E(X) = [d/dx GX(x)]x=1;
(c) Var(X) = [d²/dx² GX(x)]x=1 + E(X) − E(X)².

Part (a) is just the statement that probabilities add up to 1: when we substitute x = 1 in the power series for GX(x) we just get ∑ pk.

For part (b), when we differentiate the series term-by-term (you will learn later in Analysis that this is OK), we get

d/dx GX(x) = ∑ k pk x^(k−1).

Now putting x = 1 in this series we get

∑ k pk = E(X).

For part (c), differentiating twice gives

d²/dx² GX(x) = ∑ k(k − 1) pk x^(k−2).

Now putting x = 1 in this series we get

∑ k(k − 1) pk = ∑ k² pk − ∑ k pk = E(X²) − E(X).

Adding E(X) and subtracting E(X)² gives E(X²) − E(X)², which by definition is Var(X).

Now let us apply this to the binomial random variable X ∼ Bin(n, p). We have pk = P(X = k) = nCk q^(n−k) p^k, so the probability generating function is

GX(x) = ∑ nCk q^(n−k) p^k x^k = (q + px)^n,

summing over k = 0, 1, . . . , n, by the Binomial Theorem. Putting x = 1 gives (q + p)^n = 1, in agreement with Proposition 3.4(a). Differentiating once, using the Chain Rule, we get np(q + px)^(n−1). Putting x = 1 we find that E(X) = np. Differentiating again, we get n(n − 1)p²(q + px)^(n−2). Putting x = 1 gives n(n − 1)p². Now adding E(X) − E(X)², we get

Var(X) = n(n − 1)p² + np − n²p² = np − np² = npq.
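As a quick numerical check on these formulae (a sketch in Python, using the standard-library function math.comb; the choice of n and p is arbitrary), the mean and variance computed directly from the p.m.f. should agree with np and npq.

from math import comb

def binomial_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 15, 0.45
mean = sum(k * binomial_pmf(n, p, k) for k in range(n + 1))
var = sum(k**2 * binomial_pmf(n, p, k) for k in range(n + 1)) - mean**2
print(mean, n * p)              # both approximately 6.75
print(var, n * p * (1 - p))     # both approximately 3.7125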

The binomial random variable is tabulated in Table 1 of the Cambridge Statistical Tables [1]. As explained earlier, the tables give the cumulative distribution function. For example, suppose that the probability that a certain coin comes down heads is 0.45. If the coin is tossed 15 times, what is the probability of five or fewer heads? Turning to the page n = 15 in Table 1 and looking at the row p = 0.45, you read off the answer 0.2608. What is the probability of exactly five heads? This is P(5 or fewer) − P(4 or fewer), and from tables the answer is 0.2608 − 0.1204 = 0.1404.

The tables only go up to p = 0.5. For larger values of p, use the fact that the number of failures in Bin(n, p) is equal to the number of successes in Bin(n, 1 − p). So the probability of five heads in 15 tosses of a coin with p = 0.55 is 0.9745 − 0.9231 = 0.0514.

Another interpretation of the binomial random variable concerns sampling. Suppose that we have N balls in a box, of which M are red. We sample n balls from the box with replacement; let the random variable X be the number of red balls in the sample. What is the distribution of X? Since each ball has probability M/N of being red, and different choices are independent, X ∼ Bin(n, p), where p = M/N is the proportion of red balls in the box. What about sampling without replacement? This leads us to our next random variable:

Hypergeometric random variable Hg(n, M, N)
Suppose that we have N balls in a box, of which M are red. We sample n balls from the box without replacement. Let the random variable X be the number of

red balls in the sample. Such an X is called a hypergeometric random variable Hg(n, M, N).

The random variable X can take any of the values 0, 1, 2, . . . , n. Its p.m.f. is given by the formula

P(X = k) = (MCk · N−MCn−k) / NCn.

For the number of samples of n balls from N is NCn; the number of ways of choosing k of the M red balls and n − k of the N − M others is MCk · N−MCn−k; and all choices are equally likely.

The expected value and variance of a hypergeometric random variable are as follows (we won’t go into the proofs):

E(X) = n(M/N),    Var(X) = n(M/N)((N − M)/N)((N − n)/(N − 1)).

You should compare these to the values for a binomial random variable. If we let p = M/N be the proportion of red balls in the hat, then E(X) = np, and Var(X) is equal to npq multiplied by a ‘correction factor’ (N − n)/(N − 1). In particular, if the numbers M and N − M of red and non-red balls in the hat are both very large compared to the size n of the sample, then the difference between sampling with and without replacement is very small, and indeed the ‘correction factor’ is close to 1. So we can say that Hg(n, M, N) is approximately Bin(n, M/N) if n is small compared to M and N − M. Consider our example of choosing two pens from four, where two pens are red, one green, and one blue. The number X of red pens is a Hg(2, 2, 4) random variable. We calculated earlier that P(X = 0) = 1/6, P(X = 1) = 2/3 and P(X = 2) = 1/6. From this we find by direct calculation that E(X) = 1 and Var(X) = 1/3. These agree with the formulae above.
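The same kind of numerical check works for the hypergeometric formulae. A Python sketch for Hg(2, 2, 4), the pen example (names are my own):

from math import comb

def hypergeometric_pmf(n, M, N, k):
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

n, M, N = 2, 2, 4
pmf = {k: hypergeometric_pmf(n, M, N, k) for k in range(n + 1)}
mean = sum(k * p for k, p in pmf.items())
var = sum(k**2 * p for k, p in pmf.items()) - mean**2

print(pmf)        # approximately {0: 1/6, 1: 2/3, 2: 1/6}
print(mean, var)  # 1.0 and about 1/3, agreeing with the formulae above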

Geometric random variable Geom(p)
The geometric random variable is like the binomial but with a different stopping rule. We have again a coin whose probability of heads is p. Now, instead of tossing it a fixed number of times and counting the heads, we toss it until it comes down heads for the first time, and count the number of times we have tossed the coin. Thus, the values of the variable are the positive integers 1, 2, 3, . . . (In theory we might never get a head and toss the coin infinitely often, but if p > 0 this possibility is ‘infinitely unlikely’, i.e. has probability zero, as we will see.) We always assume that 0 < p < 1. More generally, the number of independent Bernoulli trials required until the first success is obtained is a geometric random variable.

The p.m.f. of a Geom(p) random variable is given by

P(X = k) = q^(k−1) p,

where q = 1 − p. For the event X = k means that we get tails on the first k − 1 tosses and heads on the kth, and this event has probability q^(k−1) p, since ‘tails’ has probability q and different tosses are independent. Let’s add up these probabilities:

∑ q^(k−1) p = p + qp + q²p + · · · = p/(1 − q) = 1,

summing over k = 1, 2, 3, . . ., since the series is a geometric progression with first term p and common ratio q, where q < 1. (Just as the binomial theorem shows that probabilities sum to 1 for a binomial random variable, and gives its name to the random variable, so the geometric progression does for the geometric random variable.)

We calculate the expected value and the variance using the probability generating function. If X ∼ Geom(p), the result will be that

E(X) = 1/p,    Var(X) = q/p².

We have

GX(x) = ∑ q^(k−1) p x^k = px/(1 − qx),

again by summing a geometric progression. Differentiating, we get

d/dx GX(x) = ((1 − qx)p + pxq)/(1 − qx)² = p/(1 − qx)².

Putting x = 1, we obtain

E(X) = p/(1 − q)² = 1/p.

Differentiating again gives 2pq/(1 − qx)³, so

Var(X) = 2pq/p³ + 1/p − 1/p² = q/p².
For example, if we toss a fair coin until heads is obtained, the expected number of tosses until the first head is 2 (so the expected number of tails is 1); and the variance of this number is also 2.
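A simulation makes the ‘toss until the first head’ description concrete; the sample mean and variance should be close to 1/p and q/p². A Python sketch (the function name is my own):

import random

def geometric_sample(p):
    # Toss a coin with P(heads) = p until the first head; return the number of tosses.
    tosses = 1
    while random.random() >= p:
        tosses += 1
    return tosses

p = 0.5
samples = [geometric_sample(p) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)   # both close to 2 when p = 1/2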

Poisson random variable Poisson(λ)
The Poisson random variable, unlike the ones we have seen before, is very closely connected with continuous things. Suppose that ‘incidents’ occur at random times, but at a steady rate overall. The best example is radioactive decay: atomic nuclei decay randomly, but the average number λ which will decay in a given interval is constant. The Poisson random variable X counts the number of ‘incidents’ which occur in a given interval. So if, on average, there are 2.4 nuclear decays per second, then the number of decays in one second starting now is a Poisson(2.4) random variable. Another example might be the number of telephone calls a minute to a busy telephone number.

Although we will not prove it, the p.m.f. for a Poisson(λ) variable X is given by the formula

P(X = k) = e^(−λ) λ^k / k!.

Let’s check that these probabilities add up to one. We get

(∑ λ^k / k!) e^(−λ) = e^λ · e^(−λ) = 1,

summing over k = 0, 1, 2, . . ., since the expression in brackets is the sum of the exponential series.

By analogy with what happened for the binomial and geometric random variables, you might have expected that this random variable would be called ‘exponential’. Unfortunately, this name has been given to a closely-related continuous random variable which we will meet later. However, if you speak a little French, you might use as a mnemonic the fact that if I go fishing, and the fish are biting at the rate of λ per hour on average, then the number of fish I will catch in the next hour is a Poisson(λ) random variable.

The expected value and variance of a Poisson(λ) random variable X are given by

E(X) = Var(X) = λ.

Again we use the probability generating function. If X ∼ Poisson(λ), then

GX(x) = ∑ ((λx)^k / k!) e^(−λ) = e^(λ(x−1)),

again using the series for the exponential function. Differentiation gives λ e^(λ(x−1)), so E(X) = λ. Differentiating again gives λ² e^(λ(x−1)), so Var(X) = λ² + λ − λ² = λ.

The cumulative distribution function of a Poisson random variable is tabulated in Table 2 of the New Cambridge Statistical Tables. So, for example, we find from the tables that, if 2.4 fish bite per hour on average, then the probability that I will catch no fish in the next hour is 0.0907, while the probability that I catch five or fewer is 0.9643 (so that the probability that I catch six or more is 0.0357).

There is another situation in which the Poisson distribution arises. Suppose I am looking for some very rare event which only occurs once in 1000 trials on average. So I conduct 1000 independent trials. How many occurrences of the event do I see? This number is really a binomial random variable Bin(1000, 1/1000). But it turns out to be Poisson(1), to a very good approximation. So, for example, the probability that the event doesn’t occur is about 1/e. The general rule is:

If n is large, p is small, and np = λ, then Bin(n, p) can be approximated by Poisson(λ).
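Both the table values quoted above and the approximation rule can be checked directly from the formula for the p.m.f. A Python sketch (my own, standard library only):

from math import exp, factorial, comb

def poisson_pmf(lam, k):
    return exp(-lam) * lam**k / factorial(k)

lam = 2.4
print(poisson_pmf(lam, 0))                          # about 0.0907
print(sum(poisson_pmf(lam, k) for k in range(6)))   # about 0.9643

# Poisson approximation to Bin(1000, 1/1000): compare P(X = 0).
n, p = 1000, 1/1000
print(comb(n, 0) * (1 - p)**n, exp(-1))             # both about 0.368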

3.6 Continuous random variables

We haven’t so far really explained what a continuous random variable is. Its target set is the set of real numbers, or perhaps the non-negative real numbers or just an interval. The crucial property is that, for any real number a, we have P(X = a) = 0; that is, the probability that the height of a random student, or the time I have to wait for a bus, is precisely a, is zero. So we can’t use the probability mass function for continuous random variables; it would always be zero and give no information. We use the cumulative distribution function or c.d.f. instead. Remember from last week that the c.d.f. of the random variable X is the function FX defined by

FX(x) = P(X ≤ x).

Note: The name of the function is FX; the lower case x refers to the argument of the function, the number which is substituted into the function. It is common but not universal to use as the argument the lower-case version of the name of the random variable, as here. Note that FX(y) is the same function written in terms of the variable y instead of x, whereas FY(x) is the c.d.f. of the random variable Y (which might be a completely different function).

Now let X be a continuous random variable. Then, since the probability that X takes the precise value x is zero, there is no difference between P(X ≤ x) and P(X < x).

Proposition 3.5 The c.d.f. is an increasing function (this means that FX(x) ≤ FX(y) if x < y), and approaches the limits 0 as x → −∞ and 1 as x → ∞.

The function is increasing because, if x < y, then

FX(y) − FX(x) = P(X ≤ y) − P(X ≤ x) = P(x < X ≤ y) ≥ 0.

Also FX(∞) = 1 because X must certainly take some finite value; and FX(−∞) = 0 because no value is smaller than −∞!

Another important function is the probability density function fX. It is obtained by differentiating the c.d.f.:

fX(x) = d/dx FX(x).

Now fX(x) is non-negative, since it is the derivative of an increasing function. If we know fX(x), then FX is obtained by integrating. Because FX(−∞) = 0, we have

FX(x) = ∫_{−∞}^{x} fX(t) dt.

Note the use of the “dummy variable” t in this integral. Note also that

P(a ≤ X ≤ b) = FX(b) − FX(a) = ∫_{a}^{b} fX(t) dt.

You can think of the p.d.f. like this: the probability that the value of X lies in a very small interval from x to x + h is approximately fX(x) · h. So, although the probability of getting exactly the value x is zero, the probability of being close to x is proportional to fX(x).

There is a mechanical analogy which you may find helpful. Remember that we modelled a discrete random variable X by placing at each value a of X a mass equal to P(X = a). Then the total mass is one, and the expected value of X is the centre of mass. For a continuous random variable, imagine instead a wire of variable thickness, so that the density of the wire (mass per unit length) at the point x is equal to fX(x). Then again the total mass is one; the mass to the left of x is FX(x); and again it will hold that the centre of mass is at E(X).

Most facts about continuous random variables are obtained by replacing the p.m.f. by the p.d.f. and replacing sums by integrals. Thus, the expected value of X is given by

E(X) = ∫_{−∞}^{∞} x fX(x) dx,

and the variance is (as before) Var(X) = E(X²) − E(X)², where

E(X²) = ∫_{−∞}^{∞} x² fX(x) dx.

It is also true that Var(X) = E((X − µ)²), where µ = E(X).
We will see examples of these calculations shortly. But here is a small example to show the ideas. The support of a continuous random variable is the smallest interval containing all values of x where fX(x) > 0. Suppose that the random variable X has p.d.f. given by

fX(x) = 2x if 0 ≤ x ≤ 1, and 0 otherwise.

The support of X is the interval [0, 1]. We check the integral:

∫_{−∞}^{∞} fX(x) dx = ∫_{0}^{1} 2x dx = [x²] from x = 0 to x = 1 = 1.

The cumulative distribution function of X is

FX(x) = ∫_{−∞}^{x} fX(t) dt = 0 if x < 0, x² if 0 ≤ x ≤ 1, and 1 if x > 1.

(Study this carefully to see how it works.) We have

E(X) = ∫_{−∞}^{∞} x fX(x) dx = ∫_{0}^{1} 2x² dx = 2/3,
E(X²) = ∫_{−∞}^{∞} x² fX(x) dx = ∫_{0}^{1} 2x³ dx = 1/2,
Var(X) = 1/2 − (2/3)² = 1/18.
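Even without doing the integrals by hand, the numbers 1, 2/3 and 1/18 can be checked by a crude numerical integration. A Python sketch using a simple Riemann sum (the step size is an arbitrary choice of mine):

def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0

h = 1e-5
xs = [i * h for i in range(int(1 / h) + 1)]

total = sum(f(x) * h for x in xs)
mean = sum(x * f(x) * h for x in xs)
second = sum(x * x * f(x) * h for x in xs)
print(total, mean, second - mean**2)   # approximately 1, 2/3 = 0.667, 1/18 = 0.056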

3.7 Median, quartiles, percentiles

Another measure commonly used for continuous random variables is the median; this is the value m such that “half of the distribution lies to the left of m and half to the right”. More formally, m should satisfy FX(m) = 1/2. It is not the same as the mean or expected value. In the example at the end of the last section, we saw that E(X) = 2/3. The median of X is the value of m for which FX(m) = 1/2. Since FX(x) = x² for 0 ≤ x ≤ 1, we see that m = 1/√2.

If there is a value m such that the graph of y = fX(x) is symmetric about x = m, then both the expected value and the median of X are equal to m.

The lower quartile l and the upper quartile u are similarly defined by

FX(l) = 1/4, FX(u) = 3/4.

Thus, the probability that X lies between l and u is 3/4 − 1/4 = 1/2, so the quartiles give an estimate of how spread-out the distribution is. More generally, we define the nth percentile of X to be the value of xn such that FX (xn ) = n/100,

that is, the probability that X is smaller than xn is n%.

Reminder If the c.d.f. of X is FX(x) and the p.d.f. is fX(x), then

• differentiate FX to get fX, and integrate fX to get FX;

• use fX to calculate E(X) and Var(X);

• use FX to calculate P(a ≤ X ≤ b) (this is FX(b) − FX(a)), and the median and percentiles of X.

3.8 Some continuous random variables
In this section we examine three important continuous random variables: the uniform, exponential, and normal. The details are summarised in Appendix B.

Uniform random variable U(a, b)
Let a and b be real numbers with a < b. A uniform random variable on the interval [a, b] is, roughly speaking, “equally likely to be anywhere in the interval”. In other words, its probability density function is constant on the interval [a, b] (and zero outside the interval). What should the constant value c be? The integral of the p.d.f. is the area of a rectangle of height c and base b − a; this must be 1, so c = 1/(b − a). Thus, the p.d.f. of the random variable X ∼ U(a, b) is given by fX (x) = 1/(b − a) if a ≤ x ≤ b, 0 otherwise. By integration, we find that the c.d.f. is FX (x) = 0 if x < a, (x − a)/(b − a) if a ≤ x ≤ b, 1 if x > b.

Further calculation (or the symmetry of the p.d.f.) shows that the expected value and the median of X are both given by (a + b)/2 (the midpoint of the interval), while Var(X) = (b − a)2 /12. The uniform random variable doesn’t really arise in practical situations. However, it is very useful for simulations. Most computer systems include a random number generator, which apparently produces independent values of a uniform random variable on the interval [0, 1]. Of course, they are not really random, since the computer is a deterministic machine; but there should be no obvious pattern to

the numbers produced, and in a large number of trials they should be distributed uniformly over the interval. You will learn in the Statistics course how to use a uniform random variable to construct values of other types of discrete or continuous random variables. Its great simplicity makes it the best choice for this purpose.
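The standard library’s random.random() plays exactly this role: it behaves like U(0, 1). A quick Python check of the mean and variance formulae for U(a, b), obtained by rescaling (a sketch of mine, with an arbitrary choice of a and b):

import random

a, b = 2.0, 5.0
samples = [a + (b - a) * random.random() for _ in range(100_000)]   # U(a, b) by rescaling U(0, 1)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, (a + b) / 2)            # both about 3.5
print(var, (b - a) ** 2 / 12)       # both about 0.75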

Exponential random variable Exp(λ)
The exponential random variable arises in the same situation as the Poisson: be careful not to confuse them! We have events which occur randomly but at a constant average rate of λ per unit time (e.g. radioactive decays, fish biting). The Poisson random variable, which is discrete, counts how many events will occur in the next unit of time. The exponential random variable, which is continuous, measures exactly how long from now it is until the next event occurs. Note that it takes non-negative real numbers as values. If X ∼ Exp(λ), the p.d.f. of X is

fX(x) = 0 if x < 0, and λe^(−λx) if x ≥ 0.

By integration, we find the c.d.f. to be

FX(x) = 0 if x < 0, and 1 − e^(−λx) if x ≥ 0.

Further calculation gives E(X) = 1/λ, Var(X) = 1/λ².

The median m satisfies 1 − e^(−λm) = 1/2, so that m = (log 2)/λ. (The logarithm is to base e, so that log 2 = 0.69314718056 approximately.)

Normal random variable N(µ, σ2 )
The normal random variable is the commonest of all in applications, and the most important. There is a theorem called the central limit theorem which says that, for virtually any random variable X which is not too bizarre, if you take the sum (or the average) of n independent random variables with the same distribution as X, the result will be approximately normal, and will become more and more like a normal variable as n grows. This partly explains why a random variable affected by many independent factors, like a man’s height, has an approximately normal distribution.

More precisely, if n is large, then a Bin(n, p) random variable is well approximated by a normal random variable with the same expected value np and the same variance npq. (If you are approximating any discrete random variable by a continuous one, you should make a “continuity correction” – see the next section for details and an example.) The p.d.f. of the random variable X ∼ N(µ, σ2 ) is given by the formula
fX(x) = (1/(σ√(2π))) e^(−(x−µ)²/(2σ²)).

We have E(X) = µ and Var(X) = σ2 . The picture below shows the graph of this function for µ = 0, the familiar ‘bell-shaped curve’.
[Figure: the graph of fX(x) for µ = 0, the familiar bell-shaped curve.]

The c.d.f. of X is obtained as usual by integrating the p.d.f. However, it is not possible to write the integral of this function (which, stripped of its constants, is e^(−x²)) in terms of ‘standard’ functions. So there is no alternative but to make tables of its values. The crucial fact that means that we don’t have to tabulate the function for all values of µ and σ is the following:

Proposition 3.6 If X ∼ N(µ, σ²), and Y = (X − µ)/σ, then Y ∼ N(0, 1).

So we only need tables of the c.d.f. for N(0, 1) – this is the so-called standard normal random variable – and we can find the c.d.f. of any normal random variable. The c.d.f. of the standard normal is given in Table 4 of the New Cambridge Statistical Tables [1]. The function is called Φ in the tables.

For example, suppose that X ∼ N(6, 25). What is the probability that X ≤ 8? Putting Y = (X − 6)/5, so that Y ∼ N(0, 1), we find that X ≤ 8 if and only if Y ≤ (8 − 6)/5 = 0.4. From the tables, the probability of this is Φ(0.4) = 0.6554.

The p.d.f. of a standard normal r.v. Y is symmetric about zero. This means that, for any positive number c,

Φ(−c) = P(Y ≤ −c) = P(Y ≥ c) = 1 − P(Y ≤ c) = 1 − Φ(c).

So it is only necessary to tabulate the function for positive values of its argument. So, if X ∼ N(6, 25) and Y = (X − 6)/5 as before, then

P(X ≤ 3) = P(Y ≤ −0.6) = 1 − P(Y ≤ 0.6) = 1 − 0.7257 = 0.2743.
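If you want to check the table values, Φ can be computed from the error function in Python’s math module, using Φ(x) = (1 + erf(x/√2))/2. A sketch (my own) reproducing the two calculations above:

from math import erf, sqrt

def Phi(x):
    # c.d.f. of the standard normal random variable
    return 0.5 * (1 + erf(x / sqrt(2)))

mu, sigma = 6, 5                    # X ~ N(6, 25)
print(Phi((8 - mu) / sigma))        # P(X <= 8) = Phi(0.4), about 0.6554
print(Phi((3 - mu) / sigma))        # P(X <= 3) = Phi(-0.6), about 0.2743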


3.9

On using tables

We end this section with a few comments about using tables, not tied particularly to the normal distribution (though most of the examples will come from there).

Interpolation
Any table is limited in the number of entries it contains. Tabulating something with the input given to one extra decimal place would make the table ten times as bulky! Interpolation can be used to extend the range of values tabulated. Suppose that some function F is tabulated with the input given to three places of decimals. It is probably true that F is changing at a roughly constant rate between, say, 0.28 and 0.29. So F(0.283) will be about three-tenths of the way between F(0.28) and F(0.29). For example, if Φ is the c.d.f. of the normal distribution, then Φ(0.28) = 0.6103 and Φ(0.29) = 0.6141, so Φ(0.283) = 0.6114. (Three-tenths of 0.0038 is 0.0011.)
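The same three-tenths rule can be written as a one-line helper. The following Python sketch (an illustrative addition, using the table values quoted above) reproduces the interpolated value of Φ(0.283).

```python
def interpolate(x, x0, x1, f0, f1):
    """Linear interpolation: assume f changes at a roughly constant rate between x0 and x1."""
    return f0 + (x - x0) / (x1 - x0) * (f1 - f0)

# Phi(0.28) = 0.6103 and Phi(0.29) = 0.6141 from the tables, so
print(interpolate(0.283, 0.28, 0.29, 0.6103, 0.6141))   # ≈ 0.6114
```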

Using tables in reverse
This means, if you have a table of values of F, use it to find x such that F(x) is a given value c. Usually, c won’t be in the table and we have to interpolate between values x1 and x2 , where F(x1 ) is just less than c and F(x2 ) is just greater. For example, if Φ is the c.d.f. of the normal distribution, and we want the upper quartile, then we find from tables Φ(0.67) = 0.7486 and Φ(0.68) = 0.7517, so the required value is about 0.6745 (since 0.0014/0.0031 = 0.45). In this case, the percentile points of the standard normal r.v. are given in Table 5 of the New Cambridge Statistical Tables [1], so you don’t need to do this. But you will find it necessary in other cases.
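The reverse look-up is the same interpolation read the other way round. A short sketch (again an illustrative addition, not the tables method itself) reproduces the upper-quartile calculation.

```python
def inverse_interpolate(c, x1, x2, f1, f2):
    """Find x with f(x) ≈ c, assuming f is roughly linear between x1 and x2."""
    return x1 + (c - f1) / (f2 - f1) * (x2 - x1)

# Phi(0.67) = 0.7486 and Phi(0.68) = 0.7517, so the upper quartile of N(0,1) is about
print(inverse_interpolate(0.75, 0.67, 0.68, 0.7486, 0.7517))   # ≈ 0.6745
```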

Continuity correction
Suppose we know that a discrete random variable X is well approximated by a continuous random variable Y. We are given a table of the c.d.f. of Y and want to find information about X. For example, suppose that X takes integer values and we want to find P(a ≤ X ≤ b), where a and b are integers. This probability is equal to P(X = a) + P(X = a + 1) + · · · + P(X = b). To say that X can be approximated by Y means that, for example, P(X = a) is approximately equal to fY(a), where fY is the p.d.f. of Y. This is equal to the area
of a rectangle of height fY (a) and base 1 (from a − 0.5 to a + 0.5). This in turn is, to a good approximation, the area under the curve y = fY (x) from x = a − 0.5 to x = a + 0.5, since the pieces of the curve above and below the rectangle on either side of x = a will approximately cancel. Similarly for the other values.
[Figure: the curve y = fY(x), with a rectangle of height fY(a) on the base from a − 0.5 to a + 0.5 whose area approximates P(X = a).]

Adding all these pieces, we find that P(a ≤ X ≤ b) is approximately equal to the area under the curve y = fY(x) from x = a − 0.5 to x = b + 0.5. This area is given by FY(b + 0.5) − FY(a − 0.5), since FY is the integral of fY. Said otherwise, this is P(a − 0.5 ≤ Y ≤ b + 0.5). We summarise the continuity correction:

Suppose that the discrete random variable X, taking integer values, is approximated by the continuous random variable Y. Then P(a ≤ X ≤ b) ≈ P(a − 0.5 ≤ Y ≤ b + 0.5) = FY(b + 0.5) − FY(a − 0.5). (Here, ≈ means “approximately equal”.) Similarly, for example, P(X ≤ b) ≈ P(Y ≤ b + 0.5), and P(X ≥ a) ≈ P(Y ≥ a − 0.5).

Example The probability that a light bulb will fail in a year is 0.75, and light bulbs fail independently. If 192 bulbs are installed, what is the probability that the number which fail in a year lies between 140 and 150 inclusive?

Solution Let X be the number of light bulbs which fail in a year. Then X ∼ Bin(192, 3/4), and so E(X) = 144, Var(X) = 36. So X is approximated by Y ∼ N(144, 36), and P(140 ≤ X ≤ 150) ≈ P(139.5 ≤ Y ≤ 150.5) by the continuity correction.

Let Z = (Y − 144)/6. Then Z ∼ N(0, 1), and

  P(139.5 ≤ Y ≤ 150.5) = P((139.5 − 144)/6 ≤ Z ≤ (150.5 − 144)/6)
                       = P(−0.75 ≤ Z ≤ 1.083)
                       = 0.8606 − 0.2268 (from tables)
                       = 0.6338.
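A Python sketch of this calculation is given below (an illustrative addition; the exact binomial sum is included only as a check and is not needed when working from the tables).

```python
import math

def phi(x):
    """Standard normal c.d.f."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

n, p = 192, 0.75
mu, sigma = n * p, math.sqrt(n * p * (1 - p))     # 144 and 6

# Normal approximation with the continuity correction
approx = phi((150.5 - mu) / sigma) - phi((139.5 - mu) / sigma)
print(approx)                                     # ≈ 0.634

# Exact binomial probability, for comparison
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(140, 151))
print(exact)                                      # close to the approximation above
```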


3.10

Worked examples

Question I roll a fair die twice. Let the random variable X be the maximum of the two numbers obtained, and let Y be the modulus of their difference (that is, the value of Y is the larger number minus the smaller number).

(a) Write down the joint p.m.f. of (X,Y).

(b) Write down the p.m.f. of X, and calculate its expected value and its variance.

(c) Write down the p.m.f. of Y, and calculate its expected value and its variance.

(d) Are the random variables X and Y independent?

Solution (a)

             Y
  X        0      1      2      3      4      5
  1      1/36     0      0      0      0      0
  2      1/36   2/36     0      0      0      0
  3      1/36   2/36   2/36     0      0      0
  4      1/36   2/36   2/36   2/36     0      0
  5      1/36   2/36   2/36   2/36   2/36     0
  6      1/36   2/36   2/36   2/36   2/36   2/36

The best way to produce this is to write out a 6 × 6 table giving all possible values for the two throws, work out for each cell what the values of X and Y are, and then count the number of occurrences of each pair. For example: X = 5, Y = 2 can occur in two ways: the numbers thrown must be (5, 3) or (3, 5).

(b) Take row sums:

  x            1      2      3      4      5      6
  P(X = x)   1/36   3/36   5/36   7/36   9/36   11/36

Hence in the usual way

  E(X) = 161/36,   Var(X) = 2555/1296.

(c) Take column sums:

  y            0      1      2      3      4      5
  P(Y = y)   6/36   10/36   8/36   6/36   4/36   2/36

and so

  E(Y) = 35/18,   Var(Y) = 665/324.

(d) No: e.g. P(X = 1, Y = 2) = 0 but P(X = 1) · P(Y = 2) = 8/1296.
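The 6 × 6 enumeration suggested above is easy to carry out mechanically. Here is a hedged Python sketch (an addition, not part of the notes) that builds the joint p.m.f. with exact fractions and checks the answers.

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely throws and build the joint p.m.f. of (X, Y)
joint = {}
for a, b in product(range(1, 7), repeat=2):
    x, y = max(a, b), abs(a - b)
    joint[(x, y)] = joint.get((x, y), Fraction(0)) + Fraction(1, 36)

px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in range(1, 7)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in range(0, 6)}

EX = sum(x * p for x, p in px.items())
VarX = sum(x * x * p for x, p in px.items()) - EX**2
EY = sum(y * p for y, p in py.items())
VarY = sum(y * y * p for y, p in py.items()) - EY**2

print(EX, VarX)   # 161/36  2555/1296
print(EY, VarY)   # 35/18   665/324
print(joint.get((1, 2), 0), px[1] * py[2])   # 0 vs 1/162 (= 8/1296): not independent
```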

Question An archer shoots an arrow at a target. The distance of the arrow from the centre of the target is a random variable X whose p.d.f. is given by

  fX(x) = (3 + 2x − x²)/9   if 0 ≤ x ≤ 3,
          0                 otherwise.

The archer’s score is determined as follows:

  Distance   X < 0.5   0.5 ≤ X < 1   1 ≤ X < 1.5   1.5 ≤ X < 2   X ≥ 2
  Score         10          7             4             1           0

Construct the probability mass function for the archer’s score, and find the archer’s expected score.

Solution First we work out the probability of the arrow being in each of the given bands:

  P(X < 0.5) = FX(0.5) − FX(0)
             = ∫ from 0 to 1/2 of (3 + 2x − x²)/9 dx
             = [(9x + 3x² − x³)/27] from 0 to 1/2
             = 41/216.

Similarly we find that P(0.5 ≤ X < 1) = 47/216, P(1 ≤ X < 1.5) = 47/216, P(1.5 ≤ X < 2) = 41/216, and P(X ≥ 2) = 40/216. So the p.m.f. for the archer’s score S is

  s            0       1       4       7       10
  P(S = s)   40/216  41/216  47/216  47/216  41/216

Hence

  E(S) = (41 + 47 · 4 + 47 · 7 + 41 · 10)/216 = 121/27.
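Since the antiderivative is a polynomial, the band probabilities can be checked with exact rational arithmetic. A hedged Python sketch (an addition for verification only) follows.

```python
from fractions import Fraction

def F(x):
    """c.d.f. of the distance on [0, 3]: integral of (3 + 2t - t^2)/9 from 0 to x."""
    x = Fraction(x)
    return (9 * x + 3 * x**2 - x**3) / 27

bands = [(0, Fraction(1, 2), 10), (Fraction(1, 2), 1, 7),
         (1, Fraction(3, 2), 4), (Fraction(3, 2), 2, 1), (2, 3, 0)]

pmf = {score: F(hi) - F(lo) for lo, hi, score in bands}
for s in (10, 7, 4, 1, 0):
    print(s, pmf[s])        # 41/216, 47/216, 47/216, 41/216, 5/27 (= 40/216)

print(sum(s * p for s, p in pmf.items()))   # expected score 121/27
```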


Question Let T be the lifetime in years of new bus engines. Suppose that T is continuous with probability density function

  fT(x) = 0       for x < 1,
          d/x³    for x > 1,

for some constant d.

(a) Find the value of d.

(b) Find the mean and median of T.

(c) Suppose that 240 new bus engines are installed at the same time, and that their lifetimes are independent. By making an appropriate approximation, find the probability that at most 10 of the engines last for 4 years or more.

Solution (a) The integral of fT(x), over the support of T, must be 1. That is,

  1 = ∫ from 1 to ∞ of d/x³ dx = [−d/(2x²)] from 1 to ∞ = d/2,

so d = 2.

(b) The c.d.f. of T is obtained by integrating the p.d.f.; that is, it is

  FT(x) = 0           for x < 1,
          1 − 1/x²    for x > 1.

The mean of T is

  ∫ from 1 to ∞ of x fT(x) dx = ∫ from 1 to ∞ of 2/x² dx = 2.

The median is the value m such that FT(m) = 1/2. That is, 1 − 1/m² = 1/2, or m = √2.

(c) The probability that an engine lasts for four years or more is

  1 − FT(4) = 1 − (1 − 1/4²) = 1/16.

So, if 240 engines are installed, the number which last for four years or more is a binomial random variable X ∼ Bin(240, 1/16), with expected value 240 × (1/16) = 15 and variance 240 × (1/16) × (15/16) = 225/16. We approximate X by Y ∼ N(15, (15/4)²). Using the continuity correction,

  P(X ≤ 10) ≈ P(Y ≤ 10.5).

Now, if Z = (Y − 15)/(15/4), then Z ∼ N(0, 1), and

  P(Y ≤ 10.5) = P(Z ≤ −1.2) = 1 − P(Z ≤ 1.2) = 0.1151

using the table of the standard normal distribution. Note that we start with the continuous random variable T, move to the discrete random variable X, and then move on to the continuous random variables Y and Z, where finally Z is standard normal and so is in the tables.

A true story The answer to the question at the end of the last chapter: As the students in the class obviously knew, the class included a pair of twins! (The twins were Leo and Willy Moser, who both had successful careers as mathematicians.) But what went wrong with our argument for the Birthday Paradox? We assumed (without saying so) that the birthdays of the people in the room were independent; but of course the birthdays of twins are clearly not independent!
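Since FT(x) = 1 − 1/x², a lifetime can be simulated by the inverse-transform method as T = 1/√U with U uniform on (0, 1]. The following Python sketch (a simulation check added here, not part of the worked solution) estimates both P(T ≥ 4) and the final probability.

```python
import math
import random

random.seed(1)   # arbitrary seed, for reproducibility

def engine_lifetime():
    """Inverse-transform sample from F_T(x) = 1 - 1/x^2 (x > 1): T = 1/sqrt(U)."""
    return 1 / math.sqrt(1 - random.random())   # 1 - random() lies in (0, 1]

trials = 50_000
# P(T >= 4) should be close to 1/16 = 0.0625
print(sum(engine_lifetime() >= 4 for _ in range(trials)) / trials)

# Estimate P(at most 10 of 240 engines last 4 years or more)
batches = 5_000
count = sum(
    sum(engine_lifetime() >= 4 for _ in range(240)) <= 10
    for _ in range(batches)
)
print(count / batches)   # should come out near the tabulated answer of about 0.115
```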

Chapter 4 More on joint distribution
We have seen the joint p.m.f. of two discrete random variables X and Y , and we have learned what it means for X and Y to be independent. Now we examine this further to see measures of non-independence and conditional distributions of random variables.

4.1

Covariance and correlation

In this section we consider a pair of discrete random variables X and Y. Remember that X and Y are independent if P(X = ai, Y = bj) = P(X = ai) · P(Y = bj) holds for any pair (ai, bj) of values of X and Y. We introduce a number (called the covariance of X and Y) which gives a measure of how far they are from being independent. Look back at the proof of Theorem 21(b), where we showed that if X and Y are independent then Var(X + Y) = Var(X) + Var(Y). We found that, in any case, Var(X + Y) = Var(X) + Var(Y) + 2(E(XY) − E(X)E(Y)), and then proved that if X and Y are independent then E(XY) = E(X)E(Y), so that the last term is zero. Now we define the covariance of X and Y to be E(XY) − E(X)E(Y). We write Cov(X,Y) for this quantity. Then the argument we had earlier shows the following:

Theorem 4.1 (a) Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X,Y).

(b) If X and Y are independent, then Cov(X,Y) = 0.


In fact, a more general version of (a), proved by the same argument, says that

  Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X,Y).    (4.1)

Another quantity closely related to covariance is the correlation coefficient, corr(X,Y), which is just a “normalised” version of the covariance. It is defined as follows:

  corr(X,Y) = Cov(X,Y) / √(Var(X) Var(Y)).

The point of this is the first part of the following theorem.

Theorem 4.2 Let X and Y be random variables. Then

(a) −1 ≤ corr(X,Y) ≤ 1;

(b) if X and Y are independent, then corr(X,Y) = 0;

(c) if Y = mX + c for some constants m ≠ 0 and c, then corr(X,Y) = 1 if m > 0, and corr(X,Y) = −1 if m < 0.

The proof of the first part is optional: see the end of this section. But note that this is another check on your calculations: if you calculate a correlation coefficient which is bigger than 1 or smaller than −1, then you have made a mistake. Part (b) follows immediately from part (b) of the preceding theorem.
For part (c), suppose that Y = mX + c. Let E(X) = µ and Var(X) = α, so that E(X²) = µ² + α. Now we just calculate everything in sight:

  E(Y) = E(mX + c) = mE(X) + c = mµ + c;
  E(Y²) = E(m²X² + 2mcX + c²) = m²(µ² + α) + 2mcµ + c²;
  Var(Y) = E(Y²) − E(Y)² = m²α;
  E(XY) = E(mX² + cX) = m(µ² + α) + cµ;
  Cov(X,Y) = E(XY) − E(X)E(Y) = mα;
  corr(X,Y) = Cov(X,Y)/√(Var(X) Var(Y)) = mα/√(m²α²) = +1 if m > 0, −1 if m < 0.

Thus the correlation coefficient is a measure of the extent to which the two variables are related. It is +1 if Y increases linearly with X; 0 if there is no relation between them; and −1 if Y decreases linearly as X increases. More generally, a positive correlation indicates a tendency for larger X values to be associated with larger Y values; a negative value, for smaller X values to be associated with larger Y values.


Example I have two red pens, one green pen, and one blue pen, and I choose two pens without replacement. Let X be the number of red pens that I choose and Y the number of green pens. Then the joint p.m.f. of X and Y is given by the following table:

         Y
  X      0      1
  0      0     1/6
  1     1/3    1/3
  2     1/6     0

From this we can calculate the marginal p.m.f. of X and of Y and hence find their expected values and variances:

  E(X) = 1,   Var(X) = 1/3,   E(Y) = 1/2,   Var(Y) = 1/4.

Also, E(XY) = 1/3, since the sum

  E(XY) = ∑ ai bj P(X = ai, Y = bj), summed over i and j,

contains only one term where all three factors are non-zero. Hence

  Cov(X,Y) = 1/3 − 1/2 = −1/6,

and

  corr(X,Y) = (−1/6)/√(1/12) = −1/√3.
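The same bookkeeping can be done directly from the joint p.m.f. table. Here is a hedged Python sketch (an addition for checking the arithmetic) using exact fractions.

```python
from fractions import Fraction
import math

# Joint p.m.f. of (X, Y) for the two-pens example, keyed by (x, y)
joint = {(0, 1): Fraction(1, 6), (1, 0): Fraction(1, 3),
         (1, 1): Fraction(1, 3), (2, 0): Fraction(1, 6)}

EX  = sum(x * p for (x, _), p in joint.items())
EY  = sum(y * p for (_, y), p in joint.items())
EXY = sum(x * y * p for (x, y), p in joint.items())
VarX = sum(x * x * p for (x, _), p in joint.items()) - EX**2
VarY = sum(y * y * p for (_, y), p in joint.items()) - EY**2

cov = EXY - EX * EY
corr = cov / math.sqrt(VarX * VarY)       # this last step leaves exact arithmetic
print(EX, EY, VarX, VarY, cov)            # 1  1/2  1/3  1/4  -1/6
print(corr)                               # ≈ -0.577 = -1/sqrt(3)
```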

The negative correlation means that small values of X tend to be associated with larger values of Y. Indeed, if X = 0 then Y must be 1, and if X = 2 then Y must be 0, but if X = 1 then Y can be either 0 or 1.

Example We have seen that if X and Y are independent then Cov(X,Y) = 0. However, it doesn’t work the other way around. Consider the following joint p.m.f.:

          Y
  X     −1      0      1
 −1     1/5     0     1/5
  0      0     1/5     0
  1     1/5     0     1/5


Now calculation shows that E(X) = E(Y ) = E(XY ) = 0, so Cov(X,Y ) = 0. But X and Y are not independent: for P(X = −1) = 2/5, P(Y = 0) = 1/5, but P(X = −1,Y = 0) = 0. We call two random variables X and Y uncorrelated if Cov(X,Y ) = 0 (in other words, if corr(X,Y ) = 0). So we can say: Independent random variables are uncorrelated, but uncorrelated random variables need not be independent.
Here is the proof that the correlation coefficient lies between −1 and 1. Clearly this is exactly equivalent to proving that its square is at most 1, that is, that Cov(X,Y )2 ≤ Var(X) · Var(Y ). This depends on the following fact: Let p, q, r be real numbers with p > 0. Suppose that px2 + 2qx + r ≥ 0 for all real numbers x. Then q2 ≤ pr. For, when we plot the graph y = px2 + 2qx + r, we get a parabola; the hypothesis means that this parabola never goes below the X-axis, so that either it lies entirely above the axis, or it touches it in one point. This means that the quadratic equation px2 + 2qx + r = 0 either has no real roots, or has two equal real roots. From high-school algebra, we know that this means that q2 ≤ pr. Now let p = Var(X), q = Cov(X,Y ), and r = Var(Y ). Equation (4.1) shows that px2 + 2qx + r = Var(xX +Y ). (Note that x is an arbitrary real number here and has no connection with the random variable X!) Since the variance of a random variable is never negative, we see that px2 + 2qx + r ≥ 0 for all choices of x. Now our argument above shows that q2 ≤ pr, that is, Cov(X,Y )2 ≤ Var(X) · Var(Y ), as required.

4.2 Conditional random variables
Remember that the conditional probability of event B given event A is P(B | A) = P(A ∩ B)/P(A). Suppose that X is a discrete random variable. Then the conditional probability that X takes a certain value ai, given A, is just

  P(X = ai | A) = P(A holds and X = ai)/P(A).

This defines the probability mass function of the conditional random variable X | A. So we can, for example, talk about the conditional expectation

  E(X | A) = ∑i ai P(X = ai | A).


Now the event A might itself be defined by a random variable; for example, A might be the event that Y takes the value bj. In this case, we have

  P(X = ai | Y = bj) = P(X = ai, Y = bj)/P(Y = bj).

In other words, we have taken the column of the joint p.m.f. table of X and Y corresponding to the value Y = bj. The sum of the entries in this column is just P(Y = bj), the marginal distribution of Y. We divide the entries in the column by this value to obtain a new distribution of X (whose probabilities add up to 1). In particular, we have

  E(X | Y = bj) = ∑i ai P(X = ai | Y = bj).

Example I have two red pens, one green pen, and one blue pen, and I choose two pens without replacement. Let X be the number of red pens that I choose and Y the number of green pens. Then the joint p.m.f. of X and Y is given by the following table:

         Y
  X      0      1
  0      0     1/6
  1     1/3    1/3
  2     1/6     0

In this case, the conditional distributions of X corresponding to the two values of Y are as follows:

  a                    0      1      2
  P(X = a | Y = 0)     0     2/3    1/3

  a                    0      1      2
  P(X = a | Y = 1)    1/3    2/3     0

We have

  E(X | Y = 0) = 4/3,   E(X | Y = 1) = 2/3.

If we know the conditional expectation of X for all values of Y, we can find the expected value of X:

Proposition 4.3 E(X) = ∑j E(X | Y = bj) P(Y = bj).

Proof:

  E(X) = ∑i ai P(X = ai)
       = ∑i ai ∑j P(X = ai | Y = bj) P(Y = bj)
       = ∑j ( ∑i ai P(X = ai | Y = bj) ) P(Y = bj)
       = ∑j E(X | Y = bj) P(Y = bj).

In the above example, we have

  E(X) = E(X | Y = 0)P(Y = 0) + E(X | Y = 1)P(Y = 1) = (4/3) × (1/2) + (2/3) × (1/2) = 1.

Example Let us revisit the geometric random variable and calculate its expected value. Recall the situation: I have a coin with probability p of showing heads; I toss it repeatedly until heads appears for the first time; X is the number of tosses. Let Y be the Bernoulli random variable whose value is 1 if the result of the first toss is heads, 0 if it is tails. If Y = 1, then we stop the experiment then and there; so if Y = 1, then necessarily X = 1, and we have E(X | Y = 1) = 1. On the other hand, if Y = 0, then the sequence of tosses from that point on has the same distribution as the original experiment; so E(X | Y = 0) = 1 + E(X) (the 1 counting the first toss). So E(X) = E(X | Y = 0)P(Y = 0) + E(X | Y = 1)P(Y = 1) = (1 + E(X)) · q + 1 · p = E(X)(1 − p) + 1; rearranging this equation, we find that E(X) = 1/p, confirming our earlier value. In Proposition 2.1, we saw that independence of events can be characterised in terms of conditional probabilities: A and B are independent if and only if they satisfy P(A | B) = P(A). A similar result holds for independence of random variables: Proposition 4.4 Let X and Y be discrete random variables. Then X and Y are independent if and only if, for any values ai and b j of X and Y respectively, we have P(X = ai | Y = b j ) = P(X = ai ). This is obtained by applying Proposition 15 to the events X = ai and Y = b j . It can be stated in the following way: X and Y are independent if the conditional p.m.f. of X | (Y = b j ) is equal to the p.m.f. of X, for any value b j of Y .
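The value E(X) = 1/p is easy to confirm by simulation. Here is a minimal Python sketch (an addition, using an arbitrary illustrative p = 0.3) that generates geometric random variables directly from repeated Bernoulli trials.

```python
import random

random.seed(2)       # arbitrary seed, for reproducibility
p = 0.3              # illustrative success probability

def geometric(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    tosses = 1
    while random.random() >= p:   # failure occurs with probability q = 1 - p
        tosses += 1
    return tosses

trials = 100_000
average = sum(geometric(p) for _ in range(trials)) / trials
print(average, 1 / p)    # the sample mean should be close to 1/p ≈ 3.33
```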


4.3

Joint distribution of continuous r.v.s

For continuous random variables, the covariance and correlation can be defined by the same formulae as in the discrete case; and Equation (4.1) remains valid. But we have to examine what is meant by independence for continuous random variables. The formalism here needs even more concepts from calculus than we have used before: functions of two variables, partial derivatives, double integrals. I assume that this is unfamiliar to you, so this section will be brief and can mostly be skipped.

Let X and Y be continuous random variables. The joint cumulative distribution function of X and Y is the function FX,Y of two real variables given by FX,Y(x, y) = P(X ≤ x, Y ≤ y). We define X and Y to be independent if P(X ≤ x, Y ≤ y) = P(X ≤ x) · P(Y ≤ y), for any x and y, that is, FX,Y(x, y) = FX(x) · FY(y). (Note that, just as in the one-variable case, X is part of the name of the function, while x is the argument of the function.) The joint probability density function of X and Y is

  fX,Y(x, y) = ∂²FX,Y(x, y)/∂x∂y.

In other words, differentiate with respect to x keeping y constant, and then differentiate with respect to y keeping x constant (or the other way round: the answer is the same for all functions we consider). The probability that the pair of values of (X,Y) corresponds to a point in some region of the plane is obtained by taking the double integral of fX,Y over that region. For example,

  P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫ from c to d ∫ from a to b fX,Y(x, y) dx dy

(the right-hand side means: integrate with respect to x between a and b keeping y fixed; the result is a function of y; integrate this function with respect to y from c to d). The marginal p.d.f. of X is given by

  fX(x) = ∫ from −∞ to ∞ fX,Y(x, y) dy,

and the marginal p.d.f. of Y is similarly

  fY(y) = ∫ from −∞ to ∞ fX,Y(x, y) dx.

Then the conditional p.d.f. of X | (Y = b) is

  fX|(Y=b)(x) = fX,Y(x, b)/fY(b).

The expected value of XY is, not surprisingly,

  E(XY) = ∫ from −∞ to ∞ ∫ from −∞ to ∞ xy fX,Y(x, y) dx dy,

and then as in the discrete case

  Cov(X,Y) = E(XY) − E(X)E(Y),   corr(X,Y) = Cov(X,Y)/√(Var(X) Var(Y)).

Finally, and importantly,

  The continuous random variables X and Y are independent if and only if fX,Y(x, y) = fX(x) · fY(y).

As usual this holds if and only if the conditional p.d.f. of X | (Y = b) is equal to the marginal p.d.f. of X, for any value b. Also, if X and Y are independent, then Cov(X,Y) = corr(X,Y) = 0 (but not conversely!).
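For readers with access to a computer algebra system, the double integrals above can be carried out symbolically. The sketch below (an illustrative addition assuming the sympy library is available; the joint density chosen, two independent exponentials, is my own example and not from the notes) computes marginals, checks the factorisation, and confirms that the covariance is zero.

```python
from sympy import symbols, integrate, exp, oo, simplify

x, y = symbols('x y', positive=True)

# Illustrative joint p.d.f. on x, y > 0: two independent exponentials, rates 2 and 3
f = 6 * exp(-2*x - 3*y)

f_X = integrate(f, (y, 0, oo))          # marginal of X: 2*exp(-2*x)
f_Y = integrate(f, (x, 0, oo))          # marginal of Y: 3*exp(-3*y)
print(f_X, f_Y)
print(simplify(f - f_X * f_Y))          # 0, so X and Y are independent

E_XY = integrate(x * y * f, (x, 0, oo), (y, 0, oo))
E_X = integrate(x * f_X, (x, 0, oo))
E_Y = integrate(y * f_Y, (y, 0, oo))
print(E_XY - E_X * E_Y)                 # covariance = 0, as independence requires
```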

4.4 Transformation of random variables
If a continuous random variable Y is a function of another r.v. X, we can find the distribution of Y in terms of that of X.

Example Let X and Y be random variables. Suppose that X ∼ U[0, 4] (uniform on [0, 4]) and Y = √X. What is the support of Y? Find the cumulative distribution function and the probability density function of Y.

Solution (a) The support of X is [0, 4], and Y = √X, so the support of Y is [0, 2].

(b) We have FX(x) = x/4 for 0 ≤ x ≤ 4. Now

  FY(y) = P(Y ≤ y) = P(X ≤ y²) = FX(y²) = y²/4


for 0 ≤ y ≤ 2; of course FY(y) = 0 for y < 0 and FY(y) = 1 for y > 2. (Note that Y ≤ y if and only if X ≤ y², since Y = √X.)

(c) We have

  fY(y) = (d/dy) FY(y) = y/2   if 0 ≤ y ≤ 2,
                         0     otherwise.

The argument in (b) is the key. If we know Y as a function of X, say Y = g(X), where g is an increasing function, then the event Y ≤ y is the same as the event X ≤ h(y), where h is the inverse function of g. This means that y = g(x) if and only if x = h(y). (In our example, g(x) = √x, and so h(y) = y².) Thus FY(y) = FX(h(y)), and so, by the Chain Rule,

  fY(y) = fX(h(y)) h′(y),

where h′ is the derivative of h. (This is because fX(x) is the derivative of FX(x) with respect to its argument x, and the Chain Rule says that if x = h(y) we must multiply by h′(y) to find the derivative with respect to y.) Applying this formula in our example we have

  fY(y) = (1/4) · 2y = y/2

for 0 ≤ y ≤ 2, since the p.d.f. of X is fX(x) = 1/4 for 0 ≤ x ≤ 4. Here is a formal statement of the result.

Theorem 4.5 Let X be a continuous random variable. Let g be a real function which is either strictly increasing or strictly decreasing on the support of X, and which is differentiable there. Let Y = g(X). Then

(a) the support of Y is the image of the support of X under g;

(b) the p.d.f. of Y is given by fY(y) = fX(h(y)) |h′(y)|, where h is the inverse function of g.

For example, here is the proof of Proposition 3.6: if X ∼ N(µ, σ²) and Y = (X − µ)/σ, then Y ∼ N(0, 1). Recall that
  fX(x) = (1/(σ√(2π))) e^(−(x−µ)²/(2σ²)).


We have Y = g(X), where g(x) = (x − µ)/σ; this function is everywhere strictly increasing (the graph is a straight line with slope 1/σ), and the inverse function is x = h(y) = σy + µ. Thus h′(y) = σ, and

  fY(y) = fX(σy + µ) · σ = (1/√(2π)) e^(−y²/2),

the p.d.f. of a standard normal variable. However, rather than remember this formula, together with the conditions for its validity, I recommend going back to the argument we used in the example. If the transforming function g is not monotonic (that is, not either increasing or decreasing), then life is a bit more complicated. For example, if X is a random variable taking both positive and negative values, and Y = X², then a given value y of Y could arise from either of the values √y and −√y of X, so we must work out the two contributions and add them up.

Example X ∼ N(0, 1) and Y = X². Find the p.d.f. of Y.

The p.d.f. of X is (1/√(2π)) e^(−x²/2). Let Φ(x) be its c.d.f., so that P(X ≤ x) = Φ(x), and

  Φ′(x) = (1/√(2π)) e^(−x²/2).

Now Y = X², so Y ≤ y if and only if −√y ≤ X ≤ √y. Thus

  FY(y) = P(Y ≤ y) = P(−√y ≤ X ≤ √y)
        = Φ(√y) − Φ(−√y)
        = Φ(√y) − (1 − Φ(√y))   (by symmetry of N(0, 1))
        = 2Φ(√y) − 1.

So

  fY(y) = (d/dy) FY(y)
        = 2Φ′(√y) · 1/(2√y)   (by the Chain Rule)
        = (1/√(2πy)) e^(−y/2).

Of course, this is valid for y > 0; for y < 0, the p.d.f. is zero.


Note the 2 in the line labelled “by the Chain Rule”. If you blindly applied the formula of Theorem 4.5, using h(y) = √y, you would not get this 2; it arises from the fact that, since Y = X², each value of Y corresponds to two values of X (one positive, one negative), and each value gives the same contribution, by the symmetry of the p.d.f. of X.
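A quick way to check the derived c.d.f. FY(y) = 2Φ(√y) − 1 is to simulate Y = X² and compare. The Python sketch below is an illustrative addition, not part of the notes.

```python
import math
import random

random.seed(3)   # arbitrary seed

def Phi(x):
    """Standard normal c.d.f."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

samples = [random.gauss(0, 1) ** 2 for _ in range(100_000)]   # Y = X^2 with X ~ N(0,1)

for y in (0.5, 1.0, 2.0):
    empirical = sum(s <= y for s in samples) / len(samples)
    derived = 2 * Phi(math.sqrt(y)) - 1
    print(y, round(empirical, 3), round(derived, 3))   # the two columns should agree closely
```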

4.5

Worked examples

Question Two numbers X and Y are chosen independently from the uniform distribution on the unit interval [0, 1]. Let Z be the maximum of the two numbers. Find the p.d.f. of Z, and hence find its expected value, variance and median.

Solution The c.d.f.s of X and Y are identical, that is,

  FX(x) = FY(x) = 0   if x < 0,
                  x   if 0 < x < 1,
                  1   if x > 1.

(The variable can be called x in both cases; its name doesn’t matter.) The key to the argument is to notice that

  Z = max(X,Y) ≤ x if and only if X ≤ x and Y ≤ x.

(For, if both X and Y are smaller than a given value x, then so is their maximum; but if at least one of them is greater than x, then again so is their maximum.) For 0 ≤ x ≤ 1, we have P(X ≤ x) = P(Y ≤ x) = x; by independence,

  P(X ≤ x and Y ≤ x) = x · x = x².

Thus P(Z ≤ x) = x². Of course this probability is 0 if x < 0 and is 1 if x > 1. So the c.d.f. of Z is

  FZ(x) = 0    if x < 0,
          x²   if 0 < x < 1,
          1    if x > 1.

The median of Z is the value of m such that FZ(m) = 1/2, that is m² = 1/2, or m = 1/√2. We obtain the p.d.f. of Z by differentiating:

  fZ(x) = 2x   if 0 < x < 1,
          0    otherwise.

Then we can find E(Z) and Var(Z) in the usual way:

  E(Z) = ∫ from 0 to 1 of 2x² dx = 2/3,

  Var(Z) = ∫ from 0 to 1 of 2x³ dx − (2/3)² = 1/18.
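These three answers are easy to confirm by simulation. The following Python sketch (an addition for checking, not part of the worked solution) draws the maximum of two independent uniforms many times.

```python
import random
import statistics

random.seed(4)   # arbitrary seed

zs = [max(random.random(), random.random()) for _ in range(100_000)]

print(statistics.mean(zs))       # ≈ 2/3
print(statistics.pvariance(zs))  # ≈ 1/18 ≈ 0.0556
print(statistics.median(zs))     # ≈ 1/sqrt(2) ≈ 0.707
```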


Question I roll a fair die bearing the numbers 1 to 6. If N is the number showing on the die, I then toss a fair coin N times. Let X be the number of heads I obtain.

(a) Write down the p.m.f. for X.

(b) Calculate E(X) without using this information.

Solution (a) If we were given that N = n, say, then X would be a binomial Bin(n, 1/2) random variable. So P(X = k | N = n) = nCk (1/2)^n. By the ToTP,

  P(X = k) = ∑ from n = 1 to 6 of P(X = k | N = n) P(N = n).

Clearly P(N = n) = 1/6 for n = 1, . . . , 6. So to find P(X = k), we add up the probability that X = k for a Bin(n, 1/2) r.v. for n = k, . . . , 6 and divide by 6. (We start at k because you can’t get k heads with fewer than k coin tosses!) The answer comes to

  k            0       1        2       3       4       5       6
  P(X = k)   63/384  120/384  99/384  64/384  29/384   8/384   1/384

For example,

  P(X = 4) = (4C4 (1/2)^4 + 5C4 (1/2)^5 + 6C4 (1/2)^6)/6 = (4 + 10 + 15)/384.

(b) By Proposition 4.3,

  E(X) = ∑ from n = 1 to 6 of E(X | N = n) P(N = n).

Now if we are given that N = n then, as we remarked, X has a binomial Bin(n, 1/2) distribution, with expected value n/2. So

  E(X) = ∑ from n = 1 to 6 of (n/2) · (1/6) = (1 + 2 + 3 + 4 + 5 + 6)/(2 · 6) = 7/4.

Try working it out from the p.m.f. to check that the answer is the same!
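Here is one way to carry out that check, as a Python sketch (an addition; it simply conditions on the die score with exact fractions).

```python
from fractions import Fraction
from math import comb

# p.m.f. of X: condition on the die score N = n (theorem of total probability)
pmf = {k: sum(Fraction(comb(n, k), 2**n) * Fraction(1, 6) for n in range(max(k, 1), 7))
       for k in range(0, 7)}

for k in range(7):
    print(k, pmf[k])
# prints the reduced forms of 63/384, 120/384, 99/384, 64/384, 29/384, 8/384, 1/384

print(sum(k * p for k, p in pmf.items()))   # 7/4, agreeing with part (b)
```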

Appendix A Mathematical notation
The Greek alphabet
  Name      Capital   Lowercase        Name      Capital   Lowercase
  alpha     A         α                nu        N         ν
  beta      B         β                xi        Ξ         ξ
  gamma     Γ         γ                omicron   O         o
  delta     ∆         δ                pi        Π         π
  epsilon   E         ε                rho       P         ρ
  zeta      Z         ζ                sigma     Σ         σ
  eta       H         η                tau       T         τ
  theta     Θ         θ                upsilon   ϒ         υ
  iota      I         ι                phi       Φ         φ
  kappa     K         κ                chi       X         χ
  lambda    Λ         λ                psi       Ψ         ψ
  mu        M         µ                omega     Ω         ω

Mathematicians use the Greek alphabet for an extra supply of symbols. Some, like π, have standard meanings. You don’t need to learn this; keep it for reference. Apologies to Greek students: you may not recognise this, but it is the Greek alphabet that mathematicians use! Pairs that are often confused are zeta and xi, or nu and upsilon, which look alike; and chi and xi, or epsilon and upsilon, which sound alike.


Numbers
  Notation                      Meaning                                        Example
  N                             Natural numbers                                1, 2, 3, . . . (some people include 0)
  Z                             Integers                                       . . . , −2, −1, 0, 1, 2, . . .
  R                             Real numbers                                   1/2, √2, π, . . .
  |x|                           modulus                                        |2| = 2, |−3| = 3
  a/b                           a over b                                       12/3 = 4, 2/4 = 0.5
  a | b                         a divides b                                    4 | 12
  mCn                           m choose n                                     5C2 = 10
  n!                            n factorial                                    5! = 120
  ∑ from i = a to b of xi       xa + xa+1 + · · · + xb                         ∑ from i = 1 to 3 of i² = 1² + 2² + 3² = 14
                                (see the section on Summation below)
  x ≈ y                         x is approximately equal to y

Sets
  Notation                      Meaning                                          Example
  {. . .}                       a set                                            {1, 2, 3}   NOTE: {1, 2} = {2, 1}
  x ∈ A                         x is an element of the set A                     2 ∈ {1, 2, 3}
  {x : . . .} or {x | . . .}    the set of all x such that . . .                 {x : x² = 4} = {−2, 2}
  |A|                           cardinality of A (number of elements in A)       |{1, 2, 3}| = 3
  A ∪ B                         A union B (elements in either A or B)            {1, 2, 3} ∪ {2, 4} = {1, 2, 3, 4}
  A ∩ B                         A intersection B (elements in both A and B)      {1, 2, 3} ∩ {2, 4} = {2}
  A \ B                         set difference (elements in A but not B)         {1, 2, 3} \ {2, 4} = {1, 3}
  A ⊆ B                         A is a subset of B (or equal)                    {1, 3} ⊆ {1, 2, 3}
  Ā                             complement of A (everything not in A)
  ∅                             empty set (no elements)                          {1, 2} ∩ {3, 4} = ∅
  (x, y)                        ordered pair                                     NOTE: (1, 2) ≠ (2, 1)
  A × B                         Cartesian product (set of all ordered pairs)     {1, 2} × {1, 3} = {(1, 1), (2, 1), (1, 3), (2, 3)}


Summation
What is it?
Let a1, a2, a3, . . . be numbers. The notation

  ∑ from i = 1 to n of ai

(read “sum, from i equals 1 to n, of ai”), means: add up the numbers a1, a2, . . . , an; that is,

  ∑ from i = 1 to n of ai = a1 + a2 + · · · + an.

The notation ∑ from j = 1 to n of aj means exactly the same thing. The variable i or j is called a “dummy variable”. The notation ∑ from i = 1 to m of ai is not the same, since (if m and n are different) it is telling us to add up a different number of terms.

The sum doesn’t have to start at 1. For example,

  ∑ from i = 10 to 20 of ai = a10 + a11 + · · · + a20.

Sometimes I get lazy and don’t bother to write out the values: I just say ∑i ai to mean: add up all the relevant values. For example, if X is a discrete random variable, then we say that

  E(X) = ∑i ai P(X = ai)

where the sum is over all i such that ai is a value of the random variable X.

Manipulation
The following three rules hold.

  ∑ from i = 1 to n of (ai + bi) = ∑ from i = 1 to n of ai + ∑ from i = 1 to n of bi.    (A.1)

Imagine the as and bs written out with a1 + b1 on the first line, a2 + b2 on the second line, and so on. The left-hand side says: add the two terms in each line, and then add up all the results. The right-hand side says: add the first column (all the as) and the second column (all the bs), and then add the results. The answers must be the same.

  ( ∑ from i = 1 to n of ai ) · ( ∑ from j = 1 to m of bj ) = ∑ from i = 1 to n of ∑ from j = 1 to m of ai bj.    (A.2)

The double sum says add up all these products, for all values of i and j. A simple example shows how it works:

  (a1 + a2)(b1 + b2) = a1 b1 + a1 b2 + a2 b1 + a2 b2.

If in place of numbers, we have functions of x, then we can “differentiate term-by-term”:

  (d/dx) ∑ from i = 1 to n of fi(x) = ∑ from i = 1 to n of (d/dx) fi(x).    (A.3)

The left-hand side says: add up the functions and differentiate the sum. The right says: differentiate each function and add up the derivatives. Another useful result is the Binomial Theorem:

  (x + y)^n = ∑ from k = 0 to n of nCk x^(n−k) y^k.

Infinite sums


Sometimes we meet infinite sums, which we write as ∑ from i = 1 to ∞ of ai, for example. This doesn’t just mean “add up infinitely many values”, since that is not possible. We need Analysis to give us a definition in general. But sometimes we know the answer another way: for example, if ai = ar^(i−1), where −1 < r < 1, then

  ∑ from i = 1 to ∞ of ai = a + ar + ar² + · · · = a/(1 − r),

using the formula for the sum of the “geometric series”. You also need to know the sum of the “exponential series”

  ∑ from i = 0 to ∞ of x^i/i! = 1 + x + x²/2 + x³/6 + x⁴/24 + · · · = e^x.

Do the three rules of the preceding section hold? Sometimes yes, sometimes no. In Analysis you will see some answers to this question. In all the examples you meet in this book, the rules will be valid.

Appendix B Probability and random variables
Notation
In the table, A and B are events, X and Y are random variables.

  Notation        Meaning                                                   Page
  P(A)            probability of A                                          3
  P(A | B)        conditional probability of A given B                      24
  X = Y           the values of X and Y are equal
  X ∼ Y           X and Y have the same distribution                        41
                  (that is, same p.m.f. or same p.d.f.)
  E(X)            expected value of X                                       41
  Var(X)          variance of X                                             42
  Cov(X,Y)        covariance of X and Y                                     67
  corr(X,Y)       correlation coefficient of X and Y                        68
  X | B           conditional random variable                               70
  X | (Y = b)                                                               71

Bernoulli random variable Bernoulli(p) (p. 48)
• Occurs when there is a single trial with a fixed probability p of success.
• Takes only the values 0 and 1.
• p.m.f. P(X = 0) = q, P(X = 1) = p, where q = 1 − p.
• E(X) = p, Var(X) = pq.


Binomial random variable Bin(n, p) (p. 49)
• Occurs when we are counting the number of successes in n independent trials with fixed probability p of success in each trial, e.g. the number of heads in n coin tosses. Also, sampling with replacement from a population with a proportion p of distinguished elements.
• The sum of n independent Bernoulli(p) random variables.
• Values 0, 1, 2, . . . , n.
• p.m.f. P(X = k) = nCk q^(n−k) p^k for 0 ≤ k ≤ n, where q = 1 − p.
• E(X) = np, Var(X) = npq.

Hypergeometric random variable Hg(n, M, N) (p. 51)
• Occurs when we are sampling n elements without replacement from a population of N elements of which M are distinguished.
• Values 0, 1, 2, . . . , n.
• p.m.f. P(X = k) = (MCk · N−MCn−k)/NCn.
• E(X) = n(M/N), Var(X) = n(M/N)((N − M)/N)((N − n)/(N − 1)).

• Approximately Bin(n, M/N) if n is small compared to N, M, N − M.

Geometric random variable Geom(p) (p. 52)
• Describes the number of trials up to and including the first success in a sequence of independent Bernoulli trials, e.g. number of tosses until the first head when tossing a coin.
• Values 1, 2, . . . (any positive integer).
• p.m.f. P(X = k) = q^(k−1) p, where q = 1 − p.
• E(X) = 1/p, Var(X) = q/p².


Poisson random variable Poisson(λ) (p. 54)
• Describes the number of occurrences of a random event in a fixed time interval, e.g. the number of fish caught in a day.
• Values 0, 1, 2, . . . (any non-negative integer).
• p.m.f. P(X = k) = e^(−λ) λ^k/k!.
• E(X) = λ, Var(X) = λ.
• If n is large, p is small, and np = λ, then Bin(n, p) is approximately equal to Poisson(λ) (in the sense that the p.m.f.s are approximately equal).

Uniform random variable U[a, b] (p. 58)
• Occurs when a number is chosen at random from the interval [a, b], with all values equally likely.
• p.d.f. f(x) = 0 if x < a, 1/(b − a) if a ≤ x ≤ b, 0 if x > b.
• c.d.f. F(x) = 0 if x < a, (x − a)/(b − a) if a ≤ x ≤ b, 1 if x > b.
• E(X) = (a + b)/2, Var(X) = (b − a)²/12.

Exponential random variable Exp(λ) (p. 59)
• Occurs in the same situations as the Poisson random variable, but measures the time from now until the first occurrence of the event.
• p.d.f. f(x) = 0 if x < 0, λ e^(−λx) if x ≥ 0.
• c.d.f. F(x) = 0 if x < 0, 1 − e^(−λx) if x ≥ 0.
• E(X) = 1/λ, Var(X) = 1/λ².
• However long you wait, the time until the next occurrence has the same distribution.


Normal random variable N(µ, σ2 ) (p. 59)
• The limit of the sum (or average) of many independent Bernoulli random variables. This also works for many other types of random variables: this statement is known as the Central Limit Theorem.
• p.d.f. f(x) = (1/(σ√(2π))) e^(−(x−µ)²/(2σ²)).

• No simple formula for c.d.f.; use tables.
• E(X) = µ, Var(X) = σ².
• For large n, Bin(n, p) is approximately N(np, npq).
• Standard normal N(0, 1) is given in the table. If X ∼ N(µ, σ²), then (X − µ)/σ ∼ N(0, 1).

The c.d.f.s of the Binomial, Poisson, and Standard Normal random variables are tabulated in the New Cambridge Statistical Tables, Tables 1, 2 and 4.
