diff --git a/index.html b/index.html new file mode 100644 index 00000000..a6ea1b7e --- /dev/null +++ b/index.html @@ -0,0 +1,420 @@ + + + + + + + + + + + + + +Resampling statistics + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + + +

There are two editions of this book; one with examples in the Python +programming language, and another with examples in the R language.

+
+

Python edition

+ +
+
+

R edition

+ +
+ + + + +
+ + + + + + + + + + + + + + + diff --git a/python-book/about_technology.html b/python-book/about_technology.html new file mode 100644 index 00000000..02e1668a --- /dev/null +++ b/python-book/about_technology.html @@ -0,0 +1,901 @@ + + + + + + + + + +Resampling statistics - 4  Introducing Python and the Jupyter notebook + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

4  Introducing Python and the Jupyter notebook

+
+ + + +
+ + + + +
+ + +
+ +

This chapter introduces you to the technology we will use throughout the book. By technology, we mean two things:

  • the Python programming language, along with various add-on packages for data analysis (see below);
  • the Jupyter notebook, an interface for writing and running Python code.

The chapter introduces Python and its libraries, and then gives an example to introduce Python and the Jupyter Notebook. If you have not used Python before, the example notebook will get you started. The example also shows how we will be using notebooks through the rest of the book.

+
+

4.1 Python and its packages

+

This version of the book uses the Python [^python-lang] programming language to implement resampling algorithms.

+

Python is a programming language that can be used for many tasks. It is a popular language for teaching, but is also used widely in industry and academia. It is one of the most widely used programming languages in the world, and the most popular language for data science.

+

For many of the initial examples, we will also be using the NumPy [^numpy] package for Python. A package is a library of Python code and data. NumPy is a package that makes it easier to work with sequences of data values, such as sequences of numbers. These are typical in probability and statistics.

+
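As a tiny illustration of the kind of thing NumPy makes easy (this snippet is our own, not one of the book's notebooks), here is a sequence of numbers stored in a NumPy array:

import numpy as np

# A sequence of numbers, stored in a NumPy array.
prices = np.array([10.50, 9.25, 7.75])

# Operations such as sums and means work on the whole sequence at once.
print(np.sum(prices))   # 27.5
print(np.mean(prices))  # approximately 9.17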

Later, we will be using the Matplotlib [^matplotlib] package. This is the main Python package with code for producing plots, such as bar charts, histograms, and scatter plots. See the rest of the book for more details on these plots.

+

Still further on in the book, we will use more specialized libraries for data manipulation and analysis. Pandas [^pandas] is the standard Python package for loading data files and working with data tables. SciPy [^scipy] is a package that houses a wide range of numerical routines, including some simple statistical methods. The Statsmodels [^statsmodels] package has code for many more statistical procedures. We will often find ourselves comparing the results of our own resampling algorithms to those in SciPy and Statsmodels.

+
+

It is very important that Python is a programming language and not a set of canned routines for “doing statistics”. It means that we can explore the ideas of probability and statistics using the language of Python to express those ideas. It also means that you, and we, and anyone else in the world, can write new code to share with others, so they can benefit from our work, understand it, and improve it. This book is one example; we have written the Python code in this book as clearly as we can to make it easy to follow, and to explain the underlying ideas. We hope you will help us by testing what we have done and sending us suggestions for ways we could improve. Please see the preface for more information about how to do that.

+
+

4.2 The environment

+

Many of the chapters have sections with code for you to run, and experiment with. These sections contain Jupyter notebooks. Jupyter notebooks are interactive web pages that allow you to read, write and run Python code. We mark the start of each notebook in the text with a note and link heading like the one you see below. In the web edition of this book, you can click on the Download link in this header to download the section as a notebook. You can also click on the Interact link in this header to open the notebook on a cloud computer. This allows you to interact with the notebook on the cloud computer. You can run the code, and experiment by making changes.

+

In the print version of the book, we point you to the web version, to get the links.

+

At the end of this chapter, we explain how to run these notebooks on your own computer. In the next section you will see an example notebook; you might want to run this in the cloud to get started.

+
+
+

4.3 Getting started with the notebook

+

The next section contains a notebook called “Billie’s Bill”. If you are looking at the web edition, you will see links to interact with this notebook in the cloud, or download it to your computer.

+
+

Start of billies_bill notebook

+ + +

The text in this notebook section assumes you have opened the page as an interactive notebook, on your own computer, or one of the Jupyter web interfaces.

+

A notebook can contain blocks of text — like this one — as well as code, and the results from running the code.

+

If you are in the notebook interface (rather than reading this in the textbook), you will see the Jupyter menu near the top of the page, with headings “File”, “Edit” and so on.

+
+

Underneath that, by default, you may see a row of icons - the “Toolbar”.

+

In the toolbar, you may see icons to run the current cell, among others.

+

To move from one cell to the next, you can click the run icon in the toolbar, but it is more efficient to press the Shift key, and press Enter (with Shift still held down). We will write this as Shift-Enter.

+
+

In this, our first notebook, we will be using Python to solve one of those difficult and troubling problems in life — working out the bill in a restaurant.

+
+

4.4 The meal in question

+

Alex and Billie are at a restaurant, getting ready to order. They do not have much money, so they are calculating the expected bill before they order.

+

Alex is thinking of having the fish for £10.50, and Billie is leaning towards the chicken, at £9.25. First they calculate their combined bill.

+

Below this text you see a code cell. It contains the Python code to calculate the total bill. Press Shift-Enter in the cell below, to see the total.

+
+
10.50 + 9.25
+
+
19.75
+
+
+

The contents of the cell above is Python code. As you would predict, Python understands numbers like 10.50, and it understands + between the numbers as an instruction to add the numbers.

+

When you press Shift-Enter, Python finds 10.50, realizes it is a number, and stores that number somewhere in memory. It does the same thing for 9.25, and then it runs the addition operation on these two numbers in memory, which gives the number 19.75.

+

Finally, Python sends the resulting number (19.75) back to the notebook for display. The notebook detects that Python sent back a value, and shows it to us.

+

This is exactly what a calculator would do.

+
+
+

4.5 Comments

+

Unlike a calculator, we can also put notes next to our calculations, to remind us what they are for. One way of doing this is to use a “comment”. You have already seen comments in the previous chapter.

+

A comment is some text that the computer will ignore. In Python, you can make a comment by starting a line with the # (hash) character. For example, the next cell is a code cell, but when you run it, it does not show any result. In this case, that is because the computer sees the # at the beginning of the line, and then ignores the rest.

+
+
# This bit of text is for me to read, and the computer to ignore.
+
+

Many of the code cells you see will have comments in them, to explain what the code is doing.

+

Practice writing comments for your own code. It is a very good habit to get into. You will find that experienced programmers write many comments on their code. They do not do this to show off, but because they have a lot of experience in reading code, and they know that comments make it much easier to read and understand code.

+
+
+

4.6 More calculations

+

Let us continue with the struggle that Alex and Billie are having with their bill.

+

They realize that they will also need to pay a tip.

+

They think it would be reasonable to leave a 15% tip. Now they need to multiply their total bill by 0.15, to get the tip. The bill is about £20, so they know that the tip will be about £3.

+

In Python * means multiplication. This is the equivalent of the “×” key on a calculator.

+

What about this, for the correct calculation?

+
+
# The tip - with a nasty mistake.
+10.50 + 9.25 * 0.15
+
+
11.8875
+
+
+

Oh dear, no, that isn’t doing the right calculation.

+

Python follows the normal rules of precedence with calculations. These rules tell us to do multiplication before addition.

+

See https://en.wikipedia.org/wiki/Order_of_operations for more detail on the standard rules.

+

In the case above the rules tell Python to first calculate 9.25 * 0.15 (to get 1.3875) and then to add the result to 10.50, giving 11.8875.

+

We need to tell Python we want it to do the addition and then the multiplication. We do this with round brackets (parentheses):

+
+
+
+ +
+
+ +
+
+
+

There are three types of brackets in Python.

+

These are:

+
    +
  • round brackets or parentheses: ();
  • +
  • square brackets: [];
  • +
  • curly brackets: {}.
  • +
+

Each type of bracket has a different meaning in Python. In the examples, pay close attention to the type of brackets we are using.

+
+
+
+
# The bill plus tip - mistake fixed.
+(10.50 + 9.25) * 0.15
+
+
2.9625
+
+
+

The obvious next step is to calculate the bill including the tip.

+
+
# The bill, including the tip
+10.50 + 9.25 + (10.50 + 9.25) * 0.15
+
+
22.7125
+
+
+

At this stage we start to feel that we are doing too much typing. Notice that we had to type out 10.50 + 9.25 twice there. That is a little boring, but it also makes it easier to make mistakes. The more we have to type, the greater the chance we have to make a mistake.

+

To make things simpler, we would like to be able to store the result of the calculation 10.50 + 9.25, and then re-use this value, to calculate the tip.

+

This is the role of variables. A variable is a value with a name.

+

Here is a variable:

+
+
# The cost of Alex's meal.
+a = 10.50
+
+

a is a name we give to the value 10.50. You can read the line above as “The variable a gets the value 10.50”. We can also talk of setting the variable. Here we are setting a to equal 10.50.

+

Now, when we use a in code, it refers to the value we gave it. For example, we can put a on a line on its own, and Python will show us the value of a:

+
+
# The value of a
+a
+
+
10.5
+
+
+

We did not have to use the name a — we can choose almost any name we like. For example, we could have chosen alex_meal instead:

+
+
# The cost of Alex's meal.
+# alex_meal gets the value 10.50
+alex_meal = 10.50
+
+

We often set variables like this, and then display the result, all in the same cell. We do this by first setting the variable, as above, and then, on the final line of the cell, we put the variable name on a line on its own, to ask Python to show us the value of the variable. Here we set billie_meal to have the value 9.25, and then show the value of billie_meal, all in the same cell.

+
+
# The cost of Billie's meal.
+billie_meal = 9.25
+# Show the value of billie_meal
+billie_meal
+
+
9.25
+
+
+

Of course, here, we did not learn much, but we often set variable values with the results of a calculation. For example:

+
+
# The cost of both meals, before tip.
+bill_before_tip = 10.50 + 9.25
+# Show the value of both meals.
+bill_before_tip
+
+
19.75
+
+
+

But wait — we can do better than typing in the calculation like this. We can use the values of our variables, instead of typing in the values again.

+
+
# The cost of both meals, before tip, using variables.
+bill_before_tip = alex_meal + billie_meal
+# Show the value of both meals.
+bill_before_tip
+
+
19.75
+
+
+

We make the calculation clearer by writing the calculation this way — we are calculating the bill before the tip by adding the cost of Alex’s and Billie’s meal — and that’s what the code looks like. But this also allows us to change the variable value, and recalculate. For example, say Alex decided to go for the hummus plate, at £7.75. Now we can tell Python that we want alex_meal to have the value 7.75 instead of 10.50:

+
+
# The new cost of Alex's meal.
+# alex_meal gets the value 7.75
+alex_meal = 7.75
+# Show the value of alex_meal
+alex_meal
+
+
7.75
+
+
+

Notice that alex_meal now has a new value. It was 10.50, but now it is 7.75. We have reset the value of alex_meal. In order to use the new value for alex_meal, we must recalculate the bill before tip with exactly the same code as before:

+
+
# The new cost of both meals, before tip.
+bill_before_tip = alex_meal + billie_meal
+# Show the value of both meals.
+bill_before_tip
+
+
17.0
+
+
+

Notice that, now that we have rerun this calculation, we have reset the value for bill_before_tip to the correct value corresponding to the new value for alex_meal.

+

All that remains is to recalculate the bill plus tip, using the new value for the variable:

+
+
# The cost of both meals, after tip.
+bill_after_tip = bill_before_tip + bill_before_tip * 0.15
+# Show the value of both meals, after tip.
+bill_after_tip
+
+
19.55
+
+
+

Now we are using variables with relevant names, the calculation looks right to our eye. The code expresses the calculation as we mean it: the bill after tip is equal to the bill before the tip, plus the bill before the tip times 0.15.

+
+
+

4.7 And so, on

+

Now that you have done some practice with the notebook and with variables, you are ready for a new problem in probability and statistics, in the next chapter.

+

End of billies_bill notebook

+
+
+
+
+

4.8 Running the code on your own computer

+

Many people, including your humble authors, like to be able to run code examples on their own computers. This section explains how you can set up to run the notebooks on your own computer.

+

Once you have done this setup, you can use the “Download” link at the top of each notebook section to save the notebook to your computer, and then open and run it there.

+
+

You will need to install the Python language on your computer, and then install the following packages:

+
    +
  • NumPy
  • +
  • Matplotlib - for plots
  • +
  • SciPy - a collection of modules for scientific computing;
  • +
  • Pandas - for loading, saving and manipulating data tables;
  • +
  • Statsmodels - for traditional statistical analysis.
  • +
  • Jupyter - to run the Jupyter Notebook on your own computer.
  • +
+

One easy way to install all these packages on Windows, Mac or Linux, is to use the Anaconda Python distribution [^anaconda_distro]. Anaconda provides a single installer that will install Python and all the packages above, by default.

+

Another method is to install Python from the Python website [^python-lang]. Then use the Pip [^pip] installer to install the packages you need.

+

To use Pip, start a terminal (on Windows, press the Start key and type “cmd”; on Mac, press Command-Space and type “Terminal”), and then, at the prompt, type:

+
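The exact command depends on your setup, but a single Pip invocation covering the packages listed above would look something like this (on some systems you may need pip3 or python -m pip instead of pip):

pip install numpy matplotlib scipy pandas statsmodels jupyter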

Now you should be able to start the Jupyter notebook application. See the Jupyter documentation for how to start Jupyter. Open the notebook you downloaded for the chapter; you will now be able to run the code on your own computer, and experiment by making changes.

+
+ + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/acknowlegements.html b/python-book/acknowlegements.html new file mode 100644 index 00000000..648460ce --- /dev/null +++ b/python-book/acknowlegements.html @@ -0,0 +1,628 @@ + + + + + + + + + +Resampling statistics - 33  Acknowledgements + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

33  Acknowledgements

+
+ + + +
+ + + + +
+ + +
+ +
+

33.1 For the second edition

+

Many people have helped in the long evolution of this work. First was the late Max Beberman, who in 1967 immediately recognized the potential of resampling statistics for high school students as well as for all others. Louis Guttman and Joseph Doob provided important encouragement about the theoretical and practical value of resampling statistics. Allen Holmes cooperated with me in teaching the first class at University High School in Urbana, Illinois, in 1967. Kenneth Travers found and supervised several PhD students — David Atkinson and Carolyn Shevokas outstanding among them — who experimented with resampling statistics in high school and college classrooms and proved its effectiveness; Travers also carried the message to many secondary school teachers in person and in his texts. In 1973 Dan Weidenfield efficiently wrote the first program for the mainframe (then called “Simple Stats”). Derek Kumar wrote the first interactive program for the Apple II. Chad McDaniel developed the IBM version, with touchup by Henry van Kuijk and Yoram Kochavi. Carlos Puig developed the powerful 1990 version of the program. William E. Kirwan, Robert Dorfman, and Rudolf Lamone have provided their good offices for us to harness the resources of the University of Maryland and, in particular, the College of Business and Management. Terry Oswald worked day and night with great dedication on the program and on commercial details to start the marketing of RESAMPLING STATS. In mid-1989, Peter Bruce assumed the overall stewardship of RESAMPLING STATS, and has been proceeding with energy, good judgment, and courage. He has contributed to this volume in many ways, always excellently (including the writing and re-writing of programs, as well as explanations of the bootstrap and of the interpretation of p-values). Vladimir Koliadin wrote the code for several of the problems in this edition, and Cheinan Marks programmed the Windows and Macintosh versions of Resampling Stats. Toni York handled the typesetting and desktop publishing through various iterations, Barbara Shaw provided expert proofreading and desktop publishing services for the second printing of the second edition, and Chris Brest produced many of the figures. Thanks to all of you, and to others who should be added to the list.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/bayes_simulation.html b/python-book/bayes_simulation.html new file mode 100644 index 00000000..519a49fc --- /dev/null +++ b/python-book/bayes_simulation.html @@ -0,0 +1,1346 @@ + + + + + + + + + +Resampling statistics - 31  Bayesian Analysis by Simulation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

31  Bayesian Analysis by Simulation

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

This branch of mathematics [probability] is the only one, I believe, in which good writers frequently get results entirely erroneous. (Peirce 1923, Doctrine of Chances, II)

+
+

Bayesian analysis is a way of thinking about problems in probability and statistics that can help one reach otherwise-difficult decisions. It also can sometimes be used in science. The range of its recommended uses is controversial, but this chapter deals only with those uses of Bayesian analysis that are uncontroversial.

+

Better than defining Bayesian analysis in formal terms is to demonstrate its use. We shall start with the simplest sort of problem, and proceed gradually from there.

+
+

31.1 Simple decision problems

+
+

31.1.1 Assessing the Likelihood That a Used Car Will Be Sound

+

Consider a problem in estimating the soundness of a used car one considers purchasing (after (Wonnacott and Wonnacott 1990, 93–94)). Seventy percent of the cars are known to be OK on average, and 30 percent are faulty. Of the cars that are really OK, a mechanic correctly identifies 80 percent as “OK” but says that 20 percent are “faulty”; of those that are faulty, the mechanic correctly identifies 90 percent as faulty and says (incorrectly) that 10 percent are OK.

+

We wish to know the probability that if the mechanic says a car is “OK,” it really is faulty. Phrased differently, what is the probability of a car being faulty if the mechanic said it was OK?

+

We can get the desired probabilities directly by simulation without knowing Bayes’ rule, as we shall see. But one must be able to model the physical problem correctly in order to proceed with the simulation; this requirement of a clearly visualized model is a strong point in favor of simulation.

+
    +
  1. Note that we are only interested in outcomes where the mechanic approved a car.

  2. +
  3. For each car, generate a label of either “faulty” or “working” with probabilities of 0.3 and 0.7, respectively.

  4. +
  5. For each faulty car, we generate one of two labels, “approved” or “not approved” with probabilities 0.1 and 0.9, respectively.

  6. +
  7. For each working car, we generate one of two labels, “approved” or “not approved” with probabilities 0.8 and 0.2, respectively.

  8. +
  9. Out of all cars “approved”, count how many are “faulty”. The ratio between these numbers is our answer.

  10. +
+

Here is the whole thing:

+
+
import numpy as np
+
+N = 10000  # number of cars
+
+# Counters for number of approved, number of approved and faulty
+approved = 0
+approved_and_faulty = 0
+
+for i in range(N):
+
+    # Decide whether the car is faulty or working, with a probability of
+    # 0.3 and 0.7 respectively
+    car = np.random.choice(['faulty', 'working'], p=[0.3, 0.7])
+
+    if car == 'faulty':
+        # What the mechanic says of a faulty car
+        mechanic_says = np.random.choice(['approved', 'not approved'], p=[0.1, 0.9])
+    else:
+        # What the mechanic says of a working car
+        mechanic_says = np.random.choice(['approved', 'not approved'], p=[0.8, 0.2])
+
+    if mechanic_says == 'approved':
+        approved += 1
+
+        if car == 'faulty':
+            approved_and_faulty += 1
+
+k = approved_and_faulty / approved
+
+print(f'{k * 100:.2}%')
+
+
5.7%
+
+
+

The answer looks to be somewhere between 5 and 6%. The code clearly follows the description step by step, but it is also quite slow. If we can improve the code, we may be able to do our simulation with more cars, and get a more accurate answer.

+

Let’s use arrays to store the states of all cars in the lot simultaneously:

+
+
N = 1000000  # number of cars; we made this number larger by a factor of 100
+
+# Generate an array with as many entries as there are cars, each
+# being either 'working' or 'faulty'
+cars = np.random.choice(['working', 'faulty'], p=[0.7, 0.3], size=N)
+
+# Count how many cars are working
+N_working = np.sum(cars == 'working')
+
+# All the rest are faulty
+N_faulty = N - N_working
+
+# Create a new array in which to store what a mechanic says
+# about the car: 'approved' or 'not approved'
+mechanic_says = np.empty_like(cars, dtype=object)
+
+# We start with the working cars; what does the mechanic say about them?
+# Generate 'approved' or 'not approved' labels with the given probabilities.
+mechanic_says[cars == 'working'] = np.random.choice(
+    ['approved', 'not approved'], p=[0.8, 0.2], size=N_working
+)
+
+# Similarly, for each faulty car, generate 'approved'/'not approved'
+# labels with the given probabilities.
+mechanic_says[cars == 'faulty'] = np.random.choice(
+    ['approved', 'not approved'], p=[0.1, 0.9], size=N_faulty
+)
+
+# Identify all cars that were approved
+# This produces a binary mask, an array that looks like:
+# [True, False, False, True, ... ]
+approved = (mechanic_says == 'approved')
+
+# Identify cars that are faulty AND were approved
+faulty_but_approved = (cars == 'faulty') & approved
+
+# Count the number of cars that are faulty but approved, as well as
+# the total number of cars that were approved
+N_faulty_but_approved = np.sum(faulty_but_approved)
+N_approved = np.sum(approved)
+
+# Calculate the ratio, which is the answer we seek
+k = N_faulty_but_approved / N_approved
+
+print(f'{k * 100:.2}%')
+
+
5.1%
+
+
+

The code now runs much faster, and with a larger number of cars we see that the answer is closer to a 5% chance of a car being broken after it has been approved by a mechanic.

+
+
+

31.1.2 Calculation without simulation

+

Simulation forces us to model our problem clearly and concretely in code. Such code is most often easier to reason about than opaque statistical methods. Running the simulation gives a good sense of what the correct answer should be. Thereafter, we can still look into different — sometimes more elegant or accurate — ways of modeling and solving the problem.

+

Let’s examine the following diagram of our car selection:

+

+

We see that there are two paths, highlighted, that result in a car being approved by a mechanic. Either a car can be working, and correctly identified as such by a mechanic; or the car can be broken, while the mechanic mistakenly determines it to be working. Our question only pertains to these two paths, so we do not need to study the rest of the tree.

+

In the long run, in our simulation, about 70% of the cars will end with the label “working”, and about 30% will end up with the label “faulty”. We just took 10000 sample cars above but, in fact, the larger the number of cars we take, the closer we will get to 70% “working” and 30% “faulty”. So, with many samples, we can think of 70% of these samples flowing down the “working” path, and 30% flowing along the “faulty” path.

+

Now, we want to know, of all the cars approved by a mechanic, how many are faulty:

+

\[ \frac{\mathrm{cars_{\mathrm{faulty}}}}{\mathrm{cars}_{\mathrm{approved}}} \]

+

We follow the two highlighted paths in the tree:

+
    +
  1. Of a large sample of cars, 30% are faulty. Of these, 10% are approved by a mechanic. That is, 30% * 10% = 3% of all cars.
  2. +
  3. Of all cars, 70% work. Of these, 80% are approved by a mechanic. That is, 70% * 80% = 56% of all cars.
  4. +
+

The percentage of faulty cars, out of approved cars, becomes:

+

\[ +3\% / (56\% + 3\%) = 5.08\% +\]

+

Notation-wise, it is a bit easier to calculate these sums using proportions rather than percentages:

+
    +
  1. Faulty cars approved by a mechanic: 0.3 * 0.1 = 0.03
  2. +
  3. Working cars approved by a mechanic: 0.7 * 0.8 = 0.56
  4. +
+

Fraction of faulty cars out of approved cars: 0.03 / (0.03 + 0.56) = 0.0508

+
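As a quick check, the same arithmetic can be done in a couple of lines of Python (a sketch of our own, not part of the notebooks above):

# Proportions from the two highlighted paths of the tree.
p_faulty_and_approved = 0.3 * 0.1   # faulty cars that the mechanic approves
p_working_and_approved = 0.7 * 0.8  # working cars that the mechanic approves

# Fraction of approved cars that are faulty.
p_faulty_given_approved = p_faulty_and_approved / (
    p_faulty_and_approved + p_working_and_approved)
print(p_faulty_given_approved)  # about 0.0508, i.e. 5.08%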

We see that every time the tree branches, it filters the cars: some go to one branch, the rest to another. In our code, we used the AND (&) operator to find the intersection between faulty AND approved cars, i.e., to filter out from all faulty cars only the cars that were ALSO approved.

+
+
+
+

31.2 Probability interpretation

+
+

31.2.1 Probability from proportion

+

In these examples, we often calculate proportions. In the given simulation:

+
    +
  • How many cars are approved by a mechanic? 59/100.
  • +
  • How many of those 59 were faulty? 3/59.
  • +
+

We often also count how commonly events occur: “it rained 4 out of the 10 days”.

+

An extension of this idea is to predict the probability of an event occurring, based on what we had seen in the past. We can say “out of 100 days, there was some rain on 20 of them; we therefore estimate that the probability of rain occurring is 20/100”. Of course, this is not a complex or very accurate weather model; for that, we’d need to take other factors—such as season—into consideration. Overall, the more observations we have, the better our probability estimates become. We discussed this idea previously in “The Law of Large Numbers”.

+ +
+

31.2.1.1 Ratios of proportions

+

At our mechanic’s yard, we can ask “how many red cars here are faulty”? To calculate that, we’d first count the number of red cars, then the number of those red cars that are also broken, then calculate the ratio: red_cars_faulty / red_cars.

+

We could just as well have worked in percentages: percentage_of_red_cars_broken / percentage_of_cars_that_are_red, since that is (red_cars_broken / 100) / (red_cars / 100)—the same ratio calculated before.

+

Our point is that the denominator doesn’t matter when calculating ratios, so we could just as well have written:

+

(red_cars_broken / all_cars) / (red_cars / all_cars)

+

or

+

\[ +P(\text{cars that are red and that are broken}) / P(\text{red cars}) +\]

+ +
+
+
+

31.2.2 Probability relationships: conditional probability

+

Here’s one way of writing the probability that a car is broken:

+

\[ +P(\text{car is broken}) +\]

+

We can shorten “car is broken” to B, and write the same thing as:

+

\[ +P(B) +\]

+

Similarly, we could write the probability that a car is red as:

+

\[ +P(R) +\]

+

We might also want to express the conditional probability, as in the probability that the car is broken, given that we already know that the car is red:

+

\[ +P(\text{car is broken GIVEN THAT car is red}) +\]

+

That is getting pretty verbose, so we will shorten this as we did above:

+

\[ +P(B \text{ GIVEN THAT } R) +\]

+

To make things even more compact, we write “GIVEN THAT” as a vertical bar | — so the whole thing becomes:

+

\[ +P(B | R) +\]

+

We read this as “the probability that the car is broken given that the car is red”. Such a probability is known as a conditional probability. We discuss these in more detail in Ch TKTK.

+ +

In our original problem, we ask what the chance is of a car being broken given that a mechanic approved it. As discussed under “Ratios of proportions”, it can be calculated with:

+

\[ +P(\text{car broken | mechanic approved}) += P(\text{car broken and mechanic approved}) / P(\text{mechanic approved}) +\]

+

We have already used \(B\) to mean “broken” (above), so let us use \(A\) to mean “mechanic approved”. Then we can write the statement above in a more compact way:

+

\[ +P(B | A) = P(B \text{ and } A) / P(A) +\]

+

To put this generally, conditional probabilities for two events \(X\) and \(Y\) can be written as:

+

\(P(X | Y) = P(X \text{ and } Y) / P(Y)\)

+

Where (again) \(\text{ and }\) means that both events occur.

+
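To make this relationship concrete in code, here is a small sketch (our own, with made-up events X and Y) that estimates \(P(X | Y)\) from boolean arrays, in the same spirit as the car simulation above:

import numpy as np

rng = np.random.default_rng()
n = 100_000

# Hypothetical events: Y happens half the time; when Y happens,
# X also happens with probability 0.4 (X never happens without Y here).
y = rng.random(n) < 0.5
x = y & (rng.random(n) < 0.4)

# P(X | Y) = P(X and Y) / P(Y), estimated from proportions.
p_x_given_y = np.mean(x & y) / np.mean(y)
print(p_x_given_y)  # close to 0.4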
+
+

31.2.3 Example: conditional probability

+

Let’s discuss a very relevant example. You get a COVID test, and the test is negative. Now, you would like to know what the chance is of you having COVID.

+

We have the following information:

+
    +
  • 1.5% of people in your area have COVID
  • +
  • The false positive rate of the tests (i.e., that they detect COVID when it is absent) is very low at 0.5%
  • +
  • The false negative rate (i.e., that they fail to detect COVID when it is present) is quite high at 40%
  • +
+

+

Again, we start with our simulation.

+
+
# The number of people
+N = 1000000
+
+# For each person, generate a True or False label,
+# indicating that they have / don't have COVID
+person_has_covid = np.random.choice(
+    [True, False], p=[0.015, 0.985],
+    size=N
+)
+
+# Calculate the numbers of people with and without COVID
+N_with_covid = np.sum(person_has_covid)
+N_without_covid = N - N_with_covid
+
+# In this array, we will store, for each person, whether they
+# had a positive or a negative test
+test_result = np.zeros_like(person_has_covid, dtype=bool)
+
+# Draw test results for people with COVID
+test_result[person_has_covid] = np.random.choice(
+    [True, False], p=[0.6, 0.4],
+    size=N_with_covid
+)
+
+# Draw test results for people without COVID
+test_result[~person_has_covid] = np.random.choice(
+    [True, False], p=[0.005, 0.995],
+    size=N_without_covid
+)
+
+# Get the COVID statuses of all those with negative tests
+# (`test_result` is a boolean mask, like `[True, False, False, True, ...]`,
+# and `~test_result` flips all boolean values to `[False, True, True, False, ...]`.
+covid_status_negative_test = person_has_covid[~test_result]
+
+# Now, count how many with COVID had a negative test results
+N_with_covid_and_negative_test = np.sum(covid_status_negative_test)
+
+# And how many people, overall, had negative test results
+N_with_negative_test = len(covid_status_negative_test)
+
+k = N_with_covid_and_negative_test / N_with_negative_test
+
+print(k)
+
+
0.0061110186992100815
+
+
+

This gives around 0.006 or 0.6%.

+

Now that we have a rough indication of what the answer should be, let’s try and calculate it directly, based on the tree of information shown earlier.

+

We will use these abbreviations:

+
    +
  • \(C^+\) means Covid positive (you do actually have Covid).
  • +
  • \(C^-\) means Covid negative (you do not actually have Covid).
  • +
  • \(T^+\) means the Covid test was positive.
  • +
  • \(T^-\) means the Covid test was negative.
  • +
+

For example \(P(C^+ | T^-)\) is the probability (\(P\)) that you do actually have Covid (\(C^+\)) given that (\(|\)) the test was negative (\(T^-\)).

+

We would like to know the probability of having COVID given that your test was negative (\(P(C^+ | T^-)\)). Using the conditional probability relationship from above, we can write:

+

\[ +P(C^+ | T^-) = P(C^+ \text{ and } T^-) / P(T^-) +\]

+

We see from the tree diagram that \(P(C^+ \text{ and } T^-) = P(T^- | C^+) * P(C^+) = .4 * .015 = 0.006\).

+ +

We observe that \(P(T^-) = P(T^- \text{ and } C^-) + P(T^- \text{ and } C^+)\), i.e. that we can obtain a negative test result through two paths, having COVID or not having COVID. We expand these further as conditional probabilities:

+

\(P(T^- \text{ and } C^-) = P(T^- | C^-) * P(C^-)\)

+

and

+

\(P(T^- \text{ and } C^+) = P(T^- | C^+) * P(C^+)\).

+

We can now calculate

+

\[ +P(T^-) = P(T^- | C^-) * P(C^-) + P(T^- | C^+) * P(C^+) +\]

+

\[ += .995 * .985 + .4 * .015 = 0.986 +\]

+

The answer, then, is:

+

\(P(C^+ | T^-) = 0.006 / 0.986 = 0.0061\) or 0.61%.

+
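For readers who want to check the arithmetic in code, here is a minimal sketch (our own) of the same calculation:

p_covid = 0.015                 # P(C+): prevalence
p_neg_given_covid = 0.4         # P(T- | C+): false negative rate
p_neg_given_no_covid = 0.995    # P(T- | C-): 1 minus the false positive rate

# P(T-) via the two paths to a negative test.
p_neg = p_neg_given_no_covid * (1 - p_covid) + p_neg_given_covid * p_covid

# P(C+ | T-) = P(C+ and T-) / P(T-)
p_covid_given_neg = (p_neg_given_covid * p_covid) / p_neg
print(p_covid_given_neg)  # about 0.0061, i.e. 0.61%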

This matches very closely our simulation result, so we have some confidence that we have done the calculation correctly.

+
+
+

31.2.4 Estimating Driving Risk for Insurance Purposes

+

Another sort of introductory problem, following after (Feller 1968, p 122):

+

A mutual insurance company charges its members according to the risk of having a car accident. It is known that there are two classes of people — 80 percent of the population with good driving judgment and with a probability of .06 of having an accident each year, and 20 percent with poor judgment and a probability of .6 of having an accident each year. The company’s policy is to charge $100 for each percent of risk, i.e., a driver with a probability of .6 should pay 60*$100 = $6000.

+

If nothing is known of a driver except that they had an accident last year, what fee should they pay?

+

Another way to phrase this question is: given that a driver had an accident last year, what is the probability of them having an accident overall?

+

We will proceed as follows:

+
    +
  1. Generate a population of N people. Label each as good driver or poor driver.
  2. +
  3. Simulate the last year for each person: did they have an accident or not?
  4. +
  5. Select only the ones that had an accident last year.
  6. +
  7. Among those, calculate their average risk of having an accident. This will indicate the appropriate insurance premium.
  8. +
+
+
N = 100000
+cost_per_percent = 100
+
+people = np.random.choice(
+    ['good driver', 'poor driver'], p=[0.8, 0.2],
+    size=N
+)
+
+good_driver = (people == 'good driver')
+poor_driver = ~good_driver
+
+# Did they have an accident last year?
+had_accident = np.zeros(N, dtype=bool)
+had_accident[good_driver] = np.random.choice(
+    [True, False], p=[0.06, 0.94],
+    size=np.sum(good_driver)
+)
+had_accident[poor_driver] = np.random.choice(
+    [True, False], p=[0.6, 0.4],
+    size=np.sum(poor_driver)
+)
+
+ppl_with_accidents = people[had_accident]
+N_good_driver_accidents = np.sum(ppl_with_accidents == 'good driver')
+N_poor_driver_accidents = np.sum(ppl_with_accidents == 'poor driver')
+N_all_with_accidents = N_good_driver_accidents + N_poor_driver_accidents
+
+avg_risk_percent = (N_good_driver_accidents * 0.06 +
+                    N_poor_driver_accidents * 0.6) / N_all_with_accidents * 100
+
+premium = avg_risk_percent * cost_per_percent
+
+print(f'{premium:.0f} USD')
+
+
4484 USD
+
+
+

The answer should be around 4450 USD.

+
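A direct calculation (a sketch of our own, not part of the notebook above) confirms this figure:

p_good, p_poor = 0.8, 0.2         # proportions of good and poor drivers
risk_good, risk_poor = 0.06, 0.6  # their yearly accident probabilities

# Probability that a driver who had an accident last year is good / poor.
p_accident = p_good * risk_good + p_poor * risk_poor
p_good_given_accident = p_good * risk_good / p_accident
p_poor_given_accident = 1 - p_good_given_accident

# Average risk for such a driver, in percent, and the resulting fee.
avg_risk_percent = (p_good_given_accident * risk_good +
                    p_poor_given_accident * risk_poor) * 100
print(avg_risk_percent * 100)  # about 4457 USD (at $100 per percent of risk)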
+
+

31.2.5 Screening for Disease

+ +

This is a classic Bayesian problem (quoted by Tversky and Kahneman (1982, 154), from Cascells et al. (1978, 999)):

+
+

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?

+
+

Tversky and Kahneman note that among the respondents — students and staff at Harvard Medical School — “the most common response, given by almost half of the participants, was 95%” — very much the wrong answer.

+

To obtain an answer by simulation, we may rephrase the question above with (hypothetical) absolute numbers as follows:

+

If a test to detect a disease whose prevalence has been estimated to be about 100,000 in the population of 100 million persons over age 40 (that is, about 1 in a thousand) has been observed to have a false positive rate of 60 in 1200 observations, and never gives a negative result if a person really has the disease, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?

+

(If the raw numbers are not available, the problem can be phrased in such terms as “about 1 case in 1000” and “about 5 false positives in 100 cases.”)

+

One may obtain an answer as follows:

+
    +
  1. Construct bucket A with 999 white beads and 1 black bead, and bucket B with 95 green beads and 5 red beads. A more complete problem that also discusses false negatives would need a third bucket.

  2. +
  3. Pick a bead from bucket A. If black, record “T,” replace the bead, and end the trial. If white, continue to step 3.

  4. +
  5. If a white bead is drawn from bucket A, select a bead from bucket B. If red, record “F” and replace the bead, and if green record “N” and replace the bead.

  6. +
  7. Repeat steps 2-4 perhaps 10,000 times, and in the results count the proportion of “T”s to (“T”s plus “F”s), ignoring the “N”s.

    +

    Of course 10,000 draws would be tedious, but even after a few hundred draws a person would be likely to draw the correct conclusion that the proportion of “T”s to (“T”s plus “F”s) would be small. And it is easy with a computer to do 10,000 trials very quickly.

    +

    Note that the respondents in the Cascells et al. study were not naive; the medical staff members were supposed to understand statistics. Yet most doctors and other personnel offered wrong answers. If simulation can do better than the standard deductive method, then simulation would seem to be the method of choice. And only one piece of training for simulation is required: Teach the habit of saying “I’ll simulate it” and then actually doing so.

  8. +
+
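Here is one way the bead-and-bucket procedure above might be sketched in Python (our own code, written to follow the steps just listed):

import numpy as np

rng = np.random.default_rng()
n_trials = 10_000

# Bucket A: 1 black bead (disease present) in 1000.
disease = rng.random(n_trials) < 1 / 1000

# Bucket B: 5 red beads (false positive) in 100, used for people without
# the disease.  The test never misses a real case, so every diseased
# person counts as a true positive ("T").
false_positive = rng.random(n_trials) < 5 / 100

n_T = np.sum(disease)                    # true positives
n_F = np.sum(~disease & false_positive)  # false positives

# Proportion of positive results that reflect real disease; the expected
# value is 0.001 / (0.001 + 0.999 * 0.05), roughly 2%.
print(n_T / (n_T + n_F))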
+
+
+

31.3 Fundamental problems in statistical practice

+

Box and Tiao (1992) begin their classic exposition of Bayesian statistics with the analysis of a famous problem first published by Fisher (1959, 18).

+
+

…there are mice of two colors, black and brown. The black mice are of two genetic kinds, homozygotes (BB) and heterozygotes (Bb), and the brown mice are of one kind (bb). It is known from established genetic theory that the probabilities associated with offspring from various matings are as listed in Table 31.1.

+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
 Table 31.1: Probabilities for Genetic Character of Mice Offspring (Box and Tiao 1992, 12–14)
                      BB (black)   Bb (black)   bb (brown)
 BB mated with bb         0            1            0
 Bb mated with bb         0            ½            ½
 Bb mated with Bb         ¼            ½            ¼
+
+

Suppose we have a “test” mouse which has been produced by a mating between two (Bb) mice and is black. What is the genetic kind of this mouse?

+

To answer that, we look at the information in the last line of the table: it shows that the probabilities that the test mouse is of kind BB or Bb are precisely known, and are 1/3 and 2/3 respectively ((1/4)/(1/4 + 1/2) vs (1/2)/(1/4 + 1/2)). We call this our “prior” estimate — in other words, our estimate before seeing data.

+

Suppose the test mouse is now mated with a brown mouse (of kind bb) and produces seven black offspring. Before, we thought that it was more likely for the parent to be of kind Bb than of kind BB. But if that were true, then we would have expected to have seen some brown offspring (the probability of mating Bb with bb resulting in brown offspring is given as 0.5). Therefore, we sense that it may now be more likely that the parent was of type BB instead. How do we quantify that?

+

One can calculate, as Fisher (1959, 19) did, the probabilities after seeing the data (we call this the posterior probability). This is typically done using Bayes’ rule.

+

But instead of doing that, let’s take the easy route out and simulate the situation instead.

+
    +
  1. We begin, as do Box and Tiao, by restricting our attention to the third line in Table 31.1. We draw a mouse with label ‘BB’, ‘Bb’, or ‘bb’, using those probabilities. We were told that the “test mouse” is black, so if we draw ‘bb’, we try again. (Alternatively, we could draw ‘BB’ and ‘Bb’ with probabilities of 1/3 and 2/3 respectively.)

  2. +
  3. We now want to examine the offspring of the test mouse when mated with a brown “bb” mouse. Specifically, we are only interested in cases where all offspring were black. We will store the genetic kind of the parents of such offspring so that we can count them later.

    +

    If our test mouse is “BB”, we already know that all their offspring will be black (“Bb”). Thus, store “BB” in the parent list.

  4. +
  5. If our test mouse is “Bb”, we have a bit more work to do. Draw seven offspring from the middle row of Table 31.1. If all the offspring are black, store “Bb” in the parent list.

  6. +
  7. Repeat steps 1-3 perhaps 10000 times.

  8. +
  9. Now, out of all parents count the numbers of “BB” vs “Bb”.

  10. +
+

We will do a naïve implementation that closely follows the logic described above, followed by a slightly optimized version.

+
+
N = 100000
+
+parents = []
+
+for i in range(N):
+    test_mouse = np.random.choice(['BB', 'Bb', 'bb'], p=[0.25, 0.5, 0.25])
+
+    # We were told the test mouse is black, so if we drew a brown mouse, skip this trial
+    if test_mouse == 'bb':
+        continue
+
+    # If the test mouse is 'BB', all 7 children are guaranteed to
+    # be 'Bb' black.
+    # Therefore, add 'BB' to the parent list.
+    if test_mouse == 'BB':
+        parents.append('BB')
+
+    # If the parent mouse is 'Bb', we draw 7 children to
+    # see whether all of them are black ('Bb').
+    # The probabilities come from the middle row of the table.
+    if test_mouse == 'Bb':
+      children = np.random.choice(['Bb', 'bb'], p=[0.5, 0.5], size=7)
+      if np.all(children == 'Bb'):
+          parents.append('Bb')
+
+# Now, count how many parents were 'BB' vs 'Bb'
+parents = np.array(parents)
+
+parents_BB = (parents == 'BB')
+parents_Bb = (parents == 'Bb')
+N_B = len(parents)
+
+p_BB = np.sum(parents_BB) / N_B
+p_Bb = np.sum(parents_Bb) / N_B
+
+print(f'p_BB = {p_BB:.3f}')
+
+
p_BB = 0.986
+
+
print(f'p_Bb = {p_Bb:.3f}')
+
+
p_Bb = 0.014
+
+
print(f'Ratio: {p_BB/p_Bb:.1f}')
+
+
Ratio: 69.4
+
+
+

We see that all the offspring being black considerably changes the situation! We started with the odds being 2:1 in favor of Bb vs BB. The “posterior” or “after the evidence” ratio is closer to 64:1 in favor of BB! (Box and Tiao 1992, 12–14)

+

Let’s tune the code a bit to run faster. Instead of doing the trials one mouse at a time, we will do the whole bunch together.

+
+
N = 1000000
+
+# In N trials, pair two Bb mice and generate a child
+test_mice = np.random.choice(['BB', 'Bb', 'bb'], p=[0.25, 0.5, 0.25], size=N)
+
+# The resulting test mouse is black, so filter out all brown ones
+test_mice = test_mice[test_mice != 'bb']
+M = len(test_mice)
+
+# Each test mouse will now be mated with a brown mouse, producing 7 offspring.
+# We then store whether all the offspring were black or not.
+all_offspring_black = np.zeros(M, dtype=bool)
+
+# If a test mouse is 'BB', we are assured that all its offspring
+# will be black
+all_offspring_black[test_mice == 'BB'] = True
+
+# If a test mouse is 'Bb', we have to generate its offspring and
+# see whether they are all black or not
+test_mice_Bb = (test_mice == 'Bb')
+N_test_mice_Bb = np.sum(test_mice_Bb)
+
+# Generate all offspring of all 'Bb' test mice
+offspring = np.random.choice(
+    ['Bb', 'bb'], p=[0.5, 0.5], size=(N_test_mice_Bb, 7)
+)
+all_offspring_black[test_mice_Bb] = np.all(offspring == 'Bb', axis=1)
+
+# Find the genetic types of the parents of all-black offspring
+parents = test_mice[all_offspring_black]
+
+# Calculate what fraction of parents were 'BB' vs 'Bb'
+parents_BB = (parents == 'BB')
+parents_Bb = (parents == 'Bb')
+N_B = np.sum(all_offspring_black)
+
+p_BB = np.sum(parents_BB) / N_B
+p_Bb = np.sum(parents_Bb) / N_B
+
+print(f'p_BB = {p_BB:.3f}')
+
+
p_BB = 0.985
+
+
print(f'p_Bb = {p_Bb:.3f}')
+
+
p_Bb = 0.015
+
+
print(f'Ratio: {p_BB/p_Bb:.1f}')
+
+
Ratio: 64.1
+
+
+

This yields a similar result, but in much shorter time — which means we can increase the number of trials and get a more accurate result.

+ +

Creating the correct simulation procedure is not trivial, because Bayesian reasoning is subtle — a reason it has been the cause of controversy for more than two centuries. But it certainly is not easier to create a correct procedure using analytic tools (except in the cookbook sense of plug-and-pray). And the difficult mathematics that underlie the analytic method (see e.g. Box and Tiao 1992, Appendix A1.1) make it almost impossible for the statistician to fully understand the procedure from beginning to end. If one is interested in insight, the simulation procedure might well be preferred.1

+
+
+

31.4 Problems based on normal and other distributions

+

This section should be skipped by all except advanced practitioners of statistics.

+

Much of the work in Bayesian analysis for scientific purposes treats the combining of prior distributions having Normal and other standard shapes with sample evidence which may also be represented with such standard functions. The mathematics involved often is formidable, though some of the calculational formulas are fairly simple and even intuitive.

+

These problems may be handled with simulation by replacing the Normal (or other) distribution with the original raw data when data are available, or by a set of discrete sub-universes when distributions are subjective.

+

Measured data from a continuous distribution present a special problem because the probability of any one observed value is very low, often approaching zero, and hence the probability of a given set of observed values usually cannot be estimated sensibly; this is the reason for the conventional practice of working with a continuous distribution itself, of course. But a simulation necessarily works with discrete values. A feasible procedure must bridge this gulf.

+

The logic for a problem of Schlaifer’s (1961, example 17.1) will only be sketched out. The procedure is rather novel, but it has not heretofore been published and therefore must be considered tentative and requiring particular scrutiny.

+
+

31.4.1 An Intermediate Problem in Conditional Probability

+

Schlaifer employs a quality-control problem for his leading example of Bayesian estimation with Normal sampling. A chemical manufacturer wants to estimate the amount of yield of a crucial ingredient X in a batch of raw material in order to decide whether it should receive special handling. The yield ranges between 2 and 3 pounds (per gallon), and the manufacturer has compiled the distribution of the last 100 batches.

+

The manufacturer currently uses the decision rule that if the mean of nine samples from the batch (which vary only because of measurement error, which is the reason that he takes nine samples rather than just one) indicates that the batch mean is greater than 2.5 gallons, the batch is accepted. The first question Schlaifer asks, as a sampling-theory waystation to the more general question, is the likelihood that a given batch with any given yield — say 2.3 gallons — will produce a set of samples with a mean as great or greater than 2.5 gallons.

+

We are told that the manufacturer has in hand nine samples from a given batch; they are 1.84, 1.75, 1.39, 1.65, 3.53, 1.03, 2.73, 2.86, and 1.96, with a mean of 2.08. Because we are also told that the manufacturer considers the extent of sample variation to be the same at all yield levels, we may — if we are again working with 2.3 as our example of a possible universe — therefore add (2.3 minus 2.08 =) 0.22 to each of these nine observations, so as to constitute a bootstrap-type universe; we do this on the grounds that this is our best guess about the constitution of that distribution with a mean at (say) 2.3.

+

We then repeatedly draw samples of nine observations from this distribution (centered at 2.3) to see how frequently its mean exceeds 2.5. This work is so straightforward that we need not even state the steps in the procedure.

+
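For completeness, here is a minimal sketch (our own) of that procedure: shift the nine observations so that their mean sits at 2.3, then repeatedly resample:

import numpy as np

rng = np.random.default_rng()

observations = np.array([1.84, 1.75, 1.39, 1.65, 3.53, 1.03, 2.73, 2.86, 1.96])

# Shift the sample so that its mean sits at the yield level we are testing.
universe = observations + (2.3 - np.mean(observations))

n_trials = 10_000
means = np.zeros(n_trials)
for i in range(n_trials):
    resample = rng.choice(universe, size=9, replace=True)
    means[i] = np.mean(resample)

# How often does the mean of nine samples reach the decision criterion of 2.5?
print(np.mean(means >= 2.5))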
+
+

31.4.2 Estimating the Posterior Distribution

+

Next we estimate the posterior distribution. Figure 31.1 shows the prior distribution of batch yields, based on 100 previous batches.

+
+
+
+
+

+
 Figure 31.1: Prior distribution of batch yields
+
+
+
+
+

Notation: \(S_m\) = set of batches (where total \(S\) = 100) with a particular mean \(m\) (say, \(m\) = 2.1). \(x_i\) = particular observation (say, \(x_3\) = 1.03). \(s\) = the set of \(x_i\).

+

We now perform for each of the \(S_m\) (categorized into the tenth-of-gallon divisions between 2.1 and 3.0 gallons), each corresponding to one of the yields ranging from 2.1 to 3.0, the same sort of sampling operation performed for \(S_{m=2.3}\) in the previous problem. But now, instead of using the manufacturer’s decision criterion of 2.5, we construct an interval of arbitrary width around the sample mean of 2.08 — say 0.1 gallons wide, from 2.03 to 2.13 — and then work with the weighted proportions of sample means that fall into this interval.

+
    +
  1. Using a bootstrap-like approach, we presume that the sub-universe of observations related to each \(S_m\) equals the mean of that \(S_m\) (say, 2.1) plus (minus) the mean of the \(x_i\) (equals 2.05) added to (subtracted from) each of the nine \(x_i\) — say, 1.03 + .05 = 1.08. For a distribution centered at 2.3, the values would be (1.84 + .22 = 2.06, 1.75 + .22 = 1.97…).
  2. +
  3. Working with the distribution centered at 2.3 as an example: Constitute a universe of the values (1.84+.22=2.06, 1.75 + .22 = 1.97…). Here we may notice that the variability in the sample enters into the analysis at this point, rather than when the sample evidence is combined with the prior distribution; this is in contrast to conventional Bayesian practice where the posterior is the result of the prior and sample means weighted by the reciprocals of the variances (see e.g. (Box and Tiao 1992, 17 and Appendix A1.1)).
  4. +
  5. Draw nine observations from this universe (with replacement, of course), compute the mean, and record.
  6. +
  7. Repeat step 2 perhaps 1000 times and plot the distribution of outcomes.
  8. +
  9. Compute the percentages of the means within (say) .05 on each side of the sample mean, i.e. from 2.03–2.13. The resulting number — call it \(UP_i\) — is the un-standardized (un-normalized) effect of this sub-distribution in the posterior distribution.
  10. +
  11. Repeat steps 1-5 to cover each other possible batch yield from 2.0 to 3.0 (2.3 was just done).
  12. +
  13. Weight each of these sub-distributions — actually, its \(UP_i\) — by its prior probability, and call that \(WP_i\).
  14. +
  15. Standardize the \(WP_i\)s to a total probability of 1.0. The result is the posterior distribution. The value found is 2.283, which the reader may wish to compare with a theoretically-obtained result (which Schlaifer does not give).
  16. +
+

This procedure must be biased because the numbers of “hits” will differ between the two sides of the mean for all sub-distributions except that one centered at the same point as the sample, but the extent and properties of this bias are as-yet unknown. The bias would seem to be smaller as the interval is smaller, but a small interval requires a large number of simulations; a satisfactorily narrow interval surely will contain relatively few trials, which is a practical problem of still-unknown dimensions.

+

Another procedure — less theoretically justified and probably more biased — intended to get around the problem of the narrowness of the interval, is as follows:

+
    +
  1. (5a.) Compute the percentages of the means on each side of the sample mean, and note the smaller of the two (or in another possible process, the difference of the two). The resulting number — call it \(UP_i\) — is the un-standardized (un-normalized) weight of this sub-distribution in the posterior distribution.
  2. +
+

Another possible criterion — a variation on the procedure in 5a — is the difference between the two tails; for a universe with the same mean as the sample, this difference would be zero.

+
+
+
+

31.5 Conclusion

+

All but the simplest problems in conditional probability are confusing to the intuition even if not difficult mathematically. But when one tackles Bayesian and other problems in probability with experimental simulation methods rather than with logic, neither simple nor complex problems need be difficult for experts or beginners.

+

This chapter shows how simulation can be a helpful and illuminating way to approach problems in Bayesian analysis.

+

Simulation has two valuable properties for Bayesian analysis:

+
    +
  1. It can provide an effective way to handle problems whose analytic solution may be difficult or impossible.
  2. +
  3. Simulation can provide insight to problems that otherwise are difficult to understand fully, as is peculiarly the case with Bayesian analysis.
  4. +
+

Bayesian problems of updating estimates can be handled easily and straightforwardly with simulation, whether the data are discrete or continuous. The process and the results tend to be intuitive and transparent. Simulation works best with the original raw data rather than with abstractions from them via percentages and distributions. This can aid the understanding as well as facilitate computation.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/confidence_1.html b/python-book/confidence_1.html new file mode 100644 index 00000000..20c39376 --- /dev/null +++ b/python-book/confidence_1.html @@ -0,0 +1,707 @@ + + + + + + + + + +Resampling statistics - 26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples

+
+ + + +
+ + + + +
+ + +
+ +
+

26.1 Introduction

+

This chapter discusses how to assess the accuracy of a point estimate of the mean, median, or other statistic of a sample. We want to know: How close is our estimate of (say) the sample mean likely to be to the population mean? The chapter begins with an intuitive discussion of the relationship between a) a statistic derived from sample data, and b) a parameter of a universe from which the sample is drawn. Then we discuss the actual construction of confidence intervals using two different approaches which produce the same numbers though they have different logic. The following chapter shows illustrations of these procedures.

+

The accuracy of an estimate is a hard intellectual nut to crack, so hard that for hundreds of years statisticians and scientists wrestled with the problem with little success; it was not until the last century or two that much progress was made. The kernel of the problem is learning the extent of the variation in the population. But whereas the sample mean can be used straightforwardly to estimate the population mean, the extent of variation in the sample does not directly estimate the extent of the variation in the population, because the variation differs at different places in the distribution, and there is no reason to expect it to be symmetrical around the estimate or the mean.

+

The intellectual difficulty of confidence intervals is one reason why they are less prominent in statistics literature and practice than are tests of hypotheses (though statisticians often favor confidence intervals). Another reason is that tests of hypotheses are more fundamental for pure science because they address the question that is at the heart of all knowledge-getting: “Should these groups be considered different or the same ?” The statistical inference represented by confidence limits addresses what seems to be a secondary question in most sciences (though not in astronomy or perhaps physics): “How reliable is the estimate?” Still, confidence intervals are very important in some applied sciences such as geology — estimating the variation in grades of ores, for example — and in some parts of business and industry.

+

Confidence intervals and hypothesis tests are not disjoint ideas. Indeed, hypothesis testing of a single sample against a benchmark value is (in all schools of thought, I believe) operationally identical with the most common way (Approach 1 below) of constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different for confidence limits and hypothesis tests.

+

The logic of confidence intervals is on shakier ground, in my judgment, than that of hypothesis testing, though there are many thoughtful and respected statisticians who argue that the logic of confidence intervals is better grounded and leads less often to error.

+

Confidence intervals are considered by many to be part of the same topic as estimation , being an estimation of accuracy, in their view. And confidence intervals and hypothesis testing are seen as sub-cases of each other by some people. Whatever the importance of these distinctions among these intellectual tasks in other contexts, they need not concern us here.

+
+
+

26.2 Estimating the accuracy of a sample mean

+

If one draws a sample that is very, very large — large enough so that one need not worry about sample size and dispersion in the case at hand — from a universe whose characteristics one knows , one then can deduce the probability that the sample mean will fall within a given distance of the population mean. Intuitively, it seems as if one should also be able to reverse the process — to infer something about the location of the population mean from the sample mean . But this inverse inference turns out to be a slippery business indeed.

+

Let’s put it differently: It is all very well to say — as one logically may — that on average the sample mean (or other point estimator) equals a population parameter in most situations.

+

But what about the result of any particular sample? How accurate or inaccurate an estimate of the population mean is the sample likely to produce?

+

Because the logic of confidence intervals is subtle, most statistics texts skim right past the conceptual difficulties, and go directly to computation. Indeed, the topic of confidence intervals has been so controversial that some eminent statisticians refuse to discuss it at all. And when the concept is combined with the conventional algebraic treatment, the composite is truly baffling; the formal mathematics makes impossible any intuitive understanding. For students, “pluginski” is the only viable option for passing exams.

+

With the resampling method, however, the estimation of confidence intervals is easy. The topic then is manageable though subtle and challenging — sometimes pleasurably so. Even beginning undergraduates can enjoy the subtlety and find that it feels good to stretch the brain and get down to fundamentals.

+

One thing is clear: Despite the subtlety of the topic, the accuracy of estimates must be dealt with, one way or another.

+

I hope the discussion below resolves much of the confusion of the topic.

+
+
+

26.3 The logic of confidence intervals

+

To preview the treatment of confidence intervals presented below: We do not learn about the reliability of sample estimates of the mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because this cannot be done in principle . Instead, we investigate the behavior of various universes in the neighborhood of the sample, universes whose characteristics are chosen on the basis of their similarity to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes that are implicitly suggested by the sample evidence but are not logically implied by that evidence.

+

The examples worked in the following chapter help explain why statistics is a difficult subject. The procedure required to transit successfully from the original question to a statistical probability, and then through a sensible interpretation of the probability, involves a great many choices about the appropriate model based on analysis of the problem at hand; a wrong choice at any point dooms the procedure. The actual computation of the probability — whether done with formulaic probability theory or with resampling simulation — is only a very small part of the procedure, and it is the least difficult part if one proceeds with resampling. The difficulties in the statistical process are not mathematical but rather stem from the hard clear thinking needed to understand the nature of the situation and to ascertain the appropriate way to model it.

+

Again, the purpose of a confidence interval is to help us assess the reliability of a statistic of the sample — for example, its mean or median — as an estimator of the parameter of the universe. The line of thought runs as follows: It is possible to map the distribution of the means (or other such parameter) of samples of any given size (the size of interest in any investigation usually being the size of the observed sample) and of any given pattern of dispersion (which we will assume for now can be estimated from the sample) that a universe in the neighborhood of the sample will produce. For example, we can compute how large an interval to the right and left of a postulated universe’s mean is required to include 45 percent of the samples on either side of the mean.
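For instance, the following sketch, ours and purely illustrative, simulates samples from a hypothetical Normal universe (mean 100, standard deviation 10, samples of size 25) and finds the interval around the universe mean that contains 45 percent of the sample means on each side.

import numpy as np

rnd = np.random.default_rng()

# Hypothetical universe, for illustration only.
universe_mean, universe_sd, sample_size = 100, 10, 25

n_trials = 10000
sample_means = np.zeros(n_trials)
for i in range(n_trials):
    sample = rnd.normal(universe_mean, universe_sd, size=sample_size)
    sample_means[i] = np.mean(sample)

# The 5th and 95th percentiles bound the central 90 percent of sample means,
# that is, 45 percent on each side of the universe mean.
lo, hi = np.percentile(sample_means, (5, 95))
print('Interval containing the middle 90 percent of sample means:', lo, hi)
print('Distance to the left and right of the universe mean:',
      universe_mean - lo, hi - universe_mean)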

+

What cannot be done is to draw conclusions from sample evidence about the nature of the universe from which it was drawn, in the absence of some information about the set of universes from which it might have been drawn. That is, one can investigate the behavior of one or more specified universes, and discover the absolute and relative probabilities that the given specified universe(s) might produce such a sample. But the universe(s) to be so investigated must be specified in advance (which is consistent with the Bayesian view of statistics). To put it differently, we can employ probability theory to learn the pattern(s) of results produced by samples drawn from a particular specified universe, and then compare that pattern to the observed sample. But we cannot infer the probability that that sample was drawn from any given universe in the absence of knowledge of the other possible sources of the sample. That is a subtle difference, I know, but I hope that the following discussion makes it understandable.

+
+
+

26.4 Computing confidence intervals

+

In the first part of the discussion we shall leave aside the issue of estimating the extent of the dispersion — a troublesome matter, but one which seldom will result in unsound conclusions even if handled crudely. To start from scratch again: The first — and seemingly straightforward — step is to estimate the mean of the population based on the sample data. The next and more complex step is to ask about the range of values (and their probabilities) that the estimate of the mean might take — that is, the construction of confidence intervals. It seems natural to assume that if our best guess about the population mean is the value of the sample mean, our best guesses about the various values that the population mean might take if unbiased sampling error causes discrepancies between population parameters and sample statistics, should be values clustering around the sample mean in a symmetrical fashion (assuming that asymmetry is not forced by the distribution — as for example, the binomial is close to symmetric near its middle values). But how far away from the sample mean might the population mean be?

+

Let’s walk slowly through the logic, going back to basics to enhance intuition. Let’s start with the familiar saying, “The apple doesn’t fall far from the tree.” Imagine that you are in a very hypothetical place where an apple tree is above you, and you are not allowed to look up at the tree, whose trunk has an infinitely thin diameter. You see an apple on the ground. You must now guess where the trunk (center) of the tree is. The obvious guess for the location of the trunk is right above the apple. But the trunk is not likely to be exactly above the apple because of the small probability of the trunk being at any particular location, due to sampling dispersion.

+

Though you find it easy to make a best guess about where the mean is (the true trunk), with the given information alone you have no way of making an estimate of the probability that the mean is one place or another, other than that the probability is the same that the tree is to the north or south, east or west, of you. You have no idea about how far the center of the tree is from you. You cannot even put a maximum on the distance it is from you, and without a maximum you could not even reasonably assume a rectangular distribution, or a Normal distribution, or any other.

+

Next you see two apples. What guesses do you make now? The midpoint between the two obviously is your best guess about the location of the center of the tree. But still there is no way to estimate the probability distribution of the location of the center of the tree.

+

Now assume you are given still another piece of information: The outermost spread of the tree’s branches (the range) equals the distance between the two apples you see. With this information, you could immediately locate the boundaries of the location of the center of the tree. But this is only because the answer you sought was given to you in disguised form.

+

You could, however, come up with some statements of relative probabilities. In the absence of prior information on where the tree might be, you would offer higher odds that the center (the trunk) is in any unit of area close to the center of your two apples than in a unit of area far from the center. That is, if you are told that either one apple, or two apples, came from one of two specified trees whose locations are given , with no reason to believe it is one tree or the other (later, we can put other prior probabilities on the two trees), and you are also told the dispersions, you now can put relative probabilities on one tree or the other being the source. (Note to the advanced student: This is like the Neyman-Pearson procedure, and it is easily reconciled with the Bayesian point of view to be explored later. One can also connect this concept of relative probability to the Fisherian concept of maximum likelihood — which is a probability relative to all others). And you could list from high to low the probabilities for each unit of area in the neighborhood of your apple sample. But this procedure is quite different from making any single absolute numerical probability estimate of the location of the mean.

+

Now let’s say you see 10 apples on the ground. Of course your best estimate is that the trunk of the tree is at their arithmetic center. But how close to the actual tree trunk (the population mean) is your estimate likely to be? This is the question involved in confidence intervals. We want to estimate a range (around the center, which we estimate with the mean of the sample, as we said) within which we are pretty sure that the trunk lies.

+

To simplify, we consider variation along only one dimension — that is, on (say) a north-south line rather than on two dimensions (the entire surface).

+

We first note that you have no reason to estimate the trunk’s location to be outside the sample pattern, or at its edge, though it could be so in principle.

+

If the pattern of the 10 apples is tight, you imagine the pattern of the likely locations of the population mean to be tight; if not, not. That is, it is intuitively clear that there is some connection between how spread out are the sample observations and your confidence about the location of the population mean . For example, consider two patterns of a thousand apples, one with twice the spread of another, where we measure spread by (say) the diameter of the circle that holds the inner half of the apples for each tree, or by the standard deviation. It makes sense that if the two patterns have the same center point (mean), you would put higher odds on the tree with the smaller spread being within some given distance — say, a foot — of the estimated mean. But what odds would you give on that bet?

+
+
+

26.5 Procedure for estimating confidence intervals

+

Here is a canonical list of questions that help organize one’s thinking when constructing confidence intervals. The list is comparable to the lists for questions in probability and for hypothesis testing provided in earlier chapters. This set of questions will be applied operationally in Chapter 27.

+

What Is The Question?

+

What is the purpose to be served by answering the question? Is this a “probability” or a “statistics” question?

+

If the Question Is a Statistical Inference Question:

+

What is the form of the statistics question?

+

Hypothesis test or confidence limits or other inference?

+

Assuming Question Is About Confidence Limits:

+

What is the description of the sample that has been observed?

+

Raw data?

+

Statistics of the sample?

+

Which universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess of the properties of the universe whose parameter you wish to make statements about? Finite or infinite? Bayesian possibilities?

+

Which parameter do you wish to make statements about?

+

Mean, median, standard deviation, range, interquartile range, other?

+

Which symbols for the observed entities?

+

Discrete or continuous?

+

What values or ranges of values?

+

If the universe is as guessed at, for which samples do you wish to estimate the variation? (Answer: samples the same size as has been observed)

+

Here one may continue with the conventional method, using perhaps a t or F or chi-square test or whatever. Everything up to now is the same whether continuing with resampling or with a standard parametric test.

+

What procedure to produce the original entities in the sample?

+

What universe will you draw them from? Random selection?

+

What size resample?

+

Simple (single step) or complex (multiple “if” drawings)?

+

What procedure to produce resamples?

+

With or without replacement? Number of drawings?

+

What to record as result of resample drawing?

+

Mean, median, or whatever of resample

+

Stating the Distribution of Results

+

Histogram, frequency distribution, other?

+

Choice Of Confidence Bounds

+

One or two-tailed?

+

90%, 95%, etc.?

+

Computation of Probabilities Within Chosen Bounds

+
+
+

26.6 Summary

+

This chapter discussed the theoretical basis for assessing the accuracy of population averages from sample data. The following chapter shows two very different approaches to confidence intervals, and provides examples of the computations.

diff --git a/python-book/confidence_2.html b/python-book/confidence_2.html
new file mode 100644
index 00000000..16cef9c8
--- /dev/null
+++ b/python-book/confidence_2.html

27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals

+
+ + + +
+ + + + +
+ + +
+ +

There are two broad conceptual approaches to the question at hand: 1) Study the probability of various distances between the sample mean and the likeliest population mean; and 2) study the behavior of particular border universes. Computationally, both approaches often yield the same result, but their interpretations differ. Approach 1 follows the conventional logic although carrying out the calculations with resampling simulation.

+
+

27.1 Approach 1: The distance between sample and population mean

+

If the study of probability can tell us the probability that a given population will produce a sample with a mean at a given distance x from the population mean, and if a sample is an unbiased estimator of the population, then it seems natural to turn the matter around and interpret the same sort of data as telling us the probability that the estimate of the population mean is that far from the “actual” population mean. A fly in the ointment is our lack of knowledge of the dispersion, but we can safely put that aside for now. (See below, however.)

+

This first approach begins by assuming that the universe that actually produced the sample has the same amount of dispersion (but not necessarily the same mean) that one would estimate from the sample. One then produces (either with resampling or with Normal distribution theory) the distribution of sample means that would occur with repeated sampling from that designated universe with samples the size of the observed sample. One can then compute the distance between the (assumed) population mean and (say) the inner 45 percent of sample means on each side of the actually observed sample mean.

+

The crucial step is to shift vantage points. We look from the sample to the universe, instead of from a hypothesized universe to simulated samples (as we have done so far). The same interval as computed above must also be the relevant distance when one looks from the sample to the universe. Putting this algebraically, we can state (on the basis of either simulation or formal calculation) that for any given population S, and for any given distance \(d\) from its mean \(\mu\), \(P(|\mu - \bar{x}| < d) = \alpha\), where \(\bar{x}\) is a randomly generated sample mean and \(\alpha\) is the probability resulting from the simulation or calculation.

+

The above equation focuses on the deviation of various sample means (\(\bar{x}\)) from a stated population mean (\(\mu\)). But we are logically entitled to read the algebra in another fashion, focusing on the deviation of \(\mu\) from a randomly generated sample mean. This implies that for any given randomly generated sample mean we observe, the same probability (\(\alpha\)) describes the probability that \(\mu\) will be at a distance \(d\) or less from the observed \(\bar{x}\). (I believe that this is the logic underlying the conventional view of confidence intervals, but I have yet to find a clear-cut statement of it; in any case, it appears to be logically correct.)

+

To repeat this difficult idea in slightly different words: If one draws a sample (large enough that one need not worry about sample size and dispersion), one can say in advance that there is a probability \(p\) that the sample mean (\(\bar{x}\)) will fall within \(z\) standard deviations of the population mean (\(\mu\)). One estimates the population dispersion from the sample. If there is a probability \(p\) that \(\bar{x}\) is within \(z\) standard deviations of \(\mu\), then with probability \(p\), \(\mu\) must be within that same \(z\) standard deviations of \(\bar{x}\). To repeat, this is, I believe, the heart of the standard concept of the confidence interval, to the extent that there is a thought-through consensus on the matter.
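A small simulation can check this statement in the frequency sense. The sketch below is ours; the universe parameters (Normal, mean 50, SD 12, samples of size 40) are arbitrary, and "standard deviations" here means standard deviations of the sampling distribution, estimated from each sample as the standard error.

import numpy as np

rnd = np.random.default_rng()

# Hypothetical universe, for illustration only.
mu, sigma, sample_size = 50, 12, 40
z = 1.96  # about 95 percent of a Normal distribution lies within 1.96 SDs

n_trials = 10000
covered = 0
for i in range(n_trials):
    sample = rnd.normal(mu, sigma, size=sample_size)
    x_bar = np.mean(sample)
    # Standard error of the mean, estimated from the sample itself.
    se = np.std(sample, ddof=1) / np.sqrt(sample_size)
    if abs(mu - x_bar) <= z * se:
        covered += 1

# The proportion of intervals x_bar +/- z * se that contain mu should be
# close to 95 percent, illustrating the "turned around" reading.
print('Proportion of samples with mu within z standard errors of x_bar:',
      covered / n_trials)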

+

So we can state for such populations the probability that the distance between the population and sample means will be \(d\) or less. Or with respect to a given distance, we can say that the probability that the population and sample means will be that close together is \(p\).

+

That is, we start by focusing on how much the sample mean diverges from the known population mean. But then — and to repeat once more this key conceptual step — we refocus our attention to begin with the sample mean and then discuss the probability that the population mean will be within a given distance. The resulting distance is what we call the “confidence interval.”

+

Please notice that the distribution (universe) assumed at the beginning of this approach did not include the assumption that the distribution is centered on the sample mean or anywhere else. It is true that the sample mean is used for purposes of reporting the location of the estimated universe mean . But despite how the subject is treated in the conventional approach, the estimated population mean is not part of the work of constructing confidence intervals. Rather, the calculations apply in the same way to all universes in the neighborhood of the sample (which are assumed, for the purpose of the work, to have the same dispersion). And indeed, it must be so, because the probability that the universe from which the sample was drawn is centered exactly at the sample mean is very small.

+

This independence of the confidence-intervals construction from the mean of the sample (and the mean of the estimated universe) is surprising at first, but after a bit of thought it makes sense.

+

In this first approach, as noted more generally above, we do not make estimates of the confidence intervals on the basis of any logical inference from any one particular sample to any one particular universe, because this cannot be done in principle ; it is the futile search for this connection that for decades roiled the brains of so many statisticians and now continues to trouble the minds of so many students. Instead, we investigate the behavior of (in this first approach) the universe that has a higher probability of producing the observed sample than does any other universe (in the absence of any additional evidence to the contrary), and whose characteristics are chosen on the basis of its resemblance to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes, the universe(s) being implicitly suggested by the sample evidence but not logically implied by that evidence. And there are no grounds for dispute about exactly what is being done — only about how to interpret the results.

+

One difficulty with the above approach is that the estimate of the population dispersion does not rest on sound foundations; this matter will be discussed later, but it is not likely to lead to a seriously misleading conclusion.

+

A second difficulty with this approach is in interpreting the result. What is the justification for focusing our attention on a universe centered on the sample mean? While this particular universe may be more likely than any other, it undoubtedly has a low probability. And indeed, the statement of the confidence intervals refers to the probabilities that the sample has come from universes other than the universe centered at the sample mean, and quite a distance from it.

+

My answer to this question does not rest on a set of meaningful mathematical axioms, and I assert that a meaningful axiomatic answer is impossible in principle. Rather, I reason that we should consider the behavior of this universe because other universes near it will produce much the same results, differing only in dispersion from this one, and this difference is not likely to be crucial; this last assumption is all-important, of course. True, we do not know what the dispersion might be for the “true” universe. But elsewhere (Simon, forthcoming) I argue that the concept of the “true universe” is not helpful — or maybe even worse than nothing — and should be forsworn. And we can postulate a dispersion for any other universe we choose to investigate. That is, for this postulation we unabashedly bring in any other knowledge we may have. The defense for such an almost-arbitrary move would be that this is a second-order matter relative to the location of the estimated universe mean, and therefore it is not likely to lead to serious error. (This sort of approximative guessing sticks in the throats of many trained mathematicians, of course, who want to feel an unbroken logic leading backwards into the mists of axiom formation. But the axioms themselves inevitably are chosen arbitrarily just as there is arbitrariness in the practice at hand, though the choice process for axioms is less obvious and more hallowed by having been done by the masterminds of the past. (See Simon (1998), on the necessity for judgment.) The absence of a sequence of equations leading from some first principles to the procedure described in the paragraph above is evidence of what is felt to be missing by those who crave logical justification. The key equation in this approach is formally unassailable, but it seems to come from nowhere.)

+

In the examples in the following chapter may be found computations for two population distributions — one binomial and one quantitative — of the histograms of the sample means produced with this procedure.

+

Operationally, we use the observed sample mean, together with an estimate of the dispersion from the sample, to estimate a mean and dispersion for the population. Then with reference to the sample mean we state a combination of a distance (on each side) and a probability pertaining to the population mean. The computational examples will illustrate this procedure.

+

Once we have obtained a numerical answer, we must decide how to interpret it. There is a natural and almost irresistible tendency to talk about the probability that the mean of the universe lies within the intervals, but this has proven confusing and controversial. Interpretation in terms of a repeated process is not very satisfying intuitively.1

+

In my view, it is not worth arguing about any “true” interpretation of these computations. One could sensibly interpret the computations in terms of the odds a decision maker, given the evidence, would reasonably offer about the relative probabilities that the sample came from one of two specified universes (one of them probably being centered on the sample); this does provide some information on reliability, but this procedure departs from the concept of confidence intervals.

+
+

27.1.1 Example: Counted Data: The Accuracy of Political Polls

+

Consider the reliability of a randomly selected 1988 presidential election poll, showing 840 intended votes for Bush and 660 intended votes for Dukakis out of 1500 (Wonnacott and Wonnacott 1990, 5). Let us work through the logic of this example.

+ +
    +
  • What is the question? Stated technically, what are the 95% confidence limits for the proportion of Bush supporters in the population? (The proportion is the mean of a binomial population or sample, of course.) More broadly, within which bounds could one confidently believe that the population proportion was likely to lie? At this stage of the work, we must already have translated the conceptual question (in this case, a decision-making question from the point of view of the candidates) into a statistical question. (See Chapter 20 on translating questions into statistical form.)
  • +
  • What is the purpose to be served by answering this question? There is no sharp and clear answer in this case. The goal could be to satisfy public curiosity, or strategy planning for a candidate (though a national proportion is not as helpful for planning strategy as state data would be). A secondary goal might be to help guide decisions about the sample size of subsequent polls.
  • +
  • Is this a “probability” or a “probability-statistics” question? The latter; we wish to infer from sample to population rather than the converse.
  • +
  • Given that this is a statistics question: What is the form of the statistics question — confidence limits or hypothesis testing? Confidence limits.
  • +
  • Given that the question is about confidence limits: What is the description of the sample that has been observed? a) The raw sample data — the observed numbers of interviewees are 840 for Bush and 660 for Dukakis — constitutes the best description of the universe. The statistics of the sample are the given proportions — 56 percent for Bush, 44 percent for Dukakis.
  • +
  • Which universe? (Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe about whose parameter you wish to make statements?) The best guess is that the population proportion is the sample proportion — that is, the population contains 56 percent Bush votes, 44 percent Dukakis votes.
  • +
  • Possibilities for Bayesian analysis? Not in this case, unless you believe that the sample was biased somehow.
  • +
  • Which parameter(s) do you wish to make statements about? Mean, median, standard deviation, range, interquartile range, other? We wish to estimate the proportion in favor of Bush (or Dukakis).
  • +
  • Which symbols for the observed entities? Perhaps 56 green and 44 yellow balls, if a bucket is used, or “0” and “1” if the computer is used.
  • +
  • Discrete or continuous distribution? In principle, discrete. (All distributions must be discrete in practice.)
  • +
  • What values or ranges of values? “0” or “1.”
  • +
  • Finite or infinite? Infinite — the sample is small relative to the population.
  • +
  • If the universe is what you guess it to be, for which samples do you wish to estimate the variation? A sample the same size as the observed poll.
  • +
+

Here one may continue either with resampling or with the conventional method. Everything done up to now would be the same whether continuing with resampling or with a standard parametric test.

+
+
+
+

27.2 Conventional Calculational Methods

+

Estimating the Distribution of Differences Between Sample and Population Means With the Normal Distribution.

+

In the conventional approach, one could in principle work from first principles with lists and sample space, but that would surely be too cumbersome. One could work with binomial proportions, but this problem has too large a sample for tree-drawing and quincunx techniques; even the ordinary textbook table of binomial coefficients is too small for this job. Calculating binomial coefficients also is a big job. So instead one would use the Normal approximation to the binomial formula.
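For reference, here is a minimal sketch of that conventional calculation, not reproduced from the original text: the Normal approximation gives the familiar interval of the observed proportion plus or minus 1.96 standard errors.

import numpy as np

# Bush-Dukakis poll: 840 of 1500 respondents for Bush.
n = 1500
p_hat = 840 / n  # observed proportion, 0.56

# Normal approximation to the binomial: standard error of a proportion.
se = np.sqrt(p_hat * (1 - p_hat) / n)

# 95 percent interval using the conventional z value of 1.96.
z = 1.96
lower, upper = p_hat - z * se, p_hat + z * se
print('Approximate 95 percent confidence interval:', lower, upper)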

+

(Note to the beginner: The distribution of means that we manipulate has the Normal shape because of the operation of the Central Limit Theorem. Sums and averages, when the sample is reasonably large, take on this shape even if the underlying distribution is not Normal. This is a truly astonishing property of randomly drawn samples — the distribution of their means quickly comes to resemble a “Normal” distribution, no matter the shape of the underlying distribution. We then standardize it with the standard deviation or other devices so that we can state the probability distribution of the sampling error of the mean for any sample of reasonable size.)

+

The exercise of creating the Normal shape empirically is simply a generalization of particular cases such as we will later create here for the poll by resampling simulation. One can also go one step further and use the formula of de Moivre-Laplace-Gauss to describe the empirical distributions, and to serve instead of the empirical distributions. Looking ahead now, the difference between resampling and the conventional approach can be said to be that in the conventional approach we simply plot the Gaussian distribution very carefully, and use a formula instead of the empirical histograms, afterwards putting the results in a standardized table so that we can read them quickly without having to recreate the curve each time we use it. More about the nature of the Normal distribution may be found in Simon (forthcoming).

+

All the work done above uses the information specified previously — the sample size of 1500, the drawing with replacement, the observed proportion as the criterion.

+
+
+

27.3 Confidence Intervals Empirically — With Resampling

+

Estimating the Distribution of Differences Between Sample and Population Means By Resampling

+
    +
  • What procedure to produce entities?: Random selection from bucket or computer.
  • +
  • Simple (single step) or complex (multiple “if” drawings)?: Simple.
  • +
  • What procedure to produce resamples? That is, with or without replacement? With replacement.
  • +
  • Number of drawings? The number of observations in the actual sample, and hence the number of drawings in each resample: 1500.
  • +
  • What to record as result of each resample drawing? Mean, median, or whatever of resample? The proportion is what we seek.
  • +
  • Stating the distribution of results : The distribution of proportions for the trial samples.
  • +
  • Choice of confidence bounds? : 95%, two tails (choice made by the textbook that posed the problem).
  • +
  • Computation of probabilities within chosen bounds : Read the probabilistic result from the histogram of results.
  • +
  • Computation of upper and lower confidence bounds: Locate the values corresponding to the 2.5th and 97.5th percentile of the resampled proportions.
  • +
+

Because the theory of confidence intervals is so abstract (even with the resampling method of computation), let us now walk through this resampling demonstration slowly, using the conventional Approach 1 described previously. We first produce a sample, and then see how the process works in reverse to estimate the reliability of the sample, using the Bush-Dukakis poll as an example. The computer program follows below.

+
    +
  • Step 1: Draw a sample of 1500 voters from a universe that, based on the observed sample, is 56 percent for Bush, 44 percent for Dukakis. The first such sample produced by the computer happens to be 53 percent for Bush; it might have been 58 percent, or 55 percent, or very rarely, 49 percent for Bush.
  • +
  • Step 2: Repeat step 1 perhaps 400 or 1000 times.
  • +
  • Step 3: Estimate the distribution of means (proportions) of samples of size 1500 drawn from this 56-44 percent Bush-Dukakis universe; the resampling result is shown below.
  • +
  • Step 4: In a fashion similar to what was done in steps 1-3, now compute the 95 percent confidence intervals for some other postulated universe mean — say 53% for Bush, 47% for Dukakis. This step produces a confidence interval that is not centered on the sample mean and the estimated universe mean, and hence it shows the independence of the procedure from that magnitude. And we now compare the breadth of the estimated confidence interval generated with the 53-47 percent universe against the confidence interval derived from the corresponding distribution of sample means generated by the “true” Bush-Dukakis population of 56-44 percent. If the procedure works well, the results of the two procedures should be similar.
  • +
+
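The program itself is not reproduced in this rendering, so here is a minimal sketch, ours, that follows the four steps above; the helper function proportion_distribution is our own name, not from the text.

import numpy as np
import matplotlib.pyplot as plt

rnd = np.random.default_rng()

n_trials = 10000
sample_size = 1500

def proportion_distribution(n_bush, n_dukakis):
    # Universe of voters: 1 = Bush, 0 = Dukakis, in the stated proportions.
    universe = np.repeat([1, 0], repeats=[n_bush, n_dukakis])
    props = np.zeros(n_trials)
    for i in range(n_trials):
        # Step 1: draw a sample of 1500 voters from this universe.
        sample = rnd.choice(universe, size=sample_size, replace=True)
        props[i] = np.sum(sample == 1) / sample_size
    # Step 2 is the repetition handled by the loop above.
    return props

# Step 3: the distribution of sample proportions from the 56-44 universe.
props_56 = proportion_distribution(56, 44)
plt.hist(props_56, bins='auto')
print('95 percent interval, 56-44 universe:',
      np.percentile(props_56, (2.5, 97.5)))

# Step 4: repeat for another postulated universe, 53-47; the breadth of the
# two intervals should be very similar, showing that the procedure does not
# depend on centering the universe at the sample mean.
props_53 = proportion_distribution(53, 47)
print('95 percent interval, 53-47 universe:',
      np.percentile(props_53, (2.5, 97.5)))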

Now we interpret the results using this first approach. The histogram shows the chance that the difference between the sample mean and the population mean — the error in the sample result — lies between zero and about 2.5 percentage points on the low side. That is, about 47.5 percent (half of 95 percent) of the time, a sample like this one will fall between the population mean and 2.5 percentage points below it. We do not know the actual population mean. But for any observed sample like this one, we can say that there is a 47.5 percent chance that it lies at or below the mean of the population that generated it, and no more than about 2.5 percentage points below it.

+

Now a crucial step: We turn around the statement just above, and say that there is a 47.5 percent chance that the population mean is no more than about 2.5 percentage points higher than the mean of a sample drawn like this one, but at or above the sample mean. (And we do the same for the other side of the sample mean.) So to recapitulate: We observe a sample and its mean. We estimate the error by experimenting with one or more universes in that neighborhood, and we then give the probability that the population mean is within that margin of error from the sample mean.

+
+

27.3.1 Example: Measured Data Example — the Bootstrap

+

A feed merchant decides to experiment with a new pig ration — ration A — on twelve pigs. To obtain a random sample, he provides twelve customers (selected at random) with sufficient food for one pig. After 4 weeks, the 12 pigs experience an average gain of 508 ounces. The weight gains of the individual pigs are as follows: 496, 544, 464, 416, 512, 560, 608, 544, 480, 466, 512, 496.

+

The merchant sees that the ration produces results that are quite variable (from a low of 416 ounces to a high of 608 ounces) and is therefore reluctant to advertise an average weight gain of 508 ounces. He speculates that a different sample of pigs might well produce a different average weight gain.

+

Unfortunately, it is impractical to sample additional pigs to gain additional information about the universe of weight gains. The merchant must rely on the data already gathered. How can these data be used to tell us more about the sampling variability of the average weight gain?

+

Recalling that all we know about the universe of weight gains is the sample we have observed, we can replicate that sample millions of times, creating a “pseudo-universe” that embodies all our knowledge about the real universe. We can then draw additional samples from this pseudo-universe and see how they behave.

+

More specifically, we replicate each observed weight gain millions of times — we can imagine writing each result that many times on separate pieces of paper — then shuffle those weight gains and pick out a sample of 12. Average the weight gain for that sample, and record the result. Take repeated samples, and record the result for each. We can then make a histogram of the results; it might look something like this:
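In code, the procedure just described might look like the following sketch (ours, not from the original); the histogram it draws is the one the text refers to, and the chapter reports that roughly 36 percent of resample means fell below 500 ounces in its run.

import numpy as np
import matplotlib.pyplot as plt

rnd = np.random.default_rng()

gains = np.array([496, 544, 464, 416, 512, 560, 608, 544, 480, 466, 512, 496])

n_trials = 10000
means = np.zeros(n_trials)

for i in range(n_trials):
    # Sampling 12 gains with replacement stands in for drawing from the
    # "pseudo-universe" of endlessly replicated observations.
    sample = rnd.choice(gains, size=12, replace=True)
    means[i] = np.mean(sample)

plt.hist(means, bins='auto')

# Proportion of resampled mean gains below a candidate advertising figure.
print('Proportion of resample means below 500 ounces:', np.mean(means < 500))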

[Histogram: distribution of mean weight gains across the bootstrap resamples]

Though we do not know the true average weight gain, we can use this histogram to estimate the bounds within which it falls. The merchant can consider various weight gains for advertising purposes, and estimate the probability that the true weight gain falls below the value. For example, he might wish to advertise a weight gain of 500 ounces. Examining the histogram, we see that about 36% of our samples yielded weight gains less than 500 ounces. The merchant might wish to choose a lower weight gain to advertise, to reduce the risk of overstating the effectiveness of the ration.

+

This illustrates the “bootstrap” method. By re-using our original sample many times (and using nothing else), we are able to make inferences about the population from which the sample came. This problem would conventionally be addressed with the “t-test.”

+
+
+

27.3.2 Example: Measured Data Example: Estimating Tree Diameters

+
    +
  • What is the question? A horticulturist is experimenting with a new type of tree. She plants 20 of them on a plot of land, and measures their trunk diameter after two years. She wants to establish a 90% confidence interval for the population average trunk diameter. For the data given below, calculate the mean of the sample and calculate (or describe a simulation procedure for calculating) a 90% confidence interval around the mean. Here are the 20 diameters, in centimeters and in no particular order (Table 27.1):

    +
    Table 27.1: Tree Diameters, in Centimeters
    8.5   7.6   9.3   5.5   11.4   6.9   6.5   12.9   8.7   4.8
    4.2   8.1   6.5   5.8   6.7    2.4   11.1  7.1    8.8   7.2
    +
  • +
  • What is the purpose to be served by answering the question? Either research & development, or pure science.

  • +
  • Is this a “probability” or a “statistics” question? Statistics.

  • +
  • What is the form of the statistics question? Confidence limits.

  • +
  • What is the description of the sample that has been observed? The raw data as shown above.

  • +
  • Statistics of the sample ? Mean of the tree data.

  • +
  • Which universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe whose parameter you wish to make statements about? Answer: The universe is like the sample above but much, much bigger. That is, in the absence of other information, we imagine this “bootstrap” universe as a collection of (say) one million trees of 8.5 centimeters width, one million of 7.2 centimeters, and so on. We’ll see in a moment that the device of sampling with replacement makes it unnecessary for us to work with such a large universe; by replacing each element after we draw it in a resample, we achieve the same effect as creating an almost-infinite universe from which to draw the resamples. (Are there possibilities for Bayesian analysis?) No Bayesian prior information will be included.

  • +
  • Which parameter do you wish to make statements about? The mean.

  • +
  • Which symbols for the observed entities? Cards or computer entries with numbers 8.5…7.2, sample of an infinite size.

  • +
  • If the universe is as guessed at, for which samples do you wish to estimate the variation? Samples of size 20.

  • +
+

Here one may continue with the conventional method. Everything up to now is the same whether continuing with resampling or with a standard parametric test. The information listed above is the basis for a conventional test.

+

Continuing with resampling (a code sketch follows the checklist below):

+
    +
  • What procedure will be used to produce the trial entities? Random selection: simple (single step), not complex (multiple “if” sample drawings).
  • +
  • What procedure to produce resamples? With replacement. As noted above, sampling with replacement allows us to forego creating a very large bootstrap universe; replacing the elements after we draw them achieves the same effect as would an infinite universe.
  • +
  • Number of drawings? 20 trees
  • +
  • What to record as result of resample drawing? The mean.
  • +
  • How to state the distribution of results? See histogram.
  • +
  • Choice of confidence bounds? 90%, two-tailed.
  • +
  • Computation of values of the resample statistic corresponding to chosen confidence bounds? Read from histogram.
  • +
+
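Here is the promised sketch, ours rather than reproduced from the original, following the checklist above for the 20 tree diameters.

import numpy as np
import matplotlib.pyplot as plt

rnd = np.random.default_rng()

diameters = np.array([8.5, 7.6, 9.3, 5.5, 11.4, 6.9, 6.5, 12.9, 8.7, 4.8,
                      4.2, 8.1, 6.5, 5.8, 6.7, 2.4, 11.1, 7.1, 8.8, 7.2])
observed_mean = np.mean(diameters)

n_trials = 10000
means = np.zeros(n_trials)

for i in range(n_trials):
    # Resample 20 diameters with replacement from the observed sample.
    sample = rnd.choice(diameters, size=20, replace=True)
    means[i] = np.mean(sample)

plt.hist(means, bins='auto')

print('Observed mean diameter:', observed_mean)
# 90 percent interval, two-tailed: the 5th and 95th percentiles.
pp = np.percentile(means, (5, 95))
print('Estimate of 90 percent confidence interval:', pp)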

As has been discussed in Chapter 19, it often is more appropriate to work with the median than with the mean. One reason is that the median is not as sensitive to extreme observations as the mean is. Another reason is that one need not assume a Normal distribution for the universe under study: this consideration affects conventional statistics but usually does not affect resampling; it is worth keeping in mind, though, when a statistician is choosing between a parametric (that is, Normal-based) and a non-parametric procedure.

+
+
+

27.3.3 Example: Determining a Confidence Interval for the Median Aluminum Content in Theban Jars

+

Data for the percentages of aluminum content in a sample of 18 ancient Theban jars (Catling and Jones 1977) are as follows, arranged in ascending order: 11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5, 15.0, 15.1, 15.8, 16.0, 16.3, 16.5, 16.9, 17.0, 17.2, 17.5, 19.0. Consider now putting a confidence interval around the median of 15.45 (halfway between the middle observations 15.1 and 15.8).

+

One may simply estimate a confidence interval around the median with a bootstrap procedure by substituting the median for the mean in the usual bootstrap procedure for estimating a confidence limit around the mean, as follows:

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+data = np.array(
+    [11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5, 15.0, 15.1, 15.8, 16.0, 16.3,
+     16.5, 16.9, 17.0, 17.2, 17.5, 19.0]
+)
+observed_median = np.median(data)
+
+n = 10000
+medians = np.zeros(n)
+
+for i in range(n):
+    sample = rnd.choice(data, size=18, replace=True)
+    # In the line above, replace=True is the default, so we could leave it out to
+    # get the same result.  We added it just to emphasize that bootstrap samples
+    # are samples _with_ replacement.
+    medians[i] = np.median(sample)
+
+plt.hist(medians, bins='auto')
+
+print('Observed median aluminum content:', observed_median)
+
+
Observed median aluminum content: 15.45
+
+
pp = np.percentile(medians, (2.5, 97.5))
+print('Estimate of 95 percent confidence interval:', pp)
+
+
Estimate of 95 percent confidence interval: [14.15 16.7 ]
+
+
+
+
+

+
+
+
+
+

(This problem would be approached conventionally with a binomial procedure leading to quite wide confidence intervals (Deshpande, Gore, and Shanubhogue 1995, 32)).
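For comparison, here is a sketch of one common order-statistic construction based on the binomial distribution; the particular variant is our choice and may differ from the one in the cited text, but it illustrates why such intervals tend to be wider than the bootstrap interval above.

import numpy as np
from scipy.stats import binom

data = np.sort(np.array(
    [11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5, 15.0, 15.1, 15.8, 16.0, 16.3,
     16.5, 16.9, 17.0, 17.2, 17.5, 19.0]
))
n = len(data)

# The number of observations falling below the population median follows a
# Binomial(n, 0.5) distribution.  Find the largest j with
# P(Binomial <= j) <= 0.025; the interval from the (j+1)-th smallest to the
# (n-j)-th smallest observation then covers the median at least 95 percent
# of the time (for this sample size such a j exists).
j = 0
while binom.cdf(j, n, 0.5) <= 0.025:
    j += 1
j -= 1

lower, upper = data[j], data[n - 1 - j]
coverage = 1 - 2 * binom.cdf(j, n, 0.5)
print('Order-statistic interval for the median:', lower, upper)
print('Exact coverage of this interval:', coverage)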

+ +
+
+

27.3.4 Example: Confidence Interval for the Median Price Elasticity of Demand for Cigarettes

+

The data for a measure of responsiveness of demand to a price change (the “elasticity” — percent change in demand divided by percent change in price) are shown for cigarette price changes as follows (Table 27.2). I (JLS) computed the data from cigarette sales data preceding and following a tax change in a state (Lyon and Simon 1968).

+
Table 27.2: Price elasticity of demand in various states at various dates
 1.725   1.139    .957    .863    .802    .517    .407    .304
  .204    .125    .122    .106    .031   -.032    -.1    -.142
 -.174   -.234   -.240   -.251   -.277   -.301   -.302   -.302
 -.307   -.328   -.329   -.346   -.357   -.376   -.377   -.383
 -.385   -.393   -.444   -.482   -.511   -.538   -.541   -.549
 -.554   -.600   -.613   -.644   -.692   -.713   -.724   -.734
 -.749   -.752   -.753   -.766   -.805   -.866   -.926   -.971
 -.972   -.975  -1.018  -1.024  -1.066  -1.118  -1.145  -1.146
-1.157  -1.282  -1.339  -1.420  -1.443  -1.478  -2.041  -2.092
-7.100
+
+

The positive observations (implying an increase in demand when the price rises) run against all theory, but can be considered to be the result simply of measurement errors, and treated as they stand. Aside from this minor complication, the reader may work this example similarly to the case of the Theban jars. Consider this program:

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+data = np.array([
+    1.725, 1.139, 0.957, 0.863, 0.802, 0.517, 0.407, 0.304,
+    0.204, 0.125, 0.122, 0.106, 0.031, -0.032, -0.1,  -0.142,
+    -0.174, -0.234, -0.240, -0.251, -0.277, -0.301, -0.302, -0.302,
+    -0.307, -0.328, -0.329, -0.346, -0.357, -0.376, -0.377, -0.383,
+    -0.385, -0.393, -0.444, -0.482, -0.511, -0.538, -0.541, -0.549,
+    -0.554, -0.600, -0.613, -0.644, -0.692, -0.713, -0.724, -0.734,
+    -0.749, -0.752, -0.753, -0.766, -0.805, -0.866, -0.926, -0.971,
+    -0.972, -0.975, -1.018, -1.024, -1.066, -1.118, -1.145, -1.146,
+    -1.157, -1.282, -1.339, -1.420, -1.443, -1.478, -2.041, -2.092,
+    -7.100
+])
+data_median = np.median(data)
+
+n = 10000
+
+medians = np.zeros(n)
+
+for i in range(n):
+    sample = rnd.choice(data, size=73, replace=True)
+    medians[i] = np.median(sample)
+
+plt.hist(medians, bins='auto')
+
+print('Observed median elasticity', data_median)
+
+
Observed median elasticity -0.511
+
+
pp = np.percentile(medians, (2.5, 97.5))
+print('Estimate of 95 percent confidence interval', pp)
+
+
Estimate of 95 percent confidence interval [-0.692 -0.357]
+
+
+
+
+

+
+
+
+
+
+
+
+

27.4 Measured Data Example: Confidence Intervals For a Difference Between Two Means

+

This is another example from the mice data.

+

We now return to the data on the survival times of the two groups of mice in Section 24.0.3. It is the view of this book that confidence intervals should be calculated for a difference between two groups only if one is reasonably satisfied that the difference is not due to chance. Some statisticians might choose to compute a confidence interval in this case nevertheless, some because they believe that the confidence-interval machinery is more appropriate to deciding whether the difference is the likely outcome of chance than is the machinery of a hypothesis test in which you are concerned with the behavior of a benchmark or null universe. So let us calculate a confidence interval for these data, which will in any case demonstrate the technique for determining a confidence interval for a difference between two samples.

+

Our starting point is our estimate for the difference in mean survival times between the two samples — 30.63 days. We ask “How much might this estimate be in error? If we drew additional samples from the control universe and additional samples from the treatment universe, how much might they differ from this result?”

+

We do not have the ability to go back to these universes and draw more samples, but from the samples themselves we can create hypothetical universes that embody all that we know about the treatment and control universes. We imagine replicating each element in each sample millions of times to create a hypothetical control universe and (separately) a hypothetical treatment universe. Then we can draw samples (separately) from these hypothetical universes to see how reliable is our original estimate of the difference in means (30.63 days).

+

Actually, we use a shortcut — instead of copying each sample element a million times, we simply replace it after drawing it for our resample, thus creating a universe that is effectively infinite.

+

Here are the steps:

+
    +
  • Step 1: Consider the two samples separately as the relevant universes.
  • +
  • Step 2: Draw a sample of 7 with replacement from the treatment group and calculate the mean.
  • +
  • Step 3: Draw a sample of 9 with replacement from the control group and calculate the mean.
  • +
  • Step 4: Calculate the difference in means (treatment minus control) & record.
  • +
  • Step 5: Repeat steps 2-4 many times.
  • +
  • Step 6: Review the distribution of resample means; the 5th and 95th percentiles are estimates of the endpoints of a 90 percent confidence interval.
  • +
+

Here is a Python example:

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+treatment = np.array([94, 38, 23, 197, 99, 16, 141])
+control = np.array([52, 10, 40, 104, 51, 27, 146, 30, 46])
+
+observed_diff = np.mean(treatment) - np.mean(control)
+
+n = 10000
+mean_delta = np.zeros(n)
+
+for i in range(n):
+    treatment_sample = rnd.choice(treatment, size=7, replace=True)
+    control_sample = rnd.choice(control, size=9, replace=True)
+    mean_delta[i] = np.mean(treatment_sample) - np.mean(control_sample)
+
+plt.hist(mean_delta, bins='auto')
+
+print('Observed difference in means:', observed_diff)
+
+
Observed difference in means: 30.63492063492064
+
+
pp = np.percentile(mean_delta, (5, 95))
+print('Estimate of 90 percent confidence interval:', pp)
+
+
Estimate of 90 percent confidence interval: [-12.6515873  74.7484127]
+
+
+
+
+

+
+
+
+
+

Interpretation: This means that one can be 90 percent confident that the difference in means (which is estimated to be 30.635) falls between -12.652 and 74.748. This interval is quite wide, so the reliability of the estimate is low.

+
+
+

27.5 Count Data Example: Confidence Limit on a Proportion, Framingham Cholesterol Data

+

The Framingham cholesterol data were used in Section 21.2.6 to illustrate the first classic question in statistical inference — interpretation of sample data for testing hypotheses. Now we use the same data for the other main theme in statistical inference — the estimation of confidence intervals. Indeed, the bootstrap method discussed above was originally devised for estimation of confidence intervals. The bootstrap method may also be used to calculate the appropriate sample size for experiments and surveys, another important topic in statistics.

+

Consider for now just the data for the sub-group of 135 high-cholesterol men in Table 21.4. Our second classic statistical question is as follows: How much confidence should we have that if we were to take a much larger sample than was actually obtained, the sample mean (that is, the proportion 10/135 = .07) would be in some close vicinity of the observed sample mean? Let us first carry out a resampling procedure to answer the questions, waiting until afterwards to discuss the logic of the inference.

+
    +
  1. Construct a bucket containing 135 balls — 10 red (infarction) and 125 green (no infarction) to simulate the universe as we guess it to be.
  2. +
  3. Mix, choose a ball, record its color, replace it, and repeat 135 times (to simulate a sample of 135 men).
  4. +
  5. Record the number of red balls among the 135 balls drawn.
  6. +
  7. Repeat steps 2-3 perhaps 10000 times, and observe how much the total number of reds varies from sample to sample. We arbitrarily denote the boundary lines that include 47.5 percent of the hypothetical samples on each side of the sample mean as the 95 percent “confidence limits” around the mean of the actual population.
  8. +
+

Here is a Python program:

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+men = np.repeat([1, 0], repeats=[10, 125])
+
+n = 10000
+z = np.zeros(n)
+
+for i in range(n):
+    sample = rnd.choice(men, size=135, replace=True)
+    infarctions = np.sum(sample == 1)
+    z[i] = infarctions / 135
+
+plt.hist(z, bins='auto')
+
+pp = np.percentile(z, (2.5, 97.5))
+print('Estimate of 95 percent confidence interval', pp)
+
+
Estimate of 95 percent confidence interval [0.02962963 0.11851852]
+
+
+
+
+

+
+
+
+
+

(The result is the 95 percent confidence interval, enclosing 95 percent of the resample results)

+

The variation in the histogram above highlights the fact that a sample containing only 10 cases of infarction is very small, and the number of observed cases — or the proportion of cases — necessarily varies greatly from sample to sample. Perhaps the most important implication of this statistical analysis, then, is that we badly need to collect additional data.

+

Again, this is a classic problem in confidence intervals, found in all subject fields. The language used in the cholesterol-infarction example is exactly the same as the language used for the Bush-Dukakis poll above except for labels and numbers.

+

As noted above, the philosophic logic of confidence intervals is quite deep and controversial, less obvious than for the hypothesis test. The key idea is that we can estimate for any given universe the probability P that a sample’s mean will fall within any given distance D of the universe’s mean; we then turn this around and assume that if we know the sample mean, the probability is P that the universe mean is within distance D of it. This inversion is more slippery than it may seem. But the logic is exactly the same for the formulaic method and for resampling. The only difference is how one estimates the probabilities — either with a numerical resampling simulation (as here), or with a formula or other deductive mathematical device (such as counting and partitioning all the possibilities, as Galileo did when he answered a gambler’s question about three dice). And when one uses the resampling method, the probabilistic calculations are the least demanding part of the work. One then has mental capacity available to focus on the crucial part of the job — framing the original question soundly, choosing a model for the facts so as to properly resemble the actual situation, and drawing appropriate inferences from the simulation.

+
+
+

27.6 Approach 2: Probability of various universes producing this sample

+

A second approach to the general question of estimate accuracy is to analyze the behavior of a variety of universes centered at other points on the line, rather than the universe centered on the sample mean. One can ask the probability that a distribution centered away from the sample mean, with a given dispersion, would produce (say) a 10-apple scatter having a mean as far away from the given point as the observed sample mean. If we assume the situation to be symmetric, we can find a point at which we can say that a distribution centered there would have only a (say) 5 percent chance of producing the observed sample. And we can also say that a distribution even further away from the sample mean would have an even lower probability of producing the given sample. But we cannot turn the matter around and say that there is any particular chance that the distribution that actually produced the observed sample is between that point and the center of the sample.

+

Imagine a situation where you are standing on one side of a canyon, and you are hit by a baseball, the only ball in the vicinity that day. Based on experiments, you can estimate that a baseball thrower who you see standing on the other side of the canyon has only a 5 percent chance of hitting you with a single throw. But this does not imply that the source of the ball that hit you was someone else standing in the middle of the canyon, because that is patently impossible. That is, your knowledge about the behavior of the “boundary” universe does not logically imply anything about the existence and behavior of any other universes. But just as in the discussion of testing hypotheses, if you know that one possibility is unlikely, it is reasonable that as a result you will draw conclusions about other possibilities in the context of your general knowledge and judgment.

+

We can find the “boundary” distribution(s) we seek if we a) specify a measure of dispersion, and b) try every point along the line leading away from the sample mean, until we find that distribution that produces samples such as that observed with a (say) 5 percent probability or less.

+

To estimate the dispersion, in many cases we can safely use an estimate based on the sample dispersion, using either resampling or Normal distribution theory. The hardest cases for resampling are a) a very small sample of data, and b) a proportion near 0 or near 1.0 (because the presence or absence in the sample of a small number of observations can change the estimate radically, and therefore a large sample is needed for reliability). In such situations one should use additional outside information, or Normal distribution theory, or both.

+

We can also create a confidence interval in the following fashion: We can first estimate the dispersion for a universe in the general neighborhood of the sample mean, using various devices to be “conservative,” if we like.2 Given the estimated dispersion, we then estimate the probability distribution of various amounts of error between observed sample means and the population mean. We can do this with resampling simulation as follows: a) Create other universes at various distances from the sample mean, but with other characteristics similar to the universe that we postulate for the immediate neighborhood of the sample, and b) experiment with those universes. One can also apply the same logic with a more conventional parametric approach, using general knowledge of the sampling distribution of the mean, based on Normal distribution theory or previous experience with resampling. We shall not discuss the latter method here.

+

As with Approach 1, we do not make any probability statements about where the population mean may be found. Rather, we discuss only what various hypothetical universes might produce, and make inferences about the “actual” population’s characteristics by comparison with those hypothesized universes.

+

If we are interested in (say) a 95 percent confidence interval, we want to find the distribution on each side of the sample mean that would produce a sample with a mean that far away only 2.5 percent of the time (2 * .025 = 1-.95). A shortcut to find these “border distributions” is to plot the sampling distribution of the mean at the center of the sample, as in Approach 1. Then find the (say) 2.5 percent cutoffs at each end of that distribution. On the assumption of equal dispersion at the two points along the line, we now reproduce the previously-plotted distribution with its centroid (mean) at those 2.5 percent points on the line. The new distributions will have 2.5 percent of their areas on the other side of the mean of the sample.

+
+

27.6.1 Example: Approach 2 for Counted Data: the Bush-Dukakis Poll

+

Let’s implement Approach 2 for counted data, using for comparison the Bush-Dukakis poll data discussed earlier in the context of Approach 1.

+

We seek to state, for universes that we select on the basis that their results will interest us, the probability that they (or it, for a particular universe) would produce a sample as far or farther away from the mean of the universe in question as the mean of the observed sample — 56 percent for Bush. The most interesting universe is that which produces such a sample only about 5 percent of the time, simply because of the correspondence of this value to a conventional breakpoint in statistical inference. So we could experiment with various universes by trial and error to find this universe.

+

We can learn from our previous simulations of the Bush — Dukakis poll in Approach 1 that about 95 percent of the samples fall within .025 on either side of the sample mean (which we had been implicitly assuming is the location of the population mean). If we assume (and there seems no reason not to) that the dispersions of the universes we experiment with are the same, we will find (by symmetry) that the universe we seek is centered on those points .025 away from .56, or .535 and .585.

+

From the standpoint of Approach 2, then, the conventional sample formula that is centered at the mean can be considered a shortcut to estimating the boundary distributions. We say that the boundary is at the point that centers a distribution which has only a (say) 2.5 percent chance of producing the observed sample; it is that distribution which is the subject of the discussion, and not the distribution which is centered at \(\mu = \bar{x}\). Results of these simulations are shown in Figure 27.1.

+
+
+

+
Figure 27.1: Approach 2 for Bush-Dukakis problem
+
+
+

About these distributions centered at .535 and .585 — or more importantly for understanding an election situation, the universe centered at .535 — one can say: Even if the “true” value is as low as 53.5 percent for Bush, there is only a 2½ percent chance that a sample as high as 56 percent pro-Bush would be observed. (The values of a 2½ percent probability and a 2½ percent difference between 56 percent and 53.5 percent coincide only by chance in this case.) It would be even more revealing in an election situation to make a similar statement about the universe located at 50-50, but this would bring us almost entirely within the intellectual ambit of hypothesis testing.
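The statement about the universe centered at .535 can be checked directly by resampling from that universe. The sketch below is ours rather than a program from the earlier chapter, and it assumes a poll of 1,500 respondents, a size chosen only because it is consistent with the roughly ±.025 spread quoted above:

import numpy as np

rnd = np.random.default_rng()

n_respondents = 1500    # assumed poll size (not given on this page)
n = 10000
props = np.zeros(n)
for i in range(n):
    # Draw a poll from a universe that is truly 53.5 percent pro-Bush.
    sample = rnd.choice([1, 0], size=n_respondents, p=[0.535, 0.465])
    props[i] = np.mean(sample)

# How often does that universe produce a sample at least as pro-Bush as the
# 56 percent actually observed?  Roughly 2.5 percent of the time.
print('Proportion of samples >= 0.56:', np.mean(props >= 0.56))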

+

To restate, then: Moving progressively farther away from the sample mean, we can eventually find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can then say that this point represents a “limit” or “boundary” so that the interval between it and the sample mean may be called a confidence interval.

+
+
+

27.6.2 Example: Approach 2 for Measured Data: The Diameters of Trees

+

To implement Approach 2 for measured data, one may proceed exactly as with Approach 1 above except that the output of the simulation with the sample mean as midpoint will be used for guidance about where to locate trial universes for Approach 2. The results for the tree diameter data (Table 27.1) are shown in Figure 27.2.
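The tree diameters of Table 27.1 are not repeated on this page, so the following sketch (ours) takes the measurements as a NumPy array and carries out the two stages just described: locate a candidate boundary with Approach 1, then resample from a trial universe centered there.

import numpy as np

def approach_2_lower_boundary(diameters, n_trials=10_000):
    # `diameters` is the observed sample (for example, the values of Table 27.1).
    rnd = np.random.default_rng()
    n = len(diameters)
    obs_mean = np.mean(diameters)

    # Approach 1: bootstrap means of samples drawn from the observed sample.
    boot = np.array([np.mean(rnd.choice(diameters, size=n, replace=True))
                     for _ in range(n_trials)])
    lo = np.percentile(boot, 2.5)          # candidate lower boundary

    # Approach 2: a trial universe with the sample's dispersion, centered at `lo`.
    shifted = np.asarray(diameters) - obs_mean + lo
    means = np.array([np.mean(rnd.choice(shifted, size=n, replace=True))
                      for _ in range(n_trials)])
    # How often does the boundary universe produce a sample mean at least as
    # high as the mean actually observed?  (Roughly 2.5 percent, if `lo` is
    # indeed the boundary.)
    return lo, np.mean(means >= obs_mean)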

+
+
+

+
Figure 27.2: Approach 2 for tree diameters
+
+
+
+
+
+

27.7 Interpretation of Approach 2

+

Now to interpret the results of the second approach: Assume that the sample is not drawn in a biased fashion (such as the wind blowing all the apples in the same direction), and that the population has the same dispersion as the sample. We can then say that distributions centered at the two endpoints of the 95 percent confidence interval (each of them including a tail in the direction of the observed sample mean with 2.5 percent of the area), or even further away from the sample mean, will produce the observed sample only 5 percent of the time or less.

+

The result of the second approach is more in the spirit of a hypothesis test than of the usual interpretation of confidence intervals. Another statement of the result of the second approach is: We postulate a given universe — say, a universe at (say) the two-tailed 95 percent boundary line. We then say: The probability that the observed sample would be produced by a universe with a mean as far (or further) from the observed sample’s mean as the universe under investigation is only 2.5 percent. This is similar to the probability value interpretation of a hypothesis-test framework. It is not a direct statement about the location of the mean of the universe from which the sample has been drawn. But it is certainly reasonable to derive a betting-odds interpretation of the statement just above, to wit: The chances are 2½ in 100 (or, the odds are 2½ to 97½) that a population located here would generate a sample with a mean as far away as the observed sample. And it would seem legitimate to proceed to the further betting-odds statement that (assuming we have no additional information) the odds are 97½ to 2½ that the mean of the universe that generated this sample is no farther away from the sample mean than the mean of the boundary universe under discussion. About this statement there is nothing slippery, and its meaning should not be controversial.

+

Here again the tactic for interpreting the statistical procedure is to restate the facts of the behavior of the universe that we are manipulating and examining at that moment. We use a heuristic device to find a particular distribution — the one that is at (say) the 97½–2½ percent boundary — and simply state explicitly what the distribution tells us implicitly: The probability of this distribution generating the observed sample (or a sample even further removed) is 2½ percent. We could go on to say (if it were of interest to us at the moment) that because the probability of this universe generating the observed sample is as low as it is, we “reject” the “hypothesis” that the sample came from a universe this far away or further. Or in other words, we could say that because we would be very surprised if the sample were to have come from this universe, we instead believe that another hypothesis is true. The “other” hypothesis often is that the universe that generated the sample has a mean located at the sample mean or closer to it than the boundary universe.

+

The behavior of the universe at the 97½–2½ percent boundary line can also be interpreted in terms of our “confidence” about the location of the mean of the universe that generated the observed sample. We can say: At this boundary point lies the end of the region within which we would bet 97½ to 2½ that the mean of the universe that generated this sample lies to the (say) right of it.

+

As noted in the preview to this chapter, we do not learn about the reliability of sample estimates of the population mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because in principle this cannot be done. Instead, in this second approach we investigate the behavior of various universes at the borderline of the neighborhood of the sample, those universes being chosen on the basis of their resemblances to the sample. We seek, for example, to find the universes that would produce a sample mean as extreme as the observed sample’s mean less than (say) 5 percent of the time. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of hypothesized universes, the hypotheses being implicitly suggested by the sample evidence but not logically implied by that evidence.

+

Approaches 1 and 2 may (if one chooses) be seen as identical conceptually as well as (in many cases) computationally (except for the asymmetric distributions mentioned earlier). But as I see it, the interpretation of them is rather different, and distinguishing them helps one’s intuitive understanding.

+
+
+

27.8 Exercises

+

Solutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.

+
+

27.8.1 Exercise 1

+

In a sample of 200 people, 7 percent are found to be unemployed. Determine a 95 percent confidence interval for the true population proportion.

+
+
+

27.8.2 Exercise 2

+

A sample of 20 batteries is tested, and the average lifetime is 28.85 months. Establish a 95 percent confidence interval for the true average value. The sample values (lifetimes in months) are listed below.

+

30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31

+
+
+

27.8.3 Exercise 3

+

Suppose we have 10 measurements of Optical Density on a batch of HIV negative control:

+

.02 .026 .023 .017 .022 .019 .018 .018 .017 .022

+

Derive a 95 percent confidence interval for the sample mean. Are there enough measurements to produce a satisfactory answer?

+ + + +
+
+ + +
+ + +
\ No newline at end of file
diff --git a/python-book/correlation_causation.html b/python-book/correlation_causation.html
new file mode 100644
index 00000000..c98119cd
--- /dev/null
+++ b/python-book/correlation_causation.html
+Resampling statistics - 29  Correlation and Causation
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

29  Correlation and Causation

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

29.1 Preview

+

The correlation (speaking in a loose way for now) between two variables measures the strength of the relationship between them. A positive “linear” correlation between two variables x and y implies that high values of x are associated with high values of y, and that low values of x are associated with low values of y. A negative correlation implies the opposite; high values of x are associated with low values of y. By definition a “correlation coefficient” close to zero indicates little or no linear relationship between two variables; correlation coefficients close to 1 and -1 denote a strong positive or negative relationship. We will generally use a simpler measure of correlation than the correlation coefficient, however.

+

One way to measure correlation with the resampling method is to rank both variables from highest to lowest, and investigate how often in randomly-generated samples the rankings of the two variables are as close to each other as the rankings in the observed variables. A better approach, because it uses more of the quantitative information contained in the data though it requires more computation, is to multiply the values for the corresponding pairs of values for the two variables, and compare the sum of the resulting products to the analogous sum for randomly-generated pairs of the observed variable values. The last section of the chapter shows how the strength of a relationship can be determined when the data are counted, rather than measured. First comes some discussion of the philosophical issues involved in correlation and causation.

+
+
+

29.2 Introduction to correlation and causation

+

The questions in examples Section 12.1 to Section 13.3.3 have been stated in the following form: Does the independent variable (say, irradiation; or type of pig ration) have an effect upon the dependent variable (say, sex of fruit flies; or weight gain of pigs)? This is another way to state the following question: Is there a causal relationship between the independent variable(s) and the dependent variable? (“Independent” or “control” is the name we give to the variable(s) the researcher believes is (are) responsible for changes in the other variable, which we call the “dependent” or “response” variable.)

+

A causal relationship cannot be defined perfectly neatly. Even an experiment does not determine perfectly whether a relationship deserves to be called “causal” because, among other reasons, the independent variable may not be clear-cut. For example, even if cigarette smoking experimentally produces cancer in rats, it might be the paper and not the tobacco that causes the cancer. Or consider the fabled gentlemen who got experimentally drunk on bourbon and soda on Monday night, scotch and soda on Tuesday night, and brandy and soda on Wednesday night — and stayed sober Thursday night by drinking nothing. With a vast inductive leap of scientific imagination, they treated their experience as an empirical demonstration that soda, the common element each evening, was the cause of the inebriated state they had experienced. Notice that their deduction was perfectly sound, given only the recent evidence they had. Other knowledge of the world is necessary to set them straight. That is, even in a controlled experiment there is often no way except subject-matter knowledge to avoid erroneous conclusions about causality. Nothing except substantive knowledge or scientific intuition would have led them to the recognition that it is the alcohol rather than the soda that made them drunk, as long as they always took soda with their drinks . And no statistical procedure can suggest to them that they ought to experiment with the presence and absence of soda. If this is true for an experiment, it must also be true for an uncontrolled study.

+

Here are some tests that a relationship usually must pass to be called causal. That is, a working definition of a particular causal relationship is expressed in a statement that has these important characteristics:

+
    +
  1. It is an association that is strong enough so that the observer believes it to have a predictive (explanatory) power great enough to be scientifically useful or interesting. For example, he is not likely to say that wearing glasses causes (or is a cause of) auto accidents if the observed correlation is .07, even if the sample is large enough to make the correlation statistically significant. In other words, unimportant relationships are not likely to be labeled causal.

    +

    Various observers may well differ in judging whether or not an association is strong enough to be important and therefore “causal.” And the particular field in which the observer works may affect this judgment. This is an indication that whether or not a relationship is dubbed “causal” involves a good deal of human judgment and is subject to dispute.

  2. +
  3. The “side conditions” must be sufficiently few and sufficiently observable so that the relationship will apply under a wide enough range of conditions to be considered useful or interesting. In other words, the relationship must not require too many “if”s, “and”s, and “but”s in order to hold . For example, one might say that an increase in income caused an increase in the birth rate if this relationship were observed everywhere. But, if the relationship were found to hold only in developed countries, among the educated classes, and among the higher-income groups, then it would be less likely to be called “causal” — even if the correlation were extremely high once the specified conditions had been met. A similar example can be made of the relationship between income and happiness.

  4. +
  5. For a relationship to be called “causal,” there should be sound reason to believe that, even if the control variable were not the “real” cause (and it never is), other relevant “hidden” and “real” cause variables must also change consistently with changes in the control variables. That is, a variable being manipulated may reasonably be called “causal” if the real variable for which it is believed to be a proxy must always be tied intimately to it. (Between two variables, v and w, v may be said to be the “more real” cause and w a “spurious” cause, if v and w require the same side conditions, except that v does not require w as a side condition.) This third criterion (non-spuriousness) is of particular importance to policy makers. The difference between it and the previous criterion for side conditions is that a plenitude of very restrictive side conditions may take the relationship out of the class of causal relationships, even though the effects of the side conditions are known . This criterion of nonspuriousness concerns variables that are as yet unknown and unevaluated but that have a possible ability to upset the observed association.

    +

    Examples of spurious relationships and hidden-third-factor causation are commonplace. For a single example, toy sales rise in December. There is no danger in saying that December causes an increase in toy sales, even though it is “really” Christmas that causes the increase, because Christmas and December practically always accompany each other.

    +

    Belief that the relationship is not spurious is increased if many likely variables have been investigated and none removes the relationship. This is further demonstration that the test of whether or not an association should be called “causal” cannot be a logical one; there is no way that one can express in symbolic logic the fact that many other variables have been tried without changing the relationship in question.

  6. +
  7. The more tightly a relationship is bound into (that is, deduced from, compatible with, and logically connected to) a general framework of theory, the stronger is its claim to be called “causal.” For an economics example, observed positive relationships between the interest rate and business investment and between profits and investment are more likely to be called “causal” than is the relationship between liquid assets and investment. This is so because the first two statements can be deduced from classical price theory, whereas the third statement cannot. Connection to a theoretical framework provides support for belief that the side conditions necessary for the statement to hold true are not restrictive and that the likelihood of spurious correlation is not great; because a statement is logically connected to the rest of the system, the statement tends to stand or fall as the rest of the system stands or falls. And, because the rest of the system of economic theory has, over a long period of time and in a wide variety of tests, been shown to have predictive power, a statement connected with it is cloaked in this mantle.

  8. +
+

The social sciences other than economics do not have such well-developed bodies of deductive theory, and therefore this criterion of causality does not weigh as heavily in sociology, for instance, as in economics. Rather, the other social sciences seem to substitute a weaker and more general criterion, that is, whether or not the statement of the relationship is accompanied by other statements that seem to “explain” the “mechanism” by which the relationship operates. Consider, for example, the relationship between the phases of the moon and the suicide rate. The reason that sociologists do not call it causal is that there are no auxiliary propositions that explain the relationship and describe an operative mechanism. On the other hand, the relationship between broken homes and juvenile delinquency is often referred to as “causal,” in large part because a large body of psychoanalytic theory serves to explain why a child raised without one or the other parent, or in the presence of parental strife, should not adjust readily.

+

Furthermore, one can never decide with perfect certainty whether in any given situation one variable “causes” a particular change in another variable. At best, given your particular purposes in investigating a phenomenon, you may be safe in judging that very likely there is causal influence.

+

In brief, it is correct to say (as it is so often said) that correlation does not prove causation — if we add the word “completely” to make it “correlation does not completely prove causation.” On the other hand, causation can never be “proven” completely by correlation or any other tool or set of tools, including experimentation. The best we can do is make informed judgments about whether to call a relationship causal.

+

It is clear, however, that in any situation where we are interested in the possibility of causation, we must at least know whether there is a relationship (correlation) between the variables of interest; the existence of a relationship is necessary for a relationship to be judged causal even if it is not sufficient to receive the causal label. And in other situations where we are not even interested in causality, but rather simply want to predict events or understand the structure of a system, we may be interested in the existence of relationships quite apart from questions about causations. Therefore our next set of problems deals with the probability of there being a relationship between two measured variables, variables that can take on any values (say, the values on a test of athletic scores) rather than just two values (say, whether or not there has been irradiation.)1

+

Another way to think about such problems is to ask whether two variables are independent of each other — that is, whether you know anything about the value of one variable if you know the value of the other in a particular case — or whether they are not independent but rather are related.

+
+
+

29.3 A Note on Association Compared to Testing a Hypothesis

+

Problems in which we investigate a) whether there is an association, versus b) whether there is a difference between just two groups, often look very similar, especially when the data constitute a 2-by-2 table. There is this important difference between the two types of analysis, however: Questions about association refer to variables — say weight and age — and it never makes sense to ask whether there is a difference between variables (except when asking whether they measure the same quantity). Questions about similarity or difference refer to groups of individuals, and in such a situation it does make sense to ask whether or not two groups are observably different from each other.

+

Example 23-1: Is Athletic Ability Directly Related to Intelligence? (Is There Correlation Between Two Variables or Are They Independent?) (Program “Ability1”)

+

A scientist often wants to know whether or not two characteristics go together, that is, whether or not they are correlated (that is, related or associated). For example, do youths with high athletic ability tend to also have high I.Q.s?

+

Hypothetical physical-education scores of a group of ten high-school boys are shown in Table 23-1, ordered from high to low, along with the I.Q. score for each boy. The ranks for each student’s athletic and I.Q. scores are then shown in columns 3 and 4.

+

Table 23-1

+

Hypothetical Athletic and I.Q. Scores for High School Boys

Athletic Score   I.Q. Score   Athletic Rank   I.Q. Rank
     (1)            (2)            (3)           (4)
      97            114             1             3
      94            120             2             1
      93            107             3             7
      90            113             4             4
      87            118             5             2
      86            101             6             8
      86            109             7             6
      85            110             8             5
      81            100             9             9
      76             99            10            10
+

We want to know whether a high score on athletic ability tends to be found along with a high I.Q. score more often than would be expected by chance. Therefore, our strategy is to see how often high scores on both variables are found by chance. We do this by disassociating the two variables and making two separate and independent universes, one composed of the athletic scores and another of the I.Q. scores. Then we draw pairs of observations from the two universes at random, and compare the experimental patterns that occur by chance to what actually is observed to occur in the world.

+

The first testing scheme we shall use is similar to our first approach to the pig rations — splitting the results into just “highs” and “lows.” We take ten cards, one of each denomination from “ace” to “10,” shuffle, and deal five cards to correspond to the first five athletic ranks. The face values then correspond to the I.Q. ranks. Under the benchmark hypothesis the athletic ranks will not be associated with the I.Q. ranks. Add the face values in the first five cards in each trial; the first hand includes 2, 4, 5, 6, and 9, so the sum is 26. Record, shuffle, and repeat perhaps ten times. Then compare the random results to the sum of the observed ranks of the five top athletes, which equals 17.

+

The following steps describe a slightly different procedure than that just described, because this one may be easier to understand:

+

Step 1. Convert the athletic and I.Q. scores to ranks. Then constitute a universe of spades, “ace” to “10,” to correspond to the athletic ranks, and a universe of hearts, “ace” to “10,” to correspond to the IQ ranks.

+

Step 2. Deal out the well-shuffled cards into pairs, each pair with an athletic score and an I.Q. score.

+

Step 3. Locate the cards with the top five athletic ranks, and add the I.Q. rank scores on their paired cards. Compare this sum to the observed sum of 17. If 17 or less, indicate “yes,” otherwise “no.” (Why do we use “17 or less” rather than “less than 17”? Because we are asking the probability of a score this low or lower .)

+

Step 4. Repeat steps 2 and 3 ten times.

+

Step 5. Calculate the proportion “yes.” This estimates the probability sought.

+

In Table 23-2 we see that the observed sum (17) is lower than the sum of the top 5 ranks in all but one (shown by an asterisk) of the ten random trials (trial 5), which suggests that there is a good chance (9 in 10) that the five best athletes will not have I.Q. scores that high by chance. But it might be well to deal some more to get a more reliable average. We add thirty hands, and thirty-nine of the total forty hands exceed the observed rank value, so the probability that the observed correlation of athletic and I.Q. scores would occur by chance is about .025. In other words, if there is no real association between the variables, the probability that the top 5 ranks would sum to a number this low or lower is only 1 in 40, and it therefore seems reasonable to believe that high athletic ability tends to accompany a high I.Q.

+

Table 23-2

+

Results of 40 Random Trials of The Problem “Ability”

+

(Note: Observed sum of IQ ranks: 17)

Trial   Sum of IQ Ranks   Yes or No
  1           26             No
  2           23             No
  3           22             No
  4           37             No
 *5           16             Yes
  6           22             No
  7           22             No
  8           28             No
  9           38             No
 10           22             No
 11           35             No
 12           36             No
 13           31             No
 14           29             No
 15           32             No
 16           25             No
 17           25             No
 18           29             No
 19           25             No
 20           22             No
 21           30             No
 22           31             No
 23           35             No
 24           25             No
 25           33             No
 26           30             No
 27           24             No
 28           29             No
 29           30             No
 30           31             No
 31           30             No
 32           21             No
 33           25             No
 34           19             No
 35           29             No
 36           23             No
 37           23             No
 38           34             No
 39           23             No
 40           26             No
+

The RESAMPLING STATS program “Ability1” creates an array containing the I.Q. rankings of the top 5 students in athletics. The SUM of these I.Q. rankings constitutes the observed result to be tested against randomly-drawn samples. We observe that the actual I.Q. rankings of the top five athletes sums to 17. The more frequently that the sum of 5 randomly-generated rankings (out of 10) is as low as this observed number, the higher is the probability that there is no relationship between athletic performance and I.Q. based on these data.

+

First we record the NUMBERS “1” through “10” into vector A. Then we SHUFFLE the numbers so the rankings are in a random order. Then TAKE the first 5 of these numbers and put them in another array, D, and SUM them, putting the result in E. We repeat this procedure 1000 times, recording each result in a scorekeeping vector: Z. Graphing Z, we get a HISTOGRAM that shows us how often our randomly assigned sums are equal to or below 17.

+ +
' Program file: "correlation_causation_00.rss"
+
+REPEAT 1000
+    ' Repeat the experiment 1000 times.
+    NUMBERS 1,10 a
+    ' Constitute the set of I.Q. ranks.
+    SHUFFLE a b
+    ' Shuffle them.
+    TAKE b 1,5 d
+    ' Take the first 5 ranks.
+    SUM d e
+    ' Sum those ranks.
+    SCORE e z
+    ' Keep track of the result of each trial.
+END
+' End the experiment, go back and repeat.
+HISTOGRAM z
+' Produce a histogram of trial results.
+

[Histogram of the 1,000 trial sums from “ABILITY1: Random Selection of 5 Out of 10 Ranks”; horizontal axis: sum of top 5 ranks.]

+

We see that in only about 2% of the trials did random selection of ranks produce a total of 17 or lower. RESAMPLING STATS will calculate this for us directly:

+ +
' Program file: "ability1.rss"
+
+COUNT z <= 17 k
+' Determine how many trials produced sums of ranks \<= 17 by chance.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the results.
+
+' Note: The file "ability1" on the Resampling Stats software disk contains
+' this set of commands.
+
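The code on this page has not yet been ported from RESAMPLING STATS, but a Python rendering of “Ability1” might look like this (the variable names are ours):

import numpy as np

rnd = np.random.default_rng()

n = 1000
z = np.zeros(n)
for i in range(n):
    ranks = rnd.permutation(np.arange(1, 11))   # shuffled I.Q. ranks 1-10
    z[i] = np.sum(ranks[:5])                    # sum of the first five ranks

k = np.sum(z <= 17)
print('Proportion of trials with sum <= 17:', k / n)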

Why do we sum the ranks of the first five athletes and compare them with the second five athletes, rather than comparing the top three, say, with the bottom seven? Indeed, we could have looked at the top three, two, four, or even six or seven. The first reason for splitting the group in half is that an even split uses the available information more fully, and therefore we obtain greater efficiency. (I cannot prove this formally here, but perhaps it makes intuitive sense to you.) A second reason is that getting into the habit of always looking at an even split reduces the chances that you will pick and choose in such a manner as to fool yourself. For example, if the I.Q. ranks of the top five athletes were 3, 2, 1, 10, and 9, we would be deceiving ourselves if, after looking the data over, we drew the line between athletes 3 and 4. (More generally, choosing an appropriate measure before examining the data will help you avoid fooling yourself in such matters.)

+

A simpler but less efficient approach to this same problem is to classify the top-half athletes by whether or not they were also in the top half of the I.Q. scores. Of the first five athletes actually observed, four were in the top five I.Q. scores. We can then shuffle five black and five red cards and see how often four or more (that is, four or five) blacks come up with the first five cards. The proportion of times that four or more blacks occurs in the trial is the probability that an association as strong as that observed might occur by chance even if there is no association. Table 23-3 shows a proportion of five trials out of twenty.

+

In the RESAMPLING STATS program “Ability2” we first note that the top 5 athletes had 4 of the top 5 I.Q. scores. So we constitute the set of 10 IQ rankings (vector A). We then SHUFFLE A and TAKE 5 I.Q. rankings (out of 10). We COUNT how many are in the top 5, and keep SCORE of the result. After REPEATing 1000 times, we find out how often we select 4 of the top 5.

+

Table 23-3

+

Results of 20 Random Trials of the Problem “ABILITY2”

+

Observed Score: 4

Trial   Score   Yes or No
  1       4        Yes
  2       2        No
  3       2        No
  4       2        No
  5       3        No
  6       2        No
  7       4        Yes
  8       3        No
  9       3        No
 10       4        Yes
 11       3        No
 12       1        No
 13       3        No
 14       3        No
 15       4        Yes
 16       3        No
 17       2        No
 18       2        No
 19       2        No
 20       4        Yes
+ +
' Program file: "ability2.rss"
+
+REPEAT 1000
+    ' Do 1000 experiments.
+    NUMBERS 1,10 a
+    ' Constitute the set of I.Q. ranks.
+    SHUFFLE a b
+    ' Shuffle them.
+    TAKE b 1,5 c
+    ' Take the first 5 ranks.
+    COUNT c between 1 5 d
+    ' Of those 5, count how many are among the top half of the ranks (1-5).
+    SCORE d z
+    ' Keep track of that result in z
+END
+' End one experiment, go back and repeat until all 1000 are complete.
+COUNT z >= 4 k
+' Determine how many trials produced 4 or more top ranks by chance.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the result.
+
+' Note: The file "ability2" on the Resampling Stats software disk contains
+' this set of commands.
+
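Again, one possible Python rendering of “Ability2” (ours, pending the port of this page):

import numpy as np

rnd = np.random.default_rng()

n = 1000
z = np.zeros(n)
for i in range(n):
    picked = rnd.permutation(np.arange(1, 11))[:5]   # 5 ranks chosen at random
    z[i] = np.sum(picked <= 5)                       # how many are top-half ranks

k = np.sum(z >= 4)
print('Proportion of trials with 4 or more top ranks:', k / n)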

So far we have proceeded on the theory that if there is any relationship between athletics and I.Q., then the better athletes have higher rather than lower I.Q. scores. The justification for this assumption is that past research suggests that it is probably true. But if we had not had the benefit of that past research, we would then have had to proceed somewhat differently; we would have had to consider the possibility that the top five athletes could have I.Q. scores either higher or lower than those of the other students. The results of the “two-tail” test would have yielded odds weaker than those we observed.

+

Example 23-2: Athletic Ability and I.Q. a Third Way (Program “Ability3”).

+

Example 23-1 investigated the relationship between I.Q. and athletic score by ranking the two sets of scores. But ranking of scores loses some efficiency because it uses only an “ordinal” (rank-ordered) rather than a “cardinal” (measured) scale; the numerical shadings and relative relationships are lost when we convert to ranks. Therefore let us consider a test of correlation that uses the original cardinal numerical scores.

+

First a little background: Figure 29.1 and Figure 29.2 show two hypothetical cases of very high association among the I.Q. and athletic scores used in previous examples. Figure 29.1 indicates that the higher the I.Q. score, the higher the athletic score. With a boy’s athletic score you can thus predict quite well his I.Q. score by means of a hand-drawn line — or vice versa. The same is true of Figure 29.2, but in the opposite direction. Notice that even though athletic score is on the x-axis (horizontal) and I.Q. score is on the y-axis (vertical), the athletic score does not cause the I.Q. score. (It is an unfortunate deficiency of such diagrams that some variable must arbitrarily be placed on the x-axis, whether you intend to suggest causation or not.)

+
+
+
+
+

+
Figure 29.1: Hypothetical Scores for I.Q. and Athletic Ability — 1
+
+
+
+
+
+
+
+
+

+
Figure 29.2: Hypothetical Scores for I.Q. and Athletic Ability — 2
+
+
+
+
+

In Figure 29.3, which plots the scores as given in Table 23-1, the prediction of athletic score given I.Q. score, or vice versa, is less clear-cut than in Figure 29.1. On the basis of Figure 29.3 alone, one can say only that there might be some association between the two variables.

+
+
+
+
+

+
Figure 29.3: Given Scores for I.Q. and Athletic Ability
+
+
+
+
+
+
+

29.4 Correlation: sum of products

+

Now let us take advantage of a handy property of numbers. The more closely two sets of numbers match each other in order, the higher the sums of their products. Consider the following arrays of the numbers 1, 2, and 3:

+

1 x 1 = 1
2 x 2 = 4     (columns in matching order)
3 x 3 = 9
SUM   = 14

1 x 2 = 2
2 x 3 = 6     (columns not in matching order)
3 x 1 = 3
SUM   = 11

+

I will not attempt a mathematical proof, but the reader is encouraged to try additional combinations to be sure that the highest sum is obtained when the order of the two columns is the same. Likewise, the lowest sum is obtained when the two columns are in perfectly opposite order:

+

1 x 3 = 3
2 x 2 = 4     (columns in opposite order)
3 x 1 = 3
SUM   = 10
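Rather than trying further combinations by hand, a few lines of Python can enumerate every ordering of (1, 2, 3) against a fixed (1, 2, 3) and confirm that the matched order gives the largest sum of products and the reversed order the smallest (this check is our addition):

from itertools import permutations

x = (1, 2, 3)
# Sum of products for every ordering of (1, 2, 3) paired against x.
sums = {p: sum(a * b for a, b in zip(x, p)) for p in permutations(x)}
print(max(sums, key=sums.get), max(sums.values()))   # (1, 2, 3) 14
print(min(sums, key=sums.get), min(sums.values()))   # (3, 2, 1) 10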

+

Consider the cases in Table 23-4, which are chosen to illustrate a perfect (linear) association between x (Column 1) and y1 (Column 2), and also between x (Column 1) and y2 (Column 4); the numbers shown in Columns 3 and 5 are those that would be consistent with perfect associations. Notice the sum of the multiples of the x and y values in the two cases. It is either higher (xy1) or lower (xy2) than for any other possible way of arranging the y’s. Any other arrangement of the y’s (y3, in Column 6, for example, chosen at random), when multiplied by the x’s in Column 1 (xy3), produces a sum that falls somewhere between the sums of xy1 and xy2, as is the case with any other set of y3’s which is not perfectly correlated with the x’s.

+

Table 23-5, below, shows that the sum of the products of the observed I.Q. scores multiplied by athletic scores (column 7) is between the sums that would occur if the I.Q. scores were ranked from best to worst (column 3) and worst to best (column 5). The extent of correlation (association) can thus be measured by whether the sum of the multiples of the observed x and y values is relatively much higher or much lower than are sums of randomly-chosen pairs of x and y.

+

Table 23-4

+

Comparison of Sums of Multiplications

        Strong Positive          Strong Negative           Random
         Relationship             Relationship             Pairings
  X      Y1      X*Y1             Y2      X*Y2              Y3      X*Y3
  2       2        4              10       20                4        8
  4       4       16               8       32                8       32
  6       6       36               6       36                6       36
  8       8       64               4       32                2       16
 10      10      100               2       20               10      100
SUMS:            220                      140                       192
+

Table 23-5

+

Sums of Products: IQ and Athletic Scores

  (1)        (2)           (3)         (4)           (5)        (6)       (7)
Athletic  Hypothetical   Col. 1 x   Hypothetical   Col. 1 x   Actual   Col. 1 x
 Score       I.Q.         Col. 2       I.Q.         Col. 4     I.Q.     Col. 6
   97        120          11640         99           9603      114      11058
   94        118          11092        100           9400      120      11280
   93        114          10602        101           9393      107       9951
   90        113          10170        107           9630      113      10170
   87        110           9570        109           9483      118      10266
   86        109           9374        110           9460      101       8686
   86        107           9202        113           9718      109       9374
   85        101           8585        114           9690      110       9350
   81        100           8100        118           9558      100       8100
   76         99           7524        120           9120       99       7524
SUMS:                     95859                      95055               95759
+

3 Cases:

+
    +
  • Perfect positive correlation (hypothetical); column 3

  • +
  • Perfect negative correlation (hypothetical); column 5

  • +
  • Observed; column 7

  • +
+

Now we attack the I.Q. and athletic-score problem using the property of numbers just discussed. First multiply the x and y values of the actual observations, and sum them to be 95,759 (Table 23-5). Then write the ten observed I.Q. scores on cards, and assign the cards in random order to the ten athletes, as shown in column 1 in Table 23-6.

+

Multiply by the x’s, and sum as in Table 23-7. If the I.Q. scores and athletic scores are positively associated , that is, if high I.Q.s and high athletic scores go together, then the sum of the multiplications for the observed sample will be higher than for most of the random trials. (If high I.Q.s go with low athletic scores, the sum of the multiplications for the observed sample will be lower than most of the random trials.)

+

Table 23-6

+

Random Drawing of I.Q. Scores and Pairing (Randomly) Against Athletic Scores (20 Trials)

+

Trial Number

+

Athletic 1 2 3 4 5 6 7 8 9 10

+

Score

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
97114109110118107114107120100114
94101113113101118100110109120107
931071181009912010111499110113
901131011181141011131001189999
87120100101100110107113114101118
86100110120107113110118101118101
8611010799109100120120113114120
85999910412099109101107109109
811181201141101149999100107109
76109114109113109118109110113110
Trial Number
Athletic Score11121314151617181920
971091181011091071009911399110
94101110114118101107114101109113
93120120100120114113100100120100
901101181091109910910710911099
8710010012099118114110110107101
8611899107100109118113118100118
86991019910110099101107114120
85107114110114120110120120118100
81114107113113110101109114101100
7611310911810711312011899118107
+

Table 23-7

+

Results of Sum Products for Above 20 Random Trials

Trial   Sum of Multiplications      Trial   Sum of Multiplications
  1            95,430                 11           95,406
  2            95,426                 12           95,622
  3            95,446                 13           95,250
  4            95,381                 14           95,599
  5            95,542                 15           95,323
  6            95,362                 16           95,308
  7            95,508                 17           95,220
  8            95,590                 18           95,443
  9            95,379                 19           95,421
 10            95,532                 20           95,528

More specifically, by the steps:

+

Step 1. Write the ten I.Q. scores on one set of cards, and the ten athletic scores on another set of cards.

+

Step 2. Pair the I.Q. and athletic-score cards at random. Multiply the scores in each pair, and add the results of the ten multiplications.

+

Step 3. Subtract the experimental sum in step 2 from the observed sum, 95,759.

+

Step 4. Repeat steps 2 and 3 twenty times.

+

Step 5. Compute the proportion of trials where the difference is negative, which estimates the probability that an association as strong as the observed would occur by chance.

+

The sums of the multiplications for 20 trials are shown in Table 23-7. No random-trial sum was as high as the observed sum, which suggests that the probability of an association this strong happening by chance is so low as to approach zero. (An empirically-observed probability is never actually zero.)

+

This problem can be solved particularly easily with RESAMPLING STATS. The arrays A and B in program “Ability3” list the athletic scores and the I.Q. scores respectively of 10 “actual” students ordered from highest to lowest athletic score. We MULTIPLY the corresponding elements of these arrays and proceed to compare the sum of these multiplications to the sums of experimental multiplications in which the elements are selected randomly.

+

Finally, we COUNT the trials in which the sum of the products of the randomly-paired athletic and I.Q. scores equals or exceeds the sum of the products in the observed data.

+ +
' Program file: "correlation_causation_03.rss"
+
+NUMBERS (97 94 93 90 87 86 86 85 81 76) a
+' Record athletic scores, highest to lowest.
+NUMBERS (114 120 107 113 118 101 109 110 100 99) b
+' Record corresponding IQ scores for those students.
+MULTIPLY a b c
+' Multiply the two sets of scores together.
+SUM c d
+' Sum the results — the "observed value."
+REPEAT 1000
+    ' Do 1000 experiments.
+    SHUFFLE a e
+    ' Shuffle the athletic scores so we can pair them against IQ scores.
+    MULTIPLY e b f
+    ' Multiply the shuffled athletic scores by the I.Q. scores. (Note that we
+    ' could shuffle the I.Q. scores too but it would not achieve any greater
+    ' randomization.)
+    SUM f j
+    ' Sum the randomized multiplications.
+    SUBTRACT d j k
+    ' Subtract the sum from the sum of the "observed" multiplication.
+    SCORE k z
+    ' Keep track of the result in z.
+END
+' End one trial, go back and repeat until 1000 trials are complete.
+HISTOGRAM z
+' Obtain a histogram of the trial results.
+

[Histogram of the 1,000 trial results from “Random Sums of Products: ATHLETES & IQ SCORES”; horizontal axis: observed sum less random sum.]

+

We see that obtaining a chance trial result as great as that observed was rare. RESAMPLING STATS will calculate this proportion for us:

+ +
' Program file: "ability3.rss"
+
+COUNT z <= 0 k
+' Determine in how many trials the random sum of products was less than
+' the observed sum of products.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Note: The file "ability3" on the Resampling Stats software disk contains
+' this set of commands.
+
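One possible Python rendering of “Ability3” (ours, pending the port of this page):

import numpy as np

rnd = np.random.default_rng()

athletic = np.array([97, 94, 93, 90, 87, 86, 86, 85, 81, 76])
iq = np.array([114, 120, 107, 113, 118, 101, 109, 110, 100, 99])
observed = np.sum(athletic * iq)        # 95,759

n = 1000
z = np.zeros(n)
for i in range(n):
    shuffled = rnd.permutation(athletic)     # break the observed pairing
    z[i] = observed - np.sum(shuffled * iq)

k = np.sum(z <= 0)    # random pairing gave a sum at least as high as observed
print('Proportion of trials as extreme as the observed sum:', k / n)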

Example 23-3: Correlation Between Adherence to Medication Regime and Change in Cholesterol

+

Efron and Tibshirani (1993, 72) show data on the extents to which 164 men a) took the drug prescribed to them (cholostyramine), and b) showed a decrease in total plasma cholesterol. Table 23-8 shows these values (note that a positive value in the “decrease in cholesterol” column denotes a decrease in cholesterol, while a negative value denotes an increase.)

+

Table 23-8

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
% Taken   Decrease        % Taken   Decrease        % Taken   Decrease        % Taken   Decrease
0 -5.2527-1.50 7159.5095 32.50
0 -7.252823.50 7114.7595 70.75
0 -6.252933.00 7263.0095 18.25
0 11.50314.25 720.0095 76.00
2 21.003218.75 7342.0095 75.75
2 -23.00328.50 7441.2595 78.75
2 5.75333.25 7536.2595 54.75
3 3.253327.75 7666.5095 77.00
3 8.753430.75 7761.7596 68.00
4 8.7534-1.50 7714.0096 73.00
4 -10.25341.00 7836.0096 28.75
7 -10.50347.75 7839.5096 26.75
8 19.7535-15.75 811.0096 56.00
8 -0.503633.50 8253.5096 47.50
8 29.253636.25 8446.5096 30.25
8 36.25375.50 8551.0096 21.00
9 10.753825.50 8539.0097 79.00
9 19.504120.25 87-0.2597 69.00
9 17.254333.25 871.0097 80.00
10 3.504556.75 8746.7597 86.00
10 11.25454.25 8711.5098 54.75
11 -13.004732.50 872.7598 26.75
12 24.005054.50 8848.7598 80.00
13 2.5050-4.25 8956.7598 42.25
15 3.005142.75 9029.2598 6.00
15 5.505162.75 9072.5098 104.75
16 21.255264.25 9141.7598 94.25
16 29.755330.25 9248.5098 41.25
17 7.505414.75 9261.2598 40.25
18 -16.505447.25 9229.5099 51.50
20 4.505618.00 9259.7599 82.75
20 39.005713.75 9371.0099 85.00
21 -5.755748.75 9337.7599 70.00
21 -21.005843.00 9341.00100 92.00
21 0.256027.75 939.75100 73.75
22 -10.256244.50 9353.75100 54.00
24 -0.506422.50 9462.50100 69.50
25 -19.0064-14.50 9439.00100 101.50
25 15.7564-20.75 943.25100 68.00
26 6.006746.25 9460.00100 44.75
27 10.506839.50 95113.25100 86.75
+

(Column headings, repeated for each of the four column-pairs above: “% Prescribed Dosage” taken, and “Decrease in Cholesterol.”)

+

The aim is to assess the effect of the compliance on the improvement. There are two related issues:

+
    +
  1. What form of regression should be fitted to these data, which we address later, and

  2. +
  3. Is there reason to believe that the relationship is meaningful? That is, we wish to ascertain if there is any meaningful correlation between the variables — because if there is no relationship between the variables, there is no basis for regressing one on the other. Sometimes people jump ahead in the latter question to first run the regression and then ask whether the regression slope coefficient(s) is (are) different than zero, but this usually is not sound practice. The sensible way to proceed is first to graph the data to see whether there is visible indication of a relationship.

  4. +
+

Efron and Tibshirani do this, and they find sufficient intuitive basis in the graph to continue the analysis. The next step is to investigate whether a measure of relationship is statistically significant; this we do as follows (program “inp10”):

+
    +
  1. Multiply the observed values for each of the 164 participants on the independent x variable (cholostyramine — percent of prescribed dosage actually taken) and the dependent y variable (cholesterol), and sum the results — it’s 439,140.

  2. +
  3. Randomly shuffle the dependent variable y values among the participants. The sampling is being done without replacement, though an equally good argument could be made for sampling with replacement; the results do not differ meaningfully, however, because the sample size is so large.

  4. +
  5. Then multiply these x and y hypothetical values for each of the 164 participants, sum the results and record.

  6. +
  7. Repeat steps 2 and 3 perhaps 1000 times.

  8. +
  9. Determine how often the shuffled sum-of-products exceeds the observed value (439,140).

  10. +
+

The following program in RESAMPLING STATS provides the solution:

+ +
' Program file: "correlation_causation_05.rss"
+
+READ FILE "inp10" x y
+' Data
+MULTIPLY x y xy
+' Step 1 above
+SUM xy xysum
+' Note: xysum = 439,140 (4.3914e+05)
+REPEAT 1000
+    ' Do 1000 simulations (step 4 above)
+    SHUFFLE x xrandom
+    ' Step 2 above
+    MULTIPLY xrandom y xy
+    ' Step 3 above
+    SUM xy newsum
+    ' Step 3 above
+    SCORE newsum scrboard
+    ' Step 3 above
+END
+' Step 4 above
+COUNT scrboard >= 439140 prob
+' Step 5 above
+PRINT xysum prob
+' Result: prob = 0. Interpretation: 1000 simulated random shufflings never
+' produced a sum-of-products as high as the observed value. Hence we rule
+' out random chance as an explanation for the observed correlation.
+
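One possible Python rendering of the same test (ours). The file name and format are assumptions: we suppose the two columns of Table 23-8 have been saved as plain text, one “percent taken, decrease” pair per line:

import numpy as np

rnd = np.random.default_rng()

# Assumed file: the 164 rows of Table 23-8, two numbers per line.
x, y = np.loadtxt('inp10.txt', unpack=True)
observed = np.sum(x * y)        # 439,140 for these data

n = 1000
count = 0
for i in range(n):
    shuffled = rnd.permutation(x)       # break any link between the pairs
    if np.sum(shuffled * y) >= observed:
        count += 1

print('Proportion of shuffled sums >= observed:', count / n)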

Example 23-4: Is There A Relationship Between Drinking Beer And Being In Favor of Selling Beer? (Testing for a Relationship Between Counted-Data Variables.) (Program “Beerpoll”)

+

The data for athletic ability and I.Q. were measured. Therefore, we could use them in their original “cardinal” form, or we could split them up into “high” and “low” groups. Often, however, the individual observations are recorded only as “yes” or “no,” which makes it more difficult to ascertain the existence of a relationship. Consider the poll responses in Table 23-9 to two public-opinion survey questions: “Do you drink beer?” and “Are you in favor of local option on the sale of beer?”.2

+ +

Table 23-9

+

Results of Observed Sample For Problem “Beerpoll”

Do you favor local option on the sale of beer?      Do you drink beer?
                                                     Yes     No    Total
Favor                                                 45     20      65
Don’t Favor                                            7      6      13
Total                                                 52     26      78
+

Here is the statistical question: Is a person’s opinion on “local option” related to whether or not he drinks beer? Our resampling solution begins by noting that there are seventy-eight respondents, sixty-five of whom approve local option and thirteen of whom do not. Therefore write “approve” on sixty-five index cards and “not approve” on thirteen index cards. Now take another set of seventy-eight index cards, preferably of a different color, and write “yes” on fifty-two of them and “no” on twenty-six of them, corresponding to the numbers of people who do and do not drink beer in the sample. Now lay them down in random pairs , one from each pile.

+

If there is a high association between the variables, then real life observations will bunch up in the two diagonal cells in the upper left and lower right in Table 23-9. (Ignore the “total” data for now.) Therefore, subtract one sum of two diagonal cells from the other sum for the observed data: (45 + 6) - (20 + 7) = 24. Then compare this difference to the comparable differences found in random trials. The proportion of times that the simulated-trial difference exceeds the observed difference is the probability that the observed difference of +24 might occur by chance, even if there is no relationship between the two variables. (Notice that, in this case, we are working on the assumption that beer drinking is positively associated with approval of local option and not the inverse. We are interested only in differences that are equal to or exceed +24 when the northeast-southwest diagonal is subtracted from the northwest-southeast diagonal.)

+

We can carry out a resampling test with this procedure:

+

Step 1. Write “approve” on 65 and “disapprove” on 13 red index cards, respectively; write “Drink” and “Don’t drink” on 52 and 26 white cards, respectively.

+

Step 2. Pair the two sets of cards randomly. Count the numbers of the four possible pairs: (1) “approve-drink,” (2) “disapprove-don’t drink,” (3) “disapprove-drink,” and (4) “approve-don’t drink.” Record the number of these combinations, as in Table 23-10, where columns 1-4 correspond to the four cells in Table 23-9.

+

Step 3. Add (column 1 plus column 4) and (column 2 plus column 3), and subtract the result in the second parenthesis from the result in the first parenthesis. If the difference is equal to or greater than 24, record “yes,” otherwise “no.”

+

Step 4. Repeat steps 2 and 3 perhaps a hundred times.

+

Step 5. Calculate the proportion “yes,” which estimates the probability that an association this great or greater would be observed by chance.

+

Table 23-10

+

Results of One Random Trial of the Problem “Beerpoll”

Trial    (1)        (2)        (3)           (4)           (5)
         Approve    Approve    Disapprove    Disapprove    (Col 1 + Col 4) -
         Yes        No         Yes           No            (Col 2 + Col 3)
  1       43         22          9             4             47 - 31 = 16

+

A series of ten trials in this case (see Table 23-10) indicates that the observed difference is very often exceeded, which suggests that there is no relationship between beer drinking and opinion.

+

The RESAMPLING STATS program “Beerpoll” does this repetitively. From the “actual” sample results we know that 52 respondents drink beer and 26 do not. We create the vector “drink” with 52 “1”s for those who drink beer, and 26 “0”s for those who do not. We also create the vector “sale” with 65 “1”s (approve) and 13 “0”s (disapprove). In the actual sample, 51 of the 78 respondents had “consistent” responses to the two questions — that is, people who both favor the sale of beer and drink beer, or who are against the sale of beer and do not drink beer. We want to randomly pair the responses to the two questions to compare against that observed result to test the relationship.

+

To accomplish this aim, we REPEAT the following procedure 1000 times. We SHUFFLE drink to drink$ so that the responses are randomly ordered. Now when we SUBTRACT the corresponding elements of the two arrays, a “0” will appear in each element of the new array c for which there was consistency in the response of the two questions. We therefore COUNT the times that c equals “0” and place this result in d, and the number of times c does not equal 0, and place this result in e. Find the difference (d minus e), and SCORE this to z.

+

SCORE Z stores for each trial the number of consistent responses minus inconsistent responses. To determine whether the results of the actual sample indicate a relationship between the responses to the two questions, we check how often the random trials had a difference (between consistent and inconsistent responses) as great as 24, the value in the observed sample.

+ +
' Program file: "beerpoll.rss"
+
+URN 52#1 26#0 drink
+' Constitute the set of 52 beer drinkers, represented by 52 "1"s, and the
+' set of 26 non-drinkers, represented by 26 "0"s.
+URN 65#1 13#0 sale
+' The same set of individuals classified by whether they favor ("1") or
+' don't favor ("0") the sale of beer.
+
+' Note: sale is now the vector {1 1 1 1 1 1 ... 0 0 0 0 0 ...} where 1 =
+' people in favor, 0 = people opposed.
+REPEAT 1000
+    ' Repeat the experiment 1000 times.
+    SHUFFLE drink drink$
+    ' Shuffle the beer drinkers/non-drinkers, call the shuffled set drink$.
+
+    ' Note: drink$ is now a vector like {1 1 1 0 1 0 0 1 0 1 1 0 0 ...}
+    ' where 1 = drinker, 0 = non-drinker.
+    SUBTRACT drink$ sale c
+    ' Subtract the favor/don't favor set from the drink/don't drink set.
+    ' Consistent responses are someone who drinks favoring the sale of beer (a
+    ' "1" and a "1") or someone who doesn't drink opposing the sale of beer.
+    ' When subtracted, consistent responses *(and only consistent responses)*
+    ' produce a "0."
+    COUNT c =0 d
+    ' Count the number of consistent responses (those equal to "0").
+    COUNT c <> 0 e
+    ' Count the "inconsistent" responses (those not equal to "0").
+    SUBTRACT d e f
+    ' Find the difference
+    SCORE f z
+    ' Keep track of the results of each trial.
+    ' End one trial, go back and repeat until all 1000 trials are complete.
+END
+HISTOGRAM z
+' Produce a histogram of the trial results.
+
+' Note: The file "beerpoll" on the Resampling Stats software disk contains
+' this set of commands.
+
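Here is a rough Python/NumPy port of the same experiment (our sketch, not part of the original program set). The counts come from Table 23-9, and consistency is checked by comparing the shuffled 0/1 vectors element by element.

import numpy as np

rnd = np.random.default_rng()

# 1 = drinks beer / favors sale, 0 = does not drink / does not favor
# (counts taken from Table 23-9).
drink = np.repeat([1, 0], [52, 26])
sale = np.repeat([1, 0], [65, 13])

# Observed difference: 51 consistent minus 27 inconsistent responses.
observed_diff = 51 - 27

n_trials = 1000
results = np.zeros(n_trials)
for i in range(n_trials):
    shuffled_drink = rnd.permutation(drink)          # random re-pairing
    consistent = np.count_nonzero(shuffled_drink == sale)
    inconsistent = len(sale) - consistent
    results[i] = consistent - inconsistent

# Proportion of random pairings with a difference of 24 or more.
prob = np.count_nonzero(results >= observed_diff) / n_trials
print(prob)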

[Histogram of the 1000 trial scores: “Are Drinkers More Likely to Favor Local Option & Vice Versa”; horizontal axis: # consistent responses thru chance draw.]

+

The actual results showed a difference of 24. In the histogram we see that a difference that large or larger happened just by chance pairing — without any relationship between the two variables — 23% of the time. Hence, we conclude that there is little evidence of a relationship between the two variables.

+

Though the test just described may generally be appropriate for data of this sort, it may well not be appropriate in some particular case. Let’s consider a set of data where even if the test showed that an association existed, we would not believe the test result to be meaningful.

+

Suppose the survey results had been as presented in Table 23-11. We see that non-beer drinkers have a higher rate of approval of allowing beer drinking, which does not accord with experience or reason. Hence, without additional explanation we would not believe that a meaningful relationship exists among these variables even if the test showed one to exist. (Still another reason to doubt that a relationship exists is that the absolute differences are too small — there is only a 6% difference in disapproval between drink and don’t drink groups — to mean anything to anyone. On both grounds, then, it makes sense simply to act as if there were no difference between the two groups and to run no test.)

+

Table 23-11

+

Beer Poll In Which Results Are Not In Accord With Expectation Or Reason

                      % Approve    % Disapprove    Total
Beer Drinkers            71%           29%          100%
Non-Beer Drinkers        77%           23%          100%
+

The lesson to be learned from this is that one should inspect the data carefully before applying a statistical test, and only test for “significance” if the apparent relationships accord with theory, general understanding, and common sense.

+

Example 23-4: Do Athletes Really Have “Slumps”? (Are Successive Events in a Series Independent, or is There a Relationship Between Them?)

+

The important concept of independent events was introduced earlier. Various scientific and statistical decisions depend upon whether or not a series of events is independent. But how does one know whether or not the events are independent? Let us consider a baseball example.

+

Baseball players and their coaches believe that on some days and during some weeks a player will bat better than on other days and during other weeks. And team managers and coaches act on the belief that there are periods in which players do poorly — slumps — by temporarily replacing the player with another after a period of poor performance. The underlying belief is that a series of failures indicates a temporary (or permanent) change in the player’s capacity to play well, and it therefore makes sense to replace him until the evil spirit passes on, either of its own accord or by some change in the player’s style.

+

But even if his hits come randomly, a player will have runs of good luck and runs of bad luck just by chance — just as does a card player. The problem, then, is to determine whether (a) the runs of good and bad batting are merely runs of chance, and the probability of success for each event remains the same throughout the series of events — which would imply that the batter’s ability is the same at all times, and coaches should not take recent performance heavily into account when deciding which players should play; or (b) whether a batter really does have a tendency to do better at some times than at others, which would imply that there is some relationship between the occurrence of success in one trial event and the probability of success in the next trial event, and therefore that it is reasonable to replace players from time to time.

+

Let’s analyze the batting of a player we shall call “Slug.” Here are the results of Slug’s first 100 times at bat during the 1987 season (“H” = hit, “X” = out):

+

X X X X X X H X X H X H H X X X X X X X X H X X X X X H X X X X H H X X X X X H X X H X H X X X H H X X X X X H X H X X X X H H X H H X X X X X X X X X X H X X X H X X H X X H X H X X H X X X H X X X.

+

Now, do Slug’s hits tend to come in bunches? That would be the case if he really did have a tendency to do better at some times than at others. Therefore, let us compare Slug’s results with those of a deck of cards or a set of random numbers that we know has no tendency to do better at some times than at others.

+

During this period of 100 times at bat, Slug has averaged one hit in every four times at bat — a .250 batting average. This average is the same as the chance of one card suit’s coming up. We designate hearts as “hits” and prepare a deck of 100 cards, twenty-five “H”s (hearts, or “hit”) and seventy-five “X”s (other suit, or “out”). Here is the sequence in which the 100 randomly-shuffled cards fell:

+

X X H X X X X H H X X X H H H X X X X X H X X X H X X H X X X X H X H H X X X X X X X X X H X X X X X X H H X X X X X H H H X X X X X X H X H X H X X H X H X X X X X X X X X H X X X X X X X H H H X X.

+

Now we can compare whether or not Slug’s hits are bunched up more than they would be by random chance; we can do so by counting the clusters (also called “runs”) of consecutive hits and outs for Slug and for the cards. Slug had forty-three clusters, which is more than the thirty-seven clusters in the cards; it therefore does not seem that there is a tendency for Slug’s hits to cluster together. (A larger number of clusters indicates a lower tendency to cluster.)

+

Of course, the single trial of 100 cards shown above might have an unusually high or low number of clusters. To be safer, lay out, say, ten trials of 100 cards each, and compare Slug’s number of clusters with the various trials. The proportion of trials with more clusters than Slug’s indicates whether or not Slug’s hits have a tendency to bunch up. (But caution: This proportion cannot be interpreted directly as a probability.)

+

Now the steps:

+

Step 1. Constitute a bucket with 3 slips of paper that say “out” and one that says “hit.” Or “01-25” = hits (H), “26-00” = outs (X), Slug’s long-run average.

+

Step 2. Sample 100 slips of paper, with replacement, record “hit” or “out” each time, or write a series of “H’s” or “X’s” corresponding to 100 numbers, each selected randomly between 1 and 100.

+

Step 3. Count the number of “clusters,” that is, the number of “runs” of the same event, “H”s or “X”s.

+

Step 4. Compare the outcome in step 3 with Slug’s outcome, 43 clusters. If 43 or fewer, write “yes,” otherwise “no.”

+

Step 5. Repeat steps 2-4 a hundred times.

+

Step 6. Compute the proportion “yes.” This estimates the probability that Slug’s record is not characterized by more “slumps” than would be caused by chance. A very low proportion of “yeses” indicates longer (and hence fewer) “streaks” and “slumps” than would result by chance.

+

In RESAMPLING STATS, we can do this experiment 1000 times.

+ +
' Program file: "sluggo.rss"
+
+REPEAT 1000
+    URN 3#0 1#1 a
+    SAMPLE 100 a b
+    ' Sample 100 "at-bats" from a
+    RUNS b >=1 c
+    ' How many runs (of any length >= 1) are there in the 100 at-bats?
+    SCORE c z
+END
+HISTOGRAM z
+' Note: The file "sluggo" on the Resampling Stats software disk contains
+' this set of commands.
+
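A rough Python/NumPy version of the same experiment follows (our sketch, not the book's program). rnd.choice draws 100 at-bats with replacement from a bucket of three “out” slips and one “hit” slip, and runs are counted by looking for changes between successive at-bats.

import numpy as np

rnd = np.random.default_rng()

n_trials = 1000
n_runs = np.zeros(n_trials)
for i in range(n_trials):
    # 100 at-bats, each a hit with probability 1/4 (1 = hit, 0 = out).
    at_bats = rnd.choice([0, 0, 0, 1], size=100)
    # A new run starts at the first at-bat and wherever the outcome changes.
    n_runs[i] = 1 + np.count_nonzero(np.diff(at_bats) != 0)

# Proportion of simulated seasons with 43 or fewer runs (step 4's "yes").
print(np.count_nonzero(n_runs <= 43) / n_trials)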

Examining the histogram, we see that 43 runs is not at all an unusual occurrence:

+

[Histogram of the trial results: “Runs” in 100 At-Bats; horizontal axis: # “runs” of same outcome.]

+

The manager wants to look at this matter in a somewhat different fashion, however. He insists that the existence of slumps is proven by the fact that the player sometimes does not get a hit for an abnormally long period of time. One way of testing whether or not the coach is right is by comparing an average player’s longest slump in a 100-at-bat season with the longest run of outs in the first card trial. Assume that Slug is a player picked at random . Then compare Slug’s longest slump — say, 10 outs in a row — with the longest cluster of a single simulated 100-at-bat trial with the cards, 9 outs. This result suggests that Slug’s apparent slump might well have resulted by chance.

+

The estimate can be made more accurate by taking the average longest slump (cluster of outs) in ten simulated 400-at-bat trials. But notice that we do not compare Slug’s slump against the longest slump found in ten such simulated trials. We want to know the longest cluster of outs that would be found under average conditions, and the hand with the longest slump is not average or typical. Determining whether to compare Slug’s slump with the average longest slump or with the longest of the ten longest slumps is a decision of crucial importance. There are no mathematical or logical rules to help you. What is required is hard, clear thinking. Experience can help you think clearly, of course, but these decisions are not easy or obvious even to the most experienced statisticians.

+

The coach may then refer to the protracted slump of one of the twenty-five players on his team to prove that slumps really occur. But, of twenty-five random 100-at-bat trials, one will contain a slump longer than any of the other twenty-four, and that slump will be considerably longer than average. A fair comparison, then, would be between the longest slump of his longest-slumping player, and the longest run of outs found among twenty-five random trials. In fact, the longest run among twenty-five hands of 100 cards was fifteen outs in a row. And, if we had set some of the hands for lower (and higher) batting averages than .250, the longest slump in the cards would have been even longer.

+

Research by Roberts and his students at the University of Chicago shows that in fact slumps do not exist, as I conjectured in the first publication of this material in 1969. (Of course, a batter feels as if he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, sandlot players and professionals alike feel confident — just as gamblers often feel that they’re on a “streak.” But there seems to be no connection between a player’s performance and whether he feels hot or cold, astonishing as that may be.)

+

Averages over longer periods may vary systematically, as Ty Cobb’s annual batting average varied non-randomly from season to season, Roberts found. But short-run analyses of day-to-day and week-to-week individual and team performances in most sports have shown results similar to the outcomes that a lottery-type random-number machine would produce.

+

Remember, too, the study by Gilovich, Vallone, and Tversky of basketball mentioned in Chapter 14. To repeat, their analyses “provided no evidence for a positive correlation between the outcomes of successive shots.” That is, knowing whether a shooter has or has not scored on the previous shot — or in any previous sequence of shots — is useless for predicting whether he will score again.

+

The species homo sapiens apparently has a powerful propensity to believe that one can find a pattern even when there is no pattern to be found. Two decades ago I cooked up several series of random numbers that looked like weekly prices of publicly-traded stocks. Players in the experiment were told to buy and sell stocks as they chose. Then I repeatedly gave them “another week’s prices,” and allowed them to buy and sell again. The players did all kinds of fancy calculating, using a wild variety of assumptions — although there was no possible way that the figuring could help them.

+

When I stopped the game before completing the 10 buy-and-sell sessions they expected, subjects would ask that the game go on. Then I would tell them that there was no basis to believe that there were patterns in the data, because the “prices” were just randomly-generated numbers. Winning or losing therefore did not depend upon the subjects’ skill. Nevertheless, they demanded that the game not stop until the 10 “weeks” had been played, so they could find out whether they “won” or “lost.”

+

This study of batting illustrates how one can test for independence among various trials. The trials are independent if each observation is randomly chosen with replacement from the universe, in which case there is no reason to believe that one observation will be related to the observations directly before and after; as it is said, “the coin has no memory.”

+

The year-to-year level of Lake Michigan is an example in which observations are not independent. If Lake Michigan is very high in one year, it is likely to be higher than average the following year because some of the high level carries over from one year into the next.3 We could test this hypothesis by writing down whether the level in each year from, say, 1860 to 1975 was higher or lower than the median level for those years. We would then count the number of runs of “higher” and “lower” and compare the number of runs of “black” and “red” with a deck of that many cards; we would find fewer runs in the lake level than in an average hand of 116 (1976-1860) cards, though this test is hardly necessary. (But are the changes in Lake Michigan’s level independent from year to year? If the level went up last year, is there a better than 50-50 chance that the level will also go up this year? The answer to this question is not so obvious. One could compare the numbers of runs of ups and downs against an average hand of cards, just as with the hits and outs in baseball.)

+

Exercise for students: How could one check whether the successive numbers in a random-number table are independent?

+
+
+

29.5 Exercises

+

Solutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.

+

Exercise 23-1

+

Table 23-12 shows voter participation rates in the various states in the 1844 presidential election. Should we conclude that there was a negative relationship between the participation rate (a) and the vote spread (b) between the parties in the election? (Adapted from Noreen 1989, 20, Table 2-4.)

+

Table 23-12

+

Voter Participation In The 1844 Presidential Election

State                 Participation (a)    Spread (b)
Maine                       67.5               13
New Hampshire               65.6               19
Vermont                     65.7               18
Massachusetts               59.3               12
Rhode Island                39.8               20
Connecticut                 76.1                5
New York                    73.6                1
New Jersey                  81.6                1
Pennsylvania                75.5                2
Delaware                    85.0                3
Maryland                    80.3                5
Virginia                    54.5                6
North Carolina              79.1                5
Georgia                     94.0                4
Kentucky                    80.3                8
Tennessee                   89.6                1
Louisiana                   44.7                3
Alabama                     82.7                8
Mississippi                 89.7               13
Ohio                        83.6                2
Indiana                     84.9                2
Illinois                    76.3               12
Missouri                    74.7               17
Arkansas                    68.8               26
Michigan                    79.3                6
National Average            74.9                9
+

The observed correlation coefficient between voter participation and spread is -.37398. Is this more negative than what might occur by chance, if no correlation exists?

+

Exercise 23-2

+

We would like to know whether, among major-league baseball players, home runs (per 500 at-bats) and strikeouts (per 500 at-bats) are correlated. We first use the procedure used above for I.Q. and athletic ability — multiplying the elements within each pair. (We will later use a more “sophisticated” measure, the correlation coefficient.)

+

The data for 18 randomly-selected players in the 1989 season are as follows, as they would appear in the first lines of the program.

+ +
' Program file: "correlation_causation_08.rss"
+
+NUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32) homeruns
+NUMBERS (135 153 120 161 138 175 126 200 205 147 165 124 169 156 36 98 82 131) strikeout
+' Exercise: Complete this program.
+

Exercise 23-3

+

In the previous example relating strikeouts and home runs, we used the procedure of multiplying the elements within each pair. Now we use a more “sophisticated” measure, the correlation coefficient, which is simply a standardized form of the multiplicands, but sufficiently well known that we calculate it with a pre-set command.

+

Exercise: Write a program that uses the correlation coefficient to test the significance of the association between home runs and strikeouts.

+

Exercise 23-4

+

All other things being equal, an increase in a country’s money supply is inflationary and should have a negative impact on the exchange rate for the country’s currency. The data in the following table were computed using data from tables in the 1983/1984 Statistical Yearbook of the United Nations:

+

Table 23-13

+

Money Supply and Exchange Rate Changes

+

Country          % Change      % Change         Country          % Change      % Change
                 Exch. Rate    Money Supply                      Exch. Rate    Money Supply
Australia          0.089         0.035          Belgium            0.134         0.003
Botswana           0.351         0.085          Burma              0.064         0.155
Burundi            0.064         0.064          Canada             0.062         0.209
Chile              0.465         0.126          China              0.411         0.555
Costa Rica         0.100         0.100          Cyprus             0.158         0.044
Denmark            0.140         0.351          Ecuador            0.242         0.356
Fiji               0.093         0.000          Finland            0.124         0.164
France             0.149         0.090          Germany            0.156         0.061
Greece             0.302         0.202          Hungary            0.133         0.049
India              0.187         0.184          Indonesia          0.080         0.132
Italy              0.167         0.124          Jamaica            0.504         0.237
Japan              0.081         0.069          Jordan             0.092         0.010
Kenya              0.144         0.141          Korea              0.040         0.006
Kuwait             0.038        -0.180          Lebanon            0.619         0.065
Madagascar         0.337         0.244          Malawi             0.205         0.203
Malaysia           0.037        -0.006          Malta              0.003         0.003
Mauritania         0.180         0.192          Mauritius          0.226         0.136
Mexico             0.338         0.599          Morocco            0.076         0.076
Netherlands        0.158         0.078          New Zealand        0.370         0.098
Nigeria            0.079         0.082          Norway             0.177         0.242
Papua              0.075         0.209          Philippines        0.411         0.035
Portugal           0.288         0.166          Romania           -0.029         0.039
Rwanda             0.059         0.083          Samoa              0.348         0.118
Saudi Arabia       0.023         0.023          Seychelles         0.063         0.031
Singapore          0.024         0.030          Solomon Is         0.101         0.526
Somalia            0.481         0.238          South Africa       0.624         0.412
Spain              0.107         0.086          Sri Lanka          0.051         0.141
Switzerland        0.186         0.186          Tunisia            0.193         0.068
Turkey             0.573         0.181          UK                 0.255         0.154
USA                0.000         0.156          Vanuatu            0.008         0.331
Yemen              0.253         0.247          Yugoslavia         0.685         0.432
Zaire              0.343         0.244          Zambia             0.457         0.094
Zimbabwe           0.359         0.164
+

Percentage changes in exchange rates and money supply between 1983 and 1984 for various countries.

+

Are changes in the exchange rates and in money supplies related to each other? That is, are they correlated?

+ +

Exercise: Should the algorithm of non-computer resampling steps be similar to the algorithm for I.Q. and athletic ability shown in the text? One can also work with the correlation coefficient rather than the sum-of-products method, and expect to get the same result.

+
  1. Write a series of non-computer resampling steps to solve this problem.

  2. Write a computer program to implement those steps.
[The diff next adds the diagram image files under python-book/diagrams/ (basketball_shots.svg, batch_posterior.svg, car-tree.png, commanders_tree.svg, covid-tree.png, drunks_walk.svg, given_iq_athletic.svg, hypot_iq_athletic_1.svg, hypot_iq_athletic_2.svg, liquor_price_plots.svg, mercury_price_indexes.svg, mercury_reserves.svg, nile_height.svg, np_round_function_named.svg, pop_prop_disp.svg, rnd_choice_pl.svg, round_function_named.svg, round_function_ndigits_pl.svg, round_function_pl.svg, ships_gold_silver.svg, success_case1.svg through success_case5.svg, white_balls_universe.svg), then begins the new file python-book/exercise_solutions.html (“Resampling statistics - 32 Exercise Solutions”).]
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

32  Exercise Solutions

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

32.1 Solution 18-2

+ +
URN 36#1 36#0 pit
+URN 77#1 52#0 chi
+REPEAT 1000
+    SAMPLE 72 pit pit$
+    SAMPLE 129 chi chi$
+    MEAN pit$ p
+    MEAN chi$ c
+    SUBTRACT p c d
+    SCORE d scrboard
+END
+HISTOGRAM scrboard
+PERCENTILE scrboard (2.5 97.5) interval
+PRINT interval
+
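A rough Python/NumPy translation of the program above (our sketch, not part of the original program set) might read:

import numpy as np

rnd = np.random.default_rng()

pit = np.repeat([1, 0], [36, 36])
chi = np.repeat([1, 0], [77, 52])

n_trials = 1000
diffs = np.zeros(n_trials)
for i in range(n_trials):
    pit_sample = rnd.choice(pit, size=72)     # resample with replacement
    chi_sample = rnd.choice(chi, size=129)
    diffs[i] = np.mean(pit_sample) - np.mean(chi_sample)

print(np.percentile(diffs, [2.5, 97.5]))      # estimated 95 percent interval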

+

Results:

+

INTERVAL = -0.25921 0.039083 (estimated 95 percent confidence interval).

+
+
+

32.2 Solution 21-1

+ +
REPEAT 1000
+    GENERATE 200  1,100 a
+    COUNT a <= 7 b
+    DIVIDE b 200 c
+    SCORE c scrboard
+END
+HISTOGRAM scrboard
+PERCENTILE scrboard (2.5 97.5) interval
+PRINT interval
+
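A rough Python/NumPy equivalent (our sketch) might read:

import numpy as np

rnd = np.random.default_rng()

n_trials = 1000
proportions = np.zeros(n_trials)
for i in range(n_trials):
    a = rnd.integers(1, 101, size=200)        # 200 numbers from 1 through 100
    proportions[i] = np.count_nonzero(a <= 7) / 200

print(np.percentile(proportions, [2.5, 97.5]))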

+

Result:

+

INTERVAL = 0.035 0.105 [estimated 95 percent confidence interval]

+
+
+

32.3 Solution 21-2

+

We use the “bootstrap” technique of drawing many bootstrap re-samples with replacement from the original sample, and observing how the re-sample means are distributed.

+ +
NUMBERS (30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31) a
+
+REPEAT 1000
+    ' Do 1000 trials or simulations
+    SAMPLE 20 a b
+    ' Draw 20 lifetimes from a, randomly and with replacement
+    MEAN b c
+    ' Find the average lifetime of the 20
+    SCORE c scrboard
+    ' Keep score
+END
+
+HISTOGRAM scrboard
+' Graph the experiment results
+
+PERCENTILE scrboard (2.5 97.5) interval
+' Identify the 2.5th and 97.5th percentiles. These percentiles will
+' enclose 95 percent of the resample means.
+
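In Python/NumPy, a hedged translation of the same bootstrap might read:

import numpy as np

rnd = np.random.default_rng()

lifetimes = np.array([30, 32, 31, 28, 31, 29, 29, 24, 30, 31,
                      28, 28, 32, 31, 24, 23, 31, 27, 27, 31])

n_trials = 1000
means = np.zeros(n_trials)
for i in range(n_trials):
    resample = rnd.choice(lifetimes, size=20)   # 20 draws, with replacement
    means[i] = np.mean(resample)

print(np.percentile(means, [2.5, 97.5]))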

+

Result:

+

INTERVAL = 27.7 30.05 [estimated 95 percent confidence interval]

+
+
+

32.4 Solution 21-3

+ +
NUMBERS (.02 .026 .023 .017 .022 .019 .018 .018 .017 .022) a
+REPEAT 1000
+    SAMPLE 10 a b
+    MEAN b c
+    SCORE c scrboard
+END
+HISTOGRAM scrboard
+PERCENTILE scrboard (2.5 97.5) interval
+PRINT interval
+

+

Result:

+

INTERVAL = 0.0187 0.0219 [estimated 95 percent confidence interval]

+
+
+

32.5 Solution 23-1

+
  1. Create two groups of paper cards: 25 with participation rates, and 25 with the spread values. Arrange the cards in pairs in accordance with the table, and compute the correlation coefficient between the participation and spread variables.

  2. Shuffle one of the sets, say that with participation, and compute the correlation between the shuffled participation and spread.

  3. Repeat step 2 many, say 1000, times. Compute the proportion of the trials in which the correlation was at least as negative as that for the original data.
+ +
DATA (67.5  65.6  65.7  59.3 39.8  76.1  73.6  81.6  75.5  85.0  80.3
+54.5  79.1  94.0  80.3  89.6  44.7  82.7 89.7  83.6 84.9  76.3  74.7
+68.8  79.3) partic1
+
+DATA (13 19 18 12 20 5 1 1 2 3 5 6 5 4 8 1 3 18 13 2 2 12 17 26 6)
+spread1
+
+CORR partic1 spread1 corr
+
+' compute correlation - it’s -.37
+REPEAT 1000
+    SHUFFLE partic1 partic2
+    ' shuffle the participation rates
+    CORR partic2 spread1 corrtria
+    ' compute re-sampled correlation
+    SCORE corrtria z
+    ' keep the value in the scoreboard
+END
+HISTOGRAM z
+COUNT z <= -.37 n
+' count the trials when result  <= -.37
+DIVIDE n 1000 prob
+' compute the proportion of such trials
+PRINT prob
+
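A rough Python/NumPy translation (our sketch, computing the correlation coefficient with np.corrcoef) might read:

import numpy as np

rnd = np.random.default_rng()

partic = np.array([67.5, 65.6, 65.7, 59.3, 39.8, 76.1, 73.6, 81.6, 75.5,
                   85.0, 80.3, 54.5, 79.1, 94.0, 80.3, 89.6, 44.7, 82.7,
                   89.7, 83.6, 84.9, 76.3, 74.7, 68.8, 79.3])
spread = np.array([13, 19, 18, 12, 20, 5, 1, 1, 2, 3, 5, 6, 5, 4, 8,
                   1, 3, 18, 13, 2, 2, 12, 17, 26, 6])

observed = np.corrcoef(partic, spread)[0, 1]    # about -0.37

n_trials = 1000
results = np.zeros(n_trials)
for i in range(n_trials):
    shuffled = rnd.permutation(partic)          # shuffle participation rates
    results[i] = np.corrcoef(shuffled, spread)[0, 1]

# Proportion of trials at least as negative as the observed correlation.
print(np.count_nonzero(results <= observed) / n_trials)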

Conclusion: The results of 5 Monte Carlo experiments each of a thousand such simulations are as follows:

+

prob = 0.028, 0.045, 0.036, 0.04, 0.025.

+

From this we may conclude that the voter participation rates probably are negatively related to the vote spread in the election. The actual value of the correlation (-.37398) cannot be explained by chance alone. In our Monte Carlo simulation of the null-hypothesis a correlation that negative is found only 3 percent — 4 percent of the time.

+

Distribution of the test statistic’s value in 1000 independent trials corresponding to the null-hypothesis:

+

+
+
+

32.6 Solution 23-2

+ +
NUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)
+homeruns
+NUMBERS (135 153 120 161 138 175 126 200 205 147 165 124
+169 156 36 98 82 131) strikeout
+MULTIPLY homeruns strikeout r
+SUM r s
+REPEAT 1000
+    SHUFFLE strikeout  strikout2
+    MULTIPLY strikout2 homeruns c
+    SUM c cc
+    SUBTRACT s cc d
+    SCORE d scrboard
+END
+HISTOGRAM scrboard
+COUNT scrboard <=0 k
+' Count the trials in which the shuffled sum of products was at least as
+' large as the observed sum (that is, trials where d = s - cc was <= 0).
+DIVIDE k 1000 kk
+PRINT kk
+
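A rough Python/NumPy translation of this shuffle test (our sketch) might read:

import numpy as np

rnd = np.random.default_rng()

homeruns = np.array([14, 20, 0, 38, 9, 38, 22, 31, 33, 11,
                     40, 5, 15, 32, 3, 29, 5, 32])
strikeouts = np.array([135, 153, 120, 161, 138, 175, 126, 200, 205, 147,
                       165, 124, 169, 156, 36, 98, 82, 131])

observed = np.sum(homeruns * strikeouts)        # observed sum of products

n_trials = 1000
results = np.zeros(n_trials)
for i in range(n_trials):
    shuffled = rnd.permutation(strikeouts)
    results[i] = np.sum(homeruns * shuffled)

# Proportion of shuffled sums of products at least as large as observed.
print(np.count_nonzero(results >= observed) / n_trials)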

+

Result: kk = 0

+

Interpretation: In 1000 simulations, random shuffling never produced a value as high as observed. Therefore, we conclude that random chance could not be responsible for the observed degree of correlation.

+
+
+

32.7 Solution 23-3

+ +
NUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)
+homeruns
+NUMBERS (135 153 120 161 138 175 126 200 205 147 165 124
+169 156 36 98 82 131) strikeou
+CORR homeruns strikeou r
+REPEAT 1000
+    SHUFFLE strikeou  strikou2
+    CORR strikou2 homeruns r$
+    SCORE r$ scrboard
+END
+HISTOGRAM scrboard
+COUNT scrboard >=0.62 k
+DIVIDE k 1000 kk
+PRINT kk r
+
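A rough Python/NumPy version (our sketch, using np.corrcoef for the correlation coefficient) might read:

import numpy as np

rnd = np.random.default_rng()

homeruns = np.array([14, 20, 0, 38, 9, 38, 22, 31, 33, 11,
                     40, 5, 15, 32, 3, 29, 5, 32])
strikeouts = np.array([135, 153, 120, 161, 138, 175, 126, 200, 205, 147,
                       165, 124, 169, 156, 36, 98, 82, 131])

observed_r = np.corrcoef(homeruns, strikeouts)[0, 1]   # about 0.62

n_trials = 1000
results = np.zeros(n_trials)
for i in range(n_trials):
    shuffled = rnd.permutation(strikeouts)
    results[i] = np.corrcoef(homeruns, shuffled)[0, 1]

print(np.count_nonzero(results >= observed_r) / n_trials)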

+

Result: kk = .001

+

Interpretation: A correlation coefficient as high as the observed value (.62) occurred only 1 out of 1000 times by chance. Hence, we rule out chance as an explanation for such a high value of the correlation coefficient.

+
+
+

32.8 Solution 23-4

+ +
+READ FILE "noreen2.dat" exrate msuppl
+' read data from file
+CORR exrate msuppl stat
+' compute correlation stat (it’s .419)
+REPEAT 1000
+    SHUFFLE msuppl msuppl$
+    ' shuffle money supply values
+    CORR exrate msuppl$  stat$
+    ' compute correlation
+    SCORE stat$ scrboard
+    ' keep the value in a scoreboard
+END
+PRINT stat
+HISTOGRAM scrboard
+COUNT scrboard >=0.419 k
+DIVIDE k 1000 prob
+PRINT prob
+
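A rough Python/NumPy sketch of the same test follows. It is ours, and it assumes the two columns of Table 23-13 have already been loaded into NumPy arrays; we do not reproduce the file-reading step here.

import numpy as np

rnd = np.random.default_rng()


def shuffle_corr_test(exrate, msuppl, n_trials=1000):
    # exrate and msuppl hold the two columns of Table 23-13 as NumPy arrays.
    observed = np.corrcoef(exrate, msuppl)[0, 1]        # about 0.419
    results = np.zeros(n_trials)
    for i in range(n_trials):
        shuffled = rnd.permutation(msuppl)              # shuffle money supply
        results[i] = np.corrcoef(exrate, shuffled)[0, 1]
    prob = np.count_nonzero(results >= observed) / n_trials
    return observed, prob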

Distribution of the correlation after permutation of the data:

+

+

Result: prob = .001

+

Interpretation: The observed correlation (.419) between the exchange rate and the money supply is seldom exceeded by random experiments with these data. Thus, the observed result 0.419 cannot be explained by chance alone and we conclude that it is statistically significant.

+ + +
+ +
+ + +
[The diff next begins the new file python-book/framing_questions.html (“Resampling statistics - 20 Framing Statistical Questions”).]
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

20  Framing Statistical Questions

+
+ + + +
+ + + + +
+ + +
+ +
+

20.1 Introduction

+

Chapter 3 - Chapter 15 discussed problems in probability theory. That is, we have been estimating the probability of a composite event resulting from a system in which we know the probabilities of the simple events — the “parameters” of the situation.

+

Then Chapter 17 - Chapter 19 discussed the underlying philosophy of statistical inference.

+

Now we turn to inferential-statistical problems. Up until now, we have been estimating the complex probabilities of known universes — the topic of probability . Now as we turn to problems in statistics , we seek to learn the characteristics of an unknown system — the basic probabilities of its simple events and parameters. (Here we note again, however, that in the process of dealing with them, all statistical-inferential problems eventually are converted into problems of pure probability). To assess the characteristics of the system in such problems, we employ the characteristics of the sample(s) that have been drawn from it.

+

For further discussion on the distinction between inferential statistics and probability theory, see Chapter 2 - Chapter 3.

+

This chapter begins the topic of hypothesis testing . The issue is: whether to adjudge that a particular sample (or samples) come(s) from a particular universe. A two-outcome yes-no universe is discussed first. Then we move on to “measured-data” universes, which are more complex than yes-no outcomes because the variables can take on many values, and because we ask somewhat more complex questions about the relationships of the samples to the universes. This topic is continued in subsequent chapters.

+

In a typical hypothesis-testing problem presented in this chapter, one sample of hospital patients is treated with a new drug and a second sample is not treated but rather given a “placebo.” After obtaining results from the samples, the “null” or “test” or “benchmark” hypothesis would be that the resulting drug and placebo samples are drawn from the same universe. This device of the null hypothesis is the equivalent of stating that the drug had no effect on the patients. It is a special intellectual strategy developed to handle such statistical questions.

+

We start with the scientific question: Does the medicine have an effect? We then translate it into a testable statistical question: How likely is it that the sample means come from the same universe? This process of question-translation is the crucial step in hypothesis-testing and inferential statistics. The chapter then explains how to solve these problems using resampling methods after you have formulated the proper statistical question.

+

Though the examples in the chapter mostly focus on tests of hypotheses, the procedures also apply to confidence intervals, which will be discussed later.

+
+
+

20.2 Translating scientific questions into probabilistic and statistical questions

+

The first step in using probability and statistics is to translate the scientific question into a statistical question. Once you know exactly which prob-stats question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy (though subtle). The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms.

+

Though this translation is difficult, it involves no mathematics. Rather, this step requires only hard thought. You cannot beg off by saying, “I have no brain for math!” The need is for a brain that will do clear thinking, rather than a brain especially talented in mathematics. A person who uses conventional methods can avoid this hard thinking by simply grabbing the formula for some test without understanding why s/he chooses that test. But resampling pushes you to do this thinking explicitly.

+

This crucial process of translating from a pre-statistical question to a statistical question takes place in all statistical inference. But its nature comes out most sharply with respect to testing hypotheses, so most of what will be said about it will be in that context.

+
+
+

20.3 The three types of questions

+

Let’s consider the natures of conceptual, operational, and statistical questions.

+
+

20.3.1 The Scientific Question

+

A study for either scientific or decision-making purposes properly begins with a general question about the nature of the world — that is, a conceptual or theoretical question. One must then transform this question into an operational-empirical form that one can study scientifically. Thence comes the translation into a technical-statistical question.

+

The scientific-conceptual-theoretical question can be an issue of theory, or a policy choice, or the result of curiosity at large.

+

Examples include: Can a bioengineer increase the chance of female calves being born? Is copper becoming less scarce? Are the prices of liquor systematically different in states where the liquor stores are publicly owned compared to states where they are privately owned? Does a new formulation of pig rations lead to faster hog growth? Was the rate of unemployment higher last month than the long-run average, or was the higher figure likely to be the result of sampling error? What are the margins of probable error for an unemployment survey?

+
+
+

20.3.2 The Operational-Empirical Question

+

The operational-empirical question is framed in measurable quantities in a meaningful design. Examples include: How likely is this state of affairs (say, the new pig-food formulation) to cause an event such as was observed (say, the observed increase in hog growth)? How likely is it that the mean unemployment rate of a sample taken from the universe of interest (say, the labor force, with an unemployment rate of 10 percent) will be between 11 percent and 12 percent? What is the probability of getting three girls in the first four children if the probability of a girl is .48? How unlikely is it to get nine females out of ten calves in an experiment on your farm? Did the price of copper fall between 1800 and the present? These questions are in the form of empirical questions, which have already been transformed by operationalizing from scientific-conceptual questions.

+
+
+

20.3.3 The Statistical Question

+

At this point one must decide whether the conceptual-scientific question is of the form of either a) or b):

+
  1. A test about whether some sample will frequently happen by chance rather than being very surprising — a test of the “significance” of a hypothesis. Such hypothesis testing takes the following form: How likely is a given “universe” to produce some sample like x? This leads to interpretation about: How likely is a given universe to be the cause of this observed sample?

  2. A question about the accuracy of the estimate of a parameter of the population based upon sample evidence (an inquiry about “confidence intervals”). This sort of question is considered by some (but not by me) to be a question in estimation — that is, one’s best guess about (say) the magnitude and probable error of the mean or median of a population. This is the form of a question about confidence limits — how likely is the mean to be between x and y?

Notice that the statistical question is framed as a question in probability.

+
+
+
+

20.4 Illustrative translations

+

The best way to explain how to translate a scientific question into a statistical question is to illustrate the process.

+
+

20.4.1 Illustration A — beliefs about smoking

+

Were doctors’ beliefs as of 1964 about the harmfulness of cigarette smoking (and doctors’ own smoking behavior) affected by the social groups among whom the doctors live (Simon 1967)? That was the theoretical question. We decided to define the doctors’ reference groups as the states in which they live, because data about doctors and smoking were available state by state (Modern Medicine, 1964). We could then translate this question into an operational and testable scientific hypothesis by asking this question: Do doctors in tobacco-economy states differ from doctors in other states in their smoking, and in their beliefs about smoking?

+

Which numbers would help us answer this question, and how do we interpret those numbers? We now were ready to ask the statistical question: Do doctors in tobacco-economy states “belong to the same universe” (with respect to smoking) as do other doctors? That is, do doctors in tobacco-economy states have the same characteristics — at least, those characteristics we are interested in, smoking in this case — as do other doctors? Later we shall see that the way to proceed is to consider the statistical hypothesis that these doctors do indeed belong to that same universe; that hypothesis and the universe will be called “benchmark hypothesis” and “benchmark universe” respectively — or in more conventional usage, the “null hypothesis.”

+

If the tobacco-economy doctors do indeed belong to the benchmark universe — that is, if the benchmark hypothesis is correct — then there is a 49/50 chance that doctors in some state other than the state in which tobacco is most important will have the highest rate of cigarette smoking. But in fact we observe that the state in which tobacco accounts for the largest proportion of the state’s income — North Carolina — had (as of 1964) a higher proportion of doctors who smoked than any other state. (Furthermore, a lower proportion of doctors in North Carolina than in any other state said that they believed that smoking is a health hazard.)

+

Of course, it is possible that it was just chance that North Carolina doctors smoked most, but the chance is only 1 in 50 if the benchmark hypothesis is correct. Obviously, some state had to have the highest rate, and the chance for any other state was also 1 in 50. But, because our original scientific hypothesis was that North Carolina doctors’ smoking rate would be highest, and we then observed that it was highest even though the chance was only 1 in 50, the observation became interesting and meaningful to us. It means that the chances are strong that there was a connection between the importance of tobacco in the economy of a state and the rate of cigarette smoking among doctors living there (as of 1964).

+

To consider this problem from another direction, it would be rare for North Carolina to have the highest smoking rate for doctors if there were no special reason for it; in fact, it would occur only once in fifty times. But, if there were a special reason — and we hypothesize that the tobacco economy provides the reason — then it would not seem unusual or rare for North Carolina to have the highest rate; therefore we choose to believe in the not-so-unusual phenomenon, that the tobacco economy caused doctors to smoke cigarettes.

+

Like many (most? all?) actual situations, the cigarettes and doctors’ smoking issue is a rather messy business. Did I have a clear-cut, theoretically-derived prediction before I began? Maybe I did a bit of “data dredging” — that is, maybe I started with a vague expectation, and only arrived at my sharp hypothesis after I saw the data. This would weaken the probabilistic interpretation of the test of significance — but this is something that a scientific investigator does not like to do because it weakens his/her claim for attention and chance of publication. On the other hand, if one were a Bayesian, one could claim that one had a prior probability that the observed effect would occur, and the observed data strengthens that prior; but this procedure would not seem proper to many other investigators. The only wholly satisfactory conclusion is to obtain more data — but as of 1993, there does not seem to have been another data set collected since 1964, and collecting a set by myself is not feasible.

+

This clearly is a case of statistical inference that one could argue about, though perhaps it is true that all cases where the data are sufficiently ambiguous as to require a test of significance are also sufficiently ambiguous that they are properly subject to argument.

+

For some decades the hypothetico-deductive framework was the leading point of view in empirical science. It insisted that the empirical and statistical investigation should be preceded by theory, and only propositions suggested by the theory should be tested. Investigators were not supposed to go back and forth from data to theory to testing. It is now clear that this is an ivory-tower irrelevance, and no one lived by the hypothetico-deductive strictures anyway — just pretended to. Furthermore, there is no sound reason to feel constrained by it, though it strengthens your conclusions if you had theoretical reason in advance to expect the finding you obtained.

+
+
+

20.4.2 Illustration B — is it a cure?

+

Does medicine CCC cure some particular cancer? That’s the scientific question. So you give the medicine to six patients who have the cancer and you do not give it to six similar patients who have the cancer. Your sample contains only twelve people because it is not feasible for you to obtain a larger sample. Five of six “medicine” patients get well, two of six “no medicine” patients get well. Does the medicine cure the cancer? That is, if future cancer patients take the medicine, will their rate of recovery be higher than if they did not take the medicine?

+

One way to translate the scientific question into a statistical question is to ask: Do the “medicine” patients belong to the same universe as the “no medicine” patients? That is, we ask whether “medicine” patients still have the same chances of getting well from the cancer as do the “no medicine” patients, or whether the medicine has bettered the chances of those who took it and thus removed them from the original universe, with its original chances of getting well. The original universe, to which the “no medicine” patients must still belong, is the benchmark universe. Shortly we shall see that we proceed by comparing the observed results against the benchmark hypothesis that the “medicine” patients still belong to the benchmark universe — that is, they still have the same chance of getting well as the “no medicine” patients.

+

We want to know whether or not the medicine does any good. This question is the same as asking whether patients who take medicine are still in the same population (universe) as “no medicine” patients, or whether they now belong to a different population in which patients have higher chances of getting well. To recapitulate our translations, we move from asking: Does the medicine cure the cancer? to, Do “medicine” patients have the same chance of getting well as “no medicine” patients?; and finally, to: Do “medicine” patients belong to the same universe (population) as “no medicine” patients? Remember that “population” in this sense does not refer to the population at large, but rather to a group of cancer sufferers (perhaps an infinitely large group) who have given chances of getting well, on the average. Groups with different chances of getting well are called “different populations” (universes). Shortly we shall see how to answer this statistical question. We must keep in mind that our ultimate concern in cases like this one is to predict future results of the medicine, that is, to predict whether use of the medicine will lead to a higher recovery rate than would be observed without the medicine.

+
+
+

20.4.3 Illustration C — a better method for teaching reading

+

Is method Alpha a better method of teaching reading than method Beta? That is, will method Alpha produce a higher average reading score in the future than will method Beta? Twenty children taught to read with method Alpha have an average reading score of 79, whereas children taught with method Beta have an average score of 84. To translate this scientific question into a statistical question we ask: Do children taught with method Alpha come from the same universe (population) as children taught with method Beta? Again, “universe” (population) does not mean the town or social group the children come from, and indeed the experiment will make sense only if the children do come from the same population, in that sense of “population.” What we want to know is whether or not the children belong to the same statistical population (universe), defined according to their reading ability, after they have studied with method Alpha or method Beta.

+
+
+

20.4.4 Illustration D — better fertilizer

+

If one plot of ground is treated with fertilizer, and another similar plot is not treated, the benchmark (null) hypothesis is that the corn raised on the treated plot is no different from the corn raised on the untreated plot — that is, that the corn from the treated plot comes from (“belongs to”) the same universe as the corn from the untreated plot. If our statistical test makes it seem very unlikely that a universe like that from which the untreated-plot corn comes would also produce corn such as came from the treated plot, then we are willing to believe that the fertilizer has an effect. For a psychological example, substitute the words “group of children” for “plot,” “special training” for “fertilizer,” and “I.Q. score” for “corn.”

+

There is nothing sacred about the benchmark (null) hypothesis of “no difference.” You could just as well test the benchmark hypothesis that the corn comes from a universe that averages 110 bushels per acre, if you have reason to be especially interested in knowing whether or not the fertilizer produces more than 110 bushels per acre. But in many cases it is reasonable to test the probability that a sample comes from the population that does not receive the special treatment of medicine, fertilizer, or training.

+
+
+
+

20.5 Generalizing from sample to universe

+

So far we have discussed the scientific question and the statistical question. Remember that there is always a generalization question, too: Do the statistical results from this particular sample of, say, rats apply to a universe of humans? This question can be answered only with wisdom, common sense, and general knowledge, and not with probability statistics.

+

Translating from a scientific question into a statistical question is mostly a matter of asking the probability that some given benchmark universe (population) will produce one or more observed samples. Notice that we must (at least for general scientific testing purposes) ask about a given universe whose composition we assume to be known , rather than about a range of universes, or about a universe whose properties are unknown. In fact, there is really only one question that probability statistics can answer: Given some particular benchmark universe of some stated composition, what is the probability that an observed sample would come from it? (Please notice the subtle but all-important difference between the words “would come” in the previous sentence, and the word “came.”) A variation of this question is: Given two (or more) samples, what is the probability that they would come from the same universe — that is, that the same universe would produce both of them? In this latter case, the relevant benchmark universe is implicitly the universe whose composition is the two samples combined.

+

The necessity for stating the characteristics of the universe in question becomes obvious when you think about it for a moment. Probability-statistical testing adds up to comparing a sample with a particular benchmark universe, and asking whether there probably is a difference between the sample and the universe. To carry out this comparison, we ask how likely it is that the benchmark universe would produce a sample like the observed sample.

+ +

But in order to find out whether or not a universe could produce a given sample, we must ask whether or not some particular universe — with stated characteristics — could produce the sample. There is no doubt that some universe could produce the sample by a random process; in fact, some universe did. The only sensible question, then, is whether or not a particular universe, with stated (or known) characteristics, is likely to produce such a sample. In the case of the medicine, the universe with which we compare the sample who took the medicine is the benchmark universe to which that sample would belong if the medicine had had no effect. This comparison leads to the benchmark (null) hypothesis that the sample comes from a population in which the medicine (or other experimental treatment) seems to have no effect . It is to avoid confusion inherent in the term “null hypothesis” that I replace it with the term “benchmark hypothesis.”

+

The concept of the benchmark (null) hypothesis is not easy to grasp. The best way to learn its meaning is to see how it is used in practice. For example, we say we are willing to believe that the medicine has an effect if it seems very unlikely from the number who get well that the patients given the medicine still belong to the same benchmark universe as the patients given no medicine at all — that is, if the benchmark hypothesis is unlikely.

+
+
+

20.6 The steps in statistical inference

+

These are the steps in conducting statistical inference:

+
    +
  • Step 1. Frame a question in the form of: What is the chance of getting the observed sample x from some specified population X? For example, what is the probability of getting a sample of 9 females and one male from a population where the probability of getting a single female is .48?
  • +
  • Step 2. Reframe the question in the form of: What kinds of samples does population X produce, with which probabilities? That is, what is the probability of the observed sample x (9 females in 10 calves), given that the population is X (composed of 48 percent females)? Or in notation, what is \(P(x | X)\)? (A simulation sketch of this probability appears just after this list.)
  • +
  • Step 3. Actually investigate the behavior of the population X with respect to the observed sample x and other samples. This can be done in two ways:
  • +
+
    +
  1. Use the calculus of probability (the formulaic method), perhaps resorting to the Monte Carlo method if an appropriate formula does not exist. Or
  2. +
  3. Resampling (in the larger sense), which equals the Monte Carlo method minus its use for approximations, investigation of complex functions in statistics and other theoretical mathematics, and non-resampling uses elsewhere in science. Resampling in the more restricted sense includes bootstrap, permutation, and other non-parametric methods. More about the resampling procedure follows in the paragraphs to come, and then in later chapters in the book.
  4. +
+
    +
  • Step 4. Interpret the probabilities that result from step 3 in terms of acceptance or rejection of hypotheses, surety of conclusions, and as inputs to decision theory.1
  • +
+
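To make Steps 1 to 3 concrete, here is a minimal Python sketch (ours, not part of the original text) that estimates \(P(x | X)\) for the calves example by simulation. The 48 percent chance of a female and the sample of 10 calves come from Step 1; the trial count and the variable names are our own choices.

import numpy as np

# set up the random number generator
rnd = np.random.default_rng()

n_trials = 10000
# Count the trials that give exactly 9 females out of 10 calves.
count_nine_females = 0

for i in range(n_trials):
    # One sample of 10 calves from a universe with a 48% chance of a female.
    calves = rnd.choice(['female', 'male'], size=10, p=[0.48, 0.52])
    n_females = np.sum(calves == 'female')
    if n_females == 9:
        count_nine_females = count_nine_females + 1

# The estimate of P(x | X): the probability that universe X produces
# the observed sample x of 9 females in 10 calves.
print(count_nine_females / n_trials)

Depending on how you frame the question in Step 1, you might instead count samples with 9 or more females rather than exactly 9.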

The following short definition of statistical inference summarizes the previous four steps:

+
+

Statistical inference equals the selection of a probabilistic model to resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.

+
+

Stating the steps to be followed in a procedure is an operational definition of the procedure. My belief in the clarifying power of this device (the operational definition) is embodied in the set of steps given in Chapter 15 for the various aspects of statistical inference. A canonical question-and-answer procedure for testing hypotheses will be found in Chapter 25, and one for confidence intervals will be found in Chapter 26.

+
+
+

20.7 Summary

+

We define resampling to include problems in inferential statistics as well as problems in probability as follows: Using the entire set of data you have in hand, or using the given data-generating mechanism (such as a die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples. That’s it in a nutshell. In some cases, it may also be appropriate to amplify this procedure with additional assumptions.

+

Problems in pure probability may at first seem different in nature than problems in statistical inference. But the same logic as stated in this definition applies to both varieties of problems. The difference is that in probability problems the “model” is known in advance — say, the model implicit in a deck of poker cards plus a game’s rules for dealing and counting the results — rather than the model being assumed to be best estimated by the observed data, as in resampling statistics.

+

The hardest job in using probability statistics, and the most important, is to translate the scientific question into a form to which statistics can give a sensible answer. You must translate scientific questions into the appropriate form for statistical operations, so that you know which operations to perform. This is the part of the job that requires hard, clear thinking — though it is non-mathematical thinking — and it is the part that someone else usually cannot easily do for you.

+

Once you know exactly which probability-statistical question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy. The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms. Though this step is hard, it involves no mathematics. This step requires only hard, clear thinking. You cannot beg off by saying “I have no brain for math!” To flub this step is to admit that you have no brain for clear thinking, rather than no brain for mathematics.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/how_big_sample.html b/python-book/how_big_sample.html new file mode 100644 index 00000000..feff4c80 --- /dev/null +++ b/python-book/how_big_sample.html @@ -0,0 +1,2102 @@ + + + + + + + + + +Resampling statistics - 30  How Large a Sample? + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

30  How Large a Sample?

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

30.1 Issues in determining sample size

+

Sometime in the course of almost every study — preferably early in the planning stage — the researcher must decide how large a sample to take. Deciding the size of sample to take is likely to puzzle and distress you at the beginning of your research career. You have to decide somehow, but there are no simple, obvious guides for the decision.

+

For example, one of the first studies I worked on was a study of library economics (Fussler and Simon 1961), which required taking a sample of the books from the library’s collections. Sampling was expensive, and we wanted to take a correctly sized sample. But how large should the sample be? The longer we searched the literature, and the more people we asked, the more frustrated we got because there just did not seem to be a clear-cut answer. Eventually we found out that, even though there are some fairly rational ways of fixing the sample size, most sample sizes in most studies are fixed simply (and irrationally) by the amount of money that is available or by the sample size that similar research has used in the past.

+

The rational way to choose a sample size is by weighing the benefits you can expect in information against the cost of increasing the sample size. In principle you should continue to increase the sample size until the benefit and cost of an additional sampled unit are equal.1

+

The benefit of additional information is not easy to estimate even in applied research, and it is extraordinarily difficult to estimate in basic research. Therefore, it has been the practice of researchers to set up target goals of the degree of accuracy they wish to achieve, or to consider various degrees of accuracy that might be achieved with various sample sizes, and then to balance the degree of accuracy with the cost of achieving that accuracy. The bulk of this chapter is devoted to learning how the sample size is related to accuracy in simple situations.

+

In complex situations, however, and even in simple situations for beginners, you are likely to feel frustrated by the difficulties of relating accuracy to sample size, in which case you cry out to a supervisor, “Don’t give me complicated methods, just give me a rough number based on your greatest experience.” My inclination is to reply to you, “Sometimes life is hard and there is no shortcut.” On the other hand, perhaps you can get more information than misinformation out of knowing sample sizes that have been used in other studies. Table 24-1 shows the middle (modal), 25th percentile, and 75th percentile scores for — please keep this in mind — National Opinion Surveys in the top panel. The bottom panel shows how subgroup analyses affect sample size.

+

Pretest sample sizes are smaller, of course, perhaps 25-100 observations. Samples in research for Master’s and Ph.D. theses are likely to be closer to a pretest than to national samples.

+

Table 24-1

+

Most Common Sample Sizes Used for National and Regional Studies By Subject Matter

+

Subject Matter            National                 Regional
                          Mode    Q3      Q1       Mode    Q3      Q1
Financial                 1000+                    100     400     50
Medical                   1000+   1000+   500      1000+   1000+   250
Other Behavior            1000+                    700     1000    300
Attitudes                 1000+   1000+   500      700     1000    400
Laboratory Experiments                             100     200     50
+

Typical Sample Sizes for Studies of Human and Institutional Populations

+

                       People or Households       Institutions
Subgroup Analyses      National     Special       National     Special
None or few            1000-1500    200-500       200-500      50-200
Average                1500-2500    500-1000      500-1000     200-500
Many                   2500+        1000+         1000+        500+
+

SOURCE: From Applied Sampling, by Seymour Sudman (1976, 86-87), copyright Academic Press, reprinted by permission.

+

Once again, the sample size ought to depend on the proportions of the sample that have the characteristics you are interested in, the extent to which you want to learn about subgroups as well as the universe as a whole, and of course the purpose of your study, the value of the information, and the cost. Also, keep in mind that the added information that you obtain from an additional sample observation tends to be smaller as the sample size gets larger. You must quadruple the sample to halve the error.

+

Now let us consider some specific cases. The first examples taken up here are from the descriptive type of study, and the latter deal with sample sizes in relationship research.

+
+
+

30.2 Some practical examples

+

Example 24-1

+

What proportion of the homes in Countryville are tuned into television station WCNT’s ten o’clock news program? That is the question your telephone survey aims to answer, and you want to know how many randomly selected homes you must telephone to obtain a sufficiently large sample.

+

Begin by guessing the likeliest answer, say 30 percent in this case. Do not worry if you are off by 5 percent or even 10 percent; you will probably not be further off than that. Select a first-approximation sample size of perhaps 400; this number is selected from my general experience, but it is just a starting point. Then proceed through the first 400 numbers in the random-number table, marking down a yes for numbers 1-3 and a no for numbers 4-10 (because 3/10 was your estimate of the proportion listening). Then add up the number of yes and no entries. Carry out perhaps ten sets of such trials, the results of which are in Table 24-2.

+

Table 24-2

+

Trial   Number “Yes”   Number “No”   % difference from expected mean of 30% (120 “Yes”)
1       115            285           1.25
2       119            281           0.25
3       116            284           1.00
4       114            286           1.50
5       107            293           3.25
6       116            284           1.00
7       132            268           3.00
8       123            277           0.75
9       121            279           0.25
10      114            286           1.50
Mean                                  1.37
+

Based on these ten trials, you can estimate that if you take a sample of 400 and if the “real” viewing level is 30 percent, your average percentage error will be 1.375 percent on either side of 30 percent. That is, with a sample of 400, half the time your error will be greater than 1.375 percent if 3/10 of the universe is listening.

+

Now you must decide whether the estimated error is small enough for your needs. If you want greater accuracy than a sample of 400 will give you, increase the sample size, using this important rule of thumb: To cut the error in half, you must quadruple the sample size. In other words, if you want a sample that will give you an error of only 0.55 percent on the average, you must increase the sample size to 1,600 interviews. Similarly, if you cut the sample size to 100, the average error will be only 2.75 percent (double 1.375 percent) on either side of 30 percent. If you distrust this rule of thumb, run ten or so trials on sample sizes of 100 or 1,600, and see what error you can expect to obtain on the average.

+

If the “real” viewership is 20 percent or 40 percent, instead of 30 percent, the accuracy you will obtain from a sample size of 400 will not be very different from an “actual” viewership of 30 percent, so do not worry about that too much, as long as you are in the right general vicinity.

+

Accuracy is slightly greater in smaller universes but only slightly. For example, a sample of 400 would give perfect accuracy if Countryville had only 400 residents. And a sample of 400 will give slightly greater accuracy for a town of 800 residents than for a city of 80,000 residents. But, beyond the point at which the sample is a large fraction of the total universe, there is no difference in accuracy with increases in the size of universe. This point is very important. For any given level of accuracy, identical sample sizes give the same level of accuracy for Podunk (population 8,000) or New York City (population 8 million). The ratio of the sample size to the population of Podunk or New York City means nothing at all, even though it intuitively seems to be important.

+

The size of the sample must depend upon which population or subpopulations you wish to describe. For example, Alfred Kinsey’s sample size for the classic “Sexual Behavior in the Human Male” (1948) would have seemed large, by customary practice, for generalizations about the United States population as a whole. But, as Kinsey explains: “… the chief concern of the present study is an understanding of the sexual behavior of each segment of the population, and that it is only secondarily concerned with generalization for the population as a whole.” (1948, 82, italics added). Therefore Kinsey’s sample had to include subsamples large enough to obtain the desired accuracy in each of these sub-universes. The U.S. Census offers a similar illustration. When the U.S. Bureau of the Census aims to estimate only a total or an average for the United States as a whole — as, for example, in the Current Population Survey estimate of unemployment — a sample of perhaps 50,000 is big enough. But the decennial census aims to make estimates for all the various communities in the country, estimates that require adequate subsamples in each of these sub-universes; such is the justification for the decennial census’ sample size of so many millions. Television ratings illustrate both types of purpose. Nielsen ratings, for example, are sold primarily to national network advertisers. These advertisers on national television networks usually sell their goods all across the country and are therefore interested primarily in the total United States viewership for a program, rather than in the viewership in various demographic subgroups. The appropriate calculations for Nielsen sample size will therefore refer to the total United States sample. But other organizations sell rating services to local television and radio stations for use in soliciting advertising over the local stations rather than over the network as a whole. Each local sample must then be large enough to provide reasonable accuracy, and, considered as a whole, the samples for the local stations therefore add up to a much larger sample than the Nielsen and other nationwide samples.

+

The problem may be handled with the following Python program. This program represents a viewer with the string 'viewer' and a non-viewer with the string 'not viewer'. It then asks rnd.choice to choose randomly between 'viewer' and 'not viewer', with a 30% (p=0.3) chance of getting a 'viewer' and a 70% chance of getting a 'not viewer'. It draws a sample of 400 such strings, counts the 'viewer' strings (with np.sum), then finds how much this sample diverges from the expected number of viewers (30% of 400 = 120). It repeats this procedure 10,000 times, and then calculates the average divergence.

+
+

Start of viewer_numbers notebook

+ + +
+
import numpy as np
+
+# set up the random number generator
+rnd = np.random.default_rng()
+
+
+
# set the number of trials
+n_trials = 10000
+
+# an empty array to store the scores
+scores = np.zeros(n_trials)
+
+# What are the options to choose from?
+options = ['viewer', 'not viewer']
+
+# do n_trials trials
+for i in range(n_trials):
+
+    # Choose 'viewer' 30% of the time.
+    a = rnd.choice(options, size=400, p=[0.3, 0.7])
+
+    # count the viewers
+    b = np.sum(a == 'viewer')
+
+    # how different from expected?
+    c = 120 - b
+
+    # absolute value of the difference
+    d = np.abs(c)
+
+    # express as a proportion of sample
+    e = d / 400
+
+    # keep score of the result
+    scores[i] = e
+
+# find the mean divergence
+k = np.mean(scores)
+
+# Show the result
+k
+
+
0.018184000000000002
+
+
+ +

End of viewer_numbers notebook

+
+

It is a simple matter to go back and try a sample size of (say) 1600 rather than 400, and examine the effect on the mean difference.
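For instance, a sketch along the following lines (ours, not part of the notebook above) repeats the same logic for sample sizes of 100, 400 and 1600, so that you can watch the mean divergence roughly halve each time the sample size is quadrupled.

import numpy as np

rnd = np.random.default_rng()

options = ['viewer', 'not viewer']

for sample_size in [100, 400, 1600]:
    n_trials = 10000
    scores = np.zeros(n_trials)
    # The expected number of viewers is 30% of the sample size.
    expected = 0.3 * sample_size
    for i in range(n_trials):
        a = rnd.choice(options, size=sample_size, p=[0.3, 0.7])
        n_viewers = np.sum(a == 'viewer')
        # Absolute divergence from the expected count, as a proportion.
        scores[i] = np.abs(expected - n_viewers) / sample_size
    print(sample_size, np.mean(scores))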

+

Example 24-2

+

This example, like Example 24-1, illustrates the choice of sample size for estimating a summarization statistic. Later examples deal with sample sizes for probability statistics.

+

Hark back to the pig-ration problems presented earlier, and consider the following set of pig weight-gains recorded for ration A: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 30. Assume that our purpose now is to estimate the average weight gain for ration A, so that the feed company can advertise to farmers how much weight gain to expect from ration A. If the universe is made up of pig weight-gains like those we observed, we can simulate the universe with, say, 1 million weight gains of thirty-one pounds, 1 million of thirty-four pounds, and so on for the twelve observed weight gains. Or, more conveniently, as accuracy will not be affected much, we can make up a universe of, say, thirty cards for each thirty-one-pound gain, thirty cards for each thirty-four-pound gain, and so forth, yielding a deck of 30 x 12 = 360 cards. Then shuffle, and, just for a starting point, try sample sizes of twelve pigs. The means of the samples for twenty such trials are as in Table 24-3.

+

Now ask yourself whether a sample size of twelve pigs gives you enough accuracy. There is a .5 chance that the mean for the sample will be more than .65 or .92 pound (the two median deviations) or (say) .785 pound (the midpoint of the two medians) from the mean of the universe that generates such samples, which in this situation is 31.75 pounds. Is this close enough? That is up to you to decide in light of the purposes for which you are running the experiment. (The logic of the inference you make here is inevitably murky, and use of the term “real mean” can make it even murkier, as is seen in the discussion in Chapters 20-22 on confidence intervals.)

+

To see how accuracy is affected by larger samples, try a sample size of forty-eight “pigs” dealt from the same deck. (But, if the sample size were to be much larger than forty-eight, you might need a “universe” greater than 360 cards.) The results of twenty trials are in Table 24-4.

+

In half the trials with a sample size of forty-eight the difference between the sample mean and the “real” mean of 31.75 will be .36 or .37 pound (the median deviations), smaller than with the values of .65 and .92 for samples of 12 pigs. Again, is this too little accuracy for you? If so, increase the sample size further.

+

Table 24-3

+ ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
(Absolute deviations of the trial means are measured from the actual mean of 31.75 pounds.)

Trial   Mean    Abs. deviation      Trial   Mean    Abs. deviation
1       31.77   .02                 11      32.10   .35
2       32.27   1.52                12      30.67   1.08
3       31.75   .00                 13      32.42   .67
4       30.83   .92                 14      30.67   1.08
5       30.52   1.23                15      32.25   .50
6       31.60   .15                 16      31.60   .15
7       32.46   .71                 17      32.33   .58
8       31.10   .65                 18      33.08   1.33
9       32.42   .35                 19      33.01   1.26
10      30.60   1.15                20      30.60   1.15

Mean: 31.75
+

The attentive reader of this example may have been troubled by this question: How do you know what kind of a distribution of values is contained in the universe before the sample is taken? The answer is that you guess, just as in Example 24-1 you guessed at the mean of the universe. If you guess wrong, you will get either more accuracy or less accuracy than you expected from a given sample size, but the results will not be fatal; if you obtain more accuracy than you wanted, you have wasted some money, and, if you obtain less accuracy, your sample dispersion will tell you so, and you can then augment the sample to boost the accuracy. But an error in guessing will not introduce error into your final results.

+

Table 24-4

+ ++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
(Absolute deviations of the trial means are measured from the actual mean of 31.75 pounds.)

Trial   Mean    Abs. deviation      Trial   Mean    Abs. deviation
1       31.80   .05                 11      31.93   .18
2       32.27   .52                 12      32.40   .65
3       31.82   .07                 13      31.32   .43
4       31.39   .36                 14      32.07   .68
5       31.22   .53                 15      32.03   .28
6       31.88   .13                 16      31.95   .20
7       31.37   .38                 17      31.75   .00
8       31.48   .27                 18      31.11   .64
9       31.20   .55                 19      31.96   .21
10      32.01   .26                 20      31.32   .43

Mean: 31.75
+

The guess should be based on something, however. One source for guessing is your general knowledge of the likely dispersion; for example, if you were estimating male heights in Rhode Island, you would be able to guess what proportion of observations would fall within 2 inches, 4 inches, 6 inches, and 8 inches, perhaps, of the real value. Or, much better yet, a very small pretest will yield quite satisfactory estimates of the dispersion.

+

Here is a RESAMPLING STATS program that will let you try different sample sizes, and then take bootstrap samples to determine the range of sampling error. You set the sample size with the DATA command, and the NUMBERS command records the data. Above I noted that we could sample without replacement from a “deck” of thirty “31”’s, thirty “34”’s, etc, as a substitute for creating a universe of a million “31”’s, a million “34”’s, etc. We can achieve the same effect if we replace each card after we sample it; this is equivalent to creating a “deck” of an infinite number of “31”’s, “34”’s, etc. That is what the SAMPLE command does, below. Note that the sample size is determined by the value of the “sampsize” variable, which you set at the beginning. From here on the program takes the MEAN of each sample, keeps SCORE of that result, and produces a HISTOGRAM. The PERCENTILE command will also tell you what values enclose 90% of all sample results, excluding those below the 5th percentile and above the 95th percentile.

+

Here is a program for a sample size of 12.

+ +
' Program file: "how_big_sample_01.rss"
+
+DATA (12) sampsize
+NUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a
+REPEAT 1000
+    SAMPLE sampsize a b
+    MEAN b c
+    SCORE c z
+END
+HISTOGRAM z
+PERCENTILE z (5 95) k
+PRINT k
+' **Bin Center Freq Pct Cum Pct**
+

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Bin Center   Freq   Pct    Cum Pct
29.0         2      0.2    0.2
29.5         4      0.4    0.6
30.0         30     3.0    3.6
30.5         71     7.1    10.7
31.0         162    16.2   26.9
31.5         209    20.9   47.8
32.0         237    23.7   71.5
32.5         143    14.3   85.8
33.0         90     9.0    94.8
33.5         37     3.7    98.5
34.0         12     1.2    99.7
34.5         3      0.3    100.0

k = 30.417 33.25
+
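Until this page has been fully ported (see the note at the start of the chapter), here is one possible Python rendering of the same procedure; the sketch and its variable names are ours, and rnd.choice with replace=True plays the role of the SAMPLE command's effectively infinite deck.

import numpy as np

rnd = np.random.default_rng()

sampsize = 12
gains = np.array([31, 34, 29, 26, 32, 35, 38, 34, 32, 31, 30, 29])

n_trials = 1000
means = np.zeros(n_trials)

for i in range(n_trials):
    # Sample with replacement, as if from a huge deck of the observed gains.
    sample = rnd.choice(gains, size=sampsize, replace=True)
    means[i] = np.mean(sample)

# The values that enclose 90% of the sample means.
print(np.percentile(means, [5, 95]))

Setting sampsize to 48 lets you repeat the comparison made with Table 24-4.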

Example 24-3

+

This is the first example of sample-size estimation for probability (testing) statistics, rather than the summarization statistics dealt with above.

+

Recall the problem of the sex of fruit-fly offspring discussed in Example 15-1. The question now is, how large a sample is needed to determine whether the radiation treatment results in a sex ratio other than a 50-50 male-female split?

+

The first step is, as usual, difficult but necessary. As the researcher, you must guess what the sex ratio will be if the treatment does have an effect. Let’s say that you use all your general knowledge of genetics and of this treatment and that you guess the sex ratio will be 75 percent males and 25 percent females if the treatment alters the ratio from 50-50.

+

In the random-number table let “01-25” stand for females and “26-00” for males. Take twenty successive pairs of numbers for each trial, and run perhaps fifty trials, as in Table 24-5.

+

Table 24-5

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Trial   Females   Males     Trial   Females   Males     Trial   Females   Males
1       4         16        18      7         13        34      4         16
2       6         14        19      3         17        35      6         14
3       6         14        20      7         13        36      3         17
4       5         15        21      4         16        37      8         12
5       5         15        22      4         16        38      4         16
6       3         17        23      5         15        39      3         17
7       7         13        24      8         12        40      6         14
8       6         14        25      4         16        41      5         15
9       3         17        26      1         19        42      2         18
10      2         18        27      5         15        43      8         12
11      6         14        28      3         17        44      4         16
12      1         19        29      8         12        45      6         14
13      6         14        30      8         12        46      5         15
14      3         17        31      5         15        47      3         17
15      1         19        32      3         17        48      5         15
16      5         15        33      4         16        49      3         17
17      5         15                                    50      5         15

+

In Example 15-1 with a sample of twenty flies that contained fourteen or more males, we found only an 8% probability that such an extreme sample would result from a 50-50 universe. Therefore, if we observe such an extreme sample, we rule out a 50-50 universe.

+

Now Table 24-5 tells us that, if the ratio is really 75 to 25, then a sample of twenty will show fourteen or more males forty-two of fifty times (84 percent of the time). If we take a sample of twenty flies and if the ratio is really 75-25, we will make the correct decision by deciding that the split is not 50-50 84 percent of the time.
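A computer can stand in for the random-number table here. The following sketch (ours, in the same NumPy style as the earlier notebook) estimates the proportion of twenty-fly samples from a 75-25 universe that contain fourteen or more males; with many more than fifty trials it gives a more precise figure than the hand count above.

import numpy as np

rnd = np.random.default_rng()

n_trials = 10000
count_extreme = 0

for i in range(n_trials):
    # 20 flies from a universe that is 75% male, 25% female.
    flies = rnd.choice(['male', 'female'], size=20, p=[0.75, 0.25])
    if np.sum(flies == 'male') >= 14:
        count_extreme = count_extreme + 1

# Proportion of trials in which we would correctly reject a 50-50 universe.
print(count_extreme / n_trials)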

+

Perhaps you are not satisfied with reaching the right conclusion only 84 percent of the time. In that case, still assuming that the ratio will really be 75-25 if it is not 50-50, you need to take a sample larger than twenty flies. How much larger? That depends on how much surer you want to be. Follow the same procedure for a sample size of perhaps eighty flies. First work out for a sample of eighty, as was done in Example 15-1 for a sample of twenty, the number of males out of eighty that you would need to find for the odds to be, say, 9 to 1 that the universe is not 50-50; your estimate turns out to be forty-eight males. Then run fifty trials of eighty flies each on the basis of 75-25 probability, and see how often you would not get as many as forty-eight males in the sample. Table 24-6 shows the results we got. No trial was anywhere near as low as forty-eight, which suggests that a sample of eighty is larger than necessary if the split is really 75-25.

+

Table 24-6

+

+

+

Trial   Females   Males     Trial   Females   Males     Trial   Females   Males
1       21        59        18      13        67        34      21        59
2       22        58        19      19        61        35      17        63
3       13        67        20      17        63        36      22        58
4       15        65        21      17        63        37      19        61
5       22        58        22      18        62        38      21        59
6       21        59        23      26        54        39      21        59
7       13        67        24      20        60        40      21        59
8       24        56        25      16        64        41      21        59
9       16        64        26      22        58        42      18        62
10      21        59        27      16        64        43      19        61
11      20        60        28      21        59        44      17        63
12      19        61        29      22        58        45      13        67
13      21        59        30      21        59        46      16        64
14      17        63        31      22        58        47      21        59
15      22        68        32      19        61        48      16        64
16      22        68        33      10        70        49      17        63
17      17        63                                    50      21        59
+

Table 24-7

+

Trial   Females   Males     Trial   Females   Males     Trial   Females   Males
1       35        45        18      32        48        34      35        45
2       36        44        19      28        52        35      36        44
3       35        45        20      32        48        36      29        51
4       35        45        21      33        47        37      36        44
5       36        44        22      37        43        38      36        44
6       36        44        23      36        44        39      31        49
7       36        44        24      31        49        40      29        51
8       34        46        25      27        53        41      30        50
9       34        46        26      30        50        42      35        45
10      29        51        27      31        49        43      32        48
11      29        51        28      33        47        44      30        50
12      32        48        29      37        43        45      37        43
13      29        51        30      30        50        46      31        49
14      31        49        31      31        49        47      36        44
15      28        52        32      32        48        48      34        46
16      33        47        33      34        46        49      29        51
17      36        44                                    50      37        43
+

+

It is obvious that, if the split you guess at is 60 to 40 rather than 75 to 25, you will need a bigger sample to obtain the “correct” result with the same probability. For example, run some eighty-fly random-number trials with "01-40" standing for females and "41-00" for males. Table 24-7 shows that only twenty-four of fifty (48 percent) of the trials reach the necessary cut-off at which one would judge that a sample of eighty really does not come from a universe that is split 50-50; therefore, a sample of eighty is not big enough if the split is 60-40.

+

To review the main principles of this example: First, the closer together the two possible universes from which you think the sample might have come (50-50 and 60-40 are closer together than are 50-50 and 75-25), the larger the sample needed to distinguish between them. Second, the surer you want to be that you reach the right decision based upon the sample evidence, the larger the sample you need.

+

The problem may be handled with the following RESAMPLING STATS program. We construct a benchmark universe that is 60-40 male-female, and take samples of size 80, observing whether the numbers of males and females differ enough in these resamples to rule out a 50-50 universe. Recall that we need at least 48 males to say that the proportion of males is not 50%.

+ +
' Program file: "how_big_sample_02.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 80 1,10 a
+    ' Generate 80 "flies," each represented by a number between 1 and 10 where
+    ' <= 6 is a male
+    COUNT a <=6 b
+    ' Count the males
+    SCORE b z
+    ' Keep score
+END
+COUNT z >=48 k
+' How many of the trials produced 48 or more males?
+DIVIDE k 1000 kk
+' Convert to a proportion
+PRINT kk
+' If the result "kk" is close to 1, we then know that samples of size 80
+' will almost always produce samples with enough males to avoid misleading
+' us into thinking that they could have come from a universe in which
+' males and females are split 50-50.
+
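Until the port is complete, a rough Python equivalent of this program might read as follows (the sketch and names are ours):

import numpy as np

rnd = np.random.default_rng()

n_trials = 1000
scores = np.zeros(n_trials)

for i in range(n_trials):
    # 80 flies from a universe that is 60% male, 40% female.
    flies = rnd.choice(['male', 'female'], size=80, p=[0.6, 0.4])
    # Count the males.
    scores[i] = np.sum(flies == 'male')

# Proportion of trials with 48 or more males, the number needed to rule
# out a 50-50 universe.
kk = np.sum(scores >= 48) / n_trials
print(kk)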

Example 24-4

+

Referring back to Example 15-3, on the cable-television poll, how large a sample should you have taken? Pretend that the data have not yet been collected. You need some estimate of how the results will turn out before you can select a sample size. But you have not the foggiest idea how the results will turn out. Therefore, go out and take a very small sample, maybe ten people, to give you some idea of whether people will split quite evenly or unevenly. Seven of your ten initial interviews say they are for CATV. How large a sample do you now need to provide an answer of which you can be fairly sure?

+

Using the techniques of the previous chapter, we estimate roughly that from a sample of fifty people at least thirty-two would have to vote the same way for you to believe that the odds are at least 19 to 1 that the sample does not misrepresent the universe, that is, that the sample does not show a majority different from that of the whole universe if you polled everyone. This estimate is derived from the resampling experiment described in example 15-3. The table shows that if half the people (or more) are against cable television, only one in twenty times will thirty-two (or more) people of a sample of fifty say that they are for cable television; that is, only one of twenty trials with a 50-50 universe will produce as many as thirty-two yeses if a majority of the population is against it.

+

Therefore, designate numbers 1-30 as no and 31-00 as yes in the random-number table (that is, 70 percent, as in your estimate based on your presample of ten), work through a trial sample size of fifty, and count the number of yeses. Run through perhaps ten or fifteen trials, and reckon how often the observed number of yeses exceeds thirty-two, the number you must exceed for a result you can rely on. In Table 24-8 we see that a sample of fifty respondents, from a universe split 70-30, will show that many yeses a preponderant proportion of the time — in fact, in fifteen of fifteen experiments; therefore, the sample size of fifty is large enough if the split is “really” 70-30.

+

Table 24-8

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Trial   No   Yes     Trial   No   Yes
1       13   37      9       15   35
2       14   36      10      9    41
3       18   32      11      15   35
4       10   40      12      15   35
5       13   37      13      9    41
6       15   35      14      16   34
7       14   36      15      17   33
+

The following RESAMPLING STATS program takes samples of size 50 from a universe that is 70% “yes.” It then observes how often such samples produce more than 32 “yeses” — the number we must get if we are to be sure that the sample is not from a 50/50 universe.

+ +
' Program file: "how_big_sample_03.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 50 1,10 a
+    ' Generate 50 numbers between 1 and 10, let 1-7 = yes.
+    COUNT a <=7 b
+    ' Count the "yeses"
+    SCORE b z
+    ' Keep score of the result
+END
+COUNT z >=32 k
+' Count how often the sample result >= our 32 cutoff (recall that samples
+' with 32 or fewer "yeses" cannot be ruled out of a 50/50 universe)
+DIVIDE k 1000 kk
+' Convert to a proportion
+

If “kk” is close to 1, we can be confident that this sample will be large enough to avoid a result that we might mistakenly think comes from a 50/50 universe (provided that the real universe is 70% favorable).
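A possible Python rendering of this program (again, the sketch is ours rather than the original code) is:

import numpy as np

rnd = np.random.default_rng()

n_trials = 1000
scores = np.zeros(n_trials)

for i in range(n_trials):
    # 50 respondents from a universe that is 70% "yes".
    answers = rnd.choice(['yes', 'no'], size=50, p=[0.7, 0.3])
    # Count the "yeses".
    scores[i] = np.sum(answers == 'yes')

# Proportion of trials reaching the cutoff of 32 or more "yes" answers.
kk = np.sum(scores >= 32) / n_trials
print(kk)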

+

Example 24-5

+

How large a sample is needed to determine whether there is any difference between the two pig rations in Example 15-7? The first step is to guess the results of the tests. You estimate that the average for ration A will be a weight gain of thirty-two pounds. You further guess that twelve pigs on ration A might gain thirty-six, thirty-five, thirty-four, thirty-three, thirty-three, thirty-two, thirty-two, thirty-one, thirty-one, thirty, twenty-nine, and twenty-eight pounds. This set of guesses has an equal number of pigs above and below the average and more pigs close to the average than farther away. That is, there are more pigs at 33 and 31 pounds than at 36 and 28 pounds. This would seem to be a reasonable distribution of pigs around an average of 32 pounds. In similar fashion, you guess an average weight gain of 28 pounds for ration B and a distribution of 32, 31, 30, 29, 29, 28, 28, 27, 27, 26, 25, and 24 pounds.

+

Let us review the basic strategy. We want to find a sample size large enough so that a large proportion of the time it will reveal a difference between groups big enough to be accepted as not attributable to chance. First, then, we need to find out how big the difference must be to be accepted as evidence that the difference is not attributable to chance. We do so from trials with samples of that size from the benchmark universe. We take a difference larger than the benchmark universe will usually produce as evidence that the difference is not attributable to chance.

+

In this case, let us try samples of 12 pigs on each ration. First we draw two samples from a combined benchmark universe made up of the results that we have guessed will come from ration A and ration B. (The procedure is the same as was followed in Example 15-7.) We find that in 19 out of 20 trials the difference between the two observed groups of 12 pigs was 3 pounds or less. Now we investigate how often samples of 12 pigs, drawn from the separate universes, will show a mean difference as large as 3 pounds. We do so by making up a deck of 25 or 50 cards for each of the 12 hypothesized A’s and each of the 12 B’s, with the ration name and the weight gain written on it — that is, a deck of, say, 300 cards for each ration. Then from each deck we draw a set of 12 cards at random, record the group averages, and find the difference.

+

Here is the same work done with more runs on the computer:

+ +
' Program file: "how_big_sample_04.rss"
+
+NUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a
+NUMBERS (32 32 31 30 29 29 29 28 28 26 26 24) b
+REPEAT 1000
+    SAMPLE 12 a aa
+    MEAN aa aaa
+    SAMPLE 12 b bb
+    MEAN bb bbb
+    SUBTRACT aaa bbb c
+    SCORE c z
+END
+HISTOGRAM z
+' **Difference in mean weights between resamples**
+

+

Therefore, two samples of twelve pigs each are clearly large enough, and, in fact, even smaller samples might be sufficient if the universes are really like those we guessed at. If, on the other hand, the differences in the guessed universes had been smaller, then twelve-pig groups would have seemed too small and we would then have had to try out larger sample sizes, say forty-eight pigs in each group and perhaps 200 pigs in each group if forty-eight were not enough. And so on until the sample size is large enough to promise the accuracy we want. (In that case, the decks would also have to be much larger, of course.)

+

If we had guessed different universes for the two rations, then the sample sizes required would have been larger or smaller. If we had guessed the averages for the two samples to be closer together, then we would have needed larger samples. Also, if we had guessed the weight gains within each universe to be less spread out, the samples could have been smaller and vice versa.

+

The following RESAMPLING STATS program first records the data from the two samples, and then draws from decks of infinite size by sampling with replacement from the original samples.

+ +
' Program file: "how_big_sample_05.rss"
+
+DATA (36 35 34 33 33 32 32 31 31 30 29 28) a
+DATA (32 31 30 29 29 28 28 27 27 26 25 24) b
+REPEAT 1000
+    SAMPLE 12 a aa
+    ' Draw a sample of 12 from ration a with replacement (this is like drawing
+    ' from a large deck made up of many replicates of the elements in a)
+    SAMPLE 12 b bb
+    ' Same for b
+    MEAN aa aaa
+    ' Find the averages of the resamples
+    MEAN bb bbb
+    SUBTRACT aaa bbb c
+    ' Find the difference
+    SCORE c z
+END
+COUNT z >=3 k
+' How often did the difference exceed the cutoff point for our
+' significance test of 3 pounds?
+DIVIDE k 1000 kk
+PRINT kk
+' If kk is close to zero, we know that the sample size is large enough
+' that samples drawn from the universes we have hypothesized will not
+' mislead us into thinking that they could come from the same universe.
+
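A Python sketch of this last program might look like the following; as before, the port is ours, and sampling with replacement stands in for drawing from an effectively infinite deck.

import numpy as np

rnd = np.random.default_rng()

# Guessed weight gains for the two rations.
a = np.array([36, 35, 34, 33, 33, 32, 32, 31, 31, 30, 29, 28])
b = np.array([32, 31, 30, 29, 29, 28, 28, 27, 27, 26, 25, 24])

n_trials = 1000
diffs = np.zeros(n_trials)

for i in range(n_trials):
    # Draw 12 gains from each guessed universe, with replacement.
    aa = rnd.choice(a, size=12, replace=True)
    bb = rnd.choice(b, size=12, replace=True)
    diffs[i] = np.mean(aa) - np.mean(bb)

# How often does the difference reach the 3-pound cutoff for our
# significance test?
kk = np.sum(diffs >= 3) / n_trials
print(kk)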
+
+

30.3 Step-wise sample-size determination

+

Often it is wisest to determine the sample size as you go along, rather than fixing it firmly in advance. In sequential sampling, you continue sampling until the split is sufficiently even to make you believe you have a reliable answer.

+

Related techniques work in a series of jumps from sample size to sample size. Step-wise sampling makes it less likely that you will take a sample that is much larger than necessary. For example, in the cable-television case, if you took a sample of perhaps fifty you could see whether the split was as wide as 32-18, which you figure you need for 9 to 1 odds that your answer is right. If the split were not that wide, you would sample another fifty, another 100, or however large a sample you needed until you reached a split wide enough to satisfy you that your answer was reliable and that you really knew which way the entire universe would vote.
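To make the idea concrete, here is a rough Python sketch (ours, and only one of many ways to set this up) of step-wise sampling for the cable-television poll. It draws batches of fifty from a hypothetical universe that is 70 percent "yes", and after each batch it re-computes, by simulation, the count of "yes" answers that a 50-50 benchmark universe would exceed only 5 percent of the time, stopping as soon as the observed count passes that cutoff. (Repeatedly testing as the sample grows raises the overall chance of a false verdict somewhat; we ignore that subtlety here.)

import numpy as np

rnd = np.random.default_rng()

# Hypothetical "true" universe for the poll; the 70% figure is an
# assumption made only for this demonstration.
p_yes = 0.7

n_yes = 0
n_total = 0

while True:
    # Take another batch of 50 interviews.
    batch = rnd.choice(['yes', 'no'], size=50, p=[p_yes, 1 - p_yes])
    n_yes = n_yes + np.sum(batch == 'yes')
    n_total = n_total + 50

    # Cutoff for the current sample size: the count of "yes" answers that
    # a 50-50 benchmark universe exceeds only 5% of the time.
    benchmark = np.zeros(1000)
    for i in range(1000):
        fake = rnd.choice(['yes', 'no'], size=n_total, p=[0.5, 0.5])
        benchmark[i] = np.sum(fake == 'yes')
    cutoff = np.percentile(benchmark, 95)

    if n_yes > cutoff:
        print('Stopped at sample size', n_total, 'with', n_yes, 'yes answers')
        break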

+

Step-wise sampling is not always practical, however, and the cable-television telephone-survey example is unusually favorable for its use. One major pitfall is that the early responses to a mail survey, for example, do not provide a random sample of the whole, and therefore it is a mistake simply to look at the early returns when the split is not wide enough to justify a verdict. If you have listened to early radio or television reports of election returns, you know how misleading the reports from the first precincts can be if we regard them as a fair sample of the whole.2

+

Stratified sampling is another device that helps reduce the sample size required, by balancing the amounts of information you obtain in the various strata. (Cluster sampling does not reduce the sample size. Rather, it aims to reduce the cost of obtaining a sample that will produce a given level of accuracy.)

+
+
+

30.4 Summary

+

Sample sizes are too often determined on the basis of convention or of the available budget. A more rational method of choosing the size of the sample is by balancing the diminution of error expected with a larger sample, and its value, against the cost of increasing the sample size. The relationship of various sample sizes to various degrees of accuracy can be estimated with resampling methods, which are illustrated here.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/images/13-Chap-9_002.png b/python-book/images/13-Chap-9_002.png new file mode 100644 index 00000000..3bdb1595 Binary files /dev/null and b/python-book/images/13-Chap-9_002.png differ diff --git a/python-book/images/17_d10s.png b/python-book/images/17_d10s.png new file mode 100644 index 00000000..32576729 Binary files /dev/null and b/python-book/images/17_d10s.png differ diff --git a/python-book/images/20_d10s.jpg b/python-book/images/20_d10s.jpg new file mode 100755 index 00000000..9bb9ac84 Binary files /dev/null and b/python-book/images/20_d10s.jpg differ diff --git a/python-book/images/21-Chap-17_000.png b/python-book/images/21-Chap-17_000.png new file mode 100644 index 00000000..7e9c7cfa Binary files /dev/null and b/python-book/images/21-Chap-17_000.png differ diff --git a/python-book/images/21-Chap-17_001.png b/python-book/images/21-Chap-17_001.png new file mode 100644 index 00000000..9e21f1ab Binary files /dev/null and b/python-book/images/21-Chap-17_001.png differ diff --git a/python-book/images/21-Chap-17_002.png b/python-book/images/21-Chap-17_002.png new file mode 100644 index 00000000..d44041ba Binary files /dev/null and b/python-book/images/21-Chap-17_002.png differ diff --git a/python-book/images/21-Chap-17_003.png b/python-book/images/21-Chap-17_003.png new file mode 100644 index 00000000..ca1ed7c4 Binary files /dev/null and b/python-book/images/21-Chap-17_003.png differ diff --git a/python-book/images/22-Chap-18_000.png b/python-book/images/22-Chap-18_000.png new file mode 100644 index 00000000..cd6ff97d Binary files /dev/null and b/python-book/images/22-Chap-18_000.png differ diff --git a/python-book/images/22-Chap-18_001.png b/python-book/images/22-Chap-18_001.png new file mode 100644 index 00000000..db94c27c Binary files /dev/null and b/python-book/images/22-Chap-18_001.png differ diff --git a/python-book/images/22-Chap-18_002.png b/python-book/images/22-Chap-18_002.png new file mode 100644 index 00000000..ddf5bf04 Binary files /dev/null and b/python-book/images/22-Chap-18_002.png differ diff --git a/python-book/images/22-Chap-18_006.png b/python-book/images/22-Chap-18_006.png new file mode 100644 index 00000000..4eeb5c4e Binary files /dev/null and b/python-book/images/22-Chap-18_006.png differ diff --git a/python-book/images/22-Chap-18_007.png b/python-book/images/22-Chap-18_007.png new file mode 100644 index 00000000..bd0269eb Binary files /dev/null and b/python-book/images/22-Chap-18_007.png differ diff --git a/python-book/images/22-Chap-18_008.png b/python-book/images/22-Chap-18_008.png new file mode 100644 index 00000000..e32fe0ca Binary files /dev/null and b/python-book/images/22-Chap-18_008.png differ diff --git a/python-book/images/22-Chap-18_009.png b/python-book/images/22-Chap-18_009.png new file mode 100644 index 00000000..74fca22a Binary files /dev/null and b/python-book/images/22-Chap-18_009.png differ diff --git a/python-book/images/25-Chap-21_004.png b/python-book/images/25-Chap-21_004.png new file mode 100644 index 00000000..c66d1a56 Binary files /dev/null and b/python-book/images/25-Chap-21_004.png differ diff --git a/python-book/images/25-Chap-21_005.png b/python-book/images/25-Chap-21_005.png new file mode 100644 index 00000000..ac7136fe Binary files /dev/null and b/python-book/images/25-Chap-21_005.png differ diff --git a/python-book/images/27-Chap-23_000.png b/python-book/images/27-Chap-23_000.png new file mode 100644 index 00000000..bc428ca0 Binary files /dev/null and 
b/python-book/images/27-Chap-23_000.png differ diff --git a/python-book/images/27-Chap-23_004.png b/python-book/images/27-Chap-23_004.png new file mode 100644 index 00000000..a925791c Binary files /dev/null and b/python-book/images/27-Chap-23_004.png differ diff --git a/python-book/images/27-Chap-23_005.png b/python-book/images/27-Chap-23_005.png new file mode 100644 index 00000000..ccdbd10f Binary files /dev/null and b/python-book/images/27-Chap-23_005.png differ diff --git a/python-book/images/27-Chap-23_006.png b/python-book/images/27-Chap-23_006.png new file mode 100644 index 00000000..9285f3b7 Binary files /dev/null and b/python-book/images/27-Chap-23_006.png differ diff --git a/python-book/images/28-Chap-24_000.png b/python-book/images/28-Chap-24_000.png new file mode 100644 index 00000000..5f0dbd7d Binary files /dev/null and b/python-book/images/28-Chap-24_000.png differ diff --git a/python-book/images/28-Chap-24_001.png b/python-book/images/28-Chap-24_001.png new file mode 100644 index 00000000..f5ef2338 Binary files /dev/null and b/python-book/images/28-Chap-24_001.png differ diff --git a/python-book/images/28-Chap-24_002.png b/python-book/images/28-Chap-24_002.png new file mode 100644 index 00000000..129e8554 Binary files /dev/null and b/python-book/images/28-Chap-24_002.png differ diff --git a/python-book/images/28-Chap-24_003.png b/python-book/images/28-Chap-24_003.png new file mode 100644 index 00000000..49f99e9a Binary files /dev/null and b/python-book/images/28-Chap-24_003.png differ diff --git a/python-book/images/28-Chap-24_004.png b/python-book/images/28-Chap-24_004.png new file mode 100644 index 00000000..36eff509 Binary files /dev/null and b/python-book/images/28-Chap-24_004.png differ diff --git a/python-book/images/30-Exercise-sol_000.png b/python-book/images/30-Exercise-sol_000.png new file mode 100644 index 00000000..70c0be45 Binary files /dev/null and b/python-book/images/30-Exercise-sol_000.png differ diff --git a/python-book/images/30-Exercise-sol_001.png b/python-book/images/30-Exercise-sol_001.png new file mode 100644 index 00000000..a6c4936e Binary files /dev/null and b/python-book/images/30-Exercise-sol_001.png differ diff --git a/python-book/images/30-Exercise-sol_002.png b/python-book/images/30-Exercise-sol_002.png new file mode 100644 index 00000000..33f896a1 Binary files /dev/null and b/python-book/images/30-Exercise-sol_002.png differ diff --git a/python-book/images/30-Exercise-sol_003.png b/python-book/images/30-Exercise-sol_003.png new file mode 100644 index 00000000..9c26c296 Binary files /dev/null and b/python-book/images/30-Exercise-sol_003.png differ diff --git a/python-book/images/30-Exercise-sol_004.png b/python-book/images/30-Exercise-sol_004.png new file mode 100644 index 00000000..82f38d7b Binary files /dev/null and b/python-book/images/30-Exercise-sol_004.png differ diff --git a/python-book/images/30-Exercise-sol_005.png b/python-book/images/30-Exercise-sol_005.png new file mode 100644 index 00000000..5a99a19f Binary files /dev/null and b/python-book/images/30-Exercise-sol_005.png differ diff --git a/python-book/images/30-Exercise-sol_006.png b/python-book/images/30-Exercise-sol_006.png new file mode 100644 index 00000000..98152f59 Binary files /dev/null and b/python-book/images/30-Exercise-sol_006.png differ diff --git a/python-book/images/30-Exercise-sol_007.png b/python-book/images/30-Exercise-sol_007.png new file mode 100644 index 00000000..93a927c5 Binary files /dev/null and b/python-book/images/30-Exercise-sol_007.png differ diff --git 
a/python-book/images/nile_levels.png b/python-book/images/nile_levels.png new file mode 100644 index 00000000..a4253fd6 Binary files /dev/null and b/python-book/images/nile_levels.png differ diff --git a/python-book/images/one_d10s.jpg b/python-book/images/one_d10s.jpg new file mode 100644 index 00000000..1d16ce3d Binary files /dev/null and b/python-book/images/one_d10s.jpg differ diff --git a/python-book/index.html b/python-book/index.html new file mode 100644 index 00000000..f3bff8ed --- /dev/null +++ b/python-book/index.html @@ -0,0 +1,653 @@ + + + + + + + + + + + + + +Resampling statistics + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Resampling statistics

+
+ + + +
+ +
+
Authors
+
+

Julian Lincoln Simon

+

Matthew Brett

+

Stéfan van der Walt

+

Ian Nimmo-Smith

+
+
+ + + +
+ + +
+ + +
+

Python edition

+
+

There are two editions of this book; one with examples in the R programming language 1, and another with examples in the Python language 2.

+

This is the Python edition.

+

The files on this website are free to view and download. We release the content under the Creative Commons Attribution / No Derivatives 4.0 License. If you’d like a physical copy of the book, you should be able to order it from Sage, when it is published.

+

We wrote this book in RMarkdown with Quarto. It is automatically rebuilt from source by GitHub.

+ + + + + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/inference_ideas.html b/python-book/inference_ideas.html new file mode 100644 index 00000000..ba3c3113 --- /dev/null +++ b/python-book/inference_ideas.html @@ -0,0 +1,997 @@ + + + + + + + + + +Resampling statistics - 17  The Basic Ideas in Statistical Inference + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

17  The Basic Ideas in Statistical Inference

+
+ + + +
+ + + + +
+ + +
+ +

Probabilistic statistical inference is a crucial part of the process of informing ourselves about the world around us. Statistics and statistical inference help us understand our world and make sound decisions about how to act.

+

More specifically, statistical inference is the process of drawing conclusions about populations or other collections of objects about which we have only partial knowledge from samples. Technically, inference may be defined as the selection of a probabilistic model to resemble the process you wish to investigate, investigation of that model’s behavior, and interpretation of the results. Fuller understanding of the nature of statistical inference comes with practice in handling a variety of problems.

+

Until the 18th century, humanity’s extensive knowledge of nature and technology was not based on formal probabilistic statistical inference. But now that we have already dealt with many of the big questions that are easy to answer without probabilistic statistics, and now that we live in a more ramified world than in earlier centuries, the methods of inferential statistics become ever more important.

+

Furthermore, statistical inference will surely become ever more important in the future as we voyage into realms that are increasingly difficult to comprehend. The development of an accurate chronometer to tell time on sea voyages became a crucial need when Europeans sought to travel to the New World. Similarly, probability and statistical inference become crucial as we voyage out into space and down into the depths of the ocean and the earth, as well as probe into the secrets of the microcosm and of the human mind and soul.

+

Where probabilistic statistical inference is employed, the inferential procedures may well not be the crucial element. For example, the wording of the questions asked in a public-opinion poll may be more critical than the statistical-inferential procedures used to discern the reliability of the poll results. Yet we dare not disregard the role of the statistical procedures.

+
+

17.1 Knowledge without probabilistic statistical inference

+

Let us distinguish two kinds of knowledge with which inference at large (that is, not just probabilistic statistical inference) is mainly concerned: a) one or more absolute measurements on one or more dimensions of a collection of one or more items — for example, your income, or the mean income of the people in your country; and b) comparative measurements and evaluations of two or more collections of items (especially whether they are equal or unequal)—for example, the mean income in Brazil compared to the mean income in Argentina. Types (a) and (b) both include asking whether there has been a change between one observation and another.

+

What is the conceptual basis for gathering these types of knowledge about the world? I believe that our rock bottom conceptual tool is the assumption of what we may call sameness, or continuity, or constancy, or repetition, or equality, or persistence; “constancy” and “continuity” will be the terms used most frequently here, and I shall use them interchangeably.

+

Continuity is a non-statistical concept. It is a best guess about the next point beyond the known observations, without any idea of the accuracy of the estimate. It is like testing the ground ahead when walking in a marsh. It is local rather than global. We’ll talk a bit later about why continuity seems to be present in much of the world that we encounter.

+

The other great concept in statistical inference, and perhaps in all inference taken together, is representative (usually random) sampling, to be discussed in Chapter 18. Representative sampling — which depends upon the assumption of sameness (homogeneity) throughout the universe to be investigated — is quite different than continuity; representative sampling assumes that there is no greater chance of a connection between any two elements that might be drawn into the sample than between any other two elements; the order of drawing is immaterial. In contrast, continuity assumes that there is a greater chance of connection between two contiguous elements than between either one of the elements and any of the many other elements that are not contiguous to either. Indeed, the process of randomizing is a device for doing away with continuity and autocorrelation within some bounded closed system — the sample “frame.” It is an attempt to map (describe) the entire area ahead using the device of the systematic survey. Random representative sampling enables us to make probabilistic inferences about a population based on the evidence of a sample.

+ +

To return now to the concept of sameness: Examples of the principle are that we assume: a) our house will be in the same place tomorrow as today; b) a hammer will break an egg every time you hit the latter with the former (or even the former with the latter); c) if you observe that the first fifteen persons you see walking out of a door at the airport are male, the sixteenth probably will be male also; d) paths in the village stay much the same through a person’s life; e) religious ritual changes little through the decades; f) your best guess about tomorrow’s temperature or stock price is that it will be the same as today’s. This principle of constancy is related to David Hume’s concept of constant conjunction.

+

When my children were young, I would point to a tree on our lawn and ask: “Do you think that tree will be there tomorrow?” And when they would answer “Yes,” I’d ask, “Why doesn’t the tree fall?” That’s a tough question to answer.

+

There are two reasonable bases for predicting that the tree will be standing tomorrow. First and most compelling for most of us is that almost all trees continue standing from day to day, and this particular one has never fallen; hence, what has been in the past is likely to continue. This assessment requires no scientific knowledge of trees, yet it is a very functional way to approach most questions concerning the trees — such as whether to hang a clothesline from it, or whether to worry that it will fall on the house tonight. That is, we can predict the outcome in this case with very high likelihood of being correct even though we do not utilize anything that would be called either science or statistical inference. (But what do you reply when your child says: “Why should I wear a seat belt? I’ve never been in an accident”?)

+

A second possible basis for prediction that the tree will be standing is scientific analysis of the tree’s roots — how the tree’s weight is distributed, its sickness or health, and so on. Let’s put aside this sort of scientific-engineering analysis for now.

+

The first basis for predicting that the tree will be standing tomorrow — sameness — is the most important heuristic device in all of knowledge-gathering. It is often a weak heuristic; certainly the prediction about the tree would be better grounded (!) after a skilled forester examines the tree. But persistence alone might be a better heuristic in a particular case than an engineering-scientific analysis alone.

+

This heuristic appears more obvious if the child — or the adult — were to respond to the question about the tree with another question: Why should I expect it to fall? In the absence of some reason to expect change, it is quite reasonable to expect no change. And the child’s new question does not duck the central question we have asked about the tree, any more than one ducks a probability estimate by estimating the complementary probability (that is, unity minus the probability sought); indeed, this is a very sound strategy in many situations.

+ +

Constancy can refer to location, time, relationship to another variable, or yet another dimension. Constancy may also be cyclical. Some cyclical changes can be charted or mapped with relative certainty — for example the life-cycles of persons, plants, and animals; the diurnal cycle of dark and light; and the yearly cycle of seasons. The courses of some diseases can also be charted. Hence these kinds of knowledge have long been well known.

+

Consider driving along a road. One can predict that the price at the next gasoline station will be within a few cents of the price at the gasoline station that you just passed. But as you drive further and further, the dispersion increases as you cross state lines and as gasoline taxes differ. This illustrates continuity.

+

The attention to constancy can focus on a single event, such as leaves of similar shape appearing on the same plant. Or attention can focus on single sequences of “production,” as in the process by which a seed produces a tree. For example, let’s say you see two puppies — one that looks like a low-slung dachshund, and the other a huge mastiff. You also see two grown male dogs, also apparently dachshund and mastiff. If asked about the parentage of the small ones, you are likely — using the principle of sameness — to point — quickly and with surety — to the adult dogs of the same breed. (Here it is important to notice that this answer implicitly assumes that the fathers of the puppies are among these dogs. But the fathers might be somewhere else entirely; it is in these ways that the principle of sameness can lead you astray.)

+

When applying the concept of sameness, the object of interest may be collections of data, as in Semmelweiss’s (1983, 64) data on the consistent differences in rates of maternal deaths from childbed fever in two clinics with different conditions (see Table 17.1), or the similarities in sex ratios from year to year in Graunt’s (1759, 304) data on christenings in London (Table 17.2), or the stark effect in John Snow’s (Winslow 1980, 276) data on the numbers of cholera cases associated with two London water suppliers (Table 17.3), or Kanehiro Takaki’s (Kornberg 1991, 9) discovery of the reduction in beriberi among Japanese sailors as a result of a change in diet (Table 17.4). These data seem so overwhelmingly clear cut that our naive statistical sense makes the relationships seem deterministic, and the conclusions seem straightforward. (But the same statistical sense frequently misleads us when considering sports and stock market data.)

+
+ + +++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 17.1: Deaths of Mothers from childbed fever in two clinics

                   First clinic                   Second clinic
Year         Births   Deaths   Rate         Births   Deaths   Rate
1841          3,036      237    7.7          2,442       86   3.5
1842          3,287      518   15.8          2,659      202   7.5
1843          3,060      274    8.9          2,739      164   5.9
1844          3,157      260    8.2          2,956       68   2.3
1845          3,492      241    6.8          3,241       66   2.03
1846          4,010      459   11.4          3,754      105   2.7
Total        20,042    1,989                17,791      691
Average                         9.92                           3.38
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 17.2: Ratio of number of male to number of female christenings in London

Period       Male / Female ratio
1629-1636    1.072
1637-1640    1.073
1641-1648    1.063
1649-1656    1.095
1657-1660    1.069
+
+
+ + + + + + + + + + + + + + + + + + + + + + +
Table 17.3: Rates of death from cholera for three water suppliers

Water supplier            Cholera deaths per 10,000 houses
Southwark and Vauxhall    71
Lambeth                    5
Rest of London             9
+
+
+ + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 17.4: Takaki’s Japanese Naval Records of Deaths from Beriberi

Year   Diet                 Total Navy Personnel   Deaths from Beriberi
1880   Rice diet                          4,956                  1,725
1881   Rice diet                          4,641                  1,165
1882   Rice diet                          4,769                  1,929
1883   Rice diet                          5,346                  1,236
1884   Change to new diet                 5,638                    718
1885   New diet                           6,918                     41
1886   New diet                           8,475                      3
1887   New diet                           9,106                      0
1888   New diet                           9,184                      0
+
+

Constancy and sameness can be seen in macro structures; consider, for example, the constant location of your house. Constancy can also be seen in micro aggregations — for example, the raindrops and rain that account for the predictably fluctuating height of the Nile, or the ratio of boys to girls born in London, cases in which we can average to see the “statistical” sameness. The total sum of the raindrops produces the level of a reservoir or a river from year to year, and the sum of the behaviors of collections of persons causes the birth rates in the various years.

+

Statistical inference is only needed when a person thinks that s/he might have found a pattern but the pattern is not completely obvious to all. Probabilistic inference works to test — either to confirm or discount — the belief in the pattern’s existence. We will see such cases in the following chapter.

+

People have always been forced to think about and act in situations that have not been constant — that is, situations where the amount of variability in the phenomenon makes it impossible to draw clear cut, sensible conclusions. For example, the appearance of game animals in given places and at given times has always been uncertain to hunters, and therefore it has always been difficult to know which target to hunt in which place at what time. And of course variability of the weather has always made it a very uncertain element. The behavior of one’s enemies and friends has always been uncertain, too, though uncertain in a manner different from the behavior of wild animals; there often is a gaming element in interactions with other humans. But in earlier times, data and techniques did not exist to enable us to bring statistical inference to bear.

+
+
+

17.2 The treatment of uncertainty

+

The purpose of statistical inference is to help us peer through the veil of variability when it obscures the main thrust of the data, so as to improve the decisions we make. Statistical inference (or in most cases, simply probabilistic estimation) can help:

+
    +
  • a gambler deciding on the appropriate odds in a betting game when there seems to be little or no difference between two or more outcomes;
  • +
  • an astronomer deciding upon one or another value as the central estimate for the location of a star when there is considerable variation in the observations s/he has made of the star;
  • +
  • a basketball coach pondering whether to remove from the game her best shooter who has heretofore done poorly tonight;
  • +
  • an oil-drilling firm debating whether to follow up a test-well drilling with a full-bore drilling when the probability of success is not overwhelming but the payoff to a gusher could be large.
  • +
+

Returning to the tree near the Simon house: Let’s change the facts. Assume now that one major part of the tree is mostly dead, and we expect a big winter storm tonight. What is the danger that the tree will fall on the house? Should we spend $1500 to have the mostly-dead third of it cut down? We know that last year a good many trees fell on houses in the neighborhood during such a storm.

+

We can gather some data on the proportion of old trees this size that fell on houses — about 5 in 100, so far as we can tell. Now it is no longer an open-and-shut case about whether the tree will be standing tomorrow, and we are using statistical inference to help us with our thinking. We proceed to find a set of trees that we consider similar to this one, and study the variation in the outcomes of such trees. So far we have estimated that the average for this group of trees — the mean (proportion) that fell in the last big storm — is 5 percent. Averages are much more “stable” — that is, more similar to each other — than are individual cases.

+

Notice how we use the crucial concept of sameness: We assume that our tree is like the others we observed, or at least that it is not systematically different from most of them and it is more-or-less average.

+

How would our thinking be different if our data were that one tree in 10 had fallen instead of 5 in 100? This is a question in statistical inference.

+ +

How about if we investigate further and find that 4 of 40 elms fell, but only one of 60 oaks, and ours is an oak tree. Should we consider that oaks and elms have different chances of falling? Proceeding a bit further, we can think of the question as: Should we or should we not consider oaks and elms as different? This is the type of statistical inference called “hypothesis testing”: We apply statistical procedures to help us decide whether to treat the two classes of trees as the same or different. If we should consider them the same, our worries about the tree falling are greater than if we consider them different with respect to the chance of damage.1

+

Notice that statistical inference was not necessary for accurate prediction when I asked the kids about the likelihood of a live tree falling on a day when there would be no storm. So it is with most situations we encounter. But when the assumption of constancy becomes shaky for one reason or another, as with the sick tree falling in a storm, we need a more refined form of thinking. We collect data on a large number of instances, inquire into whether the instances in which we are interested (our tree and the chance of it falling) are representative — that is, whether it resembles what we would get if we drew a sample randomly — and we then investigate the behavior of this large class of instances to see what light it throws on the instance(s) in which we are interested.

+

The procedure in this case — which we shall discuss in greater detail later on — is to ask: If oaks and elms are not different, how likely is it that only one of 60 oaks would fall whereas 4 of 40 elms would fall? Again, notice the assumption that our tree is “representative” of the other trees about which we have information — that it is not systematically different from most of them, but rather that it is more-or-less average. Our tree certainly was not chosen randomly from the set of trees we are considering. But for purposes of our analysis, we proceed as if it had been chosen randomly — because we deem it “representative.”

+
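One way this question could be put to a resampling simulation, in the spirit of this book, is the following minimal sketch (the example is ours, not part of the original text; the 10,000 trials and the choice of test statistic are arbitrary assumptions for illustration). We pool the 100 trees, of which 5 fell, shuffle them, deal 60 to the “oaks” and 40 to the “elms”, and see how often chance alone gives a split at least as lopsided as the one observed.

```python
import numpy as np

rnd = np.random.default_rng()

# Pool the observations: 5 fallen trees (1) among 100 trees (0 = still standing).
trees = np.zeros(100)
trees[:5] = 1

observed_diff = 4 / 40 - 1 / 60   # elms' proportion fallen minus oaks'

n_trials = 10_000
count = 0
for i in range(n_trials):
    shuffled = rnd.permutation(trees)
    oaks = shuffled[:60]    # pretend the first 60 are the oaks
    elms = shuffled[60:]    # and the remaining 40 are the elms
    fake_diff = np.mean(elms) - np.mean(oaks)
    if fake_diff >= observed_diff:
        count += 1

print('Proportion of shuffles as extreme as the observation:', count / n_trials)
```

A small proportion would suggest that treating oaks and elms as the same is hard to square with the data; a large proportion would not.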

This is the first of two roles that the concept of randomness plays in statistical thinking. Here is an example of the second use of the concept of randomness: We conduct an experiment — plant elm and oak trees at randomly-selected locations on a plot of land, and then try to blow them down with a wind-making machine. (The random selection of planting spots is important because some locations on a plot of ground have different growing characteristics than do others.) Some purists object that only this sort of experimental sampling is a valid subject of statistical inference; it can never be appropriate, they say, to simply assume on the basis of other knowledge that the tree is representative. I regard that purist view as a helpful discipline on our thinking. But accepting its conclusion — that one should not apply statistical inference except to randomly-drawn or randomly-constituted samples — would take from us a tool that has proven useful in a variety of activities.

+

As discussed earlier in this chapter, the data in some (probably most) scientific situations are so overwhelming that one can proceed without probabilistic inference. Historical examples include those shown above of Semmelweiss and puerperal fever, and John Snow and cholera.2 But where there was lack of overwhelming evidence, the causation of many diseases long remained unclear for lack of statistical procedures. This led to superstitious beliefs and counter-productive behavior — as quarantines against plague often were. Some effective practices also arose despite the lack of sound theory, however — the waxed costumes of doctors, and the burning of mattresses, despite the wrong theory about the causation of plague; see (Cipolla 1981).

+

So far I have spoken only of predictability and not of other elements of statistical knowledge such as understanding and control. This is simply because statistical correlation is the bedrock of most scientific understanding, and of predictability. Later we will expand the discussion beyond predictability; it holds no sacred place here.

+
+
+

17.3 Where statistical inference becomes crucial

+

There was little role for statistical inference until about three centuries ago because there existed very few scientific data. When scientific data began to appear, the need emerged for statistical inference to improve the interpretation of the data. As we saw, statistical inference is not needed when the evidence is overwhelming. A thousand cholera cases at one well and zero at another obviously does not require a statistical test. Neither would 999 cases to one, or even 700 cases to 300, because our inbred and learned statistical senses can detect that the two situations are different. But probabilistic inference is needed when the number of cases is relatively small or where for other reasons the data are somewhat ambiguous.

+

For example, when working with the 17th century data on births and deaths, John Graunt — great statistician though he was — drew wrong conclusions about some matters because he lacked modern knowledge of statistical inference. For example, he found that in the rural parish of Romsey “there were born 15 Females for 16 Males, whereas in London there were 13 for 14, which shows, that London is somewhat more apt to produce Males, then the country” (p. 71). He suggests that the “curious” inquire into the causes of this phenomenon, apparently not recognizing — and at that time he had no way to test — that the difference might be due solely to chance. He also notices (p. 94) that the variations in deaths among years in Romsey were greater than in London, and he attempted to explain this apparent fact (which is just a statistical artifact) rather than understanding that this is almost inevitable because Romsey is so much smaller than London. Because we have available to us the modern understanding of variability, we can now reach sound conclusions on these matters.3

+

Summary statistics — such as the simple mean — are devices for reducing a large mass of data (inevitably confusing unless they are absolutely clear cut) to something one can manage to understand. And probabilistic inference is a device for determining whether patterns should be considered as facts or artifacts.

+

Here is another example that illustrates the state of early quantitative research in medicine:

+
+

Exploring the effect of a common medicinal substance, Böcker examined the effect of sasparilla on the nitrogenous and other constituents of the urine. An individual receiving a controlled diet was given a decoction of sasparilla for a period of twelve days, and the volume of urine passed daily was carefully measured. For a further twelve days that same individual, on the same diet, was given only distilled water, and the daily quantity of urine was again determined. The first series of researches gave the following figures (in cubic centimeters): 1,467, 1,744, 1,665, 1,220, 1,161, 1,369, 1,675, 2,199, 887, 1,634, 943, and 2,093 (mean = 1,499); the second series: 1,263, 1,740, 1,538, 1,526, 1,387, 1,422, 1,754, 1,320, 1,809, 2,139, 1,574, and 1,114 (mean = 1,549). Much uncertainty surrounded the exactitude of these measurements, but this played little role in the ensuing discussion. The fundamental issue was not the quality of the experimental data but how inferences were drawn from those data (Coleman 1987, 207).

+
+

The experimenter Böcker had no reliable way of judging whether the data for the two groups were or were not meaningfully different, and therefore he arrived at the unsound conclusion that there was indeed a difference. (Gustav Radicke used this example as the basis for early work on statistical significance (Støvring 1999).)

+
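Böcker’s question is one we can now put to a resampling (permutation) test. The sketch below is ours, not part of the original text; it shuffles the 24 measurements quoted above between the two regimes and asks how often a chance split yields a difference in means at least as large as the one observed. The trial count is arbitrary.

```python
import numpy as np

rnd = np.random.default_rng()

# Daily urine volumes (cc) from the passage quoted above.
sasparilla = np.array([1467, 1744, 1665, 1220, 1161, 1369,
                       1675, 2199, 887, 1634, 943, 2093])
water = np.array([1263, 1740, 1538, 1526, 1387, 1422,
                  1754, 1320, 1809, 2139, 1574, 1114])

observed_diff = np.mean(water) - np.mean(sasparilla)
pooled = np.concatenate([sasparilla, water])

n_trials = 10_000
count = 0
for i in range(n_trials):
    shuffled = rnd.permutation(pooled)
    # Split the shuffled values into two fake groups of 12.
    fake_diff = np.mean(shuffled[12:]) - np.mean(shuffled[:12])
    if abs(fake_diff) >= abs(observed_diff):
        count += 1

print('Observed difference in means:', observed_diff)
print('Proportion of chance splits at least as large:', count / n_trials)
```

If only a small proportion of the random splits produce a difference that large, the observed difference is hard to attribute to chance; otherwise it is not.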

Another example: Joseph Lister convinced the scientific world of the germ theory of infection, and the possibility of preventing death with a disinfectant, with these data: Prior to the use of antiseptics — 16 post-operative deaths in 35 amputations; subsequent to the use of antiseptics — 6 deaths in 40 amputations (Winslow 1980, 303). But how sure could one be that a difference of that size might not occur just by chance? No one then could say, nor did anyone inquire, apparently.

+

Here’s another example of great scientists falling into error because of a too-primitive approach to data (Feller 1968, 1:69–70): Charles Darwin wanted to compare two sets of measured data, each containing 16 observations. At Darwin’s request, Francis Galton compared the two sets of data by ranking each, and then comparing them pairwise. The a’s were ahead 13 times. Without knowledge of the actual probabilities Galton concluded that the treatment was effective. But, assuming perfect randomness, the probability that the a’s beat [the others] 13 times or more equals 3/16. This means that in three out of sixteen cases a perfectly ineffectual treatment would appear as good or better than the treatment classified as effective by Galton.

+

That is, Galton and Darwin reached an unsound conclusion. As Feller (1968, 1:70) says, “This shows that a quantitative analysis may be a valuable supplement to our rather shaky intuition”.

+

Looking ahead, the key tool in situations like Graunt’s and Böcker’s and Lister’s is creating ceteris paribus — making “everything else the same” — with random selection in experiments, or at least with statistical controls in non-experimental situations.

+
+
+

17.4 Conclusions

+

In all knowledge-seeking and decision-making, our aim is to peer into the unknown and reduce our uncertainty a bit. The two main concepts that we use — the two great concepts in all of scientific knowledge-seeking, and perhaps in all practical thinking and decision-making — are a) continuity (or non-randomness) and the extent to which it applies in a given situation, and b) random sampling, and the extent to which we can assume that our observations are indeed chosen by a random process.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/inference_intro.html b/python-book/inference_intro.html new file mode 100644 index 00000000..5ab2bdd9 --- /dev/null +++ b/python-book/inference_intro.html @@ -0,0 +1,738 @@ + + + + + + + + + +Resampling statistics - 18  Introduction to Statistical Inference + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

18  Introduction to Statistical Inference

+
+ + + +
+ + + + +
+ + +
+ +

The usual goal of a statistical inference is a decision about which of two or more hypotheses a person will thereafter choose to believe and act upon. The strategy of such inference is to consider the behavior of a given universe in terms of the samples it is likely to produce, and if the observed sample is not a likely outcome of sampling from that universe, we then proceed as if the sample did not in fact come from that universe. (The previous sentence is a restatement in somewhat different form of the core of statistical analysis.)

+
+

18.1 Statistical inference and random sampling

+

Continuity and sameness is the fundamental concept in inference in general, as discussed in Chapter 17. Random sampling is the second great concept in inference, and it distinguishes probabilistic statistical inference from non-statistical inference as well as from non-probabilistic inference based on statistical data.

+

Let’s begin the discussion with a simple though unrealistic situation. Your friend Arista a) looks into a cardboard carton, b) reaches in, c) pulls out her hand, and d) shows you a green ball. What might you reasonably infer?

+

You might at least be fairly sure that the green ball came from the carton, though you recognize that Arista might have had it concealed in her hand when she reached into the carton. But there is not much more you might reasonably conclude at this point except that there was at least one green ball in the carton to start with. There could be no more balls; there could be many green balls and no others; there could be a thousand red balls and just one green ball; and there could be one green ball, a hundred balls of different colors, and two pounds of mud — given that she looked in first, it is not improbable that she picked out the only green ball among other material of different sorts.

+

There is not much you could say with confidence about the probability of yourself reaching into the same carton with your eyes closed and pulling out a single green ball. To use other language (which some philosophers might say is not appropriate here as the situation is too specific), there is little basis for induction about the contents of the box. Nor is the situation very different if your friend reaches in three times in a row and hands you a green ball each time.

+

So far we have put our question rather vaguely. Let us frame a more precise inquiry: What do we predict about the next item(s) we might draw from the carton? If we assume — based on who-knows-what information or notions — that another ball will emerge, we could simply use the principle of sameness and (until we see a ball of another color) predict that the next ball will be green, whether one or three or 100 balls is (are) drawn.

+

But now what about if Arista pulls out nine green balls and one red ball? The principle of sameness cannot be applied as simply as before. Based on the last previous ball, the next one will be red. But taking into account all the balls we have seen, the next will “probably” be green. We have no solid basis on which to go further. There cannot be any “solution” to the “problem” of reaching a general conclusion on the basis of these specific pieces of evidence.

+

Now consider what you might conclude if you were told that a single green ball had been drawn with a random sampling procedure from a box containing nothing but balls. Knowledge that the sample was drawn randomly from a given universe is grounds for belief that one knows much more than if a sample were not drawn randomly. First, you would be sure — if you had reasonable basis to believe that the sampling really was random, which is not easy to guarantee — that the ball came from the box. Second, you would guess that the proportion of green balls is not very small, because if there are only a few green balls and many other-colored balls, it would be unusual — that is, the event would have a low probability — to draw a green ball. Not impossible, but unlikely. And we can compute the probability of drawing a green ball — or any other combination of colors — for different assumed compositions within the box . So the knowledge that the sampling process is random greatly increases our ability — or our confidence in our ability — to infer the contents of the box.

+

Let us note well the strategy of the previous paragraph: Ask about the probability that one or more various possible contents of the box (the “universe”) will produce the observed sample, on the assumption that the sample was drawn randomly. This is the central strategy of all statistical inference, though I do not find it so stated elsewhere. We shall come back to this idea shortly.

+

There are several kinds of questions one might ask about the contents of the box. One general category includes questions about our best guesses of the box’s contents — that is, questions of estimation. Another category includes questions about our surety of that description, and our surety that the contents are similar or different from the contents of other boxes; the consideration of surety follows after estimates are made. The estimation questions can be subtle and unexpected (Savage 1972, chap. 15), but do not cause major controversy about the foundations of statistics. So we can quickly move on to questions about the extent of surety in our estimations.

+

Consider your reaction if the sampling produces 10 green balls in a row, or 9 out of 10. If you had no other information (a very important assumption that we will leave aside for now), your best guess would be that the box contains all green balls, or a proportion of 9 of 10, in the two cases respectively. This estimation process seems natural enough.

+

You would be surprised if someone told you that instead of the box containing the proportion in the sample, it contained just half green balls. How surprised? Intuitively, the extent of your surprise would depend on the probability that a half-green “universe” would produce 10 or 9 green balls out of 10. This surprise is a key element in the logic of the hypothesis-testing branch of statistical inference.

+

We learn more about the likely contents of the box by asking about the probability that various specific populations of balls within the box would produce the particular sample that we received. That is, we can ask how likely a collection of 25 percent green balls is to produce (say) 9 of 10 green ones, and how likely collections of 50 percent, 75 percent, 90 percent (and any other collections of interest) are to produce the observed sample. That is, we ask about the consistency between any particular hypothesized collection within the box and the sample we observe. And it is reasonable to believe that those universes which have greater consistency with the observed sample — that is, those universes that are more likely to produce the observed sample — are more likely to be in the box than other universes. This (to repeat, as I shall repeat many times) is the basic strategy of statistical investigation. If we observe 9 of 10 green balls, we then determine that universes with (say) 9/10 and 10/10 green balls are more consistent with the observed evidence than are universes of 0/10 and 1/10 green balls. So by this process of considering specific universes that the box might contain, we make possible more specific inferences about the box’s probable contents based on the sample evidence than we could without this process.

+
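As a sketch of how such consistency checks might be run by simulation (our illustration, not part of the original text; the candidate proportions and the 10,000 trials per universe are arbitrary choices): draw 10 balls from each hypothesized universe many times over, and record how often each universe yields 9 green.

```python
import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
for p_green in [0.25, 0.5, 0.75, 0.9]:
    # Each row is one sample of 10 balls; True means a green ball.
    samples = rnd.random((n_trials, 10)) < p_green
    n_green = np.sum(samples, axis=1)
    print(f'{p_green:.0%} green universe:',
          'p(9 of 10 green) is about', np.mean(n_green == 9))
```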

Please notice the role of the assessment of probabilities here: By one technical means or another (either simulation or formulas), we assess the probabilities that a particular universe will produce the observed sample, and other samples as well.

+

It is of the highest importance to recognize that without additional knowledge (or assumption) one cannot make any statements about the probability of the sample having come from any particular universe, on the basis of the sample evidence. (Better read that last sentence again.) We can only speak about the probability that a particular universe will produce the observed sample, a very different matter. This issue will arise again very sharply in the context of confidence intervals.

+

Let us generalize the steps in statistical inference:

+
    +
  1. Frame the original question as: What is the chance of getting the observed sample x from population X? That is, what is the probability of (If x then X)?

  2. +
  3. Proceed to this question: What kinds of samples does X produce, with which probability? That is, what is the probability of this particular x coming from X? That is, what is p(x|X)?

  4. +
  5. Actually investigate the behavior of X with respect to x and other samples. One can do this in two ways:

    +
      +
    1. Use the formulaic calculus of probability, perhaps resorting to Monte Carlo methods if an appropriate formula does not exist. Or,
    2. +
    3. Use resampling (in the larger sense), the domain of which equals (all Monte Carlo experimentation) minus (the use of Monte Carlo methods for approximations, investigation of complex functions in statistics and other theoretical mathematics, and uses elsewhere in science). Resampling in its more restricted sense includes the bootstrap, permutation tests, and other non-parametric methods.
    4. +
  6. +
  7. Interpretation of the probabilities that result from step 3 in terms of

    +
      +
    i) acceptance or rejection of hypotheses, ii) surety of conclusions, or iii) inputs to decision theory.
    2. +
  8. +
+

Here is a short definition of statistical inference:

+
+

The selection of a probabilistic model that might resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.

+
+

We will get even more specific about the procedure when we discuss the canonical procedures for hypothesis testing and for the finding of confidence intervals in the chapters on those subjects.

+

The discussion so far has been in the spirit of what is known as hypothesis testing. The result of a hypothesis test is a decision about whether or not one believes that the sample is likely to have been drawn randomly from the “benchmark universe” X. The logic is that if the probability of such a sample coming from that universe is low, we will then choose to believe the alternative — to wit, that the sample came from the universe that resembles the sample.

+ +

The underlying idea is that if an event would be very surprising if it really happened — as it would be very surprising if the dog had really eaten the homework (see Chapter 21) — we are inclined not to believe in that possibility. (This logic will be explored further in later chapters on hypothesis testing.)

+

We have so far assumed that our only relevant knowledge is the sample. And though we almost never lack some additional information, this can be a sensible way to proceed when we wish to suppress any other information or speculation. This suppression is controversial; those known as Bayesians or subjectivists want us to take into account all the information we have. But even they would not dispute suppressing information in certain cases — such as a teacher who does not want to know students’ IQ scores because s/he might want to avoid the possibility of unconsciously being affected by that score, or an employer who wants not to know the potential employee’s ethnic or racial background even though the hiring process might be more “successful” on some metric, or a sports coach who refuses to pick the starting team each year until the players have competed for the positions.

+ +

Now consider a variant on the green-ball situation discussed above. Assume now that you are told that samples of balls are alternately drawn from one of two specified universes — two buckets of balls, one with 50 percent green balls and the other with 80 percent green balls. Now you are shown a sample of nine green and one red balls drawn from one of those buckets. On the basis of your sample you can then say how probable it is that the sample came from one or the other universe. You proceed by computing the probabilities (often called the likelihoods in this situation) that each of those two universes would individually produce the observed samples — probabilities that you could arrive at with resampling, with Pascal’s Triangle, or with a table of binomial probabilities, or with the Normal approximation and the Z distribution, or with yet other devices. Those probabilities are .01 and .27, and the ratio of the two (.01/.27) is a bit less than .04. That is, fair betting odds are about 1 to 27.

+
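Those two likelihoods can be checked directly from the binomial formula; here is one way, a small sketch of ours (not part of the original text) using the SciPy package:

```python
from scipy.stats import binom

# Probability of exactly 9 green balls in a sample of 10, for each bucket.
p_half = binom.pmf(9, 10, 0.5)    # roughly .01
p_eighty = binom.pmf(9, 10, 0.8)  # roughly .27

print('50% green bucket:', p_half)
print('80% green bucket:', p_eighty)
print('Ratio of the two:', p_half / p_eighty)  # a bit less than .04 — odds of about 1 to 27
```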

Let us consider a genetics problem on this model. Plant A produces 3/4 black seeds and 1/4 reds; plant B produces all reds. You get a red seed. Which plant would you guess produced it? You surely would guess plant B. Now, how about 9 reds and a black, from Plants A and C, the latter producing 50 percent reds on average?

+

To put the question more precisely: What betting odds would you give that the one red seed came from plant B? Let us reason this way: If you do this again and again, 4 of 5 of the red seeds you see will come from plant B. Therefore, reasonable (or “fair”) odds are 4 to 1, because this is in accord with the ratios with which red seeds are produced by the two plants — 4/4 to 1/4.

+

How about the sample of 9 reds and a black, and plants A and C? It would make sense that the appropriate odds would be derived from the probabilities of the two plants producing that particular sample, probabilities which we computed above.

+

Now let us move to a bit more complex problem: Consider two buckets — bucket G with 2 red and 1 black balls, and bucket H with 100 red and 100 black balls. Someone flips a coin to decide which bucket will be drawn from, reaches into that bucket, and chooses two balls without replacing the first one before drawing the second. Both are red. What are the odds that the sample came from bucket G? Clearly, the answer should derive from the probabilities that the two buckets would produce the observed sample.

+
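A sketch of how this, too, could be settled by simulation (ours, not part of the original text; the number of trials is arbitrary): flip the coin, draw two balls without replacement, and keep a separate tally of the two-red samples coming from each bucket.

```python
import numpy as np

rnd = np.random.default_rng()

bucket_g = np.array(['red', 'red', 'black'])
bucket_h = np.array(['red'] * 100 + ['black'] * 100)

n_trials = 10_000
two_reds_from_g = 0
two_reds_from_h = 0
for i in range(n_trials):
    from_g = rnd.integers(2) == 0          # coin flip chooses the bucket
    bucket = bucket_g if from_g else bucket_h
    # Draw two balls without replacing the first.
    draw = rnd.choice(bucket, size=2, replace=False)
    if np.all(draw == 'red'):
        if from_g:
            two_reds_from_g += 1
        else:
            two_reds_from_h += 1

print('Two-red samples from G:', two_reds_from_g, ' from H:', two_reds_from_h)
print('Estimated probability the sample came from G:',
      two_reds_from_g / (two_reds_from_g + two_reds_from_h))
```

Changing `replace=False` to `replace=True` answers the “just for fun” variant that follows.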

(Now just for fun, how about if the first ball drawn is thrown back after examining? What now are the appropriate odds?)

+

Let’s restate the central issue. One can state the probability that a particular plant which produces on average 1 red and 3 black seeds will produce one red seed, or 5 reds among a sample of 10. But without further assumptions — such as the assumption above that the possibilities are limited to two specific universes — one cannot say how likely a given red seed is to have come from a given plant, even if we know that that plant produces only reds. (For example, it may have come from other plants producing only red seeds.)

+

When we limit the possibilities to two universes (or to a larger set of specified universes) we are able to put a probability on one hypothesis or another. But to repeat, in many or most cases, one cannot reasonably assume it is only one or the other. And then we cannot state any odds that the sample came from a particular universe. This is a very difficult point to grasp, experience shows, but a crucial one. (It is the sort of subtle issue that makes statistics so difficult.)

+

The additional assumptions necessary to talk about the probability that the red seed came from a given plant are the stuff of statistical inference. And they must be combined with such “objective” probabilistic assessments as the probability that a 1-red-3-black plant will produce one red, or 5 reds among 10 seeds.

+

Now let us move one step further. Instead of stating as a fact under our control that there is a .5 chance of the sample being drawn from each of the two buckets in the problem above, let us assume that we do not know the probability of each bucket being picked, but instead we estimate a probability of .5 for each bucket, based on a variety of other information that all is uncertain. But though the facts are now different, the most reasonable estimate of the odds that the observed sample was drawn from one or the other bucket will not be different than before — because in both situations we were working with a “prior probability” of .5.

+ +

Now let us go a step further by allowing the universes from which the sample may have come to have different assumed probabilities as well as different compositions. That is, we now consider prior probabilities other than .5.

+

How do we decide which universe(s) to investigate for the probability of producing the observed sample, and of producing samples that are even less likely, in the sense of being more surprising? That judgment depends upon the purpose of your analysis, upon your point of view of how statistics ought to be done, and upon some other factors.

+

It should be noted that the logic described so far applies in exactly the same fashion whether we do our work estimating probabilities with the resampling method or with conventional methods. We can figure the probability of nine or more green chips from a universe of (say) p = .7 with either approach.

+

So far we have discussed the comparison of various hypotheses and possible universes. We must also consider where the consideration of the reliability of estimates comes in. This leads to the concept of confidence limits, which will be discussed in Chapter 26 and Chapter 27.

+
+
+

18.2 Samples Whose Observations May Have More Than Two Values

+

So far we have discussed samples and universes that we can characterize as proportions of elements which can have only one of two characteristics — green or other, in this case, which is equivalent to “1” or “0.” This expositional choice has been solely for clarity. All the ideas discussed above pertain just as well to samples whose observations may have more than two values, and which may be either discrete or continuous.

+
+
+

18.3 Summary and conclusions

+

A statistical question asks about the probabilities of a sample having arisen from various source universes in light of the evidence of a sample. In every case, the statistical answer comes from considering the behavior of particular specified universes in relation to the sample evidence and to the behavior of other possible universes. That is, a statistical problem is an exercise in postulating universes of interest and interpreting the probabilistic distributions of results of those universes. The preceding sentence is the key operational idea in statistical inference.

+

Different sorts of realistic contexts call for different ways of framing the inquiry. For each of the established models there are types of problems which fit that model better than other models, and other types of problems for which the model is quite inappropriate.

+

Fundamental wisdom in statistics, as in all other contexts, is to employ a large tool kit rather than just applying only a hammer, screwdriver, or wrench no matter what the problem is at hand. (Philosopher Abraham Kaplan once stated Kaplan’s Law of scientific method: Give a small boy a hammer and there is nothing that he will encounter that does not require pounding.) Studying the text of a poem statistically to infer whether Shakespeare or Bacon was the more likely author is quite different than inferring whether bioengineer Smythe can produce an increase in the proportion of calves, and both are different from decisions about whether to remove a basketball player from the game or to produce a new product.

+

Some key points: 1) In statistical inference as in all sound thinking, one’s purpose is central . All judgments should be made relative to that purpose, and in light of costs and benefits. (This is the spirit of the Neyman-Pearson approach). 2) One cannot avoid making judgments; the process of statistical inference cannot ever be perfectly routinized or objectified. Even in science, fitting a model to experience requires judgment. 3) The best ways to infer are different in different situations — economics, psychology, history, business, medicine, engineering, physics, and so on. 4) Different tools must be used when the situations call for them — sequential vs. fixed sampling, Neyman-Pearson vs. Fisher, and so on. 5) In statistical inference it is wise not to argue about the proper conclusion when the data and procedures are ambiguous. Instead, whenever possible, one should go back and get more data, hence lessening the importance of the efficiency of statistical tests. In some cases one cannot easily get more data, or even conduct an experiment, as in biostatistics with cancer patients. And with respect to the past one cannot produce more historical data. But one can gather more and different kinds of data, e.g. the history of research on smoking and lung cancer.

+ + + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/intro.html b/python-book/intro.html new file mode 100644 index 00000000..bcb03958 --- /dev/null +++ b/python-book/intro.html @@ -0,0 +1,852 @@ + + + + + + + + + +Resampling statistics - 1  Introduction + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

1  Introduction

+
+ + + +
+ + + + +
+ + +
+ +
+

1.1 Uses of Probability and Statistics

+

This chapter introduces you to probability and statistics. First come examples of the kinds of practical problems that this knowledge can solve for us. One reason that the term “statistic” often scares and confuses people is that the term has several sorts of meanings. We discuss the meanings of “statistics” in the section “Types of statistics”. Then comes a discussion on the relationship of probabilities to decisions. Following this we talk about the limitations of probability and statistics. And last is a discussion of why statistics can be such a difficult subject. Most important, this chapter describes the types of problems the book will tackle.

+

At the foundation of sound decision-making lies the ability to make accurate estimates of the probabilities of future events. Probabilistic problems confront everyone — from the company owner considering whether to expand their business, to the scientist testing a vaccine, to the individual deciding whether to buy insurance.

+
+
+

1.2 What kinds of problems shall we solve?

+

These are some examples of the kinds of problems that we can handle with the methods described in this book:

+
    +
  1. You are a doctor trying to develop a treatment for COVID19. Currently you are working on a medicine labeled AntiAnyVir. You have data from patients to whom medicine AntiAnyVir was given. You want to judge on the basis of those results whether AntiAnyVir really improves survival or whether it is no better than a sugar pill.

  2. +
  3. You are the campaign manager for the Republicrat candidate for President of the United States. You have the results from a recent poll taken in New Hampshire. You want to know the chance that your candidate would win in New Hampshire if the election were held today.

  4. +
  5. You are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action. (A sketch of a simulation for this problem appears just after this list.)

  6. +
  7. You are an environmental scientist monitoring levels of phosphorus pollution in a lake. The phosphorus levels fluctuated around a relatively low level until recently, but they have been higher in the last few years. Do these recent higher levels indicate some important change, or can we put them down to chance — ordinary variation from year to year?

  8. +
+
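To give a flavor of the approach, here is one way the ambulance question above might be simulated — a minimal sketch only (ours, not part of the original text; 10,000 simulated days is an arbitrary choice):

```python
import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000        # number of simulated days
count_3_or_more = 0
for i in range(n_trials):
    # Each of the 20 ambulances has a 1-in-10 chance of being unfit today.
    unfit = rnd.random(20) < 0.1
    if np.sum(unfit) >= 3:
        count_3_or_more += 1

print('Estimated chance that 3 or more ambulances are out of action:',
      count_3_or_more / n_trials)
```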

The core of all these problems, and of the others that we will deal with in this book, is that you want to know the “chance” or “probability” — different words for the same idea — that some event will or will not happen, or that something is true or false. To put it another way, we want to answer questions about “What is the probability that…?”, given the body of information that you have in hand.

+

The question “What is the probability that…?” is usually not the ultimate question that interests us at a given moment.

+

Eventually, a person wants to use the estimated probability to help make a decision concerning some action one might take. These are the kinds of decisions, related to the questions about probability stated above, that ultimately we would like to make:

+
    +
  1. Should you (the researcher) advise doctors to prescribe medicine AntiAnyVir for COVID19 patients, or, should you (the researcher) continue to study AntiAnyVir before releasing it for use? A related matter: should you and other research workers feel sufficiently encouraged by the results of medicine AntiAnyVir so that you should continue research in this general direction rather than turning to some other promising line of research? These are just two of the possible decisions that might be influenced by the answer to the question about the probability that medicine AntiAnyVir is effective in treating COVID19.

  2. +
  3. Should you advise the Republicrat presidential candidate to go to New Hampshire to campaign? If the poll tells you conclusively that she or he will not win in New Hampshire, you might decide that it is not worthwhile investing effort to campaign there. Similarly, if the poll tells you conclusively that they surely will win in New Hampshire, you probably would not want to campaign further there. But if the poll is not conclusive in one direction or the other, you might choose to invest the effort to campaign in New Hampshire. Analysis of the chances of winning in New Hampshire based on the poll data can help you make this decision sensibly.

  4. +
  5. Should your company buy more ambulances? Clearly the answer to this question is affected by the probability that a given number of your ambulances will be out of action on a given day. But of course this estimated probability will be only one part of the decision.

  6. +
  7. Should we search for new causes of phosphorus pollution as a result of the recent measurements from the lake? If the causes have not changed, and the recent higher values were just the result of ordinary variation, our search will end up wasting time and money that could have been better spent elsewhere.

  8. +
+

The kinds of questions to which we wish to find probabilistic and statistical answers may be found throughout the social, biological and physical sciences; in business; in politics; in engineering; and in most other forms of human endeavor.

+
+
+

1.3 Types of statistics

+

The term statistics sometimes causes confusion and therefore needs explanation.

+

Statistics can mean two related things. It can refer to a certain sort of number — of which more below. Or it can refer to the field of inquiry that studies these numbers.

+

A statistic is a number that we can calculate from a larger collection of numbers we are interested in. For example, Table 1.1 has some yearly measures of “soluble reactive phosphorus” (SRP) from Lough Erne — a lake in Ireland (Zhou, Gibson, and Foy 2000).

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1.1: Soluble Reactive Phosphorus in Lough Erne

Year    SRP
1974    26.2
1975    22.8
1976    37.2
1983    54.7
1984    37.7
1987    54.3
1989    35.7
1991    72.0
1992    85.1
1993    86.7
1994    93.3
1995    107.2
1996    80.3
1997    70.7
+
+ + +
+
+

We may want to summarize this set of SRP measurements. For example, we could add up all the SRP values to give the total. We could also divide the total by the number of measurements, to give the average. Or we could measure the spread of the values by finding the minimum and the maximum — see Table 1.2. All these numbers are descriptive statistics, because they are summaries that describe the collection of SRP measurements.

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 1.2: Statistics for SRP levels

Descriptive statistics for SRP
Total      863.9
Mean        61.7
Minimum     22.8
Maximum    107.2
+
+ + +
+
+

Descriptive statistics are nothing new to you; you have been using many of them all your life.

+

We can calculate other numbers that can be useful for drawing conclusions or inferences from a collection of numbers; these are inferential statistics. Inferential statistics are often probability values that give the answer to questions like “What are the chances that …”.

+

For example, imagine we suspect there was some environmental change in 1990. We see that the average SRP value before 1990 was 38.4 and the average SRP value after 1990 was 85. That gives us a difference in the average of 46.6. But, could this difference be due to chance fluctuations from year to year? Were we just unlucky in getting a few larger measurements in later years? We could use methods that you will see in this book to calculate a probability to answer that question. The probability value is an inferential statistic, because we can use it to draw an inference about the measures.

+
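As a sketch of how the descriptive statistics of Table 1.2, and the before/after difference just discussed, can be computed from the values in Table 1.1 (our illustration; the variable names are arbitrary):

```python
import numpy as np

# Year and SRP columns from Table 1.1.
years = np.array([1974, 1975, 1976, 1983, 1984, 1987, 1989,
                  1991, 1992, 1993, 1994, 1995, 1996, 1997])
srp = np.array([26.2, 22.8, 37.2, 54.7, 37.7, 54.3, 35.7,
                72.0, 85.1, 86.7, 93.3, 107.2, 80.3, 70.7])

# The descriptive statistics of Table 1.2.
print('Total:', round(np.sum(srp), 1))     # 863.9
print('Mean:', round(np.mean(srp), 1))     # 61.7
print('Minimum:', np.min(srp))             # 22.8
print('Maximum:', np.max(srp))             # 107.2

# The difference of averages discussed above.
before = srp[years < 1990]
after = srp[years >= 1990]
print('Mean before 1990:', round(np.mean(before), 1))   # about 38.4
print('Mean after 1990:', round(np.mean(after), 1))      # about 85.0
# About 46.7 here; the 46.6 in the text comes from subtracting the rounded means.
print('Difference:', np.mean(after) - np.mean(before))
```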

Inferential statistics use descriptive statistics as their input. Inferential statistics can be used for two purposes: to aid scientific understanding by estimating the probability that a statement is true or not, and to aid in making sound decisions by estimating which alternative among a range of possibilities is most desirable.

+
+
+

1.4 Probabilities and decisions

+

There are two differences between questions about probabilities and the ultimate decision problems:

+
    +
  1. Decision problems always involve evaluation of the consequences — that is, taking into account the benefits and the costs of the consequences — whereas pure questions about probabilities are estimated without evaluations of the consequences.

  2. +
  3. Decision problems often involve a complex combination of sets of probabilities and consequences, together with their evaluations. For example: In the case of the contractor’s ambulances, it is clear that there will be a monetary loss to the contractor if she makes a commitment to have 17 ambulances available for tomorrow and then cannot produce that many. Furthermore, the contractor must take into account the further consequence that there may be a loss of goodwill for the future if she fails to meet her obligations tomorrow — and then again there may not be any such loss; and if there is such loss of goodwill it might be a loss worth $10,000 or $20,000 or $30,000. Here the decision problem involves not only the probability that there will be fewer than 17 ambulances tomorrow but also the immediate monetary loss and the subsequent possible losses of goodwill, and the valuation of all these consequences.

  4. +
+

Continuing with the decision concerning whether to do more research on medicine AntiAnyVir: If you do decide to continue research on AntiAnyVir, (a) you may, or (b) you may not, come up with an important general treatment for viral infections within, say, the next 3 years. If you do come up with such a general treatment, of course it will have very great social benefits. Furthermore, (c) if you decide not to do further research on AntiAnyVir now, you can direct your time and that of other people to research in other directions, with some chance that the other research will produce a less-general but nevertheless useful treatment for some relatively infrequent viral infections. Those three possibilities have different social benefits. The probability that medicine AntiAnyVir really has some benefit in treating COVID19, as judged by your prior research, obviously will influence your decision on whether or not to do more research on medicine AntiAnyVir. But that judgment about the probability is only one part of the overall web of consequences and evaluations that must be taken into account when making your decision whether or not to do further research on medicine AntiAnyVir.

+

Why does this book limit itself to the specific probability questions when ultimately we are interested in decisions? A first reason is division of labor. The more general aspects of the decision-making process in the face of uncertainty are treated well in other books. This book’s special contribution is its new approach to the crucial process of estimating the chances that an event will occur.

+

Second, the specific elements of the overall decision-making process taught in this book belong to the interrelated subjects of probability theory and statistics. Though probabilistic and statistical theory ultimately is intended to be part of the general decision-making process, often only the estimation of probabilities is done systematically, and the rest of the decision-making process — for example, the decision whether or not to proceed with further research on medicine AntiAnyVir — is done in informal and unsystematic fashion. This is regrettable, but the fact that this is standard practice is an additional reason why the treatment of statistics and probability in this book is sufficiently complete.

+

A third reason that this book covers only statistics and not numerical reasoning about decisions is because most college and university statistics courses and books are limited to statistics.

+
+
+

1.5 Limitations of probability and statistics

+

Statistical testing is not equivalent to research, and research is not the same as statistical testing. Rather, statistical inference is a handmaiden of research, often but not always necessary in the research process.

+

A working knowledge of the basic ideas of statistics, especially the elements of probability, is unsurpassed in its general value to everyone in a modern society. Statistics and probability help clarify one’s thinking and improve one’s capacity to deal with practical problems and to understand the world. To be efficient, a social scientist or decision-maker is almost certain to need statistics and probability.

+

On the other hand, important research and top-notch decision-making have been done by people with absolutely no formal knowledge of statistics. And a limited study of statistics sometimes befuddles students into thinking that statistical principles are guides to research design and analysis. This mistaken belief only inhibits the exercise of sound research thinking. Alfred Kinsey long ago put it this way:

+
+

… no statistical treatment can put validity into generalizations which are based on data that were not reasonably accurate and complete to begin with. It is unfortunate that academic departments so often offer courses on the statistical manipulation of human material to students who have little understanding of the problems involved in securing the original data. … When training in these things replaces or at least precedes some of the college courses on the mathematical treatment of data, we shall come nearer to having a science of human behavior. (Kinsey, Pomeroy, and Martin 1948, p 35).

+
+

In much — even most — research in social and physical sciences, statistical testing is not necessary. Where there are large differences between different sorts of circumstances (for example, if a new medicine cures 90 patients out of 100 and the old medicine cures only 10 patients out of 100), we do not need refined statistical tests to tell us whether or not the new medicine really has an effect. And the best research is that which shows large differences, because it is the large effects that matter. If the researcher finds that s/he must use refined statistical tests to reveal whether there are differences, this sometimes means that the differences do not matter much.

+

To repeat, then, some or even much research — especially in the physical and biological sciences — does not need the kind of statistical manipulation that will be described in this book. But most decision problems do need the kind of probabilistic and statistical input that is described in this book.

+

Another matter: If the raw data are of poor quality, probabilistic and statistical manipulation cannot be very useful. In the example of the contractor and her ambulances, if the contractor’s estimate that a given ambulance has a one-in-ten chance of being unfit for service on a given day is very inaccurate, then our calculation of the probability that three or more ambulances will be out of order on a given day will not be helpful, and may be misleading. To put it another way, one cannot make bread without flour, yeast, and water. And good raw data are the flour, yeast and water necessary to get an accurate estimate of a probability. The most refined statistical and probabilistic manipulations are useless if the input data are poor — the result of unrepresentative samples, uncontrolled experiments, inaccurate measurement, and the host of other ways that information gathering can go wrong. (See Simon and Burstein (1985) for a catalog of the obstacles to obtaining good data.) Therefore, we should constantly direct our attention to ensuring that the data upon which we base our calculations are the best it is possible to obtain.

+
+
+

1.6 Why is Statistics Such a Difficult Subject?

+

Why is statistics such a tough subject for so many people?

+

“Among mathematicians and statisticians who teach introductory statistics, there is a tendency to view students who are not skillful in mathematics as unintelligent,” say two of the authors of a popular introductory text (McCabe and McCabe 1989, p 2). As these authors imply, this view is out-and-out wrong; lack of general intelligence on the part of students is not the root of the problem.

+

Scan this book and you will find almost no formal mathematics. Yet nearly every student finds the subject very difficult — as difficult as anything taught at universities. The root of the difficulty is that the subject matter is extremely difficult. Let’s find out why.

+

It is easy to find out with high precision which movie is playing tonight at the local cinema; you can look it up on the web or call the cinema and ask. But consider by contrast how difficult it is to determine with accuracy:

+
    +
  1. Whether we will save lives by recommending vitamin D supplements for the whole population as protection against viral infections. Some evidence suggests that low vitamin D levels predispose to more severe lung infections, and that taking supplements can help (Martineau et al. 2017). But, how certain can we be of the evidence? How safe are the supplements? Does the benefit, and the risk, differ by ethnicity?
  2. +
  3. What will be the result of more than a hundred million Americans voting for president a month from now; the best attempt usually is a sample of 2000 people, selected in some fashion or another that is far from random, weeks before the election, asked questions that are by no means the same as the actual voting act, and so on;
  4. +
  5. How men feel about women and vice versa.
  6. +
+

The cleverest and wisest people have pondered for thousands of years how to obtain answers to questions like these, and made little progress. Dealing with uncertainty was completely outside the scope of the ancient philosophers. It was not until two or three hundred years ago that people began to make any progress at all on these sorts of questions, and it was only about one century ago that we began to have reasonably competent procedures — simply because the problems are inherently difficult. So it is no wonder that the body of these methods is difficult.

+

So: The bad news is that the subject is extremely difficult. The good news is that you — and that means you — can understand it with hard thinking, even if you have no mathematical background beyond arithmetic and you think that you have no mathematical capability. That’s because the difficulty lies in such matters as pin-pointing the right question, but not in any difficulties of mathematical manipulation.

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/monte_carlo.html b/python-book/monte_carlo.html new file mode 100644 index 00000000..e1eb1c00 --- /dev/null +++ b/python-book/monte_carlo.html @@ -0,0 +1,709 @@ + + + + + + + + + +Resampling statistics - 15  The Procedures of Monte Carlo Simulation (and Resampling) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

15  The Procedures of Monte Carlo Simulation (and Resampling)

+
+ + + +
+ + + + +
+ + +
+ +

Until now, the steps to follow in solving particular problems have been chosen to fit the specific facts of that problem. And so they always must. Now let’s generalize what we have done in the previous chapters on probability into a general procedure for such problems, which will in turn become the basis for a detailed procedure for resampling simulation in statistics. The generalized procedure describes what we are doing when we estimate a probability using Monte Carlo simulation.

+
+

15.1 A definition and general procedure for Monte Carlo simulation

+

This is what we shall mean by the term Monte Carlo simulation when discussing problems in probability: Using the given data-generating mechanism (such as a coin or die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples. That’s it in a nutshell. In some cases, it may also be appropriate to amplify this procedure with additional assumptions.

+

This definition fits both problems in pure probability as well as problems in statistics, but in the latter case the process is called resampling. The reason that the same definition fits is that at the core of every problem in inferential statistics lies a problem in probability; that is, the procedure for handling every statistics problem is the procedure for handling a problem in probability. (There is related discussion of definitions in Chapter 8 and Chapter 20.)

+

The following series of steps should apply to all problems in probability. I’ll first state the procedure straight through without examples, and then show how it applies to individual examples.

+
    +
  • Step A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. The term “universe” refers to the system that is relevant for a single simple event.
  • +
  • Step B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.
  • +
  • Step C Describe any composite events. If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.
  • +
  • Step D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.
  • +
+

Now let us apply the general procedure to some examples to make it more concrete.

+

Here are four problems to be used as illustrations:

+
    +
  1. Three percent gizmos — if on average 3 percent of the gizmos sent out are defective, what is the chance that there will be more than 10 defectives in a shipment of 200?
  2. +
  3. Three girls, 106 in 206 — what are the chances of getting three or more girls in the first four children, if the probability of a female birth is 106/206?
  4. +
  5. Less than 20 baskets — what are the chances of Joe Hothand scoring 20 or fewer baskets in 57 shots if his long-run average is 47 percent?
  6. +
  7. Same birthday in 25 — what is the probability of two or more people in a group of 25 persons having the same birthday — i. e., the same month and same day of the month?
  8. +
+
+
+

15.2 Apply step A — construct a simulation universe

+

As a reminder:

+
    +
  • Step A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. The term “universe” refers to the system that is relevant for a single simple event.
  • +
+

For our example problems:

+
    +
  1. Three percent gizmos: A random drawing with replacement from the set of numbers 1 through 100 with 1 through 3 designated as defective, simulates the system that produces 3 defective gizmos among 100.
  2. +
  3. Three girls, 106 in 206: You could take two decks of cards, from which you take out both Aces of spades, and replace these with a Joker. You now have 103 cards (206 / 2), of which 53 (106 / 2) are red, counting the Joker as red. You could also use a random drawing from two sets of numbers, one comprising 1 through 106 and the other 107 through 206. Either universe can simulate the system that produces a single male or female birth, when we are estimating the probability of three girls in the first four children. Notice that in this universe the probability of a girl remains the same from trial event to trial event — that is, the trials are independent — demonstrating a universe from which we sample with replacement.
  4. +
  5. Less than 20 baskets: A random drawing with replacement from a bucket containing a hundred balls, 47 red and 53 black, simulates the system that produces 47 percent baskets for Joe Hothand.
  6. +
  7. Same birthday in 25: A random drawing with replacement from the numbers 1 through 365 simulates the system that produces a birthday.
  8. +
+

This step A includes two operations:

+
    +
  1. Decide which symbols will stand for the elements of the universe you will simulate.
  2. +
  3. Determine whether the sampling will be with or without replacement. (This can be ambiguous in a complex modeling situation.)
  4. +
+

Hard thinking is required in order to determine the appropriate “real” universe whose properties interest you.
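The with-or-without-replacement choice in the second of these operations is worth seeing in code. The following is a small demonstration sketch using NumPy’s default random number generator (the variable names are ours, for illustration only): with replacement, the same value can be drawn more than once; without replacement, each value can be drawn at most once.

import numpy as np
rnd = np.random.default_rng()

numbers = [1, 2, 3, 4, 5]

# Sampling WITH replacement: the same value can be drawn more than once.
print(rnd.choice(numbers, size=5, replace=True))

# Sampling WITHOUT replacement: each value can appear at most once, so
# this is just a shuffled version of the original five values.
print(rnd.choice(numbers, size=5, replace=False))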

+
+
+

15.3 Apply step B — specify the procedure

+
    +
  • Step B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.
  • +
+

For example:

+
    +
  1. Three percent gizmos: For a single gizmo, you can draw a single number from an infinite universe. Or one can use a finite set with replacement and shuffling.
  2. +
  3. Three girls, 106 in 206: In the case of three or more daughters among four children, you could use the deck of 103 cards, from Step A, of which 53 count as red. To simulate one child, you can draw a card and then replace it, noting female for a red card or a Joker. Or if you are using random numbers from the computer, the random numbers automatically simulate replacement. Just as the chances of having a boy or a girl do not change depending on the sex of the preceding child, so we want to ensure through sampling with replacement that the chances do not change each time we choose from the deck of cards.
  4. +
  5. Less than 20 baskets: In the case of Joe Hothand’s shooting, the procedure is to consider the numbers 1 through 47 as “baskets,” and 48 through 100 as “misses,” with the same other considerations as the gizmos.
  6. +
  7. Same birthday in 25: In the case of the birthday problem, the drawing must be with replacement, because the fact that you have drawn — say — a 10 (the 10th day in the year) should not affect the chances of drawing 10 for a second person in the room.
  8. +
+

Recording the outcome of the sampling must be indicated as part of this step, e.g., “record ‘yes’ if girl or basket, ‘no’ if a boy or a miss.”

+
+
+

15.4 Apply step C — describe any composite events

+
    +
  • Step C Describe any composite events. If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.
  • +
+

For example:

+
    +
  1. Three percent gizmos: For the gizmos, draw a sample of 200.
  2. +
  3. Three girls, 106 in 206: For the three or more girls among four children, the procedure for each simple event of a single birth was described in step B. Now we must specify repeating the simple event four times, and counting whether the outcome is or is not three girls.
  4. +
  5. Less than 20 baskets: In the case of Joe Hothand’s shots, we must draw 57 numbers to make up a sample of shots, and examine whether there are 20 or fewer baskets.
  6. +
+

Recording the results as “more than ten defectives” or “ten or fewer,” “three or more girls” or “two or fewer girls,” and “20 or fewer baskets” or “21 or more,” is part of this step. This record indicates the results of all the trials and is the basis for a tabulation of the final result.

+
+
+

15.5 Apply step D — calculate the probability

+
    +
  • Step D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.
  • +
+

For example: the proportions of “yes” and “no,” and of “20 or fewer baskets” and “21 or more,” estimate the probability we seek in step C.
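To make steps A through D concrete in code, here is a minimal sketch applied to the three-percent-gizmos problem. It uses the NumPy random number generator we use throughout the book; the 10,000 trials and the variable names are our own arbitrary choices for illustration.

import numpy as np
rnd = np.random.default_rng()

# Step A: the universe is the numbers 1 through 100, where 1 through 3
# stand for a defective gizmo (3 percent defective).
universe = np.arange(1, 101)

n_trials = 10000
defective_counts = np.zeros(n_trials)

for i in range(n_trials):
    # Step B: draw with replacement, so every gizmo has a 3 in 100
    # chance of being defective.
    # Step C: the composite event is a shipment of 200 gizmos.
    shipment = rnd.choice(universe, size=200)
    # Record the number of defectives (numbers 1 through 3) in this shipment.
    defective_counts[i] = np.sum(shipment <= 3)

# Step D: the proportion of shipments with more than 10 defectives
# estimates the probability we are after.
print(np.sum(defective_counts > 10) / n_trials)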

+

The above procedure is similar to the procedure followed with the analytic formulaic method except that the latter method constructs notation and manipulates it.
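The fourth illustration, the birthday problem, follows the same four-step pattern. Here is a sketch, again with an arbitrary 10,000 trials and our own illustrative variable names; the proportion it prints should come out roughly 0.57 for a group of 25 people.

import numpy as np
rnd = np.random.default_rng()

n_trials = 10000
# Step A: the universe is the numbers 1 through 365, one for each
# possible birthday (ignoring leap years).
days = np.arange(1, 366)

n_shared = 0
for i in range(n_trials):
    # Steps B and C: draw 25 birthdays with replacement; the composite
    # event is the whole group of 25 people.
    birthdays = rnd.choice(days, size=25)
    # If there are fewer distinct values than people, at least two
    # people share a birthday.
    if len(np.unique(birthdays)) < 25:
        n_shared = n_shared + 1

# Step D: the proportion of trials with a shared birthday estimates
# the probability of interest.
print(n_shared / n_trials)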

+
+
+

15.6 Summary

+

This chapter gives a more general description of the specific steps used in prior chapters to solve problems in probability.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/more_sampling_tools.html b/python-book/more_sampling_tools.html new file mode 100644 index 00000000..fe43d664 --- /dev/null +++ b/python-book/more_sampling_tools.html @@ -0,0 +1,2048 @@ + + + + + + + + + +Resampling statistics - 10  Two puzzles and more tools + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

10  Two puzzles and more tools

+
+ + + +
+ + + + +
+ + +
+ +
+

10.1 Introduction

+

In the next chapter we will deal with some more involved problems in probability, as a preparation for statistics, where we use reasoning from probability to draw conclusions about a world like our own, where variation often appears to be more or less random.

+

Before we get down to the business of complex probabilistic problems in the next few chapters, let’s consider a couple of peculiar puzzles. These puzzles allow us to introduce some more of the key tools in Python for Monte Carlo resampling, and show the power of such simulation to help solve, and then reason about, problems in probability.

+
+
+

10.2 The treasure fleet recovered

+

This is a classic problem in probability:1

+
+

A Spanish treasure fleet of three ships was sunk at sea off Mexico. One ship had a chest of gold forward and another aft, another ship had a chest of gold forward and a chest of silver aft, while a third ship had a chest of silver forward and another chest of silver aft. Divers just found one of the ships and a chest of gold in it, but they don’t know whether it was from forward or aft. They are now taking bets about whether the other chest found on the same ship will contain silver or gold. What are fair odds?

+
+

These are the logical steps one may distinguish in arriving at a correct answer with deductive logic (portrayed in Figure 10.1).

+
    +
  1. Postulate three ships — Ship I with two gold chests (G-G), ship II with one gold and one silver chest (G-S), and ship III with S-S. (Choosing notation might well be considered one or more additional steps.)

  2. +
  3. Assert equal probabilities of each ship being found.

  4. +
  5. Step 2 implies equal probabilities of being found for each of the six chests.

  6. +
  7. Fact: Diver finds a chest of gold.

  8. +
  9. Step 4 implies that S-S ship III was not found; hence remove it from subsequent analysis.

  10. +
  11. Three possibilities: 6a) Diver found chest I-Ga, 6b) diver found I-Gb, 6c) diver found II-Gc.

    +

    From step 2, the cases a, b, and c in step 6 have equal probabilities.

  12. +
  13. If possibility 6a is the case, then the other chest is I-Gb; the comparable statements for cases 6b and 6c are I-Ga and II-S.

  14. +
  15. From steps 6 and 7: From equal probabilities of the three cases, and no other possible outcome, \(P(6a) = 1/3\), \(P(6b) = 1/3\), \(P(6c) = 1/3\).

  16. +
  17. So \(P(G) = P(6a) + P(6b)\) = 1/3 + 1/3 = 2/3.

  18. +
+

See Figure 10.1.

+
+
+
+
+

+
Figure 10.1: Ships with Gold and Silver
+
+
+
+
+

The following simulation arrives at the correct answer.

+
    +
  1. Write “Gold” on three pieces of paper and “Silver” on three pieces of paper. These represent the chests.
  2. +
  3. Get three buckets each with two pieces of paper. Each bucket represents a ship, each piece of paper represents a chest in that ship. One bucket has two pieces of paper with “Gold” written on them; one has pieces of paper with “Gold” and “Silver”, and one has “Silver” and “Silver”.
  4. +
  5. Choose a bucket at random, to represent choosing a ship at random.
  6. +
  7. Shuffle the pieces of paper in the bucket and pick one, to represent choosing the first chest from that ship at random.
  8. +
  9. If the piece of paper says “Silver”, the first chest we found in this ship was silver, and we stop the trial and make no further record. If “Gold”, continue.
  10. +
  11. Get the second piece of paper from the bucket, representing the second chest on the chosen ship. Record whether this was “Silver” or “Gold” on the scoreboard.
  12. +
  13. Repeat steps (3 - 6) many times, and calculate the proportion of “Gold”s on the scoreboard. (The answer should be about \(\frac{2}{3}\).)
  14. +
+ +

Here is a notebook simulation with Python:

+
+

Start of gold_silver_ships notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
# The 3 buckets.  Each bucket represents a ship.  Each has two chests.
+bucket1 = ['Gold', 'Gold']  # Chests in first ship.
+bucket2 = ['Gold',  'Silver']  # Chests in second ship.
+bucket3 = ['Silver', 'Silver']  # Chests in third ship.
+
+
+
# For each trial, we will have one of three states:
+#
+# 1. When opening the first chest, it did not contain gold.
+#    We will reject these trials, since they do not match our
+#    experiment description.
+# 2. Gold was found in the first and the second chest.
+# 3. Gold was found in the first, but silver in the second chest.
+#
+# We need a placeholder value for all trials, and will make that
+# "No gold in chest 1, chest 2 never opened".
+second_chests = np.repeat(['No gold in chest 1, chest 2 never opened'], 10000)
+
+for i in range(10000):
+    # Select a ship at random from the three ships.
+    ship_no = rnd.choice([1, 2, 3])
+    # Get the chests from this ship (represented by a bucket).
+    if ship_no == 1:
+        bucket = bucket1
+    if ship_no == 2:
+        bucket = bucket2
+    if ship_no == 3:
+        bucket = bucket3
+
+    # We shuffle the order of the chests in this ship, to simulate
+    # the fact that we don't know which of the two chests we have
+    # found first, forward or aft.
+    shuffled = rnd.permuted(bucket)
+
+    if shuffled[0] == 'Gold':  # We found a gold chest first.
+        # Store whether the Second chest was silver or gold.
+        second_chests[i] = shuffled[1]
+
+    # End loop, go back to beginning.
+
+# Number of times we found gold in the second chest.
+n_golds = np.sum(second_chests == 'Gold')
+# Number of times we found silver in the second chest.
+n_silvers = np.sum(second_chests == 'Silver')
+# As a ratio of golds to all second chests (where the first was gold).
+print(n_golds / (n_golds + n_silvers))
+
+
0.6625368731563421
+
+
+

End of gold_silver_ships notebook

+
+

In the code above, we have first chosen the ship number at random, and then used a set of if ... statements to get the pair of chests corresponding to the given ship. There are simpler and more elegant ways of writing this code, but they would need some Python features that we haven’t covered yet.2
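As a sketch of one such alternative (one possibility among several, and not necessarily the one the footnote has in mind): we can put the three buckets into a Python list, and then index into that list with the ship number.

import numpy as np
rnd = np.random.default_rng()

# The three ships, as before.
bucket1 = ['Gold', 'Gold']
bucket2 = ['Gold', 'Silver']
bucket3 = ['Silver', 'Silver']

# Put the buckets into a list, in ship order.
buckets = [bucket1, bucket2, bucket3]

# Choose a ship at random, then look up its bucket directly.
ship_no = rnd.choice([1, 2, 3])
# Python positions start at 0, so ship 1 is at position 0.
bucket = buckets[ship_no - 1]
print(ship_no, bucket)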

+
+
+

10.3 Back to Boolean arrays

+

The code above implements the procedure we might well use if we were simulating the problem physically. We do a trial, and we record the result. We do this on a piece of paper if we are doing a physical simulation, and in the second_chests array in code.

+

Finally we tally up the results. If we are doing a physical simulation, we go back over all the trial results and count up the “Gold” and “Silver” outcomes. In code we use the comparisons == 'Gold' and == 'Silver' to find the trials of interest, and then count them up with np.sum.

+

Boolean arrays are a fundamental tool in Python, and we will use them in nearly all our simulations.

+

Here is a reminder of how those arrays work.

+

First, let’s slice out the first 10 values of the second_chests trial-by-trial results tally from the simulation above:

+
+
# Get values at positions 0 through 9 (up to, but not including position 10)
+first_10_chests = second_chests[:10]
+first_10_chests
+
+
array(['Silver', 'No gold in chest 1, chest 2 never opened',
+       'No gold in chest 1, chest 2 never opened', 'Gold', 'Gold',
+       'No gold in chest 1, chest 2 never opened', 'Gold', 'Gold',
+       'No gold in chest 1, chest 2 never opened', 'Gold'], dtype='<U40')
+
+
+

Before we started the simulation, we set second_chests to contain 10,000 strings, where each string was “No gold in chest 1, chest 2 never opened”. In the simulation, we check whether there was gold in the first chest, and, if not, we don’t change the value in second_chest, and the value remains as “No gold in chest 1, chest 2 never opened”.

+

Only if there was gold in the first chest, do we go on to check whether the second chest contains silver or gold. Therefore, we only set a new value in second_chests where there was gold in the first chest.

+

Now let’s show the effect of running a comparison on first_10_chests:

+
+
were_gold = (first_10_chests == 'Gold')
+were_gold
+
+
array([False, False, False,  True,  True, False,  True,  True, False,
+        True])
+
+
+
+
+
+ +
+
+Parentheses and Boolean comparisons +
+
+
+

Notice the round brackets (parentheses) around (first_10_chests == 'Gold'). In this particular case, we would get the same result without the parentheses, so the parentheses are optional (although see below for an example where they are not). In general, you will see that we put parentheses around all expressions that generate Boolean arrays, and we recommend you do too. It is a good habit to get into, because it makes clear that this is an expression that generates a value.

+
+
+

The == 'Gold' comparison is asking a question. It is asking that question of an array, and the array contains multiple values. NumPy treats this comparison as asking the question of each element in the array. We get an answer for the question for each element. The answer for position 0 is True if the element at position 0 is equal to 'Gold' and False otherwise, and so on, for positions 1, 2 and so on. We started with 10 strings. After the comparison == 'Gold' we have 10 Boolean values, where a Boolean value can either be True or False.

+ + +

Now that we have an array with True for the “Gold” results and False otherwise, we can count the number of “Gold” results by using np.sum on the array. As you remember (Section 5.14) np.sum counts True as 1 and False as 0, so the sum of the Boolean array is just the number of True values in the array — the count that we need.

+
+
# The number of True values — so the number of "Gold" chests.
+np.sum(were_gold)
+
+
5
+
+
+
+
+

10.4 Boolean arrays and another take on the ships problem

+

If we are doing a physical simulation, we usually want to finish up all the work for the trial during the trial, so we have one outcome from the trial. This makes it easier to tally up the results in the end.

+

We have no such constraint when we are using code, so it is sometimes easier to record several results from the trial, and do the final combinations and tallies at the end. We will show you what we mean with a slight variation on the ships code you saw above.

+
+

Start of gold_silver_booleans notebook

+ + +

Notice that the first part of the code is identical to the first approach to this problem. There are two key differences — see the comments for an explanation.

+
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
# The 3 buckets, each representing two chests on a ship.
+# As before.
+bucket1 = ['Gold', 'Gold']  # Chests in first ship.
+bucket2 = ['Gold',  'Silver']  # Chests in second ship.
+bucket3 = ['Silver', 'Silver']  # Chests in third ship.
+
+
+
# Here is where the difference starts.  We are now going to fill in
+# the result for the first chest _and_ the result for the second chest.
+#
+# Later we will fill in all these values, so the string we put here
+# does not matter.
+
+# Whether the first chest was Gold or Silver.
+first_chests = np.repeat(['To be announced'], 10000)
+# Whether the second chest was Gold or Silver.
+second_chests = np.repeat(['To be announced'], 10000)
+
+for i in range(10000):
+    # Select a ship at random from the three ships.
+    # As before.
+    ship_no = rnd.choice([1, 2, 3])
+    # Get the chests from this ship.
+    # As before.
+    if ship_no == 1:
+        bucket = bucket1
+    if ship_no == 2:
+        bucket = bucket2
+    if ship_no == 3:
+        bucket = bucket3
+
+    # As before.
+    shuffled = rnd.permuted(bucket)
+
+    # Here is the big difference - we store the result for the first and second
+    # chests.
+    first_chests[i] = shuffled[0]
+    second_chests[i] = shuffled[1]
+
+# End loop, go back to beginning.
+
+# We will do the calculation we need in the next cell.  For now
+# just display the first 10 values.
+ten_first_chests = first_chests[:10]
+print('The first 10 values of "first_chests:', ten_first_chests)
+
+
The first 10 values of "first_chests: ['Gold' 'Silver' 'Silver' 'Gold' 'Gold' 'Silver' 'Gold' 'Gold' 'Silver'
+ 'Gold']
+
+
ten_second_chests = second_chests[:10]
+print('The first 10 values of "second_chests', ten_second_chests)
+
+
The first 10 values of "second_chests ['Silver' 'Gold' 'Silver' 'Gold' 'Gold' 'Silver' 'Gold' 'Gold' 'Silver'
+ 'Gold']
+
+
+

In this variant, we recorded the type of the first chest for each trial (“Gold” or “Silver”), and the type of the second chest (“Gold” or “Silver”).

+

We would like to count the number of times there was “Gold” in the first chest and “Gold” in the second.

+
+

10.5 Combining Boolean arrays

+

We can do the count we need by combining the Boolean arrays with the & operator. & combines Boolean arrays with a logical and. Logical and is a rule for combining two Boolean values, where the rule is: the result is True if the first value is True and the second value is True.

+

Here we use the & operator to combine some Boolean values on the left and right of the operator:

+
+
True & True   # Both are True, so result is True
+
+
True
+
+
+
+
True & False   # At least one of the values is False, so result is False
+
+
False
+
+
+
+
False & True   # At least one of the values is False, so result is False
+
+
False
+
+
+
+
False & False   # At least one (in fact both) are False, result is False.
+
+
False
+
+
+
+
+
+
+ +
+
+& and and in Python +
+
+
+

In fact Python has another way of applying this logical and rule to values — the and operator:

+
+
print(True and True)
+
+
True
+
+
print(True and False)
+
+
False
+
+
print(False and True)
+
+
False
+
+
print(False and False)
+
+
False
+
+
+

You will see this and operator often in Python code, but it does not work well when combining NumPy arrays, so we will use the similar & operator, which does work on arrays.
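To see the difference in action, here is a small sketch. The & operator combines the arrays element by element, but and tries to treat each whole array as a single True or False value, and NumPy refuses to do that for an array with more than one element.

import numpy as np

a = np.array([True, True, False])
b = np.array([True, False, False])

# & combines the arrays element by element.
print(a & b)

# "and" asks Python to treat each whole array as one True or False value,
# which NumPy will not do for arrays with more than one element.
try:
    result = a and b
except ValueError as err:
    print('Python raised an error:', err)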

+
+
+
+

Above you saw that the == operator (as in == 'Gold'), when applied to arrays, asks the question of every element in the array.

+

First make the Boolean arrays.

+
+
ten_first_gold = (ten_first_chests == 'Gold')
+print("Ten first == 'Gold'", ten_first_gold)
+
+
Ten first == 'Gold' [ True False False  True  True False  True  True False  True]
+
+
ten_second_gold = (ten_second_chests == 'Gold')
+print("Ten second == 'Gold'", ten_second_gold)
+
+
Ten second == 'Gold' [False  True False  True  True False  True  True False  True]
+
+
+

Now let us use & to combine Boolean arrays:

+
+
ten_both = (ten_first_gold & ten_second_gold)
+ten_both
+
+
array([False, False, False,  True,  True, False,  True,  True, False,
+        True])
+
+
+

Notice that Python does the & operation elementwise — element by element.

+

You saw that when we did second_chests == 'Gold' this had the effect of asking the == 'Gold' question of each element, so there will be one answer per element in second_chests. In that case there was an array to the left of == and a single value to the right. We were comparing an array to a value.

+

Here we are asking the & question of ten_first_gold and ten_second_gold. Here there is an array to the left and an array to the right. We are asking the & question 10 times, but the first question we are asking is:

+
+
# First question, giving first element of result.
+(ten_first_gold[0] & ten_second_gold[0])
+
+
False
+
+
+

The second question is:

+
+
# Second question, giving second element of result.
+(ten_first_gold[1] & ten_second_gold[1])
+
+
False
+
+
+

and so on. We have ten elements on each side, and 10 answers, giving an array (ten_both) of 10 elements. Each element in ten_both is the answer to the & question for the elements at the corresponding positions in ten_first_gold and ten_second_gold.

+

We could also create the Boolean arrays and do the & operation all in one step, like this:

+
+
ten_both = (ten_first_chests == 'Gold') & (ten_second_chests == 'Gold')
+ten_both
+
+
array([False, False, False,  True,  True, False,  True,  True, False,
+        True])
+
+
+ +
+
+
+
+ +
+
+Parentheses, arrays and comparisons +
+
+
+

Again you will notice the round brackets (parentheses) around (ten_first_chests == 'Gold') and (ten_second_chests == 'Gold'). Above, you saw us recommend you always use parentheses around Boolean expressions like this. The parentheses make the code easier to read — but be careful — in this case, we actually need the parentheses to make Python do what we want; see the footnote for more detail.3

+
+
+
+

Remember, we wanted the answer to the question: how many trials had “Gold” in the first chest and “Gold” in the second. We can answer that question for the first 10 trials with np.sum:

+
+
n_ten_both = np.sum(ten_both)
+n_ten_both
+
+
5
+
+
+

We can answer the same question for all the trials, in the same way:

+
+
first_gold = (first_chests == 'Gold')
+second_gold = (second_chests == 'Gold')
+n_both_gold = np.sum(first_gold & second_gold)
+n_both_gold
+
+
3369
+
+
+

We could also do the same calculation all in one line:

+
+
# Notice the parentheses - we need these - see above.
+n_both_gold = np.sum((first_chests == 'Gold') & (second_chests == 'Gold'))
+n_both_gold
+
+
3369
+
+
+

We can then count all the ships where the first chest was gold:

+
+
n_first_gold = np.sum(first_chests == 'Gold')
+n_first_gold
+
+
5085
+
+
+

The final calculation is the proportion of second chests that are gold, given the first chest was also gold:

+
+
p_g_given_g = n_both_gold / n_first_gold
+p_g_given_g
+
+
0.6625368731563421
+
+
+

Of course we won’t get exactly the same results from the two simulations, in the same way that we won’t get exactly the same results from any two runs of the same simulation, because of the random values we are using. But the logic of the two simulations is the same, and we are doing many trials (10,000), so the results will be very similar.

+

End of gold_silver_booleans notebook

+
+
+
+
+

10.6 The Monty Hall problem

+

The Monty Hall Problem is a puzzle in probability that is famous for its deceptive simplicity. It has its own long Wikipedia page: https://en.wikipedia.org/wiki/Monty_Hall_problem.

+

Here is the problem in the form it is best known; a letter to the columnist Marilyn vos Savant, published in Parade Magazine (1990):

+
+

Suppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?

+
+

In fact the first person to propose (and solve) this problem was Steve Selvin, a professor of public health at the University of California, Berkeley (Selvin 1975).

+

Most people, including at least one of us, your humble authors, quickly come to the wrong conclusion. The most common but incorrect answer is that it will make no difference if you switch doors or stay with your original choice. The obvious intuition is that, after Monty opens his door, there are two doors that might have the car behind them, and therefore, there is a 50% chance it will be behind any one of the two. It turns out that answer is wrong; you will double your chances of winning by switching doors. Did you get the answer right?

+

If you got the answer wrong, you are in excellent company. As you can see from the commentary in Savant (1990), many mathematicians wrote to Parade magazine to assert that the (correct) solution was wrong. Paul Erdős was one of the most famous mathematicians of the 20th century; he could not be convinced of the correct solution until he had seen a computer simulation (Vazsonyi 1999), of the type we will do below.

+

To simulate a trial of this problem, we need to select a door at random to house the car, and another door at random, to be the door the contestant chooses. We number the doors 1, 2 and 3. Now we need two random choices from the options 1, 2 or 3, one for the door with the car, the other for the contestant door. To choose a door for the car, we could throw a die, and choose door 1 if the die shows 1 or 4, door 2 if the die shows 2 or 5, and door 3 for 3 or 6. Then we throw the die again to choose the contestant door.

+

But throwing dice is a little boring; we have to find the die, then throw it many times, and record the results. Instead we can ask the computer to choose the doors at random.

+

For this simulation, let us do 25 trials. We ask the computer to create two sets of 25 random numbers from 1 through 3. The first set is the door with the car behind it (“Car door”). The second set has the door that the contestant chose at random (“Our door”). We put these in a table, and make some new, empty columns to fill in later. The first new column is “Monty opens”. In due course, we will use this column to record the door that Monty Hall will open on this trial. The last two columns express the outcome. The first is “Stay wins”. This has “Yes” if we win on this trial by sticking to our original choice of door, and “No” otherwise. The last column is “Switch wins”. This has “Yes” if we win by switching doors, and “No” otherwise. See Table 10.1.
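As a sketch of how we might ask NumPy for those two sets of 25 random door numbers (the particular numbers will of course differ from run to run, and from the ones shown in the table):

import numpy as np
rnd = np.random.default_rng()

doors = [1, 2, 3]
# 25 random doors hiding the car, one per trial.
car_doors = rnd.choice(doors, size=25)
# 25 random doors chosen by the contestant.
our_doors = rnd.choice(doors, size=25)
print('Car doors:', car_doors)
print('Our doors:', our_doors)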

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 10.1: 25 simulations of the Monty Hall problem
Trial | Car door | Our door | Monty opens | Stay wins | Switch wins
1     | 3        | 3        |             |           |
2     | 3        | 1        |             |           |
3     | 1        | 3        |             |           |
4     | 1        | 1        |             |           |
5     | 2        | 3        |             |           |
6     | 2        | 1        |             |           |
7     | 2        | 2        |             |           |
8     | 1        | 3        |             |           |
9     | 1        | 2        |             |           |
10    | 3        | 1        |             |           |
11    | 2        | 2        |             |           |
12    | 3        | 2        |             |           |
13    | 2        | 2        |             |           |
14    | 3        | 1        |             |           |
15    | 1        | 2        |             |           |
16    | 2        | 1        |             |           |
17    | 3        | 3        |             |           |
18    | 3        | 2        |             |           |
19    | 1        | 1        |             |           |
20    | 3        | 2        |             |           |
21    | 2        | 2        |             |           |
22    | 3        | 1        |             |           |
23    | 3        | 1        |             |           |
24    | 1        | 1        |             |           |
25    | 2        | 3        |             |           |
+
+
+
+
+

In the first trial in Table 10.1, the computer selected door 3 for the car, and door 3 for the contestant. Now Monty must open a door, and he cannot open our door (door 3), so he has the choice of opening door 1 or door 2; he chooses randomly, and opens door 1. On this trial, we win if we stay with our original choice, and we lose if we change to the remaining door, door 2.

+

Now we go to the second trial. The computer chose door 3 for the car, and door 1 for our choice. Monty cannot choose our door (door 1) or the door with the car behind it (door 3), so he must open door 2. Now if we stay with our original choice, we lose, but if we switch, we win.

+

You may want to print out Table 10.1, and fill out the blank columns, to work through the logic.

+

After doing a few more trials, and some reflection, you may see that there are two different situations here: the situation when our initial guess was right, and the situation where our initial guess was wrong. When our initial guess was right, we win by staying with our original choice, but when it was wrong, we always win by switching. The chance of our initial guess being correct is 1/3 (one door out of three). So the chances of winning by staying are 1/3, and the chances of winning by switching are 2/3. But remember, you don’t need to follow this logic to get the right answer. As you will see below, the resampling simulation shows us that the Switch strategy wins.

+

Table 10.2 is a version of Table 10.1 in which we have filled in the blank columns using the logic above.

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 10.2: 25 simulations of the Monty Hall problem, filled out
Trial | Car door | Our door | Monty opens | Stay wins | Switch wins
1     | 3        | 3        | 1           | Yes       | No
2     | 3        | 1        | 2           | No        | Yes
3     | 1        | 3        | 2           | No        | Yes
4     | 1        | 1        | 2           | Yes       | No
5     | 2        | 3        | 1           | No        | Yes
6     | 2        | 1        | 3           | No        | Yes
7     | 2        | 2        | 3           | Yes       | No
8     | 1        | 3        | 2           | No        | Yes
9     | 1        | 2        | 3           | No        | Yes
10    | 3        | 1        | 2           | No        | Yes
11    | 2        | 2        | 1           | Yes       | No
12    | 3        | 2        | 1           | No        | Yes
13    | 2        | 2        | 1           | Yes       | No
14    | 3        | 1        | 2           | No        | Yes
15    | 1        | 2        | 3           | No        | Yes
16    | 2        | 1        | 3           | No        | Yes
17    | 3        | 3        | 2           | Yes       | No
18    | 3        | 2        | 1           | No        | Yes
19    | 1        | 1        | 2           | Yes       | No
20    | 3        | 2        | 1           | No        | Yes
21    | 2        | 2        | 1           | Yes       | No
22    | 3        | 1        | 2           | No        | Yes
23    | 3        | 1        | 2           | No        | Yes
24    | 1        | 1        | 2           | Yes       | No
25    | 2        | 3        | 1           | No        | Yes
+
+
+
+

The proportion of times “Stay” wins in these 25 trials is 0.36. The proportion of times “Switch” wins is 0.64; the Switch strategy wins about twice as often as the Stay strategy.

+
+
+

10.7 Monty Hall with Python

+

Now you have seen what the results might look like for a physical simulation, you can exercise some of your newly-strengthened Python muscles to do the simulation with code.

+
+

Start of monty_hall notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+

The Monty Hall problem has a slightly complicated structure, so we will start by looking at the procedure for one trial. When we have that clear, we will put that procedure into a for loop for the simulation.

+

Let’s start with some variables. Let’s call the door I choose my_door.

+

We choose that door at random from a sequence of all possible doors. Call the doors 1, 2 and 3 from left to right.

+
+
# List of doors to choose from.
+doors = [1, 2, 3]
+
+# We choose one door at random.
+my_door = rnd.choice(doors)
+
+# Show the result
+my_door
+
+
2
+
+
+

We choose one of the doors to be the door with the car behind it:

+
+
# One door at random has the car behind it.
+car_door = rnd.choice(doors)
+
+# Show the result
+car_door
+
+
2
+
+
+

Now we need to decide which door Monty will open.

+

By our set up, Monty cannot open our door (my_door). By the set up, he has not opened (and cannot open) the door with the car behind it (car_door).

+

my_door and car_door might be the same.

+

So, to get Monty’s choices, we want to take all doors (doors) and remove my_door and car_door. That leaves the door or doors Monty can open.

+

Here are the doors Monty cannot open. Remember, a third of the time my_door and car_door will be the same, so we will include the same door twice, as doors Monty can’t open.

+
+
cant_open = [my_door, car_door]
+cant_open
+
+
[2, 2]
+
+
+

We want to find the remaining doors from doors after removing the doors named in cant_open.

+

NumPy has a good function for this, called np.setdiff1d. It calculates the set difference between two sequences, such as arrays.

+

The set difference between two sequences is the members that are in the first sequence, but are not in the second sequence. Here are a few examples of this set difference function in NumPy.

+
+

Notice that we are using lists as the input (first and second) sequences here. We can use lists or arrays or any other type of sequence in Python. (See Section 6.3.2 for an introduction to lists).

+

Numpy functions like np.setdiff1d always return an array.

+
+
+
# Members in [1, 2, 3] that are *not* in [1]
+# 1, 2, 3, removing 1, if present.
+np.setdiff1d([1, 2, 3], [1])
+
+
array([2, 3])
+
+
+
+
# Members in [1, 2, 3] that are *not* in [2, 3]
+# 1, 2, 3, removing 2 and 3, if present.
+np.setdiff1d([1, 2, 3], [2, 3])
+
+
array([1])
+
+
+
+
# Members in [1, 2, 3] that are *not* in [2, 2]
+# 1, 2, 3, removing 2 and 2 again, if present.
+np.setdiff1d([1, 2, 3], [2, 2])
+
+
array([1, 3])
+
+
+

This logic allows us to choose the doors Monty can open:

+
+
montys_choices = np.setdiff1d(doors, [my_door, car_door])
+montys_choices
+
+
array([1, 3])
+
+
+

Notice that montys_choices will only have one element left when my_door and car_door were different, but it will have two elements if my_door and car_door were the same.

+

Let’s play out those two cases:

+
+
my_door = 1  # For example.
+car_door = 2  # For example.
+# Monty can only choose door 3 now.
+montys_choices = np.setdiff1d(doors, [my_door, car_door])
+montys_choices
+
+
array([3])
+
+
+
+
my_door = 1  # For example.
+car_door = 1  # For example.
+# Monty can choose either door 2 or door 3.
+montys_choices = np.setdiff1d(doors, [my_door, car_door])
+montys_choices
+
+
array([2, 3])
+
+
+

If Monty can only choose one door, we’ll take that. Otherwise we’ll choose a door at random from the two doors available.

+
+
if len(montys_choices) == 1:  # Only one door available.
+    montys_door = montys_choices[0]  # Take the first (of 1!).
+else:  # Two doors to choose from:
+    # Choose at random.
+    montys_door = rnd.choice(montys_choices)
+montys_door
+
+
2
+
+
+
+

In fact, we can avoid that check on the number of doors (the if len(montys_choices) == 1 test above), because rnd.choice will also work on a sequence of length 1 — in that case, it always returns the single element in the sequence, like this:

+
+
# rnd.choice on sequence with single element - always returns that element.
+rnd.choice([2])
+
+
2
+
+
+

That means we can simplify the code above to:

+
+
# Choose single door left to choose, or door at random if two.
+montys_door = rnd.choice(montys_choices)
+montys_door
+
+
3
+
+
+
+

Now we know Monty’s door, we can identify the other door, by removing our door, and Monty’s door, from the available options:

+
+
remaining_doors = np.setdiff1d(doors, [my_door, montys_door])
+# There is only one remaining door, take that.
+other_door = remaining_doors[0]
+other_door
+
+
2
+
+
+

The logic above gives us the full procedure for one trial.

+
+
my_door = rnd.choice(doors)
+car_door = rnd.choice(doors)
+# Which door will Monty open?
+montys_choices  = np.setdiff1d(doors, [my_door, car_door])
+# Choose single door left to choose, or door at random if two.
+montys_door = rnd.choice(montys_choices)
+# Now find the door we'll open if we switch.
+remaining_doors = np.setdiff1d(doors, [my_door, montys_door])
+# There is only one door left.
+other_door = remaining_doors[0]
+# Calculate the result of this trial.
+stay_wins = False  # Start as False; set to True below if staying wins.
+switch_wins = False  # Likewise for winning by switching.
+if my_door == car_door:
+    stay_wins = True
+if other_door == car_door:
+    switch_wins = True
+
+

All that remains is to put that trial procedure into a loop, and collect the results as we repeat the procedure many times.

+
+
# Arrays to store the results for each trial.
+stay_wins = np.repeat([False], 10000)
+switch_wins = np.repeat([False], 10000)
+
+# A list of doors to choose from.
+doors = [1, 2, 3]
+
+for i in range(10000):
+    # You will recognize the below as the single-trial procedure above.
+    my_door = rnd.choice(doors)
+    car_door = rnd.choice(doors)
+    # Which door will Monty open?
+    montys_choices  = np.setdiff1d(doors, [my_door, car_door])
+    # Choose single door left to choose, or door at random if two.
+    montys_door = rnd.choice(montys_choices)
+    # Now find the door we'll open if we switch.
+    remaining_doors = np.setdiff1d(doors, [my_door, montys_door])
+    # There is only one door left.
+    other_door = remaining_doors[0]
+    # Calculate the result of this trial.
+    if my_door == car_door:
+        stay_wins[i] = True
+    if other_door == car_door:
+        switch_wins[i] = True
+
+p_for_stay = np.sum(stay_wins) / 10000
+p_for_switch = np.sum(switch_wins) / 10000
+
+print('p for stay:', p_for_stay)
+
+
p for stay: 0.3326
+
+
print('p for switch:', p_for_switch)
+
+
p for switch: 0.6674
+
+
+

We can also follow the same strategy as we used for the second implementation of the ships problem (Section 10.4).

+

Here, as in the second ships implementation, we do not calculate the trial results (stay_wins, switch_wins) in each trial. Instead, we store the doors for each trial, and then use Boolean arrays to calculate the results for all trials, at the end.

+
+
# Instead of storing the trial results, we store the doors for each trial.
+my_doors = np.zeros(10000)
+car_doors = np.zeros(10000)
+other_doors = np.zeros(10000)
+
+doors = [1, 2, 3]
+
+for i in range(10000):
+    my_door = rnd.choice(doors)
+    car_door = rnd.choice(doors)
+    # Which door will Monty open?
+    montys_choices  = np.setdiff1d(doors, [my_door, car_door])
+    # Choose single door left to choose, or door at random if two.
+    montys_door = rnd.choice(montys_choices)
+    # Now find the door we'll open if we switch.
+    remaining_doors = np.setdiff1d(doors, [my_door, montys_door])
+    # There is only one door left.
+    other_door = remaining_doors[0]
+
+    # Store the doors we chose.
+    my_doors[i] = my_door
+    car_doors[i] = car_door
+    other_doors[i] = other_door
+
+# Now - at the end of all the trials, we use Boolean arrays to calculate the
+# results.
+stay_wins = my_doors == car_doors
+switch_wins = other_doors == car_doors
+
+p_for_stay = np.sum(stay_wins) / 10000
+p_for_switch = np.sum(switch_wins) / 10000
+
+print('p for stay:', p_for_stay)
+
+
p for stay: 0.3374
+
+
print('p for switch:', p_for_switch)
+
+
p for switch: 0.6626
+
+
+
+

10.7.1 Insight from the Monty Hall simulation

+

The code simulation gives us an estimate of the right answer, but it also forces us to set out the exact mechanics of the problem. For example, by looking at the code, we see that we can calculate “stay_wins” with this code alone:

+
+
# Just choose my door and the car door for each trial.
+my_doors = np.zeros(10000)
+car_doors = np.zeros(10000)
+doors = [1, 2, 3]
+
+for i in range(10000):
+    my_doors[i] = rnd.choice(doors)
+    car_doors[i] = rnd.choice(doors)
+
+# Calculate whether I won by staying.
+stay_wins = my_doors == car_doors
+p_for_stay = np.sum(stay_wins) / 10000
+
+print('p for stay:', p_for_stay)
+
+
p for stay: 0.3244
+
+
+

This calculation, on its own, tells us the answer, but it also points to another insight — whatever Monty does with the doors, it doesn’t change the probability that our initial guess is right, and that must be 1 in 3 (0.333). If the probability of stay_wins is 1 in 3, and we only have one other door to switch to, the probability of winning after switching must be 2 in 3 (0.666).

+
+
+

10.7.2 Simulation and a variant of Monty Hall

+

You have seen that you can avoid the silly mistakes that many of us make with probability — by asking the computer to tell you the result before you start to reason from first principles.

+

As an example, consider the following variant of the Monty Hall problem.

+

The set up to the problem has us choosing a door (my_door above), and then Monty opens one of the other two doors.

+

Sometimes (in fact, 2/3 of the time) there is a car behind one of Monty’s doors. We’ve obliged Monty to open the other door, and his choice is forced.

+

When his choice was not forced, we had Monty choose the door at random.

+

For example, let us say we chose door 1.

+

Let us say that the car is also under door 1.

+

Monty has the option of choosing door 2 or door 3, and he chooses randomly between them.

+
+
my_door = 1  # We chose door 1 at random.
+car_door = 1  # This trial, by chance, the car door is 1.
+# Monty is left with doors 2 and 3 to choose from.
+montys_choices  = np.setdiff1d(doors, [my_door, car_door])
+# He chooses randomly.
+montys_door = rnd.choice(montys_choices)
+# Show the result
+montys_door
+
+
2
+
+
+

Now — let us say we happen to know that Monty is rather lazy, and he will always choose the left-most (lower-numbered) door of the two options.

+

In the previous example, Monty had the option of choosing door 2 and 3. In this new scenario, we know that he will always choose door 2 (the left-most door).

+
+
my_door = 1  # We chose door 1 at random.
+car_door = 1  # This trial, by chance, the car door is 1.
+# Monty is left with doors 2 and 3 to choose from.
+montys_choices  = np.setdiff1d(doors, [my_door, car_door])
+# He chooses the left-most door, always.
+montys_door = montys_choices[0]
+# Show the result
+montys_door
+
+
2
+
+
+

It feels as if we have more information about where the car is, when we know this. Consider the situation where we have chosen door 1, and Monty opens door 3. We know that he would have preferred to open door 2, if he was allowed. We therefore know he wasn’t allowed to open door 2, and that means the car is definitely under door 2.

+
+
my_door = 1  # We chose door 1 at random.
+car_door = 2  # This trial, by chance, the car door is 2.
+# Monty is left with door 3 only to choose from.
+montys_choices  = np.setdiff1d(doors, [my_door, car_door])
+# He chooses the left-most door, always.  But in this case, the left-most
+# available door is 3 (he can't choose 2, it is the car_door).
+# Notice the doors were in order, so the left-most door is the first door
+# in the array.
+montys_door = montys_choices[0]
+# Show the result
+montys_door
+
+
3
+
+
+

To take that into account, we might try a different strategy. We will stick to our own choice if Monty has opened the left-most of the two doors we did not choose, because he might have opened that door because there was a car underneath the other one, or because there was a car under neither and he simply preferred the left door. But, if Monty opens the right-most of those two doors, we will switch from our own choice to the other (unopened) door, because in that case we can be sure that the car is under the other (unopened) door.

+

Call this the “switch if Monty chooses right door” strategy, or “switch if right” for short.

+

Can you see quickly whether this will be better than the “always stay” strategy? Will it be better than the “always switch” strategy? Take a moment to think it through, and write down your answers.

+

If you can quickly see the answer to both questions — well done — but, are you sure you are right?

+

We can test by simulation.

+

For our test of the “switch if right” strategy, we can tell if one door is to the right of another door by comparison; higher numbers mean further to the right: 2 is right of 1, and 3 is right of 2.

+
+
# Door 3 is right of door 1.
+3 > 1
+
+
True
+
+
+
+
# A test of the switch-if-right strategy.
+# The car doors.
+car_doors = np.zeros(10000)
+# The door we chose using the strategy.
+strategy_doors = np.zeros(10000)
+
+doors = [1, 2, 3]
+
+for i in range(10000):
+    my_door = rnd.choice(doors)
+    car_door = rnd.choice(doors)
+    # Which door will Monty open?
+    montys_choices  = np.setdiff1d(doors, [my_door, car_door])
+    # Choose Monty's door from the remaining options.
+    # This time, he always prefers the left door.
+    montys_door = montys_choices[0]
+    # Now find the door we'll open if we switch.
+    remaining_doors = np.setdiff1d(doors, [my_door, montys_door])
+    # There is only one door remaining - but is Monty's door
+    # to the right of this one?  If so, lazy Monty was forced away from
+    # the left-hand door, so the car must be behind it.
+    other_door = remaining_doors[0]
+    if montys_door > other_door:
+        # Monty's door was the right-hand door, the car is under the other one.
+        strategy_doors[i] = other_door
+    else:  # We stick with the door we first thought of.
+        strategy_doors[i] = my_door
+    # Store the car door for this trial.
+    car_doors[i] = car_door
+
+strategy_wins = strategy_doors == car_doors
+
+p_for_strategy = np.sum(strategy_wins) / 10000
+
+print('p for strategy:', p_for_strategy)
+
+
p for strategy: 0.6641
+
+
+

We find that the “switch-if-right” strategy has around the same chance of success as the “always-switch” strategy: about 66.6%, or 2 in 3. Were your initial answers right? Now that you have seen the result, can you see why it should be so? It may not be obvious; the Monty Hall problem is deceptively difficult. But our case here is that the simulation first gives you an estimate of the correct answer, and then gives you a good basis for thinking more about the problem. That is:

+
  • simulation is useful for estimation and
  • simulation is useful for reflection.
+
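
As a further check (this snippet is our own sketch, not part of the original notebook), we can rerun the lazy-Monty simulation with the plain “always switch” strategy, to confirm that it, too, wins about 2 in 3 of the time.

+
+# A sketch (our addition): the "always switch" strategy against lazy Monty.
+always_switch_wins = np.zeros(10000)
+
+for i in range(10000):
+    my_door = rnd.choice(doors)
+    car_door = rnd.choice(doors)
+    # Lazy Monty opens the left-most door he is allowed to open.
+    montys_door = np.setdiff1d(doors, [my_door, car_door])[0]
+    # We always switch to the one remaining unopened door.
+    switched_door = np.setdiff1d(doors, [my_door, montys_door])[0]
+    # Record whether the switched-to door hides the car.
+    always_switch_wins[i] = switched_door == car_door
+
+p_always_switch = np.sum(always_switch_wins) / 10000
+print('p for always switch:', p_always_switch)
+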

End of monty_hall notebook

+
+
+
+
+

10.8 Why use simulation?

+

Doing these simulations has two large benefits. First, it gives us the right answer, saving us from making a mistake. Second, the process of simulation forces us to think about how the problem works. This can give us better understanding, and make it easier to reason about the solution.

+

We will soon see that these same advantages also apply to reasoning about statistics.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/notebooks/ambulances.ipynb b/python-book/notebooks/ambulances.ipynb new file mode 100644 index 00000000..4a6ad078 --- /dev/null +++ b/python-book/notebooks/ambulances.ipynb @@ -0,0 +1,500 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "ea3262d2", + "metadata": {}, + "source": [ + "# Ambulances" + ] + }, + { + "cell_type": "markdown", + "id": "2927d3ff", + "metadata": {}, + "source": [ + "The first thing to say about the code you will see below is there are\n", + "some lines that do not do anything; these are the lines beginning with a\n", + "`#` character (read `#` as “hash”). Lines beginning with `#` are called\n", + "*comments*. When Python sees a `#` at the start of a line, it ignores\n", + "everything else on that line, and skips to the next. Here’s an example\n", + "of a comment:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bd338ecf", + "metadata": {}, + "outputs": [], + "source": [ + "# Python will completely ignore this text." + ] + }, + { + "cell_type": "markdown", + "id": "42a28497", + "metadata": {}, + "source": [ + "Because Python ignores lines beginning with `#`, the text after the `#`\n", + "is just for us, the humans reading the code. The person writing the code\n", + "will often use comments to explain what the code is doing.\n", + "\n", + "Our next task is to use Python to simulate a single day of ambulances.\n", + "We will again represent each ambulance by a random number from 0 through\n", + "9. 20 of these numbers represents a simulation of all 20 ambulances\n", + "available to the contractor. We call a simulation of all ambulances for\n", + "a specific day one *trial*.\n", + "\n", + "Before we begin our first trial, we need to load some helpful routines\n", + "from the NumPy software library. NumPy is a Python library that has many\n", + "important functions for creating and working with numerical data. We\n", + "will use routines from NumPy in almost all our examples." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "434bbcdb", + "metadata": {}, + "outputs": [], + "source": [ + "# Get the Numpy library, and call it \"np\" for short.\n", + "import numpy as np" + ] + }, + { + "cell_type": "markdown", + "id": "7db357c6", + "metadata": {}, + "source": [ + "We also need to ask NumPy for an object that can generate random\n", + "numbers. Such an object is known as a “random number generator”." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dcbd1383", + "metadata": {}, + "outputs": [], + "source": [ + "# Ask NumPy for a random number generator.\n", + "# Name it `rnd` — short for \"random\"\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "markdown", + "id": "f98bef94", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "NumPy’s Random Number Generator\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Here are some examples of the random operations we can perform with\n", + "NumPy:\n", + "\n", + "1. Make a random choice between three words:\n", + "\n", + " ``` python\n", + " rnd.choice(['apple', 'orange', 'banana'])\n", + " ```\n", + "\n", + "2. Make five random choices of three words, using the “size=” argument:\n", + "\n", + " ``` python\n", + " rnd.choice(['apple', 'orange', 'banana'], size=5)\n", + " ```\n", + "\n", + "3. Shuffle a list of numbers:\n", + "\n", + " ``` python\n", + " rnd.permutation([1, 2, 3, 4, 5])\n", + " ```\n", + "\n", + "4. Generate five random numbers between 1 and 10:\n", + "\n", + " ``` python\n", + " rnd.integers(1, 11, size=5)\n", + " ```\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Recall that we want twenty 10-sided dice — one per ambulance. Our dice\n", + "should be 10-sided, because each ambulance has a 1-in-10 chance of being\n", + "out of order.\n", + "\n", + "The program to simulate one trial of the ambulances problem therefore\n", + "begins with these commands:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3612a106", + "metadata": {}, + "outputs": [], + "source": [ + "# Ask NumPy to generate 20 numbers from 0 through 9.\n", + "\n", + "# These are the numbers we will ask NumPy to select from.\n", + "# We store the numbers together in an *array*.\n", + "numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n", + "\n", + "# Get 20 (size=20) values from the *numbers* list.\n", + "# Store the 20 numbers with the name \"a\"\n", + "a = rnd.choice(numbers, size=20)\n", + "\n", + "# The result is a sequence (array) of 20 numbers.\n", + "a" + ] + }, + { + "cell_type": "markdown", + "id": "42a2b1c2", + "metadata": {}, + "source": [ + "The commands above ask the computer to store the results of the random\n", + "drawing in a location in the computer’s memory to which we give a name\n", + "such as “a” or “ambulances” or “aardvark” — the name is up to us.\n", + "\n", + "Next, we need to count the number of defective ambulances:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fbb8547b", + "metadata": {}, + "outputs": [], + "source": [ + "# Count the number of nines in the random numbers.\n", + "# The \"a == 9\" part identifies all the numbers equal to 9.\n", + "# The \"sum\" part counts how many numbers \"a == 9\" found.\n", + "b = np.sum(a == 9)\n", + "# Show the result\n", + "b" + ] + }, + { + "cell_type": "markdown", + "id": "f9a41d86", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Counting sequence elements\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "We see that the code uses:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "39b7131a", + "metadata": {}, + "outputs": [], + "source": [ + "np.sum(a == 9)" + ] + }, + { + "cell_type": "markdown", + "id": "d601bd43", + "metadata": {}, + "source": [ + "What exactly happens here under the hood? First `a == 9` creates an\n", + "sequence of values that only contains\n", + "\n", + "`True` or `False`\n", + "\n", + "values, depending on whether each element is equal to 9 or not.\n", + "\n", + "Then, we ask Python to add up (`sum`). Python counts `True` as 1, and\n", + "`False` as 0; thus we can use `sum` to count the number of `True`\n", + "values.\n", + "\n", + "This comes down to asking “how many elements in `a` are equal to 9”.\n", + "\n", + "Don’t worry, we will go over this again in the next chapter.\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "The `sum` command is a *counting* operation. It asks the computer to\n", + "*count* the number of `9`s among the twenty numbers that are in location\n", + "`a` following the random draw carried out by the `rnd.choice` operation.\n", + "The result of the `sum` operation will be somewhere between 0 and 20,\n", + "the number of simulated ambulances that were out-of-order on a given\n", + "simulated day. The result is then placed in another location in the\n", + "computer’s memory that we label `b`.\n", + "\n", + "Above you see that we have worked out how to tell the computer to do a\n", + "single trial — one simulated day.\n", + "\n", + "### 2.3.1 Repeating trials\n", + "\n", + "We could run the code above for one trial over and over, and write down\n", + "the result on a piece of paper. If we did this 100 times we would have\n", + "100 counts of the number of simulated ambulances that had broken down\n", + "for each simulated day. To answer our question, we will then count the\n", + "number of times the count was more than three, and divide by 100, to get\n", + "an estimate of the proportion of days with more than three out-of-order\n", + "ambulances.\n", + "\n", + "One of the great things about the computer is that it is very good at\n", + "repeating tasks many times, so we do not have to. Our next task is to\n", + "ask the computer to repeat the single trial many times — say 1000 times\n", + "— and count up the results for us.\n", + "\n", + "Of course Python is very good at repeating things, but the instructions\n", + "to tell Python to repeat things will take a little while to get used to.\n", + "Soon, we will spend some time going over it in more detail. For now\n", + "though, we show you how what it looks like, and ask you to take our word\n", + "for it.\n", + "\n", + "The standard way to repeat steps in Python is a `for` loop. For example,\n", + "let us say we wanted to display (`print`) “Hello” five times. Here is\n", + "how we would do that with a `for` loop:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "530490d2", + "metadata": {}, + "outputs": [], + "source": [ + "# Read the next line as \"repeat the following steps five times\".\n", + "for i in np.arange(0, 5):\n", + " # The indented stuff is the code we repeat five times.\n", + " # Print \"Hello\" to the screen.\n", + " print(\"Hello\")" + ] + }, + { + "cell_type": "markdown", + "id": "eacbc47f", + "metadata": {}, + "source": [ + "You can probably see where we are going here. We are going to put the\n", + "code for one trial inside a `for` loop, to repeat that trial code many\n", + "times.\n", + "\n", + "Our next job is to *store* the results of each trial. If we are going to\n", + "run 1000 trials, we need to store 1000 results.\n", + "\n", + "To do this, we start with a sequence of 1000 zeros, that we will fill in\n", + "later, like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "33a5bdf8", + "metadata": {}, + "outputs": [], + "source": [ + "# Ask NumPy to make a sequence of 1000 zeros that we will use\n", + "# to store the results of our 1000 trials.\n", + "# Call this sequence \"z\"\n", + "z = np.zeros(1000)" + ] + }, + { + "cell_type": "markdown", + "id": "1ce33403", + "metadata": {}, + "source": [ + "For now, `z` contains 1000 zeros, but we will soon use a `for` loop to\n", + "execute 1000 trials. For each trial we will calculate our result (the\n", + "number of broken-down ambulances), and we will store the result in the\n", + "`z` store. 
We end up with 1000 trial results stored in `z`.\n", + "\n", + "With these parts, we are now ready to solve the ambulance problem, using\n", + "Python.\n", + "\n", + "### 2.3.2 The solution\n", + "\n", + "This is our big moment! Here we will combine the elements shown above to\n", + "perform our ambulance simulation over, say, 1000 days. Just a quick\n", + "reminder: we do not expect you to understand all the detail of the code\n", + "below; we will cover that later. For now, see if you can follow along\n", + "with the gist of it.\n", + "\n", + "To solve resampling problems, we typically proceed as we have done\n", + "above. We figure out the structure of a single trial and then place that\n", + "trial in a `for` loop that executes it multiple times (once for each\n", + "day, in our case).\n", + "\n", + "Now, let us apply this procedure to our ambulance problem. We simulate\n", + "1000 days. You will see that we have just taken the parts above, and put\n", + "them together. The only new part here, is the step at the end, where we\n", + "store the result of the trial. Bear with us for that; we will come to it\n", + "soon." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c9c53bba", + "metadata": {}, + "outputs": [], + "source": [ + "# Ask NumPy to make a sequence of 1000 zeros that we will use\n", + "# to store the results of our 1000 trials.\n", + "# Call this sequence \"z\"\n", + "z = np.zeros(1000)\n", + "\n", + "# These are the numbers we will ask NumPy to select from.\n", + "numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n", + "\n", + "# Read the next line as \"repeat the following steps 1000 times\".\n", + "for i in np.arange(0, 1000):\n", + " # The indented stuff is the code we repeat 1000 times.\n", + "\n", + " # Get 20 (size=20) values from the *numbers* list.\n", + " # Store the 20 numbers with the name \"a\"\n", + " a = rnd.choice(numbers, size=20)\n", + "\n", + " # Count the number of nines in the random numbers.\n", + " # The \"a == 9\" part identifies all the numbers equal to 9.\n", + " # The \"sum\" part counts how many numbers \"a == 9\" found.\n", + " b = np.sum(a == 9)\n", + "\n", + " # Store the result from this trial in the sequence \"z\"\n", + " z[i] = b\n", + "\n", + " # Now go back and repeat the trial, until done." + ] + }, + { + "cell_type": "markdown", + "id": "5908d27b", + "metadata": {}, + "source": [ + "The `z[i] = b` statement that follows the `sum` *counting* operation\n", + "simply keeps track of the results of each trial, placing the number of\n", + "defective ambulances for each trial inside the sequence called `z`. The\n", + "sequence has 1000 positions: one for each trial.\n", + "\n", + "When we have run the code above, we have stored 1000 trial results in\n", + "the sequence `z`. These are 1000 counts of out-of-order ambulances, one\n", + "for each of our simulated days. Our last task is to calculate the\n", + "proportion of these days for which we had more than three broken-down\n", + "ambulances.\n", + "\n", + "Since our aim is to count the number of days in which more than 3 (4 or\n", + "more) defective ambulances occur, we use another *counting* `sum`\n", + "command at the end of the 1000 trials. This command *counts* how many\n", + "times more than 3 defects occurred in the 1000 days recorded in our `z`\n", + "sequence, and we place the result in another location, `k`. This gives\n", + "us the total number of days where 4 or more defective ambulances are\n", + "seen to occur. 
Then we divide the number in `k` by 1000, the number of\n", + "trials. Thus we obtain an estimate of the chance, expressed as a\n", + "probability between 0 and 1, that 4 or more ambulances will be defective\n", + "on a given day. And we store that result in a location that we call\n", + "`kk`, which Python subsequently prints to the screen." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eb07aa62", + "metadata": {}, + "outputs": [], + "source": [ + "# How many trials resulted in more than 3 ambulances out of order?\n", + "k = np.sum(z > 3)\n", + "\n", + "# Convert to a proportion.\n", + "kk = k / 1000\n", + "\n", + "# Print the result.\n", + "print(kk)" + ] + }, + { + "cell_type": "markdown", + "id": "da23f2a5", + "metadata": {}, + "source": [ + "This is the estimate we wanted; the proportion of days where more than\n", + "three ambulances were out of action.\n", + "\n", + "We have crept up on the solution, so it might not be clear to you how\n", + "few steps you needed to do this task. Here is the whole solution to the\n", + "problem, without the comments:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "600ac1a0", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()\n", + "\n", + "z = np.zeros(1000)\n", + "numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n", + "\n", + "for i in np.arange(0, 1000):\n", + " a = rnd.choice(numbers, size=20)\n", + " b = np.sum(a == 9)\n", + " z[i] = b\n", + "\n", + "k = np.sum(z > 3)\n", + "kk = k / 1000\n", + "print(kk)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/basketball_shots.ipynb b/python-book/notebooks/basketball_shots.ipynb new file mode 100644 index 00000000..e8abcd87 --- /dev/null +++ b/python-book/notebooks/basketball_shots.ipynb @@ -0,0 +1,77 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "48096db3", + "metadata": {}, + "source": [ + "# Three or more basketball shots" + ] + }, + { + "cell_type": "markdown", + "id": "020bcf37", + "metadata": {}, + "source": [ + "We simulate the probability of scoring three or more baskets from five\n", + "shots, if each shot has a 25% probability of success." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b4aed889", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bf7b9a93", + "metadata": {}, + "outputs": [], + "source": [ + "n_baskets = np.zeros(10000)\n", + "\n", + "# Do 10000 experimental trials.\n", + "for i in range(10000):\n", + "\n", + " # Generate 5 random numbers, each between 1 and 4, put them in \"a\".\n", + " # Let \"1\" represent a basket, \"2\" through \"4\" be a miss.\n", + " a = rnd.integers(1, 5, size=5)\n", + "\n", + " # Count the number of baskets, put that result in b.\n", + " b = np.sum(a == 1)\n", + "\n", + " # Keep track of each experiment's results in z.\n", + " n_baskets[i] = b\n", + "\n", + " # End the experiment, go back and repeat until all 10000 are completed, then\n", + " # proceed.\n", + "\n", + "# Determine how many experiments produced more than two baskets, put that\n", + "# result in k.\n", + "n_more_than_2 = np.sum(n_baskets > 2)\n", + "\n", + "# Convert to a proportion.\n", + "prop_more_than_2 = n_more_than_2 / 10000\n", + "\n", + "# Print the result.\n", + "print(prop_more_than_2)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/billies_bill.ipynb b/python-book/notebooks/billies_bill.ipynb new file mode 100644 index 00000000..b163ac6b --- /dev/null +++ b/python-book/notebooks/billies_bill.ipynb @@ -0,0 +1,492 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "e868010b", + "metadata": {}, + "source": [ + "# Billie's Bill" + ] + }, + { + "cell_type": "markdown", + "id": "49609752", + "metadata": {}, + "source": [ + "The text in this notebook section assumes you have opened the page as an\n", + "interactive notebook, on your own computer, or one of the Jupyter web\n", + "interfaces.\n", + "\n", + "A notebook can contain blocks of text — like this one — as well as code,\n", + "and the results from running the code.\n", + "\n", + "Jupyter Notebooks are made up of *cells*. This is a cell with text — a\n", + "text cell.\n", + "\n", + "Notebook text can have formatting, such as links.\n", + "\n", + "For example, this sentence ends with a link to the earlier [second\n", + "edition of this\n", + "book](https://resample.statistics.com/intro-text-online).\n", + "\n", + "If you are in the notebook interface (rather than reading this in the\n", + "textbook), you will see the Jupyter menu near the top of the page, with\n", + "headings “File”, “Edit” and so on.\n", + "\n", + "Underneath that, by default, you may see a row of icons - the “Toolbar”.\n", + "\n", + "In the toolbar, you may see icons to run the current cell, among others.\n", + "\n", + "To move from one cell to the next, you can click the run icon in the\n", + "toolbar, but it is more efficient to press the Shift key, and press\n", + "Enter (with Shift still held down). We will write this as Shift-Enter.\n", + "\n", + "In this, our first notebook, we will be using Python to solve one of\n", + "those difficult and troubling problems in life — working out the bill in\n", + "a restaurant.\n", + "\n", + "## 4.4 The meal in question\n", + "\n", + "Alex and Billie are at a restaurant, getting ready to order. 
They do not\n", + "have much money, so they are calculating the expected bill before they\n", + "order.\n", + "\n", + "Alex is thinking of having the fish for £10.50, and Billie is leaning\n", + "towards the chicken, at £9.25. First they calculate their combined bill.\n", + "\n", + "Below this text you see a *code* cell. It contains the Python code to\n", + "calculate the total bill. Press Shift-Enter in the cell below, to see\n", + "the total." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "65dcb5ca", + "metadata": {}, + "outputs": [], + "source": [ + "10.50 + 9.25" + ] + }, + { + "cell_type": "markdown", + "id": "1add5b0b", + "metadata": {}, + "source": [ + "The contents of the cell above is Python code. As you would predict,\n", + "Python understands numbers like `10.50`, and it understands `+` between\n", + "the numbers as an instruction to add the numbers.\n", + "\n", + "When you press Shift-Enter, Python finds `10.50`, realizes it is a\n", + "number, and stores that number somewhere in memory. It does the same\n", + "thing for `9.25`, and then it runs the *addition* operation on these two\n", + "numbers in memory, which gives the number 19.75.\n", + "\n", + "Finally, Python sends the resulting number (19.75) back to the notebook\n", + "for display. The notebook detects that Python sent back a value, and\n", + "shows it to us.\n", + "\n", + "This is exactly what a calculator would do.\n", + "\n", + "## 4.5 Comments\n", + "\n", + "Unlike a calculator, we can also put notes next to our calculations, to\n", + "remind us what they are for. One way of doing this is to use a\n", + "“comment”. You have already seen comments in the previous chapter.\n", + "\n", + "A comment is some text that the computer will ignore. In Python, you can\n", + "make a comment by starting a line with the `#` (hash) character. For\n", + "example, the next cell is a code cell, but when you run it, it does not\n", + "show any result. In this case, that is because the computer sees the `#`\n", + "at the beginning of the line, and then ignores the rest." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e4fe5388", + "metadata": {}, + "outputs": [], + "source": [ + "# This bit of text is for me to read, and the computer to ignore." + ] + }, + { + "cell_type": "markdown", + "id": "5c36ee6b", + "metadata": {}, + "source": [ + "Many of the code cells you see will have comments in them, to explain\n", + "what the code is doing.\n", + "\n", + "Practice writing comments for your own code. It is a very good habit to\n", + "get into. You will find that experienced programmers write many comments\n", + "on their code. They do not do this to show off, but because they have a\n", + "lot of experience in reading code, and they know that comments make it\n", + "much easier to read and understand code.\n", + "\n", + "## 4.6 More calculations\n", + "\n", + "Let us continue with the struggle that Alex and Billie are having with\n", + "their bill.\n", + "\n", + "They realize that they will also need to pay a tip.\n", + "\n", + "They think it would be reasonable to leave a 15% tip. Now they need to\n", + "multiply their total bill by 0.15, to get the tip. The bill is about\n", + "£20, so they know that the tip will be about £3.\n", + "\n", + "In Python `*` means multiplication. This is the equivalent of the “×”\n", + "key on a calculator.\n", + "\n", + "What about this, for the correct calculation?" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6fd688eb", + "metadata": {}, + "outputs": [], + "source": [ + "# The tip - with a nasty mistake.\n", + "10.50 + 9.25 * 0.15" + ] + }, + { + "cell_type": "markdown", + "id": "e1d57a83", + "metadata": {}, + "source": [ + "Oh dear, no, that isn’t doing the right calculation.\n", + "\n", + "Python follows the normal rules of *precedence* with calculations. These\n", + "rules tell us to do multiplication before addition.\n", + "\n", + "See for more detail\n", + "on the standard rules.\n", + "\n", + "In the case above the rules tell Python to first calculate `9.25 * 0.15`\n", + "(to get `1.3875`) and then to add the result to `10.50`, giving\n", + "`11.8875`.\n", + "\n", + "We need to tell Python we want it to do the *addition* and *then* the\n", + "multiplication. We do this with round brackets (parentheses):\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "There are three types of brackets in Python.\n", + "\n", + "These are:\n", + "\n", + "- *round brackets* or *parentheses*: `()`;\n", + "- *square brackets*: `[]`;\n", + "- *curly brackets*: `{}`.\n", + "\n", + "Each type of bracket has a different meaning in Python. In the examples,\n", + "play close to attention to the type of brackets we are using.\n", + "\n", + "
\n", + "\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "13c74243", + "metadata": {}, + "outputs": [], + "source": [ + "# The bill plus tip - mistake fixed.\n", + "(10.50 + 9.25) * 0.15" + ] + }, + { + "cell_type": "markdown", + "id": "d211c34a", + "metadata": {}, + "source": [ + "The obvious next step is to calculate the bill *including the tip*." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a46a88a5", + "metadata": {}, + "outputs": [], + "source": [ + "# The bill, including the tip\n", + "10.50 + 9.25 + (10.50 + 9.25) * 0.15" + ] + }, + { + "cell_type": "markdown", + "id": "0630bce9", + "metadata": {}, + "source": [ + "At this stage we start to feel that we are doing too much typing. Notice\n", + "that we had to type out `10.50 + 9.25` twice there. That is a little\n", + "boring, but it also makes it easier to make mistakes. The more we have\n", + "to type, the greater the chance we have to make a mistake.\n", + "\n", + "To make things simpler, we would like to be able to *store* the result\n", + "of the calculation `10.50 + 9.25`, and then re-use this value, to\n", + "calculate the tip.\n", + "\n", + "This is the role of *variables*. A *variable* is a value with a name.\n", + "\n", + "Here is a variable:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "874e1916", + "metadata": {}, + "outputs": [], + "source": [ + "# The cost of Alex's meal.\n", + "a = 10.50" + ] + }, + { + "cell_type": "markdown", + "id": "de37cf79", + "metadata": {}, + "source": [ + "`a` is a *name* we give to the value 10.50. You can read the line above\n", + "as “The variable `a` *gets the value* 10.50”. We can also talk of\n", + "*setting* the variable. Here we are *setting* `a` to equal 10.50.\n", + "\n", + "Now, when we use `a` in code, it refers to the value we gave it. For\n", + "example, we can put `a` on a line on its own, and Python will show us\n", + "the *value* of `a`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "85865a9a", + "metadata": {}, + "outputs": [], + "source": [ + "# The value of a\n", + "a" + ] + }, + { + "cell_type": "markdown", + "id": "5811397d", + "metadata": {}, + "source": [ + "We did not have to use the name `a` — we can choose almost any name we\n", + "like. For example, we could have chosen `alex_meal` instead:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9c1c9bed", + "metadata": {}, + "outputs": [], + "source": [ + "# The cost of Alex's meal.\n", + "# alex_meal gets the value 10.50\n", + "alex_meal = 10.50" + ] + }, + { + "cell_type": "markdown", + "id": "62fe4df4", + "metadata": {}, + "source": [ + "We often set variables like this, and then display the result, all in\n", + "the same cell. We do this by first setting the variable, as above, and\n", + "then, on the final line of the cell, we put the variable name on a line\n", + "on its own, to ask Python to show us the value of the variable. Here we\n", + "set `billie_meal` to have the value 9.25, and then show the value of\n", + "`billie_meal`, all in the same cell." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4c6f3e13", + "metadata": {}, + "outputs": [], + "source": [ + "# The cost of Billie's meal.\n", + "billie_meal = 9.25\n", + "# Show the value of billies_meal\n", + "billie_meal" + ] + }, + { + "cell_type": "markdown", + "id": "d08674b5", + "metadata": {}, + "source": [ + "Of course, here, we did not learn much, but we often set variable values\n", + "with the results of a calculation. For example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "44840851", + "metadata": {}, + "outputs": [], + "source": [ + "# The cost of both meals, before tip.\n", + "bill_before_tip = 10.50 + 9.25\n", + "# Show the value of both meals.\n", + "bill_before_tip" + ] + }, + { + "cell_type": "markdown", + "id": "3d6bc649", + "metadata": {}, + "source": [ + "But wait — we can do better than typing in the calculation like this. We\n", + "can use the values of our variables, instead of typing in the values\n", + "again." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d58d3e75", + "metadata": {}, + "outputs": [], + "source": [ + "# The cost of both meals, before tip, using variables.\n", + "bill_before_tip = alex_meal + billie_meal\n", + "# Show the value of both meals.\n", + "bill_before_tip" + ] + }, + { + "cell_type": "markdown", + "id": "3ddba715", + "metadata": {}, + "source": [ + "We make the calculation clearer by writing the calculation this way — we\n", + "are calculating the bill before the tip by adding the cost of Alex’s and\n", + "Billie’s meal — and that’s what the code looks like. But this also\n", + "allows us to *change* the variable value, and recalculate. For example,\n", + "say Alex decided to go for the hummus plate, at £7.75. Now we can tell\n", + "Python that we want `alex_meal` to have the value 7.75 instead of 10.50:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "de4e8e71", + "metadata": {}, + "outputs": [], + "source": [ + "# The new cost of Alex's meal.\n", + "# alex_meal gets the value 7.75\n", + "alex_meal = 7.75\n", + "# Show the value of alex_meal\n", + "alex_meal" + ] + }, + { + "cell_type": "markdown", + "id": "f45ef761", + "metadata": {}, + "source": [ + "Notice that `alex_meal` now has a new value. It was 10.50, but now it is\n", + "7.75. We have *reset* the value of `alex_meal`. 
In order to use the new\n", + "value for `alex_meal`, we must *recalculate* the bill before tip with\n", + "*exactly the same code as before*:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1ae7fb72", + "metadata": {}, + "outputs": [], + "source": [ + "# The new cost of both meals, before tip.\n", + "bill_before_tip = alex_meal + billie_meal\n", + "# Show the value of both meals.\n", + "bill_before_tip" + ] + }, + { + "cell_type": "markdown", + "id": "f9ef6f0f", + "metadata": {}, + "source": [ + "Notice that, now we have rerun this calculation, we have *reset* the\n", + "value for `bill_before_tip` to the correct value corresponding to the\n", + "new value for `alex_meal`.\n", + "\n", + "All that remains is to recalculate the bill plus tip, using the new\n", + "value for the variable:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "387a22a1", + "metadata": {}, + "outputs": [], + "source": [ + "# The cost of both meals, after tip.\n", + "bill_after_tip = bill_before_tip + bill_before_tip * 0.15\n", + "# Show the value of both meals, after tip.\n", + "bill_after_tip" + ] + }, + { + "cell_type": "markdown", + "id": "11f9415d", + "metadata": {}, + "source": [ + "Now we are using variables with relevant names, the calculation looks\n", + "right to our eye. The code expresses the calculation as we mean it: the\n", + "bill after tip is equal to the bill before the tip, plus the bill before\n", + "the tip times 0.15.\n", + "\n", + "## 4.7 And so, on\n", + "\n", + "Now you have done some practice with the notebook, and with variables,\n", + "you are ready for a new problem in probability and statistics, in the\n", + "next chapter." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/birthday_problem.ipynb b/python-book/notebooks/birthday_problem.ipynb new file mode 100644 index 00000000..71283946 --- /dev/null +++ b/python-book/notebooks/birthday_problem.ipynb @@ -0,0 +1,81 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "8c6b1b3a", + "metadata": {}, + "source": [ + "# The Birthday Problem" + ] + }, + { + "cell_type": "markdown", + "id": "2010106b", + "metadata": {}, + "source": [ + "Here we answer the question: “What is the probability that two or more\n", + "people among a roomful of (say) twenty-five people will have the same\n", + "birthday?”" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5d9b80c8", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ce4b8a41", + "metadata": {}, + "outputs": [], + "source": [ + "n_with_same_birthday = np.zeros(10000)\n", + "\n", + "days_of_year = np.arange(1, 366) # 1 through 365\n", + "\n", + "# Do 10000 trials (experiments)\n", + "for i in range(10000):\n", + " # Generate 25 numbers randomly between \"1\" and \"365\" put them in a.\n", + " a = rnd.choice(days_of_year, size=25)\n", + "\n", + " # Looking in a, count the number of multiples and put the result in\n", + " # b. We request multiples > 1 because we are interested in any multiple,\n", + " # whether it is a duplicate, triplicate, etc. 
Had we been interested only\n", + " # in duplicates, we would have put in np.sum(counts == 2).\n", + " counts = np.bincount(a)\n", + " n_duplicates = np.sum(counts > 1)\n", + "\n", + " # Score the result of each trial to our store\n", + " n_with_same_birthday[i] = n_duplicates\n", + "\n", + " # End the loop for the trial, go back and repeat the trial until all 10000\n", + " # are complete, then proceed.\n", + "\n", + "# Determine how many trials had at least one multiple.\n", + "k = np.sum(n_with_same_birthday)\n", + "\n", + "# Convert to a proportion.\n", + "kk = k / 10000\n", + "\n", + "# Print the result.\n", + "print(kk)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/bullseye.ipynb b/python-book/notebooks/bullseye.ipynb new file mode 100644 index 00000000..e6ca6eb9 --- /dev/null +++ b/python-book/notebooks/bullseye.ipynb @@ -0,0 +1,95 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "8e1765df", + "metadata": {}, + "source": [ + "# Bullseye" + ] + }, + { + "cell_type": "markdown", + "id": "185c6096", + "metadata": {}, + "source": [ + "This notebook solves the “bullseye” problem: assume from past experience\n", + "that a given archer puts 10 percent of his shots in the black\n", + "(“bullseye”) and 60 percent of his shots in the white ring around the\n", + "bullseye, but misses with 30 percent of his shots. How likely is it that\n", + "in three shots the shooter will get exactly one bullseye, two in the\n", + "white, and no misses?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "03564a24", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2937d5ed", + "metadata": {}, + "outputs": [], + "source": [ + "# Make an array to store the results of each trial.\n", + "white_counts = np.zeros(10000)\n", + "\n", + "# Do 10000 experimental trials\n", + "for i in range(10000):\n", + "\n", + " # To represent 3 shots, generate 3 numbers at random between \"1\" and \"10\"\n", + " # and put them in a. We will let a \"1\" denote a bullseye, \"2\"-\"7\" a shot in\n", + " # the white, and \"8\"-\"10\" a miss.\n", + " a = rnd.integers(1, 11, size=3)\n", + "\n", + " # Count the number of bullseyes, put that result in b.\n", + " b = np.sum(a == 1)\n", + "\n", + " # If there is exactly one bullseye, we will continue with counting the\n", + " # other shots. (If there are no bullseyes, we need not bother — the\n", + " # outcome we are interested in has not occurred.)\n", + " if b == 1:\n", + "\n", + " # Count the number of shots in the white, put them in c. 
(Recall we are\n", + " # doing this only if we got one bullseye.)\n", + " c = np.sum((a >= 2) & (a <=7))\n", + "\n", + " # Keep track of the results of this second count.\n", + " white_counts[i] = c\n", + "\n", + " # End the \"if\" sequence — we will do the following steps without regard\n", + " # to the \"if\" condition.\n", + "\n", + " # End the above experiment and repeat it until 10000 repetitions are\n", + " # complete, then continue.\n", + "\n", + "# Count the number of occasions on which there are two in the white and a\n", + "# bullseye.\n", + "n_desired = np.sum(white_counts == 2)\n", + "\n", + "# Convert to a proportion.\n", + "prop_desired = n_desired / 10000\n", + "\n", + "# Print the results.\n", + "print(prop_desired)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/cards_pennies.ipynb b/python-book/notebooks/cards_pennies.ipynb new file mode 100644 index 00000000..ea023c97 --- /dev/null +++ b/python-book/notebooks/cards_pennies.ipynb @@ -0,0 +1,128 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "1395c47e", + "metadata": {}, + "source": [ + "# Cards and pennies" + ] + }, + { + "cell_type": "markdown", + "id": "8de84e3a", + "metadata": {}, + "source": [ + "An answer for the following puzzle: “… shuffle a packet of four cards —\n", + "two red, two black — and deal them face down in a row. Two cards are\n", + "picked at random, say by placing a penny on each. What is the\n", + "probability that those two cards are the same color?”" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "91da23ea", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "073b222c", + "metadata": {}, + "outputs": [], + "source": [ + "# Numbers representing the slips in the hat.\n", + "N = np.array([1, 1, 2, 2])\n", + "\n", + "# An array in which we will store the result of each trial.\n", + "z = np.repeat(['No result yet'], 10000)\n", + "\n", + "for i in range(10000):\n", + " # Shuffle the numbers in N into a random order.\n", + " shuffled = rnd.permuted(N)\n", + "\n", + " A = shuffled[0] # The first slip from the shuffled array.\n", + " B = shuffled[1] # The second slip from the shuffled array.\n", + "\n", + " # Set the result of this trial.\n", + " if A == B:\n", + " z[i] = 'Yes'\n", + " else:\n", + " z[i] = 'No'\n", + "\n", + "# How many times did we see \"Yes\"?\n", + "k = np.sum(z == 'Yes')\n", + "\n", + "# The proportion.\n", + "kk = k / 10000\n", + "\n", + "print(kk)" + ] + }, + { + "cell_type": "markdown", + "id": "9b1f9a25", + "metadata": {}, + "source": [ + "Now let’s play the game differently, first picking one card and *putting\n", + "it back and shuffling* before picking a second card. What are the\n", + "results now? You can try it with the cards, but here is another program,\n", + "similar to the last, to run that variation." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "38bcdaa5", + "metadata": {}, + "outputs": [], + "source": [ + "# The cards / pennies game - but replacing the slip and re-shuffling, before\n", + "# drawing again.\n", + "\n", + "# An array in which we will store the result of each trial.\n", + "z = np.repeat(['No result yet'], 10000)\n", + "\n", + "for i in range(10000):\n", + " # Shuffle the numbers in N into a random order.\n", + " first_shuffle = rnd.permuted(N)\n", + " # Draw a slip of paper.\n", + " A = first_shuffle[0] # The first slip.\n", + "\n", + " # Shuffle again (with all the slips).\n", + " second_shuffle = rnd.permuted(N)\n", + " # Draw a slip of paper.\n", + " B = second_shuffle[0] # The second slip.\n", + "\n", + " # Set the result of this trial.\n", + " if A == B:\n", + " z[i] = 'Yes'\n", + " else:\n", + " z[i] = 'No'\n", + "\n", + "# How many times did we see \"Yes\"?\n", + "k = np.sum(z == 'Yes')\n", + "\n", + "# The proportion.\n", + "kk = k / 10000\n", + "\n", + "print(kk)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/contract_poll.ipynb b/python-book/notebooks/contract_poll.ipynb new file mode 100644 index 00000000..ac3d848d --- /dev/null +++ b/python-book/notebooks/contract_poll.ipynb @@ -0,0 +1,86 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "4d5345b5", + "metadata": {}, + "source": [ + "# Contract poll simulation" + ] + }, + { + "cell_type": "markdown", + "id": "cf909969", + "metadata": {}, + "source": [ + "This Python notebook generates samples of 50 simulated voters on the\n", + "assumption that only 50 percent are in favor of the contract. Then it\n", + "counts (`sum`s) the number of samples where over 29 (30 or more) of the\n", + "50 respondents said they were in favor of the contract. (That is, we use\n", + "a “one-tailed test.”) The result in the `kk` variable is the chance of a\n", + "“false positive,” that is, 30 or more people saying they favor a\n", + "contract when support for the proposal is actually split evenly down the\n", + "middle." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7eeb5f4b", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "# We will do 10,000 iterations.\n", + "n = 10_000\n", + "\n", + "# Make an array of integers to store the \"Yes\" counts.\n", + "yeses = np.zeros(n, dtype=int)\n", + "\n", + "for i in range(n):\n", + " answers = rnd.choice(['No', 'Yes'], size=50)\n", + " yeses[i] = np.sum(answers == 'Yes')\n", + "\n", + "# Produce a histogram of the trial results.\n", + "# Use integer bins for histogram, from 10 through 40.\n", + "plt.hist(yeses, bins=range(10, 41))\n", + "plt.title('Number of yes votes out of 50, in null universe')" + ] + }, + { + "cell_type": "markdown", + "id": "2625429e", + "metadata": {}, + "source": [ + "In the histogram above, we see that about 11 percent of our trials had\n", + "30 or more voters in favor, despite the fact that they were drawn from a\n", + "population that was split 50-50. 
Python will calculate this proportion\n", + "directly if we add the following commands to the above:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "41131825", + "metadata": {}, + "outputs": [], + "source": [ + "k = np.sum(yeses >= 30)\n", + "kk = k / n\n", + "print('Proportion >= 30:', np.round(kk, 2))" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/female_calves.ipynb b/python-book/notebooks/female_calves.ipynb new file mode 100644 index 00000000..75f3406e --- /dev/null +++ b/python-book/notebooks/female_calves.ipynb @@ -0,0 +1,95 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "dece1bc2", + "metadata": {}, + "source": [ + "# Female calf numbers simulation" + ] + }, + { + "cell_type": "markdown", + "id": "331865af", + "metadata": {}, + "source": [ + "This notebook uses simulation to test the null hypothesis that the\n", + "chances of any one calf being female is 100 / 206." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "01359641", + "metadata": {}, + "outputs": [], + "source": [ + "# set the number of trials\n", + "n_trials = 10000\n", + "\n", + "# set the size of each sample\n", + "sample_size = 10\n", + "\n", + "# Probability of any one calf being female.\n", + "p_female = 100 / 206\n", + "\n", + "# an array to store the results\n", + "scores = np.zeros(n_trials)\n", + "\n", + "# for 10000 repeats\n", + "for i in range(n_trials):\n", + "\n", + " a = rnd.choice(['female', 'male'],\n", + " p=[p_female, 1 - p_female],\n", + " size = sample_size)\n", + " b = np.sum(a == 'female')\n", + "\n", + " # store the result of the current trial\n", + " scores[i] = b\n", + "\n", + "# plot a histogram of the scores\n", + "plt.title(f\"Number of females in {n_trials} samples of \\n{sample_size} simulated calves\")\n", + "plt.hist(scores)\n", + "plt.xlabel('Number of Females')\n", + "plt.ylabel('Frequency')\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ce24e859", + "metadata": {}, + "outputs": [], + "source": [ + "# count the number of scores that were greater than or equal to 9\n", + "k = np.sum(scores >= 9)\n", + "\n", + "# express as a proportion\n", + "kk = k / n_trials\n", + "\n", + "# show the proportion\n", + "print(f\"The probability of 9 or 10 females occurring by chance is {kk}\")" + ] + }, + { + "cell_type": "markdown", + "id": "cff68ff7", + "metadata": {}, + "source": [ + "We read from the result in vector `kk` in the “calves” program that the\n", + "probability of 9 or 10 females occurring by chance is a bit more than\n", + "one percent." 
+ ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/fifteen_points_in_bridge.ipynb b/python-book/notebooks/fifteen_points_in_bridge.ipynb new file mode 100644 index 00000000..dc774274 --- /dev/null +++ b/python-book/notebooks/fifteen_points_in_bridge.ipynb @@ -0,0 +1,120 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "a4dab045", + "metadata": {}, + "source": [ + "# Fifteen points in a bridge hand" + ] + }, + { + "cell_type": "markdown", + "id": "e5902494", + "metadata": {}, + "source": [ + "Let us assume that ace counts as 4, king = 3, queen = 2, and jack = 1." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b8a7c92a", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "import matplotlib.pyplot as plt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "59f94a80", + "metadata": {}, + "outputs": [], + "source": [ + "# Constitute a deck with 4 jacks (point value 1), 4 queens (value 2), 4\n", + "# kings (value 3), 4 aces (value 4), and 36 other cards with no point\n", + "# value\n", + "whole_deck = np.repeat([1, 2, 3, 4, 0], [4, 4, 4, 4, 36])\n", + "whole_deck" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "101617f5", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N)\n", + "\n", + "# Do N trials.\n", + "for i in range(N):\n", + " # Shuffle the deck of cards and draw 13\n", + " hand = rnd.choice(whole_deck, size=13, replace=False)\n", + "\n", + " # Total the points.\n", + " points = np.sum(hand)\n", + "\n", + " # Keep score of the result.\n", + " trial_results[i] = points\n", + "\n", + " # End one experiment, go back and repeat until all N trials are done." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f94f886e", + "metadata": {}, + "outputs": [], + "source": [ + "# Produce a histogram of trial results.\n", + "plt.hist(trial_results, bins=range(25), align='left', rwidth=0.75)\n", + "plt.title('Points in bridge hands');" + ] + }, + { + "cell_type": "markdown", + "id": "58555e00", + "metadata": {}, + "source": [ + "From this histogram, we see that in about 4 percent of our trials we\n", + "obtained a total of exactly 15 points. 
We can also compute this\n", + "directly:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6430f167", + "metadata": {}, + "outputs": [], + "source": [ + "# How many times did we have a hand with fifteen points?\n", + "k = np.sum(trial_results == 15)\n", + "\n", + "# Convert to a proportion.\n", + "kk = k / N\n", + "\n", + "# Show the result.\n", + "kk" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/fine_win.ipynb b/python-book/notebooks/fine_win.ipynb new file mode 100644 index 00000000..2c476f70 --- /dev/null +++ b/python-book/notebooks/fine_win.ipynb @@ -0,0 +1,358 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "d481a07b", + "metadata": {}, + "source": [ + "# Fine day and win" + ] + }, + { + "cell_type": "markdown", + "id": "e96dd5d4", + "metadata": {}, + "source": [ + "This notebook calculates the chances that the Commanders win on a fine\n", + "day.\n", + "\n", + "We also go through the logic of the `if` statement, and its associated\n", + "`else` clause." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0acb8950", + "metadata": {}, + "outputs": [], + "source": [ + "# Load the NumPy array library.\n", + "import numpy as np\n", + "\n", + "# Make a random number generator\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ca5ea4b9", + "metadata": {}, + "outputs": [], + "source": [ + "# blue means \"nice day\", yellow means \"not nice\".\n", + "bucket_A = np.repeat(['blue', 'yellow'], [7, 3])\n", + "bucket_A" + ] + }, + { + "cell_type": "markdown", + "id": "e92d34e9", + "metadata": {}, + "source": [ + "Now let us draw a ball at random from bucket_A:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "150f5066", + "metadata": {}, + "outputs": [], + "source": [ + "a_ball = rnd.choice(bucket_A)\n", + "a_ball" + ] + }, + { + "cell_type": "markdown", + "id": "d52b7865", + "metadata": {}, + "source": [ + "How we run our first `if` statement. Running this code will display “The\n", + "ball was blue” if the ball was blue, otherwise it will not display\n", + "anything:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e8e11660", + "metadata": {}, + "outputs": [], + "source": [ + "if a_ball == 'blue':\n", + " print('The ball was blue')" + ] + }, + { + "cell_type": "markdown", + "id": "475ce4d0", + "metadata": {}, + "source": [ + "Notice that the header line has `if`, followed by the conditional\n", + "expression (question) `a_ball == 'blue'`. The header line finishes with\n", + "a colon `:`. The *body* of the `if` statement is one or more *indented*\n", + "lines. Here there is only one line: `print('The ball was blue')`. 
Python\n", + "only runs the body of the if statement if the *condition* is `True`.[^1]\n", + "\n", + "To confirm we see “The ball was blue” if `a_ball` is `'blue'` and\n", + "nothing otherwise, we can set `a_ball` and re-run the code:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f46276d3", + "metadata": {}, + "outputs": [], + "source": [ + "# Set value of a_ball so we know what it is.\n", + "a_ball = 'blue'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "024e3906", + "metadata": {}, + "outputs": [], + "source": [ + "if a_ball == 'blue':\n", + " # The conditional statement is True in this case, so the body does run.\n", + " print('The ball was blue')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "29a2585c", + "metadata": {}, + "outputs": [], + "source": [ + "a_ball = 'yellow'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ec1e2ba6", + "metadata": {}, + "outputs": [], + "source": [ + "if a_ball == 'blue':\n", + " # The conditional statement is False, so the body does not run.\n", + " print('The ball was blue')" + ] + }, + { + "cell_type": "markdown", + "id": "e329900d", + "metadata": {}, + "source": [ + "We can add an `else` clause to the `if` statement. Remember the *body*\n", + "of the `if` statement runs if the *conditional expression* (here\n", + "`a_ball == 'blue')` is `True`. The `else` clause runs if the conditional\n", + "statement is `False`. This may be clearer with an example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8d98847b", + "metadata": {}, + "outputs": [], + "source": [ + "a_ball = 'blue'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "19ae7d91", + "metadata": {}, + "outputs": [], + "source": [ + "if a_ball == 'blue':\n", + " # The conditional expression is True in this case, so the body runs.\n", + " print('The ball was blue')\n", + "else:\n", + " # The conditional expression was True, so the else clause does not run.\n", + " print('The ball was not blue')" + ] + }, + { + "cell_type": "markdown", + "id": "913517e4", + "metadata": {}, + "source": [ + "Notice that the `else` clause of the `if` statement starts with a header\n", + "line — `else` — followed by a colon `:`. It then has its own indented\n", + "*body* of indented code. The body of the `else` clause only runs if the\n", + "initial conditional expression is *not* `True`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dcaaf361", + "metadata": {}, + "outputs": [], + "source": [ + "a_ball = 'yellow'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dfabe508", + "metadata": {}, + "outputs": [], + "source": [ + "if a_ball == 'blue':\n", + " # The conditional expression was False, so the body does not run.\n", + " print('The ball was blue')\n", + "else:\n", + " # but the else clause does run.\n", + " print('The ball was not blue')" + ] + }, + { + "cell_type": "markdown", + "id": "44af89e8", + "metadata": {}, + "source": [ + "With this machinery, we can now implement the full logic of step 4\n", + "above:\n", + "\n", + " If you have drawn a blue ball from bucket A:\n", + " Draw a ball from bucket B\n", + " if the ball is green:\n", + " record \"yes\"\n", + " otherwise:\n", + " record \"no\".\n", + "\n", + "Here is bucket B. Remember green means “win” (65% of the time) and red\n", + "means “lose” (35% of the time). 
We could call this the “Commanders win\n", + "when it is a nice day” bucket:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "69ef9627", + "metadata": {}, + "outputs": [], + "source": [ + "bucket_B = np.repeat(['green', 'red'], [65, 35])" + ] + }, + { + "cell_type": "markdown", + "id": "db76890a", + "metadata": {}, + "source": [ + "The full logic for step 4 is:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "19a334d8", + "metadata": {}, + "outputs": [], + "source": [ + "# By default, say we have no result.\n", + "result = 'No result'\n", + "a_ball = rnd.choice(bucket_A)\n", + "# If you have drawn a blue ball from bucket A:\n", + "if a_ball == 'blue':\n", + " # Draw a ball at random from bucket B\n", + " b_ball = rnd.choice(bucket_B)\n", + " # if the ball is green:\n", + " if b_ball == 'green':\n", + " # record \"yes\"\n", + " result = 'yes'\n", + " # otherwise:\n", + " else:\n", + " # record \"no\".\n", + " result = 'no'\n", + "# Show what we got in this case.\n", + "result" + ] + }, + { + "cell_type": "markdown", + "id": "54f31d3a", + "metadata": {}, + "source": [ + "Now we have everything we need to run many trials with the same logic." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a123656c", + "metadata": {}, + "outputs": [], + "source": [ + "# The result of each trial.\n", + "# To start with, say we have no result for all the trials.\n", + "z = np.repeat(['No result'], 10000)\n", + "\n", + "# Repeat trial procedure 10000 times\n", + "for i in range(10000):\n", + " # draw one \"ball\" for the weather, store in \"a_ball\"\n", + " # blue is \"nice day\", yellow is \"not nice\"\n", + " a_ball = rnd.choice(bucket_A)\n", + " if a_ball == 'blue': # nice day\n", + " # if no rain, check on game outcome\n", + " # green is \"win\" (give nice day), red is \"lose\" (given nice day).\n", + " b_ball = rnd.choice(bucket_B)\n", + " if b_ball == 'green': # Commanders win\n", + " # Record result.\n", + " z[i] = 'yes'\n", + " else:\n", + " z[i] = 'no'\n", + " # End of trial, go back to the beginning until done.\n", + "\n", + "# Count of the number of times we got \"yes\".\n", + "k = np.sum(z == 'yes')\n", + "# Show the proportion of *both* fine day *and* wins\n", + "kk = k / 10000\n", + "kk" + ] + }, + { + "cell_type": "markdown", + "id": "17453c68", + "metadata": {}, + "source": [ + "The above procedure gives us the probability that it will be a nice day\n", + "and the Commanders will win — about 46%.\n", + "\n", + "[^1]: In this case, the result of the conditional expression is in fact\n", + " either `True` or `False`. Python is more liberal on what it allows\n", + " in the conditional expression; it will take whatever the result is,\n", + " and then force the result into either `True` or `False`, in fact, by\n", + " wrapping the result with the `bool` function, that takes anything as\n", + " input, and returns either `True` or `False`. Therefore, we could\n", + " refer to the result of the conditional expression as something\n", + " “truthy” — that is - something that comes back as `True` or `False`\n", + " from the `bool` function. In the case here, that does not arise,\n", + " because the result is in fact either exactly `True` or exactly\n", + " `False`." 
+ ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/five_spades_four_clubs.ipynb b/python-book/notebooks/five_spades_four_clubs.ipynb new file mode 100644 index 00000000..7c988086 --- /dev/null +++ b/python-book/notebooks/five_spades_four_clubs.ipynb @@ -0,0 +1,105 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "ff30d741", + "metadata": {}, + "source": [ + "# Five spades and four clubs" + ] + }, + { + "cell_type": "markdown", + "id": "c818e95f", + "metadata": {}, + "source": [ + "**This is an example of multiple-outcome sampling without replacement,\n", + "order does not matter**.\n", + "\n", + "The problem is similar to the example in\n", + "sec-four-girls-one-boy, except\n", + "that now there are four equally-likely outcomes instead of only two. A\n", + "Python solution is:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3e007470", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e11e12cc", + "metadata": {}, + "outputs": [], + "source": [ + "# Constitute the deck of 52 cards.\n", + "# Repeat the suit names 13 times each, to make a 52 card deck.\n", + "deck = np.repeat(['spade', 'club', 'diamond', 'heart'],\n", + " [13, 13, 13, 13])\n", + "# Show the deck\n", + "deck" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7347c9fd", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N)\n", + "\n", + "# Repeat the trial N times.\n", + "for i in range(N):\n", + "\n", + " # Shuffle the deck and draw 13 cards.\n", + " hand = rnd.choice(deck, size=13, replace=False)\n", + "\n", + " # Count the number of spades in \"hand\", put the result in \"n_spades\".\n", + " n_spades = np.sum(hand == 'spade')\n", + "\n", + " # If we have five spades, we'll continue on to count the clubs. If we don't\n", + " # have five spades, the number of clubs is irrelevant — we have not gotten\n", + " # the hand we are interested in.\n", + " if n_spades == 5:\n", + " # Count the clubs, put the result in \"n_clubs\"\n", + " n_clubs = np.sum(hand == 'club')\n", + " # Keep track of the number of clubs in each trial\n", + " trial_results[i] = n_clubs\n", + "\n", + " # End one experiment, go back and repeat until all N trials are done.\n", + "\n", + "# Count the number of trials where we got 4 clubs. This is the answer we want -\n", + "# the number of hands out of 1000 with 5 spades and 4 clubs. 
(Recall that we\n", + "# only counted the clubs if the hand already had 5 spades.)\n", + "n_5_and_4 = np.sum(trial_results == 4)\n", + "\n", + "# Convert to a proportion.\n", + "prop_5_and_4 = n_5_and_4 / N\n", + "\n", + "# Print the result\n", + "print(prop_5_and_4)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/five_spades_four_girls.ipynb b/python-book/notebooks/five_spades_four_girls.ipynb new file mode 100644 index 00000000..823a5781 --- /dev/null +++ b/python-book/notebooks/five_spades_four_girls.ipynb @@ -0,0 +1,157 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "0ee1c387", + "metadata": {}, + "source": [ + "# Five spades, four girls" + ] + }, + { + "cell_type": "markdown", + "id": "542ce1d8", + "metadata": {}, + "source": [ + "This is a compound problem: what are the chances of *both* five or more\n", + "spades in one bridge hand, and four girls and a boy in a five-child\n", + "family?\n", + "\n", + "“Compound” does not necessarily mean “complicated”. It means that the\n", + "problem is a compound of two or more simpler problems.\n", + "\n", + "A natural way to handle such a compound problem is in stages, as we saw\n", + "in the archery problem of\n", + "sec-one-black-archery. If a\n", + "“success” is achieved in the first stage, go on to the second stage; if\n", + "not, don’t go on. More specifically in this example:\n", + "\n", + "- **Step 1.** Use a bridge card deck, and five coins with heads =\n", + " “girl”.\n", + "- **Step 2.** Deal a 13-card bridge hand and count the spades. If 5 or\n", + " more spades, record “no” and end the experimental trial. Otherwise,\n", + " continue to step 3.\n", + "- **Step 3.** Throw five coins, and count “heads.” If four heads, record\n", + " “yes,” otherwise record “no.”\n", + "- **Step 4.** Repeat steps 2 and 3 a thousand times.\n", + "- **Step 5.** Compute the proportion of “yes” in step 3. This estimates\n", + " the probability sought.\n", + "\n", + "The Python solution to this compound problem is neither long nor\n", + "difficult. We tackle it almost as if the two parts of the problem were\n", + "to be dealt with separately. We first determine, in a random bridge\n", + "hand, whether 5 spades or more are dealt, as was done in the problem\n", + "sec-five-spades-four-clubs.\n", + "Then, `if` 5 or more spades are found, we use `rnd.choice` to generate a\n", + "random family of 5 children. This means that we need not generate\n", + "families if 5 or more spades were not dealt to the bridge hand, because\n", + "a “success” is only recorded if both conditions are met. After we record\n", + "the number of girls in each sample of 5 children, we need only finish\n", + "the loop (by unindenting the next line and then use `np.sum` to count\n", + "the number of samples that had 4 girls, storing the result in `k`. Since\n", + "we only drew samples of children for those trials in which a bridge hand\n", + "of 5 spades had already been dealt, `k` will have the number of trials\n", + "out of 10000 in which both conditions were met." 
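Before we run the compound simulation, here is a rough cross-check on the final answer (a sketch, not part of the original notebook; the names `n_trials`, `spade_yes` and `girl_yes` are ours). Because the bridge hand and the family are independent, the compound probability should be close to the product of the two probabilities estimated separately:

```python
import numpy as np

rnd = np.random.default_rng()

# Deck with 13 spades and 39 other cards, as in the solution below.
deck = np.repeat(['spade', 'other'], [13, 39])

n_trials = 10_000
spade_yes = np.zeros(n_trials)
girl_yes = np.zeros(n_trials)

for i in range(n_trials):
    # Estimate P(5 or more spades in a 13-card hand) on its own.
    hand = rnd.choice(deck, size=13, replace=False)
    spade_yes[i] = np.sum(hand == 'spade') >= 5
    # Estimate P(exactly 4 girls in a 5-child family) on its own.
    children = rnd.choice(['girl', 'boy'], size=5)
    girl_yes[i] = np.sum(children == 'girl') == 4

# The product of the two separate estimates should be close to the
# proportion from the compound simulation below.
print(np.mean(spade_yes) * np.mean(girl_yes))
```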
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1cee1676", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "230220ea", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N)\n", + "\n", + "# Deck with 13 spades and 39 other cards\n", + "deck = np.repeat(['spade', 'others'], [13, 52 - 13])\n", + "\n", + "for i in range(N):\n", + " # Shuffle deck and draw 13 cards\n", + " hand = rnd.choice(deck, size=13, replace=False)\n", + "\n", + " n_spades = np.sum(hand == 'spade')\n", + "\n", + " if n_spades >= 5:\n", + " # Generate a family, zeros for boys, ones for girls\n", + " children = rnd.choice(['girl', 'boy'], size=5)\n", + " n_girls = np.sum(children == 'girl')\n", + " trial_results[i] = n_girls\n", + "\n", + "k = np.sum(trial_results == 4)\n", + "\n", + "kk = k / N\n", + "\n", + "print(kk)" + ] + }, + { + "cell_type": "markdown", + "id": "32485046", + "metadata": {}, + "source": [ + "Here is an alternative approach to the same problem, but getting the\n", + "result at the end of the loop, by combining Boolean arrays (see\n", + "sec-combine-booleans)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7efb99d7", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_spades = np.zeros(N)\n", + "trial_girls = np.zeros(N)\n", + "\n", + "# Deck with 13 spades and 39 other cards\n", + "deck = np.repeat(['spade', 'other'], [13, 39])\n", + "\n", + "for i in range(N):\n", + " # Shuffle deck and draw 13 cards\n", + " hand = rnd.choice(deck, 13, replace=False)\n", + "\n", + " n_spades = np.sum(hand == 'spade')\n", + " trial_spades[i] = n_spades\n", + "\n", + " # Generate a family, zeros for boys, ones for girls\n", + " children = rnd.choice(['girl', 'boy'], size=5)\n", + " n_girls = np.sum(children == 'girl')\n", + " trial_girls[i] = n_girls\n", + "\n", + "k = np.sum((trial_spades >= 5) & (trial_girls == 4))\n", + "\n", + "kk = k / N\n", + "\n", + "print(kk)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/four_girls_one_boy.ipynb b/python-book/notebooks/four_girls_one_boy.ipynb new file mode 100644 index 00000000..5766fbe4 --- /dev/null +++ b/python-book/notebooks/four_girls_one_boy.ipynb @@ -0,0 +1,124 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "f3ba8df1", + "metadata": {}, + "source": [ + "# Four girls and one boy" + ] + }, + { + "cell_type": "markdown", + "id": "a0a286fa", + "metadata": {}, + "source": [ + "What is the probability of selecting four girls and one boy when\n", + "selecting five students from any group of twenty-five girls and\n", + "twenty-five boys?" 
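For reference (this aside is not in the original notebook), the exact answer comes from the hypergeometric distribution, and SciPy can compute it for comparison with the simulation estimate below:

```python
from scipy.stats import hypergeom

# Probability of exactly 4 girls when drawing 5 students without replacement
# from a class of 50 that contains 25 girls.
print(hypergeom.pmf(4, 50, 25, 5))
```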
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "84ecf36d", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aa4c63ca", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N)\n", + "\n", + "# Constitute the set of 25 girls and 25 boys.\n", + "whole_class = np.repeat(['girl', 'boy'], [25, 25])\n", + "\n", + "# Repeat the following steps N times.\n", + "for i in range(N):\n", + "\n", + " # Shuffle the numbers\n", + " shuffled = rnd.permuted(whole_class)\n", + "\n", + " # Take the first 5 numbers, call them c.\n", + " c = shuffled[:5]\n", + "\n", + " # Count how many girls there are, put the result in d.\n", + " d = np.sum(c == 'girl')\n", + "\n", + " # Keep track of each trial result in z.\n", + " trial_results[i] = d\n", + "\n", + " # End the experiment, go back and repeat until all 1000 trials are\n", + " # complete.\n", + "\n", + "# Count the number of times we got four girls, put the result in k.\n", + "k = np.sum(trial_results == 4)\n", + "\n", + "# Convert to a proportion.\n", + "kk = k / N\n", + "\n", + "# Print the result.\n", + "print(kk)" + ] + }, + { + "cell_type": "markdown", + "id": "692d05d2", + "metadata": {}, + "source": [ + "We can also find the probabilities of other outcomes from a histogram of\n", + "trial results obtained with the following command:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7865ca73", + "metadata": {}, + "outputs": [], + "source": [ + "# Import the plotting package.\n", + "import matplotlib.pyplot as plt\n", + "\n", + "# Do histogram, with one bin for each possible number.\n", + "plt.hist(trial_results, bins=range(7), align='left', rwidth=0.75)\n", + "plt.title('# of girls');" + ] + }, + { + "cell_type": "markdown", + "id": "0e8a7d39", + "metadata": {}, + "source": [ + "In the resulting histogram we can see that in 15 percent of the trials,\n", + "4 of the 5 selected were girls.\n", + "\n", + "It should be noted that for this problem — as for most other problems —\n", + "there are several other resampling procedures that will also do the job\n", + "correctly.\n", + "\n", + "In analytic probability theory this problem is worked with a formula for\n", + "“combinations.”" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/four_girls_then_one_boy_25.ipynb b/python-book/notebooks/four_girls_then_one_boy_25.ipynb new file mode 100644 index 00000000..14b26b0b --- /dev/null +++ b/python-book/notebooks/four_girls_then_one_boy_25.ipynb @@ -0,0 +1,296 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "86144475", + "metadata": {}, + "source": [ + "# Four girls then one boy from 25/25" + ] + }, + { + "cell_type": "markdown", + "id": "03b4cd97", + "metadata": {}, + "source": [ + "**In this problem, order matters; we are sampling without replacement,\n", + "with two outcomes, several of each item.**\n", + "\n", + "What is the probability of getting an ordered series of *four girls and\n", + "then one boy* , from a universe of 25 girls and 25 boys? This\n", + "illustrates Case 3 above. 
Clearly we can use the same sampling mechanism\n", + "as in the example\n", + "sec-four-girls-one-boy, but now\n", + "we record “yes” for a smaller number of composite events.\n", + "\n", + "We record “no” even if a single one boy is chosen but he is chosen 1st,\n", + "2nd, 3rd, or 4th, whereas in\n", + "sec-four-girls-one-boy, such\n", + "outcomes are recorded as “yes”-es.\n", + "\n", + "- **Step 1.** Generate a class (array) of length 50, consisting of 25\n", + " strings valued “boy” and 25 strings valued “girl”.\n", + "- **Step 2.** Shuffle the class array, and select the first five\n", + " elements.\n", + "- **Step 3.** If the first five elements are exactly\n", + " `'girl', 'girl', 'girl', 'girl', 'boy'`, write “yes,” otherwise\n", + " “no.”\n", + "- **Step 4.** Repeat steps 2 and 3, say, 10,000 times, and count the\n", + " proportion of “yes” results, which estimates the probability sought.\n", + "\n", + "Let us start the single trial procedure like so:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "20287b27", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9818a4c6", + "metadata": {}, + "outputs": [], + "source": [ + "# Constitute the set of 25 girls and 25 boys.\n", + "whole_class = np.repeat(['girl', 'boy'], [25, 25])\n", + "\n", + "# Shuffle the class into a random order.\n", + "shuffled = rnd.permuted(whole_class)\n", + "# Take the first 5 class members, call them c.\n", + "c = shuffled[:5]\n", + "# Show the result.\n", + "c" + ] + }, + { + "cell_type": "markdown", + "id": "d415272d", + "metadata": {}, + "source": [ + "Our next step (step 3) is to check whether `c` is exactly equal to the\n", + "result of interest. The result of interest is:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "34a20193", + "metadata": {}, + "outputs": [], + "source": [ + "# The result we are looking for - four girls and then a boy.\n", + "result_of_interest = np.repeat(['girl', 'boy'], [4, 1])\n", + "result_of_interest" + ] + }, + { + "cell_type": "markdown", + "id": "19d60d2b", + "metadata": {}, + "source": [ + "We can then use an array *comparison* with `==` to do an element by\n", + "element (*elementwise*) check, asking whether the corresponding elements\n", + "are equal:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0123969d", + "metadata": {}, + "outputs": [], + "source": [ + "# A Boolean array, with True where corresponding elements are equal, False\n", + "# otherwise.\n", + "are_equal = c == result_of_interest\n", + "are_equal" + ] + }, + { + "cell_type": "markdown", + "id": "3d819ce0", + "metadata": {}, + "source": [ + "We are nearly finished with step 3 — it only remains to check whether\n", + "*all* of the elements were equal, by checking whether *all* of the\n", + "values in `are_equal` are `True`.\n", + "\n", + "We know that there are 5 elements, so we could check whether there are 5\n", + "`True` values with `np.sum`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e3b1b049", + "metadata": {}, + "outputs": [], + "source": [ + "# Are there exactly 5 True values in `are_equal`?\n", + "np.sum(are_equal) == 5" + ] + }, + { + "cell_type": "markdown", + "id": "946c20e5", + "metadata": {}, + "source": [ + "Another way to ask the same question is by using the `np.all` function\n", + "on `are_equal`. 
This returns `True` if *all* the elements in `are_equal`\n", + "are `True`, and `False` otherwise.\n", + "\n", + "
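Yet another option (a small aside, not in the original notebook) is NumPy's `np.array_equal`, which returns `True` only when two arrays have the same shape and all corresponding elements are equal. Using the `c` and `result_of_interest` arrays from above:

```python
# True only if c and result_of_interest match element by element.
np.array_equal(c, result_of_interest)
```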
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Testing whether all elements of an array are the same\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "The `np.all`, applied to a Boolean array (as here), checks whether *all*\n", + "of the elements in the Boolean array are `True`. If so, it returns\n", + "`True`, otherwise, it returns `False`.\n", + "\n", + "For example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "21d9d070", + "metadata": {}, + "outputs": [], + "source": [ + "# All elements are True, `np.all` returns True\n", + "np.all([True, True, True, True])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "28940447", + "metadata": {}, + "outputs": [], + "source": [ + "# At least one element is False, `np.all` returns False\n", + "np.all([True, True, False, True])" + ] + }, + { + "cell_type": "markdown", + "id": "10457d57", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n", + "\n", + "Here is the full procedure for steps 2 and 3 (a single trial):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "598f3a90", + "metadata": {}, + "outputs": [], + "source": [ + "# Shuffle the class into a random order.\n", + "shuffled = rnd.permuted(whole_class)\n", + "# Take the first 5 class members, call them c.\n", + "c = shuffled[:5]\n", + "# For each element, test whether the result is the result of interest.\n", + "are_equal = c == result_of_interest\n", + "# Check whether we have the result we are looking for.\n", + "is_four_girls_then_one_boy = np.all(are_equal)" + ] + }, + { + "cell_type": "markdown", + "id": "acbc023d", + "metadata": {}, + "source": [ + "All that remains is to put the single trial procedure into a loop." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2e282423", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N)\n", + "\n", + "# Repeat the following steps 1000 times.\n", + "for i in range(N):\n", + "\n", + " # Shuffle the class into a random order.\n", + " shuffled = rnd.permuted(whole_class)\n", + " # Take the first 5 class members, call them c.\n", + " c = shuffled[:5]\n", + " # For each element, test whether the result is the result of interest.\n", + " are_equal = c == result_of_interest\n", + " # Check whether we have the result we are looking for.\n", + " is_four_girls_then_one_boy = np.all(are_equal)\n", + "\n", + " # Store the result of this trial.\n", + " trial_results[i] = is_four_girls_then_one_boy\n", + "\n", + " # End the experiment, go back and repeat until all N trials are\n", + " # complete.\n", + "\n", + "# Count the number of times we got four girls then a boy\n", + "k = np.sum(trial_results)\n", + "\n", + "# Convert to a proportion.\n", + "kk = k / N\n", + "\n", + "# Print the result.\n", + "print(kk)" + ] + }, + { + "cell_type": "markdown", + "id": "0368087a", + "metadata": {}, + "source": [ + "This type of problem is conventionally done with a *permutation*\n", + "formula." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/framingham_hearts.ipynb b/python-book/notebooks/framingham_hearts.ipynb new file mode 100644 index 00000000..9f6121b4 --- /dev/null +++ b/python-book/notebooks/framingham_hearts.ipynb @@ -0,0 +1,87 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "088cd7b4", + "metadata": {}, + "source": [ + "# Framingham heart data" + ] + }, + { + "cell_type": "markdown", + "id": "fc9ec4b6", + "metadata": {}, + "source": [ + "We use simulation to investigate the relationship between serum\n", + "cholesterol and heart attacks in the Framingham data." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8c12f8c1", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "n = 10_0005\n", + "men = np.repeat(['infarction', 'no infarction'], [31, 574])\n", + "\n", + "n_high = 135 # Number of men with high cholesterol\n", + "n_low = 470 # Number of men with low cholesterol\n", + "\n", + "infarct_differences = np.zeros(n)\n", + "\n", + "for i in range(n):\n", + " highs = rnd.choice(men, size=n_high)\n", + " lows = rnd.choice(men, size=n_low)\n", + " high_infarcts = np.sum(highs == 'infarction')\n", + " low_infarcts = np.sum(lows == 'infarction')\n", + " high_prop = high_infarcts / n_high\n", + " low_prop = low_infarcts / n_low\n", + " infarct_differences[i] = high_prop - low_prop\n", + "\n", + "plt.hist(infarct_differences, bins=np.arange(-0.1, 0.1, 0.005))\n", + "plt.title('Infarct proportion differences in null universe')\n", + "\n", + "# How often was the resampled difference >= the observed difference?\n", + "k = np.sum(infarct_differences >= 0.029)\n", + "# Convert this result to a proportion\n", + "kk = k / n\n", + "\n", + "print('Proportion of trials with difference >= observed:',\n", + " np.round(kk, 2))" + ] + }, + { + "cell_type": "markdown", + "id": "0d80441f", + "metadata": {}, + "source": [ + "The results of the test using this program may be seen in the histogram.\n", + "We find — perhaps surprisingly — that a difference as large as observed\n", + "would occur by chance around 10 percent of the time. (If we were not\n", + "guided by the theoretical expectation that high serum cholesterol\n", + "produces heart disease, we might include the 10 percent difference going\n", + "in the other direction, giving a 20 percent chance). Even a ten percent\n", + "chance is sufficient to call into question the conclusion that high\n", + "serum cholesterol is dangerous. At a minimum, this statistical result\n", + "should call for more research before taking any strong action clinically\n", + "or otherwise." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/fruit_fly.ipynb b/python-book/notebooks/fruit_fly.ipynb new file mode 100644 index 00000000..2884f542 --- /dev/null +++ b/python-book/notebooks/fruit_fly.ipynb @@ -0,0 +1,115 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "8aaf5eda", + "metadata": {}, + "source": [ + "# Fruit fly simulation" + ] + }, + { + "cell_type": "markdown", + "id": "82e9fbb0", + "metadata": {}, + "source": [ + "This notebook uses simulation to test the null hypothesis that it is\n", + "equally likely that new fruit files are male or female." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "42b1ede8", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "\n", + "# set up the random number generator\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "187c84f7", + "metadata": {}, + "outputs": [], + "source": [ + "# Set the number of trials\n", + "n_trials = 10000\n", + "\n", + "# set the sample size for each trial\n", + "sample_size = 20\n", + "\n", + "# An empty array to store the trials\n", + "scores = np.zeros(n_trials)\n", + "\n", + "# Do 1000 trials\n", + "for i in range(n_trials):\n", + "\n", + " # Generate 20 simulated fruit flies, where each has an equal chance of being\n", + " # male or female\n", + " a = rnd.choice(['male', 'female'], size = sample_size, p = [0.5, 0.5], replace = True)\n", + "\n", + " # count the number of males in the sample\n", + " b = np.sum(a == 'male')\n", + "\n", + " # store the result of this trial\n", + " scores[i] = b\n", + "\n", + "# Produce a histogram of the trial results\n", + "plt.title(f\"Number of males in {n_trials} samples of \\n{sample_size} simulated fruit flies\")\n", + "plt.hist(scores)\n", + "plt.xlabel('Number of Males')\n", + "plt.ylabel('Frequency')\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "id": "277e1289", + "metadata": {}, + "source": [ + "In the histogram above, we see that in 16 percent of the trials, the\n", + "number of males was 14 or more, or 6 or fewer. Or instead of reading the\n", + "results from the histogram, we can calculate the result by tacking on\n", + "the following commands to the above program:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e588bc0c", + "metadata": {}, + "outputs": [], + "source": [ + "# Determine the number of trials in which we had 14 or more males.\n", + "j = np.sum(scores >= 14)\n", + "\n", + "# Determine the number of trials in which we had 6 or fewer males.\n", + "k = np.sum(scores <= 6)\n", + "\n", + "# Add the two results together.\n", + "m = j + k\n", + "\n", + "# Convert to a proportion.\n", + "mm = m / n_trials\n", + "\n", + "# Print the results.\n", + "print(mm)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/gold_silver_booleans.ipynb b/python-book/notebooks/gold_silver_booleans.ipynb new file mode 100644 index 00000000..bf8af3b1 --- /dev/null +++ b/python-book/notebooks/gold_silver_booleans.ipynb @@ -0,0 +1,568 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "1f8ad3f0", + "metadata": {}, + "source": [ + "# Another approach to ships with gold and silver" + ] + }, + { + "cell_type": "markdown", + "id": "459c4596", + "metadata": {}, + "source": [ + "This notebook is a variation on the problem with gold and silver chests\n", + "in ships. It shows how we can count and tally the results at the end,\n", + "rather than in the trial itself.\n", + "\n", + "Notice that the first part of the code is identical to the first\n", + "approach to this problem. There are two key differences — see the\n", + "comments for an explanation." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d9b95120", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "30134e36", + "metadata": {}, + "outputs": [], + "source": [ + "# The 3 buckets, each representing two chests on a ship.\n", + "# As before.\n", + "bucket1 = ['Gold', 'Gold'] # Chests in first ship.\n", + "bucket2 = ['Gold', 'Silver'] # Chests in second ship.\n", + "bucket3 = ['Silver', 'Silver'] # Chests in third ship." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1ad2c649", + "metadata": {}, + "outputs": [], + "source": [ + "# Here is where the difference starts. We are now going to fill in\n", + "# the result for the first chest _and_ the result for the second chest.\n", + "#\n", + "# Later we will fill in all these values, so the string we put here\n", + "# does not matter.\n", + "\n", + "# Whether the first chest was Gold or Silver.\n", + "first_chests = np.repeat(['To be announced'], 10000)\n", + "# Whether the second chest was Gold or Silver.\n", + "second_chests = np.repeat(['To be announced'], 10000)\n", + "\n", + "for i in range(10000):\n", + " # Select a ship at random from the three ships.\n", + " # As before.\n", + " ship_no = rnd.choice([1, 2, 3])\n", + " # Get the chests from this ship.\n", + " # As before.\n", + " if ship_no == 1:\n", + " bucket = bucket1\n", + " if ship_no == 2:\n", + " bucket = bucket2\n", + " if ship_no == 3:\n", + " bucket = bucket3\n", + "\n", + " # As before.\n", + " shuffled = rnd.permuted(bucket)\n", + "\n", + " # Here is the big difference - we store the result for the first and second\n", + " # chests.\n", + " first_chests[i] = shuffled[0]\n", + " second_chests[i] = shuffled[1]\n", + "\n", + "# End loop, go back to beginning.\n", + "\n", + "# We will do the calculation we need in the next cell. For now\n", + "# just display the first 10 values.\n", + "ten_first_chests = first_chests[:10]\n", + "print('The first 10 values of \"first_chests:', ten_first_chests)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2013ecb7", + "metadata": {}, + "outputs": [], + "source": [ + "ten_second_chests = second_chests[:10]\n", + "print('The first 10 values of \"second_chests', ten_second_chests)" + ] + }, + { + "cell_type": "markdown", + "id": "e19d0b8c", + "metadata": {}, + "source": [ + "In this variant, we recorded the type of first chest for each trial\n", + "(“Gold” or “Silver”), and the type of second chest of the second chest\n", + "(“Gold” or “Silver”).\n", + "\n", + "**We would like to count the number of times there was “Gold” in the\n", + "first chest *and* “Gold” in the second.**\n", + "\n", + "## 10.5 Combining Boolean arrays\n", + "\n", + "We can do the count we need by *combining* the Boolean arrays with the\n", + "`&` operator. `&` combines Boolean arrays with a *logical and*. 
*Logical\n",
+    "and* is a rule for combining two Boolean values, where the rule is: the\n",
+    "result is `True` if the first value is `True` *and* the second value is\n",
+    "`True`.\n",
+    "\n",
+    "Here we use the `&` *operator* to combine some Boolean values on the\n",
+    "left and right of the operator:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "67744de8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "True & True  # Both are True, so result is True"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "5466c7c4",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "True & False  # At least one of the values is False, so result is False"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "0a8abe48",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "False & True  # At least one of the values is False, so result is False"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "4315de29",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "False & False  # At least one (in fact both) are False, result is False."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6ca42455",
+   "metadata": {},
+   "source": [
+    "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "`&` and `and` in Python\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "In fact Python has another operation to apply this *logical and*\n", + "operation to values — the `and` operator:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ac99b882", + "metadata": {}, + "outputs": [], + "source": [ + "print(True and True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b4de903e", + "metadata": {}, + "outputs": [], + "source": [ + "print(True and False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6860f6d5", + "metadata": {}, + "outputs": [], + "source": [ + "print(False and True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d7f167ae", + "metadata": {}, + "outputs": [], + "source": [ + "print(False and False)" + ] + }, + { + "cell_type": "markdown", + "id": "5c04033d", + "metadata": {}, + "source": [ + "You will see this `and` operator often in Python code, but it does not\n", + "work well when combining Numpy *arrays*, so we will use the similar `&`\n", + "operator, that does work on arrays.\n", + "\n", + "
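To see what "does not work well" means here (a sketch that is not in the original notebook), trying `and` on two NumPy arrays raises an error, whereas `&` does the elementwise combination we want:

```python
import numpy as np

a = np.array([True, True, False])
b = np.array([True, False, False])

# `a and b` raises:
#   ValueError: The truth value of an array with more than one element is
#   ambiguous. Use a.any() or a.all()
# because Python would have to squeeze each whole array down to a single
# True or False before applying `and`.

# `&` instead combines the two arrays element by element.
print(a & b)   # [ True False False]
```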
\n", + "\n", + "
\n", + "\n", + "Above you saw that the `==` operator (as in `== 'Gold'`), when applied\n", + "to arrays, asks the question of every element in the array.\n", + "\n", + "First make the Boolean arrays." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "35767a47", + "metadata": {}, + "outputs": [], + "source": [ + "ten_first_gold = (ten_first_chests == 'Gold')\n", + "print(\"Ten first == 'Gold'\", ten_first_gold)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e87dbe3e", + "metadata": {}, + "outputs": [], + "source": [ + "ten_second_gold = (ten_second_chests == 'Gold')\n", + "print(\"Ten second == 'Gold'\", ten_second_gold)" + ] + }, + { + "cell_type": "markdown", + "id": "d311d194", + "metadata": {}, + "source": [ + "Now let us use `&` to combine Boolean arrays:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ad5b8c28", + "metadata": {}, + "outputs": [], + "source": [ + "ten_both = (ten_first_gold & ten_second_gold)\n", + "ten_both" + ] + }, + { + "cell_type": "markdown", + "id": "68b773f9", + "metadata": {}, + "source": [ + "Notice that Python does the comparison *elementwise* — element by\n", + "element.\n", + "\n", + "You saw that when we did `second_chests == 'Gold'` this had the effect\n", + "of asking the `== 'Gold'` question of *each element*, so there will be\n", + "one answer per element in `second_chests`. In that case there was an\n", + "array to the *left* of `==` and a single value to the *right*. We were\n", + "comparing an array to a value.\n", + "\n", + "Here we are asking the `&` question of `ten_first_gold` and\n", + "`ten_second_gold`. Here there is an array to the *left* and an array to\n", + "the *right*. We are asking the `&` question 10 times, but the first\n", + "question we are asking is:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "70ada9f2", + "metadata": {}, + "outputs": [], + "source": [ + "# First question, giving first element of result.\n", + "(ten_first_gold[0] & ten_second_gold[0])" + ] + }, + { + "cell_type": "markdown", + "id": "301fb84a", + "metadata": {}, + "source": [ + "The second question is:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3bf8bcb1", + "metadata": {}, + "outputs": [], + "source": [ + "# Second question, giving second element of result.\n", + "(ten_first_gold[1] & ten_second_gold[1])" + ] + }, + { + "cell_type": "markdown", + "id": "570d5c90", + "metadata": {}, + "source": [ + "and so on. We have ten elements on *each side*, and 10 answers, giving\n", + "an array (`ten_both`) of 10 elements. Each element in `ten_both` is the\n", + "answer to the `&` question for the elements at the corresponding\n", + "positions in `ten_first_gold` and `ten_second_gold`.\n", + "\n", + "We could also create the Boolean arrays and do the `&` operation all in\n", + "one step, like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c4733ca3", + "metadata": { + "lines_to_next_cell": 0 + }, + "outputs": [], + "source": [ + "ten_both = (ten_first_chests == 'Gold') & (ten_second_chests == 'Gold')\n", + "ten_both" + ] + }, + { + "cell_type": "markdown", + "id": "8b1f1879", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "970c660b", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Parentheses, arrays and comparisons\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n",
+    "\n",
+    "Again you will notice the round brackets (parentheses) around\n",
+    "`(ten_first_chests == 'Gold')` and `(ten_second_chests == 'Gold')`.\n",
+    "Above, you saw us recommend you always use parentheses around Boolean\n",
+    "expressions like this. The parentheses make the code easier to read —\n",
+    "but be careful — in this case, we actually *need* the parentheses to\n",
+    "make Python do what we want; see the footnote for more detail.[^1]\n",
+    "\n",
+    "
\n", + "\n", + "
\n", + "\n", + "Remember, we wanted the answer to the question: how many trials had\n", + "“Gold” in the first chest *and* “Gold” in the second. We can answer that\n", + "question for the first 10 trials with `np.sum`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4f9ab7e2", + "metadata": {}, + "outputs": [], + "source": [ + "n_ten_both = np.sum(ten_both)\n", + "n_ten_both" + ] + }, + { + "cell_type": "markdown", + "id": "63c1f0e3", + "metadata": {}, + "source": [ + "We can answer the same question for *all* the trials, in the same way:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "32cd5bb4", + "metadata": {}, + "outputs": [], + "source": [ + "first_gold = (first_chests == 'Gold')\n", + "second_gold = (second_chests == 'Gold')\n", + "n_both_gold = np.sum(first_gold & second_gold)\n", + "n_both_gold" + ] + }, + { + "cell_type": "markdown", + "id": "a2b144b1", + "metadata": {}, + "source": [ + "We could also do the same calculation all in one line:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ade1bd0a", + "metadata": {}, + "outputs": [], + "source": [ + "# Notice the parentheses - we need these - see above.\n", + "n_both_gold = np.sum((first_chests == 'Gold') & (second_chests == 'Gold'))\n", + "n_both_gold" + ] + }, + { + "cell_type": "markdown", + "id": "b9edf317", + "metadata": {}, + "source": [ + "We can then count all the ships where the first chest was gold:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "00940166", + "metadata": {}, + "outputs": [], + "source": [ + "n_first_gold = np.sum(first_chests == 'Gold')\n", + "n_first_gold" + ] + }, + { + "cell_type": "markdown", + "id": "05223fbb", + "metadata": {}, + "source": [ + "The final calculation is the proportion of second chests that are gold,\n", + "given the first chest was also gold:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "621b937c", + "metadata": {}, + "outputs": [], + "source": [ + "p_g_given_g = n_both_gold / n_first_gold\n", + "p_g_given_g" + ] + }, + { + "cell_type": "markdown", + "id": "0f425d98", + "metadata": {}, + "source": [ + "Of course we won’t get exactly the same results from the two\n", + "simulations, in the same way that we won’t get exactly the same results\n", + "from any two runs of the same simulation, because of the random values\n", + "we are using. But the logic for the two simulations are the same, and we\n", + "are doing many trials (10,000), so the results will be very similar.\n", + "\n", + "[^1]: We warned that we need parentheses around our `&` expressions to\n", + " get the result we want. We would add the parentheses in any case, as\n", + " good practice, but here we also *need* the parentheses in\n", + " `(ten_first_chests == 'Gold') & (ten_second_chests == 'Gold')`.\n", + " Remember *operator precedence*; for example, the multiply operator\n", + " `*` has *higher precedence* than the operator `+`, so `3 + 5 * 2` is\n", + " equal to `3 + (5 * 2)` = 13. If we want to do addition before\n", + " multiplication, we use parentheses to tell Python the order it\n", + " should use: `(3 + 5) * 2` = 16.\n", + "\n", + " The same applies for the two operators `==` and `&` here. In fact\n", + " `&` has a higher precedence than `==`. 
This means that, if we write\n", + " the expression without parentheses —\n", + " `ten_first_chests == 'Gold' & ten_second_chests == 'Gold'` — because\n", + " of operator precedence, Python takes this to mean\n", + " `ten_first_chests == ('Gold' & ten_second_chests) == 'Gold'`. Python\n", + " does not know what to do with `'Gold' & ten_second_chests` and\n", + " generates an error of form\n", + " `'bitwise_and' not supported for the input types`. The error tells\n", + " you that Python does not know how to apply `&` (`'bitwise_and'`) to\n", + " the string `'Gold`’ and the array `ten_second_chests`.\n", + "\n", + " This is the same error you would get for running the code\n", + " `'Gold' & ten_second_chests` on its own.\n", + "\n", + " The point to take away is, that when you are using `&` to combine\n", + " Boolean arrays in Python, remember operator precedence, and, when in\n", + " doubt, put parentheses around the expressions on either side of `&`,\n", + " as here." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/gold_silver_ships.ipynb b/python-book/notebooks/gold_silver_ships.ipynb new file mode 100644 index 00000000..b791b049 --- /dev/null +++ b/python-book/notebooks/gold_silver_ships.ipynb @@ -0,0 +1,103 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "34885138", + "metadata": {}, + "source": [ + "# Ships with gold and silver" + ] + }, + { + "cell_type": "markdown", + "id": "4f050d6b", + "metadata": {}, + "source": [ + "In which we solve the problem of gold and silver chests in a discovered\n", + "ship." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b9d9185d", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "14409395", + "metadata": {}, + "outputs": [], + "source": [ + "# The 3 buckets. Each bucket represents a ship. Each has two chests.\n", + "bucket1 = ['Gold', 'Gold'] # Chests in first ship.\n", + "bucket2 = ['Gold', 'Silver'] # Chests in second ship.\n", + "bucket3 = ['Silver', 'Silver'] # Chests in third ship." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ac5c6b55", + "metadata": {}, + "outputs": [], + "source": [ + "# For each trial, we will have one of three states:\n", + "#\n", + "# 1. When opening the first chest, it did not contain gold.\n", + "# We will reject these trials, since they do not match our\n", + "# experiment description.\n", + "# 2. Gold was found in the first and the second chest.\n", + "# 3. 
Gold was found in the first, but silver in the second chest.\n", + "#\n", + "# We need a placeholder value for all trials, and will make that\n", + "# \"No gold in chest 1, chest 2 never opened\".\n", + "second_chests = np.repeat(['No gold in chest 1, chest 2 never opened'], 10000)\n", + "\n", + "for i in range(10000):\n", + " # Select a ship at random from the three ships.\n", + " ship_no = rnd.choice([1, 2, 3])\n", + " # Get the chests from this ship (represented by a bucket).\n", + " if ship_no == 1:\n", + " bucket = bucket1\n", + " if ship_no == 2:\n", + " bucket = bucket2\n", + " if ship_no == 3:\n", + " bucket = bucket3\n", + "\n", + " # We shuffle the order of the chests in this ship, to simulate\n", + " # the fact that we don't know which of the two chests we have\n", + " # found first, forward or aft.\n", + " shuffled = rnd.permuted(bucket)\n", + "\n", + " if shuffled[0] == 'Gold': # We found a gold chest first.\n", + " # Store whether the Second chest was silver or gold.\n", + " second_chests[i] = shuffled[1]\n", + "\n", + " # End loop, go back to beginning.\n", + "\n", + "# Number of times we found gold in the second chest.\n", + "n_golds = np.sum(second_chests == 'Gold')\n", + "# Number of times we found silver in the second chest.\n", + "n_silvers = np.sum(second_chests == 'Silver')\n", + "# As a ratio of golds to all second chests (where the first was gold).\n", + "print(n_golds / (n_golds + n_silvers))" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/liquor_prices.ipynb b/python-book/notebooks/liquor_prices.ipynb new file mode 100644 index 00000000..d24d07c9 --- /dev/null +++ b/python-book/notebooks/liquor_prices.ipynb @@ -0,0 +1,96 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "79e2aa8e", + "metadata": {}, + "source": [ + "# Public and private liquor prices" + ] + }, + { + "cell_type": "markdown", + "id": "602fcedd", + "metadata": {}, + "source": [ + "This notebook asks the question whether the difference in the means of\n", + "private and government-specified prices of a particular whiskey could\n", + "plausibly have come about as a result of random sampling." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a03acc97", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()\n", + "\n", + "# Import the plotting library\n", + "import matplotlib.pyplot as plt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "be631edd", + "metadata": {}, + "outputs": [], + "source": [ + "fake_diffs = np.zeros(10000)\n", + "\n", + "priv = np.array([\n", + " 4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54, 4.75,\n", + " 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75, 5.20, 5.10,\n", + " 4.80, 4.29])\n", + "\n", + "govt = np.array([\n", + " 4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00, 4.19, 4.75, 4.74, 4.50,\n", + " 4.10, 4.00, 5.05, 4.20])\n", + "\n", + "actual_diff = np.mean(priv) - np.mean(govt)\n", + "\n", + "# Join the two vectors of data\n", + "both = np.concatenate((priv, govt))\n", + "\n", + "# Repeat 10000 simulation trials\n", + "for i in range(10000):\n", + "\n", + " # Sample 26 with replacement for private group\n", + " fake_priv = np.random.choice(both, size=26)\n", + "\n", + " # Sample 16 with replacement for govt. 
group\n", + " fake_govt = np.random.choice(both, size=16)\n", + "\n", + " # Find the mean of the \"private\" group.\n", + " p = np.mean(fake_priv)\n", + "\n", + " # Mean of the \"govt.\" group\n", + " g = np.mean(fake_govt)\n", + "\n", + " # Difference in the means\n", + " diff = p - g\n", + "\n", + " # Keep score of the trials\n", + " fake_diffs[i] = diff\n", + "\n", + "# Graph of simulation results to compare with the observed result.\n", + "plt.hist(fake_diffs)\n", + "plt.xlabel('Difference in average prices (cents)')\n", + "plt.title('Average price difference (Actual difference = '\n", + "f'{actual_diff * 100:.0f} cents)');" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/monty_hall.ipynb b/python-book/notebooks/monty_hall.ipynb new file mode 100644 index 00000000..ac8ba538 --- /dev/null +++ b/python-book/notebooks/monty_hall.ipynb @@ -0,0 +1,745 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "19408d50", + "metadata": {}, + "source": [ + "# The Monty Hall problem" + ] + }, + { + "cell_type": "markdown", + "id": "3c4355a7", + "metadata": {}, + "source": [ + "Here we do a Python simulation of the Monty Hall problem." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0fa7df77", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "markdown", + "id": "5075e008", + "metadata": {}, + "source": [ + "The Monty Hall problem has a slightly complicated structure, so we will\n", + "start by looking at the procedure for one trial. When we have that\n", + "clear, we will put that procedure into a `for` loop for the simulation.\n", + "\n", + "Let’s start with some variables. Let’s call the door I choose `my_door`.\n", + "\n", + "We choose that door at random from a sequence of all possible doors.\n", + "Call the doors 1, 2 and 3 from left to right." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f89462e3", + "metadata": {}, + "outputs": [], + "source": [ + "# List of doors to chose from.\n", + "doors = [1, 2, 3]\n", + "\n", + "# We choose one door at random.\n", + "my_door = rnd.choice(doors)\n", + "\n", + "# Show the result\n", + "my_door" + ] + }, + { + "cell_type": "markdown", + "id": "766e1bd5", + "metadata": {}, + "source": [ + "We choose one of the doors to be the door with the car behind it:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6d8c194e", + "metadata": {}, + "outputs": [], + "source": [ + "# One door at random has the car behind it.\n", + "car_door = rnd.choice(doors)\n", + "\n", + "# Show the result\n", + "car_door" + ] + }, + { + "cell_type": "markdown", + "id": "e76bfe9c", + "metadata": {}, + "source": [ + "Now we need to decide which door Monty will open.\n", + "\n", + "By our set up, Monty cannot open our door (`my_door`). By the set up, he\n", + "has not opened (and cannot open) the door with the car behind it\n", + "(`car_door`).\n", + "\n", + "`my_door` and `car_door` might be the same.\n", + "\n", + "So, to get Monty’s choices, we want to take all doors (`doors`) and\n", + "remove `my_door` and `car_door`. That leaves the door or doors Monty can\n", + "open.\n", + "\n", + "Here are the doors Monty cannot open. 
Remember, a third of the time\n", + "`my_door` and `car_door` will be the same, so we will include the same\n", + "door twice, as doors Monty can’t open." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4a0b8e19", + "metadata": {}, + "outputs": [], + "source": [ + "cant_open = [my_door, car_door]\n", + "cant_open" + ] + }, + { + "cell_type": "markdown", + "id": "0d647ccf", + "metadata": {}, + "source": [ + "We want to find the remaining doors from `doors` after removing the\n", + "doors named in `cant_open`.\n", + "\n", + "NumPy has a good function for this, called `np.setdiff1d`. It calculates\n", + "the *set difference* between two sequences, such as arrays.\n", + "\n", + "The set difference between two sequences is the members that *are* in\n", + "the first sequence, but are *not* in the second sequence. Here are a few\n", + "examples of this set difference function in NumPy.\n", + "\n", + "Notice that we are using *lists* as the input (first and second)\n", + "sequences here. We can use lists or arrays or any other type of sequence\n", + "in Python. (See sec-lists for an introduction\n", + "to lists).\n", + "\n", + "Numpy functions like `np.setdiff1d` always *return* an array." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9d08265a", + "metadata": {}, + "outputs": [], + "source": [ + "# Members in [1, 2, 3] that are *not* in [1]\n", + "# 1, 2, 3, removing 1, if present.\n", + "np.setdiff1d([1, 2, 3], [1])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7ce03f1e", + "metadata": {}, + "outputs": [], + "source": [ + "# Members in [1, 2, 3] that are *not* in [2, 3]\n", + "# 1, 2, 3, removing 2 and 3, if present.\n", + "np.setdiff1d([1, 2, 3], [2, 3])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c98c67c8", + "metadata": {}, + "outputs": [], + "source": [ + "# Members in [1, 2, 3] that are *not* in [2, 2]\n", + "# 1, 2, 3, removing 2 and 2 again, if present.\n", + "np.setdiff1d([1, 2, 3], [2, 2])" + ] + }, + { + "cell_type": "markdown", + "id": "e430dbc5", + "metadata": {}, + "source": [ + "This logic allows us to choose the doors Monty can open:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1f5f4d57", + "metadata": {}, + "outputs": [], + "source": [ + "montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + "montys_choices" + ] + }, + { + "cell_type": "markdown", + "id": "6d4929e2", + "metadata": {}, + "source": [ + "Notice that `montys_choices` will only have one element left when\n", + "`my_door` and `car_door` were different, but it will have two elements\n", + "if `my_door` and `car_door` were the same.\n", + "\n", + "Let’s play out those two cases:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4593a444", + "metadata": {}, + "outputs": [], + "source": [ + "my_door = 1 # For example.\n", + "car_door = 2 # For example.\n", + "# Monty can only choose door 3 now.\n", + "montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + "montys_choices" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3dbc433a", + "metadata": {}, + "outputs": [], + "source": [ + "my_door = 1 # For example.\n", + "car_door = 1 # For example.\n", + "# Monty can choose either door 2 or door 3.\n", + "montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + "montys_choices" + ] + }, + { + "cell_type": "markdown", + "id": "d7af8049", + "metadata": {}, + "source": [ + "If Monty can only choose one door, we’ll take that. 
Otherwise we’ll\n", + "chose a door at random from the two doors available." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "83861f23", + "metadata": {}, + "outputs": [], + "source": [ + "if len(montys_choices) == 1: # Only one door available.\n", + " montys_door = montys_choices[0] # Take the first (of 1!).\n", + "else: # Two doors to choose from:\n", + " # Choose at random.\n", + " montys_door = rnd.choice(montys_choices)\n", + "montys_door" + ] + }, + { + "cell_type": "markdown", + "id": "99b6c7f7", + "metadata": {}, + "source": [ + "In fact, we can avoid that `if len(` check for the number of doors,\n", + "because `rnd.choice` will also work on a sequence of length 1 — in that\n", + "case, it always returns the single element in the sequence, like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "34bdcb3b", + "metadata": {}, + "outputs": [], + "source": [ + "# rnd.choice on sequence with single element - always returns that element.\n", + "rnd.choice([2])" + ] + }, + { + "cell_type": "markdown", + "id": "03980f6b", + "metadata": {}, + "source": [ + "That means we can simplify the code above to:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bd290a01", + "metadata": {}, + "outputs": [], + "source": [ + "# Choose single door left to choose, or door at random if two.\n", + "montys_door = rnd.choice(montys_choices)\n", + "montys_door" + ] + }, + { + "cell_type": "markdown", + "id": "fe300ef3", + "metadata": {}, + "source": [ + "Now we know Monty’s door, we can identify the other door, by removing\n", + "our door, and Monty’s door, from the available options:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "57bedb8d", + "metadata": {}, + "outputs": [], + "source": [ + "remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n", + "# There is only one remaining door, take that.\n", + "other_door = remaining_doors[0]\n", + "other_door" + ] + }, + { + "cell_type": "markdown", + "id": "5866a9ab", + "metadata": {}, + "source": [ + "The logic above gives us the full procedure for one trial." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "56ec7b4d", + "metadata": {}, + "outputs": [], + "source": [ + "my_door = rnd.choice(doors)\n", + "car_door = rnd.choice(doors)\n", + "# Which door will Monty open?\n", + "montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + "# Choose single door left to choose, or door at random if two.\n", + "montys_door = rnd.choice(montys_choices)\n", + "# Now find the door we'll open if we switch.\n", + "remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n", + "# There is only one door left.\n", + "other_door = remaining_doors[0]\n", + "# Calculate the result of this trial.\n", + "if my_door == car_door:\n", + " stay_wins = True\n", + "if other_door == car_door:\n", + " switch_wins = True" + ] + }, + { + "cell_type": "markdown", + "id": "31aaa1d3", + "metadata": {}, + "source": [ + "All that remains is to put that trial procedure into a loop, and collect\n", + "the results as we repeat the procedure many times." 
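One small aside on the single-trial code above (not in the original notebook): only the flag on the winning side is assigned in any one trial, so `stay_wins` or `switch_wins` may be left undefined. The loop below avoids this by starting both as arrays of `False`; a standalone single trial would do the same with plain variables:

```python
# Initialize both flags so each has a value whatever the trial's outcome.
stay_wins = False
switch_wins = False
if my_door == car_door:
    stay_wins = True
if other_door == car_door:
    switch_wins = True
```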
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "441e6687", + "metadata": {}, + "outputs": [], + "source": [ + "# Arrays to store the results for each trial.\n", + "stay_wins = np.repeat([False], 10000)\n", + "switch_wins = np.repeat([False], 10000)\n", + "\n", + "# A list of doors to chose from.\n", + "doors = [1, 2, 3]\n", + "\n", + "for i in range(10000):\n", + " # You will recognize the below as the single-trial procedure above.\n", + " my_door = rnd.choice(doors)\n", + " car_door = rnd.choice(doors)\n", + " # Which door will Monty open?\n", + " montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + " # Choose single door left to choose, or door at random if two.\n", + " montys_door = rnd.choice(montys_choices)\n", + " # Now find the door we'll open if we switch.\n", + " remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n", + " # There is only one door left.\n", + " other_door = remaining_doors[0]\n", + " # Calculate the result of this trial.\n", + " if my_door == car_door:\n", + " stay_wins[i] = True\n", + " if other_door == car_door:\n", + " switch_wins[i] = True\n", + "\n", + "p_for_stay = np.sum(stay_wins) / 10000\n", + "p_for_switch = np.sum(switch_wins) / 10000\n", + "\n", + "print('p for stay:', p_for_stay)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5d37d08e", + "metadata": {}, + "outputs": [], + "source": [ + "print('p for switch:', p_for_switch)" + ] + }, + { + "cell_type": "markdown", + "id": "8c611363", + "metadata": {}, + "source": [ + "We can also follow the same strategy as we used for the second\n", + "implementation of the two-ships problem\n", + "(sec-ships-booleans).\n", + "\n", + "Here, as in the second two-ships implementation, we do not calculate the\n", + "trial results (`stay_wins`, `switch_wins`) in each trial. Instead, we\n", + "store the *doors* for each trial, and then use Boolean arrays to\n", + "calculate the results for all trials, at the end." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6f68fc9b", + "metadata": {}, + "outputs": [], + "source": [ + "# Instead of storing the trial results, we store the doors for each trial.\n", + "my_doors = np.zeros(10000)\n", + "car_doors = np.zeros(10000)\n", + "other_doors = np.zeros(10000)\n", + "\n", + "doors = [1, 2, 3]\n", + "\n", + "for i in range(10000):\n", + " my_door = rnd.choice(doors)\n", + " car_door = rnd.choice(doors)\n", + " # Which door will Monty open?\n", + " montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + " # Choose single door left to choose, or door at random if two.\n", + " montys_door = rnd.choice(montys_choices)\n", + " # Now find the door we'll open if we switch.\n", + " remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n", + " # There is only one door left.\n", + " other_door = remaining_doors[0]\n", + "\n", + " # Store the doors we chose.\n", + " my_doors[i] = my_door\n", + " car_doors[i] = car_door\n", + " other_doors[i] = other_door\n", + "\n", + "# Now - at the end of all the trials, we use Boolean arrays to calculate the\n", + "# results.\n", + "stay_wins = my_doors == car_doors\n", + "switch_wins = other_doors == car_doors\n", + "\n", + "p_for_stay = np.sum(stay_wins) / 10000\n", + "p_for_switch = np.sum(switch_wins) / 10000\n", + "\n", + "print('p for stay:', p_for_stay)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a3ea44fe", + "metadata": {}, + "outputs": [], + "source": [ + "print('p for switch:', p_for_switch)" + ] + }, + { + "cell_type": "markdown", + "id": "a414b7f5", + "metadata": {}, + "source": [ + "### 10.7.1 Insight from the Monty Hall simulation\n", + "\n", + "The code simulation gives us an estimate of the right answer, but it\n", + "also forces us to set out the exact mechanics of the problem. For\n", + "example, by looking at the code, we see that we can calculate\n", + "“stay_wins” with this code alone:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "92dd1747", + "metadata": {}, + "outputs": [], + "source": [ + "# Just choose my door and the car door for each trial.\n", + "my_doors = np.zeros(10000)\n", + "car_doors = np.zeros(10000)\n", + "doors = [1, 2, 3]\n", + "\n", + "for i in range(10000):\n", + " my_doors[i] = rnd.choice(doors)\n", + " car_doors[i] = rnd.choice(doors)\n", + "\n", + "# Calculate whether I won by staying.\n", + "stay_wins = my_doors == car_doors\n", + "p_for_stay = np.sum(stay_wins) / 10000\n", + "\n", + "print('p for stay:', p_for_stay)" + ] + }, + { + "cell_type": "markdown", + "id": "f386abac", + "metadata": {}, + "source": [ + "This calculation, on its own, tells us the answer, but it also points to\n", + "another insight — whatever Monty does with the doors, it doesn’t change\n", + "the probability that our *initial guess* is right, and that must be 1 in\n", + "3 (0.333). 
If the probability of `stay_win` is 1 in 3, and we only have\n", + "one other door to switch to, the probability of winning after switching\n", + "must be 2 in 3 (0.666).\n", + "\n", + "### 10.7.2 Simulation and a variant of Monty Hall\n", + "\n", + "You have seen that you can avoid the silly mistakes that many of us make\n", + "with probability — by asking the computer to tell you the result\n", + "*before* you start to reason from first principles.\n", + "\n", + "As an example, consider the following variant of the Monty Hall problem.\n", + "\n", + "The set up to the problem has us choosing a door (`my_door` above), and\n", + "then Monty opens one of the other two doors.\n", + "\n", + "Sometimes (in fact, 2/3 of the time) there is a car behind one of\n", + "Monty’s doors. We’ve obliged Monty to open the *other* door, and his\n", + "choice is forced.\n", + "\n", + "When his choice was not forced, we had Monty choose the door at random.\n", + "\n", + "For example, let us say we chose door 1.\n", + "\n", + "Let us say that the car is also under door 1.\n", + "\n", + "Monty has the option of choosing door 2 or door 3, and he chooses\n", + "randomly between them." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e7505ce0", + "metadata": {}, + "outputs": [], + "source": [ + "my_door = 1 # We chose door 1 at random.\n", + "car_door = 1 # This trial, by chance, the car door is 1.\n", + "# Monty is left with doors 2 and 3 to choose from.\n", + "montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + "# He chooses randomly.\n", + "montys_door = rnd.choice(montys_choices)\n", + "# Show the result\n", + "montys_door" + ] + }, + { + "cell_type": "markdown", + "id": "7a8398d4", + "metadata": {}, + "source": [ + "Now — let us say we happen to know that Monty is rather lazy, and he\n", + "will always choose the left-most (lower-numbered) door of the two\n", + "options.\n", + "\n", + "In the previous example, Monty had the option of choosing door 2 and 3.\n", + "In this new scenario, we know that he will always choose door 2 (the\n", + "left-most door)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "471fb695", + "metadata": {}, + "outputs": [], + "source": [ + "my_door = 1 # We chose door 1 at random.\n", + "car_door = 1 # This trial, by chance, the car door is 1.\n", + "# Monty is left with doors 2 and 3 to choose from.\n", + "montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + "# He chooses the left-most door, always.\n", + "montys_door = montys_choices[0]\n", + "# Show the result\n", + "montys_door" + ] + }, + { + "cell_type": "markdown", + "id": "640adf10", + "metadata": {}, + "source": [ + "It feels as if we have more information about where the car is, when we\n", + "know this. Consider the situation where we have chosen door 1, and Monty\n", + "opens door 3. We know that he would have preferred to open door 2, if he\n", + "was allowed. We therefore know he wasn’t allowed to open door 2, and\n", + "that means the car is definitely under door 2." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "38628bd2", + "metadata": {}, + "outputs": [], + "source": [ + "my_door = 1 # We chose door 1 at random.\n", + "car_door = 2 # This trial, by chance, the car door under door 2.\n", + "# Monty is left with door 3 only to choose from.\n", + "montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + "# He chooses the left-most door, always. 
But in this case, the left-most\n", + "# available door is 3 (he can't choose 2, it is the car_door).\n", + "# Notice the doors were in order, so the left-most door is the first door\n", + "# in the array.\n", + "montys_door = montys_choices[0]\n", + "# Show the result\n", + "montys_door" + ] + }, + { + "cell_type": "markdown", + "id": "14be87f2", + "metadata": {}, + "source": [ + "To take that into account, we might try a different strategy. We will\n", + "stick to our own choice if Monty has chosen the left-most of the two\n", + "doors he had available to him, because he might have chosen that door\n", + "because there was a car underneath the other door, or because there was\n", + "a car under neither, but he preferred the left door. But, if Monty\n", + "chooses the right-most of the two-doors available to him, we will switch\n", + "from our own choice to the other (unopened) door, because we can be sure\n", + "that the car is under the other (unopened) door.\n", + "\n", + "Call this the “switch if Monty chooses right door” strategy, or “switch\n", + "if right” for short.\n", + "\n", + "Can you see quickly whether this will be better than the “always stay”\n", + "strategy? Will it be better than the “always switch” strategy? Take a\n", + "moment to think it through, and write down your answers.\n", + "\n", + "If you can quickly see the answer to both questions — well done — but,\n", + "are you sure you are right?\n", + "\n", + "We can test by simulation.\n", + "\n", + "For our test of the “switch is right” strategy, we can tell if one door\n", + "is to the right of another door by comparison; higher numbers mean\n", + "further to the right: 2 is right of 1, and 3 is right of 2." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2343c0b9", + "metadata": {}, + "outputs": [], + "source": [ + "# Door 3 is right of door 1.\n", + "3 > 1" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "38a660c2", + "metadata": {}, + "outputs": [], + "source": [ + "# A test of the switch-if-right strategy.\n", + "# The car doors.\n", + "car_doors = np.zeros(10000)\n", + "# The door we chose using the strategy.\n", + "strategy_doors = np.zeros(10000)\n", + "\n", + "doors = [1, 2, 3]\n", + "\n", + "for i in range(10000):\n", + " my_door = rnd.choice(doors)\n", + " car_door = rnd.choice(doors)\n", + " # Which door will Monty open?\n", + " montys_choices = np.setdiff1d(doors, [my_door, car_door])\n", + " # Choose Monty's door from the remaining options.\n", + " # This time, he always prefers the left door.\n", + " montys_door = montys_choices[0]\n", + " # Now find the door we'll open if we switch.\n", + " remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n", + " # There is only one door remaining - but is Monty's door\n", + " # to the right of this one? 
Then Monty had to shift.\n", + " other_door = remaining_doors[0]\n", + " if montys_door > other_door:\n", + " # Monty's door was the right-hand door, the car is under the other one.\n", + " strategy_doors[i] = other_door\n", + " else: # We stick with the door we first thought of.\n", + " strategy_doors[i] = my_door\n", + " # Store the car door for this trial.\n", + " car_doors[i] = car_door\n", + "\n", + "strategy_wins = strategy_doors == car_doors\n", + "\n", + "p_for_strategy = np.sum(strategy_wins) / 10000\n", + "\n", + "print('p for strategy:', p_for_strategy)" + ] + }, + { + "cell_type": "markdown", + "id": "3cee288e", + "metadata": {}, + "source": [ + "We find that the “switch-if-right” has around the same chance of success\n", + "as the “always-switch” strategy — of about 66.6%, or 2 in 3. Were your\n", + "initial answers right? Now you’ve seen the result, can you see why it\n", + "should be so? It may not be obvious — the Monty Hall problem is\n", + "deceptively difficult. But our case here is that the simulation first\n", + "gives you an estimate of the correct answer, and then, gives you a good\n", + "basis for thinking more about the problem. That is:\n", + "\n", + "- simulation is useful for estimation and\n", + "- simulation is useful for reflection." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/one_pair.ipynb b/python-book/notebooks/one_pair.ipynb new file mode 100644 index 00000000..18a2b999 --- /dev/null +++ b/python-book/notebooks/one_pair.ipynb @@ -0,0 +1,104 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "b13b472e", + "metadata": {}, + "source": [ + "# One pair" + ] + }, + { + "cell_type": "markdown", + "id": "0ec967cc", + "metadata": {}, + "source": [ + "This is a simulation to find the probability of exactly one pair in a\n", + "poker hand of five cards." 
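The simulation below counts pairs by tallying how many times each card value occurs in the hand (with `np.bincount`) and then counting the values that occur exactly twice. As a quick illustration of just that counting step, here is a single hand-picked hand (this small example is an aside of ours, not part of the original notebook):

```python
import numpy as np

# A hand-picked hand with exactly one pair: two 7s plus a 2, an 11 and a 13.
hand = np.array([7, 2, 7, 11, 13])

# bincount returns, for each value 0 through 13, how many times it occurs in the hand.
repeat_nos = np.bincount(hand)
print(repeat_nos)  # The count at position 7 is 2 (the pair).

# Values that occur exactly twice are pairs.
n_pairs = np.sum(repeat_nos == 2)
print(n_pairs)  # 1
```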
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e9e996c8", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4c078bb5", + "metadata": {}, + "outputs": [], + "source": [ + "# Create a bucket (vector) called a with four \"1's,\" four \"2's,\" four \"3's,\"\n", + "# etc., to represent a deck of cards\n", + "one_suit = np.arange(1, 14)\n", + "one_suit" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "79dd4e61", + "metadata": {}, + "outputs": [], + "source": [ + "# Repeat values for one suit four times to make a 52 card deck of values.\n", + "deck = np.repeat(one_suit, 4)\n", + "deck" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "80b6c965", + "metadata": {}, + "outputs": [], + "source": [ + "# Array to store result of each trial.\n", + "z = np.zeros(10000)\n", + "\n", + "# Repeat the following steps 10000 times\n", + "for i in range(10000):\n", + " # Shuffle the deck\n", + " shuffled = rnd.permuted(deck)\n", + "\n", + " # Take the first five cards to make a hand.\n", + " hand = shuffled[:5]\n", + "\n", + " # How many pairs?\n", + " # Counts for each card rank.\n", + " repeat_nos = np.bincount(hand)\n", + " n_pairs = np.sum(repeat_nos == 2)\n", + "\n", + " # Keep score of # of pairs\n", + " z[i] = n_pairs\n", + "\n", + " # End loop, go back and repeat\n", + "\n", + "# How often was there 1 pair?\n", + "k = np.sum(z == 1)\n", + "\n", + "# Convert to proportion.\n", + "kk = k / 10000\n", + "\n", + "# Show the result.\n", + "print(kk)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/pennies.ipynb b/python-book/notebooks/pennies.ipynb new file mode 100644 index 00000000..6d597494 --- /dev/null +++ b/python-book/notebooks/pennies.ipynb @@ -0,0 +1,122 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "cb46f199", + "metadata": {}, + "source": [ + "# Simulating the pennies game" + ] + }, + { + "cell_type": "markdown", + "id": "e8a0501c", + "metadata": {}, + "source": [ + "This notebook calculates the probability that one player will run out of\n", + "pennies within 200 turns of the Pennies game." 
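The cells below follow the original step-by-step program. As an aside (a compact sketch of ours, not the book's implementation), the same game can be seen as a random walk on player A's stake, which starts at 10, and the game is decided if that stake ever reaches 0 or 20 within the 200 turns:

```python
import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
someone_won = np.zeros(n_trials, dtype=bool)

for i in range(n_trials):
    # Each turn moves one penny from one player to the other: +1 or -1 to A's stake.
    steps = rnd.choice([1, -1], size=200)
    # A's stake after each of the 200 turns.
    a_stake = 10 + np.cumsum(steps)
    # Someone has won if A's stake ever hits 20 (B is broke) or 0 (A is broke).
    someone_won[i] = np.any((a_stake == 20) | (a_stake == 0))

print(np.sum(someone_won) / n_trials)
```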
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b3e7efba", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f8170b2a", + "metadata": {}, + "outputs": [], + "source": [ + "someone_won = np.zeros(10000)\n", + "\n", + "# Do 10000 trials\n", + "for i in range(10000):\n", + "\n", + " # Record the number 10: a's stake\n", + " a_stake = 10\n", + "\n", + " # Same for b\n", + " b_stake = 10\n", + "\n", + " # An indicator flag that will be set to \"1\" when somebody wins.\n", + " flag = 0\n", + "\n", + " # Repeat the following steps 200 times.\n", + " # Notice we use \"j\" as the counter variable, to avoid overwriting\n", + " # \"i\", the counter variable for the 10000 trials.\n", + " for j in range(200):\n", + " # Generate the equivalent of a coin flip, letting 1 = heads,\n", + " # 2 = tails\n", + " c = rnd.integers(1, 3)\n", + "\n", + " # If it's a heads\n", + " if c == 1:\n", + "\n", + " # Add 1 to b's stake\n", + " b_stake = b_stake + 1\n", + "\n", + " # Subtract 1 from a's stake\n", + " a_stake = a_stake - 1\n", + "\n", + " # End the \"if\" condition\n", + "\n", + " # If it's a tails\n", + " if c == 2:\n", + "\n", + " # Add one to a's stake\n", + " a_stake = a_stake + 1\n", + "\n", + " # Subtract 1 from b's stake\n", + " b_stake = b_stake - 1\n", + "\n", + " # End the \"if\" condition\n", + "\n", + " # If a has won\n", + " if a_stake == 20:\n", + "\n", + " # Set the indicator flag to 1\n", + " flag = 1\n", + "\n", + " # If b has won\n", + " if b_stake == 20:\n", + "\n", + " # Set the indicator flag to 1\n", + " flag = 1\n", + "\n", + " # End the repeat loop for 200 plays (note that the indicator flag stays at\n", + " # 0 if neither a nor b has won)\n", + "\n", + " # Keep track of whether anybody won\n", + " someone_won[i] = flag\n", + "\n", + "# End the 10000 trials\n", + "\n", + "# Find out how often somebody won\n", + "n_wins = np.sum(someone_won)\n", + "\n", + "# Convert to a proportion\n", + "prop_wins = n_wins / 10000\n", + "\n", + "# Print the results\n", + "print(prop_wins)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/pig_rations.ipynb b/python-book/notebooks/pig_rations.ipynb new file mode 100644 index 00000000..64fc6098 --- /dev/null +++ b/python-book/notebooks/pig_rations.ipynb @@ -0,0 +1,136 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "47d0dc24", + "metadata": {}, + "source": [ + "# Weight gain on pig rations" + ] + }, + { + "cell_type": "markdown", + "id": "511f516a", + "metadata": {}, + "source": [ + "We do a simulation of weight gain ranks for two different pig rations.\n", + "\n", + "The `ranks = np.arange(1, 25)` statement creates an array of numbers 1\n", + "through 24, which will represent the rankings of weight gains for each\n", + "of the 24 pigs. We repeat the following procedure for 10000 trials.\n", + "First we shuffle the elements of array `ranks` so that the rank numbers\n", + "for weight gains are randomized and placed in array `shuffled`. We then\n", + "select the first 12 elements of `shuffled` and place them in `first_12`;\n", + "this represents the rankings of a randomly-selected group of 12 pigs. 
We\n", + "next count (`sum`) in `n_top` the number of pigs whose rankings for\n", + "weight gain were in the top half — that is, a rank of less than 13. We\n", + "record that number in `top_ranks`, and then continue the loop, until we\n", + "finish our `n` trials.\n", + "\n", + "Since we did not know beforehand the direction of the effect of ration A\n", + "on weight gain, we want to count the times that *either more than 8* of\n", + "the random selection of 12 pigs were in the top half of the rankings,\n", + "*or that fewer than 4* of these pigs were in the top half of the weight\n", + "gain rankings — (The latter is the same as counting the number of times\n", + "that more than 8 of the 12 *non-selected* random pigs were in the top\n", + "half in weight gain.)\n", + "\n", + "We do so with the final two `sum` statements. By adding the two results\n", + "`n_gte_9` and `n_lte_3` together, we have the number of times out of\n", + "10,000 that differences in weight gains in two groups as dramatic as\n", + "those obtained in the actual experiment would occur by chance." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4fe7bdf5", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "# Constitute the set of the weight gain rank orders. ranks is now a vector\n", + "# consisting of the numbers 1 — 24, in that order.\n", + "ranks = np.arange(1, 25)\n", + "\n", + "n = 10_000\n", + "\n", + "top_ranks = np.zeros(n, dtype=int)\n", + "\n", + "for i in range(n):\n", + " # Shuffle the ranks of the weight gains.\n", + " shuffled = rnd.permuted(ranks)\n", + " # Take the first 12 ranks.\n", + " first_12 = shuffled[:12]\n", + " # Determine how many of these randomly selected 12 ranks are less than\n", + " # 12 (i.e. 1-12), put that result in n_top.\n", + " n_top = np.sum(first_12 <= 12)\n", + " # Keep track of each trial result in top_ranks\n", + " top_ranks[i] = n_top\n", + "\n", + "plt.hist(top_ranks, bins=np.arange(1, 12))\n", + "plt.title('Number of top 12 ranks in pig-ration trials')" + ] + }, + { + "cell_type": "markdown", + "id": "a8b31cde", + "metadata": {}, + "source": [ + "We see from the histogram that, in about 3 percent of the trials, either\n", + "more than 8 or fewer than 4 top half ranks (1-12) made it into the\n", + "random group of twelve that we selected. Python will calculate this for\n", + "us as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9ed9cda7", + "metadata": {}, + "outputs": [], + "source": [ + "# Determine how many of the trials yielded 9 or more top ranks.\n", + "n_gte_9 = np.sum(top_ranks >= 9)\n", + "# Determine how many trials yielded 3 or fewer of the top ranks.\n", + "# If there were 3 or fewer, then 9 or more of the top ranks must\n", + "# have been in the other group (not selected).\n", + "n_lte_3 = np.sum(top_ranks <= 3)\n", + "# Add the two together.\n", + "n_both = n_gte_9 + n_lte_3\n", + "# Convert to a proportion.\n", + "prop_both = n_both / n\n", + "\n", + "print('Trial proportion >=9 top ranks in either group:',\n", + " np.round(prop_both, 2))" + ] + }, + { + "cell_type": "markdown", + "id": "81e248bc", + "metadata": {}, + "source": [ + "The decisions that are warranted on the basis of the results depend upon\n", + "one’s purpose. If writing a scientific paper on the merits of ration A\n", + "is the ultimate purpose, it would be sensible to test another batch of\n", + "pigs to get further evidence. 
(Or you could proceed to employ another\n", + "sort of test for a slightly more precise evaluation.) But if the goal is\n", + "a decision on which type of ration to buy for a small farm and they are\n", + "the same price, just go ahead and buy ration A because, even if it is no\n", + "better than ration B, you have strong evidence that it is *no worse* ." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/pill_placebo.ipynb b/python-book/notebooks/pill_placebo.ipynb new file mode 100644 index 00000000..9a4e873b --- /dev/null +++ b/python-book/notebooks/pill_placebo.ipynb @@ -0,0 +1,110 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "4ac46e5f", + "metadata": {}, + "source": [ + "# Cures for pill vs placebo" + ] + }, + { + "cell_type": "markdown", + "id": "713956fe", + "metadata": {}, + "source": [ + "Now for a Python solution. Again, the benchmark hypothesis is that pill\n", + "P has no effect, and we ask how often, on this assumption, the results\n", + "that were obtained from the actual test of the pill would occur by\n", + "chance.\n", + "\n", + "Given that in the test 7 of 12 patients overall got well, the benchmark\n", + "hypothesis assumes 7/12 to be the chances of any random patient being\n", + "cured. We generate two similar samples of 6 patients, both taken from\n", + "the same universe composed of the combined samples — the bootstrap\n", + "procedure. We count (`sum`) the number who are “get well” in each\n", + "sample. Then we subtract the number who got well in the “no-pill” sample\n", + "from the number who got well in the “pill” sample. We record the\n", + "resulting difference for each trial in the variable `pill_betters`.\n", + "\n", + "In the actual test, 3 more patients got well in the sample given the\n", + "pill than in the sample given the placebo. We therefore count how many\n", + "of the trials yield results where the difference between the sample\n", + "given the pill and the sample not given the pill was greater than 2\n", + "(equal to or greater than 3). This result is the probability that the\n", + "results derived from the actual test would be obtained from random\n", + "samples drawn from a population which has a constant cure rate, pill or\n", + "no pill." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0012c098", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "# The bucket with the pieces of paper.\n", + "options = np.repeat(['get well', 'not well'], [7, 5])\n", + "\n", + "n = 10_000\n", + "\n", + "pill_betters = np.zeros(n, dtype=int)\n", + "\n", + "for i in range(n):\n", + "    pill = rnd.choice(options, size=6)\n", + "    pill_cures = np.sum(pill == 'get well')\n", + "    placebo = rnd.choice(options, size=6)\n", + "    placebo_cures = np.sum(placebo == 'get well')\n", + "    pill_betters[i] = pill_cures - placebo_cures\n", + "\n", + "plt.hist(pill_betters, bins=range(-6, 7))\n", + "plt.title('Number of extra cures pill vs placebo in null universe')" + ] + }, + { + "cell_type": "markdown", + "id": "e63ba207", + "metadata": {}, + "source": [ + "Recall our actual observed results: In the medicine group, three more\n", + "patients were cured than in the placebo group.
From the histogram, we\n", + "see that in only about 8 percent of the simulated trials did the\n", + "“medicine” group do as well or better. The results seem to suggest — but\n", + "by no means conclusively — that the medicine’s performance is not due to\n", + "chance. Further study would probably be warranted. The following\n", + "commands added to the above program will calculate this proportion\n", + "directly:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6af6d3c8", + "metadata": {}, + "outputs": [], + "source": [ + "# How many trials gave an advantage of 3 or greater to the pill?\n", + "k = np.sum(pill_betters >= 3)\n", + "# Convert to a proportion.\n", + "kk = k / n\n", + "# Print the result.\n", + "print('Proportion with advantage of 3 or more for pill:',\n", + " np.round(kk, 2))" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/planet_densities.ipynb b/python-book/notebooks/planet_densities.ipynb new file mode 100644 index 00000000..a6d696e0 --- /dev/null +++ b/python-book/notebooks/planet_densities.ipynb @@ -0,0 +1,83 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "86d09502", + "metadata": {}, + "source": [ + "# Planet densities and distance" + ] + }, + { + "cell_type": "markdown", + "id": "502de547", + "metadata": {}, + "source": [ + "We apply the logic of resampling to the problem of close and distant\n", + "planets and their densities." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bc89c3b1", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "# Steps 1 and 2.\n", + "actual_mean_diff = 8 / 2 - 7 / 3\n", + "\n", + "# Step 3\n", + "ranks = np.arange(1, 6)\n", + "\n", + "n = 10_000\n", + "\n", + "mean_differences = np.zeros(n)\n", + "\n", + "for i in range(n):\n", + " # Step 4\n", + " shuffled = rnd.permuted(ranks)\n", + " # Step 5\n", + " closer = shuffled[:2] # First 2\n", + " further = shuffled[2:] # Last 3\n", + " # Step 6\n", + " mean_close = np.mean(closer)\n", + " mean_far = np.mean(further)\n", + " # Step 7\n", + " mean_differences[i] = mean_close - mean_far\n", + "\n", + "# Step 9\n", + "k = np.sum(mean_differences >= actual_mean_diff)\n", + "prob = k / n\n", + "\n", + "print('Proportion of trials with mean difference >= 1.67:',\n", + " np.round(prob, 2))" + ] + }, + { + "cell_type": "markdown", + "id": "72c7b2d4", + "metadata": {}, + "source": [ + "Interpretation: 19 percent of the time, random shufflings produced a\n", + "difference in ranks as great as or greater than observed. Hence, on the\n", + "strength of this evidence, we should *not* conclude that there is a\n", + "statistically surprising difference in densities between the further\n", + "planets and the closer planets." 
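Because there are only five ranks, we can also enumerate the permutation distribution exactly instead of sampling it: there are just 10 ways to choose which 2 of the 5 ranks belong to the closer planets. This exact check (our addition, not part of the original notebook) gives 0.2, in line with the simulated result of about 0.19:

```python
from itertools import combinations

import numpy as np

ranks = np.arange(1, 6)
actual_mean_diff = 8 / 2 - 7 / 3

# Every possible split of the 5 ranks into 2 "closer" and 3 "further" planets.
splits = list(combinations(ranks, 2))

n_as_extreme = 0
for closer in splits:
    further = np.setdiff1d(ranks, closer)
    mean_diff = np.mean(closer) - np.mean(further)
    if mean_diff >= actual_mean_diff:
        n_as_extreme += 1

# 2 of the 10 possible splits give a difference at least as large as observed.
print(n_as_extreme / len(splits))  # 0.2
```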
+ ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/sampling_tools.ipynb b/python-book/notebooks/sampling_tools.ipynb new file mode 100644 index 00000000..87a00734 --- /dev/null +++ b/python-book/notebooks/sampling_tools.ipynb @@ -0,0 +1,1248 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "37a413d4", + "metadata": {}, + "source": [ + "# Sampling tools" + ] + }, + { + "cell_type": "markdown", + "id": "cd717337", + "metadata": {}, + "source": [ + "## 6.2 Samples and labels\n", + "\n", + "Thus far we have used numbers such as 1 and 0 and 10 to represent the\n", + "elements we are sampling from. For example, in\n", + "sec-resampling-two, we were\n", + "simulating the chance of a particular juror being black, given that 26%\n", + "of the eligible jurors in the county were black. We used *integers* for\n", + "that task, where we started with all the integers from 0 through 99, and\n", + "asked NumPy to select values at random from those integers. When NumPy\n", + "selected an integer from 0 through 25, we chose to label the resulting\n", + "simulated juror as black — there are 26 integers in the range 0 through\n", + "25, so there is a 26% chance that any one integer will be in that range.\n", + "If the integer was from 26 through 99, the simulated juror was white\n", + "(there are 74 integers in the range 26 through 99).\n", + "\n", + "Here is the process of simulating a single juror, adapted from\n", + "sec-random-zero-through-ninety-nine:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "50ef039e", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "# Ask NumPy for a random number generator.\n", + "rnd = np.random.default_rng()\n", + "\n", + "# All the integers from 0 up to, but not including 100.\n", + "zero_thru_99 = np.arange(100)\n", + "\n", + "# Get one random numbers from 0 through 99\n", + "a = rnd.choice(zero_thru_99)\n", + "\n", + "# Show the result\n", + "a" + ] + }, + { + "cell_type": "markdown", + "id": "4d7f8e3f", + "metadata": {}, + "source": [ + "After that, we have to unpack our labeling of 0 through 25 as being\n", + "“black” and 26 through 99 as being “white”. We might do that like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e3f19663", + "metadata": {}, + "outputs": [], + "source": [ + "this_juror_is_black = a < 26\n", + "this_juror_is_black" + ] + }, + { + "cell_type": "markdown", + "id": "6fa8791d", + "metadata": {}, + "source": [ + "This all works as we want it to, but it’s just a little bit difficult to\n", + "remember the coding (less than 26 means “black”, greater than 25 means\n", + "“white”). We had to use that coding because we committed ourselves to\n", + "using random numbers to simulate the outcomes.\n", + "\n", + "However, Python can also store bits of text, called *strings*. 
Values\n", + "that are bits of text can be very useful because the text values can be\n", + "memorable labels for the entities we are sampling from, in our\n", + "simulations.\n", + "\n", + "Before we get to strings, let us consider the different types of value\n", + "we have seen so far.\n", + "\n", + "## 6.3 Types of values in Python\n", + "\n", + "You have already come across the idea that Python values can be integers\n", + "(positive or negative whole numbers), like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "46663bd9", + "metadata": {}, + "outputs": [], + "source": [ + "v = 10\n", + "v" + ] + }, + { + "cell_type": "markdown", + "id": "2052de62", + "metadata": {}, + "source": [ + "Here the variable `v` holds the value. We can see what type of value `v`\n", + "holds by using the `type` function:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a0bf7f1b", + "metadata": {}, + "outputs": [], + "source": [ + "type(v)" + ] + }, + { + "cell_type": "markdown", + "id": "ac0745fb", + "metadata": {}, + "source": [ + "As you may have noticed, Python can also have *floating point* values.\n", + "These are values with a decimal point — so numbers that do not have to\n", + "be integers, but can be any value between the integers. These floating\n", + "points values are of type `float`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9cd4e228", + "metadata": {}, + "outputs": [], + "source": [ + "f = 10.1\n", + "type(f)" + ] + }, + { + "cell_type": "markdown", + "id": "72a04887", + "metadata": {}, + "source": [ + "### 6.3.1 Numpy arrays\n", + "\n", + "You have also seen that Numpy contains another type, the *array*. An\n", + "array is a value that contains a sequence of values. For example, here\n", + "is an array of integers:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "57b26863", + "metadata": {}, + "outputs": [], + "source": [ + "arr = np.array([0, 10, 99, 4])\n", + "arr" + ] + }, + { + "cell_type": "markdown", + "id": "0721781e", + "metadata": {}, + "source": [ + "Notice that this value `arr` is of type `np.ndarray`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "acd8c345", + "metadata": {}, + "outputs": [], + "source": [ + "type(arr)" + ] + }, + { + "cell_type": "markdown", + "id": "f290fe48", + "metadata": {}, + "source": [ + "The array has its own internal record of what type of values it holds.\n", + "This is called the array `dtype`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3882939b", + "metadata": {}, + "outputs": [], + "source": [ + "arr.dtype" + ] + }, + { + "cell_type": "markdown", + "id": "1019434e", + "metadata": {}, + "source": [ + "The array `dtype` records the type of value stored in the array. 
All\n", + "values in the array must be of this type, and all values in the array\n", + "are therefore of the same type.\n", + "\n", + "The array above contains integers, but we can also make arrays\n", + "containing floating point values:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ad803c35", + "metadata": {}, + "outputs": [], + "source": [ + "float_arr = np.array([0.1, 10.1, 99.0, 4.3])\n", + "float_arr" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6c34fee6", + "metadata": {}, + "outputs": [], + "source": [ + "float_arr.dtype" + ] + }, + { + "cell_type": "markdown", + "id": "7cf1a4b4", + "metadata": {}, + "source": [ + "### 6.3.2 Lists\n", + "\n", + "We have elided past another Python type, the *list*. In fact we have\n", + "already used lists in making arrays. For example, here we make an array\n", + "with four values:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5e76fd65", + "metadata": {}, + "outputs": [], + "source": [ + "np.array([0, 10, 99, 4])" + ] + }, + { + "cell_type": "markdown", + "id": "ae830a65", + "metadata": {}, + "source": [ + "We could also write the statement above in two steps:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "44dbb3f8", + "metadata": {}, + "outputs": [], + "source": [ + "my_list = [0, 10, 99, 4]\n", + "np.array(my_list)" + ] + }, + { + "cell_type": "markdown", + "id": "dcf681b8", + "metadata": {}, + "source": [ + "In the first statement — `my_list = [0, 10, 99, 4]` — we construct a\n", + "*list* — a container for the four values. Let’s look at the `my_list`\n", + "value:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "616e28a0", + "metadata": {}, + "outputs": [], + "source": [ + "my_list" + ] + }, + { + "cell_type": "markdown", + "id": "45123bee", + "metadata": {}, + "source": [ + "Notice that we do not see `array` in the display — this is not an array\n", + "but a list:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7175e2ff", + "metadata": {}, + "outputs": [], + "source": [ + "type(my_list)" + ] + }, + { + "cell_type": "markdown", + "id": "32a0841b", + "metadata": {}, + "source": [ + "A list is a basic Python type. We can construct it by using the square\n", + "brackets notation that you see above; we start with `[`, then we put the\n", + "values we want to go in the list, separated by commas, followed by `]`.\n", + "Here is another list:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fc9a893d", + "metadata": {}, + "outputs": [], + "source": [ + "# Creating another list.\n", + "list_2 = [5, 10, 20]" + ] + }, + { + "cell_type": "markdown", + "id": "c98118cf", + "metadata": {}, + "source": [ + "As you saw, we have been building arrays by building lists, and then\n", + "passing the list to the `np.array` function, to create an array." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b7ae6271", + "metadata": {}, + "outputs": [], + "source": [ + "list_again = [100, 10, 0]\n", + "np.array(list_again)" + ] + }, + { + "cell_type": "markdown", + "id": "2f3458c4", + "metadata": {}, + "source": [ + "Of course, we can do this one line, as we have been doing up till now,\n", + "by constructing the list inside the parentheses of the function. 
So, the\n", + "following cell has just the same output as the cell above:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fec09967", + "metadata": {}, + "outputs": [], + "source": [ + "# Constructing the list inside the function brackets.\n", + "np.array([100, 10, 0])" + ] + }, + { + "cell_type": "markdown", + "id": "d28ae384", + "metadata": {}, + "source": [ + "Lists are like arrays in that they are values that contain values, but\n", + "they are unlike arrays in various ways — that we will not go into now.\n", + "We often use lists to construct sequences into lists to turn them into\n", + "arrays. For our purposes, and particularly for our calculations, arrays\n", + "are much more useful and efficient than lists.\n" + ] + }, + { + "cell_type": "markdown", + "id": "57294d92", + "metadata": {}, + "source": [ + "## 6.4 String values\n", + "\n", + "So far, all the values you have seen in Python arrays have been numbers.\n", + "Now we get on to values that are bits of text. These are called\n", + "*strings*.\n", + "\n", + "Here is a single Python string value:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2633e11e", + "metadata": {}, + "outputs": [], + "source": [ + "s = \"Resampling\"\n", + "s" + ] + }, + { + "cell_type": "markdown", + "id": "b35f8d41", + "metadata": {}, + "source": [ + "What is the `type` of the new bit-of-text value `s`?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "608dc119", + "metadata": {}, + "outputs": [], + "source": [ + "type(s)" + ] + }, + { + "cell_type": "markdown", + "id": "91415fb4", + "metadata": {}, + "source": [ + "The Python `str` value is a bit of text, and therefore consists of a\n", + "sequence of characters.\n", + "\n", + "As arrays are containers for other things, such as numbers, strings are\n", + "containers for characters.\n", + "\n", + "As we can find the number of elements in an array\n", + "(sec-array-length), we can find\n", + "the number of characters in a string with the `len` function:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "97b180ee", + "metadata": {}, + "outputs": [], + "source": [ + "# Number of characters in s\n", + "len(s)" + ] + }, + { + "cell_type": "markdown", + "id": "b6f0205f", + "metadata": {}, + "source": [ + "As we can *index* into array values to get individual elements\n", + "(sec-array-indexing), we can\n", + "index into string values to get individual characters:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "460a3589", + "metadata": {}, + "outputs": [], + "source": [ + "# Get the second character of the string\n", + "# Remember, Python's index positions start at 0.\n", + "second_char = s[1]\n", + "second_char" + ] + }, + { + "cell_type": "markdown", + "id": "d2a5df15", + "metadata": {}, + "source": [ + "## 6.5 Strings in arrays\n", + "\n", + "As we can store numbers as elements in arrays, we can also store strings\n", + "as array elements." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4d22b5f6", + "metadata": {}, + "outputs": [], + "source": [ + "# Just for clarity, make the list first.\n", + "# Lists can also contain strings.\n", + "list_of_strings = ['Julian', 'Lincoln', 'Simon']\n", + "# Then pass the list to np.array to make the array.\n", + "arr_of_strings = np.array(list_of_strings)\n", + "arr_of_strings" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f399d1ad", + "metadata": {}, + "outputs": [], + "source": [ + "# We can also create the list and the array in one line,\n", + "# as we have been doing up til now.\n", + "arr_of_strings = np.array(['Julian', 'Lincoln', 'Simon'])\n", + "arr_of_strings" + ] + }, + { + "cell_type": "markdown", + "id": "319af832", + "metadata": {}, + "source": [ + "Notice the array `dtype`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fd445acb", + "metadata": {}, + "outputs": [], + "source": [ + "arr_of_strings.dtype" + ] + }, + { + "cell_type": "markdown", + "id": "ad8cc0ee", + "metadata": {}, + "source": [ + "The `U` in the `dtype` tells you that the elements in the array are\n", + "[Unicode](https://en.wikipedia.org/wiki/Unicode) strings (Unicode is a\n", + "computer representation of text characters). The number after the `U`\n", + "gives the maximum number of characters for any string in the array, here\n", + "set to the length of the longest string when we created the array.\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Take care with Numpy string arrays\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "It is easy to run into trouble with Numpy string arrays where the\n", + "elements have a maximum length, as here. Remember, the `dtype` of the\n", + "array tells you what type of element the array can hold. Here the\n", + "`dtype` is telling you that the array can hold strings of maximum length\n", + "7 characters. Now imagine trying to put a longer string into the array —\n", + "what do you think would happen?\n", + "\n", + "This happens:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "108289fb", + "metadata": {}, + "outputs": [], + "source": [ + "# An array of small strings.\n", + "small_strings = np.array(['six', 'one', 'two'])\n", + "small_strings.dtype" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "288fd570", + "metadata": {}, + "outputs": [], + "source": [ + "# Set a new value for the first element (first string).\n", + "small_strings[0] = 'seven'\n", + "small_strings" + ] + }, + { + "cell_type": "markdown", + "id": "7ce469f7", + "metadata": {}, + "source": [ + "Numpy truncates the new string to match the original maximum length.\n", + "\n", + "For that reason, it is often useful to instruct Numpy that you want to\n", + "use effectively infinite length strings, by specifying the array `dtype`\n", + "as `object` *when you make the array*, like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d75b14f2", + "metadata": {}, + "outputs": [], + "source": [ + "# An array of small strings, but this time, tell Numpy\n", + "# that the strings should be of effectively infinite length.\n", + "small_strings_better = np.array(['six', 'one', 'two'], dtype=object)\n", + "small_strings_better" + ] + }, + { + "cell_type": "markdown", + "id": "97bb4883", + "metadata": {}, + "source": [ + "Notice that the code uses a *named function argument*\n", + "(sec-named-arguments), to\n", + "specify to `np.array` that the array elements should be of type\n", + "`object`. This type can store any Python value, and so, when the array\n", + "is storing strings, it will use Python’s own string values as elements,\n", + "rather than the more efficient but more fragile Unicode strings that\n", + "Numpy uses by default." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5b7f541b", + "metadata": { + "lines_to_next_cell": 0 + }, + "outputs": [], + "source": [ + "# Set a new value for the first element in the new array.\n", + "small_strings_better[0] = 'seven'\n", + "small_strings_better" + ] + }, + { + "cell_type": "markdown", + "id": "42640a17", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "id": "e6fa2255", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "id": "4091bcb3", + "metadata": {}, + "source": [ + "As for any array, you can select elements with *indexing*. When you\n", + "select an element with a given position (index), you get the *string* at\n", + "at that position:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4bd28fed", + "metadata": {}, + "outputs": [], + "source": [ + "# Julian Lincoln Simon's second name.\n", + "# (Remember, Python's positions start at 0).\n", + "middle_name = arr_of_strings[1]\n", + "middle_name" + ] + }, + { + "cell_type": "markdown", + "id": "77243327", + "metadata": {}, + "source": [ + "As for numbers, we can compare strings with, for example, the `==`\n", + "operator, that asks whether the two strings are equal:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ecb45e77", + "metadata": {}, + "outputs": [], + "source": [ + "middle_name == 'Lincoln'" + ] + }, + { + "cell_type": "markdown", + "id": "1f15a206", + "metadata": {}, + "source": [ + "## 6.6 Repeating elements\n", + "\n", + "Now let us go back to the problem of selecting black and white jurors.\n", + "\n", + "We started with the strategy of using numbers 0 through 25 to mean\n", + "“black” jurors, and 26 through 99 to mean “white” jurors. We selected\n", + "values at random from 0 through 99, and then worked out whether the\n", + "number meant a “black” juror (was less than 26) or a “white” juror (was\n", + "greater than 25).\n", + "\n", + "It would be good to use strings instead of numbers to identify the\n", + "potential jurors. Then we would not have to remember our coding of 0\n", + "through 25 and 26 through 99.\n", + "\n", + "If only there was a way to make an array of 100 strings, where 26 of the\n", + "strings were “black” and 74 were “white”. Then we could select randomly\n", + "from that array, and it would be immediately obvious that we had a\n", + "“black” or “white” juror.\n", + "\n", + "Luckily, of course, we can do that, by using the `np.repeat` function to\n", + "construct the array.\n", + "\n", + "Here is how that works:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "43d05f07", + "metadata": {}, + "outputs": [], + "source": [ + "# The values that we will repeat to fill up the larger array.\n", + "# Use a list to store the sequence of values.\n", + "juror_types = ['black', 'white']\n", + "# The number of times we want to repeat \"black\" and \"white\".\n", + "# Use a list to store the sequence of values.\n", + "repeat_nos = [26, 74]\n", + "# Repeat \"black\" 26 times and \"white\" 74 times.\n", + "# We have passed two lists here, but we could also have passed\n", + "# arrays - the Numpy repeat function converts the lists to arrays\n", + "# before it builds the repeats.\n", + "jury_pool = np.repeat(juror_types, repeat_nos)\n", + "# Show the result\n", + "jury_pool" + ] + }, + { + "cell_type": "markdown", + "id": "6864094a", + "metadata": {}, + "source": [ + "We can use this array of repeats of strings, to sample from. 
The result\n", + "is easier to grasp, because we are using the string labels, instead of\n", + "numbers:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e9fc1ae2", + "metadata": {}, + "outputs": [], + "source": [ + "# Select one juror at random from the black / white pool.\n", + "one_juror = rnd.choice(jury_pool)\n", + "one_juror" + ] + }, + { + "cell_type": "markdown", + "id": "f08d9550", + "metadata": {}, + "source": [ + "We can select our full jury of 12 jurors, and see the results in a more\n", + "obvious form:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c5d42e63", + "metadata": {}, + "outputs": [], + "source": [ + "# Select 12 jurors at random from the black / white pool.\n", + "one_jury = rnd.choice(jury_pool, 12)\n", + "one_jury" + ] + }, + { + "cell_type": "markdown", + "id": "8919d6c5", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Using the `size` argument to `rnd.choice`\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "In the code above, we have specified the *size* of the sample we want\n", + "(12) with the second argument to `rnd.choice`. As you saw in\n", + "sec-named-arguments, we can\n", + "also give names to the function arguments, in this case, to make it\n", + "clearer what we mean by “12” in the code above. In fact, from now on,\n", + "that is what we will do; we will specify the *size* of our sample by\n", + "using the *name* for the function argument to `rnd.choice` — `size` —\n", + "like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "34a27421", + "metadata": {}, + "outputs": [], + "source": [ + "# Select 12 jurors at random from the black / white pool.\n", + "# Specify the sample size using the \"size\" named argument.\n", + "one_jury = rnd.choice(jury_pool, size=12)\n", + "one_jury" + ] + }, + { + "cell_type": "markdown", + "id": "af126e76", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n", + "\n", + "We can use `==` on the array to get `True` values where the juror was\n", + "“black” and `False` values otherwise:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8c4a4a77", + "metadata": {}, + "outputs": [], + "source": [ + "are_black = one_jury == 'black'\n", + "are_black" + ] + }, + { + "cell_type": "markdown", + "id": "03f8ca24", + "metadata": {}, + "source": [ + "Finally, we can `np.sum` to find the number of black jurors\n", + "(sec-count-with-sum):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e7e64fa4", + "metadata": {}, + "outputs": [], + "source": [ + "# Number of black jurors in this simulated jury.\n", + "n_black = np.sum(are_black)\n", + "n_black" + ] + }, + { + "cell_type": "markdown", + "id": "0abb6d3a", + "metadata": {}, + "source": [ + "Putting that all together, this is our new procedure to select one jury\n", + "and count the number of black jurors:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b6810ef8", + "metadata": {}, + "outputs": [], + "source": [ + "one_jury = rnd.choice(jury_pool, size=12)\n", + "are_black = one_jury == 'black'\n", + "n_black = np.sum(are_black)\n", + "n_black" + ] + }, + { + "cell_type": "markdown", + "id": "5f86a8c5", + "metadata": {}, + "source": [ + "Or we can be even more compact by putting several statements together\n", + "into one line:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3031d99b", + "metadata": {}, + "outputs": [], + "source": [ + "# The same as above, but on one line.\n", + "n_black = np.sum(rnd.choice(jury_pool, size=12) == 'black')\n", + "n_black" + ] + }, + { + "cell_type": "markdown", + "id": "ca3d789c", + "metadata": {}, + "source": [ + "## 6.7 Resampling with and without replacement\n", + "\n", + "Now let us return to the details of Robert Swain’s case, that you first\n", + "saw in sec-resampling-two.\n", + "\n", + "We looked at the composition of Robert Swain’s 12-person jury — but in\n", + "fact, by law, that does not have to be representative of the eligible\n", + "jurors. The 12-person jury is drawn from a jury *panel*, of 100 people,\n", + "and this should, in turn, be drawn from the population of all eligible\n", + "jurors in the county, consisting, at the time, of “all male citizens in\n", + "the community over 21 who are reputed to be honest, intelligent men and\n", + "are esteemed for their integrity, good character and sound judgment.”\n", + "So, unless there was some bias against black jurors, we might expect the\n", + "100-person jury panel to be a plausibly random sample of the eligible\n", + "jurors, of whom 26% were black. See [the Supreme Court case\n", + "judgement](https://supreme.justia.com/cases/federal/us/380/202) for\n", + "details.\n", + "\n", + "In fact, in Robert Swain’s trial, there were 8 black members in the\n", + "100-person jury panel. We will leave it to you to adapt the simulation\n", + "from sec-resampling-two to ask the\n", + "question — is 8% surprising as a random sample from a population with\n", + "26% black people?\n", + "\n", + "But we have a different question: given that 8 out of 100 of the jury\n", + "panel were black, is it surprising that none of the 12-person jury were\n", + "black? As usual, we can answer that question with simulation.\n", + "\n", + "Let’s think about what a single simulated jury selection would look\n", + "like.\n", + "\n", + "First we compile a representation of the actual jury panel, using the\n", + "tools we have used above." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "12ee70a2", + "metadata": {}, + "outputs": [], + "source": [ + "juror_types = ['black', 'white']\n", + "# in fact there were 8 black jurors and 92 white jurors.\n", + "panel_nos = [8, 92]\n", + "jury_panel = np.repeat(juror_types, panel_nos)\n", + "# Show the result\n", + "jury_panel" + ] + }, + { + "cell_type": "markdown", + "id": "4b4220f2", + "metadata": {}, + "source": [ + "Now consider taking a 12-person jury at random from this panel. We\n", + "select the first juror at random, so that juror has an 8 out of 100\n", + "chance of being black. But when we select the second jury member, the\n", + "situation has changed slightly. We can’t select the first juror again,\n", + "so our panel is now 99 people. If our first juror was black, then the\n", + "chances of selecting another black juror next are not 8 out of 100, but\n", + "7 out of 99 — a smaller chance. The problem is, as we shall see in more\n", + "detail later, the chances of getting a black juror as the second, and\n", + "third and fourth members of the jury depend on whether we selected a\n", + "black juror as the first and second and third jury members. At its most\n", + "extreme, imagine we had already selected eight jurors, and by some\n", + "strange chance, all eight were black. Now our chances of selecting a\n", + "black juror as the ninth juror are zero — there are no black jurors left\n", + "to select from the panel.\n", + "\n", + "In this case we are selecting jurors from the panel *without\n", + "replacement*, meaning, that once we have selected a particular juror, we\n", + "cannot select them again, and we do not put them back into the panel\n", + "when we select our next juror.\n", + "\n", + "This is the probability equivalent of the situation when you are dealing\n", + "a hand of cards. Let’s say someone is dealing you, and you only, a hand\n", + "of five cards. You get an ace as your first card. Your chances of\n", + "getting an ace as your first card were just the number of aces in the\n", + "deck divided by the number of cards — four in 52 – $\\frac{4}{52}$. But\n", + "for your second card, the probability has changed, because there is one\n", + "less ace remaining in the pack, and one less card, so your chances of\n", + "getting an ace as your second card are now $\\frac{3}{51}$. This is\n", + "sampling without replacement — in a normal game, you can’t get the same\n", + "card twice. Of course, you could imagine getting a hand where you\n", + "sampled *with replacement*. In that case, you’d get a card, you’d write\n", + "down what it was, and you’d give the card back to the dealer, who would\n", + "*replace* the card in the deck, shuffle again, and give you another\n", + "card.\n", + "\n", + "As you can see, the chances change if you are sampling with or without\n", + "replacement, and the kind of sampling you do, will dictate how you model\n", + "your chances in your simulations.\n", + "\n", + "Because this distinction is so common, and so important, the machinery\n", + "you have already seen in `rnd.choice` has simple ways for you to select\n", + "your sampling type. 
You have already seen sampling *with replacement*,\n", + "and it looks like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "73bc113d", + "metadata": {}, + "outputs": [], + "source": [ + "# Take a sample of 12 jurors from the panel *with replacement*\n", + "# With replacement is the default for `rnd.choice`.\n", + "strange_jury = rnd.choice(jury_panel, size=12)\n", + "strange_jury" + ] + }, + { + "cell_type": "markdown", + "id": "95a74a10", + "metadata": {}, + "source": [ + "This is a strange jury, because it can select any member of the jury\n", + "pool more than once. Perhaps that juror would have to fill two (or\n", + "more!) seats, or run quickly between them. But of course, that is not\n", + "how juries are selected. They are selected *without replacement*:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bda7f938", + "metadata": {}, + "outputs": [], + "source": [ + "# Take a sample of 12 jurors from the panel *without replacement*\n", + "ok_jury = rnd.choice(jury_panel, 12, replace=False)\n", + "ok_jury" + ] + }, + { + "cell_type": "markdown", + "id": "40bed0ae", + "metadata": {}, + "source": [ + "
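The card-dealing analogy above can be checked with the same `replace` argument. Here is a small sketch (ours, not from the original notebook) that estimates the chance of being dealt two aces as your first two cards, with and without replacement; the exact values are (4/52) * (4/52), about 0.0059, and (4/52) * (3/51), about 0.0045:

```python
import numpy as np

rnd = np.random.default_rng()

# A deck of 4 aces and 48 other cards.
deck = np.repeat(['ace', 'other'], [4, 48])

n = 10_000
two_aces_with = np.zeros(n, dtype=bool)
two_aces_without = np.zeros(n, dtype=bool)

for i in range(n):
    # Draw two cards, putting each card back before the next draw.
    with_repl = rnd.choice(deck, size=2, replace=True)
    two_aces_with[i] = np.sum(with_repl == 'ace') == 2
    # Draw two cards as in a real deal - no card can be drawn twice.
    without_repl = rnd.choice(deck, size=2, replace=False)
    two_aces_without[i] = np.sum(without_repl == 'ace') == 2

print('With replacement:   ', np.sum(two_aces_with) / n)
print('Without replacement:', np.sum(two_aces_without) / n)
```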
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "Comments at the end of lines\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "
\n", + "\n", + "You have already seen comment lines. These are lines beginning with `#`,\n", + "to signal to Python that the rest of the line is text for humans to\n", + "read, but Python to ignore." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "332b38a0", + "metadata": {}, + "outputs": [], + "source": [ + "# This is a comment. Python ignores this line." + ] + }, + { + "cell_type": "markdown", + "id": "03799316", + "metadata": {}, + "source": [ + "You can also put comments at the *end of code lines*, by finishing the\n", + "code part of the line, and then putting a `#`, followed by more text.\n", + "Again, Python will ignore everything after the `#` as a text for humans,\n", + "but not for Python." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b0214297", + "metadata": {}, + "outputs": [], + "source": [ + "print('Hello') # This is a comment at the end of the line." + ] + }, + { + "cell_type": "markdown", + "id": "346e6b85", + "metadata": {}, + "source": [ + "
\n", + "\n", + "
\n", + "\n", + "To finish the procedure for simulating a single jury selection, we count\n", + "the number of black jurors:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "204c1a9e", + "metadata": {}, + "outputs": [], + "source": [ + "n_black = np.sum(ok_jury == 'black') # How many black jurors?\n", + "n_black" + ] + }, + { + "cell_type": "markdown", + "id": "61dc361e", + "metadata": {}, + "source": [ + "Now we have the procedure for one simulated trial, here is the procedure\n", + "for 10000 simulated trials." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b2405e9c", + "metadata": {}, + "outputs": [], + "source": [ + "counts = np.zeros(10000)\n", + "for i in np.arange(10000):\n", + " # Single trial procedure\n", + " jury = rnd.choice(jury_panel, size=12, replace=False)\n", + " n_black = np.sum(jury == 'black') # How many black jurors?\n", + " # Store the result\n", + " counts[i] = n_black\n", + "\n", + "# Number of juries with 0 black jurors.\n", + "zero_black = np.sum(counts == 0)\n", + "# Proportion\n", + "p_zero_black = zero_black / 10000\n", + "print(p_zero_black)" + ] + }, + { + "cell_type": "markdown", + "id": "56c1d989", + "metadata": {}, + "source": [ + "We have found that, when there are only 8% black jurors in the jury\n", + "panel, having no black jurors in the final jury happens about 34% of the\n", + "time, even in this case, where the jury is selected completely at random\n", + "from the jury panel.\n", + "\n", + "We should look for the main source of bias in the initial selection of\n", + "the jury panel, not in the selection of the jury from the panel.\n" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/sampling_variability.ipynb b/python-book/notebooks/sampling_variability.ipynb new file mode 100644 index 00000000..cac4fa07 --- /dev/null +++ b/python-book/notebooks/sampling_variability.ipynb @@ -0,0 +1,80 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "eed4109d", + "metadata": {}, + "source": [ + "# Experiment in sampling variability" + ] + }, + { + "cell_type": "markdown", + "id": "1884533c", + "metadata": {}, + "source": [ + "Try generating some rookie “seasons” yourself with the following\n", + "commands, ranging the batter’s “true” performance by changing the value\n", + "of `p_hit` (the probability of a hit)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "add2a46a", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bb37e47c", + "metadata": {}, + "outputs": [], + "source": [ + "# Simulate a rookie season of 400 at-bats.\n", + "\n", + "# You might try changing the value below and rerunning.\n", + "# This is the true (long-run) probability of a hit for this batter.\n", + "p_hit = 0.4\n", + "print('True average is:', p_hit)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "347043a5", + "metadata": {}, + "outputs": [], + "source": [ + "at_bats = rnd.choice(['Hit', 'Out'], p=[p_hit, 1 - p_hit], size=400)\n", + "simulated_average = np.sum(at_bats == 'Hit') / 400\n", + "# Show the result\n", + "print('Simulated average is:', simulated_average)" + ] + }, + { + "cell_type": "markdown", + "id": "a80c336c", + "metadata": {}, + "source": [ + "Simulate a set of 10 or 20 such rookie seasons, and look at the one who\n", + "did best. How did their rookie season compare to their “true” average?" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/santas_hats.ipynb b/python-book/notebooks/santas_hats.ipynb new file mode 100644 index 00000000..b66416a8 --- /dev/null +++ b/python-book/notebooks/santas_hats.ipynb @@ -0,0 +1,96 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "e423e101", + "metadata": {}, + "source": [ + "# Santas' hats" + ] + }, + { + "cell_type": "markdown", + "id": "f583ce93", + "metadata": {}, + "source": [ + "**The welcome staff at a restaurant mix up the hats of a party of six\n", + "Christmas Santas. What is the probability that at least one will get\n", + "their own hat?**.\n", + "\n", + "After a long Christmas day, six Santas meet in the pub to let off steam.\n", + "However, as luck would have it, their hosts have mixed up their hats.\n", + "When the hats are returned, what is the chance that at least one Santa\n", + "will get his own hat back?\n", + "\n", + "First, assign each of the six Santas a number, and place these numbers\n", + "in an array. Next, shuffle the array (this represents the mixed-up hats)\n", + "and compare to the original. The rest of the problem is the same as the\n", + "pairs one from before, except that we are now interested in any trial\n", + "where at least one ($\\ge 1$) Santa received the right hat." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c61cbdff", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "826bcaa1", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N, dtype=bool)\n", + "\n", + "# Assign numbers to each owner\n", + "owners = np.arange(6)\n", + "\n", + "# Each hat gets the number of their owner\n", + "hats = np.arange(6)\n", + "\n", + "for i in range(N):\n", + " # Randomly shuffle the hats and compare to their owners\n", + " shuffled_hats = rnd.permuted(hats)\n", + "\n", + " # In how many cases did at least one person get their hat back?\n", + " trial_results[i] = np.sum(shuffled_hats == owners) >= 1\n", + "\n", + "# How many times, over all trials, did at least one person get their hat back?\n", + "k = np.sum(trial_results)\n", + "\n", + "# Convert to a proportion.\n", + "kk = k / N\n", + "\n", + "# Print the result.\n", + "print(kk)" + ] + }, + { + "cell_type": "markdown", + "id": "264a6de2", + "metadata": {}, + "source": [ + "We see that in roughly 64 percent of the trials at least one Santa\n", + "received their own hat back." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/three_girls.ipynb b/python-book/notebooks/three_girls.ipynb new file mode 100644 index 00000000..aed0116f --- /dev/null +++ b/python-book/notebooks/three_girls.ipynb @@ -0,0 +1,76 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "b0df5cf7", + "metadata": {}, + "source": [ + "# Three Girls" + ] + }, + { + "cell_type": "markdown", + "id": "0ac1f324", + "metadata": {}, + "source": [ + "This notebook estimates the probability that a family of four children\n", + "will have exactly three girls." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9f6ad65e", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c114f9ab", + "metadata": {}, + "outputs": [], + "source": [ + "girl_counts = np.zeros(10000)\n", + "\n", + "# Do 10000 trials\n", + "for i in range(10000):\n", + "\n", + " # Select 'girl' or 'boy' at random, four times.\n", + " children = rnd.choice(['girl', 'boy'], size=4)\n", + "\n", + " # Count the number of girls and put the result in b.\n", + " b = np.sum(children == 'girl')\n", + "\n", + " # Keep track of each trial result in z.\n", + " girl_counts[i] = b\n", + "\n", + " # End this trial, repeat the experiment until 10000 trials are complete,\n", + " # then proceed.\n", + "\n", + "# Count the number of experiments where we got exactly 3 girls, and put this\n", + "# result in k.\n", + "n_three_girls = np.sum(girl_counts == 3)\n", + "\n", + "# Convert to a proportion.\n", + "three_girls_prop = n_three_girls / 10000\n", + "\n", + "# Print the results.\n", + "print(three_girls_prop)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/three_of_a_kind.ipynb b/python-book/notebooks/three_of_a_kind.ipynb new file mode 100644 index 00000000..e9524a57 --- /dev/null +++ b/python-book/notebooks/three_of_a_kind.ipynb @@ -0,0 +1,88 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "29fd7370", + "metadata": {}, + "source": [ + "# Three of a kind" + ] + }, + { + "cell_type": "markdown", + "id": "4323faa1", + "metadata": {}, + "source": [ + "We count the number of times we get three of a kind in a random hand of\n", + "five cards." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f84890da", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c7bd6523", + "metadata": {}, + "outputs": [], + "source": [ + "# Create a bucket (vector) called a with four \"1's,\" four \"2's,\" four \"3's,\"\n", + "# etc., to represent a deck of cards\n", + "one_suit = np.arange(1, 14)\n", + "# Repeat values for one suit four times to make a 52 card deck of values.\n", + "deck = np.repeat(one_suit, 4)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e11ecf5c", + "metadata": {}, + "outputs": [], + "source": [ + "triples_per_trial = np.zeros(10000)\n", + "\n", + "# Repeat the following steps 10000 times\n", + "for i in range(10000):\n", + " # Shuffle the deck\n", + " shuffled = rnd.permuted(deck)\n", + "\n", + " # Take the first five cards.\n", + " hand = shuffled[:5]\n", + "\n", + " # How many triples?\n", + " repeat_nos = np.bincount(hand)\n", + " n_triples = np.sum(repeat_nos == 3)\n", + "\n", + " # Keep score of # of triples\n", + " triples_per_trial[i] = n_triples\n", + "\n", + " # End loop, go back and repeat\n", + "\n", + "# How often was there 1 pair?\n", + "n_triples = np.sum(triples_per_trial == 1)\n", + "\n", + "# Convert to proportion\n", + "print(n_triples / 10000)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/trump_clinton.ipynb b/python-book/notebooks/trump_clinton.ipynb new file mode 100644 index 00000000..e4dd2fb8 --- /dev/null +++ b/python-book/notebooks/trump_clinton.ipynb @@ -0,0 +1,94 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "3ff063e9", + "metadata": {}, + "source": [ + "# Trump/Clinton poll simulation" + ] + }, + { + "cell_type": "markdown", + "id": "f9d337f7", + "metadata": {}, + "source": [ + "What is the probability that a sample outcome such as actually observed\n", + "(840 Trump, 660 Clinton) would occur by chance if Clinton is “really”\n", + "ahead — that is, if Clinton has 50 percent (or more) of the support? To\n", + "restate in sharper statistical language: What is the probability that\n", + "the observed sample or one even more favorable to Trump would occur if\n", + "the universe has a mean of 50 percent or below?\n", + "\n", + "Here is a procedure that responds to that question:\n", + "\n", + "1. Create a benchmark universe with one ball marked “Trump” and another\n", + " marked “Clinton”\n", + "2. Draw a ball, record its marking, and replace. (We sample with\n", + " replacement to simulate the practically-infinite population of U. S.\n", + " voters.)\n", + "3. Repeat step 2 1500 times and count the number of “Trump”s. If 840 or\n", + " greater, record “Y”; otherwise, record “N.”\n", + "4. Repeat steps 3 and 4 perhaps 1000 or 10,000 times, and count the\n", + " number of “Y”s. The outcome estimates the probability that 840 or\n", + " more Trump choices would occur if the universe is “really” half or\n", + " more in favor of Clinton.\n", + "\n", + "This procedure may be done as follows with Python." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "52716a5b", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "# Number of repeats we will run.\n", + "n = 10_000\n", + "\n", + "# Make an integer array to store the counts.\n", + "trumps = np.zeros(n, dtype=int)\n", + "\n", + "for i in range(n):\n", + " votes = rnd.choice(['Trump', 'Clinton'], size=1500)\n", + " trumps[i] = np.sum(votes == 'Trump')\n", + "\n", + "# Integer bins from 675 through 825 in steps of 5.\n", + "plt.hist(trumps, bins=range(675, 826, 5))\n", + "plt.title('Number of Trump voters of 1500 in null-world simulation')\n", + "\n", + "# How often >= 840 Trump votes in random draw?\n", + "k = np.sum(trumps >= 840)\n", + "# As a proportion of simulated resamples.\n", + "kk = k / n\n", + "\n", + "print('Proportion voting for Trump:', kk)" + ] + }, + { + "cell_type": "markdown", + "id": "2c7e5a0f", + "metadata": {}, + "source": [ + "The value for `kk` is our estimate of the probability that Trump’s\n", + "“victory” in the sample would occur by chance if he really were behind.\n", + "In this case, our probability estimate is less than 1 in 10,000 (\\<\n", + "0.0001)." + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/twenty_executives.ipynb b/python-book/notebooks/twenty_executives.ipynb new file mode 100644 index 00000000..54ff941d --- /dev/null +++ b/python-book/notebooks/twenty_executives.ipynb @@ -0,0 +1,81 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "7c2db60f", + "metadata": {}, + "source": [ + "# Twenty executives, two divisions" + ] + }, + { + "cell_type": "markdown", + "id": "5a12f720", + "metadata": {}, + "source": [ + "The top manager wants to spread the talent reasonably evenly, but she\n", + "does not want to label particular executives with a quality rating and\n", + "therefore considers distributing them with a random selection. She\n", + "therefore wonders: What are probabilities of the best ten among the\n", + "twenty being split among the divisions in the ratios 5 and 5, 4 and 6, 3\n", + "and 7, etc., if their names are drawn from a hat? One might imagine much\n", + "the same sort of problem in choosing two teams for a football or\n", + "baseball contest.\n", + "\n", + "One may proceed as follows:\n", + "\n", + "1. Put 10 balls labeled “W” (for “worst”) and 10 balls labeled “B”\n", + " (best) in a bucket.\n", + "2. Draw 10 balls without replacement and count the W’s.\n", + "3. Repeat (say) 400 times.\n", + "4. Count the number of times each split — 5 W’s and 5 B’s, 4 and 6,\n", + " etc. 
— appears in the results.\n", + "\n", + "The problem can be done with Python as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "af2cce4e", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "import matplotlib.pyplot as plt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e68af748", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N)\n", + "\n", + "managers = np.repeat(['Worst', 'Best'], [10, 10])\n", + "\n", + "for i in range(N):\n", + " chosen = rnd.choice(managers, size=10, replace=False)\n", + " trial_results[i] = np.sum(chosen == 'Best')\n", + "\n", + "plt.hist(trial_results, bins=range(10), align='left', rwidth=0.75)\n", + "plt.title('Number of best managers chosen')" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/two_pairs.ipynb b/python-book/notebooks/two_pairs.ipynb new file mode 100644 index 00000000..97133613 --- /dev/null +++ b/python-book/notebooks/two_pairs.ipynb @@ -0,0 +1,78 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "403373a8", + "metadata": {}, + "source": [ + "# Two pairs" + ] + }, + { + "cell_type": "markdown", + "id": "2fb91456", + "metadata": {}, + "source": [ + "We count the number of times we get two pairs in a random hand of five\n", + "cards." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b3a07e8c", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "rnd = np.random.default_rng()\n", + "\n", + "one_suit = np.arange(1, 14)\n", + "deck = np.repeat(one_suit, 4)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a1fc0fa2", + "metadata": {}, + "outputs": [], + "source": [ + "pairs_per_trial = np.zeros(10000)\n", + "\n", + "# Repeat the following steps 10000 times\n", + "for i in range(10000):\n", + " # Shuffle the deck\n", + " shuffled = rnd.permuted(deck)\n", + "\n", + " # Take the first five cards.\n", + " hand = shuffled[:5]\n", + "\n", + " # How many pairs?\n", + " # Counts for each card rank.\n", + " repeat_nos = np.bincount(hand)\n", + " n_pairs = np.sum(repeat_nos == 2)\n", + "\n", + " # Keep score of # of pairs\n", + " pairs_per_trial[i] = n_pairs\n", + "\n", + " # End loop, go back and repeat\n", + "\n", + "# How often were there 2 pairs?\n", + "n_two_pairs = np.sum(pairs_per_trial == 2)\n", + "\n", + "# Convert to proportion\n", + "print(n_two_pairs / 10000)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/university_icebreaker.ipynb b/python-book/notebooks/university_icebreaker.ipynb new file mode 100644 index 00000000..17a969dd --- /dev/null +++ b/python-book/notebooks/university_icebreaker.ipynb @@ -0,0 +1,132 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "3be33901", + "metadata": {}, + "source": [ + "# An icebreaker for two universities" + ] + }, + { + "cell_type": "markdown", + "id": "069275e8", + "metadata": {}, + "source": [ + "**First put two groups of 10 people into 10 pairs. Then re-randomize the\n", + "pairings. 
What is the chance that four or more pairs are the same in the\n", + "second random pairing? This is a problem in the probability of matching\n", + "by chance**.\n", + "\n", + "Ten representatives each from two universities, Birmingham and Berkeley,\n", + "attend a meeting. As a social icebreaker, representatives are divided,\n", + "randomly, into pairs consisting of one person from each university.\n", + "\n", + "If they held a second round of the icebreaker, with a new random\n", + "pairing, what is the chance that four or more pairs will be the same?\n", + "\n", + "In approaching this problem, we start at the point where the first\n", + "icebreaker is complete. We now have to determine what happens after the\n", + "second round.\n", + "\n", + "- **Step 1.** Let “ace” through “10” of hearts represent the ten\n", + " representatives from Birmingham University. Let “ace” through “10” of\n", + " spades be their allocated partners (in round one) from Berkeley.\n", + "- **Step 2.** Shuffle the hearts and deal them out in a row; shuffle the\n", + " spades and deal in a row just below the hearts.\n", + "- **Step 3.** Count the pairs — a pair is one card from the heart row\n", + " and one card from the spade row — that contain the same denomination.\n", + " If 4 or more pairs match, record “yes,” otherwise “no.”\n", + "- **Step 4.** Repeat steps (2) and (3), say, 10,000 times.\n", + "- **Step 5.** Count the proportion “yes.” This estimates the probability\n", + " of 4 or more pairs.\n", + "\n", + "Exercise for the student: Write the steps to do this example with random\n", + "numbers. The Python solution follows below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "96ecacb1", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "rnd = np.random.default_rng()\n", + "\n", + "import matplotlib.pyplot as plt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "561f4153", + "metadata": {}, + "outputs": [], + "source": [ + "N = 10000\n", + "trial_results = np.zeros(N)\n", + "\n", + "# Assign numbers to each student, according to their pair, after the first\n", + "# icebreaker\n", + "birmingham = np.arange(10)\n", + "berkeley = np.arange(10)\n", + "\n", + "for i in range(N):\n", + " # Randomly shuffle the students from Berkeley\n", + " shuffled_berkeley = rnd.permuted(berkeley)\n", + "\n", + " # Randomly shuffle the students from Birmingham\n", + " # (This step is not really necessary — shuffling one array is enough to make the matching random.)\n", + " shuffled_birmingham = rnd.permuted(birmingham)\n", + "\n", + " # Count in how many cases people landed with the same person as in the\n", + " # first round, and store in trial_results.\n", + " matches = np.sum(shuffled_berkeley == shuffled_birmingham)\n", + " trial_results[i] = matches\n", + "\n", + "# Count the number of times we got 4 or more people assigned to the same person\n", + "k = np.sum(trial_results >= 4)\n", + "\n", + "# Convert to a proportion.\n", + "kk = k / N\n", + "\n", + "# Print the result.\n", + "print(kk)" + ] + }, + { + "cell_type": "markdown", + "id": "02226c44", + "metadata": {}, + "source": [ + "We see that in about 2 percent of the trials did 4 or more couples end\n", + "up being re-paired with their own partners. 
This can also be seen from\n", + "the histogram:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e339962c", + "metadata": {}, + "outputs": [], + "source": [ + "# Produce a histogram of trial results.\n", + "plt.hist(trial_results, bins=range(10), align='left', rwidth=0.75)\n", + "plt.title('Same pairs in round two');" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/notebooks/viewer_numbers.ipynb b/python-book/notebooks/viewer_numbers.ipynb new file mode 100644 index 00000000..4492ad1a --- /dev/null +++ b/python-book/notebooks/viewer_numbers.ipynb @@ -0,0 +1,96 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "5d60827b", + "metadata": {}, + "source": [ + "# Number of viewers" + ] + }, + { + "cell_type": "markdown", + "id": "90f8b10f", + "metadata": {}, + "source": [ + "The notebook calculates the expected number of viewers in a sample of\n", + "400, given that there is a 30% chance of any one person being a viewer,\n", + "and then calculates how far that value is from 120." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "617a1a46", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "# set up the random number generator\n", + "rnd = np.random.default_rng()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3c4cc364", + "metadata": { + "lines_to_next_cell": 0 + }, + "outputs": [], + "source": [ + "# set the number of trials\n", + "n_trials = 10000\n", + "\n", + "# an empty array to store the scores\n", + "scores = np.zeros(n_trials)\n", + "\n", + "# What are the options to choose from?\n", + "options = ['viewer', 'not viewer']\n", + "\n", + "# do n_trials trials\n", + "for i in range(n_trials):\n", + "\n", + " # Choose 'viewer' 30% of the time.\n", + " a = rnd.choice(options, size=400, p=[0.3, 0.7])\n", + "\n", + " # count the viewers\n", + " b = np.sum(a == 'viewer')\n", + "\n", + " # how different from expected?\n", + " c = 120 - b\n", + "\n", + " # absolute value of the difference\n", + " d = np.abs(c)\n", + "\n", + " # express as a proportion of sample\n", + " e = d / 400\n", + "\n", + " # keep score of the result\n", + " scores[i] = e\n", + "\n", + "# find the mean divergence\n", + "k = np.mean(scores)\n", + "\n", + "# Show the result\n", + "k" + ] + }, + { + "cell_type": "markdown", + "id": "e6628c54", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "main_language": "python", + "notebook_metadata_filter": "-all" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} \ No newline at end of file diff --git a/python-book/point_estimation.html b/python-book/point_estimation.html new file mode 100644 index 00000000..a4001eca --- /dev/null +++ b/python-book/point_estimation.html @@ -0,0 +1,898 @@ + + + + + + + + + +Resampling statistics - 19  Point Estimation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

19  Point Estimation

+
+ + + +
+ + + + +
+ + +
+ +

One of the great questions in statistical inference is: How big is it? This can mean — How long? How deep? How much time? At what angle?

+

This question about size may pertain to a single object, of which there are many measurements; an example is the location of a star in the heavens. Or the question may pertain to a varied set of elements and their measurements; examples include the effect of treatment with a given drug, and the incomes of the people of the United States in 1994.

+

From where the observer stands, having only the evidence of a sample in hand, it often is impossible to determine whether the data represent multiple observations of a single object, or single (or multiple) observations of multiple objects. For example, from crude measurements of weight you could not know whether one person is being weighed repeatedly, or several people have been weighed once. Hence all the following discussion of point estimation is the same for both of these situations.

+

The word “big” in the first sentence above is purposely vague, because there are many possible kinds of estimates that one might wish to make concerning a given object or collection. For a single object like a star, one surely will wish to make a best guess about its location. But about the effects of a drug treatment, or the incomes of a nation, there are many questions that one may wish to answer. The average effect or income is a frequent and important object of our interest. But one may also wish to know about the amount of dispersion in the distribution of treatment effects, or of incomes, or the symmetry of the distribution. And there are still other questions one may wish to answer.

+

Even if we focus on the average, the issue often is less clear cut than we may think at first. If we are to choose a single number to characterize the population (universe) from which a given set of data has been drawn, what should that representative number be for the case at hand? The answer must depend on the purpose with which we ask the question, of course. There are several main possibilities such as the mean, the median, and the mode.

+

Even if we confine our attention to the mean as our measure of the central tendency of a distribution, there are various ways of estimating it, each of them having a different rationale. The various methods of estimation often lead to the same estimate, especially if the distribution is symmetric (such as the distribution of errors you make in throwing darts at a dart board). But in an asymmetric case such as a distribution of incomes, the results may differ among the contending modes of estimation. So the entire topic is more messy than appears at first look. Though we will not inquire into the complexities, it is important that you understand that the matter is not as simple as it may seem. (See Savage (1972), Chapter 15, for more discussion of this topic.)

+
+

19.1 Ways to estimate the mean

+
+

19.1.1 The Method of Moments

+

Since elementary school you have been taught to estimate the mean of a universe (or calculate the mean of a sample) by taking a simple arithmetic average. A fancy name for that process is “the method of moments.” It is the equivalent of estimating the center of gravity of a pole by finding the place where it will balance on your finger. If the pole has the same size and density all along its length, that balance point will be halfway between the endpoints, and the point may be thought of as the arithmetic average of the distances from the balance point of all the one-centimeter segments of the pole.

+

Consider this example:

+

Example: Twenty-nine Out of Fifty People Polled Say They Will Vote For The Democrat. Who Will Win The Election? The Relationship Between The Sample Proportion and The Population Proportion in a Two-Outcome Universe.

+

You take a random sample of 50 people in Maryland and ask which party’s candidate for governor they will vote for. Twenty-nine say they will vote for the Democrat. Let’s say it is reasonable to assume in this case that people will vote exactly as they say they will. The statistical question then facing you is: What proportion of the voters in Maryland will vote for the Democrat in the general election?

+

Your intuitive best guess is that the proportion of the “universe” — which is composed of voters in the general election, in this case — will be the same as the proportion of the sample. That is, 58 percent = 29/50 is likely to be your guess about the proportion that will vote Democratic. Of course, your estimate may be too high or too low in this particular case, but in the long run — that is, if you take many samples like this one — on the average the sample mean will equal the universe (population) proportion, for reasons to be discussed later.
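To see this long-run behavior concretely, here is a minimal sketch (ours, not part of the original text) that assumes a hypothetical universe in which exactly 58 percent of voters favor the Democrat, draws many polls of 50 voters from it, and checks that the sample proportions average out close to the universe proportion:

```python
import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
proportions = np.zeros(n_trials)

for i in range(n_trials):
    # One poll of 50 voters from a universe that is 58% Democratic.
    poll = rnd.choice(['Democrat', 'Republican'], size=50, p=[0.58, 0.42])
    proportions[i] = np.sum(poll == 'Democrat') / 50

# Any single poll may be well above or below 0.58, but the mean of
# the sample proportions settles close to the universe proportion.
print('Mean of sample proportions:', np.mean(proportions))
```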

+

The sample mean seems to be the “natural” estimator of the population mean in this and many other cases. That is, it seems quite natural to say that the best estimate is the sample mean, and indeed it probably is best. But why? This is the problem of inverse probability that has bedeviled statisticians for two centuries.

+

If the only information that you have (or that seems relevant) is the evidence of the sample, then there would seem to be no basis for judging that the shape and location of the population differs to the “left” or “right” from that of the sample. That is often a strong argument.

+

Another way of saying much the same thing: If a sample has been drawn randomly, each single observation is a representative estimator of the mean; if you only have one observation, that observation is your best guess about the center of the distribution (if you have no reason to believe that the distribution of the population is peculiar — such as not being symmetrical). And therefore the sum of 2, 3…n of such observations (divided by their number) should have that same property, based on basic principles.

+

But if you are on a ship at sea and a leaf comes raining down from the sky, your best guess about the location of the tree from which it comes is not directly above you, and if two leaves fall, the midpoint of them is not the best location guess, either; you know that trees don’t grow at sea, and birds sometimes carry leaves out to sea.

+

We’ll return to this subject when we discuss criteria of methods.

+
+
+

19.1.2 Expected Value and the Method of Moments

+

Consider this gamble: You and another person roll a die. If it falls with the “6” upwards you get $4, and otherwise you pay $1. If you play 120 times, at the end of the day you would expect to have (20 * $4 - 100 * $1 =) -$20. We say that -$20 is your “expected value,” and your expected value per roll is (-$20 / 120 =) a loss of about 17 cents, or one-sixth of a dollar. If you get $5 instead of $4 for each “6,” your expected value is $0.
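As a check on that arithmetic, here is a small simulation sketch (ours, not the original text's) of many evenings of 120 rolls under the $4-win, $1-loss rule:

```python
import numpy as np

rnd = np.random.default_rng()

n_evenings = 10_000
winnings = np.zeros(n_evenings)

for i in range(n_evenings):
    rolls = rnd.integers(1, 7, size=120)  # 120 rolls of a fair die.
    # Win $4 for each "6", lose $1 for every other face.
    winnings[i] = np.sum(rolls == 6) * 4 - np.sum(rolls != 6) * 1

print('Average winnings per evening:', np.mean(winnings))     # close to -20
print('Average winnings per roll:', np.mean(winnings) / 120)  # close to -1/6
```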

+

This is exactly the same idea as the method of moments, and we even use the same term — “expected value,” or “expectation” — for the outcome of a calculation of the mean of a distribution. We say that the expected value for the success of rolling a “6” with a single cast of a die is 1/6, and that the expected value of rolling a “6” or a “5” is (1/6 + 1/6 = ) 2/6.

+
+
+

19.1.3 The Maximum Likelihood Principle

+

Another way of thinking about estimation of the population mean asks: Which population(s) would, among the possible populations, have the highest probability of producing the observed sample? This criterion frequently produces the same answer as the method of moments, but in some situations the estimates differ. Furthermore, the logic of the maximum-likelihood principle is important.

+

Consider that you draw without replacement six balls — 2 black and 4 white — from a bucket that contains twenty balls. What would you guess is the composition of the bucket from which they were drawn? Is it likely that those balls came from a bucket with 4 white and 16 black balls? Rather obviously not, because it would be most unusual to get all the 4 white balls in your draw. Indeed, we can estimate the probability of that happening with simulation or formula to be about .003.

+

How about a bucket with 2 black and 18 whites? The probability is much higher than with the previous bucket, but it still is low — about .075.

+

Let us now estimate these probabilities for buckets across the whole range of possible compositions. In Figure 19.1 we see that the bucket with the highest probability of producing the observed sample has the same proportions of black and white balls as does the sample. This is called the “maximum likelihood universe.” Nor should this be very surprising, because that universe obviously has an equal chance of producing samples with proportions below and above that observed proportion — as was discussed in connection with the method of moments.
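Here is a sketch of the kind of simulation that lies behind Figure 19.1 (our reconstruction, not the book's original code). For each possible number of white balls in a 20-ball bucket, it estimates the probability of drawing exactly 4 white and 2 black in six draws without replacement; the 16-black and 18-white buckets discussed above are two of the compositions it covers:

```python
import numpy as np
import matplotlib.pyplot as plt

rnd = np.random.default_rng()

n_trials = 10_000
# A bucket can produce the observed sample only if it holds at least
# 4 white balls and at least 2 black balls.
n_whites = np.arange(4, 19)
probs = np.zeros(len(n_whites))

for j in range(len(n_whites)):
    n_white = n_whites[j]
    bucket = np.repeat(['white', 'black'], [n_white, 20 - n_white])
    n_matches = 0
    for i in range(n_trials):
        draw = rnd.choice(bucket, size=6, replace=False)
        if np.sum(draw == 'white') == 4:
            n_matches = n_matches + 1
    probs[j] = n_matches / n_trials

# The 4-white bucket comes out near the .003 quoted above, the 18-white
# bucket near .075-.08, and the probability peaks for buckets with 13 or
# 14 white balls, roughly the white-to-black proportions of the sample.
plt.bar(n_whites, probs)
plt.xlabel('Number of white balls in the bucket (of 20)')
plt.ylabel('Estimated probability of the observed sample')
print('Most likely bucket has', n_whites[np.argmax(probs)], 'white balls')
```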

+

We should note, however, that the probability that even such a maximum-likelihood universe would produce exactly the observed sample is very low (though it has an even lower probability of producing any other sample).

+
+
+
+
+

+
Figure 19.1: Number of White Balls in the Universe (N=20)
+
+
+
+
+
+
+
+

19.2 Choice of Estimation Method

+

When should you base your estimate on the method of moments, or of maximum likelihood, or still some other principle? There is no general answer. Sound estimation requires that you think long and hard about the purpose of your estimation, and fit the method to the purpose. I am well aware that this is a very vague statement. But though it may be an uncomfortable idea to live with, guidance to sound statistical method must be vague because it requires sound judgment and deep knowledge of the particular set of facts about the situation at hand.

+
+
+

19.3 Criteria of estimates

+

How should one judge the soundness of the process that produces an estimate? General criteria include representativeness and accuracy. But these are pretty vague; we’ll have to get more specific.

+
+

19.3.1 Unbiasedness

+

Concerning representativeness: We want a procedure that will not be systematically in error in one direction or another. In technical terms, we want an “unbiased estimate,” if possible. “Unbiased” in this case does not mean “friendly” or “unprejudiced,” but rather implies that on the average — that is, in the long run, after taking repeated samples — estimates that are too high will about balance (in percentage terms) those that are too low. The mean of the universe (or the proportion, if we are speaking of two-valued “binomial situations”) is a frequent object of our interest. And the sample mean is (in most cases) an unbiased estimate of the population mean.

+

Let’s now see an informal proof that the mean of a randomly drawn sample is an “unbiased” estimator of the population mean. That is, the errors of the sample means will cancel out after repeated samples because the mean of a large number of sample means approaches the population mean. A second “law” to be informally proven is that the size of the inaccuracy of a sample proportion is largest when the population proportion is near 50 percent, and smallest when it approaches zero percent or 100 percent.

+

The statement that the sample mean is an unbiased estimate of the population mean holds for many but not all kinds of samples — proportions of two-outcome (Democrat-Republican) events (as in this case) and also the means of many measured-data universes (heights, speeds, and so on) that we will come to later.

+

But, you object, I have only said that this is so; I haven’t proven it. Quite right. Now we will go beyond this simple assertion, though we won’t reach the level of formal proof. This discussion applies to conventional analytic statistical theory as well as to the resampling approach.

+

We want to know why the mean of a repeated sample — or the proportion, in the case of a binomial universe — tends to equal the mean of the universe (or the proportion of a binomial sample). Consider a population of one thousand voters. Split the population into random sub-populations of 500 voters each; let’s call these sub-populations by the name “samples.” Almost inevitably, the proportions voting Democratic in the samples will not exactly equal the “true” proportions in the population. (Why not? Well, why should they split evenly? There is no general reason why they should.) But if the sample proportions do not equal the population proportion, then the two sample proportions must deviate from the population proportion by amounts that are identical in size but opposite in direction.

+

If the population proportion is 600/1000 = 60 percent, and one sample’s proportion is 340/500 = 68 percent, then the other sample’s proportion must be (600-340 = 260)/500 = 52 percent. So if in the very long run you would choose each of these two samples about half the time (as you would if you selected between the two samples randomly) the average of the sample proportions would be (68 percent + 52 percent)/2 = 60 percent. This shows that on the average the sample proportion is a fair and unbiased estimate of the population proportion — if the sample is half the size of the population.
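To watch that happen, here is a minimal sketch (ours, not the original text's) that repeatedly splits a 1000-voter population, 600 of whom vote Democratic, into two random halves of 500 and prints the two half-sample proportions and their average:

```python
import numpy as np

rnd = np.random.default_rng()

# A population of 1000 voters, 600 of whom vote Democratic.
population = np.repeat(['Democrat', 'Republican'], [600, 400])

for i in range(5):
    shuffled = rnd.permuted(population)
    first_half = shuffled[:500]
    second_half = shuffled[500:]
    p_first = np.sum(first_half == 'Democrat') / 500
    p_second = np.sum(second_half == 'Democrat') / 500
    # The two proportions deviate from 0.6 by equal and opposite amounts,
    # so their average is always exactly 0.6.
    print(p_first, p_second, (p_first + p_second) / 2)
```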

+

If we now sub-divide each of our two samples of 500 (each of which was half the population size) into equal-size subsamples of 250 each, the same argument will hold for the proportions of the samples of 250 with respect to the sample of 500: The proportion of a 250-voter sample is an unbiased estimate of the proportion of the 500-voter sample from which it is drawn. It seems inductively reasonable, then, that if the proportion of a 250-voter sample is an unbiased estimate of the 500-voter sample from which it is drawn, and the proportion of a 500-voter sample is an unbiased estimate of the 1000-voter population, then the proportion of a 250-voter sample should be an unbiased estimate of the population proportion. And if so, this argument should hold for samples of 1/2 x 250 = 125, and so on — in fact for any size sample.

+

The argument given above is not a rigorous formal proof. But I doubt that the non-mathematician needs, or will benefit from, a more formal proof of this proposition. You are more likely to be persuaded if you demonstrate this proposition to yourself experimentally in the following manner (a code sketch implementing these steps follows the list):

+
    +
  • Step 1. Let “1-6” = Democrat, “7-10” = Republican
  • +
  • Step 2. Choose a sample of, say, ten random numbers, and record the proportion Democrat (the sample proportion).
  • +
  • Step 3. Repeat step 2 a thousand times.
  • +
  • Step 4. Compute the mean of the sample proportions, and compare it to the population proportion of 60 percent. This result should be close enough to reassure you that on the average the sample proportion is an “unbiased” estimate of the population proportion, though in any particular sample it may be substantially off in either direction.
  • +
+
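Here is a minimal sketch of those steps in Python, in the style of the book's other simulations; the number of repetitions is our choice:

```python
import numpy as np

rnd = np.random.default_rng()

n_trials = 1000
sample_proportions = np.zeros(n_trials)

for i in range(n_trials):
    # Step 2: a sample of ten random numbers from 1 through 10,
    # where 1-6 means "Democrat" and 7-10 means "Republican".
    sample = rnd.integers(1, 11, size=10)
    sample_proportions[i] = np.sum(sample <= 6) / 10

# Step 4: the mean of the sample proportions is close to the population
# proportion of 60 percent, even though individual samples often are not.
print('Mean of sample proportions:', np.mean(sample_proportions))
```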
+
+

19.3.2 Efficiency

+

We want an estimate to be accurate, in the sense that it is as close to the “actual” value of the parameter as possible. Sometimes it is possible to get more accuracy at the cost of biasing the estimate. More than that does not need to be said here.

+
+
+

19.3.3 Maximum Likelihood

+

Knowing that a particular value is the most likely of all values may be of importance in itself. For example, a person betting on one horse in a horse race is interested in his/her estimate of the winner having the highest possible probability, and is not the slightest bit interested in getting nearly the right horse. Maximum likelihood estimates are of particular interest in such situations.

+

See Savage (1972, chap. 15) for many other criteria of estimators.

+
+
+
+

19.4 Criteria of the Criteria

+

What should we look for in choosing criteria? Logically, this question should precede the above list of criteria.

+

Savage (1972, chap. 15) has urged that we should always think in terms of the consequences of choosing criteria, in light of our purposes in making the estimate. I believe that he is making an important point. But it often is very hard work to think the matter through all the way to the consequences of the criteria chosen. And in most cases, such fine inquiry is not needed, in the sense that the estimating procedure chosen will be the same no matter what consequences are considered.1

+
+
+

19.5 Estimation of accuracy of the point estimate

+

So far we have discussed how to make a point estimate, and criteria of good estimators. We also are interested in estimating the accuracy of that estimate. That subject — which is harder to grapple with — is discussed in Chapter 26 and Chapter 27 on confidence intervals.

+

Most important: One cannot sensibly talk about the accuracy of probabilities in the abstract, without reference to some set of facts. In the abstract, the notion of accuracy loses any meaning, and invites confusion and argument.

+
+
+

19.6 Uses of the mean

+

Let’s consider when the use of a device such as the mean is valuable, in the context of the data on marksmen in Table 19.1. If we wish to compare marksman A versus marksman B, we can immediately see that marksman A hit the bullseye (80 shots for 3 points each time) as many times as marksman B hit either the bullseye or simply got in the black (30 shots for 3 points and 50 shots for 2 points), and A hit the black (2 points) as many times as B just got in the white (1 point). From these two comparisons covering all the shots, in both of which comparisons A does better, it is immediately obvious that marksman A is better than marksman B. We can say that A’s score dominates B’s score.

+
Table 19.1: Score percentages by marksman

Score   # occurrences   Probability

Marksman A
1       0               0
2       20              0.2
3       80              0.8

Marksman B
1       20              0.2
2       50              0.5
3       30              0.3

Marksman C
1       40              0.4
2       10              0.1
3       50              0.5

Marksman D
1       10              0.1
2       60              0.6
3       30              0.3
+
+

When we turn to comparing marksman C to marksman D, however, we cannot say that one “dominates” the other as we could with the comparison of marksmen A and B. Therefore, we turn to a summarizing device. One such device that is useful here is the mean. For marksman C the total score over the 100 shots is \((40 * 1) + (10 * 2) + (50 * 3) = 210\), a mean of 2.10 points per shot, while for marksman D the total is \((10 * 1) + (60 * 2) + (30 * 3) = 220\), a mean of 2.20 points per shot. Hence we can say that D is better than C even though D’s score does not dominate C’s score in the bullseye category.
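As a quick check on that arithmetic, here is a tiny sketch (ours) that computes the totals and per-shot means for C and D from the counts in Table 19.1:

```python
import numpy as np

scores = np.array([1, 2, 3])
# Occurrence counts out of 100 shots, from Table 19.1.
marksman_c = np.array([40, 10, 50])
marksman_d = np.array([10, 60, 30])

print('C total:', np.sum(scores * marksman_c),
      'mean per shot:', np.sum(scores * marksman_c) / 100)  # 210, 2.1
print('D total:', np.sum(scores * marksman_d),
      'mean per shot:', np.sum(scores * marksman_d) / 100)  # 220, 2.2
```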

+

Another use of the mean (Gnedenko and Khinchin 1962, 68) is shown in the estimation of the number of matches that an operator needs to start fires for an operation carried out 20 times in a day (Table 19.2). Let’s say that the numbers of cases where the operator needs 1, 2 … 5 matches to start a fire are as follows (along with their probabilities), based on the last 100 fires started:

+
Table 19.2: Number of matches needed to start a fire

Number of matches   Number of cases   Probability
1                   7                 0.07
2                   16                0.16
3                   55                0.55
4                   21                0.21
5                   1                 0.01
+
+

If you know that the operator will be lighting twenty fires, you can estimate the number of matches that s/he will need by multiplying the mean number of matches in the observed experience (which turns out to be \(1 * 0.07 + 2 * 0.16 + 3 * 0.55 + 4 * 0.21 + 5 * 0.01 = 2.93\)) by 20. Here you are using the mean as an indication of a representative case.
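The same calculation in a couple of lines (ours), giving the mean number of matches per fire and the estimate for a day of twenty fires:

```python
import numpy as np

n_matches = np.array([1, 2, 3, 4, 5])
probabilities = np.array([0.07, 0.16, 0.55, 0.21, 0.01])

mean_matches = np.sum(n_matches * probabilities)
print('Mean matches per fire:', mean_matches)        # 2.93
print('Estimate for 20 fires:', mean_matches * 20)   # about 59 matches
```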

+

It is common for writers to immediately produce the data in the forms of percentages or probabilities. But I think it is important to include in our discussion the absolute numbers, because this is what one must begin with in practice. And keeping the absolute numbers in mind is likely to avoid some confusions that arise if one immediately goes to percentages or to probabilities.

+

Still another use for the mean is when you have a set of observations with error in them. The mean of the observations probably is your best guess about which is the “right” one. Furthermore, the distance you are likely to be off the mark is less if you select the mean of the observations. An example might be a series of witnesses giving the police their guesses about the height of a man who overturned an outhouse. The mean probably is the best estimate to give to police officers as a description of the perpetrator (though it would be helpful to give the range of the observations as well).

+

We use the mean so often, in so many different circumstances, that we become used to it and never think about its nature. So let’s do so a bit now.

+

Different statistical ideas are appropriate for business and engineering decisions, biometrics, econometrics, scientific explanation (the philosophers’ case), and other fields. So nothing said here holds everywhere and always.

+

One might ask: What is the “meaning” of a mean? But that is not a helpful question. Rather, we should ask about the uses of a mean. Usually a mean is used to summarize a set of data. As we saw with marksmen C and D, it often is difficult to look at a table of data and obtain an overall idea of how big or how small the observations are; the mean (or other measurements) can help. Or if you wish to compare two sets of data where the distributions of observations overlap each other, comparing the means of the two distributions can often help you better understand the matter.

+

Another complication is the confusion between description and estimation, which makes it difficult to decide where to place the topic of descriptive statistics in a textbook. For example, compare the mean income of all men in the U. S., as measured by the decennial census. This mean of the universe can have a very different meaning from the mean of a sample of men with respect to the same characteristic. The sample mean is a point estimate, a statistical device, whereas the mean of the universe is a description. The use of the mean as an estimator is fraught with complications. Still, maybe it is no more complicated than deciding what describer to use for a population. This entire matter is much more complex than it appears at first glance.

+

When the sample size approaches in size the entire population — when the sample becomes closer and closer to being the same as the population — the two issues blend. What does that tell us? Anything? What is the relationship between a baseball player’s average for two weeks, and his/her lifetime average? This is subtle stuff — rivaling the subtleness of arguments about inference versus probability, and about the nature of confidence limits (see Chapter 26 and Chapter 27 ). Maybe the only solid answer is to try to stay super-clear on what you are doing for what purpose, and to ask continually what job you want the statistic (or describer) to do for you.

+

The issue of the relationship of sample size to population size arises here. If the sample size equals or approaches the population size, the very notion of estimation loses its meaning.

+

The notion of “best estimator” makes no sense in some situations, including the following: a) You draw one black ball from a bucket. You cannot put confidence intervals around your estimate of the proportion of black balls, except to say that the proportion is somewhere between 1 and 0. No one would proceed without bringing in more information. That is, when there is almost no information, you simply cannot make much of an estimate — and the resampling method breaks down, too. It does not help much to shift the discussion to the models of the buckets, because then the issue is the unknown population of the buckets, in which case we need to bring in our general knowledge. b) When the sample size equals or is close to the population size, as discussed in this section, the data are a description rather than an estimate, because the sample is getting to be much the same as the universe; that is, if there are twelve people in your family, and you randomly take a sample of the amount of sugar used by eight members of the family, the results of the sample cannot be very different than if you compute the amount for all twelve family members. In such a case, the interpretation of the mean becomes complex.

+

Underlying all estimation is the assumption of continuation, which follows from random sampling — that there is no reason to expect the next sample to be different from the present one in any particular fashion, mean or variation. But we do expect it to be different in some fashion because of sampling variability.

+
+
+

19.7 Conclusion

+

A Newsweek article says, “According to a recent reader’s survey in Bride’s magazine, the average blowout [wedding] will set you back about $16,000” (Feb 15, 1993, p. 67). That use of the mean (I assume) for the average, rather than the median, could cost the parents of some brides a pretty penny. It could be that the cost for the average person — that is, the median expenditure — might be a lot less than $16,000. (A few million dollar weddings could have a huge effect on a survey mean.) An inappropriate standard of comparison might enter into some family discussions as a result of this article, and cause higher outlays than otherwise. This chapter helps one understand the nature of such estimates.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/preface_second.html b/python-book/preface_second.html new file mode 100644 index 00000000..fc435008 --- /dev/null +++ b/python-book/preface_second.html @@ -0,0 +1,717 @@ + + + + + + + + + +Resampling statistics - Preface to the second edition + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Preface to the second edition

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+ +
+
+
+

This is a slightly edited version of the original preface to the second edition. We removed an introduction to the original custom software, and a look ahead at the original contents of the book.

+
+
+
+

Brief history of the resampling method

+

This book describes a revolutionary — but now fully accepted — approach to probability and statistics. Monte Carlo resampling simulation takes the mumbo-jumbo out of statistics and enables even beginning students to understand completely everything that is done.

+

Before we go further, let’s make the discussion more concrete with an example. Ask a class: What are the chances that three of a family’s first four children will be girls? After various entertaining class suggestions about procreating four babies, or surveying families with four children, someone in the group always suggests flipping a coin. This leads to valuable student discussion about whether the probability of a girl is exactly half (there are about 105 males born for each 100 females), whether .5 is a satisfactory approximation, whether four coins flipped once give the same answer as one coin flipped four times, and so on. Soon the class decides to take actual samples of coin flips. And students see that this method quickly arrives at estimates that are accurate enough for most purposes. Discussion of what is “accurate enough” also comes up, and that discussion is valuable, too.

+

The Monte Carlo method itself is not new. Near the end of World War II, a group of physicists at the Rand Corp. began to use random-number simulations to study processes too complex to handle with formulas. The name “Monte Carlo” came from the analogy to the gambling houses on the French Riviera. The application of Monte Carlo methods in teaching statistics also is not new. Simulations have often been used to illustrate basic concepts. What is new and radical is using Monte Carlo methods routinely as problem-solving tools for everyday problems in probability and statistics.

+

From here on, the related term resampling will be used throughout the book. Resampling refers to the use of the observed data or of a data generating mechanism (such as a die) to produce new hypothetical samples, the results of which can then be analyzed. The term computer-intensive methods also is frequently used to refer to techniques such as these.

+

The history of resampling is as follows: In the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly in their social science research. I sympathized with them. Even many experts are unable to understand intuitively the formal mathematical approach to the subject. Clearly, we need a method free of the formulas that bewilder almost everyone.

+

The solution is as follows: Beneath the logic of a statistical inference there necessarily lies a physical process. The resampling methods described in this book allow us to work directly with the underlying physical model by simulating it, rather than describing it with formulae. This general insight is also the heart of the specific technique Bradley Efron felicitously labeled ‘the bootstrap’ (1979), a device I introduced in 1969 that is now the most commonly used, and best known, resampling method.

+

The resampling approach was first tried with graduate students in 1966, and it worked exceedingly well. Next, under the auspices of the father of the “new math,” Max Beberman, I “taught” the method to a class of high school seniors in 1967. The word “taught” is in quotation marks because the pedagogical essence of the resampling approach is that the students discover the method for themselves with a minimum of explicit instruction from the teacher.

+

The first classes were a success and the results were published in 1969 (J. L. Simon and Holmes 1969). Three PhD experiments were then conducted under Kenneth Travers’ supervision, and they all showed overwhelming superiority for the resampling method (J. L. Simon, Atkinson, and Shevokas 1976). Subsequent research has confirmed this success.

+

The method was first presented at some length in the 1969 edition of my book Basic Research Methods in Social Science (J. L. Simon 1969) (third edition with Paul Burstein: J. L. Simon and Burstein (1985)).

+

For some years, the resampling method failed to ignite interest among statisticians. While many factors (including the accumulated intellectual and emotional investment in existing methods) impede the adoption of any new technique, the lack of readily available computing power and tools was an obstacle. (The advent of the personal computer in the 1980s changed that, of course.)

+

Then in the late 1970s, Efron began to publish formal analyses of the bootstrap — an important resampling application (Efron 1979). Interest among statisticians has exploded since then, in conjunction with the availability of easy, fast, and inexpensive computer simulations. The bootstrap has been the most widely used, but across-the-board application of computer intensive methods now seems at hand. As Noreen (1989) noted, “there is a computer-intensive alternative to just about every conventional parametric and non-parametric test.” And the bootstrap method has now been hailed by an official American Statistical Association volume as the only “great breakthrough” in statistics since 1970 (Kotz and Johnson 1992).

+

It seems appropriate now to offer the resampling method as the technique of choice for beginning students as well as for the advanced practitioners who have been exploring and applying the method.

+

Though the term “computer-intensive methods” is nowadays used to describe the techniques elaborated here, this book can be read either with or without the accompanying use of the computer. However, as a practical matter, users of these methods are unlikely to be content with manual simulations if a quick and simple computer-program alternative is available.

+

The ultimate test of the resampling method is how well you, the reader, learn it and like it. But knowing about the experiences of others may help beginners as well as experienced statisticians approach the scary subject of statistics with a good attitude. Students as early as junior high school, taught by a variety of instructors and in other languages as well as English, have — in a matter of 6 or 12 short hours — learned how to handle problems that students taught conventionally do not learn until advanced university courses. And several controlled experimental studies show that, on average, students who learn this method are more likely to arrive at correct solutions than are students who are taught conventional methods.

+

Best of all, the experiments comparing the resampling method against conventional methods show that students enjoy learning statistics and probability this way, and they don’t suffer statistics panic. This experience contrasts sharply with the reactions of students learning by conventional methods. (This is true even when the same teachers teach both methods as part of an experiment.)

+

A public offer: The intellectual history of probability and statistics began with gambling games and betting. Therefore, perhaps a lighthearted but very serious offer would not seem inappropriate here: I hereby publicly offer to stake $5,000 in a contest against any teacher of conventional statistics, with the winner to be decided by whose students get the larger number of simple and complex numerical problems correct, when teaching similar groups of students for a limited number of class hours — say, six or ten. And if I should win, as I am confident that I will, I will contribute the winnings to the effort to promulgate this teaching method. (Here it should be noted that I am far from being the world’s most skillful or charming teacher. It is the subject matter that does the job, not the teacher’s excellence.) This offer has been in print for many years now, but no one has accepted it.

+

The early chapters of the book contain considerable discussion of the resampling method, and of ways to teach it. This material is intended mainly for the instructor; because the method is new and revolutionary, many instructors appreciate this guidance. But this didactic material is also intended to help the student get actively involved in the learning process rather than just sitting like a baby bird with its beak open waiting for the mother bird to drop morsels into its mouth. You may skip this didactic material, of course, and I hope that it does not get in your way. But all things considered, I decided it was better to include this material early on rather than to put it in the back or in a separate publication where it might be overlooked.

+
+
+

Brief history of statistics

+

In ancient times, mathematics developed from the needs of governments and rich men to number armies, flocks, and especially to count the taxpayers and their possessions. Up until the beginning of the 20th century, the term statistic meant the number of something — soldiers, births, taxes, or what-have-you. In many cases, the term statistic still means the number of something; the most important statistics for the United States are in the Statistical Abstract of the United States . These numbers are now known as descriptive statistics. This book will not deal at all with the making or interpretation of descriptive statistics, because the topic is handled very well in most conventional statistics texts.

+

Another stream of thought entered the field of probability and statistics in the 17th century by way of gambling in France. Throughout history people had learned about the odds in gambling games by repeated plays of the game. But in the year 1654, the French nobleman Chevalier de Mere asked the great mathematician and philosopher Pascal to help him develop correct odds for some gambling games. Pascal, the famous Fermat, and others went on to develop modern probability theory.

+

Later these two streams of thought came together. Researchers wanted to know how accurate their descriptive statistics were — not only the descriptive statistics originating from sample surveys, but also the numbers arising from experiments. Statisticians began to apply the theory of probability to the accuracy of the data arising from sample surveys and experiments, and that became the theory of inferential statistics .

+

Here we find a guidepost: probability theory and statistics are relevant whenever there is uncertainty about events occurring in the world, or in the numbers describing those events.

+

Later, probability theory was also applied to another context in which there is uncertainty — decision-making situations. Descriptive statistics like those gathered by insurance companies — for example, the number of people per thousand in each age bracket who die in a five-year period — have been used for a long time in making decisions such as how much to charge for insurance policies. But in the modern probabilistic theory of decision-making in business, politics and war, the emphasis is different; in such situations the emphasis is on methods of combining estimates of probabilities that depend upon each other in complicated ways in order to arrive at the best decision. This is a return to the gambling origins of probability and statistics. In contrast, in standard insurance situations (not including war insurance or insurance on a dancer’s legs) the probabilities can be estimated with good precision without complex calculation, on the basis of a great many observations, and the main statistical task is gathering the information. In business and political decision-making situations, however, one often works with probabilities based on very limited information — often little better than guesses. There the task is how best to combine these guesses about various probabilities into an overall probability estimate.

+

Estimating probabilities with conventional mathematical methods is often so complex that the process scares many people. And properly so, because its difficulty leads to errors. The statistics profession worries greatly about the widespread use of conventional tests whose foundations are poorly understood. The wide availability of statistical computer packages that can easily perform these tests with a single command, regardless of whether the user understands what is going on or whether the test is appropriate, has exacerbated this problem. This led John Tukey to turn the field toward descriptive statistics with his techniques of “exploratory data analysis” (Tukey 1977). These descriptive methods are well described in many texts.

+

Probabilistic analysis also is crucial, however. Judgments about whether the government should allow a new medicine on the market, or whether an operator should adjust a screw machine, require more than eyeball inspection of data to assess the chance variability. But until now the teaching of probabilistic statistics, with its abstruse structure of mathematical formulas, mysterious tables of calculations, and restrictive assumptions concerning data distributions — all of which separate the student from the actual data or physical process under consideration — has been an insurmountable obstacle to intuitive understanding.

+

Now, however, the resampling method enables researchers and decision-makers in all walks of life to obtain the benefits of statistics and predictability without the shortcomings of conventional methods, free of mathematical formulas and restrictive assumptions. Resampling’s repeated experimental trials on the computer enable the data (or a data-generating mechanism representing a hypothesis) to express their own properties, without difficult and misleading assumptions.

+

So — good luck. I hope that you enjoy the book and profit from it.

+

Julian Lincoln Simon

+

1997

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/preface_third.html b/python-book/preface_third.html new file mode 100644 index 00000000..411b002f --- /dev/null +++ b/python-book/preface_third.html @@ -0,0 +1,726 @@ + + + + + + + + + +Resampling statistics - Preface to the third edition + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Preface to the third edition

+
+ + + +
+ + + + +
+ + +
+ +

The book in your hands, or on your screen, is the third edition of a book originally called “Resampling: the new statistics”, by Julian Lincoln Simon (1992).

+

One of the pleasures of writing an edition of someone else’s book is that we have some freedom to praise a previous version of our own book. We will do that in the next section. Then we talk about the resampling methods in this book, and their place at the heart of “data science”. Finally, we discuss what we have changed, and why, and make some suggestions about where this book could fit into your learning and teaching.

+
+

What Simon saw

+

Simon gives the early history of this book in the original preface. He starts with the following observation:

+
+

In the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly…

+
+

Simon then applied his striking capacity for independent thought to the problem — and came to two essential conclusions.

+

The first was that introductory courses in statistics use far too much mathematics. Most students cannot follow along and quickly get lost, reducing the subject to — as Simon puts it — “mumbo-jumbo”.

+

On its own, this was not a new realization. Simon quotes a classic textbook by Wallis and Roberts (1956), in which they compare teaching statistics through mathematics to teaching in a foreign language. More recently, other teachers of statistics have come to the same conclusion. Cobb (2007) argues that it is practically impossible to teach students the level of mathematics they would need to understand standard introductory courses. As you will see below, Cobb also agrees with Simon about the solution.

+

Simon’s great contribution was to see how we can replace the mathematics, to better reveal the true heart of statistical thinking. His starting point appears in the original preface: “Beneath the logic of a statistical inference there necessarily lies a physical process”. Drawing conclusions from noisy data means building a model of the noisy world, and seeing how that model behaves. That model can be physical, where we generate the noisiness of the world using physical devices like dice and spinners and coin-tosses. In fact, Simon used exactly these kinds of devices in his first experiments in teaching (Simon 1969). He then saw that it was much more efficient to build these models with simple computer code, and the result was the first and second editions of this book, with their associated software, the Resampling Stats language.

+

Simon’s second conclusion follows from the first. Now that Simon had stripped away the unnecessary barrier of mathematics, he had got to the heart of what is interesting and difficult in statistics. Drawing conclusions from noisy data involves a lot of hard, clear thinking. We need to be honest with our students about that; statistics is hard, not because it is obscure (it need not be), but because it deals with difficult problems. It is exactly that hard logical thinking that can make statistics so interesting to our best students; “statistics” is just reasoning about the world when the world is noisy. Simon writes eloquently about this in a section in the introduction — “Why is statistics such a difficult subject” (Section 1.6).

+

We needed both of Simon’s conclusions to get anywhere. We cannot hope to teach two hard subjects at the same time; mathematics, and statistical reasoning. That is what Simon has done: he replaced the mathematics with something that is much easier to reason about. Then he can concentrate on the real, interesting problem — the hard thinking about data, and the world it comes from. To quote from a later section in this book (Section 2.4): “Once we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics.” Instead of asking “where would I look up the right recipe for this”, you find yourself asking “what kind of world do these data come from?” and “how can I reason about that world?”. Like Simon, we have found that this way of thinking and teaching is almost magically liberating and satisfying. We hope and believe that you will find the same.

+
+
+

Resampling and data science

+

The ideas in Simon’s book, first published in 1992, have found themselves at the center of the modern movement of data science.

+

In the section above, we described Simon’s path in discovering physical models as a way of teaching and explaining statistical tests. He saw that code was the right way to express these physical models, and therefore, to build and explain statistical tests.

+

Meanwhile, the wider world of data analysis has been coming to the same conclusion, but from the opposite direction. Simon saw the power of resampling for explanation, and then that code was the right way to express these explanations. The data science movement discovered first that code was essential for data analysis, and then that code was the right way to explain statistics.

+

The modern use of the phrase “data science” comes from the technology industry. From around 2007, companies such as LinkedIn and Facebook began to notice that there was a new type of data analyst that was much more effective than their predecessors. They came to call these analysts “data scientists”, because they had learned how to deal with large and difficult data while working in scientific fields such as ecology, biology, or astrophysics. They had done this by learning to use code:

+
+

Data scientists’ most basic, universal skill is the ability to write code. (Davenport and Patil 2012)

+
+

Further reflection (Donoho 2017) suggested that something deep was going on: that data science was the expression of a radical change in the way we analyze data, in academia, and in industry. At the center of this change — was code. Code is the language that allows us to tell the computer what it should do with data; it is the native language of data analysis.

+

This insight transforms the way we think of code. In the past, we have thought of code as a separate, specialized skill that some of us learn. We take coding courses — we “learn to code”. If code is the fundamental language for analyzing data, then we need code to express what data analysis does, and explain how it works. Here we “code to learn”. Code is not an aim in itself, but a language we can use to express the simple ideas behind data analysis and statistics.

+

Thus the data science movement started from code as the foundation for data analysis, and moved on to using code to explain statistics. It ends at the same place as this book, having come from the other side of the problem.

+

The growth of data science is the inevitable result of taking computing seriously in education and research. We have already cited Cobb (2007) on the impossibility of teaching the mathematics students would need in order to understand traditional statistics courses. He goes on to explain why there is so much mathematics, and why we should remove it. In the age before ubiquitous computing, we needed mathematics to simplify calculations that we could not practically do by hand. Now we have great computing power in our phones and laptops, we do not have this constraint, and we can use simpler resampling methods to solve the same problems. As Simon shows, these are much easier to describe and understand. Data science, and teaching with resampling, are the obvious consequences of ubiquitous computing.

+
+
+

What we changed

+

This diversion, through data science, leads us to the changes that we have made for the new edition. The previous edition of this book is still excellent, and you can read it free, online, at http://www.resample.com/intro-text-online. It continues to be ahead of its time, and ahead of our time. Its one major drawback is that Simon bases much of the book around code written in a special language that he developed with Dan Weidenfeld, called Resampling Stats. Resampling Stats is well designed for expressing the steps in simulating worlds that include elements of randomness, and it was a useful contribution at the time that it was written. Since then, and particularly in the last decade, there have been many improvements in more powerful and general languages, such as Python and R. These languages are particularly suitable for beginners in data analysis, and they come with a huge range of tools and libraries for many tasks in data analysis, including the kinds of models and simulations you will see in this book. We have updated the book to use Python, instead of Resampling Stats. If you already know Python or a similar language, such as R, you will have a big head start in reading this book, but even if you do not, we have written the book so it will be possible to pick up the Python code that you need to understand and build the kind of models that Simon uses. The advantage to us, your authors, is that we can use the very powerful tools associated with Python to make it easier to run and explain the code. The advantage to you, our readers, is that you can also learn these tools, and the Python language. They will serve you well for the rest of your career in data analysis.

+ +

Our second major change is that we have added some content that Simon specifically left out. Simon knew that his approach was radical for its time, and designed his book as a commentary, correction, and addition to traditional courses in statistics. He assumes some familiarity with the older world of normal distributions, t-tests, Chi-squared tests, analysis of variance, and correlation. In the time that has passed since he wrote the book, his approach to explanation has reached the mainstream. It is now perfectly possible to teach an introductory statistics course without referring to the older statistical methods. This means that the earlier editions of this book can now serve on their own as an introduction to statistics — but, used this way, at the time we write, this will leave our readers with some gaps to fill. Simon’s approach will give you a deep understanding of the ideas of statistics, and resampling methods to apply them, but you will likely come across other teachers and researchers using the traditional methods. To bridge this gap, we have added new sections that explain how resampling methods relate to their corresponding traditional methods. Luckily, we find these explanations add deeper understanding to the traditional methods. Teaching resampling is the best foundation for statistics, including the traditional methods.

+

Lastly, we have extended Simon’s explanation of Bayesian probability and inference. This is partly because Bayesian methods have become so important in statistical inference, and partly because Simon’s approach has such obvious application in explaining how Bayesian methods work.

+
+
+

Who should read this book, and when

+

As you have seen in the previous sections, this book uses a radical approach to explaining statistical inference — the science of drawing conclusions from noisy data. This approach is quickly becoming the standard in teaching of data science, partly because it is so much easier to explain, and partly because of the increasing role of code in data analysis.

+

Our book teaches the basics of using the Python language, basic probability, statistical inference through simulation and resampling, confidence intervals, and basic Bayesian reasoning, all through the use of model building in simple code.

+

Statistical inference is an important part of research methods for many subjects; so much so, that research methods courses may even be called “statistics” courses, or include “statistics” components. This book covers the basic ideas behind statistical inference, and how you can apply these ideas to draw practical statistical conclusions. We recommend it to you as an introduction to statistics. If you are a teacher, we suggest you consider this book as a primary text for first statistics courses. We hope you will find, as we have, that this method of explaining through building is much more productive and satisfying than the traditional method of trying to convey some “intuitive” understanding of fairly complicated mathematics. We explain the relationship of these resampling techniques to traditional methods. Even if you do need to teach your students t-tests, and analysis of variance, we hope you will share our experience that this way of explaining is much more compelling than the traditional approach.

+

Simon wrote this book for students and teachers who were interested to discover a radical new method of explanation in statistics and probability. The book will still work well for that purpose. If you have done a statistics course, but you kept feeling that you did not really understand it, or there was something fundamental missing that you could not put your finger on — good for you! — then, please, read this book. There is a good chance that it will give you deeper understanding, and reveal the logic behind the often arcane formulations of traditional statistics.

+

Our book is only part of a data science course. There are several important aspects to data science. A data science course needs all the elements we list above, but it should also cover the process of reading, cleaning, and reorganizing data using Python, or another language, such as

+

R.

+

It may also go into more detail about the experimental design, and cover prediction techniques, such as classification with machine learning, and data exploration with plots, tables, and summary measures. We do not cover those here. If you are teaching a full data science course, we suggest that you use this book as your first text, as an introduction to code, and statistical inference, and then add some of the many excellent resources on these other aspects of data science that assume some knowledge of statistics and programming.

+
+
+

Welcome to resampling

+

We hope you will agree that Simon’s insights for understanding and explaining are — really extraordinary. We are catching up slowly. If you are like us, your humble authors, you will find that Simon has succeeded in explaining what statistics is, and exactly how it works, to anyone with the patience to work through the examples, and think hard about the problems. If you have that patience, the rewards are great. Not only will you understand statistics down to its deepest foundations, but you will be able to think of your own tests, for your own problems, and have the tools to implement them yourself.

+

Matthew Brett

+

Stéfan van der Walt

+

Ian Nimmo-Smith

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/probability_theory_1a.html b/python-book/probability_theory_1a.html new file mode 100644 index 00000000..dea3d706 --- /dev/null +++ b/python-book/probability_theory_1a.html @@ -0,0 +1,1224 @@ + + + + + + + + + +Resampling statistics - 8  Probability Theory, Part 1 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

8  Probability Theory, Part 1

+
+ + + +
+ + + + +
+ + +
+ +
+

8.1 Introduction

+

Let’s assume we understand the nature of the system or mechanism that produces the uncertain events in which we are interested. That is, the probability of the relevant independent simple events is assumed to be known, the way we assume we know the probability of a single “6” with a given die. The task is to determine the probability of various sequences or combinations of the simple events — say, three “6’s” in a row with the die. These are the sorts of probability problems dealt with in this chapter.

+ +

The resampling method — or just call it simulation or Monte Carlo method, if you prefer — will be illustrated with classic examples. Typically, a single trial of the system is simulated with cards, dice, random numbers, or a computer program. Then trials are repeated again and again to estimate the frequency of occurrence of the event in which we are interested; this is the probability we seek. We can obtain as accurate an estimate of the probability as we wish by increasing the number of trials. The key task in each situation is designing an experiment that accurately simulates the system in which we are interested.
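
+

As a preview of the code we will write later in this chapter, here is one minimal sketch of such a repeated-trials estimate for the chance of three “6’s” in a row; the choice of 10000 trials and the variable names are ours:

+
# Estimate the chance of three 6's in a row with a fair die.
+import numpy as np
+
+rnd = np.random.default_rng()
+
+n_trials = 10000
+n_successes = 0
+for i in range(n_trials):
+    # One trial: three rolls of the die.
+    rolls = rnd.integers(1, 7, size=3)
+    if np.all(rolls == 6):
+        n_successes = n_successes + 1
+
+# The estimate; the exact answer is (1/6) ** 3, about 0.0046.
+print(n_successes / n_trials)
+
+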

+

This chapter begins the Monte Carlo simulation work that culminates in the resampling method in statistics proper. The chapter deals with problems in probability theory — that is, situations where one wants to estimate the probability of one or more particular events when the basic structure and parameters of the system are known. In later chapters we move on to inferential statistics, where similar simulation work is known as resampling.

+
+
+

8.2 Definitions

+

A few definitions first:

+
    +
  • Simple Event : An event such as a single flip of a coin, or one draw of a single card. A simple event cannot be broken down into simpler events of a similar sort.
  • +
  • Simple Probability (also called “primitive probability”): The probability that a simple event will occur; for example, that my favorite football team, the Washington Commanders, will win on Sunday.
  • +
+

During a recent season, the “experts” said that the Commanders had a 60 percent chance of winning on Opening Day; that estimate is a simple probability. We can model that probability by putting into a bucket six green balls to stand for wins, and four red balls to stand for losses (or we could use 60 and 40 balls, or 600 and 400). For the outcome on any given day, we draw one ball from the bucket, and record a simulated win if the ball is green, a loss if the ball is red.

+

So far the bucket has served only as a physical representation of our thoughts. But as we shall see shortly, this representation can help us think clearly about the process of interest to us. It can also give us information that is not yet in our thoughts.

+

Estimating simple probabilities wisely depends largely upon gathering evidence well. It also helps to adjust one’s probability estimates skillfully to make them internally consistent. Estimating probabilities has much in common with estimating lengths, weights, skills, costs, and other subjects of measurement and judgment.

+

Some more definitions:

+
    +
  • Composite Event : A composite event is the combination of two or more simple events. Examples include all heads in three throws of a single coin; all heads in one throw of three coins at once; Sunday being a nice day and the Commanders winning; and the birth of nine females out of the next ten calves born if the chance of a female in a single birth is 0.48.
  • +
  • Compound Probability : The probability that a composite event will occur.
  • +
+

The difficulty in estimating simple probabilities such as the chance of the Commanders winning on Sunday arises from our lack of understanding of the world around us. The difficulty of estimating compound probabilities such as the probability of it being a nice day Sunday and the Commanders winning is the weakness in our mathematical intuition interacting with our lack of understanding of the world around us. Our task in the study of probability and statistics is to overcome the weakness of our mathematical intuition by using a systematic process of simulation (or the devices of formulaic deductive theory).

+

Consider now a question about a compound probability: What are the chances of the Commanders winning their first two games if we think that each of those games can be modeled by our bucket containing six green and four red balls? If one drawing from the bucket represents one game, a second drawing should represent the second game (assuming we replace the first ball drawn in order to keep the chances of winning the same for the two games). If so, two drawings from the bucket should represent two games. And we can then estimate the compound probability we seek with a series of two-ball trial experiments.

+

More specifically, our procedure in this case — the prototype of all procedures in the resampling simulation approach to probability and statistics — is as follows:

+
    +
  1. Put six green (“Win”) and four red (“Lose”) balls in a bucket.
  2. +
  3. Draw a ball, record its color, and replace it (so that the probability of winning the second simulated game is the same as the first).
  4. +
  5. Draw another ball and record its color.
  6. +
  7. If both balls drawn were green record “Yes”; otherwise record “No.”
  8. +
  9. Repeat steps 2-4 a thousand times.
  10. +
  11. Count the proportion of “Yes”s to the total number of “Yes”s and “No”s; the result is the probability we seek.
  12. +
+

Much the same procedure could be used to estimate the probability of the Commanders winning (say) 3 of their next 4 games. We will return to this illustration again and we will see how it enables us to estimate many other sorts of probabilities.
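
+

To anticipate the Python code introduced later in this chapter, here is one possible sketch of the procedure above for the two-game question; the variable names and the loop structure are ours:

+
# Estimate the chance that the Commanders win their first two games.
+import numpy as np
+
+rnd = np.random.default_rng()
+
+# Step 1: six green ("Win") and four red ("Lose") balls in a bucket.
+bucket = np.repeat(['green', 'red'], [6, 4])
+
+n_trials = 1000
+n_yes = 0
+for i in range(n_trials):
+    # Steps 2 and 3: one draw per game; drawing from the full bucket each
+    # time plays the role of replacing the ball between draws.
+    first_game = rnd.choice(bucket)
+    second_game = rnd.choice(bucket)
+    # Step 4: count a "Yes" only if both draws were green.
+    if first_game == 'green' and second_game == 'green':
+        n_yes = n_yes + 1
+
+# Step 6: the proportion of "Yes" trials estimates the probability we
+# seek; the exact answer is 0.6 * 0.6 = 0.36.
+print(n_yes / n_trials)
+
+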

+
    +
  • Experiment or Experimental Trial, or Trial, or Resampling Experiment : A simulation experiment or trial is a randomly-generated composite event which has the same characteristics as the actual composite event in which we are interested (except that in inferential statistics the resampling experiment is generated with the “benchmark” or “null” universe rather than with the “alternative” universe).
  • +
  • Parameter : A numerical property of a universe. For example, the “true” mean (don’t worry about the meaning of “true”), and the range between largest and smallest members, are two of its parameters.
  • +
+
+
+

8.3 Theoretical and historical methods of estimation

+

As introduced in Section 3.5, there are two general ways to tackle any probability problem: theoretical-deductive and empirical , each of which has two sub-types. These concepts have complicated links with the concept of “frequency series” discussed earlier.

+
    +
  • Empirical Methods . One empirical method is to look at actual cases in nature — for example, examine all (or a sample of) the families in Brazil that have four children and count the proportion that have three girls among them. (This is the most fundamental process in science and in information-getting generally. But in general we do not discuss it in this book and leave it to courses called “research methods.” I regard that as a mistake and a shame, but so be it.) In some cases, of course, we cannot get data in such fashion because it does not exist.

    +

    Another empirical method is to manipulate the simple elements in such fashion as to produce hypothetical experience with how the simple elements behave. This is the heart of the resampling method, as well as of physical simulations such as wind tunnels.

  • +
  • Theoretical Methods . The most fundamental theoretical approach is to resort to first principles, working with the elements in their full deductive simplicity, and examining all possibilities. This is what we do when we use a tree diagram to calculate the probability of three girls in families of four children.

  • +
+ +

The formulaic approach is a theoretical method that aims to avoid the inconvenience of resorting to first principles, and instead uses calculation shortcuts that have been worked out in the past.
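
+

As a minimal sketch of that first-principles approach, we can have the computer list every possible ordering of girls and boys in four births, assuming for the moment that the two are equally likely, and count the orderings with exactly three girls:

+
# List all 16 equally likely orderings of girls and boys in four births.
+from itertools import product
+
+outcomes = list(product(['girl', 'boy'], repeat=4))
+n_three_girls = sum(1 for family in outcomes if family.count('girl') == 3)
+
+# 4 of the 16 orderings have exactly three girls: 4 / 16 = 0.25.
+print(n_three_girls / len(outcomes))
+
+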

+

What the Book Teaches . This book teaches you the empirical method using hypothetical cases. Formulas can be misleading for most people in most situations, and should be used as a shortcut only when a person understands exactly which first principles are embodied in the formulas. But most of the time, students and practitioners resort to the formulaic approach without understanding the first principles that lie behind them — indeed, their own teachers often do not understand these first principles — and therefore they have almost no way to verify that the formula is right. Instead they use canned checklists of qualifying conditions.

+
+
+

8.4 Samples and universes

+

The terms “sample” and “universe” (or “population”) 1 were used earlier without definition. But now these terms must be defined.

+
+

8.4.1 The concept of a sample

+

For our purposes, a “sample” is a collection of observations for which you obtain the data to be used in the problem. Almost any set of observations for which you have data constitutes a sample. (You might, or might not, choose to call a complete census a sample.)

+ +
+
+
+

8.5 The concept of a universe or population

+

For every sample there must also be a universe “behind” it. But “universe” is harder to define, partly because it is often an imaginary concept. A universe is the collection of things or people that you want to say that your sample was taken from . A universe can be finite and well defined — “all live holders of the Congressional Medal of Honor,” “all presidents of major universities,” “all billion-dollar corporations in the United States.” Of course, these finite universes may not be easy to pin down; for instance, what is a “major university”? And these universes may contain some elements that are difficult to find; for instance, some Congressional Medal winners may have left the country, and there may not be adequate public records on some billion-dollar corporations.

+

Universes that are called “infinite” are harder to understand, and it is often difficult to decide which universe is appropriate for a given purpose. For example, if you are studying a sample of patients suffering from schizophrenia, what is the universe from which the sample comes? Depending on your purposes, the appropriate universe might be all patients with schizophrenia now alive, or it might be all patients who might ever live. The latter concept of the universe of patients with schizophrenia is imaginary because some of the universe does not exist. And it is infinite because it goes on forever.

+

Not everyone likes this definition of “universe.” Others prefer to think of a universe, not as the collection of people or things that you want to say your sample was taken from, but as the collection that the sample was actually taken from. This latter view equates the universe to the “sampling frame” (the actual list or set of elements you sample from) which is always finite and existent. The definition of universe offered here is simply the most practical, in our opinion.

+ +
+
+

8.6 The conventions of probability

+

Let’s review the basic conventions and rules used in the study of probability:

+
    +
  1. Probabilities are expressed as decimals between 0 and 1, like percentages. The weather forecaster might say that the probability of rain tomorrow is 0.2, or 0.97.
  2. +
  3. The probabilities of all the possible alternative outcomes in a single “trial” must add to unity. If you are prepared to say that it must either rain or not rain, with no other outcome being possible — that is, if you consider the outcomes to be mutually exclusive (a term that we discuss below), then one of those probabilities implies the other. That is, if you estimate that the probability of rain is 0.2 — written \(P(\text{rain}) = 0.2\) — that implies that you estimate that \(P(\text{no rain}) = 0.8\).
  4. +
+
+
+
+ +
+
+Writing probabilities +
+
+
+

We will now be writing some simple formulae using probability. Above we write the probability of rain tomorrow as \(P(\text{rain})\). This probability might be 0.2, and we could write this as:

+

\[ +P(\text{rain}) = 0.2 +\]

+

We can term “rain tomorrow” an event — the event may occur: \(\text{rain}\), or it may not occur: \(\text{no rain}\).

+

We often shorten the name of our event — here \(\text{rain}\) — to a single letter, such as \(R\). So, in this case, we could write \(P(\text{rain}) = 0.2\) as \(P(R) = 0.2\) — meaning the same thing. We tend to prefer single letters — as in \(P(R)\) — to longer names — as in \(P(\text{rain})\). This is because the single letters can be easier to read in these compact formulae.

+

Above we have written the probability of “rain tomorrow” event not occurring as \(P(\text{no rain})\). Another way of referring to an event not occurring is to suffix the event name with a caret (^) character like this: \(\ \hat{} R\). So read \(P(\ \hat{} R)\) as “the probability that it will not rain”, and it is just another way of writing \(P(\text{no rain})\). We sometimes call \(\ \hat{} R\) the complement of \(R\).

+

We use \(\text{and}\) between two events to mean both events occur.

+

For example, say we call the event “Commanders win the game” as \(W\). One example of a compound event (see above) would be the event \(W \text{and} R\), meaning, the event where the Commanders won the game and it rained.

+
+
+
+
+

8.7 Mutually exclusive events — the addition rule

+

Definition: If there are just two events \(A\) and \(B\) and they are “mutually exclusive” or “disjoint,” each implies the absence of the other. Green and red coats are mutually exclusive for you if (but only if) you never wear more than one coat at a time.

+

To state this idea formally, if \(A\) and \(B\) are mutually exclusive, then:

+

\[ +P(A \text{ and } B) = 0 +\]

+

If \(A\) is “wearing a green coat” and \(B\) is “wearing a red coat” (and you never wear two coats at the same time), then the probability that you are wearing a green coat and a red coat is 0: \(P(A \text{ and } B) = 0\).

+

In that case, outcomes \(A\) and \(B\), and hence outcome \(A\) and its own absence (written \(P(\ \hat{} A)\)), are necessarily mutually exclusive, and hence the two probabilities add to unity:

+ +

\[ +P(A) + P(\ \hat{} A) = 1 +\]

+

The sales of your store in a given year cannot be both above and below $1 million. Therefore if \(P(\text{sales > \$1 million}) = 0.2\), \(P(\text{sales <= \$1 million}) = 0.8\).

+

This “complements” rule is useful as a consistency check on your estimates of probabilities. If you say that the probability of rain is 0.2, then you should check that you think that the probability of no rain is 0.8; if not, reconsider both the estimates. The same for the probabilities of your team winning and losing its next game.

+
+
+

8.8 Joint probabilities

+

Let’s return now to the Commanders. We said earlier that our best guess of the probability that the Commanders will win the first game is 0.6. Let’s complicate the matter a bit and say that the probability of the Commanders winning depends upon the weather; on a nice day we estimate a 0.65 chance of winning, on a nasty (rainy or snowy) day a chance of 0.55. It is obvious that we then want to know the chance of a nice day, and we estimate a probability of 0.7. Let’s now ask the probability that both will happen — it will be a nice day and the Commanders will win .

+

Before getting on with the process of estimation itself, let’s tarry a moment to discuss the probability estimates. Where do we get the notion that the probability of a nice day next Sunday is 0.7? We might have done so by checking the records of the past 50 years, and finding 35 nice days on that date. If we assume that the weather has not changed over that period (an assumption that some might not think reasonable, and the wisdom of which must be the outcome of some non-objective judgment), our probability estimate of a nice day would then be 35/50 = 0.7.

+

Two points to notice here: 1) The source of this estimate is an objective “frequency series.” And 2) the data come to us as the records of 50 days, of which 35 were nice. We would do best to stick with exactly those numbers rather than convert them into a single number — 70 percent. Percentages have a way of being confusing. (When his point score goes up from 2 to 3, my racquetball partner is fond of saying that he has made a “fifty percent increase”; that’s just one of the confusions with percentages.) And converting to a percent loses information: We no longer know how many observations the percent is based upon, whereas 35/50 keeps that information.

+

Now, what about the estimate that the Commanders have a 0.65 chance of winning on a nice day — where does that come from? Unlike the weather situation, there is no long series of stable data to provide that information about the probability of winning. Instead, we construct an estimate using whatever information or “hunch” we have. The information might include the Commanders’ record earlier in this season, injuries that have occurred, what the “experts” in the newspapers say, the gambling odds, and so on. The result certainly is not “objective,” or the result of a stable frequency series. But we treat the 0.65 probability in quite the same way as we treat the .7 estimate of a nice day. In the case of winning, however, we produce an estimate expressed directly as a percent.

+

If we are shaky about the estimate of winning — as indeed we ought to be, because so much judgment and guesswork inevitably goes into it — we might proceed as follows: Take hold of a bucket and two bags of balls, green and red. Put into the bucket some number of green balls — say 10. Now add red balls until the ratio of green to red balls expresses your judgment of the ratio of expected wins to losses on a nice day, adding or subtracting green balls as necessary to get the ratio you want. If you end up with 13 green and 7 red balls, then you are “modeling” a probability of 0.65, as stated above. If you end up with a different ratio of balls, then you have learned from this experiment with your own thought processes that you think that the probability of a win on a nice day is something other than 0.65.

+

Don’t put away the bucket. We will be using it again shortly. And keep in mind how we have just been using it, because our use later will be somewhat different though directly related.

+

One good way to begin the process of producing a compound estimate is by portraying the available data in a “tree diagram” like Figure 8.1. The tree diagram shows the possible events in the order in which they might occur. A tree diagram is extremely valuable whether you will continue with either simulation or the formulaic method.

+
+
+
+
+

+
Figure 8.1: Tree diagram
+
+
+
+
+
+
+

8.9 The Monte Carlo simulation method (resampling)

+

The steps we follow to simulate an answer to the compound probability question are as follows:

+
    +
  1. Put seven blue balls (for “nice day”) and three yellow balls (“not nice”) into a bucket labeled A.
  2. +
  3. Put 65 green balls (for “win”) and 35 red balls (“lose”) into a bucket labeled B. This bucket represents the chance that the Commanders will win when it is a nice day.
  4. +
  5. Draw one ball from bucket A. If it is blue, carry on to the next step; otherwise record “no” and stop.
  6. +
  7. If you have drawn a blue ball from bucket A, now draw a ball from bucket B, and if it is green, record “yes” on a score sheet; otherwise write “no.”
  8. +
  9. Repeat steps 3-4 perhaps 10000 times.
  10. +
  11. Count the number of “yes” trials.
  12. +
  13. Compute the probability you seek as (number of “yeses” / 10000). (This is the same as number of “yeses” / (number of “yeses” + number of “noes”).)
  14. +
+

Actually doing the above series of steps by hand is useful to build your intuition about probability and simulation methods. But the procedure can also be simulated with a computer. We will use Python to do this in a moment.

+
+
+

8.10 If statements in Python

+

Before we get to the simulation, we need another feature of Python, called a conditional or if statement.

+

Here we have rewritten step 4 above, but using indentation to emphasize the idea:

+
If you have drawn a blue ball from bucket A:
+    Draw a ball from bucket B
+    if the ball is green:
+        record "yes"
+    otherwise:
+        record "no".
+

Notice the structure. The first line is the header of the if statement. It has a condition — this is why if statements are often called conditional statements. The condition here is “you have drawn a blue ball from bucket A”. If this condition is met — it is True that you have drawn a blue ball from bucket A — then we go on to do the stuff that is indented. Otherwise we do not do any of the stuff that is indented.

+

The indented stuff above is the body of the if statement. It is the stuff we do if the conditional at the top is True.

+

Now let’s see how we would write that in Python.

+

Let’s make bucket A. Remember, this is the weather bucket. It has seven blue balls (for 70% fine days) and 3 yellow balls (for 30% rainy days). See Section 6.6 for the np.repeat way of repeating elements multiple times.

+
+

Start of fine_win notebook

+ + +
+
# Load the NumPy array library.
+import numpy as np
+
+# Make a random number generator
+rnd = np.random.default_rng()
+
+
+
# blue means "nice day", yellow means "not nice".
+bucket_A = np.repeat(['blue', 'yellow'], [7, 3])
+bucket_A
+
+
array(['blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'yellow',
+       'yellow', 'yellow'], dtype='<U6')
+
+
+

Now let us draw a ball at random from bucket_A:

+
+
a_ball = rnd.choice(bucket_A)
+a_ball
+
+
'blue'
+
+
+

Here is how we run our first if statement. Running this code will display “The ball was blue” if the ball was blue, otherwise it will not display anything:

+
+
if a_ball == 'blue':
+    print('The ball was blue')
+
+
The ball was blue
+
+
+
+

Notice that the header line has if, followed by the conditional expression (question) a_ball == 'blue'. The header line finishes with a colon :. The body of the if statement is one or more indented lines. Here there is only one line: print('The ball was blue'). Python only runs the body of the if statement if the condition is True.2

+
+

To confirm we see “The ball was blue” if a_ball is 'blue' and nothing otherwise, we can set a_ball and re-run the code:

+
+
# Set value of a_ball so we know what it is.
+a_ball = 'blue'
+
+
+
if a_ball == 'blue':
+    # The conditional statement is True in this case, so the body does run.
+    print('The ball was blue')
+
+
The ball was blue
+
+
+
+
a_ball = 'yellow'
+
+
+
if a_ball == 'blue':
+    # The conditional statement is False, so the body does not run.
+    print('The ball was blue')
+
+

We can add an else clause to the if statement. Remember the body of the if statement runs if the conditional expression (here a_ball == 'blue') is True. The else clause runs if the conditional statement is False. This may be clearer with an example:

+
+
a_ball = 'blue'
+
+
+
if a_ball == 'blue':
+    # The conditional expression is True in this case, so the body runs.
+    print('The ball was blue')
+else:
+    # The conditional expression was True, so the else clause does not run.
+    print('The ball was not blue')
+
+
The ball was blue
+
+
+
+

Notice that the else clause of the if statement starts with a header line — else — followed by a colon :. It then has its own indented body of indented code. The body of the else clause only runs if the initial conditional expression is not True.

+
+
+
a_ball = 'yellow'
+
+
+
if a_ball == 'blue':
+    # The conditional expression was False, so the body does not run.
+    print('The ball was blue')
+else:
+    # but the else clause does run.
+    print('The ball was not blue')
+
+
The ball was not blue
+
+
+

With this machinery, we can now implement the full logic of step 4 above:

+
If you have drawn a blue ball from bucket A:
+    Draw a ball from bucket B
+    if the ball is green:
+        record "yes"
+    otherwise:
+        record "no".
+

Here is bucket B. Remember green means “win” (65% of the time) and red means “lose” (35% of the time). We could call this the “Commanders win when it is a nice day” bucket:

+
+
bucket_B = np.repeat(['green', 'red'], [65, 35])
+
+

The full logic for step 4 is:

+
+
# By default, say we have no result.
+result = 'No result'
+a_ball = rnd.choice(bucket_A)
+# If you have drawn a blue ball from bucket A:
+if a_ball == 'blue':
+    # Draw a ball at random from bucket B
+    b_ball = rnd.choice(bucket_B)
+    # if the ball is green:
+    if b_ball == 'green':
+        # record "yes"
+        result = 'yes'
+    # otherwise:
+    else:
+        # record "no".
+        result = 'no'
+# Show what we got in this case.
+result
+
+
'yes'
+
+
+

Now we have everything we need to run many trials with the same logic.

+
+
# The result of each trial.
+# To start with, say we have no result for all the trials.
+z = np.repeat(['No result'], 10000)
+
+# Repeat trial procedure 10000 times
+for i in range(10000):
+    # draw one "ball" for the weather, store in "a_ball"
+    # blue is "nice day", yellow is "not nice"
+    a_ball = rnd.choice(bucket_A)
+    if a_ball == 'blue':  # nice day
+        # if no rain, check on game outcome
+        # green is "win" (given nice day), red is "lose" (given nice day).
+        b_ball = rnd.choice(bucket_B)
+        if b_ball == 'green':  # Commanders win
+            # Record result.
+            z[i] = 'yes'
+        else:
+            z[i] = 'no'
+    # End of trial, go back to the beginning until done.
+
+# Count of the number of times we got "yes".
+k = np.sum(z == 'yes')
+# Show the proportion of *both* fine day *and* wins
+kk = k / 10000
+kk
+
+
0.4603
+
+
+

The above procedure gives us the probability that it will be a nice day and the Commanders will win — about 46%.

+

End of fine_win notebook

+
+

Let’s say that we think that the Commanders have a 0.55 (55%) chance of winning on a not-nice day. With the aid of a bucket with a different composition — one made by substituting 55 green and 45 red balls in Step 4 — a similar procedure yields the chance that it will be a nasty day and the Commanders will win. With a similar substitution and procedure we could also estimate the probabilities that it will be a nasty day and the Commanders will lose, and a nice day and the Commanders will lose. The sum of these probabilities should come close to unity, because the sum includes all the possible outcomes. But it will not exactly equal unity because of what we call “sampling variation” or “sampling error.”

+

Please notice that each trial of the procedure begins with the same numbers of balls in the buckets as the previous trial. That is, you must replace the balls you draw after each trial in order that the probabilities remain the same from trial to trial. Later we will discuss the general concept of replacement versus non-replacement more fully.

+
+
+

8.11 The deductive formulaic method

+

It also is possible to get an answer with formulaic methods to the question about a nice day and the Commanders winning. The following discussion of nice-day-Commanders-win handled by formula is a prototype of the formulaic deductive method for handling other problems.

+

Return now to the tree diagram (Figure 8.1) above. We can read from the tree diagram that 70 percent of the time it will be nice, and of that 70 percent of the time, 65 percent of the games will be wins. That is, \(0.65 * 0.7 = 0.455\) = the probability of a nice day and a win. That is the answer we seek. The method seems easy, but it also is easy to get confused and obtain the wrong answer.

+
+
+

8.12 Multiplication rule

+

We can generalize what we have just done. The foregoing formula exemplifies what is known as the “multiplication rule”:

+

\[ +P(\text{nice day and win}) = P(\text{nice day}) * P(\text{winning | nice day}) +\]

+

where the vertical line in \(P(\text{winning | nice day})\) means “conditional upon” or “given that.” That is, the vertical line indicates a “conditional probability,” a concept we must consider in a minute.

+

The multiplication rule is a formula that produces the probability of the combination (juncture) of two or more events . More discussion of it will follow below.

+
+
+

8.13 Conditional and unconditional probabilities

+

Two kinds of probability statements — conditional and unconditional — must now be distinguished.

+

An unconditional probability, such as \(P(\text{Commanders win}) = 0.6\) with no reference to the weather, is the appropriate concept when many factors, all small relative to each other rather than one force having an overwhelming influence, affect the outcome.

+

A conditional probability is formally written \(P(\text{Commanders win | rain}) = 0.55\), and it is read “The probability that the Commanders will win if (given that) it rains is 0.55.” It is the appropriate concept when there is one (or more) major event of interest in decision contexts.

+

Let’s use another football example to explain conditional and unconditional probabilities. In the year this was being written, the University of Maryland had an unpromising football team. Someone may nevertheless ask what chance the team had of winning the post season game at the bowl to which only the best team in the University of Maryland’s league is sent. One may say that if by some miracle the University of Maryland does get to the bowl, its chance would be a bit less than 50-50 — say, 0.40. That is, the probability of its winning, conditional on getting to the bowl, is 0.40. But the chance of its getting to the bowl at all is very low, perhaps 0.01. If so, the unconditional probability of winning at the bowl is the probability of its getting there multiplied by the probability of winning if it gets there; that is, 0.01 x 0.40 = 0.004. (It would be even better to say that .004 is the probability of winning conditional only on having a team, there being a league, and so on, all of which seem almost sure things.) Every probability is conditional on many things — that war does not break out, that the sun continues to rise, and so on. But if all those unspecified conditions are very sure, and can be taken for granted, we talk of the probability as unconditional.

+

A conditional probability is a statement that the probability of an event is such-and-such if something else is so-and-so; it is the “if” that makes a probability statement conditional. True, in some sense all probability statements are conditional; for example, the probability of an even-numbered spade is 6/52 if the deck is a poker deck and not necessarily if it is a pinochle deck or Tarot deck. But we ignore such conditions for most purposes.

+

Most of the use of the concept of probability in the social sciences is conditional probability. All hypothesis-testing statistics (discussed starting in Chapter 20) are conditional probabilities.

+

Here is the typical conditional-probability question used in social-science statistics: What is the probability of obtaining this sample S (by chance) if the sample were taken from universe A? For example, what is the probability of getting a sample of five children with I.Q.s over 100 by chance in a sample randomly chosen from the universe of children whose average I.Q. is 100?

+

One way to obtain such conditional-probability statements is by examination of the results generated by universes like the conditional universe. For example, assume that we are considering a universe of children where the average I.Q. is 100.

+

Write down “over 100” and “under 100” respectively on many slips of paper, put them into a hat, draw five slips several times, and see how often the first five slips drawn are all over 100. This is the resampling (Monte Carlo simulation) method of estimating probabilities.

+

Another way to obtain such conditional-probability statements is formulaic calculation. For example, if half the slips in the hat have numbers under 100 and half over 100, the probability of getting five in a row above 100 is 0.03125 — that is, \(0.5^5\), or 0.5 x 0.5 x 0.5 x 0.5 x 0.5, using the multiplication rule introduced above. But if you are not absolutely sure you know the proper mathematical formula, you are more likely to come up with a sound answer with the simulation method.
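Here is a rough sketch of that hat-and-slips simulation in Python (ours, for illustration), assuming half the slips say “over 100” (coded 1) and half say “under 100” (coded 0); the 10,000-trial count is arbitrary.

import numpy as np
rnd = np.random.default_rng()

n_trials = 10_000
n_all_over = 0
for i in range(n_trials):
    # Five slips, each equally likely to be "over 100" (1) or "under 100" (0).
    slips = rnd.choice([0, 1], size=5)
    if np.sum(slips) == 5:  # All five slips said "over 100".
        n_all_over += 1
print(n_all_over / n_trials)  # Should be near 0.5 ** 5 = 0.03125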

+

Let’s illustrate the concept of conditional probability with four cards — two aces and two 3’s (or two black and two red). What is the probability of an ace? Obviously, 0.5. If you first draw an ace, what is the probability of an ace now? That is, what is the probability of an ace conditional on having drawn one already? Obviously not 0.5. It is now 1/3, because only one ace remains among the three cards left.

+

This change in the conditional probabilities is the basis of mathematician Edward Thorp’s famous system of card-counting to beat the casinos at blackjack (Twenty One).

+

Casinos can defeat card counting by using many decks at once so that conditional probabilities change more slowly, and are not very different than unconditional probabilities. Looking ahead, we will see that sampling with replacement, and sampling without replacement from a huge universe, are much the same in practice, so we can substitute one for the other at our convenience.
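As a quick arithmetic check of the many-decks point (our own check, not from the text), compare the chance of drawing a second ace, given that one ace has already been drawn, for one deck and for eight decks shuffled together:

print(4 / 52)                      # Unconditional chance of an ace: about 0.077.
print(3 / 51)                      # One deck, after one ace is gone: about 0.059.
print((4 * 8 - 1) / (52 * 8 - 1))  # Eight decks, after one ace is gone: about 0.075.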

+

Let’s further illustrate the concept of conditional probability with a puzzle (from Gardner 2001, 288). “… shuffle a packet of four cards — two red, two black — and deal them face down in a row. Two cards are picked at random, say by placing a penny on each. What is the probability that those two cards are the same color?”

+

1. Play the game with the cards 100 times, and estimate the probability sought.

+

OR

+
    +
  1. Put slips with the numbers “1,” “1,” “2,” and “2” in a hat, or in an array named N on a computer.
  2. Shuffle the slips of paper by shaking the hat or shuffling the array (of which more below).
  3. Take two slips of paper from the hat or from N, to get two numbers.
  4. Call the first number you selected A and the second B.
  5. Are A and B the same? If so, record “Yes”, otherwise “No”.
  6. Repeat steps (2-5) 10000 times, and count the proportion of “Yes” results. That proportion equals the probability we seek to estimate.
+

Before we proceed to do this procedure in Python, we need a command to shuffle an array.

+
+
+

8.14 Shuffling with rnd.permuted

+

In the recipe above, the array N has four values:

+
+
# Numbers representing the slips in the hat.
+N = np.array([1, 1, 2, 2])
+
+

For the physical simulation, we specified that we would shuffle the slips of paper with these numbers, meaning that we would jumble them up into a random order. When we have done this, we will select two slips — say the first two — from the shuffled slips.

+

As we will be discussing more in various places, this shuffle-then-draw procedure is also called resampling without replacement. The without replacement idea refers to the fact that, after shuffling, we take a first virtual slip of paper from the shuffled array, and then a second — but we do not replace the first slip of paper into the shuffled array before drawing the second. For example, say I drew a “1” from N for the first value. If I am sampling without replacement then, when I draw the next value, the candidates I am choosing from are now “1”, “2” and “2”, because I have removed the “1” I got as the first value. If I had instead been sampling with replacement, then I would put back the “1” I had drawn, and would draw the second sample from the full set of “1”, “1”, “2”, “2”.

+

You can use rnd.permuted to shuffle an array into a random order.

+

Like rnd.choice, rnd.permuted is a function (actually, a method) of rnd that takes an array as input, and produces a version of the array where the elements are in random order.

+
+
# The array N, shuffled into a random order.
+shuffled = rnd.permuted(N)
+# The "slips" are now in random order.
+shuffled
+
+
array([2, 2, 1, 1])
+
+
+

See Section 11.4 for some more discussion of shuffling and sampling without replacement.

+
+
+

8.15 Code answers to the cards and pennies problem

+
+

Start of cards_pennies notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
# Numbers representing the slips in the hat.
+N = np.array([1, 1, 2, 2])
+
+# An array in which we will store the result of each trial.
+z = np.repeat(['No result yet'], 10000)
+
+for i in range(10000):
+    # Shuffle the numbers in N into a random order.
+    shuffled = rnd.permuted(N)
+
+    A = shuffled[0]  # The first slip from the shuffled array.
+    B = shuffled[1]  # The second slip from the shuffled array.
+
+    # Set the result of this trial.
+    if A == B:
+        z[i] = 'Yes'
+    else:
+        z[i] = 'No'
+
+# How many times did we see "Yes"?
+k = np.sum(z == 'Yes')
+
+# The proportion.
+kk = k / 10000
+
+print(kk)
+
+
0.337
+
+
+

Now let’s play the game differently, first picking one card and putting it back and shuffling before picking a second card. What are the results now? You can try it with the cards, but here is another program, similar to the last, to run that variation.

+
+
# The cards / pennies game - but replacing the slip and re-shuffling, before
+# drawing again.
+
+# An array in which we will store the result of each trial.
+z = np.repeat(['No result yet'], 10000)
+
+for i in range(10000):
+    # Shuffle the numbers in N into a random order.
+    first_shuffle = rnd.permuted(N)
+    # Draw a slip of paper.
+    A = first_shuffle[0]  # The first slip.
+
+    # Shuffle again (with all the slips).
+    second_shuffle = rnd.permuted(N)
+    # Draw a slip of paper.
+    B = second_shuffle[0]  # The second slip.
+
+    # Set the result of this trial.
+    if A == B:
+        z[i] = 'Yes'
+    else:
+        z[i] = 'No'
+
+# How many times did we see "Yes"?
+k = np.sum(z == 'Yes')
+
+# The proportion.
+kk = k / 10000
+
+print(kk)
+
+
0.5072
+
+
+

End of cards_pennies notebook

+
+

Why do you get different results in the two cases? Let’s ask the question differently: What is the probability of first picking a black card? Clearly, it is 50-50, or 0.5. Now, if you first pick a black card, what is the probability in the first game above of getting a second black card? There are two red cards and one black card left, so now p = 1/3.

+

But in the second game, what is the probability of picking a second black card if the first one you pick is black? It is still 0.5 because we are sampling with replacement.

+

The probability of picking a second black card conditional on picking a first black card in the first game is 1/3, and it is different from the unconditional probability of picking a black card first. But in the second game the probability of the second black card conditional on first picking a black card is the same as the probability of the first black card.

+

So the reason you lose money if you play the first game at even odds against a carnival game operator is that the conditional probability differs from the original probability.

+

And an illustrative joke: The best way to avoid there being a live bomb aboard your plane flight is to take an inoperative bomb aboard with you; the probability of one bomb is very low, and by the multiplication rule, the probability of two bombs is very, very low. Two hundred years ago the same joke was told about the midshipman who, during a battle, stuck his head through a hole in the ship’s side that had just been made by an enemy cannon ball, because he had heard that the probability of two cannonballs striking in the same place was one in a million.

+

What’s wrong with the logic in the joke? The probability of there being a bomb aboard already, conditional on your bringing a bomb aboard, is the same as the conditional probability if you do not bring a bomb aboard. Hence you change nothing by bringing a bomb aboard, and do not reduce the probability of an explosion.

+
+
+

8.16 The Commanders again, plus leaving the game early

+

Let’s carry exactly the same process one tiny step further. Assume that if the Commanders win, there is a 0.3 chance you will leave the game early. Now let us ask the probability of a nice day, the Commanders winning, and you leaving early. You should be able to see that this probability can be estimated with three buckets instead of two. Or it can be computed with the multiplication rule as 0.65 * 0.7 * 0.3 = 0.1365 (about 0.14) — the probability of a nice day and a win and you leave early.
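Here is one way that three-bucket simulation might look in code (a sketch of ours, with the probabilities taken from the text); each trial draws once from each bucket and scores the trial only when all three results come up together.

import numpy as np
rnd = np.random.default_rng()

n_trials = 10_000
n_all_three = 0
for i in range(n_trials):
    weather = rnd.choice(['nice', 'nasty'], p=[0.7, 0.3])
    game = rnd.choice(['win', 'lose'], p=[0.65, 0.35])
    you = rnd.choice(['leave early', 'stay'], p=[0.3, 0.7])
    if weather == 'nice' and game == 'win' and you == 'leave early':
        n_all_three += 1
print(n_all_three / n_trials)  # Should be near 0.7 * 0.65 * 0.3 = 0.1365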

+

The book shows you the formal method — the multiplication rule, in this case — for several reasons:

  1. Simulation is weak with very low probabilities, e.g. P(50 heads in 50 throws). But — a big but — statistics and probability are seldom concerned with very small probabilities. Even for games like poker, the orders of magnitude of 5 aces in a wild game with joker, or of a royal flush, matter little.
  2. The multiplication rule is wonderfully handy and convenient for quick calculations in a variety of circumstances. A back-of-the-envelope calculation can be quicker than a simulation. And it can also be useful in situations where the probability you will calculate will be very small, in which case simulation can require considerable computer time to be accurate. (We will shortly see this point illustrated in the case of estimating the rate of transmission of AIDS by surgeons.)
  3. It is useful to know the theory so that you are able to talk to others, or if you go on to other courses in the mathematics of probability and statistics.

+

The multiplication rule also has the drawback of sometimes being confusing, however. If you are in the slightest doubt about whether the circumstances are correct for applying it, you will be safer to perform a simulation as we did earlier with the Commanders, though in practice you are likely to simulate with the aid of a computer program, as we shall see shortly. So use the multiplication rule only when there is no possibility of confusion. Usually that means using it only when the events under consideration are independent.

+

Notice that the same multiplication rule gives us the probability of any particular sequence of hits and misses — say, a miss, then a hit, then a hit if the probability of a single miss is 2/3. Among the 2/3 of the trials with misses on the first shot, 1/3 will next have a hit, so 2/3 x 1/3 equals the probability of a miss then a hit. Of those 2/9 of the trials, 1/3 will then have a hit, or 2/3 x 1/3 x 1/3 = 2/27 equals the probability of the sequence miss-hit-hit.
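Here is a minimal simulation sketch (ours) of that miss-hit-hit calculation, assuming three independent shots with a 1/3 chance of a hit each; the trial count is arbitrary.

import numpy as np
rnd = np.random.default_rng()

n_trials = 30_000
n_miss_hit_hit = 0
for i in range(n_trials):
    # Three independent shots, each with a 1/3 chance of a hit.
    shots = rnd.choice(['hit', 'miss'], size=3, p=[1/3, 2/3])
    if list(shots) == ['miss', 'hit', 'hit']:
        n_miss_hit_hit += 1
print(n_miss_hit_hit / n_trials)  # Should be near 2/3 * 1/3 * 1/3 = 2/27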

+

The multiplication rule is very useful in everyday life. It fits closely to a great many situations such as “What is the chance that it will rain (.3) and that (if it does rain) the plane will not fly (.8)?” Hence the probability of your not leaving the airport today is 0.3 x 0.8 = 0.24.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/probability_theory_1b.html b/python-book/probability_theory_1b.html new file mode 100644 index 00000000..e4614c09 --- /dev/null +++ b/python-book/probability_theory_1b.html @@ -0,0 +1,802 @@ + + + + + + + + + +Resampling statistics - 9  Probability Theory Part I (continued) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

9  Probability Theory Part I (continued)

+
+ + + +
+ + + + +
+ + +
+ +
+

9.1 The special case of independence

+

A key concept in probability and statistics is that of the independence of two events in which we are interested. Two events are said to be “independent” when one of them does not have any apparent relationship to the other. If I flip a coin that I know from other evidence is a fair coin, and I get a head, the chance of then getting another head is still 50-50 (one in two, or one to one.) And, if I flip a coin ten times and get heads the first nine times, the probability of getting a head on the tenth flip is still 50-50. Hence the concept of independence is characterized by the phrase “The coin has no memory.” (Actually the matter is a bit more complicated. If you had previously flipped the coin many times and knew it to be a fair coin, then the odds would still be 50-50, even after nine heads. But, if you had never seen the coin before, the run of nine heads might reasonably make you doubt that the coin was a fair one.)

+

In the Washington Commanders example above, we needed a different set of buckets to estimate the probability of a nice day plus a win, and of a nasty day plus a win. But what if the Commanders’ chances of winning are the same whether the day is nice or nasty? If so, we say that the chance of winning is independent of the kind of day. That is, in this special case,

+

\[
P(\text{win | nice day}) = P(\text{win | nasty day}) \text{ and } P(\text{nice day and win})
\]

+

\[
= P(\text{nice day}) * P(\text{winning | nice day})
\]

+

\[
= P(\text{nice day}) * P(\text{winning})
\]

+
+
+
+ +
+
+ +
+
+
+

See Section 8.13 for an explanation of this notation.

+
+
+

In this case we need only one set of two buckets to make all the estimates.

+

Independence means that the elements are drawn from 2 or more separate sets of possibilities. That is, \(P(A | B) = P(A | \ \hat{} B) = P(A)\) and vice versa.

+ +

In other words, if the occurrence of the first event does not change the probability that the second event will occur, then the events are independent.

+

Another way to put the matter: Events A and B are said to be independent of each other if knowing whether A occurs does not change the probability that B will occur, and vice versa. If knowing whether A does occur alters the probability of B occurring, then A and B are dependent.

+

If two events are independent, the multiplication rule simplifies to \(P(A \text{ and } B) = P(A) * P(B)\). I’ll repeat once more: This rule is simply a mathematical shortcut, and one can make the desired estimate by simulation.

+

Also again, if two events are not independent — that is, if \(P(A | B)\) is not equal to \(P(A)\) because \(P(A)\) is dependent upon the occurrence of \(B\) — then the formula to be used now is \(P(A \text{ and } B) = P(A | B) * P(B)\), which is sufficiently confusing that you are probably better off with a simulation.
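As a small illustration of that dependent-case formula (our sketch, not from the original text), consider drawing two cards from a 52-card deck without replacement and asking for the chance that both are aces, where \(P(A \text{ and } B) = P(A | B) * P(B) = 3/51 * 4/52\). The deck coding below (1 for an Ace, up to 13 for a King) matches the card examples later in the book.

import numpy as np
rnd = np.random.default_rng()

deck = np.repeat(np.arange(1, 14), 4)  # Four copies of values 1 through 13.

n_trials = 50_000
n_two_aces = 0
for i in range(n_trials):
    # Two cards drawn without replacement, so the draws are dependent.
    cards = rnd.choice(deck, size=2, replace=False)
    if cards[0] == 1 and cards[1] == 1:
        n_two_aces += 1
print(n_two_aces / n_trials)  # Simulated estimate.
print((4 / 52) * (3 / 51))    # The formula: about 0.0045.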

+

What about if each of the probabilities is dependent on the other outcome? There is no easy formulaic method to deal with such a situation.

+

People commonly make the mistake of treating independent events as non-independent, perhaps from superstitious belief. After a long run of blacks, roulette gamblers say that the wheel is “due” to come up red. And sportswriters make a living out of interpreting various sequences of athletic events that occur by chance, and they talk of teams that are “due” to win because of the “Law of Averages.” For example, if Barry Bonds goes to bat four times without a hit, all of us (including trained statisticians who really know better) feel that he is “due” to get a hit and that the probability of his doing so is very high — higher, that is, than his season’s average. The so-called “Law of Averages” implies no such thing, of course.

+

Events are often dependent in subtle ways. A boy may telephone one of several girls chosen at random. But, if he calls the same girl again (or if he does not call her again), the second event is not likely to be independent of the first. And the probability of his calling her is different after he has gone out with her once than before he went out with her.

+

As noted in the section above, events A and B are said to be independent of each other if the conditional probabilities of A and B remain the same. And the conditional probabilities remain the same if sampling is conducted with replacement.

+ +

Let’s now re-consider the multiplication rule with the special but important case of independence.

+
+

9.1.1 Example: Four Events in a Row — The Multiplication Rule

+

Assume that we want to know the probability of four successful archery shots in a row, where the probability of a success on a given shot is .25.

+

Instead of simulating the process with resampling trials we can, if we wish, arrive at the answer with the “multiplication rule.” This rule says that the probability that all of a given number of independent events (the successful shots) will occur (four out of four in this case) is the product of their individual probabilities — in this case, 1/4 x 1/4 x 1/4 x 1/4 = 1/256. If in doubt about whether the multiplication rule holds in any given case, however, you may check by resampling simulation. For the case of four daughters in a row, assuming that the probability of a girl is .5, the probability is 1/2 x 1/2 x 1/2 x 1/2 = 1/16.

+

Better yet, we’d use the more exact probability of getting a girl: \(100/206\), and multiply out the result as \((100/206)^4\). An important point here, however: we have estimated the probability of a particular family having four daughters as 1 in 16 — that is, odds of 15 to 1. But note well: This is a very different idea from stating that the odds are 15 to 1 against some family’s having four daughters in a row. In fact, as many families will have four girls in a row as will have boy-girl-boy-girl in that order or girl-boy-girl-boy or any other series of four children. The chances against any particular series is the same — 1 in 16 — and one-sixteenth of all four-children families will have each of these series, on average. This means that if your next-door neighbor has four daughters, you cannot say how much “out of the ordinary” the event is. It is easy to slip into unsound thinking about this matter.
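As a quick check of the arithmetic in the last two paragraphs (nothing new here, just the numbers given above):

print((1 / 2) ** 4)      # 1/16 = 0.0625, with P(girl) = 0.5
print((100 / 206) ** 4)  # About 0.056, with the more exact P(girl) = 100/206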

+ +

Why do we multiply the probabilities of the independent simple events to learn the probability that they will occur jointly (the composite event)? Let us consider this in the context of three basketball shots each with 1/3 probability of hitting.

+
+
+
+
+

+
Figure 9.1: Tree Diagram for 3 Basketball Shots, Probability of a Hit is 1/3
+
+
+
+
+

Figure 9.1 is a tree diagram showing a set of sequential simple events where each event is conditional upon a prior simple event. Hence every probability after the first is a conditional probability.

+

In Figure 9.1, follow the top path first. On approximately one-third of the occasions, the first shot will hit. Among that third of the first shots, roughly a third will again hit on the second shot, that is, 1/3 of 1/3 or 1/3 x 1/3 = 1/9. The top path makes it clear that in 1/3 x 1/3 = 1/9 of the trials, two hits in a row will occur. Then, of the 1/9 of the total trials in which two hits in a row occur, about 1/3 will go on to a third hit, or 1/3 x 1/3 x 1/3 = 1/27. Remember that we are dealing here with independent events; regardless of whether the player made his first two shots, the probability is still 1 in 3 on the third shot.

+
+
+
+

9.2 The addition of probabilities

+

Back to the Washington Commanders again. You ponder more deeply the possibility of a nasty day, and you estimate with more discrimination that the probability of snow is .1 and of rain is .2 (with .7 for a nice day). Now you wonder: What is the probability of a rainy day or a nice day?

+

To find this probability by simulation:

+
    +
  1. Put 7 blue balls (nice day), 1 black ball (snowy day) and 2 gray balls (rainy day) into a bucket. You want to know the probability of a blue or a gray ball. To find this probability:
  2. Draw one ball and record “yes” if its color is blue or gray, “no” otherwise.
  3. Replace the ball, and repeat step 2 perhaps 200 times.
  4. Find the proportion of “yes” trials.
+

This procedure certainly will do the job. And simulation may be unavoidable when the situation gets more complex. But in this simple case, you are likely to see that you can compute the probability by adding the .7 probability of a nice day and the .2 probability of a rainy day to get the desired probability. This procedure of formulaic deductive probability theory is called the addition rule.
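Here is one way the bucket simulation in the list above might look in code (our sketch); we use 10,000 trials rather than 200 to get a steadier estimate.

import numpy as np
rnd = np.random.default_rng()

# 7 blue (nice), 1 black (snow) and 2 gray (rain) balls, as in the list above.
bucket = np.array(['blue'] * 7 + ['black'] * 1 + ['gray'] * 2)

n_trials = 10_000
n_yes = 0
for i in range(n_trials):
    ball = rnd.choice(bucket)
    if ball == 'blue' or ball == 'gray':
        n_yes += 1
print(n_yes / n_trials)  # Should be near 0.7 + 0.2 = 0.9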

+
+
+

9.3 The addition rule

+

The addition rule applies to mutually exclusive outcomes — that is, the case where if one outcome occurs, the other(s) cannot occur; one event implies the absence of the other when events are mutually exclusive. Green and red coats are mutually exclusive if you never wear more than one coat at a time. If there are only two possible mutually-exclusive outcomes, the outcomes are complementary. It may be helpful to note that mutual exclusivity equals total dependence; if one outcome occurs, the other cannot. Hence we write formally that

+

\[
\text{If } P(A \text{ and } B) = 0 \text{ then}
\]

+

\[
P(A \text{ or } B) = P(A) + P(B)
\]

+

An outcome and its absence are mutually exclusive, and their probabilities add to unity.

+

\[
P(A) + P(\ \hat{} A) = 1
\]

+

Examples include a) rain and no rain, and b) if \(P(\text{sales > \$1 million}) = 0.2\), then \(P(\text{sales <= \$1 million}) = 0.8\).

+

As with the multiplication rule, the addition rule can be a useful shortcut. The answer can always be obtained by simulation, too.

+

We have so far implicitly assumed that a rainy day and a snowy day are mutually exclusive. But that need not be so; both rain and snow can occur on the same day; if we take this possibility into account, we cannot then use the addition rule.

+

Consider the case in which seven days in ten are nice, one day is rainy, one day is snowy, and one day is both rainy and snowy. What is the chance that it will be either nice or snowy? The procedure is just as before, except that some rainy days are included because they are also snowy.

+

When A and B are not mutually exclusive — when it is possible that the day might be both rainy and snowy, or you might wear both red and green coats on the same day, we write (in the latter case) P(red and green coats) > 0, and the appropriate formula is

+

\[
P(\text{red or green}) = P(\text{red}) + P(\text{green}) - P(\text{red and green})
\]

+ +

In this case as in much of probability theory, the simulation for the case in which the events are not mutually exclusive is no more complex than when they are mutually exclusive; indeed, if you simulate you never even need to know the concept of mutual exclusivity or inquire whether that is your situation. In contrast, the appropriate formula for non-exclusivity is more complex, and if one uses formulas one must inquire into the characteristics of the situation and decide which formula to apply depending upon the classification; if you classify wrongly and therefore apply the wrong formula, the result is a wrong answer.
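To illustrate that point, here is a sketch (ours) of the simulation for the non-mutually-exclusive weather example above; the day labels are our own coding of the 7 nice, 1 rainy, 1 snowy and 1 rainy-and-snowy days.

import numpy as np
rnd = np.random.default_rng()

days = np.array(['nice'] * 7 + ['rain'] * 1 + ['snow'] * 1 + ['rain and snow'] * 1)

n_trials = 10_000
n_nice_or_snowy = 0
for i in range(n_trials):
    day = rnd.choice(days)
    # A "rain and snow" day counts as snowy, so it counts as a "yes".
    if day in ('nice', 'snow', 'rain and snow'):
        n_nice_or_snowy += 1
print(n_nice_or_snowy / n_trials)  # Should be near (7 + 1 + 1) / 10 = 0.9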

+ +

To repeat, the addition rule only works when the probabilities you are adding are mutually exclusive — that is, when the two cannot occur together.

+

The multiplication and addition rules are as different from each other as mortar and bricks; both, however, are needed to build walls. The multiplication rule pertains to a single outcome composed of two or more elements (e.g. weather, and win-or-lose), whereas the addition rule pertains to two or more possible outcomes for one element. Drawing from a card deck (with replacement) provides an analogy: the addition rule is like one draw with two or more possible cards of interest, whereas the multiplication rule is like two or more cards being drawn with one particular “hand” being of interest.

+
+
+

9.4 Theoretical devices for the study of probability

+

It may help you to understand the simulation approach to estimating composite probabilities demonstrated in this book if you also understand the deductive formulaic approach. So we’ll say a bit about it here.

+

The most fundamental concept in theoretical probability is the list of events that may occur, together with the probability of each one (often arranged so as to be equal probabilities). This is the concept that Galileo employed in his great fundamental work in theoretical probability about four hundred years ago when a gambler asked Galileo about the chances of getting a nine rather than a ten in a game of three dice (though others such as Cardano had tackled the subject earlier). 1

+

Galileo wrote down all the possibilities in a tree form, a refinement for mapping out the sample space.

+

Galileo simply displayed the events themselves — such as “2,” “4,” and “4,” making up a total of 10, a specific event arrived at in a specific way. Several different events can lead to a 10 with three dice. If we now consider each of these events, we arrive at the concept of the ways that a total of 10 can arise. We ask the number of ways that an outcome can and cannot occur. (See the paragraph above). This is equivalent both operationally and linguistically to the paths in (say) the quincunx device or Pascal’s Triangle which we shall discuss shortly.

+

A tree is the most basic display of the paths in a given situation. Each branch of the tree — a unique path from the start on the left-hand side to the endpoint on the right-hand side — contains the sequence of all the elements that make up that event, in the order in which they occur. The right-hand ends of the branches constitute a list of the outcomes. That list includes all possible permutations — that is, it distinguishes among outcomes by the orders in which the particular die outcomes occur.

+
+
+

9.5 The Concept of Sample Space

+

The formulaic approach begins with the idea of sample space, which is the set of all possible outcomes of the “experiment” or other situation that interests us. Here is a formal definition from Goldberg (1986, 46):

+
+

A sample space S associated with a real or conceptual experiment is a set such that (1) each element of S denotes an outcome of the experiment, and (2) any performance of the experiment results in an outcome that corresponds to one and only one element of S.

+
+

Because the sum of the probabilities for all the possible outcomes in a given experimental trial is unity, the sum of the probabilities of all the events in the sample space (S) is 1.

+

Early on, people came up with the idea of estimating probabilities by arraying the possibilities for, and those against, the event occurring. For example, the coin could fall in three ways — head, tail, or on its side. They then speedily added the qualification that the possibilities in the list must have an equal chance, to distinguish the coin falling on its side from the other possibilities (so ignore it). Or, if it is impossible to make the probabilities equal, make special allowance for inequality. Working directly with the sample space is the method of first principles. The idea of a list was refined to the idea of sample space, and “for” and “against” were refined to the “success” and “failure” elements among the total elements.

+

The concept of sample space raises again the issue of how to estimate the simple probabilities. While we usually can estimate the probabilities accurately in gambling games because we ourselves construct the games and therefore control the probabilities that they produce, we have much less knowledge of the structures that underlie the important problems in life — in science, business, the stock market, medicine, sports, and so on. We therefore must wrestle with the issue of what probabilities we should include in our theoretical sample space, or in our experiments. Often we proceed by choosing as an analogy a physical “model” whose properties we know and which we consider to be appropriate — such as a gambling game with coins, dice, cards. This model becomes our idealized setup. But this step makes crystal-clear that judgment is heavily involved in the process, because choosing the analogy requires judgment.

+

A Venn diagram is another device for displaying the elements that make up an event. But unlike a tree diagram, it does not show the sequence of those elements; rather, it shows the extent of overlap among various classes of elements.

+

A Venn diagram expresses by areas (especially rectangular Venn diagrams) the numbers at the end of the branches in a tree.

+

Pascal’s Triangle is still another device. It aggregates the last permutation branches in the tree into combinations — that is, without distinguishing by order. It shows analytically (by tracing them) the various paths that lead to various combinations.

+

The study of the mathematics of probability is the study of calculational shortcuts to do what tree diagrams do. If you don’t care about the shortcuts, then you don’t need the formal mathematics — though it may improve your mathematical insight (or it may not). The resampling method dispenses not only with the shortcuts but also with the entire counting of points in the sample space.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/probability_theory_2_compound.html b/python-book/probability_theory_2_compound.html new file mode 100644 index 00000000..fc24af0e --- /dev/null +++ b/python-book/probability_theory_2_compound.html @@ -0,0 +1,1666 @@ + + + + + + + + + +Resampling statistics - 11  Probability Theory, Part 2: Compound Probability + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

11  Probability Theory, Part 2: Compound Probability

+
+ + + +
+ + + + +
+ + +
+ +
+

11.1 Introduction

+

In this chapter we will deal with what are usually called “probability problems” rather than the “statistical inference problems” discussed in later chapters. The difference is that for probability problems we begin with a knowledge of the properties of the universe with which we are working. (See Section 8.9 on the definition of resampling.)

+

We start with some basic problems in probability. To make sure we do know the properties of the universe we are working with, we start with poker, and a pack of cards. Working with some poker problems, we rediscover the fundamental distinction between sampling with and without replacement.

+
+
+

11.2 Introducing a poker problem: one pair (two of a kind)

+

What is the chance that the first five cards chosen from a deck of 52 (bridge/poker) cards will contain two (and only two) cards of the same denomination (two 3’s for example)? (Please forgive the rather sterile unrealistic problems in this and the other chapters on probability. They reflect the literature in the field for 300 years. We’ll get more realistic in the statistics chapters.)

+

We shall estimate the odds the way that gamblers have estimated gambling odds for thousands of years. First, check that the deck is a standard deck and is not missing any cards. (Overlooking such small but crucial matters often leads to errors in science.) Shuffle thoroughly until you are satisfied that the cards are randomly distributed. (It is surprisingly hard to shuffle well.) Then deal five cards, and mark down whether the hand does or does not contain a pair of the same denomination.

+

At this point, we must decide whether three of a kind, four of a kind or two pairs meet our criterion for a pair. Since our criterion is “two and only two,” we decide not to count them.

+

Then replace the five cards in the deck, shuffle, and deal again. Again mark down whether the hand contains one pair of the same denomination. Do this many times. Then count the number of hands with one pair, and figure the proportion (as a percentage) of all hands.

+

Table 11.1 has the results of 25 hands of this procedure.

+
+
+ + +++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 11.1: Results of 25 hands for the problem “one pair”
HandCard 1Card 2Card 3Card 4Card 5One pair?
1King ♢King ♠Queen ♠10 ♢6 ♠Yes
28 ♢Ace ♢4 ♠10 ♢3 ♣No
34 ♢5 ♣Ace ♢Queen ♡10 ♠No
43 ♡Ace ♡5 ♣3 ♢Jack ♢Yes
56 ♠King ♣6 ♢3 ♣3 ♡No
6Queen ♣7 ♢Jack ♠5 ♡8 ♡No
79 ♣4 ♣9 ♠Jack ♣5 ♠Yes
83 ♠3 ♣3 ♡5 ♠5 ♢Yes
9Queen ♢4 ♠Queen ♣6 ♡4 ♢No
10Queen ♠3 ♣7 ♠7 ♡8 ♢Yes
118 ♡9 ♠7 ♢8 ♠Ace ♡Yes
12Ace ♠9 ♡4 ♣2 ♠Ace ♢Yes
134 ♡3 ♣Ace ♢9 ♡5 ♡No
1410 ♣7 ♠8 ♣King ♣4 ♢No
15Queen ♣8 ♠Queen ♠8 ♣5 ♣No
16King ♡10 ♣Jack ♠10 ♢10 ♡No
17Queen ♠Queen ♡Ace ♡King ♢7 ♡Yes
185 ♢6 ♡Ace ♡4 ♡6 ♢Yes
193 ♠5 ♡2 ♢King ♣9 ♡No
208 ♠Jack ♢7 ♣10 ♡3 ♡No
215 ♢4 ♠Jack ♡2 ♠King ♠No
225 ♢4 ♢Jack ♣King ♢2 ♠No
23King ♡King ♠6 ♡2 ♠5 ♣Yes
248 ♠9 ♠6 ♣Ace ♣5 ♢No
25Ace ♢7 ♠4 ♡9 ♢9 ♠Yes
% Yes44%
+
+
+

In this series of 25 experiments, 44 percent of the hands contained one pair, and therefore 0.44 is our estimate (for the time being) of the probability that one pair will turn up in a poker hand. But we must notice that this estimate is based on only 25 hands, and therefore might well be fairly far off the mark (as we shall soon see).

+

This experimental “resampling” estimation does not require a deck of cards. For example, one might create a 52-sided die, one side for each card in the deck, and roll it five times to get a “hand.” But note one important part of the procedure: No single “card” is allowed to come up twice in the same set of five spins, just as no single card can turn up twice or more in the same hand. If the same “card” did turn up twice or more in a dice experiment, one could pretend that the roll had never taken place; this procedure is necessary to make the dice experiment analogous to the actual card-dealing situation under investigation. Otherwise, the results will be slightly in error. This type of sampling is “sampling without replacement,” because each card is not replaced in the deck prior to dealing the next card (that is, prior to the end of the hand).

+
+
+

11.3 A first approach to the one-pair problem with code

+

We could also approach this problem using random numbers from the computer to simulate the values.

+

Let us first make some numbers from which to sample. We want to simulate a deck of playing cards analogous to the real cards we used previously. We don’t need to simulate all the features of a deck, but only the features that matter for the problem at hand. In our case, the feature that matters is the face value. We require a deck with four “1”s, four “2”s, etc., up to four “13”s, where 1 is an Ace, and 13 is a King. The suits don’t matter for our present purposes.

+

We first make an array to represent the face values in one suit.

+
+
# Card values 1 through 13 (1 up to, not including 14).
+one_suit = np.arange(1, 14)
+one_suit
+
+
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
+
+
+

We have the face values for one suit, but we need the face values for the whole deck of cards — four suits. We do this by making a new array that consists of four repeats of one_suit:

+
+
# Repeat the one_suit array four times
+deck = np.repeat(one_suit, 4)
+deck
+
+
array([ 1,  1,  1,  1,  2,  2,  2,  2,  3,  3,  3,  3,  4,  4,  4,  4,  5,
+        5,  5,  5,  6,  6,  6,  6,  7,  7,  7,  7,  8,  8,  8,  8,  9,  9,
+        9,  9, 10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 13, 13, 13,
+       13])
+
+
+
+
+

11.4 Shuffling the deck with Python

+

At this point we have a complete deck in the variable deck. But that “deck” is ordered by value, first ones (Aces), then 2s and so on. If we do not shuffle the deck, the results will be predictable. Therefore, we would like to select five of these “cards” (52 values) at random. There are two ways of doing this. The first is to use the rnd.choice tool in the familiar way, to choose 5 values at random from this strictly ordered deck. We want to draw these cards without replacement (of which more later). Without replacement means that once we have drawn a particular value, we cannot draw that value a second time — just as you cannot get the same card twice in a hand when the dealer deals you a hand of five cards.

+
+

So far, each of our uses of rnd.choice has done sampling with replacement, where you can get the same item more than once in a particular sample. Here we need without replacement. rnd.choice has an argument you can send, called replace, to tell it whether to replace values when drawing the sample. We have not used that argument so far, because the default is True — sampling with replacement. Here we need to use the argument — replace=False — to get sampling without replacement.

+
+
+
# One hand, sampling from the deck without replacement.
+hand = rnd.choice(deck, size=5, replace=False)
+hand
+
+
array([ 9,  4, 11,  9, 13])
+
+
+

The above is one way to get a random hand of five cards from the deck. Another way is to use the rnd.permuted function to shuffle the whole deck of 52 “cards” into a random order, just as a dealer would shuffle the deck before dealing. Then we could take — for example — the first five cards from the shuffled deck to give a random hand. See Section 8.14 for more on rnd.permuted.

+
+
# Shuffle the whole 52 card deck.
+shuffled = rnd.permuted(deck)
+# The "cards" are now in random order.
+shuffled
+
+
array([12, 13,  2,  9,  6,  7,  7,  7, 11, 13,  2,  8,  6,  9,  4,  1,  5,
+       12, 11,  9,  1,  2,  4,  2,  3,  3, 11,  6,  4, 11,  8,  7, 13,  8,
+       12,  5,  4,  5,  9,  8,  5,  6,  3,  1,  1, 12,  3, 13, 10, 10, 10,
+       10])
+
+
+

Now we can get our hand by taking the first five cards from the deck:

+
+
# Select the first five "cards" from the shuffled deck.
+hand = shuffled[:5]
+hand
+
+
array([12, 13,  2,  9,  6])
+
+
+

You have seen that we can use one of two procedures to a get random sample of five cards from deck, drawn without replacement:

+
    +
  1. Using rnd.choice with size=5 and replace=False to take the random sample directly from deck, or
  2. shuffling the entire deck and then taking the first five “cards” from the result of the shuffle.
+

Either is a valid way of getting five cards at random from the deck. It’s up to us which to choose — we slightly prefer to shuffle and take the first five, because it is more like the physical procedure of shuffling the deck and dealing, but the choice is up to you.

+
+

11.4.1 A first-pass computer solution to the one-pair problem

+

Choosing the shuffle-then-deal way, the cell to generate one hand is:

+
+
shuffled = rnd.permuted(deck)
+hand = shuffled[:5]
+hand
+
+
array([ 7,  4, 12,  1,  2])
+
+
+

Without doing anything further, we could run this cell many times, and each time, we could note down whether the particular hand had exactly one pair or not.

+

Table 11.2 has the result of running that procedure 25 times:

+
+
+ + +++++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 11.2: Results of 25 hands using random numbers
HandCard 1Card 2Card 3Card 4Card 5One pair?
110571212Yes
269268Yes
3118961No
481021112No
51101185No
6810395No
71091319Yes
81343115No
9714136No
101151184Yes
117107139Yes
12211478No
131213102No
141021181No
151612125Yes
1648786Yes
17710944Yes
1834111112Yes
1910122131No
20964134Yes
2173397No
221341058No
23132988Yes
245127118No
25758107Yes
% Yes48%
+
+
+
+
+
+

11.5 Finding exactly one pair using code

+

Thus far we have had to look ourselves at the set of cards, or at the numbers, and decide if there was exactly one pair. We would like the computer to do this for us. Let us stay with the numbers we generated above by dealing the random hand from the deck of numbers. To find pairs, we will go through the following procedure:

+
    +
  • For each possible value (1 through 13), count the number of times each value has occurred in hand. Call the result of this calculation — repeat_nos.
  • +
  • Select repeat_nos values equal to 2;
  • +
  • Count the number of “2” values in repeat_nos. This is the number of pairs, and excludes three of a kind or four of a kind.
  • +
  • If the number of pairs is exactly one, label the hand as “Yes”, otherwise label it as “No”.
  • +
+
+
+

11.6 Finding number of repeats using np.bincount

+

Consider the following 5-card “hand” of values:

+
+
hand = np.array([5, 7, 5, 4, 7])
+
+

This hand represents a pair of 5s and a pair of 7s.

+

We want to detect the number of repeats for each possible card value, 1 through 13. Let’s say we are looking for 5s. We can detect which of the values are equal to 5 by making a Boolean array, where there is True for a value equal to 5, and False otherwise:

+
+
is_5 = (hand == 5)
+is_5
+
+
array([ True, False,  True, False, False])
+
+
+

We can then count the number of 5s with:

+
+
np.sum(is_5)
+
+
2
+
+
+

In one cell:

+
+
number_of_5s = np.sum(hand == 5)
+number_of_5s
+
+
2
+
+
+

We could do this laborious task for every possible card value (1 through 13):

+
+
number_of_1s = np.sum(hand == 1)  # Number of aces in hand
+number_of_2s = np.sum(hand == 2)  # Number of 2s in hand
+number_of_3s = np.sum(hand == 3)
+number_of_4s = np.sum(hand == 4)
+number_of_5s = np.sum(hand == 5)
+number_of_6s = np.sum(hand == 6)
+number_of_7s = np.sum(hand == 7)
+number_of_8s = np.sum(hand == 8)
+number_of_9s = np.sum(hand == 9)
+number_of_10s = np.sum(hand == 10)
+number_of_11s = np.sum(hand == 11)
+number_of_12s = np.sum(hand == 12)
+number_of_13s = np.sum(hand == 13)  # Number of Kings in hand.
+
+

Above, we store the result for each card in a separate variable; this is inconvenient, because we would have to go through each variable checking for a pair (a value of 2). It would be more convenient to store these results in an array. One way to do that would be to store the result for card value 1 at position (index, offset) 1, the result for value 2 at position 2, and so on, like this:

+
+
# Make array length 14.  We don't use position (offset) 0, and the last
+# position (offset) in this array will be 13.
+repeat_nos = np.zeros(14)
+repeat_nos[1] = np.sum(hand == 1)  # Number of aces in hand
+repeat_nos[2] = np.sum(hand == 2)  # Number of 2s in hand
+repeat_nos[3] = np.sum(hand == 3)
+repeat_nos[4] = np.sum(hand == 4)
+repeat_nos[5] = np.sum(hand == 5)
+repeat_nos[6] = np.sum(hand == 6)
+repeat_nos[7] = np.sum(hand == 7)
+repeat_nos[8] = np.sum(hand == 8)
+repeat_nos[9] = np.sum(hand == 9)
+repeat_nos[10] = np.sum(hand == 10)
+repeat_nos[11] = np.sum(hand == 11)
+repeat_nos[12] = np.sum(hand == 12)
+repeat_nos[13] = np.sum(hand == 13)  # Number of Kings in hand.
+# Show the result
+repeat_nos
+
+
array([0., 0., 0., 0., 1., 2., 0., 2., 0., 0., 0., 0., 0., 0.])
+
+
+

You may recognize all this repetitive typing as a good sign we could use a for loop to do the work — er — for us.

+
+
repeat_nos = np.zeros(14)
+for i in range(14):  # Set i to be first 0, then 1, ... through 13.
+    repeat_nos[i] = np.sum(hand == i)
+# Show the result
+repeat_nos
+
+
array([0., 0., 0., 0., 1., 2., 0., 2., 0., 0., 0., 0., 0., 0.])
+
+
+
+

Notice that we started our loop by checking for values equal to 0, and then values equal to 1 and so on. By our definition of the deck, no card can have value 0, so the first time through this loop, we will always get a count of 0. We could have saved ourselves a tiny amount of computing time if we had missed out that pointless step of checking 0, by using for i in range(1, 14): instead. In this case, we think the code is a little bit neater to read if we leave in the default start at 0, at a tiny cost in wasted computer effort.

+
+

In our particular hand, after we have done the count for 7s, we will always get 0 for card values 8, 9 … 13, because 7 was the highest card (maximum value) for our particular hand. As you might expect, there is a NumPy function np.max that will quickly tell us the maximum value in the hand:

+
+
np.max(hand)
+
+
7
+
+
+

We can use np.max to make our loop more efficient, by stopping our checks when we’ve reached the maximum value, like this:

+
+
max_value = np.max(hand)
+# Only make an array large enough to house counts for the max value.
+repeat_nos = np.zeros(max_value + 1)
+for i in range(max_value + 1):  # Set i to 0, then 1 ... through max_value
+    repeat_nos[i] = np.sum(hand == i)
+# Show the result
+repeat_nos
+
+
array([0., 0., 0., 0., 1., 2., 0., 2.])
+
+
+

In fact, this is exactly what the function np.bincount does, so we can use that function instead of our loop, to do the same job:

+
+
repeat_nos = np.bincount(hand)
+repeat_nos
+
+
array([0, 0, 0, 0, 1, 2, 0, 2])
+
+
+
+
+

11.7 Looking for hands with exactly one pair

+

Now we have repeat_nos, we can proceed with the rest of the steps above.

+

We can count the number of cards that have exactly two repeats:

+
+
(repeat_nos == 2)
+
+
array([False, False, False, False, False,  True, False,  True])
+
+
+
+
n_pairs = np.sum(repeat_nos == 2)
+# Show the result
+n_pairs
+
+
2
+
+
+

The hand is of interest to us only if the number of pairs is exactly 1:

+
+
# Check whether there is exactly one pair in this hand.
+n_pairs == 1
+
+
False
+
+
+

We now have the machinery to use Python for all the logic in simulating multiple hands, and checking for exactly one pair.

+

Let’s do that, and use Python to do the full job of dealing many hands and finding pairs in each one. We repeat the procedure above using a for loop. The for loop commands the program to do ten thousand repeats of the statements in the “loop” (indented statements).

+

In the body of the loop (the part that gets repeated for each trial) we:

+
    +
  • Shuffle the deck.
  • +
  • Deal ourselves a new hand.
  • +
  • Calculate the repeat_nos for this new hand.
  • +
  • Calculate the number of pairs from repeat_nos; store this as n_pairs.
  • +
  • Put n_pairs for this repetition into the correct place in the scoring array z.
  • +
+

With that we end a single trial, and go back to the beginning, until we have done this 10000 times.

+

When those 10000 repetitions are over, the computer moves on to count (sum) the number of “1’s” in the score-keeping array z, each “1” indicating a hand with exactly one pair. We store this count at location k. We divide k by 10000 to get the proportion of hands that had one pair, and we print the resulting proportion kk to the screen.

+
+

Start of one_pair notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
# Create a bucket (vector) called a with four "1's," four "2's," four "3's,"
+# etc., to represent a deck of cards
+one_suit = np.arange(1, 14)
+one_suit
+
+
array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
+
+
+
+
# Repeat values for one suit four times to make a 52 card deck of values.
+deck = np.repeat(one_suit, 4)
+deck
+
+
array([ 1,  1,  1,  1,  2,  2,  2,  2,  3,  3,  3,  3,  4,  4,  4,  4,  5,
+        5,  5,  5,  6,  6,  6,  6,  7,  7,  7,  7,  8,  8,  8,  8,  9,  9,
+        9,  9, 10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 13, 13, 13,
+       13])
+
+
+
+
# Array to store result of each trial.
+z = np.zeros(10000)
+
+# Repeat the following steps 10000 times
+for i in range(10000):
+    # Shuffle the deck
+    shuffled = rnd.permuted(deck)
+
+    # Take the first five cards to make a hand.
+    hand = shuffled[:5]
+
+    # How many pairs?
+    # Counts for each card rank.
+    repeat_nos = np.bincount(hand)
+    n_pairs = np.sum(repeat_nos == 2)
+
+    # Keep score of # of pairs
+    z[i] = n_pairs
+
+    # End loop, go back and repeat
+
+# How often was there 1 pair?
+k = np.sum(z == 1)
+
+# Convert to proportion.
+kk = k / 10000
+
+# Show the result.
+print(kk)
+
+
0.4191
+
+
+

End of one_pair notebook

+
+

In one run of the program, the result in kk was 0.419, so our estimate would be that the probability of a single pair is 0.419.

+

How accurate are these resampling estimates? The accuracy depends on the number of hands we deal — the more hands, the greater the accuracy. If we were to examine millions of hands, 42 percent would contain a pair each; that is, the chance of getting a pair in the long run is 42 percent. It turns out the estimates of 44 percent and 48 percent, based on the 25-hand samples in Table 11.1 and Table 11.2, are fairly close to the long-run estimate, though whether or not they are close enough depends on one’s needs of course. If you need great accuracy, deal many more hands.

+

A note on deck, hand, repeat_nos and the other names in the above program: these “variables” are called “arrays” in Python. An array is a sequence of elements that gets filled with numbers as Python conducts its operations.

+

To help keep things straight (though the program does not require it), we often use z to name the array that collects all the trial results, and k to denote our overall summary results. Or you could call it something like scoreboard — it’s up to you.

+

How many trials (hands) should be made for the estimate? There is no easy answer.1 One useful device is to run several (perhaps ten) equal sized sets of trials, and then examine whether the proportion of pairs found in the entire group of trials is very different from the proportions found in the various subgroup sets. If the proportions of pairs in the various subgroups differ greatly from one another or from the overall proportion, then keep running additional larger subgroups of trials until the variation from one subgroup to another is sufficiently small for your purposes. While such a procedure would be impractical using a deck of cards or any other physical means, it requires little effort with the computer and Python.
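Here is a rough sketch (ours) of that subgroup check, re-using the same dealing procedure as the notebook above, with ten subgroups of 1000 hands each.

import numpy as np
rnd = np.random.default_rng()

deck = np.repeat(np.arange(1, 14), 4)

for subgroup in range(10):
    n_one_pair = 0
    for i in range(1000):
        hand = rnd.permuted(deck)[:5]
        repeat_nos = np.bincount(hand)
        if np.sum(repeat_nos == 2) == 1:
            n_one_pair += 1
    # If these ten proportions differ greatly, run more or larger subgroups.
    print(n_one_pair / 1000)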

+
+
+

11.8 Two more introductory poker problems

+

Which is more likely, a poker hand with two pairs, or a hand with three of a kind? This is a comparison problem, rather than a problem in absolute estimation as was the previous example.

+

In a series of 100 “hands” that were “dealt” using random numbers, four hands contained two pairs, and two hands contained three of a kind. Is it safe to say, on the basis of these 100 hands, that hands with two pairs are more frequent than hands with three of a kind? To check, we deal another 300 hands. Among them we see fifteen hands with two pairs (3.75 percent) and eight hands with three of a kind (2 percent), for a total of nineteen to ten. Although the difference is not enormous, it is reasonably clear-cut. Another 400 hands might be advisable, but we shall not bother.

+

Earlier I obtained one pair in forty-four percent of the hands dealt, which makes it quite plain that one pair is more frequent than either two pairs or three-of-a-kind. Obviously, we need more hands to compare the odds in favor of two pairs with the odds in favor of three-of-a-kind than to compare those for one pair with those for either two pairs or three-of-a-kind. Why? Because the difference in odds between one pair, and either two pairs or three-of-a-kind, is much greater than the difference in odds between two pairs and three-of-a-kind. This observation leads to a general rule: The closer the odds between two events, the more trials are needed to determine which has the higher odds.

+

Again it is interesting to compare the odds with the formulaic mathematical computations, which are 1 in 21 (4.75 percent) for a hand containing two pairs and 1 in 47 (2.1 percent) for a hand containing three-of-a-kind — not too far from the estimates of .0375 and .02 derived from simulation.

+

To handle the problem with the aid of the computer, we simply need to estimate the proportion of hands having triplicates and the proportion of hands with two pairs, and compare those estimates.

+

To estimate the hands with three-of-a-kind, we can use a notebook just like “One Pair” earlier, except using repeat_nos == 3 to search for triplicates instead of duplicates. The program, then, is:

+
+

Start of three_of_a_kind notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
# Create a bucket (vector) called a with four "1's," four "2's," four "3's,"
+# etc., to represent a deck of cards
+one_suit = np.arange(1, 14)
+# Repeat values for one suit four times to make a 52 card deck of values.
+deck = np.repeat(one_suit, 4)
+
+
+
triples_per_trial = np.zeros(10000)
+
+# Repeat the following steps 10000 times
+for i in range(10000):
+    # Shuffle the deck
+    shuffled = rnd.permuted(deck)
+
+    # Take the first five cards.
+    hand = shuffled[:5]
+
+    # How many triples?
+    repeat_nos = np.bincount(hand)
+    n_triples = np.sum(repeat_nos == 3)
+
+    # Keep score of # of triples
+    triples_per_trial[i] = n_triples
+
+    # End loop, go back and repeat
+
+# How often was there exactly one three-of-a-kind?
+n_triples = np.sum(triples_per_trial == 1)
+
+# Convert to proportion
+print(n_triples / 10000)
+
+
0.0272
+
+
+

End of three_of_a_kind notebook

+
+

To estimate the probability of getting a two-pair hand, we revert to the original program (counting pairs), except that we examine all the results in the score-keeping array (here called pairs_per_trial) for hands in which we had two pairs, instead of one.

+
+

Start of two_pairs notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+one_suit = np.arange(1, 14)
+deck = np.repeat(one_suit, 4)
+
+
+
pairs_per_trial = np.zeros(10000)
+
+# Repeat the following steps 10000 times
+for i in range(10000):
+    # Shuffle the deck
+    shuffled = rnd.permuted(deck)
+
+    # Take the first five cards.
+    hand = shuffled[:5]
+
+    # How many pairs?
+    # Counts for each card rank.
+    repeat_nos = np.bincount(hand)
+    n_pairs = np.sum(repeat_nos == 2)
+
+    # Keep score of # of pairs
+    pairs_per_trial[i] = n_pairs
+
+    # End loop, go back and repeat
+
+# How often were there 2 pairs?
+n_two_pairs = np.sum(pairs_per_trial == 2)
+
+# Convert to proportion
+print(n_two_pairs / 10000)
+
+
0.0487
+
+
+

End of two_pairs notebook

+
+

For efficiency (though efficiency really is not important here because the computer performs its operations so cheaply) we could develop both estimates in a single program by simply generating 10000 hands, and counting the number with three-of-a-kind and the number with two pairs.
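Such a combined program might look like the following sketch (ours, assembled from the same ingredients as the notebooks above).

import numpy as np
rnd = np.random.default_rng()

deck = np.repeat(np.arange(1, 14), 4)

n_two_pairs = 0
n_triples = 0
for i in range(10000):
    hand = rnd.permuted(deck)[:5]
    repeat_nos = np.bincount(hand)
    if np.sum(repeat_nos == 2) == 2:  # Exactly two pairs.
        n_two_pairs += 1
    if np.sum(repeat_nos == 3) == 1:  # A three-of-a-kind (but not four).
        n_triples += 1
print(n_two_pairs / 10000)
print(n_triples / 10000)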

+

Before we leave the poker problems, we note a difficulty with Monte Carlo simulation. The probability of a royal flush is so low (about one in half a million) that it would take much computer time to compute. On the other hand, considerable inaccuracy is of little matter. Should one care whether the probability of a royal flush is 1/100,000 or 1/500,000?

+
+
+

11.9 The concepts of replacement and non-replacement

+

In the poker example above, we did not replace the first card we drew. If we were to replace the card, it would leave the probability the same before the second pick as before the first pick. That is, the conditional probability remains the same. If we replace, conditions do not change. But if we do not replace the item drawn, the probability changes from one moment to the next. (Perhaps refresh your mind with the examples in the discussion of conditional probability, including Section 9.1.1.)

+

If we sample with replacement, the sample drawings remain independent of each other — a topic addressed in Section 9.1.

+

In many cases, a key decision in modeling the situation in which we are interested is whether to sample with or without replacement. The choice must depend on the characteristics of the situation.

+

There is a close connection between the lack of finiteness of the concept of universe in a given situation, and sampling with replacement. That is, when the universe (population) we have in mind is not small, or has no conceptual bounds at all, then the probability of each successive observation remains the same, and this is modeled by sampling with replacement. (“Not finite” is a less expansive term than “infinite,” though one might regard them as synonymous.)

+

Chapter 12 discusses problems whose appropriate concept of a universe is not finite, whereas Chapter 13 discusses problems whose appropriate concept of a universe is finite. This general procedure will be discussed several times, with examples included.

+ + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/probability_theory_3.html b/python-book/probability_theory_3.html new file mode 100644 index 00000000..62705ea2 --- /dev/null +++ b/python-book/probability_theory_3.html @@ -0,0 +1,1733 @@ + + + + + + + + + +Resampling statistics - 12  Probability Theory, Part 3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

12  Probability Theory, Part 3

+
+ + + +
+ + + + +
+ + +
+ +

This chapter discusses problems whose appropriate concept of a universe is not finite, whereas Chapter 13 discusses problems whose appropriate concept of a universe is finite.

+

How can a universe be infinite yet known? Consider, for example, the possible flips with a given coin; the number is not limited in any meaningful sense, yet we understand the properties of the coin and the probabilities of a head and a tail.

+
+

12.1 Example: The Birthday Problem

+

This example illustrates the probability of duplication in a multi-outcome sample from an infinite universe.

+

As an indication of the power and simplicity of resampling methods, consider this famous examination question used in probability courses: What is the probability that two or more people among a roomful of (say) twenty-five people will have the same birthday? To obtain an answer we need simply examine the first twenty-five numbers from the random-number table that fall between “001” and “365” (the number of days in the year), record whether or not there is a duplication among the twenty-five, and repeat the process often enough to obtain a reasonably stable probability estimate.

+

Pose the question to a mathematical friend of yours, then watch her or him sweat for a while, and afterwards compare your answer to hers/his. I think you will find the correct answer very surprising. It is not unheard of for people who know how this problem works to take advantage of their knowledge by making and winning big bets on it. (See how a bit of knowledge of probability can immediately be profitable to you by avoiding such unfortunate occurrences?)

+

More specifically, these steps answer the question for the case of twenty-five people in the room:

+
    +
  • Step 1. Let three-digit random numbers 1-365 stand for the 365 days in the year. (Ignore leap year for simplicity.)
  • +
  • Step 2. Examine for duplication among the first twenty-five random numbers chosen “001-365.” (Triplicates or higher-order repeats are counted as duplicates here.) If there is one or more duplicate, record “yes.” Otherwise record “no.”
  • +
  • Step 3. Repeat perhaps a thousand times, and calculate the proportion of trials in which there was a duplicate birthday among the twenty-five people.
  • +
+

You would probably use the computer to generate the initial random numbers.

+

Now try the program written as follows.

+
+

Start of birthday_problem notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
n_with_same_birthday = np.zeros(10000)
+
+days_of_year = np.arange(1, 366)  # 1 through 365
+
+# Do 10000 trials (experiments)
+for i in range(10000):
+    # Generate 25 numbers randomly between "1" and "365" and put them in a.
+    a = rnd.choice(days_of_year, size=25)
+
+    # Looking in a, count how many birthdays appear more than once and put the
+    # result in n_duplicates. We ask for counts > 1 because we are interested
+    # in any multiple, whether it is a duplicate, triplicate, etc. Had we been
+    # interested only in exact duplicates, we would have used np.sum(counts == 2).
+    counts = np.bincount(a)
+    n_duplicates = np.sum(counts > 1)
+
+    # Score the result of each trial to our store
+    n_with_same_birthday[i] = n_duplicates
+
+    # End the loop for the trial, go back and repeat the trial until all 10000
+    # are complete, then proceed.
+
+# Determine how many trials had at least one multiple — that is, trials in
+# which the count of repeated birthdays was greater than zero.
+k = np.sum(n_with_same_birthday > 0)
+
+# Convert to a proportion.
+kk = k / 10000
+
+# Print the result.
+print(kk)
+
+
0.7799
+
+
+

End of birthday_problem notebook

+
+

We have dealt with this example in a rather intuitive and unsystematic fashion. From here on, we will work in a more systematic, step-by-step manner. And from here on the problems form an orderly sequence of the classical types of problems in probability theory (Chapter 12 and Chapter 13), and inferential statistics (Chapter 20 to Chapter 28.)

+
+
+

12.2 Example: Three Daughters Among Four Children

+

This example illustrates a problem with two outcomes (binomial 1) and sampling with replacement among equally likely outcomes.

+

What is the probability that exactly three of the four children in a four-child family will be daughters?2

+

The first step is to state that the approximate probability that a single birth will produce a daughter is 50-50 (1 in 2). This estimate is not strictly correct, because there are roughly 106 male children born to each 100 female children. But the approximation is close enough for most purposes, and the 50-50 split simplifies the job considerably. (Such “false” approximations are part of the everyday work of the scientist. The appropriate question is not whether or not a statement is “only” an approximation, but whether or not it is a good enough approximation for your purposes.)

+

The probability that a fair coin will turn up heads is .50 or 50-50, close to the probability of having a daughter. Therefore, flip a coin in groups of four flips, and count how often three of the flips produce heads. (You must decide in advance whether three heads means three girls or three boys.) It is as simple as that.

+

In resampling estimation it is of the highest importance to work in a careful, step-by-step fashion — to write down the steps in the estimation, and then to do the experiments just as described in the steps. Here is a set of steps that will lead to a correct answer about the probability of getting three daughters among four children:

+
    +
  • Step 1. Using coins, let “heads” equal “girl” and “tails” equal “boy.”
  • +
  • Step 2. Throw four coins.
  • +
  • Step 3. Examine whether the four coins fall with exactly three heads up. If so, write “yes” on a record sheet; otherwise write “no.”
  • +
  • Step 4. Repeat step 2 perhaps two hundred times.
  • +
  • Step 5. Count the proportion “yes.” This proportion is an estimate of the probability of obtaining exactly 3 daughters in 4 children.
  • +
+

The first few experimental trials might appear in the record sheet as follows (Table 12.1):

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 12.1: Example trials from the three-girls problem
Number of Heads   Yes or No
1                 No
0                 No
3                 Yes
2                 No
1                 No
2                 No
+
+

The probability of getting three daughters in four births could also be found with a deck of cards, a random number table, a die, or with Python. For example, half the cards in a deck are black, so the probability of getting a black card (“daughter”) from a full deck is 1 in 2. Therefore, deal a card, record “daughter” or “son,” replace the card, shuffle, deal again, and so forth for 200 sets of four cards. Then count the proportion of groups of four cards in which you got three daughters.

+
+

Start of three_girls notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
girl_counts = np.zeros(10000)
+
+# Do 10000 trials
+for i in range(10000):
+
+    # Select 'girl' or 'boy' at random, four times.
+    children = rnd.choice(['girl', 'boy'], size=4)
+
+    # Count the number of girls and put the result in b.
+    b = np.sum(children == 'girl')
+
+    # Keep track of each trial result in z.
+    girl_counts[i] = b
+
+    # End this trial, repeat the experiment until 10000 trials are complete,
+    # then proceed.
+
+# Count the number of experiments where we got exactly 3 girls, and put this
+# result in k.
+n_three_girls = np.sum(girl_counts == 3)
+
+# Convert to a proportion.
+three_girls_prop = n_three_girls / 10000
+
+# Print the results.
+print(three_girls_prop)
+
+
0.2502
+
+
+

End of three_girls notebook

+
+

Notice that the procedure outlined in the steps above would have been different (though almost identical) if we asked about the probability of three or more daughters rather than exactly three daughters among four children. For three or more daughters we would have scored “yes” on our score-keeping pad for either three or four heads, rather than for just three heads. Likewise, in the computer solution we would have used the statement n_three_girls = np.sum(girl_counts >= 3) .

+

It is important that, in this case, in contrast to what we did in the example from Section 11.2 (the introductory poker example), the card is replaced each time so that each card is dealt from a full deck. This method is known as sampling with replacement. One samples with replacement whenever the successive events are independent; in this case we assume that the chance of having a daughter remains the same (1 girl in 2 births) no matter what sex the previous births were 3. But, if the first card dealt were black and not replaced, the chance of the second card being black would no longer be 26 in 52 (.50), but rather 25 in 51 (.49); and if the first three cards dealt were black and not replaced, the chance of the fourth card being black would sink to 23 in 49 (.47).

+

To push the illustration further, consider what would happen if we used a deck of only six cards, half (3 of 6) black and half (3 of 6) red, instead of a deck of 52 cards. If the chosen card is replaced each time, the 6-card deck produces the same results as a 52-card deck; in fact, a two-card deck would do as well. But, if the sampling is done without replacement, it is impossible to obtain 4 “daughters” with the 6-card deck because there are only 3 “daughters” in the deck. To repeat, then, whenever you want to estimate the probability of some series of events where each event is independent of the other, you must sample with replacement .
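A short, hypothetical sketch (our addition, not one of the book's notebooks) makes the contrast concrete: rnd.choice can sample the six-card deck either with or without replacement.

import numpy as np
rnd = np.random.default_rng()

# A six-card deck: three 'daughter' (black) and three 'son' (red) cards.
small_deck = np.repeat(['daughter', 'son'], [3, 3])

# With replacement: four 'daughter's in a row is possible.
with_repl = rnd.choice(small_deck, size=4, replace=True)

# Without replacement: at most three of the four can be 'daughter'.
without_repl = rnd.choice(small_deck, size=4, replace=False)

print(with_repl)
print(without_repl)

Run the cell a few times: the with-replacement draw will occasionally show four 'daughter's, but the without-replacement draw never can.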

+
+
+

12.3 Variations of the daughters problem

+

In later chapters we will frequently refer to a problem which is identical in basic structure to the problem of three girls in four children — the probability of getting 9 females in ten calf births if the probability of a female birth is (say) .5 — when we set this problem in the context of the possibility that a genetic engineering practice is effective in increasing the proportion of females (desirable for the production of milk).

+

So far we have assumed the simple case where we have an array of values that we are sampling from, and we are selecting each of these values into the sample with equal probability.

+

For example, we started with the simple assumption that a child is just as likely to be born a boy as a girl. Our input is:

+
+
input_values = ['girl', 'boy']
+
+

By default, rnd.choice will draw the input values with equal probability. Here, we draw a sample (children) of four values from the input, where each value in children has an equal chance of being “girl” or “boy”.

+
+
children = rnd.choice(input_values, size=4)
+children
+
+
array(['boy', 'boy', 'boy', 'girl'], dtype='<U4')
+
+
+

That is, rnd.choice gives each element in input_values an equal chance of being selected as the next element in children.

+

That is fine if we have some simple probability to simulate, like 0.5. But now let us imagine we want to get more precise. We happen to know that any given birth is just slightly more likely to be a boy than a girl.4. For example, the proportion of boys born in the UK is 0.513. Hence the proportion of girls is 1-0.513 = 0.487.

+
+
+

12.4 rnd.choice and the p argument

+

We could replicate this probability of 0.487 for ‘girl’ in the output sample by making an input array of 1000 strings, that contains 487 ‘girls’ and 513 ‘boys’:

+
+
big_girls = np.repeat(['girl', 'boy'], [487, 513])
+
+

Now if we sample using the default in rnd.choice, each element in the input big_girls array has the same chance of appearing in the sample. Because there are 487 ‘girls’ and 513 ‘boys’, we will draw a ‘girl’ in roughly 487 out of every 1000 elements, and a ‘boy’ roughly 513 out of every 1000. That is, the chance that any one element of the sample is a ‘girl’ is, as we want, 0.487.

+
+
# Now each element has probability 0.487 of 'girl', 0.513 of 'boy'.
+realistic_children = rnd.choice(big_girls, size=4)
+realistic_children
+
+
array(['boy', 'boy', 'girl', 'boy'], dtype='<U4')
+
+
+

But, there is an easier way than compiling a big 1000 element array, and that is to use the p= argument to rnd.choice. This allows us to specify the probability with which we will draw each of the input elements into the output sample. For example, to draw ‘girl’ with probability 0.487 and ‘boy’ with probability 0.513, we would do:

+
+
# Draw 'girl' with probability (p) 0.487 and 'boy' 0.513.
+children_again = rnd.choice(['girl', 'boy'], size=4, p=[0.487, 0.513])
+children_again
+
+
array(['girl', 'boy', 'girl', 'girl'], dtype='<U4')
+
+
+

The p argument allows us to specify the probability of each element in the input array — so if we had three elements in the input array, we would need three probabilities in p. For example, say we were looking at some poorly-entered hospital records; we might have ‘girl’ or ‘boy’ recorded as the child’s gender, but the record might be missing — ‘not-recorded’ — with a 19% chance:

+
+
# Draw 'girl' with probability (p) 0.4, 'boy' with p=0.41, 'not-recorded' with
+# p=0.19.
+rnd.choice(['girl', 'boy', 'not-recorded'], size=30, p=[0.4, 0.41, 0.19])
+
+
array(['girl', 'girl', 'girl', 'girl', 'boy', 'girl', 'girl',
+       'not-recorded', 'girl', 'boy', 'boy', 'girl', 'girl', 'boy',
+       'not-recorded', 'girl', 'not-recorded', 'boy', 'girl', 'boy',
+       'not-recorded', 'girl', 'boy', 'girl', 'boy', 'not-recorded',
+       'girl', 'girl', 'boy', 'not-recorded'], dtype='<U12')
+
+
+
+
+
+ +
+
+How does the p argument to rnd.choice work? +
+
+
+

You might wonder how Python does this trick of choosing the elements with different probabilities.

+

One way of doing this is to use uniform random numbers from 0 through 1. These are floating point numbers that can take any value, at random, from 0 through 1.

+
+
# Run this cell a few times to see random numbers anywhere from 0 through 1.
+rnd.uniform()
+
+
0.3358873070551027
+
+
+

Because this random uniform number has an equal chance of being anywhere in the range 0 through 1, there is a 50% chance that any given number will be less than 0.5 and a 50% chance it is greater than 0.5. (Of course it could be exactly equal to 0.5, but this is vanishingly unlikely, so we will ignore that for now).

+

So, if we thought girls were exactly as likely as boys, we could select from ‘girl’ and ‘boy’ using this simple logic:

+
+
if rnd.uniform() < 0.5:
+    result = 'girl'
+else:
+    result = 'boy'
+
+

But, by the same logic, there is a 0.487 chance that the random uniform number will be less than 0.487 and a 0.513 chance it will be greater. So, if we wanted to give ourselves a 0.487 chance of ‘girl’, we could do:

+
+
if rnd.uniform() < 0.487:
+    result = 'girl'
+else:
+    result = 'boy'
+
+

We can extend the same kind of logic to three options. For example, there is a 0.4 chance the random uniform number will be less than 0.4, a 0.41 chance it will be somewhere between 0.4 and 0.81, and a 0.19 chance it will be greater than 0.81.
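Extending the earlier two-option snippets, a sketch of the three-option case might read as follows (our sketch of the same idea, not Python's actual internal implementation; it assumes rnd from above):

# One uniform number decides among the three options.
u = rnd.uniform()
if u < 0.4:
    result = 'girl'
elif u < 0.81:   # between 0.4 and 0.4 + 0.41
    result = 'boy'
else:            # the remaining 0.19 of the range
    result = 'not-recorded'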

+
+
+
+
+

12.5 The daughters problem with more accurate probabilities

+

We can use the probability argument to rnd.choice to do a more realistic simulation of the chance of a family with exactly three girls. In this case it is easy to make that change in the Python simulation, but it would be much more difficult to arrange with physical devices like coins.

+

Remember, the original code for the 50-50 case has the following:

+
+
# Select 'girl' or 'boy' at random, four times.
+children = rnd.choice(['girl', 'boy'], size=4)
+
+# Count the number of girls and put the result in b.
+b = np.sum(children == 'girl')
+
+

The only change we need to the above, for the 0.487 / 0.513 case, is the addition of the p argument that you saw in the previous section:

+
+
# Give 'girl' 48.7% of the time, 'boy' 51.3% of the time.
+children = rnd.choice(['girl', 'boy'], size=4, p=[0.487, 0.513])
+
+b = np.sum(children == 'girl')
+
+

The rest of the program remains unchanged.
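Putting the pieces together, a sketch of the whole adjusted simulation might read as follows. This is our recombination of the three_girls notebook above with the p argument added; it is not a separate notebook from the book.

import numpy as np
rnd = np.random.default_rng()

girl_counts = np.zeros(10000)

for i in range(10000):
    # Give 'girl' 48.7% of the time, 'boy' 51.3% of the time.
    children = rnd.choice(['girl', 'boy'], size=4, p=[0.487, 0.513])
    # Count the number of girls in this family of four.
    girl_counts[i] = np.sum(children == 'girl')

# Proportion of families with exactly three girls.
print(np.sum(girl_counts == 3) / 10000)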

+
+
+

12.6 A note on clarifying and labeling problems

+

In conventional analytic texts and courses on inferential statistics, students are taught to distinguish between various classes of problems in order to decide which formula to apply. I doubt the wisdom of categorizing and labeling problems in that fashion, and the practice is unnecessary here. I consider it better that the student think through every new problem in the most fundamental terms. The exercise of this basic thinking avoids the mistakes that come from too-hasty and superficial pigeon-holing of problems into categories. Nevertheless, in order to help readers connect up the resampling material with the conventional curriculum of analytic methods, the examples presented here are given their conventional labels. And the examples given here cover the range of problems encountered in courses in probability and inferential statistics.

+

To repeat, one does not need to classify a problem when one proceeds with the Monte Carlo resampling method; you simply model the features of the situation you wish to analyze. In contrast, with conventional methods you must classify the situation and then apply procedures according to rules that depend upon the classification; often the decision about which rules to follow must be messy because classification is difficult in many cases, which contributes to the difficulty of choosing correct conventional formulaic methods.

+
+
+

12.7 Binomial trials

+

The problem of the three daughters in four births is known in the conventional literature as a “binomial sampling experiment with equally-likely outcomes.” “Binomial” means that the individual simple event (a birth or a coin flip) can have only two outcomes (boy or girl, heads or tails), “binomial” meaning “two names” in Latin.5

+

A fundamental property of binomial processes is that the individual trials are independent , a concept discussed earlier. A binomial sampling process is a series of binomial (one-of-two-outcome) events about which one may ask many sorts of questions — the probability of exactly X heads (“successes”) in N trials, or the probability of X or more “successes” in N trials, and so on.

+

“Equally likely outcomes” means we assume that the probability of a girl or boy in any one birth is the same (though this assumption is slightly contrary to fact); we represent this assumption with the equal-probability heads and tails of a coin. Shortly we will come to binomial sampling experiments where the probabilities of the individual outcomes are not equal.

+

The term “with replacement” was explained earlier; if we were to use a deck of red and black cards (instead of a coin) for this resampling experiment, we would replace the card each time a card is drawn.

+

The introductory poker example from Section 11.2 illustrated sampling without replacement, as will other examples to follow.

+

This problem would be done conventionally with the binomial theorem using probabilities of .5, or of .487 and .513, asking about 3 successes in 4 trials.
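If you want to check the simulation against the formula, SciPy's binomial distribution can do the arithmetic. This check is our addition, not part of the book's notebook; compare the first value with the 0.2502 estimate from the notebook above.

from scipy import stats

# Probability of exactly 3 girls in 4 births with p = 0.5: 4 * 0.5**4 = 0.25.
print(stats.binom.pmf(3, 4, 0.5))

# The same with the more realistic p = 0.487 (roughly 0.237).
print(stats.binom.pmf(3, 4, 0.487))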

+
+
+

12.8 Example: Three or More Successful Basketball Shots in Five Attempts

+

This is an example of two-outcome sampling with unequally-likely outcomes, with replacement — a binomial experiment.

+

What is the probability that a basketball player will score three or more baskets in five shots from a spot 30 feet from the basket, if on the average she succeeds with 25 percent of her shots from that spot?

+

In this problem the probabilities of “success” or “failure” are not equal, in contrast to the previous problem of the daughters. Instead of a 50-50 coin, then, an appropriate “model” would be a thumbtack that has a 25 percent chance of landing “up” when it falls, and a 75 percent chance of landing down.

+

If we lack a thumbtack known to have a 25 percent chance of landing “up,” we could use a card deck and let spades equal “success” and the other three suits represent “failure.” Our resampling experiment could then be done as follows:

+
    +
  1. Let “spade” stand for “successful shot,” and the other suits stand for unsuccessful shot.
  2. Draw a card, record its suit (“spade” or “other”), and replace. Do so five times (for five shots).
  3. Record whether the outcome of step 2 was three or more spades. If so indicate “yes,” and otherwise “no.”
  4. Repeat steps 2 and 3 perhaps four hundred times.
  5. Count the proportion “yes” out of the four hundred repetitions. That proportion estimates the probability of getting three or more baskets out of five shots if the probability of a single basket is .25.
+

The first four repetitions on your score sheet might look like this (Table 12.2):

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 12.2: First four repetitions of 3 or more shots simulation
Card 1   Card 2   Card 3   Card 4   Card 5   Result
Spade    Other    Other    Other    Other    No
Other    Other    Other    Other    Other    No
Spade    Spade    Other    Spade    Spade    Yes
Other    Spade    Other    Other    Spade    No
+
+

Instead of cards, we could have used two-digit random numbers, with (say) “1-25” standing for “success,” and “26-00” (“00” in place of “100”) standing for failure. Then the steps would simply be:

+
    +
  1. Let the random numbers “1-25” stand for “successful shot,” “26-00” for unsuccessful shot.
  2. Draw five random numbers;
  3. Count how many of the numbers are between “01” and “25.” If three or more, score “yes.”
  4. Repeat steps 2 and 3 four hundred times.
+

If you understand the earlier “three_girls” program, then the program below should be easy: To create 10000 samples, we start with a for statement. We then sample 5 numbers between “1” and “4” into our variable a to simulate the 5 shots, each with a 25 percent — or 1 in 4 — chance of scoring. We decide that 1 will stand for a successful shot, and 2 through 4 will stand for a missed shot, and therefore we count (sum) the number of 1’s in a to determine the number of shots resulting in baskets in the current sample. The next step is to transfer the results of each trial to array n_baskets. We then finish the loop by unindenting the next line of code. The final step is to search the array n_baskets, after the 10000 samples have been generated, and sum the times that 3 or more baskets were made. We place the results in n_more_than_2, calculate the proportion in prop_more_than_2, and then display the result.

+
+

Start of basketball_shots notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
n_baskets = np.zeros(10000)
+
+# Do 10000 experimental trials.
+for i in range(10000):
+
+    # Generate 5 random numbers, each between 1 and 4, put them in "a".
+    # Let "1" represent a basket, "2" through "4" be a miss.
+    a = rnd.integers(1, 5, size=5)
+
+    # Count the number of baskets, put that result in b.
+    b = np.sum(a == 1)
+
+    # Keep track of each experiment's results in z.
+    n_baskets[i] = b
+
+    # End the experiment, go back and repeat until all 10000 are completed, then
+    # proceed.
+
+# Determine how many experiments produced more than two baskets, put that
+# result in k.
+n_more_than_2 = np.sum(n_baskets > 2)
+
+# Convert to a proportion.
+prop_more_than_2 = n_more_than_2 / 10000
+
+# Print the result.
+print(prop_more_than_2)
+
+
0.104
+
+
+

End of basketball_shots notebook

+
+
+
+

12.9 Note to the student of analytic probability theory

+

This problem would be done conventionally with the binomial theorem, asking about the chance of getting 3 successes in 5 trials, with the probability of a success = .25.
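As a cross-check (our addition, not part of the original notebook), the binomial formula via SciPy gives about 0.10 for three or more baskets in five shots, close to the simulation estimate of 0.104 above:

from scipy import stats

# P(3 or more successes in 5 trials with p = 0.25).
# sf(2, ...) is the probability of *more than* 2 successes.
print(stats.binom.sf(2, 5, 0.25))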

+
+
+

12.10 Example: One in Black, Two in White, No Misses in Three Archery Shots

+

This is an example of a multiple outcome (multinomial) sampling with unequally likely outcomes; with replacement.

+

Assume from past experience that a given archer puts 10 percent of his shots in the black (“bullseye”) and 60 percent of his shots in the white ring around the bullseye, but misses with 30 percent of his shots. How likely is it that in three shots the shooter will get exactly one bullseye, two in the white, and no misses? Notice that unlike the previous cases, in this example there are more than two outcomes for each trial.

+

This problem may be handled with a deck of three colors (or suits) of cards in proportions varying according to the probabilities of the various outcomes, and sampling with replacement. Using random numbers is simpler, however:

+
    +
  • Step 1. Let “1” = “bullseye,” “2-7” = “in the white,” and “8-0” = “miss.”
  • +
  • Step 2. Choose three random numbers, and examine whether there are one “1” and two numbers “2-7.” If so, record “yes,” otherwise “no.”
  • +
  • Step 3. Repeat step 2 perhaps 400 times, and count the proportion of “yeses.” This estimates the probability sought.
  • +
+

This problem would be handled in conventional probability theory with what is known as the Multinomial Distribution.
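For reference — this check is our addition, not part of the book's notebook — SciPy's multinomial distribution gives the exact answer, 3!/(1!·2!·0!) × 0.1 × 0.6² × 0.3⁰ = 0.108, which you can compare with the simulation estimate from the notebook below:

from scipy import stats

# One bullseye (p = 0.1), two in the white (p = 0.6), no misses (p = 0.3),
# out of three shots.
print(stats.multinomial.pmf([1, 2, 0], n=3, p=[0.1, 0.6, 0.3]))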

+

This problem may be quickly solved on the computer using Python with the notebook labeled “bullseye” below. Bullseye has a complication not found in previous problems: It tests whether two different sorts of events both happen — a bullseye plus two shots in the white.

+

After generating three randomly-drawn numbers between 1 and 10, we check with the sum function to see if there is a bullseye. If there is, the if statement tells the computer to continue with the operations, checking if there are two shots in the white; if there is no bullseye, the if statement tells the computer to end the trial and start another trial. Ten thousand repetitions are called for; the number of trials meeting the criteria is counted, and the result is then printed.

+

In addition to showing how this particular problem may be handled with Python, the “bullseye” program teaches you some more fundamentals of computer programming. The if statement inside the for loop is a basic tool of programming.

+
+

Start of bullseye notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
# Make an array to store the results of each trial.
+white_counts = np.zeros(10000)
+
+# Do 10000 experimental trials
+for i in range(10000):
+
+    # To represent 3 shots, generate 3 numbers at random between "1" and "10"
+    # and put them in a. We will let a "1" denote a bullseye, "2"-"7" a shot in
+    # the white, and "8"-"10" a miss.
+    a = rnd.integers(1, 11, size=3)
+
+    # Count the number of bullseyes, put that result in b.
+    b = np.sum(a == 1)
+
+    # If there is exactly one bullseye, we will continue with counting the
+    # other shots. (If there are no bullseyes, we need not bother — the
+    # outcome we are interested in has not occurred.)
+    if b == 1:
+
+        # Count the number of shots in the white, put them in c. (Recall we are
+        # doing this only if we got one bullseye.)
+        c = np.sum((a >= 2) & (a <= 7))
+
+        # Keep track of the results of this second count.
+        white_counts[i] = c
+
+        # End the "if" sequence — we will do the following steps without regard
+        # to the "if" condition.
+
+    # End the above experiment and repeat it until 10000 repetitions are
+    # complete, then continue.
+
+# Count the number of occasions on which there are two in the white and a
+# bullseye.
+n_desired = np.sum(white_counts == 2)
+
+# Convert to a proportion.
+prop_desired = n_desired / 10000
+
+# Print the results.
+print(prop_desired)
+
+
0.1052
+
+
+

End of bullseye notebook

+
+

This example illustrates the addition rule that was introduced and discussed in Chapter 9. In Section 12.10, a bullseye, an in-the-white shot, and a missed shot are “mutually exclusive” events because a single shot cannot result in more than one of the three possible outcomes. One can calculate the probability of either of two mutually-exclusive outcomes by adding their probabilities. The probability of either a bullseye or a shot in the white is .1 + .6 = .7. The probability of an arrow either in the white or a miss is .6 + .3 = .9. The logic of the addition rule is obvious when we examine the random numbers given to the outcomes. Seven of 10 random numbers belong to “bullseye” or “in the white,” and nine of 10 belong to “in the white” or “miss.”

+
+
+

12.11 Example: Two Groups of Heart Patients

+

We want to learn how likely it is that, by chance, group A would have as few as two deaths more than group B — Table 12.3:

+
+ + + + + + + + + + + + + + + + + + + + + +
Table 12.3: Two Groups of Heart Patients
           Live   Die
Group A    79     11
Group B    21     9
+
+

This problem, phrased here as a question in probability, is the prototype of a problem in statistics that we will consider later (which the conventional theory would handle with a “chi square distribution”). We can handle it in either of two ways, as follows:

+

Approach A

+
    +
  1. Put 120 balls into a bucket, 100 white (for live) and 20 black (for die).
  2. Draw 30 balls randomly and assign them to Group B; the others are assigned to group A.
  3. Count the numbers of black balls in the two groups and determine whether Group A’s excess “deaths” (= black balls), compared to Group B, is two or fewer (or what is equivalent in this case, whether there are 11 or fewer black balls in Group A); if so, write “Yes,” otherwise “No.”
  4. Repeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”
+

A second way of handling this sort of problem is as follows:

+

Approach B

+
    +
  1. Put 120 balls into a bucket, 100 white (for live) and 20 black (for die) (as before).
  2. Draw balls one by one, replacing the drawn ball each time, until you have accumulated 90 balls for Group A and 30 balls for Group B. (You could, of course, just as well use a bucket of 4 white and 1 black ball, or 8 white and 2 black, in this approach.)
  3. As in approach “A” above, count the numbers of black balls in the two groups and determine whether Group A’s excess deaths is two or fewer; if so, write “Yes,” otherwise “No.”
  4. As above, repeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”
+

We must also take into account the possibility of a similar eye-catching “unbalanced” result of a much larger proportion of deaths in Group B. It will be a tough decision how to do so, but a reasonable option is to simply double the probability computed in step 4a or 4b.

+

Deciding which of these two approaches — the “permutation” (without replacement) and “bootstrap” (with replacement) methods — is the more appropriate is often a thorny matter; it will be discussed later in Chapter 24. In many cases, however, the two approaches will lead to similar results.

+

Later, we will actually carry out these procedures with the aid of Python, and estimate the probabilities we seek.
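Although the full treatment comes later, a minimal sketch of Approach A in Python might look like this. It is our sketch, with our own variable names, not the book's later notebook.

import numpy as np
rnd = np.random.default_rng()

# 100 'live' and 20 'die' balls for the 120 patients.
patients = np.repeat(['live', 'die'], [100, 20])

n_trials = 10000
yes_count = 0

for i in range(n_trials):
    # Shuffle the patients and split into groups of 90 and 30.
    shuffled = rnd.permuted(patients)
    group_a = shuffled[:90]
    group_b = shuffled[90:]
    a_deaths = np.sum(group_a == 'die')
    b_deaths = np.sum(group_b == 'die')
    # Group A's excess deaths of two or fewer counts as a "yes".
    if (a_deaths - b_deaths) <= 2:
        yes_count = yes_count + 1

print(yes_count / n_trials)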

+
+
+

12.12 Example: Dispersion of a Sum of Random Variables — Hammer Lengths — Heads and Handles

+

The distribution of lengths for hammer handles is as follows: 20 percent are 10 inches long, 30 percent are 10.1 inches, 30 percent are 10.2 inches, and 20 percent are 10.3 inches long. The distribution of lengths for hammer heads is as follows: 2.0 inches, 20 percent; 2.1 inches, 20 percent; 2.2 inches, 30 percent; 2.3 inches, 20 percent; 2.4 inches, 10 percent.

+

If you draw a handle and a head at random, what will be the mean total length? In Chapter 9 we saw that the conventional formulaic method gives an answer with a formula that says the sum of the means is the mean of the sums, but it is easy to get the answer with simulation. But now we ask about the dispersion of the sum. There are formulaic rules for such measures as the variance. But consider this other example: What proportion of the hammers made with handles and heads drawn at random will have lengths equal to or greater than 12.4 inches? No simple formula will provide an answer. And if the number of categories is increased considerably, any formulaic approach will become burdensome if not undoable. But Monte Carlo simulation produces an answer quickly and easily, as follows:

+
    +
  1. Fill a bucket with:

     • 2 balls marked “10” (inches),
     • 3 balls marked “10.1”,
     • 3 marked “10.2”, and
     • 2 marked “10.3”.

     This bucket represents the handles.

     Fill another bucket with:

     • 2 balls marked “2.0”,
     • 2 balls marked “2.1”,
     • 3 balls marked “2.2”,
     • 2 balls marked “2.3” and
     • 1 ball marked “2.4”.

     This bucket represents the heads.

  2. Pick a ball from each of the “handles” and “heads” buckets, calculate the sum, and replace the balls.

  3. Repeat perhaps 200 times (more when you write a computer program), and calculate the proportion of the sums that are greater than 12.4 inches.
+

You may also want to forgo learning the standard “rule,” and simply estimate the mean this way as well. As an exercise, compute the interquartile range — the difference between the 25th and the 75th percentiles.
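Here is a sketch of the computer version of the steps above (our code, using the p argument introduced earlier, not a notebook from the book):

import numpy as np
rnd = np.random.default_rng()

handles = np.array([10.0, 10.1, 10.2, 10.3])
handle_p = [0.2, 0.3, 0.3, 0.2]
heads = np.array([2.0, 2.1, 2.2, 2.3, 2.4])
head_p = [0.2, 0.2, 0.3, 0.2, 0.1]

# Draw 10000 handles and 10000 heads and add the lengths pairwise.
lengths = (rnd.choice(handles, size=10000, p=handle_p) +
           rnd.choice(heads, size=10000, p=head_p))

# Proportion of hammers 12.4 inches or longer.
print(np.mean(lengths >= 12.4))

# The mean total length, from the same draws.
print(np.mean(lengths))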

+
+
+

12.13 Example: The Product of Random Variables — Theft by Employees

+

The distribution of the number of thefts per month you can expect in your business is as follows:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Number   Probability
0        0.5
1        0.2
2        0.1
3        0.1
4        0.1
+

The amounts that may be stolen on any theft are as follows:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Amount   Probability
$50      0.4
$75      0.4
$100     0.1
$125     0.1
+

The same procedure as used above to estimate the mean length of hammers — add the lengths of handles and heads — can be used for this problem except that the results of the drawings from each bucket are multiplied rather than added.

+

In this case there is again a simple rule: The mean of the products equals the product of the means. But this rule holds only when the two urns are indeed independent of each other, as they are in this case.
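A sketch of that procedure (our code, not a book notebook) follows. It multiplies each trial's drawn number of thefts by a drawn amount, just as the bucket description says, and also shows that the mean of the products comes out close to the product of the means:

import numpy as np
rnd = np.random.default_rng()

n_thefts = rnd.choice([0, 1, 2, 3, 4], size=10000,
                      p=[0.5, 0.2, 0.1, 0.1, 0.1])
amounts = rnd.choice([50, 75, 100, 125], size=10000,
                     p=[0.4, 0.4, 0.1, 0.1])

# Multiply the two draws for each trial, as in the bucket procedure.
products = n_thefts * amounts

# Mean of the products ...
print(np.mean(products))
# ... compared with the product of the means (1.1 * 72.5 = 79.75).
print(np.mean(n_thefts) * np.mean(amounts))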

+

The next two problems are a bit harder than the previous ones; you might skip them for now and come back to them a bit later. However, with the Monte Carlo simulation method they are within the grasp of any introductory student who has had just a bit of experience with the method. In contrast, a standard book whose lead author is Frederick Mosteller, as respected a statistician as there is, says of this type of problem: “Naturally, in this book we cannot expect to study such difficult problems in their full generality [that is, show how to solve them, rather than merely state them], but we can lay a foundation for their study.” (Mosteller, Rourke, and Thomas 1961, 5)

+
+
+

12.14 Example: Flipping Pennies to the End

+

Two players, each with a stake of ten pennies, engage in the following game: A coin is tossed, and if it is (say) heads, player A gives player B a penny; if it is tails, player B gives player A a penny. What is the probability that one player will lose his or her entire stake of 10 pennies if they play for 200 tosses?

+

This is a classic problem in probability theory; it has many everyday applications in situations such as inventory management. For example, what is the probability of going out of stock of a given item in a given week if customers and deliveries arrive randomly? It also is a model for many processes in modern particle physics.

+

Solution of the penny-matching problem with coins is straightforward. Repeatedly flip a coin and check if one player or the other reaches a zero balance before you reach 200 flips. Or with random numbers:

+
    +
  1. Numbers “1-5” = head = “+1”; Numbers “6-0” = tail = “-1.”
  2. Proceed down a series of 200 numbers, keeping a running tally of the “+1”’s and the “-1”’s. If the tally reaches “+10” or “-10” on or before the two-hundredth digit, record “yes”; otherwise record “no.”
  3. Repeat step 2 perhaps 400 or 10000 times, and calculate the proportion of “yeses.” This estimates the probability sought.
+

The following Python program also solves the problem. The heart of the program starts at the line where the program models a coin flip with the statement c = rnd.integers(1, 3). After you study that, go back and notice the inner for loop starting with for j in range(200): that describes the procedure for flipping a coin 200 times. Finally, note how the outer for i in range(10000): loop simulates 10000 games, each game consisting of the 200 coin flips we generated with the inner for loop above.

+
+

Start of pennies notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+
+
someone_won = np.zeros(10000)
+
+# Do 10000 trials
+for i in range(10000):
+
+    # Record the number 10: a's stake
+    a_stake = 10
+
+    # Same for b
+    b_stake = 10
+
+    # An indicator flag that will be set to "1" when somebody wins.
+    flag = 0
+
+    # Repeat the following steps 200 times.
+    # Notice we use "j" as the counter variable, to avoid overwriting
+    # "i", the counter variable for the 10000 trials.
+    for j in range(200):
+        # Generate the equivalent of a coin flip, letting 1 = heads,
+        # 2 = tails
+        c = rnd.integers(1, 3)
+
+        # If it's a heads
+        if c == 1:
+
+            # Add 1 to b's stake
+            b_stake = b_stake + 1
+
+            # Subtract 1 from a's stake
+            a_stake = a_stake - 1
+
+        # End the "if" condition
+
+        # If it's a tails
+        if c == 2:
+
+            # Add one to a's stake
+            a_stake = a_stake + 1
+
+            # Subtract 1 from b's stake
+            b_stake = b_stake - 1
+
+        # End the "if" condition
+
+        # If a has won
+        if a_stake == 20:
+
+            # Set the indicator flag to 1
+            flag = 1
+
+        # If b has won
+        if b_stake == 20:
+
+            # Set the indicator flag to 1
+            flag = 1
+
+    # End the repeat loop for 200 plays (note that the indicator flag stays at
+    # 0 if neither a nor b has won)
+
+    # Keep track of whether anybody won
+    someone_won[i] = flag
+
+# End the 10000 trials
+
+# Find out how often somebody won
+n_wins = np.sum(someone_won)
+
+# Convert to a proportion
+prop_wins = n_wins / 10000
+
+# Print the results
+print(prop_wins)
+
+
0.8918
+
+
+

End of pennies notebook

+
+

A similar example: Your warehouse starts out with a supply of twelve capacirators. Every three days a new shipment of two capacirators is received. There is a .6 probability that a capacirator will be used each morning, and the same each afternoon. (It is as if a random drawing is made each half-day to see if a capacirator is used; two capacirators may be used in a single day, or one or none). How long will it be, on the average, before the warehouse runs out of stock?
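If you want to try this on the computer, here is one possible sketch. It is ours, not the book's; in particular, the choice to receive the shipment at the start of every third day is an assumption you may well want to vary.

import numpy as np
rnd = np.random.default_rng()

days_to_empty = np.zeros(10000)

for i in range(10000):
    stock = 12
    day = 0
    while stock > 0:
        day = day + 1
        # A new shipment of 2 arrives every third day (an assumed timing).
        if day % 3 == 0:
            stock = stock + 2
        # Morning and afternoon: each half-day has a .6 chance of using one unit.
        for half_day in range(2):
            if rnd.uniform() < 0.6 and stock > 0:
                stock = stock - 1
    # Record the day on which the stock first hit zero.
    days_to_empty[i] = day

# Average number of days before running out of stock.
print(np.mean(days_to_empty))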

+
+
+

12.15 Example: A Drunk’s Random Walk

+

If a drunk chooses the direction of each step randomly, will he ever get home? If he can only walk on the road on which he lives, the problem is almost the same as the gambler’s-ruin problem above (“pennies”). But if the drunk can go north-south as well as east-west, the problem becomes a bit different and interesting.

+

Looking now at Figure 12.1 — what is the probability of the drunk reaching either his house (at 3 steps east, 2 steps north) or my house (1 west, 4 south) before he finishes taking twelve steps?

+

One way to handle the problem would be to use a four-directional spinner such as is used with a child’s board game, and then keep track of each step on a piece of graph paper. The reader may construct a Python program as an exercise.

+
+
+
+
+

+
Figure 12.1: Drunk random walk
+
+
+
+
+
+
+

12.16 Example: public and private liquor pricing

+

Let’s end this chapter with an actual example that will be used again in Chapter 13 when discussing probability in finite universes, and then at great length in the context of statistics in Chapter 24. This example also illustrates the close connection between problems in pure probability and those in statistical inference.

+

As of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned, and 16 “monopoly” states where the state government owns the retail liquor stores. (Some states were omitted for technical reasons.) These were the representative 1961 prices of a fifth of Seagram 7 Crown whiskey in the two sets of states (Table 12.4):

+
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 12.4: Whiskey prices by state category
Private   Government
4.82      4.65
5.29      4.55
4.89      4.11
4.95      4.15
4.55      4.2
4.9       4.55
5.25      3.8
5.3       4.0
4.29      4.19
4.85      4.75
4.54      4.74
4.75      4.5
4.85      4.1
4.85      4.0
4.5       5.05
4.75      4.2
4.79
4.85
4.79
4.95
4.95
4.75
5.2
5.1
4.8
4.29
Count: 26        16
Mean:  4.84      4.35
+
+
+
+
+
+
+

+
Figure 12.2: Whiskey prices by state category
+
+
+
+
+

Let us consider that all these states’ prices constitute one single universe (an assumption whose justification will be discussed later). If so, one can ask: If these 42 states constitute a single universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference between the means that was actually observed)?

+

This can be thought of as a problem in pure probability because we begin with a known universe and ask how it would behave with random drawings from it. We sample with replacement; the decision to do so, rather than to sample without replacement (which is the way I had first done it, and for which there may be better justification) will be discussed later. We do so to introduce a “bootstrap”-type procedure (defined later) as follows: Write each of the forty-two observed state prices on a separate card. The shuffled deck simulates a situation in which each state has an equal chance for each price. Repeatedly deal groups of 16 and 26 cards, replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. For each trial, calculate the difference in mean prices.

+

These are the steps systematically:

+
    +
  • Step A: Write each of the 42 prices on a card and shuffle.
  • +
  • Steps B and C (combined in this case): i) Draw cards randomly with replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $.49; if it is as great or greater than $.49, write “yes,” otherwise “no.”
  • +
  • Step D: Repeat step B-C a hundred or a thousand times. Calculate the proportion “yes,” which estimates the probability we seek.
  • +
+

The probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state-ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices. The following notebook performs the operations described above.

+
+

Start of liquor_prices notebook

+ + +
+
import numpy as np
+rnd = np.random.default_rng()
+
+# Import the plotting library
+import matplotlib.pyplot as plt
+
+
+
fake_diffs = np.zeros(10000)
+
+priv = np.array([
+    4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54, 4.75,
+    4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75, 5.20, 5.10,
+    4.80, 4.29])
+
+govt = np.array([
+    4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00, 4.19, 4.75, 4.74, 4.50,
+    4.10, 4.00, 5.05, 4.20])
+
+actual_diff = np.mean(priv) - np.mean(govt)
+
+# Join the two vectors of data
+both = np.concatenate((priv, govt))
+
+# Repeat 10000 simulation trials
+for i in range(10000):
+
+    # Sample 26 with replacement for private group
+    fake_priv = rnd.choice(both, size=26)
+
+    # Sample 16 with replacement for govt. group
+    fake_govt = rnd.choice(both, size=16)
+
+    # Find the mean of the "private" group.
+    p = np.mean(fake_priv)
+
+    # Mean of the "govt." group
+    g = np.mean(fake_govt)
+
+    # Difference in the means
+    diff = p - g
+
+    # Keep score of the trials
+    fake_diffs[i] = diff
+
+# Graph of simulation results to compare with the observed result.
+plt.hist(fake_diffs)
+plt.xlabel('Difference in average prices (cents)')
+plt.title('Average price difference (Actual difference = '
+f'{actual_diff * 100:.0f} cents)');
+
+
+
+

+
+
+
+
+

End of liquor_prices notebook

+
+

The results shown above — not even one “success” in 10,000 trials — imply that there is only a very small probability that two groups with mean prices as different as were observed would happen by chance if drawn with replacement from the universe of 42 observed prices.
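To count these “successes” directly rather than reading them off the histogram, we could add a couple of lines at the end of the notebook above (our addition; it assumes fake_diffs and actual_diff from that notebook):

# How many simulated differences were as large as the observed $.49?
n_as_large = np.sum(fake_diffs >= actual_diff)
print(n_as_large / 10000)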

+

Here we think of these states as if they came from a non-finite universe, which is one possible interpretation for one particular context. However, in Chapter 13 we will postulate a finite universe, which is appropriate if it is reasonable to consider that these observations constitute the entire universe (aside from those states excluded from the analysis because of data complexities).

+
+
+

12.17 The general procedure

+

Chapter 25 generalizes what we have done in the probability problems above into a general procedure, which will in turn be a subpart of a general procedure for all of resampling.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/probability_theory_3_files/figure-html/fig-whiskey-hist-1.png b/python-book/probability_theory_3_files/figure-html/fig-whiskey-hist-1.png new file mode 100644 index 00000000..5f4aa17b Binary files /dev/null and b/python-book/probability_theory_3_files/figure-html/fig-whiskey-hist-1.png differ diff --git a/python-book/probability_theory_3_files/figure-html/unnamed-chunk-40-3.png b/python-book/probability_theory_3_files/figure-html/unnamed-chunk-40-3.png new file mode 100644 index 00000000..f7d6e636 Binary files /dev/null and b/python-book/probability_theory_3_files/figure-html/unnamed-chunk-40-3.png differ diff --git a/python-book/probability_theory_4_finite.html b/python-book/probability_theory_4_finite.html new file mode 100644 index 00000000..5691d48a --- /dev/null +++ b/python-book/probability_theory_4_finite.html @@ -0,0 +1,1473 @@ + + + + + + + + + +Resampling statistics - 13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes

+
+ + + +
+ + + + +
+ + +
+ +
+

13.1 Introduction

+

The examples in Chapter 12 dealt with infinite universes , in which the probability of a given simple event is unaffected by the outcome of the previous simple event. But now we move on to finite universes, situations in which you begin with a given set of objects whose number is not enormous — say, a total of two, or two hundred, or two thousand. If we liken such a situation to a bucket containing balls of different colors each with a number on it, we are interested in the probability of drawing various sets of numbered and colored balls from the bucket on the condition that we do not replace balls after they are drawn.

+

In the cases addressed in this chapter, it is important to remember that the single events no longer are independent of each other. A typical situation in which sampling without replacement occurs is when items are chosen from a finite universe — for example, when children are selected randomly from a classroom. If the class has five boys and five girls, and if you were to choose three girls in a row, then the chance of selecting a fourth girl on the next choice obviously is lower than the chance that you would pick a girl on the first selection.

+

The key to dealing with this type of problem is the same as with earlier problems: You must choose a simulation procedure that produces simple events having the same probabilities as the simple events in the actual problem involving sampling without replacement. That is, you must make sure that your simulation does not allow duplication of events that have already occurred. The easiest way to sample without replacement with resampling techniques is by simply ignoring an outcome if it has already occurred.

+

Examples Section 13.3.1 through Section 13.3.10 deal with some of the more important sorts of questions one may ask about drawings without replacement from such an urn. To get an overview, I suggest that you read over the summaries (in bold) introducing examples Section 13.3.1 to Section 13.3.10 before beginning to work through the examples themselves.

+

This chapter also revisits the general procedure used in solving problems in probability and statistics with simulation, here in connection with problems involving a finite universe. The steps that one follows in simulating the behavior of a universe of interest are set down in such fashion that one may, by random drawings, deduce the probability of various events. Having had by now the experience of working through the problems in Chapter 9 and Chapter 12, the reader should have a solid basis to follow the description of the general procedure which then helps in dealing with specific problems.

+

Let us begin by describing some of the major sorts of problems with the aid of a bucket with six balls.

+
+
+

13.2 Some building-block programs

+

Case 1. Each of six balls is labeled with a number between “1” and “6.” We ask: What is the probability of choosing balls 1, 2, and 3 in that order if we choose three balls without replacement? Figure 13.1 diagrams the events we consider “success.”

+
+
+
+
+

+
Figure 13.1: The Event Classified as “Success” for Case 1
+
+
+
+
+

Case 2. We begin with the same bucket as in Case 1, but now ask the probability of choosing balls 1, 2, and 3 in any order if we choose three balls without replacement. Figure 13.2 diagrams two of the events we consider success. These possibilities include that which is shown in Figure 13.1 above, plus other possibilities.

+
+
+
+
+

+
Figure 13.2: An Incomplete List of the Events Classified as “Success” for Case 2
+
+
+
+
+

Case 3. The odd-numbered balls “1,” “3,” and “5,” are painted red and the even-numbered balls “2,” “4,” and “6” are painted black. What is the probability of getting a red ball and then a black ball in that order? Some possibilities are illustrated in Figure 13.3, which includes the possibility shown in Figure 13.1. It also includes some but not all possibilities found in Figure 13.2; for example, Figure 13.2 includes choosing balls 2, 3 and 1 in that order, but Figure 13.3 does not.

+
+
+
+
+

+
Figure 13.3: An Incomplete List of the Events Classified as “Success” for Case 3
+
+
+
+
+

Case 4. What is the probability of getting two red balls and one black ball in any order?

+
+
+
+
+

+
Figure 13.4: An Incomplete List of the Events Classified as “Success” for Case 4
+
+
+
+
+

Case 5. Various questions about matching may be asked with respect to the six balls. For example, what is the probability of getting ball 1 on the first draw or ball 2 on the second draw or ball 3 on the third draw? (Figure 13.5) Or, what is the probability of getting all balls on the draws corresponding to their numbers?
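Before the worked examples, here is a tiny sketch (ours, not one of the book's notebooks) of how Case 1 might be simulated — the chance of drawing balls 1, 2, and 3 in exactly that order, without replacement:

import numpy as np
rnd = np.random.default_rng()

balls = np.arange(1, 7)   # balls numbered 1 through 6
n_success = 0

for i in range(10000):
    # Draw three balls without replacement.
    draw = rnd.choice(balls, size=3, replace=False)
    # "Success" is drawing 1, then 2, then 3, in that order.
    if np.all(draw == [1, 2, 3]):
        n_success = n_success + 1

print(n_success / 10000)   # roughly 1/120, about 0.008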

+
+
+
+
+

+
Figure 13.5: An Incomplete List of the Events Classified as “Success” for Case 5
+
+
+
+
+
+
+

13.3 Problems in finite universes

+
+

13.3.1 Example: four girls and one boy

+

What is the probability of selecting four girls and one boy when selecting five students from any group of twenty-five girls and twenty-five boys? This is an example of sampling without replacement when there are two outcomes and the order does not matter.

+

The important difference between this example and the infinite-universe examples in the prior chapter is that the probability of obtaining a boy or a girl in a single simple event differs from one event to the next in this example, whereas it stays the same when the sampling is with replacement. To illustrate, the probability of a girl is .5 (25 out of 50) when the first student is chosen, but the probability of a girl is either 25/49 or 24/49 when the second student is chosen, depending on whether a boy or a girl was chosen on the first pick. Or after, say, three girls and one boy are picked, the probability of getting a girl on the next choice is (25-3)/(50-4) = 22/46 which is clearly not equal to .5.

+

As always, we must create a satisfactory analog to the process whose probability we want to learn. In this case, we can use a deck of 50 cards, half red and half black, and deal out five cards without replacing them after each card is dealt; this simulates the choice of five students from among the fifty.

+

We can no longer use our procedure from before. If we designated “1-25” as being girls and “26-50” as being boys and then proceeded to draw random numbers, the probability of a girl would be the same on each pick.

+

At this point, it is important to note that — for this particular problem — we do not need to distinguish between particular girls (or boys). That is, it does not matter which girl (or boy) is selected in a given trial. Nor did we pay attention to the order in which we selected girls or boys. This is an instance of Case 4 discussed above. Subsequent problems will deal with situations where the order of selection, and the particular individuals, do matter.

+

Our approach then is to mimic having the class in front of us: an array of 50 strings, half of the entries ‘boy’ and the other half ‘girl’. We then shuffle the class (the array), and choose the first N students (strings).

+
    +
  • Step 1. Create a list with 50 labels, half ‘boy’ and half ‘girl’.
  • +
  • Step 2. Shuffle the class and select five students. Count the number of labels equal to ‘girl’; if there are exactly four, write “yes,” otherwise “no”.
  • +
  • Step 3. Repeat step 2, say, 10,000 times, and count the proportion “yes”, which estimates the probability sought.
  • +
+

The results of a few experimental trials are shown in Table 13.1.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + +
Table 13.1: A few experimental trials of four girls and one boy
Experiment   Strings Chosen                           Success?
1            ‘girl’, ‘boy’, ‘boy’, ‘girl’, ‘boy’      No
2            ‘boy’, ‘girl’, ‘girl’, ‘girl’, ‘girl’    Yes
3            ‘girl’, ‘girl’, ‘girl’, ‘boy’, ‘girl’    Yes
+
+

A solution to this problem with Python is presented below.

+
+

Start of four_girls_one_boy notebook

+ + +
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+
+
N = 10000
+trial_results = np.zeros(N)
+
+# Constitute the set of 25 girls and 25 boys.
+whole_class = np.repeat(['girl', 'boy'], [25, 25])
+
+# Repeat the following steps N times.
+for i in range(N):
+
+    # Shuffle the strings representing the class.
+    shuffled = rnd.permuted(whole_class)
+
+    # Take the first 5 strings, call them c.
+    c = shuffled[:5]
+
+    # Count how many girls there are, put the result in d.
+    d = np.sum(c == 'girl')
+
+    # Keep track of each trial result in z.
+    trial_results[i] = d
+
+    # End the experiment, go back and repeat until all N (10,000) trials are
+    # complete.
+
+# Count the number of times we got four girls, put the result in k.
+k = np.sum(trial_results == 4)
+
+# Convert to a proportion.
+kk = k / N
+
+# Print the result.
+print(kk)
+
+
0.1505
+
+
+

We can also find the probabilities of other outcomes from a histogram of trial results obtained with the following command:

+
+
# Import the plotting package.
+import matplotlib.pyplot as plt
+
+# Do histogram, with one bin for each possible number.
+plt.hist(trial_results, bins=range(7), align='left', rwidth=0.75)
+plt.title('# of girls');
+
+
+
+

+
+
+
+
+

In the resulting histogram we can see that in 15 percent of the trials, 4 of the 5 selected were girls.

+

It should be noted that for this problem — as for most other problems — there are several other resampling procedures that will also do the job correctly.

+

In analytic probability theory this problem is worked with a formula for “combinations.”
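If you want to see that formula at work as a check on the simulation (this check is our addition, not part of the original notebook), SciPy's comb function counts the combinations:

from scipy.special import comb

# Ways of choosing 4 of the 25 girls, times ways of choosing 1 of the 25 boys,
# divided by the ways of choosing any 5 students from all 50.
exact = comb(25, 4) * comb(25, 1) / comb(50, 5)
print(exact)   # about 0.149, close to the simulation estimate above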

+

End of four_girls_one_boy notebook

+
+
+
+

13.3.2 Example: Five spades and four clubs in a bridge hand

+
+

Start of five_spades_four_clubs notebook

+ + +

This is an example of multiple-outcome sampling without replacement, order does not matter.

+

The problem is similar to the example in Section 13.3.1, except that now there are four equally-likely outcomes instead of only two. A Python solution is:

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+
+
# Constitute the deck of 52 cards.
+# Repeat the suit names 13 times each, to make a 52 card deck.
+deck = np.repeat(['spade', 'club', 'diamond', 'heart'],
+                 [13, 13, 13, 13])
+# Show the deck
+deck
+
+
array(['spade', 'spade', 'spade', 'spade', 'spade', 'spade', 'spade',
+       'spade', 'spade', 'spade', 'spade', 'spade', 'spade', 'club',
+       'club', 'club', 'club', 'club', 'club', 'club', 'club', 'club',
+       'club', 'club', 'club', 'club', 'diamond', 'diamond', 'diamond',
+       'diamond', 'diamond', 'diamond', 'diamond', 'diamond', 'diamond',
+       'diamond', 'diamond', 'diamond', 'diamond', 'heart', 'heart',
+       'heart', 'heart', 'heart', 'heart', 'heart', 'heart', 'heart',
+       'heart', 'heart', 'heart', 'heart'], dtype='<U7')
+
+
+
+
N = 10000
+trial_results = np.zeros(N)
+
+# Repeat the trial N times.
+for i in range(N):
+
+    # Shuffle the deck and draw 13 cards.
+    hand = rnd.choice(deck, size=13, replace=False)
+
+    # Count the number of spades in "hand", put the result in "n_spades".
+    n_spades = np.sum(hand == 'spade')
+
+    # If we have five spades, we'll continue on to count the clubs. If we don't
+    # have five spades, the number of clubs is irrelevant — we have not gotten
+    # the hand we are interested in.
+    if n_spades == 5:
+        # Count the clubs, put the result in "n_clubs"
+        n_clubs = np.sum(hand == 'club')
+        # Keep track of the number of clubs in each trial
+        trial_results[i] = n_clubs
+
+    # End one experiment, go back and repeat until all N trials are done.
+
+# Count the number of trials where we got 4 clubs. This is the answer we want -
+# the number of hands out of N with 5 spades and 4 clubs. (Recall that we
+# only counted the clubs if the hand already had 5 spades.)
+n_5_and_4 = np.sum(trial_results == 4)
+
+# Convert to a proportion.
+prop_5_and_4 = n_5_and_4 / N
+
+# Print the result
+print(prop_5_and_4)
+
+
0.0224
+
+
+

End of five_spades_four_clubs notebook

+
+
+
+

13.3.3 Example: a total of fifteen points in a bridge hand

+
+

Start of fifteen_points_in_bridge notebook

+ + +

Let us assume that ace counts as 4, king = 3, queen = 2, and jack = 1.

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+import matplotlib.pyplot as plt
+
+
+
# Constitute a deck with 4 jacks (point value 1), 4 queens (value 2), 4
+# kings (value 3), 4 aces (value 4), and 36 other cards with no point
+# value
+whole_deck = np.repeat([1, 2, 3, 4, 0], [4, 4, 4, 4, 36])
+whole_deck
+
+
array([1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 0, 0, 0, 0, 0, 0,
+       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+       0, 0, 0, 0, 0, 0, 0, 0])
+
+
+
+
N = 10000
+trial_results = np.zeros(N)
+
+# Do N trials.
+for i in range(N):
+    # Shuffle the deck of cards and draw 13
+    hand = rnd.choice(whole_deck, size=13, replace=False)
+
+    # Total the points.
+    points = np.sum(hand)
+
+    # Keep score of the result.
+    trial_results[i] = points
+
+    # End one experiment, go back and repeat until all N trials are done.
+
+
+
# Produce a histogram of trial results.
+plt.hist(trial_results, bins=range(25), align='left', rwidth=0.75)
+plt.title('Points in bridge hands');
+
+
+
+

+
+
+
+
+

From this histogram, we see that in about 4 percent of our trials we obtained a total of exactly 15 points. We can also compute this directly:

+
+
# How many times did we have a hand with fifteen points?
+k = np.sum(trial_results == 15)
+
+# Convert to a proportion.
+kk = k / N
+
+# Show the result.
+kk
+
+
0.0431
+
+
+

End of fifteen_points_in_bridge notebook

+
+
+
+

13.3.4 Example: Four girls then one boy from 25 girls and 25 boys

+
+

Start of four_girls_then_one_boy_25 notebook

+ + +

In this problem, order matters; we are sampling without replacement, with two outcomes, several of each item.

+

What is the probability of getting an ordered series of four girls and then one boy, from a universe of 25 girls and 25 boys? This illustrates Case 3 above. Clearly we can use the same sampling mechanism as in the example in Section 13.3.1, but now we record “yes” for a smaller number of composite events.

+

We record “no” even when exactly one boy is chosen, if he is chosen 1st, 2nd, 3rd, or 4th rather than 5th, whereas in Section 13.3.1 such outcomes were recorded as “yes”-es.

+
    +
  • Step 1. Generate a class (array) of length 50, consisting of 25 strings valued “boy” and 25 strings valued “girl”.
  • +
  • Step 2. Shuffle the class array, and select the first five elements.
  • +
  • Step 3. If the first five elements are exactly 'girl', 'girl', 'girl', 'girl', 'boy', write “yes,” otherwise “no.”
  • +
  • Step 4. Repeat steps 2 and 3, say, 10,000 times, and count the proportion of “yes” results, which estimates the probability sought.
  • +
+

Let us start the single trial procedure like so:

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+
+
# Constitute the set of 25 girls and 25 boys.
+whole_class = np.repeat(['girl', 'boy'], [25, 25])
+
+# Shuffle the class into a random order.
+shuffled = rnd.permuted(whole_class)
+# Take the first 5 class members, call them c.
+c = shuffled[:5]
+# Show the result.
+c
+
+
array(['boy', 'girl', 'boy', 'girl', 'girl'], dtype='<U4')
+
+
+

Our next step (step 3) is to check whether c is exactly equal to the result of interest. The result of interest is:

+
+
# The result we are looking for - four girls and then a boy.
+result_of_interest = np.repeat(['girl', 'boy'], [4, 1])
+result_of_interest
+
+
array(['girl', 'girl', 'girl', 'girl', 'boy'], dtype='<U4')
+
+
+

We can then use an array comparison with == to do an element by element (elementwise) check, asking whether the corresponding elements are equal:

+
+
# A Boolean array, with True where corresponding elements are equal, False
+# otherwise.
+are_equal = c == result_of_interest
+are_equal
+
+
array([False,  True, False,  True, False])
+
+
+

We are nearly finished with step 3 — it only remains to check whether all of the elements were equal, by checking whether all of the values in are_equal are True.

+

We know that there are 5 elements, so we could check whether there are 5 True values with np.sum:

+
+
# Are there exactly 5 True values in `are_equal`?
+np.sum(are_equal) == 5
+
+
False
+
+
+

Another way to ask the same question is by using the np.all function on are_equal. This returns True if all the elements in are_equal are True, and False otherwise.

+
+
+
+ +
+
+Testing whether all elements of an array are the same +
+
+
+

The np.all function, applied to a Boolean array (as here), checks whether all of the elements in the Boolean array are True. If so, it returns True, otherwise, it returns False.

+

For example:

+
+
# All elements are True, `np.all` returns True
+np.all([True, True, True, True])
+
+
True
+
+
+
+
# At least one element is False, `np.all` returns False
+np.all([True, True, False, True])
+
+
False
+
+
+
+
+

Here is the full procedure for steps 2 and 3 (a single trial):

+
+
# Shuffle the class into a random order.
+shuffled = rnd.permuted(whole_class)
+# Take the first 5 class members, call them c.
+c = shuffled[:5]
+# For each element, test whether the result is the result of interest.
+are_equal = c == result_of_interest
+# Check whether we have the result we are looking for.
+is_four_girls_then_one_boy = np.all(are_equal)
+
+

All that remains is to put the single trial procedure into a loop.

+
+
N = 10000
+trial_results = np.zeros(N)
+
+# Repeat the following steps N times.
+for i in range(N):
+
+    # Shuffle the class into a random order.
+    shuffled = rnd.permuted(whole_class)
+    # Take the first 5 class members, call them c.
+    c = shuffled[:5]
+    # For each element, test whether the result is the result of interest.
+    are_equal = c == result_of_interest
+    # Check whether we have the result we are looking for.
+    is_four_girls_then_one_boy = np.all(are_equal)
+
+    # Store the result of this trial.
+    trial_results[i] = is_four_girls_then_one_boy
+
+    # End the experiment, go back and repeat until all N trials are
+    # complete.
+
+# Count the number of times we got four girls then a boy
+k = np.sum(trial_results)
+
+# Convert to a proportion.
+kk = k / N
+
+# Print the result.
+print(kk)
+
+
0.0311
+
+
+

This type of problem is conventionally done with a permutation formula.
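For reference, here is a small sketch of that analytic calculation in Python — the chance of drawing girl, girl, girl, girl and then boy, in exactly that order, without replacement:

# Probability of girl, girl, girl, girl, then boy, drawn in that order
# without replacement from 25 girls and 25 boys.
p = (25 / 50) * (24 / 49) * (23 / 48) * (22 / 47) * (25 / 46)
print(p)  # About 0.030 — close to the simulation estimate above.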

+

End of four_girls_then_one_boy_25 notebook

+
+
+
+

13.3.5 Example: repeat pairings from random pairing

+
+

Start of university_icebreaker notebook

+ + +

First put two groups of 10 people into 10 pairs. Then re-randomize the pairings. What is the chance that four or more pairs are the same in the second random pairing? This is a problem in the probability of matching by chance.

+

Ten representatives each from two universities, Birmingham and Berkeley, attend a meeting. As a social icebreaker, representatives are divided, randomly, into pairs consisting of one person from each university.

+

If they held a second round of the icebreaker, with a new random pairing, what is the chance that four or more pairs will be the same?

+

In approaching this problem, we start at the point where the first icebreaker is complete. We now have to determine what happens after the second round.

+
    +
  • Step 1. Let “ace” through “10” of hearts represent the ten representatives from Birmingham University. Let “ace” through “10” of spades be their allocated partners (in round one) from Berkeley.
  • +
  • Step 2. Shuffle the hearts and deal them out in a row; shuffle the spades and deal in a row just below the hearts.
  • +
  • Step 3. Count the pairs — a pair is one card from the heart row and one card from the spade row — that contain the same denomination. If 4 or more pairs match, record “yes,” otherwise “no.”
  • +
  • Step 4. Repeat steps (2) and (3), say, 10,000 times.
  • +
  • Step 5. Count the proportion “yes.” This estimates the probability of 4 or more pairs.
  • +
+

Exercise for the student: Write the steps to do this example with random numbers. The Python solution follows below.

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+import matplotlib.pyplot as plt
+
+
+
N = 10000
+trial_results = np.zeros(N)
+
+# Assign numbers to each student, according to their pair, after the first
+# icebreaker
+birmingham = np.arange(10)
+berkeley = np.arange(10)
+
+for i in range(N):
+    # Randomly shuffle the students from Berkeley
+    shuffled_berkeley = rnd.permuted(berkeley)
+
+    # Randomly shuffle the students from Birmingham
+    # (This step is not really necessary — shuffling one array is enough to make the matching random.)
+    shuffled_birmingham = rnd.permuted(birmingham)
+
+    # Count in how many cases people landed with the same person as in the
+    # first round, and store in trial_results.
+    matches = np.sum(shuffled_berkeley == shuffled_birmingham)
+    trial_results[i] = matches
+
+# Count the number of times we got 4 or more people assigned to the same person
+k = np.sum(trial_results >= 4)
+
+# Convert to a proportion.
+kk = k / N
+
+# Print the result.
+print(kk)
+
+
0.0165
+
+
+

We see that in about 2 percent of the trials, 4 or more couples ended up being re-paired with their original partners. This can also be seen from the histogram:

+
+
# Produce a histogram of trial results.
+plt.hist(trial_results, bins=range(10), align='left', rwidth=0.75)
+plt.title('Same pairs in round two');
+
+
+
+

+
+
+
+
+

End of university_icebreaker notebook

+
+
+
+

13.3.6 Example: Matching Santa Hats

+
+

Start of santas_hats notebook

+ + +

The welcome staff at a restaurant mix up the hats of a party of six Christmas Santas. What is the probability that at least one will get their own hat?

+

After a long Christmas day, six Santas meet in the pub to let off steam. However, as luck would have it, their hosts have mixed up their hats. When the hats are returned, what is the chance that at least one Santa will get his own hat back?

+

First, assign each of the six Santas a number, and place these numbers in an array. Next, shuffle the array (this represents the mixed-up hats) and compare to the original. The rest of the problem is the same as the pairs one from before, except that we are now interested in any trial where at least one (\(\ge 1\)) Santa received the right hat.

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+
+
N = 10000
+trial_results = np.zeros(N, dtype=bool)
+
+# Assign numbers to each owner
+owners = np.arange(6)
+
+# Each hat gets the number of their owner
+hats = np.arange(6)
+
+for i in range(N):
+    # Randomly shuffle the hats and compare to their owners
+    shuffled_hats = rnd.permuted(hats)
+
+    # In how many cases did at least one person get their hat back?
+    trial_results[i] = np.sum(shuffled_hats == owners) >= 1
+
+# How many times, over all trials, did at least one person get their hat back?
+k = np.sum(trial_results)
+
+# Convert to a proportion.
+kk = k / N
+
+# Print the result.
+print(kk)
+
+
0.6391
+
+
+

We see that in roughly 64 percent of the trials at least one Santa received their own hat back.

+

End of santas_hats notebook

+
+
+
+

13.3.7 Example: Twenty executives assigned to two divisions of a firm

+
+

Start of twenty_executives notebook

+ + +

The top manager wants to spread the talent reasonably evenly, but she does not want to label particular executives with a quality rating and therefore considers distributing them with a random selection. She therefore wonders: What are the probabilities of the best ten among the twenty being split among the divisions in the ratios 5 and 5, 4 and 6, 3 and 7, etc., if their names are drawn from a hat? One might imagine much the same sort of problem in choosing two teams for a football or baseball contest.

+

One may proceed as follows:

+
    +
  1. Put 10 balls labeled “W” (for “worst”) and 10 balls labeled “B” (for “best”) in a bucket.
  2. +
  3. Draw 10 balls without replacement and count the W’s.
  4. +
  5. Repeat (say) 400 times.
  6. +
  7. Count the number of times each split — 5 W’s and 5 B’s, 4 and 6, etc. — appears in the results.
  8. +
+

The problem can be done with Python as follows:

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+import matplotlib.pyplot as plt
+
+
+
N = 10000
+trial_results = np.zeros(N)
+
+managers = np.repeat(['Worst', 'Best'], [10, 10])
+
+for i in range(N):
+    chosen = rnd.choice(managers, size=10, replace=False)
+    trial_results[i] = np.sum(chosen == 'Best')
+
+plt.hist(trial_results, bins=range(10), align='left', rwidth=0.75)
+plt.title('Number of best managers chosen')
+
+
+
+

+
+
+
+
+

End of twenty_executives notebook

+
+
+
+

13.3.8 Example: Executives Moving

+ +

A major retail chain moves its store managers from city to city every three years in order to broaden individuals’ knowledge and experience. To make the procedure seem fair, the new locations are drawn at random. Nevertheless, the movement is not popular with managers’ families. Therefore, to make the system a bit sporting and to give people some hope of remaining in the same location, the chain allows managers to draw in the lottery the same posts they are now in. What are the probabilities that 1, 2, 3 … will get their present posts again if the number of managers is 30?

+

The problem can be solved with the following steps; a Python sketch of these steps appears after the list:

+
    +
  1. Number a set of green balls from “1” to “30” and put them into Bucket A. Number a set of red balls from “1” to “30” and put them into Bucket B. For greater concreteness one could use 30 little numbered dolls in Bucket A and 30 little toy houses in Bucket B.
  2. +
  3. Shuffle Bucket A, and array all its green balls into a row (vector A). Array all the red balls from Bucket B into a second row B just below row A.
  4. +
  5. Count how many green balls in row A have the same numbers as the red balls just below them, and record that number on a scoreboard.
  6. +
  7. Repeat steps 2 and 3 perhaps 1000 times. Then count in the scoreboard the numbers of “0,” “1,” “2,” “3.”
  8. +
+
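Here is one possible Python sketch of those steps, modeled on the icebreaker and Santa-hat solutions above; the variable names and the choice of 10,000 trials are our own:

import numpy as np

rnd = np.random.default_rng()

N = 10000
trial_results = np.zeros(N)

# Number the 30 managers; their current posts carry the same numbers.
managers = np.arange(30)
posts = np.arange(30)

for i in range(N):
    # Draw the new posts at random (shuffle the posts).
    shuffled_posts = rnd.permuted(posts)
    # Count how many managers drew their present post again.
    trial_results[i] = np.sum(shuffled_posts == managers)

# Proportion of trials in which 0, 1, 2 and 3 managers kept their post.
for n_same in range(4):
    print(n_same, np.sum(trial_results == n_same) / N)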
+
+

13.3.9 Example: State Liquor Systems Again

+

Let’s end this chapter with the example of state liquor systems that we first examined in Chapter 12 and which will be discussed again later in the context of problems in statistics.

+

Remember that as of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned (“Private”), and 16 monopoly states where the state government owns the retail liquor stores (“Government”). See Table 12.4 for the prices in the Private and Government states.

+

We found the average prices were:

+
    +
  • Private: $4.35;
  • +
  • Government: $4.84;
  • +
  • Difference (Government - Private): $0.49.
  • +
+

Let us now consider that all these states’ prices constitute one single finite universe. We ask: If these 42 states constitute a universe, and if they are all shuffled together, how likely is it that if one divides them into two samples at random (sampling without replacement), containing 16 and 26 observations respectively, the difference in mean prices turns out to be as great as $0.49 (the difference that was actually observed)?

+

Again we write each of the forty-two observed state prices on a separate card. The shuffled deck simulates a situation in which each state has an equal chance for each price. Repeatedly deal groups of 16 and 26 cards, without replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. In each trial calculate the difference in mean prices.

+

The steps more systematically (a Python sketch follows the list):

+
    +
  • Step A. Write each of the 42 prices on a card and shuffle.
  • +
  • Steps B and C (combined in this case). i) Draw cards randomly without replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $0.49; if it is as great or greater than $0.49, write “yes,” otherwise “no.”
  • +
  • Step D. Repeat step B-C a hundred or a thousand times. Calculate the proportion “yes,” which estimates the probability we seek.
  • +
+
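Here is one possible Python sketch of those steps. It assumes that prices is a NumPy array holding the 42 observed state prices from Table 12.4 (not reproduced here); otherwise it mirrors the earlier notebooks in this chapter:

import numpy as np

rnd = np.random.default_rng()

# `prices` is assumed to be a NumPy array of the 42 observed state prices
# from Table 12.4, e.g. prices = np.array([...]).

observed_diff = 4.84 - 4.35  # The observed difference of $0.49.

N = 10000
trial_results = np.zeros(N)

for i in range(N):
    # Shuffle all 42 prices and deal them into groups of 16 and 26.
    shuffled = rnd.permuted(prices)
    government_sample = shuffled[:16]
    private_sample = shuffled[16:]
    # Record the difference in mean prices for this trial.
    trial_results[i] = np.mean(government_sample) - np.mean(private_sample)

# Proportion of trials with a difference as large as, or larger than, observed.
print(np.sum(trial_results >= observed_diff) / N)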

The probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices.

+

Please notice how the only difference between this treatment of the problem and the treatment in Chapter 12 is that the drawing in this case is without replacement whereas in Chapter 12 the drawing is with replacement.

+

In Chapter 12 we thought of these states as if they came from a non-finite universe, which is one possible interpretation in one context. But one can also reasonably think about them in another context — as if they constitute the entire universe (aside from those states excluded from the analysis because of data complexities). If so, one can ask: If these 42 states constitute a universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference that was actually observed)?

+
+
+

13.3.10 Example: Five or More Spades in One Bridge Hand; Four Girls and a Boy

+
+

Start of five_spades_four_girls notebook

+ + +

This is a compound problem: what are the chances of both five or more spades in one bridge hand, and four girls and a boy in a five-child family?

+

“Compound” does not necessarily mean “complicated”. It means that the problem is a compound of two or more simpler problems.

+

A natural way to handle such a compound problem is in stages, as we saw in the archery problem of Section 12.10. If a “success” is achieved in the first stage, go on to the second stage; if not, don’t go on. More specifically in this example:

+
    +
  • Step 1. Use a bridge card deck, and five coins with heads = “girl”.
  • +
  • Step 2. Deal a 13-card bridge hand and count the spades. If there are fewer than five spades, record “no” and end the experimental trial. Otherwise, continue to step 3.
  • +
  • Step 3. Throw five coins, and count “heads.” If four heads, record “yes,” otherwise record “no.”
  • +
  • Step 4. Repeat steps 2 and 3 a thousand times.
  • +
  • Step 5. Compute the proportion of “yes” in step 3. This estimates the probability sought.
  • +
+

The Python solution to this compound problem is neither long nor difficult. We tackle it almost as if the two parts of the problem were to be dealt with separately. We first determine, in a random bridge hand, whether 5 spades or more are dealt, as in Section 13.3.2. Then, if 5 or more spades are found, we use rnd.choice to generate a random family of 5 children. This means that we need not generate families if 5 or more spades were not dealt to the bridge hand, because a “success” is only recorded if both conditions are met. After we record the number of girls in each sample of 5 children, we need only finish the loop (by unindenting the next line) and then use np.sum to count the number of trials that had 4 girls, storing the result in k. Since we only drew samples of children for those trials in which a bridge hand of 5 spades had already been dealt, k will hold the number of trials out of 10,000 in which both conditions were met.

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+
+
N = 10000
+trial_results = np.zeros(N)
+
+# Deck with 13 spades and 39 other cards
+deck = np.repeat(['spade', 'others'], [13, 52 - 13])
+
+for i in range(N):
+    # Shuffle deck and draw 13 cards
+    hand = rnd.choice(deck, size=13, replace=False)
+
+    n_spades = np.sum(hand == 'spade')
+
+    if n_spades >= 5:
+        # Generate a family, zeros for boys, ones for girls
+        children = rnd.choice(['girl', 'boy'], size=5)
+        n_girls = np.sum(children == 'girl')
+        trial_results[i] = n_girls
+
+k = np.sum(trial_results == 4)
+
+kk = k / N
+
+print(kk)
+
+
0.0282
+
+
+

Here is an alternative approach to the same problem, in which we record both parts of the result for every trial and only combine them — as Boolean arrays — after the loop has finished (see Section 10.5).

+
+
N = 10000
+trial_spades = np.zeros(N)
+trial_girls = np.zeros(N)
+
+# Deck with 13 spades and 39 other cards
+deck = np.repeat(['spade', 'other'], [13, 39])
+
+for i in range(N):
+    # Shuffle deck and draw 13 cards
+    hand = rnd.choice(deck, 13, replace=False)
+
+    n_spades = np.sum(hand == 'spade')
+    trial_spades[i] = n_spades
+
+    # Generate a family of 5 children, each equally likely to be 'girl' or 'boy'.
+    children = rnd.choice(['girl', 'boy'], size=5)
+    n_girls = np.sum(children == 'girl')
+    trial_girls[i] = n_girls
+
+k = np.sum((trial_spades >= 5) & (trial_girls == 4))
+
+kk = k / N
+
+print(kk)
+
+
0.0264
+
+
+

End of five_spades_four_girls notebook

+
+
+
+
+ +
+
+Speed and readability +
+
+
+

The last version is a fraction more expensive, because it generates a family on every trial whether or not the bridge hand qualifies, but it has the advantage that the condition we are testing for is summarized on one line. However, it would not be a good approach if the two parts of the experiment were related — that is, if the result of the first part affected how we ran the second.

+
+
+
+
+
+

13.4 Summary

+

This completes the discussion of problems in probability — that is, problems where we assume that the structure is known. Whereas Chapter 12 dealt with samples drawn from universes considered not finite, this chapter deals with problems drawn from finite universes, where we therefore sample without replacement.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-15-1.png b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-15-1.png new file mode 100644 index 00000000..805b2aa1 Binary files /dev/null and b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-15-1.png differ diff --git a/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-4-1.png b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-4-1.png new file mode 100644 index 00000000..6bde8b7d Binary files /dev/null and b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-4-1.png differ diff --git a/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-42-1.png b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-42-1.png new file mode 100644 index 00000000..07aa7c39 Binary files /dev/null and b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-42-1.png differ diff --git a/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-47-1.png b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-47-1.png new file mode 100644 index 00000000..4e4f5258 Binary files /dev/null and b/python-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-47-1.png differ diff --git a/python-book/references.html b/python-book/references.html new file mode 100644 index 00000000..01690c6d --- /dev/null +++ b/python-book/references.html @@ -0,0 +1,1034 @@ + + + + + + + + + +Resampling statistics - References + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

References

+
+ + + +
+ + + + +
+ + +
+ +
+
+Ani Adhikari, John DeNero, and David Wagner. 2021. Computational and +Inferential Thinking: The Foundations of Data Science. https://inferentialthinking.com. https://inferentialthinking.com. +
+
+Arbuthnot, John. 1710. “An Argument for Divine Providence, Taken +from the Constant Regularity Observ’d in the Births of Both Sexes. By +Dr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of +the College of Physitians and the Royal Society.” +Philosophical Transactions of the Royal Society of London 27 +(328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011. +
+
+Barnett, Vic. 1982. Comparative Statistical Inference. 2nd ed. +Wiley Series in Probability and Mathematical Statistics. Chichester: +John Wiley & Sons. https://archive.org/details/comparativestati0000barn. +
+
+Box, George E. P., and George C. Tiao. 1992. Bayesian Inference in +Statistical Analysis. New York: Wiley & Sons, Inc. +https://www.google.co.uk/books/edition/Bayesian_Inference_in_Statistical_Analys/T8Askeyk1k4C. +
+
+Brooks, Charles Ernest Pelham. 1928. “Periodicities in the Nile +Floods.” Memoirs of the Royal Meteorological Society 2 +(12): 9--26. https://www.rmets.org/sites/default/files/papers/brooksmem2-12.pdf. +
+
+Bulmer, M. G. 1979. Principles of Statistics. New York, NY: +Dover Publications, inc. https://archive.org/details/principlesofstat0000bulm. +
+
+Burnett, Ed. 1988. The Complete Direct Mail List Handbook: +Everything You Need to Know about Lists and How to Use Them for Greater +Profit. Englewood Cliffs, New Jersey: Prentice Hall. https://archive.org/details/completedirectma00burn. +
+
+Cascells, Ward, Arno Schoenberger, and Thomas B. Grayboys. 1978. +“Interpretation by Physicians of Clinical Laboratory +Results.” New England Journal of Medicine 299: 999–1001. +https://www.nejm.org/doi/full/10.1056/NEJM197811022991808. +
+
+Catling, HW, and RE Jones. 1977. “A Reinvestigation of the +Provenance of the Inscribed Stirrup Jars Found at Thebes.” +Archaeometry 19 (2): 137–46. +
+
+Chung, James H, and Donald AS Fraser. 1958. “Randomization Tests +for a Multivariate Two-Sample Problem.” Journal of the +American Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf. +
+
+Cipolla, C. M. 1981. Fighting the Plague in Seventeenth-Century +Italy. Merle Curti Lectures. Madison, Wisconsin: University of +Wisconsin Press. https://books.google.co.uk/books?id=Ct\_OJYgnKCsC. +
+
+Cobb, George W. 2007. “The Introductory Statistics Course: A +Ptolemaic Curriculum?” Technology Innovations in Statistics +Education 1 (1). https://escholarship.org/uc/item/6hb3k0nz. +
+
+Coleman, William. 1987. “Experimental Physiology and Statistical +Inference: The Therapeutic Trial in Nineteenth Century +Germany.” In The Probabilistic Revolution: +Volume 2: Ideas in the Sciences, edited by Lorenz Krüger, Gerd +Gigerenzer, and Mary S. Morgan. An MIT Press Classic. MIT Press. https://books.google.co.uk/books?id=SLftmgEACAAJ. +
+
+Cook, Earl. 1976. “Limits to Exploitation of Nonrenewable +Resources.” Science 191 (4228): 677–82. https://www.jstor.org/stable/pdf/1741483.pdf. +
+
+Davenport, Thomas H, and DJ Patil. 2012. “Data Scientist: The +Sexiest Job of the 21st Century.” Harvard Business +Review 90 (10): 70–76. https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century. +
+
+Deshpande, Jayant V, AP Gore, and A Shanubhogue. 1995. Statistical +Analysis of Nonnormal Data. Taylor & Francis. https://www.google.co.uk/books/edition/Statistical_Analysis_of_Nonnormal_Data/sS0on2XqwwoC. +
+
+Dixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to +Statistical Analysis.” +
+
+Donoho, David. 2017. “50 Years of Data Science.” +Journal of Computational and Graphical Statistics 26 (4): +745–66. http://courses.csail.mit.edu/18.337/2015/docs/50YearsDataScience.pdf. +
+
+Dunleavy, Kieron, Stefania Pittaluga, John Janik, Nicole Grant, Margaret +Shovlin, Richard Little, Robert Yarchoan, Seth Steinberg, Elaine S. +Jaffe, and Wyndham H. Wilson. 2006. Novel +Treatment of Burkitt Lymphoma with Dose-Adjusted EPOCH-Rituximab: +Preliminary Results Showing Excellent Outcome. +Blood 108 (11): 2736–36. https://doi.org/10.1182/blood.V108.11.2736.2736. +
+
+Dwass, Meyer. 1957. “Modified Randomization Tests for +Nonparametric Hypotheses.” The Annals of Mathematical +Statistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf. +
+
+Efron, Bradley. 1979. “Bootstrap Methods; Another Look at the +Jackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf. +
+
+Efron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to +the Bootstrap.” In Monographs on Statistics and Applied +Probability, edited by David R Cox, David V Hinkley, Nancy Reid, +Donald B Rubin, and Bernard W Silverman. Vol. 57. New York: +Chapman & Hall. +
+
+Feller, William. 1968. An Introduction to Probability Theory and Its +Applications: Volume i. 3rd ed. Vol. 1. New York: John Wiley & +Sons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ. +
+
+Feynman, Richard P., and Ralph Leighton. 1988. What Do You +Care What Other People Think? Further Adventures of a Curious +Character. New York, NY: W. W. Norton; Company, Inc. https://archive.org/details/whatdoyoucarewha0000feyn_x5w7. +
+
+Fisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. +Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684. +
+
+———. 1959. “Statistical Methods and Scientific Inference.” +https://archive.org/details/statisticalmetho0000fish. +
+
+———. 1960. The Design of Experiments. 7th ed. Edinburgh: +Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5. +
+
+Fussler, Herman Howe, and Julian Lincoln Simon. 1961. Patterns in +the Use of Books in Large Research Libraries. Chicago: University +of Chicago Library. +
+
+Gardner, Martin. 1985. Mathematical Magic Show. Penguin Books +Ltd, Harmondsworth. +
+
+———. 2001. The Colossal Book of Mathematics. W.W. Norton & +Company Inc., New York. https://archive.org/details/B-001-001-265. +
+
+Gilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot +Hand in Basketball: On the Misperception of Random Sequences.” +Cognitive Psychology 17 (3): 295–314. https://www.joelvelasco.net/teaching/122/Gilo.Vallone.Tversky.pdf. +
+
+Gnedenko, Boris Vladimirovich, I Aleksandr, and Akovlevich Khinchin. +1962. An Elementary Introduction to the Theory of Probability. +New York, NY, USA: Dover Publications, Inc. https://archive.org/details/gnedenko-khinchin-an-elementary-introduction-to-the-theory-of-probability. +
+
+Goldberg, Samuel. 1986. Probability: An Introduction. Courier +Corporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC. +
+
+Graunt, John. 1759. “Natural and Political Observations Mentioned +in a Following Index and Made Upon the Bills of Mortality.” In +Collection of Yearly Bills of Mortality, from 1657 to 1758 +Inclusive, edited by Thomas Birch. London: A. Miller. https://archive.org/details/collectionyearl00hebegoog. +
+
+Hald, Anders. 1990. A History of Probability and Statistics and +Their Applications Before 1750. New York: John Wiley & Sons. https://archive.org/details/historyofprobabi0000hald. +
+
+Hansen, Morris H, William N Hurwitz, and William G Madow. 1953. +“Sample Survey Methods and Theory. Vol. I. Methods and +Applications.” https://archive.org/details/SampleSurveyMethodsAndTheoryVol1. +
+
+Hodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic +Concepts of Probability and Statistics. 2nd ed. San Francisco, +California: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9. +
+
+Hollander, Myles, and Douglas A Wolfe. 1999. Nonparametric +Statistical Methods. 2nd ed. Wiley Series in Probability and +Statistics: Applied Probability and Statistics. New York: John Wiley +& Sons, Inc. https://archive.org/details/nonparametricsta0000ed2holl. +
+
+Hyndman, Rob J, and Yanan Fan. 1996. “Sample Quantiles in +Statistical Packages.” The American Statistician 50 (4): +361–65. https://www.jstor.org/stable/pdf/2684934.pdf. +
+
+Kahn, Harold A, and Christopher T Sempos. 1989. Statistical Methods +in Epidemiology. Vol. 12. Monographs in Epidemiology and +Biostatistics. New York: Oxford University Press. https://www.google.co.uk/books/edition/Statistical_Methods_in_Epidemiology/YERYAgAAQBAJ. +
+
+Kinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948. +“Sexual Behavior in the Human Male.” W. B. Saunders +Company. https://books.google.co.uk/books?id=pfMKrY3VvigC. +
+
+Kornberg, Arthur. 1991. For the Love of Enzymes: The Odyssey of a +Biochemist. Cambridge, Massachusetts: Harvard University Press. https://archive.org/details/forloveofenzymes00arth. +
+
+Kotz, Samuel, and Norman Lloyd Johnson. 1992. Breakthroughs in +Statistics. New York: Springer-Verlag. +
+
+Lee, Peter M. 2012. Bayesian Statistics: An Introduction. 4th +ed. Wiley Online Library. https://www.york.ac.uk/depts/maths/histstat/pml1/bayes/book.htm. +
+
+Lorie, James Hirsch, and Harry V Roberts. 1951. Basic Methods of +Marketing Research. McGraw-Hill. +
+
+Lyon, Herbert L, and Julian Lincoln Simon. 1968. “Price Elasticity +of the Demand for Cigarettes in the United States.” American +Journal of Agricultural Economics 50 (4): 888–95. +
+
+Martineau, Adrian R, David A Jolliffe, Richard L Hooper, Lauren +Greenberg, John F Aloia, Peter Bergman, Gal Dubnov-Raz, et al. 2017. +“Vitamin D Supplementation to Prevent Acute +Respiratory Tract Infections: Systematic Review and Meta-Analysis of +Individual Participant Data.” Bmj 356. +
+
+McCabe, George P, and Linda Doyle McCabe. 1989. Instructor’s Guide +with Solutions for Introduction to the Practice of Statistics. New +York: W. H. Freeman. +
+
+Mosteller, Frederick. 1987. Fifty Challenging Problems in +Probability with Solutions. Courier Corporation. +
+
+Mosteller, Frederick, and Robert E. K. Rourke. 1973. Sturdy +Statistics: Nonparametrics and Order Statistics. Addison-Wesley +Publishing Company. +
+
+Mosteller, Frederick, Robert E. K. Rourke, and George Brinton Thomas Jr. +1961. Probability with Statistical Applications. 2nd ed. https://archive.org/details/probabilitywiths0000most. +
+
+Noreen, Eric W. 1989. Computer-Intensive Methods for Testing +Hypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore. +
+
+Peirce, Charles Sanders. 1923. Chance, Love, and Logic: +Philosophical Essays. New York: Harcourt Brace & Company, Inc. +https://www.gutenberg.org/files/65274/65274-h/65274-h.htm. +
+
+Piketty, Thomas. 2018. “Brahmin Left Vs Merchant Right: Rising +Inequality & the Changing Structure of Political Conflict.” +2018. https://www.prsinstitute.org/downloads/related/economics/RisingInequalityandtheChangingStructureofPoliticalConflict1.pdf. +
+
+Pitman, Edwin JG. 1937. “Significance Tests Which May Be Applied +to Samples from Any Populations.” Supplement to the Journal +of the Royal Statistical Society 4 (1): 119–30. https://www.jstor.org/stable/pdf/2984124.pdf. +
+
+Raiffa, Howard. 1968. “Decision Analysis: Introductory Lectures on +Choices Under Uncertainty.” https://archive.org/details/decisionanalysis0000raif. +
+
+Ruark, Arthur Edward, and Harold Clayton Urey. 1930. Atoms, +Moleculues and Quanta. New York, NY: McGraw-Hill book +company, inc. https://archive.org/details/atomsmoleculesqu00ruar. +
+
+Russell, Bertrand. 1945. A History of Western Philosophy. New York: Simon & Schuster.
+
+Savage, Leonard J. 1972. The Foundations of Statistics. New +York: Dover Publications, Inc. +
+
+Savant, Marilyn vos. 1990. “Ask Marilyn.” 1990. https://web.archive.org/web/20160318182523/http://marilynvossavant.com/game-show-problem. +
+
+Schlaifer, Robert. 1961. Introduction to Statistics for Business +Decisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl. +
+
+Selvin, Steve. 1975. “Letters to the Editor.” The +American Statistician 29 (1): 67. http://www.jstor.org/stable/2683689. +
+
+Semmelweis, Ignác Fülöp. 1983. The Etiology, Concept, and +Prophylaxis of Childbed Fever. Translated by K. Codell Carter. +Madison, Wisconsin: University of Wisconsin Press. https://archive.org/details/etiologyconcepta0000unse. +
+
+Shurtleff, Dewey. 1970. “Some Characteristics Related to the +Incidence of Cardiovascular Disease and Death: Framingham Study, 16-Year +Follow-up.” Section 26. Edited by William B. Kannel and Tavia +Gordon. The Framingham Study: An Epidemiological Investigation of +Cardiovascular Disease. Washington, D.C.: U.S. Government Printing +Office. https://upload.wikimedia.org/wikipedia/commons/6/6d/The_Framingham_study_-_an_epidemiological_investigation_of_cardiovascular_disease_sec.26_1970_%28IA_framinghamstudye00kann_25%29.pdf. +
+
+Simon, Julian Lincoln. 1967. “Doctors, Smoking, and Reference +Groups.” Public Opinion Quarterly 31 (4): 646–47. +
+
+———. 1969. Basic Research Methods in Social Science. 1st ed. +New York: Random House. +
+
+———. 1992. Resampling: The New Statistics. 1st ed. +Arlington, VA: Resampling Stats Inc. +
+
+———. 1998. “The Philosophy and Practice of Resampling +Statistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy. +
+
+Simon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976. +“Probability and Statistics: Experimental Results of a Radically +Different Teaching Method.” The American Mathematical +Monthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf. +
+
+Simon, Julian Lincoln, and Paul Burstein. 1985. Basic Research +Methods in Social Science. 3rd ed. New York: Random House. +
+
+Simon, Julian Lincoln, and Allen Holmes. 1969. “A New Way to Teach +Probability Statistics.” The Mathematics Teacher 62 (4): +283–88. +
+
+Simon, Julian Lincoln, Manouchehr Mokhtari, and Daniel H Simon. 1996. +“Are Mergers Beneficial or Detrimental? Evidence from Advertising +Agencies.” International Journal of the Economics of +Business 3 (1): 69–82. +
+
+Simon, Julian Lincoln, and David M Simon. 1996. “The Effects of +Regulations on State Liquor Prices.” Empirica 23: +303–16. +
+
+Støvring, H. 1999. “On Radicke and His Method for Testing Mean +Differences.” Journal of the Royal Statistical Society: +Series D (The Statistician) 48 (2): 189–201. https://www.jstor.org/stable/pdf/2681185.pdf. +
+
+Sudman, Seymour. 1976. Applied Sampling. New York: +Academic Press. https://archive.org/details/appliedsampling0000unse. +
+
+Tukey, John W. 1977. Exploratory Data Analysis. Reading, MA, +USA: Addison-Wesley. +
+
+Tversky, Amos, and Daniel Kahneman. 1982. “Evidential Impact of +Base Rates.” In Judgement Under Uncertainty: Heuristics and +Biases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky. +Cambridge: Cambridge University Press. https://www.google.co.uk/books/edition/Judgment_Under_Uncertainty/_0H8gwj4a1MC. +
+
+Vazsonyi, Andrew. 1999. “Which Door Has the Cadillac.” +Decision Line 30 (1): 17–19. https://web.archive.org/web/20140413131827/http://www.decisionsciences.org/DecisionLine/Vol30/30_1/vazs30_1.pdf. +
+
+Wallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New +Approach. New York: The Free Press. +
+
+Whitworth, William Allen. 1897. DCC Exercises in Choice +and Chance. Cambridge, UK: Deighton Bell; Co. https://archive.org/details/dccexerciseschoi00whit. +
+
+Winslow, Charles-Edward Amory. 1980. The Conquest of Epidemic +Disease: A Chapter in the History of Ideas. Madison, Wisconsin: +University of Wisconsin Press. https://archive.org/details/conquestofepidem0000wins_p3k0. +
+
+Wonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory +Statistics. 5th ed. New York: John Wiley & Sons. +
+
+Zhou, Qixing, Christopher E Gibson, and Robert H Foy. 2000. +“Long-Term Changes of Nitrogen and Phosphorus Loadings to a Large +Lake in North-West Ireland.” Water Research 34 (3): +922–26. https://doi.org/10.1016/S0043-1354(99)00199-2. +
+
+ + + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/reliability_average.html b/python-book/reliability_average.html new file mode 100644 index 00000000..b1e47334 --- /dev/null +++ b/python-book/reliability_average.html @@ -0,0 +1,698 @@ + + + + + + + + + +Resampling statistics - 28  Some Last Words About the Reliability of Sample Averages + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

28  Some Last Words About the Reliability of Sample Averages

+
+ + + +
+ + + + +
+ + +
+ +
+

28.1 The problem of uncertainty about the dispersion

+

The inescapable difficulty of estimating the amount of dispersion in the population has greatly exercised statisticians over the years. Hence I must try to clarify the matter. Yet in practice this issue turns out not to be the likely source of much error even if one is somewhat wrong about the extent of dispersion, and therefore we should not let it be a stumbling block in the way of our producing estimates of the accuracy of samples in estimating population parameters.

+

Student’s t test was designed to get around the problem of the lack of knowledge of the population dispersion. But Wallis and Roberts wrote about the t test: “[F]ar-reaching as have been the consequences of the t distribution for technical statistics, in elementary applications it does not differ enough from the normal distribution…to justify giving beginners this added complexity.” (Wallis and Roberts 1956, p. x) “Although Student’s t and the F ratio are explained…the student…is advised not ordinarily to use them himself but to use the shortcut methods… These, being non-parametric and involving simpler computations, are more nearly foolproof in the hands of the beginner — and, ordinarily, only a little less powerful.” (p. xi)1

+

If we knew the population parameter — the proportion, in the case we will discuss — we could easily determine how inaccurate the sample proportion is likely to be. If, for example, we wanted to know about the likely inaccuracy of the proportion of a sample of 100 voters drawn from a population of a million that is 60% Democratic, we could simply simulate drawing (say) 200 samples of 100 voters from such a universe, and examine the average inaccuracy of the 200 sample proportions.
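For instance, such a simulation might look like the following sketch, where the mean absolute difference between the sample proportions and the true 60% serves as one possible measure of the average inaccuracy:

import numpy as np

rnd = np.random.default_rng()

# Draw 200 samples of 100 voters from a universe that is 60% Democratic.
n_samples = 200
sample_props = np.zeros(n_samples)

for i in range(n_samples):
    votes = rnd.choice(['Democrat', 'Republican'], size=100, p=[0.6, 0.4])
    sample_props[i] = np.sum(votes == 'Democrat') / 100

# Average inaccuracy: mean absolute difference from the true 60%.
print(np.mean(np.abs(sample_props - 0.6)))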

+

But in fact we do not know the characteristics of the actual universe. Rather, the nature of the actual universe is what we seek to learn about. Of course, if the amount of variation among samples were the same no matter what the Republican-Democrat proportions in the universe, the issue would still be simple, because we could then estimate the average inaccuracy of the sample proportion for any universe and then assume that it would hold for our universe. But it is reasonable to suppose that the amount of variation among samples will be different for different Democrat-Republican proportions in the universe.

+

Let us first see why the amount of variation among samples drawn from a given universe is different with different relative proportions of the events in the universe. Consider a universe of 999,999 Democrats and one Republican. Most samples of 100 taken from this universe will contain 100 Democrats. A few (and only a very, very few) samples will contain 99 Democrats and one Republican. So the biggest possible difference between the sample proportion and the population proportion (99.9999%) is less than one percent (for the very few samples of 99% Democrats). And most of the time the difference will only be the tiny difference between a sample of 100 Democrats (sample proportion = 100%), and the population proportion of 99.9999%.

+

Compare the above to the possible difference between a sample of 100 from a universe of half a million Republicans and half a million Democrats. At worst a sample could be off by as much as 50% (if it got zero Republicans or zero Democrats), and at best it is unlikely to get exactly 50 of each. So it will almost always be off by 1% or more.

+

It seems, therefore, intuitively reasonable (and in fact it is true) that the likely difference between a sample proportion and the population proportion is greatest with a 50%-50% universe, least with a 0%-100% universe, and somewhere in between for other proportions, in the fashion of Figure 28.1.

+
+
+
+
+

+
Figure 28.1: Relationship Between the Population Proportion and the Likely Error In a Sample
+
+
+
+
+
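One quick way to see the shape of this relationship numerically is to simulate the likely error for a few different population proportions; the sample size of 100 and the use of the mean absolute error below are illustrative choices rather than anything required by the argument:

import numpy as np

rnd = np.random.default_rng()

sample_size = 100
n_trials = 1000

for true_prop in [0.5, 0.7, 0.9, 0.99, 0.999]:
    errors = np.zeros(n_trials)
    for i in range(n_trials):
        # One sample: True where the member belongs to the majority group.
        sample = rnd.random(sample_size) < true_prop
        errors[i] = abs(np.mean(sample) - true_prop)
    # The likely (mean absolute) error shrinks as the proportion nears 1.
    print(true_prop, np.mean(errors))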

Perhaps it will help to clarify the issue of estimating dispersion if we consider this: If we compare estimates for a second sample based on a) the population , versus b) the first sample , the former will be more accurate than the latter, because of the sampling variation in the first sample that affects the latter estimate. But we cannot estimate that sampling variation without knowing more about the population.

+
+
+

28.2 Notes on the use of confidence intervals

+
    +
  1. Confidence intervals are used more frequently in the physical sciences — indeed, the concept was developed for use in astronomy — than in bio-statistics and in the social sciences; in these latter fields, measurement is less often the main problem and the distinction between hypotheses often is difficult.
  2. +
  3. Some statisticians suggest that one can do hypothesis tests with the confidence-interval concept. But that seems to me equivalent to suggesting that one can get from New York to Chicago by flying first to Los Angeles. Additionally, the logic of hypothesis tests is much clearer than the logic of confidence intervals, and it corresponds to our intuitions so much more easily.
  4. +
  5. Discussions of confidence intervals sometimes assert that one cannot make a probability statement about where the population mean may be, yet can make statements about the probability that a particular set of samples may bound that mean.
  6. +
+

If we agree that our interest is in upcoming events and, probably, in decision-making, then we obviously are interested in putting betting odds on the location of the population mean (and subsequent samples). And a statement about process will not help us with that — only a probability statement will.

+

Moving progressively farther away from the sample mean, we can find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can say that this point represents a “limit” or “boundary”, and the interval between it and the sample mean may be called a confidence interval, I suppose.

+

This issue is discussed in more detail in Simon (1998, published online).

+
+
+

28.3 Overall summary and conclusions about confidence intervals

+

The first task in statistics is to measure how much — to make a quantitative estimate of the universe from which a given sample has been drawn, including especially the average and the dispersion; the theory of point estimation is discussed in Chapter 19.

+

The next task is to make inferences about the meaning of the estimates. A hypothesis test helps us decide whether two or more universes are the same or different from each other. In contrast, the confidence interval concept helps us decide on the reliability of an estimate.

+

Confidence intervals and hypothesis tests are not entirely disjoint. In fact, hypothesis testing of a single sample against a benchmark value is, under all interpretations, I think, operationally identical with constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different because the questions which they are designed to answer are different.

+

Having now worked through the entire procedure of producing a confidence interval, it should be glaringly obvious why statistics is such a difficult subject. The procedure is very long, and involves a very large number of logical steps. Such a long logical train is very hard to control intellectually, and very hard to follow with one’s intuition. The actual computation of the probabilities is the very least of it, almost a trivial exercise.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/resampling_method.html b/python-book/resampling_method.html new file mode 100644 index 00000000..fce2604a --- /dev/null +++ b/python-book/resampling_method.html @@ -0,0 +1,2279 @@ + + + + + + + + + +Resampling statistics - 2  The resampling method + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

2  The resampling method

+
+ + + +
+ + + + +
+ + +
+ +

This chapter is a brief introduction to the resampling method of solving problems in probability and statistics. We’re going to dive right in and solve a problem hands-on.

+

You will see that the resampling method is easy to understand and apply: all it requires is to understand the physical problem. You then simulate a statistical model of the physical problem with techniques that are intuitively obvious, and estimate the probability sought with repeated random sampling.

+

After finding a solution, we will look at the more conventional formulaic approach, and how that compares. Here’s the spoiler: it requires you to understand complex formulas, and to choose the correct one from many.

+

After reading this chapter, you will understand why we are excited about the resampling method, and why it will allow you to approach even hard problems without knowing sophisticated statistical techniques.

+
+

2.1 The resampling approach in action

+

Recall the problem from section Section 1.2 in which the contractor owns 20 ambulances:

+
+

You are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action.

+
+

The resampling approach produces the estimate as follows.

+
+

2.1.1 Randomness from physical methods

+

We collect 10 coins, and mark one of them with a pen or pencil or tape as being the coin that represents “out-of-order;” the other nine coins stand for “in operation”. For any one ambulance, this set of 10 coins provides a “model” for the one-in-ten chance — a probability of .10 (10 percent) — of it being out of order on a given day. We put the coins into a little jar or bucket.

+

For ambulance #1, we draw a single coin from the bucket. This coin represents whether that ambulance is going to be broken tomorrow. After replacing the coin and shaking the bucket, we repeat the same procedure for ambulance #2, ambulance #3 and so forth. Having repeated the procedure 20 times, we now have a representation of all ambulances for a single day.

+

We can now repeat this whole process as many times as we like: each time, we come up with a representation for a different day, telling us how many ambulances will be out-of-service on that day.

+

After collecting evidence for, say, 50 experimental days we determine the proportion of the experimental days on which three or more ambulances are out of order. That proportion is an estimate of the probability that three or more ambulances will be out of order on a given day — the answer we seek. This procedure is an example of Monte Carlo simulation, which is the heart of the resampling method of statistical estimation.

+

A more direct way to answer this question would be to examine the firm’s actual records for the past 100 days or, better, 500 days (if that’s available) to determine how many days had three or more ambulances out of order. But the resampling procedure described above gives us an estimate even if we do not have such long-term information. This is realistic; it is frequently the case in the real world that we must make estimates on the basis of insufficient history about an event.

+

A quicker resampling method than the coins could be obtained with 20 ten-sided dice or spinners (like those found in the popular Dungeons & Dragons games). For each die, we identify one of its ten sides as “out-of-order”.

+

Funnily enough, standard 10-sided dice have the numbers 0 through 9 on their faces, rather than 1 through 10. Figure 2.1 shows a standard 10-sided die:

+
+
+

+
Figure 2.1: 10-sided die
+
+
+

We decide, arbitrarily, that the 9 side means “out-of-order”. We could even put a little bit of paint on the 9 side to remind us. The die represents an ambulance. If we roll the die, and get this face, this indicates that the ambulance was out of order. If we get any of the other faces — 0 through 8 — this ambulance was in working order. A single throw of all 20 dice will be our experimental trial that represents a single day; we just have to count whether three or more ambulances turn up “out of order”. Figure 2.2 show the result of one trial — throwing 20 dice:

+
+
+

+
Figure 2.2: 20 10-sided dice
+
+
+

As you can see, the trial in Figure 2.2 gave us a single 9, so there was only one ambulance out of order.

+

In a hundred quick throws of the 20 dice — which probably takes less than 5 minutes — we can get a fast and reasonably accurate answer to our question.

+
+
+
+

2.2 Randomness from your computer

+

Computers make it easy to generate random numbers for resampling.

+
+
+
+ +
+
+What do we mean by random? +
+
+
+

Random numbers are numbers where it is impossible to predict which number is coming next. If we ask the computer for a number between 0 and 9, we will get one of the numbers 0 though 9, but we cannot do any better than that in predicting which number it will give us. There is an equal (10%) chance we will get any of the numbers 0 through 9 — just as there is when we roll a fair 10-sided die. We will go into more detail about what exactly we mean by random and chance later in the book (Section 3.8).

+
+
+ +

We can use random numbers from computers to simulate our problem. For example, we can ask the computer to choose a random number between 0 and 9 to represent one ambulance. Let’s say the number 9 represents “out-of-order” and 0 through 8 “in operation”, then any one random number gives us a trial observation for a single ambulance. To get an experimental trial for a single day we look at 20 numbers and count how many of them are 9. We then look at, say, one hundred sets of 20 numbers and count the proportion of sets whose 20 numbers show three or more ambulances being “out-of-order”. Once again, that proportion estimates the probability that three or more ambulances will be out-of-order on any given day.

+

Soon we will do all these steps with some Python code, but for now, consider Table 2.1. In each row, we placed 20 numbers, each one representing an ambulance. We added 25 such rows, each representing a simulation of one day.

+
+
+
Table 2.1: 25 simulations of 20 ambulances
(Columns A1–A20: one random digit per ambulance, left to right.)

Day 1:  5 4 4 5 9 8 2 9 1 5 8 2 1 8 2 6 6 5 0 5
Day 2:  2 7 4 4 6 3 9 5 2 5 8 1 2 5 4 9 0 5 8 4
Day 3:  5 9 1 2 8 7 5 3 8 9 2 6 9 0 7 2 5 2 2 2
Day 4:  2 4 7 6 0 4 5 1 3 7 6 3 2 9 5 8 0 6 0 4
Day 5:  7 4 8 9 1 5 1 2 3 6 4 8 5 1 7 5 0 9 8 7
Day 6:  7 3 9 1 7 7 9 9 6 8 4 7 7 2 0 2 4 6 9 2
Day 7:  3 9 5 3 7 1 3 0 8 0 0 3 3 0 0 3 8 6 4 6
Day 8:  0 4 6 7 9 7 1 9 8 1 8 7 0 4 4 7 0 5 6 1
Day 9:  0 9 0 7 0 1 6 0 8 6 0 3 1 9 8 3 1 2 7 8
Day 10: 8 6 1 0 8 3 4 5 8 8 4 9 1 0 8 6 9 2 0 7
Day 11: 7 0 0 7 9 2 3 0 0 0 5 5 4 0 1 7 8 2 0 8
Day 12: 3 2 2 4 6 3 9 6 8 8 7 6 6 4 3 8 7 0 4 3
Day 13: 4 2 6 9 0 0 8 5 3 1 5 1 8 7 6 8 3 6 3 5
Day 14: 3 1 2 4 3 1 6 2 9 5 2 4 0 6 1 9 0 7 9 4
Day 15: 2 0 1 5 8 5 8 1 3 2 2 7 8 2 2 1 2 9 2 5
Day 16: 9 9 6 0 6 3 3 2 6 8 3 9 0 5 7 8 8 3 8 6
Day 17: 8 3 0 0 1 5 3 7 0 9 6 4 1 2 5 0 1 8 7 1
Day 18: 7 1 2 6 4 3 0 0 7 5 6 2 9 2 8 0 3 1 9 1
Day 19: 5 6 5 9 8 4 3 0 6 7 4 9 4 2 0 6 1 0 4 1
Day 20: 0 5 5 9 9 4 3 4 1 6 9 2 4 3 1 8 6 8 0 2
Day 21: 4 1 0 1 5 1 6 4 8 5 2 1 5 8 6 2 0 5 2 6
Day 22: 8 5 2 0 3 5 0 9 0 4 2 8 1 1 5 7 1 4 7 5
Day 23: 1 0 8 5 4 7 5 2 8 7 2 6 4 4 3 5 6 5 5 7
Day 24: 9 5 7 9 6 3 4 7 7 2 5 2 0 0 9 1 9 5 2 8
Day 25: 6 0 9 4 8 3 4 8 0 8 8 7 1 0 7 3 4 7 5 1
+
+

To know how many ambulances were “out of order” on any given day, we count the number of nines in that row. We place the counts in the final column called “#9” (for “number of nines”):

+
+
+
Table 2.2: 25 simulations of 20 ambulances, with counts

| Day    | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 | A10 | A11 | A12 | A13 | A14 | A15 | A16 | A17 | A18 | A19 | A20 | #9 |
|--------|----|----|----|----|----|----|----|----|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|----|
| Day 1  | 5 | 4 | 4 | 5 | 9 | 8 | 2 | 9 | 1 | 5 | 8 | 2 | 1 | 8 | 2 | 6 | 6 | 5 | 0 | 5 | 2 |
| Day 2  | 2 | 7 | 4 | 4 | 6 | 3 | 9 | 5 | 2 | 5 | 8 | 1 | 2 | 5 | 4 | 9 | 0 | 5 | 8 | 4 | 2 |
| Day 3  | 5 | 9 | 1 | 2 | 8 | 7 | 5 | 3 | 8 | 9 | 2 | 6 | 9 | 0 | 7 | 2 | 5 | 2 | 2 | 2 | 3 |
| Day 4  | 2 | 4 | 7 | 6 | 0 | 4 | 5 | 1 | 3 | 7 | 6 | 3 | 2 | 9 | 5 | 8 | 0 | 6 | 0 | 4 | 1 |
| Day 5  | 7 | 4 | 8 | 9 | 1 | 5 | 1 | 2 | 3 | 6 | 4 | 8 | 5 | 1 | 7 | 5 | 0 | 9 | 8 | 7 | 2 |
| Day 6  | 7 | 3 | 9 | 1 | 7 | 7 | 9 | 9 | 6 | 8 | 4 | 7 | 7 | 2 | 0 | 2 | 4 | 6 | 9 | 2 | 4 |
| Day 7  | 3 | 9 | 5 | 3 | 7 | 1 | 3 | 0 | 8 | 0 | 0 | 3 | 3 | 0 | 0 | 3 | 8 | 6 | 4 | 6 | 1 |
| Day 8  | 0 | 4 | 6 | 7 | 9 | 7 | 1 | 9 | 8 | 1 | 8 | 7 | 0 | 4 | 4 | 7 | 0 | 5 | 6 | 1 | 2 |
| Day 9  | 0 | 9 | 0 | 7 | 0 | 1 | 6 | 0 | 8 | 6 | 0 | 3 | 1 | 9 | 8 | 3 | 1 | 2 | 7 | 8 | 2 |
| Day 10 | 8 | 6 | 1 | 0 | 8 | 3 | 4 | 5 | 8 | 8 | 4 | 9 | 1 | 0 | 8 | 6 | 9 | 2 | 0 | 7 | 2 |
| Day 11 | 7 | 0 | 0 | 7 | 9 | 2 | 3 | 0 | 0 | 0 | 5 | 5 | 4 | 0 | 1 | 7 | 8 | 2 | 0 | 8 | 1 |
| Day 12 | 3 | 2 | 2 | 4 | 6 | 3 | 9 | 6 | 8 | 8 | 7 | 6 | 6 | 4 | 3 | 8 | 7 | 0 | 4 | 3 | 1 |
| Day 13 | 4 | 2 | 6 | 9 | 0 | 0 | 8 | 5 | 3 | 1 | 5 | 1 | 8 | 7 | 6 | 8 | 3 | 6 | 3 | 5 | 1 |
| Day 14 | 3 | 1 | 2 | 4 | 3 | 1 | 6 | 2 | 9 | 5 | 2 | 4 | 0 | 6 | 1 | 9 | 0 | 7 | 9 | 4 | 3 |
| Day 15 | 2 | 0 | 1 | 5 | 8 | 5 | 8 | 1 | 3 | 2 | 2 | 7 | 8 | 2 | 2 | 1 | 2 | 9 | 2 | 5 | 1 |
| Day 16 | 9 | 9 | 6 | 0 | 6 | 3 | 3 | 2 | 6 | 8 | 3 | 9 | 0 | 5 | 7 | 8 | 8 | 3 | 8 | 6 | 3 |
| Day 17 | 8 | 3 | 0 | 0 | 1 | 5 | 3 | 7 | 0 | 9 | 6 | 4 | 1 | 2 | 5 | 0 | 1 | 8 | 7 | 1 | 1 |
| Day 18 | 7 | 1 | 2 | 6 | 4 | 3 | 0 | 0 | 7 | 5 | 6 | 2 | 9 | 2 | 8 | 0 | 3 | 1 | 9 | 1 | 2 |
| Day 19 | 5 | 6 | 5 | 9 | 8 | 4 | 3 | 0 | 6 | 7 | 4 | 9 | 4 | 2 | 0 | 6 | 1 | 0 | 4 | 1 | 2 |
| Day 20 | 0 | 5 | 5 | 9 | 9 | 4 | 3 | 4 | 1 | 6 | 9 | 2 | 4 | 3 | 1 | 8 | 6 | 8 | 0 | 2 | 3 |
| Day 21 | 4 | 1 | 0 | 1 | 5 | 1 | 6 | 4 | 8 | 5 | 2 | 1 | 5 | 8 | 6 | 2 | 0 | 5 | 2 | 6 | 0 |
| Day 22 | 8 | 5 | 2 | 0 | 3 | 5 | 0 | 9 | 0 | 4 | 2 | 8 | 1 | 1 | 5 | 7 | 1 | 4 | 7 | 5 | 1 |
| Day 23 | 1 | 0 | 8 | 5 | 4 | 7 | 5 | 2 | 8 | 7 | 2 | 6 | 4 | 4 | 3 | 5 | 6 | 5 | 5 | 7 | 0 |
| Day 24 | 9 | 5 | 7 | 9 | 6 | 3 | 4 | 7 | 7 | 2 | 5 | 2 | 0 | 0 | 9 | 1 | 9 | 5 | 2 | 8 | 4 |
| Day 25 | 6 | 0 | 9 | 4 | 8 | 3 | 4 | 8 | 0 | 8 | 8 | 7 | 1 | 0 | 7 | 3 | 4 | 7 | 5 | 1 | 1 |
+
+ + +
+
+

Each value in the last column of Table 2.2 is the count of 9s in that row and, therefore, the result from our simulation of one day.

+

We can estimate how often three or more ambulances would break down by looking for values of three or greater in the last column. We find there are 6 rows with three or more in the last column. Finally we divide this number of rows by the number of trials (25) to get an estimate of the proportion of days with three or more breakdowns. The result is 0.24.
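If you would like to check this arithmetic with code — we introduce the code properly in the next section — here is a minimal sketch that does the same count, using the values from the “#9” column of Table 2.2 typed in by hand:

import numpy as np

# The counts of nines from the last ("#9") column of Table 2.2.
counts = np.array([2, 2, 3, 1, 2, 4, 1, 2, 2, 2, 1, 1, 1,
                   3, 1, 3, 1, 2, 2, 3, 0, 1, 0, 4, 1])
# How many of the 25 simulated days had three or more nines?
n_days = np.sum(counts >= 3)
# Divide by the number of simulated days to get the proportion.
n_days / 25

0.24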

+
+
+

2.3 Solving the problem using Python

+

Here we rush ahead to show you how to do this simulation in Python.

+

We go through the Python code for the simulation, but we don’t expect you to understand all of it right now. The rest of this book goes into more detail on reading and writing Python code, and how you can use Python to build your own simulations. Here we just want to show you what this code looks like, to give you an idea of where we are headed.

+

While you can run the code below on your own computer, for now we only need you to read it and follow along; the text explains what each line of code does.

+
+
+
+ +
+
+Coming back to the example +
+
+
+

If you are interested, you can come back to this example later, and run it for yourself. To do this, we recommend you read Chapter 4 that explains how to execute notebooks online or on your own computer.

+
+
+
+

Start of ambulances notebook

+ + +

The first thing to say about the code you will see below is there are some lines that do not do anything; these are the lines beginning with a # character (read # as “hash”). Lines beginning with # are called comments. When Python sees a # at the start of a line, it ignores everything else on that line, and skips to the next. Here’s an example of a comment:

+
+
# Python will completely ignore this text.
+
+

Because Python ignores lines beginning with #, the text after the # is just for us, the humans reading the code. The person writing the code will often use comments to explain what the code is doing.

+

Our next task is to use Python to simulate a single day of ambulances. We will again represent each ambulance by a random number from 0 through 9. Twenty of these numbers represent a simulation of all 20 ambulances available to the contractor. We call a simulation of all ambulances for a specific day one trial.

+
+

Before we begin our first trial, we need to load some helpful routines from the NumPy software library. NumPy is a Python library that has many important functions for creating and working with numerical data. We will use routines from NumPy in almost all our examples.

+
+
# Get the Numpy library, and call it "np" for short.
+import numpy as np
+
+

We also need to ask NumPy for an object that can generate random numbers. Such an object is known as a “random number generator”.

+
+
# Ask NumPy for a random number generator.
+# Name it `rnd` — short for "random"
+rnd = np.random.default_rng()
+
+
+
+
+ +
+
+NumPy’s Random Number Generator +
+
+
+

Here are some examples of the random operations we can perform with NumPy:

+
    +
1. Make a random choice between three words:

   rnd.choice(['apple', 'orange', 'banana'])

   'orange'

2. Make five random choices of three words, using the “size=” argument:

   rnd.choice(['apple', 'orange', 'banana'], size=5)

   array(['orange', 'orange', 'orange', 'banana', 'banana'], dtype='<U6')

3. Shuffle a list of numbers:

   rnd.permutation([1, 2, 3, 4, 5])

   array([3, 5, 4, 2, 1])

4. Generate five random numbers from 1 through 10:

   rnd.integers(1, 11, size=5)

   array([9, 3, 2, 9, 3])
+
+
+
+

Recall that we want twenty 10-sided dice — one per ambulance. Our dice should be 10-sided, because each ambulance has a 1-in-10 chance of being out of order.

+

The program to simulate one trial of the ambulances problem therefore begins with these commands:

+
+
# Ask NumPy to generate 20 numbers from 0 through 9.
+
+# These are the numbers we will ask NumPy to select from.
+# We store the numbers together in an *array*.
+numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+# Get 20 (size=20) values from the *numbers* list.
+# Store the 20 numbers with the name "a"
+a = rnd.choice(numbers, size=20)
+
+# The result is a sequence (array) of 20 numbers.
+a
+
+
array([6, 6, 5, 0, 5, 2, 7, 4, 4, 6, 3, 9, 5, 2, 5, 8, 1, 2, 5, 4])
+
+
+

The commands above ask the computer to store the results of the random drawing in a location in the computer’s memory to which we give a name such as “a” or “ambulances” or “aardvark” — the name is up to us.
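For example, we could have used the name ambulances instead of a, and the code would behave in exactly the same way (this variant is just for illustration; the rest of the chapter keeps the name a):

# The same random drawing, but storing the result with another name.
ambulances = rnd.choice(numbers, size=20)
# Show the 20 values stored under the name "ambulances".
ambulances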

+

Next, we need to count the number of defective ambulances:

+
+
# Count the number of nines in the random numbers.
+# The "a == 9" part identifies all the numbers equal to 9.
+# The "sum" part counts how many numbers "a == 9" found.
+b = np.sum(a == 9)
+# Show the result
+b
+
+
1
+
+
+
+
+
+ +
+
+Counting sequence elements +
+
+
+

We see that the code uses:

+
+
np.sum(a == 9)
+
+
1
+
+
+

What exactly happens here under the hood? First, a == 9 creates a sequence of values that contains only True or False values, depending on whether each element is equal to 9 or not.

+

Then, we ask Python to add up the values (sum). Python counts True as 1, and False as 0; thus we can use sum to count the number of True values.

+

This comes down to asking “how many elements in a are equal to 9”.
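To make this concrete, here is a tiny sketch with a made-up array (not the a from our trial above):

import numpy as np

# A small example array, just for illustration.
example = np.array([9, 3, 9, 0])
# Compare each element to 9, giving True or False for each element.
example == 9

array([ True, False,  True, False])

# True counts as 1 and False as 0, so the sum is the count of 9s.
np.sum(example == 9)

2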

+

Don’t worry, we will go over this again in the next chapter.

+
+
+

The sum command is a counting operation. It asks the computer to count the number of 9s among the twenty numbers that are in location a following the random draw carried out by the rnd.choice operation. The result of the sum operation will be somewhere between 0 and 20, the number of simulated ambulances that were out-of-order on a given simulated day. The result is then placed in another location in the computer’s memory that we label b.

+

Above you see that we have worked out how to tell the computer to do a single trial — one simulated day.

+
+

2.3.1 Repeating trials

+

We could run the code above for one trial over and over, and write down the result on a piece of paper. If we did this 100 times we would have 100 counts of the number of simulated ambulances that had broken down for each simulated day. To answer our question, we will then count the number of times the count was more than three, and divide by 100, to get an estimate of the proportion of days with more than three out-of-order ambulances.

+

One of the great things about the computer is that it is very good at repeating tasks many times, so we do not have to. Our next task is to ask the computer to repeat the single trial many times — say 1000 times — and count up the results for us.

+

Of course Python is very good at repeating things, but the instructions to tell Python to repeat things will take a little while to get used to. Soon, we will spend some time going over it in more detail. For now though, we show you what it looks like, and ask you to take our word for it.

+

The standard way to repeat steps in Python is a for loop. For example, let us say we wanted to display (print) “Hello” five times. Here is how we would do that with a for loop:

+
+
# Read the next line as "repeat the following steps five times".
+for i in np.arange(0, 5):
+    # The indented stuff is the code we repeat five times.
+    # Print "Hello" to the screen.
+    print("Hello")
+
+
Hello
+Hello
+Hello
+Hello
+Hello
+
+
+

You can probably see where we are going here. We are going to put the code for one trial inside a for loop, to repeat that trial code many times.

+

Our next job is to store the results of each trial. If we are going to run 1000 trials, we need to store 1000 results.

+

To do this, we start with a sequence of 1000 zeros, that we will fill in later, like this:

+
+
# Ask NumPy to make a sequence of 1000 zeros that we will use
+# to store the results of our 1000 trials.
+# Call this sequence "z"
+z = np.zeros(1000)
+
+

For now, z contains 1000 zeros, but we will soon use a for loop to execute 1000 trials. For each trial we will calculate our result (the number of broken-down ambulances), and we will store the result in the sequence z. We end up with 1000 trial results stored in z.
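If it helps to see what “storing at a position” looks like, here is a minimal sketch with a much shorter sequence (the name small is just for this illustration):

import numpy as np

# A sequence of three zeros.
small = np.zeros(3)
# Store 7 at the first position, and 2 at the second position.
small[0] = 7
small[1] = 2
# Show the result — the first two zeros have been replaced.
small

array([7., 2., 0.])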

+

With these parts, we are now ready to solve the ambulance problem, using Python.

+
+
+

2.3.2 The solution

+

This is our big moment! Here we will combine the elements shown above to perform our ambulance simulation over, say, 1000 days. Just a quick reminder: we do not expect you to understand all the detail of the code below; we will cover that later. For now, see if you can follow along with the gist of it.

+

To solve resampling problems, we typically proceed as we have done above. We figure out the structure of a single trial and then place that trial in a for loop that executes it multiple times (once for each day, in our case).

+

Now, let us apply this procedure to our ambulance problem. We simulate 1000 days. You will see that we have just taken the parts above, and put them together. The only new part here, is the step at the end, where we store the result of the trial. Bear with us for that; we will come to it soon.

+
+
# Ask NumPy to make a sequence of 1000 zeros that we will use
+# to store the results of our 1000 trials.
+# Call this sequence "z"
+z = np.zeros(1000)
+
+# These are the numbers we will ask NumPy to select from.
+numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
+
+# Read the next line as "repeat the following steps 1000 times".
+for i in np.arange(0, 1000):
+    # The indented stuff is the code we repeat 1000 times.
+
+    # Get 20 (size=20) values from the *numbers* list.
+    # Store the 20 numbers with the name "a"
+    a = rnd.choice(numbers, size=20)
+
+    # Count the number of nines in the random numbers.
+    # The "a == 9" part identifies all the numbers equal to 9.
+    # The "sum" part counts how many numbers "a == 9" found.
+    b = np.sum(a == 9)
+
+    # Store the result from this trial in the sequence "z"
+    z[i] = b
+
+    # Now go back and repeat the trial, until done.
+
+

The z[i] = b statement that follows the sum counting operation simply keeps track of the results of each trial, placing the number of defective ambulances for each trial inside the sequence called z. The sequence has 1000 positions: one for each trial.

+

When we have run the code above, we have stored 1000 trial results in the sequence z. These are 1000 counts of out-of-order ambulances, one for each of our simulated days. Our last task is to calculate the proportion of these days for which we had more than three broken-down ambulances.

+

Since our aim is to count the number of days in which more than 3 (4 or more) defective ambulances occur, we use another counting sum command at the end of the 1000 trials. This command counts how many times more than 3 defects occurred in the 1000 days recorded in our z sequence, and we place the result in another location, k. This gives us the total number of days where 4 or more defective ambulances are seen to occur. Then we divide the number in k by 1000, the number of trials. Thus we obtain an estimate of the chance, expressed as a probability between 0 and 1, that 4 or more ambulances will be defective on a given day. And we store that result in a location that we call kk, which Python subsequently prints to the screen.

+
+
# How many trials resulted in more than 3 ambulances out of order?
+k = np.sum(z > 3)
+
+# Convert to a proportion.
+kk = k / 1000
+
+# Print the result.
+print(kk)
+
+
0.13
+
+
+

This is the estimate we wanted; the proportion of days where more than three ambulances were out of action.

+

We have crept up on the solution, so it might not be clear to you how few steps you needed to do this task. Here is the whole solution to the problem, without the comments:

+
+
import numpy as np
+rnd = np.random.default_rng()
+
+z = np.zeros(1000)
+numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
+
+for i in np.arange(0, 1000):
+    a = rnd.choice(numbers, size=20)
+    b = np.sum(a == 9)
+    z[i] = b
+
+k = np.sum(z > 3)
+kk = k / 1000
+print(kk)
+
+
0.124
+
+
+

End of ambulances notebook

+
+
+

Notice that the code above is exactly the same as the code we built up in steps. But notice too, that the answer we got from this code was slightly different from the answer we got first.

+

Why did we get a different answer from the same code?

+
+
+
+ +
+
+Randomness in estimates +
+
+
+

This is an essential point — our code uses random numbers to get an estimate of the quantity we want — in this case, the probability of more than three ambulances being out of order. Every run of our code will use a different set of random numbers. Therefore, every run of our code will give us a very slightly different number. As you will soon see, we can make our estimate more and more accurate, and less and less different between each run, by doing many trials in each run. Here we did 1000 trials, but we will usually do 10000 trials, to give us a good estimate that does not vary much from run to run.
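If you would like to see this variation for yourself, here is a sketch that wraps the simulation in a small function of our own (called estimate here — not something used elsewhere in the book) and runs it a few times with different numbers of trials:

import numpy as np

rnd = np.random.default_rng()
numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

def estimate(n_trials):
    # Run the ambulances simulation for n_trials simulated days, and
    # return the proportion of days with more than 3 ambulances out of order.
    z = np.zeros(n_trials)
    for i in np.arange(0, n_trials):
        a = rnd.choice(numbers, size=20)
        z[i] = np.sum(a == 9)
    return np.sum(z > 3) / n_trials

# Two runs of 1000 trials will usually differ a little from each other.
print(estimate(1000), estimate(1000))
# Two runs of 10000 trials will usually be much closer to each other.
print(estimate(10000), estimate(10000))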

+
+
+

Don’t worry about the detail of how each of these commands works — we will cover those details gradually, over the next few chapters. But, we hope that you can see, in principle, how each of the operations that the computer carries out is analogous to the operations that you yourself executed when you solved this problem using the equivalent of a ten-sided die. This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with.

+

While writing programs like these takes a bit of getting used to, it is vastly simpler than the older, more conventional approaches to such problems routinely taught to students.

+
+
+

2.4 How resampling differs from the conventional approach

+

In the standard approach the student learns to choose and solve a formula. Doing the algebra and arithmetic is quick and easy. The difficulty is in choosing the correct formula. Unless you are a professional mathematician, it may take you quite a while to arrive at the correct formula — considerable hard thinking, and perhaps some digging in textbooks. More important than the labor, however, is that you may come up with the wrong formula, and hence obtain the wrong answer. And how would you know if you were wrong?

+

Most students who have had a standard course in probability and statistics are quick to tell you that it is not easy to find the correct formula, even immediately after finishing a course (or several courses) on the subject. After leaving school or university, it is harder still to choose the right formula. Even many people who have taught statistics at the university level (including this writer) must look at a book to get the correct formula for a problem as simple as the ambulances, and then we are often still not sure we have the right answer. This is the grave disadvantage of the standard approach.

+

In the past few decades, resampling and other Monte Carlo simulation methods have come to be used extensively in scientific research. But in contrast to the material in this book, simulation has mostly been used in situations so complex that mathematical methods have not yet been developed to handle them. Here are examples of such situations:

+ +
    +
1. For a flight to Mars, calculating the correct route involves a great many variables, too many to solve with formulas. Hence, the Monte Carlo simulation method is used.

2. The Navy might want to know how long the average ship will have to wait for dock facilities. The time of completion varies from ship to ship, and the number of ships waiting in line for dock work varies over time. This problem can be handled quite easily with the experimental simulation method, but formal mathematical analysis would be difficult or impossible.

3. What are the best tactics in baseball? Should one bunt? Should one put the best hitter up first, or later? By trying out various tactics with dice or random numbers, Earnshaw Cook (in his book Percentage Baseball), found that it is best never to bunt, and the highest-average hitter should be put up first, in contrast to usual practice. Finding this answer would have been much more difficult with the analytic method.

4. Which search pattern will yield the best results for a ship searching for a school of fish? Trying out “models” of various search patterns with simulation can provide a fast answer.

5. What strategy in the game of Monopoly will be most likely to win? The simulation method systematically plays many games (with a computer) testing various strategies to find the best one.
+

But those five examples are all complex problems. This book and its earlier editions break new ground by using this method for simple rather than complex problems, especially in statistics rather than pure probability, and in teaching beginning rather than advanced students to solve problems this way. (Here it is necessary to emphasize that the resampling method is used to solve the problems themselves rather than as a demonstration device to teach the notions found in the standard conventional approach. Simulation has been used in elementary courses in the past, but only to demonstrate the operation of the analytical mathematical ideas. That is very different than using the resampling approach to solve statistics problems themselves, as is done here.)

+

Once we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics. Then we can get down to the business of learning how to do that clear statistical thinking, and putting it to work for you. The study of probability is purely mathematics (though not necessarily formulas) and technique. But statistics has to do with meaning. For example, what is the meaning of data showing an association just discovered between a type of behavior and a disease? Of differences in the pay of men and women in your firm? Issues of causation, acceptability of control, and design of experiments cannot be reduced to technique. This is “philosophy” in the fullest sense. Probability and statistics calculations are just one input. Resampling simulation enables us to get past issues of mathematical technique and focus on the crucial statistical elements of statistical problems.

+

We hope you will find, as you read through the chapters, that the resampling way of thinking is a good way to think about the more traditional statistical methods that some of you may already know. Our approach will be to use resampling to understand the ideas, and then apply this understanding to reason about traditional methods. You may also find that the resampling methods are not only easier to understand — they are often more useful, because they are so general in their application.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/resampling_with_code.html b/python-book/resampling_with_code.html new file mode 100644 index 00000000..bdd1ae36 --- /dev/null +++ b/python-book/resampling_with_code.html @@ -0,0 +1,1478 @@ + + + + + + + + + +Resampling statistics - 5  Resampling with code + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

5  Resampling with code

+
+ + + +
+ + + + +
+ + +
+ +

Chapter 2 used simulation and resampling from tables of random numbers, dice, and coins. Making random choices in this way can make it easier to understand the process, but of course, physical methods of making random outcomes can be slow and boring.

+

We saw that short computer programs can do a huge number of resampling trials in less than a second. The flexibility of a programming language makes it possible to simulate many different outcomes and tests.

+

Programs can build up tables of random numbers, and do basic tasks like counting the number of values in a row or taking proportions. With these simple tools, we can simulate many problems in probability and statistics.

+

In this chapter, we will model another problem using Python, but this chapter will add three new things.

+
    +
  • The problem we will work on is a little different from the ambulances problem from Chapter 2. It is a real problem about deciding whether a new cancer treatment is better than the alternatives, and it introduces the idea of making a model of the world, to ask questions about chances and probabilities.

  • +
  • We will slow down a little to emphasize the steps in solving this kind of problem. First we work out how to simulate a single trial. Then we work out how to run many simulated trials.

  • +
  • We sprinted through the code in Chapter 2, with the promise we would come back to the details. Here we go into more detail about some ideas from the code in the last chapter. These are:

    +
      +
    • Storing several values together in one place, with arrays.
    • +
    • Using functions (code recipes) to apply procedures.
    • +
    • Comparing numbers to other numbers.
    • +
    • Counting numbers that match a condition.
    • +
  • +
+

In the next chapter, we will talk more about using arrays to store results, and for loops to repeat a procedure many times.

+
+

5.1 Statistics and probability

+

We have already emphasized that statistics is a way of drawing conclusions about data from the real world, in the presence of random variation; probability is the way of reasoning about random variation. This chapter introduces our first statistical problem, where we use probability to draw conclusions about some important data — about a potential cure for a type of cancer. We will not make much of the distinction between probability and statistics here, but we will come back to it several times in later chapters.

+
+
+

5.2 A new treatment for Burkitt lymphoma

+

Burkitt lymphoma is an unusual cancer of the lymphatic system. The lymphatic system is a vein-like network throughout the body that is involved in the immune reaction to disease. In developed countries, with standard treatment, the cure rate for Burkitt lymphoma is about 90%.

+

In 2006, researchers at the US National Cancer Institute (NCI), tested a new treatment for Burkitt lymphoma (Dunleavy et al. 2006). They gave the new treatment to 17 patients, and found that all 17 patients were doing well after two years or more of follow up. By “doing well”, we mean that their lymphoma had not progressed; as a short-hand, we will say that these patients were “cured”, but of course, we do not know what happened to them after this follow up.

+

Here is where we put on our statistical hat and ask ourselves the following question — how surprised are we that the NCI researchers saw their result of 17 out of 17 patients cured?

+

At this stage you might and should ask, what could we possibly mean by “surprised”? That is a good and important question, and we will discuss that much more in the chapters to come. For now, please bear with us as we do a thought experiment.

+

Let us forget the 17 out of 17 result of the NCI study for a moment. Imagine that there is another hospital, called Saint Hypothetical General, just down the road from the NCI, that was also treating 17 patients with Burkitt lymphoma. Saint Hypothetical were not using the NCI treatment, they were using the standard treatment.

+

We already know that each patient given the standard treatment has a 90% chance of cure. Given that 90% cure rate, what is the chance that 17 out of 17 of the Hypothetical group will be cured?

+

You may notice that this question about the Hypothetical group is similar to the problem of the 20 ambulances in Chapter 2. In that problem, we were interested to know how likely it was that 3 or more of 20 ambulances would be out of action on any one day, given that each ambulance had a 10% chance of being out of action. Here we would like to know the chances that all 17 patients would be cured, given that each patient has a 90% chance of being cured.

+
+
+

5.3 A physical model of the hypothetical hospital

+

As in the ambulance example, we could make a physical model of chance in this world. For example, to simulate whether a given patient is cured or not by a 90% effective treatment, we could throw a ten-sided die and record the result. We could say, arbitrarily, that a result of 0 means “not cured”, and all the numbers 1 through 9 mean “cured” (typical 10-sided dice have sides numbered 0 through 9).

+

We could roll 17 dice to simulate one “trial” in this random world. For each trial, we record the number of dice that show numbers 1 through 9 (and not 0). This will be a number between 0 and 17, and it is the number of patients “cured” in our simulated trial.

+

Figure 5.1 is the result of one such trial we did with a set of 17 10-sided dice we happened to have to hand:

+
+
+

+
Figure 5.1: One roll of 17 10-sided dice
+
+
+

The trial in Figure 5.1 shows four dice with the 0 face uppermost, and the rest with numbers from 1 through 9. Therefore, there were 13 out of 17 not-zero numbers, meaning that 13 out of 17 simulated “patients” were “cured” in this simulated trial.

+ +

We could repeat this simulated trial procedure 100 times, and we would then have 100 counts of the not-zero numbers. Each of the 100 counts would be the number of patients cured in that trial. We can ask how many of these 100 counts were equal to 17. This will give us an estimate of the probability we would see 17 out of 17 patients cured, given that any one patient has a 90% chance of cure. For example, say we saw 15 out of 100 counts were equal to 17. That would give us an estimate of 15 / 100 or 0.15 or 15%, for the probability we would see 17 out of 17 patients cured.

+

So, if Saint Hypothetical General did see 17 out of 17 patients cured with the standard treatment, they would be a little surprised, because they would only expect to see that happen 15% of the time. But they would not be very surprised — 15% of the time is uncommon, but not very uncommon.

+
+
+

5.4 A trial, a run, a count and a proportion

+

Here we stop to emphasize the steps in the process of a random simulation.

+
    +
1. We decide what we mean by one trial. Here one trial has the same meaning in medicine as in resampling — we mean the result of treating 17 patients. One simulated trial is then the simulation of one set of outcomes from 17 patients.
2. Work out the outcome of interest from the trial. The outcome here is the number of patients cured.
3. We work out a way to simulate one trial. Here we chose to throw 17 10-sided dice, and count the number of not-zero values. This is the outcome from one simulation trial.
4. We repeat the simulated trial procedure many times, and collect the results from each trial. Say we repeat the trial procedure 100 times; we will call this a run of 100 trials.
5. We count the number of trials with an outcome that matches the outcome we are interested in. In this case we are interested in the outcome 17 out of 17 cured, so we count the number of trials with a score of 17. Say 15 out of the run of 100 trials had an outcome of 17 cured. That is our count.
6. Finally we divide the count by the number of trials to get the proportion. From the example above, we divide 15 by 100 to get 0.15 (15%). This is our estimate of the chance of seeing 17 out of 17 patients cured in any one trial. We can also call this an estimate of the probability that 17 out of 17 patients will be cured on any one trial.
+

Our next step is to work out the code for step 3: simulate one trial.

+
+
+

5.5 Simulate one trial with code

+

We can use the computer to do something very similar to rolling 17 10-sided dice, by asking the computer for 17 random whole numbers from 0 through 9.

+
+
+
+ +
+
+Whole numbers +
+
+
+

A whole number is a number that is not negative, and does not have a fractional part (does not have anything after a decimal point). 0 and 1 and 2 and 3 are whole numbers, but -1 and \(\frac{3}{5}\) and 11.3 are not. The whole numbers from 0 through 9 are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.

+
+
+

We have already discussed what we mean by random in Section 2.2.

+
+

We will be asking the computer to generate many random numbers. So, before we start, we again import NumPy and get its random number generator:

+
+
import numpy as np
+
+# Ask for NumPy's default random number generator and name
+# it `rnd`.  `rnd` is short for "random".
+rnd = np.random.default_rng()
+
+
+
+
+

5.6 From numbers to arrays

+

We next need to prepare the sequence of numbers that we want NumPy to select from.

+

We have already seen the idea that Python has values that are individual numbers. Remember, a variable is a named value. Here we attach the name a to the value 1.

+
+
a = 1
+# Show the value of "a"
+a
+
+
1
+
+
+

NumPy also allows values that are sequences of numbers. NumPy calls these sequences arrays.

+

Here we make an array that contains the 10 numbers we will select from:

+
+
# Make an array of numbers, store with the name "some_numbers".
+some_numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+# Show the value of "some_numbers"
+some_numbers
+
+
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+
+

Notice that the value for some_numbers is an array, and that this value contains 10 numbers.

+

Put another way, some_numbers is now the name we can use for this collection of 10 values.

+

Arrays are very useful for simulations and data analysis, and we will be using these for nearly every example in this book.

+
+
+

5.7 Functions

+

Functions are another tool that we will be using everywhere, and that you have seen already, although we have not introduced them until now.

+

You can think of functions as named production lines.

+

For example, consider the Python function np.round.

+
+ +
+
+
# We load the Numpy library so we have access to the Numpy functions.
+import numpy as np
+
+

np.round is the name for a simple production line, that takes in a number, and (by default) sends back the number rounded to the nearest integer.

+
+
+
+ +
+
+What is an integer? +
+
+
+

An integer is a positive or negative whole number.

+

In other words, a number is an integer if the number is either a whole number (0, 1, 2 …), or a negative whole number (-1, -2, -3 …). All of -208, -2, 0, 10, 105 are integers, but \(\frac{3}{5}\), -10.3 and 0.2 are not.

+

We will use the term integer fairly often, because it is a convenient way to name all the positive and negative whole numbers.

+
+
+

Think of a function as a named production line. We send the function (production line) raw material (components) to work on. The production line does some work on the components. A finished result comes off the other end.

+

Therefore, think of np.round as the name of a production line, that takes in a component (in this case, any number), and does some work, and sends back the finished result (in this case, the number rounded to the nearest integer).

+

The components we send to a function are called arguments. The finished result the function sends back is the return value.

+
    +
  • Arguments : the value or values we send to a function.
  • +
  • Return value : the values the function sends back.
  • +
+

See Figure 5.2 for an illustration of np.round as a production line.

+
+
+
+
+

+
Figure 5.2: The round function as a production line
+
+
+
+
+

In the next few code cells, you see examples where np.round takes in a not-integer number, as an argument, and sends back the nearest integer as the return value:

+
+
# Put in 3.2, round sends back 3.
+np.round(3.2)
+
+
3.0
+
+
+
+
# Put in -2.7, round sends back -3.
+np.round(-2.7)
+
+
-3.0
+
+
+

Like many functions, np.round can take more than one argument (component). You can send np.round the number of digits you want to round to, after the number you want it to work on, like this (see Figure 5.3):

+
+
# Put in 3.1415, and the number of digits to round to (2).
+# round sends back 3.14
+np.round(3.1415, 2)
+
+
3.14
+
+
+
+
+
+
+

+
Figure 5.3: round with optional arguments specifying number of digits
+
+
+
+
+

Notice that the second argument — here 2 — is optional. We only have to send round one argument: the number we want it to round. But we can optionally send it a second argument — the number of decimal places we want it to round to. If we don’t specify the second argument, then round assumes we want to round to 0 decimal places, and therefore, to the nearest integer.

+
+
+

5.8 Functions and named arguments

+

In the example above, we sent round two arguments. round knows that we mean the first argument to be the number we want to round, and the second argument is the number of decimal places we want to round to. It knows which is which by the position of the arguments — the first argument is the number it should round, and the second is the number of digits.

+

In fact, internally, the round function also gives these arguments names. It calls the number it should round — a — and the number of digits it should round to — decimals. This is useful, because it is often clearer and simpler to identify the argument we are specifying with its name, instead of just relying on its position.

+

If we aren’t using the argument names, we call the round function as we did above:

+
+
# Put in 3.1415, and the number of digits to round to (2).
+# round sends back 3.14
+np.round(3.1415, 2)
+
+
3.14
+
+
+

In this call, we relied on the fact that we, the people writing the code, and you, the person reading the code, remember that the second argument (2) means the number of decimal places it should round to. But, we can also specify the argument using its name, like this (see Figure 5.5):

+
+
# Put in 3.1415, and the number of digits to round to (2).
+# Use the name of the number-of-decimals argument for clarity:
+np.round(3.1415, decimals=2)
+
+
3.14
+
+
+
+
+
+
+

+
Figure 5.4: The round function with argument names
+
+
+
+
+
+
+
+
+

+
Figure 5.5: The np.round function with argument names
+
+
+
+
+

Here Python sees the first argument, as before, and assumes that it is the number we want to round. Then it sees the second, named argument — decimals=2 — and knows, from the name, that we mean this to be the number of decimals to round to.

+

In fact, we could even specify both arguments by name, like this:

+
+
# Put in 3.1415, and the number of digits to round to (2).
+np.round(a=3.1415, decimals=2)
+
+
3.14
+
+
+

We don’t usually name both arguments for round, as we have above, because it is so obvious that the first argument is the thing we want to round, and so naming the argument does not make it any more clear what the code is doing. But — as so often in programming — whether to use the names, or let Python work out which argument is which by position, is a judgment call. The judgment you are making is about the way to write the code to be most clear for your reader, where your most important reader may be you, coming back to the code in a week or a year.

+
+
+
+ +
+
+How do you know what names to use for the function arguments? +
+
+
+

You can find the names of the function arguments in the help for the function, either online, or in the notebook interface. For example, to get the help for np.round, including the argument names, you could make a new cell, and type np.round?, then execute the cell by pressing Shift-Enter. This will show the help for the function in the notebook interface.

+
+
+
+
+

5.9 Ranges

+

Now let us return to the variable some_numbers that we created above:

+
+
# Make an array of numbers, store with the name "some_numbers".
+some_numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+# Show the value of "some_numbers"
+some_numbers
+
+
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+
+

In fact, we often need to do this: generate a sequence or range of integers, such as 0 through 9.

+
+
+
+ +
+
+Pick a number from 1 through 5 +
+
+
+

Ranges can be confusing in normal speech because it is not always clear whether they include their beginning and end. For example, if someone says “pick a number between 1 and 5”, do they mean all the numbers, including the first and last (any of 1 or 2 or 3 or 4 or 5)? Or do they mean only the numbers that are between 1 and 5 (so 2 or 3 or 4)? Or do they mean all the numbers up to, but not including 5 (so 1 or 2 or 3 or 4)?

+

To avoid this confusion, we will nearly always use “from” and “through” in ranges, meaning that we do include both the start and the end number. For example, if we say “pick a number from 1 through 5” we mean one of 1 or 2 or 3 or 4 or 5.

+
+
+

Creating ranges of numbers is so common that NumPy has a standard function, np.arange, to do that.

+
+
# An array containing all the numbers from 0 through 9.
+some_numbers = np.arange(0, 10)
+some_numbers
+
+
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+
+
+

Notice that we send np.arange the arguments 0 and 10. The first argument, here 0, is the start value. The second argument, here 10, is the stop value. Numpy (in the arange function) understands this to mean: start at 0 (the start value) and go up to but do not include 10 (the stop value).

+

You can therefore read np.arange(0, 10) as “the sequence of integers starting at 0, up to, but not including 10”.

+

Like np.round, the arguments to np.arange also have names, so, we could also write:

+
+
# An array containing all the numbers from 0 through 9.
+# Now using named arguments.
+some_numbers = np.arange(start=0, stop=10)
+some_numbers
+
+
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+
+

So far, we have sent arange two arguments, but we can also send just one argument, like this:

+
+
# An array containing all the numbers from 0 through 9.
+some_integers = np.arange(10)
+some_integers
+
+
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+
+

When we sent arange a single argument, like this, arange understands this to mean we have sent just the stop value, and that it should assume a start value of 0.

+

Again, if we wanted, we could send this argument by name:

+
+
# An array containing all the numbers from 0 through 9.
+# Specify the stop value by explicit name, for clarity.
+some_integers = np.arange(stop=10)
+some_integers
+
+
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+
+
+

Here are some more examples of np.arange:

+
+
# All the integers starting at 10, up to, but not including 15.
+# In other words, 10 through 14.
+np.arange(10, 15)
+
+
array([10, 11, 12, 13, 14])
+
+
+
+
# Here we are only sending one value (7). np.arange understands this to be
+# the stop value, and assumes 0 as the start value.
+# In other words, 0 through 6
+np.arange(7)
+
+
array([0, 1, 2, 3, 4, 5, 6])
+
+
+
+
+

5.10 range in Python

+

So far you have seen ranges of integers using np.arange. The np. prefix refers to the fact that np.arange is a function from the Numpy module (library). The a in arange signals that the result np.arange returns is an array:

+
+
arr = np.arange(7)
+# Show the result
+arr
+
+
array([0, 1, 2, 3, 4, 5, 6])
+
+
+
+
# Show what type of thing this is.
+type(arr)
+
+
<class 'numpy.ndarray'>
+
+
+

We do often use np.arange to get a range of integers in a convenient array format, but Python has another way of getting a range of integers — the range function.

+

The range function is very similar to np.arange, but it is not part of Numpy — it is a basic function in Python — and it does not return an array of numbers, it returns something else. Here we ask for a range from 0 through 6 (0 up to, but not including 7):

+
+
# Notice no `np.` before `range`.
+r = range(7)
+r
+
+
range(0, 7)
+
+
+

Notice that the thing that came back is something that represents or stands in for the numbers 0 through 6. It is not an array, but a specific type of thing — called a range:

+
+
type(r)
+
+
<class 'range'>
+
+
+

The range above is a container for the numbers 0 through 6. We can get the numbers out of the container in many different ways, but one of them is to convert this container to an array, using the np.array function. The np.array function takes the thing we pass it, and makes it into an array. When we apply np.array to r above, we get the numbers that r contains:

+
+
# Get the numbers from the range `r`, convert to an array.
+a_from_r = np.array(r)
+# Show the result
+a_from_r
+
+
array([0, 1, 2, 3, 4, 5, 6])
+
+
+

The range function has the same start and stop arguments that np.arange does, and with the same meaning:

+
+
# 3 up to, not including 12.
+# (3 through 11)
+r_2 = range(3, 12)
+r_2
+
+
range(3, 12)
+
+
+
+
np.array(r_2)
+
+
array([ 3,  4,  5,  6,  7,  8,  9, 10, 11])
+
+
+

You may reasonably ask — why do I need this range thing, if I have the very similar np.arange? The answer is — you don’t need range, and you can always use np.arange where you would use range, but for reasons we will go into later (Section 7.6.3), range is a good option when we want to represent a sequence of numbers as input to a for loop. We cover for loops in more detail in Section 7.6.2, but for now, the only thing to remember is that range and np.arange are both ways of expressing sequential ranges of integers.
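For example, here is a small sketch of range providing the sequence of numbers for a for loop (the print call is just to show each value as we go):

# Use range to give a for loop the numbers 0 through 2.
for i in range(3):
    print(i)

0
1
2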

+
+
+

5.11 Choosing values at random

+

We can use the rnd.choice function to select a single value at random from the sequence of numbers in some_integers.

+
+
+
+ +
+
+More on rnd.choice +
+
+
+

The rnd.choice function will be a fundamental tool for taking many kinds of samples, and we cover it in more detail in Chapter 6.

+
+
+
+
# Select an integer from the choices in some_integers.
+my_integer = rnd.choice(some_integers)
+# Show the value that results.
+my_integer
+
+
5
+
+
+

Like np.round (above), rnd.choice is a function.

+
+
+
+
+ +
+
+Functions and methods +
+
+
+

Actually, to be precise, we should call rnd.choice a method. A method is a function attached to a value. In this case the function choice is attached to the value rnd. That’s not an important distinction for us at the moment, so please forgive our strategic imprecision, and let us continue to say that rnd.choice is a function.

+
+
+
+

As you remember, a function is a named production line. In our case, the production line has the name rnd.choice.

+

We sent rnd.choice a value to work on — an argument. In this case, the argument was the value of some_integers.

+

Figure 5.6 is a diagram illustrating an example run of the rnd.choice function (production line).

+
+
+
+
+
+

+
Figure 5.6: Example run of the rnd.choice function
+
+
+
+
+
+

Here is the same code again, with new comments.

+
+
# Send the value of "some_integers" to rnd.choice
+# some_integers is the *argument*.
+# Put the *return* value from the function into "my_number".
+my_number = rnd.choice(some_integers)
+# Show the value that results.
+my_number
+
+
4
+
+
+
+
+

5.12 Sampling into arrays

+
+

In the code above, we asked Python to select a single number at random — because that is what rnd.choice does by default.

+

In fact, the people who wrote rnd.choice, wrote it to be flexible in the work that it can do. In particular, we can tell rnd.choice to select any number of values at random, by adding a new argument to the function.

+

In our case, we would like Numpy to select 17 numbers at random from the sequence of some_integers.

+

To do this, we add an argument to the function that tells it how many numbers we want it to select.

+
+
+
# Get 17 values from the *some_integers* array.
+# Store the 17 numbers with the name "a"
+a = rnd.choice(some_integers, 17)
+# Show the result.
+a
+
+
array([4, 5, 9, 8, 2, 9, 1, 5, 8, 2, 1, 8, 2, 6, 6, 5, 0])
+
+
+

As you can see, the function sent back (returned) 17 numbers. Because it is sending back more than one number, the thing it sends back is an array, where the array has 17 elements.

+
+
+

5.13 Counting results

+

We now have the code to do the equivalent of throwing 17 10-sided dice. This is the basis for one simulated trial in the world of Saint Hypothetical General.

+

Our next job is to get the code to count the number of numbers that are not zero in the array a. That will give us the number of patients who were cured in our simulated trial.

+

Another way of asking this question, is to ask how many elements in a are greater than zero.

+
+

5.13.1 Comparison

+

To ask whether a number is greater than zero, we use comparison. Here is a greater than zero comparison on a single number:

+
+
n = 5
+# Is the value of n greater than 0?
+# Show the result of the comparison.
+n > 0
+
+
True
+
+
+

> is a comparison — it asks a question about the numbers either side of it. In this case > is asking the question “is the value of n (on the left hand side) greater than 0 (on the right hand side)?” The value of n is 5, so the question becomes, “is 5 greater than 0?” The answer is Yes, and Python represents this Yes answer as the value True.

+

In contrast, the comparison below boils down to “is 0 greater than 0?”, to which the answer is No, and Python represents this as False.

+
+
p = 0
+# Is the value of p greater than 0?
+# Show the result of the comparison.
+p > 0
+
+
False
+
+
+

So far you have seen the results of comparison on a single number. Now say we do the same comparison on an array. For example, say we ask the question “is the value of a greater than 0”? Remember, a is an array containing 17 values. We are comparing 17 values to one value (0). What answer do you think NumPy will give? You may want to think a little about this before you read on.

+

As a reminder, here is the current value for a:

+
+
# Show the current value for "a"
+a
+
+
array([4, 5, 9, 8, 2, 9, 1, 5, 8, 2, 1, 8, 2, 6, 6, 5, 0])
+
+
+

Now you have had some time to think, here is what happens:

+
+
# Is the value of "a" greater than 0
+# Show the result of the comparison.
+a > 0
+
+
array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
+        True,  True,  True,  True,  True,  True,  True, False])
+
+
+

There are 17 values in a, so the comparison to 0 means there are 17 comparisons, and 17 answers. NumPy therefore returns an array of 17 elements, containing these 17 answers. The first answer is the answer to the question “is the value of the first element of a greater than 0”, and the second is the answer to “is the value of the second element of a greater than 0”.

+

Let us store the result of this comparison to work on:

+
+
# Is the value of "a" greater than 0
+# Store as another array "q".
+q = a > 0
+# Show the value of q
+q
+
+
array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
+        True,  True,  True,  True,  True,  True,  True, False])
+
+
+
+
+
+

5.14 Counting True values with sum

+

Notice above that there is one True element in q for every element in a that was greater than 0. It only remains to count the number of True values in q, to get the count of patients in our simulated trial who were cured.

+

We can use the NumPy function np.sum to count the number of True elements in an array. As you can imagine, np.sum adds up all the elements in an array, to give a single number. This will work as we want for the q array, because Python counts False as equal to 0 and True as equal to 1:

+
+
# Question: is False equal to 0?
+# Answer - Yes! (True)
+False == 0
+
+
True
+
+
+
+
+# Question: is True equal to 1?
+# Answer - Yes! (True)
+True == 1
+
+
True
+
+
+

Therefore, the function sum, when applied to an array of True and False values, will count the number of True values in the array.

+

To see this in action we can make a new array of True and False values, and try using np.sum on the new array.

+
+
# An array containing three True values and two False values.
+trues_and_falses = np.array([True, False, True, True, False])
+# Show the new array.
+trues_and_falses
+
+
array([ True, False,  True,  True, False])
+
+
+

The sum operation adds all the elements in the array. Because True counts as 1, and False counts as 0, adding all the elements in trues_and_falses is the same as adding up the values 1 + 0 + 1 + 1 + 0, to give 3.
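For example, applying np.sum to this new array gives the count of True values:

# Add up the True and False values — True counts as 1, False as 0.
np.sum(trues_and_falses)

3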

+

We can apply the same operation on q to count the number of True values.

+
+
# Count the number of True values in "q"
+# This is the same as the number of values in "a" that are greater than 0.
+b = np.sum(q)
+# Show the result
+b
+
+
16
+
+
+
+
+

5.15 The procedure for one simulated trial

+

We now have the whole procedure for one simulated trial. We can put the whole procedure in one cell:

+
+
# Procedure for one simulated trial
+
+# Get 17 values from the *some_integers* array.
+# Store the 17 numbers with the name "a"
+a = rnd.choice(some_integers, 17)
+# Is the value of "a" greater than 0
+q = a > 0
+# Count the number of True values in "q"
+b = np.sum(q)
+# Show the result of this simulated trial.
+b
+
+
17
+
+
+
+
+

5.16 Repeating the trial

+

Now we know how to do one simulated trial, we could just keep running the cell above, and writing down the result each time. Once we had run the cell 100 times, we would have 100 counts. Then we could look at the 100 counts to see how many were equal to 17 (all 17 simulated patients cured on that trial). At least that would be much faster than rolling 17 dice 100 times, but we would also like the computer to automate the process of repeating the trial, and keeping track of the counts.

+

Please forgive us as we race ahead again, as we did in the last chapter. As in the last chapter, we will use a results array called z to store the count for each trial. As in the last chapter, we will use a for loop to repeat the trial procedure many times. As in the last chapter, we will not explain the results array or the for loop in any detail, because we are going to cover those in the next chapter.

+

Let us now imagine that we want to do 100 simulated trials at Saint Hypothetical General. This will give us 100 counts. We will want to store the count for each trial.

+

To do this, we make an array called z to hold the 100 counts. We have called the array z, but we could have called it anything we liked, such as counts or results or cecilia.

+
+
# An array to hold the 100 count values.
+# Later, we will fill this in with real count values from simulated trials.
+z = np.zeros(100)
+
+

Next we use a for loop to repeat the single trial procedure.

+

Notice that the single trial procedure, inside this for loop, is the same as the single trial procedure above — the only two differences are:

+
    +
  • The trial procedure is inside the loop, and
  • +
  • We are storing the count for each trial as we go.
  • +
+

We will go into more detail on how this works in the next chapter.

+
+
# Procedure for 100 simulated trials.
+
+# An array to store the counts for each trial.
+z = np.zeros(100)
+
+# Repeat the trial procedure 100 times.
+for i in np.arange(100):
+    # Get 17 values from the *some_integers* array.
+    # Store the 17 numbers with the name "a".
+    a = rnd.choice(some_integers, 17)
+    # Is the value of "a" greater than 0.
+    q = a > 0
+    # Count the number of True values in "q".
+    b = np.sum(q)
+    # Store the result at the next position in the "z" array.
+    z[i] = b
+    # Now go back and do the next trial until finished.
+# Show the result of all 100 trials.
+z
+
+
array([16., 15., 15., 16., 16., 12., 15., 11., 16., 13., 12., 16., 15.,
+       16., 15., 16., 14., 15., 14., 15., 15., 15., 14., 15., 17., 15.,
+       14., 15., 16., 17., 15., 17., 16., 17., 14., 16., 15., 15., 15.,
+       17., 17., 13., 16., 13., 16., 14., 14., 15., 15., 15., 14., 15.,
+       15., 15., 17., 16., 17., 14., 15., 14., 16., 16., 15., 15., 16.,
+       15., 15., 16., 17., 15., 17., 15., 10., 15., 15., 14., 14., 13.,
+       16., 14., 17., 17., 16., 14., 15., 16., 17., 14., 15., 15., 16.,
+       16., 17., 16., 13., 15., 15., 14., 17., 15.])
+
+
+

Finally, we need to count how many of the trials results we stored in z gave a “cured” count of 17.

+

We can ask the question whether a single number is equal to 17 using the double equals comparison: ==.

+
+
s = 17
+# Is the value of s equal to 17?
+# Show the result of the comparison.
+s == 17
+
+
True
+
+
+
+
+
+ +
+
+ +
+
+
+
+

5.17 Single and double equals

+

Notice that the double equals == means something entirely different to Python than the single equals =. In the code above, Python reads s = 17 to mean “Set the variable s to have the value 17”. In technical terms the single equals is called an assignment operator, because it means assign the value 17 to the variable s.

+

The code s == 17 has a completely different meaning.

+
+

It means “give True if the value in s is equal to 17, and False otherwise”. The == is a comparison operator — it is for comparing two values — here the value in s and the value 17. This comparison, like all comparisons, returns an answer that is either True or False. In our case s has the value 17, so the comparison becomes 17 == 17, meaning “is 17 equal to 17?”, to which the answer is “Yes”, and Python sends back True.

+
+
+

We can ask this question of all 100 counts by asking the question: is the array z equal to 17, like this:

+
+
# Is the value of z equal to 17?
+were_cured = z == 17
+# Show the result of the comparison.
+were_cured
+
+
array([False, False, False, False, False, False, False, False, False,
+       False, False, False, False, False, False, False, False, False,
+       False, False, False, False, False, False,  True, False, False,
+       False, False,  True, False,  True, False,  True, False, False,
+       False, False, False,  True,  True, False, False, False, False,
+       False, False, False, False, False, False, False, False, False,
+        True, False,  True, False, False, False, False, False, False,
+       False, False, False, False, False,  True, False,  True, False,
+       False, False, False, False, False, False, False, False,  True,
+        True, False, False, False, False,  True, False, False, False,
+       False, False,  True, False, False, False, False, False,  True,
+       False])
+
+
+

Finally we use sum to count the number of True values in the were_cured array, to give the number of trials where all 17 patients were cured.

+
+
# Count the number of True values in "were_cured"
+# This is the same as the number of values in "z" that are equal to 17.
+n_all_cured = np.sum(were_cured)
+# Show the result of the comparison.
+n_all_cured
+
+
15
+
+
+

n_all_cured is the number of simulated trials for which all patients were cured. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.

+
+
# Proportion of trials where all patients were cured.
+p = n_all_cured / 100
+# Show the result
+p
+
+
0.15
+
+
+

From this experiment, we see that there is roughly a one-in-six chance that all 17 patients are cured when using a 90% effective treatment.
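As a quick check on that estimate (an extra calculation of ours, not part of the simulation), we can work out the exact chance under the same assumptions, where each of the 17 patients independently has a 0.9 chance of being cured:

# Exact chance that all 17 patients are cured, assuming each patient
# independently has a 0.9 chance of cure.
0.9 ** 17

This gives about 0.167, or almost exactly one in six, in line with the estimate of 0.15 from our 100 simulated trials.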

+
+
+

5.18 What have we learned from Saint Hypothetical?

+

We started with a question about the results of the NCI trial on the new drug. The question was — was the result of their trial — 17 out of 17 patients cured — surprising?

+

Then, for reasons we did not explain in detail, we changed tack, and asked the same question about a hypothetical set of 17 patients getting the standard treatment in Saint Hypothetical General.

+

That Hypothetical question turns out to be fairly easy to answer, because we can use simulation to estimate the chances that 17 out of 17 patients would be cured in such a hypothetical trial, on the assumption that each patient has a 90% chance of being cured with the standard treatment.

+

The answer for Saint Hypothetical General was — we would be somewhat surprised, but not astonished. We only get 17 out of 17 patients cured about one time in six.

+

Now let us return to the NCI trial. Should the trial authors be surprised by their results? If they assumed that their new treatment was exactly as effective as the standard treatment, then a result like theirs would be a little unusual, just by chance. It is up to us to decide whether the result is unusual enough to make us think that the actual NCI treatment might in fact have been more effective than the standard treatment.

+

You will see this move again and again as we go through the book.

+
    +
  • We take something that really happened — in this case the 17 out of 17 patients cured.
  • +
  • Then we imagine a hypothetical world in which the results only depend on chance.
  • +
  • We do simulations in that hypothetical world to see how often we get a result like the one that happened in the real world.
  • +
  • If the real world result (17 out of 17) is an unusual, surprising result in the simulations from the hypothetical world, we take that as evidence that the real world result might not be due to chance alone.
  • +
+

We have just described the main idea in statistical inference. If that all seems strange and backwards to you, do not worry, we will go over that idea many times in this book. It is not a simple idea to grasp in one go. We hope you will find that, as you do more simulations, and think of more hypothetical worlds, the idea will start to make more sense. Later, we will start to think about asking other questions about probability and chance in the real world.

+
+
+

5.19 Conclusions

+

Can you see how each of the operations that the computer carries out is analogous to the operations that you yourself executed when you solved this problem using 10-sided dice? This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with. Either we will use a device such as coins or dice, or a random number table, as an analogy for the physical process we are interested in (patients being cured, in this case), or we will simulate the analogy on the computer using the Python program above.

+

The program above may not seem simple at first glance, but we think you will find, over the course of this book, that these programs become much simpler to understand than the older conventional approach to such problems that has routinely been taught to students for decades.

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/resampling_with_code2.html b/python-book/resampling_with_code2.html new file mode 100644 index 00000000..b93405fc --- /dev/null +++ b/python-book/resampling_with_code2.html @@ -0,0 +1,1313 @@ + + + + + + + + + +Resampling statistics - 7  More resampling with code + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

7  More resampling with code

+
+ + + +
+ + + + +
+ + +
+ +

Chapter 5 introduced a problem in probability that was also a problem in statistics. We asked how surprised we should be at the results of a trial of a new cancer treatment regime.

+

Here we study another urgent problem in the real world: racial bias and the death penalty.

+
+

7.1 A question of life and death

+

This example comes from the excellent Berkeley introduction to data science (Ani Adhikari and Wagner 2021).

+

Robert Swain was a young black man who was sentenced to death in the early 1960s. Swain’s trial was held in Talladega County, Alabama. At the time, 26% of the eligible jurors in that county were black, but every member of Swain’s jury was white. Swain and his legal team appealed to the Alabama Supreme Court, and then to the US Supreme Court, arguing that there was racial bias in the jury selection. They noted that there had been no black jurors in Talladega County since 1950, even though they made up about a quarter of the eligible pool of jurors. The US Supreme Court rejected this argument, in a 6 to 3 opinion, writing that “The overall percentage disparity has been small and reflects no studied attempt to include or exclude a specified number of Negroes.”.

+

Swain’s team presented a variety of evidence on bias in jury selection, but here we will look at the obvious and apparently surprising fact that Swain’s jury was entirely white. The Supreme Court decided that the “disparity” between selection of white and black jurors “has been small” — but how would they, and how would we, make a rational decision about whether this disparity really was “small”?

+

You might reasonably be worried about the result of this decision for Robert Swain. In fact his death sentence was invalidated by a later, unrelated decision and he served a long prison sentence instead. In 1986, the Supreme Court overturned the precedent set by Swain’s case, in Batson v. Kentucky, 476 U.S. 79.

+
+
+

7.2 A small disparity and a hypothetical world

+

To answer the question that the Supreme Court asked, we return to the method we used in the last chapter.

+

Let us imagine a hypothetical world, in which each individual black or white person had an equal chance of being selected for the jury. Call this world Hypothetical County, Alabama.

+

Just as in 1960s Talladega County, 26% of eligible jurors in Hypothetical County are black. Hypothetical County jury selection has no bias against black people, so we expect around 26% of the jury to be black. 0.26 * 12 = 3.12, so we expect that, on average, just over 3 out of 12 jurors in a Hypothetical County jury will be black. But, if we select each juror at random from the population, that means that, sometimes, by chance, we will have fewer than 3 black jurors, and sometimes we will have more than 3 black jurors. And, by chance, sometimes we will have no black jurors. But, if the jurors really are selected at random, how often would we expect this to happen — that there are no black jurors? We would like to estimate the probability that we will get no black jurors. If that probability is small, then we have some evidence that the disparity in selection between black and white jurors was not “small”.

+
+

What is the probability of an all white jury being randomly selected out of a population having 26% black people?

+
+
+
+

7.3 Designing the experiment

+

Before we start, we need to figure out three things:

+
    +
  1. What do we mean by one trial?
  2. +
  3. What is the outcome of interest from the trial?
  4. +
  5. How do we simulate one trial?
  6. +
+

We then take three steps to calculate the desired probability:

+
    +
  1. Repeat the simulated trial procedure N times.
  2. +
  3. Count M, the number of trials with an outcome that matches the outcome we are interested in.
  4. +
  5. Calculate the proportion, M/N. This is an estimate of the probability in question.
  6. +
+

For this problem, our task is made a little easier by the fact that our trial (in the resampling sense) is a simulated trial (in the legal sense). One trial requires 12 simulated jurors, each labeled by race (white or black).

+

The outcome we are interested in is the number of black jurors.

+

Now comes the harder part. How do we simulate one trial?

+
+

7.3.1 One trial

+

One trial requires 12 jurors, and we are interested only in the race of each juror. In Hypothetical County, where selection by race is entirely random, each juror has a 26% chance of being black.

+

We need a way of simulating a 26% chance.

+

One way of doing this is by getting a random number from 0 through 99 (inclusive). There are 100 numbers in the range 0 through 99 (inclusive).

+

We will arbitrarily say that the juror is white if the random number is in the range from 0 through 73. 74 of the 100 numbers are in this range, so the juror has a 74/100 = 74% chance of getting the label “white”. We will say the juror is black if the random number is in the range 74 through 99. There are 26 such numbers, so the juror has a 26% chance of getting the label “black”.

+

Next we need a way of getting a random number in the range 0 through 99. This is an easy job for the computer, but if we had to do this with a physical device, we could get a single number by throwing two 10-sided dice, say a blue die and a green die. The face of the blue die will be the 10s digit, and the green face will be the ones digit. So, if the blue die comes up with 8 and the green die has 4, then the random number is 84.
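If it helps, here is that dice arithmetic written as code (an illustration of ours; the names blue and green just stand for the two die faces in the example above):

blue = 8   # Face showing on the blue (tens) die.
green = 4  # Face showing on the green (ones) die.
# The two faces combine to give a single number from 0 through 99.
number = 10 * blue + green
number  # 84 for this throw.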

+

We could then simulate 12 jurors by repeating this process 12 times, each time writing down “white” if the number is from 0 through 73, and “black” otherwise. The trial outcome is the number of times we wrote “black” for these 12 simulated jurors.

+
+
+

7.3.2 Using code to simulate a trial

+

We use the same logic to simulate a trial with the computer. A little code makes the job easier, because we can ask Python to give us 12 random numbers from 0 through 99, and to count how many of these numbers are in the range from 74 through 99. Numbers in the range from 74 through 99 correspond to black jurors.

+
+
+

7.3.3 Random numbers from 0 through 99

+

We can now use NumPy and the random number functions from the last chapter to get 12 random numbers from 0 through 99.

+
+
# Import the Numpy library, rename as "np"
+import numpy as np
+
+# Ask NumPy for a random number generator.
+rnd = np.random.default_rng()
+
+# All the integers from 0 up to, but not including 100.
+zero_thru_99 = np.arange(100)
+
+# Get 12 random numbers from 0 through 99
+a = rnd.choice(zero_thru_99, size=12)
+
+# Show the result
+a
+
+
array([59, 43, 45, 58, 95, 89, 23, 99, 17, 51, 85, 23])
+
+
+
+

7.3.3.1 Counting the jurors

+

We use comparison and np.sum to count how many numbers are greater than 73, and therefore, in the range from 74 through 99:

+
+
# How many numbers are greater than 73 (and therefore from 74 through 99)?
+b = np.sum(a > 73)
+# Show the result
+b
+
+
4
+
+
+
+
+

7.3.3.2 A single simulated trial

+

We assemble the pieces from the last few sections to make a cell that simulates a single trial:

+
+
rnd = np.random.default_rng()
+zero_thru_99 = np.arange(100)
+
+# Get 12 random numbers from 0 through 99
+a = rnd.choice(zero_thru_99, size=12)
+
+# How many numbers are greater than 73 (and therefore from 74 through 99)?
+b = np.sum(a > 73)
+
+# Show the result
+b
+
+
4
+
+
+
+
+
+
+

7.4 Three simulation steps

+

Now we come back to the details of how we:

+
    +
  1. Repeat the simulated trial many times;
  2. +
  3. record the results for each trial;
  4. +
  5. calculate the required proportion as an estimate of the probability we seek.
  6. +
+

Repeating the trial many times is the job of the for loop, and we will come to that soon.

+

In order to record the results, we will store each trial result in an array.

+
+
+
+ +
+
+More on arrays +
+
+
+

Since we will be working with arrays a lot, it is worth knowing more about them.

+

A NumPy array is a container that stores many elements of the same type. You have already seen, in Chapter 2, how we can create an array from a sequence of numbers using the np.array function.

+
+
# Make an array of numbers, store with the name "some_numbers".
+some_numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+# Show the value of "some_numbers"
+some_numbers
+
+
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
+
+
+

Another way that we can create arrays is to use the np.zeros function to make a new array where all the elements are 0.

+
+
# Make a new array containing 5 zeros.
+# store with the name "z".
+z = np.zeros(5)
+# Show the value of "z"
+z
+
+
array([0., 0., 0., 0., 0.])
+
+
+

Notice the argument 5 to the np.zeros function. This tells the function how many zeros we want in the array that the function will return.

+
+

7.5 array length

+

There are various useful things we can do with this array container. One is to ask how many elements there are in the array container. We can use the len function to calculate the number of elements in an array:

+
+
# Show the number of elements in "z"
+len(z)
+
+
5
+
+
+
+
+

7.6 Indexing into arrays

+

Another thing we can do is set the value for a particular element in the array. To do this, we use square brackets following the array value, on the left hand side of the equals sign, like this:

+
+
# Set the value of the *first* element in the array.
+z[0] = 99
+# Show the new contents of the array.
+z
+
+
array([99.,  0.,  0.,  0.,  0.])
+
+
+

Read the first line of code as “the element at position 0 gets a value of 99”.

+
+

Notice that the position number of the first element in the array is 0, and the position number of the second element is 1. Think of the position as an offset from the beginning of the array. The first element is at the beginning of the array, and so it is at offset (position) 0. This can be a little difficult to get used to at first, but you will find that thinking of positions as offsets in this way soon starts to come naturally, and later you will also find that it helps you to avoid some common mistakes when using positions for getting and setting values.
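For example (an extra illustration of ours, using the z array from above), the last valid position in our 5-element array is 4, and asking for position 5 would be an error, because there is no element at that offset:

# "z" has 5 elements, so the positions are 0 through 4.
n_elements = len(z)
# The last valid position is one less than the number of elements.
last_position = n_elements - 1
# The value at position 4 - the fifth and last element.
z[last_position]
# z[5] would give an error, because there is no position 5.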

+
+

For practice, let us also set the value of the third element in the array:

+
+
# Set the value of the *third* element in the array.
+z[2] = 99
+# Show the new contents of the array.
+z
+
+
array([99.,  0., 99.,  0.,  0.])
+
+
+

Read the first code line above as “set the value at position 2 in the array to have the value 99”.

+

We can also get the value of the element at a given position, using the same square-bracket notation:

+
+
# Get the value of the *first* element in the array.
+# Store the value with name "v"
+v = z[0]
+# Show the value we got
+v
+
+
99.0
+
+
+

Read the first code line here as “v gets the value at position 0 in the array”.

+

Using square brackets to get and set element values is called indexing into the array.

+
+
+
+
+

7.6.1 Repeating trials

+

As a preview, let us now imagine that we want to do 50 simulated trials of Robert Swain’s jury in Hypothetical County. We will want to store the count for each trial, to give 50 counts.

+

In order to do this, we make an array to hold the 50 counts. Call this array z.

+
+
# An array to hold the 50 count values.
+z = np.zeros(50)
+
+

We could run a single trial to get a single simulated count. Here we just repeat the code cell you saw above. Notice that we can get a different result each time we run this code, because the numbers in a are random choices from the range 0 through 99, and different random numbers will give different counts.

+
+
rnd = np.random.default_rng()
+zero_thru_99 = np.arange(0, 100)
+# Get 12 random numbers from 0 through 99
+a = rnd.choice(zero_thru_99, size=12)
+# How many numbers are greater than 73 (and therefore from 74 through 99)?
+b = np.sum(a > 73)
+# Show the result
+b
+
+
4
+
+
+

Now we have the result of a single trial, we can store it as the first number in the z array:

+
+
# Store the single trial count as the first value in the "z" array.
+z[0] = b
+# Show all the values in the "z" array.
+z
+
+
array([4., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
+       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
+
+
+

Of course we could just keep doing this: run the cell corresponding to a trial, above, to get a new count, and then store it at the next position in the z array. For example, we could store the counts for the first three trials with:

+
+
# First trial
+a = rnd.choice(zero_thru_99, size=12)
+b = np.sum(a > 74)
+# Store the result at the first position in z
+# Remember, the first position is offset 0.
+z[0] = b
+# Second trial
+a = rnd.choice(zero_thru_99, size=12)
+b = np.sum(a > 74)
+# Store the result at the second position in z
+z[1] = b
+# Third trial
+a = rnd.choice(zero_thru_99, size=12)
+b = np.sum(a > 74)
+# Store the result at the third position in z
+z[2] = b
+
+# And so on ...
+
+

This would get terribly long and boring to type for 50 trials. Luckily computer code is very good at repeating the same procedure many times. For example, Python can do this using a for loop. You have already seen a preview of the for loop in Chapter 2. Here we dive into for loops in more depth.

+
+
+

7.6.2 For-loops in Python

+

A for-loop is a way of asking Python to:

+
    +
  • Take a sequence of things, one by one, and
  • +
  • Do the same task on each one.
  • +
+

We often use this idea when we are trying to explain a repeating procedure. For example, imagine we wanted to explain what the supermarket checkout person does for the items in your shopping basket. You might say that they do this:

+
+

For each item of shopping in your basket, they take the item off the conveyor belt, scan it, and put it on the other side of the till.

+
+

You could also break this description up into bullet points with indentation, to say the same thing:

+
    +
  • For each item from your shopping basket, they: +
      +
    • Take the item off the conveyor belt.
    • +
    • Scan the item.
    • +
    • Put it on the other side of the till.
    • +
  • +
+

Notice the logic; the checkout person is repeating the same procedure for each of a series of items.

+

This is the logic of the for loop in Python. The procedure that Python repeats is called the body of the for loop. In the example of the checkout person above, the repeating procedure is:

+
    +
  • Take the item off the conveyor belt.
  • +
  • Scan the item.
  • +
  • Put it on the other side of the till.
  • +
+

Now imagine we wanted to use Python to print out the year of birth for each of the authors for the third edition of this book:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
AuthorYear of birth
Julian Lincoln Simon1932
Matthew Brett1964
Stéfan van der Walt1980
Ian Nimmo-Smith1944
+

We want to see this output:

+
Author birth year is 1932
+Author birth year is 1964
+Author birth year is 1980
+Author birth year is 1944
+

Of course, we could just ask Python to print out these exact lines, like this:

+
+
print('Author birth year is 1932')
+
+
Author birth year is 1932
+
+
print('Author birth year is 1964')
+
+
Author birth year is 1964
+
+
print('Author birth year is 1980')
+
+
Author birth year is 1980
+
+
print('Author birth year is 1944')
+
+
Author birth year is 1944
+
+
+

We might instead notice that we are repeating the same procedure for each of the four birth years, and decide to do the same thing using a for loop:

+
+
author_birth_years = np.array([1932, 1964, 1980, 1944])
+
+# For each birth year
+for birth_year in author_birth_years:
+    # Repeat this procedure ...
+    print('Author birth year is', birth_year)
+
+
Author birth year is 1932
+Author birth year is 1964
+Author birth year is 1980
+Author birth year is 1944
+
+
+

The for loop starts with a line where we tell it what items we want to repeat the procedure for:

+
+
for birth_year in author_birth_years:
+

This initial line of the for loop ends with a colon.

+

The next thing in the for loop is the procedure Python should follow for each item. Python knows that the following lines are the procedure it should repeat, because the lines are indented. The indented lines are the body of the for loop.

+
+

The initial line of the for loop above tells Python that it should take each item in author_birth_years, one by one — first 1932, then 1964, then 1980, then 1944. For each of these numbers it will:

+
    +
  • Put the number into the variable birth_year, then
  • +
  • Run the indented code.
  • +
+

Just as the person at the supermarket checkout takes each item in turn, for each iteration (repeat) of the for loop, birth_year gets a new value from the sequence in author_birth_years. birth_year is called the loop variable, because it is the variable that gets a new value each time we begin a new iteration of the for loop procedure. As for any variable in Python, we can call our loop variable anything we like. We used birth_year here, but we could have used y or year or some other name.
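For example, here is exactly the same loop, but with the shorter name y as the loop variable (an extra example of ours, just to show that the name makes no difference):

# The same loop, with a different name for the loop variable.
for y in author_birth_years:
    print('Author birth year is', y)

This prints the same four lines as before; only the name of the loop variable has changed.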

+

Now you know what the for loop is doing, you can see that the for loop above is equivalent to the following code:

+
+
birth_year = 1932  # Set the loop variable to contain the first value.
+print('Author birth year is', birth_year)  # Use it.
+
+
Author birth year is 1932
+
+
birth_year = 1964  # Set the loop variable to contain the next value.
+print('Author birth year is', birth_year)  # Use the second value.
+
+
Author birth year is 1964
+
+
birth_year = 1980
+print('Author birth year is', birth_year)
+
+
Author birth year is 1980
+
+
birth_year = 1944
+print('Author birth year is', birth_year)
+
+
Author birth year is 1944
+
+
+

Writing the steps in the for loop out like this is called unrolling the loop. It can be a useful exercise to do this when you come across a for loop, in order to work through the logic of the loop. For example, you may want to write out the unrolled equivalent of the first couple of iterations, to see what the loop variable will be, and what will happen in the body of the loop.

+

We often use for loops with ranges (see Section 5.9). Here we use a loop to print out the numbers 0 through 3:

+
+
for n in np.arange(0, 4):
+    print('The loop variable n is', n)
+
+
The loop variable n is 0
+The loop variable n is 1
+The loop variable n is 2
+The loop variable n is 3
+
+
+

Notice that the range ended at (the number before) 4, and that means we repeat the loop body 4 times. We can also use the loop variable value from the range as an index, to get or set the first, second, etc values from an array.

+

For example, maybe we would like to show the author position and the author year of birth.

+

Remember our author birth years:

+
+
author_birth_years
+
+
array([1932, 1964, 1980, 1944])
+
+
+

We can get (for example) the second author birth year with:

+
+
author_birth_years[1]
+
+
1964
+
+
+
+

Remember, for Python, the first element is position 0, so the second element is position 1.

+
+

Using the combination of looping over a range, and array indexing, we can print out the author position and the author birth year:

+
+
for n in np.arange(0, 4):
+    year = author_birth_years[n]
+    print('Birth year of author position', n, 'is', year)
+
+
Birth year of author position 0 is 1932
+Birth year of author position 1 is 1964
+Birth year of author position 2 is 1980
+Birth year of author position 3 is 1944
+
+
+
+

Again, remember Python considers 0 as the first position.

+
+

Just for practice, let us unroll the first two iterations through this for loop, to remind ourselves what the code is doing:

+
+
# Unrolling the for loop.
+n = 0
+year = author_birth_years[n]  # Will be 1932
+print('Birth year of author position', n, 'is', year)
+
+
Birth year of author position 0 is 1932
+
+
n = 1
+year = author_birth_years[n]  # Will be 1964
+print('Birth year of author position', n, 'is', year)
+
+
Birth year of author position 1 is 1964
+
+
# And so on.
+
+
+
+

7.6.3 range in Python for loops

+

So far we have used np.arange to give us the sequence of integers that we feed into the for loop. But — as you saw in Section 5.10 — we can also get a range of numbers from Python’s range function. range is a common and useful alternative way to provide a range of numbers to a for loop.

+

You have just seen how we would use np.arange to send the numbers 0, 1, 2, and 3 to a for loop, in the example above, repeated here:

+
+
for n in np.arange(0, 4):
+    year = author_birth_years[n]
+    print('Birth year of author position', n, 'is', year)
+
+
Birth year of author position 0 is 1932
+Birth year of author position 1 is 1964
+Birth year of author position 2 is 1980
+Birth year of author position 3 is 1944
+
+
+

We could also use range instead of np.arange to do the same task:

+
+
for n in range(0, 4):
+    year = author_birth_years[n]
+    print('Birth year of author position', n, 'is', year)
+
+
Birth year of author position 0 is 1932
+Birth year of author position 1 is 1964
+Birth year of author position 2 is 1980
+Birth year of author position 3 is 1944
+
+
+

In fact, you will see this pattern throughout the book, where we use for statements like for value in range(10000): to ask Python to put each number in the range 0 up to (not including) 10000 into the variable value, and then do something in the body of the loop. Just to be clear, we could always, and almost as easily, write for value in np.arange(10000): to do the same task. But — even though we could use np.arange to get an array of numbers, we generally prefer range in our Python for loops, because it is just a little less typing (we drop the np.a of np.arange), and because it is a more common pattern in standard Python code.1
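If you would like to convince yourself that the two forms really do behave in the same way, here is a quick comparison (an extra example of ours):

# "range" and "np.arange" give the same sequence of values to the loop variable.
for value in range(4):
    print('From range:', value)
for value in np.arange(4):
    print('From np.arange:', value)

Both loops print the values 0, 1, 2 and 3.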

+
+
+

7.6.4 Putting it all together

+

Here is the code we worked out above, to implement a single trial:

+
+
rnd = np.random.default_rng()
+zero_thru_99 = np.arange(0, 100)
+# Get 12 random numbers from 0 through 99
+a = rnd.choice(zero_thru_99, size=12)
+# How many numbers are greater than 73 (and therefore from 74 through 99)?
+b = np.sum(a > 73)
+# Show the result
+b
+
+
4
+
+
+

We found that we could use arrays to store the results of these trials, and that we could use for loops to repeat the same procedure many times.

+

Now we can put these parts together to do 50 simulated trials:

+
+
# Procedure for 50 simulated trials.
+
+# The Numpy random number generator.
+rnd = np.random.default_rng()
+
+# All the numbers from 0 through 99.
+zero_through_99 = np.arange(0, 100)
+
+# An array to store the counts for each trial.
+z = np.zeros(50)
+
+# Repeat the trial procedure 50 times.
+for i in np.arange(0, 50):
+    # Get 12 random numbers from 0 through 99
+    a = rnd.choice(zero_through_99, size=12)
+    # How many numbers are greater than 73 (and therefore from 74 through 99)?
+    b = np.sum(a > 73)
+    # Store the result at the next position in the "z" array.
+    z[i] = b
+    # Now go back and do the next trial until finished.
+# Show the result of all 50 trials.
+z
+
+
array([4., 2., 3., 3., 4., 1., 4., 2., 7., 2., 3., 1., 6., 2., 5., 5., 3.,
+       1., 3., 4., 2., 2., 2., 4., 3., 4., 4., 2., 3., 3., 3., 1., 3., 1.,
+       2., 3., 2., 2., 3., 3., 6., 1., 3., 3., 4., 2., 4., 3., 4., 3.])
+
+
+

Finally, we need to count how many of the trials in z ended up with all-white juries. These are the trials with a z (count) value of 0.

+

To do this, we can ask an array which elements match a certain condition. E.g.:

+
+
x = np.array([2, 1, 3, 0])
+y = x < 2
+# Show the result
+y
+
+
array([False,  True, False,  True])
+
+
+

We now use that same technique to ask, of each of the 50 counts, whether the array z is equal to 0, like this:

+
+
# Is the value of z equal to 0?
+all_white = z == 0
+# Show the result of the comparison.
+all_white
+
+
array([False, False, False, False, False, False, False, False, False,
+       False, False, False, False, False, False, False, False, False,
+       False, False, False, False, False, False, False, False, False,
+       False, False, False, False, False, False, False, False, False,
+       False, False, False, False, False, False, False, False, False,
+       False, False, False, False, False])
+
+
+

We need to get the number of True values in all_white, to find how many simulated trials gave all-white juries.

+
+
# Count the number of True values in "all_white"
+# This is the same as the number of values in "z" that are equal to 0.
+n_all_white = np.sum(all_white)
+# Show the result of the comparison.
+n_all_white
+
+
0
+
+
+

n_all_white is the number of simulated trials for which all the jury members were white. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.

+
+
# Proportion of trials where all jury members were white.
+p = n_all_white / 50
+# Show the result
+p
+
+
0.0
+
+
+

From this initial simulation, it seems there is around a 0% chance that a jury selected randomly from the population, which was 26% black, would have no black jurors.

+
+
+
+

7.7 Many many trials

+

Our experiment above is only 50 simulated trials. The higher the number of trials, the more confident we can be of our estimate for p — the proportion of trials where we get an all-white jury.

+

It is no extra trouble for us to tell the computer to do a very large number of trials. For example, we might want to run 10,000 trials instead of 50. All we have to do is to run the loop 10,000 times instead of 50 times. The computer has to do more work, but it is more than up to the job.

+

Here is exactly the same code we ran above, but collected into one cell, and using 10,000 trials instead of 50. We have left out the comments, to make the code more compact.

+
+
# Full simulation procedure, with 10,000 trials.
+rnd = np.random.default_rng()
+zero_through_99 = np.arange(0, 100)
+# 10,000 trials.
+z = np.zeros(10000)
+for i in np.arange(0, 10000):
+    a = rnd.choice(zero_through_99, size=12)
+    b = np.sum(a > 73)
+    z[i] = b
+all_white = z == 0
+n_all_white = sum(all_white)
+p = n_all_white / 10000
+p
+
+
0.0305
+
+
+

We now have a new, more accurate estimate of the proportion of Hypothetical County juries that are all white. The proportion is 0.03, and so 3%.

+

This proportion means that, for any one jury from Hypothetical County, there is a less than one in 20 chance that the jury would be all white.

+

As we will see in more detail later, we might consider using the results from this experiment in Hypothetical County, to reflect on the result we saw in the real Talladega County. We might conclude, for example, that there was likely some systematic difference between Hypothetical County and Talladega County. Maybe the difference was that there was, in fact, some bias in the jury selection in Talladega County, and that the Supreme Court was wrong to reject this. You will hear more of this line of reasoning later in the book.

+
+
+

7.8 Conclusion

+

In this chapter we studied a real life-and-death question, on racial bias and the death penalty. We continued our exploration of the ways we can use probability, and resampling, to draw conclusions about real events. Along the way, we went into more detail on arrays in Python, and for loops; two basic tools in resampling.

+

In the next chapter, we will work through some more problems in probability, to show how we can use resampling, to answer questions about chance. We will add some more tools for writing code in Python, to make your programs easier to write, read, and understand.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/sampling_tools.html b/python-book/sampling_tools.html new file mode 100644 index 00000000..6b311b51 --- /dev/null +++ b/python-book/sampling_tools.html @@ -0,0 +1,1235 @@ + + + + + + + + + +Resampling statistics - 6  Tools for samples and sampling + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

6  Tools for samples and sampling

+
+ + + +
+ + + + +
+ + +
+ +
+

6.1 Introduction

+

Now you have some experience with Python, probabilities and resampling, it is time to introduce some useful tools for our experiments and programs.

+
+

Start of sampling_tools notebook

+ + +
+

6.2 Samples and labels

+

Thus far we have used numbers such as 1 and 0 and 10 to represent the elements we are sampling from. For example, in Chapter 7, we were simulating the chance of a particular juror being black, given that 26% of the eligible jurors in the county were black. We used integers for that task, where we started with all the integers from 0 through 99, and asked NumPy to select values at random from those integers. When NumPy selected an integer from 0 through 25, we chose to label the resulting simulated juror as black — there are 26 integers in the range 0 through 25, so there is a 26% chance that any one integer will be in that range. If the integer was from 26 through 99, the simulated juror was white (there are 74 integers in the range 26 through 99).

+

Here is the process of simulating a single juror, adapted from Section 7.3.3:

+
+
import numpy as np
+# Ask NumPy for a random number generator.
+rnd = np.random.default_rng()
+
+# All the integers from 0 up to, but not including 100.
+zero_thru_99 = np.arange(100)
+
+# Get one random number from 0 through 99
+a = rnd.choice(zero_thru_99)
+
+# Show the result
+a
+
+
59
+
+
+

After that, we have to unpack our labeling of 0 through 25 as being “black” and 26 through 99 as being “white”. We might do that like this:

+
+
this_juror_is_black = a < 26
+this_juror_is_black
+
+
False
+
+
+

This all works as we want it to, but it’s just a little bit difficult to remember the coding (less than 26 means “black”, greater than 25 means “white”). We had to use that coding because we committed ourselves to using random numbers to simulate the outcomes.

+

However, Python can also store bits of text, called strings. Values that are bits of text can be very useful because the text values can be memorable labels for the entities we are sampling from, in our simulations.

+
+

Before we get to strings, let us consider the different types of value we have seen so far.

+
+

6.3 Types of values in Python

+

You have already come across the idea that Python values can be integers (positive or negative whole numbers), like this:

+
+
v = 10
+v
+
+
10
+
+
+

Here the variable v holds the value. We can see what type of value v holds by using the type function:

+
+
type(v)
+
+
<class 'int'>
+
+
+

As you may have noticed, Python can also have floating point values. These are values with a decimal point — so numbers that do not have to be integers, but can be any value between the integers. These floating points values are of type float:

+
+
f = 10.1
+type(f)
+
+
<class 'float'>
+
+
+
+

6.3.1 Numpy arrays

+

You have also seen that Numpy contains another type, the array. An array is a value that contains a sequence of values. For example, here is an array of integers:

+
+
arr = np.array([0, 10, 99, 4])
+arr
+
+
array([ 0, 10, 99,  4])
+
+
+

Notice that this value arr is of type np.ndarray:

+
+
type(arr)
+
+
<class 'numpy.ndarray'>
+
+
+

The array has its own internal record of what type of values it holds. This is called the array dtype:

+
+
arr.dtype
+
+
dtype('int64')
+
+
+

The array dtype records the type of value stored in the array. All values in the array must be of this type, and all values in the array are therefore of the same type.

+

The array above contains integers, but we can also make arrays containing floating point values:

+
+
float_arr = np.array([0.1, 10.1, 99.0, 4.3])
+float_arr
+
+
array([ 0.1, 10.1, 99. ,  4.3])
+
+
+
+
float_arr.dtype
+
+
dtype('float64')
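One consequence of the single dtype (an extra example of ours, not from the original text) is that NumPy will convert values where necessary to keep every element the same type. For example, if we mix integers and floating point values, NumPy converts the integers to floats:

# A mixture of integer and floating point values.
mixed = np.array([1, 2.5, 3])
# NumPy has converted the integers to floats, so the dtype is float64.
mixed.dtype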
+
+
+
+
+

6.3.2 Lists

+

We have glossed over another Python type, the list. In fact, we have already used lists in making arrays. For example, here we make an array with four values:

+
+
np.array([0, 10, 99, 4])
+
+
array([ 0, 10, 99,  4])
+
+
+

We could also write the statement above in two steps:

+
+
my_list = [0, 10, 99, 4]
+np.array(my_list)
+
+
array([ 0, 10, 99,  4])
+
+
+

In the first statement — my_list = [0, 10, 99, 4] — we construct a list — a container for the four values. Let’s look at the my_list value:

+
+
my_list
+
+
[0, 10, 99, 4]
+
+
+

Notice that we do not see array in the display — this is not an array but a list:

+
+
type(my_list)
+
+
<class 'list'>
+
+
+

A list is a basic Python type. We can construct it by using the square brackets notation that you see above; we start with [, then we put the values we want to go in the list, separated by commas, followed by ]. Here is another list:

+
+
# Creating another list.
+list_2 = [5, 10, 20]
+
+

As you saw, we have been building arrays by building lists, and then passing the list to the np.array function, to create an array.

+
+
list_again = [100, 10, 0]
+np.array(list_again)
+
+
array([100,  10,   0])
+
+
+

Of course, we can do this in one line, as we have been doing up till now, by constructing the list inside the parentheses of the function. So, the following cell has just the same output as the cell above:

+
+
# Constructing the list inside the function brackets.
+np.array([100, 10, 0])
+
+
array([100,  10,   0])
+
+
+

Lists are like arrays in that they are values that contain values, but they are unlike arrays in various ways that we will not go into now. We often use lists to construct sequences of values that we then turn into arrays. For our purposes, and particularly for our calculations, arrays are much more useful and efficient than lists.

+ +
+
+
+
+
+

6.4 String values

+

So far, all the values you have seen in Python arrays have been numbers. Now we get on to values that are bits of text. These are called strings.

+

Here is a single Python string value:

+
+
s = "Resampling"
+s
+
+
'Resampling'
+
+
+

What is the type of the new bit-of-text value s?

+
+
type(s)
+
+
<class 'str'>
+
+
+

The Python str value is a bit of text, and therefore consists of a sequence of characters.

+

As arrays are containers for other things, such as numbers, strings are containers for characters.

+
+

As we can find the number of elements in an array (Section 7.5), we can find the number of characters in a string with the len function:

+
+
# Number of characters in s
+len(s)
+
+
10
+
+
+
+
+

As we can index into array values to get individual elements (Section 7.6), we can index into string values to get individual characters:

+
+
# Get the second character of the string
+# Remember, Python's index positions start at 0.
+second_char = s[1]
+second_char
+
+
'e'
+
+
+
+
+
+

6.5 Strings in arrays

+

As we can store numbers as elements in arrays, we can also store strings as array elements.

+
+
# Just for clarity, make the list first.
+# Lists can also contain strings.
+list_of_strings = ['Julian', 'Lincoln', 'Simon']
+# Then pass the list to np.array to make the array.
+arr_of_strings = np.array(list_of_strings)
+arr_of_strings
+
+
array(['Julian', 'Lincoln', 'Simon'], dtype='<U7')
+
+
+
+
# We can also create the list and the array in one line,
+# as we have been doing up til now.
+arr_of_strings = np.array(['Julian', 'Lincoln', 'Simon'])
+arr_of_strings
+
+
array(['Julian', 'Lincoln', 'Simon'], dtype='<U7')
+
+
+
+

Notice the array dtype:

+
+
arr_of_strings.dtype
+
+
dtype('<U7')
+
+
+

The U in the dtype tells you that the elements in the array are Unicode strings (Unicode is a computer representation of text characters). The number after the U gives the maximum number of characters for any string in the array, here set to the length of the longest string when we created the array.

+
+
+
+ +
+
+Take care with Numpy string arrays +
+
+
+

It is easy to run into trouble with Numpy string arrays where the elements have a maximum length, as here. Remember, the dtype of the array tells you what type of element the array can hold. Here the dtype is telling you that the array can hold strings of maximum length 7 characters. Now imagine trying to put a longer string into the array — what do you think would happen?

+

This happens:

+
+
# An array of small strings.
+small_strings = np.array(['six', 'one', 'two'])
+small_strings.dtype
+
+
dtype('<U3')
+
+
+
+
# Set a new value for the first element (first string).
+small_strings[0] = 'seven'
+small_strings
+
+
array(['sev', 'one', 'two'], dtype='<U3')
+
+
+

Numpy truncates the new string to match the original maximum length.

+

For that reason, it is often useful to instruct Numpy that you want to use effectively infinite length strings, by specifying the array dtype as object when you make the array, like this:

+
+
# An array of small strings, but this time, tell Numpy
+# that the strings should be of effectively infinite length.
+small_strings_better = np.array(['six', 'one', 'two'], dtype=object)
+small_strings_better
+
+
array(['six', 'one', 'two'], dtype=object)
+
+
+

Notice that the code uses a named function argument (Section 5.8), to specify to np.array that the array elements should be of type object. This type can store any Python value, and so, when the array is storing strings, it will use Python’s own string values as elements, rather than the more efficient but more fragile Unicode strings that Numpy uses by default.

+
+
# Set a new value for the first element in the new array.
+small_strings_better[0] = 'seven'
+small_strings_better
+
+
array(['seven', 'one', 'two'], dtype=object)
+
+
+ +
+
+ +
+

As for any array, you can select elements with indexing. When you select an element with a given position (index), you get the string at that position:

+
+
# Julian Lincoln Simon's second name.
+# (Remember, Python's positions start at 0).
+middle_name = arr_of_strings[1]
+middle_name
+
+
'Lincoln'
+
+
+

As for numbers, we can compare strings with, for example, the == operator, that asks whether the two strings are equal:

+
+
middle_name == 'Lincoln'
+
+
True
+
+
+
+
+

6.6 Repeating elements

+

Now let us go back to the problem of selecting black and white jurors.

+

We started with the strategy of using numbers 0 through 25 to mean “black” jurors, and 26 through 99 to mean “white” jurors. We selected values at random from 0 through 99, and then worked out whether the number meant a “black” juror (was less than 26) or a “white” juror (was greater than 25).

+

It would be good to use strings instead of numbers to identify the potential jurors. Then we would not have to remember our coding of 0 through 25 and 26 through 99.

+

If only there was a way to make an array of 100 strings, where 26 of the strings were “black” and 74 were “white”. Then we could select randomly from that array, and it would be immediately obvious that we had a “black” or “white” juror.

+

Luckily, of course, we can do that, by using the np.repeat function to construct the array.

+

Here is how that works:

+
+
# The values that we will repeat to fill up the larger array.
+# Use a list to store the sequence of values.
+juror_types = ['black', 'white']
+# The number of times we want to repeat "black" and "white".
+# Use a list to store the sequence of values.
+repeat_nos = [26, 74]
+# Repeat "black" 26 times and "white" 74 times.
+# We have passed two lists here, but we could also have passed
+# arrays - the Numpy repeat function converts the lists to arrays
+# before it builds the repeats.
+jury_pool = np.repeat(juror_types, repeat_nos)
+# Show the result
+jury_pool
+
+
array(['black', 'black', 'black', 'black', 'black', 'black', 'black',
+       'black', 'black', 'black', 'black', 'black', 'black', 'black',
+       'black', 'black', 'black', 'black', 'black', 'black', 'black',
+       'black', 'black', 'black', 'black', 'black', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white'], dtype='<U5')
+
+
+

We can use this array of repeats of strings, to sample from. The result is easier to grasp, because we are using the string labels, instead of numbers:

+
+
# Select one juror at random from the black / white pool.
+one_juror = rnd.choice(jury_pool)
+one_juror
+
+
'white'
+
+
+

We can select our full jury of 12 jurors, and see the results in a more obvious form:

+
+
# Select 12 jurors at random from the black / white pool.
+one_jury = rnd.choice(jury_pool, 12)
+one_jury
+
+
array(['white', 'white', 'white', 'white', 'black', 'white', 'black',
+       'white', 'white', 'black', 'black', 'white'], dtype='<U5')
+
+
+
+
+
+ +
+
+Using the size argument to rnd.choice +
+
+
+

In the code above, we have specified the size of the sample we want (12) with the second argument to rnd.choice. As you saw in Section 5.8, we can also give names to the function arguments, in this case, to make it clearer what we mean by “12” in the code above. In fact, from now on, that is what we will do; we will specify the size of our sample by using the size named argument to rnd.choice, like this:

+
+
# Select 12 jurors at random from the black / white pool.
+# Specify the sample size using the "size" named argument.
+one_jury = rnd.choice(jury_pool, size=12)
+one_jury
+
+
array(['black', 'white', 'white', 'white', 'black', 'white', 'black',
+       'white', 'white', 'white', 'white', 'white'], dtype='<U5')
+
+
+
+
+

We can use == on the array to get True values where the juror was “black” and False values otherwise:

+
+
are_black = one_jury == 'black'
+are_black
+
+
array([ True, False, False, False,  True, False,  True, False, False,
+       False, False, False])
+
+
+

Finally, we can use np.sum to find the number of black jurors (Section 5.14):

+
+
# Number of black jurors in this simulated jury.
+n_black = np.sum(are_black)
+n_black
+
+
3
+
+
+

Putting that all together, this is our new procedure to select one jury and count the number of black jurors:

+
+
one_jury = rnd.choice(jury_pool, size=12)
+are_black = one_jury == 'black'
+n_black = np.sum(are_black)
+n_black
+
+
3
+
+
+

Or we can be even more compact by putting several statements together into one line:

+
+
# The same as above, but on one line.
+n_black = np.sum(rnd.choice(jury_pool, size=12) == 'black')
+n_black
+
+
1
+
+
+
+
+

6.7 Resampling with and without replacement

+

Now let us return to the details of Robert Swain’s case, that you first saw in Chapter 7.

+

We looked at the composition of Robert Swain’s 12-person jury — but in fact, by law, that does not have to be representative of the eligible jurors. The 12-person jury is drawn from a jury panel, of 100 people, and this should, in turn, be drawn from the population of all eligible jurors in the county, consisting, at the time, of “all male citizens in the community over 21 who are reputed to be honest, intelligent men and are esteemed for their integrity, good character and sound judgment.” So, unless there was some bias against black jurors, we might expect the 100-person jury panel to be a plausibly random sample of the eligible jurors, of whom 26% were black. See the Supreme Court case judgement for details.

+

In fact, in Robert Swain’s trial, there were 8 black members in the 100-person jury panel. We will leave it to you to adapt the simulation from Chapter 7 to ask the question — is 8% surprising as a random sample from a population with 26% black people?

+

But we have a different question: given that 8 out of 100 of the jury panel were black, is it surprising that none of the 12-person jury were black? As usual, we can answer that question with simulation.

+

Let’s think about what a single simulated jury selection would look like.

+

First we compile a representation of the actual jury panel, using the tools we have used above.

+
+
juror_types = ['black', 'white']
+# in fact there were 8 black jurors and 92 white jurors.
+panel_nos = [8, 92]
+jury_panel = np.repeat(juror_types, panel_nos)
+# Show the result
+jury_panel
+
+
array(['black', 'black', 'black', 'black', 'black', 'black', 'black',
+       'black', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white', 'white', 'white',
+       'white', 'white'], dtype='<U5')
+
+
+

Now consider taking a 12-person jury at random from this panel. We select the first juror at random, so that juror has an 8 out of 100 chance of being black. But when we select the second jury member, the situation has changed slightly. We can’t select the first juror again, so our panel is now 99 people. If our first juror was black, then the chances of selecting another black juror next are not 8 out of 100, but 7 out of 99 — a smaller chance. The problem is, as we shall see in more detail later, the chances of getting a black juror as the second, and third and fourth members of the jury depend on whether we selected a black juror as the first and second and third jury members. At its most extreme, imagine we had already selected eight jurors, and by some strange chance, all eight were black. Now our chances of selecting a black juror as the ninth juror are zero — there are no black jurors left to select from the panel.

+

In this case we are selecting jurors from the panel without replacement, meaning, that once we have selected a particular juror, we cannot select them again, and we do not put them back into the panel when we select our next juror.

+

This is the probability equivalent of the situation when you are dealing a hand of cards. Let’s say someone is dealing you, and you only, a hand of five cards. You get an ace as your first card. Your chances of getting an ace as your first card were just the number of aces in the deck divided by the number of cards — four in 52 – \(\frac{4}{52}\). But for your second card, the probability has changed, because there is one less ace remaining in the pack, and one less card, so your chances of getting an ace as your second card are now \(\frac{3}{51}\). This is sampling without replacement — in a normal game, you can’t get the same card twice. Of course, you could imagine getting a hand where you sampled with replacement. In that case, you’d get a card, you’d write down what it was, and you’d give the card back to the dealer, who would replace the card in the deck, shuffle again, and give you another card.
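Here is that card-dealing idea written with the tools from this chapter (a sketch of ours, just for illustration; the deck array and the "ace" / "other" labels are made up for the example):

# A deck with 4 "ace" cards and 48 "other" cards.
deck = np.repeat(['ace', 'other'], [4, 48])
# Deal a hand of 5 cards *without replacement*, so no card can be dealt twice.
hand = rnd.choice(deck, size=5, replace=False)
hand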

+

As you can see, the chances change if you are sampling with or without replacement, and the kind of sampling you do, will dictate how you model your chances in your simulations.

+

Because this distinction is so common, and so important, the machinery you have already seen in rnd.choice has simple ways for you to select your sampling type. You have already seen sampling with replacement, and it looks like this:

+
+
# Take a sample of 12 jurors from the panel *with replacement*
+# With replacement is the default for `rnd.choice`.
+strange_jury = rnd.choice(jury_panel, size=12)
+strange_jury
+
+
array(['white', 'white', 'white', 'black', 'white', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white'], dtype='<U5')
+
+
+

This is a strange jury, because it can select any member of the jury pool more than once. Perhaps that juror would have to fill two (or more!) seats, or run quickly between them. But of course, that is not how juries are selected. They are selected without replacement:

+
+
# Take a sample of 12 jurors from the panel *without replacement*
+ok_jury = rnd.choice(jury_panel, 12, replace=False)
+ok_jury
+
+
array(['white', 'white', 'white', 'white', 'black', 'white', 'white',
+       'white', 'white', 'white', 'white', 'white'], dtype='<U5')
+
+
+
+
+
+ +
+
+Comments at the end of lines +
+
+
+

You have already seen comment lines. These are lines beginning with #, to signal to Python that the rest of the line is text for humans to read, but Python to ignore.

+
+
# This is a comment.  Python ignores this line.
+
+

You can also put comments at the end of code lines, by finishing the code part of the line, and then putting a #, followed by more text. Again, Python will ignore everything after the #; it is text for humans, not for Python.

+
+
print('Hello')  # This is a comment at the end of the line.
+
+
Hello
+
+
+
+
+

To finish the procedure for simulating a single jury selection, we count the number of black jurors:

+
+
n_black = np.sum(ok_jury == 'black')  # How many black jurors?
+n_black
+
+
1
+
+
+

Now we have the procedure for one simulated trial, here is the procedure for 10000 simulated trials.

+
+
counts = np.zeros(10000)
+for i in np.arange(10000):
+    # Single trial procedure
+    jury = rnd.choice(jury_panel, size=12, replace=False)
+    n_black = np.sum(jury == 'black')  # How many black jurors?
+    # Store the result
+    counts[i] = n_black
+
+# Number of juries with 0 black jurors.
+zero_black = np.sum(counts == 0)
+# Proportion
+p_zero_black = zero_black / 10000
+print(p_zero_black)
+
+
0.3421
+
+
+

We have found that, when there are only 8% black jurors in the jury panel, having no black jurors in the final jury happens about 34% of the time, even in this case, where the jury is selected completely at random from the jury panel.
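As a cross-check on that figure (an extra calculation of ours, not part of the book's resampling procedure), we can also count the possibilities directly: the chance of an all-white jury is the number of ways to choose 12 jurors from the 92 white panel members, divided by the number of ways to choose 12 jurors from all 100 panel members.

from math import comb
# Ways to pick an all-white jury, divided by all possible juries.
comb(92, 12) / comb(100, 12)

This gives about 0.345, close to the proportion we found by simulation.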

+

We should look for the main source of bias in the initial selection of the jury panel, not in the selection of the jury from the panel.

+ +

End of sampling_tools notebook

+
+
+
+
+
+ +
+
+With or without replacement for the original jury selection +
+
+
+

You may have noticed in Chapter 7 that we were sampling Robert Swain’s jury from the eligible pool of jurors, with replacement. You might reasonably ask whether we should have selected from the eligible jurors without replacement, given that the same juror cannot serve more than once in the same jury, and therefore, the same argument applies there as here.

+

The trick there was that we were selecting from a very large pool of many thousand eligible jurors, of whom 26% were black. Let’s say there were 10,000 eligible jurors, of whom 2,600 were black. When selecting the first juror, there is exactly a 2,600 in 10,000 chance of getting a black juror — 26%. If we do get a black juror first, then the chance that the second juror will be black has changed slightly, 2,599 in 9,999. But these changes are very small; even if we select eleven black jurors out of eleven, when we come to the twelfth juror, we still have a 2,589 out of 9,989 chance of getting another black juror, and that works out at a 25.92% chance — hardly changed from the original 26%. So yes, you’d be right, we really should have compiled our population of 2,600 black jurors and 7,400 white jurors, and then sampled without replacement from that population, but as the resulting sample probabilities will be very similar to the simpler sampling with replacement, we chose to try and slide that one quietly past you, in the hope you would forgive us when you realized.
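To see just how small that change is, here is the arithmetic from the passage above as code (an extra check of ours):

# Chance that the twelfth juror is black, if the first eleven selected
# were all black, from 2,600 black jurors among 10,000 eligible jurors.
(2600 - 11) / (10000 - 11)

This gives about 0.2592, hardly different from the original 0.26.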

+
+
+
+
+

6.8 Conclusion

+

This chapter introduced you to the idea of strings — values in Python that store bits of text. Strings are very useful as labels for the entities we are sampling from, when we do our simulations. Strings are particularly useful when we use them with arrays, and one way we often do that is to build up arrays of strings to sample from, using the np.repeat function.

+

There is a fundamental distinction between two different types of sampling — sampling with replacement, where we draw an element from a larger pool, then put that element back before drawing again, and sampling without replacement, where we remove the element from the remaining pool when we draw it into the sample. As we will see later, it is often a judgment call which of these two types of sampling is a more reasonable model of the world you are trying to simulate.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/sampling_variability.html b/python-book/sampling_variability.html new file mode 100644 index 00000000..0dcf8540 --- /dev/null +++ b/python-book/sampling_variability.html @@ -0,0 +1,1410 @@ + + + + + + + + + +Resampling statistics - 14  On Variability in Sampling + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

14  On Variability in Sampling

+
+ + + +
+ + + + +
+ + +
+ +
+

[Debra said]: “I’ve had such good luck with Japanese cars and poor luck with American...”

+
+
+

The ’65 Ford Mustang: “It was fun, but I had to put two new transmissions in it.”

+
+
+

The Ford Torino: “That got two transmissions too. That finished me with Ford.”

+
+
+

The Plymouth Horizon: “The disaster of all disasters. That should’ve been painted bright yellow. What a lemon.”

+
+

(From Washington Post Magazine, May 17, 1992, p. 19)

+

Do the quotes above convince you that Japanese cars are better than American? Has Debra got enough evidence to reach the conclusion she now holds? That sort of question, and the reasoning we use to address it, is the subject of this chapter.

+

More generally, how should one go about using the available data to test the hypothesis that Japanese cars are better? That is an example of the questions that are the subject of statistics.

+
+

14.1 Variability and small samples

+

Perhaps the most important idea for sound statistical inference — the section of the book we are now beginning, in contrast to problems in probability, which we have studied in the previous chapters — is recognition of the presence of variability in the results of small samples. The fatal error of relying on too-small samples is all too common among economic forecasters, journalists, and others who deal with trends and public opinion. Athletes, sports coaches, sportswriters, and fans too frequently disregard this principle both in their decisions and in their discussion.

+

Our intuitions often carry us far astray when the results vary from situation to situation — that is, when there is variability in outcomes — and when we have only a small sample of outcomes to look at.

+

To motivate the discussion, I’ll tell you something that almost no American sports fan will believe: There is no such thing as a slump in baseball batting. That is, a batter often goes an alarming number of at-bats without getting a hit, and everyone — the manager, the sportswriters, and the batter himself — assumes that something has changed, and the probability of the batter getting a hit is now lower than it was before the slump. It is common for the manager to replace the player for a while, and for the player and coaches to change the player’s hitting style so as to remedy the defect. But the chance of a given batter getting a hit is just the same after he has gone many at-bats without a hit as when he has been hitting well. A belief in slumps causes managers to play line-ups which may not be their best.

+

By “slump” I mean that a player’s probability of getting a hit in a given at-bat is lower during a period than during average periods. And when I say there is no such thing as a slump, I mean that the chance of getting a hit after any sequence of at-bats without a hit is no different from the long-run average.

+

The “hot hand” in basketball is another illusion. In practical terms, the hot hand does not exist — or rather, if it does, the effect is weak.1 The chance of a shooter scoring is more or less the same after they have just missed a flock of shots as when they have just sunk a long string. That is, the chance of scoring a basket is not appreciably higher after a run of successes than after a run of failures. But even professional teams choose plays on the basis of who supposedly has a hot hand.

+

Managers who substitute for the “slumping” or “cold-handed” players with other players who, in the long run, have lower batting averages, or set up plays for the shooter who supposedly has a hot hand, make a mistake. The supposed hot hand in basketball, and the slump in baseball, are illusions because the observed long runs of outs, or of baskets, are statistical artifacts, due to ordinary random variability. The identification of slumps and hot hands is superstitious behavior, classic cases of the assignment of pattern to a series of events when there really is no pattern.

+

How do statisticians ascertain that slumps and hot hands are very weak effects, or do not exist? In brief, in baseball we simulate a hitter with a given average — say .250 — and compare the results with actual hitters of that average, to see whether the real hitters have “slumps” longer than those of the simulated hitter. The method of investigation is roughly as follows. You program a computer or other machine to behave the way a player would, given the player’s long-run average, on the assumption that each trial is a random drawing. For example, if a player has a .250 season-long batting average, the machine is programmed like a bucket containing three black balls and one white ball. Then for each simulated at-bat, the machine shuffles the “balls” and draws one; it then records whether the result is black or white, after which the ball is replaced in the bucket. To study a season with four hundred at-bats, a simulated ball is drawn four hundred times.
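Here, for illustration, is a minimal sketch of that procedure in code (ours, not the original machine): a bucket with one white ball (a hit) and three black balls (outs), drawn from 400 times with replacement.

import numpy as np

rnd = np.random.default_rng()

# The bucket: one white ball (hit) and three black balls (outs) gives
# a 1-in-4 (.250) chance of a hit on each draw.
bucket = ['white', 'black', 'black', 'black']

# Draw 400 times, replacing the ball after each draw.
season = rnd.choice(bucket, size=400)
n_hits = np.sum(season == 'white')
print('Simulated batting average:', n_hits / 400)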

+

The records of the player’s real season and the simulated season are then compared. If there really is such a thing as a non-random slump or streak, there will be fewer but longer “runs” of hits or outs in the real record than in the simulated record. On the other hand, if performance is independent from at-bat trial to at-bat trial, the actual record will change from hit to out and from out to hit as often as does the random simulated record. I suggested this sort of test for the existence of slumps in my 1969 book that first set forth the resampling method, a predecessor of this book.
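The comparison comes down to counting how often the record switches between hit and out. A rough sketch of such a count, applied to a simulated .250 hitter, might look like this (counting switches in a real player’s record works the same way):

import numpy as np

rnd = np.random.default_rng()

# Simulate a 400 at-bat season for a .250 hitter (1 = hit, 0 = out).
season = rnd.choice([1, 0], p=[0.25, 0.75], size=400)

# Count the number of shifts: positions where the result differs
# from the previous at-bat.
n_shifts = np.sum(season[1:] != season[:-1])
print('Number of shifts between hit and out:', n_shifts)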

+

For example, Table 14.1 shows the results of one 400 at-bat season for a simulated .250 hitter (H = hit, O = out; sequential at-bats are ordered vertically). Note the “slump” — 1 for 24 — in columns 7 & 8 (in bold).

+
Table 14.1: 400 simulated at-bats (ordered vertically)
OOOOOOHOOOOHOHOO
OOOOOHOOHHHOHHOO
OOOHOOOOHOOOHHOO
OOOOOHHOOOOHOOOH
HOHOOHOOOHOOOOHO
HOOHOOHHOHOOHOHO
OOHOOOOHOOOOOOHO
OOHOOOOHHOOOOOOO
OHOOOOOOHHOOOHOO
OHHOOOOHOHOOHOHO
OOHHOHOHOHHHOOOO
HOOOOOOOOHOHHOOO
OHOOOHOOOOOOOOHH
HOHOOOHOOOOHHOOH
OOOOHHOOOOOHHHHO
OOOOHHOOOOOHOOOO
HOOOOOOOOOOOOOOO
OHHHOOOHOHOOOOOO
OHOHOOOOHOOOOHOO
OOOHHOOOOOHOHOOH
OHOOHOOOOOHOOOOO
HHHOOOOHOOOOHOOH
OOOHHOOOOOOOOOHO
OHOOOOOHHOOOOOOH
OOOOOHOOOHOHOHOO
+
+

Harry Roberts investigated the batting records of a sample of major leaguers.2 He compared players’ season-long records against the behavior of random-number drawings. If slumps existed rather than being a fiction of the imagination, the real players’ records would shift from a string of hits to a string of outs less frequently than would the random-number sequences. But in fact the number of shifts, and the average lengths of strings of hits and outs, are on average the same for players as for player-simulating random-number devices.

+

Over long periods, averages may vary systematically, as Ty Cobb’s annual batting averages varied non-randomly from season to season, Roberts found. But in the short run, most individual and team performances have shown results similar to the outcomes that a lottery-type random number machine would produce.

+

Thomas Gilovich, Robert Vallone and Amos Tversky (1985) performed a similar study of basketball shooting. They examined the records of shots from the floor by the Philadelphia 76ers, foul shots by the Boston Celtics, and a shooting experiment with Cornell University teams. They found that “basketball players and fans alike tend to believe that a player’s chances of hitting a shot are greater following a hit than following a miss on the previous shot. However, detailed analyses…provided no evidence for a positive correlation between the outcomes of successive shots.”

+

To put their conclusion differently, knowing whether a shooter has scored or not scored on the previous shot — or in any previous sequence of shots — is of absolutely no use in predicting whether the shooter will or will not score on the next shot. Similarly, knowledge of the past series of at-bats in baseball does not improve a prediction of whether a batter will get a hit this time.

+

Of course a batter feels — and intensely — as if she or he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, both sandlot players and professionals feel confident that this time will be a hit, too. And after you have hit a bunch of baskets from all over the court, you feel as if you can’t miss.

+

But notice that card players get the same poignant feeling of being “hot” or “cold,” too. After a poker player “fills” several straights and flushes in a row, s/he feels s/he will hit the next one too. (Of course there are some players who feel just the opposite, that the “law of averages” is about to catch up with them.)

+

You will agree, I’m sure, that the cards don’t have any memory, and a player’s chance of filling a straight or flush remains the same no matter how he or she has done in the last series of hands. Clearly, then, a person can have a strong feeling that something is about to happen even when that feeling has no foundation. This supports the idea that even though a player in sports “feels” that s/he is in a slump or has a hot hand, this does not imply that the feeling has any basis in reality.

+

Why, when a batter is low in his/her mind because s/he has been making a lot of outs or for personal reasons, does her/his batting not suffer? And why the opposite? Apparently at any given moment there are many influences operating upon a player’s performance in a variety of directions, with none of them clearly dominant. Hence there is no simple convincing explanation why a player gets a hit or an out, a basket or a miss, on any given attempt.

+

But though science cannot provide an explanation, the sports commentators always are ready to offer their analyses. Listen, for example, to how they tell you that Joe Zilch must have been trying extra hard just because of his slump. There is a sportswriter’s explanation for anything that happens.

+

Why do we believe the nonsense we hear about “momentum,” “comeback,” “she’s due this time,” and so on? The adult of the human species has a powerful propensity to believe that he or she can find a pattern even when there is no pattern to be found. Two decades ago I cooked up a series of numbers with a random-number machine that looked as if they were prices on the stock market. Subjects in the experiment were told to buy and sell whichever stocks they chose. Then I gave them “another day’s prices,” and asked them to buy and sell again. The subjects did all kinds of fancy figuring, using an incredible variety of assumptions — even though there was no way for the figuring to help them. That is, people sought patterns even though there was no reason to believe that there were any patterns to be found.

+

When I stopped the game before the ten buy-and-sell sessions the participants expected, people asked that the game continue. Then I would tell them that there was no basis for any patterns in the data. “Winning” or “losing” had no meaning. But the subjects demanded to continue anyway. They continued believing that they could find patterns even after I told them that the numbers were drawn at random and were not real stock prices.

+

The illusions in our thinking about sports have important counterparts in our thinking about such real-world phenomena as the climate, the stock market, and trends in the prices of raw materials such as mercury, copper and wheat. And private and public decisions made on the basis of faulty understanding of these real situations, caused by illusory thinking on the order of belief in slumps and hot hands, are often costly and sometimes disastrous.

+

An example of the belief that there are patterns when there are none: Systems for finding patterns in the stock market are peddled that have about the same reliability as advice from a racetrack tout — and millions buy them.

+

One of the scientific strands leading into research on variability was the body of studies that considers the behavior of stock prices as a “random walk.” That body of work asserts that a stock broker or chartist who claims to be able to find patterns in past price movements of stocks that will predict future movements should be listened to with about the same credulity as a racetrack tout or an astrologer. A second strand was the work in psychology in the last decade or two which has recognized that people’s estimates of uncertain events are systematically biased in a variety of interesting and knowable ways.

+

The U.S. government has made — and continues to make — blunders costing the public scores of billions of dollars, using slump-type fallacious reasoning about resources and energy. Forecasts are issued and policies are adopted based on the belief that a short-term increase in price constitutes a long-term trend. But the “experts” employed by the government to make such forecasts do no better on average than do private forecasters, and often the system of forecasting that they use is much more misleading than would be a random-number generating machine of the sort used in the baseball slump experiments.

+

Please look at the data in Figure 14.1 for the height of the Nile River over about half a century. Is it not natural to think that those data show a decline in the height of the river? One can imagine that if our modern communication technology existed then, the Cairo newspapers would have been calling for research to be done on the fall of the Nile, and the television anchors would have been warning the people to change their ways and use less water.

+
+
+
+
+

+
Figure 14.1: Height of the Nile River Over Half of a Century
+
+
+
+
+

Let’s look at Figure 14.2, which represents the data over an even longer period. What now would you say about the height of the Nile? Clearly the “threat” was non-existent, and only appeared threatening because the time span represented by the data was too short. The point of this display is that looking at too short a segment of experience frequently leads us into error. And “too short” may be as long as a century.

+
+
+

+
Figure 14.2: Variations in the height of Nile Flood in centimeters. The sloping line indicates the secular raising of the bed of the Nile by deposition of silt. From Brooks (1928)
+
+
+

Another example is the price of mercury, which is representative of all metals. Figure 14.3 shows a forecast made in 1976 by natural-scientist Earl Cook (1976). He combined a then-recent upturn in prices with the notion that there is a finite amount of mercury on the earth’s surface, plus the mathematical charm of plotting a second-degree polynomial with the computer. Figure 14.4 and Figure 14.5 show how the forecast was almost immediately falsified, and the price continued its long-run decline.

+
+
+

+
Figure 14.3: The Price of Mercury from Cook (1976)
+
+
+
+
+
+
+

+
Figure 14.4: Mercury Reserves, 1950-1990
+
+
+
+
+
+
+
+
+

+
Figure 14.5: Mercury Price Indexes, 1950-1990
+
+
+
+
+

Lack of sound statistical intuition about variability can lead to manipulation of the public by unscrupulous persons. Commodity fund sellers use a device of this sort to make their results look good (The Washington Post, Sep 28, 1987, p. 71). Some individual commodity traders inevitably do well in their private trading, just by chance. A firm then hires one of them, builds a public fund around him, and claims the private record for the fund’s own history. But of course the private record has no predictive power, any more than does the record of someone who happened to get ten heads in a row flipping coins.
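The coin analogy is easy to check with a short sketch of our own: the chance that one particular person throws ten heads in a row is tiny, but in a group of a thousand people, someone will probably manage it.

import numpy as np

rnd = np.random.default_rng()

# Chance that one person flips ten heads in a row.
print('One person:', 0.5 ** 10)

# Simulate 1,000 people each flipping ten coins; count how many
# get all heads purely by chance.
flips = rnd.choice(['H', 'T'], size=(1000, 10))
all_heads = np.sum(np.all(flips == 'H', axis=1))
print('Out of 1,000 people,', all_heads, 'got ten heads in a row.')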

+

How can we avoid falling into such traps? It is best to look at the longest possible sweep of history. That is, use the largest possible sample of observations to avoid sampling error. For copper we have data going back to the 18th century B.C. In Babylonia, over a period of 1,000 years, the price of iron fell to one fifth of what it was under Hammurabi (almost 4,000 years ago), and copper then cost about a thousand times its current price in the U.S., relative to wages. So the inevitable short-run increases in price should be considered in this long-run context to avoid drawing unsound conclusions due to small-sample variability.

+

Proof that it is sound judgment to rely on the longest possible series is given by the accuracy of predictions one would have made in the past. In the context of copper, mercury, and other raw materials, we can refer to a sample of years in the past, and from those years imagine ourselves forecasting the following year. If you had bet every time that prices would go down in consonance with the long-run trend, you would have been a big winner on average.

+
+
+

14.2 Regression to the mean

+
+

UP, DOWN “The Dodgers demoted last year’s NL rookie of the year, OF Todd Hollandsworth (.237, 1 HR, 18 RBI) to AAA Albuquerque...” (Item in Washington Post, 6/14/97)

+
+

It is a well-known fact that the Rookie of the Year in a sport such as baseball seldom has as outstanding a season in their sophomore year. Why is this so? Let’s use the knowledge we have acquired of probability and simulation to explain this phenomenon.

+

The matter at hand might be thought of as a problem in pure probability — if one simply asks about the chance that a given player (the Rookie of the Year) will repeat. Or it could be considered a problem in statistics, as discussed in coming chapters. Let’s consider the matter in the context of baseball.

+

Imagine 10 mechanical “ball players,” each a machine that has three white balls (hits) and seven black balls (outs). Every time the machine goes to bat, you take a ball out of the machine, look to see if it is a hit or an out, and put it back. For each “ball player” you do this 100 times. One of them is going to do better than the others, and that one becomes the Rookie of the Year. See Table 14.2, and the short code sketch after the table.

+
Table 14.2: Rookie Seasons (100 at bats)
# of Hits    Batting Average
32    .320
34    .340
33    .330
30    .300
35    .350
33    .330
30    .300
31    .310
28    .280
25    .250
+
+
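Here is a minimal sketch — our own, not part of the original text — of how a table like Table 14.2 can arise. It assumes ten identical machines, each with a true .300 average, and 100 simulated at-bats per machine.

import numpy as np

rnd = np.random.default_rng()

# Each "ball player" is a bucket with 3 white balls (hits) and 7 black
# balls (outs), so every machine has the same true .300 average.
bucket = np.repeat(['hit', 'out'], [3, 7])

for player in np.arange(10):
    # 100 at-bats, replacing the ball after each draw.
    at_bats = rnd.choice(bucket, size=100)
    batting_average = np.sum(at_bats == 'hit') / 100
    print('Player', player, 'batting average:', batting_average)

Every machine has exactly the same chance of a hit, yet one of the ten will come out on top, purely by chance — and that machine is our Rookie of the Year.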

Would you now expect the player who happened to be the best among the top ten in the first year to again be the best among the top ten in the next year? The sports writers do. But of course this seldom happens. The Rookie of the Year in major-league baseball seldom has as outstanding a season in their sophomore year as in their rookie year. You can expect them to do better than the average of all sophomores, but not necessarily better than all of the rest of the group of talented players who are now sophomores. (Please notice that we are not saying that there is no long-run difference among the top ten rookies. But suppose there is. Table 14.3 shows the season’s performance for ten batters with differing “true” averages.)

+
Table 14.3: Simulated season’s performance for 10 batters of differing “true” averages
“True”    Rookie
.270    .340
.270    .240
.280    .330
.280    .300
.300    .280
.300    .420
.320    .340
.320    .350
.330    .260
.330    .330
+
+

We see from Table 14.3 that we have ten batters whose “true” batting averages range from .270 to .330. Their rookie-year performance (400 at-bats), simulated on the basis of their “true” average, is on the right. Which one is the Rookie of the Year? It’s #6, who hit .420 during the rookie season. Will they do as well next year? Not likely — their “true” average is only .300.
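To see where a table like Table 14.3 might come from, and what usually happens the following year, here is a sketch of our own (the notebook below gives a simpler, single-player version you can experiment with):

import numpy as np

rnd = np.random.default_rng()

# Ten batters with differing "true" averages, as in Table 14.3.
true_averages = np.array([.270, .270, .280, .280, .300, .300, .320, .320, .330, .330])

rookie = np.zeros(10)
sophomore = np.zeros(10)
for i in np.arange(10):
    p = true_averages[i]
    # Simulate 400 at-bats for the rookie year, and 400 more for the next year.
    rookie[i] = np.sum(rnd.choice([1, 0], p=[p, 1 - p], size=400)) / 400
    sophomore[i] = np.sum(rnd.choice([1, 0], p=[p, 1 - p], size=400)) / 400

best = np.argmax(rookie)
print('Rookie of the year is batter', best)
print('True average:', true_averages[best])
print('Rookie season average:', rookie[best])
print('Sophomore season average:', sophomore[best])

Run the sketch a few times; the Rookie of the Year’s sophomore average is usually closer to their “true” average than their rookie average was.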

+
+

Start of sampling_variability notebook

+ + +

Try generating some rookie “seasons” yourself with the following commands, varying the batter’s “true” performance by changing the value of p_hit (the probability of a hit).

+
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+
+
# Simulate a rookie season of 400 at-bats.
+
+# You might try changing the value below and rerunning.
+# This is the true (long-run) probability of a hit for this batter.
+p_hit = 0.4
+print('True average is:', p_hit)
+
+
True average is: 0.4
+
+
at_bats = rnd.choice(['Hit', 'Out'], p=[p_hit, 1 - p_hit], size=400)
+simulated_average = np.sum(at_bats == 'Hit') / 400
+# Show the result
+print('Simulated average is:', simulated_average)
+
+
Simulated average is: 0.4075
+
+
+

Simulate a set of 10 or 20 such rookie seasons, and look at the one who did best. How did their rookie season compare to their “true” average?

+

End of sampling_variability notebook

+
+

The explanation is the presence of variability. And lack of recognition of the role of variability is at the heart of much fallacious reasoning. Being alert to the role of variability is crucial.

+

Or consider the example of having a superb meal at a restaurant — the best meal you have ever eaten. That fantastic meal is almost surely the combination of the restaurant being better than average, plus a lucky night for the chef and the dish you ordered. The next time you return you can expect a meal better than average, because the restaurant is better than average in the long run. But the meal probably will be less good than the superb one you had the first time, because there is no reason to believe that the chef will get so lucky again and that the same sort of variability will happen this time.

+

These examples illustrate the concept of “regression to the mean” — a confusingly-titled and very subtle effect caused by variability in results among successive samples drawn from the same population. This phenomenon was given its title more than a century ago by Francis Galton, one of the great founders of modern statistics, when at first he thought that the height of the human species was becoming more uniform, after he noticed that the children of the tallest and shortest parents usually are closer to the average of all people than their parents are. But later he discovered his fallacy — that the variability in heights of children of quite short and quite tall parents also causes some people to be even more exceptionally tall or short than their parents. So the spread in heights among humans remains much the same from generation to generation; there is no “regression to the mean.” The heart of the matter is that any exceptional observed case in a group is likely to be the result of two forces — a) an underlying propensity to differ from the average in one direction or the other, plus b) some chance sampling variability that happens (in the observed case) to push even further in the exceptional direction.

+

A similar phenomenon arises in direct-mail marketing. When a firm tests many small samples of many lists of names and then focuses its mass mailings on the lists that performed best in the tests, the full list “rollouts” usually do not perform as well as the samples did in the initial tests. It took many years before mail-order experts (see especially Burnett 1988) finally understood that regression to the mean inevitably causes an important part of the dropoff from sample to rollout observed in the set of lists that give the very best results in a multi-list test.

+

The larger the test samples, the less the dropoff, of course, because larger samples reduce variability in results. But larger samples risk more money. So the test-sample-size decision for the marketer inevitably is a trade-off between accuracy and cost.

+

And one last amusing example: After I (JLS) lectured to the class on this material, the student who had gotten the best grade on the first mid-term exam came up after class and said: “Does that mean that on the second mid-term I should expect to do well but not the best in the class?” And that’s exactly what happened: He had the second-best score in the class on the next midterm.

+

A related problem arises when one conducts multiple tests, as when testing thousands of drugs for therapeutic value. Some of the drugs may appear to have a therapeutic effect just by chance. We will discuss this problem later when discussing hypothesis testing.
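As a foretaste, here is an illustrative sketch of our own, with made-up numbers: a thousand drugs that in truth do nothing, each tested on 100 patients whose chance of recovery is 50% with or without treatment. Some “treatments” will look impressive anyway.

import numpy as np

rnd = np.random.default_rng()

# 1,000 useless drugs, each tested on 100 patients; every patient
# has a 50% chance of recovery regardless of the drug.
recoveries = rnd.choice([1, 0], size=(1000, 100))
recovery_rates = np.sum(recoveries, axis=1) / 100

# How many useless drugs nevertheless show 60% or more recoveries?
print(np.sum(recovery_rates >= 0.6))

Typically somewhere around 25 to 30 of the thousand useless drugs clear that bar by chance alone.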

+
+
+

14.3 Summary and conclusion

+

The heart of statistics is clear thinking. One of the key elements in being a clear thinker is to have a sound gut understanding of statistical processes and variability. This chapter amplifies this point.

+

A great benefit to using simulations rather than formulas to deal with problems in probability and statistics is that the presence and importance of variability becomes manifest in the course of the simulation work.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/search.json b/python-book/search.json new file mode 100644 index 00000000..6d192749 --- /dev/null +++ b/python-book/search.json @@ -0,0 +1,1633 @@ +[ + { + "objectID": "index.html#python-edition", + "href": "index.html#python-edition", + "title": "Resampling statistics", + "section": "Python edition", + "text": "Python edition" + }, + { + "objectID": "preface_third.html#what-simon-saw", + "href": "preface_third.html#what-simon-saw", + "title": "Preface to the third edition", + "section": "What Simon saw", + "text": "What Simon saw\nSimon gives the early history of this book in the original preface. He starts with the following observation:\n\nIn the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly…\n\nSimon then applied his striking capacity for independent thought to the problem — and came to two essential conclusions.\nThe first was that introductory courses in statistics use far too much mathematics. Most students cannot follow along and quickly get lost, reducing the subject to — as Simon puts it — “mumbo-jumbo”.\nOn its own, this was not a new realization. Simon quotes a classic textbook by Wallis and Roberts (1956), in which they compare teaching statistics through mathematics to teaching in a foreign language. More recently, other teachers of statistics have come to the same conclusion. Cobb (2007) argues that it is practically impossible to teach students the level of mathematics they would need to understand standard introductory courses. As you will see below, Cobb also agrees with Simon about the solution.\nSimon’s great contribution was to see how we can replace the mathematics, to better reveal the true heart of statistical thinking. His starting point appears in the original preface: “Beneath the logic of a statistical inference there necessarily lies a physical process”. Drawing conclusions from noisy data means building a model of the noisy world, and seeing how that model behaves. That model can be physical, where we generate the noisiness of the world using physical devices like dice and spinners and coin-tosses. In fact, Simon used exactly these kinds of devices in his first experiments in teaching (Simon 1969). He then saw that it was much more efficient to build these models with simple computer code, and the result was the first and second editions of this book, with their associated software, the Resampling Stats language.\nSimon’s second conclusion follows from the first. Now that Simon had stripped away the unnecessary barrier of mathematics, he had got to the heart of what is interesting and difficult in statistics. Drawing conclusions from noisy data involves a lot of hard, clear thinking. We need to be honest with our students about that; statistics is hard, not because it is obscure (it need not be), but because it deals with difficult problems. It is exactly that hard logical thinking that can make statistics so interesting to our best students; “statistics” is just reasoning about the world when the world is noisy. Simon writes eloquently about this in a section in the introduction — “Why is statistics such a difficult subject” (Section 1.6).\nWe needed both of Simon’s conclusions to get anywhere. We cannot hope to teach two hard subjects at the same time; mathematics, and statistical reasoning. 
That is what Simon has done: he replaced the mathematics with something that is much easier to reason about. Then he can concentrate on the real, interesting problem — the hard thinking about data, and the world it comes from. To quote from a later section in this book (Section 2.4): “Once we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics.” Instead of asking “where would I look up the right recipe for this”, you find yourself asking “what kind of world do these data come from?” and “how can I reason about that world?”. Like Simon, we have found that this way of thinking and teaching is almost magically liberating and satisfying. We hope and believe that you will find the same." + }, + { + "objectID": "preface_third.html#sec-resampling-data-science", + "href": "preface_third.html#sec-resampling-data-science", + "title": "Preface to the third edition", + "section": "Resampling and data science", + "text": "Resampling and data science\nThe ideas in Simon’s book, first published in 1992, have found themselves at the center of the modern movement of data science.\nIn the section above, we described Simon’s path in discovering physical models as a way of teaching and explaining statistical tests. He saw that code was the right way to express these physical models, and therefore, to build and explain statistical tests.\nMeanwhile, the wider world of data analysis has been coming to the same conclusion, but from the opposite direction. Simon saw the power of resampling for explanation, and then that code was the right way to express these explanations. The data science movement discovered first that code was essential for data analysis, and then that code was the right way to explain statistics.\nThe modern use of the phrase “data science” comes from the technology industry. From around 2007, companies such as LinkedIn and Facebook began to notice that there was a new type of data analyst that was much more effective than their predecessors. They came to call these analysts “data scientists”, because they had learned how to deal with large and difficult data while working in scientific fields such as ecology, biology, or astrophysics. They had done this by learning to use code:\n\nData scientists’ most basic, universal skill is the ability to write code. (Davenport and Patil 2012)\n\nFurther reflection (Donoho 2017) suggested that something deep was going on: that data science was the expression of a radical change in the way we analyze data, in academia, and in industry. At the center of this change — was code. Code is the language that allows us to tell the computer what it should do with data; it is the native language of data analysis.\nThis insight transforms the way with think of code. In the past, we have thought of code as a separate, specialized skill, that some of us learn. We take coding courses — we “learn to code”. If code is the fundamental language for analyzing data, then we need code to express what data analysis does, and explain how it works. Here we “code to learn”. Code is not an aim in itself, but a language we can use to express the simple ideas behind data analysis and statistics.\nThus the data science movement started from code as the foundation for data analysis, to using code to explain statistics. It ends at the same place as this book, from the other side of the problem.\nThe growth of data science is the inevitable result of taking computing seriously in education and research. 
We have already cited Cobb (2007) on the impossibility of teaching the mathematics students would need in order to understand traditional statistics courses. He goes on to explain why there is so much mathematics, and why we should remove it. In the age before ubiquitous computing, we needed mathematics to simplify calculations that we could not practically do by hand. Now we have great computing power in our phones and laptops, we do not have this constraint, and we can use simpler resampling methods to solve the same problems. As Simon shows, these are much easier to describe and understand. Data science, and teaching with resampling, are the obvious consequences of ubiquitous computing." + }, + { + "objectID": "preface_third.html#what-we-changed", + "href": "preface_third.html#what-we-changed", + "title": "Preface to the third edition", + "section": "What we changed", + "text": "What we changed\nThis diversion, through data science, leads us to the changes that we have made for the new edition. The previous edition of this book is still excellent, and you can read it free, online, at http://www.resample.com/intro-text-online. It continues to be ahead of its time, and ahead of our time. Its one major drawback is that Simon bases much of the book around code written in a special language that he developed with Dan Weidenfeld, called Resampling Stats. Resampling Stats is well designed for expressing the steps in simulating worlds that include elements of randomness, and it was a useful contribution at the time that it was written. Since then, and particularly in the last decade, there have been many improvements in more powerful and general languages, such as Python and R. These languages are particularly suitable for beginners in data analysis, and they come with a huge range of tools and libraries for a many tasks in data analysis, including the kinds of models and simulations you will see in this book. We have updated the book to use Python, instead of Resampling Stats. If you already know Python or a similar language, such as R, you will have a big head start in reading this book, but even if you do not, we have written the book so it will be possible to pick up the Python code that you need to understand and build the kind of models that Simon uses. The advantage to us, your authors, is that we can use the very powerful tools associated with Python to make it easier to run and explain the code. The advantage to you, our readers, is that you can also learn these tools, and the Python language. They will serve you well for the rest of your career in data analysis.\n\nOur second major change is that we have added some content that Simon specifically left out. Simon knew that his approach was radical for its time, and designed his book as a commentary, correction, and addition to traditional courses in statistics. He assumes some familiarity with the older world of normal distributions, t-tests, Chi-squared tests, analysis of variance, and correlation. In the time that has passed since he wrote the book, his approach to explanation has reached the mainstream. It is now perfectly possible to teach an introductory statistics course without referring to the older statistical methods. This means that the earlier editions of this book can now serve on their own as an introduction to statistics — but, used this way, at the time we write, this will leave our readers with some gaps to fill. 
Simon’s approach will give you a deep understanding of the ideas of statistics, and resampling methods to apply them, but you will likely come across other teachers and researchers using the traditional methods. To bridge this gap, we have added new sections that explain how resampling methods relate to their corresponding traditional methods. Luckily, we find these explanations add deeper understanding to the traditional methods. Teaching resampling is the best foundation for statistics, including the traditional methods.\nLastly, we have extended Simon’s explanation of Bayesian probability and inference. This is partly because Bayesian methods have become so important in statistical inference, and partly because Simon’s approach has such obvious application in explaining how Bayesian methods work." + }, + { + "objectID": "preface_third.html#who-should-read-this-book-and-when", + "href": "preface_third.html#who-should-read-this-book-and-when", + "title": "Preface to the third edition", + "section": "Who should read this book, and when", + "text": "Who should read this book, and when\nAs you have seen in the previous sections, this book uses a radical approach to explaining statistical inference — the science of drawing conclusions from noisy data. This approach is quickly becoming the standard in teaching of data science, partly because it is so much easier to explain, and partly because of the increasing role of code in data analysis.\nOur book teaches the basics of using the Python language, basic probability, statistical inference through simulation and resampling, confidence intervals, and basic Bayesian reasoning, all through the use of model building in simple code.\nStatistical inference is an important part of research methods for many subjects; so much so, that research methods courses may even be called “statistics” courses, or include “statistics” components. This book covers the basic ideas behind statistical inference, and how you can apply these ideas to draw practical statistical conclusions. We recommend it to you as an introduction to statistics. If you are a teacher, we suggest you consider this book as a primary text for first statistics courses. We hope you will find, as we have, that this method of explaining through building is much more productive and satisfying than the traditional method of trying to convey some “intuitive” understanding of fairly complicated mathematics. We explain the relationship of these resampling techniques to traditional methods. Even if you do need to teach your students t-tests, and analysis of variance, we hope you will share our experience that this way of explaining is much more compelling than the traditional approach.\nSimon wrote this book for students and teachers who were interested to discover a radical new method of explanation in statistics and probability. The book will still work well for that purpose. If you have done a statistics course, but you kept feeling that you did not really understand it, or there was something fundamental missing that you could not put your finger on — good for you! — then, please, read this book. There is a good chance that it will give you deeper understanding, and reveal the logic behind the often arcane formulations of traditional statistics.\nOur book is only part of a data science course. There are several important aspects to data science. 
A data science course needs all the elements we list above, but it should also cover the process of reading, cleaning, and reorganizing data using Python, or another language, such as\nR\nIt may also go into more detail about the experimental design, and cover prediction techniques, such as classification with machine learning, and data exploration with plots, tables, and summary measures. We do not cover those here. If you are teaching a full data science course, we suggest that you use this book as your first text, as an introduction to code, and statistical inference, and then add some of the many excellent resources on these other aspects of data science that assume some knowledge of statistics and programming." + }, + { + "objectID": "preface_third.html#welcome-to-resampling", + "href": "preface_third.html#welcome-to-resampling", + "title": "Preface to the third edition", + "section": "Welcome to resampling", + "text": "Welcome to resampling\nWe hope you will agree that Simon’s insights for understanding and explaining are — really extraordinary. We are catching up slowly. If you are like us, your humble authors, you will find that Simon has succeeded in explaining what statistics is, and exactly how it works, to anyone with the patience to work through the examples, and think hard about the problems. If you have that patience, the rewards are great. Not only will you understand statistics down to its deepest foundations, but you will be able to think of your own tests, for your own problems, and have the tools to implement them yourself.\nMatthew Brett\nStéfan van der Walt\nIan Nimmo-Smith\n\n\n\n\nCobb, George W. 2007. “The Introductory Statistics Course: A Ptolemaic Curriculum?” Technology Innovations in Statistics Education 1 (1). https://escholarship.org/uc/item/6hb3k0nz.\n\n\nDavenport, Thomas H, and DJ Patil. 2012. “Data Scientist: The Sexiest Job of the 21st Century.” Harvard Business Review 90 (10): 70–76. https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century.\n\n\nDonoho, David. 2017. “50 Years of Data Science.” Journal of Computational and Graphical Statistics 26 (4): 745–66. http://courses.csail.mit.edu/18.337/2015/docs/50YearsDataScience.pdf.\n\n\nSimon, Julian Lincoln. 1969. Basic Research Methods in Social Science. 1st ed. New York: Random House.\n\n\n———. 1992. Resampling: The New Statistics. 1st ed. Arlington, VA: Resampling Stats Inc.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New Approach. New York: The Free Press." + }, + { + "objectID": "preface_second.html#sec-brief-history", + "href": "preface_second.html#sec-brief-history", + "title": "Preface to the second edition", + "section": "Brief history of the resampling method", + "text": "Brief history of the resampling method\nThis book describes a revolutionary — but now fully accepted — approach to probability and statistics. Monte Carlo resampling simulation takes the mumbo-jumbo out of statistics and enables even beginning students to understand completely everything that is done.\nBefore we go further, let’s make the discussion more concrete with an example. Ask a class: What are the chances that three of a family’s first four children will be girls? After various entertaining class suggestions about procreating four babies, or surveying families with four children, someone in the group always suggests flipping a coin. 
This leads to valuable student discussion about whether the probability of a girl is exactly half (there are about 105 males born for each 100 females), whether .5 is a satisfactory approximation, whether four coins flipped once give the same answer as one coin flipped four times, and so on. Soon the class decides to take actual samples of coin flips. And students see that this method quickly arrives at estimates that are accurate enough for most purposes. Discussion of what is “accurate enough” also comes up, and that discussion is valuable, too.\nThe Monte Carlo method itself is not new. Near the end of World War II, a group of physicists at the Rand Corp. began to use random-number simulations to study processes too complex to handle with formulas. The name “Monte Carlo” came from the analogy to the gambling houses on the French Riviera. The application of Monte Carlo methods in teaching statistics also is not new. Simulations have often been used to illustrate basic concepts. What is new and radical is using Monte Carlo methods routinely as problem-solving tools for everyday problems in probability and statistics.\nFrom here on, the related term resampling will be used throughout the book. Resampling refers to the use of the observed data or of a data generating mechanism (such as a die) to produce new hypothetical samples, the results of which can then be analyzed. The term computer-intensive methods also is frequently used to refer to techniques such as these.\nThe history of resampling is as follows: In the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly in their social science research. I sympathized with them. Even many experts are unable to understand intuitively the formal mathematical approach to the subject. Clearly, we need a method free of the formulas that bewilder almost everyone.\nThe solution is as follows: Beneath the logic of a statistical inference there necessarily lies a physical process. The resampling methods described in this book allow us to work directly with the underlying physical model by simulating it, rather than describing it with formulae. This general insight is also the heart of the specific technique Bradley Efron felicitously labeled ‘the bootstrap’ (1979), a device I introduced in 1969 that is now the most commonly used, and best known, resampling method.\nThe resampling approach was first tried with graduate students in 1966, and it worked exceedingly well. Next, under the auspices of the father of the “new math,” Max Beberman, I “taught” the method to a class of high school seniors in 1967. The word “taught” is in quotation marks because the pedagogical essence of the resampling approach is that the students discover the method for themselves with a minimum of explicit instruction from the teacher.\nThe first classes were a success and the results were published in 1969 (J. L. Simon and Holmes 1969). Three PhD experiments were then conducted under Kenneth Travers’ supervision, and they all showed overwhelming superiority for the resampling method (J. L. Simon, Atkinson, and Shevokas 1976). Subsequent research has confirmed this success.\nThe method was first presented at some length in the 1969 edition of my book Basic Research Methods in Social Science (J. L. 
Simon 1969) (third edition with Paul Burstein -Simon Julian Lincoln and Burstein (1985)).\nFor some years, the resampling method failed to ignite interest among statisticians. While many factors (including the accumulated intellectual and emotional investment in existing methods) impede the adoption of any new technique, the lack of readily available computing power and tools was an obstacle. (The advent of the personal computer in the 1980s changed that, of course.)\nThen in the late 1970s, Efron began to publish formal analyses of the bootstrap — an important resampling application (Efron 1979). Interest among statisticians has exploded since then, in conjunction with the availability of easy, fast, and inexpensive computer simulations. The bootstrap has been the most widely used, but across-the-board application of computer intensive methods now seems at hand. As Noreen (1989) noted, “there is a computer-intensive alternative to just about every conventional parametric and non-parametric test.” And the bootstrap method has now been hailed by an official American Statistical Association volume as the only “great breakthrough” in statistics since 1970 (Kotz and Johnson 1992).\nIt seems appropriate now to offer the resampling method as the technique of choice for beginning students as well as for the advanced practitioners who have been exploring and applying the method.\nThough the term “computer-intensive methods” is nowadays used to describe the techniques elaborated here, this book can be read either with or without the accompanying use of the computer. However, as a practical matter, users of these methods are unlikely to be content with manual simulations if a quick and simple computer-program alternative is available.\nThe ultimate test of the resampling method is how well you, the reader, learn it and like it. But knowing about the experiences of others may help beginners as well as experienced statisticians approach the scary subject of statistics with a good attitude. Students as early as junior high school, taught by a variety of instructors and in other languages as well as English, have — in a matter of 6 or 12 short hours — learned how to handle problems that students taught conventionally do not learn until advanced university courses. And several controlled experimental studies show that, on average, students who learn this method are more likely to arrive at correct solutions than are students who are taught conventional methods.\nBest of all, the experiments comparing the resampling method against conventional methods show that students enjoy learning statistics and probability this way, and they don’t suffer statistics panic. This experience contrasts sharply with the reactions of students learning by conventional methods. (This is true even when the same teachers teach both methods as part of an experiment.)\nA public offer: The intellectual history of probability and statistics began with gambling games and betting. Therefore, perhaps a lighthearted but very serious offer would not seem inappropriate here: I hereby publicly offer to stake $5,000 in a contest against any teacher of conventional statistics, with the winner to be decided by whose students get the larger number of simple and complex numerical problems correct, when teaching similar groups of students for a limited number of class hours — say, six or ten. And if I should win, as I am confident that I will, I will contribute the winnings to the effort to promulgate this teaching method. 
(Here it should be noted that I am far from being the world’s most skillful or charming teacher. It is the subject matter that does the job, not the teacher’s excellence.) This offer has been in print for many years now, but no one has accepted it.\nThe early chapters of the book contain considerable discussion of the resampling method, and of ways to teach it. This material is intended mainly for the instructor; because the method is new and revolutionary, many instructors appreciate this guidance. But this didactic material is also intended to help the student get actively involved in the learning process rather than just sitting like a baby bird with its beak open waiting for the mother bird to drop morsels into its mouth. You may skip this didactic material, of course, and I hope that it does not get in your way. But all things considered, I decided it was better to include this material early on rather than to put it in the back or in a separate publication where it might be overlooked." + }, + { + "objectID": "preface_second.html#brief-history-of-statistics", + "href": "preface_second.html#brief-history-of-statistics", + "title": "Preface to the second edition", + "section": "Brief history of statistics", + "text": "Brief history of statistics\nIn ancient times, mathematics developed from the needs of governments and rich men to number armies, flocks, and especially to count the taxpayers and their possessions. Up until the beginning of the 20th century, the term statistic meant the number of something — soldiers, births, taxes, or what-have-you. In many cases, the term statistic still means the number of something; the most important statistics for the United States are in the Statistical Abstract of the United States . These numbers are now known as descriptive statistics. This book will not deal at all with the making or interpretation of descriptive statistics, because the topic is handled very well in most conventional statistics texts.\nAnother stream of thought entered the field of probability and statistics in the 17th century by way of gambling in France. Throughout history people had learned about the odds in gambling games by repeated plays of the game. But in the year 1654, the French nobleman Chevalier de Mere asked the great mathematician and philosopher Pascal to help him develop correct odds for some gambling games. Pascal, the famous Fermat, and others went on to develop modern probability theory.\nLater these two streams of thought came together. Researchers wanted to know how accurate their descriptive statistics were — not only the descriptive statistics originating from sample surveys, but also the numbers arising from experiments. Statisticians began to apply the theory of probability to the accuracy of the data arising from sample surveys and experiments, and that became the theory of inferential statistics .\nHere we find a guidepost: probability theory and statistics are relevant whenever there is uncertainty about events occurring in the world, or in the numbers describing those events.\nLater, probability theory was also applied to another context in which there is uncertainty — decision-making situations. Descriptive statistics like those gathered by insurance companies — for example, the number of people per thousand in each age bracket who die in a five-year period — have been used for a long time in making decisions such as how much to charge for insurance policies. 
But in the modern probabilistic theory of decision-making in business, politics and war, the emphasis is different; in such situations the emphasis is on methods of combining estimates of probabilities that depend upon each other in complicated ways in order to arrive at the best decision. This is a return to the gambling origins of probability and statistics. In contrast, in standard insurance situations (not including war insurance or insurance on a dancer’s legs) the probabilities can be estimated with good precision without complex calculation, on the basis of a great many observations, and the main statistical task is gathering the information. In business and political decision-making situations, however, one often works with probabilities based on very limited information — often little better than guesses. There the task is how best to combine these guesses about various probabilities into an overall probability estimate.\nEstimating probabilities with conventional mathematical methods is often so complex that the process scares many people. And properly so, because its difficulty leads to errors. The statistics profession worries greatly about the widespread use of conventional tests whose foundations are poorly understood. The wide availability of statistical computer packages that can easily perform these tests with a single command, regardless of whether the user understands what is going on or whether the test is appropriate, has exacerbated this problem. This led John Tukey to turn the field toward descriptive statistics with his techniques of “exploratory data analysis” (Tukey 1977). These descriptive methods are well described in many texts.\nProbabilistic analysis also is crucial, however. Judgments about whether the government should allow a new medicine on the market, or whether an operator should adjust a screw machine, require more than eyeball inspection of data to assess the chance variability. But until now the teaching of probabilistic statistics, with its abstruse structure of mathematical formulas, mysterious tables of calculations, and restrictive assumptions concerning data distributions — all of which separate the student from the actual data or physical process under consideration — have been an insurmountable obstacle to intuitive understanding.\nNow, however, the resampling method enables researchers and decision-makers in all walks of life to obtain the benefits of statistics and predictability without the shortcomings of conventional methods, free of mathematical formulas and restrictive assumptions. Resampling’s repeated experimental trials on the computer enable the data (or a data-generating mechanism representing a hypothesis) to express their own properties, without difficult and misleading assumptions.\nSo — good luck. I hope that you enjoy the book and profit from it.\nJulian Lincoln Simon\n1997\n\n\n\n\nEfron, Bradley. 1979. “Bootstrap Methods; Another Look at the Jackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf.\n\n\nKotz, Samuel, and Norman Lloyd Johnson. 1992. Breakthroughs in Statistics. New York: Springer-Verlag.\n\n\nNoreen, Eric W. 1989. Computer-Intensive Methods for Testing Hypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore.\n\n\nSimon, Julian Lincoln. 1969. Basic Research Methods in Social Science. 1st ed. New York: Random House.\n\n\nSimon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976. 
“Probability and Statistics: Experimental Results of a Radically Different Teaching Method.” The American Mathematical Monthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research Methods in Social Science. 3rd ed. New York: Random House.\n\n\nSimon, Julian Lincoln, and Allen Holmes. 1969. “A New Way to Teach Probability Statistics.” The Mathematics Teacher 62 (4): 283–88.\n\n\nTukey, John W. 1977. Exploratory Data Analysis. Reading, MA, USA: Addison-Wesley." + }, + { + "objectID": "intro.html#uses-of-probability-and-statistics", + "href": "intro.html#uses-of-probability-and-statistics", + "title": "1  Introduction", + "section": "1.1 Uses of Probability and Statistics", + "text": "1.1 Uses of Probability and Statistics\nThis chapter introduces you to probability and statistics. First come examples of the kinds of practical problems that this knowledge can solve for us. One reason that the term “statistic” often scares and confuses people is that the term has several sorts of meanings. We discuss the meanings of “statistics” in the section “Types of statistics”. Then comes a discussion on the relationship of probabilities to decisions. Following this we talk about the limitations of probability and statistics. And last is a discussion of why statistics can be such a difficult subject. Most important, this chapter describes the types of problems the book will tackle.\nAt the foundation of sound decision-making lies the ability to make accurate estimates of the probabilities of future events. Probabilistic problems confront everyone — a company owner considering whether to expand their business, to the scientist testing a vaccine, to the individual deciding whether to buy insurance." + }, + { + "objectID": "intro.html#sec-what-problems", + "href": "intro.html#sec-what-problems", + "title": "1  Introduction", + "section": "1.2 What kinds of problems shall we solve?", + "text": "1.2 What kinds of problems shall we solve?\nThese are some examples of the kinds of problems that we can handle with the methods described in this book:\n\nYou are a doctor trying to develop a treatment for COVID19. Currently you are working on a medicine labeled AntiAnyVir. You have data from patients to whom medicine AntiAnyVir was given. You want to judge on the basis of those results whether AntiAnyVir really improves survival or whether it is no better than a sugar pill.\nYou are the campaign manager for the Republicrat candidate for President of the United States. You have the results from a recent poll taken in New Hampshire. You want to know the chance that your candidate would win in New Hampshire if the election were held today.\nYou are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action.\nYou are an environmental scientist monitoring levels of phosphorus pollution in a lake. The phosphorus levels have been fluctuated around a relatively low level until recently, but they have been higher in the last few years. 
Does these recent higher levels indicate some important change or can we put them down to some chance and ordinary variation from year to year?\n\nThe core of all these problems, and of the others that we will deal with in this book, is that you want to know the “chance” or “probability” — different words for the same idea — that some event will or will not happen, or that something is true or false. To put it another way, we want to answer questions about “What is the probability that…?”, given the body of information that you have in hand.\nThe question “What is the probability that…?” is usually not the ultimate question that interests us at a given moment.\nEventually, a person wants to use the estimated probability to help make a decision concerning some action one might take. These are the kinds of decisions, related to the questions about probability stated above, that ultimately we would like to make:\n\nShould you (the researcher) advise doctors to prescribe medicine AntiAnyVir for COVID19 patients, or, should you (the researcher) continue to study AntiAnyVir before releasing it for use? A related matter: should you and other research workers feel sufficiently encouraged by the results of medicine AntiAnyVir so that you should continue research in this general direction rather than turning to some other promising line of research? These are just two of the possible decisions that might be influenced by the answer to the question about the probability that medicine AntiAnyVir is effective in treating COVID19.\nShould you advise the Republicrat presidential candidate to go to New Hampshire to campaign? If the poll tells you conclusively that she or he will not win in New Hampshire, you might decide that it is not worthwhile investing effort to campaign there. Similarly, if the poll tells you conclusively that they surely will win in New Hampshire, you probably would not want to campaign further there. But if the poll is not conclusive in one direction or the other, you might choose to invest the effort to campaign in New Hampshire. Analysis of the chances of winning in New Hampshire based on the poll data can help you make this decision sensibly.\nShould your company buy more ambulances? Clearly the answer to this question is affected by the probability that a given number of your ambulances will be out of action on a given day. But of course this estimated probability will be only one part of the decision.\nShould we search for new causes of phosphorus pollution as a result of the recent measurements from the lake? If the causes have not changed, and the recent higher values were just the result of ordinary variation, our search will end up wasting time and money that could have been better spent elsewhere.\n\nThe kinds of questions to which we wish to find probabilistic and statistical answers may be found throughout the social, biological and physical sciences; in business; in politics; in engineering; and in most other forms of human endeavor." + }, + { + "objectID": "intro.html#sec-types-of-statistics", + "href": "intro.html#sec-types-of-statistics", + "title": "1  Introduction", + "section": "1.3 Types of statistics", + "text": "1.3 Types of statistics\nThe term statistics sometimes causes confusion and therefore needs explanation.\nStatistics can mean two related things. It can refer to a certain sort of number — of which more below. 
Or it can refer to the field of inquiry that studies these numbers.\nA statistic is a number that we can calculate from a larger collection of numbers we are interested in. For example, table Table 1.1 has some yearly measures of “soluble reactive phosphorus” (SRP) from Lough Erne — a lake in Ireland (Zhou, Gibson, and Foy 2000).\n\n\n\n\nTable 1.1: Soluble Reactive Phosphorus in Lough Erne\n\n\nYear\nSRP\n\n\n\n\n1974\n26.2\n\n\n1975\n22.8\n\n\n1976\n37.2\n\n\n1983\n54.7\n\n\n1984\n37.7\n\n\n1987\n54.3\n\n\n1989\n35.7\n\n\n1991\n72.0\n\n\n1992\n85.1\n\n\n1993\n86.7\n\n\n1994\n93.3\n\n\n1995\n107.2\n\n\n1996\n80.3\n\n\n1997\n70.7\n\n\n\n\n\n\n\n\nWe may want to summarize this set of SRP measurements. For example, we could add up all the SRP values to give the total. We could also divide the total by the number of measurements, to give the average. Or we could measure the spread of the values by finding the minimum and the maximum — see table Table 1.2). All these numbers are descriptive statistics, because they are summaries that describe the collection of SRP measurements.\n\n\n\n\nTable 1.2: Statistics for SRP levels\n\n\n\nDescriptive statistics for SRP\n\n\n\n\nTotal\n863.9\n\n\nMean\n61.7\n\n\nMinimum\n22.8\n\n\nMaximum\n107.2\n\n\n\n\n\n\n\n\nDescriptive statistics are nothing new to you; you have been using many of them all your life.\nWe can calculate other numbers that can be useful for drawing conclusions or inferences from a collection of numbers; these are inferential statistics. Inferential statistics are often probability values that give the answer to questions like “What are the chances that …”.\nFor example, imagine we suspect there was some environmental change in 1990. We see that the average SRP value before 1990 was 38.4 and the average SRP value after 1990 was 85. That gives us a difference in the average of 46.6. But, could this difference be due to chance fluctuations from year to year? Were we just unlucky in getting a few larger measurements in later years? We could use methods that you will see in this book to calculate a probability to answer that question. The probability value is an inferential statistic, because we can use it to draw an inference about the measures.\nInferential statistics use descriptive statistics as their input. Inferential statistics can be used for two purposes: to aid scientific understanding by estimating the probability that a statement is true or not, and to aid in making sound decisions by estimating which alternative among a range of possibilities is most desirable." + }, + { + "objectID": "intro.html#probabilities-and-decisions", + "href": "intro.html#probabilities-and-decisions", + "title": "1  Introduction", + "section": "1.4 Probabilities and decisions", + "text": "1.4 Probabilities and decisions\nThere are two differences between questions about probabilities and the ultimate decision problems:\n\nDecision problems always involve evaluation of the consequences — that is, taking into account the benefits and the costs of the consequences — whereas pure questions about probabilities are estimated without evaluations of the consequences.\nDecision problems often involve a complex combination of sets of probabilities and consequences, together with their evaluations. For example: In the case of the contractor’s ambulances, it is clear that there will be a monetary loss to the contractor if she makes a commitment to have 17 ambulances available for tomorrow and then cannot produce that many. 
Furthermore, the contractor must take into account the further consequence that there may be a loss of goodwill for the future if she fails to meet her obligations tomorrow — and then again there may not be any such loss; and if there is such loss of goodwill it might be a loss worth $10,000 or $20,000 or $30,000. Here the decision problem involves not only the probability that there will be fewer than 17 ambulances tomorrow but also the immediate monetary loss and the subsequent possible losses of goodwill, and the valuation of all these consequences.\n\nContinuing with the decision concerning whether to do more research on medicine AntiAnyVir: If you do decide to continue research on AntiAnyVir, (a) you may, or (b) you may not, come up with an important general treatment for viral infections within, say, the next 3 years. If you do come up with such a general treatment, of course it will have very great social benefits. Furthermore, (c) if you decide not to do further research on AntiAnyVir now, you can direct your time and that of other people to research in other directions, with some chance that the other research will produce a less-general but nevertheless useful treatment for some relatively infrequent viral infections. Those three possibilities have different social benefits. The probability that medicine AntiAnyVir really has some benefit in treating COVID19, as judged by your prior research, obviously will influence your decision on whether or not to do more research on medicine AntiAnyVir. But that judgment about the probability is only one part of the overall web of consequences and evaluations that must be taken into account when making your decision whether or not to do further research on medicine AntiAnyVir.\nWhy does this book limit itself to the specific probability questions when ultimately we are interested in decisions? A first reason is division of labor. The more general aspects of the decision-making process in the face of uncertainty are treated well in other books. This book’s special contribution is its new approach to the crucial process of estimating the chances that an event will occur.\nSecond, the specific elements of the overall decision-making process taught in this book belong to the interrelated subjects of probability theory and statistics . Though probabilistic and statistical theory ultimately is intended to be part of the general decision-making process, often only the estimation of probabilities is done systematically, and the rest of the decision-making process — for example, the decision whether or not to proceed with further research on medicine AntiAnyVir — is done in informal and unsystematic fashion. This is regrettable, but the fact that this is standard practice is an additional reason why the treatment of statistics and probability in this book is sufficiently complete.\nA third reason that this book covers only statistics and not numerical reasoning about decisions is because most college and university statistics courses and books are limited to statistics." + }, + { + "objectID": "intro.html#limitations-of-probability-and-statistics", + "href": "intro.html#limitations-of-probability-and-statistics", + "title": "1  Introduction", + "section": "1.5 Limitations of probability and statistics", + "text": "1.5 Limitations of probability and statistics\nStatistical testing is not equivalent to research, and research is not the same as statistical testing. 
Rather, statistical inference is a handmaiden of research, often but not always necessary in the research process.\nA working knowledge of the basic ideas of statistics, especially the elements of probability, is unsurpassed in its general value to everyone in a modern society. Statistics and probability help clarify one’s thinking and improve one’s capacity to deal with practical problems and to understand the world. To be efficient, a social scientist or decision-maker is almost certain to need statistics and probability.\nOn the other hand, important research and top-notch decision-making have been done by people with absolutely no formal knowledge of statistics. And a limited study of statistics sometimes befuddles students into thinking that statistical principles are guides to research design and analysis. This mistaken belief only inhibits the exercise of sound research thinking. Alfred Kinsey long ago put it this way:\n\n… no statistical treatment can put validity into generalizations which are based on data that were not reasonably accurate and complete to begin with. It is unfortunate that academic departments so often offer courses on the statistical manipulation of human material to students who have little understanding of the problems involved in securing the original data. … When training in these things replaces or at least precedes some of the college courses on the mathematical treatment of data, we shall come nearer to having a science of human behavior. (Kinsey, Pomeroy, and Martin 1948, p 35).\n\nIn much — even most — research in social and physical sciences, statistical testing is not necessary. Where there are large differences between different sorts of circumstances for example, if a new medicine cures 90 patients out of 100 and the old medicine cures only 10 patients out of 100 — we do not need refined statistical tests to tell us whether or not the new medicine really has an effect. And the best research is that which shows large differences, because it is the large effects that matter. If the researcher finds that s/he must use refined statistical tests to reveal whether there are differences, this sometimes means that the differences do not matter much.\nTo repeat, then, some or even much research — especially in the physical and biological sciences — does not need the kind of statistical manipulation that will be described in this book. But most decision problems do need the kind of probabilistic and statistical input that is described in this book.\nAnother matter: If the raw data are of poor quality, probabilistic and statistical manipulation cannot be very useful. In the example of the contractor and her ambulances, if the contractor’s estimate that a given ambulance has a one-in-ten chance of being unfit for service out-of-order on a given day is very inaccurate, then our calculation of the probability that three or more ambulances will be out of order on a given day will not be helpful, and may be misleading. To put it another way, one cannot make bread without flour, yeast, and water. And good raw data are the flour, yeast and water necessary to get an accurate estimate of a probability. The most refined statistical and probabilistic manipulations are useless if the input data are poor — the result of unrepresentative samples, uncontrolled experiments, inaccurate measurement, and the host of other ways that information gathering can go wrong. (See Simon and Burstein (1985) for a catalog of the obstacles to obtaining good data.) 
Therefore, we should constantly direct our attention to ensuring that the data upon which we base our calculations are the best it is possible to obtain." + }, + { + "objectID": "intro.html#sec-stats-difficult", + "href": "intro.html#sec-stats-difficult", + "title": "1  Introduction", + "section": "1.6 Why is Statistics Such a Difficult Subject?", + "text": "1.6 Why is Statistics Such a Difficult Subject?\nWhy is statistics such a tough subject for so many people?\n“Among mathematicians and statisticians who teach introductory statistics, there is a tendency to view students who are not skillful in mathematics as unintelligent,” say two of the authors of a popular introductory text (McCabe and McCabe 1989, p 2). As these authors imply, this view is out-and-out wrong; lack of general intelligence on the part of students is not the root of the problem.\nScan this book and you will find almost no formal mathematics. Yet nearly every student finds the subject very difficult — as difficult as anything taught at universities. The root of the difficulty is that the subject matter is extremely difficult. Let’s find out why .\nIt is easy to find out with high precision which movie is playing tonight at the local cinema; you can look it up on the web or call the cinema and ask. But consider by contrast how difficult it is to determine with accuracy:\n\nWhether we will save lives by recommending vitamin D supplements for the whole population as protection against viral infections. Some evidence suggests that low vitamin D levels predispose to more severe lung infections, and that taking supplements can help (Martineau et al. 2017). But, how certain can we be of the evidence? How safe are the supplements? Does the benefit, and the risk, differ by ethnicity?\nWhat will be the result of more than a hundred million Americans voting for president a month from now; the best attempt usually is a sample of 2000 people, selected in some fashion or another that is far from random, weeks before the election, asked questions that are by no means the same as the actual voting act, and so on;\nHow men feel about women and vice versa.\n\nThe cleverest and wisest people have pondered for thousands of years how to obtain answers to questions like these, and made little progress. Dealing with uncertainty was completely outside the scope of the ancient philosophers. It was not until two or three hundred years ago that people began to make any progress at all on these sorts of questions, and it was only about one century ago that we began to have reasonably competent procedures — simply because the problems are inherently difficult. So it is no wonder that the body of these methods is difficult.\nSo: The bad news is that the subject is extremely difficult. The good news is that you — and that means you — can understand it with hard thinking, even if you have no mathematical background beyond arithmetic and you think that you have no mathematical capability. That’s because the difficulty lies in such matters as pin-pointing the right question, but not in any difficulties of mathematical manipulation.\n\n\n\n\nKinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948. “Sexual Behavior in the Human Male.” W. B. Saunders Company. https://books.google.co.uk/books?id=pfMKrY3VvigC.\n\n\nMartineau, Adrian R, David A Jolliffe, Richard L Hooper, Lauren Greenberg, John F Aloia, Peter Bergman, Gal Dubnov-Raz, et al. 2017. 
“Vitamin D Supplementation to Prevent Acute Respiratory Tract Infections: Systematic Review and Meta-Analysis of Individual Participant Data.” Bmj 356.\n\n\nMcCabe, George P, and Linda Doyle McCabe. 1989. Instructor’s Guide with Solutions for Introduction to the Practice of Statistics. New York: W. H. Freeman.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research Methods in Social Science. 3rd ed. New York: Random House.\n\n\nZhou, Qixing, Christopher E Gibson, and Robert H Foy. 2000. “Long-Term Changes of Nitrogen and Phosphorus Loadings to a Large Lake in North-West Ireland.” Water Research 34 (3): 922–26. https://doi.org/10.1016/S0043-1354(99)00199-2." + }, + { + "objectID": "resampling_method.html#the-resampling-approach-in-action", + "href": "resampling_method.html#the-resampling-approach-in-action", + "title": "2  The resampling method", + "section": "2.1 The resampling approach in action", + "text": "2.1 The resampling approach in action\nRecall the problem from section Section 1.2 in which the contractor owns 20 ambulances:\n\nYou are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action.\n\nThe resampling approach produces the estimate as follows.\n\n2.1.1 Randomness from physical methods\nWe collect 10 coins, and mark one of them with a pen or pencil or tape as being the coin that represents “out-of-order;” the other nine coins stand for “in operation”. For any one ambulance, this set of 10 coins provides a “model” for the one-in-ten chance — a probability of .10 (10 percent) — of it being out of order on a given day. We put the coins into a little jar or bucket.\nFor ambulance #1, we draw a single coin from the bucket. This coin represents whether that ambulance is going to be broken tomorrow. After replacing the coin and shaking the bucket, we repeat the same procedure for ambulance #2, ambulance #3 and so forth. Having repeated the procedure 20 times, we now have a representation of all ambulances for a single day.\nWe can now repeat this whole process as many times as we like: each time, we come up with a representation for a different day, telling us how many ambulances will be out-of-service on that day.\nAfter collecting evidence for, say, 50 experimental days we determine the proportion of the experimental days on which three or more ambulances are out of order. That proportion is an estimate of the probability that three or more ambulances will be out of order on a given day — the answer we seek. This procedure is an example of Monte Carlo simulation, which is the heart of the resampling method of statistical estimation.\nA more direct way to answer this question would be to examine the firm’s actual records for the past 100 days or, better, 500 days (if that’s available) to determine how many days had three or more ambulances out of order. But the resampling procedure described above gives us an estimate even if we do not have such long-term information. This is realistic; it is frequently the case in the real world that we must make estimates on the basis of insufficient history about an event.\nA quicker resampling method than the coins could be obtained with 20 ten-sided dice or spinners (like those found in the popular Dungeons & Dragons games). 
For each die, we identify one of its ten sides as “out-of-order”.\nFunnily enough, standard 10-sided dice have the numbers 0 through 9 on their faces, rather than 1 through 10. Figure 2.1 shows a standard 10-sided die:\n\n\n\nFigure 2.1: 10-sided die\n\n\nWe decide, arbitrarily, that the 9 side means “out-of-order”. We could even put a little bit of paint on the 9 side to remind us. The die represents an ambulance. If we roll the die, and get this face, this indicates that the ambulance was out of order. If we get any of the other faces — 0 through 8 — this ambulance was in working order. A single throw of all 20 dice will be our experimental trial that represents a single day; we just have to count whether three or more ambulances turn up “out of order”. Figure 2.2 show the result of one trial — throwing 20 dice:\n\n\n\nFigure 2.2: 20 10-sided dice\n\n\nAs you can see, the trial in Figure 2.2 gave us a single 9, so there was only one ambulance out of order.\nIn a hundred quick throws of the 20 dice — which probably takes less than 5 minutes — we can get a fast and reasonably accurate answer to our question." + }, + { + "objectID": "resampling_method.html#sec-randomness-computer", + "href": "resampling_method.html#sec-randomness-computer", + "title": "2  The resampling method", + "section": "2.2 Randomness from your computer", + "text": "2.2 Randomness from your computer\nComputers make it easy to generate random numbers for resampling.\n\n\n\n\n\n\nWhat do we mean by random?\n\n\n\nRandom numbers are numbers where it is impossible to predict which number is coming next. If we ask the computer for a number between 0 and 9, we will get one of the numbers 0 though 9, but we cannot do any better than that in predicting which number it will give us. There is an equal (10%) chance we will get any of the numbers 0 through 9 — just as there is when we roll a fair 10-sided die. We will go into more detail about what exactly we mean by random and chance later in the book (Section 3.8).\n\n\n\nWe can use random numbers from computers to simulate our problem. For example, we can ask the computer to choose a random number between 0 and 9 to represent one ambulance. Let’s say the number 9 represents “out-of-order” and 0 through 8 “in operation”, then any one random number gives us a trial observation for a single ambulance. To get an experimental trial for a single day we look at 20 numbers and count how many of them are 9. We then look at, say, one hundred sets of 20 numbers and count the proportion of sets whose 20 numbers show three or more ambulances being “out-of-order”. Once again, that proportion estimates the probability that three or more ambulances will be out-of-order on any given day.\nSoon we will do all these steps with some Python code, but for now, consider Table Table 2.1. In each row, we placed 20 numbers, each one representing an ambulance. 
We added 25 such rows, each representing a simulation of one day.\n\n\n\n\nTable 2.1: 25 simulations of 20 ambulances, with counts \n\n\n\nA1\nA2\nA3\nA4\nA5\nA6\nA7\nA8\nA9\nA10\nA11\nA12\nA13\nA14\nA15\nA16\nA17\nA18\nA19\nA20\n\n\n\n\nDay 1\n5\n4\n4\n5\n9\n8\n2\n9\n1\n5\n8\n2\n1\n8\n2\n6\n6\n5\n0\n5\n\n\nDay 2\n2\n7\n4\n4\n6\n3\n9\n5\n2\n5\n8\n1\n2\n5\n4\n9\n0\n5\n8\n4\n\n\nDay 3\n5\n9\n1\n2\n8\n7\n5\n3\n8\n9\n2\n6\n9\n0\n7\n2\n5\n2\n2\n2\n\n\nDay 4\n2\n4\n7\n6\n0\n4\n5\n1\n3\n7\n6\n3\n2\n9\n5\n8\n0\n6\n0\n4\n\n\nDay 5\n7\n4\n8\n9\n1\n5\n1\n2\n3\n6\n4\n8\n5\n1\n7\n5\n0\n9\n8\n7\n\n\nDay 6\n7\n3\n9\n1\n7\n7\n9\n9\n6\n8\n4\n7\n7\n2\n0\n2\n4\n6\n9\n2\n\n\nDay 7\n3\n9\n5\n3\n7\n1\n3\n0\n8\n0\n0\n3\n3\n0\n0\n3\n8\n6\n4\n6\n\n\nDay 8\n0\n4\n6\n7\n9\n7\n1\n9\n8\n1\n8\n7\n0\n4\n4\n7\n0\n5\n6\n1\n\n\nDay 9\n0\n9\n0\n7\n0\n1\n6\n0\n8\n6\n0\n3\n1\n9\n8\n3\n1\n2\n7\n8\n\n\nDay 10\n8\n6\n1\n0\n8\n3\n4\n5\n8\n8\n4\n9\n1\n0\n8\n6\n9\n2\n0\n7\n\n\nDay 11\n7\n0\n0\n7\n9\n2\n3\n0\n0\n0\n5\n5\n4\n0\n1\n7\n8\n2\n0\n8\n\n\nDay 12\n3\n2\n2\n4\n6\n3\n9\n6\n8\n8\n7\n6\n6\n4\n3\n8\n7\n0\n4\n3\n\n\nDay 13\n4\n2\n6\n9\n0\n0\n8\n5\n3\n1\n5\n1\n8\n7\n6\n8\n3\n6\n3\n5\n\n\nDay 14\n3\n1\n2\n4\n3\n1\n6\n2\n9\n5\n2\n4\n0\n6\n1\n9\n0\n7\n9\n4\n\n\nDay 15\n2\n0\n1\n5\n8\n5\n8\n1\n3\n2\n2\n7\n8\n2\n2\n1\n2\n9\n2\n5\n\n\nDay 16\n9\n9\n6\n0\n6\n3\n3\n2\n6\n8\n3\n9\n0\n5\n7\n8\n8\n3\n8\n6\n\n\nDay 17\n8\n3\n0\n0\n1\n5\n3\n7\n0\n9\n6\n4\n1\n2\n5\n0\n1\n8\n7\n1\n\n\nDay 18\n7\n1\n2\n6\n4\n3\n0\n0\n7\n5\n6\n2\n9\n2\n8\n0\n3\n1\n9\n1\n\n\nDay 19\n5\n6\n5\n9\n8\n4\n3\n0\n6\n7\n4\n9\n4\n2\n0\n6\n1\n0\n4\n1\n\n\nDay 20\n0\n5\n5\n9\n9\n4\n3\n4\n1\n6\n9\n2\n4\n3\n1\n8\n6\n8\n0\n2\n\n\nDay 21\n4\n1\n0\n1\n5\n1\n6\n4\n8\n5\n2\n1\n5\n8\n6\n2\n0\n5\n2\n6\n\n\nDay 22\n8\n5\n2\n0\n3\n5\n0\n9\n0\n4\n2\n8\n1\n1\n5\n7\n1\n4\n7\n5\n\n\nDay 23\n1\n0\n8\n5\n4\n7\n5\n2\n8\n7\n2\n6\n4\n4\n3\n5\n6\n5\n5\n7\n\n\nDay 24\n9\n5\n7\n9\n6\n3\n4\n7\n7\n2\n5\n2\n0\n0\n9\n1\n9\n5\n2\n8\n\n\nDay 25\n6\n0\n9\n4\n8\n3\n4\n8\n0\n8\n8\n7\n1\n0\n7\n3\n4\n7\n5\n1\n\n\n\n\n\n\n\n\nTo know how many ambulances were “out of order” on any given day, we count number of ones in that row. 
We place the counts in the final column called “#9” (for “number of nines”):\n\n\n\n\nTable 2.2: 25 simulations of 20 ambulances, with counts \n\n\n\nA1\nA2\nA3\nA4\nA5\nA6\nA7\nA8\nA9\nA10\nA11\nA12\nA13\nA14\nA15\nA16\nA17\nA18\nA19\nA20\n#9\n\n\n\n\nDay 1\n5\n4\n4\n5\n9\n8\n2\n9\n1\n5\n8\n2\n1\n8\n2\n6\n6\n5\n0\n5\n2\n\n\nDay 2\n2\n7\n4\n4\n6\n3\n9\n5\n2\n5\n8\n1\n2\n5\n4\n9\n0\n5\n8\n4\n2\n\n\nDay 3\n5\n9\n1\n2\n8\n7\n5\n3\n8\n9\n2\n6\n9\n0\n7\n2\n5\n2\n2\n2\n3\n\n\nDay 4\n2\n4\n7\n6\n0\n4\n5\n1\n3\n7\n6\n3\n2\n9\n5\n8\n0\n6\n0\n4\n1\n\n\nDay 5\n7\n4\n8\n9\n1\n5\n1\n2\n3\n6\n4\n8\n5\n1\n7\n5\n0\n9\n8\n7\n2\n\n\nDay 6\n7\n3\n9\n1\n7\n7\n9\n9\n6\n8\n4\n7\n7\n2\n0\n2\n4\n6\n9\n2\n4\n\n\nDay 7\n3\n9\n5\n3\n7\n1\n3\n0\n8\n0\n0\n3\n3\n0\n0\n3\n8\n6\n4\n6\n1\n\n\nDay 8\n0\n4\n6\n7\n9\n7\n1\n9\n8\n1\n8\n7\n0\n4\n4\n7\n0\n5\n6\n1\n2\n\n\nDay 9\n0\n9\n0\n7\n0\n1\n6\n0\n8\n6\n0\n3\n1\n9\n8\n3\n1\n2\n7\n8\n2\n\n\nDay 10\n8\n6\n1\n0\n8\n3\n4\n5\n8\n8\n4\n9\n1\n0\n8\n6\n9\n2\n0\n7\n2\n\n\nDay 11\n7\n0\n0\n7\n9\n2\n3\n0\n0\n0\n5\n5\n4\n0\n1\n7\n8\n2\n0\n8\n1\n\n\nDay 12\n3\n2\n2\n4\n6\n3\n9\n6\n8\n8\n7\n6\n6\n4\n3\n8\n7\n0\n4\n3\n1\n\n\nDay 13\n4\n2\n6\n9\n0\n0\n8\n5\n3\n1\n5\n1\n8\n7\n6\n8\n3\n6\n3\n5\n1\n\n\nDay 14\n3\n1\n2\n4\n3\n1\n6\n2\n9\n5\n2\n4\n0\n6\n1\n9\n0\n7\n9\n4\n3\n\n\nDay 15\n2\n0\n1\n5\n8\n5\n8\n1\n3\n2\n2\n7\n8\n2\n2\n1\n2\n9\n2\n5\n1\n\n\nDay 16\n9\n9\n6\n0\n6\n3\n3\n2\n6\n8\n3\n9\n0\n5\n7\n8\n8\n3\n8\n6\n3\n\n\nDay 17\n8\n3\n0\n0\n1\n5\n3\n7\n0\n9\n6\n4\n1\n2\n5\n0\n1\n8\n7\n1\n1\n\n\nDay 18\n7\n1\n2\n6\n4\n3\n0\n0\n7\n5\n6\n2\n9\n2\n8\n0\n3\n1\n9\n1\n2\n\n\nDay 19\n5\n6\n5\n9\n8\n4\n3\n0\n6\n7\n4\n9\n4\n2\n0\n6\n1\n0\n4\n1\n2\n\n\nDay 20\n0\n5\n5\n9\n9\n4\n3\n4\n1\n6\n9\n2\n4\n3\n1\n8\n6\n8\n0\n2\n3\n\n\nDay 21\n4\n1\n0\n1\n5\n1\n6\n4\n8\n5\n2\n1\n5\n8\n6\n2\n0\n5\n2\n6\n0\n\n\nDay 22\n8\n5\n2\n0\n3\n5\n0\n9\n0\n4\n2\n8\n1\n1\n5\n7\n1\n4\n7\n5\n1\n\n\nDay 23\n1\n0\n8\n5\n4\n7\n5\n2\n8\n7\n2\n6\n4\n4\n3\n5\n6\n5\n5\n7\n0\n\n\nDay 24\n9\n5\n7\n9\n6\n3\n4\n7\n7\n2\n5\n2\n0\n0\n9\n1\n9\n5\n2\n8\n4\n\n\nDay 25\n6\n0\n9\n4\n8\n3\n4\n8\n0\n8\n8\n7\n1\n0\n7\n3\n4\n7\n5\n1\n1\n\n\n\n\n\n\n\n\nEach value in the last column of Table Table 2.2 is the count of 9s in that row and, therefore, the result from our simulation of one day.\nWe can estimate how often three or more ambulances would break down by looking for values of three or greater in the last column. We find there are 6 rows with three or more in the last column. Finally we divide this number of rows by the number of trials (25) to get an estimate of the proportion of days with three or more breakdowns. The result is 0.24." + }, + { + "objectID": "resampling_method.html#solving-the-problem-using", + "href": "resampling_method.html#solving-the-problem-using", + "title": "2  The resampling method", + "section": "2.3 Solving the problem using Python", + "text": "2.3 Solving the problem using Python\nHere we rush ahead to show you how to do this simulation in Python.\nWe go through the Python code for the simulation, but we don’t expect you to understand all of it right now. The rest of this book goes into more detail on reading and writing Python code, and how you can use Python to build your own simulations. 
Here we just want to show you what this code looks like, to give you an idea of where we are headed.\nWhile you can run the code below on your own computer, for now we only need you to read it and follow along; the text explains what each line of code does.\n\n\n\n\n\n\nComing back to the example\n\n\n\nIf you are interested, you can come back to this example later, and run it for yourself. To do this, we recommend you read Chapter 4 that explains how to execute notebooks online or on your own computer.\n\n\n\nStart of ambulances notebook\n\nDownload notebook\nInteract\n\n\nThe first thing to say about the code you will see below is there are some lines that do not do anything; these are the lines beginning with a # character (read # as “hash”). Lines beginning with # are called comments. When Python sees a # at the start of a line, it ignores everything else on that line, and skips to the next. Here’s an example of a comment:\n\n# Python will completely ignore this text.\n\nBecause Python ignores lines beginning with #, the text after the # is just for us, the humans reading the code. The person writing the code will often use comments to explain what the code is doing.\nOur next task is to use Python to simulate a single day of ambulances. We will again represent each ambulance by a random number from 0 through 9. 20 of these numbers represents a simulation of all 20 ambulances available to the contractor. We call a simulation of all ambulances for a specific day one trial.\n\nBefore we begin our first trial, we need to load some helpful routines from the NumPy software library. NumPy is a Python library that has many important functions for creating and working with numerical data. We will use routines from NumPy in almost all our examples.\n\n# Get the Numpy library, and call it \"np\" for short.\nimport numpy as np\n\nWe also need to ask NumPy for an object that can generate random numbers. Such an object is known as a “random number generator”.\n\n# Ask NumPy for a random number generator.\n# Name it `rnd` — short for \"random\"\nrnd = np.random.default_rng()\n\n\n\n\n\n\n\nNumPy’s Random Number Generator\n\n\n\nHere are some examples of the random operations we can perform with NumPy:\n\nMake a random choice between three words:\n\nrnd.choice(['apple', 'orange', 'banana'])\n\n'orange'\n\n\nMake five random choices of three words, using the “size=” argument:\n\nrnd.choice(['apple', 'orange', 'banana'], size=5)\n\narray(['orange', 'orange', 'orange', 'banana', 'banana'], dtype='<U6')\n\n\nShuffle a list of numbers:\n\nrnd.permutation([1, 2, 3, 4, 5])\n\narray([3, 5, 4, 2, 1])\n\n\nGenerate five random numbers between 1 and 10:\n\nrnd.integers(1, 11, size=5)\n\narray([9, 3, 2, 9, 3])\n\n\n\n\n\n\nRecall that we want twenty 10-sided dice — one per ambulance. 
Our dice should be 10-sided, because each ambulance has a 1-in-10 chance of being out of order.\nThe program to simulate one trial of the ambulances problem therefore begins with these commands:\n\n# Ask NumPy to generate 20 numbers from 0 through 9.\n\n# These are the numbers we will ask NumPy to select from.\n# We store the numbers together in an *array*.\nnumbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n# Get 20 (size=20) values from the *numbers* list.\n# Store the 20 numbers with the name \"a\"\na = rnd.choice(numbers, size=20)\n\n# The result is a sequence (array) of 20 numbers.\na\n\narray([6, 6, 5, 0, 5, 2, 7, 4, 4, 6, 3, 9, 5, 2, 5, 8, 1, 2, 5, 4])\n\n\nThe commands above ask the computer to store the results of the random drawing in a location in the computer’s memory to which we give a name such as “a” or “ambulances” or “aardvark” — the name is up to us.\nNext, we need to count the number of defective ambulances:\n\n# Count the number of nines in the random numbers.\n# The \"a == 9\" part identifies all the numbers equal to 9.\n# The \"sum\" part counts how many numbers \"a == 9\" found.\nb = np.sum(a == 9)\n# Show the result\nb\n\n1\n\n\n\n\n\n\n\n\nCounting sequence elements\n\n\n\nWe see that the code uses:\n\nnp.sum(a == 9)\n\n1\n\n\nWhat exactly happens here under the hood? First a == 9 creates an sequence of values that only contains\nTrue or False\nvalues, depending on whether each element is equal to 9 or not.\nThen, we ask Python to add up (sum). Python counts True as 1, and False as 0; thus we can use sum to count the number of True values.\nThis comes down to asking “how many elements in a are equal to 9”.\nDon’t worry, we will go over this again in the next chapter.\n\n\nThe sum command is a counting operation. It asks the computer to count the number of 9s among the twenty numbers that are in location a following the random draw carried out by the rnd.choice operation. The result of the sum operation will be somewhere between 0 and 20, the number of simulated ambulances that were out-of-order on a given simulated day. The result is then placed in another location in the computer’s memory that we label b.\nAbove you see that we have worked out how to tell the computer to do a single trial — one simulated day.\n\n2.3.1 Repeating trials\nWe could run the code above for one trial over and over, and write down the result on a piece of paper. If we did this 100 times we would have 100 counts of the number of simulated ambulances that had broken down for each simulated day. To answer our question, we will then count the number of times the count was more than three, and divide by 100, to get an estimate of the proportion of days with more than three out-of-order ambulances.\nOne of the great things about the computer is that it is very good at repeating tasks many times, so we do not have to. Our next task is to ask the computer to repeat the single trial many times — say 1000 times — and count up the results for us.\nOf course Python is very good at repeating things, but the instructions to tell Python to repeat things will take a little while to get used to. Soon, we will spend some time going over it in more detail. For now though, we show you how what it looks like, and ask you to take our word for it.\nThe standard way to repeat steps in Python is a for loop. For example, let us say we wanted to display (print) “Hello” five times. 
Here is how we would do that with a for loop:\n\n# Read the next line as \"repeat the following steps five times\".\nfor i in np.arange(0, 5):\n # The indented stuff is the code we repeat five times.\n # Print \"Hello\" to the screen.\n print(\"Hello\")\n\nHello\nHello\nHello\nHello\nHello\n\n\nYou can probably see where we are going here. We are going to put the code for one trial inside a for loop, to repeat that trial code many times.\nOur next job is to store the results of each trial. If we are going to run 1000 trials, we need to store 1000 results.\nTo do this, we start with a sequence of 1000 zeros, that we will fill in later, like this:\n\n# Ask NumPy to make a sequence of 1000 zeros that we will use\n# to store the results of our 1000 trials.\n# Call this sequence \"z\"\nz = np.zeros(1000)\n\nFor now, z contains 1000 zeros, but we will soon use a for loop to execute 1000 trials. For each trial we will calculate our result (the number of broken-down ambulances), and we will store the result in the z store. We end up with 1000 trial results stored in z.\nWith these parts, we are now ready to solve the ambulance problem, using Python.\n\n\n2.3.2 The solution\nThis is our big moment! Here we will combine the elements shown above to perform our ambulance simulation over, say, 1000 days. Just a quick reminder: we do not expect you to understand all the detail of the code below; we will cover that later. For now, see if you can follow along with the gist of it.\nTo solve resampling problems, we typically proceed as we have done above. We figure out the structure of a single trial and then place that trial in a for loop that executes it multiple times (once for each day, in our case).\nNow, let us apply this procedure to our ambulance problem. We simulate 1000 days. You will see that we have just taken the parts above, and put them together. The only new part here, is the step at the end, where we store the result of the trial. Bear with us for that; we will come to it soon.\n\n# Ask NumPy to make a sequence of 1000 zeros that we will use\n# to store the results of our 1000 trials.\n# Call this sequence \"z\"\nz = np.zeros(1000)\n\n# These are the numbers we will ask NumPy to select from.\nnumbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\n# Read the next line as \"repeat the following steps 1000 times\".\nfor i in np.arange(0, 1000):\n # The indented stuff is the code we repeat 1000 times.\n\n # Get 20 (size=20) values from the *numbers* list.\n # Store the 20 numbers with the name \"a\"\n a = rnd.choice(numbers, size=20)\n\n # Count the number of nines in the random numbers.\n # The \"a == 9\" part identifies all the numbers equal to 9.\n # The \"sum\" part counts how many numbers \"a == 9\" found.\n b = np.sum(a == 9)\n\n # Store the result from this trial in the sequence \"z\"\n z[i] = b\n\n # Now go back and repeat the trial, until done.\n\nThe z[i] = b statement that follows the sum counting operation simply keeps track of the results of each trial, placing the number of defective ambulances for each trial inside the sequence called z. The sequence has 1000 positions: one for each trial.\nWhen we have run the code above, we have stored 1000 trial results in the sequence z. These are 1000 counts of out-of-order ambulances, one for each of our simulated days. 
Our last task is to calculate the proportion of these days for which we had more than three broken-down ambulances.\nSince our aim is to count the number of days in which more than 3 (4 or more) defective ambulances occur, we use another counting sum command at the end of the 1000 trials. This command counts how many times more than 3 defects occurred in the 1000 days recorded in our z sequence, and we place the result in another location, k. This gives us the total number of days where 4 or more defective ambulances are seen to occur. Then we divide the number in k by 1000, the number of trials. Thus we obtain an estimate of the chance, expressed as a probability between 0 and 1, that 4 or more ambulances will be defective on a given day. And we store that result in a location that we call kk, which Python subsequently prints to the screen.\n\n# How many trials resulted in more than 3 ambulances out of order?\nk = np.sum(z > 3)\n\n# Convert to a proportion.\nkk = k / 1000\n\n# Print the result.\nprint(kk)\n\n0.13\n\n\nThis is the estimate we wanted; the proportion of days where more than three ambulances were out of action.\nWe have crept up on the solution, so it might not be clear to you how few steps you needed to do this task. Here is the whole solution to the problem, without the comments:\n\nimport numpy as np\nrnd = np.random.default_rng()\n\nz = np.zeros(1000)\nnumbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nfor i in np.arange(0, 1000):\n a = rnd.choice(numbers, size=20)\n b = np.sum(a == 9)\n z[i] = b\n\nk = np.sum(z > 3)\nkk = k / 1000\nprint(kk)\n\n0.124\n\n\nEnd of ambulances notebook\n\n\nNotice that the code above is exactly the same as the code we built up in steps. But notice too, that the answer we got from this code was slightly different from the answer we got first.\nWhy did we get a different answer from the same code?\n\n\n\n\n\n\nRandomness in estimates\n\n\n\nThis is an essential point — our code uses random numbers to get an estimate of the quantity we want — in this case, the probability of three or more ambulances being out of order. Every run of our code will use a different set of random numbers. Therefore, every run of our code will give us a very slightly different number. As you will soon see, we can make our estimate more and more accurate, and less and less different between each run, by doing many trials in each run. Here we did 1000 trials, but we will usually do 10000 trials, to give us a good estimate, that does not vary much from run to run.\n\n\nDon’t worry about the detail of how each of these commands works — we will cover those details gradually, over the next few chapters. But, we hope that you can see, in principle, how each of the operations that the computer carries out are analogous to the operations that you yourself executed when you solved this problem using the equivalent of a ten-sided die. This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with.\nWhile writing programs like these take a bit of getting used to, it is vastly simpler than the older, more conventional approaches to such problems routinely taught to students." 
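The note above on randomness in estimates can be made concrete. The sketch below is an added illustration, not part of the original notebook: it simply reruns the complete ambulance simulation three times, using the 10,000 trials per run that the text recommends, so you can see for yourself that separate runs give nearly the same estimate once the number of trials is large. The exact values printed will differ every time the code is run.

import numpy as np

rnd = np.random.default_rng()
numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# Repeat the whole simulation three times to compare the estimates.
for run in range(3):
    # 10,000 trials per run, instead of the 1000 used above.
    z = np.zeros(10000)
    for i in np.arange(0, 10000):
        # One trial: 20 ambulances; a 9 means "out of order".
        a = rnd.choice(numbers, size=20)
        z[i] = np.sum(a == 9)
    # Proportion of simulated days with more than 3 breakdowns.
    print(np.sum(z > 3) / 10000)

With 10,000 trials per run, the three printed proportions should agree with each other to within about a percentage point, whereas runs of only 1000 trials wander noticeably more, which is exactly the effect described in the box above.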
+ }, + { + "objectID": "resampling_method.html#sec-resamp-differs", + "href": "resampling_method.html#sec-resamp-differs", + "title": "2  The resampling method", + "section": "2.4 How resampling differs from the conventional approach", + "text": "2.4 How resampling differs from the conventional approach\nIn the standard approach the student learns to choose and solve a formula. Doing the algebra and arithmetic is quick and easy. The difficulty is in choosing the correct formula. Unless you are a professional mathematician, it may take you quite a while to arrive at the correct formula — considerable hard thinking, and perhaps some digging in textbooks. More important than the labor, however, is that you may come up with the wrong formula, and hence obtain the wrong answer. And how would you know if you were wrong?\nMost students who have had a standard course in probability and statistics are quick to tell you that it is not easy to find the correct formula, even immediately after finishing a course (or several courses) on the subject. After leaving school or university, it is harder still to choose the right formula. Even many people who have taught statistics at the university level (including this writer) must look at a book to get the correct formula for a problem as simple as the ambulances, and then we are often still not sure we have the right answer. This is the grave disadvantage of the standard approach.\nIn the past few decades, resampling and other Monte Carlo simulation methods have come to be used extensively in scientific research. But in contrast to the material in this book, simulation has mostly been used in situations so complex that mathematical methods have not yet been developed to handle them. Here are examples of such situations:\n\n\nFor a flight to Mars, calculating the correct route involves a great many variables, too many to solve with formulas. Hence, the Monte Carlo simulation method is used.\nThe Navy might want to know how long the average ship will have to wait for dock facilities. The time of completion varies from ship to ship, and the number of ships waiting in line for dock work varies over time. This problem can be handled quite easily with the experimental simulation method, but formal mathematical analysis would be difficult or impossible.\nWhat are the best tactics in baseball? Should one bunt? Should one put the best hitter up first, or later? By trying out various tactics with dice or random numbers, Earnshaw Cook (in his book Percentage Baseball), found that it is best never to bunt, and the highest-average hitter should be put up first, in contrast to usual practice. Finding this answer would have been much more difficult with the analytic method.\n\nWhich search pattern will yield the best results for a ship searching for a school of fish? Trying out “models” of various search patterns with simulation can provide a fast answer.\nWhat strategy in the game of Monopoly will be most likely to win? The simulation method systematically plays many games (with a computer) testing various strategies to find the best one.\n\nBut those five examples are all complex problems. This book and its earlier editions break new ground by using this method for simple rather than complex problems , especially in statistics rather than pure probability, and in teaching beginning rather than advanced students to solve problems this way. 
(Here it is necessary to emphasize that the resampling method is used to solve the problems themselves rather than as a demonstration device to teach the notions found in the standard conventional approach . Simulation has been used in elementary courses in the past, but only to demonstrate the operation of the analytical mathematical ideas. That is very different than using the resampling approach to solve statistics problems themselves, as is done here.)\nOnce we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics . Then we can get down to the business of learning how to do that clear statistical thinking, and putting it to work for you. The study of probability is purely mathematics (though not necessarily formulas) and technique. But statistics has to do with meaning . For example, what is the meaning of data showing an association just discovered between a type of behavior and a disease? Of differences in the pay of men and women in your firm? Issues of causation, acceptability of control, and design of experiments cannot be reduced to technique. This is “philosophy” in the fullest sense. Probability and statistics calculations are just one input. Resampling simulation enables us to get past issues of mathematical technique and focus on the crucial statistical elements of statistical problems.\nWe hope you will find, as you read through the chapters, that the resampling way of thinking is a good way to think about the more traditional statistical methods that some of you may already know. Our approach will be to use resampling to understand the ideas, and then apply this understanding to reason about traditional methods. You may also find that the resampling methods are not only easier to understand — they are often more useful, because they are so general in their application." + }, + { + "objectID": "what_is_probability.html#introduction", + "href": "what_is_probability.html#introduction", + "title": "3  What is probability?", + "section": "3.1 Introduction", + "text": "3.1 Introduction\nThe central concept for dealing with uncertainty is probability. Hence we must inquire into the “meaning” of the term probability. (The term “meaning” is in quotes because it can be a confusing word.)\nYou have been using the notion of probability all your life when drawing conclusions about what you expect to happen, and in reaching decisions in your public and personal lives.\nYou wonder: Will the kick from the 45 yard line go through the uprights? How much oil can you expect from the next well you drill, and what value should you assign to that prospect? Will you make money if you invest in tech stocks for the medium term, or should you spread your investments across the stock market? Will the next Space-X launch end in disaster? Your answers to these questions rest on the probabilities you estimate.\nAnd you act on the basis of probabilities: You pay extra for an low-interest loan, if you think that interest rates are going to go up. You bet heavily on a poker hand if there is a high probability that you have the best hand. A hospital decides not to buy another ambulance when the administrator judges that there is a low probability that all the other ambulances will ever be in use at once. 
NASA decides whether or not to send off the space shuttle this morning as scheduled.\nThe idea of probability is essential when we reason about uncertainty, and so this chapter discusses what is meant by such key terms as “probability,” “chance”, “sample,” and “universe.” It discusses the nature and the usefulness of the concept of probability as used in this book, and it touches on the source of basic estimates of probability that are the raw material of statistical inferences." + }, + { + "objectID": "what_is_probability.html#the-meaning-of-probability", + "href": "what_is_probability.html#the-meaning-of-probability", + "title": "3  What is probability?", + "section": "3.2 The “Meaning” of “Probability”", + "text": "3.2 The “Meaning” of “Probability”\nProbability is difficult to define (Feller 1968), but here is a useful informal starting point:\n\nA probability is a number from 0 through 1 that reflects how likely it is that a particular event will happen.\n\nAny particular stated probability is an assertion that indicates how likely you believe it is that an event will occur.\nIf you give an event a probability of 0 you mean that you are certain it will not happen. If you give probability 1 to an event, you mean you are certain that it will happen. For example, if I give you one card from deck that you know contains only the standard 52 cards — before you look at the card, you can give probability 0 to the card being a joker, because you are certain the pack does not contain any joker cards. If I then select only the 14 spades from that deck, and give you a card from that selection, you will say there is probability 1 that the card is a black card, because all the spades are black cards.\nA probability estimate of .2 indicates that you think there is twice as great a chance of the event happening as if you had estimated a probability of .1. This is the rock-bottom interpretation of the term “probability,” and the heart of the concept. 1\nThe idea of probability arises when you are not sure about what will happen in an uncertain situation. For example, you may lack information and therefore can only make an estimate. If someone asks you your name, you do not use the concept of probability to answer; you know the answer to a very high degree of surety. To be sure, there is some chance that you do not know your own name, but for all practical purposes you can be quite sure of the answer. If someone asks you who will win tomorrow’s baseball game, however, there is a considerable chance that you will be wrong no matter what you say. Whenever there is a reasonable chance that your prediction will be wrong, the concept of probability can help you.\nThe concept of probability helps you to answer the question, “How likely is it that…?” The purpose of the study of probability and statistics is to help you make sound appraisals of statements about the future, and good decisions based upon those appraisals. The concept of probability is especially useful when you have a sample from a larger set of data — a “universe” — and you want to know the probability of various degrees of likeness between the sample and the universe. (The universe of events you are sampling from is also called the “population,” a concept to be discussed below.) Perhaps the universe of your study is all high school graduates in 2018. 
You might then want to know, for example, the probability that the universe’s average SAT (university entrance) score will not differ from your sample’s average SAT by more than some arbitrary number of SAT points — say, ten points.\nWe have said that a probability statement is about the future. Well, usually. Occasionally you might state a probability about your future knowledge of past events — that is, “I think I’ll find out that…” — or even about the unknown past. (Historians use probabilities to measure their uncertainty about whether events occurred in the past, and the courts do, too, though the courts hesitate to say so explicitly.)\nSometimes one knows a probability, such as in the case of a gambler playing black on an honest roulette wheel, or an insurance company issuing a policy on an event with which it has had a lot of experience, such as a life insurance policy. But often one does not know the probability of a future event. Therefore, our concept of probability must include situations where extensive data are not available.\nAll of the many techniques used to estimate probabilities should be thought of as proxies for the actual probability. For example, if Mission Control at Space Central simulates what should and probably will happen in space if a valve is turned aboard a space craft just now being built, the test result on the ground is a proxy for the real probability of what will happen when the crew turn the valve in the planned mission.\nIn some cases, it is difficult to conceive of any data that can serve as a proxy. For example, the director of the CIA, Robert Gates, said in 1993 “that in May 1989, the CIA reported that the problems in the Soviet Union were so serious and the situation so volatile that Gorbachev had only a 50-50 chance of surviving the next three to four years unless he retreated from his reform policies” (The Washington Post , January 17, 1993, p. A42). Can such a statement be based on solid enough data to be more than a crude guess?\nThe conceptual probability in any specific situation is an interpretation of all the evidence that is then available . For example, a wise biomedical worker’s estimate of the chance that a given therapy will have a positive effect on a sick patient should be an interpretation of the results of not just one study in isolation, but of the results of that study plus everything else that is known about the disease and the therapy. A wise policymaker in business, government, or the military will base a probability estimate on a wide variety of information and knowledge. The same is even true of an insurance underwriter who bases a life-insurance or shipping-insurance rate not only on extensive tables of long-time experience but also on recent knowledge of other kinds. Each situation asks us to make a choice of the best method of estimating a probability — whether that estimate is objective — from a frequency series — or subjective, from the distillation of other experience." + }, + { + "objectID": "what_is_probability.html#the-nature-and-meaning-of-the-concept-of-probability", + "href": "what_is_probability.html#the-nature-and-meaning-of-the-concept-of-probability", + "title": "3  What is probability?", + "section": "3.3 The nature and meaning of the concept of probability", + "text": "3.3 The nature and meaning of the concept of probability\nIt is confusing and unnecessary to inquire what probability “really” is. 
(Indeed, the terms “really” and “is,” alone or in combination, are major sources of confusion in statistics and in other logical and scientific discussions, and it is often wise to avoid their use.) Various concepts of probability — which correspond to various common definitions of the term — are useful in particular contexts. This book contains many examples of the use of probability. Work with them will gradually develop a sound understanding of the concept.\nThere are two major concepts and points of view about probability — frequency and degrees of belief. Each is useful in some situations but not in others. Though they may seem incompatible in principle, there almost never is confusion about which is appropriate in a given situation.\n\nFrequency . The probability of an event can be said to be the proportion of times that the event has taken place in the past, usually based on a long series of trials. Insurance companies use this when they estimate the probability that a thirty-five-year-old teacher will die during a period for which he wants to buy an insurance policy. (Notice this shortcoming: Sometimes you must bet upon events that have never or only infrequently taken place before, and so you cannot reasonably reckon the proportion of times they occurred one way or the other in the past.)\nDegree of belief . The probability that an event will take place or that a statement is true can be said to correspond to the odds at which you would bet that the event will take place. (Notice a shortcoming of this concept: You might be willing to accept a five-dollar bet at 2-1 odds that your team will win the game, but you might be unwilling to bet a hundred dollars at the same odds.)\n\nSee (Barnett 1982, chap. 3) for an in-depth discussion of different approaches to probability.\nThe connection between gambling and immorality or vice troubles some people about gambling examples. On the other hand, the immediacy and consequences of the decisions that the gambler has to make give the subject a special tang. There are several reasons why statistics use so many gambling examples — and especially tossing coins, throwing dice, and playing cards:\n\nHistorical . The theory of probability began with gambling examples of dice analyzed by Cardano, Galileo, and then by Pascal and Fermat.\nGenerality . These examples are not related to any particular walk of life, and therefore they can be generalized to applications in any walk of life. Students in any field — business, medicine, science — can feel equally at home with gambling examples.\nSharpness . These examples are particularly stark, and unencumbered by the baggage of particular walks of life or special uses.\nUniversality . Many other texts use these same examples, and therefore the use of them connects up this book with the main body of writing about probability and statistics.\n\nOften we’ll begin with a gambling example and then consider an example in one of the professional fields — such as business and other decision-making activities, biostatistics and medicine, social science and natural science — and everyday living. People in one field often can benefit from examples in others; for example, medical students should understand the need for business decision-making in terms of medical practice, as well as the biostatistical examples. And social scientists should understand the decision-making aspects of statistics if they have any interest in the use of their work in public policy." 
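To make the frequency idea concrete, here is a minimal simulation sketch (ours, not part of the original text) in the book's language, Python with NumPy: the estimated probability of heads is simply the proportion of heads in a long series of simulated coin tosses, and it settles near 0.5 as the series grows.

```python
# A minimal sketch (not from the original text) of the "frequency" concept:
# approximate the probability of heads by the proportion of heads in a long
# series of simulated coin tosses.
import numpy as np

rng = np.random.default_rng()  # NumPy's random number generator

n_tosses = 10_000
# 0 represents tails, 1 represents heads; each is equally likely.
tosses = rng.integers(0, 2, size=n_tosses)
proportion_heads = np.mean(tosses)
print(proportion_heads)  # Close to 0.5, and (on average) closer as n_tosses grows.
```

Nothing in the sketch depends on coins; replacing the 0/1 tosses with records of any repeatable event gives the same frequency-series logic that the insurance example uses.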
+ }, + { + "objectID": "what_is_probability.html#back-to-proxies", + "href": "what_is_probability.html#back-to-proxies", + "title": "3  What is probability?", + "section": "3.4 Back to Proxies", + "text": "3.4 Back to Proxies\nExample of a proxy: The “probability risk assessments” (PRAs) that are made for the chances of failures of nuclear power plants are based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. A PRA can cost a nuclear facility $5 million.\nAnother example: If a manager of a high-street store looks at the sales of a particular brand of smart watches in the last two Decembers, and on that basis guesses how likely it is that she will run out of stock if she orders 200 smart watches, then the last two years’ experience is serving as a proxy for future experience. If a sales manager just “intuits” that the odds are 3 to 1 (a probability of .75) that the main local competitor will not meet a price cut, then all her past experience summed into her intuition is a proxy for the probability that it will really happen. Whether any proxy is a good or bad one depends on the wisdom of the person choosing the proxy and making the probability estimates.\nHow does one estimate a probability in practice? This involves practical skills not very different from the practical skills required to estimate with accuracy the length of a golf shot, the number of carpenters you will need to build a house, or the time it will take you to walk to a friend’s house; we will consider elsewhere some ways to improve your practical skills in estimating probabilities. For now, let us simply categorize and consider in the next section various ways of estimating an ordinary garden variety of probability, which is called an “unconditional” probability." + }, + { + "objectID": "what_is_probability.html#sec-probability-ways", + "href": "what_is_probability.html#sec-probability-ways", + "title": "3  What is probability?", + "section": "3.5 The various ways of estimating probabilities", + "text": "3.5 The various ways of estimating probabilities\nConsider the probability of drawing an even-numbered spade from a deck of poker cards (consider the queen as even and the jack and king as odd). Here are several general methods of estimation, where we define each method in terms of the operations we use to make the estimate:\n\nExperience.\nThe first possible source for an estimate of the probability of drawing an even-numbered spade is the purely empirical method of experience . If you have watched card games casually from time to time, you might simply guess at the proportion of times you have seen even-numbered spades appear — say, “about 1 in 15” or “about 1 in 9” (which is almost correct) or something like that. (If you watch long enough you might come to estimate something like 6 in 52.)\nGeneral information and experience are also the source for estimating the probability that the sales of a particular brand of smart watch this December will be between 200 and 250, based on sales the last two Decembers; that your team will win the football game tomorrow; that war will break out next year; or that a United States astronaut will reach Mars before a Russian astronaut. 
You simply put together all your relevant prior experience and knowledge, and then make an educated guess.\nObservation of repeated events can help you estimate the probability that a machine will turn out a defective part or that a child can memorize four nonsense syllables correctly in one attempt. You watch repeated trials of similar events and record the results.\nData on the mortality rates for people of various ages in a particular country in a given decade are the basis for estimating the probabilities of death, which are then used by the actuaries of an insurance company to set life insurance rates. This is systematized experience — called a frequency series .\nNo frequency series can speak for itself in a perfectly objective manner. Many judgments inevitably enter into compiling every frequency series — deciding which frequency series to use for an estimate, choosing which part of the frequency series to use, and so on. For example, should the insurance company use only its records from last year, which will be too few to provide as much data as is preferable, or should it also use death records from years further back, when conditions were slightly different, together with data from other sources? (Of course, no two deaths — indeed, no events of any kind — are exactly the same. But under many circumstances they are practically the same, and science is only interested in such “practical” considerations.)\nGiven that we have to use judgment in probability estimates, the reader may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities; the two terms are working synonyms.\nThere is no logical difference between the sort of probability that the life insurance company estimates on the basis of its “frequency series” of past death rates, and the manager’s estimates of the sales of smart watches in December, based on sales in that month in the past two years. 2\nThe concept of a probability based on a frequency series can be rendered almost useless when all the observations are repetitions of a single magnitude — for example, the case of all successes and zero failures of space-shuttle launches prior to the Challenger shuttle tragedy in the 1980s; in those data alone there was almost no basis to estimate the probability of a shuttle failure. (Probabilists have made some rather peculiar attempts over the centuries to estimate probabilities from the length of a zero-defect time series — such as the fact that the sun has never failed to rise (foggy days aside! — based on the undeniable fact that the longer such a series is, the smaller the probability of a failure; see e.g., (Whitworth 1897, xix–xli). However, one surely has more information on which to act when one has a long series of observations of the same magnitude rather than a short series).\nSimulated experience.\nA second possible source of probability estimates is empirical scientific investigation with repeated trials of the phenomenon. This is an empirical method even when the empirical trials are simulations. In the case of the even-numbered spades, the empirical scientific procedure is to shuffle the cards, deal one card, record whether or not the card is an even-number spade, replace the card, and repeat the steps a good many times. 
The proportions of times you observe an even-numbered spade come up is a probability estimate based on a frequency series.\nYou might reasonably ask why we do not just count the number of even-numbered spades in the deck of fifty-two cards — using the sample space analysis you see below. No reason at all. But that procedure would not work if you wanted to estimate the probability of a baseball batter getting a hit or a cigarette lighter producing flame.\nSome varieties of poker are so complex that experiment is the only feasible way to estimate the probabilities a player needs to know.\nThe resampling approach to statistics produces estimates of most probabilities with this sort of experimental “Monte Carlo” method. More about this later.\nSample space analysis and first principles.\nA third source of probability estimates is counting the possibilities — the quintessential theoretical method. For example, by examination of an ordinary die one can determine that there are six different numbers that can come up. One can then determine that the probability of getting (say) either a “1” or a “2,” on a single throw, is 2/6 = 1/3, because two among the six possibilities are “1” or “2.” One can similarly determine that there are two possibilities of getting a “1” plus a “6” out of thirty-six possibilities when rolling two dice, yielding a probability estimate of 2/36 = 1/18.\nEstimating probabilities by counting the possibilities has two requirements: 1) that the possibilities all be known (and therefore limited), and few enough to be studied easily; and 2) that the probability of each particular possibility be known, for example, that the probabilities of all sides of the dice coming up are equal, that is, equal to 1/6.\nMathematical shortcuts to sample-space analysis.\nA fourth source of probability estimates is mathematical calculations . If one knows by other means that the probability of a spade is 1/4 and the probability of an even-numbered card is 6/13, one can use probability calculation rules to calculate that the probability of turning up an even-numbered spade is 6/52 (that is, 1/4 x 6/13). If one knows that the probability of a spade is 1/4 and the probability of a heart is 1/4, one can then calculate that the probability of getting a heart or a spade is 1/2 (that is 1/4 + 1/4). The point here is not the particular calculation procedures, which we will touch on later, but rather that one can often calculate the desired probability on the basis of already-known probabilities.\nIt is possible to estimate probabilities with mathematical calculation only if one knows by other means the probabilities of some related events. For example, there is no possible way of mathematically calculating that a child will memorize four nonsense syllables correctly in one attempt; empirical knowledge is necessary.\nKitchen-sink methods.\nIn addition to the above four categories of estimation procedures, the statistical imagination may produce estimates in still other ways such as a) the salesman’s seat-of-the-pants estimate of what the competition’s price will be next quarter, based on who-knows-what gossip, long-time acquaintance with the competitors, and so on, and b) the probability risk assessments (PRAs) that are made for the chances of failures of nuclear power plants based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. 
Any of these methods may be a combination of theoretical and empirical methods.\n\nAs an example of an organization struggling with kitchen-sink methods, consider the estimation of the probability of failure for the tragic flight of the Challenger shuttle, as described by the famous physicist and Nobel laureate Richard Feynman. This is a very real case that includes just about every sort of complication that enters into estimating probabilities.\n\n…Mr. Ullian told us that 5 out of 127 rockets that he had looked at had failed — a rate of about 4 percent. He took that 4 percent and divided it by 4, because he assumed a manned flight would be safer than an unmanned one. He came out with about a 1 percent chance of failure, and that was enough to warrant the destruct charges.\nBut NASA [the space agency in charge] told Mr. Ullian that the probability of failure was more like 1 in \\(10^5\\).\nI tried to make sense out of that number. “Did you say 1 in \\(10^5\\)?”\n“That’s right; 1 in 100,000.”\n“That means you could fly the shuttle every day for an average of 300 years between accidents — every day, one flight, for 300 years — which is obviously crazy!”\n“Yes, I know,” said Mr. Ullian. “I moved my number up to 1 in 1000 to answer all of NASA’s claims — that they were much more careful with manned flights, that the typical rocket isn’t a valid comparison, etcetera.”\nBut then a new problem came up: the Jupiter probe, Galileo, was going to use a power supply that runs on heat generated by radioactivity. If the shuttle carrying Galileo failed, radioactivity could be spread over a large area. So the argument continued: NASA kept saying 1 in 100,000 and Mr. Ullian kept saying 1 in 1000, at best.\nMr. Ullian also told us about the problems he had in trying to talk to the man in charge, Mr. Kingsbury: he could get appointments with underlings, but he never could get through to Kingsbury and find out how NASA got its figure of 1 in 100,000 (Feynman and Leighton 1988, 179–80).\n\nFeynman tried to ascertain more about the origins of the figure of 1 in 100,000 that entered into NASA’s calculations. He performed an experiment with the engineers:\n\n…“Here’s a piece of paper each. Please write on your paper the answer to this question: what do you think is the probability that a flight would be uncompleted due to a failure in this engine?”\nThey write down their answers and hand in their papers. One guy wrote “99-44/100% pure” (copying the Ivory soap slogan), meaning about 1 in 200. Another guy wrote something very technical and highly quantitative in the standard statistical way, carefully defining everything, that I had to translate — which also meant about 1 in 200. The third guy wrote, simply, “1 in 300.”\nMr. Lovingood’s paper, however, said:\n“Cannot quantify. Reliability is judged from:\n\npast experience\nquality control in manufacturing\nengineering judgment”\n\n“Well,” I said, “I’ve got four answers, and one of them weaseled.” I turned to Mr. Lovingood: “I think you weaseled.”\n“I don’t think I weaseled.”\n“You didn’t tell me what your confidence was, sir; you told me how you determined it. What I want to know is: after you determined it, what was it?”\nHe says, “100 percent” — the engineers’ jaws drop, my jaw drops; I look at him, everybody looks at him — “uh, uh, minus epsilon!”\nSo I say, “Well, yes; that’s fine. Now, the only problem is, WHAT IS EPSILON?”\nHe says, “\\(10^{-5}\\).” It was the same number that Mr. Ullian had told us about: 1 in 100,000.\nI showed Mr. 
Lovingood the other answers and said, “You’ll be interested to know that there is a difference between engineers and management here — a factor of more than 300.”\nHe says, “Sir, I’ll be glad to send you the document that contains this estimate, so you can understand it.”\nLater, Mr. Lovingood sent me that report. It said things like “The probability of mission success is necessarily very close to 1.0” — does that mean it is close to 1.0, or it ought to be close to 1.0? — and “Historically, this high degree of mission success has given rise to a difference in philosophy between unmanned and manned space flight programs; i.e., numerical probability versus engineering judgment.” As far as I can tell, “engineering judgment” means they’re just going to make up numbers! The probability of an engine-blade failure was given as a universal constant, as if all the blades were exactly the same, under the same conditions. The whole paper was quantifying everything. Just about every nut and bolt was in there: “The chance that a HPHTP pipe will burst is \\(10^{-7}\\).” You can’t estimate things like that; a probability of 1 in 10,000,000 is almost impossible to estimate. It was clear that the numbers for each part of the engine were chosen so that when you add everything together you get 1 in 100,000. (Feynman and Leighton 1988, 182–83).\n\nWe see in the Challenger shuttle case very mixed kinds of inputs to actual estimates of probabilities. They include frequency series of past flights of other rockets, judgments about the relevance of experience with that different sort of rocket, adjustments for special temperature conditions (cold), and much, much more. There also were complex computational processes in arriving at the probabilities that were made the basis for the launch decision. And most impressive of all, of course, are the extraordinary differences in estimates made by various persons (or perhaps we should talk of various statuses and roles) which make a mockery of the notion of objective estimation in this case.\nWorking with different sorts of estimation methods in different sorts of situations is not new; practical statisticians do so all the time. We argue that we should make no apology for doing so.\nThe concept of probability varies from one field of endeavor to another; it is different in the law, in science, and in business. The concept is most straightforward in decision-making situations such as business and gambling; there it is crystal-clear that one’s interest is entirely in making accurate predictions so as to advance the interests of oneself and one’s group. The concept is most difficult in social science, where there is considerable doubt about the aims and values of an investigation. In sum, one should not think of what a probability “is” but rather how best to estimate it. In practice, neither in actual decision-making situations nor in scientific work — nor in classes — do people experience difficulties estimating probabilities because of philosophical confusions. Only philosophers and mathematicians worry — and even they really do not need to worry — about the “meaning” of probability 3." 
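As a recap of the estimation methods described above, here is a small sketch (ours, not from the original text) that works the even-numbered spade example both ways: first by sample-space counting, then by simulated experience. The rank coding (queen as 12, jack as 11, king as 13) is our assumption, chosen to match the text's convention that the queen counts as even and the jack and king as odd.

```python
# A minimal sketch (ours, not from the original text) comparing two of the
# estimation methods above for the even-numbered spade example.
import numpy as np

rng = np.random.default_rng()

# Sample-space analysis: count the even-numbered spades in a 52-card deck.
ranks = list(range(1, 14))                       # 1 (ace) through 13 (king)
even_ranks = [r for r in ranks if r % 2 == 0]    # 2, 4, 6, 8, 10, 12 -> 6 cards
exact_probability = len(even_ranks) / 52
print(exact_probability)                         # 6/52, about 0.115

# Simulated experience: deal one card at random, many times, with replacement.
deck = [(suit, rank) for suit in ['spade', 'heart', 'diamond', 'club']
        for rank in ranks]
n_trials = 10_000
draws = rng.integers(0, 52, size=n_trials)       # index of the card drawn each time
hits = sum(1 for i in draws
           if deck[i][0] == 'spade' and deck[i][1] % 2 == 0)
print(hits / n_trials)                           # Should come out close to 6/52
```

The counting answer and the simulated proportion agree to within ordinary sampling variation, which is exactly the relationship between the theoretical and empirical methods that the section describes.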
+ }, + { + "objectID": "what_is_probability.html#the-relationship-of-probability-to-other-magnitudes", + "href": "what_is_probability.html#the-relationship-of-probability-to-other-magnitudes", + "title": "3  What is probability?", + "section": "3.6 The relationship of probability to other magnitudes", + "text": "3.6 The relationship of probability to other magnitudes\nAn important argument in favor of approaching the concept of probability as an estimate is that an estimate of a probability often (though not always) is the opposite side of the coin from an estimate of a physical quantity such as time or space.\nFor example, uncertainty about the probability that one will finish a task within 9 minutes is another way of labeling the uncertainty that the time required to finish the task will be less than 9 minutes. Hence, if estimation is appropriate for time in this case, it should be equally appropriate for probability. The same is true for the probability that the quantity of smart watches sold will be between 200 and 250 units.\nHence the concept of probability, and its estimation in any particular case, should be no more puzzling than is the “dual” concept of time or distance or quantities of smart watches. That is, lack of certainty about the probability that an event will occur is not different in nature from lack of certainty about the amount of time or distance in the event. There is no essential difference between whether a part 2 inches in length will be the next to emerge from the machine, or what the length of the next part will be, or the length of the part that just emerged (if it has not yet been measured).\nThe information available for the measurement of (say) the length of a car or the location of a star is exactly the same information that is available with respect to the concept of probability in those situations. That is, one may have ten disparate observations of a car’s length which then constitute a probability distribution, and the same for the altitude of a star in the heavens.\nIn a book of puzzles about probability (Mosteller 1987, problem 42), this problem appears: “If a stick is broken in two at random, what is the average length of the smaller piece?” This particular puzzle does not even mention probability explicitly, and no one would feel the need to write a scholarly treatise on the meaning of the word “length” here, any more than one would one do so if the question were about an astronomer’s average observation of the angle of a star at a given time or place, or the average height of boards cut by a carpenter, or the average size of a basketball team. Nor would one write a treatise about the “meaning” of “time” if a similar puzzle involved the average time between two bird calls. Yet a rephrasing of the problem reveals its tie to the concept of probability, to wit: What is the probability that the smaller piece will be (say) more than half the length of the larger piece? Or, what is the probability distribution of the sizes of the shorter piece?\nThe duality of the concepts of probability and physical entities also emerges in Whitworth’s discussion (1897) of fair betting odds:\n\n…What sum ought you fairly give or take now, while the event is undetermined, in exchange for the assurance that you shall receive a stated sum (say $1,000) if the favourable event occur? The chance of receiving $1,000 is worth something. It is not as good as the certainty of receiving $1,000, and therefore it is worth less than $1,000. 
But the prospect or expectation or chance, however slight, is a commodity which may be bought and sold. It must have its price somewhere between zero and $1,000. (p. xix.)\n\n\n…And the ratio of the expectation to the full sum to be received is what is called the chance of the favourable event. For instance, if we say that the chance is 1/5, it is equivalent to saying that $200 is the fair price of the contingent $1,000. (p. xx.)…\n\n\nThe fair price can sometimes be calculated mathematically from a priori considerations: sometimes it can be deduced from statistics, that is, from the recorded results of observation and experiment. Sometimes it can only be estimated generally, the estimate being founded on a limited knowledge or experience. If your expectation depends on the drawing of a ticket in a raffle, the fair price can be calculated from abstract considerations: if it depend upon your outliving another person, the fair price can be inferred from recorded statistics: if it depend upon a benefactor not revoking his will, the fair price depends upon the character of your benefactor, his habit of changing his mind, and other circumstances upon the knowledge of which you base your estimate. But if in any of these cases you determine that $300 is the sum which you ought fairly to accept for your prospect, this is equivalent to saying that your chance, whether calculated or estimated, is 3/10... (p. xx.)\n\nIt is indubitable that along with frequency data, a wide variety of other information will affect the odds at which a reasonable person will bet. If the two concepts of probability stand on a similar footing here, why should they not be on a similar footing in all discussion of probability? I can think of no reason that they should not be so treated.\nScholars write about the “discovery” of the concept of probability in one century or another. But is it not likely that even in pre-history, when a fisherperson was asked how long the big fish was, s/he sometimes extended her/his arms and said, “About this long, but I’m not exactly sure,” and when a scout was asked how many of the enemy there were, s/he answered, “I don’t know for sure...probably about fifty.” The uncertainty implicit in these statements is the functional equivalent of probability statements. There simply is no need to make such heavy work of the probability concept as the philosophers and mathematicians and historians have done." + }, + { + "objectID": "what_is_probability.html#what-is-chance", + "href": "what_is_probability.html#what-is-chance", + "title": "3  What is probability?", + "section": "3.7 What is “chance”?", + "text": "3.7 What is “chance”?\nThe study of probability focuses on events with randomness — that is, events about which there is uncertainty whether or not they will occur. And the uncertainty refers to your knowledge rather than to the event itself. For example, consider this physical illustration with a remote control. The remote control has a front end that should point at the TV that it controls, and a back end that will usually be pointing at me, the user of the remote control. Call the front the TV end, and the back the sofa end of the remote control.\nI spin the remote control like a baton twirler. If I hold it at the sofa end and attempt to flip it so that it turns only half a revolution, I can be almost sure that I will correctly get the TV end and not the sofa end. And if I attempt to flip it a full revolution, again I can almost surely get the sofa end successfully. 
It is not a random event whether I catch the sofa end or the TV end (here ignoring those throws when I catch neither end) when doing only half a revolution or one revolution. The result is quite predictable in both these simple maneuvers so far.\nWhen I say the result is “predictable,” I mean that you would not bet with me about whether this time I’ll get the TV or the sofa end. So we say that the outcome of my flip aiming at half a revolution is not “random.”\nWhen I twirl the remote control so little, I control (almost completely) whether the sofa end or the TV end comes down to my hand; this is the same as saying that the outcome does not occur by chance.\nThe terms “random” and “chance” implicitly mean that you believe that I cannot control or cannot know in advance what will happen.\nWhether this twirl will be the rare time I miss, however, should be considered chance. Though you would not bet at even odds on my catching the sofa end versus the TV end if there is to be only a half or one full revolution, you might bet — at (say) odds of 50 to 1 — that I will make a mistake and get it wrong, or drop it. So the very same flip can be seen as random or determined depending on what aspect of it we are looking at.\nOf course you would not bet against me about my not making a mistake, because the bet might cause me to make a mistake purposely. This “moral hazard” is a problem that emerges when a person buys life insurance and may commit suicide, or when a boxer may lose a fight purposely. The people who stake money on those events say that such an outcome is “fixed” (a very appropriate word) and not random.\nNow I attempt more difficult maneuvers with the remote control. I can do \\(1\\frac{1}{2}\\) flips pretty well, and two full revolutions with some success — maybe even \\(2\\frac{1}{2}\\) flips on a good day. But when I get much beyond that, I cannot determine very well whether I’ll get the sofa or the TV end. The outcome gradually becomes less and less predictable — that is, more and more random.\nIf I flip the remote control so that it revolves three or more times, I can hardly control the process at all, and hence I cannot predict well whether I’ll get the sofa end or the TV end. With 5 revolutions I have absolutely no control over the outcome; I cannot predict the outcome better than 50-50. At that point, getting the sofa end or the TV end has become a completely random event for our purposes, just like flipping a coin high in the air. So at that point we say that “chance” controls the outcome, though that word is just a synonym for my lack of ability to control and predict the outcome. “Chance” can be thought to stand for the myriad small factors that influence the outcome.\nWe see the same gradual increase in randomness with increasing numbers of shuffles of cards. After one shuffle, a skilled magician can know where every card is, and after two shuffles there is still much order that s/he can work with. But after (say) five shuffles, the magician no longer has any power to predict and control, and the outcome of any draw can then be thought of as random chance.\nAt what point do we say that the outcome is “random” or “pure chance” as to whether my hand will grasp the TV end, the sofa end, or at some other spot? There is no sharp boundary to this transition. 
Rather, the transition is gradual; this is the crucial idea, and one that I have not seen stated before.\nWhether or not we refer to the outcome as random depends upon the twirler’s skill, which influences how predictable the event is. A baton twirler or juggler might be able to do ten flips with a non-random outcome; if the twirler is an expert and the outcome is highly predictable, we say it is not random but rather is determined.\nAgain, this shows that the randomness is not a property of the physical event, but rather of a person’s knowledge and skill." + }, + { + "objectID": "what_is_probability.html#sec-what-is-chance", + "href": "what_is_probability.html#sec-what-is-chance", + "title": "3  What is probability?", + "section": "3.8 What Do We Mean by “Random”?", + "text": "3.8 What Do We Mean by “Random”?\nWe have defined “chance” and “random” as the absence of predictive power and/or explanation and/or control. Here we should not confuse the concepts of determinacy-indeterminacy and predictable-unpredictable. What matters for decision purposes is whether you can predict. Whether the process is “really” determinate is largely a matter of definition and labeling, an unnecessary philosophical controversy for our purposes (and perhaps for any other purpose) 4.\nThe remote control in the previous demonstration becomes unpredictable — that is, random — even though it still is subject to similar physical processes as when it is predictable. I do not deny in principle that these processes can be “understood,” or that one could produce a machine that would — like a baton twirler — make the course of the remote control predictable for many turns. But in practice we cannot make the predictions — and it is the practical reality, rather than the principle, that matters here.\nWhen I flip the remote control half a turn or one turn, I control (almost completely) whether it comes down at the sofa end or the TV end, so we do not say that the outcome is chance. Much the same can be said about what happens to the predictability of drawing a given card as one increases the number of times one shuffles a deck of cards.\nConsider, too, a set of fake dice that I roll. Before you know they are fake, you assume that the probabilities of various outcomes are a matter of chance. But after you know that the dice are loaded, you no longer assume that the outcome is chance. This illustrates how the probabilities you work with are influenced by your knowledge of the facts of the situation.\nAdmittedly, this way of thinking about probability takes some getting used to. Events may appear to be random, but in fact, we can predict them — and vice versa. For example, suppose a magician does a simple trick with dice such as this one:\n\nThe magician turns her back while a spectator throws three dice on the table. He is instructed to add the faces. He then picks up any one die, adding the number on the bottom to the previous total. This same die is rolled again. The number it now shows is also added to the total. The magician turns around. She calls attention to the fact that she has no way of knowing which of the three dice was used for the second roll. She picks up the dice, shakes them in her hand a moment, then correctly announces the final sum.\n\nMethod: When the spectator rolls the dice, they get three numbers, one from each of the three dice. Call these numbers \\(a\\), \\(b\\) and \\(c\\). Then he chooses one die — it doesn’t matter which, but let’s say he chooses the third die, with value \\(c\\). 
He adds the bottom of the third die to the total. Here’s the trick — the total of opposite faces on a standard die always adds up to 7 — 1 is opposite 6, 2 is opposite 5, and 3 is opposite 4. So the total is now \\(a + b + 7\\). Then the spectator rolls the third die again, to get a new number \\(d\\). The total is now \\(a + b + 7 + d\\). When the magician turns round she can see what \\(a\\) and \\(b\\) and \\(d\\) are, so to get the right final total, she just needs to add 7 (Gardner 1985, p259). Ben Sparks does a nice demonstration of the trick on the Numberphile YouTube channel.\nThe point here is that, until you know the trick, you (like the spectator) cannot predict the final sum, and so you consider the result as random. If you do know the trick, as the magician does, you can predict the result, and it is not random. Whether something is “random” or not depends on what you know.\nConsider the distributions of heights of various groups of living things (including people). When we consider all living things taken together, the shape of the overall distribution — many individuals at the tiny end where the viruses are found, and very few individuals at the tall end where the giraffes are — is determined mostly by the distribution of species that have different mean heights. Hence we can explain the shape of that distribution, and we do not say that it is determined by “chance.” But with a homogeneous cohort of a single species — say, all 25-year-old human females in the U.S. — our best description of the shape of the distribution is “chance.” With situations in between, the shape is partly due to identifiable factors — e.g. age — and partly due to “chance.”\nOr consider the case of a basketball shooter: What causes her or him to make (or not make) a basket on this shot, after a string of successes? Much must be ascribed to chance variation. But what causes a given shooter to be very good or very poor relative to other players? For that explanation we can point to such factors as the amount of practice or natural talent.\nAgain, all this has nothing to do with whether the mechanism is “really” chance, unlike the arguments that have been raging in physics for a century. That is the point of the remote control demonstration. Our knowledge and our power to predict the outcome shift gradually from non-chance (that is, “determined”) to chance (“not determined”), even though the same sort of physical mechanism produces each throw of the remote control.\nEarlier I mentioned that when we say that chance controls the outcome of the remote control flip after (say) five revolutions, we mean that there are many small forces that affect the outcome. The effect of each force is not known, and each is independent of the other. None of these forces is large enough for me (as the remote control twirler) to deal with, or else I would deal with it and be able to improve my control and my ability to predict the outcome. This concept of many small influences — “small” meaning in practice those influences whose effects cannot be identified and allowed for — which affect the outcome and whose effects are not knowable and which are independent of each other is important in statistical inference. For example, as we will see later, when we add many unpredictable deviations together, and plot the distribution of the result, we end up with the famous and very common bell-shaped normal distribution — this striking result comes about because of a mathematical phenomenon called the Central Limit Theorem. 
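As a preview of that claim, here is a minimal sketch (ours, not part of the original text): each simulated outcome is the sum of many small, independent, unpredictable deviations, and the resulting distribution bulges in the middle like the bell-shaped normal curve. The particular choice of 100 uniform deviations per outcome is arbitrary and ours.

```python
# A minimal preview sketch (ours, not from the original text): adding up many
# small, independent, unpredictable deviations gives a bell-shaped distribution.
import numpy as np

rng = np.random.default_rng()

n_influences = 100    # number of small independent influences per outcome
n_outcomes = 10_000   # number of outcomes to simulate

# Each influence is a small deviation between -0.5 and 0.5; each outcome is
# the sum of its 100 influences.
deviations = rng.uniform(-0.5, 0.5, size=(n_outcomes, n_influences))
outcomes = deviations.sum(axis=1)

# A rough text histogram: counts in 10 equal-width bins.  With Matplotlib one
# would instead call plt.hist(outcomes); the counts bulge in the middle bins.
counts, edges = np.histogram(outcomes, bins=10)
print(counts)
```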
We will show this at work, later in the book." + }, + { + "objectID": "what_is_probability.html#randomness-from-the-computer", + "href": "what_is_probability.html#randomness-from-the-computer", + "title": "3  What is probability?", + "section": "3.9 Randomness from the computer", + "text": "3.9 Randomness from the computer\nWe now have the idea of random variation as being variation we cannot predict. For example, when we flip the remote control through many rotations, we can no longer easily predict which end will land in our hand. We can call the result of any particular flip — random — because we cannot predict whether the result will be TV end or sofa end.\nWe still know some things about the result — it will be one of two options — TV or sofa (unless we drop it). But we cannot predict which. We say the result of each flip is random if we cannot do anything to improve our prediction of 50% for TV (or sofa) end on the next flip.\nWe are not saying the result is random in any deep, non-deterministic sense — we are only saying we can treat the result as random, because we cannot predict it.\nNow consider getting random numbers from the computer, where the numbers can either be 0 or 1. This is rather like tossing a fair coin, where the results are 0 and 1 rather than “heads” and “tails”.\nWhen we ask the computer for a random choice between 0 and 1, we accept it is random-enough, or random-like, if we can’t do anything to predict which of 0 or 1 we will get on any one trial. We can’t do better than guessing that the next value will be — say — 0 — and whichever number we guess, we will only ever have a 50% chance of being correct. We are not saying the computer is giving truly random numbers in some deep sense, only numbers we cannot distinguish from truly random numbers, because we cannot do anything to predict them. The technical term for random numbers from the computer is therefore pseudo-random — meaning, like random numbers, in the sense they are effectively unpredictable. Effectively unpredictable means there is no practical way for you, or even a very powerful computer, to do anything to improve your prediction of the next number in the series." + }, + { + "objectID": "what_is_probability.html#the-philosophers-dispute-about-the-concept-of-probability", + "href": "what_is_probability.html#the-philosophers-dispute-about-the-concept-of-probability", + "title": "3  What is probability?", + "section": "3.10 The philosophers’ dispute about the concept of probability", + "text": "3.10 The philosophers’ dispute about the concept of probability\nThose who call themselves “objectivists” or “frequentists” and those who call themselves “personalists” or “Bayesians” have been arguing for hundreds or even thousands of years about the “nature” of probability. The objectivists insist (correctly) that any estimation not based on a series of observations is subject to potential bias, from which they conclude (incorrectly) that we should never think of probability that way. They are worried about the perversion of science, the substitution of arbitrary assessments for value-free data-gathering. The personalists argue (correctly) that in many situations it is not possible to obtain sufficient data to avoid considerable judgment. Indeed, if a probability is about the future, some judgment is always required — about which observations will be relevant, and so on. 
They sometimes conclude (incorrectly) that the objectivists’ worries are unimportant.\nAs is so often the case, the various sides in the argument have different sorts of situations in mind. As we have seen, the arguments disappear if one thinks operationally with respect to the purpose of the work, rather than in terms of properties, as mentioned earlier.\nHere is an example of the difficulty of focusing on the supposed properties of the mechanism or situation: The mathematical theorist asserts that the probability of a die falling with the “5” side up is 1/6, on the basis of the physics of equally-weighted sides. But if one rolls a particular die a million times, and it turns up “5” less than 1/6 of the time, one surely would use the observed proportion as the practical estimate. The probabilities of various outcomes with cheap dice may depend upon the number of pips drilled out on a side. In 20,000 throws of a red die and 20,000 throws of a white die, the proportions of 3’s and 4’s were, respectively, .159 and .146, .145 and .142 — all far below the expected proportions of .167. That is, 3’s and 4’s occurred about 11 percent less often that if the dice had been perfectly formed, a difference that could make a big difference in a gambling game (Bulmer 1979, 18).\nIt is reasonable to think of both the engineering method (the theoretical approach) and the empirical method (experimentation and data collection) as two alternative ways to estimate a probability. The two methods use different processes and different proxies for the probability you wish to estimate. One must adduce additional knowledge to decide which method to use in any given situation. It is sensible to use the empirical method when data are available. (But use both together whenever possible.)\nIn view of the inevitably subjective nature of probability estimates, you may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities. The two terms are working synonyms.\nMost important: One cannot sensibly talk about probabilities in the abstract, without reference to some set of facts. The topic then loses its meaning, and invites confusion and argument. This also is a reason why a general formalization of the probability concept does not make sense." + }, + { + "objectID": "what_is_probability.html#the-relationship-of-probability-to-the-concept-of-resampling", + "href": "what_is_probability.html#the-relationship-of-probability-to-the-concept-of-resampling", + "title": "3  What is probability?", + "section": "3.11 The relationship of probability to the concept of resampling", + "text": "3.11 The relationship of probability to the concept of resampling\nThere is no all-agreed definition of the concept of the resampling method in statistics. Unlike some other writers, I prefer to apply the term to problems in both pure probability and statistics. This set of examples may illustrate:\n\nConsider asking about the number of hits one would expect from a 0.250 (25 percent) batter in a 400 at-bat season. One would call this a problem in “probability.” The sampling distribution of the batter’s results can be calculated by formula or produced by Monte Carlo simulation.\nNow consider examining the number of hits in a given batter’s season, and asking how likely that number (or fewer) is to occur by chance if the batter’s long-run batting average is 0.250. 
One would call this a problem in “statistics.” But just as in example (1) above, the answer can be calculated by formula or produced by Monte Carlo simulation. And the calculation or simulation is exactly the same as used in (1).\nHere the term “resampling” might be applied to the simulation with considerable agreement among people familiar with the term, but perhaps not by all such persons.\nNext consider an observed distribution of distances that a batter’s hits travel in a season with 100 hits, with an observed mean of 150 feet per hit. One might ask how likely it is that a sample of 10 hits drawn with replacement from the observed distribution of hit lengths (with a mean of 150 feet) would have a mean greater than 160 feet, and one could easily produce an answer with repeated Monte Carlo samples. Traditionally this would be called a problem in probability.\nNext consider that a batter gets 10 hits with a mean of 160 feet, and one wishes to estimate the probability that the sample would be produced by a distribution as specified in (3). This is a problem in statistics, and by 1996 it had become common statistical practice to treat it with a resampling method. The actual simulation would, however, be identical to the work described in (3).\n\nBecause the work in (4) and (2) differs only in that question (4) involves measured data and question (2) involves counted data, there seems no reason to discriminate between the two cases with respect to the term “resampling.” With respect to the pairs of cases (1) and (2), and (3) and (4), there is no difference in the actual work performed, though there is a difference in the way the question is framed. I would therefore urge that the label “resampling” be applied to (1) and (3) as well as to (2) and (4), to bring out the important fact that the procedure is the same as in resampling questions in statistics.\nOne could easily produce examples like (1) and (2) for cases that are similar except that the drawing is without replacement, as in the sampling version of Fisher’s permutation test — for example, a tea taster (Fisher 1935; Fisher 1960, chap. II, section 5). And one could adduce the example of prices in different state liquor control systems (see Section 12.16) which is similar to cases (3) and (4) except that sampling without replacement seems appropriate. Again, the analogs to cases (2) and (4) would generally be called “resampling.”\nThe concept of resampling is defined in a more precise way in Section 8.9." + }, + { + "objectID": "what_is_probability.html#conclusion", + "href": "what_is_probability.html#conclusion", + "title": "3  What is probability?", + "section": "3.12 Conclusion", + "text": "3.12 Conclusion\nWe define “chance” as the absence of predictive power and/or explanation and/or control.\nWhen the remote control rotates more than three or four turns I cannot control the outcome — whether TV or sofa end — with any accuracy. That is to say, I cannot predict much better than 50-50 with more than four rotations. So we then say that the outcome is determined by “chance.”\nAs to those persons who wish to inquire into what the situation “really” is: I hope they agree that we do not need to do so to proceed with our work. I hope all will agree that the outcome of flipping the remote control gradually becomes unpredictable (random) though still subject to similar physical processes as when predictable. 
I do not deny in principle that these processes can be “understood”; certainly one can develop a machine (or a baton twirler) that will make the outcome predictable for many turns. But this has nothing to do with whether the mechanism is “really” something one wants to say is influenced by “chance.” This is the point of the remote control demonstration. The outcome traverses from non-chance (determined) to chance (not determined) in a smooth way even though the physical mechanism that produces the revolutions remains much the same over the traverse.\n\n\n\n\nBarnett, Vic. 1982. Comparative Statistical Inference. 2nd ed. Wiley Series in Probability and Mathematical Statistics. Chichester: John Wiley & Sons. https://archive.org/details/comparativestati0000barn.\n\n\nBulmer, M. G. 1979. Principles of Statistics. New York, NY: Dover Publications, Inc. https://archive.org/details/principlesofstat0000bulm.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its Applications: Volume I. 3rd ed. Vol. 1. New York: John Wiley & Sons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nFeynman, Richard P., and Ralph Leighton. 1988. What Do You Care What Other People Think? Further Adventures of a Curious Character. New York, NY: W. W. Norton & Company, Inc. https://archive.org/details/whatdoyoucarewha0000feyn_x5w7.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nGardner, Martin. 1985. Mathematical Magic Show. Harmondsworth: Penguin Books Ltd.\n\n\nMosteller, Frederick. 1987. Fifty Challenging Problems in Probability with Solutions. Courier Corporation.\n\n\nRaiffa, Howard. 1968. “Decision Analysis: Introductory Lectures on Choices Under Uncertainty.” https://archive.org/details/decisionanalysis0000raif.\n\n\nRuark, Arthur Edward, and Harold Clayton Urey. 1930. Atoms, Molecules and Quanta. New York, NY: McGraw-Hill Book Company, Inc. https://archive.org/details/atomsmoleculesqu00ruar.\n\n\nRussell, Bertrand. 1945. A History of Western Philosophy. New York: Simon & Schuster.\n\n\nWhitworth, William Allen. 1897. DCC Exercises in Choice and Chance. Cambridge, UK: Deighton Bell & Co. https://archive.org/details/dccexerciseschoi00whit." + }, + { + "objectID": "about_technology.html#python-and-its-packages", + "href": "about_technology.html#python-and-its-packages", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.1 Python and its packages", + "text": "4.1 Python and its packages\nThis version of the book uses the Python [^python-lang] programming language to implement resampling algorithms.\nPython is a programming language that can be used for many tasks. It is a popular language for teaching, but is also used widely in industry and academia. It is one of the most widely used programming languages in the world, and the most popular language for data science.\nFor many of the initial examples, we will also be using the NumPy [^numpy] package for Python. A package is a library of Python code and data. NumPy is a package that makes it easier to work with sequences of data values, such as sequences of numbers. These are typical in probability and statistics.\nLater, we will be using the Matplotlib [^matplotlib] package. 
This is the main Python package with code for producing plots, such as bar charts, histograms, and scatter plots. See the rest of the book for more details on these plots.\nStill further on in the book, we will use more specialized libraries for data manipulation and analysis. Pandas [^pandas] is the standard Python package for loading data files and working with data tables. SciPy [^scipy] is a package that houses a wide range of numerical routines, including some simple statistical methods. The Statsmodels [^statsmodels] package has code for many more statistical procedures. We will often find ourselves comparing the results of our own resampling algorithms to those in SciPy and Statsmodels." + }, + { + "objectID": "about_technology.html#the-environment", + "href": "about_technology.html#the-environment", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.2 The environment", + "text": "4.2 The environment\nMany of the chapters have sections with code for you to run, and experiment with. These sections contain Jupyter notebooks 1]. Jupyter notebooks are interactive web pages that allow you to read, write and run Python code. We mark the start of each notebook in the text with a note and link heading like the one you see below. In the web edition of this book, you can click on the Download link in this header to download the section as a notebook. You can also click on the Interact link in this header to open the notebook on a cloud computer. This allows you to interact with the notebook on the cloud computer. You can run the code, and experiment by making changes.\nIn the print version of the book, we point you to the web version, to get the links.\nAt the end of this chapter, we explain how to run these notebooks on your own computer. In the next section you will see an example notebook; you might want to run this in the cloud to get started." + }, + { + "objectID": "about_technology.html#getting-started-with-the-notebook", + "href": "about_technology.html#getting-started-with-the-notebook", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.3 Getting started with the notebook", + "text": "4.3 Getting started with the notebook\nThe next section contains a notebook called “Billie’s Bill”. If you are looking at the web edition, you will see links to interact with this notebook in the cloud, or download it to your computer.\n\nStart of billies_bill notebook\n\nDownload notebook\nInteract\n\n\nThe text in this notebook section assumes you have opened the page as an interactive notebook, on your own computer, or one of the Jupyter web interfaces.\nA notebook can contain blocks of text — like this one — as well as code, and the results from running the code.\nIf you are in the notebook interface (rather than reading this in the textbook), you will see the Jupyter menu near the top of the page, with headings “File”, “Edit” and so on.\n\nUnderneath that, by default, you may see a row of icons - the “Toolbar”.\nIn the toolbar, you may see icons to run the current cell, among others.\nTo move from one cell to the next, you can click the run icon in the toolbar, but it is more efficient to press the Shift key, and press Enter (with Shift still held down). 
We will write this as Shift-Enter.\n\nIn this, our first notebook, we will be using Python to solve one of those difficult and troubling problems in life — working out the bill in a restaurant.\n\n4.4 The meal in question\nAlex and Billie are at a restaurant, getting ready to order. They do not have much money, so they are calculating the expected bill before they order.\nAlex is thinking of having the fish for £10.50, and Billie is leaning towards the chicken, at £9.25. First they calculate their combined bill.\nBelow this text you see a code cell. It contains the Python code to calculate the total bill. Press Shift-Enter in the cell below, to see the total.\n\n10.50 + 9.25\n\n19.75\n\n\nThe contents of the cell above is Python code. As you would predict, Python understands numbers like 10.50, and it understands + between the numbers as an instruction to add the numbers.\nWhen you press Shift-Enter, Python finds 10.50, realizes it is a number, and stores that number somewhere in memory. It does the same thing for 9.25, and then it runs the addition operation on these two numbers in memory, which gives the number 19.75.\nFinally, Python sends the resulting number (19.75) back to the notebook for display. The notebook detects that Python sent back a value, and shows it to us.\nThis is exactly what a calculator would do.\n\n\n4.5 Comments\nUnlike a calculator, we can also put notes next to our calculations, to remind us what they are for. One way of doing this is to use a “comment”. You have already seen comments in the previous chapter.\nA comment is some text that the computer will ignore. In Python, you can make a comment by starting a line with the # (hash) character. For example, the next cell is a code cell, but when you run it, it does not show any result. In this case, that is because the computer sees the # at the beginning of the line, and then ignores the rest.\n\n# This bit of text is for me to read, and the computer to ignore.\n\nMany of the code cells you see will have comments in them, to explain what the code is doing.\nPractice writing comments for your own code. It is a very good habit to get into. You will find that experienced programmers write many comments on their code. They do not do this to show off, but because they have a lot of experience in reading code, and they know that comments make it much easier to read and understand code.\n\n\n4.6 More calculations\nLet us continue with the struggle that Alex and Billie are having with their bill.\nThey realize that they will also need to pay a tip.\nThey think it would be reasonable to leave a 15% tip. Now they need to multiply their total bill by 0.15, to get the tip. The bill is about £20, so they know that the tip will be about £3.\nIn Python * means multiplication. This is the equivalent of the “×” key on a calculator.\nWhat about this, for the correct calculation?\n\n# The tip - with a nasty mistake.\n10.50 + 9.25 * 0.15\n\n11.8875\n\n\nOh dear, no, that isn’t doing the right calculation.\nPython follows the normal rules of precedence with calculations. These rules tell us to do multiplication before addition.\nSee https://en.wikipedia.org/wiki/Order_of_operations for more detail on the standard rules.\nIn the case above the rules tell Python to first calculate 9.25 * 0.15 (to get 1.3875) and then to add the result to 10.50, giving 11.8875.\nWe need to tell Python we want it to do the addition and then the multiplication. 
We do this with round brackets (parentheses):\n\n\n\n\n\n\n\n\n\n\nThere are three types of brackets in Python.\nThese are:\n\nround brackets or parentheses: ();\nsquare brackets: [];\ncurly brackets: {}.\n\nEach type of bracket has a different meaning in Python. In the examples, play close to attention to the type of brackets we are using.\n\n\n\n# The bill plus tip - mistake fixed.\n(10.50 + 9.25) * 0.15\n\n2.9625\n\n\nThe obvious next step is to calculate the bill including the tip.\n\n# The bill, including the tip\n10.50 + 9.25 + (10.50 + 9.25) * 0.15\n\n22.7125\n\n\nAt this stage we start to feel that we are doing too much typing. Notice that we had to type out 10.50 + 9.25 twice there. That is a little boring, but it also makes it easier to make mistakes. The more we have to type, the greater the chance we have to make a mistake.\nTo make things simpler, we would like to be able to store the result of the calculation 10.50 + 9.25, and then re-use this value, to calculate the tip.\nThis is the role of variables. A variable is a value with a name.\nHere is a variable:\n\n# The cost of Alex's meal.\na = 10.50\n\na is a name we give to the value 10.50. You can read the line above as “The variable a gets the value 10.50”. We can also talk of setting the variable. Here we are setting a to equal 10.50.\nNow, when we use a in code, it refers to the value we gave it. For example, we can put a on a line on its own, and Python will show us the value of a:\n\n# The value of a\na\n\n10.5\n\n\nWe did not have to use the name a — we can choose almost any name we like. For example, we could have chosen alex_meal instead:\n\n# The cost of Alex's meal.\n# alex_meal gets the value 10.50\nalex_meal = 10.50\n\nWe often set variables like this, and then display the result, all in the same cell. We do this by first setting the variable, as above, and then, on the final line of the cell, we put the variable name on a line on its own, to ask Python to show us the value of the variable. Here we set billie_meal to have the value 9.25, and then show the value of billie_meal, all in the same cell.\n\n# The cost of Billie's meal.\nbillie_meal = 9.25\n# Show the value of billies_meal\nbillie_meal\n\n9.25\n\n\nOf course, here, we did not learn much, but we often set variable values with the results of a calculation. For example:\n\n# The cost of both meals, before tip.\nbill_before_tip = 10.50 + 9.25\n# Show the value of both meals.\nbill_before_tip\n\n19.75\n\n\nBut wait — we can do better than typing in the calculation like this. We can use the values of our variables, instead of typing in the values again.\n\n# The cost of both meals, before tip, using variables.\nbill_before_tip = alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n19.75\n\n\nWe make the calculation clearer by writing the calculation this way — we are calculating the bill before the tip by adding the cost of Alex’s and Billie’s meal — and that’s what the code looks like. But this also allows us to change the variable value, and recalculate. For example, say Alex decided to go for the hummus plate, at £7.75. Now we can tell Python that we want alex_meal to have the value 7.75 instead of 10.50:\n\n# The new cost of Alex's meal.\n# alex_meal gets the value 7.75\nalex_meal = 7.75\n# Show the value of alex_meal\nalex_meal\n\n7.75\n\n\nNotice that alex_meal now has a new value. It was 10.50, but now it is 7.75. We have reset the value of alex_meal. 
In order to use the new value for alex_meal, we must recalculate the bill before tip with exactly the same code as before:\n\n# The new cost of both meals, before tip.\nbill_before_tip = alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n17.0\n\n\nNotice that, now we have rerun this calculation, we have reset the value for bill_before_tip to the correct value corresponding to the new value for alex_meal.\nAll that remains is to recalculate the bill plus tip, using the new value for the variable:\n\n# The cost of both meals, after tip.\nbill_after_tip = bill_before_tip + bill_before_tip * 0.15\n# Show the value of both meals, after tip.\nbill_after_tip\n\n19.55\n\n\nNow we are using variables with relevant names, the calculation looks right to our eye. The code expresses the calculation as we mean it: the bill after tip is equal to the bill before the tip, plus the bill before the tip times 0.15.\n\n\n4.7 And so, on\nNow you have done some practice with the notebook, and with variables, you are ready for a new problem in probability and statistics, in the next chapter.\nEnd of billies_bill notebook" + }, + { + "objectID": "about_technology.html#the-meal-in-question", + "href": "about_technology.html#the-meal-in-question", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.4 The meal in question", + "text": "4.4 The meal in question\nAlex and Billie are at a restaurant, getting ready to order. They do not have much money, so they are calculating the expected bill before they order.\nAlex is thinking of having the fish for £10.50, and Billie is leaning towards the chicken, at £9.25. First they calculate their combined bill.\nBelow this text you see a code cell. It contains the Python code to calculate the total bill. Press Shift-Enter in the cell below, to see the total.\n\n10.50 + 9.25\n\n19.75\n\n\nThe contents of the cell above is Python code. As you would predict, Python understands numbers like 10.50, and it understands + between the numbers as an instruction to add the numbers.\nWhen you press Shift-Enter, Python finds 10.50, realizes it is a number, and stores that number somewhere in memory. It does the same thing for 9.25, and then it runs the addition operation on these two numbers in memory, which gives the number 19.75.\nFinally, Python sends the resulting number (19.75) back to the notebook for display. The notebook detects that Python sent back a value, and shows it to us.\nThis is exactly what a calculator would do." + }, + { + "objectID": "about_technology.html#comments", + "href": "about_technology.html#comments", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.5 Comments", + "text": "4.5 Comments\nUnlike a calculator, we can also put notes next to our calculations, to remind us what they are for. One way of doing this is to use a “comment”. You have already seen comments in the previous chapter.\nA comment is some text that the computer will ignore. In Python, you can make a comment by starting a line with the # (hash) character. For example, the next cell is a code cell, but when you run it, it does not show any result. In this case, that is because the computer sees the # at the beginning of the line, and then ignores the rest.\n\n# This bit of text is for me to read, and the computer to ignore.\n\nMany of the code cells you see will have comments in them, to explain what the code is doing.\nPractice writing comments for your own code. 
It is a very good habit to get into. You will find that experienced programmers write many comments on their code. They do not do this to show off, but because they have a lot of experience in reading code, and they know that comments make it much easier to read and understand code." + }, + { + "objectID": "about_technology.html#more-calculations", + "href": "about_technology.html#more-calculations", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.6 More calculations", + "text": "4.6 More calculations\nLet us continue with the struggle that Alex and Billie are having with their bill.\nThey realize that they will also need to pay a tip.\nThey think it would be reasonable to leave a 15% tip. Now they need to multiply their total bill by 0.15, to get the tip. The bill is about £20, so they know that the tip will be about £3.\nIn Python * means multiplication. This is the equivalent of the “×” key on a calculator.\nWhat about this, for the correct calculation?\n\n# The tip - with a nasty mistake.\n10.50 + 9.25 * 0.15\n\n11.8875\n\n\nOh dear, no, that isn’t doing the right calculation.\nPython follows the normal rules of precedence with calculations. These rules tell us to do multiplication before addition.\nSee https://en.wikipedia.org/wiki/Order_of_operations for more detail on the standard rules.\nIn the case above the rules tell Python to first calculate 9.25 * 0.15 (to get 1.3875) and then to add the result to 10.50, giving 11.8875.\nWe need to tell Python we want it to do the addition and then the multiplication. We do this with round brackets (parentheses):\n\n\n\n\n\n\n\n\n\n\nThere are three types of brackets in Python.\nThese are:\n\nround brackets or parentheses: ();\nsquare brackets: [];\ncurly brackets: {}.\n\nEach type of bracket has a different meaning in Python. In the examples, play close to attention to the type of brackets we are using.\n\n\n\n# The bill plus tip - mistake fixed.\n(10.50 + 9.25) * 0.15\n\n2.9625\n\n\nThe obvious next step is to calculate the bill including the tip.\n\n# The bill, including the tip\n10.50 + 9.25 + (10.50 + 9.25) * 0.15\n\n22.7125\n\n\nAt this stage we start to feel that we are doing too much typing. Notice that we had to type out 10.50 + 9.25 twice there. That is a little boring, but it also makes it easier to make mistakes. The more we have to type, the greater the chance we have to make a mistake.\nTo make things simpler, we would like to be able to store the result of the calculation 10.50 + 9.25, and then re-use this value, to calculate the tip.\nThis is the role of variables. A variable is a value with a name.\nHere is a variable:\n\n# The cost of Alex's meal.\na = 10.50\n\na is a name we give to the value 10.50. You can read the line above as “The variable a gets the value 10.50”. We can also talk of setting the variable. Here we are setting a to equal 10.50.\nNow, when we use a in code, it refers to the value we gave it. For example, we can put a on a line on its own, and Python will show us the value of a:\n\n# The value of a\na\n\n10.5\n\n\nWe did not have to use the name a — we can choose almost any name we like. For example, we could have chosen alex_meal instead:\n\n# The cost of Alex's meal.\n# alex_meal gets the value 10.50\nalex_meal = 10.50\n\nWe often set variables like this, and then display the result, all in the same cell. 
We do this by first setting the variable, as above, and then, on the final line of the cell, we put the variable name on a line on its own, to ask Python to show us the value of the variable. Here we set billie_meal to have the value 9.25, and then show the value of billie_meal, all in the same cell.\n\n# The cost of Billie's meal.\nbillie_meal = 9.25\n# Show the value of billies_meal\nbillie_meal\n\n9.25\n\n\nOf course, here, we did not learn much, but we often set variable values with the results of a calculation. For example:\n\n# The cost of both meals, before tip.\nbill_before_tip = 10.50 + 9.25\n# Show the value of both meals.\nbill_before_tip\n\n19.75\n\n\nBut wait — we can do better than typing in the calculation like this. We can use the values of our variables, instead of typing in the values again.\n\n# The cost of both meals, before tip, using variables.\nbill_before_tip = alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n19.75\n\n\nWe make the calculation clearer by writing the calculation this way — we are calculating the bill before the tip by adding the cost of Alex’s and Billie’s meal — and that’s what the code looks like. But this also allows us to change the variable value, and recalculate. For example, say Alex decided to go for the hummus plate, at £7.75. Now we can tell Python that we want alex_meal to have the value 7.75 instead of 10.50:\n\n# The new cost of Alex's meal.\n# alex_meal gets the value 7.75\nalex_meal = 7.75\n# Show the value of alex_meal\nalex_meal\n\n7.75\n\n\nNotice that alex_meal now has a new value. It was 10.50, but now it is 7.75. We have reset the value of alex_meal. In order to use the new value for alex_meal, we must recalculate the bill before tip with exactly the same code as before:\n\n# The new cost of both meals, before tip.\nbill_before_tip = alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n17.0\n\n\nNotice that, now we have rerun this calculation, we have reset the value for bill_before_tip to the correct value corresponding to the new value for alex_meal.\nAll that remains is to recalculate the bill plus tip, using the new value for the variable:\n\n# The cost of both meals, after tip.\nbill_after_tip = bill_before_tip + bill_before_tip * 0.15\n# Show the value of both meals, after tip.\nbill_after_tip\n\n19.55\n\n\nNow we are using variables with relevant names, the calculation looks right to our eye. The code expresses the calculation as we mean it: the bill after tip is equal to the bill before the tip, plus the bill before the tip times 0.15." + }, + { + "objectID": "about_technology.html#and-so-on", + "href": "about_technology.html#and-so-on", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.7 And so, on", + "text": "4.7 And so, on\nNow you have done some practice with the notebook, and with variables, you are ready for a new problem in probability and statistics, in the next chapter.\nEnd of billies_bill notebook" + }, + { + "objectID": "about_technology.html#running-the-code-on-your-own-computer", + "href": "about_technology.html#running-the-code-on-your-own-computer", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.8 Running the code on your own computer", + "text": "4.8 Running the code on your own computer\nMany people, including your humble authors, like to be able to run code examples on their own computers. 
This section explains how you can set up to run the notebooks on your own computer.\nOnce you have done this setup, you can use the “download” link\n\nYou will need to install the Python language on your computer, and then install the following packages:\n\nNumPy\nMatplotlib - for plots\nSciPy - a collection of modules for scientific computing;\nPandas - for loading, saving and manipulating data tables;\nStatsmodels - for traditional statistical analysis.\nJupyter - to run the Jupyter Notebook on your own computer.\n\nOne easy way to all install all these packages on Windows, Mac or Linux, is to use the Anaconda Python distribution [^anaconda_distro]. Anaconda provides a single installer that will install Python and all the packages above, by default.\nAnother method is to install Python from the Python website [^python-lang]. Then use the Pip [^pip] installer to install the packages you need.\nTo use Pip, start a terminal (Start key, “cmd” in Windows, Command key and space then “Terminal” on Mac), and then, at the prompt, type:\nNow you should be able to start the Jupyter notebook application. See the Jupyter documentation for how to start Jupyter. Open the notebook you downloaded for the chapter; you will now be able to run the code on your own computer, and experiment by making changes." + }, + { + "objectID": "resampling_with_code.html#statistics-and-probability", + "href": "resampling_with_code.html#statistics-and-probability", + "title": "5  Resampling with code", + "section": "5.1 Statistics and probability", + "text": "5.1 Statistics and probability\nWe have already emphasized that statistics is a way of drawing conclusions about data from the real world, in the presence of random variation; probability is the way of reasoning about random variation. This chapter introduces our first statistical problem, where we use probability to draw conclusions about some important data — about a potential cure for a type of cancer. We will not make much of the distinction between probability and statistics here, but we will come back to it several times in later chapters." + }, + { + "objectID": "resampling_with_code.html#a-new-treatment-for-burkitt-lymphoma", + "href": "resampling_with_code.html#a-new-treatment-for-burkitt-lymphoma", + "title": "5  Resampling with code", + "section": "5.2 A new treatment for Burkitt lymphoma", + "text": "5.2 A new treatment for Burkitt lymphoma\nBurkitt lymphoma is an unusual cancer of the lymphatic system. The lymphatic system is a vein-like network throughout the body that is involved in the immune reaction to disease. In developed countries, with standard treatment, the cure rate for Burkitt lymphoma is about 90%.\nIn 2006, researchers at the US National Cancer Institute (NCI), tested a new treatment for Burkitt lymphoma (Dunleavy et al. 2006). They gave the new treatment to 17 patients, and found that all 17 patients were doing well after two years or more of follow up. By “doing well”, we mean that their lymphoma had not progressed; as a short-hand, we will say that these patients were “cured”, but of course, we do not know what happened to them after this follow up.\nHere is where we put on our statistical hat and ask ourselves the following question — how surprised are we that the NCI researchers saw their result of 17 out of 17 patients cured?\nAt this stage you might and should ask, what could we possibly mean by “surprised”? That is a good and important question, and we will discuss that much more in the chapters to come. 
For now, please bear with us as we do a thought experiment.\nLet us forget the 17 out of 17 result of the NCI study for a moment. Imagine that there is another hospital, called Saint Hypothetical General, just down the road from the NCI, that was also treating 17 patients with Burkitt lymphoma. Saint Hypothetical were not using the NCI treatment, they were using the standard treatment.\nWe already know that each patient given the standard treatment has a 90% chance of cure. Given that 90% cure rate, what is the chance that 17 out of 17 of the Hypothetical group will be cured?\nYou may notice that this question about the Hypothetical group is similar to the problem of the 20 ambulances in Chapter Chapter 2. In that problem, we were interested to know how likely it was that 3 or more of 20 ambulances would be out of action on any one day, given that each ambulance had a 10% chance of being out of action. Here we would like to know the chances that all 17 patients would be cured, given that each patient has a 90% chance of being cured." + }, + { + "objectID": "resampling_with_code.html#a-physical-model-of-the-hypothetical-hospital", + "href": "resampling_with_code.html#a-physical-model-of-the-hypothetical-hospital", + "title": "5  Resampling with code", + "section": "5.3 A physical model of the hypothetical hospital", + "text": "5.3 A physical model of the hypothetical hospital\nAs in the ambulance example, we could make a physical model of chance in this world. For example, to simulate whether a given patient is cured or not by a 90% effective treatment, we could throw a ten sided die and record the result. We could say, arbitrarily, that a result of 0 means “not cured”, and all the numbers 1 through 9 mean “cured” (typical 10-sided dice have sides numbered 0 through 9).\nWe could roll 17 dice to simulate one “trial” in this random world. For each trial, we record the number of dice that show numbers 1 through 9 (and not 0). This will be a number between 0 and 17, and it is the number of patients “cured” in our simulated trial.\nFigure 5.1 is the result of one such trial we did with a set of 17 10-sided dice we happened to have to hand:\n\n\n\nFigure 5.1: One roll of 17 10-sided dice\n\n\nThe trial in Figure 5.1 shows are four dice with the 0 face uppermost, and the rest with numbers from 1 through 9. Therefore, there were 13 out of 17 not-zero numbers, meaning that 13 out of 17 simulated “patients” were “cured” in this simulated trial.\n\nWe could repeat this simulated trial procedure 100 times, and we would then have 100 counts of the not-zero numbers. Each of the 100 counts would be the number of patients cured in that trial. We can ask how many of these 100 counts were equal to 17. This will give us an estimate of the probability we would see 17 out of 17 patients cured, given that any one patient has a 90% chance of cure. For example, say we saw 15 out of 100 counts were equal to 17. That would give us an estimate of 15 / 100 or 0.15 or 15%, for the probability we would see 17 out of 17 patients cured.\nSo, if Saint Hypothetical General did see 17 out of 17 patients cured with the standard treatment, they would be a little surprised, because they would only expect to see that happen 15% of the time. But they would not be very surprised — 15% of the time is uncommon, but not very uncommon." 
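The sections that follow build up code for exactly this simulation, one step at a time. As a preview only, here is a minimal NumPy sketch of the dice procedure just described — 100 simulated trials of 17 ten-sided dice — where the variable names are our own choices, not fixed by the text:

import numpy as np

# Ask for NumPy's default random number generator.
rnd = np.random.default_rng()

# An array to hold the count of "cured" patients for each of 100 trials.
z = np.zeros(100)
for i in np.arange(100):
    # One trial: 17 random digits from 0 through 9 (like 17 ten-sided dice).
    rolls = rnd.choice(np.arange(10), 17)
    # A digit of 1 through 9 counts as "cured"; 0 counts as "not cured".
    z[i] = np.sum(rolls > 0)
# Proportion of the 100 trials in which all 17 simulated patients were "cured".
np.sum(z == 17) / 100

The result varies from run to run, but is typically close to the 15% (roughly one-in-six) figure used in the example above.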
+ }, + { + "objectID": "resampling_with_code.html#a-trial-a-run-a-count-and-a-proportion", + "href": "resampling_with_code.html#a-trial-a-run-a-count-and-a-proportion", + "title": "5  Resampling with code", + "section": "5.4 A trial, a run, a count and a proportion", + "text": "5.4 A trial, a run, a count and a proportion\nHere we stop to emphasize the steps in the process of a random simulation.\n\nWe decide what we mean by one trial. Here one trial has the same meaning in medicine as resampling — we mean the result of treating 17 patients. One simulated trial is then the simulation of one set of outcomes from 17 patients.\nWork out the outcome of interest from the trial. The outcome here is the number of patients cured.\nWe work out a way to simulate one trial. Here we chose to throw 17 10-sided dice, and count the number of not zero values. This is the outcome from one simulation trial.\nWe repeat the simulated trial procedure many times, and collect the results from each trial. Say we repeat the trial procedure 100 times; we will call this a run of 100 trials.\nWe count the number of trials with an outcome that matches the outcome we are interested in. In this case we are interested in the outcome 17 out of 17 cured, so we count the number of trials with a score of 17. Say 15 out of the run of 100 trials had an outcome of 17 cured. That is our count.\nFinally we divide the count by the number of trials to get the proportion. From the example above, we divide 15 by 100 to 0.15 (15%). This is our estimate of the chance of seeing 17 out of 17 patients cured in any one trial. We can also call this an estimate of the probability that 17 out of 17 patients will be cured on any on trial.\n\nOur next step is to work out the code for step 2: simulate one trial." + }, + { + "objectID": "resampling_with_code.html#simulate-one-trial-with-code", + "href": "resampling_with_code.html#simulate-one-trial-with-code", + "title": "5  Resampling with code", + "section": "5.5 Simulate one trial with code", + "text": "5.5 Simulate one trial with code\nWe can use the computer to do something very similar to rolling 17 10-sided dice, by asking the computer for 17 random whole numbers from 0 through 9.\n\n\n\n\n\n\nWhole numbers\n\n\n\nA whole number is a number that is not negative, and does not have fractional part (does not have anything after a decimal point). 0 and 1 and 2 and 3 are whole numbers, but -1 and \\(\\frac{3}{5}\\) and 11.3 are not. The whole numbers from 0 through 9 are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.\n\n\nWe have already discussed what we mean by random in Section 2.2.\n\nWe will be asking the computer to generate many random numbers. So, before we start, we again import NumPy and get its random number generator:\n\nimport numpy as np\n\n# Ask for NumPy's default random number generator and name\n# it `rnd`. `rnd` is short for \"random\".\nrnd = np.random.default_rng()" + }, + { + "objectID": "resampling_with_code.html#from-numbers-to-s", + "href": "resampling_with_code.html#from-numbers-to-s", + "title": "5  Resampling with code", + "section": "5.6 From numbers to arrays", + "text": "5.6 From numbers to arrays\nWe next need to prepare the sequence of numbers that we want NumPy to select from.\nWe have already seen the idea that Python has values that are individual numbers. Remember, a variable is a named value. Here we attach the name a to the value 1.\n\na = 1\n# Show the value of \"a\"\na\n\n1\n\n\nNumPy also allows values that are sequences of numbers. 
NumPy calls these sequences arrays.\nHere we make a array that contains the 10 numbers we will select from:\n\n# Make an array of numbers, store with the name \"some_numbers\".\nsome_numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n# Show the value of \"some_numbers\"\nsome_numbers\n\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\nNotice that the value for some_numbers is an array, and that this value contains 10 numbers.\nPut another way, some_numbers is now the name we can use for this collection of 10 values.\nArrays are very useful for simulations and data analysis, and we will be using these for nearly every example in this book." + }, + { + "objectID": "resampling_with_code.html#sec-introducing-functions", + "href": "resampling_with_code.html#sec-introducing-functions", + "title": "5  Resampling with code", + "section": "5.7 Functions", + "text": "5.7 Functions\nFunctions are another tool that we will be using everywhere, and that you seen already, although we have not introduced them until now.\nYou can think of functions as named production lines.\nFor example, consider the Python function np.round\n\n\n\n\n# We load the Numpy library so we have access to the Numpy functions.\nimport numpy as np\n\nnp.round is the name for a simple production line, that takes in a number, and (by default) sends back the number rounded to the nearest integer.\n\n\n\n\n\n\nWhat is an integer?\n\n\n\nAn integer is a positive or negative whole number.\nIn other words, a number is an integer if the number is either a whole number (0, 1, 2 …), or a negative whole number (-1, -2, -3 …). All of -208, -2, 0, 10, 105 are integers, but \\(\\frac{3}{5}\\), -10.3 and 0.2 are not.\nWe will use the term integer fairly often, because it is a convenient way to name all the positive and negative whole numbers.\n\n\nThink of a function as a named production line. We sent the function (production line) raw material (components) to work on. The production line does some work on the components. A finished result comes off the other end.\nTherefore, think of np.round as the name of a production line, that takes in a component (in this case, any number), and does some work, and sends back the finished result (in this case, the number rounded to the nearest integer.\nThe components we send to a function are called arguments. The finished result the function sends back is the return value.\n\nArguments : the value or values we send to a function.\nReturn value : the values the function sends back.\n\nSee Figure 5.2 for an illustration of np.round as a production line.\n\n\n\n\n\nFigure 5.2: The round function as a production line\n\n\n\n\nIn the next few code cells, you see examples where np.round takes in a not-integer number, as an argument, and sends back the nearest integer as the return value:\n\n# Put in 3.2, round sends back 3.\nnp.round(3.2)\n\n3.0\n\n\n\n# Put in -2.7, round sends back -3.\nnp.round(-2.7)\n\n-3.0\n\n\nLike many functions, np.round can take more than one argument (component). You can send range the number of digits you want to round to, after the number of you want it to work on, like this (see Figure 5.3):\n\n# Put in 3.1415, and the number of digits to round to (2).\n# round sends back 3.14\nnp.round(3.1415, 2)\n\n3.14\n\n\n\n\n\n\n\nFigure 5.3: round with optional arguments specifying number of digits\n\n\n\n\nNotice that the second argument — here 2 — is optional. We only have to send round one argument: the number we want it to round. 
But we can optionally send it a second argument — the number of decimal places we want it to round to. If we don’t specify the second argument, then round assumes we want to round to 0 decimal places, and therefore, to the nearest integer." + }, + { + "objectID": "resampling_with_code.html#sec-named-arguments", + "href": "resampling_with_code.html#sec-named-arguments", + "title": "5  Resampling with code", + "section": "5.8 Functions and named arguments", + "text": "5.8 Functions and named arguments\nIn the example above, we sent round two arguments. round knows that we mean the first argument to be the number we want to round, and the second argument is the number of decimal places we want to round to. It knows which is which by the position of the arguments — the first argument is the number it should round, and second is the number of digits.\nIn fact, internally, the round function also gives these arguments names. It calls the number it should round — a — and the number of digits it should round to — decimals. This is useful, because it is often clearer and simpler to identify the argument we are specifying with its name, instead of just relying on its position.\nIf we aren’t using the argument names, we call the round function as we did above:\n\n# Put in 3.1415, and the number of digits to round to (2).\n# round sends back 3.14\nnp.round(3.1415, 2)\n\n3.14\n\n\nIn this call, we relied on the fact that we, the people writing the code, and you, the person reading the code, remembers that the second argument (2) means the number of decimal places it should round to. But, we can also specify the argument using its name, like this (see Figure 5.5):\n\n# Put in 3.1415, and the number of digits to round to (2).\n# Use the name of the number-of-decimals argument for clarity:\nnp.round(3.1415, decimals=2)\n\n3.14\n\n\n\n\n\n\n\nFigure 5.4: The round function with argument names\n\n\n\n\n\n\n\n\n\nFigure 5.5: The np.round function with argument names\n\n\n\n\nHere Python sees the first argument, as before, and assumes that it is the number we want to round. Then it sees the second, named argument — decimals=2 — and knows, from the name, that we mean this to be the number of decimals to round to.\nIn fact, we could even specify both arguments by name, like this:\n\n# Put in 3.1415, and the number of digits to round to (2).\nnp.round(a=3.1415, decimals=2)\n\n3.14\n\n\nWe don’t usually name both arguments for round, as we have above, because it is so obvious that the first argument is the thing we want to round, and so naming the argument does not make it any more clear what the code is doing. But — as so often in programming — whether to use the names, or let Python work out which argument is which by position, is a judgment call. The judgment you are making is about the way to write the code to be most clear for your reader, where your most important reader may be you, coming back to the code in a week or a year.\n\n\n\n\n\n\nHow do you know what names to use for the function arguments?\n\n\n\nYou can find the names of the function arguments in the help for the function, either online, or in the notebook interface. For example, to get the help for np.round, including the argument names, you could make a new cell, and type np.round?, then execute the cell by pressing Shift-Enter. This will show the help for the function in the notebook interface." 
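One useful consequence of naming arguments — this is standard Python keyword-argument behavior, not shown in the cells above — is that, once every argument is named, the order in which we give them no longer matters. A small sketch:

import numpy as np

# Both arguments named, given in the "wrong" order; the names,
# not the positions, tell np.round which value is which.
np.round(decimals=2, a=3.1415)

3.14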
+ }, + { + "objectID": "resampling_with_code.html#sec-ranges", + "href": "resampling_with_code.html#sec-ranges", + "title": "5  Resampling with code", + "section": "5.9 Ranges", + "text": "5.9 Ranges\nNow let us return to the variable some_numbers that we created above:\n\n# Make an array of numbers, store with the name \"some_numbers\".\nsome_numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n# Show the value of \"some_numbers\"\nsome_numbers\n\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\nIn fact, we often need to do this: generate a sequence or range of integers, such as 0 through 9.\n\n\n\n\n\n\nPick a number from 1 through 5\n\n\n\nRanges can be confusing in normal speech because it is not always clear whether they include their beginning and end. For example, if someone says “pick a number between 1 and 5”, do they mean all the numbers, including the first and last (any of 1 or 2 or 3 or 4 or 5)? Or do they mean only the numbers that are between 1 and 5 (so 2 or 3 or 4)? Or do they mean all the numbers up to, but not including 5 (so 1 or 2 or 3 or 4)?\nTo avoid this confusion, we will nearly always use “from” and “through” in ranges, meaning that we do include both the start and the end number. For example, if we say “pick a number from 1 through 5” we mean one of 1 or 2 or 3 or 4 or 5.\n\n\nCreating ranges of numbers is so common that Python has a standard Numpy function np.arange to do that.\n\n# An array containing all the numbers from 0 through 9.\nsome_numbers = np.arange(0, 10)\nsome_numbers\n\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\n\nNotice that we send np.arange the arguments 0 and 10. The first argument, here 0, is the start value. The second argument, here 10, is the stop value. Numpy (in the arange function) understands this to mean: start at 0 (the start value) and go up to but do not include 10 (the stop value).\nYou can therefore read np.arange(0, 10) as “the sequence of integers starting at 0, up to, but not including 10”.\nLike np.round, the arguments to np.arange also have names, so, we could also write:\n\n# An array containing all the numbers from 0 through 9.\n# Now using named arguments.\nsome_numbers = np.arange(start=0, stop=10)\nsome_numbers\n\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\nSo far, we have sent arange two arguments, but we can also send just one argument, like this:\n\n# An array containing all the numbers from 0 through 9.\nsome_integers = np.arange(10)\nsome_integers\n\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\nWhen we sent arange a single argument, like this, arange understands this to mean we have sent just the stop value, and that is should assume a start value of 0.\nAgain, if we wanted, we could send this argument by name:\n\n# An array containing all the numbers from 0 through 9.\n# Specify the stop value by explicit name, for clarity.\nsome_integers = np.arange(stop=10)\nsome_integers\n\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\n\nHere are some more examples of np.arange:\n\n# All the integers starting at 10, up to, but not including 15.\n# In other words, 10 through 14.\nnp.arange(10, 15)\n\narray([10, 11, 12, 13, 14])\n\n\n\n# Here we are only sending one value (7). 
np.arange understands this to be\n# the stop value, and assumes 0 as the start value.\n# In other words, 0 through 6\nnp.arange(7)\n\narray([0, 1, 2, 3, 4, 5, 6])" + }, + { + "objectID": "resampling_with_code.html#sec-python-range", + "href": "resampling_with_code.html#sec-python-range", + "title": "5  Resampling with code", + "section": "5.10 range in Python", + "text": "5.10 range in Python\nSo far you have seen ranges of integers using np.arange. The np. prefix refers to the fact that np.arange is a function from the Numpy module (library). The a in arange signals that the result np.arange returns is an array:\n\narr = np.arange(7)\n# Show the result\narr\n\narray([0, 1, 2, 3, 4, 5, 6])\n\n\n\n# Show what type of thing this is.\ntype(arr)\n\n<class 'numpy.ndarray'>\n\n\nWe do often use np.arange to get a range of integers in a convenient array format, but Python has another way of getting a range of integers — the range function.\nThe range function is very similar to np.arange, but it is not part of Numpy — it is basic function in Python — and it does not return an array of numbers, it returns something else. Here we ask for a range from 0 through 6 (0 up to, but not including 7):\n\n# Notice no `np.` before `range`.\nr = range(7)\nr\n\nrange(0, 7)\n\n\nNotice that the thing that came back is something that represents or stands in for the number 0 through 6. It is not an array, but a specific type of thing called — a range:\n\ntype(r)\n\n<class 'range'>\n\n\nThe range above is a container for the numbers 0 through 6. We can get the numbers out of the container in many different ways, but one of them is to convert this container to an array, using the np.array function. The np.array function takes the thing we pass it, and makes it into an array. When we apply np.array to r above, we get the numbers that r contains:\n\n# Get the numbers from the range `r`, convert to an array.\na_from_r = np.array(r)\n# Show the result\na_from_r\n\narray([0, 1, 2, 3, 4, 5, 6])\n\n\nThe range function has the same start and stop arguments that np.arange does, and with the same meaning:\n\n# 3 up to, not including 12.\n# (3 through 11)\nr_2 = range(3, 12)\nr_2\n\nrange(3, 12)\n\n\n\nnp.array(r_2)\n\narray([ 3, 4, 5, 6, 7, 8, 9, 10, 11])\n\n\nYou may reasonably ask — why do I need this range thing, if I have the very similar np.arange? The answer is — you don’t need range, and you can always use np.arange where you would use range, but for reasons we will go into later (Section 7.6.3), range is a good option when we want to represent a sequence of numbers as input to a for loop. We cover for loops in more detail in Section 7.6.2, but for now, the only thing to remember is that range and np.arange are both ways of expressing sequential ranges of integers." 
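As a small taste of why range works well as input to a for loop — a topic the book returns to properly in Chapter 7 — here is a minimal sketch:

# The loop body runs once for each number in the range 0 through 2.
for i in range(3):
    print(i)

0
1
2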
+ }, + { + "objectID": "resampling_with_code.html#sec-random-choice", + "href": "resampling_with_code.html#sec-random-choice", + "title": "5  Resampling with code", + "section": "5.11 Choosing values at random", + "text": "5.11 Choosing values at random\nWe can use the rnd.choice function to select a single value at random from the sequence of numbers in some_integers.\n\n\n\n\n\n\nMore on rnd.choice\n\n\n\nThe rnd.choice function will be a fundamental tool for taking many kinds of samples, and we cover it in more detail in Chapter 6.\n\n\n\n# Select an integer from the choices in some_integers.\nmy_integer = rnd.choice(some_integers)\n# Show the value that results.\nmy_integer\n\n5\n\n\nLike np.round (above), rnd.choice is a function.\n\n\n\n\n\n\n\nFunctions and methods\n\n\n\nActually, to be precise, we should call rnd.choice a method. A method is a function attached to a value. In this case the function choice is attached to the value rnd. That’s not an important distinction for us at the moment, so please forgive our strategic imprecision, and let us continue to say that rnd.choice is a function.\n\n\n\nAs you remember, a function is a named production line. In our case, the production line has the name rnd.choice.\nWe sent rnd.choice. a value to work on — an argument. In this case, the argument was the value of some_integers.\nFigure 5.6 is a diagram illustrating an example run of the rnd.choice function (production line).\n\n\n\n\n\n\nFigure 5.6: Example run of the rnd.choice function\n\n\n\n\n\nHere is the same code again, with new comments.\n\n# Send the value of \"some_integers\" to rnd.choice\n# some_integers is the *argument*.\n# Put the *return* value from the function into \"my_number\".\nmy_number = rnd.choice(some_integers)\n# Show the value that results.\nmy_number\n\n4" + }, + { + "objectID": "resampling_with_code.html#sec-sampling-arrays", + "href": "resampling_with_code.html#sec-sampling-arrays", + "title": "5  Resampling with code", + "section": "5.12 Sampling into arrays", + "text": "5.12 Sampling into arrays\n\nIn the code above, we asked Python to select a single number at random — because that is what rnd.choice does by default].\nIn fact, the people who wrote rnd.choice, wrote it to be flexible in the work that it can do. In particular, we can tell rnd.choice to select any number of values at random, by adding a new argument to the function.\nIn our case, we would like Numpy to select 17 numbers at random from the sequence of some_integers.\nTo do this, we add an argument to the function that tells it how many numbers we want it to select.\n\n\n# Get 17 values from the *some_integers* array.\n# Store the 17 numbers with the name \"a\"\na = rnd.choice(some_integers, 17)\n# Show the result.\na\n\narray([4, 5, 9, 8, 2, 9, 1, 5, 8, 2, 1, 8, 2, 6, 6, 5, 0])\n\n\nAs you can see, the function sent back (returned) 17 numbers. Because it is sending back more than one number, the thing it sends back is an array, where the array has 17 elements." + }, + { + "objectID": "resampling_with_code.html#counting-results", + "href": "resampling_with_code.html#counting-results", + "title": "5  Resampling with code", + "section": "5.13 Counting results", + "text": "5.13 Counting results\nWe now have the code to do the equivalent of throwing 17 10-sided dice. This is the basis for one simulated trial in the world of Saint Hypothetical General.\nOur next job is to get the code to count the number of numbers that are not zero in the array a. 
That will give us the number of patients who were cured in simulated trial.\nAnother way of asking this question, is to ask how many elements in a are greater than zero.\n\n5.13.1 Comparison\nTo ask whether a number is greater than zero, we use comparison. Here is a greater than zero comparison on a single number:\n\nn = 5\n# Is the value of n greater than 0?\n# Show the result of the comparison.\nn > 0\n\nTrue\n\n\n> is a comparison — it asks a question about the numbers either side of it. In this case > is asking the question “is the value of n (on the left hand side) greater than 0 (on the right hand side)?” The value of n is 5, so the question becomes, “is 5 greater than 0?” The answer is Yes, and Python represents this Yes answer as the value True.\nIn contrast, the comparison below boils down to “is 0 greater than 0?”, to which the answer is No, and Python represents this as False.\n\np = 0\n# Is the value of p greater than 0?\n# Show the result of the comparison.\np > 0\n\nFalse\n\n\nSo far you have seen the results of comparison on a single number. Now say we do the same comparison on an array. For example, say we ask the question “is the value of a greater than 0”? Remember, a is an array containing 17 values. We are comparing 17 values to one value (0). What answer do you think NumPy will give? You may want to think a little about this before you read on.\nAs a reminder, here is the current value for a:\n\n# Show the current value for \"a\"\na\n\narray([4, 5, 9, 8, 2, 9, 1, 5, 8, 2, 1, 8, 2, 6, 6, 5, 0])\n\n\nNow you have had some time to think, here is what happens:\n\n# Is the value of \"a\" greater than 0\n# Show the result of the comparison.\na > 0\n\narray([ True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, False])\n\n\nThere are 17 values in a, so the comparison to 0 means there are 17 comparisons, and 17 answers. NumPy therefore returns an array of 17 elements, containing these 17 answers. The first answer is the answer to the question “is the value of the first element of a greater than 0”, and the second is the answer to “is the value of the second element of a greater than 0”.\nLet us store the result of this comparison to work on:\n\n# Is the value of \"a\" greater than 0\n# Store as another array \"q\".\nq = a > 0\n# Show the value of r\nq\n\narray([ True, True, True, True, True, True, True, True, True,\n True, True, True, True, True, True, True, False])" + }, + { + "objectID": "resampling_with_code.html#sec-count-with-sum", + "href": "resampling_with_code.html#sec-count-with-sum", + "title": "5  Resampling with code", + "section": "5.14 Counting True values with sum", + "text": "5.14 Counting True values with sum\nNotice above that there is one True element in q for every element in a that was greater than 0. It only remains to count the number of True values in q, to get the count of patients in our simulated trial who were cured.\nWe can use the NumPy function np.sum to count the number of True elements in an array. As you can imagine, np.sum adds up all the elements in an array, to give a single number. This will work as we want for the q array, because Python counts False as equal to 0 and True as equal to 1:\n\n# Question: is False equal to 0?\n# Answer - Yes! (True)\nFalse == 0\n\nTrue\n\n\n\n# Question: is True equal to 0?\n# Answer - Yes! 
(True)\nTrue == 1\n\nTrue\n\n\nTherefore, the function sum, when applied to an array of True and False values, will count the number of True values in the array.\nTo see this in action we can make a new array of True and False values, and try using np.sum on the new array.\n\n# An array containing three True values and two False values.\ntrues_and_falses = np.array([True, False, True, True, False])\n# Show the new array.\ntrues_and_falses\n\narray([ True, False, True, True, False])\n\n\nThe sum operation adds all the elements in the array. Because True counts as 1, and False counts as 0, adding all the elements in trues_and_falses is the same as adding up the values 1 + 0 + 1 + 1 + 0, to give 3.\nWe can apply the same operation on q to count the number of True values.\n\n# Count the number of True values in \"q\"\n# This is the same as the number of values in \"a\" that are greater than 0.\nb = np.sum(q)\n# Show the result\nb\n\n16" + }, + { + "objectID": "resampling_with_code.html#the-procedure-for-one-simulated-trial", + "href": "resampling_with_code.html#the-procedure-for-one-simulated-trial", + "title": "5  Resampling with code", + "section": "5.15 The procedure for one simulated trial", + "text": "5.15 The procedure for one simulated trial\nWe now have the whole procedure for one simulated trial. We can put the whole procedure in one cell:\n\n# Procedure for one simulated trial\n\n# Get 17 values from the *some_integers* array.\n# Store the 17 numbers with the name \"a\"\na = rnd.choice(some_integers, 17)\n# Is the value of \"a\" greater than 0\nq = a > 0\n# Count the number of True values in \"q\"\nb = np.sum(q)\n# Show the result of this simulated trial.\nb\n\n17" + }, + { + "objectID": "resampling_with_code.html#repeating-the-trial", + "href": "resampling_with_code.html#repeating-the-trial", + "title": "5  Resampling with code", + "section": "5.16 Repeating the trial", + "text": "5.16 Repeating the trial\nNow we know how to do one simulated trial, we could just keep running the cell above, and writing down the result each time. Once we had run the cell 100 times, we would have 100 counts. Then we could look at the 100 counts to see how many were equal to 17 (all 17 simulated patients cured on that trial). At least that would be much faster than rolling 17 dice 100 times, but we would also like the computer to automate the process of repeating the trial, and keeping track of the counts.\nPlease forgive us as we race ahead again, as we did in the last chapter. As in the last chapter, we will use a results array called z to store the count for each trial. As in the last chapter, we will use a for loop to repeat the trial procedure many times. As in the last chapter, we will not explain the counts array of the for loop in any detail, because we are going to cover those in the next chapter.\nLet us now imagine that we want to do 100 simulated trials at Saint Hypothetical General. This will give us 100 counts. We will want to store the count for each trial.\nTo do this, we make an array called z to hold the 100 counts. 
We have called the array z, but we could have called it anything we liked, such as counts or results or cecilia.\n\n# An array to hold the 100 count values.\n# Later, we will fill this in with real count values from simulated trials.\nz = np.zeros(100)\n\nNext we use a for loop to repeat the single trial procedure.\nNotice that the single trial procedure, inside this for loop, is the same as the single trial procedure above — the only two differences are:\n\nThe trial procedure is inside the loop, and\nWe are storing the count for each trial as we go.\n\nWe will go into more detail on how this works in the next chapter.\n\n# Procedure for 100 simulated trials.\n\n# An array to store the counts for each trial.\nz = np.zeros(100)\n\n# Repeat the trial procedure 100 times.\nfor i in np.arange(100):\n # Get 17 values from the *some_integers* array.\n # Store the 17 numbers with the name \"a\".\n a = rnd.choice(some_integers, 17)\n # Is the value of \"a\" greater than 0.\n q = a > 0\n # Count the number of True values in \"q\".\n b = np.sum(q)\n # Store the result at the next position in the \"z\" array.\n z[i] = b\n # Now go back and do the next trial until finished.\n# Show the result of all 100 trials.\nz\n\narray([16., 15., 15., 16., 16., 12., 15., 11., 16., 13., 12., 16., 15.,\n 16., 15., 16., 14., 15., 14., 15., 15., 15., 14., 15., 17., 15.,\n 14., 15., 16., 17., 15., 17., 16., 17., 14., 16., 15., 15., 15.,\n 17., 17., 13., 16., 13., 16., 14., 14., 15., 15., 15., 14., 15.,\n 15., 15., 17., 16., 17., 14., 15., 14., 16., 16., 15., 15., 16.,\n 15., 15., 16., 17., 15., 17., 15., 10., 15., 15., 14., 14., 13.,\n 16., 14., 17., 17., 16., 14., 15., 16., 17., 14., 15., 15., 16.,\n 16., 17., 16., 13., 15., 15., 14., 17., 15.])\n\n\nFinally, we need to count how many of the trials results we stored in z gave a “cured” count of 17.\nWe can ask the question whether a single number is equal to 17 using the double equals comparison: ==.\n\ns = 17\n# Is the value of s equal to 17?\n# Show the result of the comparison.\ns == 17\n\nTrue\n\n\n\n\n\n\n\n\n\n\n\n\n\n5.17 Single and double equals\nNotice that the double equals == means something entirely different to Python than the single equals =. In the code above, Python reads s = 17 to mean “Set the variable s to have the value 17”. In technical terms the single equals is called an assignment operator, because it means assign the value 17 to the variable s.\nThe code s == 17 has a completely different meaning.\n\nIt means “give True if the value in s is equal to 17, and False otherwise”. The == is a comparison operator — it is for comparing two values — here the value in s and the value 17. This comparison, like all comparisons, returns an answer that is either True or False. 
In our case s has the value 17, so the comparison becomes 17 == 17, meaning “is 17 equal to 17?”, to which the answer is “Yes”, and Python sends back True.\n\n\nWe can ask this question of all 100 counts by asking the question: is the array z equal to 17, like this:\n\n# Is the value of z equal to 17?\nwere_cured = z == 17\n# Show the result of the comparison.\nwere_cured\n\narray([False, False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, True, False, False,\n False, False, True, False, True, False, True, False, False,\n False, False, False, True, True, False, False, False, False,\n False, False, False, False, False, False, False, False, False,\n True, False, True, False, False, False, False, False, False,\n False, False, False, False, False, True, False, True, False,\n False, False, False, False, False, False, False, False, True,\n True, False, False, False, False, True, False, False, False,\n False, False, True, False, False, False, False, False, True,\n False])\n\n\nFinally we use sum to count the number of True values in the were_cured array, to give the number of trials where all 17 patients were cured.\n\n# Count the number of True values in \"were_cured\"\n# This is the same as the number of values in \"z\" that are equal to 17.\nn_all_cured = np.sum(were_cured)\n# Show the result of the comparison.\nn_all_cured\n\n15\n\n\nn_all_cured is the number of simulated trials for which all patients were cured. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.\n\n# Proportion of trials where all patients were cured.\np = n_all_cured / 100\n# Show the result\np\n\n0.15\n\n\nFrom this experiment, we see that there is roughly a one-in-six chance that all 17 patients are cured when using a 90% effective treatment." + }, + { + "objectID": "resampling_with_code.html#single-and-double-equals", + "href": "resampling_with_code.html#single-and-double-equals", + "title": "5  Resampling with code", + "section": "5.17 Single and double equals", + "text": "5.17 Single and double equals\nNotice that the double equals == means something entirely different to Python than the single equals =. In the code above, Python reads s = 17 to mean “Set the variable s to have the value 17”. In technical terms the single equals is called an assignment operator, because it means assign the value 17 to the variable s.\nThe code s == 17 has a completely different meaning." + }, + { + "objectID": "resampling_with_code.html#what-have-we-learned-from-saint-hypothetical", + "href": "resampling_with_code.html#what-have-we-learned-from-saint-hypothetical", + "title": "5  Resampling with code", + "section": "5.18 What have we learned from Saint Hypothetical?", + "text": "5.18 What have we learned from Saint Hypothetical?\nWe started with a question about the results of the NCI trial on the new drug. 
The question was — was the result of their trial — 17 out of 17 patients cured — surprising.\nThen, for reasons we did not explain in detail, we changed tack, and asked the same question about a hypothetical set of 17 patients getting the standard treatment in Saint Hypothetical General.\nThat Hypothetical question turns out to be fairly easy to answer, because we can use simulation to estimate the chances that 17 out of 17 patients would be cured in such a hypothetical trial, on the assumption that each patient has a 90% chance of being cured with the standard treatment.\nThe answer for Saint Hypothetical General was — we would be somewhat surprised, but not astonished. We only get 17 out of 17 patients cured about one time in six.\nNow let us return to the NCI trial. Should the trial authors be surprised by their results? If they assumed that their new treatment was exactly as effective as the standard treatment, the result of the trial is a bit unusual, just by chance. It is up us to decide whether the result is unusual enough to make us think that the actual NCI treatment might in fact have been more effective than the standard treatment.\nYou will see this move again and again as we go through the book.\n\nWe take something that really happened — in this case the 17 out of 17 patients cured.\nThen we imagine a hypothetical world in which the results only depend on chance.\nWe do simulations in that hypothetical world to see how often we get a result like the one that happened in the real world.\nIf the real world result (17 out of 17) is an unusual, surprising result in the simulations from the hypothetical world, we take that as evidence that the real world result might not be due to chance alone.\n\nWe have just described the main idea in statistical inference. If that all seems strange and backwards to you, do not worry, we will go over that idea many times in this book. It is not a simple idea to grasp in one go. We hope you will find that, as you do more simulations, and think of more hypothetical worlds, the idea will start to make more sense. Later, we will start to think about asking other questions about probability and chance in the real world." + }, + { + "objectID": "resampling_with_code.html#conclusions", + "href": "resampling_with_code.html#conclusions", + "title": "5  Resampling with code", + "section": "5.19 Conclusions", + "text": "5.19 Conclusions\nCan you see how each of the operations that the computer carries out are analogous to the operations that you yourself executed when you solved this problem using 10-sided dice? This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with. Either we will use a device such as coins or dice, or a random number table as an analogy for the physical process we are interested in (patients being cured, in this case), or we will simulate the analogy on the computer using the Python program above.\nThe program above may not seem simple at first glance, but we think you will find, over the course of this book, that these programs become much simpler to understand than the older conventional approach to such problems that has routinely been taught to students for decades.\n\n\n\n\nDunleavy, Kieron, Stefania Pittaluga, John Janik, Nicole Grant, Margaret Shovlin, Richard Little, Robert Yarchoan, Seth Steinberg, Elaine S. Jaffe, and Wyndham H. Wilson. 2006. 
“Novel Treatment of Burkitt Lymphoma with Dose-Adjusted EPOCH-Rituximab: Preliminary Results Showing Excellent Outcome.” Blood 108 (11): 2736–36. https://doi.org/10.1182/blood.V108.11.2736.2736." + }, + { + "objectID": "sampling_tools.html#introduction", + "href": "sampling_tools.html#introduction", + "title": "6  Tools for samples and sampling", + "section": "6.1 Introduction", + "text": "6.1 Introduction\nNow you have some experience with Python, probabilities and resampling, it is time to introduce some useful tools for our experiments and programs.\n\nStart of sampling_tools notebook\n\nDownload notebook\nInteract\n\n\n\n6.2 Samples and labels\nThus far we have used numbers such as 1 and 0 and 10 to represent the elements we are sampling from. For example, in Chapter 7, we were simulating the chance of a particular juror being black, given that 26% of the eligible jurors in the county were black. We used integers for that task, where we started with all the integers from 0 through 99, and asked NumPy to select values at random from those integers. When NumPy selected an integer from 0 through 25, we chose to label the resulting simulated juror as black — there are 26 integers in the range 0 through 25, so there is a 26% chance that any one integer will be in that range. If the integer was from 26 through 99, the simulated juror was white (there are 74 integers in the range 26 through 99).\nHere is the process of simulating a single juror, adapted from Section 7.3.3:\n\nimport numpy as np\n# Ask NumPy for a random number generator.\nrnd = np.random.default_rng()\n\n# All the integers from 0 up to, but not including 100.\nzero_thru_99 = np.arange(100)\n\n# Get one random numbers from 0 through 99\na = rnd.choice(zero_thru_99)\n\n# Show the result\na\n\n59\n\n\nAfter that, we have to unpack our labeling of 0 through 25 as being “black” and 26 through 99 as being “white”. We might do that like this:\n\nthis_juror_is_black = a < 26\nthis_juror_is_black\n\nFalse\n\n\nThis all works as we want it to, but it’s just a little bit difficult to remember the coding (less than 26 means “black”, greater than 25 means “white”). We had to use that coding because we committed ourselves to using random numbers to simulate the outcomes.\nHowever, Python can also store bits of text, called strings. Values that are bits of text can be very useful because the text values can be memorable labels for the entities we are sampling from, in our simulations.\n\nBefore we get to strings, let us consider the different types of value we have seen so far.\n\n6.3 Types of values in Python\nYou have already come across the idea that Python values can be integers (positive or negative whole numbers), like this:\n\nv = 10\nv\n\n10\n\n\nHere the variable v holds the value. We can see what type of value v holds by using the type function:\n\ntype(v)\n\n<class 'int'>\n\n\nAs you may have noticed, Python can also have floating point values. These are values with a decimal point — so numbers that do not have to be integers, but can be any value between the integers. These floating points values are of type float:\n\nf = 10.1\ntype(f)\n\n<class 'float'>\n\n\n\n6.3.1 Numpy arrays\nYou have also seen that Numpy contains another type, the array. An array is a value that contains a sequence of values. 
For example, here is an array of integers:\n\narr = np.array([0, 10, 99, 4])\narr\n\narray([ 0, 10, 99, 4])\n\n\nNotice that this value arr is of type np.ndarray:\n\ntype(arr)\n\n<class 'numpy.ndarray'>\n\n\nThe array has its own internal record of what type of values it holds. This is called the array dtype:\n\narr.dtype\n\ndtype('int64')\n\n\nThe array dtype records the type of value stored in the array. All values in the array must be of this type, and all values in the array are therefore of the same type.\nThe array above contains integers, but we can also make arrays containing floating point values:\n\nfloat_arr = np.array([0.1, 10.1, 99.0, 4.3])\nfloat_arr\n\narray([ 0.1, 10.1, 99. , 4.3])\n\n\n\nfloat_arr.dtype\n\ndtype('float64')\n\n\n\n\n6.3.2 Lists\nWe have elided past another Python type, the list. In fact we have already used lists in making arrays. For example, here we make an array with four values:\n\nnp.array([0, 10, 99, 4])\n\narray([ 0, 10, 99, 4])\n\n\nWe could also write the statement above in two steps:\n\nmy_list = [0, 10, 99, 4]\nnp.array(my_list)\n\narray([ 0, 10, 99, 4])\n\n\nIn the first statement — my_list = [0, 10, 99, 4] — we construct a list — a container for the four values. Let’s look at the my_list value:\n\nmy_list\n\n[0, 10, 99, 4]\n\n\nNotice that we do not see array in the display — this is not an array but a list:\n\ntype(my_list)\n\n<class 'list'>\n\n\nA list is a basic Python type. We can construct it by using the square brackets notation that you see above; we start with [, then we put the values we want to go in the list, separated by commas, followed by ]. Here is another list:\n\n# Creating another list.\nlist_2 = [5, 10, 20]\n\nAs you saw, we have been building arrays by building lists, and then passing the list to the np.array function, to create an array.\n\nlist_again = [100, 10, 0]\nnp.array(list_again)\n\narray([100, 10, 0])\n\n\nOf course, we can do this one line, as we have been doing up till now, by constructing the list inside the parentheses of the function. So, the following cell has just the same output as the cell above:\n\n# Constructing the list inside the function brackets.\nnp.array([100, 10, 0])\n\narray([100, 10, 0])\n\n\nLists are like arrays in that they are values that contain values, but they are unlike arrays in various ways — that we will not go into now. We often use lists to construct sequences into lists to turn them into arrays. For our purposes, and particularly for our calculations, arrays are much more useful and efficient than lists.\n\n\n\n\n\n\n6.4 String values\nSo far, all the values you have seen in Python arrays have been numbers. Now we get on to values that are bits of text. 
These are called strings.\nHere is a single Python string value:\n\ns = \"Resampling\"\ns\n\n'Resampling'\n\n\nWhat is the type of the new bit-of-text value s?\n\ntype(s)\n\n<class 'str'>\n\n\nThe Python str value is a bit of text, and therefore consists of a sequence of characters.\nAs arrays are containers for other things, such as numbers, strings are containers for characters.\n\nAs we can find the number of elements in an array (Section 7.5), we can find the number of characters in a string with the len function:\n\n# Number of characters in s\nlen(s)\n\n10\n\n\n\n\nAs we can index into array values to get individual elements (Section 7.6), we can index into string values to get individual characters:\n\n# Get the second character of the string\n# Remember, Python's index positions start at 0.\nsecond_char = s[1]\nsecond_char\n\n'e'\n\n\n\n\n\n6.5 Strings in arrays\nAs we can store numbers as elements in arrays, we can also store strings as array elements.\n\n# Just for clarity, make the list first.\n# Lists can also contain strings.\nlist_of_strings = ['Julian', 'Lincoln', 'Simon']\n# Then pass the list to np.array to make the array.\narr_of_strings = np.array(list_of_strings)\narr_of_strings\n\narray(['Julian', 'Lincoln', 'Simon'], dtype='<U7')\n\n\n\n# We can also create the list and the array in one line,\n# as we have been doing up til now.\narr_of_strings = np.array(['Julian', 'Lincoln', 'Simon'])\narr_of_strings\n\narray(['Julian', 'Lincoln', 'Simon'], dtype='<U7')\n\n\n\nNotice the array dtype:\n\narr_of_strings.dtype\n\ndtype('<U7')\n\n\nThe U in the dtype tells you that the elements in the array are Unicode strings (Unicode is a computer representation of text characters). The number after the U gives the maximum number of characters for any string in the array, here set to the length of the longest string when we created the array.\n\n\n\n\n\n\nTake care with Numpy string arrays\n\n\n\nIt is easy to run into trouble with Numpy string arrays where the elements have a maximum length, as here. Remember, the dtype of the array tells you what type of element the array can hold. Here the dtype is telling you that the array can hold strings of maximum length 7 characters. Now imagine trying to put a longer string into the array — what do you think would happen?\nThis happens:\n\n# An array of small strings.\nsmall_strings = np.array(['six', 'one', 'two'])\nsmall_strings.dtype\n\ndtype('<U3')\n\n\n\n# Set a new value for the first element (first string).\nsmall_strings[0] = 'seven'\nsmall_strings\n\narray(['sev', 'one', 'two'], dtype='<U3')\n\n\nNumpy truncates the new string to match the original maximum length.\nFor that reason, it is often useful to instruct Numpy that you want to use effectively infinite length strings, by specifying the array dtype as object when you make the array, like this:\n\n# An array of small strings, but this time, tell Numpy\n# that the strings should be of effectively infinite length.\nsmall_strings_better = np.array(['six', 'one', 'two'], dtype=object)\nsmall_strings_better\n\narray(['six', 'one', 'two'], dtype=object)\n\n\nNotice that the code uses a named function argument (Section 5.8), to specify to np.array that the array elements should be of type object. 
This type can store any Python value, and so, when the array is storing strings, it will use Python’s own string values as elements, rather than the more efficient but more fragile Unicode strings that Numpy uses by default.\n\n# Set a new value for the first element in the new array.\nsmall_strings_better[0] = 'seven'\nsmall_strings_better\n\narray(['seven', 'one', 'two'], dtype=object)\n\n\n\n\n\n\n\nAs for any array, you can select elements with indexing. When you select an element with a given position (index), you get the string at at that position:\n\n# Julian Lincoln Simon's second name.\n# (Remember, Python's positions start at 0).\nmiddle_name = arr_of_strings[1]\nmiddle_name\n\n'Lincoln'\n\n\nAs for numbers, we can compare strings with, for example, the == operator, that asks whether the two strings are equal:\n\nmiddle_name == 'Lincoln'\n\nTrue\n\n\n\n\n6.6 Repeating elements\nNow let us go back to the problem of selecting black and white jurors.\nWe started with the strategy of using numbers 0 through 25 to mean “black” jurors, and 26 through 99 to mean “white” jurors. We selected values at random from 0 through 99, and then worked out whether the number meant a “black” juror (was less than 26) or a “white” juror (was greater than 25).\nIt would be good to use strings instead of numbers to identify the potential jurors. Then we would not have to remember our coding of 0 through 25 and 26 through 99.\nIf only there was a way to make an array of 100 strings, where 26 of the strings were “black” and 74 were “white”. Then we could select randomly from that array, and it would be immediately obvious that we had a “black” or “white” juror.\nLuckily, of course, we can do that, by using the np.repeat function to construct the array.\nHere is how that works:\n\n# The values that we will repeat to fill up the larger array.\n# Use a list to store the sequence of values.\njuror_types = ['black', 'white']\n# The number of times we want to repeat \"black\" and \"white\".\n# Use a list to store the sequence of values.\nrepeat_nos = [26, 74]\n# Repeat \"black\" 26 times and \"white\" 74 times.\n# We have passed two lists here, but we could also have passed\n# arrays - the Numpy repeat function converts the lists to arrays\n# before it builds the repeats.\njury_pool = np.repeat(juror_types, repeat_nos)\n# Show the result\njury_pool\n\narray(['black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'black', 'black', 'black', 'black', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white'], dtype='<U5')\n\n\nWe can use this array of repeats of strings, to sample from. 
The result is easier to grasp, because we are using the string labels, instead of numbers:\n\n# Select one juror at random from the black / white pool.\none_juror = rnd.choice(jury_pool)\none_juror\n\n'white'\n\n\nWe can select our full jury of 12 jurors, and see the results in a more obvious form:\n\n# Select 12 jurors at random from the black / white pool.\none_jury = rnd.choice(jury_pool, 12)\none_jury\n\narray(['white', 'white', 'white', 'white', 'black', 'white', 'black',\n 'white', 'white', 'black', 'black', 'white'], dtype='<U5')\n\n\n\n\n\n\n\n\nUsing the size argument to rnd.choice\n\n\n\nIn the code above, we have specified the size of the sample we want (12) with the second argument to rnd.choice. As you saw in Section 5.8, we can also give names to the function arguments, in this case, to make it clearer what we mean by “12” in the code above. In fact, from now on, that is what we will do; we will specify the size of our sample by using the name for the function argument to rnd.choice — size — like this:\n\n# Select 12 jurors at random from the black / white pool.\n# Specify the sample size using the \"size\" named argument.\none_jury = rnd.choice(jury_pool, size=12)\none_jury\n\narray(['black', 'white', 'white', 'white', 'black', 'white', 'black',\n 'white', 'white', 'white', 'white', 'white'], dtype='<U5')\n\n\n\n\nWe can use == on the array to get True values where the juror was “black” and False values otherwise:\n\nare_black = one_jury == 'black'\nare_black\n\narray([ True, False, False, False, True, False, True, False, False,\n False, False, False])\n\n\nFinally, we can np.sum to find the number of black jurors (Section 5.14):\n\n# Number of black jurors in this simulated jury.\nn_black = np.sum(are_black)\nn_black\n\n3\n\n\nPutting that all together, this is our new procedure to select one jury and count the number of black jurors:\n\none_jury = rnd.choice(jury_pool, size=12)\nare_black = one_jury == 'black'\nn_black = np.sum(are_black)\nn_black\n\n3\n\n\nOr we can be even more compact by putting several statements together into one line:\n\n# The same as above, but on one line.\nn_black = np.sum(rnd.choice(jury_pool, size=12) == 'black')\nn_black\n\n1\n\n\n\n\n6.7 Resampling with and without replacement\nNow let us return to the details of Robert Swain’s case, that you first saw in Chapter 7.\nWe looked at the composition of Robert Swain’s 12-person jury — but in fact, by law, that does not have to be representative of the eligible jurors. The 12-person jury is drawn from a jury panel, of 100 people, and this should, in turn, be drawn from the population of all eligible jurors in the county, consisting, at the time, of “all male citizens in the community over 21 who are reputed to be honest, intelligent men and are esteemed for their integrity, good character and sound judgment.” So, unless there was some bias against black jurors, we might expect the 100-person jury panel to be a plausibly random sample of the eligible jurors, of whom 26% were black. See the Supreme Court case judgement for details.\nIn fact, in Robert Swain’s trial, there were 8 black members in the 100-person jury panel. We will leave it to you to adapt the simulation from Chapter 7 to ask the question — is 8% surprising as a random sample from a population with 26% black people?\nBut we have a different question: given that 8 out of 100 of the jury panel were black, is it surprising that none of the 12-person jury were black? 
As usual, we can answer that question with simulation.\nLet’s think about what a single simulated jury selection would look like.\nFirst we compile a representation of the actual jury panel, using the tools we have used above.\n\njuror_types = ['black', 'white']\n# in fact there were 8 black jurors and 92 white jurors.\npanel_nos = [8, 92]\njury_panel = np.repeat(juror_types, panel_nos)\n# Show the result\njury_panel\n\narray(['black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white'], dtype='<U5')\n\n\nNow consider taking a 12-person jury at random from this panel. We select the first juror at random, so that juror has an 8 out of 100 chance of being black. But when we select the second jury member, the situation has changed slightly. We can’t select the first juror again, so our panel is now 99 people. If our first juror was black, then the chances of selecting another black juror next are not 8 out of 100, but 7 out of 99 — a smaller chance. The problem is, as we shall see in more detail later, the chances of getting a black juror as the second, and third and fourth members of the jury depend on whether we selected a black juror as the first and second and third jury members. At its most extreme, imagine we had already selected eight jurors, and by some strange chance, all eight were black. Now our chances of selecting a black juror as the ninth juror are zero — there are no black jurors left to select from the panel.\nIn this case we are selecting jurors from the panel without replacement, meaning, that once we have selected a particular juror, we cannot select them again, and we do not put them back into the panel when we select our next juror.\nThis is the probability equivalent of the situation when you are dealing a hand of cards. Let’s say someone is dealing you, and you only, a hand of five cards. You get an ace as your first card. Your chances of getting an ace as your first card were just the number of aces in the deck divided by the number of cards — four in 52 – \\(\\frac{4}{52}\\). But for your second card, the probability has changed, because there is one less ace remaining in the pack, and one less card, so your chances of getting an ace as your second card are now \\(\\frac{3}{51}\\). This is sampling without replacement — in a normal game, you can’t get the same card twice. Of course, you could imagine getting a hand where you sampled with replacement. 
In that case, you’d get a card, you’d write down what it was, and you’d give the card back to the dealer, who would replace the card in the deck, shuffle again, and give you another card.\nAs you can see, the chances change if you are sampling with or without replacement, and the kind of sampling you do, will dictate how you model your chances in your simulations.\nBecause this distinction is so common, and so important, the machinery you have already seen in rnd.choice has simple ways for you to select your sampling type. You have already seen sampling with replacement, and it looks like this:\n\n# Take a sample of 12 jurors from the panel *with replacement*\n# With replacement is the default for `rnd.choice`.\nstrange_jury = rnd.choice(jury_panel, size=12)\nstrange_jury\n\narray(['white', 'white', 'white', 'black', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white'], dtype='<U5')\n\n\nThis is a strange jury, because it can select any member of the jury pool more than once. Perhaps that juror would have to fill two (or more!) seats, or run quickly between them. But of course, that is not how juries are selected. They are selected without replacement:\n\n# Take a sample of 12 jurors from the panel *without replacement*\nok_jury = rnd.choice(jury_panel, 12, replace=False)\nok_jury\n\narray(['white', 'white', 'white', 'white', 'black', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white'], dtype='<U5')\n\n\n\n\n\n\n\n\nComments at the end of lines\n\n\n\nYou have already seen comment lines. These are lines beginning with #, to signal to Python that the rest of the line is text for humans to read, but Python to ignore.\n\n# This is a comment. Python ignores this line.\n\nYou can also put comments at the end of code lines, by finishing the code part of the line, and then putting a #, followed by more text. Again, Python will ignore everything after the # as a text for humans, but not for Python.\n\nprint('Hello') # This is a comment at the end of the line.\n\nHello\n\n\n\n\nTo finish the procedure for simulating a single jury selection, we count the number of black jurors:\n\nn_black = np.sum(ok_jury == 'black') # How many black jurors?\nn_black\n\n1\n\n\nNow we have the procedure for one simulated trial, here is the procedure for 10000 simulated trials.\n\ncounts = np.zeros(10000)\nfor i in np.arange(10000):\n # Single trial procedure\n jury = rnd.choice(jury_panel, size=12, replace=False)\n n_black = np.sum(jury == 'black') # How many black jurors?\n # Store the result\n counts[i] = n_black\n\n# Number of juries with 0 black jurors.\nzero_black = np.sum(counts == 0)\n# Proportion\np_zero_black = zero_black / 10000\nprint(p_zero_black)\n\n0.3421\n\n\nWe have found that, when there are only 8% black jurors in the jury panel, having no black jurors in the final jury happens about 34% of the time, even in this case, where the jury is selected completely at random from the jury panel.\nWe should look for the main source of bias in the initial selection of the jury panel, not in the selection of the jury from the panel.\n\nEnd of sampling_tools notebook\n\n\n\n\n\n\n\n\nWith or without replacement for the original jury selection\n\n\n\nYou may have noticed in Chapter 7 that we were sampling Robert Swain’s jury from the eligible pool of jurors, with replacement. 
You might reasonably ask whether we should have selected from the eligible jurors without replacement, given that the same juror cannot serve more than once in the same jury, and therefore, the same argument applies there as here.\nThe trick there was that we were selecting from a very large pool of many thousand eligible jurors, of whom 26% were black. Let’s say there were 10,000 eligible jurors, of whom 2,600 were black. When selecting the first juror, there is exactly a 2,600 in 10,000 chance of getting a black juror — 26%. If we do get a black juror first, then the chance that the second juror will be black has changed slightly, 2,599 in 9,999. But these changes are very small; even if we select eleven black jurors out of eleven, when we come to the twelfth juror, we still have a 2,589 out of 9,989 chance of getting another black juror, and that works out at a 25.92% chance — hardly changed from the original 26%. So yes, you’d be right, we really should have compiled our population of 2,600 black jurors and 7,400 white jurors, and then sampled without replacement from that population, but as the resulting sample probabilities will be very similar to the simpler sampling with replacement, we chose to try and slide that one quietly past you, in the hope you would forgive us when you realized." + }, + { + "objectID": "sampling_tools.html#samples-and-labels", + "href": "sampling_tools.html#samples-and-labels", + "title": "6  Tools for samples and sampling", + "section": "6.2 Samples and labels", + "text": "6.2 Samples and labels\nThus far we have used numbers such as 1 and 0 and 10 to represent the elements we are sampling from. For example, in Chapter 7, we were simulating the chance of a particular juror being black, given that 26% of the eligible jurors in the county were black. We used integers for that task, where we started with all the integers from 0 through 99, and asked NumPy to select values at random from those integers. When NumPy selected an integer from 0 through 25, we chose to label the resulting simulated juror as black — there are 26 integers in the range 0 through 25, so there is a 26% chance that any one integer will be in that range. If the integer was from 26 through 99, the simulated juror was white (there are 74 integers in the range 26 through 99).\nHere is the process of simulating a single juror, adapted from Section 7.3.3:\n\nimport numpy as np\n# Ask NumPy for a random number generator.\nrnd = np.random.default_rng()\n\n# All the integers from 0 up to, but not including 100.\nzero_thru_99 = np.arange(100)\n\n# Get one random numbers from 0 through 99\na = rnd.choice(zero_thru_99)\n\n# Show the result\na\n\n59\n\n\nAfter that, we have to unpack our labeling of 0 through 25 as being “black” and 26 through 99 as being “white”. We might do that like this:\n\nthis_juror_is_black = a < 26\nthis_juror_is_black\n\nFalse\n\n\nThis all works as we want it to, but it’s just a little bit difficult to remember the coding (less than 26 means “black”, greater than 25 means “white”). We had to use that coding because we committed ourselves to using random numbers to simulate the outcomes.\nHowever, Python can also store bits of text, called strings. 
Values that are bits of text can be very useful because the text values can be memorable labels for the entities we are sampling from, in our simulations.\n\nBefore we get to strings, let us consider the different types of value we have seen so far.\n\n6.3 Types of values in Python\nYou have already come across the idea that Python values can be integers (positive or negative whole numbers), like this:\n\nv = 10\nv\n\n10\n\n\nHere the variable v holds the value. We can see what type of value v holds by using the type function:\n\ntype(v)\n\n<class 'int'>\n\n\nAs you may have noticed, Python can also have floating point values. These are values with a decimal point — so numbers that do not have to be integers, but can be any value between the integers. These floating points values are of type float:\n\nf = 10.1\ntype(f)\n\n<class 'float'>\n\n\n\n6.3.1 Numpy arrays\nYou have also seen that Numpy contains another type, the array. An array is a value that contains a sequence of values. For example, here is an array of integers:\n\narr = np.array([0, 10, 99, 4])\narr\n\narray([ 0, 10, 99, 4])\n\n\nNotice that this value arr is of type np.ndarray:\n\ntype(arr)\n\n<class 'numpy.ndarray'>\n\n\nThe array has its own internal record of what type of values it holds. This is called the array dtype:\n\narr.dtype\n\ndtype('int64')\n\n\nThe array dtype records the type of value stored in the array. All values in the array must be of this type, and all values in the array are therefore of the same type.\nThe array above contains integers, but we can also make arrays containing floating point values:\n\nfloat_arr = np.array([0.1, 10.1, 99.0, 4.3])\nfloat_arr\n\narray([ 0.1, 10.1, 99. , 4.3])\n\n\n\nfloat_arr.dtype\n\ndtype('float64')\n\n\n\n\n6.3.2 Lists\nWe have elided past another Python type, the list. In fact we have already used lists in making arrays. For example, here we make an array with four values:\n\nnp.array([0, 10, 99, 4])\n\narray([ 0, 10, 99, 4])\n\n\nWe could also write the statement above in two steps:\n\nmy_list = [0, 10, 99, 4]\nnp.array(my_list)\n\narray([ 0, 10, 99, 4])\n\n\nIn the first statement — my_list = [0, 10, 99, 4] — we construct a list — a container for the four values. Let’s look at the my_list value:\n\nmy_list\n\n[0, 10, 99, 4]\n\n\nNotice that we do not see array in the display — this is not an array but a list:\n\ntype(my_list)\n\n<class 'list'>\n\n\nA list is a basic Python type. We can construct it by using the square brackets notation that you see above; we start with [, then we put the values we want to go in the list, separated by commas, followed by ]. Here is another list:\n\n# Creating another list.\nlist_2 = [5, 10, 20]\n\nAs you saw, we have been building arrays by building lists, and then passing the list to the np.array function, to create an array.\n\nlist_again = [100, 10, 0]\nnp.array(list_again)\n\narray([100, 10, 0])\n\n\nOf course, we can do this one line, as we have been doing up till now, by constructing the list inside the parentheses of the function. So, the following cell has just the same output as the cell above:\n\n# Constructing the list inside the function brackets.\nnp.array([100, 10, 0])\n\narray([100, 10, 0])\n\n\nLists are like arrays in that they are values that contain values, but they are unlike arrays in various ways — that we will not go into now. We often use lists to construct sequences into lists to turn them into arrays. 
For our purposes, and particularly for our calculations, arrays are much more useful and efficient than lists." + }, + { + "objectID": "sampling_tools.html#types-of-values-in-python", + "href": "sampling_tools.html#types-of-values-in-python", + "title": "6  Tools for samples and sampling", + "section": "6.3 Types of values in Python", + "text": "6.3 Types of values in Python\nYou have already come across the idea that Python values can be integers (positive or negative whole numbers), like this:\n\nv = 10\nv\n\n10\n\n\nHere the variable v holds the value. We can see what type of value v holds by using the type function:\n\ntype(v)\n\n<class 'int'>\n\n\nAs you may have noticed, Python can also have floating point values. These are values with a decimal point — so numbers that do not have to be integers, but can be any value between the integers. These floating points values are of type float:\n\nf = 10.1\ntype(f)\n\n<class 'float'>\n\n\n\n6.3.1 Numpy arrays\nYou have also seen that Numpy contains another type, the array. An array is a value that contains a sequence of values. For example, here is an array of integers:\n\narr = np.array([0, 10, 99, 4])\narr\n\narray([ 0, 10, 99, 4])\n\n\nNotice that this value arr is of type np.ndarray:\n\ntype(arr)\n\n<class 'numpy.ndarray'>\n\n\nThe array has its own internal record of what type of values it holds. This is called the array dtype:\n\narr.dtype\n\ndtype('int64')\n\n\nThe array dtype records the type of value stored in the array. All values in the array must be of this type, and all values in the array are therefore of the same type.\nThe array above contains integers, but we can also make arrays containing floating point values:\n\nfloat_arr = np.array([0.1, 10.1, 99.0, 4.3])\nfloat_arr\n\narray([ 0.1, 10.1, 99. , 4.3])\n\n\n\nfloat_arr.dtype\n\ndtype('float64')\n\n\n\n\n6.3.2 Lists\nWe have elided past another Python type, the list. In fact we have already used lists in making arrays. For example, here we make an array with four values:\n\nnp.array([0, 10, 99, 4])\n\narray([ 0, 10, 99, 4])\n\n\nWe could also write the statement above in two steps:\n\nmy_list = [0, 10, 99, 4]\nnp.array(my_list)\n\narray([ 0, 10, 99, 4])\n\n\nIn the first statement — my_list = [0, 10, 99, 4] — we construct a list — a container for the four values. Let’s look at the my_list value:\n\nmy_list\n\n[0, 10, 99, 4]\n\n\nNotice that we do not see array in the display — this is not an array but a list:\n\ntype(my_list)\n\n<class 'list'>\n\n\nA list is a basic Python type. We can construct it by using the square brackets notation that you see above; we start with [, then we put the values we want to go in the list, separated by commas, followed by ]. Here is another list:\n\n# Creating another list.\nlist_2 = [5, 10, 20]\n\nAs you saw, we have been building arrays by building lists, and then passing the list to the np.array function, to create an array.\n\nlist_again = [100, 10, 0]\nnp.array(list_again)\n\narray([100, 10, 0])\n\n\nOf course, we can do this one line, as we have been doing up till now, by constructing the list inside the parentheses of the function. So, the following cell has just the same output as the cell above:\n\n# Constructing the list inside the function brackets.\nnp.array([100, 10, 0])\n\narray([100, 10, 0])\n\n\nLists are like arrays in that they are values that contain values, but they are unlike arrays in various ways — that we will not go into now. We often use lists to construct sequences into lists to turn them into arrays. 
For our purposes, and particularly for our calculations, arrays are much more useful and efficient than lists." + }, + { + "objectID": "sampling_tools.html#sec-intro-to-strings", + "href": "sampling_tools.html#sec-intro-to-strings", + "title": "6  Tools for samples and sampling", + "section": "6.4 String values", + "text": "6.4 String values\nSo far, all the values you have seen in Python arrays have been numbers. Now we get on to values that are bits of text. These are called strings.\nHere is a single Python string value:\n\ns = \"Resampling\"\ns\n\n'Resampling'\n\n\nWhat is the type of the new bit-of-text value s?\n\ntype(s)\n\n<class 'str'>\n\n\nThe Python str value is a bit of text, and therefore consists of a sequence of characters.\nAs arrays are containers for other things, such as numbers, strings are containers for characters.\n\nAs we can find the number of elements in an array (Section 7.5), we can find the number of characters in a string with the len function:\n\n# Number of characters in s\nlen(s)\n\n10\n\n\n\n\nAs we can index into array values to get individual elements (Section 7.6), we can index into string values to get individual characters:\n\n# Get the second character of the string\n# Remember, Python's index positions start at 0.\nsecond_char = s[1]\nsecond_char\n\n'e'" + }, + { + "objectID": "sampling_tools.html#strings-in-s", + "href": "sampling_tools.html#strings-in-s", + "title": "6  Tools for samples and sampling", + "section": "6.5 Strings in arrays", + "text": "6.5 Strings in arrays\nAs we can store numbers as elements in arrays, we can also store strings as array elements.\n\n# Just for clarity, make the list first.\n# Lists can also contain strings.\nlist_of_strings = ['Julian', 'Lincoln', 'Simon']\n# Then pass the list to np.array to make the array.\narr_of_strings = np.array(list_of_strings)\narr_of_strings\n\narray(['Julian', 'Lincoln', 'Simon'], dtype='<U7')\n\n\n\n# We can also create the list and the array in one line,\n# as we have been doing up til now.\narr_of_strings = np.array(['Julian', 'Lincoln', 'Simon'])\narr_of_strings\n\narray(['Julian', 'Lincoln', 'Simon'], dtype='<U7')\n\n\n\nNotice the array dtype:\n\narr_of_strings.dtype\n\ndtype('<U7')\n\n\nThe U in the dtype tells you that the elements in the array are Unicode strings (Unicode is a computer representation of text characters). The number after the U gives the maximum number of characters for any string in the array, here set to the length of the longest string when we created the array.\n\n\n\n\n\n\nTake care with Numpy string arrays\n\n\n\nIt is easy to run into trouble with Numpy string arrays where the elements have a maximum length, as here. Remember, the dtype of the array tells you what type of element the array can hold. Here the dtype is telling you that the array can hold strings of maximum length 7 characters. 
Now imagine trying to put a longer string into the array — what do you think would happen?\nThis happens:\n\n# An array of small strings.\nsmall_strings = np.array(['six', 'one', 'two'])\nsmall_strings.dtype\n\ndtype('<U3')\n\n\n\n# Set a new value for the first element (first string).\nsmall_strings[0] = 'seven'\nsmall_strings\n\narray(['sev', 'one', 'two'], dtype='<U3')\n\n\nNumpy truncates the new string to match the original maximum length.\nFor that reason, it is often useful to instruct Numpy that you want to use effectively infinite length strings, by specifying the array dtype as object when you make the array, like this:\n\n# An array of small strings, but this time, tell Numpy\n# that the strings should be of effectively infinite length.\nsmall_strings_better = np.array(['six', 'one', 'two'], dtype=object)\nsmall_strings_better\n\narray(['six', 'one', 'two'], dtype=object)\n\n\nNotice that the code uses a named function argument (Section 5.8), to specify to np.array that the array elements should be of type object. This type can store any Python value, and so, when the array is storing strings, it will use Python’s own string values as elements, rather than the more efficient but more fragile Unicode strings that Numpy uses by default.\n\n# Set a new value for the first element in the new array.\nsmall_strings_better[0] = 'seven'\nsmall_strings_better\n\narray(['seven', 'one', 'two'], dtype=object)\n\n\n\n\n\n\n\nAs for any array, you can select elements with indexing. When you select an element with a given position (index), you get the string at at that position:\n\n# Julian Lincoln Simon's second name.\n# (Remember, Python's positions start at 0).\nmiddle_name = arr_of_strings[1]\nmiddle_name\n\n'Lincoln'\n\n\nAs for numbers, we can compare strings with, for example, the == operator, that asks whether the two strings are equal:\n\nmiddle_name == 'Lincoln'\n\nTrue" + }, + { + "objectID": "sampling_tools.html#sec-repeating", + "href": "sampling_tools.html#sec-repeating", + "title": "6  Tools for samples and sampling", + "section": "6.6 Repeating elements", + "text": "6.6 Repeating elements\nNow let us go back to the problem of selecting black and white jurors.\nWe started with the strategy of using numbers 0 through 25 to mean “black” jurors, and 26 through 99 to mean “white” jurors. We selected values at random from 0 through 99, and then worked out whether the number meant a “black” juror (was less than 26) or a “white” juror (was greater than 25).\nIt would be good to use strings instead of numbers to identify the potential jurors. Then we would not have to remember our coding of 0 through 25 and 26 through 99.\nIf only there was a way to make an array of 100 strings, where 26 of the strings were “black” and 74 were “white”. 
Then we could select randomly from that array, and it would be immediately obvious that we had a “black” or “white” juror.\nLuckily, of course, we can do that, by using the np.repeat function to construct the array.\nHere is how that works:\n\n# The values that we will repeat to fill up the larger array.\n# Use a list to store the sequence of values.\njuror_types = ['black', 'white']\n# The number of times we want to repeat \"black\" and \"white\".\n# Use a list to store the sequence of values.\nrepeat_nos = [26, 74]\n# Repeat \"black\" 26 times and \"white\" 74 times.\n# We have passed two lists here, but we could also have passed\n# arrays - the Numpy repeat function converts the lists to arrays\n# before it builds the repeats.\njury_pool = np.repeat(juror_types, repeat_nos)\n# Show the result\njury_pool\n\narray(['black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'black', 'black', 'black', 'black', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white'], dtype='<U5')\n\n\nWe can use this array of repeats of strings, to sample from. The result is easier to grasp, because we are using the string labels, instead of numbers:\n\n# Select one juror at random from the black / white pool.\none_juror = rnd.choice(jury_pool)\none_juror\n\n'white'\n\n\nWe can select our full jury of 12 jurors, and see the results in a more obvious form:\n\n# Select 12 jurors at random from the black / white pool.\none_jury = rnd.choice(jury_pool, 12)\none_jury\n\narray(['white', 'white', 'white', 'white', 'black', 'white', 'black',\n 'white', 'white', 'black', 'black', 'white'], dtype='<U5')\n\n\n\n\n\n\n\n\nUsing the size argument to rnd.choice\n\n\n\nIn the code above, we have specified the size of the sample we want (12) with the second argument to rnd.choice. As you saw in Section 5.8, we can also give names to the function arguments, in this case, to make it clearer what we mean by “12” in the code above. 
In fact, from now on, that is what we will do; we will specify the size of our sample by using the name for the function argument to rnd.choice — size — like this:\n\n# Select 12 jurors at random from the black / white pool.\n# Specify the sample size using the \"size\" named argument.\none_jury = rnd.choice(jury_pool, size=12)\none_jury\n\narray(['black', 'white', 'white', 'white', 'black', 'white', 'black',\n 'white', 'white', 'white', 'white', 'white'], dtype='<U5')\n\n\n\n\nWe can use == on the array to get True values where the juror was “black” and False values otherwise:\n\nare_black = one_jury == 'black'\nare_black\n\narray([ True, False, False, False, True, False, True, False, False,\n False, False, False])\n\n\nFinally, we can np.sum to find the number of black jurors (Section 5.14):\n\n# Number of black jurors in this simulated jury.\nn_black = np.sum(are_black)\nn_black\n\n3\n\n\nPutting that all together, this is our new procedure to select one jury and count the number of black jurors:\n\none_jury = rnd.choice(jury_pool, size=12)\nare_black = one_jury == 'black'\nn_black = np.sum(are_black)\nn_black\n\n3\n\n\nOr we can be even more compact by putting several statements together into one line:\n\n# The same as above, but on one line.\nn_black = np.sum(rnd.choice(jury_pool, size=12) == 'black')\nn_black\n\n1" + }, + { + "objectID": "sampling_tools.html#resampling-with-and-without-replacement", + "href": "sampling_tools.html#resampling-with-and-without-replacement", + "title": "6  Tools for samples and sampling", + "section": "6.7 Resampling with and without replacement", + "text": "6.7 Resampling with and without replacement\nNow let us return to the details of Robert Swain’s case, that you first saw in Chapter 7.\nWe looked at the composition of Robert Swain’s 12-person jury — but in fact, by law, that does not have to be representative of the eligible jurors. The 12-person jury is drawn from a jury panel, of 100 people, and this should, in turn, be drawn from the population of all eligible jurors in the county, consisting, at the time, of “all male citizens in the community over 21 who are reputed to be honest, intelligent men and are esteemed for their integrity, good character and sound judgment.” So, unless there was some bias against black jurors, we might expect the 100-person jury panel to be a plausibly random sample of the eligible jurors, of whom 26% were black. See the Supreme Court case judgement for details.\nIn fact, in Robert Swain’s trial, there were 8 black members in the 100-person jury panel. We will leave it to you to adapt the simulation from Chapter 7 to ask the question — is 8% surprising as a random sample from a population with 26% black people?\nBut we have a different question: given that 8 out of 100 of the jury panel were black, is it surprising that none of the 12-person jury were black? 
As usual, we can answer that question with simulation.\nLet’s think about what a single simulated jury selection would look like.\nFirst we compile a representation of the actual jury panel, using the tools we have used above.\n\njuror_types = ['black', 'white']\n# in fact there were 8 black jurors and 92 white jurors.\npanel_nos = [8, 92]\njury_panel = np.repeat(juror_types, panel_nos)\n# Show the result\njury_panel\n\narray(['black', 'black', 'black', 'black', 'black', 'black', 'black',\n 'black', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white', 'white', 'white',\n 'white', 'white'], dtype='<U5')\n\n\nNow consider taking a 12-person jury at random from this panel. We select the first juror at random, so that juror has an 8 out of 100 chance of being black. But when we select the second jury member, the situation has changed slightly. We can’t select the first juror again, so our panel is now 99 people. If our first juror was black, then the chances of selecting another black juror next are not 8 out of 100, but 7 out of 99 — a smaller chance. The problem is, as we shall see in more detail later, the chances of getting a black juror as the second, and third and fourth members of the jury depend on whether we selected a black juror as the first and second and third jury members. At its most extreme, imagine we had already selected eight jurors, and by some strange chance, all eight were black. Now our chances of selecting a black juror as the ninth juror are zero — there are no black jurors left to select from the panel.\nIn this case we are selecting jurors from the panel without replacement, meaning, that once we have selected a particular juror, we cannot select them again, and we do not put them back into the panel when we select our next juror.\nThis is the probability equivalent of the situation when you are dealing a hand of cards. Let’s say someone is dealing you, and you only, a hand of five cards. You get an ace as your first card. Your chances of getting an ace as your first card were just the number of aces in the deck divided by the number of cards — four in 52 – \\(\\frac{4}{52}\\). But for your second card, the probability has changed, because there is one less ace remaining in the pack, and one less card, so your chances of getting an ace as your second card are now \\(\\frac{3}{51}\\). This is sampling without replacement — in a normal game, you can’t get the same card twice. Of course, you could imagine getting a hand where you sampled with replacement. 
In that case, you’d get a card, you’d write down what it was, and you’d give the card back to the dealer, who would replace the card in the deck, shuffle again, and give you another card.\nAs you can see, the chances change if you are sampling with or without replacement, and the kind of sampling you do, will dictate how you model your chances in your simulations.\nBecause this distinction is so common, and so important, the machinery you have already seen in rnd.choice has simple ways for you to select your sampling type. You have already seen sampling with replacement, and it looks like this:\n\n# Take a sample of 12 jurors from the panel *with replacement*\n# With replacement is the default for `rnd.choice`.\nstrange_jury = rnd.choice(jury_panel, size=12)\nstrange_jury\n\narray(['white', 'white', 'white', 'black', 'white', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white'], dtype='<U5')\n\n\nThis is a strange jury, because it can select any member of the jury pool more than once. Perhaps that juror would have to fill two (or more!) seats, or run quickly between them. But of course, that is not how juries are selected. They are selected without replacement:\n\n# Take a sample of 12 jurors from the panel *without replacement*\nok_jury = rnd.choice(jury_panel, 12, replace=False)\nok_jury\n\narray(['white', 'white', 'white', 'white', 'black', 'white', 'white',\n 'white', 'white', 'white', 'white', 'white'], dtype='<U5')\n\n\n\n\n\n\n\n\nComments at the end of lines\n\n\n\nYou have already seen comment lines. These are lines beginning with #, to signal to Python that the rest of the line is text for humans to read, but Python to ignore.\n\n# This is a comment. Python ignores this line.\n\nYou can also put comments at the end of code lines, by finishing the code part of the line, and then putting a #, followed by more text. Again, Python will ignore everything after the # as a text for humans, but not for Python.\n\nprint('Hello') # This is a comment at the end of the line.\n\nHello\n\n\n\n\nTo finish the procedure for simulating a single jury selection, we count the number of black jurors:\n\nn_black = np.sum(ok_jury == 'black') # How many black jurors?\nn_black\n\n1\n\n\nNow we have the procedure for one simulated trial, here is the procedure for 10000 simulated trials.\n\ncounts = np.zeros(10000)\nfor i in np.arange(10000):\n # Single trial procedure\n jury = rnd.choice(jury_panel, size=12, replace=False)\n n_black = np.sum(jury == 'black') # How many black jurors?\n # Store the result\n counts[i] = n_black\n\n# Number of juries with 0 black jurors.\nzero_black = np.sum(counts == 0)\n# Proportion\np_zero_black = zero_black / 10000\nprint(p_zero_black)\n\n0.3421\n\n\nWe have found that, when there are only 8% black jurors in the jury panel, having no black jurors in the final jury happens about 34% of the time, even in this case, where the jury is selected completely at random from the jury panel.\nWe should look for the main source of bias in the initial selection of the jury panel, not in the selection of the jury from the panel.\n\nEnd of sampling_tools notebook" + }, + { + "objectID": "sampling_tools.html#conclusion", + "href": "sampling_tools.html#conclusion", + "title": "6  Tools for samples and sampling", + "section": "6.8 Conclusion", + "text": "6.8 Conclusion\nThis chapter introduced you to the idea of strings — values in Python that store bits of text. Strings are very useful as labels for the entities we are sampling from, when we do our simulations. 
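For example, here is a minimal sketch of that pattern, using the same np.repeat and rnd.choice tools from this chapter. The pool sizes and labels below are invented for illustration; they are not from the jury example.

import numpy as np
rnd = np.random.default_rng()
# Build a labeled pool: 3 'red' elements and 7 'blue' elements.
pool = np.repeat(['red', 'blue'], [3, 7])
# Sample 5 elements without replacement, then count the 'red' labels.
sample = rnd.choice(pool, size=5, replace=False)
n_red = np.sum(sample == 'red')
n_red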
Strings are particularly useful when we use them with arrays, and one way we often do that is to build up arrays of strings to sample from, using the np.repeat function.\nThere is a fundamental distinction between two different types of sampling — sampling with replacement, where we draw an element from a larger pool, then put that element back before drawing again, and sampling without replacement, where we remove the element from the remaining pool when we draw it into the sample. As we will see later, it is often a judgment call which of these two types of sampling is a more reasonable model of the world you are trying to simulate." + }, + { + "objectID": "resampling_with_code2.html#a-question-of-life-and-death", + "href": "resampling_with_code2.html#a-question-of-life-and-death", + "title": "7  More resampling with code", + "section": "7.1 A question of life and death", + "text": "7.1 A question of life and death\nThis example comes from the excellent Berkeley introduction to data science (Ani Adhikari and Wagner 2021).\nRobert Swain was a young black man who was sentenced to death in the early 60s. Swain’s trial was held in Talladega County, Alabama. At the time, 26% of the eligible jurors in that county were black, but every member of Swain’s jury was white. Swain and his legal team appealed to the Alabama Supreme Court, and then to the US Supreme Court, arguing that there was racial bias in the jury selection. They noted that there had been no black jurors in Talladega county since 1950, even though they made up about a quarter of the eligible pool of jurors. The US Supreme Court rejected this argument, in a 6 to 3 opinion, writing that “The overall percentage disparity has been small and reflects no studied attempt to include or exclude a specified number of Negros.”.\nSwain’s team presented a variety of evidence on bias in jury selection, but here we will look at the obvious and apparently surprising fact that Swain’s jury was entirely white. The Supreme Court decided that the “disparity” between selection of white and black jurors “has been small” — but how would they, and how would we, make a rational decision about whether this disparity really was “small”?\nYou might reasonably be worried about the result of this decision for Robert Swain. In fact his death sentence was invalidated by a later, unrelated decision and he served a long prison sentence instead. In 1986, the Supreme Court overturned the precedent set by Swain’s case, in Batson v. Kentucky, 476 U.S. 79." + }, + { + "objectID": "resampling_with_code2.html#a-small-disparity-and-a-hypothetical-world", + "href": "resampling_with_code2.html#a-small-disparity-and-a-hypothetical-world", + "title": "7  More resampling with code", + "section": "7.2 A small disparity and a hypothetical world", + "text": "7.2 A small disparity and a hypothetical world\nTo answer the question that the Supreme Court asked, we return to the method we used in the last chapter.\nLet us imagine a hypothetical world, in which each individual black or white person had an equal chance of being selected for the jury. Call this world Hypothetical County, Alabama.\nJust as in 1960’s Talladega County, 26% of eligible jurors in Hypothetical County are black. Hypothetical County jury selection has no bias against black people, so we expect around 26% of the jury to be black. 0.26 * 12 = 3.12, so we expect that, on average, just over 3 out of 12 jurors in a Hypothetical County jury will be black. 
But, if we select each juror at random from the population, that means that, sometimes, by chance, we will have fewer than 3 black jurors, and sometimes we will have more than 3 black jurors. And, by chance, sometimes we will have no black jurors. But, if the jurors really are selected at random, how often would we expect this to happen — that there are no black jurors? We would like to estimate the probability that we will get no black jurors. If that probability is small, then we have some evidence that the disparity in selection between black and white jurors was not “small”.\n\nWhat is the probability of an all white jury being randomly selected out of a population having 26% black people?" + }, + { + "objectID": "resampling_with_code2.html#designing-the-experiment", + "href": "resampling_with_code2.html#designing-the-experiment", + "title": "7  More resampling with code", + "section": "7.3 Designing the experiment", + "text": "7.3 Designing the experiment\nBefore we start, we need to figure out three things:\n\nWhat do we mean by one trial?\nWhat is the outcome of interest from the trial?\nHow do we simulate one trial?\n\nWe then take three steps to calculate the desired probability:\n\nRepeat the simulated trial procedure N times.\nCount M, the number of trials with an outcome that matches the outcome we are interested in.\nCalculate the proportion, M/N. This is an estimate of the probability in question.\n\nFor this problem, our task is made a little easier by the fact that our trial (in the resampling sense) is a simulated trial (in the legal sense). One trial requires 12 simulated jurors, each labeled by race (white or black).\nThe outcome we are interested in is the number of black jurors.\nNow comes the harder part. How do we simulate one trial?\n\n7.3.1 One trial\nOne trial requires 12 jurors, and we are interested only in the race of each juror. In Hypothetical County, where selection by race is entirely random, each juror has a 26% chance of being black.\nWe need a way of simulating a 26% chance.\nOne way of doing this is by getting a random number from 0 through 99 (inclusive). There are 100 numbers in the range 0 through 99 (inclusive).\nWe will arbitrarily say that the juror is white if the random number is in the range from 0 through 73. 74 of the 100 numbers are in this range, so the juror has a 74/100 = 74% chance of getting the label “white”. We will say the juror is black if the random number is in the range 74 through 99. There are 26 such numbers, so the juror has a 26% chance of getting the label “black”.\nNext we need a way of getting a random number in the range 0 through 99. This is an easy job for the computer, but if we had to do this with a physical device, we could get a single number by throwing two 10-sided dice, say a blue die and a green die. The face of the blue die will be the 10s digit, and the green face will be the ones digit. So, if the blue die comes up with 8 and the green die has 4, then the random number is 84.\nWe could then simulate 12 jurors by repeating this process 12 times, each time writing down “white” if the number is from 0 through 73, and “black” otherwise. The trial outcome is the number of times we wrote “black” for these 12 simulated jurors.\n\n\n7.3.2 Using code to simulate a trial\nWe use the same logic to simulate a trial with the computer. A little code makes the job easier, because we can ask Python to give us 12 random numbers from 0 through 99, and to count how many of these numbers are in the range from 74 through 99. 
Numbers in the range from 74 through 99 correspond to black jurors.\n\n\n7.3.3 Random numbers from 0 through 99\nWe can now use NumPy and the random number functions from the last chapter to get 12 random numbers from 0 through 99.\n\n# Import the Numpy library, rename as \"np\"\nimport numpy as np\n\n# Ask NumPy for a random number generator.\nrnd = np.random.default_rng()\n\n# All the integers from 0 up to, but not including 100.\nzero_thru_99 = np.arange(100)\n\n# Get 12 random numbers from 0 through 99\na = rnd.choice(zero_thru_99, size=12)\n\n# Show the result\na\n\narray([59, 43, 45, 58, 95, 89, 23, 99, 17, 51, 85, 23])\n\n\n\n7.3.3.1 Counting the jurors\nWe use comparison and np.sum to count how many numbers are greater than 73, and therefore, in the range from 74 through 99:\n\n# How many numbers are greater than 73?\nb = np.sum(a > 73)\n# Show the result\nb\n\n4\n\n\n\n\n7.3.3.2 A single simulated trial\nWe assemble the pieces from the last few sections to make a cell that simulates a single trial:\n\nrnd = np.random.default_rng()\nzero_thru_99 = np.arange(100)\n\n# Get 12 random numbers from 0 through 99\na = rnd.choice(zero_thru_99, size=12)\n\n# How many numbers are greater than 73?\nb = np.sum(a > 73)\n\n# Show the result\nb\n\n4" + }, + { + "objectID": "resampling_with_code2.html#three-simulation-steps", + "href": "resampling_with_code2.html#three-simulation-steps", + "title": "7  More resampling with code", + "section": "7.4 Three simulation steps", + "text": "7.4 Three simulation steps\nNow we come back to the details of how we:\n\nRepeat the simulated trial many times;\nrecord the results for each trial;\ncalculate the required proportion as an estimate of the probability we seek.\n\nRepeating the trial many times is the job of the for loop, and we will come to that soon.\nIn order to record the results, we will store each trial result in an array.\n\n\n\n\n\n\nMore on arrays\n\n\n\nSince we will be working with arrays a lot, it is worth knowing more about them.\nA NumPy array is a container that stores many elements of the same type. You have already seen, in Chapter 2, how we can create an array from a sequence of numbers using the np.array function.\n\n# Make an array of numbers, store with the name \"some_numbers\".\nsome_numbers = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n# Show the value of \"some_numbers\"\nsome_numbers\n\narray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n\n\nAnother way that we can create arrays is to use the np.zeros function to make a new array where all the elements are 0.\n\n# Make a new array containing 5 zeros.\n# store with the name \"z\".\nz = np.zeros(5)\n# Show the value of \"z\"\nz\n\narray([0., 0., 0., 0., 0.])\n\n\nNotice the argument 5 to the np.zeros function. This tells the function how many zeros we want in the array that the function will return.\n\n7.5 array length\nThere are various useful things we can do with this array container. One is to ask how many elements there are in the array container. We can use the len function to calculate the number of elements in an array:\n\n# Show the number of elements in \"z\"\nlen(z)\n\n5\n\n\n\n\n7.6 Indexing into arrays\nAnother thing we can do is set the value for a particular element in the array. 
To do this, we use square brackets following the array value, on the left hand side of the equals sign, like this:\n\n# Set the value of the *first* element in the array.\nz[0] = 99\n# Show the new contents of the array.\nz\n\narray([99., 0., 0., 0., 0.])\n\n\nRead the first line of code as “the element at position 0 gets a value of 99”.\n\nNotice that the position number of the first element in the array is 0, and the position number of the second element is 1. Think of the position as an offset from the beginning of the array. The first element is at the beginning of the array, and so it is at offset (position) 0. This can be a little difficult to get used to at first, but you will find that thinking of positions as offsets in this way soon starts to come naturally, and later you will also find that it helps you to avoid some common mistakes when using positions for getting and setting values.\n\nFor practice, let us also set the value of the third element in the array:\n\n# Set the value of the *third* element in the array.\nz[2] = 99\n# Show the new contents of the array.\nz\n\narray([99., 0., 99., 0., 0.])\n\n\nRead the first code line above as “set the value at position 2 in the array to have the value 99”.\nWe can also get the value of the element at a given position, using the same square-bracket notation:\n\n# Get the value of the *first* element in the array.\n# Store the value with name \"v\"\nv = z[0]\n# Show the value we got\nv\n\n99.0\n\n\nRead the first code line here as “v gets the value at position 0 in the array”.\nUsing square brackets to get and set element values is called indexing into the array.\n\n\n\n\n7.6.1 Repeating trials\nAs a preview, let us now imagine that we want to do 50 simulated trials of Robert Swain’s jury in Hypothetical County. We will want to store the count for each trial, to give 50 counts.\nIn order to do this, we make an array to hold the 50 counts. Call this array z.\n\n# An array to hold the 50 count values.\nz = np.zeros(50)\n\nWe could run a single trial to get a single simulated count. Here we just repeat the code cell you saw above. Notice that we can get a different result each time we run this code, because the numbers in a are random choices from the range 0 through 99, and different random numbers will give different counts.\n\nrnd = np.random.default_rng()\nzero_thru_99 = np.arange(0, 100)\n# Get 12 random numbers from 0 through 99\na = rnd.choice(zero_thru_99, size=12)\n# How many numbers are greater than 73?\nb = np.sum(a > 73)\n# Show the result\nb\n\n4\n\n\nNow that we have the result of a single trial, we can store it as the first number in the z array:\n\n# Store the single trial count as the first value in the \"z\" array.\nz[0] = b\n# Show all the values in the \"z\" array.\nz\n\narray([4., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])\n\n\nOf course we could just keep doing this: run the cell corresponding to a trial, above, to get a new count, and then store it at the next position in the z array. 
For example, we could store the counts for the first three trials with:\n\n# First trial\na = rnd.choice(zero_thru_99, size=12)\nb = np.sum(a > 73)\n# Store the result at the first position in z\n# Remember, the first position is offset 0.\nz[0] = b\n# Second trial\na = rnd.choice(zero_thru_99, size=12)\nb = np.sum(a > 73)\n# Store the result at the second position in z\nz[1] = b\n# Third trial\na = rnd.choice(zero_thru_99, size=12)\nb = np.sum(a > 73)\n# Store the result at the third position in z\nz[2] = b\n\n# And so on ...\n\nThis would get terribly long and boring to type for 50 trials. Luckily computer code is very good at repeating the same procedure many times. For example, Python can do this using a for loop. You have already seen a preview of the for loop in Chapter 2. Here we dive into for loops in more depth.\n\n\n7.6.2 For-loops in Python\nA for-loop is a way of asking Python to:\n\nTake a sequence of things, one by one, and\nDo the same task on each one.\n\nWe often use this idea when we are trying to explain a repeating procedure. For example, imagine we wanted to explain what the supermarket checkout person does for the items in your shopping basket. You might say that they do this:\n\nFor each item of shopping in your basket, they take the item off the conveyor belt, scan it, and put it on the other side of the till.\n\nYou could also break this description up into bullet points with indentation, to say the same thing:\n\nFor each item from your shopping basket, they:\n\nTake the item off the conveyor belt.\nScan the item.\nPut it on the other side of the till.\n\n\nNotice the logic; the checkout person is repeating the same procedure for each of a series of items.\nThis is the logic of the for loop in Python. The procedure that Python repeats is called the body of the for loop. In the example of the checkout person above, the repeating procedure is:\n\nTake the item off the conveyor belt.\nScan the item.\nPut it on the other side of the till.\n\nNow imagine we wanted to use Python to print out the year of birth for each of the authors of the third edition of this book:\n\n\n\nAuthor\nYear of birth\n\n\n\n\nJulian Lincoln Simon\n1932\n\n\nMatthew Brett\n1964\n\n\nStéfan van der Walt\n1980\n\n\nIan Nimmo-Smith\n1944\n\n\n\nWe want to see this output:\nAuthor birth year is 1932\nAuthor birth year is 1964\nAuthor birth year is 1980\nAuthor birth year is 1944\nOf course, we could just ask Python to print out these exact lines, like this:\n\nprint('Author birth year is 1932')\n\nAuthor birth year is 1932\n\nprint('Author birth year is 1964')\n\nAuthor birth year is 1964\n\nprint('Author birth year is 1980')\n\nAuthor birth year is 1980\n\nprint('Author birth year is 1944')\n\nAuthor birth year is 1944\n\n\nWe might instead notice that we are repeating the same procedure for each of the four birth years, and decide to do the same thing using a for loop:\n\nauthor_birth_years = np.array([1932, 1964, 1980, 1944])\n\n# For each birth year\nfor birth_year in author_birth_years:\n # Repeat this procedure ...\n print('Author birth year is', birth_year)\n\nAuthor birth year is 1932\nAuthor birth year is 1964\nAuthor birth year is 1980\nAuthor birth year is 1944\n\n\nThe for loop starts with a line where we tell it what items we want to repeat the procedure for:\n\nfor birth_year in author_birth_years:\nThis initial line of the for loop ends with a colon.\nThe next thing in the for loop is the procedure Python should follow for each item. 
Python knows that the following lines are the procedure it should repeat, because the lines are indented. The indented lines are the body of the for loop.\n\nThe initial line of the for loop above tells Python that it should take each item in author_birth_years, one by one — first 1932, then 1964, then 1980, then 1944. For each of these numbers it will:\n\nPut the number into the variable birth_year, then\nRun the indented code .\n\nJust as the person at the supermarket checkout takes each item in turn, for each iteration (repeat) of the for loop, birth_year gets a new value from the sequence in author_birth_years. birth_year is called the loop variable, because it is the variable that gets a new value each time we begin a new iteration of the for loop procedure. As for any variable in Python, we can call our loop variable anything we like. We used birth_year here, but we could have used y or year or some other name.\nNow you know what the for loop is doing, you can see that the for loop above is equivalent to the following code:\n\nbirth_year = 1932 # Set the loop variable to contain the first value.\nprint('Author birth year is', birth_year) # Use it.\n\nAuthor birth year is 1932\n\nbirth_year = 1964 # Set the loop variable to contain the next value.\nprint('Author birth year is', birth_year) # Use the second value.\n\nAuthor birth year is 1964\n\nbirth_year = 1980\nprint('Author birth year is', birth_year)\n\nAuthor birth year is 1980\n\nbirth_year = 1944\nprint('Author birth year is', birth_year)\n\nAuthor birth year is 1944\n\n\nWriting the steps in the for loop out like this is called unrolling the loop. It can be a useful exercise to do this when you come across a for loop, in order to work through the logic of the loop. For example, you may want to write out the unrolled equivalent of the first couple of iterations, to see what the loop variable will be, and what will happen in the body of the loop.\nWe often use for loops with ranges (see Section 5.9). Here we use a loop to print out the numbers 0 through 3:\n\nfor n in np.arange(0, 4):\n print('The loop variable n is', n)\n\nThe loop variable n is 0\nThe loop variable n is 1\nThe loop variable n is 2\nThe loop variable n is 3\n\n\nNotice that the range ended at (the number before) 4, and that means we repeat the loop body 4 times. 
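If you want to check that claim for yourself, one way (not shown in the book, and using a counter name of our own, n_iterations) is to count how many times the body runs:\n\n# A counter for the number of times the loop body has run.\nn_iterations = 0\nfor n in np.arange(0, 4):\n # The body of the loop: add 1 to the counter.\n n_iterations = n_iterations + 1\n\n# Show how many times the body ran.\nn_iterations\n\n4\n\n\n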
We can also use the loop variable value from the range as an index, to get or set the first, second, etc. values from an array.\nFor example, maybe we would like to show the author position and the author year of birth.\nRemember our author birth years:\n\nauthor_birth_years\n\narray([1932, 1964, 1980, 1944])\n\n\nWe can get (for example) the second author birth year with:\n\nauthor_birth_years[1]\n\n1964\n\n\n\nRemember, for Python, the first element is position 0, so the second element is position 1.\n\nUsing the combination of looping over a range, and array indexing, we can print out the author position and the author birth year:\n\nfor n in np.arange(0, 4):\n year = author_birth_years[n]\n print('Birth year of author position', n, 'is', year)\n\nBirth year of author position 0 is 1932\nBirth year of author position 1 is 1964\nBirth year of author position 2 is 1980\nBirth year of author position 3 is 1944\n\n\n\nAgain, remember Python considers 0 as the first position.\n\nJust for practice, let us unroll the first two iterations through this for loop, to remind ourselves what the code is doing:\n\n# Unrolling the for loop.\nn = 0\nyear = author_birth_years[n] # Will be 1932\nprint('Birth year of author position', n, 'is', year)\n\nBirth year of author position 0 is 1932\n\nn = 1\nyear = author_birth_years[n] # Will be 1964\nprint('Birth year of author position', n, 'is', year)\n\nBirth year of author position 1 is 1964\n\n# And so on.\n\n\n\n7.6.3 range in Python for loops\nSo far we have used np.arange to give us the sequence of integers that we feed into the for loop. But — as you saw in Section 5.10 — we can also get a range of numbers from Python’s range function. range is a common and useful alternative way to provide a range of numbers to a for loop.\nYou have just seen how we would use np.arange to send the numbers 0, 1, 2, and 3 to a for loop, in the example above, repeated here:\n\nfor n in np.arange(0, 4):\n year = author_birth_years[n]\n print('Birth year of author position', n, 'is', year)\n\nBirth year of author position 0 is 1932\nBirth year of author position 1 is 1964\nBirth year of author position 2 is 1980\nBirth year of author position 3 is 1944\n\n\nWe could also use range instead of np.arange to do the same task:\n\nfor n in range(0, 4):\n year = author_birth_years[n]\n print('Birth year of author position', n, 'is', year)\n\nBirth year of author position 0 is 1932\nBirth year of author position 1 is 1964\nBirth year of author position 2 is 1980\nBirth year of author position 3 is 1944\n\n\nIn fact, you will see this pattern throughout the book, where we use for statements like for value in range(10000): to ask Python to put each number in the range 0 up to (not including) 10,000 into the variable value, and then do something in the body of the loop. Just to be clear, we could always, and almost as easily, write for value in np.arange(10000): to do the same task. 
But — even though we could use np.arange to get an array of numbers, we generally prefer range in our Python for loops, because it is just a little less typing (without the np.a of np.arange), and because it is a more common pattern in standard Python code.1\n\n\n7.6.4 Putting it all together\nHere is the code we worked out above, to implement a single trial:\n\nrnd = np.random.default_rng()\nzero_thru_99 = np.arange(0, 100)\n# Get 12 random numbers from 0 through 99\na = rnd.choice(zero_thru_99, size=12)\n# How many numbers are greater than 73?\nb = np.sum(a > 73)\n# Show the result\nb\n\n4\n\n\nWe found that we could use arrays to store the results of these trials, and that we could use for loops to repeat the same procedure many times.\nNow we can put these parts together to do 50 simulated trials:\n\n# Procedure for 50 simulated trials.\n\n# The Numpy random number generator.\nrnd = np.random.default_rng()\n\n# All the numbers from 0 through 99.\nzero_through_99 = np.arange(0, 100)\n\n# An array to store the counts for each trial.\nz = np.zeros(50)\n\n# Repeat the trial procedure 50 times.\nfor i in np.arange(0, 50):\n # Get 12 random numbers from 0 through 99\n a = rnd.choice(zero_through_99, size=12)\n # How many numbers are greater than 73?\n b = np.sum(a > 73)\n # Store the result at the next position in the \"z\" array.\n z[i] = b\n # Now go back and do the next trial until finished.\n# Show the result of all 50 trials.\nz\n\narray([4., 2., 3., 3., 4., 1., 4., 2., 7., 2., 3., 1., 6., 2., 5., 5., 3.,\n 1., 3., 4., 2., 2., 2., 4., 3., 4., 4., 2., 3., 3., 3., 1., 3., 1.,\n 2., 3., 2., 2., 3., 3., 6., 1., 3., 3., 4., 2., 4., 3., 4., 3.])\n\n\nFinally, we need to count how many of the trials in z ended up with all-white juries. These are the trials with a z (count) value of 0.\nTo do this, we can ask an array which elements match a certain condition. E.g.:\n\nx = np.array([2, 1, 3, 0])\ny = x < 2\n# Show the result\ny\n\narray([False, True, False, True])\n\n\nWe now use that same technique to ask, of each of the 50 counts, whether the array z is equal to 0, like this:\n\n# Is the value of z equal to 0?\nall_white = z == 0\n# Show the result of the comparison.\nall_white\n\narray([False, False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False, False,\n False, False, False, False, False])\n\n\nWe need to get the number of True values in all_white, to find how many simulated trials gave all-white juries.\n\n# Count the number of True values in \"all_white\"\n# This is the same as the number of values in \"z\" that are equal to 0.\nn_all_white = np.sum(all_white)\n# Show the result of the count.\nn_all_white\n\n0\n\n\nn_all_white is the number of simulated trials for which all the jury members were white. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.\n\n# Proportion of trials where all jury members were white.\np = n_all_white / 50\n# Show the result\np\n\n0.0\n\n\nFrom this initial simulation, it seems there is around a 0% chance that a jury selected randomly from the population, which was 26% black, would have no black jurors."
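As an aside, and not part of the book's procedure above, NumPy can also give us the juror labels directly: the p argument to rnd.choice sets the chance of each label, so we do not need the numbers 0 through 99 at all. A minimal sketch of one trial, with variable names of our own choosing:\n\nimport numpy as np\nrnd = np.random.default_rng()\n# One simulated jury: each of the 12 jurors is 'black' with probability 0.26.\njurors = rnd.choice(['white', 'black'], size=12, p=[0.74, 0.26])\n# The trial outcome: the number of black jurors.\nn_black = np.sum(jurors == 'black')\nn_black\n\nThe count will vary from run to run, but the same for loop and the same counting of zero values as above would again estimate the proportion of all-white juries.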
+ }, + { + "objectID": "resampling_with_code2.html#sec-array-length", + "href": "resampling_with_code2.html#sec-array-length", + "title": "7  More resampling with code", + "section": "7.5 array length", + "text": "7.5 array length\nThere are various useful things we can do with this array container. One is to ask how many elements there are in the array container. We can use the len function to calculate the number of elements in an array:\n\n# Show the number of elements in \"z\"\nlen(z)\n\n5" + }, + { + "objectID": "resampling_with_code2.html#sec-array-indexing", + "href": "resampling_with_code2.html#sec-array-indexing", + "title": "7  More resampling with code", + "section": "7.6 Indexing into arrays", + "text": "7.6 Indexing into arrays\nAnother thing we can do is set the value for a particular element in the array. To do this, we use square brackets following the array value, on the left hand side of the equals sign, like this:\n\n# Set the value of the *first* element in the array.\nz[0] = 99\n# Show the new contents of the array.\nz\n\narray([99., 0., 0., 0., 0.])\n\n\nRead the first line of code as “the element at position 0 gets a value of 99”.\n\nNotice that the position number of the first element in the array is 0, and the position number of the second element is 1. Think of the position as an offset from the beginning of the array. The first element is at the beginning of the array, and so it is at offset (position) 0. This can be a little difficult to get used to at first, but you will find that thinking of positions as offsets in this way soon starts to come naturally, and later you will also find that it helps you to avoid some common mistakes when using positions for getting and setting values.\n\nFor practice, let us also set the value of the third element in the array:\n\n# Set the value of the *third* element in the array.\nz[2] = 99\n# Show the new contents of the array.\nz\n\narray([99., 0., 99., 0., 0.])\n\n\nRead the first code line above as “set the value at position 2 in the array to have the value 99”.\nWe can also get the value of the element at a given position, using the same square-bracket notation:\n\n# Get the value of the *first* element in the array.\n# Store the value with name \"v\"\nv = z[0]\n# Show the value we got\nv\n\n99.0\n\n\nRead the first code line here as “v gets the value at position 0 in the array”.\nUsing square brackets to get and set element values is called indexing into the array." + }, + { + "objectID": "resampling_with_code2.html#many-many-trials", + "href": "resampling_with_code2.html#many-many-trials", + "title": "7  More resampling with code", + "section": "7.7 Many many trials", + "text": "7.7 Many many trials\nOur experiment above is only 50 simulated trials. The higher the number of trials, the more confident we can be of our estimate for p — the proportion of trials where we get an all-white jury.\nIt is no extra trouble for us to tell the computer to do a very large number of trials. For example, we might want to run 10,000 trials instead of 50. All we have to do is to run the loop 10,000 times instead of 50 times. The computer has to do more work, but it is more than up to the job.\nHere is exactly the same code we ran above, but collected into one cell, and using 10,000 trials instead of 50. 
We have left out the comments, to make the code more compact.\n\n# Full simulation procedure, with 10,000 trials.\nrnd = np.random.default_rng()\nzero_through_99 = np.arange(0, 100)\n# 10,000 trials.\nz = np.zeros(10000)\nfor i in np.arange(0, 10000):\n a = rnd.choice(zero_through_99, size=12)\n b = np.sum(a > 73)\n z[i] = b\nall_white = z == 0\nn_all_white = sum(all_white)\np = n_all_white / 10000\np\n\n0.0305\n\n\nWe now have a new, more accurate estimate of the proportion of Hypothetical County juries that are all white. The proportion is 0.03, and so 3%.\nThis proportion means that, for any one jury from Hypothetical County, there is a less than one in 20 chance that the jury would be all white.\nAs we will see in more detail later, we might consider using the results from this experiment in Hypothetical County, to reflect on the result we saw in the real Talladega County. We might conclude, for example, that there was likely some systematic difference between Hypothetical County and Talladega County. Maybe the difference was that there was, in fact, some bias in the jury selection in Talladega County, and that the Supreme Court was wrong to reject this. You will hear more of this line of reasoning later in the book." + }, + { + "objectID": "resampling_with_code2.html#conclusion", + "href": "resampling_with_code2.html#conclusion", + "title": "7  More resampling with code", + "section": "7.8 Conclusion", + "text": "7.8 Conclusion\nIn this chapter we studied a real life-and-death question, on racial bias and the death penalty. We continued our exploration of the ways we can use probability, and resampling, to draw conclusions about real events. Along the way, we went into more detail on arrays in Python, and for loops: two basic tools in resampling.\nIn the next chapter, we will work through some more problems in probability, to show how we can use resampling to answer questions about chance. We will add some more tools for writing code in Python, to make your programs easier to write, read, and understand.\n\n\n\n\nAni Adhikari, John DeNero, and David Wagner. 2021. Computational and Inferential Thinking: The Foundations of Data Science. https://inferentialthinking.com." + }, + { + "objectID": "probability_theory_1a.html#introduction", + "href": "probability_theory_1a.html#introduction", + "title": "8  Probability Theory, Part 1", + "section": "8.1 Introduction", + "text": "8.1 Introduction\nLet’s assume we understand the nature of the system or mechanism that produces the uncertain events in which we are interested. That is, the probability of the relevant independent simple events is assumed to be known, the way we assume we know the probability of a single “6” with a given die. The task is to determine the probability of various sequences or combinations of the simple events — say, three “6’s” in a row with the die. These are the sorts of probability problems dealt with in this chapter.\n\nThe resampling method — or just call it simulation or Monte Carlo method, if you prefer — will be illustrated with classic examples. Typically, a single trial of the system is simulated with cards, dice, random numbers, or a computer program. Then trials are repeated again and again to estimate the frequency of occurrence of the event in which we are interested; this is the probability we seek. We can obtain as accurate an estimate of the probability as we wish by increasing the number of trials. 
The key task in each situation is designing an experiment that accurately simulates the system in which we are interested.\nThis chapter begins the Monte Carlo simulation work that culminates in the resampling method in statistics proper. The chapter deals with problems in probability theory — that is, situations where one wants to estimate the probability of one or more particular events when the basic structure and parameters of the system are known. In later chapters we move on to inferential statistics, where similar simulation work is known as resampling." + }, + { + "objectID": "probability_theory_1a.html#definitions", + "href": "probability_theory_1a.html#definitions", + "title": "8  Probability Theory, Part 1", + "section": "8.2 Definitions", + "text": "8.2 Definitions\nA few definitions first:\n\nSimple Event : An event such as a single flip of a coin, or one draw of a single card. A simple event cannot be broken down into simpler events of a similar sort.\nSimple Probability (also called “primitive probability”): The probability that a simple event will occur; for example, that my favorite football team, the Washington Commanders, will win on Sunday.\n\nDuring a recent season, the “experts” said that the Commanders had a 60 percent chance of winning on Opening Day; that estimate is a simple probability. We can model that probability by putting into a bucket six green balls to stand for wins, and four red balls to stand for losses (or we could use 60 and 40 balls, or 600 and 400). For the outcome on any given day, we draw one ball from the bucket, and record a simulated win if the ball is green, a loss if the ball is red.\nSo far the bucket has served only as a physical representation of our thoughts. But as we shall see shortly, this representation can help us think clearly about the process of interest to us. It can also give us information that is not yet in our thoughts.\nEstimating simple probabilities wisely depends largely upon gathering evidence well. It also helps to adjust one’s probability estimates skillfully to make them internally consistent. Estimating probabilities has much in common with estimating lengths, weights, skills, costs, and other subjects of measurement and judgment.\nSome more definitions:\n\nComposite Event : A composite event is the combination of two or more simple events. Examples include all heads in three throws of a single coin; all heads in one throw of three coins at once; Sunday being a nice day and the Commanders winning; and the birth of nine females out of the next ten calves born if the chance of a female in a single birth is 0.48.\nCompound Probability : The probability that a composite event will occur.\n\nThe difficulty in estimating simple probabilities such as the chance of the Commanders winning on Sunday arises from our lack of understanding of the world around us. The difficulty of estimating compound probabilities such as the probability of it being a nice day Sunday and the Commanders winning is the weakness in our mathematical intuition interacting with our lack of understanding of the world around us. Our task in the study of probability and statistics is to overcome the weakness of our mathematical intuition by using a systematic process of simulation (or the devices of formulaic deductive theory).\nConsider now a question about a compound probability: What are the chances of the Commanders winning their first two games if we think that each of those games can be modeled by our bucket containing six green and four red balls? 
If one drawing from the bucket represents one game, a second drawing should represent the second game (assuming we replace the first ball drawn in order to keep the chances of winning the same for the two games). If so, two drawings from the bucket should represent two games. And we can then estimate the compound probability we seek with a series of two-ball trial experiments.\nMore specifically, our procedure in this case — the prototype of all procedures in the resampling simulation approach to probability and statistics — is as follows:\n\nPut six green (“Win”) and four red (“Lose”) balls in a bucket.\nDraw a ball, record its color, and replace it (so that the probability of winning the second simulated game is the same as the first).\nDraw another ball and record its color.\nIf both balls drawn were green record “Yes”; otherwise record “No.”\nRepeat steps 2-4 a thousand times.\nCount the proportion of “Yes”s to the total number of “Yes”s and “No”s; the result is the probability we seek.\n\nMuch the same procedure could be used to estimate the probability of the Commanders winning (say) 3 of their next 4 games. We will return to this illustration again and we will see how it enables us to estimate many other sorts of probabilities.\n\nExperiment or Experimental Trial, or Trial, or Resampling Experiment : A simulation experiment or trial is a randomly-generated composite event which has the same characteristics as the actual composite event in which we are interested (except that in inferential statistics the resampling experiment is generated with the “benchmark” or “null” universe rather than with the “alternative” universe). \nParameter : A numerical property of a universe. For example, the “true” mean (don’t worry about the meaning of “true”), and the range between largest and smallest members, are two of its parameters." + }, + { + "objectID": "probability_theory_1a.html#theoretical-and-historical-methods-of-estimation", + "href": "probability_theory_1a.html#theoretical-and-historical-methods-of-estimation", + "title": "8  Probability Theory, Part 1", + "section": "8.3 Theoretical and historical methods of estimation", + "text": "8.3 Theoretical and historical methods of estimation\nAs introduced in Section 3.5, there are two general ways to tackle any probability problem: theoretical-deductive and empirical , each of which has two sub-types. These concepts have complicated links with the concept of “frequency series” discussed earlier.\n\nEmpirical Methods . One empirical method is to look at actual cases in nature — for example, examine all (or a sample of) the families in Brazil that have four children and count the proportion that have three girls among them. (This is the most fundamental process in science and in information-getting generally. But in general we do not discuss it in this book and leave it to courses called “research methods.” I regard that as a mistake and a shame, but so be it.) In some cases, of course, we cannot get data in such fashion because it does not exist.\nAnother empirical method is to manipulate the simple elements in such fashion as to produce hypothetical experience with how the simple elements behave. This is the heart of the resampling method, as well as of physical simulations such as wind tunnels.\nTheoretical Methods . The most fundamental theoretical approach is to resort to first principles, working with the elements in their full deductive simplicity, and examining all possibilities. 
This is what we do when we use a tree diagram to calculate the probability of three girls in families of four children.\n\n\nThe formulaic approach is a theoretical method that aims to avoid the inconvenience of resorting to first principles, and instead uses calculation shortcuts that have been worked out in the past.\nWhat the Book Teaches . This book teaches you the empirical method using hypothetical cases. Formulas can be misleading for most people in most situations, and should be used as a shortcut only when a person understands exactly which first principles are embodied in the formulas. But most of the time, students and practitioners resort to the formulaic approach without understanding the first principles that lie behind them — indeed, their own teachers often do not understand these first principles — and therefore they have almost no way to verify that the formula is right. Instead they use canned checklists of qualifying conditions." + }, + { + "objectID": "probability_theory_1a.html#samples-and-universes", + "href": "probability_theory_1a.html#samples-and-universes", + "title": "8  Probability Theory, Part 1", + "section": "8.4 Samples and universes", + "text": "8.4 Samples and universes\nThe terms “sample” and “universe” (or “population”) 1 were used earlier without definition. But now these terms must be defined.\n\n8.4.1 The concept of a sample\nFor our purposes, a “sample” is a collection of observations for which you obtain the data to be used in the problem. Almost any set of observations for which you have data constitutes a sample. (You might, or might not, choose to call a complete census a sample.)" + }, + { + "objectID": "probability_theory_1a.html#the-concept-of-a-universe-or-population", + "href": "probability_theory_1a.html#the-concept-of-a-universe-or-population", + "title": "8  Probability Theory, Part 1", + "section": "8.5 The concept of a universe or population", + "text": "8.5 The concept of a universe or population\nFor every sample there must also be a universe “behind” it. But “universe” is harder to define, partly because it is often an imaginary concept. A universe is the collection of things or people that you want to say that your sample was taken from . A universe can be finite and well defined — “all live holders of the Congressional Medal of Honor,” “all presidents of major universities,” “all billion-dollar corporations in the United States.” Of course, these finite universes may not be easy to pin down; for instance, what is a “major university”? And these universes may contain some elements that are difficult to find; for instance, some Congressional Medal winners may have left the country, and there may not be adequate public records on some billion-dollar corporations.\nUniverses that are called “infinite” are harder to understand, and it is often difficult to decide which universe is appropriate for a given purpose. For example, if you are studying a sample of patients suffering from schizophrenia, what is the universe from which the sample comes? Depending on your purposes, the appropriate universe might be all patients with schizophrenia now alive, or it might be all patients who might ever live. The latter concept of the universe of patients with schizophrenia is imaginary because some of the universe does not exist. 
And it is infinite because it goes on forever.\nNot everyone likes this definition of “universe.” Others prefer to think of a universe, not as the collection of people or things that you want to say your sample was taken from, but as the collection that the sample was actually taken from. This latter view equates the universe to the “sampling frame” (the actual list or set of elements you sample from) which is always finite and existent. The definition of universe offered here is simply the most practical, in our opinion." + }, + { + "objectID": "probability_theory_1a.html#the-conventions-of-probability", + "href": "probability_theory_1a.html#the-conventions-of-probability", + "title": "8  Probability Theory, Part 1", + "section": "8.6 The conventions of probability", + "text": "8.6 The conventions of probability\nLet’s review the basic conventions and rules used in the study of probability:\n\nProbabilities are expressed as decimals between 0 and 1, like percentages. The weather forecaster might say that the probability of rain tomorrow is 0.2, or 0.97.\nThe probabilities of all the possible alternative outcomes in a single “trial” must add to unity. If you are prepared to say that it must either rain or not rain, with no other outcome being possible — that is, if you consider the outcomes to be mutually exclusive (a term that we discuss below), then one of those probabilities implies the other. That is, if you estimate that the probability of rain is 0.2 — written \\(P(\\text{rain}) = 0.2\\) — that implies that you estimate that \\(P(\\text{no rain}) = 0.8\\).\n\n\n\n\n\n\n\nWriting probabilities\n\n\n\nWe will now be writing some simple formulae using probability. Above we write the probability of rain tomorrow as \\(P(\\text{rain})\\). This probability might be 0.2, and we could write this as:\n\\[\nP(\\text{rain}) = 0.2\n\\]\nWe can term “rain tomorrow” an event — the event may occur: \\(\\text{rain}\\), or it may not occur: \\(\\text{no rain}\\).\nWe often shorten the name of our event — here \\(\\text{rain}\\) — to a single letter, such as \\(R\\). So, in this case, we could write \\(P(\\text{rain}) = 0.2\\) as \\(P(R) = 0.2\\) — meaning the same thing. We tend to prefer single letters — as in \\(P(R)\\) — to longer names — as in \\(P(\\text{rain})\\). This is because the single letters can be easier to read in these compact formulae.\nAbove we have written the probability of “rain tomorrow” event not occurring as \\(P(\\text{no rain})\\). Another way of referring to an event not occurring is to suffix the event name with a caret (^) character like this: \\(\\ \\hat{} R\\). So read \\(P(\\ \\hat{} R)\\) as “the probability that it will not rain”, and it is just another way of writing \\(P(\\text{no rain})\\). We sometimes call \\(\\ \\hat{} R\\) the complement of \\(R\\).\nWe use \\(\\text{and}\\) between two events to mean both events occur.\nFor example, say we call the event “Commanders win the game” as \\(W\\). One example of a compound event (see above) would be the event \\(W \\text{and} R\\), meaning, the event where the Commanders won the game and it rained." 
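As a small illustration of these conventions, and not a procedure from the book, we can model \(P(\text{rain}) = 0.2\) with a bucket of 2 'rain' balls and 8 'no rain' balls (the bucket and the variable names here are our own), and check that the two proportions add to 1, just as \(P(R) + P(\ \hat{} R) = 1\):\n\nimport numpy as np\nrnd = np.random.default_rng()\n# 2 'rain' balls and 8 'no rain' balls, to model P(rain) = 0.2.\nbucket = np.repeat(['rain', 'no rain'], [2, 8])\n# Draw one ball for each of 10,000 simulated days.\ndays = rnd.choice(bucket, size=10000)\np_rain = np.sum(days == 'rain') / 10000\np_no_rain = np.sum(days == 'no rain') / 10000\n# The two proportions must add up to 1.\nprint(p_rain, p_no_rain, p_rain + p_no_rain)\n\nEvery simulated day is either 'rain' or 'no rain', so however the two proportions come out, their sum is exactly 1.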
+ }, + { + "objectID": "probability_theory_1a.html#mutually-exclusive-events-the-addition-rule", + "href": "probability_theory_1a.html#mutually-exclusive-events-the-addition-rule", + "title": "8  Probability Theory, Part 1", + "section": "8.7 Mutually exclusive events — the addition rule", + "text": "8.7 Mutually exclusive events — the addition rule\nDefinition: If there are just two events \\(A\\) and \\(B\\) and they are “mutually exclusive” or “disjoint,” each implies the absence of the other. Green and red coats are mutually exclusive for you if (but only if) you never wear more than one coat at a time.\nTo state this idea formally, if \\(A\\) and \\(B\\) are mutually exclusive, then:\n\\[\nP(A \\text{ and } B) = 0\n\\]\nIf \\(A\\) is “wearing a green coat” and \\(B\\) is “wearing a red coat” (and you never wear two coats at the same time), then the probability that you are wearing a green coat and a red coat is 0: \\(P(A \\text{ and } B) = 0\\).\nIn that case, outcomes \\(A\\) and \\(B\\), and hence outcome \\(A\\) and its own absence (written \\(P(\\ \\hat{} A)\\)), are necessarily mutually exclusive, and hence the two probabilities add to unity:\n\n\\[\nP(A) + P(\\ \\hat{} A) = 1\n\\]\nThe sales of your store in a given year cannot be both above and below $1 million. Therefore if \\(P(\\text{sales > \\$1 million}) = 0.2\\), \\(P(\\text{sales <=\n\\$1 million}) = 0.8\\).\nThis “complements” rule is useful as a consistency check on your estimates of probabilities. If you say that the probability of rain is 0.2, then you should check that you think that the probability of no rain is 0.8; if not, reconsider both the estimates. The same for the probabilities of your team winning and losing its next game." + }, + { + "objectID": "probability_theory_1a.html#joint-probabilities", + "href": "probability_theory_1a.html#joint-probabilities", + "title": "8  Probability Theory, Part 1", + "section": "8.8 Joint probabilities", + "text": "8.8 Joint probabilities\nLet’s return now to the Commanders. We said earlier that our best guess of the probability that the Commanders will win the first game is 0.6. Let’s complicate the matter a bit and say that the probability of the Commanders winning depends upon the weather; on a nice day we estimate a 0.65 chance of winning, on a nasty (rainy or snowy) day a chance of 0.55. It is obvious that we then want to know the chance of a nice day, and we estimate a probability of 0.7. Let’s now ask the probability that both will happen — it will be a nice day and the Commanders will win .\nBefore getting on with the process of estimation itself, let’s tarry a moment to discuss the probability estimates. Where do we get the notion that the probability of a nice day next Sunday is 0.7? We might have done so by checking the records of the past 50 years, and finding 35 nice days on that date. If we assume that the weather has not changed over that period (an assumption that some might not think reasonable, and the wisdom of which must be the outcome of some non-objective judgment), our probability estimate of a nice day would then be 35/50 = 0.7.\nTwo points to notice here: 1) The source of this estimate is an objective “frequency series.” And 2) the data come to us as the records of 50 days, of which 35 were nice. We would do best to stick with exactly those numbers rather than convert them into a single number — 70 percent. Percentages have a way of being confusing. 
(When his point score goes up from 2 to 3, my racquetball partner is fond of saying that he has made a “fifty percent increase”; that’s just one of the confusions with percentages.) And converting to a percent loses information: We no longer know how many observations the percent is based upon, whereas 35/50 keeps that information.\nNow, what about the estimate that the Commanders have a 0.65 chance of winning on a nice day — where does that come from? Unlike the weather situation, there is no long series of stable data to provide that information about the probability of winning. Instead, we construct an estimate using whatever information or “hunch” we have. The information might include the Commanders’ record earlier in this season, injuries that have occurred, what the “experts” in the newspapers say, the gambling odds, and so on. The result certainly is not “objective,” or the result of a stable frequency series. But we treat the 0.65 probability in quite the same way as we treat the 0.7 estimate of a nice day. In the case of winning, however, we produce an estimate expressed directly as a percent.\nIf we are shaky about the estimate of winning — as indeed we ought to be, because so much judgment and guesswork inevitably goes into it — we might proceed as follows: Take hold of a bucket and two bags of balls, green and red. Put into the bucket some number of green balls — say 10. Now add enough red balls to express your judgment that the ratio is the ratio of expected wins to losses on a nice day, adding or subtracting green balls as necessary to get the ratio you want. If you end up with 13 green and 7 red balls, then you are “modeling” a probability of 0.65, as stated above. If you end up with a different ratio of balls, then you have learned from this experiment with your own mind processes that you think that the probability of a win on a nice day is something other than 0.65.\nDon’t put away the bucket. We will be using it again shortly. And keep in mind how we have just been using it, because our use later will be somewhat different though directly related.\nOne good way to begin the process of producing a compound estimate is by portraying the available data in a “tree diagram” like Figure 8.1. The tree diagram shows the possible events in the order in which they might occur. A tree diagram is extremely valuable whether you will continue with simulation or with the formulaic method.\n\n\n\n\n\nFigure 8.1: Tree diagram" + }, + { + "objectID": "probability_theory_1a.html#sec-what-is-resampling", + "href": "probability_theory_1a.html#sec-what-is-resampling", + "title": "8  Probability Theory, Part 1", + "section": "8.9 The Monte Carlo simulation method (resampling)", + "text": "8.9 The Monte Carlo simulation method (resampling)\nThe steps we follow to simulate an answer to the compound probability question are as follows:\n\nPut seven blue balls (for “nice day”) and three yellow balls (“not nice”) into a bucket labeled A.\nPut 65 green balls (for “win”) and 35 red balls (“lose”) into a bucket labeled B. This bucket represents the chance that the Commanders will win when it is a nice day.\nDraw one ball from bucket A. If it is blue, carry on to the next step; otherwise record “no” and stop.\nIf you have drawn a blue ball from bucket A, now draw a ball from bucket B, and if it is green, record “yes” on a score sheet; otherwise write “no.”\nRepeat steps 3-4 perhaps 10000 times.\nCount the number of “yes” trials.\nCompute the probability you seek as (number of “yeses”/ 10000). 
(This is the same as number of “yeses” / (number of “yeses” + number of “noes”).)\n\nActually doing the above series of steps by hand is useful to build your intuition about probability and simulation methods. But the procedure can also be simulated with a computer. We will use Python to do this in a moment." + }, + { + "objectID": "probability_theory_1a.html#if-statements-in", + "href": "probability_theory_1a.html#if-statements-in", + "title": "8  Probability Theory, Part 1", + "section": "8.10 If statements in Python", + "text": "8.10 If statements in Python\nBefore we get to the simulation, we need another feature of Python, called a conditional or if statement.\nHere we have rewritten step 4 above, but using indentation to emphasize the idea:\nIf you have drawn a blue ball from bucket A:\n Draw a ball from bucket B\n if the ball is green:\n record \"yes\"\n otherwise:\n record \"no\".\nNotice the structure. The first line is the header of the if statement. It has a condition — this is why if statements are often called conditional statements. The condition here is “you have drawn a blue ball from bucket A”. If this condition is met — it is True that you have drawn a blue ball from bucket A — then we go on to do the stuff that is indented. Otherwise we do not do any of the stuff that is indented.\nThe indented stuff above is the body of the if statement. It is the stuff we do if the conditional at the top is True.\nNow let’s see how we would write that in Python.\nLet’s make bucket A. Remember, this is the weather bucket. It has seven blue balls (for 70% fine days) and three yellow balls (for 30% rainy days). See Section 6.6 for the np.repeat way of repeating elements multiple times.\n\nStart of fine_win notebook\n\nDownload notebook\nInteract\n\n\n\n# Load the NumPy array library.\nimport numpy as np\n\n# Make a random number generator\nrnd = np.random.default_rng()\n\n\n# blue means \"nice day\", yellow means \"not nice\".\nbucket_A = np.repeat(['blue', 'yellow'], [7, 3])\nbucket_A\n\narray(['blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'yellow',\n 'yellow', 'yellow'], dtype='<U6')\n\n\nNow let us draw a ball at random from bucket_A:\n\na_ball = rnd.choice(bucket_A)\na_ball\n\n'blue'\n\n\nNow we run our first if statement. Running this code will display “The ball was blue” if the ball was blue, otherwise it will not display anything:\n\nif a_ball == 'blue':\n print('The ball was blue')\n\nThe ball was blue\n\n\n\nNotice that the header line has if, followed by the conditional expression (question) a_ball == 'blue'. The header line finishes with a colon :. The body of the if statement is one or more indented lines. Here there is only one line: print('The ball was blue'). Python only runs the body of the if statement if the condition is True.2\n\nTo confirm we see “The ball was blue” if a_ball is 'blue' and nothing otherwise, we can set a_ball and re-run the code:\n\n# Set value of a_ball so we know what it is.\na_ball = 'blue'\n\n\nif a_ball == 'blue':\n # The conditional statement is True in this case, so the body does run.\n print('The ball was blue')\n\nThe ball was blue\n\n\n\na_ball = 'yellow'\n\n\nif a_ball == 'blue':\n # The conditional statement is False, so the body does not run.\n print('The ball was blue')\n\nWe can add an else clause to the if statement. Remember the body of the if statement runs if the conditional expression (here a_ball == 'blue') is True. The else clause runs if the conditional statement is False. 
This may be clearer with an example:\n\na_ball = 'blue'\n\n\nif a_ball == 'blue':\n # The conditional expression is True in this case, so the body runs.\n print('The ball was blue')\nelse:\n # The conditional expression was True, so the else clause does not run.\n print('The ball was not blue')\n\nThe ball was blue\n\n\n\nNotice that the else clause of the if statement starts with a header line — else — followed by a colon :. It then has its own indented body of indented code. The body of the else clause only runs if the initial conditional expression is not True.\n\n\na_ball = 'yellow'\n\n\nif a_ball == 'blue':\n # The conditional expression was False, so the body does not run.\n print('The ball was blue')\nelse:\n # but the else clause does run.\n print('The ball was not blue')\n\nThe ball was not blue\n\n\nWith this machinery, we can now implement the full logic of step 4 above:\nIf you have drawn a blue ball from bucket A:\n Draw a ball from bucket B\n if the ball is green:\n record \"yes\"\n otherwise:\n record \"no\".\nHere is bucket B. Remember green means “win” (65% of the time) and red means “lose” (35% of the time). We could call this the “Commanders win when it is a nice day” bucket:\n\nbucket_B = np.repeat(['green', 'red'], [65, 35])\n\nThe full logic for step 4 is:\n\n# By default, say we have no result.\nresult = 'No result'\na_ball = rnd.choice(bucket_A)\n# If you have drawn a blue ball from bucket A:\nif a_ball == 'blue':\n # Draw a ball at random from bucket B\n b_ball = rnd.choice(bucket_B)\n # if the ball is green:\n if b_ball == 'green':\n # record \"yes\"\n result = 'yes'\n # otherwise:\n else:\n # record \"no\".\n result = 'no'\n# Show what we got in this case.\nresult\n\n'yes'\n\n\nNow we have everything we need to run many trials with the same logic.\n\n# The result of each trial.\n# To start with, say we have no result for all the trials.\nz = np.repeat(['No result'], 10000)\n\n# Repeat trial procedure 10000 times\nfor i in range(10000):\n # draw one \"ball\" for the weather, store in \"a_ball\"\n # blue is \"nice day\", yellow is \"not nice\"\n a_ball = rnd.choice(bucket_A)\n if a_ball == 'blue': # nice day\n # if no rain, check on game outcome\n # green is \"win\" (give nice day), red is \"lose\" (given nice day).\n b_ball = rnd.choice(bucket_B)\n if b_ball == 'green': # Commanders win\n # Record result.\n z[i] = 'yes'\n else:\n z[i] = 'no'\n # End of trial, go back to the beginning until done.\n\n# Count of the number of times we got \"yes\".\nk = np.sum(z == 'yes')\n# Show the proportion of *both* fine day *and* wins\nkk = k / 10000\nkk\n\n0.4603\n\n\nThe above procedure gives us the probability that it will be a nice day and the Commanders will win — about 46%.\nEnd of fine_win notebook\n\nLet’s say that we think that the Commanders have a 0.55 (55%) chance of winning on a not-nice day. With the aid of a bucket with a different composition — one made by substituting 55 green and 45 yellow balls in Step 4 — a similar procedure yields the chance that it will be a nasty day and the Commanders will win. With a similar substitution and procedure we could also estimate the probabilities that it will be a nasty day and the Commanders will lose, and a nice day and the Commanders will lose. The sum of these probabilities should come close to unity, because the sum includes all the possible outcomes. 
But it will not exactly equal unity because of what we call “sampling variation” or “sampling error.”\nPlease notice that each trial of the procedure begins with the same numbers of balls in the buckets as the previous trial. That is, you must replace the balls you draw after each trial in order that the probabilities remain the same from trial to trial. Later we will discuss the general concept of replacement versus non-replacement more fully." + }, + { + "objectID": "probability_theory_1a.html#the-deductive-formulaic-method", + "href": "probability_theory_1a.html#the-deductive-formulaic-method", + "title": "8  Probability Theory, Part 1", + "section": "8.11 The deductive formulaic method", + "text": "8.11 The deductive formulaic method\nIt also is possible to get an answer with formulaic methods to the question about a nice day and the Commanders winning. The following discussion of nice-day-Commanders-win handled by formula is a prototype of the formulaic deductive method for handling other problems.\nReturn now to the tree diagram (Figure 8.1) above. We can read from the tree diagram that 70 percent of the time it will be nice, and of that 70 percent of the time, 65 percent of the games will be wins. That is, \\(0.65 * 0.7 = 0.455\\) = the probability of a nice day and a win. That is the answer we seek. The method seems easy, but it also is easy to get confused and obtain the wrong answer." + }, + { + "objectID": "probability_theory_1a.html#multiplication-rule", + "href": "probability_theory_1a.html#multiplication-rule", + "title": "8  Probability Theory, Part 1", + "section": "8.12 Multiplication rule", + "text": "8.12 Multiplication rule\nWe can generalize what we have just done. The foregoing formula exemplifies what is known as the “multiplication rule”:\n\\[\nP(\\text{nice day and win}) = P(\\text{nice day}) * P(\\text{winning | nice day})\n\\]\nwhere the vertical line in \\(P(\\text{winning | nice day})\\) means “conditional upon” or “given that.” That is, the vertical line indicates a “conditional probability,” a concept we must consider in a minute.\nThe multiplication rule is a formula that produces the probability of the combination (juncture) of two or more events . More discussion of it will follow below." + }, + { + "objectID": "probability_theory_1a.html#sec-cond-uncond", + "href": "probability_theory_1a.html#sec-cond-uncond", + "title": "8  Probability Theory, Part 1", + "section": "8.13 Conditional and unconditional probabilities", + "text": "8.13 Conditional and unconditional probabilities\nTwo kinds of probability statements — conditional and unconditional — must now be distinguished.\nIt is the appropriate concept when many factors, all small relative to each other rather than one force having an overwhelming influence, affect the outcome.\nA conditional probability is formally written \\(P(\\text{Commanders win\n| rain}) = 0.65\\), and it is read “The probability that the Commanders will win if (given that) it rains is 0.65.” It is the appropriate concept when there is one (or more) major event of interest in decision contexts.\nLet’s use another football example to explain conditional and unconditional probabilities. In the year this was being written, the University of Maryland had an unpromising football team. Someone may nevertheless ask what chance the team had of winning the post season game at the bowl to which only the best team in the University of Maryland’s league is sent. 
One may say that if by some miracle the University of Maryland does get to the bowl, its chance would be a bit less than 50- 50 — say, 0.40. That is, the probability of its winning, conditional on getting to the bowl is 0.40. But the chance of its getting to the bowl at all is very low, perhaps 0.01. If so, the unconditional probability of winning at the bowl is the probability of its getting there multiplied by the probability of winning if it gets there; that is, 0.01 x 0.40 = 0.004. (It would be even better to say that .004 is the probability of winning conditional only on having a team, there being a league, and so on, all of which seem almost sure things.) Every probability is conditional on many things — that war does not break out, that the sun continues to rise, and so on. But if all those unspecified conditions are very sure, and can be taken for granted, we talk of the probability as unconditional.\nA conditional probability is a statement that the probability of an event is such-and-such if something else is so-and-so; it is the “if” that makes a probability statement conditional. True, in some sense all probability statements are conditional; for example, the probability of an even-numbered spade is 6/52 if the deck is a poker deck and not necessarily if it is a pinochle deck or Tarot deck. But we ignore such conditions for most purposes.\nMost of the use of the concept of probability in the social sciences is conditional probability. All hypothesis-testing statistics (discussed starting in Chapter 20) are conditional probabilities.\nHere is the typical conditional-probability question used in social-science statistics: What is the probability of obtaining this sample S (by chance) if the sample were taken from universe A? For example, what is the probability of getting a sample of five children with I.Q.s over 100 by chance in a sample randomly chosen from the universe of children whose average I.Q. is 100?\nOne way to obtain such conditional-probability statements is by examination of the results generated by universes like the conditional universe. For example, assume that we are considering a universe of children where the average I.Q. is 100.\nWrite down “over 100” and “under 100” respectively on many slips of paper, put them into a hat, draw five slips several times, and see how often the first five slips drawn are all over 100. This is the resampling (Monte Carlo simulation) method of estimating probabilities.\nAnother way to obtain such conditional-probability statements is formulaic calculation. For example, if half the slips in the hat have numbers under 100 and half over 100, the probability of getting five in a row above 100 is 0.03125 — that is, \\(0.5^5\\), or 0.5 x 0.5 x 0.5 x 0.5 x 0.5, using the multiplication rule introduced above. But if you are not absolutely sure you know the proper mathematical formula, you are more likely to come up with a sound answer with the simulation method.\nLet’s illustrate the concept of conditional probability with four cards — two aces and two 3’s (or two black and two red). What is the probability of an ace? Obviously, 0.5. If you first draw an ace, what is the probability of an ace now? That is, what is the probability of an ace conditional on having drawn one already? 
Obviously not 0.5.\nThis change in the conditional probabilities is the basis of mathematician Edward Thorp’s famous system of card-counting to beat the casinos at blackjack (Twenty One).\nCasinos can defeat card counting by using many decks at once so that conditional probabilities change more slowly, and are not very different from unconditional probabilities. Looking ahead, we will see that sampling with replacement, and sampling without replacement from a huge universe, are much the same in practice, so we can substitute one for the other at our convenience.\nLet’s further illustrate the concept of conditional probability with a puzzle (from Gardner 2001, 288). “… shuffle a packet of four cards — two red, two black — and deal them face down in a row. Two cards are picked at random, say by placing a penny on each. What is the probability that those two cards are the same color?”\n1. Play the game with the cards 100 times, and estimate the probability sought.\nOR\n\nPut slips with the numbers “1,” “1,” “2,” and “2” in a hat, or in an array named N on a computer.\nShuffle the slips of paper by shaking the hat or shuffling the array (of which more below).\nTake two slips of paper from the hat or from N, to get two numbers.\nCall the first number you selected A and the second B.\nAre A and B the same? If so, record “Yes”, otherwise “No”.\nRepeat (2-5) 10000 times, and count the proportion of “Yes” results. That proportion equals the probability we seek to estimate.\n\nBefore we proceed to do this procedure in Python, we need a command to shuffle an array." + }, + { + "objectID": "probability_theory_1a.html#sec-shuffling", + "href": "probability_theory_1a.html#sec-shuffling", + "title": "8  Probability Theory, Part 1", + "section": "8.14 Shuffling with rnd.permuted", + "text": "8.14 Shuffling with rnd.permuted\nIn the recipe above, the array N has four values:\n\n# Numbers representing the slips in the hat.\nN = np.array([1, 1, 2, 2])\n\nFor the physical simulation, we specified that we would shuffle the slips of paper with these numbers, meaning that we would jumble them up into a random order. When we have done this, we will select two slips — say the first two — from the shuffled slips.\nAs we will be discussing more in various places, this shuffle-then-draw procedure is also called resampling without replacement. The without replacement idea refers to the fact that, after shuffling, we take a first virtual slip of paper from the shuffled array, and then a second — but we do not replace the first slip of paper into the shuffled array before drawing the second. For example, say I drew a “1” from N for the first value. If I am sampling without replacement then, when I draw the next value, the candidates I am choosing from are now “1”, “2” and “2”, because I have removed the “1” I got as the first value. If I had instead been sampling with replacement, then I would put back the “1” I had drawn, and would draw the second sample from the full set of “1”, “1”, “2”, “2”.\nYou can use rnd.permuted to shuffle an array into a random order.\nLike rnd.choice, rnd.permuted is a function (actually, a method) of rnd that takes an array as input, and produces a version of the array where the elements are in random order.\n\n# The array N, shuffled into a random order.\nshuffled = rnd.permuted(N)\n# The \"slips\" are now in random order.\nshuffled\n\narray([2, 2, 1, 1])\n\n\nSee Section 11.4 for some more discussion of shuffling and sampling without replacement."
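As a side note, and not part of the book's recipe, shuffling and then taking the first two slips gives the same kind of draw as asking rnd.choice for two values with replace=False, which never picks the same slip twice. A small sketch, with variable names of our own:\n\nimport numpy as np\nrnd = np.random.default_rng()\nN = np.array([1, 1, 2, 2])\n# Shuffle, then take the first two slips, as in the recipe above.\nshuffled = rnd.permuted(N)\nfirst_two = shuffled[:2]\n# Or draw two slips directly, without replacement.\nalso_two = rnd.choice(N, size=2, replace=False)\nprint(first_two, also_two)\n\nBoth are ways of sampling without replacement from the four slips.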
+ }, + { + "objectID": "probability_theory_1a.html#code-answers-to-the-cards-and-pennies-problem", + "href": "probability_theory_1a.html#code-answers-to-the-cards-and-pennies-problem", + "title": "8  Probability Theory, Part 1", + "section": "8.15 Code answers to the cards and pennies problem", + "text": "8.15 Code answers to the cards and pennies problem\n\nStart of cards_pennies notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\n# Numbers representing the slips in the hat.\nN = np.array([1, 1, 2, 2])\n\n# An array in which we will store the result of each trial.\nz = np.repeat(['No result yet'], 10000)\n\nfor i in range(10000):\n # Shuffle the numbers in N into a random order.\n shuffled = rnd.permuted(N)\n\n A = shuffled[0] # The first slip from the shuffled array.\n B = shuffled[1] # The second slip from the shuffled array.\n\n # Set the result of this trial.\n if A == B:\n z[i] = 'Yes'\n else:\n z[i] = 'No'\n\n# How many times did we see \"Yes\"?\nk = np.sum(z == 'Yes')\n\n# The proportion.\nkk = k / 10000\n\nprint(kk)\n\n0.337\n\n\nNow let’s play the game differently, first picking one card and putting it back and shuffling before picking a second card. What are the results now? You can try it with the cards, but here is another program, similar to the last, to run that variation.\n\n# The cards / pennies game - but replacing the slip and re-shuffling, before\n# drawing again.\n\n# An array in which we will store the result of each trial.\nz = np.repeat(['No result yet'], 10000)\n\nfor i in range(10000):\n # Shuffle the numbers in N into a random order.\n first_shuffle = rnd.permuted(N)\n # Draw a slip of paper.\n A = first_shuffle[0] # The first slip.\n\n # Shuffle again (with all the slips).\n second_shuffle = rnd.permuted(N)\n # Draw a slip of paper.\n B = second_shuffle[0] # The second slip.\n\n # Set the result of this trial.\n if A == B:\n z[i] = 'Yes'\n else:\n z[i] = 'No'\n\n# How many times did we see \"Yes\"?\nk = np.sum(z == 'Yes')\n\n# The proportion.\nkk = k / 10000\n\nprint(kk)\n\n0.5072\n\n\nEnd of cards_pennies notebook\n\nWhy do you get different results in the two cases? Let’s ask the question differently: What is the probability of first picking a black card? Clearly, it is 50-50, or 0.5. Now, if you first pick a black card, what is the probability in the first game above of getting a second black card? There are two red and one black cards left, so now p = 1/3.\nBut in the second game, what is the probability of picking a second black card if the first one you pick is black? It is still 0.5 because we are sampling with replacement.\nThe probability of picking a second black card conditional on picking a first black card in the first game is 1/3, and it is different from the unconditional probability of picking a black card first. But in the second game the probability of the second black card conditional on first picking a black card is the same as the probability of the first black card.\nSo the reason you lose money if you play the first game at even odds against a carnival game operator is because the conditional probability is different than the original probability.\nAnd an illustrative joke: The best way to avoid there being a live bomb aboard your plane flight is to take an inoperative bomb aboard with you; the probability of one bomb is very low, and by the multiplication rule, the probability of two bombs is very very low . 
Two hundred years ago the same joke was told about the midshipman who, during a battle, stuck his head through a hole in the ship’s side that had just been made by an enemy cannon ball because he had heard that the probability of two cannonballs striking in the same place was one in a million.\nWhat’s wrong with the logic in the joke? The probability of there being a bomb aboard already, conditional on your bringing a bomb aboard, is the same as the conditional probability if you do not bring a bomb aboard. Hence you change nothing by bringing a bomb aboard, and do not reduce the probability of an explosion." + }, + { + "objectID": "probability_theory_1a.html#the-commanders-again-plus-leaving-the-game-early", + "href": "probability_theory_1a.html#the-commanders-again-plus-leaving-the-game-early", + "title": "8  Probability Theory, Part 1", + "section": "8.16 The Commanders again, plus leaving the game early", + "text": "8.16 The Commanders again, plus leaving the game early\nLet’s carry exactly the same process one tiny step further. Assume that if the Commanders win, there is a 0.3 chance you will leave the game early. Now let us ask the probability of a nice day, the Commanders winning, and you leaving early. You should be able to see that this probability can be estimated with three buckets instead of two. Or it can be computed with the multiplication rule as 0.65 * 0.7 * 0.3 = 0.1365 (about 0.14) — the probability of a nice day and a win and you leave early.\nThe book shows you the formal method — the multiplication rule, in this case — for several reasons: 1) Simulation is weak with very low probabilities, e.g. P(50 heads in 50 throws). But — a big but — statistics and probability is seldom concerned with very small probabilities. Even for games like poker, the orders of magnitude of 5 aces in a wild game with joker, or of a royal flush, matter little. 2) The multiplication rule is wonderfully handy and convenient for quick calculations in a variety of circumstances. A back-of-the-envelope calculation can be quicker than a simulation. And it can also be useful in situations where the probability you will calculate will be very small, in which case simulation can require considerable computer time to be accurate. (We will shortly see this point illustrated in the case of estimating the rate of transmission of AIDS by surgeons.) 3) It is useful to know the theory so that you are able to talk to others, or if you go on to other courses in the mathematics of probability and statistics.\nThe multiplication rule also has the drawback of sometimes being confusing, however. If you are in the slightest doubt about whether the circumstances are correct for applying it, you will be safer to perform a simulation as we did earlier with the Commanders, though in practice you are likely to simulate with the aid of a computer program, as we shall see shortly. So use the multiplication rule only when there is no possibility of confusion. Usually that means using it only when the events under consideration are independent.\nNotice that the same multiplication rule gives us the probability of any particular sequence of hits and misses — say, a miss, then a hit, then a hit if the probability of a single miss is 2/3. Among the 2/3 of the trials with misses on the first shot, 1/3 will next have a hit, so 2/3 x 1/3 equals the probability of a miss then a hit. 
Of those 2/9 of the trials, 1/3 will then have a hit, or 2/3 x 1/3 x 1/3 = 2/27 equals the probability of the sequence miss-hit-hit.\nThe multiplication rule is very useful in everyday life. It fits closely to a great many situations such as “What is the chance that it will rain (.3) and that (if it does rain) the plane will not fly (.8)?” Hence the probability of your not leaving the airport today is 0.3 x 0.8 = 0.24.\n\n\n\n\nGardner, Martin. 2001. The Colossal Book of Mathematics. W.W. Norton & Company Inc., New York. https://archive.org/details/B-001-001-265." + }, + { + "objectID": "probability_theory_1b.html#sec-independence", + "href": "probability_theory_1b.html#sec-independence", + "title": "9  Probability Theory Part I (continued)", + "section": "9.1 The special case of independence", + "text": "9.1 The special case of independence\nA key concept in probability and statistics is that of the independence of two events in which we are interested. Two events are said to be “independent” when one of them does not have any apparent relationship to the other. If I flip a coin that I know from other evidence is a fair coin, and I get a head, the chance of then getting another head is still 50-50 (one in two, or one to one.) And, if I flip a coin ten times and get heads the first nine times, the probability of getting a head on the tenth flip is still 50-50. Hence the concept of independence is characterized by the phrase “The coin has no memory.” (Actually the matter is a bit more complicated. If you had previously flipped the coin many times and knew it to be a fair coin, then the odds would still be 50-50, even after nine heads. But, if you had never seen the coin before, the run of nine heads might reasonably make you doubt that the coin was a fair one.)\nIn the Washington Commanders example above, we needed a different set of buckets to estimate the probability of a nice day plus a win, and of a nasty day plus a win. But what if the Commanders’ chances of winning are the same whether the day is nice or nasty? If so, we say that the chance of winning is independent of the kind of day. That is, in this special case,\n\\[\nP(\\text{win | nice day}) = P(\\text{win | nasty day}) \\text{ and } P(\\text{nice\nday and win})\n\\]\n\\[\n= P(\\text{nice day}) * P(\\text{winning | nice day})\n\\]\n\\[\n= P(\\text{nice day}) * P(\\text{winning})\n\\]\n\n\n\n\n\n\n\n\n\n\nSee section Section 8.13 for an explanation of this notation.\n\n\nIn this case we need only one set of two buckets to make all the estimates.\nIndependence means that the elements are drawn from 2 or more separate sets of possibilities . That is, \\(P(A | B) = P(A | \\ \\hat{} B) = P(A)\\) and vice versa.\n\nIn other words, if the occurrence of the first event does not change this probability that the second event will occur, then the events are independent.\nAnother way to put the matter: Events A and B are said to be independent of each other if knowing whether A occurs does not change the probability that B will occur, and vice versa. If knowing whether A does occur alters the probability of B occurring, then A and B are dependent.\nIf two events are independent, the multiplication rule simplifies to \\(P(A \\text{ and } B) = P(A) * P(B)\\) . 
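As a rough check of this shortcut, here is a minimal simulation (our sketch, not from the original text; the probabilities 0.7 and 0.65 are only illustrative values) of two independent events. The proportion of trials in which both occur should come out close to 0.7 * 0.65 = 0.455:

import numpy as np
rnd = np.random.default_rng()

# Bucket for event A: 7 of 10 slips say it happened, so P(A) = 0.7.
bucket_a = np.repeat(['A happened', 'A did not happen'], [7, 3])
# Bucket for event B: 13 of 20 slips say it happened, so P(B) = 0.65.
bucket_b = np.repeat(['B happened', 'B did not happen'], [13, 7])

z = np.repeat(['No result yet'], 10000)

for i in range(10000):
    # The two draws come from separate buckets, so A and B are independent.
    a = rnd.choice(bucket_a)
    b = rnd.choice(bucket_b)
    if a == 'A happened' and b == 'B happened':
        z[i] = 'Yes'
    else:
        z[i] = 'No'

# Proportion of trials in which both events occurred.
print(np.sum(z == 'Yes') / 10000)  # Should be close to 0.7 * 0.65 = 0.455.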
I’ll repeat once more: This rule is simply a mathematical shortcut, and one can make the desired estimate by simulation.\nAlso again, if two events are not independent — that is, if \\(P(A | B)\\) is not equal to \\(P(A)\\) because \\(P(A)\\) is dependent upon the occurrence of \\(B\\), then the formula to be used now is, \\(P(A \\text{ and } B) = P(A | B) * P(B)\\) , which is sufficiently confusing that you are probably better off with a simulation.\nWhat about if each of the probabilities is dependent on the other outcome? There is no easy formulaic method to deal with such a situation.\nPeople commonly make the mistake of treating independent events as non-independent, perhaps from superstitious belief. After a long run of blacks, roulette gamblers say that the wheel is “due” to come up red. And sportswriters make a living out of interpreting various sequences of athletic events that occur by chance, and they talk of teams that are “due” to win because of the “Law of Averages.” For example, if Barry Bonds goes to bat four times without a hit, all of us (including trained statisticians who really know better) feel that he is “due” to get a hit and that the probability of his doing so is very high — higher that is, than his season’s average. The so-called “Law of Averages” implies no such thing, of course.\nEvents are often dependent in subtle ways. A boy may telephone one of several girls chosen at random. But, if he calls the same girl again (or if he does not call her again), the second event is not likely to be independent of the first. And the probability of his calling her is different after he has gone out with her once than before he went out with her.\nAs noted in the section above, events A and B are said to be independent of each other if the conditional probabilities of A and B remain the same . And the conditional probabilities remain the same if sampling is conducted with replacement .\n\nLet’s now re-consider the multiplication rule with the special but important case of independence.\n\n9.1.1 Example: Four Events in a Row — The Multiplication Rule\nAssume that we want to know the probability of four successful archery shots in a row, where the probability of a success on a given shot is .25.\nInstead of simulating the process with resampling trials we can, if we wish, arrive at the answer with the “multiplication rule.” This rule says that the probability that all of a given number of independent events (the successful shots) will occur (four out of four in this case) is the product of their individual probabilities — in this case, 1/4 x 1/4 x 1/4 x 1/4 = 1/256. If in doubt about whether the multiplication rule holds in any given case, however, you may check by resampling simulation. For the case of four daughters in a row, assuming that the probability of a girl is .5, the probability is 1/2 x 1/2 x 1/2 x 1/2 = 1/16.\nBetter yet, we’d use the more exact probability of getting a girl: \\(100/206\\), and multiply out the result as \\((100/206)^4\\). An important point here, however: we have estimated the probability of a particular family having four daughters as 1 in 16 — that is, odds of 15 to 1. But note well: This is a very different idea from stating that the odds are 15 to 1 against some family’s having four daughters in a row. In fact, as many families will have four girls in a row as will have boy-girl-boy-girl in that order or girl-boy-girl-boy or any other series of four children. 
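If in doubt, a short simulation settles it. Here is our own sketch (not in the original text), assuming a girl and a boy are equally likely at each birth; it counts families with four girls in a row and families with one particular mixed order, and both proportions should come out near 1/16 = 0.0625:

import numpy as np
rnd = np.random.default_rng()

n_families = 10000
n_four_girls = 0  # Families with girl-girl-girl-girl.
n_bgbg = 0        # Families with the particular order boy-girl-boy-girl.

for i in range(n_families):
    # Four children, each equally likely to be a girl or a boy.
    children = rnd.choice(['girl', 'boy'], size=4)
    if np.all(children == ['girl', 'girl', 'girl', 'girl']):
        n_four_girls = n_four_girls + 1
    if np.all(children == ['boy', 'girl', 'boy', 'girl']):
        n_bgbg = n_bgbg + 1

print(n_four_girls / n_families)  # Close to 1/16.
print(n_bgbg / n_families)        # Also close to 1/16.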
The chances against any particular series is the same — 1 in 16 — and one-sixteenth of all four-children families will have each of these series, on average. This means that if your next-door neighbor has four daughters, you cannot say how much “out of the ordinary” the event is. It is easy to slip into unsound thinking about this matter.\n\nWhy do we multiply the probabilities of the independent simple events to learn the probability that they will occur jointly (the composite event)? Let us consider this in the context of three basketball shots each with 1/3 probability of hitting.\n\n\n\n\n\nFigure 9.1: Tree Diagram for 3 Basketball Shots, Probability of a Hit is 1/3\n\n\n\n\nFigure 9.1 is a tree diagram showing a set of sequential simple events where each event is conditional upon a prior simple event. Hence every probability after the first is a conditional probability.\nIn Figure 9.1, follow the top path first. On approximately one-third of the occasions, the first shot will hit. Among that third of the first shots, roughly a third will again hit on the second shot, that is, 1/3 of 1/3 or 1/3 x 1/3 = 1/9. The top path makes it clear that in 1/3 x 1/3 = 1/9 of the trials, two hits in a row will occur. Then, of the 1/9 of the total trials in which two hits in a row occur, about 1/3 will go on to a third hit, or 1/3 x 1/3 x 1/3 = 1/27. Remember that we are dealing here with independent events; regardless of whether the player made his first two shots, the probability is still 1 in 3 on the third shot." + }, + { + "objectID": "probability_theory_1b.html#the-addition-of-probabilities", + "href": "probability_theory_1b.html#the-addition-of-probabilities", + "title": "9  Probability Theory Part I (continued)", + "section": "9.2 The addition of probabilities", + "text": "9.2 The addition of probabilities\nBack to the Washington Redskins again. You ponder more deeply the possibility of a nasty day, and you estimate with more discrimination that the probability of snow is .1 and of rain it is .2 (with .7 of a nice day). Now you wonder: What is the probability of a rainy day or a nice day?\nTo find this probability by simulation:\n\nPut 7 blue balls (nice day), 1 black ball (snowy day) and 2 gray balls (rainy day) into a bucket. You want to know the probability of a blue or a gray ball. To find this probability:\nDraw one ball and record “yes” if its color is blue or gray, “no” otherwise.\nRepeat step 1 perhaps 200 times.\nFind the proportion of “yes” trials.\n\nThis procedure certainly will do the job. And simulation may be unavoidable when the situation gets more complex. But in this simple case, you are likely to see that you can compute the probability by adding the .7 probability of a nice day and the .2 probability of a rainy day to get the desired probability. This procedure of formulaic deductive probability theory is called the addition rule ." + }, + { + "objectID": "probability_theory_1b.html#the-addition-rule", + "href": "probability_theory_1b.html#the-addition-rule", + "title": "9  Probability Theory Part I (continued)", + "section": "9.3 The addition rule", + "text": "9.3 The addition rule\nThe addition rule applies to mutually exclusive outcomes — that is, the case where if one outcome occurs, the other(s) cannot occur; one event implies the absence of the other when events are mutually exclusive. Green and red coats are mutually exclusive if you never wear more than one coat at a time. If there are only two possible mutually-exclusive outcomes, the outcomes are complementary . 
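Here is a minimal code version of the bucket procedure from the previous section (our sketch, using the probabilities given there: 0.7 nice, 0.1 snowy, 0.2 rainy). The estimated probability of a nice or a rainy day should land close to 0.7 + 0.2 = 0.9, matching the addition rule for mutually exclusive outcomes:

import numpy as np
rnd = np.random.default_rng()

# The bucket: 7 blue balls (nice day), 1 black (snowy), 2 gray (rainy).
bucket = np.repeat(['blue', 'black', 'gray'], [7, 1, 2])

z = np.repeat(['No result yet'], 10000)

for i in range(10000):
    ball = rnd.choice(bucket)  # Draw one ball at random.
    if ball == 'blue' or ball == 'gray':  # Nice day or rainy day.
        z[i] = 'Yes'
    else:
        z[i] = 'No'

# The proportion of "Yes" trials estimates P(nice or rainy).
print(np.sum(z == 'Yes') / 10000)  # Should be close to 0.7 + 0.2 = 0.9.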
It may be helpful to note that mutual exclusivity equals total dependence; if one outcome occurs, the other cannot. Hence we write formally that\n\\[\n\\text{If} P(A \\text{ and } B) = 0 \\text{ then }\n\\]\n\\[\nP(A \\text{ or } B) = P(A) + P(B)\n\\]\nAn outcome and its absence are mutually exclusive, and their probabilities add to unity.\n\\[\nP(A) + P(\\ \\hat{} A) = 1\n\\]\nExamples include a) rain and no rain, and b) if \\(P(\\text{sales > \\$1 million}) = 0.2\\), then \\(P(\\text{sales <= \\$1 million}) = 0.8\\).\nAs with the multiplication rule, the addition rule can be a useful shortcut. The answer can always be obtained by simulation, too.\nWe have so far implicitly assumed that a rainy day and a snowy day are mutually exclusive. But that need not be so; both rain and snow can occur on the same day; if we take this possibility into account, we cannot then use the addition rule.\nConsider the case in which seven days in ten are nice, one day is rainy, one day is snowy, and one day is both rainy and snowy. What is the chance that it will be either nice or snowy? The procedure is just as before, except that some rainy days are included because they are also snowy.\nWhen A and B are not mutually exclusive — when it is possible that the day might be both rainy and snowy, or you might wear both red and green coats on the same day, we write (in the latter case) P(red and green coats) > 0, and the appropriate formula is\n\\[\nP(\\text{red or green}) = P(\\text{red}) + P(\\text{green}) - P(\\text{red and green}) `\n\\]\n\nIn this case as in much of probability theory, the simulation for the case in which the events are not mutually exclusive is no more complex than when they are mutually exclusive; indeed, if you simulate you never even need to know the concept of mutual exclusivity or inquire whether that is your situation. In contrast, the appropriate formula for non-exclusivity is more complex, and if one uses formulas one must inquire into the characteristics of the situation and decide which formula to apply depending upon the classification; if you classify wrongly and therefore apply the wrong formula, the result is a wrong answer.\n\nTo repeat, the addition rule only works when the probabilities you are adding are mutually exclusive — that is, when the two cannot occur together.\nThe multiplication and addition rules are as different from each other as mortar and bricks; both, however, are needed to build walls. The multiplication rule pertains to a single outcome composed of two or more elements (e.g. weather, and win-or-lose), whereas the addition rule pertains to two or more possible outcomes for one element. Drawing from a card deck (with replacement) provides an analogy: the addition rule is like one draw with two or more possible cards of interest, whereas the multiplication rule is like two or more cards being drawn with one particular “hand” being of interest." + }, + { + "objectID": "probability_theory_1b.html#theoretical-devices-for-the-study-of-probability", + "href": "probability_theory_1b.html#theoretical-devices-for-the-study-of-probability", + "title": "9  Probability Theory Part I (continued)", + "section": "9.4 Theoretical devices for the study of probability", + "text": "9.4 Theoretical devices for the study of probability\nIt may help you to understand the simulation approach to estimating composite probabilities demonstrated in this book if you also understand the deductive formulaic approach. 
So we’ll say a bit about it here.\nThe most fundamental concept in theoretical probability is the list of events that may occur, together with the probability of each one (often arranged so as to be equal probabilities). This is the concept that Galileo employed in his great fundamental work in theoretical probability about four hundred years ago when a gambler asked Galileo about the chances of getting a nine rather than a ten in a game of three dice (though others such as Cardano had tackled the subject earlier). 1\nGalileo wrote down all the possibilities in a tree form, a refinement for mapping out the sample space.\nGalileo simply displayed the events themselves — such as “2,” “4,” and “4,” making up a total of 10, a specific event arrived at in a specific way. Several different events can lead to a 10 with three dice. If we now consider each of these events, we arrive at the concept of the ways that a total of 10 can arise. We ask the number of ways that an outcome can and cannot occur. (See the paragraph above). This is equivalent both operationally and linguistically to the paths in (say) the quincunx device or Pascal’s Triangle which we shall discuss shortly.\nA tree is the most basic display of the paths in a given situation. Each branch of the tree — a unique path from the start on the left-hand side to the endpoint on the right-hand side — contains the sequence of all the elements that make up that event, in the order in which they occur. The right-hand ends of the branches constitute a list of the outcomes. That list includes all possible permutations — that is, it distinguishes among outcomes by the orders in which the particular die outcomes occur." + }, + { + "objectID": "probability_theory_1b.html#the-concept-of-sample-space", + "href": "probability_theory_1b.html#the-concept-of-sample-space", + "title": "9  Probability Theory Part I (continued)", + "section": "9.5 The Concept of Sample Space", + "text": "9.5 The Concept of Sample Space\nThe formulaic approach begins with the idea of sample space , which is the set of all possible outcomes of the “experiment” or other situation that interests us. Here is a formal definition from Goldberg (1986, 46):\n\nA sample space S associated with a real or conceptual experiment is a set such that (1) each element of S denotes an outcome of the experiment, and (2) any performance of the experiment results in an outcome that corresponds to one and only one element of S.\n\nBecause the sum of the probabilities for all the possible outcomes in a given experimental trial is unity, the sum of all the events in the sample space (S) = 1.\nEarly on, people came up with the idea of estimating probabilities by arraying the possibilities for, and those against, the event occurring. For example, the coin could fall in three ways — head, tail, or on its side. They then speedily added the qualification that the possibilities in the list must have an equal chance, to distinguish the coin falling on its side from the other possibilities (so ignore it). Or, if it is impossible to make the probabilities equal, make special allowance for inequality. Working directly with the sample space is the method of first principles . The idea of a list was refined to the idea of sample space, and “for” and “against” were refined to the “success” and “failure” elements among the total elements.\nThe concept of sample space raises again the issue of how to estimate the simple probabilities. 
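Before turning to that issue, it may help to see a sample space written out in full. The following sketch is ours (not from the original text); it enumerates the 6 x 6 x 6 = 216 equally likely outcomes for Galileo's three dice and counts the ways of making a total of 9 and a total of 10, reproducing his answer that 10 is slightly more likely:

# Count, over all 216 equally likely outcomes of three dice,
# how many ways give a total of 9 and how many give a total of 10.
n_nine = 0
n_ten = 0
for first in [1, 2, 3, 4, 5, 6]:
    for second in [1, 2, 3, 4, 5, 6]:
        for third in [1, 2, 3, 4, 5, 6]:
            total = first + second + third
            if total == 9:
                n_nine = n_nine + 1
            if total == 10:
                n_ten = n_ten + 1

print(n_nine, n_nine / 216)  # 25 ways, probability 25/216.
print(n_ten, n_ten / 216)    # 27 ways, probability 27/216.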
While we usually can estimate the probabilities accurately in gambling games because we ourselves construct the games and therefore control the probabilities that they produce, we have much less knowledge of the structures that underlie the important problems in life — in science, business, the stock market, medicine, sports, and so on. We therefore must wrestle with the issue of what probabilities we should include in our theoretical sample space, or in our experiments. Often we proceed by choosing as an analogy a physical “model” whose properties we know and which we consider to be appropriate — such as a gambling game with coins, dice, cards. This model becomes our idealized setup. But this step makes crystal-clear that judgment is heavily involved in the process, because choosing the analogy requires judgment.\nA Venn diagram is another device for displaying the elements that make up an event. But unlike a tree diagram, it does not show the sequence of those elements; rather, it shows the extent of overlap among various classes of elements .\nA Venn diagram expresses by areas (especially rectangular Venn diagrams) the numbers at the end of the branches in a tree.\nPascal’s Triangle is still another device. It aggregates the last permutation branches in the tree into combinations — that is, without distinguishing by order. It shows analytically (by tracing them) the various paths that lead to various combinations.\nThe study of the mathematics of probability is the study of calculational shortcuts to do what tree diagrams do. If you don’t care about the shortcuts, then you don’t need the formal mathematics--though it may improve your mathematical insight (or it may not). The resampling method dispenses not only with the shortcuts but also with the entire counting of points in the sample space.\n\n\n\n\nBulmer, M. G. 1979. Principles of Statistics. New York, NY: Dover Publications, inc. https://archive.org/details/principlesofstat0000bulm.\n\n\nGoldberg, Samuel. 1986. Probability: An Introduction. Courier Corporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC." + }, + { + "objectID": "more_sampling_tools.html#introduction", + "href": "more_sampling_tools.html#introduction", + "title": "10  Two puzzles and more tools", + "section": "10.1 Introduction", + "text": "10.1 Introduction\nIn the next chapter we will deal with some more involved problems in probability, as a preparation for statistics, where we use reasoning from probability to draw conclusions about a world like our own, where variation often appears to be more or less random.\nBefore we get down to the business of complex probabilistic problems in the next few chapters, let’s consider a couple of peculiar puzzles. These puzzles allow us to introduce some more of the key tools in Python for Monte Carlo resampling, and show the power of such simulation to help solve, and then reason about, problems in probability." + }, + { + "objectID": "more_sampling_tools.html#the-treasure-fleet-recovered", + "href": "more_sampling_tools.html#the-treasure-fleet-recovered", + "title": "10  Two puzzles and more tools", + "section": "10.2 The treasure fleet recovered", + "text": "10.2 The treasure fleet recovered\nThis is a classic problem in probability:1\n\nA Spanish treasure fleet of three ships was sunk at sea off Mexico. 
One ship had a chest of gold forward and another aft, another ship had a chest of gold forward and a chest of silver aft, while a third ship had a chest of silver forward and another chest of silver aft. Divers just found one of the ships and a chest of gold in it, but they don’t know whether it was from forward or aft. They are now taking bets about whether the other chest found on the same ship will contain silver or gold. What are fair odds?\n\nThese are the logical steps one may distinguish in arriving at a correct answer with deductive logic (portrayed in Figure 10.1).\n\nPostulate three ships — Ship I with two gold chests (G-G), ship II with one gold and one silver chest (G-S), and ship III with S-S. (Choosing notation might well be considered one or more additional steps.)\nAssert equal probabilities of each ship being found.\nStep 2 implies equal probabilities of being found for each of the six chests.\nFact: Diver finds a chest of gold.\nStep 4 implies that S-S ship III was not found; hence remove it from subsequent analysis.\nThree possibilities: 6a) Diver found chest I-Ga, 6b) diver found I-Gb, 6c) diver found II-Gc.\nFrom step 2, the cases a, b, and c in step 6 have equal probabilities.\nIf possibility 6a is the case, then the other chest is I-Gb; the comparable statements for cases 6b and 6c are I-Ga and II-S.\nFrom steps 6 and 7: From equal probabilities of the three cases, and no other possible outcome, \\(P(6a) = 1/3\\), \\(P(6b) = 1/3\\), \\(P(6c) = 1/3\\).\nSo \\(P(G) = P(6a) + P(6b)\\) = 1/3 + 1/3 = 2/3.\n\nSee Figure 10.1.\n\n\n\n\n\nFigure 10.1: Ships with Gold and Silver\n\n\n\n\nThe following simulation arrives at the correct answer.\n\nWrite “Gold” on three pieces of paper and “Silver” on three pieces of paper. These represent the chests.\nGet three buckets each with two pieces of paper. Each bucket represents a ship, each piece of paper represents a chest in that ship. One bucket has two pieces of paper with “Gold” written on them; one has pieces of paper with “Gold” and “Silver”, and one has “Silver” and “Silver”.\nChoose a bucket at random, to represent choosing a ship at random.\nShuffle the pieces of paper in the bucket and pick one, to represent choosing the first chest from that ship at random.\nIf the piece of paper says “Silver”, the first chest we found in this ship was silver, and we stop the trial and make no further record. If “Gold”, continue.\nGet the second piece of paper from the bucket, representing the second chest on the chosen ship. Record whether this was “Silver” or “Gold” on the scoreboard.\nRepeat steps (3 - 6) many times, and calculate the proportion of “Gold”s on the scoreboard. (The answer should be about \\(\\frac{2}{3}\\).)\n\n\nHere is a notebook simulation with Python:\n\nStart of gold_silver_ships notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\n# The 3 buckets. Each bucket represents a ship. Each has two chests.\nbucket1 = ['Gold', 'Gold'] # Chests in first ship.\nbucket2 = ['Gold', 'Silver'] # Chests in second ship.\nbucket3 = ['Silver', 'Silver'] # Chests in third ship.\n\n\n# For each trial, we will have one of three states:\n#\n# 1. When opening the first chest, it did not contain gold.\n# We will reject these trials, since they do not match our\n# experiment description.\n# 2. Gold was found in the first and the second chest.\n# 3. 
Gold was found in the first, but silver in the second chest.\n#\n# We need a placeholder value for all trials, and will make that\n# \"No gold in chest 1, chest 2 never opened\".\nsecond_chests = np.repeat(['No gold in chest 1, chest 2 never opened'], 10000)\n\nfor i in range(10000):\n # Select a ship at random from the three ships.\n ship_no = rnd.choice([1, 2, 3])\n # Get the chests from this ship (represented by a bucket).\n if ship_no == 1:\n bucket = bucket1\n if ship_no == 2:\n bucket = bucket2\n if ship_no == 3:\n bucket = bucket3\n\n # We shuffle the order of the chests in this ship, to simulate\n # the fact that we don't know which of the two chests we have\n # found first, forward or aft.\n shuffled = rnd.permuted(bucket)\n\n if shuffled[0] == 'Gold': # We found a gold chest first.\n # Store whether the second chest was silver or gold.\n second_chests[i] = shuffled[1]\n\n # End loop, go back to beginning.\n\n# Number of times we found gold in the second chest.\nn_golds = np.sum(second_chests == 'Gold')\n# Number of times we found silver in the second chest.\nn_silvers = np.sum(second_chests == 'Silver')\n# As a ratio of golds to all second chests (where the first was gold).\nprint(n_golds / (n_golds + n_silvers))\n\n0.6625368731563421\n\n\nEnd of gold_silver_ships notebook\n\nIn the code above, we have first chosen the ship number at random, and then used a set of if ... statements to get the pair of chests corresponding to the given ship. There are simpler and more elegant ways of writing this code, but they would need some Python features that we haven’t covered yet.2" + }, + { + "objectID": "more_sampling_tools.html#back-to-boolean-s", + "href": "more_sampling_tools.html#back-to-boolean-s", + "title": "10  Two puzzles and more tools", + "section": "10.3 Back to Boolean arrays", + "text": "10.3 Back to Boolean arrays\nThe code above implements the procedure we might well use if we were simulating the problem physically. We do a trial, and we record the result. We do this on a piece of paper if we are doing a physical simulation, and in the second_chests array in code.\nFinally we tally up the results. If we are doing a physical simulation, we go back over all the trial results and count up the “Gold” and “Silver” outcomes. In code we use the comparisons == 'Gold' and == 'Silver' to find the trials of interest, and then count them up with np.sum.\nBoolean arrays are a fundamental tool in Python, and we will use them in nearly all our simulations.\nHere is a reminder of how those arrays work.\nFirst, let’s slice out the first 10 values of the second_chests trial-by-trial results tally from the simulation above:\n\n# Get values at positions 0 through 9 (up to, but not including position 10)\nfirst_10_chests = second_chests[:10]\nfirst_10_chests\n\narray(['Silver', 'No gold in chest 1, chest 2 never opened',\n 'No gold in chest 1, chest 2 never opened', 'Gold', 'Gold',\n 'No gold in chest 1, chest 2 never opened', 'Gold', 'Gold',\n 'No gold in chest 1, chest 2 never opened', 'Gold'], dtype='<U40')\n\n\nBefore we started the simulation, we set second_chests to contain 10,000 strings, where each string was “No gold in chest 1, chest 2 never opened”. In the simulation, we check whether there was gold in the first chest, and, if not, we don’t change the value in second_chests, and the value remains as “No gold in chest 1, chest 2 never opened”.\nOnly if there was gold in the first chest, do we go on to check whether the second chest contains silver or gold. 
Therefore, we only set a new value in second_chests where there was gold in the first chest.\nNow let’s show the effect of running a comparison on first_10_chests:\n\nwere_gold = (first_10_chests == 'Gold')\nwere_gold\n\narray([False, False, False, True, True, False, True, True, False,\n True])\n\n\n\n\n\n\n\n\nParentheses and Boolean comparisons\n\n\n\nNotice the round brackets (parentheses) around (first_10_chests == 'Gold'). In this particular case, we would get the same result without the parentheses, so the paretheses are optional— although see below for an example where the they are not optional. In general, you will see we put parentheses around all expressions that generate Boolean arrays, and we recommend you do too. It is good habit to get into, to make it clear that this is an expression that generates a value.\n\n\nThe == 'Gold' comparison is asking a question. It is asking that question of an array, and the array contains multiple values. NumPy treats this comparison as asking the question of each element in the array. We get an answer for the question for each element. The answer for position 0 is True if the element at position 0 is equal to 'Gold' and False otherwise, and so on, for positions 1, 2 and so on. We started with 10 strings. After the comparison == 'Gold' we have 10 Boolean values, where a Boolean value can either be True or False.\n\n\nNow we have an array with True for the “Gold” results and False otherwise, we can count the number of “Gold” results by using np.sum on the array. As you remember (Section 5.14) np.sum counts True as 1 and False as 0, so the sum of the Boolean array is just the number of True values in the array — the count that we need.\n\n# The number of True values — so the number of \"Gold\" chests.\nnp.sum(were_gold)\n\n5" + }, + { + "objectID": "more_sampling_tools.html#sec-ships-booleans", + "href": "more_sampling_tools.html#sec-ships-booleans", + "title": "10  Two puzzles and more tools", + "section": "10.4 Boolean arrays and another take on the ships problem", + "text": "10.4 Boolean arrays and another take on the ships problem\nIf we are doing a physical simulation, we usually want to finish up all the work for the trial during the trial, so we have one outcome from the trial. This makes it easier to tally up the results in the end.\nWe have no such constraint when we are using code, so it is sometimes easier to record several results from the trial, and do the final combinations and tallies at the end. We will show you what we mean with a slight variation on the two-ships code you saw above.\n\nStart of gold_silver_booleans notebook\n\nDownload notebook\nInteract\n\n\nNotice that the first part of the code is identical to the first approach to this problem. There are two key differences — see the comments for an explanation.\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\n# The 3 buckets, each representing two chests on a ship.\n# As before.\nbucket1 = ['Gold', 'Gold'] # Chests in first ship.\nbucket2 = ['Gold', 'Silver'] # Chests in second ship.\nbucket3 = ['Silver', 'Silver'] # Chests in third ship.\n\n\n# Here is where the difference starts. 
We are now going to fill in\n# the result for the first chest _and_ the result for the second chest.\n#\n# Later we will fill in all these values, so the string we put here\n# does not matter.\n\n# Whether the first chest was Gold or Silver.\nfirst_chests = np.repeat(['To be announced'], 10000)\n# Whether the second chest was Gold or Silver.\nsecond_chests = np.repeat(['To be announced'], 10000)\n\nfor i in range(10000):\n # Select a ship at random from the three ships.\n # As before.\n ship_no = rnd.choice([1, 2, 3])\n # Get the chests from this ship.\n # As before.\n if ship_no == 1:\n bucket = bucket1\n if ship_no == 2:\n bucket = bucket2\n if ship_no == 3:\n bucket = bucket3\n\n # As before.\n shuffled = rnd.permuted(bucket)\n\n # Here is the big difference - we store the result for the first and second\n # chests.\n first_chests[i] = shuffled[0]\n second_chests[i] = shuffled[1]\n\n# End loop, go back to beginning.\n\n# We will do the calculation we need in the next cell. For now\n# just display the first 10 values.\nten_first_chests = first_chests[:10]\nprint('The first 10 values of \"first_chests:', ten_first_chests)\n\nThe first 10 values of \"first_chests: ['Gold' 'Silver' 'Silver' 'Gold' 'Gold' 'Silver' 'Gold' 'Gold' 'Silver'\n 'Gold']\n\nten_second_chests = second_chests[:10]\nprint('The first 10 values of \"second_chests', ten_second_chests)\n\nThe first 10 values of \"second_chests ['Silver' 'Gold' 'Silver' 'Gold' 'Gold' 'Silver' 'Gold' 'Gold' 'Silver'\n 'Gold']\n\n\nIn this variant, we recorded the type of first chest for each trial (“Gold” or “Silver”), and the type of second chest of the second chest (“Gold” or “Silver”).\nWe would like to count the number of times there was “Gold” in the first chest and “Gold” in the second.\n\n10.5 Combining Boolean arrays\nWe can do the count we need by combining the Boolean arrays with the & operator. & combines Boolean arrays with a logical and. 
Logical and is a rule for combining two Boolean values, where the rule is: the result is True if the first value is True and the second value is True.\nHere we use the & operator to combine some Boolean values on the left and right of the operator:\n\nTrue & True # Both are True, so result is True\n\nTrue\n\n\n\nTrue & False # At least one of the values is False, so result is False\n\nFalse\n\n\n\nFalse & True # At least one of the values is False, so result is False\n\nFalse\n\n\n\nFalse & False # At least one (in fact both) are False, result is False.\n\nFalse\n\n\n\n\n\n\n\n\n\n& and and in Python\n\n\n\nIn fact Python has another operation to apply this logical and operation to values — the and operator:\n\nprint(True and True)\n\nTrue\n\nprint(True and False)\n\nFalse\n\nprint(False and True)\n\nFalse\n\nprint(False and False)\n\nFalse\n\n\nYou will see this and operator often in Python code, but it does not work well when combining Numpy arrays, so we will use the similar & operator, which does work on arrays.\n\n\n\nAbove you saw that the == operator (as in == 'Gold'), when applied to arrays, asks the question of every element in the array.\nFirst make the Boolean arrays.\n\nten_first_gold = (ten_first_chests == 'Gold')\nprint(\"Ten first == 'Gold'\", ten_first_gold)\n\nTen first == 'Gold' [ True False False True True False True True False True]\n\nten_second_gold = (ten_second_chests == 'Gold')\nprint(\"Ten second == 'Gold'\", ten_second_gold)\n\nTen second == 'Gold' [False True False True True False True True False True]\n\n\nNow let us use & to combine Boolean arrays:\n\nten_both = (ten_first_gold & ten_second_gold)\nten_both\n\narray([False, False, False, True, True, False, True, True, False,\n True])\n\n\nNotice that Python does the comparison elementwise — element by element.\nYou saw that when we did second_chests == 'Gold' this had the effect of asking the == 'Gold' question of each element, so there will be one answer per element in second_chests. In that case there was an array to the left of == and a single value to the right. We were comparing an array to a value.\nHere we are asking the & question of ten_first_gold and ten_second_gold. Here there is an array to the left and an array to the right. We are asking the & question 10 times, but the first question we are asking is:\n\n# First question, giving first element of result.\n(ten_first_gold[0] & ten_second_gold[0])\n\nFalse\n\n\nThe second question is:\n\n# Second question, giving second element of result.\n(ten_first_gold[1] & ten_second_gold[1])\n\nFalse\n\n\nand so on. We have ten elements on each side, and 10 answers, giving an array (ten_both) of 10 elements. Each element in ten_both is the answer to the & question for the elements at the corresponding positions in ten_first_gold and ten_second_gold.\nWe could also create the Boolean arrays and do the & operation all in one step, like this:\n\nten_both = (ten_first_chests == 'Gold') & (ten_second_chests == 'Gold')\nten_both\n\narray([False, False, False, True, True, False, True, True, False,\n True])\n\n\n\n\n\n\n\n\n\n\nParentheses, arrays and comparisons\n\n\n\nAgain you will notice the round brackets (parentheses) around (ten_first_chests == 'Gold') and (ten_second_chests == 'Gold'). Above, you saw us recommend you always use parentheses around Boolean expressions like this. 
The parentheses make the code easier to read — but be careful — in this case, we actually need the parentheses to make Python do what we want; see the footnote for more detail.3\n\n\n\nRemember, we wanted the answer to the question: how many trials had “Gold” in the first chest and “Gold” in the second. We can answer that question for the first 10 trials with np.sum:\n\nn_ten_both = np.sum(ten_both)\nn_ten_both\n\n5\n\n\nWe can answer the same question for all the trials, in the same way:\n\nfirst_gold = (first_chests == 'Gold')\nsecond_gold = (second_chests == 'Gold')\nn_both_gold = np.sum(first_gold & second_gold)\nn_both_gold\n\n3369\n\n\nWe could also do the same calculation all in one line:\n\n# Notice the parentheses - we need these - see above.\nn_both_gold = np.sum((first_chests == 'Gold') & (second_chests == 'Gold'))\nn_both_gold\n\n3369\n\n\nWe can then count all the ships where the first chest was gold:\n\nn_first_gold = np.sum(first_chests == 'Gold')\nn_first_gold\n\n5085\n\n\nThe final calculation is the proportion of second chests that are gold, given the first chest was also gold:\n\np_g_given_g = n_both_gold / n_first_gold\np_g_given_g\n\n0.6625368731563421\n\n\nOf course we won’t get exactly the same results from the two simulations, in the same way that we won’t get exactly the same results from any two runs of the same simulation, because of the random values we are using. But the logic for the two simulations are the same, and we are doing many trials (10,000), so the results will be very similar.\nEnd of gold_silver_booleans notebook" + }, + { + "objectID": "more_sampling_tools.html#sec-combine-booleans", + "href": "more_sampling_tools.html#sec-combine-booleans", + "title": "10  Two puzzles and more tools", + "section": "10.5 Combining Boolean arrays", + "text": "10.5 Combining Boolean arrays\nWe can do the count we need by combining the Boolean arrays with the & operator. & combines Boolean arrays with a logical and. 
Logical and is a rule for combining two Boolean values, where the rule is: the result is True if the first value is True and the second value if True.\nHere we use the & operator to combine some Boolean values on the left and right of the operator:\n\nTrue & True # Both are True, so result is True\n\nTrue\n\n\n\nTrue & False # At least one of the values is False, so result is False\n\nFalse\n\n\n\nFalse & True # At least one of the values is False, so result is False\n\nFalse\n\n\n\nFalse & False # At least one (in fact both) are False, result is False.\n\nFalse\n\n\n\n\n\n\n\n\n\n& and and in Python\n\n\n\nIn fact Python has another operation to apply this logical and operation to values — the and operator:\n\nprint(True and True)\n\nTrue\n\nprint(True and False)\n\nFalse\n\nprint(False and True)\n\nFalse\n\nprint(False and False)\n\nFalse\n\n\nYou will see this and operator often in Python code, but it does not work well when combining Numpy arrays, so we will use the similar & operator, that does work on arrays.\n\n\n\nAbove you saw that the == operator (as in == 'Gold'), when applied to arrays, asks the question of every element in the array.\nFirst make the Boolean arrays.\n\nten_first_gold = (ten_first_chests == 'Gold')\nprint(\"Ten first == 'Gold'\", ten_first_gold)\n\nTen first == 'Gold' [ True False False True True False True True False True]\n\nten_second_gold = (ten_second_chests == 'Gold')\nprint(\"Ten second == 'Gold'\", ten_second_gold)\n\nTen second == 'Gold' [False True False True True False True True False True]\n\n\nNow let us use & to combine Boolean arrays:\n\nten_both = (ten_first_gold & ten_second_gold)\nten_both\n\narray([False, False, False, True, True, False, True, True, False,\n True])\n\n\nNotice that Python does the comparison elementwise — element by element.\nYou saw that when we did second_chests == 'Gold' this had the effect of asking the == 'Gold' question of each element, so there will be one answer per element in second_chests. In that case there was an array to the left of == and a single value to the right. We were comparing an array to a value.\nHere we are asking the & question of ten_first_gold and ten_second_gold. Here there is an array to the left and an array to the right. We are asking the & question 10 times, but the first question we are asking is:\n\n# First question, giving first element of result.\n(ten_first_gold[0] & ten_second_gold[0])\n\nFalse\n\n\nThe second question is:\n\n# Second question, giving second element of result.\n(ten_first_gold[1] & ten_second_gold[1])\n\nFalse\n\n\nand so on. We have ten elements on each side, and 10 answers, giving an array (ten_both) of 10 elements. Each element in ten_both is the answer to the & question for the elements at the corresponding positions in ten_first_gold and ten_second_gold.\nWe could also create the Boolean arrays and do the & operation all in one step, like this:\n\nten_both = (ten_first_chests == 'Gold') & (ten_second_chests == 'Gold')\nten_both\n\narray([False, False, False, True, True, False, True, True, False,\n True])\n\n\n\n\n\n\n\n\n\n\nParentheses, arrays and comparisons\n\n\n\nAgain you will notice the round brackets (parentheses) around (ten_first_chests == 'Gold') and (ten_second_chests == 'Gold'). Above, you saw us recommend you always use paretheses around Boolean expressions like this. 
The parentheses make the code easier to read — but be careful — in this case, we actually need the parentheses to make Python do what we want; see the footnote for more detail.3\n\n\n\nRemember, we wanted the answer to the question: how many trials had “Gold” in the first chest and “Gold” in the second. We can answer that question for the first 10 trials with np.sum:\n\nn_ten_both = np.sum(ten_both)\nn_ten_both\n\n5\n\n\nWe can answer the same question for all the trials, in the same way:\n\nfirst_gold = (first_chests == 'Gold')\nsecond_gold = (second_chests == 'Gold')\nn_both_gold = np.sum(first_gold & second_gold)\nn_both_gold\n\n3369\n\n\nWe could also do the same calculation all in one line:\n\n# Notice the parentheses - we need these - see above.\nn_both_gold = np.sum((first_chests == 'Gold') & (second_chests == 'Gold'))\nn_both_gold\n\n3369\n\n\nWe can then count all the ships where the first chest was gold:\n\nn_first_gold = np.sum(first_chests == 'Gold')\nn_first_gold\n\n5085\n\n\nThe final calculation is the proportion of second chests that are gold, given the first chest was also gold:\n\np_g_given_g = n_both_gold / n_first_gold\np_g_given_g\n\n0.6625368731563421\n\n\nOf course we won’t get exactly the same results from the two simulations, in the same way that we won’t get exactly the same results from any two runs of the same simulation, because of the random values we are using. But the logic for the two simulations are the same, and we are doing many trials (10,000), so the results will be very similar.\nEnd of gold_silver_booleans notebook" + }, + { + "objectID": "more_sampling_tools.html#the-monty-hall-problem", + "href": "more_sampling_tools.html#the-monty-hall-problem", + "title": "10  Two puzzles and more tools", + "section": "10.6 The Monty Hall problem", + "text": "10.6 The Monty Hall problem\nThe Monty Hall Problem is a puzzle in probability that is famous for its deceptive simplicity. It has its own long Wikipedia page: https://en.wikipedia.org/wiki/Monty_Hall_problem.\nHere is the problem in the form it is best known; a letter to the columnist Marilyn vos Savant, published in Parade Magazine (1990):\n\nSuppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?\n\nIn fact the first person to propose (and solve) this problem was Steve Selvin, a professor of public health at the University of California, Berkeley (Selvin 1975).\nMost people, including at least one of us, your humble authors, quickly come to the wrong conclusion. The most common but incorrect answer is that it will make no difference if you switch doors or stay with your original choice. The obvious intuition is that, after Monty opens his door, there are two doors that might have the car behind them, and therefore, there is a 50% chance it will be behind any one of the two. It turns out that answer is wrong; you will double your chances of winning by switching doors. Did you get the answer right?\nIf you got the answer wrong, you are in excellent company. As you can see from the commentary in Savant (1990), many mathematicians wrote to Parade magazine to assert that the (correct) solution was wrong. 
Paul Erdős was one of the most famous mathematicians of the 20th century; he could not be convinced of the correct solution until he had seen a computer simulation (Vazsonyi 1999), of the type we will do below.\nTo simulate a trial of this problem, we need to select a door at random to house the car, and another door at random, to be the door the contestant chooses. We number the doors 1, 2 and 3. Now we need two random choices from the options 1, 2 or 3, one for the door with the car, the other for the contestant door. To chose a door for the car, we could throw a die, and chose door 1 if the die shows 1 or 4, door 2 if the die shows 2 or 5, and door 3 for 3 or 6. Then we throw the die again to chose the contestant door.\nBut throwing dice is a little boring; we have to find the die, then throw it many times, and record the results. Instead we can ask the computer to chose the doors at random.\nFor this simulation, let us do 25 trials. We ask the computer to create two sets of 25 random numbers from 1 through 3. The first set is the door with the car behind it (“Car door”). The second set have the door that the contestant chose at random (“Our door”). We put these in a table, and make some new, empty columns to fill in later. The first new column is “Monty opens”. In due course, we will use this column to record the door that Monty Hall will open on this trial. The last two columns express the outcome. The first is “Stay wins”. This has “Yes” if we win on this trial by sticking to our original choice of door, and “No” otherwise. The last column is “Switch wins”. This has “Yes” if we win by switching doors, and “No” otherwise. See table Table 10.1).\n\n\n\n\nTable 10.1: 25 simulations of the Monty Hall problem \n\n\n\nCar door\nOur door\nMonty opens\nStay wins\nSwitch wins\n\n\n\n\n1\n3\n3\n\n\n\n\n\n2\n3\n1\n\n\n\n\n\n3\n1\n3\n\n\n\n\n\n4\n1\n1\n\n\n\n\n\n5\n2\n3\n\n\n\n\n\n6\n2\n1\n\n\n\n\n\n7\n2\n2\n\n\n\n\n\n8\n1\n3\n\n\n\n\n\n9\n1\n2\n\n\n\n\n\n10\n3\n1\n\n\n\n\n\n11\n2\n2\n\n\n\n\n\n12\n3\n2\n\n\n\n\n\n13\n2\n2\n\n\n\n\n\n14\n3\n1\n\n\n\n\n\n15\n1\n2\n\n\n\n\n\n16\n2\n1\n\n\n\n\n\n17\n3\n3\n\n\n\n\n\n18\n3\n2\n\n\n\n\n\n19\n1\n1\n\n\n\n\n\n20\n3\n2\n\n\n\n\n\n21\n2\n2\n\n\n\n\n\n22\n3\n1\n\n\n\n\n\n23\n3\n1\n\n\n\n\n\n24\n1\n1\n\n\n\n\n\n25\n2\n3\n\n\n\n\n\n\n\n\n\nIn the first trial in Table 10.1), the computer selected door 3 for car, and door 3 for the contestant. Now Monty must open a door, and he cannot open our door (door 3) so he has the choice of opening door 1 or door 2; he chooses randomly, and opens door 2. On this trial, we win if we stay with our original choice, and we lose if we change to the remaining door, door 1.\nNow we go the second trial. The computer chose door 3 for the car, and door 1 for our choice. Monty cannot choose our door (door 1) or the door with the car behind it (door 3), so he must open door 2. Now if we stay with our original choice, we lose, but if we switch, we win.\nYou may want to print out table Table 10.1, and fill out the blank columns, to work through the logic.\nAfter doing a few more trials, and some reflection, you may see that there are two different situations here: the situation when our initial guess was right, and the situation where our initial guess was wrong. When our initial guess was right, we win by staying with our original choice, but when it was wrong, we always win by switching. The chance of our initial guess being correct is 1/3 (one door out of three). 
So the chances of winning by staying are 1/3, and the chances of winning by switching are 2/3. But remember, you don’t need to follow this logic to get the right answer. As you will see below, the resampling simulation shows us that the Switch strategy wins.\nTable Table 10.2 is a version of table Table 10.1 for which we have filled in the blank columns using the logic above.\n\n\n\n\nTable 10.2: 25 simulations of the Monty Hall problem, filled out \n\n\n\nCar door\nOur door\nMonty opens\nStay wins\nSwitch wins\n\n\n\n\n1\n3\n3\n1\nYes\nNo\n\n\n2\n3\n1\n2\nNo\nYes\n\n\n3\n1\n3\n2\nNo\nYes\n\n\n4\n1\n1\n2\nYes\nNo\n\n\n5\n2\n3\n1\nNo\nYes\n\n\n6\n2\n1\n3\nNo\nYes\n\n\n7\n2\n2\n3\nYes\nNo\n\n\n8\n1\n3\n2\nNo\nYes\n\n\n9\n1\n2\n3\nNo\nYes\n\n\n10\n3\n1\n2\nNo\nYes\n\n\n11\n2\n2\n1\nYes\nNo\n\n\n12\n3\n2\n1\nNo\nYes\n\n\n13\n2\n2\n1\nYes\nNo\n\n\n14\n3\n1\n2\nNo\nYes\n\n\n15\n1\n2\n3\nNo\nYes\n\n\n16\n2\n1\n3\nNo\nYes\n\n\n17\n3\n3\n2\nYes\nNo\n\n\n18\n3\n2\n1\nNo\nYes\n\n\n19\n1\n1\n2\nYes\nNo\n\n\n20\n3\n2\n1\nNo\nYes\n\n\n21\n2\n2\n1\nYes\nNo\n\n\n22\n3\n1\n2\nNo\nYes\n\n\n23\n3\n1\n2\nNo\nYes\n\n\n24\n1\n1\n2\nYes\nNo\n\n\n25\n2\n3\n1\nNo\nYes\n\n\n\n\n\n\nThe proportion of times “Stay” wins in these 25 trials is 0.36. The proportion of times “Switch” wins is 0.64; the Switch strategy wins about twice as often as the Stay strategy." + }, + { + "objectID": "more_sampling_tools.html#monty-hall-with", + "href": "more_sampling_tools.html#monty-hall-with", + "title": "10  Two puzzles and more tools", + "section": "10.7 Monty Hall with Python", + "text": "10.7 Monty Hall with Python\nNow you have seen what the results might look like for a physical simulation, you can exercise some of your newly-strengthened Python muscles to do the simulation with code.\n\nStart of monty_hall notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\nThe Monty Hall problem has a slightly complicated structure, so we will start by looking at the procedure for one trial. When we have that clear, we will put that procedure into a for loop for the simulation.\nLet’s start with some variables. Let’s call the door I choose my_door.\nWe choose that door at random from a sequence of all possible doors. Call the doors 1, 2 and 3 from left to right.\n\n# List of doors to chose from.\ndoors = [1, 2, 3]\n\n# We choose one door at random.\nmy_door = rnd.choice(doors)\n\n# Show the result\nmy_door\n\n2\n\n\nWe choose one of the doors to be the door with the car behind it:\n\n# One door at random has the car behind it.\ncar_door = rnd.choice(doors)\n\n# Show the result\ncar_door\n\n2\n\n\nNow we need to decide which door Monty will open.\nBy our set up, Monty cannot open our door (my_door). By the set up, he has not opened (and cannot open) the door with the car behind it (car_door).\nmy_door and car_door might be the same.\nSo, to get Monty’s choices, we want to take all doors (doors) and remove my_door and car_door. That leaves the door or doors Monty can open.\nHere are the doors Monty cannot open. Remember, a third of the time my_door and car_door will be the same, so we will include the same door twice, as doors Monty can’t open.\n\ncant_open = [my_door, car_door]\ncant_open\n\n[2, 2]\n\n\nWe want to find the remaining doors from doors after removing the doors named in cant_open.\nNumPy has a good function for this, called np.setdiff1d. 
It calculates the set difference between two sequences, such as arrays.\nThe set difference between two sequences is the members that are in the first sequence, but are not in the second sequence. Here are a few examples of this set difference function in NumPy.\n\nNotice that we are using lists as the input (first and second) sequences here. We can use lists or arrays or any other type of sequence in Python. (See Section 6.3.2 for an introduction to lists).\nNumpy functions like np.setdiff1d always return an array.\n\n\n# Members in [1, 2, 3] that are *not* in [1]\n# 1, 2, 3, removing 1, if present.\nnp.setdiff1d([1, 2, 3], [1])\n\narray([2, 3])\n\n\n\n# Members in [1, 2, 3] that are *not* in [2, 3]\n# 1, 2, 3, removing 2 and 3, if present.\nnp.setdiff1d([1, 2, 3], [2, 3])\n\narray([1])\n\n\n\n# Members in [1, 2, 3] that are *not* in [2, 2]\n# 1, 2, 3, removing 2 and 2 again, if present.\nnp.setdiff1d([1, 2, 3], [2, 2])\n\narray([1, 3])\n\n\nThis logic allows us to choose the doors Monty can open:\n\nmontys_choices = np.setdiff1d(doors, [my_door, car_door])\nmontys_choices\n\narray([1, 3])\n\n\nNotice that montys_choices will only have one element left when my_door and car_door were different, but it will have two elements if my_door and car_door were the same.\nLet’s play out those two cases:\n\nmy_door = 1 # For example.\ncar_door = 2 # For example.\n# Monty can only choose door 3 now.\nmontys_choices = np.setdiff1d(doors, [my_door, car_door])\nmontys_choices\n\narray([3])\n\n\n\nmy_door = 1 # For example.\ncar_door = 1 # For example.\n# Monty can choose either door 2 or door 3.\nmontys_choices = np.setdiff1d(doors, [my_door, car_door])\nmontys_choices\n\narray([2, 3])\n\n\nIf Monty can only choose one door, we’ll take that. Otherwise we’ll chose a door at random from the two doors available.\n\nif len(montys_choices) == 1: # Only one door available.\n montys_door = montys_choices[0] # Take the first (of 1!).\nelse: # Two doors to choose from:\n # Choose at random.\n montys_door = rnd.choice(montys_choices)\nmontys_door\n\n2\n\n\n\nIn fact, we can avoid that if len( check for the number of doors, because rnd.choice will also work on a sequence of length 1 — in that case, it always returns the single element in the sequence, like this:\n\n# rnd.choice on sequence with single element - always returns that element.\nrnd.choice([2])\n\n2\n\n\nThat means we can simplify the code above to:\n\n# Choose single door left to choose, or door at random if two.\nmontys_door = rnd.choice(montys_choices)\nmontys_door\n\n3\n\n\n\nNow we know Monty’s door, we can identify the other door, by removing our door, and Monty’s door, from the available options:\n\nremaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n# There is only one remaining door, take that.\nother_door = remaining_doors[0]\nother_door\n\n2\n\n\nThe logic above gives us the full procedure for one trial.\n\nmy_door = rnd.choice(doors)\ncar_door = rnd.choice(doors)\n# Which door will Monty open?\nmontys_choices = np.setdiff1d(doors, [my_door, car_door])\n# Choose single door left to choose, or door at random if two.\nmontys_door = rnd.choice(montys_choices)\n# Now find the door we'll open if we switch.\nremaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n# There is only one door left.\nother_door = remaining_doors[0]\n# Calculate the result of this trial.\nif my_door == car_door:\n stay_wins = True\nif other_door == car_door:\n switch_wins = True\n\nAll that remains is to put that trial procedure into a loop, and collect 
the results as we repeat the procedure many times.\n\n# Arrays to store the results for each trial.\nstay_wins = np.repeat([False], 10000)\nswitch_wins = np.repeat([False], 10000)\n\n# A list of doors to chose from.\ndoors = [1, 2, 3]\n\nfor i in range(10000):\n # You will recognize the below as the single-trial procedure above.\n my_door = rnd.choice(doors)\n car_door = rnd.choice(doors)\n # Which door will Monty open?\n montys_choices = np.setdiff1d(doors, [my_door, car_door])\n # Choose single door left to choose, or door at random if two.\n montys_door = rnd.choice(montys_choices)\n # Now find the door we'll open if we switch.\n remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n # There is only one door left.\n other_door = remaining_doors[0]\n # Calculate the result of this trial.\n if my_door == car_door:\n stay_wins[i] = True\n if other_door == car_door:\n switch_wins[i] = True\n\np_for_stay = np.sum(stay_wins) / 10000\np_for_switch = np.sum(switch_wins) / 10000\n\nprint('p for stay:', p_for_stay)\n\np for stay: 0.3326\n\nprint('p for switch:', p_for_switch)\n\np for switch: 0.6674\n\n\nWe can also follow the same strategy as we used for the second implementation of the two-ships problem (Section 10.4).\nHere, as in the second two-ships implementation, we do not calculate the trial results (stay_wins, switch_wins) in each trial. Instead, we store the doors for each trial, and then use Boolean arrays to calculate the results for all trials, at the end.\n\n# Instead of storing the trial results, we store the doors for each trial.\nmy_doors = np.zeros(10000)\ncar_doors = np.zeros(10000)\nother_doors = np.zeros(10000)\n\ndoors = [1, 2, 3]\n\nfor i in range(10000):\n my_door = rnd.choice(doors)\n car_door = rnd.choice(doors)\n # Which door will Monty open?\n montys_choices = np.setdiff1d(doors, [my_door, car_door])\n # Choose single door left to choose, or door at random if two.\n montys_door = rnd.choice(montys_choices)\n # Now find the door we'll open if we switch.\n remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n # There is only one door left.\n other_door = remaining_doors[0]\n\n # Store the doors we chose.\n my_doors[i] = my_door\n car_doors[i] = car_door\n other_doors[i] = other_door\n\n# Now - at the end of all the trials, we use Boolean arrays to calculate the\n# results.\nstay_wins = my_doors == car_doors\nswitch_wins = other_doors == car_doors\n\np_for_stay = np.sum(stay_wins) / 10000\np_for_switch = np.sum(switch_wins) / 10000\n\nprint('p for stay:', p_for_stay)\n\np for stay: 0.3374\n\nprint('p for switch:', p_for_switch)\n\np for switch: 0.6626\n\n\n\n10.7.1 Insight from the Monty Hall simulation\nThe code simulation gives us an estimate of the right answer, but it also forces us to set out the exact mechanics of the problem. For example, by looking at the code, we see that we can calculate “stay_wins” with this code alone:\n\n# Just choose my door and the car door for each trial.\nmy_doors = np.zeros(10000)\ncar_doors = np.zeros(10000)\ndoors = [1, 2, 3]\n\nfor i in range(10000):\n my_doors[i] = rnd.choice(doors)\n car_doors[i] = rnd.choice(doors)\n\n# Calculate whether I won by staying.\nstay_wins = my_doors == car_doors\np_for_stay = np.sum(stay_wins) / 10000\n\nprint('p for stay:', p_for_stay)\n\np for stay: 0.3244\n\n\nThis calculation, on its own, tells us the answer, but it also points to another insight — whatever Monty does with the doors, it doesn’t change the probability that our initial guess is right, and that must be 1 in 3 (0.333). 
If the probability of stay_win is 1 in 3, and we only have one other door to switch to, the probability of winning after switching must be 2 in 3 (0.666).\n\n\n10.7.2 Simulation and a variant of Monty Hall\nYou have seen that you can avoid the silly mistakes that many of us make with probability — by asking the computer to tell you the result before you start to reason from first principles.\nAs an example, consider the following variant of the Monty Hall problem.\nThe set up to the problem has us choosing a door (my_door above), and then Monty opens one of the other two doors.\nSometimes (in fact, 2/3 of the time) there is a car behind one of Monty’s doors. We’ve obliged Monty to open the other door, and his choice is forced.\nWhen his choice was not forced, we had Monty choose the door at random.\nFor example, let us say we chose door 1.\nLet us say that the car is also under door 1.\nMonty has the option of choosing door 2 or door 3, and he chooses randomly between them.\n\nmy_door = 1 # We chose door 1 at random.\ncar_door = 1 # This trial, by chance, the car door is 1.\n# Monty is left with doors 2 and 3 to choose from.\nmontys_choices = np.setdiff1d(doors, [my_door, car_door])\n# He chooses randomly.\nmontys_door = rnd.choice(montys_choices)\n# Show the result\nmontys_door\n\n2\n\n\nNow — let us say we happen to know that Monty is rather lazy, and he will always choose the left-most (lower-numbered) door of the two options.\nIn the previous example, Monty had the option of choosing door 2 and 3. In this new scenario, we know that he will always choose door 2 (the left-most door).\n\nmy_door = 1 # We chose door 1 at random.\ncar_door = 1 # This trial, by chance, the car door is 1.\n# Monty is left with doors 2 and 3 to choose from.\nmontys_choices = np.setdiff1d(doors, [my_door, car_door])\n# He chooses the left-most door, always.\nmontys_door = montys_choices[0]\n# Show the result\nmontys_door\n\n2\n\n\nIt feels as if we have more information about where the car is, when we know this. Consider the situation where we have chosen door 1, and Monty opens door 3. We know that he would have preferred to open door 2, if he was allowed. We therefore know he wasn’t allowed to open door 2, and that means the car is definitely under door 2.\n\nmy_door = 1 # We chose door 1 at random.\ncar_door = 2 # This trial, by chance, the car door under door 2.\n# Monty is left with door 3 only to choose from.\nmontys_choices = np.setdiff1d(doors, [my_door, car_door])\n# He chooses the left-most door, always. But in this case, the left-most\n# available door is 3 (he can't choose 2, it is the car_door).\n# Notice the doors were in order, so the left-most door is the first door\n# in the array.\nmontys_door = montys_choices[0]\n# Show the result\nmontys_door\n\n3\n\n\nTo take that into account, we might try a different strategy. We will stick to our own choice if Monty has chosen the left-most of the two doors he had available to him, because he might have chosen that door because there was a car underneath the other door, or because there was a car under neither, but he preferred the left door. But, if Monty chooses the right-most of the two-doors available to him, we will switch from our own choice to the other (unopened) door, because we can be sure that the car is under the other (unopened) door.\nCall this the “switch if Monty chooses right door” strategy, or “switch if right” for short.\nCan you see quickly whether this will be better than the “always stay” strategy? 
Will it be better than the “always switch” strategy? Take a moment to think it through, and write down your answers.\nIf you can quickly see the answer to both questions — well done — but, are you sure you are right?\nWe can test by simulation.\nFor our test of the “switch is right” strategy, we can tell if one door is to the right of another door by comparison; higher numbers mean further to the right: 2 is right of 1, and 3 is right of 2.\n\n# Door 3 is right of door 1.\n3 > 1\n\nTrue\n\n\n\n# A test of the switch-if-right strategy.\n# The car doors.\ncar_doors = np.zeros(10000)\n# The door we chose using the strategy.\nstrategy_doors = np.zeros(10000)\n\ndoors = [1, 2, 3]\n\nfor i in range(10000):\n my_door = rnd.choice(doors)\n car_door = rnd.choice(doors)\n # Which door will Monty open?\n montys_choices = np.setdiff1d(doors, [my_door, car_door])\n # Choose Monty's door from the remaining options.\n # This time, he always prefers the left door.\n montys_door = montys_choices[0]\n # Now find the door we'll open if we switch.\n remaining_doors = np.setdiff1d(doors, [my_door, montys_door])\n # There is only one door remaining - but is Monty's door\n # to the right of this one? Then Monty had to shift.\n other_door = remaining_doors[0]\n if montys_door > other_door:\n # Monty's door was the right-hand door, the car is under the other one.\n strategy_doors[i] = other_door\n else: # We stick with the door we first thought of.\n strategy_doors[i] = my_door\n # Store the car door for this trial.\n car_doors[i] = car_door\n\nstrategy_wins = strategy_doors == car_doors\n\np_for_strategy = np.sum(strategy_wins) / 10000\n\nprint('p for strategy:', p_for_strategy)\n\np for strategy: 0.6641\n\n\nWe find that the “switch-if-right” has around the same chance of success as the “always-switch” strategy — of about 66.6%, or 2 in 3. Were your initial answers right? Now you’ve seen the result, can you see why it should be so? It may not be obvious — the Monty Hall problem is deceptively difficult. But our case here is that the simulation first gives you an estimate of the correct answer, and then, gives you a good basis for thinking more about the problem. That is:\n\nsimulation is useful for estimation and\nsimulation is useful for reflection.\n\nEnd of monty_hall notebook" + }, + { + "objectID": "more_sampling_tools.html#why-use-simulation", + "href": "more_sampling_tools.html#why-use-simulation", + "title": "10  Two puzzles and more tools", + "section": "10.8 Why use simulation?", + "text": "10.8 Why use simulation?\nDoing these simulations has two large benefits. First, it gives us the right answer, saving us from making a mistake. Second, the process of simulation forces us to think about how the problem works. This can give us better understanding, and make it easier to reason about the solution.\nWe will soon see that these same advantages also apply to reasoning about statistics.\n\n\n\n\nGoldberg, Samuel. 1986. Probability: An Introduction. Courier Corporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC.\n\n\nSavant, Marilyn vos. 1990. “Ask Marilyn.” 1990. https://web.archive.org/web/20160318182523/http://marilynvossavant.com/game-show-problem.\n\n\nSelvin, Steve. 1975. “Letters to the Editor.” The American Statistician 29 (1): 67. http://www.jstor.org/stable/2683689.\n\n\nVazsonyi, Andrew. 1999. “Which Door Has the Cadillac.” Decision Line 30 (1): 17–19. https://web.archive.org/web/20140413131827/http://www.decisionsciences.org/DecisionLine/Vol30/30_1/vazs30_1.pdf." 
+ }, + { + "objectID": "probability_theory_2_compound.html#introduction", + "href": "probability_theory_2_compound.html#introduction", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.1 Introduction", + "text": "11.1 Introduction\nIn this chapter we will deal with what are usually called “probability problems” rather than the “statistical inference problems” discussed in later chapters. The difference is that for probability problems we begin with a knowledge of the properties of the universe with which we are working. (See Section 8.9 on the definition of resampling.)\nWe start with some basic problems in probability. To make sure we do know the properties of the universe we are working with, we start with poker, and a pack of cards. Working with some poker problems, we rediscover the fundamental distinction between sampling with and without replacement." + }, + { + "objectID": "probability_theory_2_compound.html#sec-one-pair", + "href": "probability_theory_2_compound.html#sec-one-pair", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.2 Introducing a poker problem: one pair (two of a kind)", + "text": "11.2 Introducing a poker problem: one pair (two of a kind)\nWhat is the chance that the first five cards chosen from a deck of 52 (bridge/poker) cards will contain two (and only two) cards of the same denomination (two 3’s for example)? (Please forgive the rather sterile unrealistic problems in this and the other chapters on probability. They reflect the literature in the field for 300 years. We’ll get more realistic in the statistics chapters.)\nWe shall estimate the odds the way that gamblers have estimated gambling odds for thousands of years. First, check that the deck is a standard deck and is not missing any cards. (Overlooking such small but crucial matters often leads to errors in science.) Shuffle thoroughly until you are satisfied that the cards are randomly distributed. (It is surprisingly hard to shuffle well.) Then deal five cards, and mark down whether the hand does or does not contain a pair of the same denomination.\nAt this point, we must decide whether three of a kind, four of a kind or two pairs meet our criterion for a pair. Since our criterion is “two and only two,” we decide not to count them.\nThen replace the five cards in the deck, shuffle, and deal again. Again mark down whether the hand contains one pair of the same denomination. Do this many times. 
Then count the number of hands with one pair, and figure the proportion (as a percentage) of all hands.\nTable 11.1 has the results of 25 hands of this procedure.\n\n\n\nTable 11.1: Results of 25 hands for the problem “one pair”\n\n\n\n\n\n\n\n\n\n\n\nHand\nCard 1\nCard 2\nCard 3\nCard 4\nCard 5\nOne pair?\n\n\n\n\n1\nKing ♢\nKing ♠\nQueen ♠\n10 ♢\n6 ♠\nYes\n\n\n2\n8 ♢\nAce ♢\n4 ♠\n10 ♢\n3 ♣\nNo\n\n\n3\n4 ♢\n5 ♣\nAce ♢\nQueen ♡\n10 ♠\nNo\n\n\n4\n3 ♡\nAce ♡\n5 ♣\n3 ♢\nJack ♢\nYes\n\n\n5\n6 ♠\nKing ♣\n6 ♢\n3 ♣\n3 ♡\nNo\n\n\n6\nQueen ♣\n7 ♢\nJack ♠\n5 ♡\n8 ♡\nNo\n\n\n7\n9 ♣\n4 ♣\n9 ♠\nJack ♣\n5 ♠\nYes\n\n\n8\n3 ♠\n3 ♣\n3 ♡\n5 ♠\n5 ♢\nYes\n\n\n9\nQueen ♢\n4 ♠\nQueen ♣\n6 ♡\n4 ♢\nNo\n\n\n10\nQueen ♠\n3 ♣\n7 ♠\n7 ♡\n8 ♢\nYes\n\n\n11\n8 ♡\n9 ♠\n7 ♢\n8 ♠\nAce ♡\nYes\n\n\n12\nAce ♠\n9 ♡\n4 ♣\n2 ♠\nAce ♢\nYes\n\n\n13\n4 ♡\n3 ♣\nAce ♢\n9 ♡\n5 ♡\nNo\n\n\n14\n10 ♣\n7 ♠\n8 ♣\nKing ♣\n4 ♢\nNo\n\n\n15\nQueen ♣\n8 ♠\nQueen ♠\n8 ♣\n5 ♣\nNo\n\n\n16\nKing ♡\n10 ♣\nJack ♠\n10 ♢\n10 ♡\nNo\n\n\n17\nQueen ♠\nQueen ♡\nAce ♡\nKing ♢\n7 ♡\nYes\n\n\n18\n5 ♢\n6 ♡\nAce ♡\n4 ♡\n6 ♢\nYes\n\n\n19\n3 ♠\n5 ♡\n2 ♢\nKing ♣\n9 ♡\nNo\n\n\n20\n8 ♠\nJack ♢\n7 ♣\n10 ♡\n3 ♡\nNo\n\n\n21\n5 ♢\n4 ♠\nJack ♡\n2 ♠\nKing ♠\nNo\n\n\n22\n5 ♢\n4 ♢\nJack ♣\nKing ♢\n2 ♠\nNo\n\n\n23\nKing ♡\nKing ♠\n6 ♡\n2 ♠\n5 ♣\nYes\n\n\n24\n8 ♠\n9 ♠\n6 ♣\nAce ♣\n5 ♢\nNo\n\n\n25\nAce ♢\n7 ♠\n4 ♡\n9 ♢\n9 ♠\nYes\n\n\n\n\n\n\n\n\n\n\n\n% Yes\n\n\n\n\n\n44%\n\n\n\n\n\nIn this series of 25 experiments, 44 percent of the hands contained one pair, and therefore 0.44 is our estimate (for the time being) of the probability that one pair will turn up in a poker hand. But we must notice that this estimate is based on only 25 hands, and therefore might well be fairly far off the mark (as we shall soon see).\nThis experimental “resampling” estimation does not require a deck of cards. For example, one might create a 52-sided die, one side for each card in the deck, and roll it five times to get a “hand.” But note one important part of the procedure: No single “card” is allowed to come up twice in the same set of five spins, just as no single card can turn up twice or more in the same hand. If the same “card” did turn up twice or more in a dice experiment, one could pretend that the roll had never taken place; this procedure is necessary to make the dice experiment analogous to the actual card-dealing situation under investigation. Otherwise, the results will be slightly in error. This type of sampling is “sampling without replacement,” because each card is not replaced in the deck prior to dealing the next card (that is, prior to the end of the hand)." + }, + { + "objectID": "probability_theory_2_compound.html#a-first-approach-to-the-one-pair-problem-with-code", + "href": "probability_theory_2_compound.html#a-first-approach-to-the-one-pair-problem-with-code", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.3 A first approach to the one-pair problem with code", + "text": "11.3 A first approach to the one-pair problem with code\nWe could also approach this problem using random numbers from the computer to simulate the values.\nLet us first make some numbers from which to sample. We want to simulate a deck of playing cards analogous to the real cards we used previously. We don’t need to simulate all the features of a deck, but only the features that matter for the problem at hand. In our case, the feature that matters is the face value. We require a deck with four “1”s, four “2”s, etc., up to four “13”s, where 1 is an Ace, and 13 is a King. 
The suits don’t matter for our present purposes.\nWe first make an array to represent the face values in one suit.\n\n# Card values 1 through 13 (1 up to, not including 14).\none_suit = np.arange(1, 14)\none_suit\n\narray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])\n\n\nWe have the face values for one suit, but we need the face values for the whole deck of cards — four suits. We do this by making a new array that consists of four repeats of one_suit:\n\n# Repeat the one_suit array four times\ndeck = np.repeat(one_suit, 4)\ndeck\n\narray([ 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5,\n 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9,\n 9, 9, 10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 13, 13, 13,\n 13])" + }, + { + "objectID": "probability_theory_2_compound.html#sec-shuffling-deck", + "href": "probability_theory_2_compound.html#sec-shuffling-deck", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.4 Shuffling the deck with Python", + "text": "11.4 Shuffling the deck with Python\nAt this point we have a complete deck in the variable deck. But that “deck” is ordered by value, first ones (Aces), then 2s, and so on. If we do not shuffle the deck, the results will be predictable. Therefore, we would like to select five of these “cards” (52 values) at random. There are two ways of doing this. The first is to use the rnd.choice tool in the familiar way, to choose 5 values at random from this strictly ordered deck. We want to draw these cards without replacement (of which more later). Without replacement means that once we have drawn a particular value, we cannot draw that value a second time — just as you cannot get the same card twice in a hand when the dealer deals you a hand of five cards.\n\nSo far, each of our uses of rnd.choice has done sampling with replacement, where you can get the same item more than once in a particular sample. Here we need sampling without replacement. rnd.choice has an argument you can send, called replace, to tell it whether to replace values when drawing the sample. We have not used that argument so far, because the default is True — sampling with replacement. Here we need to use the argument — replace=False — to get sampling without replacement.\n\n\n# One hand, sampling from the deck without replacement.\nhand = rnd.choice(deck, size=5, replace=False)\nhand\n\narray([ 9, 4, 11, 9, 13])\n\n\nThe above is one way to get a random hand of five cards from the deck. Another way is to use the rnd.permuted function to shuffle the whole deck of 52 “cards” into a random order, just as a dealer would shuffle the deck before dealing. Then we could take — for example — the first five cards from the shuffled deck to give a random hand. 
See Section 8.14 for more on rnd.permuted.\n\n# Shuffle the whole 52 card deck.\nshuffled = rnd.permuted(deck)\n# The \"cards\" are now in random order.\nshuffled\n\narray([12, 13, 2, 9, 6, 7, 7, 7, 11, 13, 2, 8, 6, 9, 4, 1, 5,\n 12, 11, 9, 1, 2, 4, 2, 3, 3, 11, 6, 4, 11, 8, 7, 13, 8,\n 12, 5, 4, 5, 9, 8, 5, 6, 3, 1, 1, 12, 3, 13, 10, 10, 10,\n 10])\n\n\nNow we can get our hand by taking the first five cards from the deck:\n\n# Select the first five \"cards\" from the shuffled deck.\nhand = shuffled[:5]\nhand\n\narray([12, 13, 2, 9, 6])\n\n\nYou have seen that we can use one of two procedures to a get random sample of five cards from deck, drawn without replacement:\n\nUsing rnd.choice with size=5 and replace=False to take the random sample directly from deck, or\nshuffling the entire deck and then taking the first five “cards” from the result of the shuffle.\n\nEither is a valid way of getting five cards at random from the deck. It’s up to us which to choose — we slightly prefer to shuffle and take the first five, because it is more like the physical procedure of shuffling the deck and dealing, but which you prefer, is up to you.\n\n11.4.1 A first-pass computer solution to the one-pair problem\nChoosing the shuffle deal way, the cell to generate one hand is:\n\nshuffled = rnd.permuted(deck)\nhand = shuffled[:5]\nhand\n\narray([ 7, 4, 12, 1, 2])\n\n\nWithout doing anything further, we could run this cell many times, and each time, we could note down whether the particular hand had exactly one pair or not.\nTable 11.2 has the result of running that procedure 25 times:\n\n\n\nTable 11.2: Results of 25 hands using random numbers\n\n\n\n\n\n\n\n\n\n\n\nHand\nCard 1\nCard 2\nCard 3\nCard 4\nCard 5\nOne pair?\n\n\n\n\n1\n10\n5\n7\n12\n12\nYes\n\n\n2\n6\n9\n2\n6\n8\nYes\n\n\n3\n11\n8\n9\n6\n1\nNo\n\n\n4\n8\n10\n2\n11\n12\nNo\n\n\n5\n1\n10\n11\n8\n5\nNo\n\n\n6\n8\n10\n3\n9\n5\nNo\n\n\n7\n10\n9\n13\n1\n9\nYes\n\n\n8\n13\n4\n3\n11\n5\nNo\n\n\n9\n7\n1\n4\n13\n6\nNo\n\n\n10\n11\n5\n11\n8\n4\nYes\n\n\n11\n7\n10\n7\n13\n9\nYes\n\n\n12\n2\n11\n4\n7\n8\nNo\n\n\n13\n12\n1\n3\n10\n2\nNo\n\n\n14\n10\n2\n11\n8\n1\nNo\n\n\n15\n1\n6\n12\n12\n5\nYes\n\n\n16\n4\n8\n7\n8\n6\nYes\n\n\n17\n7\n10\n9\n4\n4\nYes\n\n\n18\n3\n4\n11\n11\n12\nYes\n\n\n19\n10\n12\n2\n13\n1\nNo\n\n\n20\n9\n6\n4\n13\n4\nYes\n\n\n21\n7\n3\n3\n9\n7\nNo\n\n\n22\n13\n4\n10\n5\n8\nNo\n\n\n23\n13\n2\n9\n8\n8\nYes\n\n\n24\n5\n12\n7\n11\n8\nNo\n\n\n25\n7\n5\n8\n10\n7\nYes\n\n\n\n\n\n\n\n\n\n\n\n% Yes\n\n\n\n\n\n48%" + }, + { + "objectID": "probability_theory_2_compound.html#finding-exactly-one-pair-using-code", + "href": "probability_theory_2_compound.html#finding-exactly-one-pair-using-code", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.5 Finding exactly one pair using code", + "text": "11.5 Finding exactly one pair using code\nThus far we have had to look ourselves at the set of cards, or at the numbers, and decide if there was exactly one pair. We would like the computer to do this for us. Let us stay with the numbers we generated above by dealing the random hand from the deck of numbers. To find pairs, we will go through the following procedure:\n\nFor each possible value (1 through 13), count the number of times each value has occurred in hand. Call the result of this calculation — repeat_nos.\nSelect repeat_nos values equal to 2;\nCount the number of “2” values in repeat_nos. 
This the number of pairs, and excludes three of a kind or four a kind.\nIf the number of pairs is exactly one, label the hand as “Yes”, otherwise label it as “No”." + }, + { + "objectID": "probability_theory_2_compound.html#finding-number-of-repeats-using", + "href": "probability_theory_2_compound.html#finding-number-of-repeats-using", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.6 Finding number of repeats using np.bincount", + "text": "11.6 Finding number of repeats using np.bincount\nConsider the following 5-card “hand” of values:\n\nhand = np.array([5, 7, 5, 4, 7])\n\nThis hand represents a pair of 5s and a pair of 7s.\nWe want to detect the number of repeats for each possible card value, 1 through 13. Let’s say we are looking for 5s. We can detect which of the values are equal to 5 by making a Boolean array, where there is True for a value equal to 5, and False otherwise:\n\nis_5 = (hand == 5)\nis_5\n\narray([ True, False, True, False, False])\n\n\nWe can then count the number of 5s with:\n\nnp.sum(is_5)\n\n2\n\n\nIn one cell:\n\nnumber_of_5s = np.sum(hand == 5)\nnumber_of_5s\n\n2\n\n\nWe could do this laborious task for every possible card value (1 through 13):\n\nnumber_of_1s = np.sum(hand == 1) # Number of aces in hand\nnumber_of_2s = np.sum(hand == 2) # Number of 2s in hand\nnumber_of_3s = np.sum(hand == 3)\nnumber_of_4s = np.sum(hand == 4)\nnumber_of_5s = np.sum(hand == 5)\nnumber_of_6s = np.sum(hand == 6)\nnumber_of_7s = np.sum(hand == 7)\nnumber_of_8s = np.sum(hand == 8)\nnumber_of_9s = np.sum(hand == 9)\nnumber_of_10s = np.sum(hand == 10)\nnumber_of_11s = np.sum(hand == 11)\nnumber_of_12s = np.sum(hand == 12)\nnumber_of_13s = np.sum(hand == 13) # Number of Kings in hand.\n\nAbove, we store the result for each card in a separate variable; this is inconvenient, because we would have to go through each variable checking for a pair (a value of 2). It would be more convenient to store these results in an array. One way to do that would be to store the result for card value 1 at position (index, offset) 1, the result for value 2 at position 2, and so on, like this:\n\n# Make array length 14. We don't use position (offset) 0, and the last\n# position (offset) in this array will be 13.\nrepeat_nos = np.zeros(14)\nrepeat_nos[1] = np.sum(hand == 1) # Number of aces in hand\nrepeat_nos[2] = np.sum(hand == 2) # Number of 2s in hand\nrepeat_nos[3] = np.sum(hand == 3)\nrepeat_nos[4] = np.sum(hand == 4)\nrepeat_nos[5] = np.sum(hand == 5)\nrepeat_nos[6] = np.sum(hand == 6)\nrepeat_nos[7] = np.sum(hand == 7)\nrepeat_nos[8] = np.sum(hand == 8)\nrepeat_nos[9] = np.sum(hand == 9)\nrepeat_nos[10] = np.sum(hand == 10)\nrepeat_nos[11] = np.sum(hand == 11)\nrepeat_nos[12] = np.sum(hand == 12)\nrepeat_nos[13] = np.sum(hand == 13) # Number of Kings in hand.\n# Show the result\nrepeat_nos\n\narray([0., 0., 0., 0., 1., 2., 0., 2., 0., 0., 0., 0., 0., 0.])\n\n\nYou may recognize all this repetitive typing as a good sign we could use a for loop to do the work — er — for us.\n\nrepeat_nos = np.zeros(14)\nfor i in range(14): # Set i to be first 0, then 1, ... through 13.\n repeat_nos[i] = np.sum(hand == i)\n# Show the result\nrepeat_nos\n\narray([0., 0., 0., 0., 1., 2., 0., 2., 0., 0., 0., 0., 0., 0.])\n\n\n\nNotice that we started our loop by checking for values equal to 0, and then values equal to 1 and so on. By our definition of the deck, no card can have value 0, so the first time through this loop, we will always get a count of 0. 
We could have saved ourselves a tiny amount of computing time if we had missed out that pointless step of checking 0, by using for i in range(1, 14): instead. In this case, we think the code is a little bit neater to read if we leave in the default start at 0, at a tiny cost in wasted computer effort.\n\nIn our particular hand, after we have done the count for 7s, we will always get 0 for card values 8, 9 … 13, because 7 was the highest card (maximum value) for our particular hand. As you might expect, there is a a Numpy function np.max that will quickly tell us the maximum value in the hand:\n\nnp.max(hand)\n\n7\n\n\nWe can use np.max to make our loop more efficient, by stopping our checks when we’ve reached the maximum value, like this:\n\nmax_value = np.max(hand)\n# Only make an array large enough to house counts for the max value.\nrepeat_nos = np.zeros(max_value + 1)\nfor i in range(max_value + 1): # Set i to 0, then 1 ... through max_value\n repeat_nos[i] = np.sum(hand == i)\n# Show the result\nrepeat_nos\n\narray([0., 0., 0., 0., 1., 2., 0., 2.])\n\n\nIn fact, this is exactly what the function np.bincount does, so we can use that function instead of our loop, to do the same job:\n\nrepeat_nos = np.bincount(hand)\nrepeat_nos\n\narray([0, 0, 0, 0, 1, 2, 0, 2])" + }, + { + "objectID": "probability_theory_2_compound.html#looking-for-hands-with-exactly-one-pair", + "href": "probability_theory_2_compound.html#looking-for-hands-with-exactly-one-pair", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.7 Looking for hands with exactly one pair", + "text": "11.7 Looking for hands with exactly one pair\nNow we have repeat_nos, we can proceed with the rest of the steps above.\nWe can count the number of cards that have exactly two repeats:\n\n(repeat_nos == 2)\n\narray([False, False, False, False, False, True, False, True])\n\n\n\nn_pairs = np.sum(repeat_nos == 2)\n# Show the result\nn_pairs\n\n2\n\n\nThe hand is of interest to us only if the number of pairs is exactly 1:\n\n# Check whether there is exactly one pair in this hand.\nn_pairs == 1\n\nFalse\n\n\nWe now have the machinery to use Python for all the logic in simulating multiple hands, and checking for exactly one pair.\nLet’s do that, and use Python to do the full job of dealing many hands and finding pairs in each one. We repeat the procedure above using a for loop. The for loop commands the program to do ten thousand repeats of the statements in the “loop” (indented statements).\nIn the body of the loop (the part that gets repeated for each trial) we:\n\nShuffle the deck.\nDeal ourselves a new hand.\nCalculate the repeat_nos for this new hand.\nCalculate the number of pairs from repeat_nos; store this as n_pairs.\nPut n_pairs for this repetition into the correct place in the scoring array z.\n\nWith that we end a single trial, and go back to the beginning, until we have done this 10000 times.\nWhen those 10000 repetitions are over, the computer moves on to count (sum) the number of “1’s” in the score-keeping array z, each “1” indicating a hand with exactly one pair. We store this count at location k. 
We divide k by 10000 to get the proportion of hands that had one pair, and we print the result of k to the screen.\n\nStart of one_pair notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\n# Create a bucket (vector) called a with four \"1's,\" four \"2's,\" four \"3's,\"\n# etc., to represent a deck of cards\none_suit = np.arange(1, 14)\none_suit\n\narray([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])\n\n\n\n# Repeat values for one suit four times to make a 52 card deck of values.\ndeck = np.repeat(one_suit, 4)\ndeck\n\narray([ 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5,\n 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9,\n 9, 9, 10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 13, 13, 13,\n 13])\n\n\n\n# Array to store result of each trial.\nz = np.zeros(10000)\n\n# Repeat the following steps 10000 times\nfor i in range(10000):\n # Shuffle the deck\n shuffled = rnd.permuted(deck)\n\n # Take the first five cards to make a hand.\n hand = shuffled[:5]\n\n # How many pairs?\n # Counts for each card rank.\n repeat_nos = np.bincount(hand)\n n_pairs = np.sum(repeat_nos == 2)\n\n # Keep score of # of pairs\n z[i] = n_pairs\n\n # End loop, go back and repeat\n\n# How often was there 1 pair?\nk = np.sum(z == 1)\n\n# Convert to proportion.\nkk = k / 10000\n\n# Show the result.\nprint(kk)\n\n0.4191\n\n\nEnd of one_pair notebook\n\nIn one run of the program, the result in kk was 0.419, so our estimate would be that the probability of a single pair is 0.419.\nHow accurate are these resampling estimates? The accuracy depends on the number of hands we deal — the more hands, the greater the accuracy. If we were to examine millions of hands, 42 percent would contain a pair each; that is, the chance of getting a pair in the long run is 42 percent. It turns out the estimate of 48 percent based on 25 hands in Table 11.1 is fairly close to the long-run estimate, though whether or not it is close enough depends on one’s needs of course. If you need great accuracy, deal many more hands.\nA note on the decks, hands, repeat_noss in the above program, etc.: These “variables” are called “array”s in Python. An array is an array (sequence) of elements that gets filled with numbers as Python conducts its operations.\nTo help keep things straight (though the program does not require it), we often use z to name the array that collects all the trial results, and k to denote our overall summary results. Or you could call it something like scoreboard — it’s up to you.\nHow many trials (hands) should be made for the estimate? There is no easy answer.1 One useful device is to run several (perhaps ten) equal sized sets of trials, and then examine whether the proportion of pairs found in the entire group of trials is very different from the proportions found in the various subgroup sets. If the proportions of pairs in the various subgroups differ greatly from one another or from the overall proportion, then keep running additional larger subgroups of trials until the variation from one subgroup to another is sufficiently small for your purposes. While such a procedure would be impractical using a deck of cards or any other physical means, it requires little effort with the computer and Python." 
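For example, here is a minimal sketch of that subgroup check for the one-pair problem. The number of subgroups (10) and the subgroup size (1,000 hands) are our own choices for illustration, not fixed rules:

import numpy as np
rnd = np.random.default_rng()

# Deck of face values, as in the one_pair notebook.
deck = np.repeat(np.arange(1, 14), 4)

n_subgroups = 10       # Number of equal-sized subgroups (our choice).
n_per_subgroup = 1000  # Hands per subgroup (our choice).
proportions = np.zeros(n_subgroups)

for g in range(n_subgroups):
    n_one_pair = 0
    for i in range(n_per_subgroup):
        # Shuffle the deck and deal a hand of five cards.
        hand = rnd.permuted(deck)[:5]
        # Count the values occurring exactly twice; exactly one such value
        # means the hand has one (and only one) pair.
        repeat_nos = np.bincount(hand)
        if np.sum(repeat_nos == 2) == 1:
            n_one_pair = n_one_pair + 1
    proportions[g] = n_one_pair / n_per_subgroup

# If the subgroup proportions are close to one another, and to their overall
# mean, we have probably run enough trials for our purposes.
print(proportions)
print(np.mean(proportions))

If the subgroup proportions vary more than we are comfortable with, we increase the number of hands per subgroup and check again.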
+ }, + { + "objectID": "probability_theory_2_compound.html#two-more-tntroductory-poker-problems", + "href": "probability_theory_2_compound.html#two-more-tntroductory-poker-problems", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.8 Two more introductory poker problems", + "text": "11.8 Two more introductory poker problems\nWhich is more likely, a poker hand with two pairs, or a hand with three of a kind? This is a comparison problem, rather than a problem in absolute estimation as was the previous example.\nIn a series of 100 “hands” that were “dealt” using random numbers, four hands contained two pairs, and two hands contained three of a kind. Is it safe to say, on the basis of these 100 hands, that hands with two pairs are more frequent than hands with three of a kind? To check, we deal another 300 hands. Among them we see fifteen hands with two pairs (3.75 percent) and eight hands with three of a kind (2 percent), for a total of nineteen to ten. Although the difference is not enormous, it is reasonably clear-cut. Another 400 hands might be advisable, but we shall not bother.\nEarlier I obtained hands with one pair 44 percent of the time, which makes it quite plain that one pair is more frequent than either two pairs or three-of-a-kind. Obviously, we need more hands to compare the odds in favor of two pairs with the odds in favor of three-of-a-kind than to compare those for one pair with those for either two pairs or three-of-a-kind. Why? Because the difference in odds between one pair, and either two pairs or three-of-a-kind, is much greater than the difference in odds between two pairs and three-of-a-kind. This observation leads to a general rule: The closer the odds between two events, the more trials are needed to determine which has the higher odds.\nAgain it is interesting to compare the odds with the formulaic mathematical computations, which are 1 in 21 (4.75 percent) for a hand containing two pairs and 1 in 47 (2.1 percent) for a hand containing three-of-a-kind — not too far from the estimates of .0375 and .02 derived from simulation.\nTo handle the problem with the aid of the computer, we simply need to estimate the proportion of hands having triplicates and the proportion of hands with two pairs, and compare those estimates.\nTo estimate the hands with three-of-a-kind, we can use a notebook just like “One Pair” earlier, except using repeat_nos == 3 to search for triplicates instead of duplicates. 
The program, then, is:\n\nStart of three_of_a_kind notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\n# Create a bucket (vector) called a with four \"1's,\" four \"2's,\" four \"3's,\"\n# etc., to represent a deck of cards\none_suit = np.arange(1, 14)\n# Repeat values for one suit four times to make a 52 card deck of values.\ndeck = np.repeat(one_suit, 4)\n\n\ntriples_per_trial = np.zeros(10000)\n\n# Repeat the following steps 10000 times\nfor i in range(10000):\n # Shuffle the deck\n shuffled = rnd.permuted(deck)\n\n # Take the first five cards.\n hand = shuffled[:5]\n\n # How many triples?\n repeat_nos = np.bincount(hand)\n n_triples = np.sum(repeat_nos == 3)\n\n # Keep score of # of triples\n triples_per_trial[i] = n_triples\n\n # End loop, go back and repeat\n\n# How often was there 1 pair?\nn_triples = np.sum(triples_per_trial == 1)\n\n# Convert to proportion\nprint(n_triples / 10000)\n\n0.0272\n\n\nEnd of three_of_a_kind notebook\n\nTo estimate the probability of getting a two-pair hand, we revert to the original program (counting pairs), except that we examine all the results in the score-keeping vector z for hands in which we had two pairs, instead of one .\n\nStart of two_pairs notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\none_suit = np.arange(1, 14)\ndeck = np.repeat(one_suit, 4)\n\n\npairs_per_trial = np.zeros(10000)\n\n# Repeat the following steps 10000 times\nfor i in range(10000):\n # Shuffle the deck\n shuffled = rnd.permuted(deck)\n\n # Take the first five cards.\n hand = shuffled[:5]\n\n # How many pairs?\n # Counts for each card rank.\n repeat_nos = np.bincount(hand)\n n_pairs = np.sum(repeat_nos == 2)\n\n # Keep score of # of pairs\n pairs_per_trial[i] = n_pairs\n\n # End loop, go back and repeat\n\n# How often were there 2 pairs?\nn_two_pairs = np.sum(pairs_per_trial == 2)\n\n# Convert to proportion\nprint(n_two_pairs / 10000)\n\n0.0487\n\n\nEnd of two_pairs notebook\n\nFor efficiency (though efficiency really is not important here because the computer performs its operations so cheaply) we could develop both estimates in a single program by simply generating 10000 hands, and count the number with three-of-a-kind and the number with two pairs.\nBefore we leave the poker problems, we note a difficulty with Monte Carlo simulation. The probability of a royal flush is so low (about one in half a million) that it would take much computer time to compute. On the other hand, considerable inaccuracy is of little matter. Should one care whether the probability of a royal flush is 1/100,000 or 1/500,000?" + }, + { + "objectID": "probability_theory_2_compound.html#the-concepts-of-replacement-and-non-replacement", + "href": "probability_theory_2_compound.html#the-concepts-of-replacement-and-non-replacement", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.9 The concepts of replacement and non-replacement", + "text": "11.9 The concepts of replacement and non-replacement\nIn the poker example above, we did not replace the first card we drew. If we were to replace the card, it would leave the probability the same before the second pick as before the first pick. That is, the conditional probability remains the same. If we replace, conditions do not change. But if we do not replace the item drawn, the probability changes from one moment to the next. 
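As a sketch of the combined approach mentioned above, the following single run of 10,000 hands scores both outcomes at once, using the same counting rules as the two notebooks above:

import numpy as np
rnd = np.random.default_rng()

one_suit = np.arange(1, 14)
deck = np.repeat(one_suit, 4)

n_two_pairs = 0
n_triples = 0

# Deal 10000 hands, scoring both outcomes from the same hands.
for i in range(10000):
    # Shuffle the deck and take the first five cards.
    shuffled = rnd.permuted(deck)
    hand = shuffled[:5]
    # Counts for each card rank.
    repeat_nos = np.bincount(hand)
    if np.sum(repeat_nos == 2) == 2:  # Exactly two pairs.
        n_two_pairs = n_two_pairs + 1
    if np.sum(repeat_nos == 3) == 1:  # Exactly one three-of-a-kind.
        n_triples = n_triples + 1

print('Proportion with two pairs:', n_two_pairs / 10000)
print('Proportion with three of a kind:', n_triples / 10000)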
(Perhaps refresh your mind with the examples in the discussion of conditional probability including Section 9.1.1)\nIf we sample with replacement, the sample drawings remain independent of each other — a topic addressed in Section 9.1.\nIn many cases, a key decision in modeling the situation in which we are interested is whether to sample with or without replacement. The choice must depend on the characteristics of the situation.\nThere is a close connection between the lack of finiteness of the concept of universe in a given situation, and sampling with replacement. That is, when the universe (population) we have in mind is not small, or has no conceptual bounds at all, then the probability of each successive observation remains the same, and this is modeled by sampling with replacement. (“Not finite” is a less expansive term than “infinite,” though one might regard them as synonymous.)\nChapter 12 discusses problems whose appropriate concept of a universe is finite, whereas Chapter 13 discusses problems whose appropriate concept of a universe is not finite. This general procedure will be discussed several times, with examples included." + }, + { + "objectID": "probability_theory_3.html#sec-birthday-problem", + "href": "probability_theory_3.html#sec-birthday-problem", + "title": "12  Probability Theory, Part 3", + "section": "12.1 Example: The Birthday Problem", + "text": "12.1 Example: The Birthday Problem\nThis examples illustrates the probability of duplication in a multi-outcome sample from an infinite universe.\nAs an indication of the power and simplicity of resampling methods, consider this famous examination question used in probability courses: What is the probability that two or more people among a roomful of (say) twenty-five people will have the same birthday? To obtain an answer we need simply examine the first twenty-five numbers from the random-number table that fall between “001” and “365” (the number of days in the year), record whether or not there is a duplication among the twenty-five, and repeat the process often enough to obtain a reasonably stable probability estimate.\nPose the question to a mathematical friend of yours, then watch her or him sweat for a while, and afterwards compare your answer to hers/his. I think you will find the correct answer very surprising. It is not unheard of for people who know how this problem works to take advantage of their knowledge by making and winning big bets on it. (See how a bit of knowledge of probability can immediately be profitable to you by avoiding such unfortunate occurrences?)\nMore specifically, these steps answer the question for the case of twenty-five people in the room:\n\nStep 1. Let three-digit random numbers 1-365 stand for the 365 days in the year. (Ignore leap year for simplicity.)\nStep 2. Examine for duplication among the first twenty-five random numbers chosen “001-365.” (Triplicates or higher-order repeats are counted as duplicates here.) If there is one or more duplicate, record “yes.” Otherwise record “no.”\nStep 3. 
Repeat perhaps a thousand times, and calculate the proportion of a duplicate birthday among twenty-five people.\n\nYou would probably use the computer to generate the initial random numbers.\nNow try the program written as follows.\n\nStart of birthday_problem notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\nn_with_same_birthday = np.zeros(10000)\n\ndays_of_year = np.arange(1, 366) # 1 through 365\n\n# Do 10000 trials (experiments)\nfor i in range(10000):\n # Generate 25 numbers randomly between \"1\" and \"365\" put them in a.\n a = rnd.choice(days_of_year, size=25)\n\n # Looking in a, count the number of multiples and put the result in\n # b. We request multiples > 1 because we are interested in any multiple,\n # whether it is a duplicate, triplicate, etc. Had we been interested only\n # in duplicates, we would have put in np.sum(counts == 2).\n counts = np.bincount(a)\n n_duplicates = np.sum(counts > 1)\n\n # Score the result of each trial to our store\n n_with_same_birthday[i] = n_duplicates\n\n # End the loop for the trial, go back and repeat the trial until all 10000\n # are complete, then proceed.\n\n# Determine how many trials had at least one multiple.\nk = np.sum(n_with_same_birthday)\n\n# Convert to a proportion.\nkk = k / 10000\n\n# Print the result.\nprint(kk)\n\n0.7799\n\n\nEnd of birthday_problem notebook\n\nWe have dealt with this example in a rather intuitive and unsystematic fashion. From here on, we will work in a more systematic, step-by-step manner. And from here on the problems form an orderly sequence of the classical types of problems in probability theory (Chapter 12 and Chapter 13), and inferential statistics (Chapter 20 to Chapter 28.)" + }, + { + "objectID": "probability_theory_3.html#example-three-daughters-among-four-children", + "href": "probability_theory_3.html#example-three-daughters-among-four-children", + "title": "12  Probability Theory, Part 3", + "section": "12.2 Example: Three Daughters Among Four Children", + "text": "12.2 Example: Three Daughters Among Four Children\nThis problem illustrates a problem with two outcomes (Binomial 1) and sampling with Replacement Among Equally Likely Outcomes.\nWhat is the probability that exactly three of the four children in a four-child family will be daughters?2\nThe first step is to state that the approximate probability that a single birth will produce a daughter is 50-50 (1 in 2). This estimate is not strictly correct, because there are roughly 106 male children born to each 100 female children. But the approximation is close enough for most purposes, and the 50-50 split simplifies the job considerably. (Such “false” approximations are part of the everyday work of the scientist. The appropriate question is not whether or not a statement is “only” an approximation, but whether or not it is a good enough approximation for your purposes.)\nThe probability that a fair coin will turn up heads is .50 or 50-50, close to the probability of having a daughter. Therefore, flip a coin in groups of four flips, and count how often three of the flips produce heads . (You must decide in advance whether three heads means three girls or three boys.) It is as simple as that.\nIn resampling estimation it is of the highest importance to work in a careful, step-by-step fashion — to write down the steps in the estimation, and then to do the experiments just as described in the steps. 
Here are a set of steps that will lead to a correct answer about the probability of getting three daughters among four children:\n\nStep 1. Using coins, let “heads” equal “girl” and “tails” equal “boy.”\nStep 2. Throw four coins.\nStep 3. Examine whether the four coins fall with exactly three heads up. If so, write “yes” on a record sheet; otherwise write “no.”\nStep 4. Repeat step 2 perhaps two hundred times.\nStep 5. Count the proportion “yes.” This proportion is an estimate of the probability of obtaining exactly 3 daughters in 4 children.\n\nThe first few experimental trials might appear in the record sheet as follows (Table 12.1):\n\n\nTable 12.1: Example trials from the three-girls problem\n\n\nNumber of Heads\nYes or No\n\n\n\n\n1\nNo\n\n\n0\nNo\n\n\n3\nYes\n\n\n2\nNo\n\n\n1\nNo\n\n\n2\nNo\n\n\n…\n…\n\n\n…\n…\n\n\n…\n…\n\n\n\n\nThe probability of getting three daughters in four births could also be found with a deck of cards, a random number table, a die, or with Python. For example, half the cards in a deck are black, so the probability of getting a black card (“daughter”) from a full deck is 1 in 2. Therefore, deal a card, record “daughter” or “son,” replace the card, shuffle, deal again, and so forth for 200 sets of four cards. Then count the proportion of groups of four cards in which you got four daughters.\n\nStart of three_girls notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\ngirl_counts = np.zeros(10000)\n\n# Do 10000 trials\nfor i in range(10000):\n\n # Select 'girl' or 'boy' at random, four times.\n children = rnd.choice(['girl', 'boy'], size=4)\n\n # Count the number of girls and put the result in b.\n b = np.sum(children == 'girl')\n\n # Keep track of each trial result in z.\n girl_counts[i] = b\n\n # End this trial, repeat the experiment until 10000 trials are complete,\n # then proceed.\n\n# Count the number of experiments where we got exactly 3 girls, and put this\n# result in k.\nn_three_girls = np.sum(girl_counts == 3)\n\n# Convert to a proportion.\nthree_girls_prop = n_three_girls / 10000\n\n# Print the results.\nprint(three_girls_prop)\n\n0.2502\n\n\nEnd of three_girls notebook\n\nNotice that the procedure outlined in the steps above would have been different (though almost identical) if we asked about the probability of three or more daughters rather than exactly three daughters among four children. For three or more daughters we would have scored “yes” on our score-keeping pad for either three or four heads, rather than for just three heads. Likewise, in the computer solution we would have used the statement n_three_girls = np.sum(girl_counts >= 3) .\nIt is important that, in this case, in contrast to what we did in the example from Section 11.2 (the introductory poker example), the card is replaced each time so that each card is dealt from a full deck. This method is known as sampling with replacement . One samples with replacement whenever the successive events are independent ; in this case we assume that the chance of having a daughter remains the same (1 girl in 2 births) no matter what sex the previous births were 3. 
But, if the first card dealt is black and would not be replaced, the chance of the second card being black would no longer be 26 in 52 (.50), but rather 25 in 51 (.49), if the first three cards are black and would not be replaced, the chances of the fourth card’s being black would sink to 23 in 49 (.47).\nTo push the illustration further, consider what would happen if we used a deck of only six cards, half (3 of 6) black and half (3 of 6) red, instead of a deck of 52 cards. If the chosen card is replaced each time, the 6-card deck produces the same results as a 52-card deck; in fact, a two-card deck would do as well. But, if the sampling is done without replacement, it is impossible to obtain 4 “daughters” with the 6-card deck because there are only 3 “daughters” in the deck. To repeat, then, whenever you want to estimate the probability of some series of events where each event is independent of the other, you must sample with replacement ." + }, + { + "objectID": "probability_theory_3.html#variations-of-the-daughters-problem", + "href": "probability_theory_3.html#variations-of-the-daughters-problem", + "title": "12  Probability Theory, Part 3", + "section": "12.3 Variations of the daughters problem", + "text": "12.3 Variations of the daughters problem\nIn later chapters we will frequently refer to a problem which is identical in basic structure to the problem of three girls in four children — the probability of getting 9 females in ten calf births if the probability of a female birth is (say) .5 — when we set this problem in the context of the possibility that a genetic engineering practice is effective in increasing the proportion of females (desirable for the production of milk).\nSo far we have assumed the simple case where we have an array of values that we are sampling from, and we are selecting each of these values into the sample with equal probability.\nFor example, we started with the simple assumption that a child is just as likely to be born a boy as a girl. Our input is:\n\ninput_values = ['girl', 'boy']\n\nBy default, rnd.choice will draw the input values with equal probability. Here, we draw a sample (children) of four values from the input, where each value in children has an equal chance of being “girl” or “boy”.\n\nchildren = rnd.choice(input_values, size=4)\nchildren\n\narray(['boy', 'boy', 'boy', 'girl'], dtype='<U4')\n\n\nThat is, rnd.choice gives each element in input_values an equal chance of being selected as the next element in children.\nThat is fine if we have some simple probability to simulate, like 0.5. But now let us imagine we want to get more precise. We happen to know that any given birth is just slightly more likely to be a boy than a girl.4. For example, the proportion of boys born in the UK is 0.513. Hence the proportion of girls is 1-0.513 = 0.487." 
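To see why the default equal-probability sampling will not quite do here, we can check the long-run proportion of ‘girl’ that rnd.choice gives with its defaults. This is a quick sketch; the sample size of 100,000 is an arbitrary choice:

import numpy as np
rnd = np.random.default_rng()

# With the default settings, 'girl' and 'boy' are drawn with equal
# probability, so the long-run proportion of 'girl' comes out near 0.5,
# not the 0.487 we now want.
sample = rnd.choice(['girl', 'boy'], size=100000)
print(np.sum(sample == 'girl') / 100000)

We therefore need some way of telling rnd.choice to draw ‘girl’ a little less often than ‘boy’.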
+ }, + { + "objectID": "probability_theory_3.html#and-the-probp-argument", + "href": "probability_theory_3.html#and-the-probp-argument", + "title": "12  Probability Theory, Part 3", + "section": "12.4 rnd.choice and the p argument", + "text": "12.4 rnd.choice and the p argument\nWe could replicate this probability of 0.487 for ‘girl’ in the output sample by making an input array of 1000 strings, that contains 487 ‘girls’ and 513 ‘boys’:\n\nbig_girls = np.repeat(['girl', 'boy'], [487, 513])\n\nNow if we sample using the default in rnd.choice, each element in the input big_girls array will have the same chance of appearing in the sample, but because there are 487 ‘girls’, and 513 ‘boys’, each with an equal chance of appearing in the sample, we will get a ‘girl’ in roughly 487 out of every 1000 elements we draw, and a boy roughly 513 / 1000 times. That is, our chance of any one element of being a ‘girl’ is, as we want, 0.487.\n\n# Now each element has probability 0.487 of 'girl', 0.513 of 'boy'.\nrealistic_children = rnd.choice(big_girls, size=4)\nrealistic_children\n\narray(['boy', 'boy', 'girl', 'boy'], dtype='<U4')\n\n\nBut, there is an easier way than compiling a big 1000 element array, and that is to use the p= argument to rnd.choice. This allows us to specify the probability with which we will draw each of the input elements into the output sample. For example, to draw ‘girl’ with probability 0.487 and ‘boy’ with probability 0.513, we would do:\n\n# Draw 'girl' with probability (p) 0.487 and 'boy' 0.513.\nchildren_again = rnd.choice(['girl', 'boy'], size=4, p=[0.487, 0.513])\nchildren_again\n\narray(['girl', 'boy', 'girl', 'girl'], dtype='<U4')\n\n\nThe p argument allows us to specify the probability of each element in the input array — so if we had three elements in the input array, we would need three probabilities in p. For example, let’s say we were looking at some poorly-entered hospital records, we might have ‘girl’ or ‘boy’ recorded as the child’s gender, but the record might be missing — ‘not-recorded’ — with a 19% chance:\n\n# Draw 'girl' with probability (p) 0.4, 'boy' with p=0.41, 'not-recorded' with\n# p=0.19.\nrnd.choice(['girl', 'boy', 'not-recorded'], size=30, p=[0.4, 0.41, 0.19])\n\narray(['girl', 'girl', 'girl', 'girl', 'boy', 'girl', 'girl',\n 'not-recorded', 'girl', 'boy', 'boy', 'girl', 'girl', 'boy',\n 'not-recorded', 'girl', 'not-recorded', 'boy', 'girl', 'boy',\n 'not-recorded', 'girl', 'boy', 'girl', 'boy', 'not-recorded',\n 'girl', 'girl', 'boy', 'not-recorded'], dtype='<U12')\n\n\n\n\n\n\n\n\nHow does the p argument to rnd.choice work?\n\n\n\nYou might wonder how Python does this trick of choosing the elements with different probabilities.\nOne way of doing this is to use uniform random numbers from 0 through 1. These are floating point numbers that can take any value, at random, from 0 through 1.\n\n# Run this cell a few times to see random numbers anywhere from 0 through 1.\nrnd.uniform()\n\n0.3358873070551027\n\n\nBecause this random uniform number has an equal chance of being anywhere in the range 0 through 1, there is a 50% chance that any given number will be less then 0.5 and a 50% chance it is greater than 0.5. 
(Of course it could be exactly equal to 0.5, but this is vanishingly unlikely, so we will ignore that for now).\nSo, if we thought girls were exactly as likely as boys, we could select from ‘girl’ and ‘boy’ using this simple logic:\n\nif rnd.uniform() < 0.5:\n result = 'girl'\nelse:\n result = 'boy'\n\nBut, by the same logic, there is a 0.487 chance that the random uniform number will be less than 0.487 and a 0.513 chance it will be greater. So, if we wanted to give ourselves a 0.487 chance of ‘girl’, we could do:\n\nif rnd.uniform() < 0.487:\n result = 'girl'\nelse:\n result = 'boy'\n\nWe can extend the same kind of logic to three options. For example, there is a 0.4 chance the random uniform number will be less than 0.4, a 0.41 chance it will be somewhere between 0.4 and 0.81, and a 0.19 chance it will be greater than 0.81." + }, + { + "objectID": "probability_theory_3.html#the-daughters-problem-with-more-accurate-probabilities", + "href": "probability_theory_3.html#the-daughters-problem-with-more-accurate-probabilities", + "title": "12  Probability Theory, Part 3", + "section": "12.5 The daughters problem with more accurate probabilities", + "text": "12.5 The daughters problem with more accurate probabilities\nWe can use the probability argument to rnd.choice to do a more realistic simulation of the chance of a family with exactly three girls. In this case it is easy to make the chance for the Python simulation, but much more difficult using physical devices like coins to simulate the randomness.\nRemember, the original code for the 50-50 case, has the following:\n\n# Select 'girl' or 'boy' at random, four times.\nchildren = rnd.choice(['girl', 'boy'], size=4)\n\n# Count the number of girls and put the result in b.\nb = np.sum(children == 'girl')\n\nThe only change we need to the above, for the 0.487 - 0.513 case, is the one you see above:\n\n# Give 'girl' 48.7% of the time, 'boy' 51.3% of the time.\nchildren = rnd.choice(['girl', 'boy'], size=4, p=[0.487, 0.513])\n\nb = np.sum(children == 'girl')\n\nThe rest of the program remains unchanged." + }, + { + "objectID": "probability_theory_3.html#a-note-on-clarifying-and-labeling-problems", + "href": "probability_theory_3.html#a-note-on-clarifying-and-labeling-problems", + "title": "12  Probability Theory, Part 3", + "section": "12.6 A note on clarifying and labeling problems", + "text": "12.6 A note on clarifying and labeling problems\nIn conventional analytic texts and courses on inferential statistics, students are taught to distinguish between various classes of problems in order to decide which formula to apply. I doubt the wisdom of categorizing and labeling problems in that fashion, and the practice is unnecessary here. I consider it better that the student think through every new problem in the most fundamental terms. The exercise of this basic thinking avoids the mistakes that come from too-hasty and superficial pigeon-holing of problems into categories. Nevertheless, in order to help readers connect up the resampling material with the conventional curriculum of analytic methods, the examples presented here are given their conventional labels. And the examples given here cover the range of problems encountered in courses in probability and inferential statistics.\nTo repeat, one does not need to classify a problem when one proceeds with the Monte Carlo resampling method; you simply model the features of the situation you wish to analyze. 
In contrast, with conventional methods you must classify the situation and then apply procedures according to rules that depend upon the classification; often the decision about which rules to follow must be messy because classification is difficult in many cases, which contributes to the difficulty of choosing correct conventional formulaic methods." + }, + { + "objectID": "probability_theory_3.html#binomial-trials", + "href": "probability_theory_3.html#binomial-trials", + "title": "12  Probability Theory, Part 3", + "section": "12.7 Binomial trials", + "text": "12.7 Binomial trials\nThe problem of the three daughters in four births is known in the conventional literature as a “binomial sampling experiment with equally-likely outcomes.” “Binomial” means that the individual simple event (a birth or a coin flip) can have only two outcomes (boy or girl, heads or tails), “binomial” meaning “two names” in Latin.5\nA fundamental property of binomial processes is that the individual trials are independent , a concept discussed earlier. A binomial sampling process is a series of binomial (one-of-two-outcome) events about which one may ask many sorts of questions — the probability of exactly X heads (“successes”) in N trials, or the probability of X or more “successes” in N trials, and so on.\n“Equally likely outcomes” means we assume that the probability of a girl or boy in any one birth is the same (though this assumption is slightly contrary to fact); we represent this assumption with the equal-probability heads and tails of a coin. Shortly we will come to binomial sampling experiments where the probabilities of the individual outcomes are not equal.\nThe term “with replacement” was explained earlier; if we were to use a deck of red and black cards (instead of a coin) for this resampling experiment, we would replace the card each time a card is drawn.\nThe introductory poker example from Section 11.2, illustrated sampling without replacement, as will other examples to follow.\nThis problem would be done conventionally with the binomial theorem using probabilities of .5, or of .487 and .513, asking about 3 successes in 4 trials." + }, + { + "objectID": "probability_theory_3.html#example-three-or-more-successful-basketball-shots-in-five-attempts", + "href": "probability_theory_3.html#example-three-or-more-successful-basketball-shots-in-five-attempts", + "title": "12  Probability Theory, Part 3", + "section": "12.8 Example: Three or More Successful Basketball Shots in Five Attempts", + "text": "12.8 Example: Three or More Successful Basketball Shots in Five Attempts\nThis is an example of two-outcome sampling with unequally-likely outcomes, with replacement — a binomial experiment.\nWhat is the probability that a basketball player will score three or more baskets in five shots from a spot 30 feet from the basket, if on the average she succeeds with 25 percent of her shots from that spot?\nIn this problem the probabilities of “success” or “failure” are not equal, in contrast to the previous problem of the daughters. 
Instead of a 50-50 coin, then, an appropriate “model” would be a thumbtack that has a 25 percent chance of landing “up” when it falls, and a 75 percent chance of landing down.\nIf we lack a thumbtack known to have a 25 percent chance of landing “up,” we could use a card deck and let spades equal “success” and the other three suits represent “failure.” Our resampling experiment could then be done as follows:\n\nLet “spade” stand for “successful shot,” and the other suits stand for unsuccessful shot.\nDraw a card, record its suit (“spade” or “other”) and replace. Do so five times (for five shots).\nRecord whether the outcome of step 2 was three or more spades. If so indicate “yes,” and otherwise “no.”\nRepeat steps 2-4 perhaps four hundred times.\nCount the proportion “yes” out of the four hundred throws. That proportion estimates the probability of getting three or more baskets out of five shots if the probability of a single basket is .25.\n\nThe first four repetitions on your score sheet might look like this (Table 12.2):\n\n\nTable 12.2: First four repetitions of 3 or more shots simulation\n\n\nCard 1\nCard 2\nCard 3\nCard 4\nCard 5\nResult\n\n\n\n\nSpade\nOther\nOther\nOther\nOther\nNo\n\n\nOther\nOther\nOther\nOther\nOther\nNo\n\n\nSpade\nSpade\nOther\nSpade\nSpade\nYes\n\n\nOther\nSpade\nOther\nOther\nSpade\nNo\n\n\n\n\nInstead of cards, we could have used two-digit random numbers, with (say) “1-25” standing for “success,” and “26-00” (“00” in place of “100”) standing for failure. Then the steps would simply be:\n\nLet the random numbers “1-25” stand for “successful shot,” “26-00” for unsuccessful shot.\nDraw five random numbers;\nCount how many of the numbers are between “01” and “25.” If three or more, score “yes.”\nRepeat step 2 four hundred times.\n\nIf you understand the earlier “three_girls” program, then the program below should be easy: To create 10000 samples, we start with a for statement. We then sample 5 numbers between “1” and “4” into our variable a to simulate the 5 shots, each with a 25 percent — or 1 in 4 — chance of scoring. We decide that 1 will stand for a successful shot, and 2 through 4 will stand for a missed shot, and therefore we count (sum) the number of 1’s in a to determine the number of shots resulting in baskets in the current sample. The next step is to transfer the results of each trial to array n_baskets. We then finish the loop by unindenting the next line of code. The final step is to search the array n_baskets, after the 10000 samples have been generated and sum the times that 3 or more baskets were made. 
We place the results in n_more_than_2, calculate the proportion in propo_more_than_2, and then display the result.\n\nStart of basketball_shots notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\nn_baskets = np.zeros(10000)\n\n# Do 10000 experimental trials.\nfor i in range(10000):\n\n # Generate 5 random numbers, each between 1 and 4, put them in \"a\".\n # Let \"1\" represent a basket, \"2\" through \"4\" be a miss.\n a = rnd.integers(1, 5, size=5)\n\n # Count the number of baskets, put that result in b.\n b = np.sum(a == 1)\n\n # Keep track of each experiment's results in z.\n n_baskets[i] = b\n\n # End the experiment, go back and repeat until all 10000 are completed, then\n # proceed.\n\n# Determine how many experiments produced more than two baskets, put that\n# result in k.\nn_more_than_2 = np.sum(n_baskets > 2)\n\n# Convert to a proportion.\nprop_more_than_2 = n_more_than_2 / 10000\n\n# Print the result.\nprint(prop_more_than_2)\n\n0.104\n\n\nEnd of basketball_shots notebook" + }, + { + "objectID": "probability_theory_3.html#note-to-the-student-of-analytic-probability-theory", + "href": "probability_theory_3.html#note-to-the-student-of-analytic-probability-theory", + "title": "12  Probability Theory, Part 3", + "section": "12.9 Note to the student of analytic probability theory", + "text": "12.9 Note to the student of analytic probability theory\nThis problem would be done conventionally with the binomial theorem, asking about the chance of getting 3 successes in 5 trials, with the probability of a success = .25." + }, + { + "objectID": "probability_theory_3.html#sec-one-black-archery", + "href": "probability_theory_3.html#sec-one-black-archery", + "title": "12  Probability Theory, Part 3", + "section": "12.10 Example: One in Black, Two in White, No Misses in Three Archery Shots", + "text": "12.10 Example: One in Black, Two in White, No Misses in Three Archery Shots\nThis is an example of a multiple outcome (multinomial) sampling with unequally likely outcomes; with replacement.\nAssume from past experience that a given archer puts 10 percent of his shots in the black (“bullseye”) and 60 percent of his shots in the white ring around the bullseye, but misses with 30 percent of his shots. How likely is it that in three shots the shooter will get exactly one bullseye, two in the white, and no misses? Notice that unlike the previous cases, in this example there are more than two outcomes for each trial.\nThis problem may be handled with a deck of three colors (or suits) of cards in proportions varying according to the probabilities of the various outcomes, and sampling with replacement. Using random numbers is simpler, however:\n\nStep 1. Let “1” = “bullseye,” “2-7” = “in the white,” and “8-0” = “miss.”\nStep 2. Choose three random numbers, and examine whether there are one “1” and two numbers “2-7.” If so, record “yes,” otherwise “no.”\nStep 3. Repeat step 2 perhaps 400 times, and count the proportion of “yeses.” This estimates the probability sought.\n\nThis problem would be handled in conventional probability theory with what is known as the Multinomial Distribution.\nThis problem may be quickly solved on the computer using Python with the notebook labeled “bullseye” below. 
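As a cross-check on the simulation in the bullseye notebook that follows, the same probability can also be computed directly from the multinomial formula. The snippet below is our own aside rather than part of the notebook; it assumes SciPy (mentioned in the book's introduction) is available.

from scipy.stats import multinomial

# Probability of exactly 1 bullseye, 2 in the white and 0 misses in 3 shots,
# with outcome probabilities 0.1, 0.6 and 0.3.
p_exact = multinomial(n=3, p=[0.1, 0.6, 0.3]).pmf([1, 2, 0])
print(p_exact)  # 3 * 0.1 * 0.6 ** 2 * 0.3 ** 0 = 0.108
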
Bullseye has a complication not found in previous problems: It tests whether two different sorts of events both happen — a bullseye plus two shots in the white.\nAfter generating three randomly-drawn numbers between 1 and 10, we check with the sum function to see if there is a bullseye. If there is, the if statement tells the computer to continue with the operations, checking if there are two shots in the white; if there is no bullseye, the if statement tells the computer to end the trial and start another trial. A thousand repetitions are called for, the number of trials meeting the criteria are counted, and the results are then printed.\nIn addition to showing how this particular problem may be handled with Python, the “bullseye” program teaches you some more fundamentals of computer programming. The if statement and the two loops, one within the other, are basic tools of programming.\n\nStart of bullseye notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\n# Make an array to store the results of each trial.\nwhite_counts = np.zeros(10000)\n\n# Do 10000 experimental trials\nfor i in range(10000):\n\n # To represent 3 shots, generate 3 numbers at random between \"1\" and \"10\"\n # and put them in a. We will let a \"1\" denote a bullseye, \"2\"-\"7\" a shot in\n # the white, and \"8\"-\"10\" a miss.\n a = rnd.integers(1, 11, size=3)\n\n # Count the number of bullseyes, put that result in b.\n b = np.sum(a == 1)\n\n # If there is exactly one bullseye, we will continue with counting the\n # other shots. (If there are no bullseyes, we need not bother — the\n # outcome we are interested in has not occurred.)\n if b == 1:\n\n # Count the number of shots in the white, put them in c. (Recall we are\n # doing this only if we got one bullseye.)\n c = np.sum((a >= 2) & (a <=7))\n\n # Keep track of the results of this second count.\n white_counts[i] = c\n\n # End the \"if\" sequence — we will do the following steps without regard\n # to the \"if\" condition.\n\n # End the above experiment and repeat it until 10000 repetitions are\n # complete, then continue.\n\n# Count the number of occasions on which there are two in the white and a\n# bullseye.\nn_desired = np.sum(white_counts == 2)\n\n# Convert to a proportion.\nprop_desired = n_desired / 10000\n\n# Print the results.\nprint(prop_desired)\n\n0.1052\n\n\nEnd of bullseye notebook\n\nThis example illustrates the addition rule that was introduced and discussed in Chapter 9. In Section 12.10, a bullseye, an in-the-white shot, and a missed shot are “mutually exclusive” events because a single shot cannot result in more than one of the three possible outcomes. One can calculate the probability of either of two mutually-exclusive outcomes by adding their probabilities. The probability of either a bullseye or a shot in the white is .1 + .6 = .7. The probability of an arrow either in the white or a miss is .6 + .3 = .9. The logic of the addition rule is obvious when we examine the random numbers given to the outcomes. 
Seven of 10 random numbers belong to “bullseye” or “in the white,” and nine of 10 belong to “in the white” or “miss.”" + }, + { + "objectID": "probability_theory_3.html#example-two-groups-of-heart-patients", + "href": "probability_theory_3.html#example-two-groups-of-heart-patients", + "title": "12  Probability Theory, Part 3", + "section": "12.11 Example: Two Groups of Heart Patients", + "text": "12.11 Example: Two Groups of Heart Patients\nWe want to learn how likely it is that, by chance, group A would have as little as two deaths more than group B — Table 12.3:\n\n\nTable 12.3: Two Groups of Heart Patients\n\n\n\nLive\nDie\n\n\n\n\nGroup A\n79\n11\n\n\nGroup B\n21\n9\n\n\n\n\nThis problem, phrased here as a question in probability, is the prototype of a problem in statistics that we will consider later (which the conventional theory would handle with a “chi square distribution”). We can handle it in either of two ways, as follows:\nApproach A\n\nPut 120 balls into a bucket, 100 white (for live) and 20 black (for die).\nDraw 30 balls randomly and assign them to Group B; the others are assigned to group A.\nCount the numbers of black balls in the two groups and determine whether Group A’s excess “deaths” (= black balls), compared to Group B, is two or fewer (or what is equivalent in this case, whether there are 11 or fewer black balls in Group A); if so, write “Yes,” otherwise “No.”\nRepeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”\n\nA second way we shall think about this sort of problem may be handled as follows:\nApproach B\n\nPut 120 balls into a bucket, 100 white (for live) and 20 black (for die) (as before).\nDraw balls one by one, replacing the drawn ball each time, until you have accumulated 90 balls for Group A and 30 balls for Group B. (You could, of course, just as well use a bucket for 4 white and 1 black balls or 8 white and 2 black in this approach.)\nAs in approach “A” above, count the numbers of black balls in the two groups and determine whether Group A’s excess deaths is two or fewer; if so, write “Yes,” otherwise “No.”\nAs above, repeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”\n\nWe must also take into account the possibility of a similar eye-catching “unbalanced” result of a much larger proportion of deaths in Group B. It will be a tough decision how to do so, but a reasonable option is to simply double the probability computed in step 4a or 4b.\nDeciding which of these two approaches — the “permutation” (without replacement) and “bootstrap” (with replacement) methods — is the more appropriate is often a thorny matter; it will be discussed latter in Chapter 24. In many cases, however, the two approaches will lead to similar results.\nLater, we will actually carry out these procedures with the aid of Python, and estimate the probabilities we seek." 
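The Python for this example comes later in the book; in the meantime, the sketch below shows one way Approach A (the without-replacement version) could be coded, in the same style as the earlier notebooks. The variable names and the choice of 10000 trials are ours.

import numpy as np
rnd = np.random.default_rng()

# The "bucket": 120 patients, 100 live (white balls) and 20 die (black balls).
patients = np.repeat(['live', 'die'], [100, 20])

n_yes = 0
# Do 10000 trials.
for i in range(10000):
    # Shuffle the bucket; the first 90 form group A, the last 30 form group B.
    shuffled = rnd.permuted(patients)
    a_deaths = np.sum(shuffled[:90] == 'die')
    b_deaths = np.sum(shuffled[90:] == 'die')
    # "Yes" if group A's excess deaths over group B is two or fewer
    # (equivalently, 11 or fewer deaths in group A).
    if a_deaths - b_deaths <= 2:
        n_yes = n_yes + 1

# Proportion of "yes" trials.
print(n_yes / 10000)

Approach B differs only in that the two groups are drawn with replacement, for example with rnd.choice(patients, size=90) and rnd.choice(patients, size=30) in place of the shuffle-and-split.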
+ }, + { + "objectID": "probability_theory_3.html#example-dispersion-of-a-sum-of-random-variables-hammer-lengths-heads-and-handles", + "href": "probability_theory_3.html#example-dispersion-of-a-sum-of-random-variables-hammer-lengths-heads-and-handles", + "title": "12  Probability Theory, Part 3", + "section": "12.12 Example: Dispersion of a Sum of Random Variables — Hammer Lengths — Heads and Handles", + "text": "12.12 Example: Dispersion of a Sum of Random Variables — Hammer Lengths — Heads and Handles\nThe distribution of lengths for hammer handles is as follows: 20 percent are 10 inches long, 30 percent are 10.1 inches, 30 percent are 10.2 inches, and 20 percent are 10.3 inches long. The distribution of lengths for hammer heads is as follows: 2.0 inches, 20 percent; 2.1 inches, 20 percent; 2.2 inches, 30 percent; 2.3 inches, 20 percent; 2.4 inches, 10 percent.\nIf you draw a handle and a head at random, what will be the mean total length? In Chapter 9 we saw that the conventional formulaic method tells you that an answer with a formula that says the sum of the means is the mean of the sums, but it is easy to get the answer with simulation. But now we ask about the dispersion of the sum. There are formulaic rules for such measures as the variance. But consider this other example: What proportion of the hammers made with handles and heads drawn at random will have lengths equal to or greater than 12.4 inches? No simple formula will provide an answer. And if the number of categories is increased considerably, any formulaic approach will be become burdensome if not undoable. But Monte Carlo simulation produces an answer quickly and easily, as follows:\n\nFill a bucket with:\n\n2 balls marked “10” (inches),\n3 balls marked “10.1”,\n3 marked “10.2”, and\n2 marked “10.3”.\n\nThis bucket represents the handles.\nFill another bucket with:\n\n2 balls marked “2.0”,\n2 balls marked “2.1”,\n3 balls marked “2.2”,\n2 balls marked “2.3” and\n1 ball marked “2.4”.\n\nThis bucket represents the heads.\nPick a ball from each of the “handles” and “heads” bucket, calculate the sum, and replace the balls.\nRepeat perhaps 200 times (more when you write a computer program), and calculate the proportion of the sums that are greater than 12.4 inches.\n\nYou may also want to forego learning the standard “rule,” and simply estimate the mean this way, also. As an exercise, compute the interquartile range — the difference between the 25th and the 75th percentiles." 
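No notebook is given for this example, so here is one possible sketch. It uses the p= argument from Section 12.4 in place of marked balls in buckets, and it works in tenths of an inch so that the comparison with 12.4 inches is exact; the variable names and the 10000 repetitions are ours.

import numpy as np
rnd = np.random.default_rng()

# Draw 10000 handles and 10000 heads, in tenths of an inch, with the stated
# probabilities (integer tenths avoid floating-point trouble at exactly 12.4).
handles = rnd.choice([100, 101, 102, 103], size=10000, p=[0.2, 0.3, 0.3, 0.2])
heads = rnd.choice([20, 21, 22, 23, 24], size=10000, p=[0.2, 0.2, 0.3, 0.2, 0.1])
lengths = (handles + heads) / 10  # total lengths, back in inches

# Proportion of hammers 12.4 inches long or longer.
print(np.mean(lengths >= 12.4))
# The mean length, and the interquartile range suggested as an exercise.
print(np.mean(lengths))
print(np.percentile(lengths, 75) - np.percentile(lengths, 25))
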
+ }, + { + "objectID": "probability_theory_3.html#example-the-product-of-random-variables-theft-by-employees", + "href": "probability_theory_3.html#example-the-product-of-random-variables-theft-by-employees", + "title": "12  Probability Theory, Part 3", + "section": "12.13 Example: The Product of Random Variables — Theft by Employees", + "text": "12.13 Example: The Product of Random Variables — Theft by Employees\nThe distribution of the number of thefts per month you can expect in your business is as follows:\n\n\n\nNumber\nProbability\n\n\n\n\n0\n0.5\n\n\n1\n0.2\n\n\n2\n0.1\n\n\n3\n0.1\n\n\n4\n0.1\n\n\n\nThe amounts that may be stolen on any theft are as follows:\n\n\n\nAmount\nProbability\n\n\n\n\n$50\n0.4\n\n\n$75\n0.4\n\n\n$100\n0.1\n\n\n$125\n0.1\n\n\n\nThe same procedure as used above to estimate the mean length of hammers — add the lengths of handles and heads — can be used for this problem except that the results of the drawings from each bucket are multiplied rather than added.\nIn this case there is again a simple rule: The mean of the products equals the product of the means. But this rule holds only when the two urns are indeed independent of each other, as they are in this case.\nThe next two problems are a bit harder than the previous ones; you might skip them for now and come back to them a bit later. However, with the Monte Carlo simulation method they are within the grasp of any introductory student who has had just a bit of experience with the method. In contrast, a standard book whose lead author is Frederick Mosteller, as respected a statistician as there is, says of this type of problem: “Naturally, in this book we cannot expect to study such difficult problems in their full generality [that is, show how to solve them, rather than merely state them], but we can lay a foundation for their study.” (Mosteller, Rourke, and Thomas 1961, 5)" + }, + { + "objectID": "probability_theory_3.html#example-flipping-pennies-to-the-end", + "href": "probability_theory_3.html#example-flipping-pennies-to-the-end", + "title": "12  Probability Theory, Part 3", + "section": "12.14 Example: Flipping Pennies to the End", + "text": "12.14 Example: Flipping Pennies to the End\nTwo players, each with a stake of ten pennies, engage in the following game: A coin is tossed, and if it is (say) heads, player A gives player B a penny; if it is tails, player B gives player A a penny. What is the probability that one player will lose his or her entire stake of 10 pennies if they play for 200 tosses?\nThis is a classic problem in probability theory; it has many everyday applications in situations such as inventory management. For example, what is the probability of going out of stock of a given item in a given week if customers and deliveries arrive randomly? It also is a model for many processes in modern particle physics.\nSolution of the penny-matching problem with coins is straightforward. Repeatedly flip a coin and check if one player or the other reaches a zero balance before you reach 200 flips. Or with random numbers:\n\nNumbers “1-5” = head = “+1”; Numbers “6-0” = tail = “-1.”\nProceed down a series of 200 numbers, keeping a running tally of the “+1”’s and the “-1”’s. If the tally reaches “+10” or “-10” on or before the two-hundredth digit, record “yes”; otherwise record “no.”\nRepeat step 2 perhaps 400 or 10000 times, and calculate the proportion of “yeses.” This estimates the probability sought.\n\nThe following Python program also solves the problem. 
The heart of the program starts at the line where the program models a coin flip with the statement: c = rnd.integers(1, 3) After you study that, go back and notice the inner for loop starting with for j in range(200): that describes the procedure for flipping a coin 200 times. Finally, note how the outer for i in range(10000): loop simulates 10000 games, each game consisting of the 200 coin flips we generated with the inner for loop above.\n\nStart of pennies notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n\nsomeone_won = np.zeros(10000)\n\n# Do 10000 trials\nfor i in range(10000):\n\n # Record the number 10: a's stake\n a_stake = 10\n\n # Same for b\n b_stake = 10\n\n # An indicator flag that will be set to \"1\" when somebody wins.\n flag = 0\n\n # Repeat the following steps 200 times.\n # Notice we use \"j\" as the counter variable, to avoid overwriting\n # \"i\", the counter variable for the 10000 trials.\n for j in range(200):\n # Generate the equivalent of a coin flip, letting 1 = heads,\n # 2 = tails\n c = rnd.integers(1, 3)\n\n # If it's a heads\n if c == 1:\n\n # Add 1 to b's stake\n b_stake = b_stake + 1\n\n # Subtract 1 from a's stake\n a_stake = a_stake - 1\n\n # End the \"if\" condition\n\n # If it's a tails\n if c == 2:\n\n # Add one to a's stake\n a_stake = a_stake + 1\n\n # Subtract 1 from b's stake\n b_stake = b_stake - 1\n\n # End the \"if\" condition\n\n # If a has won\n if a_stake == 20:\n\n # Set the indicator flag to 1\n flag = 1\n\n # If b has won\n if b_stake == 20:\n\n # Set the indicator flag to 1\n flag = 1\n\n # End the repeat loop for 200 plays (note that the indicator flag stays at\n # 0 if neither a nor b has won)\n\n # Keep track of whether anybody won\n someone_won[i] = flag\n\n# End the 10000 trials\n\n# Find out how often somebody won\nn_wins = np.sum(someone_won)\n\n# Convert to a proportion\nprop_wins = n_wins / 10000\n\n# Print the results\nprint(prop_wins)\n\n0.8918\n\n\nEnd of pennies notebook\n\nA similar example: Your warehouse starts out with a supply of twelve capacirators. Every three days a new shipment of two capacirators is received. There is a .6 probability that a capacirator will be used each morning, and the same each afternoon. (It is as if a random drawing is made each half-day to see if a capacirator is used; two capacirators may be used in a single day, or one or none). How long will be it, on the average, before the warehouse runs out of stock?" + }, + { + "objectID": "probability_theory_3.html#example-a-drunks-random-walk", + "href": "probability_theory_3.html#example-a-drunks-random-walk", + "title": "12  Probability Theory, Part 3", + "section": "12.15 Example: A Drunk’s Random Walk", + "text": "12.15 Example: A Drunk’s Random Walk\nIf a drunk chooses the direction of each step randomly, will he ever get home? If he can only walk on the road on which he lives, the problem is almost the same as the gambler’s-ruin problem above (“pennies”). But if the drunk can go north-south as well as east-west, the problem becomes a bit different and interesting.\nLooking now at Figure 12.1 — what is the probability of the drunk reaching either his house (at 3 steps east, 2 steps north) or my house (1 west, 4 south) before he finishes taking twelve steps?\nOne way to handle the problem would be to use a four-directional spinner such as is used with a child’s board game, and then keep track of each step on a piece of graph paper. 
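In Python, the spinner could be replaced by rnd.choice picking one of the four directions. The rough skeleton below is our own (the two destinations are taken from Figure 12.1, and we place the starting point at the origin).

import numpy as np
rnd = np.random.default_rng()

n_reached = 0
# Do 10000 experimental trials.
for i in range(10000):
    x, y = 0, 0  # the drunk's starting position
    # He takes at most twelve steps.
    for step in range(12):
        # Spin the four-directional spinner.
        direction = rnd.choice(['north', 'south', 'east', 'west'])
        if direction == 'north':
            y = y + 1
        elif direction == 'south':
            y = y - 1
        elif direction == 'east':
            x = x + 1
        else:
            x = x - 1
        # His house is 3 steps east and 2 north; my house is 1 west and 4 south.
        if (x, y) == (3, 2) or (x, y) == (-1, -4):
            n_reached = n_reached + 1
            break

# Proportion of trials in which he reached either house within twelve steps.
print(n_reached / 10000)
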
The reader may construct a Python program as an exercise.\n\n\n\n\n\nFigure 12.1: Drunk random walk" + }, + { + "objectID": "probability_theory_3.html#sec-public-liquor", + "href": "probability_theory_3.html#sec-public-liquor", + "title": "12  Probability Theory, Part 3", + "section": "12.16 Example: public and private liquor pricing", + "text": "12.16 Example: public and private liquor pricing\nLet’s end this chapter with an actual example that will be used again in Chapter 13 when discussing probability in finite universes, and then at great length in the context of statistics in Chapter 24. This example also illustrates the close connection between problems in pure probability and those in statistical inference.\nAs of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned, and 16 “monopoly” states where the state government owns the retail liquor stores. (Some states were omitted for technical reasons.) These were the representative 1961 prices of a fifth of Seagram 7 Crown whiskey in the two sets of states (Table 12.4):\n\n\n\nTable 12.4: Whiskey prices by state category\n\n\n\n\n\n\n\n\nPrivate\nGovernment\n\n\n\n\n\n4.82\n4.65\n\n\n\n5.29\n4.55\n\n\n\n4.89\n4.11\n\n\n\n4.95\n4.15\n\n\n\n4.55\n4.2\n\n\n\n4.9\n4.55\n\n\n\n5.25\n3.8\n\n\n\n5.3\n4.0\n\n\n\n4.29\n4.19\n\n\n\n4.85\n4.75\n\n\n\n4.54\n4.74\n\n\n\n4.75\n4.5\n\n\n\n4.85\n4.1\n\n\n\n4.85\n4.0\n\n\n\n4.5\n5.05\n\n\n\n4.75\n4.2\n\n\n\n4.79\n\n\n\n\n4.85\n\n\n\n\n4.79\n\n\n\n\n4.95\n\n\n\n\n4.95\n\n\n\n\n4.75\n\n\n\n\n5.2\n\n\n\n\n5.1\n\n\n\n\n4.8\n\n\n\n\n4.29\n\n\n\n\n\n\n\n\nCount\n26\n16\n\n\nMean\n4.84\n4.35\n\n\n\n\n\n\n\n\n\n\nFigure 12.2: Whiskey prices by state category\n\n\n\n\nLet us consider that all these states’ prices constitute one single universe (an assumption whose justification will be discussed later). If so, one can ask: If these 42 states constitute a single universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference between the means that was actually observed)?\nThis can be thought of as problem in pure probability because we begin with a known universe and ask how it would behave with random drawings from it. We sample with replacement ; the decision to do so, rather than to sample without replacement (which is the way I had first done it, and for which there may be better justification) will be discussed later. We do so to introduce a “bootstrap”-type procedure (defined later) as follows: Write each of the forty-two observed state prices on a separate card. The shuffled deck simulated a situation in which each state has an equal chance for each price. Repeatedly deal groups of 16 and 26 cards, replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. For each trial, calculate the difference in mean prices.\nThese are the steps systematically:\n\nStep A: Write each of the 42 prices on a card and shuffle.\nSteps B and C (combined in this case): i) Draw cards randomly with replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $.49; if it is as great or greater than $.49, write “yes,” otherwise “no.”\nStep D: Repeat step B-C a hundred or a thousand times. 
Calculate the proportion “yes,” which estimates the probability we seek.\n\nThe probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state-ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices. The following notebook performs the operations described above.\n\nStart of liquor_prices notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nrnd = np.random.default_rng()\n\n# Import the plotting library\nimport matplotlib.pyplot as plt\n\n\nfake_diffs = np.zeros(10000)\n\npriv = np.array([\n 4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54, 4.75,\n 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75, 5.20, 5.10,\n 4.80, 4.29])\n\ngovt = np.array([\n 4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00, 4.19, 4.75, 4.74, 4.50,\n 4.10, 4.00, 5.05, 4.20])\n\nactual_diff = np.mean(priv) - np.mean(govt)\n\n# Join the two vectors of data\nboth = np.concatenate((priv, govt))\n\n# Repeat 10000 simulation trials\nfor i in range(10000):\n\n # Sample 26 with replacement for private group\n fake_priv = np.random.choice(both, size=26)\n\n # Sample 16 with replacement for govt. group\n fake_govt = np.random.choice(both, size=16)\n\n # Find the mean of the \"private\" group.\n p = np.mean(fake_priv)\n\n # Mean of the \"govt.\" group\n g = np.mean(fake_govt)\n\n # Difference in the means\n diff = p - g\n\n # Keep score of the trials\n fake_diffs[i] = diff\n\n# Graph of simulation results to compare with the observed result.\nplt.hist(fake_diffs)\nplt.xlabel('Difference in average prices (cents)')\nplt.title('Average price difference (Actual difference = '\nf'{actual_diff * 100:.0f} cents)');\n\n\n\n\n\n\n\n\nEnd of liquor_prices notebook\n\nThe results shown above — not even one “success” in 10,000 trials — imply that there is only a very small probability that two groups with mean prices as different as were observed would happen by chance if drawn with replacement from the universe of 42 observed prices.\nHere we think of these states as if they came from a non-finite universe, which is one possible interpretation for one particular context. However, in Chapter 13 we will postulate a finite universe, which is appropriate if it is reasonable to consider that these observations constitute the entire universe (aside from those states excluded from the analysis because of data complexities)." + }, + { + "objectID": "probability_theory_3.html#the-general-procedure", + "href": "probability_theory_3.html#the-general-procedure", + "title": "12  Probability Theory, Part 3", + "section": "12.17 The general procedure", + "text": "12.17 The general procedure\nChapter 25 generalizes what we have done in the probability problems above into a general procedure, which will in turn be a subpart of a general procedure for all of resampling.\n\n\n\n\nArbuthnot, John. 1710. “An Argument for Divine Providence, Taken from the Constant Regularity Observ’d in the Births of Both Sexes. By Dr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of the College of Physitians and the Royal Society.” Philosophical Transactions of the Royal Society of London 27 (328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011.\n\n\nMosteller, Frederick, Robert E. K. Rourke, and George Brinton Thomas Jr. 1961. Probability with Statistical Applications. 2nd ed. 
https://archive.org/details/probabilitywiths0000most." + }, + { + "objectID": "probability_theory_4_finite.html#introduction", + "href": "probability_theory_4_finite.html#introduction", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.1 Introduction", + "text": "13.1 Introduction\nThe examples in Chapter 12 dealt with infinite universes , in which the probability of a given simple event is unaffected by the outcome of the previous simple event. But now we move on to finite universes, situations in which you begin with a given set of objects whose number is not enormous — say, a total of two, or two hundred, or two thousand. If we liken such a situation to a bucket containing balls of different colors each with a number on it, we are interested in the probability of drawing various sets of numbered and colored balls from the bucket on the condition that we do not replace balls after they are drawn.\nIn the cases addressed in this chapter, it is important to remember that the single events no longer are independent of each other. A typical situation in which sampling without replacement occurs is when items are chosen from a finite universe — for example, when children are selected randomly from a classroom. If the class has five boys and five girls, and if you were to choose three girls in a row, then the chance of selecting a fourth girl on the next choice obviously is lower than the chance that you would pick a girl on the first selection.\nThe key to dealing with this type of problem is the same as with earlier problems: You must choose a simulation procedure that produces simple events having the same probabilities as the simple events in the actual problem involving sampling without replacement. That is, you must make sure that your simulation does not allow duplication of events that have already occurred. The easiest way to sample without replacement with resampling techniques is by simply ignoring an outcome if it has already occurred.\nExamples Section 13.3.1 through Section 13.3.10 deal with some of the more important sorts of questions one may ask about drawings without replacement from such an urn. To get an overview, I suggest that you read over the summaries (in bold) introducing examples Section 13.3.1 to Section 13.3.10 before beginning to work through the examples themselves.\nThis chapter also revisits the general procedure used in solving problems in probability and statistics with simulation, here in connection with problems involving a finite universe. The steps that one follows in simulating the behavior of a universe of interest are set down in such fashion that one may, by random drawings, deduce the probability of various events. Having had by now the experience of working through the problems in Chapter 9 and Chapter 12, the reader should have a solid basis to follow the description of the general procedure which then helps in dealing with specific problems.\nLet us begin by describing some of the major sorts of problems with the aid of a bucket with six balls." + }, + { + "objectID": "probability_theory_4_finite.html#some-building-block-programs", + "href": "probability_theory_4_finite.html#some-building-block-programs", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.2 Some building-block programs", + "text": "13.2 Some building-block programs\nCase 1. 
Each of six balls is labeled with a number between “1” and “6.” We ask: What is the probability of choosing balls 1, 2, and 3 in that order if we choose three balls without replacement? Figure 13.1 diagrams the events we consider “success.”\n\n\n\n\n\nFigure 13.1: The Event Classified as “Success” for Case 1\n\n\n\n\nCase 2. We begin with the same bucket as in Case 1, but now ask the probability of choosing balls 1, 2, and 3 in any order if we choose three balls without replacement. Figure 13.2 diagrams two of the events we consider success. These possibilities include that which is shown in Figure 13.1 above, plus other possibilities.\n\n\n\n\n\nFigure 13.2: An Incomplete List of the Events Classified as “Success” for Case 2\n\n\n\n\nCase 3. The odd-numbered balls “1,” “3,” and “5,” are painted red and the even-numbered balls “2,” “4,” and “6” are painted black. What is the probability of getting a red ball and then a black ball in that order? Some possibilities are illustrated in Figure 13.3, which includes the possibility shown in Figure 13.1. It also includes some but not all possibilities found in Figure 13.2; for example, Figure 13.2 includes choosing balls 2, 3 and 1 in that order, but Figure 13.3 does not.\n\n\n\n\n\nFigure 13.3: An Incomplete List of the Events Classified as “Success” for Case 3\n\n\n\n\nCase 4. What is the probability of getting two red balls and one black ball in any order?\n\n\n\n\n\nFigure 13.4: An Incomplete List of the Events Classified as “Success” for Case 4\n\n\n\n\nCase 5. Various questions about matching may be asked with respect to the six balls. For example, what is the probability of getting ball 1 on the first draw or ball 2 on the second draw or ball 3 on the third draw? (Figure 13.5) Or, what is the probability of getting all balls on the draws corresponding to their numbers?\n\n\n\n\n\nFigure 13.5: An Incomplete List of the Events Classified as “Success” for Case 5" + }, + { + "objectID": "probability_theory_4_finite.html#problems-in-finite-universes", + "href": "probability_theory_4_finite.html#problems-in-finite-universes", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.3 Problems in finite universes", + "text": "13.3 Problems in finite universes\n\n13.3.1 Example: four girls and one boy\nWhat is the probability of selecting four girls and one boy when selecting five students from any group of twenty-five girls and twenty-five boys? This is an example of sampling without replacement when there are two outcomes and the order does not matter.\nThe important difference between this example and the infinite-universe examples in the prior chapter is that the probability of obtaining a boy or a girl in a single simple event differs from one event to the next in this example, whereas it stays the same when the sampling is with replacement. To illustrate, the probability of a girl is .5 (25 out of 50) when the first student is chosen, but the probability of a girl is either 25/49 or 24/49 when the second student is chosen, depending on whether a boy or a girl was chosen on the first pick. Or after, say, three girls and one boy are picked, the probability of getting a girl on the next choice is (28-3)/(50-4) = 22/46 which is clearly not equal to .5.\nAs always, we must create a satisfactory analog to the process whose probability we want to learn. 
In this case, we can use a deck of 50 cards, half red and half black, and deal out five cards without replacing them after each card is dealt; this simulates the choice of five students from among the fifty.\nWe can no longer use our procedure from before. If we designated “1-25” as being girls and “26-50” as being boys and then proceeded to draw random numbers, the probability of a girl would be the same on each pick.\nAt this point, it is important to note that — for this particular problem — we do not need to distinguish between particular girls (or boys). That is, it does not matter which girl (or boy) is selected in a given trial. Nor did we pay attention to the order in which we selected girls or boys. This is an instance of Case 4 discussed above. Subsequent problems will deal with situations where the order of selection, and the particular individuals, do matter.\nOur approach then is to mimic having the class in front of us: an array of 50 strings, half of the entries ‘boy’ and the other half ‘girl’. We then shuffle the class (the array), and choose the first N students (strings).\n\nStep 1. Create a list with 50 labels, half ‘boy’ and half ‘girl’.\nStep 2. Shuffle the class and select five students. Count whether there are four labels equal ‘girl’. If so, write “yes,” otherwise “no”.\nStep 3. Repeat step 2, say, 10,000 times, and count the proportion “yes”, which estimates the probability sought.\n\nThe results of a few experimental trials are shown in Table 13.1.\n\n\nTable 13.1: A few experimental trials of four girls and one boy\n\n\n\n\n\n\n\nExperiment\nStrings Chosen\nSuccess?\n\n\n 1\n‘girl’, ‘boy’, ‘boy’, ‘girl’, ‘boy’\nNo\n\n\n 2\n‘boy’, ‘girl’, ‘girl’, ‘girl’, ‘girl’\nYes\n\n\n 3\n‘girl, ’girl’, ‘girl’, ‘boy’, ‘girl’\nYes\n\n\n\n\nA solution to this problem with Python is presented below.\n\nStart of four_girls_one_boy notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n\nN = 10000\ntrial_results = np.zeros(N)\n\n# Constitute the set of 25 girls and 25 boys.\nwhole_class = np.repeat(['girl', 'boy'], [25, 25])\n\n# Repeat the following steps N times.\nfor i in range(N):\n\n # Shuffle the numbers\n shuffled = rnd.permuted(whole_class)\n\n # Take the first 5 numbers, call them c.\n c = shuffled[:5]\n\n # Count how many girls there are, put the result in d.\n d = np.sum(c == 'girl')\n\n # Keep track of each trial result in z.\n trial_results[i] = d\n\n # End the experiment, go back and repeat until all 1000 trials are\n # complete.\n\n# Count the number of times we got four girls, put the result in k.\nk = np.sum(trial_results == 4)\n\n# Convert to a proportion.\nkk = k / N\n\n# Print the result.\nprint(kk)\n\n0.1505\n\n\nWe can also find the probabilities of other outcomes from a histogram of trial results obtained with the following command:\n\n# Import the plotting package.\nimport matplotlib.pyplot as plt\n\n# Do histogram, with one bin for each possible number.\nplt.hist(trial_results, bins=range(7), align='left', rwidth=0.75)\nplt.title('# of girls');\n\n\n\n\n\n\n\n\nIn the resulting histogram we can see that in 15 percent of the trials, 4 of the 5 selected were girls.\nIt should be noted that for this problem — as for most other problems — there are several other resampling procedures that will also do the job correctly.\nIn analytic probability theory this problem is worked with a formula for “combinations.”\nEnd of four_girls_one_boy notebook\n\n\n\n13.3.2 Example: Five spades and four clubs in a bridge 
hand\n\nStart of five_spades_four_clubs notebook\n\nDownload notebook\nInteract\n\n\nThis is an example of multiple-outcome sampling without replacement, order does not matter.\nThe problem is similar to the example in Section 13.3.1, except that now there are four equally-likely outcomes instead of only two. A Python solution is:\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n\n# Constitute the deck of 52 cards.\n# Repeat the suit names 13 times each, to make a 52 card deck.\ndeck = np.repeat(['spade', 'club', 'diamond', 'heart'],\n [13, 13, 13, 13])\n# Show the deck\ndeck\n\narray(['spade', 'spade', 'spade', 'spade', 'spade', 'spade', 'spade',\n 'spade', 'spade', 'spade', 'spade', 'spade', 'spade', 'club',\n 'club', 'club', 'club', 'club', 'club', 'club', 'club', 'club',\n 'club', 'club', 'club', 'club', 'diamond', 'diamond', 'diamond',\n 'diamond', 'diamond', 'diamond', 'diamond', 'diamond', 'diamond',\n 'diamond', 'diamond', 'diamond', 'diamond', 'heart', 'heart',\n 'heart', 'heart', 'heart', 'heart', 'heart', 'heart', 'heart',\n 'heart', 'heart', 'heart', 'heart'], dtype='<U7')\n\n\n\nN = 10000\ntrial_results = np.zeros(N)\n\n# Repeat the trial N times.\nfor i in range(N):\n\n # Shuffle the deck and draw 13 cards.\n hand = rnd.choice(deck, size=13, replace=False)\n\n # Count the number of spades in \"hand\", put the result in \"n_spades\".\n n_spades = np.sum(hand == 'spade')\n\n # If we have five spades, we'll continue on to count the clubs. If we don't\n # have five spades, the number of clubs is irrelevant — we have not gotten\n # the hand we are interested in.\n if n_spades == 5:\n # Count the clubs, put the result in \"n_clubs\"\n n_clubs = np.sum(hand == 'club')\n # Keep track of the number of clubs in each trial\n trial_results[i] = n_clubs\n\n # End one experiment, go back and repeat until all N trials are done.\n\n# Count the number of trials where we got 4 clubs. This is the answer we want -\n# the number of hands out of 1000 with 5 spades and 4 clubs. 
(Recall that we\n# only counted the clubs if the hand already had 5 spades.)\nn_5_and_4 = np.sum(trial_results == 4)\n\n# Convert to a proportion.\nprop_5_and_4 = n_5_and_4 / N\n\n# Print the result\nprint(prop_5_and_4)\n\n0.0224\n\n\nEnd of five_spades_four_clubs notebook\n\n\n\n13.3.3 Example: a total of fifteen points in a bridge hand\n\nStart of fifteen_points_in_bridge notebook\n\nDownload notebook\nInteract\n\n\nLet us assume that ace counts as 4, king = 3, queen = 2, and jack = 1.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\nimport matplotlib.pyplot as plt\n\n\n# Constitute a deck with 4 jacks (point value 1), 4 queens (value 2), 4\n# kings (value 3), 4 aces (value 4), and 36 other cards with no point\n# value\nwhole_deck = np.repeat([1, 2, 3, 4, 0], [4, 4, 4, 4, 36])\nwhole_deck\n\narray([1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 0, 0, 0, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0])\n\n\n\nN = 10000\ntrial_results = np.zeros(N)\n\n# Do N trials.\nfor i in range(N):\n # Shuffle the deck of cards and draw 13\n hand = rnd.choice(whole_deck, size=13, replace=False)\n\n # Total the points.\n points = np.sum(hand)\n\n # Keep score of the result.\n trial_results[i] = points\n\n # End one experiment, go back and repeat until all N trials are done.\n\n\n# Produce a histogram of trial results.\nplt.hist(trial_results, bins=range(25), align='left', rwidth=0.75)\nplt.title('Points in bridge hands');\n\n\n\n\n\n\n\n\nFrom this histogram, we see that in about 4 percent of our trials we obtained a total of exactly 15 points. We can also compute this directly:\n\n# How many times did we have a hand with fifteen points?\nk = np.sum(trial_results == 15)\n\n# Convert to a proportion.\nkk = k / N\n\n# Show the result.\nkk\n\n0.0431\n\n\nEnd of fifteen_points_in_bridge notebook\n\n\n\n13.3.4 Example: Four girls then one boy from 25 girls and 25 boys\n\nStart of four_girls_then_one_boy_25 notebook\n\nDownload notebook\nInteract\n\n\nIn this problem, order matters; we are sampling without replacement, with two outcomes, several of each item.\nWhat is the probability of getting an ordered series of four girls and then one boy , from a universe of 25 girls and 25 boys? This illustrates Case 3 above. Clearly we can use the same sampling mechanism as in the example Section 13.3.1, but now we record “yes” for a smaller number of composite events.\nWe record “no” even if a single one boy is chosen but he is chosen 1st, 2nd, 3rd, or 4th, whereas in Section 13.3.1, such outcomes are recorded as “yes”-es.\n\nStep 1. Generate a class (array) of length 50, consisting of 25 strings valued “boy” and 25 strings valued “girl”.\nStep 2. Shuffle the class array, and select the first five elements.\nStep 3. If the first five elements are exactly 'girl', 'girl', 'girl', 'girl', 'boy', write “yes,” otherwise “no.”\nStep 4. 
Repeat steps 2 and 3, say, 10,000 times, and count the proportion of “yes” results, which estimates the probability sought.\n\nLet us start the single trial procedure like so:\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n\n# Constitute the set of 25 girls and 25 boys.\nwhole_class = np.repeat(['girl', 'boy'], [25, 25])\n\n# Shuffle the class into a random order.\nshuffled = rnd.permuted(whole_class)\n# Take the first 5 class members, call them c.\nc = shuffled[:5]\n# Show the result.\nc\n\narray(['boy', 'girl', 'boy', 'girl', 'girl'], dtype='<U4')\n\n\nOur next step (step 3) is to check whether c is exactly equal to the result of interest. The result of interest is:\n\n# The result we are looking for - four girls and then a boy.\nresult_of_interest = np.repeat(['girl', 'boy'], [4, 1])\nresult_of_interest\n\narray(['girl', 'girl', 'girl', 'girl', 'boy'], dtype='<U4')\n\n\nWe can then use an array comparison with == to do an element by element (elementwise) check, asking whether the corresponding elements are equal:\n\n# A Boolean array, with True where corresponding elements are equal, False\n# otherwise.\nare_equal = c == result_of_interest\nare_equal\n\narray([False, True, False, True, False])\n\n\nWe are nearly finished with step 3 — it only remains to check whether all of the elements were equal, by checking whether all of the values in are_equal are True.\nWe know that there are 5 elements, so we could check whether there are 5 True values with np.sum:\n\n# Are there exactly 5 True values in `are_equal`?\nnp.sum(are_equal) == 5\n\nFalse\n\n\nAnother way to ask the same question is by using the np.all function on are_equal. This returns True if all the elements in are_equal are True, and False otherwise.\n\n\n\n\n\n\nTesting whether all elements of an array are the same\n\n\n\nThe np.all, applied to a Boolean array (as here), checks whether all of the elements in the Boolean array are True. 
If so, it returns True, otherwise, it returns False.\nFor example:\n\n# All elements are True, `np.all` returns True\nnp.all([True, True, True, True])\n\nTrue\n\n\n\n# At least one element is False, `np.all` returns False\nnp.all([True, True, False, True])\n\nFalse\n\n\n\n\nHere is the full procedure for steps 2 and 3 (a single trial):\n\n# Shuffle the class into a random order.\nshuffled = rnd.permuted(whole_class)\n# Take the first 5 class members, call them c.\nc = shuffled[:5]\n# For each element, test whether the result is the result of interest.\nare_equal = c == result_of_interest\n# Check whether we have the result we are looking for.\nis_four_girls_then_one_boy = np.all(are_equal)\n\nAll that remains is to put the single trial procedure into a loop.\n\nN = 10000\ntrial_results = np.zeros(N)\n\n# Repeat the following steps 1000 times.\nfor i in range(N):\n\n # Shuffle the class into a random order.\n shuffled = rnd.permuted(whole_class)\n # Take the first 5 class members, call them c.\n c = shuffled[:5]\n # For each element, test whether the result is the result of interest.\n are_equal = c == result_of_interest\n # Check whether we have the result we are looking for.\n is_four_girls_then_one_boy = np.all(are_equal)\n\n # Store the result of this trial.\n trial_results[i] = is_four_girls_then_one_boy\n\n # End the experiment, go back and repeat until all N trials are\n # complete.\n\n# Count the number of times we got four girls then a boy\nk = np.sum(trial_results)\n\n# Convert to a proportion.\nkk = k / N\n\n# Print the result.\nprint(kk)\n\n0.0311\n\n\nThis type of problem is conventionally done with a permutation formula.\nEnd of four_girls_then_one_boy_25 notebook\n\n\n\n13.3.5 Example: repeat pairings from random pairing\n\nStart of university_icebreaker notebook\n\nDownload notebook\nInteract\n\n\nFirst put two groups of 10 people into 10 pairs. Then re-randomize the pairings. What is the chance that four or more pairs are the same in the second random pairing? This is a problem in the probability of matching by chance.\nTen representatives each from two universities, Birmingham and Berkeley, attend a meeting. As a social icebreaker, representatives are divided, randomly, into pairs consisting of one person from each university.\nIf they held a second round of the icebreaker, with a new random pairing, what is the chance that four or more pairs will be the same?\nIn approaching this problem, we start at the point where the first icebreaker is complete. We now have to determine what happens after the second round.\n\nStep 1. Let “ace” through “10” of hearts represent the ten representatives from Birmingham University. Let “ace” through “10” of spades be their allocated partners (in round one) from Berkeley.\nStep 2. Shuffle the hearts and deal them out in a row; shuffle the spades and deal in a row just below the hearts.\nStep 3. Count the pairs — a pair is one card from the heart row and one card from the spade row — that contain the same denomination. If 4 or more pairs match, record “yes,” otherwise “no.”\nStep 4. Repeat steps (2) and (3), say, 10,000 times.\nStep 5. Count the proportion “yes.” This estimates the probability of 4 or more pairs.\n\nExercise for the student: Write the steps to do this example with random numbers. 
The Python solution follows below.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\nimport matplotlib.pyplot as plt\n\n\nN = 10000\ntrial_results = np.zeros(N)\n\n# Assign numbers to each student, according to their pair, after the first\n# icebreaker\nbirmingham = np.arange(10)\nberkeley = np.arange(10)\n\nfor i in range(N):\n # Randomly shuffle the students from Berkeley\n shuffled_berkeley = rnd.permuted(berkeley)\n\n # Randomly shuffle the students from Birmingham\n # (This step is not really necessary — shuffling one array is enough to make the matching random.)\n shuffled_birmingham = rnd.permuted(birmingham)\n\n # Count in how many cases people landed with the same person as in the\n # first round, and store in trial_results.\n matches = np.sum(shuffled_berkeley == shuffled_birmingham)\n trial_results[i] = matches\n\n# Count the number of times we got 4 or more people assigned to the same person\nk = np.sum(trial_results >= 4)\n\n# Convert to a proportion.\nkk = k / N\n\n# Print the result.\nprint(kk)\n\n0.0165\n\n\nWe see that in about 2 percent of the trials did 4 or more couples end up being re-paired with their own partners. This can also be seen from the histogram:\n\n# Produce a histogram of trial results.\nplt.hist(trial_results, bins=range(10), align='left', rwidth=0.75)\nplt.title('Same pairs in round two');\n\n\n\n\n\n\n\n\nEnd of university_icebreaker notebook\n\n\n\n13.3.6 Example: Matching Santa Hats\n\nStart of santas_hats notebook\n\nDownload notebook\nInteract\n\n\nThe welcome staff at a restaurant mix up the hats of a party of six Christmas Santas. What is the probability that at least one will get their own hat?.\nAfter a long Christmas day, six Santas meet in the pub to let off steam. However, as luck would have it, their hosts have mixed up their hats. When the hats are returned, what is the chance that at least one Santa will get his own hat back?\nFirst, assign each of the six Santas a number, and place these numbers in an array. Next, shuffle the array (this represents the mixed-up hats) and compare to the original. The rest of the problem is the same as the pairs one from before, except that we are now interested in any trial where at least one (\\(\\ge 1\\)) Santa received the right hat.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n\nN = 10000\ntrial_results = np.zeros(N, dtype=bool)\n\n# Assign numbers to each owner\nowners = np.arange(6)\n\n# Each hat gets the number of their owner\nhats = np.arange(6)\n\nfor i in range(N):\n # Randomly shuffle the hats and compare to their owners\n shuffled_hats = rnd.permuted(hats)\n\n # In how many cases did at least one person get their hat back?\n trial_results[i] = np.sum(shuffled_hats == owners) >= 1\n\n# How many times, over all trials, did at least one person get their hat back?\nk = np.sum(trial_results)\n\n# Convert to a proportion.\nkk = k / N\n\n# Print the result.\nprint(kk)\n\n0.6391\n\n\nWe see that in roughly 64 percent of the trials at least one Santa received their own hat back.\nEnd of santas_hats notebook\n\n\n\n13.3.7 Example: Twenty executives assigned to two divisions of a firm\n\nStart of twenty_executives notebook\n\nDownload notebook\nInteract\n\n\nThe top manager wants to spread the talent reasonably evenly, but she does not want to label particular executives with a quality rating and therefore considers distributing them with a random selection. 
She therefore wonders: What are probabilities of the best ten among the twenty being split among the divisions in the ratios 5 and 5, 4 and 6, 3 and 7, etc., if their names are drawn from a hat? One might imagine much the same sort of problem in choosing two teams for a football or baseball contest.\nOne may proceed as follows:\n\nPut 10 balls labeled “W” (for “worst”) and 10 balls labeled “B” (best) in a bucket.\nDraw 10 balls without replacement and count the W’s.\nRepeat (say) 400 times.\nCount the number of times each split — 5 W’s and 5 B’s, 4 and 6, etc. — appears in the results.\n\nThe problem can be done with Python as follows:\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\nimport matplotlib.pyplot as plt\n\n\nN = 10000\ntrial_results = np.zeros(N)\n\nmanagers = np.repeat(['Worst', 'Best'], [10, 10])\n\nfor i in range(N):\n chosen = rnd.choice(managers, size=10, replace=False)\n trial_results[i] = np.sum(chosen == 'Best')\n\nplt.hist(trial_results, bins=range(10), align='left', rwidth=0.75)\nplt.title('Number of best managers chosen')\n\n\n\n\n\n\n\n\nEnd of twenty_executives notebook\n\n\n\n13.3.8 Example: Executives Moving\n\nA major retail chain moves its store managers from city to city every three years in order to calculate individuals’ knowledge and experience. To make the procedure seem fair, the new locations are drawn at random. Nevertheless, the movement is not popular with managers’ families. Therefore, to make the system a bit sporting and to give people some hope of remaining in the same location, the chain allows managers to draw in the lottery the same posts they are now in. What are the probabilities that 1, 2, 3 … will get their present posts again if the number of managers is 30?\nThe problem can be solved with the following steps:\n\nNumber a set of green balls from “1” to “30” and put them into Bucket A. Number a set of red balls from “1” to “30” and then put into Bucket B. For greater concreteness one could use 30 little numbered dolls in Bucket A and 30 little toy houses in Bucket B.\nShuffle Bucket A, and array all its green balls into a row (vector A). Array all the red balls from Bucket B into a second row B just below row A.\nCount how many green balls in row A have the same numbers as the red balls just below them, and record that number on a scoreboard.\nRepeat steps 2 and 3 perhaps 1000 times. Then count in the scoreboard the numbers of “0,” “1,” “2,” “3.”\n\n\n\n13.3.9 Example: State Liquor Systems Again\nLet’s end this chapter with the example of state liquor systems that we first examined in Chapter 12 and which will be discussed again later in the context of problems in statistics.\nRemember that as of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned (“Private”), and 16 monopoly states where the state government owns the retail liquor stores (“Government”). See Table 12.4 for the prices in the Private and Government states.\nWe found the average prices were:\n\nPrivate: $4.35;\nGovernment: $4.84;\nDifference (Government - Private): $0.49.\n\nLet us now consider that all these states’ prices constitute one single finite universe. 
We ask: If these 42 states constitute a universe, and if they are all shuffled together, how likely is it that if one divides them into two samples at random (sampling without replacement), containing 16 and 26 observations respectively, the difference in mean prices turns out to be as great as $0.49 (the difference that was actually observed)?\nAgain we write each of the forty-two observed state prices on a separate card. The shuffled deck simulates a situation in which each state has an equal chance for each price. Repeatedly deal groups of 16 and 26 cards, without replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. In each trial calculate the difference in mean prices.\nThe steps more systematically:\n\nStep A. Write each of the 42 prices on a card and shuffle.\nSteps B and C (combined in this case). i) Draw cards randomly without replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $0.49; if it is as great or greater than $0.49, write “yes,” otherwise “no.”\nStep D. Repeat step B-C a hundred or a thousand times. Calculate the proportion “yes,” which estimates the probability we seek.\n\nThe probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices.\nPlease notice how the only difference between this treatment of the problem and the treatment in Chapter 12 is that the drawing in this case is without replacement whereas in Chapter 12 the drawing is with replacement.\nIn Chapter 12 we thought of these states as if they came from a non-finite universe, which is one possible interpretation in one context. But one can also reasonably think about them in another context — as if they constitute the entire universe (aside from those states excluded from the analysis because of data complexities). If so, one can ask: If these 42 states constitute a universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference that was actually observed)?\n\n\n13.3.10 Example: Five or More Spades in One Bridge Hand; Four Girls and a Boy\n\nStart of five_spades_four_girls notebook\n\nDownload notebook\nInteract\n\n\nThis is a compound problem: what are the chances of both five or more spades in one bridge hand, and four girls and a boy in a five-child family?\n“Compound” does not necessarily mean “complicated”. It means that the problem is a compound of two or more simpler problems.\nA natural way to handle such a compound problem is in stages, as we saw in the archery problem of Section 12.10. If a “success” is achieved in the first stage, go on to the second stage; if not, don’t go on. More specifically in this example:\n\nStep 1. Use a bridge card deck, and five coins with heads = “girl”.\nStep 2. Deal a 13-card bridge hand and count the spades. If 5 or more spades, record “no” and end the experimental trial. Otherwise, continue to step 3.\nStep 3. Throw five coins, and count “heads.” If four heads, record “yes,” otherwise record “no.”\nStep 4. Repeat steps 2 and 3 a thousand times.\nStep 5. 
Compute the proportion of “yes” in step 3. This estimates the probability sought.\n\nThe Python solution to this compound problem is neither long nor difficult. We tackle it almost as if the two parts of the problem were to be dealt with separately. We first determine, in a random bridge hand, whether 5 spades or more are dealt, as was done in the problem of Section 13.3.2. Then, if 5 or more spades are found, we use rnd.choice to generate a random family of 5 children. This means that we need not generate families if 5 or more spades were not dealt to the bridge hand, because a “success” is only recorded if both conditions are met. After we record the number of girls in each sample of 5 children, we need only finish the loop (by unindenting the next line) and then use np.sum to count the number of samples that had 4 girls, storing the result in k. Since we only drew samples of children for those trials in which a bridge hand of 5 spades had already been dealt, k will have the number of trials out of 10,000 in which both conditions were met.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n\nN = 10000\ntrial_results = np.zeros(N)\n\n# Deck with 13 spades and 39 other cards\ndeck = np.repeat(['spade', 'others'], [13, 52 - 13])\n\nfor i in range(N):\n # Shuffle deck and draw 13 cards\n hand = rnd.choice(deck, size=13, replace=False)\n\n n_spades = np.sum(hand == 'spade')\n\n if n_spades >= 5:\n # Generate a family of 5 children, 'girl' or 'boy' for each child\n children = rnd.choice(['girl', 'boy'], size=5)\n n_girls = np.sum(children == 'girl')\n trial_results[i] = n_girls\n\nk = np.sum(trial_results == 4)\n\nkk = k / N\n\nprint(kk)\n\n0.0282\n\n\nHere is an alternative approach to the same problem, but getting the result at the end of the loop, by combining Boolean arrays (see Section 10.5).\n\nN = 10000\ntrial_spades = np.zeros(N)\ntrial_girls = np.zeros(N)\n\n# Deck with 13 spades and 39 other cards\ndeck = np.repeat(['spade', 'other'], [13, 39])\n\nfor i in range(N):\n # Shuffle deck and draw 13 cards\n hand = rnd.choice(deck, 13, replace=False)\n\n n_spades = np.sum(hand == 'spade')\n trial_spades[i] = n_spades\n\n # Generate a family of 5 children, 'girl' or 'boy' for each child\n children = rnd.choice(['girl', 'boy'], size=5)\n n_girls = np.sum(children == 'girl')\n trial_girls[i] = n_girls\n\nk = np.sum((trial_spades >= 5) & (trial_girls == 4))\n\nkk = k / N\n\nprint(kk)\n\n0.0264\n\n\nEnd of five_spades_four_girls notebook\n\n\n\n\n\n\n\nSpeed and readability\n\n\n\nThe last version is a fraction more expensive, but has the advantage that the condition we are testing for is summarized on one line. However, this would not be a good approach to take if the experiments were not completely unrelated." + }, + { + "objectID": "probability_theory_4_finite.html#summary", + "href": "probability_theory_4_finite.html#summary", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.4 Summary", + "text": "13.4 Summary\nThis completes the discussion of problems in probability — that is, problems where we assume that the structure is known. Whereas Chapter 12 dealt with samples drawn from universes considered not finite, this chapter deals with problems drawn from finite universes, from which we therefore sample without replacement."
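The state liquor example of Section 13.3.9 was described above only in steps, with the code deferred to a later chapter. As a bridge, here is a minimal Python sketch of those steps, in the style of the other notebooks in this chapter. It is our own illustration, not code from the book, and it assumes that priv and govt are arrays already holding the 26 private-state and 16 government-state prices from Table 12.4 (not repeated here).

import numpy as np

rnd = np.random.default_rng()

# Assumed already defined: `priv` (26 private-state prices) and
# `govt` (16 government-state prices), the values from Table 12.4.
observed_diff = np.mean(govt) - np.mean(priv)  # About $0.49.

# Step A: all 42 prices together form the single finite universe.
prices = np.concatenate([priv, govt])

N = 10000
trial_results = np.zeros(N)

for i in range(N):
    # Steps B and C: shuffle, deal groups of 16 and 26 without replacement,
    # and calculate the difference in mean prices.
    shuffled = rnd.permuted(prices)
    fake_govt = shuffled[:16]
    fake_priv = shuffled[16:]
    trial_results[i] = np.mean(fake_govt) - np.mean(fake_priv)

# Step D: proportion of trials with a difference as great or greater than
# the observed difference.
k = np.sum(trial_results >= observed_diff)
kk = k / N
print(kk)

The proportion printed at the end is the proportion "yes" of Step D, an estimate of the probability that shuffling alone produces a difference as large as the observed $0.49.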
+ }, + { + "objectID": "sampling_variability.html#variability-and-small-samples", + "href": "sampling_variability.html#variability-and-small-samples", + "title": "14  On Variability in Sampling", + "section": "14.1 Variability and small samples", + "text": "14.1 Variability and small samples\nPerhaps the most important idea for sound statistical inference — the section of the book we are now beginning, in contrast to problems in probability, which we have studied in the previous chapters — is recognition of the presence of variability in the results of small samples . The fatal error of relying on too-small samples is all too common among economic forecasters, journalists, and others who deal with trends and public opinion. Athletes, sports coaches, sportswriters, and fans too frequently disregard this principle both in their decisions and in their discussion.\nOur intuitions often carry us far astray when the results vary from situation to situation — that is, when there is variability in outcomes — and when we have only a small sample of outcomes to look at.\nTo motivate the discussion, I’ll tell you something that almost no American sports fan will believe: There is no such thing as a slump in baseball batting. That is, a batter often goes an alarming number of at-bats without getting a hit, and everyone — the manager, the sportswriters, and the batter himself — assumes that something has changed, and the probability of the batter getting a hit is now lower than it was before the slump. It is common for the manager to replace the player for a while, and for the player and coaches to change the player’s hitting style so as to remedy the defect. But the chance of a given batter getting a hit is just the same after he has gone many at-bats without a hit as when he has been hitting well. A belief in slumps causes managers to play line-ups which may not be their best.\nBy “slump” I mean that a player’s probability of getting a hit in a given at-bat is lower during a period than during average periods. And when I say there is no such thing as a slump, I mean that the chances of getting a hit after any sequence of at-bats without a hit is not different than the long-run average.\nThe “hot hand” in basketball is another illusion. In practical terms, the hot hand does not exist — or rather — if it does, the effect is weak.1 The chance of a shooter scoring is more or less the same after they have just missed a flock of shots as when they have just sunk a long string. That is, the chance of scoring a basket is not appreciably higher after a run of successes than after a run of failures. But even professional teams choose plays on the basis of who supposedly has a hot hand.\nManagers who substitute for the “slumping” or “cold-handed” players with other players who, in the long run, have lower batting averages, or set up plays for the shooter who supposedly has a hot hand, make a mistake. The supposed hot hand in basketball, and the slump in baseball, are illusions because the observed long runs of outs, or of baskets, are statistical artifacts, due to ordinary random variability. The identification of slumps and hot hands is superstitious behavior, classic cases of the assignment of pattern to a series of events when there really is no pattern.\nHow do statisticians ascertain that slumps and hot hands are very weak effects, or do not exist? 
In brief, in baseball we simulate a hitter with a given average — say .250 — and compare the results with actual hitters of that average, to see whether they have “slumps” longer than the computer. The method of investigation is roughly as follows. You program a computer or other machine to behave the way a player would, given the player’s long-run average, on the assumption that each trial is a random drawing. For example, if a player has a .250 season-long batting average, the machine is programmed like a bucket containing three black balls and one white ball. Then for each simulated at bat, the machine shuffles the “balls” and draws one; it then records whether the result is black or white, after which the ball is replaced in the bucket. To study a season with four hundred at-bats, a simulated ball is drawn four hundred times.\nThe records of the player’s real season and the simulated season are then compared. If there really is such a thing as a non-random slump or streak, there will be fewer but longer “runs” of hits or outs in the real record than in the simulated record. On the other hand, if performance is independent from at-bat trial to at-bat trial, the actual record will change from hit to out and from out to hit as often as does the random simulated record. I suggested this sort of test for the existence of slumps in my 1969 book that first set forth the resampling method, a predecessor of this book.\nFor example, Table 14.1 shows the results of one 400 at-bat season for a simulated .250 hitter. (H = hit, O = out, sequential at-bats ordered vertically) Note the “slump” — 1 for 24 — in columns 7 & 8 (in bold).\n\n\nTable 14.1: 400 simulated at-bats (ordered vertically)\n\n\nO\nO\nO\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nH\nO\nO\n\n\nO\nO\nO\nO\nO\nH\nO\nO\nH\nH\nH\nO\nH\nH\nO\nO\n\n\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\nO\nH\nH\nO\nO\n\n\nO\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nH\nO\nO\nO\nH\n\n\nH\nO\nH\nO\nO\nH\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\n\n\nH\nO\nO\nH\nO\nO\nH\nH\nO\nH\nO\nO\nH\nO\nH\nO\n\n\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\nO\nO\nO\nO\nH\nO\n\n\nO\nO\nH\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nO\nO\n\n\nO\nH\nO\nO\nO\nO\nO\nO\nH\nH\nO\nO\nO\nH\nO\nO\n\n\nO\nH\nH\nO\nO\nO\nO\nH\nO\nH\nO\nO\nH\nO\nH\nO\n\n\nO\nO\nH\nH\nO\nH\nO\nH\nO\nH\nH\nH\nO\nO\nO\nO\n\n\nH\nO\nO\nO\nO\nO\nO\nO\nO\nH\nO\nH\nH\nO\nO\nO\n\n\nO\nH\nO\nO\nO\nH\nO\nO\nO\nO\nO\nO\nO\nO\nH\nH\n\n\nH\nO\nH\nO\nO\nO\nH\nO\nO\nO\nO\nH\nH\nO\nO\nH\n\n\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nH\nH\nH\nH\nO\n\n\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nH\nO\nO\nO\nO\n\n\nH\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\n\n\nO\nH\nH\nH\nO\nO\nO\nH\nO\nH\nO\nO\nO\nO\nO\nO\n\n\nO\nH\nO\nH\nO\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\n\n\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nH\nO\nH\nO\nO\nH\n\n\nO\nH\nO\nO\nH\nO\nO\nO\nO\nO\nH\nO\nO\nO\nO\nO\n\n\nH\nH\nH\nO\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\nH\n\n\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nO\nO\nO\nO\nH\nO\n\n\nO\nH\nO\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nO\nH\n\n\nO\nO\nO\nO\nO\nH\nO\nO\nO\nH\nO\nH\nO\nH\nO\nO\n\n\n\n\nHarry Roberts investigated the batting records of a sample of major leaguers.2 He compared players’ season-long records against the behavior of random-number drawings. If slumps existed rather than being a fiction of the imagination, the real players’ records would shift from a string of hits to a string of outs less frequently than would the random-number sequences. 
But in fact the number of shifts, and the average lengths of strings of hits and outs, are on average the same for players as for player-simulating random-number devices.\nOver long periods, averages may vary systematically, as Ty Cobb’s annual batting averages varied non-randomly from season to season, Roberts found. But in the short run, most individual and team performances have shown results similar to the outcomes that a lottery-type random number machine would produce.\nThomas Gilovich, Robert Vallone and Amos Twersky (1985) performed a similar study of basketball shooting. They examined the records of shots from the floor by the Philadelphia 76’ers, foul shots by the Boston Celtics, and a shooting experiment of Cornell University teams. They found that “basketball players and fans alike tend to believe that a player’s chance of hitting a shot are greater following a hit than following a miss on the previous shot. However, detailed analyses…provided no evidence for a positive correlation between the outcomes of successive shots.”\nTo put their conclusion differently, knowing whether a shooter has scored or not scored on the previous shot — or in any previous sequence of shots — is of absolutely no use in predicting whether the shooter will or will not score on the next shot. Similarly, knowledge of the past series of at-bats in baseball does not improve a prediction of whether a batter will get a hit this time.\nOf course a batter feels — and intensely — as if she or he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, both sandlot players and professionals feel confident that this time will be a hit, too. And after you have hit a bunch of baskets from all over the court, you feel as if you can’t miss.\nBut notice that card players get the same poignant feeling of being “hot” or “cold,” too. After a poker player “fills” several straights and flushes in a row, s/he feels s/he will hit the next one too. (Of course there are some players who feel just the opposite, that the “law of averages” is about to catch up with them.)\nYou will agree, I’m sure, that the cards don’t have any memory, and a player’s chance of filling a straight or flush remains the same no matter how he or she has done in the last series of hands. Clearly, then, a person can have a strong feeling that something is about to happen even when that feeling has no foundation. This supports the idea that even though a player in sports “feels” that s/he is in a slump or has a hot hand, this does not imply that the feeling has any basis in reality.\nWhy, when a batter is low in his/her mind because s/he has been making a lot of outs or for personal reasons, does her/ his batting not suffer? And why the opposite? Apparently at any given moment there are many influences operating upon a player’s performance in a variety of directions, with none of them clearly dominant. Hence there is no simple convincing explanation why a player gets a hit or an out, a basket or a miss, on any given attempt.\nBut though science cannot provide an explanation, the sports commentators always are ready to offer their analyses. Listen, for example, to how they tell you that Joe Zilch must have been trying extra hard just because of his slump. There is a sportswriter’s explanation for anything that happens.\nWhy do we believe the nonsense we hear about “momentum,” “comeback,” “she’s due this time,” and so on? 
The adult of the human species has a powerful propensity to believe that he or she can find a pattern even when there is no pattern to be found. Two decades ago I cooked up series of numbers with a random-number machine that looked as if they were prices on the stock market. Subjects in the experiment were told to buy and sell whichever stocks they chose. Then I gave them “another day’s prices,” and asked them to buy and sell again. The subjects did all kinds of fancy figuring, using an incredible variety of assumptions — even though there was no way for the figuring to help them. That is, people sought patterns even though there was no reason to believe that there were any patterns to be found.\nWhen I stopped the game before the ten buy-and-sell sessions the participants expected, people asked that the game continue. Then I would tell them that there was no basis for any patterns in the data. “Winning” or “losing” had no meaning. But the subjects demanded to continue anyway. They continued believing that they could find patterns even after I told them that the numbers were randomly looked up and not real stock prices.\nThe illusions in our thinking about sports have important counterparts in our thinking about such real-world phenomena as the climate, the stock market, and trends in the prices of raw materials such as mercury, copper and wheat. And private and public decisions made on the basis of faulty understanding of these real situations, caused by illusory thinking on the order of belief in slumps and hot hands, are often costly and sometimes disastrous.\nAn example of the belief that there are patterns when there are none: Systems for finding patterns in the stock market are peddled that have about the same reliability as advice from a racetrack tout — and millions buy them.\nOne of the scientific strands leading into research on variability was the body of studies that considers the behavior of stock prices as a “random walk.” That body of work asserts that a stock broker or chartist who claims to be able to find patterns in past price movements of stocks that will predict future movements should be listened to with about the same credulity as a racetrack tout or an astrologer. A second strand was the work in psychology in the last decade or two which has recognized that people’s estimates of uncertain events are systematically biased in a variety of interesting and knowable ways.\nThe U.S. government has made — and continues to make — blunders costing the public scores of billions of dollars, using slump-type fallacious reasoning about resources and energy. Forecasts are issued and policies are adopted based on the belief that a short-term increase in price constitutes a long-term trend. But the “experts” employed by the government to make such forecasts do no better on average than do private forecasters, and often the system of forecasting that they use is much more misleading than would be a random-number generating machine of the sort used in the baseball slump experiments.\nPlease look at the data in Figure 14.1 for the height of the Nile River over about half a century. Is it not natural to think that those data show a decline in the height of the river? 
One can imagine that if our modern communication technology existed then, the Cairo newspapers would have been calling for research to be done on the fall of the Nile, and the television anchors would have been warning the people to change their ways and use less water.\n\n\n\n\n\nFigure 14.1: Height of the Nile River Over Half of a Century\n\n\n\n\nLet’s look at Figure 14.2 which represents the data over an even longer period. What now would you say about the height of the Nile? Clearly the “threat” was non-existent, and only appeared threatening because the time span represented by the data was too short. The point of this display is that looking at too-short a segment of experience frequently leads us into error. And “too short” may be as long as a century.\n\n\n\nFigure 14.2: Variations in the height of Nile Flood in centimeters. The sloping line indicates the secular raising of the bed of the Nile by deposition of silt. From Brooks (1928)\n\n\nAnother example is the price of mercury, which is representative of all metals. Figure 14.3 shows a forecast made in 1976 by natural-scientist Earl Cook (1976). He combined a then-recent upturn in prices with the notion that there is a finite amount of mercury on the earth’s surface, plus the mathematical charm of plotting a second-degree polynomial with the computer. Figure 14.4 and Figure 14.5 show how the forecast was almost immediately falsified, and the price continued its long-run decline.\n\n\n\nFigure 14.3: The Price of Mercury from Cook (1976)\n\n\n\n\n\n\n\nFigure 14.4: Mercury Reserves, 1950-1990\n\n\n\n\n\n\n\n\n\nFigure 14.5: Mercury Price Indexes, 1950-1990\n\n\n\n\nLack of sound statistical intuition about variability can lead to manipulation of the public being by unscrupulous persons. Commodity funds sellers use a device of this sort to make their results look good (The Washington Post, Sep 28, 1987, p. 71). Some individual commodity traders inevitably do well in their private trading, just by chance. A firm then hires one of them, builds a public fund around him, and claims the private record for the fund’s own history. But of course the private record has no predictive power, any more than does the record of someone who happened to get ten heads in a row flipping coins.\nHow can we avoid falling into such traps? It is best to look at the longest possible sweep of history. That is, use the largest possible sample of observations to avoid sampling error. For copper we have data going back to the 18th century B.C. In Babylonia, over a period of 1000 years, the price of iron fell to one fifth of what it was under Hammurabi (almost 4000 years ago), and the price of copper then cost about a thousand times its current price in the U.S., relative to wages. So the inevitable short-run increases in price should be considered in this long-run context to avoid drawing unsound conclusions due to small-sample variability.\nProof that it is sound judgment to rely on the longest possible series is given by the accuracy of predictions one would have made in the past. In the context of copper, mercury, and other raw materials, we can refer to a sample of years in the past, and from those years imagine ourselves forecasting the following year. If you had bet every time that prices would go down in consonance with the long-run trend, you would have been a big winner on average." 
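Returning to the baseball simulation described at the start of this section, here is a minimal Python sketch, in the spirit of the notebooks elsewhere in the book, of one 400 at-bat season for a .250 hitter. It is our own illustration of the bucket described above (three black balls for outs, one white ball for a hit, drawn with replacement), together with a simple count of the longest run of outs.

import numpy as np

rnd = np.random.default_rng()

# One simulated 400 at-bat season for a .250 hitter: each at-bat is an
# independent draw, with a 1-in-4 chance of a hit.
season = rnd.choice(['H', 'O'], p=[0.25, 0.75], size=400)

# Find the longest run of consecutive outs, the worst "slump" of the season.
longest_slump = 0
current_run = 0
for at_bat in season:
    if at_bat == 'O':
        current_run = current_run + 1
        longest_slump = max(longest_slump, current_run)
    else:
        current_run = 0

print('Longest hitless streak this season:', longest_slump)

Run this a few times: alarming-looking hitless streaks appear regularly, even though the chance of a hit is, by construction, exactly the same at every at-bat.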
+ }, + { + "objectID": "sampling_variability.html#regression-to-the-mean", + "href": "sampling_variability.html#regression-to-the-mean", + "title": "14  On Variability in Sampling", + "section": "14.2 Regression to the mean", + "text": "14.2 Regression to the mean\n\nUP, DOWN “The Dodgers demoted last year’s NL rookie of the year, OF Todd Hollandsworth (.237, 1 HR, 18 RBI) to AAA Albuquerque...” (Item in Washington Post , 6/14/97)\n\nIt is a well-known fact that the Rookie of the Year in a sport such as baseball seldom has as outstanding a season in their sophomore year. Why is this so? Let’s use the knowledge we have acquired of probability and simulation to explain this phenomenon.\nThe matter at hand might be thought of as a problem in pure probability — if one simply asks about the chance that a given player (the Rookie of the Year) will repeat. Or it could be considered a problem in statistics, as discussed in coming chapters. Let’s consider the matter in the context of baseball.\nImagine 10 mechanical “ball players,” each a machine that has three white balls (hits) and 7 black balls. Every time the machine goes to bat, you take a ball out of the machine, look to see if it is a hit or an out, and put it back. For each “ball player” you do this 100 times. One of them is going to do better than the others, and that one becomes the Rookie of the Year. See Table 14.2.\n\n\nTable 14.2: Rookie Seasons (100 at bats)\n\n\n# of Hits\nBatting Average\n\n\n\n\n32\n.320\n\n\n34\n.340\n\n\n33\n.330\n\n\n30\n.300\n\n\n35\n.350\n\n\n33\n.330\n\n\n30\n.300\n\n\n31\n.310\n\n\n28\n.280\n\n\n25\n.250\n\n\n\n\nWould you now expect that the player who happened to be the best among the top ten in the first year to again be the best among the top ten in the next year, also? The sports writers do. But of course this seldom happens. The Rookie of the Year in major-league baseball seldom has as outstanding a season in their sophomore year as in their rookie year. You can expect them to do better than the average of all sophomores, but not necessarily better than all of the rest of the group of talented players who are now sophomores. (Please notice that we are not saying that there is no long-run difference among the top ten rookies. But suppose there is. Table 14.3 shows the season’s performance for ten batters of differing performances).\n\n\nTable 14.3: Simulated season’s performance for 10 batters of differing “true” averages\n\n\n“True”\nRookie\n\n\n\n\n.270\n.340\n\n\n.270\n.240\n\n\n.280\n.330\n\n\n.280\n.300\n\n\n.300\n.280\n\n\n.300\n.420\n\n\n.320\n.340\n\n\n.320\n.350\n\n\n.330\n.260\n\n\n.330\n.330\n\n\n\n\nWe see from Table 14.3 that we have ten batters whose “true” batting averages range from .270 to .330. Their rookie year performance (400 at bats), simulated on the basis of their “true”average is on the right. Which one is the rookie of the year? It’s #6, who hit .420 during the rookie session. Will they do as well next year? 
Not likely — their “true” average is only .300.\n\nStart of sampling_variability notebook\n\nDownload notebook\nInteract\n\n\nTry generating some rookie “seasons” yourself with the following commands, ranging the batter’s “true” performance by changing the value of p_hit (the probability of a hit).\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n\n# Simulate a rookie season of 400 at-bats.\n\n# You might try changing the value below and rerunning.\n# This is the true (long-run) probability of a hit for this batter.\np_hit = 0.4\nprint('True average is:', p_hit)\n\nTrue average is: 0.4\n\nat_bats = rnd.choice(['Hit', 'Out'], p=[p_hit, 1 - p_hit], size=400)\nsimulated_average = np.sum(at_bats == 'Hit') / 400\n# Show the result\nprint('Simulated average is:', simulated_average)\n\nSimulated average is: 0.4075\n\n\nSimulate a set of 10 or 20 such rookie seasons, and look at the one who did best. How did their rookie season compare to their “true” average?\nEnd of sampling_variability notebook\n\nThe explanation is the presence of variability . And lack of recognition of the role of variability is at the heart of much fallacious reasoning. Being alert to the role of variability is crucial.\nOr consider the example of having a superb meal at a restaurant — the best meal you have ever eaten. That fantastic meal is almost surely the combination of the restaurant being better than average, plus a lucky night for the chef and the dish you ordered. The next time you return you can expect a meal better than average, because the restaurant is better than average in the long run. But the meal probably will be less good than the superb one you had the first time, because there is no reason to believe that the chef will get so lucky again and that the same sort of variability will happen this time.\nThese examples illustrate the concept of “regression to the mean” — a confusingly-titled and very subtle effect caused by variability in results among successive samples drawn from the same population. This phenomenon was given its title more than a century ago by Francis Galton, one of the great founders of modern statistics, when at first he thought that the height of the human species was becoming more uniform, after he noticed that the children of the tallest and shortest parents usually are closer to the average of all people than their parents are. But later he discovered his fallacy — that the variability in heights of children of quite short and quite tall parents also causes some people to be even more exceptionally tall or short than their parents. So the spread in heights among humans remains much the same from generation to generation; there is no “regression to the mean.” The heart of the matter is that any exceptional observed case in a group is likely to be the result of two forces — a) an underlying propensity to differ from the average in one direction or the other, plus b) some chance sampling variability that happens (in the observed case) to push even further in the exceptional direction.\nA similar phenomenon arises in direct-mail marketing. When a firm tests many small samples of many lists of names and then focuses its mass mailings on the lists that performed best in the tests, the full list “rollouts” usually do not perform as well as the samples did in the initial tests. 
It took many years before mail-order experts (see especially (Burnett 1988)) finally understood that regression to the mean inevitably causes an important part of the dropoff from sample to rollout observed in the set of lists that give the very best results in a multi-list test.\nThe larger the test samples, the less the dropoff, of course, because larger samples reduce variability in results. But larger samples risk more money. So the test-sample-size decision for the marketer inevitably is a trade-off between accuracy and cost.\nAnd one last amusing example: After I (JLS) lectured to the class on this material, the student who had gotten the best grade on the first mid-term exam came up after class and said: “Does that mean that on the second mid-term I should expect to do well but not the best in the class?” And that’s exactly what happened: He had the second-best score in the class on the next midterm.\nA related problem arises when one conducts multiple tests, as when testing thousands of drugs for therapeutic value. Some of the drugs may appear to have a therapeutic effect just by chance. We will discuss this problem later when discussing hypothesis testing." + }, + { + "objectID": "sampling_variability.html#summary-and-conclusion", + "href": "sampling_variability.html#summary-and-conclusion", + "title": "14  On Variability in Sampling", + "section": "14.3 Summary and conclusion", + "text": "14.3 Summary and conclusion\nThe heart of statistics is clear thinking. One of the key elements in being a clear thinker is to have a sound gut understanding of statistical processes and variability. This chapter amplifies this point.\nA great benefit to using simulations rather than formulas to deal with problems in probability and statistics is that the presence and importance of variability becomes manifest in the course of the simulation work.\n\n\n\n\nBrooks, Charles Ernest Pelham. 1928. “Periodicities in the Nile Floods.” Memoirs of the Royal Meteorological Society 2 (12): 9--26. https://www.rmets.org/sites/default/files/papers/brooksmem2-12.pdf.\n\n\nBurnett, Ed. 1988. The Complete Direct Mail List Handbook: Everything You Need to Know about Lists and How to Use Them for Greater Profit. Englewood Cliffs, New Jersey: Prentice Hall. https://archive.org/details/completedirectma00burn.\n\n\nCook, Earl. 1976. “Limits to Exploitation of Nonrenewable Resources.” Science 191 (4228): 677–82. https://www.jstor.org/stable/pdf/1741483.pdf.\n\n\nGilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot Hand in Basketball: On the Misperception of Random Sequences.” Cognitive Psychology 17 (3): 295–314. https://www.joelvelasco.net/teaching/122/Gilo.Vallone.Tversky.pdf." + }, + { + "objectID": "monte_carlo.html#a-definition-and-general-procedure-for-monte-carlo-simulation", + "href": "monte_carlo.html#a-definition-and-general-procedure-for-monte-carlo-simulation", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.1 A definition and general procedure for Monte Carlo simulation", + "text": "15.1 A definition and general procedure for Monte Carlo simulation\nThis is what we shall mean by the term Monte Carlo simulation when discussing problems in probability: Using the given data-generating mechanism (such as a coin or die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples . That’s it in a nutshell. 
In some cases, it may also be appropriate to amplify this procedure with additional assumptions.\nThis definition fits both problems in pure probability as well as problems in statistics, but in the latter case the process is called resampling . The reason that the same definition fits is that at the core of every problem in inferential statistics lies a problem in probability ; that is, the procedure for handling every statistics problem is the procedure for handling a problem in probability. (There is related discussion of definitions in Chapter 8 and Chapter 20.)\nThe following series of steps should apply to all problems in probability. I’ll first state the procedure straight through without examples, and then show how it applies to individual examples.\n\nStep A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. The term “universe” refers to the system that is relevant for a single simple event.\nStep B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.\nStep C Describe any composite events. If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.\nStep D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.\n\nNow let us apply the general procedure to some examples to make it more concrete.\nHere are four problems to be used as illustrations:\n\nThree percent gizmos — if on average 3 percent of the gizmos sent out are defective, what is the chance that there will be more than 10 defectives in a shipment of 200?\nThree girls, 106 in 206 — what are the chances of getting three or more girls in the first four children, if the probability of a female birth is 106/206?\nLess than 20 baskets — what are the chances of Joe Hothand scoring 20 or fewer baskets in 57 shots if his long-run average is 47 percent?\nSame birthday in 25 — what is the probability of two or more people in a group of 25 persons having the same birthday — i. e., the same month and same day of the month?" + }, + { + "objectID": "monte_carlo.html#apply-step-a-construct-a-simulation-universe", + "href": "monte_carlo.html#apply-step-a-construct-a-simulation-universe", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.2 Apply step A — construct a simulation universe", + "text": "15.2 Apply step A — construct a simulation universe\nAs a reminder:\n\nStep A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. 
The term “universe” refers to the system that is relevant for a single simple event.\n\nFor our example problems:\n\nThree percent gizmos: A random drawing with replacement from the set of numbers 1 through 100 with 1 through 3 designated as defective, simulates the system that produces 3 defective gizmos among 100.\nThree girls, 106 in 206: You could take two decks of cards, from which you take out both Aces of spades, and replace these with a Joker. You now have 103 cards (206 / 2), of which 53 (106 / 2) are red, counting the Joker as red. You could also use a random drawing from two sets of numbers, one comprising 1 through 106 and the other 107 through 206. Either universe can simulate the system that produces a single male or female birth, when we are estimating the probability of three girls in the first four children. Notice that in this universe the probability of a girl remains the same from trial event to trial event — that is, the trials are independent — demonstrating a universe from which we sample with replacement.\nLess than 20 baskets: A random drawing with replacement from a bucket containing a hundred balls, 47 red and 53 black, simulates the system that produces 47 percent baskets for Joe Hothand.\nSame birthday in 25: A random drawing with replacement from the numbers 1 through 365 simulates the system that produces a birthday.\n\nThis step A includes two operations:\n\nDecide which symbols will stand for the elements of the universe you will simulate.\nDetermine whether the sampling will be with or without replacement. (This can be ambiguous in a complex modeling situation.)\n\nHard thinking is required in order to determine the appropriate “real” universe whose properties interest you." + }, + { + "objectID": "monte_carlo.html#apply-step-b-specify-the-procedure", + "href": "monte_carlo.html#apply-step-b-specify-the-procedure", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.3 Apply step B — specify the procedure", + "text": "15.3 Apply step B — specify the procedure\n\nStep B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.\n\nFor example:\n\nThree percent gizmos: For a single gizmo, you can draw a single number from an infinite universe. Or one can use a finite set with replacement and shuffling.\nThree girls, 106 in 206: In the case of three or more daughters among four children, you could use the deck of 103 cards, from Step A, of which 53 count as red. To simulate one child, you can draw a card and then replace it, noting female for a red card or a Joker. Or if you are using random numbers from the computer, the random numbers automatically simulate replacement. 
Just as the chances of having a boy or a girl do not change depending on the sex of the preceding child, so we want to ensure through sampling with replacement that the chances do not change each time we choose from the deck of cards.\nLess than 20 baskets: In the case of Joe Hothand’s shooting, the procedure is to consider the numbers 1 through 47 as “baskets,” and 48 through 100 as “misses,” with the same other considerations as the gizmos.\nSame birthday in 25. In the case of the birthday problem, the drawing must be with replacement, because the fact that you have drawn — say — a 10 (10th day in year), should not affect the chances of drawing 10 for a second person in the room.\n\nRecording the outcome of the sampling must be indicated as part of this step, e.g., “record ‘yes’ if girl or basket, ‘no’ if a boy or a miss.”" + }, + { + "objectID": "monte_carlo.html#apply-step-c-describe-any-composite-events", + "href": "monte_carlo.html#apply-step-c-describe-any-composite-events", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.4 Apply step C — describe any composite events", + "text": "15.4 Apply step C — describe any composite events\n\nStep C Describe any composite events. If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.\n\nFor example:\n\nThree percent gizmos: For the gizmos, draw a sample of 200.\nThree girls, 106 in 206: For the three or more girls among four children, the procedure for each simple event of a single birth was described in step B. Now we must specify repeating the simple event four times, and counting whether the outcome is or is not three girls.\nLess than 20 baskets: In the case of Joe Hothand’s shots, we must draw 57 numbers to make up a sample of shots, and examine whether there are 20 or more misses.\n\nRecording the results as “ten or more defectives,” “three or more girls” or “two or less girls,” and “20 or more misses” or “19 or fewer,” is part of this step. This record indicates the results of all the trials and is the basis for a tabulation of the final result." + }, + { + "objectID": "monte_carlo.html#apply-step-d-calculate-the-probability", + "href": "monte_carlo.html#apply-step-d-calculate-the-probability", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.5 Apply step D — calculate the probability", + "text": "15.5 Apply step D — calculate the probability\n\nStep D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.\n\nFor example: the proportions of “yes” and “no,” and “20 or more” and “19 or fewer” estimate the probability we seek in step C.\nThe above procedure is similar to the procedure followed with the analytic formulaic method except that the latter method constructs notation and manipulates it." + }, + { + "objectID": "monte_carlo.html#summary", + "href": "monte_carlo.html#summary", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.6 Summary", + "text": "15.6 Summary\nThis chapter gives a more general description of the specific steps used in prior chapters to solve problems in probability." 
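To make steps A through D concrete in code as well as in prose, here is a minimal Python sketch of the fourth illustrative problem, two or more people sharing a birthday in a group of 25. The sketch is our own; the choice of 10,000 trials follows the pattern of earlier chapters.

import numpy as np

rnd = np.random.default_rng()

N = 10000
trial_results = np.zeros(N, dtype=bool)

for i in range(N):
    # Steps A and B: the universe is the numbers 1 through 365 (days of the
    # year); draw 25 birthdays with replacement.
    birthdays = rnd.integers(1, 366, size=25)
    # Step C: the composite event: do any two of the 25 people share a birthday?
    trial_results[i] = len(np.unique(birthdays)) < 25

# Step D: the proportion of trials with at least one shared birthday.
kk = np.sum(trial_results) / N
print(kk)

The printed proportion should come out somewhere near 0.57, the conventional analytic answer for a group of 25 people.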
+ }, + { + "objectID": "standard_scores.html#household-income-and-congressional-districts", + "href": "standard_scores.html#household-income-and-congressional-districts", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.1 Household income and congressional districts", + "text": "16.1 Household income and congressional districts\nDemocratic congresswoman Marcy Kaptur has represented the 9th district of Ohio since 1983. Ohio’s 9th district is relatively working class, and the Democratic party has, traditionally, represented people with lower income. However, Kaptur has pointed out that this pattern appears to be changing; more of the high-income congressional districts now lean Democrat, and the Republican party is now more likely to represent lower-income districts. The French economist Thomas Piketty has described this phenomenon across several Western countries. Voters for left parties are now more likely to be highly educated and wealthy. He terms this shift “Brahmin Left Vs Merchant Right” (Piketty 2018). The data below come from a table Kaptur prepared that shows this pattern in the 2023 US congress. The table lists the top 20 districts by the median income of the households in that district, along with their representatives and their party.2\n\n\n\n\nTable 16.1: 20 most wealthy 2023 Congressional districts by household income\n\n\n\nAscending_Rank\nDistrict\nMedian Income\nRepresentative\nParty\n\n\n\n\n422\n422\nMD-3\n114804\nJ. Sarbanes\nDemocrat\n\n\n423\n423\nMA-5\n115618\nK. Clark\nDemocrat\n\n\n424\n424\nNY-12\n116070\nJ. Nadler\nDemocrat\n\n\n425\n425\nVA-8\n116332\nD. Beyer\nDemocrat\n\n\n426\n426\nMD-5\n117049\nS. Hoyer\nDemocrat\n\n\n427\n427\nNJ-11\n117198\nM. Sherrill\nDemocrat\n\n\n428\n428\nNY-3\n119185\nG. Santos\nRepublican\n\n\n429\n429\nCA-14\n119209\nE. Swalwell\nDemocrat\n\n\n430\n430\nNJ-7\n119567\nT. Kean\nRepublican\n\n\n431\n431\nNY-1\n120031\nN. LaLota\nRepublican\n\n\n432\n432\nWA-1\n120671\nS. DelBene\nDemocrat\n\n\n433\n433\nMD-8\n120948\nJ. Raskin\nDemocrat\n\n\n434\n434\nNY-4\n121979\nA. D’Esposito\nRepublican\n\n\n435\n435\nCA-11\n124456\nN. Pelosi\nDemocrat\n\n\n436\n436\nCA-15\n125855\nK. Mullin\nDemocrat\n\n\n437\n437\nCA-10\n135150\nM. DeSaulnier\nDemocrat\n\n\n438\n438\nVA-11\n139003\nG. Connolly\nDemocrat\n\n\n439\n439\nVA-10\n140815\nJ. Wexton\nDemocrat\n\n\n440\n440\nCA-16\n150720\nA. Eshoo\nDemocrat\n\n\n441\n441\nCA-17\n157049\nR. Khanna\nDemocrat\n\n\n\n\n\n\n\n\nYou may notice right away that many of the 20 richest districts have Democratic Party representatives.\nIn fact, if we look at all 441 congressional districts in Kaptur’s table, we find a large difference in the average median household income for Democrat and Republican districts; the Democrat districts are, on average, about 14% richer (Table 16.2).\n\n\n\n\nTable 16.2: Means for median household income by party\n\n\n\nMean of median household income\n\n\n\n\nDemocrat\n$76,933\n\n\nRepublican\n$67,474\n\n\n\n\n\n\n\n\nNext we are going to tip our hand, and show how we got these data. In previous chapters, we had cells like this in which we enter the values we will analyze. 
These values come from the example we introduced in Section 12.16:\n\n# Liquor prices for US states with private market.\npriv = np.array([\n 4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54, 4.75,\n 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75, 5.20, 5.10,\n 4.80, 4.29])\n\nNow we have 441 values to enter, and it is time to introduce Python's standard tools for loading data.\n\n16.1.1 Comma-separated-values (CSV) format\nThe data we will load is in a file on disk called data/congress_2023.csv. These are data from Kaptur’s table in a comma-separated-values (CSV) format file. We refer to this file with its filename, containing the directory (data/) followed by the name of the file (congress_2023.csv), giving a filename of data/congress_2023.csv.\nThe CSV format is a very simple text format for storing table data. Usually, the first line of the CSV file contains the column names of the table, and the rest of the lines contain the row values. As the name suggests, commas (,) separate the column names in the first line, and the row values in the following lines. If you opened the data/congress_2023.csv file in some editor, such as Notepad on Windows or TextEdit on Mac, you would find that the first few lines looked like this:\n\nAscending_Rank,District,Median_Income,Representative,Party\n1,PR-At Large,22237,J. González-Colón,Republican\n2,AS-At Large,28352,A. Coleman,Republican\n3,MP-At Large,31362,G. Sablan,Democrat\n4,KY-5,37910,H. Rogers,Republican\n5,MS-2,37933,B. G. Thompson,Democrat\n\n\n\n16.1.2 Introducing the Pandas library\nHere we start using the Pandas library to load table data into Python.\nThus far we have used the Numpy library to work with data in arrays; Pandas is a library for working with tables of data. As always with Python, when we want to use a library like Pandas, we have to import it first.\nWe have used the term library here, but Python uses the term module to refer to libraries of code and data that you import.\nWhen using Numpy, we write:\n\n# Import the Numpy library (module), name it \"np\".\nimport numpy as np\n\nNow we will use the Pandas library (module).\nWe can import Pandas like this:\n\n# Import the Pandas library (module)\nimport pandas\n\nAs Numpy has a standard abbreviation np, that almost everyone writing Python code will recognize and use, so Pandas has the standard abbreviation pd:\n\n# Import the Pandas library (module), name it \"pd\".\nimport pandas as pd\n\nPandas is the standard data science library for Python. It is particularly good at loading data files, and presenting them to us as a useful table-like structure, called a data frame.\nWe start by using Pandas to load our data file:\n\ndistrict_income = pd.read_csv('data/congress_2023.csv')\n\nWe have thus far done many operations that returned Numpy arrays. pd.read_csv returns a Pandas data frame:\n\ntype(district_income)\n\n<class 'pandas.core.frame.DataFrame'>\n\n\nA data frame is Pandas’ own way of representing a table, with columns and rows. You can think of it as Python’s version of a spreadsheet. As strings or Numpy arrays have methods (functions attached to the array), so Pandas data frames have methods. These methods do things with the data frame to which they are attached. For example, the head method of the data frame shows (by default) the first five rows in the table:\n\n# Show the first five rows in the data frame\ndistrict_income.head()\n\n Ascending_Rank District Median_Income Representative Party\n0 1 PR-At Large 22237 J. González-Colón Republican\n1 2 AS-At Large 28352 A. 
Coleman Republican\n2 3 MP-At Large 31362 G. Sablan Democrat\n3 4 KY-5 37910 H. Rogers Republican\n4 5 MS-2 37933 B. G. Thompson Democrat\n\n\nThe data are in income order, from lowest to highest, so the first five districts are those with the lowest household income.\n\n\n\n\n\n\nSorting\n\n\n\n\nIf the data were not already in income order, we could have sorted them with Numpy’s np.sort function.\n\n\n\n\nWe are particularly interested in the column named Median_Income.\nYou may remember the idea of indexing, introduced in Section 7.6. Indexing occurs when we fetch data from within a container, such as a string or an array. We do this by putting square brackets [] after the value we want to index into, and put something inside the brackets to say what we want.\nFor example, to get the first element of the priv array above, we use indexing:\n\n# Fetch the first element of the priv array with indexing.\n# This is the element at position 0.\npriv[0]\n\n4.82\n\n\nAs you can index into strings and Numpy arrays, by using square brackets, so you can index into Pandas data frames. Instead of putting the position between the square brackets, we can put the column name. This fetches the data from that column, returning a new type of value called a Pandas Series.\n\n# Index into Pandas data frame to get one column of data.\n# Notice we use a string between the square brackets, giving the column name.\nincome_col = district_income['Median_Income']\n# The value that comes back is of type Series. A Series represents the\n# data from a single column.\ntype(income_col)\n\n<class 'pandas.core.series.Series'>\n\n\nWe want to go straight to our familiar Numpy arrays, so we convert the column of data into a Numpy array, using the np.array function you have already seen:\n\n\n# Convert column data into a Numpy array.\nincomes = np.array(income_col)\n# Show the first five values, by indexing with a slice.\nincomes[:5]\n\narray([22237, 28352, 31362, 37910, 37933])\n\n\n16.1.3 Incomes and Ranks\nWe now have the incomes values as an array.\nThere are 441 values in the whole array, one for each congressional district:\n\nlen(incomes)\n\n441\n\n\nWhile we are at it, let us also get the values from the “Ascending_Rank” column, with the same procedure. These are ranks from low to high, meaning 1 is the lowest median income, and 441 is the highest median income.\n\nlo_to_hi_ranks = np.array(district_income['Ascending_Rank'])\n# Show the first five values, by indexing with a slice.\nlo_to_hi_ranks[:5]\n\narray([1, 2, 3, 4, 5])\n\n\nIn our case, the DataFrame has the Ascending_Rank column with the ranks we need, but if we need the ranks and we don’t have them, we can calculate them using the rankdata function from the Scipy stats package.\n\n\n16.1.4 Introducing Scipy\nEarlier in this chapter we introduced the Pandas module. We used Pandas to load the CSV data into Python.\nNow we introduce another fundamental Python library for working with data called Scipy. The name Scipy is a compression of SCIentific PYthon, and the library is nearly as broad as the name suggests — it is a huge collection of functions and data that implement a wide range of scientific algorithms. Scipy is an umbrella package, in that it contains sub-packages, each covering a particular field of scientific computing. 
One of those sub-packages is called stats, and, yes, it covers statistics.\nWe can get the Scipy stats sub-package with:\n\nimport scipy.stats\n\nbut, as for Numpy and Pandas, we often import the package with an abbreviation, such as:\n\n# Import the scipy.stats package with the name \"sps\".\nimport scipy.stats as sps\n\nOne of the many functions in scipy.stats is the rankdata function.\n\n\n16.1.5 Calculating ranks\nAs you might expect sps.rankdata accepts an array as an input argument. Let’s say that there are n = len(data) values in the array that we pass to sps.rankdata. The function returns an array, length \\(n\\), where the elements are the ranks of each corresponding element in the input data array. A rank value of 1 corresponds the lowest value in data (closest to negative infinity), and a rank of \\(n\\) corresponds to the highest value (closest to positive infinity).\nHere’s an example data array to show how sps.rankdata works.\n\n# The data.\ndata = np.array([3, -1, 5, -2])\n# Corresponding ranks for the data.\nsps.rankdata(data)\n\narray([3., 2., 4., 1.])\n\n\nWe can use sps.rankdata to recalculate the ranks for the congressional median household income values.\n\n# Recalculate the ranks.\nrecalculated_ranks = sps.rankdata(incomes)\n# Show the first 5 ranks.\nrecalculated_ranks[:5]\n\narray([1., 2., 3., 4., 5.])" + }, + { + "objectID": "standard_scores.html#comparing-two-values-in-the-district-income-data", + "href": "standard_scores.html#comparing-two-values-in-the-district-income-data", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.2 Comparing two values in the district income data", + "text": "16.2 Comparing two values in the district income data\nLet us say that we have taken an interest in two particular members of Congress: the Speaker of the House of Representatives, Republican Kevin McCarthy, and the progressive activist and Democrat Alexandria Ocasio-Cortez. We will refer to both using their initials: KM for Kevin Owen McCarthy and AOC for Alexandra Ocasio-Cortez.\nBy scrolling through the CSV file, or (in our case) using some simple Pandas code that we won’t cover now, we find the rows corresponding to McCarthy (KM) and Ocasio-Cortez (AOC) — Table 16.3.\n\n\n\n\nTable 16.3: Rows for Kevin McCarthy and Alexandra Ocasio-Cortez \n\n\nAscending_Rank\nDistrict\nMedian Income\nRepresentative\nParty\n\n\n\n\n81\nNY-14\n56129\nA. Ocasio-Cortez\nDemocrat\n\n\n295\nCA-20\n77205\nK. McCarthy\nRepublican\n\n\n\n\n\n\n\n\nThe rows show the rank of each congressional district in terms of median household income. The districts are ordered by this rank, so we can get their respective indices (positions) in the incomes array from their rank. 
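(As an aside, the "simple Pandas code" mentioned above, which the chapter does not cover, might look something like the sketch below. The filtering approach and the name fragments we search for are our own illustration, not the book's code.)

# Find the rows for KM and AOC by filtering on the Representative column.
is_km_or_aoc = (district_income['Representative'].str.contains('McCarthy') |
                district_income['Representative'].str.contains('Ocasio-Cortez'))
district_income[is_km_or_aoc]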
Remember, Python’s indices start at 0, whereas the ranks start at 1, so we need to subtract 1 from the rank to get the index\n\n# Rank of McCarthy's district in terms of median household income.\nkm_rank = 295\n# Index (position) of McCarthy's value in the \"incomes\" array.\n# Subtract one from rank, because Python starts indices at 0 rather than 1.\nkm_index = km_rank - 1\n\nNow we have the index (position) of KM’s value, we can find the household income for his district from the incomes array:\n\n# Show the median household income from McCarthy's district\n# by indexing into the \"incomes\" array:\nkm_income = incomes[km_index]\nkm_income\n\n77205\n\n\nHere is the corresponding index and incomes value for AOC:\n\n# Index (position) of AOC's value in the \"incomes\" array.\naoc_rank = 81\naoc_index = aoc_rank - 1\n# Show the median household income from AOC's district\n# by indexing into the \"incomes\" array:\naoc_income = incomes[aoc_index]\naoc_income\n\n56129\n\n\nNotice that we fetch the same value for median household income from incomes as you see in the corresponding rows." + }, + { + "objectID": "standard_scores.html#comparing-values-with-ranks-and-quantile-positions", + "href": "standard_scores.html#comparing-values-with-ranks-and-quantile-positions", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.3 Comparing values with ranks and quantile positions", + "text": "16.3 Comparing values with ranks and quantile positions\nWe have KM’s and AOC’s district median household income values, but our next question might be — how unusual are these values?\nOf course, it depends what we mean by unusual. We might mean, are they greater or smaller than most of the other values?\nOne way of answering that question is simply looking at the rank of the values. If the rank is lower than \\(\\frac{441}{2} = 220.5\\) then this is a district with lower median income than most districts. If it is greater than \\(220.5\\) then it has higher median income than most districts. We see that KM’s district, with rank 295 is wealthier than most, whereas AOC’s district (rank 81) is poorer than most.\nBut we can’t interpret the ranks without remembering that there are 441 values, so — for example - a rank of 81 represents a relatively low value, whereas one of 295 is relatively high.\nWe would like some scale that tells us immediately whether this is a relatively low or a relatively high value, without having to remembering how many values there are.\nThis is a good use for quantile positions (QPs). The QP of a value tells you where the value ranks relative to the other values, on a scale from \\(0\\) through \\(1\\). A QP of \\(0\\) tells you this is the lowest-ranking value, and a QP of \\(1\\) tells you this is the highest-ranking value.\nWe can calculate the QP for each rank. Think of the low-to-high ranks as being a line starting at 1 (the lowest rank — for the lowest median income) and going up to 441 (the highest rank — for the highest median income).\nThe QP corresponding to any particular rank tells you how far along this line the rank is. Notice that the length of the line is the distance from the first to the last value, so 441 - 1 = 440.\nSo, if the rank was \\(1\\), then the value is at the start of the line. It has got \\(\\frac{0}{440}\\) of the way along the line, and the QP is \\(0\\). If the rank is \\(441\\), the value is at the end of the line, it has got \\(\\frac{440}{440}\\) of the way along the line and the QP is \\(1\\).\nNow consider the rank of \\(100\\). 
It has got \\(\\frac{(100 - 1)}{440}\\) of the way along the line, and the QP is 0.22.\nMore generally, we can translate the low-to-high ranks to QPs with:\n\n# Length of the line defining quantile positions.\n# Start of line is rank 1 (quantile position 0).\n# End of line is rank 441 (quantile position 1).\ndistance = len(lo_to_hi_ranks) - 1 # 440 in our case.\n# What proportion along the line does each value get to?\nquantile_positions = (lo_to_hi_ranks - 1) / distance\n# Show the first five.\nquantile_positions[:5]\n\narray([0. , 0.00227273, 0.00454545, 0.00681818, 0.00909091])\n\n\nLet’s plot the ranks and the QPs together on the x-axis:\n\n\n\n\nThe QPs for KM and AOC tell us where their districts’ incomes are in the ranks, on a 0 to 1 scale:\n\nkm_quantile_position = quantile_positions[km_index]\nkm_quantile_position\n\n0.6681818181818182\n\n\n\naoc_quantile_position = quantile_positions[aoc_index]\naoc_quantile_position\n\n0.18181818181818182\n\n\nIf we multiply the QP by 100, we get the percentile positions — so the percentile position ranges from 0 through 100.\n\n# Percentile positions are just quantile positions * 100\nprint('KM percentile position:', km_quantile_position * 100)\n\nKM percentile position: 66.81818181818183\n\nprint('AOC percentile position:', aoc_quantile_position * 100)\n\nAOC percentile position: 18.181818181818183\n\n\nNow consider one particular QP: \\(0.5\\). The \\(0.5\\) QP is exactly half-way along the line from rank \\(1\\) to rank \\(441\\). In our case this corresponds to rank \\(\\frac{441 - 1}{2} + 1 = 221\\).\n\n# For rank 221 we need index 220, because Python indices start at 0\nprint('Middle rank:', lo_to_hi_ranks[220])\n\nMiddle rank: 221\n\nprint('Quantile position:', quantile_positions[220])\n\nQuantile position: 0.5\n\n\nThe value corresponding to any particular QP is the quantile value, or just the quantile for short. For a QP of 0.5, the quantile (quantile value) is:\n\n# Quantile value for 0.5\nprint('Quantile value for QP of 0.5:', incomes[220])\n\nQuantile value for QP of 0.5: 67407\n\n\nIn fact we can ask Python for this value (quantile) directly, using the quantile function:\n\nnp.quantile(incomes, 0.5)\n\n67407.0\n\n\n\n\nquantile and sorting\n\n\n\nIn our case, the incomes data is already sorted from lowest (at position 0 in the array) to highest (at position 440 in the array). The quantile function does not need the data to be sorted; it does its own internal sorting to do the calculation.\nFor example, we could shuffle incomes into a random order, and still get the same values from quantile.\n\nrnd = np.random.default_rng()\nshuffled_incomes = rnd.permuted(incomes)\n# Quantile of the shuffled array still gives the same value.\nnp.quantile(shuffled_incomes, 0.5)\n\n67407.0\n\n\n\n\nAbove we have the 0.5 quantile — the value corresponding to the QP of 0.5.\nThe 0.5 quantile is an interesting value. By the definition of QP, exactly half of the remaining values (after excluding the 0.5 quantile value) have lower rank, and are therefore less than the 0.5 quantile value. Similarly exactly half of the remaining values are greater than the 0.5 quantile. You may recognize this as the median value. This is such a common quantile value that NumPy has a function np.median as a shortcut for np.quantile(data, 0.5).\n\nnp.median(incomes)\n\n67407.0\n\n\nAnother interesting QP is 0.25.
We find the QP of 0.25 at rank:\n\nqp25_rank = (441 - 1) * 0.25 + 1\nqp25_rank\n\n111.0\n\n\n\n# Therefore, index 110 (Python indices start from 0)\nprint('Rank corresponding to QP 0.25:', qp25_rank)\n\nRank corresponding to QP 0.25: 111.0\n\nprint('0.25 quantile value:', incomes[110])\n\n0.25 quantile value: 58961\n\nprint('0.25 quantile value using np.quantile:',\n np.quantile(incomes, 0.25))\n\n0.25 quantile value using np.quantile: 58961.0\n\n\n\n\n\n\n\n\n\n\n\nCall the 0.25 quantile value \\(V\\). \\(V\\) is the number such that 25% of the remaining values are less than \\(V\\), and 75% are greater.\nNow let’s think about the 0.01 quantile. We don’t have an income value exactly corresponding to this QP, because there is no rank exactly corresponding to the 0.01 QP.\n\nrank_for_qp001 = (441 - 1) * 0.01 + 1\nrank_for_qp001\n\n5.4\n\n\nLet’s have a look at the first 10 values for rank / QP and incomes:\n\n\n\n\n\n\n\n\n\nWhat then, is the quantile value for QP = 0.01? There are various ways to answer that question (Hyndman and Fan 1996), but one obvious way, and the default for NumPy, is to draw a straight line up from the matching rank — or equivalently, down from the QP — then note where that line crosses the lines joining the values to the left and right of the QP on the graph above, and look across to the y-axis for the corresponding value:\n\n\n\n\n\n\n\n\n\n\nnp.quantile(incomes, 0.01)\n\n38887.4\n\n\nThis is called the linear method — because it uses straight lines joining the points to estimate the quantile value for a QP that does not correspond to a whole-number rank.\n\n\n\n\n\n\nCalculating quantiles using the linear method\n\n\n\nWe gave a graphical explanation of how to calculate the quantile for a QP that does not correspond to whole-number rank in the data. A more formal way of getting the value using the numerical equivalent of the graphical method is linear interpolation. Linear interpolation calculates the quantile value as a weighted average of the quantile values for the QPs of the whole number ranks just less than, and just greater than the QP we are interested in. For example, let us return to the QP of \\(0.01\\). Let us remind ourselves of the QPs, whole-number ranks and corresponding values either side of the QP \\(0.01\\):\n\nRanks, QPs and corresponding values around QP of 0.01\n\n\nRank\nQuantile position\nQuantile value\n\n\n\n\n5\n0.0099\n37933\n\n\n5.4\n0.01\nV\n\n\n6\n0.0113\n40319\n\n\n\nWhat value should we should give \\(V\\) in the table? One answer is to take the average of the two values either side of the desired QP — in this case \\((37933 + 40319) / 2\\). We could write this same calculation as \\(37933 * 0.5 + 40319 * 0.5\\) — showing that we are giving equal weight (\\(0.5\\)) to the two values either side.\nBut giving both values equal weight doesn’t seem quite right, because the QP we want is closer to the QP for rank 5 (and corresponding value 37933) than it is to the QP for rank 6 (and corresponding value 40319). We should give more weight to the rank 5 value than the rank 6 value. Specifically the lower value is 0.4 rank units away from the QP rank we want, and the higher is 0.6 rank units away. So we give higher weight for shorter distance, and multiply the rank 5 value by \\(1 - 0.4 = 0.6\\), and the rank 6 value by \\(1 - 0.6 = 0.4\\). Therefore the weighted average is \\(37933 * 0.6 + 40319 * 0.4 = 38887.4\\). 
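Here is a minimal sketch of that same calculation in code, assuming the sorted incomes array from above (remember that rank 5 is at index 4, and rank 6 at index 5):

# Values at the whole-number ranks either side of the fractional rank 5.4.
lower_value = incomes[5 - 1]   # Value at rank 5, which is 37933.
upper_value = incomes[6 - 1]   # Value at rank 6, which is 40319.
# The fractional part of the rank gives the weights.
fraction = 0.4                 # How far rank 5.4 is past rank 5.
interpolated = lower_value * (1 - fraction) + upper_value * fraction
# interpolated should be (near enough) 38887.4,
# matching the np.quantile(incomes, 0.01) result above.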
This is a mathematical way to get the value we described graphically, of tracking up from the rank of 5.4 to the line drawn between the values for rank 5 and 6, and reading off the y-value at which this track crosses that line." + }, + { + "objectID": "standard_scores.html#unusual-values-compared-to-the-distribution", + "href": "standard_scores.html#unusual-values-compared-to-the-distribution", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.4 Unusual values compared to the distribution", + "text": "16.4 Unusual values compared to the distribution\nNow we return the problem of whether KMs and AOCs districts are unusual in terms of their median household incomes. From what we have so far, we might conclude that AOC’s district is fairly poor, and KM’s district is relatively wealthy. But — are either of their districts unusual in their wealth or poverty?\nTo answer that question, we have to think about the distribution of values. Are either AOC’s or KM’s district outside the typical spread of values for districts?\nThe rest of this section is an attempt to answer what we could mean by outside and typical spread.\nLet us start with a histogram of the district incomes, marking the position of the KM and AOC districts.\n\n\n\n\n\n\n\n\n\nWhat could we mean by “outside” the “typical spread”. By outside, we mean somewhere away from the center of the distribution. Let us take the mean of the distribution to be its center, and add that to the plot.\n\nmean_income = np.mean(incomes)" + }, + { + "objectID": "standard_scores.html#on-deviations", + "href": "standard_scores.html#on-deviations", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.5 On deviations", + "text": "16.5 On deviations\nNow let us ask what we could mean by typical spread. By spread we mean deviation either side of the center.\nWe can calculate how far away each income is away from the mean, by subtracting the mean from all the income values. Call the result — the deviations from the mean, or deviations for short.\n\ndeviations = incomes - np.mean(incomes)\n\nThe deviation values give, for each district, how far that district’s income is from the mean. Values near the mean will have small (positive or negative) values, and values further from the mean will have large (positive and negative) values. Here is a histogram of the deviation values.\n\n\n\n\n\n\n\n\n\nNotice that the shape of the distribution has not changed — all that changed is the position of the distribution on the x-axis. In fact, the distribution of deviations centers on zero — the deviations have a mean of (as near as the computer can accurately calculate) zero:\n\n# Show the mean of the deviations, rounded to 8 decimal places.\nnp.round(np.mean(deviations), 8)\n\n0.0" + }, + { + "objectID": "standard_scores.html#the-mean-absolute-deviation", + "href": "standard_scores.html#the-mean-absolute-deviation", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.6 The mean absolute deviation", + "text": "16.6 The mean absolute deviation\nNow let us consider the deviation value for KM and AOC:\n\nprint('Deviation for KM:', deviations[km_index])\n\nDeviation for KM: 5098.036281179142\n\nprint('Deviation for AOC:', deviations[aoc_index])\n\nDeviation for AOC: -15977.963718820858\n\n\nWe have the same problem as before. Yes, we see that KM has a positive deviation, and therefore, that his district is more wealthy than average across the 441 districts. 
Conversely AOC’s district has a negative deviation, and is poorer than average. But we still lack a standard measure of how far away from the mean each district is, in terms of the spread of values in the histogram.\nTo get such a standard measure, we would like idea of a typical or average deviation. Then we will compare KM’s and AOC’s deviations to the average deviation, to see if they are unusually far from the mean.\nYou have just seen above that we cannot use the literal average (mean) of the deviations for this purpose because the positive and negative deviations will exactly cancel out, and the mean deviation will always be as near as the computer can calculate to zero.\nTo stop the negatives canceling the positives, we can simply knock the minus signs off all the negative deviations.\nThis is the job of the NumPy abs function — where abs is short for absolute. The abs function will knock minus signs off negative values, like this:\n\nnp.abs([-1, 0, 1, -2])\n\narray([1, 0, 1, 2])\n\n\nTo get an average of the deviations, regardless of whether they are positive or negative, we can take the mean of the absolute deviations, like this:\n\n# The Mean Absolute Deviation (MAD)\nabs_deviations = np.abs(deviations)\nmad = np.mean(abs_deviations)\n# Show the result\nmad\n\n15101.657570662428\n\n\nThis is the Mean Absolute Deviation (MAD). It is one measure of the typical spread. MAD is the average distance (regardless of positive or negative) of a value from the mean of the values.\nWe can get an idea of how typical a particular deviation is by dividing the deviation by the MAD value, like this:\n\nprint('Deviation in MAD units for KM:', deviations[km_index] / mad)\n\nDeviation in MAD units for KM: 0.33758123949803737\n\nprint('Deviation in MAD units AOC:', deviations[aoc_index] / mad)\n\nDeviation in MAD units AOC: -1.0580271499375542" + }, + { + "objectID": "standard_scores.html#the-standard-deviation", + "href": "standard_scores.html#the-standard-deviation", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.7 The standard deviation", + "text": "16.7 The standard deviation\nWe are interested in the average deviation, but we find that a simple average of the deviations from the mean always gives 0 (perhaps with some tiny calculation error), because the positive and negative deviations cancel exactly.\nThe MAD calculation solves this problem by knocking the signs off the negative values before we take the mean.\nAnother very popular way of solving the same problem is to precede the calculation by squaring all the deviations, like this:\n\nsquared_deviations = deviations ** 2\n# Show the first five values.\nsquared_deviations[:5]\n\narray([2.48701328e+09, 1.91449685e+09, 1.66015207e+09, 1.16943233e+09,\n 1.16785980e+09])\n\n\n\n\n\n\n\n\nExponential format for showing very large and very small numbers\n\n\n\nThe squared_deviation values above appear in exponential notation (E-notation). Other terms for E-notation are scientific notation, scientific form, or standard form. 
E-notation is a useful way to express very large (far from 0) or very small (close to 0) numbers in a more compact form.\nE-notation represents a value as a floating point value \\(m\\) multiplied by 10 to the power of an exponent \\(n\\):\n\\[\nm * 10^n\n\\]\n\\(m\\) is a floating point number with one digit before the decimal point — so it can be any value from 1.0 through 9.9999… \\(n\\) is an integer (positive or negative whole number).\nFor example, the median household income of KM’s district is 77205 (dollars). We can express that same number in E-notation as \\(7.7205 * 10^4\\) . Python writes this as 7.7205e4, where the number before the e is \\(m\\) and the number after the e is the exponent value \\(n\\). E-notation is another way of writing the number, because \\(7.7205 * 10^4 = 77205\\).\n\n7.7205e4 == 77205\n\nTrue\n\n\nIt is no great advantage to use E-notation in this case; 77205 is probably easier to read and understand than 7.7205e4. The notation comes into its own where you start to lose track of the powers of 10 when you read a number — and that does happen when the number becomes very long without E-notation. For example, \\(77205^2 = 5960612025\\). \\(5960612025\\) is long enough that you start having to count the digits to see how large it is. In E-notation, that number is 5.960612025e9. If you remember that \\(10^9\\) is one US billion, then the E-notation tells you at a glance that the value is about \\(5.9\\) billion.\nPython makes its own decision whether to print out numbers using E-notation. This only affects the display of the numbers; the underlying values remain the same whether NumPy chooses to show them in E-notation or not.\n\n\nThe process of squaring the deviations turns all the negative values into positive values.\nWe can then take the average (mean) of the squared deviations to give a measure of the typical squared deviation:\n\nmean_squared_deviation = np.mean(squared_deviations)\nmean_squared_deviation\n\n385971462.1165975\n\n\nRather confusingly, the field of statistics uses the term variance to refer to mean squared deviation value. Just to emphasize that naming, let’s do the same calculation but using “variance” as the variable name.\n\n# Statistics calls the mean squared deviation - the \"variance\"\nvariance = np.mean(squared_deviations)\nvariance\n\n385971462.1165975\n\n\n\nIt will come as no surprise to find that Numpy has a function to do the whole variance calculation — subtracting the mean, and returning the average squared deviation — np.var:\n\n# Use np.var to calculate the mean squared deviation directly.\nnp.var(incomes)\n\n385971462.1165975\n\n\n\nThe variance is the typical (in the sense of the mean) squared deviation. The units for the variance, in our case, would be squared dollars. But we are more interested in the typical deviation, in our original units – dollars rather than squared dollars.\nSo we take the square root of the mean squared deviation (the square root of the variance), to get the standard deviation. 
It is the standard deviation in the sense that it is a measure of typical deviation, in the specific sense of the square root of the mean squared deviations.\n\n# The standard deviation is the square root of the mean squared deviation.\n# (and therefore, the square root of the variance).\nstandard_deviation = np.sqrt(mean_squared_deviation)\nstandard_deviation\n\n19646.156420954136\n\n\n\nAgain, Numpy has a function to do this calculation directly: np.std:\n\n# Use np.std to calculate the square root of the mean squared deviation\n# directly.\nnp.std(incomes)\n\n19646.156420954136\n\n\n\n# Of course, np.std(incomes) is the same as:\nnp.sqrt(np.var(incomes))\n\n19646.156420954136\n\n\n\nThe standard deviation (the square root of the mean squared deviation) is a popular alternative to the Mean Absolute Deviation, as a measure of typical spread.\nFigure 16.1 shows another histogram of the income values, marking the mean, the mean plus or minus one standard deviation, and the mean plus or minus two standard deviations. You can see that the mean plus or minus one standard deviation includes a fairly large proportion of the data. The mean plus or minus two standard deviations includes a much larger proportion.\n\n\n\n\nFigure 16.1: Income histogram plus or minus 1 and 2 standard deviations\n\n\n\n\nNow let us return to the question of how unusual our two congressional districts are in terms of the distribution. First we calculate the number of standard deviations of each district from the mean:\n\nkm_std_devs = deviations[km_index] / standard_deviation\nprint('Deviation in standard deviation units for KM:',\n      np.round(km_std_devs, 2))\n\nDeviation in standard deviation units for KM: 0.26\n\naoc_std_devs = deviations[aoc_index] / standard_deviation\nprint('Deviation in standard deviation units for AOC:',\n      np.round(aoc_std_devs, 2))\n\nDeviation in standard deviation units for AOC: -0.81\n\n\nThe values for each district are a re-expression of the income values in terms of the distribution. They give the distance from the mean (positive or negative) in units of standard deviation." + }, + { + "objectID": "standard_scores.html#standard-scores", + "href": "standard_scores.html#standard-scores", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.8 Standard scores", + "text": "16.8 Standard scores\nWe will often find uses for the procedure we have just applied, where we take the original values (here, incomes) and:\n\nSubtract the mean to convert to deviations, then\nDivide by the standard deviation\n\nLet’s apply that procedure to all the incomes values.\nFirst we calculate the standard deviation:\n\ndeviations = incomes - np.mean(incomes)\nincome_std = np.sqrt(np.mean(deviations ** 2))\n\nThen we calculate standard scores:\n\ndeviations_in_stds = deviations / income_std\ndeviations_in_stds[:5]\n\narray([-2.53840816, -2.22715135, -2.07394072, -1.74064397, -1.73947326])\n\n\nThis procedure converts the original data (here incomes) to deviations from the mean in terms of the standard deviation. The resulting values are called standard scores or z-scores. One name for this procedure is “z-scoring”.\nIf you plot a histogram of the standard scores, you will see they have a mean of (actually exactly) 0, and a standard deviation of (actually exactly) 1.\n\n\n\n\nWith all this information — what should we conclude about the two districts in question? KM’s district is 0.26 standard deviations above the mean, but that’s not enough to conclude that it is unusual.
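As a quick check, and assuming the km_index position and the deviations_in_stds array defined above, we can read that figure straight from the standard scores:

# KM's standard score, read from the array of standard scores.
print('KM standard score:', np.round(deviations_in_stds[km_index], 2))
# This should print 0.26, the value quoted above.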
We see from the histogram that a large proportion of the districts are at least this distance from the mean. We can calculate that proportion directly.\n\n# Distances (negative or positive) from the mean.\nabs_std_devs = np.abs(deviations_in_stds)\n# Number where distance greater than KM distance.\nn_gt_km = np.sum(abs_std_devs > km_std_devs)\nprop_gt_km = n_gt_km / len(deviations_in_stds)\nprint(\"Proportion of districts further from mean than KM:\",\n np.round(prop_gt_km, 2))\n\nProportion of districts further from mean than KM: 0.82\n\n\nA full 82% of districts are further from the mean than is KM’s district. KM’s district is richer than average, but not unusual. The benefit of the standard deviation distance is that we can see this directly from the value, without doing the calculation of proportions, because the standard deviation is a measure of typical spread, and KM’s district is well-within this measure.\nAOC’s district is -0.81 standard deviations from the mean. This is a little more unusual than KM’s score.\n\n# Number where distance greater than AOC distance.\n# Make AOC's distance positive to correspond to distance from the mean.\nn_gt_aoc = np.sum(abs_std_devs > np.abs(aoc_std_devs))\nprop_gt_aoc = n_gt_aoc / len(deviations_in_stds)\nprint(\"Proportion of districts further from mean than AOC:\",\n np.round(prop_gt_aoc, 2))\n\nProportion of districts further from mean than AOC: 0.35\n\n\nOnly 35% of districts are further from the mean than AOC’s district, but this is still a reasonable proportion. We see from the standard score that AOC is within one standard deviation. AOC’s district is poorer than average, but not to a remarkable degree." + }, + { + "objectID": "standard_scores.html#standard-scores-to-compare-values-on-different-scales", + "href": "standard_scores.html#standard-scores-to-compare-values-on-different-scales", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.9 Standard scores to compare values on different scales", + "text": "16.9 Standard scores to compare values on different scales\nWhy are standard scores so useful? They allow us to compare values on very different scales.\nConsider the values in Table 16.4. Each row of the table corresponds to a team competing in the English Premier League (EPL) for the 2021-2022 season. For those of you with absolutely no interest in sports, the EPL is the league of the top 20 teams in English football, or soccer to our North American friends. The points column of the table gives the total number of points at the end of the 2021 season (from 38 games). The team gets 3 points for a win, and 1 point for a draw, so the maximum possible points from 38 games are \\(3 * 38 = 114\\). 
The wages column gives the estimated total wage bill in thousands of British Pounds (£1000).\n\n\n\n\nTable 16.4: 2021 points and wage bills (£1000s) for EPL teams \n\n\nteam\npoints\nwages\n\n\n\n\nManchester City\n93\n168572\n\n\nLiverpool\n92\n148772\n\n\nChelsea\n74\n187340\n\n\nTottenham Hotspur\n71\n110416\n\n\nArsenal\n69\n118074\n\n\nManchester United\n58\n238780\n\n\nWest Ham United\n56\n77936\n\n\nLeicester City\n52\n81590\n\n\nBrighton and Hove Albion\n51\n49820\n\n\nWolverhampton Wanderers\n51\n62756\n\n\nNewcastle United\n49\n73308\n\n\nCrystal Palace\n48\n71910\n\n\nBrentford\n46\n28606\n\n\nAston Villa\n45\n85330\n\n\nSouthampton\n40\n58657\n\n\nEverton\n39\n110202\n\n\nLeeds United\n38\n37354\n\n\nBurnley\n35\n40830\n\n\nWatford\n23\n42030\n\n\nNorwich City\n22\n31750\n\n\n\n\n\n\n\n\nLet’s say we own Crystal Palace Football Club. Crystal Palace was a bit below average in the league in terms of points. Now we are thinking about whether we should invest in higher-paid players for the coming season, to improve our points score, and therefore, league position.\nOne thing we might like to know is whether there is an association between the wage bill and the points scored.\nTo look at that, we can do a scatter plot. This is a plot with — say — wages on the x-axis, and points on the y-axis. For each team we have a pair of values — their wage bill and their points scored. For each team, we put a marker on the scatter plot at the coordinates given by the wage value (on the x-axis) and the points value (on the y-axis).\nHere is that plot for our EPL data in Table 16.4, with the Crystal Palace marker picked out in red.\n\n\n\n\n\n\n\n\n\nIt looks like there is a rough association of wages and points; teams that spend more in wages tend to have more points.\nAt the moment, the points and wages are in very different units. Points are on a possible scale of 0 (lose every game) to 38 * 3 = 114 (win every game). Wages are in thousands of pounds. Maybe we are not interested in the values in these units, but in how unusual the values are, in terms of wages, and in terms of points.\nThis is a good application of standard scores. Standard scores convert the original values to values on a standard scale, where 0 corresponds to an average value, 1 to a value one standard deviation above the mean, and -1 to a value one standard deviation below the mean. If we follow the standard score process for both points and wages, the values will be in the same standard units.\nTo do this calculation, we need the values from the table. We follow the same recipe as before, in loading the data with Pandas, and converting to arrays.\n\nimport numpy as np\nimport pandas as pd\n\npoints_wages = pd.read_csv('data/premier_league.csv')\npoints = np.array(points_wages['points'])\nwages = np.array(points_wages['wages'])\n\nAs you recall, the standard deviation is the square root of the mean squared deviation. In code:\n\n# The standard deviation is the square root of the\n# mean squared deviation.\nwage_deviations = wages - np.mean(wages)\nwage_std = np.sqrt(np.mean(wage_deviations ** 2))\nwage_std\n\n55523.946071289814\n\n\nNow we can apply the standard score procedure to wages. 
We divide the deviations by the standard deviation.\n\nstandard_wages = (wages - np.mean(wages)) / wage_std\n\nWe apply the same procedure to the points:\n\npoint_deviations = points - np.mean(points)\npoint_std = np.sqrt(np.mean(point_deviations ** 2))\nstandard_points = point_deviations / point_std\n\nNow, when we plot the standard score version of the points against the standard score version of the wages, we see that they are in comparable units, each with a mean of 0, and a spread (a standard deviation) of 1.\n\n\n\n\n\n\n\n\n\nLet us go back to our concerns as the owners of Crystal Palace. Counting down from the top in the table above, we see that Crystal Palace is the 12th row. Therefore, we can get the Crystal Palace wage value with:\n\n# In Python the 12th value is at position (index) 11\ncp_index = 11\ncp_wages = wages[cp_index]\ncp_wages\n\n71910\n\n\nWe can get our wage bill in standard units in the same way:\n\ncp_standard_wages = standard_wages[cp_index]\ncp_standard_wages\n\n-0.3474473873890471\n\n\nOur wage bill is a below average, but its still within striking distance of the mean.\nWe know that we are comparing ourselves against the other teams, so perhaps we want to increase our wage bill by one standard deviation, to push us above the mean, and somewhat away from the center of the pack. If we add one standard deviation to our wage bill, that increases the standard score of our wages by 1.\nBut — if we increase our wages by one standard deviation — how much can we expect that to increase our points — in standard units.\nThat is question about the strength of the association between two measures — here wages and points — and we will cover that topic in much more detail in Chapter 29. But, racing ahead — here is the answer to the question we have just posed — the amount we expect to gain in points, in standard units, if we increase our wages by one standard deviation (and therefore, 1 in standard units).\nFor reasons we won’t justify now, we calculate the \\(r\\) value of association between wages and points, like this:\n\nstandards_multiplied = standard_wages * standard_points\nr = np.mean(standards_multiplied)\nr\n\n0.7080086644844557\n\n\nThe \\(r\\) value is the answer to our question. For every one unit increase in standard scores in wages, we expect an increase of \\(r\\) (0.708) standard score units in points." + }, + { + "objectID": "standard_scores.html#conclusion", + "href": "standard_scores.html#conclusion", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.10 Conclusion", + "text": "16.10 Conclusion\nWhen we look at a set of values, we often ask questions about whether individual values are unusual or surprising. One way of doing that is to look at where the values are in the sorted order — for example, using the raw rank of values, or the proportion of values below this value — the quantiles or percentiles of a value. Another measure of interest is where a value is in comparison to the spread of all values either side of the mean. We use the term “deviations” to refer to the original values after we have subtracted the mean of the values. We can measure spread either side of the mean with metrics such as the mean of the absolute deviations (MAD) and the square root of the mean squared deviations (the standard deviation). One common use of the deviations and the standard deviation is to transform values into standard scores. 
These are the deviations divided by the standard deviation, and they transform values to have a standard mean (zero) and spread (standard deviation of 1). This can make it easier to compare sets of values with very different ranges and means.\n\n\n\n\nHyndman, Rob J, and Yanan Fan. 1996. “Sample Quantiles in Statistical Packages.” The American Statistician 50 (4): 361–65. https://www.jstor.org/stable/pdf/2684934.pdf.\n\n\nPiketty, Thomas. 2018. “Brahmin Left Vs Merchant Right: Rising Inequality & the Changing Structure of Political Conflict.” 2018. https://www.prsinstitute.org/downloads/related/economics/RisingInequalityandtheChangingStructureofPoliticalConflict1.pdf." + }, + { + "objectID": "inference_ideas.html#knowledge-without-probabilistic-statistical-inference", + "href": "inference_ideas.html#knowledge-without-probabilistic-statistical-inference", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.1 Knowledge without probabilistic statistical inference", + "text": "17.1 Knowledge without probabilistic statistical inference\nLet us distinguish two kinds of knowledge with which inference at large (that is, not just probabilistic statistical inference) is mainly concerned: a) one or more absolute measurements on one or more dimensions of a collection of one or more items — for example, your income, or the mean income of the people in your country; and b) comparative measurements and evaluations of two or more collections of items (especially whether they are equal or unequal)—for example, the mean income in Brazil compared to the mean income in Argentina. Types (a) and (b) both include asking whether there has been a change between one observation and another.\nWhat is the conceptual basis for gathering these types of knowledge about the world? I believe that our rock bottom conceptual tool is the assumption of what we may call sameness , or continuity , or constancy , or repetition , or equality , or persistence ; “constancy” and “continuity” will be the terms used most frequently here, and I shall use them interchangeably.\nContinuity is a non-statistical concept. It is a best guess about the next point beyond the known observations, without any idea of the accuracy of the estimate. It is like testing the ground ahead when walking in a marsh. It is local rather than global. We’ll talk a bit later about why continuity seems to be present in much of the world that we encounter.\nThe other great concept in statistical inference, and perhaps in all inference taken together, is representative (usually random) sampling, to be discussed in Chapter 18. Representative sampling — which depends upon the assumption of sameness (homogeneity) throughout the universe to be investigated — is quite different than continuity; representative sampling assumes that there is no greater chance of a connection between any two elements that might be drawn into the sample than between any other two elements; the order of drawing is immaterial. In contrast, continuity assumes that there is a greater chance of connection between two contiguous elements than between either one of the elements and any of the many other elements that are not contiguous to either. Indeed, the process of randomizing is a device for doing away with continuity and autocorrelation within some bounded closed system — the sample “frame.” It is an attempt to map (describe) the entire area ahead using the device of the systematic survey. 
Random representative sampling enables us to make probabilistic inferences about a population based on the evidence of a sample.\n\nTo return now to the concept of sameness: Examples of the principle are that we assume: a) our house will be in the same place tomorrow as today; b) a hammer will break an egg every time you hit the latter with the former (or even the former with the latter); c) if you observe that the first fifteen persons you see walking out of a door at the airport are male, the sixteenth probably will be male also; d) paths in the village stay much the same through a person’s life; e) religious ritual changes little through the decades; f) your best guess about tomorrow’s temperature or stock price is that will be the same as today’s. This principle of constancy is related to David Hume’s concept of constant conjunction .\nWhen my children were young, I would point to a tree on our lawn and ask: “Do you think that tree will be there tomorrow?” And when they would answer “Yes,” I’d ask, “Why doesn’t the tree fall?” That’s a tough question to answer.\nThere are two reasonable bases for predicting that the tree will be standing tomorrow. First and most compelling for most of us is that almost all trees continue standing from day to day, and this particular one has never fallen; hence, what has been in the past is likely to continue. This assessment requires no scientific knowledge of trees, yet it is a very functional way to approach most questions concerning the trees — such as whether to hang a clothesline from it, or whether to worry that it will fall on the house tonight. That is, we can predict the outcome in this case with very high likelihood of being correct even though we do not utilize anything that would be called either science or statistical inference. (But what do you reply when your child says: “Why should I wear a seat belt? I’ve never been in an accident”?)\nA second possible basis for prediction that the tree will be standing is scientific analysis of the tree’s roots — how the tree’s weight is distributed, its sickness or health, and so on. Let’s put aside this sort of scientific-engineering analysis for now.\nThe first basis for predicting that the tree will be standing tomorrow — sameness — is the most important heuristic device in all of knowledge-gathering. It is often a weak heuristic; certainly the prediction about the tree would be better grounded (!) after a skilled forester examines the tree. But persistence alone might be a better heuristic in a particular case than an engineering-scientific analysis alone.\nThis heuristic appears more obvious if the child — or the adult — were to respond to the question about the tree with another question: Why should I expect it to fall ? In the absence of some reason to expect change, it is quite reasonable to expect no change. And the child’s new question does not duck the central question we have asked about the tree, any more than one ducks a probability estimate by estimating the complementary probability (that is, unity minus the probability sought); indeed, this is a very sound strategy in many situations.\n\nConstancy can refer to location, time, relationship to another variable, or yet another dimension. Constancy may also be cyclical. Some cyclical changes can be charted or mapped with relative certainty — for example the life-cycles of persons, plants, and animals; the diurnal cycle of dark and light; and the yearly cycle of seasons. The courses of some diseases can also be charted. 
Hence these kinds of knowledge have long been well known.\nConsider driving along a road. One can predict that the price of the next gasoline station will be within a few cents of the gasoline station that you just passed. But as you drive further and further, the dispersion increases as you cross state lines and taxes differ. This illustrates continuity.\nThe attention to constancy can focus on a single event, such as leaves of similar shape appearing on the same plant. Or attention can focus on single sequences of “production,” as in the process by which a seed produces a tree. For example, let’s say you see two puppies — one that looks like a low-slung dachshund, and the other a huge mastiff. You also see two grown male dogs, also apparently dachshund and mastiff. If asked about the parentage of the small ones, you are likely — using the principle of sameness — to point — quickly and with surety — to the adult dogs of the same breed. (Here it is important to notice that this answer implicitly assumes that the fathers of the puppies are among these dogs. But the fathers might be somewhere else entirely; it is in these ways that the principle of sameness can lead you astray.)\nWhen applying the concept of sameness, the object of interest may be collections of data, as in Semmelweiss’s (1983, 64) data on the consistent differences in rates of maternal deaths from childbed fever in two clinics with different conditions (see Table 17.1), or the similarities in sex ratios from year to year in Graunt’s (1759, 304) data on christenings in London (Table 17.2), or the stark effect in John Snow’s (Winslow 1980, 276) data on the numbers of cholera cases associated with two London water suppliers (Table 17.3), or Kanehiro Takaki’s (Kornberg 1991, 9) discovery of the reduction in beriberi among Japanese sailors as a result of a change in diet (Table 17.4). These data seem so overwhelmingly clear cut that our naive statistical sense makes the relationships seem deterministic, and the conclusions seems straightforward. 
(But the same statistical sense frequently misleads us when considering sports and stock market data.)\n\n\nTable 17.1: Deaths of Mothers from childbed fever in two clinics\n\n\n\n\n\n\n\n\n\n\n\n\nFirst clinic\nSecond clinic\n\n\n\nBirths\nDeaths\nRate\nBirths\nDeaths\nRate\n\n\n\n\n1841\n3,036\n237\n7.7\n2,442\n86\n3.5\n\n\n1842\n3,287\n518\n15.8\n2,659\n202\n7.5\n\n\n1843\n3,060\n274\n8.9\n2,739\n164\n5.9\n\n\n1844\n3,157\n260\n8.2\n2,956\n68\n2.3\n\n\n1845\n3,492\n241\n6.8\n3,241\n66\n2.03\n\n\n1845\n4,010\n459\n11.4\n3,754\n105\n2.7\n\n\n\nTotal\n20,042\n1,989\n\n17,791\n691\n\n\n\nAverage\n\n\n9.92\n\n\n3.38\n\n\n\n\n\n\n\nTable 17.2: Ratio of number of male to number of female christenings in London\n\n\nPeriod\nMale / Female ratio\n\n\n\n\n1629-1636\n1.072\n\n\n1637-1640\n1.073\n\n\n1641-1648\n1.063\n\n\n1649-1656\n1.095\n\n\n1657-1660\n1.069\n\n\n\n\n\n\nTable 17.3: Rates of death from cholera for three water suppliers\n\n\nWater supplier\nCholera deaths per 10,000 houses\n\n\n\n\nSouthwark and Vauxhall\n71\n\n\nLambeth\n5\n\n\nRest of London\n9\n\n\n\n\n\n\nTable 17.4: Takaki’s Japanese Naval Records of Deaths from Beriberi\n\n\n\n\n\n\n\n\nYear\nDiet\nTotal Navy Personnel\nDeaths from Beriberi\n\n\n\n\n1880\nRice diet\n4,956\n1,725\n\n\n1881\nRice diet\n4,641\n1,165\n\n\n1882\nRice diet\n4,769\n1,929\n\n\n1883\nRice Diet\n5,346\n1,236\n\n\n1884\nChange to new diet\n5,638\n718\n\n\n1885\nNew diet\n6,918\n41\n\n\n1886\nNew diet\n8,475\n3\n\n\n1887\nNew diet\n9,106\n0\n\n\n1888\nNew diet\n9,184\n0\n\n\n\n\nConstancy and sameness can be seen in macro structures; consider, for example, the constant location of your house. Constancy can also be seen in micro aggregations — for example, the raindrops and rain that account for the predictably fluctuating height of the Nile, or the ratio of boys to girls born in London, cases in which we can average to see the “statistical” sameness. The total sum of the raindrops produces the level of a reservoir or a river from year to year, and the sum of the behaviors of collections of persons causes the birth rates in the various years.\nStatistical inference is only needed when a person thinks that s/he might have found a pattern but the pattern is not completely obvious to all. Probabilistic inference works to test — either to confirm or discount — the belief in the pattern’s existence. We will see such cases in the following chapter.\nPeople have always been forced to think about and act in situations that have not been constant — that is, situations where the amount of variability in the phenomenon makes it impossible to draw clear cut, sensible conclusions. For example, the appearance of game animals in given places and at given times has always been uncertain to hunters, and therefore it has always been difficult to know which target to hunt in which place at what time. And of course variability of the weather has always made it a very uncertain element. The behavior of one’s enemies and friends has always been uncertain, too, though uncertain in a manner different from the behavior of wild animals; there often is a gaming element in interactions with other humans. But in earlier times, data and techniques did not exist to enable us to bring statistical inference to bear." 
+ }, + { + "objectID": "inference_ideas.html#the-treatment-of-uncertainty", + "href": "inference_ideas.html#the-treatment-of-uncertainty", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.2 The treatment of uncertainty", + "text": "17.2 The treatment of uncertainty\nThe purpose of statistical inference is to help us peer through the veil of variability when it obscures the main thrust of the data, so as to improve the decisions we make. Statistical inference (or in most cases, simply probabilistic estimation) can help:\n\na gambler deciding on the appropriate odds in a betting game when there seems to be little or no difference between two or more outcomes;\nan astronomer deciding upon one or another value as the central estimate for the location of a star when there is considerable variation in the observations s/he has made of the star;\na basketball coach pondering whether to remove from the game her best shooter who has heretofore done poorly tonight;\nan oil-drilling firm debating whether to follow up a test-well drilling with a full-bore drilling when the probability of success is not overwhelming but the payoff to a gusher could be large.\n\nReturning to the tree near the Simon house: Let’s change the facts. Assume now that one major part of the tree is mostly dead, and we expect a big winter storm tonight. What is the danger that the tree will fall on the house? Should we spend $1500 to have the mostly-dead third of it cut down? We know that last year a good many trees fell on houses in the neighborhood during such a storm.\nWe can gather some data on the proportion of old trees this size that fell on houses — about 5 in 100, so far as we can tell. Now it is no longer an open-and-shut case about whether the tree will be standing tomorrow, and we are using statistical inference to help us with our thinking. We proceed to find a set of trees that we consider similar to this one , and study the variation in the outcomes of such trees. So far we have estimated that the average for this group of trees — the mean (proportion) that fell in the last big storm — is 5 percent. Averages are much more “stable” — that is, more similar to each other — than are individual cases.\nNotice how we use the crucial concept of sameness: We assume that our tree is like the others we observed, or at least that it is not systematically different from most of them and it is more-or-less average.\nHow would our thinking be different if our data were that one tree in 10 had fallen instead of 5 in 100? This is a question in statistical inference.\n\nHow about if we investigate further and find that 4 of 40 elms fell, but only one of 60 oaks , and ours is an oak tree. Should we consider that oaks and elms have different chances of falling? Proceeding a bit further, we can think of the question as: Should we or should we not consider oaks and elms as different? This is the type of statistical inference called “hypothesis testing”: We apply statistical procedures to help us decide whether to treat the two classes of trees as the same or different. If we should consider them the same, our worries about the tree falling are greater than if we consider them different with respect to the chance of damage.1\nNotice that statistical inference was not necessary for accurate prediction when I asked the kids about the likelihood of a live tree falling on a day when there would be no storm. So it is with most situations we encounter. 
But when the assumption of constancy becomes shaky for one reason or another, as with the sick tree falling in a storm, we need a more refined form of thinking. We collect data on a large number of instances, inquire into whether the instances in which we are interested (our tree and the chance of it falling) are representative — that is, whether it resembles what we would get if we drew a sample randomly — and we then investigate the behavior of this large class of instances to see what light it throws on the instances(s) in which we are interested.\nThe procedure in this case — which we shall discuss in greater detail later on — is to ask: If oaks and elms are not different, how likely is it that only one of 60 oaks would fall whereas 4 of 40 elms would fall? Again, notice the assumption that our tree is “representative” of the other trees about which we have information — that it is not systematically different from most of them, but rather that it is more-or-less average. Our tree certainly was not chosen randomly from the set of trees we are considering. But for purposes of our analysis, we proceed as if it had been chosen randomly — because we deem it “representative.”\nThis is the first of two roles that the concept of randomness plays in statistical thinking. Here is an example of the second use of the concept of randomness: We conduct an experiment — plant elm and oak trees at randomly-selected locations on a plot of land, and then try to blow them down with a wind-making machine. (The random selection of planting spots is important because some locations on a plot of ground have different growing characteristics than do others.) Some purists object that only this sort of experimental sampling is a valid subject of statistical inference; it can never be appropriate, they say, to simply assume on the basis of other knowledge that the tree is representative. I regard that purist view as a helpful discipline on our thinking. But accepting its conclusion — that one should not apply statistical inference except to randomly-drawn or randomly-constituted samples — would take from us a tool that has proven useful in a variety of activities.\nAs discussed earlier in this chapter, the data in some (probably most) scientific situations are so overwhelming that one can proceed without probabilistic inference. Historical examples include those shown above of Semmelweiss and puerperal fever, and John Snow and cholera.2 But where there was lack of overwhelming evidence, the causation of many diseases long remained unclear for lack of statistical procedures. This led to superstitious beliefs and counter-productive behavior, such as quarantines against plague often were. Some effective practices also arose despite the lack of sound theory, however — the waxed costumes of doctors, and the burning of mattresses, despite the wrong theory about the causation of plague; see (Cipolla 1981).\nSo far I have spoken only of predictability and not of other elements of statistical knowledge such as understanding and control . This is simply because statistical correlation is the bed rock of most scientific understanding, and predictability. Later we will expand the discussion beyond predictability; it holds no sacred place here." 
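Racing ahead to methods that the rest of this book develops in detail, here is a minimal sketch of how a resampling approach to the oaks-and-elms question posed above might look. It assumes, as that question does, a pool of 100 trees of which 5 fell, and asks how often a random split into 40 "elms" and 60 "oaks" puts 4 or more of the fallen trees among the "elms". The code style echoes the simulations used elsewhere in the book, but the details here are only illustrative:

import numpy as np

rnd = np.random.default_rng()

# 100 trees in all: 5 that fell (coded 1) and 95 that did not (coded 0).
fates = np.repeat([1, 0], [5, 95])

counts = np.zeros(10000)
for i in range(10000):
    # Shuffle the trees, ignoring species, as the "no difference" assumption implies.
    shuffled = rnd.permuted(fates)
    # Let the first 40 shuffled trees play the role of the elms.
    counts[i] = np.sum(shuffled[:40])

# How often did the "elms" get 4 or more of the 5 falls?
print('Proportion of trials with 4 or more fallen "elms":', np.mean(counts >= 4))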
+ }, + { + "objectID": "inference_ideas.html#where-statistical-inference-becomes-crucial", + "href": "inference_ideas.html#where-statistical-inference-becomes-crucial", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.3 Where statistical inference becomes crucial", + "text": "17.3 Where statistical inference becomes crucial\nThere was little role for statistical inference until about three centuries ago because there existed very few scientific data. When scientific data began to appear, the need emerged for statistical inference to improve the interpretation of the data. As we saw, statistical inference is not needed when the evidence is overwhelming. A thousand cholera cases at one well and zero at another obviously does not require a statistical test. Neither would 999 cases to one, or even 700 cases to 300, because our inbred and learned statistical senses can detect that the two situations are different. But probabilistic inference is needed when the number of cases is relatively small or where for other reasons the data are somewhat ambiguous.\nFor example, when working with the 17th century data on births and deaths, John Graunt — great statistician though he was — drew wrong conclusions about some matters because he lacked modern knowledge of statistical inference. For example, he found that in the rural parish of Romsey “there were born 15 Females for 16 Males, whereas in London there were 13 for 14, which shows, that London is somewhat more apt to produce Males, then the country” (p. 71). He suggests that the “curious” inquire into the causes of this phenomenon, apparently not recognizing — and at that time he had no way to test — that the difference might be due solely to chance. He also notices (p. 94) that the variations in deaths among years in Romsey were greater than in London, and he attempted to explain this apparent fact (which is just a statistical artifact) rather than understanding that this is almost inevitable because Romsey is so much smaller than London. Because we have available to us the modern understanding of variability, we can now reach sound conclusions on these matters.3\nSummary statistics — such as the simple mean — are devices for reducing a large mass of data (inevitably confusing unless they are absolutely clear cut) to something one can manage to understand. And probabilistic inference is a device for determining whether patterns should be considered as facts or artifacts.\nHere is another example that illustrates the state of early quantitative research in medicine:\n\nExploring the effect of a common medicinal substance, Bőcker examined the effect of sasparilla on the nitrogenous and other constituents of the urine. An individual receiving a controlled diet was given a decoction of sasparilla for a period of twelve days, and the volume of urine passed daily was carefully measured. For a further twelve days that same individual, on the same diet, was given only distilled water, and the daily quantity of urine was again determined. The first series of researches gave the following figures (in cubic centimeters): 1,467, 1,744, 1,665, 1,220, 1,161, 1,369, 1,675, 2,199, 887, 1,634, 943, and 2,093 (mean = 1,499); the second series: 1,263, 1,740, 1,538, 1,526, 1,387, 1,422, 1,754, 1,320, 1,809, 2,139, 1,574, and 1,114 (mean = 1,549). Much uncertainty surrounded the exactitude of these measurements, but this played little role in the ensuing discussion. 
The fundamental issue was not the quality of the experimental data but how inferences were drawn from those data (Coleman 1987, 207).\n\nThe experimenter Böcker had no reliable way of judging whether the data for the two groups were or were not meaningfully different, and therefore he arrived at the unsound conclusion that there was indeed a difference. (Gustav Radicke used this example as the basis for early work on statistical significance (Støvring 1999).)\nAnother example: Joseph Lister convinced the scientific world of the germ theory of infection, and the possibility of preventing death with a disinfectant, with these data: Prior to the use of antiseptics — 16 post-operative deaths in 35 amputations; subsequent to the use of antiseptics — 6 deaths in 40 amputations (Winslow 1980, 303). But how sure could one be that a difference of that size might not occur just by chance? No one then could say, nor did anyone inquire, apparently.\nHere’s another example of great scientists falling into error because of a too-primitive approach to data (Feller 1968, 1:69–70): Charles Darwin wanted to compare two sets of measured data, each containing 16 observations. At Darwin’s request, Francis Galton compared the two sets of data by ranking each, and then comparing them pairwise. The a’s were ahead 13 times. Without knowledge of the actual probabilities Galton concluded that the treatment was effective. But, assuming perfect randomness, the probability that the a’s beat [the others] 13 times or more equals 3/16. This means that in three out of sixteen cases a perfectly ineffectual treatment would appear as good or better than the treatment classified as effective by Galton.\nThat is, Galton and Darwin reached an unsound conclusion. As Feller (1968, 1:70) says, “This shows that a quantitative analysis may be a valuable supplement to our rather shaky intuition”.\nLooking ahead, the key tool in situations like Graunt’s and Böcker’s and Lister’s is creating ceteris paribus — making “everything else the same” — with random selection in experiments, or at least with statistical controls in non-experimental situations." + }, + { + "objectID": "inference_ideas.html#conclusions", + "href": "inference_ideas.html#conclusions", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.4 Conclusions", + "text": "17.4 Conclusions\nIn all knowledge-seeking and decision-making, our aim is to peer into the unknown and reduce our uncertainty a bit. The two main concepts that we use — the two great concepts in all of scientific knowledge-seeking, and perhaps in all practical thinking and decision-making — are a) continuity (or non-randomness) and the extent to which it applies in given situation, and b) random sampling, and the extent to which we can assume that our observations are indeed chosen by a random process.\n\n\n\n\nCipolla, C. M. 1981. Fighting the Plague in Seventeenth-Century Italy. Merle Curti Lectures. Madison, Wisconsin: University of Wisconsin Press. https://books.google.co.uk/books?id=Ct\\_OJYgnKCsC.\n\n\nColeman, William. 1987. “Experimental Physiology and Statistical Inference: The Therapeutic Trial in Nineteenth Century Germany.” In The Probabilistic Revolution: Volume 2: Ideas in the Sciences, edited by Lorenz Krüger, Gerd Gigerenzer, and Mary S. Morgan. An MIT Press Classic. MIT Press. https://books.google.co.uk/books?id=SLftmgEACAAJ.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its Applications: Volume i. 3rd ed. Vol. 1. New York: John Wiley & Sons. 
https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nGraunt, John. 1759. “Natural and Political Observations Mentioned in a Following Index and Made Upon the Bills of Mortality.” In Collection of Yearly Bills of Mortality, from 1657 to 1758 Inclusive, edited by Thomas Birch. London: A. Miller. https://archive.org/details/collectionyearl00hebegoog.\n\n\nHald, Anders. 1990. A History of Probability and Statistics and Their Applications Before 1750. New York: John Wiley & Sons. https://archive.org/details/historyofprobabi0000hald.\n\n\nKornberg, Arthur. 1991. For the Love of Enzymes: The Odyssey of a Biochemist. Cambridge, Massachusetts: Harvard University Press. https://archive.org/details/forloveofenzymes00arth.\n\n\nSemmelweis, Ignác Fülöp. 1983. The Etiology, Concept, and Prophylaxis of Childbed Fever. Translated by K. Codell Carter. Madison, Wisconsin: University of Wisconsin Press. https://archive.org/details/etiologyconcepta0000unse.\n\n\nStøvring, H. 1999. “On Radicke and His Method for Testing Mean Differences.” Journal of the Royal Statistical Society: Series D (The Statistician) 48 (2): 189–201. https://www.jstor.org/stable/pdf/2681185.pdf.\n\n\nWinslow, Charles-Edward Amory. 1980. The Conquest of Epidemic Disease: A Chapter in the History of Ideas. Madison, Wisconsin: University of Wisconsin Press. https://archive.org/details/conquestofepidem0000wins_p3k0." + }, + { + "objectID": "inference_intro.html#statistical-inference-and-random-sampling", + "href": "inference_intro.html#statistical-inference-and-random-sampling", + "title": "18  Introduction to Statistical Inference", + "section": "18.1 Statistical inference and random sampling", + "text": "18.1 Statistical inference and random sampling\nContinuity and sameness is the fundamental concept in inference in general, as discussed in Chapter 17. Random sampling is the second great concept in inference, and it distinguishes probabilistic statistical inference from non-statistical inference as well as from non-probabilistic inference based on statistical data.\nLet’s begin the discussion with a simple though unrealistic situation. Your friend Arista a) looks into a cardboard carton, b) reaches in, c) pulls out her hand, and d) shows you a green ball. What might you reasonably infer?\nYou might at least be fairly sure that the green ball came from the carton, though you recognize that Arista might have had it concealed in her hand when she reached into the carton. But there is not much more you might reasonably conclude at this point except that there was at least one green ball in the carton to start with. There could be no more balls; there could be many green balls and no others; there could be a thousand red balls and just one green ball; and there could be one green ball, a hundred balls of different colors, and two pounds of mud — given that she looked in first, it is not improbable that she picked out the only green ball among other material of different sorts.\nThere is not much you could say with confidence about the probability of yourself reaching into the same carton with your eyes closed and pulling out a single green ball. To use other language (which some philosophers might say is not appropriate here as the situation is too specific), there is little basis for induction about the contents of the box. Nor is the situation very different if your friend reaches in three times in a row and hands you a green ball each time.\nSo far we have put our question rather vaguely. 
Let us frame a more precise inquiry: What do we predict about the next item(s) we might draw from the carton? If we assume — based on who-knows-what information or notions — that another ball will emerge, we could simply use the principle of sameness and (until we see a ball of another color) predict that the next ball will be green, whether one or three or 100 balls is (are) drawn.\nBut now what about if Arista pulls out nine green balls and one red ball? The principle of sameness cannot be applied as simply as before. Based on the last previous ball, the next one will be red. But taking into account all the balls we have seen, the next will “probably” be green. We have no solid basis on which to go further. There cannot be any “solution” to the “problem” of reaching a general conclusion on the basis of these specific pieces of evidence.\nNow consider what you might conclude if you were told that a single green ball had been drawn with a random sampling procedure from a box containing nothing but balls. Knowledge that the sample was drawn randomly from a given universe is grounds for belief that one knows much more than if a sample were not drawn randomly. First, you would be sure — if you had reasonable basis to believe that the sampling really was random, which is not easy to guarantee — that the ball came from the box. Second, you would guess that the proportion of green balls is not very small, because if there are only a few green balls and many other-colored balls, it would be unusual — that is, the event would have a low probability — to draw a green ball. Not impossible, but unlikely. And we can compute the probability of drawing a green ball — or any other combination of colors — for different assumed compositions within the box . So the knowledge that the sampling process is random greatly increases our ability — or our confidence in our ability — to infer the contents of the box.\nLet us note well the strategy of the previous paragraph: Ask about the probability that one or more various possible contents of the box (the “universe”) will produce the observed sample , on the assumption that the sample was drawn randomly. This is the central strategy of all statistical inference , though I do not find it so stated elsewhere. We shall come back to this idea shortly.\nThere are several kinds of questions one might ask about the contents of the box. One general category includes questions about our best guesses of the box’s contents — that is, questions of estimation . Another category includes questions about our surety of that description, and our surety that the contents are similar or different from the contents of other boxes; the consideration of surety follows after estimates are made. The estimation questions can be subtle and unexpected (Savage 1972, chap. 15), but do not cause major controversy about the foundations of statistics. So we can quickly move on to questions about the extent of surety in our estimations.\nConsider your reaction if the sampling produces 10 green balls in a row, or 9 out of 10. If you had no other information (a very important assumption that we will leave aside for now), your best guess would be that the box contains all green balls, or a proportion of 9 of 10, in the two cases respectively. This estimation process seems natural enough.\nYou would be surprised if someone told you that instead of the box containing the proportion in the sample, it contained just half green balls. How surprised? 
Intuitively, the extent of your surprise would depend on the probability that a half-green “universe” would produce 10 or 9 green balls out of 10. This surprise is a key element in the logic of the hypothesis-testing branch of statistical inference.\nWe learn more about the likely contents of the box by asking about the probability that various specific populations of balls within the box would produce the particular sample that we received. That is, we can ask how likely a collection of 25 percent green balls is to produce (say) 9 of 10 green ones, and how likely collections of 50 percent, 75 percent, 90 percent (and any other collections of interest) are to produce the observed sample. That is, we ask about the consistency between any particular hypothesized collection within the box and the sample we observe. And it is reasonable to believe that those universes which have greater consistency with the observed sample — that is, those universes that are more likely to produce the observed sample — are more likely to be in the box than other universes. This (to repeat, as I shall repeat many times) is the basic strategy of statistical investigation. If we observe 9 of 10 green balls, we then determine that universes with (say) 9/10 and 10/10 green balls are more consistent with the observed evidence than are universes of 0/10 and 1/10 green balls. So by this process of considering specific universes that the box might contain, we make possible more specific inferences about the box’s probable contents based on the sample evidence than we could without this process.\nPlease notice the role of the assessment of probabilities here: By one technical means or another (either simulation or formulas), we assess the probabilities that a particular universe will produce the observed sample, and other samples as well.\nIt is of the highest importance to recognize that without additional knowledge (or assumption) one cannot make any statements about the probability of the sample having come from any particular universe , on the basis of the sample evidence. (Better read that last sentence again.) We can only speak about the probability that a particular universe will produce the observed sample, a very different matter. This issue will arise again very sharply in the context of confidence intervals.\nLet us generalize the steps in statistical inference:\n\nFrame the original question as: What is the chance of getting the observed sample x from population X? That is, what is probability of (If x then X)?\nProceed to this question: What kinds of samples does X produce, with which probability? That is, what is the probability of this particular x coming from X? That is, what is p(x|X)?\nActually investigate the behavior of X with respect to x and other samples. One can do this in two ways:\n\nUse the formulaic calculus of probability, perhaps resorting to Monte Carlo methods if an appropriate formula does not exist. Or,\nUse resampling (in the larger sense), the domain of which equals (all Monte Carlo experimentation) minus (the use of Monte Carlo methods for approximations, investigation of complex functions in statistics and other theoretical mathematics, and uses elsewhere in science). 
Resampling in its more restricted sense includes the bootstrap, permutation tests, and other non-parametric methods.\n\nInterpretation of the probabilities that result from step 3 in terms of\n\ni) acceptance or rejection of hypotheses, ii) surety of conclusions, or iii) inputs to decision theory.\n\n\nHere is a short definition of statistical inference:\n\nThe selection of a probabilistic model that might resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.\n\nWe will get even more specific about the procedure when we discuss the canonical procedures for hypothesis testing and for the finding of confidence intervals in the chapters on those subjects.\nThe discussion so far has been in the spirit of what is known as hypothesis testing. The result of a hypothesis test is a decision about whether or not one believes that the sample is likely to have been drawn randomly from the “benchmark universe” X. The logic is that if the probability of such a sample coming from that universe is low, we will then choose to believe the alternative — to wit, that the sample came from the universe that resembles the sample.\n\nThe underlying idea is that if an event would be very surprising if it really happened — as it would be very surprising if the dog had really eaten the homework (see Chapter 21) — we are inclined not to believe in that possibility. (This logic will be explored further in later chapters on hypothesis testing.)\nWe have so far assumed that our only relevant knowledge is the sample. And though we almost never lack some additional information, this can be a sensible way to proceed when we wish to suppress any other information or speculation. This suppression is controversial; those known as Bayesians or subjectivists want us to take into account all the information we have. But even they would not dispute suppressing information in certain cases — such as a teacher who does not want to know students’ IQ scores because s/he might want to avoid the possibility of unconsciously being affected by that score, or an employer who wants not to know the potential employee’s ethnic or racial background even though the hiring process might be more “successful” on some metric, or a sports coach who refuses to pick the starting team each year until the players have competed for the positions.\n\nNow consider a variant on the green-ball situation discussed above. Assume now that you are told that samples of balls are alternately drawn from one of two specified universes — two buckets of balls, one with 50 percent green balls and the other with 80 percent green balls. Now you are shown a sample of nine green balls and one red ball drawn from one of those buckets. On the basis of your sample you can then say how probable it is that the sample came from one or the other universe. You proceed by computing the probabilities (often called the likelihoods in this situation) that each of those two universes would individually produce the observed samples — probabilities that you could arrive at with resampling, with Pascal’s Triangle, or with a table of binomial probabilities, or with the Normal approximation and the Z distribution, or with yet other devices. Those probabilities are .01 and .27, and the ratio of the two (.01/.27) is a bit less than .04. That is, fair betting odds are about 1 to 27. (The simulation sketch below shows one way to check these two probabilities.)\nLet us consider a genetics problem on this model. Plant A produces 3/4 black seeds and 1/4 reds; plant B produces all reds. You get a red seed. 
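Before turning to the seed question, here is a minimal simulation sketch of the two-bucket calculation just described. It is our illustration, not part of the original text: it assumes NumPy, the variable names are invented, and a binomial formula would serve equally well.

```python
import numpy as np

# Sketch: estimate the chance that each hypothesized bucket produces
# exactly nine green balls (and one other) in a sample of ten.
rng = np.random.default_rng(42)
n_trials = 100_000

likelihoods = {}
for p_green in (0.5, 0.8):                        # the two specified buckets
    draws = rng.random((n_trials, 10)) < p_green  # True marks a green ball
    likelihoods[p_green] = np.mean(draws.sum(axis=1) == 9)

print(likelihoods)                          # roughly {0.5: 0.01, 0.8: 0.27}
print(likelihoods[0.5] / likelihoods[0.8])  # roughly 0.037, odds of about 1 to 27
```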
Which plant would you guess produced it? You surely would guess plant B. Now, how about 9 reds and a black, from Plants A and C, the latter producing 50 percent reds on average?\nTo put the question more precisely: What betting odds would you give that the one red seed came from plant B? Let us reason this way: If you do this again and again, 4 of 5 of the red seeds you see will come from plant B. Therefore, reasonable (or “fair”) odds are 4 to 1, because this is in accord with the ratios with which red seeds are produced by the two plants — 4/4 to 1/4.\nHow about the sample of 9 reds and a black, and plants A and C? It would make sense that the appropriate odds would be derived from the probabilities of the two plants producing that particular sample, probabilities which we computed above.\nNow let us move to a bit more complex problem: Consider two buckets — bucket G with 2 red and 1 black balls, and bucket H with 100 red and 100 black balls. Someone flips a coin to decide which bucket will be drawn from, reaches into that bucket, and chooses two balls without replacing the first one before drawing the second. Both are red. What are the odds that the sample came from bucket G? Clearly, the answer should derive from the probabilities that the two buckets would produce the observed sample.\n(Now just for fun, how about if the first ball drawn is thrown back after examining? What now are the appropriate odds?)\nLet’s restate the central issue. One can state the probability that a particular plant which produces on average 1 red and 3 black seeds will produce one red seed, or 5 reds among a sample of 10. But without further assumptions — such as the assumption above that the possibilities are limited to two specific universes — one cannot say how likely a given red seed is to have come from a given plant, even if we know that that plant produces only reds. (For example, it may have come from other plants producing only red seeds.)\nWhen we limit the possibilities to two universes (or to a larger set of specified universes) we are able to put a probability on one hypothesis or another. But to repeat, in many or most cases, one cannot reasonably assume it is only one or the other. And then we cannot state any odds that the sample came from a particular universe. This is a very difficult point to grasp, experience shows, but a crucial one. (It is the sort of subtle issue that makes statistics so difficult.)\nThe additional assumptions necessary to talk about the probability that the red seed came from a given plant are the stuff of statistical inference. And they must be combined with such “objective” probabilistic assessments as the probability that a 1-red-3-black plant will produce one red, or 5 reds among 10 seeds.\nNow let us move one step further. Instead of stating as a fact under our control that there is a .5 chance of the sample being drawn from each of the two buckets in the problem above, let us assume that we do not know the probability of each bucket being picked, but instead we estimate a probability of .5 for each bucket, based on a variety of other information that all is uncertain. 
But though the facts are now different, the most reasonable estimate of the odds that the observed sample was drawn from one or the other bucket will not be different than before — because in both situations we were working with a “prior probability” of .5.\n\nNow let us go a step further by allowing the universes from which the sample may have come to have different assumed probabilities as well as different compositions. That is, we now consider prior probabilities other than .5.\nHow do we decide which universe(s) to investigate for the probability of producing the observed sample, and of producing samples that are even less likely, in the sense of being more surprising? That judgment depends upon the purpose of your analysis, upon your point of view of how statistics ought to be done, and upon some other factors.\nIt should be noted that the logic described so far applies in exactly the same fashion whether we do our work estimating probabilities with the resampling method or with conventional methods. We can figure the probability of nine or more green chips from a universe of (say) p = .7 with either approach.\nSo far we have discussed the comparison of various hypotheses and possible universes. We must also consider where the consideration of the reliability of estimates comes in. This leads to the concept of confidence limits, which will be discussed in Chapter 26 and Chapter 27." + }, + { + "objectID": "inference_intro.html#samples-whose-observations-may-have-more-than-two-values", + "href": "inference_intro.html#samples-whose-observations-may-have-more-than-two-values", + "title": "18  Introduction to Statistical Inference", + "section": "18.2 Samples Whose Observations May Have More Than Two Values", + "text": "18.2 Samples Whose Observations May Have More Than Two Values\nSo far we have discussed samples and universes that we can characterize as proportions of elements which can have only one of two characteristics — green or other, in this case, which is equivalent to “1” or “0.” This expositional choice has been solely for clarity. All the ideas discussed above pertain just as well to samples whose observations may have more than two values, and which may be either discrete or continuous." + }, + { + "objectID": "inference_intro.html#summary-and-conclusions", + "href": "inference_intro.html#summary-and-conclusions", + "title": "18  Introduction to Statistical Inference", + "section": "18.3 Summary and conclusions", + "text": "18.3 Summary and conclusions\nA statistical question asks about the probabilities of a sample having arisen from various source universes in light of the evidence of a sample. In every case, the statistical answer comes from considering the behavior of particular specified universes in relation to the sample evidence and to the behavior of other possible universes. That is, a statistical problem is an exercise in postulating universes of interest and interpreting the probabilistic distributions of results of those universes. The preceding sentence is the key operational idea in statistical inference.\nDifferent sorts of realistic contexts call for different ways of framing the inquiry. For each of the established models there are types of problems which fit that model better than other models, and other types of problems for which the model is quite inappropriate.\nFundamental wisdom in statistics, as in all other contexts, is to employ a large tool kit rather than just applying only a hammer, screwdriver, or wrench no matter what the problem is at hand. 
(Philosopher Abraham Kaplan once stated Kaplan’s Law of scientific method: Give a small boy a hammer and there is nothing that he will encounter that does not require pounding.) Studying the text of a poem statistically to infer whether Shakespeare or Bacon was the more likely author is quite different than inferring whether bioengineer Smythe can produce an increase in the proportion of calves, and both are different from decisions about whether to remove a basketball player from the game or to produce a new product.\nSome key points: 1) In statistical inference as in all sound thinking, one’s purpose is central . All judgments should be made relative to that purpose, and in light of costs and benefits. (This is the spirit of the Neyman-Pearson approach). 2) One cannot avoid making judgments; the process of statistical inference cannot ever be perfectly routinized or objectified. Even in science, fitting a model to experience requires judgment. 3) The best ways to infer are different in different situations — economics, psychology, history, business, medicine, engineering, physics, and so on. 4) Different tools must be used when the situations call for them — sequential vs. fixed sampling, Neyman-Pearson vs. Fisher, and so on. 5) In statistical inference it is wise not to argue about the proper conclusion when the data and procedures are ambiguous. Instead, whenever possible, one should go back and get more data, hence lessening the importance of the efficiency of statistical tests. In some cases one cannot easily get more data, or even conduct an experiment, as in biostatistics with cancer patients. And with respect to the past one cannot produce more historical data. But one can gather more and different kinds of data, e.g. the history of research on smoking and lung cancer.\n\n\n\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New York: Dover Publications, Inc." + }, + { + "objectID": "point_estimation.html#ways-to-estimate-the-mean", + "href": "point_estimation.html#ways-to-estimate-the-mean", + "title": "19  Point Estimation", + "section": "19.1 Ways to estimate the mean", + "text": "19.1 Ways to estimate the mean\n\n19.1.1 The Method of Moments\nSince elementary school you have been taught to estimate the mean of a universe (or calculate the mean of a sample) by taking a simple arithmetic average. A fancy name for that process is “the method of moments.” It is the equivalent of estimating the center of gravity of a pole by finding the place where it will balance on your finger. If the pole has the same size and density all along its length, that balance point will be halfway between the endpoints, and the point may be thought of as the arithmetic average of the distances from the balance point of all the one-centimeter segments of the pole.\nConsider this example:\nExample: Twenty-nine Out of Fifty People Polled Say They Will Vote For The Democrat. Who Will Win The Election? The Relationship Between The Sample Proportion and The Population Proportion in a Two-Outcome Universe.\nYou take a random sample of 50 people in Maryland and ask which party’s candidate for governor they will vote for. Twenty-nine say they will vote for the Democrat. Let’s say it is reasonable to assume in this case that people will vote exactly as they say they will. 
The statistical question then facing you is: What proportion of the voters in Maryland will vote for the Democrat in the general election?\nYour intuitive best guess is that the proportion of the “universe” — which is composed of voters in the general election, in this case — will be the same as the proportion of the sample. That is, 58 percent = 29/50 is likely to be your guess about the proportion that will vote Democratic. Of course, your estimate may be too high or too low in this particular case, but in the long run — that is, if you take many samples like this one — on the average the sample mean will equal the universe (population) proportion, for reasons to be discussed later.\nThe sample mean seems to be the “natural” estimator of the population mean in this and many other cases. That is, it seems quite natural to say that the best estimate is the sample mean, and indeed it probably is best. But why? This is the problem of inverse probability that has bedeviled statisticians for two centuries.\nIf the only information that you have (or that seems relevant) is the evidence of the sample, then there would seem to be no basis for judging that the shape and location of the population differs to the “left” or “right” from that of the sample. That is often a strong argument.\nAnother way of saying much the same thing: If a sample has been drawn randomly, each single observation is a representative estimator of the mean; if you only have one observation, that observation is your best guess about the center of the distribution (if you have no reason to believe that the distribution of the population is peculiar — such as not being symmetrical). And therefore the sum of 2, 3…n of such observations (divided by their number) should have that same property, based on basic principles.\nBut if you are on a ship at sea and a leaf comes raining down from the sky, your best guess about the location of the tree from which it comes is not directly above you, and if two leaves fall, the midpoint of them is not the best location guess, either; you know that trees don’t grow at sea, and birds sometimes carry leaves out to sea.\nWe’ll return to this subject when we discuss criteria of methods.\n\n\n19.1.2 Expected Value and the Method of Moments\nConsider this gamble: You and another person roll a die. If it falls with the “6” upwards you get $4, and otherwise you pay $1. If you play 120 times, at the end of the day you would expect to have (20 * $4 - 100 * $1 =) -$20 dollars. We say that -$20 is your “expected value,” and your expected value per roll is (-$20 / 120 =) $.166 or the loss of 1/6 of a dollar. If you get $5 instead of $4, your expected value is $0.\nThis is exactly the same idea as the method of moments, and we even use the same term — “expected value,” or “expectation” — for the outcome of a calculation of the mean of a distribution. We say that the expected value for the success of rolling a “6” with a single cast of a die is 1/6, and that the expected value of rolling a “6” or a “5” is (1/6 + 1/6 = ) 2/6.\n\n\n19.1.3 The Maximum Likelihood Principle\nAnother way of thinking about estimation of the population mean asks: Which population(s) would, among the possible populations, have the highest probability of producing the observed sample? This criterion frequently produces the same answer as the method of moments, but in some situations the estimates differ. 
Furthermore, the logic of the maximum-likelihood principle is important.\nConsider that you draw without replacement six balls — 2 black and 4 white — from a bucket that contains twenty balls. What would you guess is the composition of the bucket from which they were drawn? Is it likely that those balls came from a bucket with 4 white and 16 black balls? Rather obviously not, because it would be most unusual to get all the 4 white balls in your draw. Indeed, we can estimate the probability of that happening with simulation or formula to be about .003.\nHow about a bucket with 2 black and 18 whites? The probability is much higher than with the previous bucket, but it still is low — about .075.\nLet us now estimate the probabilities for all buckets across the range of probabilities. In Figure 19.1 we see that the bucket with the highest probability of producing the observed sample has the same proportions of black and white balls as does the sample. This is called the “maximum likelihood universe.” Nor should this be very surprising, because that universe obviously has an equal chance of producing samples with proportions below and above that observed proportion — as was discussed in connection with the method of moments.\nWe should note, however, that the probability that even such a maximum-likelihood universe would produce exactly the observed sample is very low (though it has an even lower probability of producing any other sample).\n\n\n\n\n\nFigure 19.1: Number of White Balls in the Universe (N=20)" + }, + { + "objectID": "point_estimation.html#choice-of-estimation-method", + "href": "point_estimation.html#choice-of-estimation-method", + "title": "19  Point Estimation", + "section": "19.2 Choice of Estimation Method", + "text": "19.2 Choice of Estimation Method\nWhen should you base your estimate on the method of moments, or of maximum likelihood, or still some other principle? There is no general answer. Sound estimation requires that you think long and hard about the purpose of your estimation, and fit the method to the purpose. I am well aware that this is a very vague statement. But though it may be an uncomfortable idea to live with, guidance to sound statistical method must be vague because it requires sound judgment and deep knowledge of the particular set of facts about the situation at hand." + }, + { + "objectID": "point_estimation.html#criteria-of-estimates", + "href": "point_estimation.html#criteria-of-estimates", + "title": "19  Point Estimation", + "section": "19.3 Criteria of estimates", + "text": "19.3 Criteria of estimates\nHow should one judge the soundness of the process that produces an estimate? General criteria include representativeness and accuracy . But these are pretty vague; we’ll have to get more specific.\n\n19.3.1 Unbiasedness\nConcerning representativeness: We want a procedure that will not be systematically in error in one direction or another. In technical terms, we want an “unbiased estimate,” if possible. “Unbiased” in this case does not mean “friendly” or “unprejudiced,” but rather implies that on the average — that is, in the long run, after taking repeated samples — estimates that are too high will about balance (in percentage terms) those that are too low. The mean of the universe (or the proportion, if we are speaking of two-valued “binomial situations”) is a frequent object of our interest. 
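Looking back at the ball-drawing example in Section 19.1.3, the probabilities quoted there, and the shape of Figure 19.1, can be checked with a short sketch. This is our illustration rather than the book's own code; it uses SciPy's hypergeometric distribution, though a resampling loop would do just as well.

```python
from scipy.stats import hypergeom

N, n_draws = 20, 6        # bucket size and sample size from the example above
white_in_sample = 4       # the observed sample: 4 white and 2 black balls

# Probability that a bucket holding `white` white balls (of 20) yields
# exactly 4 white balls in 6 draws without replacement.
for white in range(4, 19):
    p = hypergeom.pmf(white_in_sample, N, white, n_draws)
    print(f"{white:2d} white in bucket: P(observed sample) = {p:.3f}")

# The largest values occur for buckets whose proportion of white balls is
# closest to the sample's 4/6, the "maximum likelihood universe" of the text.
```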
And the sample mean is (in most cases) an unbiased estimate of the population mean.\nLet’s now see an informal proof that the mean of a randomlydrawn sample is an “unbiased” estimator of the population mean. That is, the errors of the sample means will cancel out after repeated samples because the mean of a large number of sample means approaches the population mean. A second “law” to be informally proven is that the size of the inaccuracy of a sample proportion is largest when the population proportion is near 50 percent, and smallest when it approaches zero percent or 100 percent.\nThe statement that the sample mean is an unbiased estimate of the population mean holds for many but not all kinds of samples — proportions of two-outcome (Democrat-Republican) events (as in this case) and also the means of many measured-data universes (heights, speeds, and so on) that we will come to later.\nBut, you object, I have only said that this is so; I haven’t proven it. Quite right. Now we will go beyond this simple assertion, though we won’t reach the level of formal proof. This discussion applies to conventional analytic statistical theory as well as to the resampling approach.\nWe want to know why the mean of a repeated sample — or the proportion, in the case of a binomial universe — tends to equal the mean of the universe (or the proportion of a binomial sample). Consider a population of one thousand voters. Split the population into random sub-populations of 500 voters each; let’s call these sub-populations by the name “samples.” Almost inevitably, the proportions voting Democratic in the samples will not exactly equal the “true” proportions in the population. (Why not? Well, why should they split evenly? There is no general reason why they should.) But if the sample proportions do not equal the population proportion, we can say that the extent of the difference between the two sample proportions and the population proportion will be identical but in the opposite direction .\nIf the population proportion is 600/1000 = 60 percent, and one sample’s proportion is 340/500 = 68 percent, then the other sample’s proportion must be (600-340 = 260)/500 = 52 percent. So if in the very long run you would choose each of these two samples about half the time (as you would if you selected between the two samples randomly) the average of the sample proportions would be (68 percent + 52 percent)/2 = 60 percent. This shows that on the average the sample proportion is a fair and unbiased estimate of the population proportion — if the sample is half the size of the population.\nIf we now sub-divide each of our two samples of 500 (each of which was half the population size) into equal-size subsamples of 250 each, the same argument will hold for the proportions of the samples of 250 with respect to the sample of 500: The proportion of a 250-voter sample is an unbiased estimate of the proportion of the 500-voter sample from which it is drawn. It seems inductively reasonable, then, that if the proportion of a 250-voter sample is an unbiased estimate of the 500-voter sample from which it is drawn, and the proportion of a 500-voter sample is an unbiased estimate of the 1000-voter population, then the proportion of a 250-voter sample should be an unbiased estimate of the population proportion. And if so, this argument should hold for samples of 1/2 x 250 = 125, and so on — in fact for any size sample.\nThe argument given above is not a rigorous formal proof. 
But I doubt that the non-mathematician needs, or will benefit from, a more formal proof of this proposition. You are more likely to be persuaded if you demonstrate this proposition to yourself experimentally in the following manner:\n\nStep 1. Let “1-6” = Democrat, “7-10” = Republican\nStep 2. Choose a sample of, say, ten random numbers, and record the proportion Democrat (the sample proportion).\nStep 3. Repeat step 2 a thousand times.\nStep 4. Compute the mean of the sample proportions, and compare it to the population proportion of 60 percent. This result should be close enough to reassure you that on the average the sample proportion is an “unbiased” estimate of the population proportion, though in any particular sample it may be substantially off in either direction.\n\n\n\n19.3.2 Efficiency\nWe want an estimate to be accurate, in the sense that it is as close to the “actual” value of the parameter as possible. Sometimes it is possible to get more accuracy at the cost of biasing the estimate. More than that does not need to be said here.\n\n\n19.3.3 Maximum Likelihood\nKnowing that a particular value is the most likely of all values may be of importance in itself. For example, a person betting on one horse in a horse race is interested in his/her estimate of the winner having the highest possible probability, and is not the slightest bit interested in getting nearly the right horse. Maximum likelihood estimates are of particular interest in such situations.\nSee (Savage 1972, chap. 15), for many other criteria of estimators." + }, + { + "objectID": "point_estimation.html#criteria-of-the-criteria", + "href": "point_estimation.html#criteria-of-the-criteria", + "title": "19  Point Estimation", + "section": "19.4 Criteria of the Criteria", + "text": "19.4 Criteria of the Criteria\nWhat should we look for in choosing criteria? Logically, this question should precede the above list of criteria.\nSavage (1972, chap. 15) has urged that we should always think in terms of the consequences of choosing criteria, in light of our purposes in making the estimate. I believe that he is making an important point. But it often is very hard work to think the matter through all the way to the consequences of the criteria chosen. And in most cases, such fine inquiry is not needed, in the sense that the estimating procedure chosen will be the same no matter what consequences are considered.1" + }, + { + "objectID": "point_estimation.html#estimation-of-accuracy-of-the-point-estimate", + "href": "point_estimation.html#estimation-of-accuracy-of-the-point-estimate", + "title": "19  Point Estimation", + "section": "19.5 Estimation of accuracy of the point estimate", + "text": "19.5 Estimation of accuracy of the point estimate\nSo far we have discussed how to make a point estimate, and criteria of good estimators. We also are interested in estimating the accuracy of that estimate. That subject — which is harder to grapple with — is discussed in Chapter 26 and Chapter 27 on confidence intervals.\nMost important: One cannot sensibly talk about the accuracy of probabilities in the abstract, without reference to some set of facts. In the abstract, the notion of accuracy loses any meaning, and invites confusion and argument." 
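The four-step demonstration in Section 19.3.1 above translates directly into a few lines of code. The following is a sketch of ours (assuming NumPy), drawing a thousand samples of ten voters from a universe that is 60 percent Democrat and comparing the mean of the sample proportions with the population proportion.

```python
import numpy as np

rng = np.random.default_rng()
population_proportion = 0.6          # 60 percent Democrat, as in the text

# Steps 2 and 3: a thousand samples of ten voters; True counts as a Democrat.
samples = rng.random((1000, 10)) < population_proportion
sample_proportions = samples.mean(axis=1)

# Step 4: the average of the sample proportions should land close to 0.6,
# even though individual samples are often well above or below it.
print(sample_proportions.mean())
```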
+ }, + { + "objectID": "point_estimation.html#sec-uses-of-mean", + "href": "point_estimation.html#sec-uses-of-mean", + "title": "19  Point Estimation", + "section": "19.6 Uses of the mean", + "text": "19.6 Uses of the mean\nLet’s consider when the use of a device such as the mean is valuable, in the context of the data on marksmen in Table 19.1. If we wish to compare marksman A versus marksman B, we can immediately see that marksman A hit the bullseye (80 shots for 3 points each time) as many times as marksman B hit either the bullseye or simply got in the black (30 shots for 3 points and 50 shots for 2 points), and A hit the black (2 points) as many times as B just got in the white (1 point). From these two comparisons covering all the shots, in both of which comparisons A does better, it is immediately obvious that marksman A is better than marksman B. We can say that A’s score dominates B’s score.\n\n\nTable 19.1: Score percentages by marksman\n\n\n\n\n\n\n\nScore\n# occurrences\nProbability\n\n\n\n\nMarksman A\n\n\n1\n0\n0\n\n\n2\n20\n.2\n\n\n3\n80\n.8\n\n\nMarksman B\n\n\n1\n20\n.2\n\n\n2\n50\n.5\n\n\n3\n30\n.3\n\n\nMarksman C\n\n\n1\n40\n.4\n\n\n2\n10\n.1\n\n\n3\n50\n.5\n\n\nMarksman D\n\n\n1\n10\n.1\n\n\n2\n60\n.6\n\n\n3\n30\n.3\n\n\n\n\nWhen we turn to comparing marksman C to marksman D, however, we cannot say that one “dominates” the other as we could with the comparison of marksmen A and B. Therefore, we turn to a summarizing device. One such device that is useful here is the mean. For marksman C the total score over the 100 shots is \\((40 * 1) + (10 * 2) + (50 * 3) = 210\\), a mean of 2.1 points per shot, while for marksman D the total is \\((10 * 1) + (60 * 2) + (30 * 3) = 220\\), a mean of 2.2 points per shot. Hence we can say that D is better than C even though D’s score does not dominate C’s score in the bullseye category.\nAnother use of the mean (Gnedenko, Aleksandr, and Khinchin 1962, 68) is shown in the estimation of the number of matches that we need to start fires for an operation carried out 20 times in a day (Table 19.2). Let’s say that the numbers of cases where s/he needs 1, 2 … 5 matches to start a fire are as follows (along with their probabilities), based on the last 100 fires started:\n\n\nTable 19.2: Number of matches needed to start a fire\n\n\nNumber of Matches\nNumber of Cases\nProbabilities\n\n\n\n\n1\n7\n.07\n\n\n2\n16\n.16\n\n\n3\n55\n.55\n\n\n4\n21\n.21\n\n\n5\n1\n.01\n\n\n\n\nIf you know that the operator will be lighting twenty fires, you can estimate the number of matches that s/he will need by multiplying the mean number of matches (which turns out to be \\(1 * 0.07 + 2 * 0.16 + 3 * 0.55 + 4 * 0.21 + 5 * 0.01 = 2.93\\)) in the observed experience by 20. Here you are using the mean as an indication of a representative case. (A quick computational check of both tables appears below.)\nIt is common for writers to immediately produce the data in the form of percentages or probabilities. But I think it is important to include in our discussion the absolute numbers, because this is what one must begin with in practice. And keeping the absolute numbers in mind is likely to avoid some confusions that arise if one immediately goes to percentages or to probabilities.\nStill another use for the mean is when you have a set of observations with error in them. The mean of the observations probably is your best guess about which is the “right” one. Furthermore, the distance you are likely to be off the mark is less if you select the mean of the observations. An example might be a series of witnesses giving the police their guesses about the height of a man who overturned an outhouse. 
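Here is the quick computational check of the two tables in Section 19.6 promised above: a sketch of ours, assuming NumPy, and not part of the original text.

```python
import numpy as np

scores = np.array([1, 2, 3])
marksman_c = np.array([40, 10, 50])   # occurrences per 100 shots
marksman_d = np.array([10, 60, 30])

print(np.average(scores, weights=marksman_c))   # 2.1 points per shot, 210 per 100 shots
print(np.average(scores, weights=marksman_d))   # 2.2 points per shot, 220 per 100 shots

matches = np.array([1, 2, 3, 4, 5])
cases = np.array([7, 16, 55, 21, 1])            # cases observed in the last 100 fires
mean_matches = np.average(matches, weights=cases)
print(mean_matches, mean_matches * 20)          # about 2.93 matches per fire, so about 59 for 20 fires
```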
The mean probably is the best estimate to give to police officers as a description of the perpetrator (though it would be helpful to give the range of the observations as well).\nWe use the mean so often, in so many different circumstances, that we become used to it and never think about its nature. So let’s do so a bit now.\nDifferent statistical ideas are appropriate for business and engineering decisions, biometrics, econometrics, scientific explanation (the philosophers’ case), and other fields. So nothing said here holds everywhere and always.\nOne might ask: What is the “meaning” of a mean? But that is not a helpful question. Rather, we should ask about the uses of a mean. Usually a mean is used to summarize a set of data. As we saw with marksmen C and D, it often is difficult to look at a table of data and obtain an overall idea of how big or how small the observations are; the mean (or other measurements) can help. Or if you wish to compare two sets of data where the distributions of observations overlap each other, comparing the means of the two distributions can often help you better understand the matter.\nAnother complication is the confusion between description and estimation , which makes it difficult to decide where to place the topic of descriptive statistics in a textbook. For example, compare the mean income of all men in the U. S., as measured by the decennial census. This mean of the universe can have a very different meaning from the mean of a sample of men with respect to the same characteristic. The sample mean is a point estimate, a statistical device, whereas the mean of the universe is a description. The use of the mean as an estimator is fraught with complications. Still, maybe it is no more complicated than deciding what describer to use for a population. This entire matter is much more complex than it appears at first glance.\nWhen the sample size approaches in size the entire population — when the sample becomes closer and closer to being the same as the population — the two issues blend. What does that tell us? Anything? What is the relationship between a baseball player’s average for two weeks, and his/her lifetime average? This is subtle stuff — rivaling the subtleness of arguments about inference versus probability, and about the nature of confidence limits (see Chapter 26 and Chapter 27 ). Maybe the only solid answer is to try to stay super-clear on what you are doing for what purpose, and to ask continually what job you want the statistic (or describer) to do for you.\nThe issue of the relationship of sample size to population size arises here. If the sample size equals or approaches the population size, the very notion of estimation loses its meaning.\nThe notion of “best estimator” makes no sense in some situations, including the following: a) You draw one black ball from a bucket. You cannot put confidence intervals around your estimate of the proportion of black balls, except to say that the proportion is somewhere between 1 and 0. No one would proceed without bringing in more information. That is, when there is almost no information, you simply cannot make much of an estimate — and the resampling method breaks down, too. It does not help much to shift the discussion to the models of the buckets, because then the issue is the unknown population of the buckets, in which case we need to bring in our general knowledge. 
b) When the sample size equals or is close to the population size, as discussed in this section, the data are a description rather than an estimate, because the sample is getting to be much the same as the universe; that is, if there are twelve people in your family, and you randomly take a sample of the amount of sugar used by eight members of the family, the results of the sample cannot be very different than if you compute the amount for all twelve family members. In such a case, the interpretation of the mean becomes complex.\nUnderlying all estimation is the assumption of continuation, which follows from random sampling — that there is no reason to expect the next sample to be different from the present one in any particular fashion, mean or variation. But we do expect it to be different in some fashion because of sampling variability." + }, + { + "objectID": "point_estimation.html#conclusion", + "href": "point_estimation.html#conclusion", + "title": "19  Point Estimation", + "section": "19.7 Conclusion", + "text": "19.7 Conclusion\nA Newsweek article says, “According to a recent reader’s survey in Bride’s magazine, the average blowout [wedding] will set you back about $16,000” (Feb 15, 1993, p. 67). That use of the mean (I assume) for the average, rather than the median, could cost the parents of some brides a pretty penny. It could be that the cost for the average person — that is, the median expenditure — might be a lot less than $16,000. (A few million dollar weddings could have a huge effect on a survey mean.) An inappropriate standard of comparison might enter into some family discussions as a result of this article, and cause higher outlays than otherwise. This chapter helps one understand the nature of such estimates.\n\n\n\n\nGnedenko, Boris Vladimirovich, I Aleksandr, and Akovlevich Khinchin. 1962. An Elementary Introduction to the Theory of Probability. New York, NY, USA: Dover Publications, Inc. https://archive.org/details/gnedenko-khinchin-an-elementary-introduction-to-the-theory-of-probability.\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New York: Dover Publications, Inc." + }, + { + "objectID": "framing_questions.html#introduction", + "href": "framing_questions.html#introduction", + "title": "20  Framing Statistical Questions", + "section": "20.1 Introduction", + "text": "20.1 Introduction\nChapter 3 - Chapter 15 discussed problems in probability theory. That is, we have been estimating the probability of a composite event resulting from a system in which we know the probabilities of the simple events — the “parameters” of the situation.\nThen Chapter 17 - Chapter 19 discussed the underlying philosophy of statistical inference.\nNow we turn to inferential-statistical problems. Up until now, we have been estimating the complex probabilities of known universes — the topic of probability . Now as we turn to problems in statistics , we seek to learn the characteristics of an unknown system — the basic probabilities of its simple events and parameters. (Here we note again, however, that in the process of dealing with them, all statistical-inferential problems eventually are converted into problems of pure probability). To assess the characteristics of the system in such problems, we employ the characteristics of the sample(s) that have been drawn from it.\nFor further discussion on the distinction between inferential statistics and probability theory, see Chapter 2 - Chapter 3.\nThis chapter begins the topic of hypothesis testing . 
The issue is: whether to adjudge that a particular sample (or samples) come(s) from a particular universe. A two-outcome yes-no universe is discussed first. Then we move on to “measured-data” universes, which are more complex than yes-no outcomes because the variables can take on many values, and because we ask somewhat more complex questions about the relationships of the samples to the universes. This topic is continued in subsequent chapters.\nIn a typical hypothesis-testing problem presented in this chapter, one sample of hospital patients is treated with a new drug and a second sample is not treated but rather given a “placebo.” After obtaining results from the samples, the “null” or “test” or “benchmark” hypothesis would be that the resulting drug and placebo samples are drawn from the same universe. This device of the null hypothesis is the equivalent of stating that the drug had no effect on the patients. It is a special intellectual strategy developed to handle such statistical questions.\nWe start with the scientific question: Does the medicine have an effect? We then translate it into a testable statistical question: How likely is it that the sample means come from the same universe? This process of question-translation is the crucial step in hypothesis-testing and inferential statistics. The chapter then explains how to solve these problems using resampling methods after you have formulated the proper statistical question.\nThough the examples in the chapter mostly focus on tests of hypotheses, the procedures also apply to confidence intervals, which will be discussed later." + }, + { + "objectID": "framing_questions.html#translating-scientific-questions-into-probabilistic-and-statistical-questions", + "href": "framing_questions.html#translating-scientific-questions-into-probabilistic-and-statistical-questions", + "title": "20  Framing Statistical Questions", + "section": "20.2 Translating scientific questions into probabilistic and statistical questions", + "text": "20.2 Translating scientific questions into probabilistic and statistical questions\nThe first step in using probability and statistics is to translate the scientific question into a statistical question. Once you know exactly which prob-stats question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy (though subtle). The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms.\nThough this translation is difficult, it involves no mathematics. Rather, this step requires only hard thought. You cannot beg off by saying, “I have no brain for math!” The need is for a brain that will do clear thinking, rather than a brain especially talented in mathematics. A person who uses conventional methods can avoid this hard thinking by simply grabbing the formula for some test without understanding why s/he chooses that test. But resampling pushes you to do this thinking explicitly.\nThis crucial process of translating from a pre-statistical question to a statistical question takes place in all statistical inference. But its nature comes out most sharply with respect to testing hypotheses, so most of what will be said about it will be in that context." 
+ }, + { + "objectID": "framing_questions.html#the-three-types-of-questions", + "href": "framing_questions.html#the-three-types-of-questions", + "title": "20  Framing Statistical Questions", + "section": "20.3 The three types of questions", + "text": "20.3 The three types of questions\nLet’s consider the natures of conceptual, operational, and statistical questions.\n\n20.3.1 The Scientific Question\nA study for either scientific or decision-making purposes properly begins with a general question about the nature of the world — that is, a conceptual or theoretical question. One must then transform this question into an operational-empirical form that one can study scientifically. Thence comes the translation into a technical-statistical question.\nThe scientific-conceptual-theoretical question can be an issue of theory, or a policy choice, or the result of curiosity at large.\nExamples include: Can a bioengineer increase the chance of female calves being born? Is copper becoming less scarce? Are the prices of liquor systematically different in states where the liquor stores are publicly owned compared to states where they are privately owned? Does a new formulation of pig rations lead to faster hog growth? Was the rate of unemployment higher last month than the long-run average, or was the higher figure likely to be the result of sampling error? What are the margins of probable error for an unemployment survey?\n\n\n20.3.2 The Operational-Empirical Question\nThe operational-empirical question is framed in measurable quantities in a meaningful design. Examples include: How likely is this state of affairs (say, the new pig-food formulation) to cause an event such as was observed (say, the observed increase in hog growth)? How likely is it that the mean unemployment rate of a sample taken from the universe of interest (say, the labor force, with an unemployment rate of 10 percent) will be between 11 percent and 12 percent? What is the probability of getting three girls in the first four children if the probability of a girl is .48? How unlikely is it to get nine females out of ten calves in an experiment on your farm? Did the price of copper fall between 1800 and the present? These questions are in the form of empirical questions, which have already been transformed by operationalizing from scientific-conceptual questions.\n\n\n20.3.3 The Statistical Question\nAt this point one must decide whether the conceptual-scientific question is of the form of either a) or b):\n\nA test about whether some sample will frequently happen by chance rather than being very surprising — a test of the “significance” of a hypothesis. Such hypothesis testing takes the following form: How likely is a given “universe” to produce some sample like x? This leads to interpretation about: How likely is a given universe to be the cause of this observed sample?\nA question about the accuracy of the estimate of a parameter of the population based upon sample evidence (an inquiry about “confidence intervals”). This sort of question is considered by some (but not by me) to be a question in estimation — that is, one’s best guess about (say) the magnitude and probable error of the mean or median of a population. This is the form of a question about confidence limits — how likely is the mean to be between x and y?\n\nNotice that the statistical question is framed as a question in probability." 
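Two of the operational-empirical questions in Section 20.3.2 above are already in directly computable form. As a minimal sketch (ours, assuming SciPy; the book does not state a probability for a female calf, so the 0.5 below is our assumption):

```python
from scipy.stats import binom

# Probability of exactly three girls among the first four children, if P(girl) = .48.
print(binom.pmf(3, 4, 0.48))        # about 0.23

# Probability of nine or more females among ten calves, assuming P(female) = 0.5;
# the tail probability is what measures how surprising such a result would be.
print(binom.sf(8, 10, 0.5))         # about 0.011
```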
+ }, + { + "objectID": "framing_questions.html#illustrative-translations", + "href": "framing_questions.html#illustrative-translations", + "title": "20  Framing Statistical Questions", + "section": "20.4 Illustrative translations", + "text": "20.4 Illustrative translations\nThe best way to explain how to translate a scientific question into a statistical question is to illustrate the process.\n\n20.4.1 Illustration A — beliefs about smoking\nWere doctors’ beliefs as of 1964 about the harmfulness of cigarette smoking (and doctors’ own smoking behavior) affected by the social groups among whom the doctors live (Simon 1967)? That was the theoretical question. We decided to define the doctors’ reference groups as the states in which they live, because data about doctors and smoking were available state by state (Modern Medicine, 1964). We could then translate this question into an operational and testable scientific hypothesis by asking this question: Do doctors in tobacco-economy states differ from doctors in other states in their smoking, and in their beliefs about smoking?\nWhich numbers would help us answer this question, and how do we interpret those numbers? We now were ready to ask the statistical question: Do doctors in tobacco-economy states “belong to the same universe” (with respect to smoking) as do other doctors? That is, do doctors in tobacco-economy states have the same characteristics — at least, those characteristics we are interested in, smoking in this case — as do other doctors? Later we shall see that the way to proceed is to consider the statistical hypothesis that these doctors do indeed belong to that same universe; that hypothesis and the universe will be called “benchmark hypothesis” and “benchmark universe” respectively — or in more conventional usage, the “null hypothesis.”\nIf the tobacco-economy doctors do indeed belong to the benchmark universe — that is, if the benchmark hypothesis is correct — then there is a 49/50 chance that doctors in some state other than the state in which tobacco is most important will have the highest rate of cigarette smoking. But in fact we observe that the state in which tobacco accounts for the largest proportion of the state’s income — North Carolina — had (as of 1964) a higher proportion of doctors who smoked than any other state. (Furthermore, a lower proportion of doctors in North Carolina than in any other state said that they believed that smoking is a health hazard.)\nOf course, it is possible that it was just chance that North Carolina doctors smoked most, but the chance is only 1 in 50 if the benchmark hypothesis is correct. Obviously, some state had to have the highest rate, and the chance for any other state was also 1 in 50. But, because our original scientific hypothesis was that North Carolina doctors’ smoking rate would be highest, and we then observed that it was highest even though the chance was only 1 in 50, the observation became interesting and meaningful to us. It means that the chances are strong that there was a connection between the importance of tobacco in the economy of a state and the rate of cigarette smoking among doctors living there (as of 1964).\nTo consider this problem from another direction, it would be rare for North Carolina to have the highest smoking rate for doctors if there were no special reason for it; in fact, it would occur only once in fifty times. 
But, if there were a special reason — and we hypothesize that the tobacco economy provides the reason — then it would not seem unusual or rare for North Carolina to have the highest rate; therefore we choose to believe in the not-so-unusual phenomenon, that the tobacco economy caused doctors to smoke cigarettes.\nLike many (most? all?) actual situations, the cigarettes and doctors’ smoking issue is a rather messy business. Did I have a clear-cut, theoretically-derived prediction before I began? Maybe I did a bit of “data dredging” — that is, maybe I started with a vague expectation, and only arrived at my sharp hypothesis after I saw the data. This would weaken the probabilistic interpretation of the test of significance — but this is something that a scientific investigator does not like to do because it weakens his/her claim for attention and chance of publication. On the other hand, if one were a Bayesian, one could claim that one had a prior probability that the observed effect would occur, and the observed data strengthens that prior; but this procedure would not seem proper to many other investigators. The only wholly satisfactory conclusion is to obtain more data — but as of 1993, there does not seem to have been another data set collected since 1964, and collecting a set by myself is not feasible.\nThis clearly is a case of statistical inference that one could argue about, though perhaps it is true that all cases where the data are sufficiently ambiguous as to require a test of significance are also sufficiently ambiguous that they are properly subject to argument.\nFor some decades the hypothetico-deductive framework was the leading point of view in empirical science. It insisted that the empirical and statistical investigation should be preceded by theory, and only propositions suggested by the theory should be tested. Investigators were not supposed to go back and forth from data to theory to testing. It is now clear that this is an ivory-tower irrelevance, and no one lived by the hypothetico-deductive strictures anyway — just pretended to. Furthermore, there is no sound reason to feel constrained by it, though it strengthens your conclusions if you had theoretical reason in advance to expect the finding you obtained.\n\n\n20.4.2 Illustration B — is it a cure?\nDoes medicine CCC cure some particular cancer? That’s the scientific question. So you give the medicine to six patients who have the cancer and you do not give it to six similar patients who have the cancer. Your sample contains only twelve people because it is not feasible for you to obtain a larger sample. Five of six “medicine” patients get well, two of six “no medicine” patients get well. Does the medicine cure the cancer? That is, if future cancer patients take the medicine, will their rate of recovery be higher than if they did not take the medicine?\nOne way to translate the scientific question into a statistical question is to ask: Do the “medicine” patients belong to the same universe as the “no medicine” patients? That is, we ask whether “medicine” patients still have the same chances of getting well from the cancer as do the “no medicine” patients, or whether the medicine has bettered the chances of those who took it and thus removed them from the original universe, with its original chances of getting well. The original universe, to which the “no medicine” patients must still belong, is the benchmark universe. 
Shortly we shall see that we proceed by comparing the observed results against the benchmark hypothesis that the “medicine” patients still belong to the benchmark universe — that is, they still have the same chance of getting well as the “no medicine” patients.\nWe want to know whether or not the medicine does any good. This question is the same as asking whether patients who take medicine are still in the same population (universe) as “no medicine” patients, or whether they now belong to a different population in which patients have higher chances of getting well. To recapitulate our translations, we move from asking: Does the medicine cure the cancer? to, Do “medicine” patients have the same chance of getting well as “no medicine” patients?; and finally, to: Do “medicine” patients belong to the same universe (population) as “no medicine” patients? Remember that “population” in this sense does not refer to the population at large, but rather to a group of cancer sufferers (perhaps an infinitely large group) who have given chances of getting well, on the average. Groups with different chances of getting well are called “different populations” (universes). Shortly we shall see how to answer this statistical question. We must keep in mind that our ultimate concern in cases like this one is to predict future results of the medicine, that is, to predict whether use of the medicine will lead to a higher recovery rate than would be observed without the medicine.\n\n\n20.4.3 Illustration C — a better method for teaching reading\nIs method Alpha a better method of teaching reading than method Beta? That is, will method Alpha produce a higher average reading score in the future than will method Beta? Twenty children taught to read with method Alpha have an average reading score of 79, whereas children taught with method Beta have an average score of 84. To translate this scientific question into a statistical question we ask: Do children taught with method Alpha come from the same universe (population) as children taught with method Beta? Again, “universe” (population) does not mean the town or social group the children come from, and indeed the experiment will make sense only if the children do come from the same population, in that sense of “population.” What we want to know is whether or not the children belong to the same statistical population (universe), defined according to their reading ability, after they have studied with method Alpha or method Beta.\n\n\n20.4.4 Illustration D — better fertilizer\nIf one plot of ground is treated with fertilizer, and another similar plot is not treated, the benchmark (null) hypothesis is that the corn raised on the treated plot is no different than the corn raised on the untreated lot — that is, that the corn from the treated plot comes from (“belongs to”) the same universe as the corn from the untreated plot. If our statistical test makes it seem very unlikely that a universe like that from which the untreated-plot corn comes would also produce corn such as came from the treated plot, then we are willing to believe that the fertilizer has an effect. For a psychological example, substitute the words “group of children” for “plot,” “special training” for “fertilizer,” and “I.Q. 
score” for “corn.”\nThere is nothing sacred about the benchmark (null) hypothesis of “no difference.” You could just as well test the benchmark hypothesis that the corn comes from a universe that averages 110 bushels per acre, if you have reason to be especially interested in knowing whether or not the fertilizer produces more than 110 bushels per acre. But in many cases it is reasonable to test the probability that a sample comes from the population that does not receive the special treatment of medicine, fertilizer, or training." + }, + { + "objectID": "framing_questions.html#generalizing-from-sample-to-universe", + "href": "framing_questions.html#generalizing-from-sample-to-universe", + "title": "20  Framing Statistical Questions", + "section": "20.5 Generalizing from sample to universe", + "text": "20.5 Generalizing from sample to universe\nSo far we have discussed the scientific question and the statistical question. Remember that there is always a generalization question, too: Do the statistical results from this particular sample of, say, rats apply to a universe of humans? This question can be answered only with wisdom, common sense, and general knowledge, and not with probability statistics.\nTranslating from a scientific question into a statistical question is mostly a matter of asking the probability that some given benchmark universe (population) will produce one or more observed samples. Notice that we must (at least for general scientific testing purposes) ask about a given universe whose composition we assume to be known , rather than about a range of universes, or about a universe whose properties are unknown. In fact, there is really only one question that probability statistics can answer: Given some particular benchmark universe of some stated composition, what is the probability that an observed sample would come from it? (Please notice the subtle but all-important difference between the words “would come” in the previous sentence, and the word “came.”) A variation of this question is: Given two (or more) samples, what is the probability that they would come from the same universe — that is, that the same universe would produce both of them? In this latter case, the relevant benchmark universe is implicitly the universe whose composition is the two samples combined.\nThe necessity for stating the characteristics of the universe in question becomes obvious when you think about it for a moment. Probability-statistical testing adds up to comparing a sample with a particular benchmark universe, and asking whether there probably is a difference between the sample and the universe. To carry out this comparison, we ask how likely it is that the benchmark universe would produce a sample like the observed sample.\n\nBut in order to find out whether or not a universe could produce a given sample, we must ask whether or not some particular universe — with stated characteristics — could produce the sample. There is no doubt that some universe could produce the sample by a random process; in fact, some universe did. The only sensible question, then, is whether or not a particular universe, with stated (or known) characteristics, is likely to produce such a sample. In the case of the medicine, the universe with which we compare the sample who took the medicine is the benchmark universe to which that sample would belong if the medicine had had no effect. 
This comparison leads to the benchmark (null) hypothesis that the sample comes from a population in which the medicine (or other experimental treatment) seems to have no effect . It is to avoid confusion inherent in the term “null hypothesis” that I replace it with the term “benchmark hypothesis.”\nThe concept of the benchmark (null) hypothesis is not easy to grasp. The best way to learn its meaning is to see how it is used in practice. For example, we say we are willing to believe that the medicine has an effect if it seems very unlikely from the number who get well that the patients given the medicine still belong to the same benchmark universe as the patients given no medicine at all — that is, if the benchmark hypothesis is unlikely." + }, + { + "objectID": "framing_questions.html#the-steps-in-statistical-inference", + "href": "framing_questions.html#the-steps-in-statistical-inference", + "title": "20  Framing Statistical Questions", + "section": "20.6 The steps in statistical inference", + "text": "20.6 The steps in statistical inference\nThese are the steps in conducting statistical inference\n\nStep 1. Frame a question in the form of: What is the chance of getting the observed sample x from some specified population X? For example, what is the probability of getting a sample of 9 females and one male from a population where the probability of getting a single female is .48?\nStep 2. Reframe the question in the form of: What kinds of samples does population X produce, with which probabilities? That is, what is the probability of the observed sample x (9 females in 10 calves), given that a population is X (composed of 48 percent females)? Or in notation, what is \\(P(x | X)\\)?\nStep 3. Actually investigate the behavior of S with respect to S and other samples. This can be done in two ways:\n\n\nUse the calculus of probability (the formulaic method), perhaps resorting to the Monte Carlo method if an appropriate formula does not exist. Or\nResampling (in the larger sense), which equals the Monte Carlo method minus its use for approximations, investigation of complex functions in statistics and other theoretical mathematics, and non-resampling uses elsewhere in science. Resampling in the more restricted sense includes bootstrap, permutation, and other non-parametric methods. More about the resampling procedure follows in the paragraphs to come, and then in later chapters in the book. \n\n\nStep 4. Interpret the probabilities that result from step 3 in terms of acceptance or rejection of hypotheses, surety of conclusions, and as inputs to decision theory.1\n\nThe following short definition of statistical inference summarizes the previous four steps:\n\nStatistical inference equals the selection of a probabilistic model to resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.\n\nStating the steps to be followed in a procedure is an operational definition of the procedure. My belief in the clarifying power of this device (the operational definition) is embodied in the set of steps given in Chapter 15 for the various aspects of statistical inference. A canonical question-and-answer procedure for testing hypotheses will be found in Chapter 25, and one for confidence intervals will be found in Chapter 26." 
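As a minimal sketch of the P(x | X) in step 2, we might estimate the chance of 9 or more females among 10 calves when each calf independently has a .48 chance of being female. This is our own illustration, reading "a sample of 9 females and one male" as "9 or more females"; the variable names are ours.

import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
n_extreme = 0

for i in range(n_trials):
    # One sample x: 10 calves drawn from universe X with P(female) = 0.48.
    calves = rnd.choice(['female', 'male'], size=10, p=[0.48, 0.52])
    if np.sum(calves == 'female') >= 9:
        n_extreme = n_extreme + 1

# Estimated P(x | X).
print('Estimated probability of 9 or more females in 10:',
      n_extreme / n_trials)

The estimate should come out at a bit under one in a hundred.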
+ }, + { + "objectID": "framing_questions.html#summary", + "href": "framing_questions.html#summary", + "title": "20  Framing Statistical Questions", + "section": "20.7 Summary", + "text": "20.7 Summary\nWe define resampling to include problems in inferential statistics as well as problems in probability as follows: Using the entire set of data you have in hand, or using the given data-generating mechanism (such as a die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples. That’s it in a nutshell. In some cases, it may also be appropriate to amplify this procedure with additional assumptions.\nProblems in pure probability may at first seem different in nature than problems in statistical inference. But the same logic as stated in this definition applies to both varieties of problems. The difference is that in probability problems the “model” is known in advance — say, the model implicit in a deck of poker cards plus a game’s rules for dealing and counting the results — rather than the model being assumed to be best estimated by the observed data, as in resampling statistics.\nThe hardest job in using probability statistics, and the most important, is to translate the scientific question into a form to which statistics can give a sensible answer. You must translate scientific questions into the appropriate form for statistical operations , so that you know which operations to perform. This is the part of the job that requires hard, clear thinking — though it is non-mathematical thinking — and it is the part that someone else usually cannot easily do for you.\nOnce you know exactly which probability-statistical question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy. The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms. Though this step is hard, it involves no mathematics . This step requires only hard, clear thinking . You cannot beg off by saying “I have no brain for math!” To flub this step is to admit that you have no brain for clear thinking, rather than no brain for mathematics.\n\n\n\n\nSimon, Julian Lincoln. 1967. “Doctors, Smoking, and Reference Groups.” Public Opinion Quarterly 31 (4): 646–47." + }, + { + "objectID": "testing_counts_1.html#introduction", + "href": "testing_counts_1.html#introduction", + "title": "21  Hypothesis-Testing with Counted Data, Part 1", + "section": "21.1 Introduction", + "text": "21.1 Introduction\nThe first task in inferential statistics is to make one or more point estimates — that is, to make one or more statements about how much there is of something we are interested in — including especially the mean and the dispersion. (That work goes under the label “estimation” and is discussed in Chapter 19.) Frequently the next step, after making such quantitative estimation of the universe from which a sample has been drawn, is to consider whether two or more samples are different from each other, or whether the single sample is different from a specified value; this work goes under the label “hypothesis testing.” We ask: Did something happen? Or: Is there a difference between two universes? 
These are yes-no questions.\nIn other cases, the next step is to inquire into the reliability of the estimates; this goes under the label “confidence intervals.” (Some writers include assessing reliability under the rubric of estimation, but I judge it better not to do so).\nSo: Having reviewed how to convert hypothesis-testing problems into statistically testable questions in Chapter 20, we now must ask: How does one employ resampling methods to make the statistical test? As is always the case when using resampling techniques, there is no unique series of steps by which to proceed. The crucial criterion in assessing the model is whether it accurately simulates the actual event. With hypothesis-testing problems, any number of models may be correct. Generally speaking, though, the model that makes fullest use of the quantitative information available from the data is the best model.\nWhen attempting to deduce the characteristics of a universe from sample data, or when asking whether a sample was drawn from a particular universe, a crucial issue is whether a “one-tailed test” or a “two-tailed test” should be applied. That is, in examining the results of our resampling experiment based on the benchmark universe, do we examine both ends of the frequency distribution, or just one? If there is strong reason to believe a priori that the difference between the benchmark (null) universe and the sample will be in a given direction — for example if you hypothesize that the sample mean will be smaller than the mean of the benchmark universe — you should then employ a one-tailed test . If you do not have strong basis for such a prediction, use the two-tailed test. As an example, when a scientist tests a new medication, his/her hypothesis would be that the number of patients who get well will be higher in the treated group than in the control group. Thus, s/he applies the one-tailed test. See the text below for more detail on one- and two-tailed tests.\nSome language first:\nHypothesis: In inferential statistics, a statement or claim about a universe that can be tested and that you wish to investigate.\nTesting: The process of investigating the validity of a hypothesis.\nBenchmark (or null) hypothesis: A particular hypothesis chosen for convenience when testing hypotheses in inferential statistics. For example, we could test the hypothesis that there is no difference between a sample and a given universe, or between two samples, or that a parameter is less than or greater than a certain value. The benchmark universe refers to this hypothesis. (The concept of the benchmark or null hypothesis was discussed in Chapter 9 and Chapter 20.)\nNow let us begin the actual statistical testing of various sorts of hypotheses about samples and populations." 
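To make the one-tailed and two-tailed proportions concrete before the worked examples, here is a minimal sketch of our own. It uses NumPy's binomial generator as a shortcut for the bucket-style draws used in the notebooks that follow: it builds a benchmark distribution of "success" counts in samples of 20, then reads off both proportions for an observed count of 14.

import numpy as np

rnd = np.random.default_rng()

# Counts of successes in 10,000 samples of 20 from a 50-50 benchmark universe.
scores = rnd.binomial(20, 0.5, size=10_000)

observed = 14  # the observed count in a real sample of 20

# One-tailed: benchmark results at least as large as observed,
# in the predicted direction.
one_tail = np.sum(scores >= observed) / len(scores)

# Two-tailed: also count results as extreme in the opposite direction
# (6 or fewer, the mirror image of 14 around the expected 10).
two_tail = one_tail + np.sum(scores <= 20 - observed) / len(scores)

print('One-tailed proportion:', one_tail)
print('Two-tailed proportion:', two_tail)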
+ }, + { + "objectID": "testing_counts_1.html#should-a-single-sample-of-counted-data-be-considered-different-from-a-benchmark-universe", + "href": "testing_counts_1.html#should-a-single-sample-of-counted-data-be-considered-different-from-a-benchmark-universe", + "title": "21  Hypothesis-Testing with Counted Data, Part 1", + "section": "21.2 Should a single sample of counted data be considered different from a benchmark universe?", + "text": "21.2 Should a single sample of counted data be considered different from a benchmark universe?\n\n21.2.0.1 Example: Does Irradiation Affect the Sex Ratio in Fruit Flies?\nWhere the Benchmark Universe Mean (in this case, the Proportion) is Known, is the Mean (Proportion) of the Population Affected by the Treatment?)\nYou think you have developed a technique for irradiating the genes of fruit flies so that the sex ratio of the offspring will not be half males and half females. In the first twenty cases you treat, there are fourteen males and six females. Does this experimental result confirm that the irradiation does work?\nFirst convert the scientific question — whether or not the treatment affects the sex distribution — into a probability-statistical question: Is the observed sample likely to have come from a benchmark universe in which the sex ratio is one male to one female? The benchmark (null) hypothesis, then, is that the treatment makes no difference and the sample comes from the one-male-to-one-female universe. Therefore, we investigate how likely a one-to-one universe is to produce a distribution of fourteen or more of just one sex.\nA coin has a one-to-one (one out of two) chance of coming up tails. Therefore, we might flip a coin in groups of twenty flips, and count the number of heads in each twenty flips. Or we can use a random number table. The following steps will produce a sound estimate:\n\nStep 1. Let heads = male, tails = female.\nStep 2. Flip twenty coins and count the number of males. If 14 or more males occur, record “yes.” Also, if 6 or fewer males occur, record “yes” because this means we have gotten 14 or more females. Otherwise, record “no.”\nStep 3. Repeat step 2 perhaps 100 times.\nStep 4. Calculate the proportion “yes” in the 100 trials. This proportion estimates the probability that a fruit-fly population with a propensity to produce 50 percent males will by chance produce as many as 14 or as few as 6 males in a sample of 20 flies.\n\n\n\n\n\nTable 21.1: Results from 25 random trials for Fruitfly problem\n\n\nTrial no\n# of heads\n>=14 or <= 6\n\n\n\n\n1\n8\nNo\n\n\n2\n8\nNo\n\n\n3\n12\nNo\n\n\n4\n9\nNo\n\n\n5\n12\nNo\n\n\n6\n10\nNo\n\n\n7\n9\nNo\n\n\n8\n14\nYes\n\n\n9\n14\nYes\n\n\n10\n10\nNo\n\n\n11\n9\nNo\n\n\n12\n8\nNo\n\n\n13\n13\nNo\n\n\n14\n5\nYes\n\n\n15\n7\nNo\n\n\n16\n11\nNo\n\n\n17\n11\nNo\n\n\n18\n10\nNo\n\n\n19\n10\nNo\n\n\n20\n11\nNo\n\n\n21\n8\nNo\n\n\n22\n9\nNo\n\n\n23\n16\nYes\n\n\n24\n4\nYes\n\n\n25\n13\nNo\n\n\n\n\n\n\n\n\nTable 21.1 shows the results obtained in twenty-five trials of twenty flips each. In three of the twenty-five trials (12 percent) there were fourteen or more heads, which we call “males,” and in two of the twenty-five trials (8 percent) there six or fewer heads, meaning there were fourteen or more tails (“females”). We can therefore estimate that, even if the treatment does not affect the sex and the births over a long period really are one to one, five out of twenty-five times (20 percent) we would get fourteen or more of one sex or the other. 
Therefore, finding fourteen males out of twenty births is not overwhelming evidence that the treatment has any effect, even though the result is suggestive.\nHow accurate is the estimate? Seventy-five more trials were made, and of the 100 trials eight contained fourteen or more “males” (8 percent), and 9 trials contained fourteen or more “females” (9 percent), a total of 17 percent. So the first twenty-five trials gave a fairly reliable indication. As a matter of fact, analytically-based computation (not explained here) shows that the probability of getting fourteen or more females out of twenty births is .057 and, of course, the same for fourteen or more males from a one-to-one universe, implying a total probability of .114 of getting fourteen or more males or females.\nNow let us obtain larger and more accurate simulation samples with the computer. The key step in the Python notebook below represents male fruit flies with the string 'male' and female fruit flies with the string 'female'. The rnd.choice function is then used to generate 20 of these strings with an equal probability that either string is selected. This simulates randomly choosing 20 fruit flies on the benchmark assumption — the “null hypothesis” — that each fruit fly has an equal chance of being a male or female. Now we want to discover the chances of getting more than 13 (i.e., 14 or more) males or more than 13 females under these conditions. So we use np.sum to count the number of males in each random sample and then store this value in the scores array of this number for each sample. We repeat these steps 10,000 times.\nAfter ten thousand samples have been drawn, we count (sum) how often there were more than 13 males and then count the number of times there were fewer than 7 males (because if there were fewer than 7 males there must have been more than 13 females). When we add the two results together we have the probability that the results obtained from the sample of irradiated fruit flies would be obtained from a random sample of fruit flies.\n\nStart of fruit_fly notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# set up the random number generator\nrnd = np.random.default_rng()\n\n\n# Set the number of trials\nn_trials = 10000\n\n# set the sample size for each trial\nsample_size = 20\n\n# An empty array to store the trials\nscores = np.zeros(n_trials)\n\n# Do 1000 trials\nfor i in range(n_trials):\n\n # Generate 20 simulated fruit flies, where each has an equal chance of being\n # male or female\n a = rnd.choice(['male', 'female'], size = sample_size, p = [0.5, 0.5], replace = True)\n\n # count the number of males in the sample\n b = np.sum(a == 'male')\n\n # store the result of this trial\n scores[i] = b\n\n# Produce a histogram of the trial results\nplt.title(f\"Number of males in {n_trials} samples of \\n{sample_size} simulated fruit flies\")\nplt.hist(scores)\nplt.xlabel('Number of Males')\nplt.ylabel('Frequency')\nplt.show()\n\n\n\n\n\n\n\n\nIn the histogram above, we see that in 16 percent of the trials, the number of males was 14 or more, or 6 or fewer. 
Or instead of reading the results from the histogram, we can calculate the result by tacking on the following commands to the above program:\n\n# Determine the number of trials in which we had 14 or more males.\nj = np.sum(scores >= 14)\n\n# Determine the number of trials in which we had 6 or fewer males.\nk = np.sum(scores <= 6)\n\n# Add the two results together.\nm = j + k\n\n# Convert to a proportion.\nmm = m / n_trials\n\n# Print the results.\nprint(mm)\n\n0.1191\n\n\nEnd of fruit_fly notebook\n\n\nNotice that the strength of the evidence for the effectiveness of the radiation treatment depends upon the original question: whether or not the treatment had any effect on the sex of the fruit fly, which is a two-tailed question. If there were reason to believe at the start that the treatment could increase only the number of males , then we would focus our attention on the result that in only three of the twenty-five trials were fourteen or more males. There would then be only a 3/25 = 0.12 probability of getting the observed results by chance if the treatment really has no effect, rather than the weaker odds against obtaining fourteen or more of either males or females.\nTherefore, whether you decide to figure the odds of just fourteen or more males (what is called a “one-tail test”) or the odds for fourteen or more males plus fourteen or more females (a “two-tail test”), depends upon your advance knowledge of the subject. If you have no reason to believe that the treatment will have an effect only in the direction of creating more males and if you figure the odds for the one-tail test anyway, then you will be kidding yourself. Theory comes to bear here. If you have a strong hypothesis, deduced from a strong theory, that there will be more males, then you should figure one-tail odds, but if you have no such theory you should figure the weaker two-tail odds.1\nIn the case of the next problem concerning calves, we shall see that a one-tail test is appropriate because we have no interest in producing more male calves. Before leaving this example, let us review our intellectual strategy in handling the problem. First we observe a result (14 males in 20 flies) which differs from the proportion of the benchmark population (50 percent males). Because we have treated this sample with irradiation and observed a result that differs from the untreated benchmark-population’s mean, we speculate that the irradiation caused the sample to differ from the untreated population. We wish to check on whether this speculation is correct.\nWhen asking whether this speculation is correct, we are implicitly asking whether future irradiation would also produce a proportion of males higher than 50 percent. That is, we are implicitly asking whether irradiated flies would produce more samples with male proportions as high as 14/20 than would occur by chance in the absence of irradiation.\nIf samples as far away as 14/20 from the benchmark population mean of 10/20 would occur frequently by chance, then we would not be impressed with that experimental evidence as proof that irradiation does affect the sex ratio. Hence we set up a model that will tell us the frequency with which samples of 14 or more males out of 20 births would be observed by chance. Carrying out the resampling procedure tells us that perhaps a tenth of the time such samples would be observed by chance. That is not extremely frequent, but it is not infrequent either. 
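Readers who want to check the "perhaps a tenth of the time" figure against the analytic value quoted above (.057 for one tail) can do so directly from the binomial distribution. This is an optional cross-check of our own, assuming the SciPy package is available:

from scipy.stats import binom

# Probability of 14 or more males out of 20 when P(male) = 0.5.
p_one_tail = binom.sf(13, 20, 0.5)
# Add the mirror-image tail: 6 or fewer males (14 or more females).
p_two_tail = p_one_tail + binom.cdf(6, 20, 0.5)

print('One tail:', round(p_one_tail, 3))
print('Both tails:', round(p_two_tail, 3))

The two numbers should come out near .058 and .115, in line with both the analytic values and the simulation.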
Hence we would probably conclude that the evidence is provocative enough to justify further experimentation, but not so strong that we should immediately believe in the truth of this speculation.\nThe logic of attaching meaning to the probabilistic outcome of a test of a hypothesis is discussed in Chapter 22. There also is more about the concept of the level of significance in Chapter 22.\nBecause of the great importance of this sort of case, which brings out the basic principles particularly clearly, let us consider another example:\n\n\n21.2.1 Example: Does a treatment increase the female calf rate?\nWhat is the probability that among 10 calves born, 9 or more will be female?\nLet’s consider this question in the context of a set of queries for performing statistical inference that will be discussed further in Chapter 25.\nThe question: (From Hodges Jr and Lehmann (1970)): Female calves are more valuable than males. A bio-engineer claims to be able to cause more females to be born than the expected 50 percent rate. He conducts his procedure, and nine females are born out of the next 10 pregnancies among the treated cows. Should you believe his claim? That is, what is the probability of a result this (or more) surprising occurring by chance if his procedure has no effect? In this problem, we assume that on average 100 of 206 births are female, in contrast to the 50-50 benchmark universe in the previous problem.\nWhat is the purpose of the work?: Female calves are more valuable than male calves.\nStatistical inference?: Yes.\nConfidence interval or Test of hypothesis?: Test of hypothesis.\nWill you state the costs and benefits of various outcomes, or a loss function?: Yes. One need only say that the benefits are very large, and if the results are promising, it is worth gathering more data to confirm results.\nHow many samples of data are part of the hypothesis test?: One.\nWhat is the size of the first sample about which you wish to make significance statements?: Ten.\nWhat comparison(s) to make?: Compare the sample to the benchmark universe.\nWhat is the benchmark universe: that embodies the null hypothesis? 100/206 female.\nWhich symbols for the observed entities?: Balls in bucket, or numbers.\nWhat values or ranges of values?: We could write numbers 1 through 206 on pieces of paper, and take numbers 1-100 as “male” and 101-206 as “female”. Or we could use some other mechanism to give us a 100/206 chance of any one calf being female.\nFinite or infinite universe?: Infinite.\nWhich sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)?: Ten calves.\nWhat procedure to produce the sample entities?: Sampling with replacement.\nSimple (single step) or complex (multiple “if” drawings)?: Can think of it either way.\nWhat to record as the outcome of each resample trial?: The proportion (or number) of females.\nWhat is the criterion to be used in the test?: The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of 100/206 females.\n“One tail” or “two tail” test?: One tail, because the farmer is only interested in females. Finding a large proportion of males would not be of interest; it would not cause rejecting the null hypothesis.\nThe actual computation of probability may be done in several ways, as discussed earlier for four children and for ten cows. Conventional methods are discussed for comparison in Chapter 25. 
Here is the resampling solution in Python.\n\nStart of female_calves notebook\n\nDownload notebook\nInteract\n\n\n\n# set the number of trials\nn_trials = 10000\n\n# set the size of each sample\nsample_size = 10\n\n# Probability of any one calf being female.\np_female = 100 / 206\n\n# an array to store the results\nscores = np.zeros(n_trials)\n\n# for 10000 repeats\nfor i in range(n_trials):\n\n a = rnd.choice(['female', 'male'],\n p=[p_female, 1 - p_female],\n size = sample_size)\n b = np.sum(a == 'female')\n\n # store the result of the current trial\n scores[i] = b\n\n# plot a histogram of the scores\nplt.title(f\"Number of females in {n_trials} samples of \\n{sample_size} simulated calves\")\nplt.hist(scores)\nplt.xlabel('Number of Females')\nplt.ylabel('Frequency')\nplt.show()\n\n\n\n\n\n\n\n# count the number of scores that were greater than or equal to 9\nk = np.sum(scores >= 9)\n\n# express as a proportion\nkk = k / n_trials\n\n# show the proportion\nprint(f\"The probability of 9 or 10 females occurring by chance is {kk}\")\n\nThe probability of 9 or 10 females occurring by chance is 0.0084\n\n\nWe read from the result in vector kk in the “calves” program that the probability of 9 or 10 females occurring by chance is a bit more than one percent.\nEnd of female_calves notebook\n\n\n\n\n21.2.2 Example: A Public-Opinion Poll\nIs the Proportion of a Population Greater Than a Given Value?\nA municipal official wants to determine whether a majority of the town’s residents are for or against the awarding of a high-speed broadband internet contract, and he asks you to take a poll. You judge that the voter registration records are a fair representation of the universe in which the politician was interested, and you therefore decided to interview a random selection of registered voters. Of a sample of fifty people who expressed opinions, thirty said “yes” they were for the plan and twenty said “no,” they were against it. How conclusively do the results show that the people in town want this internet contract?\nNow comes some necessary subtle thinking in the interpretation of what seems like a simple problem. Notice that our aim in the analysis is to avoid the mistake of saying that the town favors the plan when in fact it does not favor the plan. Our chance of making this mistake is greatest when the voters are evenly split, so we choose as the benchmark (null) hypothesis that 50 percent of the town does not want the plan. This statement really means that “50 percent or more do not want the plan.” We could assess the probability of obtaining our result from a population that is split (say) 52-48 against, but such a probability would necessarily be even smaller, and we are primarily interested in assessing the maximum probability of being wrong. If the maximum probability of error turns out to be inconsequential, then we need not worry about less likely errors.\nThis problem is very much like the one-group fruit fly irradiation problem above. The only difference is that now we are comparing the observed sample against an arbitrary value of 50 percent (because that is the break-point in a situation where the majority decides) whereas in Section 21.2.0.1 we compared the observed sample against the normal population proportion (also 50 percent, because that is the normal proportion of males). But it really does not matter why we are comparing the observed sample to the figure of 50 percent; the procedure is the same in both cases. 
(Please notice that there is nothing special about the 50 percent figure; the same procedure would be followed for 20 percent or 85 percent.)\nIn brief, we a) take two pieces of paper, write “Yes” on one and “No” on the other, put them in a bucket b) draw a piece of paper from the bucket, record whether it was “Yes” or “No”, replace, and repeat 50 times c) count the number of “yeses” and “noes” in the first fifty draws, c) repeat for perhaps a hundred trials, then d) count the proportion of the trials in which a 50-50 universe would produce thirty or more “yes” answers.\nIn operational steps, the procedure is as follows:\n\nStep 1. “1-5” = no, “6-0” = yes.\nStep 2. In 50 random numbers, count the “yeses,” and record “false positive” if 30 or more “yeses.”\nStep 3. Repeat step 2 perhaps 100 times.\nStep 4. Calculate the proportion of experimental trials showing “false positive.” This estimates the probability that as many as 30 “yeses” would be observed by chance in a sample of 50 people if half (or more) are really against the plan.\n\n\n\n\n\nTable 21.2: Results from 20 random trials for contract poll problem\n\n\nTrial no\n# of \"Noes\"\n# of \"Yeses\"\n>= 30 \"Yeses\"\n\n\n\n\n1\n21\n29\n\n\n\n2\n25\n25\n\n\n\n3\n25\n25\n\n\n\n4\n25\n25\n\n\n\n5\n28\n22\n\n\n\n6\n28\n22\n\n\n\n7\n25\n25\n\n\n\n8\n28\n22\n\n\n\n9\n26\n24\n\n\n\n10\n22\n28\n\n\n\n11\n27\n23\n\n\n\n12\n25\n25\n\n\n\n13\n22\n28\n\n\n\n14\n24\n26\n\n\n\n15\n27\n23\n\n\n\n16\n27\n23\n\n\n\n17\n28\n22\n\n\n\n18\n26\n24\n\n\n\n19\n33\n17\n\n\n\n20\n23\n27\n\n\n\n\n\n\n\n\n\nIn Table 21.2, we see the results of twenty trials; 0 of 20 times (0 percent), 30 or more “yeses” were observed by chance. So our “significance level” or “prob value” is 0 percent, which is normally too high to feel confident that our poll results are reliable. This is the probability that as many as thirty of fifty people would say “yes” by chance if the population were “really” split evenly. (If the population were split so that more than 50 percent were against the plan, the probability would be even less that the observed results would occur by chance. In this sense, the benchmark hypothesis is conservative). On the other hand, if we had been counting the number of times there are 30 or more “No” votes that, in our setup, have the same odds as to 30 or more “Yes” votes, there would have been one. This indicates how samples can vary just by chance.\nTaken together, the evidence suggests that the mayor would be wise not to place very much confidence in the poll results, but rather ought to act with caution or else take a larger sample of voters.\n\nStart of contract_poll notebook\n\nDownload notebook\nInteract\n\n\nThis Python notebook generates samples of 50 simulated voters on the assumption that only 50 percent are in favor of the contract. Then it counts (sums) the number of samples where over 29 (30 or more) of the 50 respondents said they were in favor of the contract. 
(That is, we use a “one-tailed test.”) The result in the kk variable is the chance of a “false positive,” that is, 30 or more people saying they favor a contract when support for the proposal is actually split evenly down the middle.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n# We will do 10,000 iterations.\nn = 10_000\n\n# Make an array of integers to store the \"Yes\" counts.\nyeses = np.zeros(n, dtype=int)\n\nfor i in range(n):\n answers = rnd.choice(['No', 'Yes'], size=50)\n yeses[i] = np.sum(answers == 'Yes')\n\n# Produce a histogram of the trial results.\n# Use integer bins for histogram, from 10 through 40.\nplt.hist(yeses, bins=range(10, 41))\nplt.title('Number of yes votes out of 50, in null universe')\n\n\n\n\n\n\n\n\nIn the histogram above, we see that about 11 percent of our trials had 30 or more voters in favor, despite the fact that they were drawn from a population that was split 50-50. Python will calculate this proportion directly if we add the following commands to the above:\n\nk = np.sum(yeses >= 30)\nkk = k / n\nprint('Proportion >= 30:', np.round(kk, 2))\n\nProportion >= 30: 0.1\n\n\nEnd of contract_poll notebook\n\n\nThe section above discusses testing hypotheses about a single sample of counted data relative to a benchmark universe. This section discusses the issue of whether two samples with counted data should be considered the same or different.\n\n\n21.2.3 Example: Did the Trump-Clinton Poll Indicate that Trump Would Win?\n\nStart of trump_clinton notebook\n\nDownload notebook\nInteract\n\n\nWhat is the probability that a sample outcome such as actually observed (840 Trump, 660 Clinton) would occur by chance if Clinton is “really” ahead — that is, if Clinton has 50 percent (or more) of the support? To restate in sharper statistical language: What is the probability that the observed sample or one even more favorable to Trump would occur if the universe has a mean of 50 percent or below?\nHere is a procedure that responds to that question:\n\nCreate a benchmark universe with one ball marked “Trump” and another marked “Clinton”\nDraw a ball, record its marking, and replace. (We sample with replacement to simulate the practically-infinite population of U. S. voters.)\nRepeat step 2 1500 times and count the number of “Trump”s. If 840 or greater, record “Y”; otherwise, record “N.”\nRepeat steps 3 and 4 perhaps 1000 or 10,000 times, and count the number of “Y”s. The outcome estimates the probability that 840 or more Trump choices would occur if the universe is “really” half or more in favor of Clinton.\n\nThis procedure may be done as follows with Python.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n# Number of repeats we will run.\nn = 10_000\n\n# Make an integer array to store the counts.\ntrumps = np.zeros(n, dtype=int)\n\nfor i in range(n):\n votes = rnd.choice(['Trump', 'Clinton'], size=1500)\n trumps[i] = np.sum(votes == 'Trump')\n\n# Integer bins from 675 through 825 in steps of 5.\nplt.hist(trumps, bins=range(675, 826, 5))\nplt.title('Number of Trump voters of 1500 in null-world simulation')\n\n# How often >= 840 Trump votes in random draw?\nk = np.sum(trumps >= 840)\n# As a proportion of simulated resamples.\nkk = k / n\n\nprint('Proportion voting for Trump:', kk)\n\nProportion voting for Trump: 0.0\n\n\n\n\n\n\n\n\n\nThe value for kk is our estimate of the probability that Trump’s “victory” in the sample would occur by chance if he really were behind. 
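As a cross-check on why the simulation finds essentially no qualifying samples, we can also compute the exact binomial tail; this is an optional addition of our own, assuming SciPy is available:

from scipy.stats import binom

# Exact probability of 840 or more Trump choices in 1500 draws from a
# universe that is really split 50-50.
print(binom.sf(839, 1500, 0.5))

The exact figure is far below one in ten thousand, so it is no surprise that 10,000 resampling trials rarely, if ever, produce such a sample.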
In this case, our probability estimate is less than 1 in 10,000 (< 0.0001).\nEnd of trump_clinton notebook\n\n\n\n\n\n21.2.4 Example: Comparison of Possible Cancer Cure to Placebo\nDo Two Binomial Populations Differ in Their Proportions.\nSection 21.2.0.1 used an observed sample of male and female fruitflies to test the benchmark (null) hypothesis that the flies came from a universe with a one-to-one sex ratio, and the poll data problem also compared results to a 50-50 hypothesis. The calves problem also compared the results to a single benchmark universe — a proportion of 100/206 females. Now we want to compare two samples with each other , rather than comparing one sample with a hypothesized universe. That is, in this example we are not comparing one sample to a benchmark universe, but rather asking whether both samples come from the same universe. The universe from which both samples come, if both belong to the same universe, may be thought of as the benchmark universe, in this case.\nThe scientific question is whether pill P cures a rare cancer. A researcher gave pill P to six patients selected randomly from a group of twelve cancer patients; of the six, five got well. He gave an inactive placebo to the other six patients, and two of them got well. Does the evidence justify a conclusion that the pill has a curative effect?\n(An identical statistical example would serve for an experiment on methods of teaching reading to children. In such a situation the researcher would respond to inconclusive results by running the experiment on more subjects, but in cases like the cancer-pill example the researcher often cannot obtain more subjects.)\nWe can answer the stated question by combining the two samples and testing both samples against the resulting combined universe. In this case, the universe is twelve subjects, seven (5 + 2) of whom got well. How likely would such a universe produce two samples as far apart as five of six, and two of six, patients who get well? In other words, how often will two samples of six subjects, each drawn from a universe in which 7/12 of the patients get well, be as far apart as 5 - 2 = 3 patients in favor of the sample designated “pill”? This is obviously a one-tail test, for we have no reason to believe that the pill group might do less well than the placebo group.\nWe might construct a twelve-sided die, seven of whose sides are marked “get well.” Or put 12 pieces of paper in a bucket, seven with “get well” and five with “not well”. Or we would use pairs of numbers from the random-number table, with numbers “01-07” corresponding to get well, numbers “08-12” corresponding to “not get well,” and all other numbers omitted. (If you wish to save time, you can work out a system that uses more numbers and skips fewer, but that is up to you.) Designate the first six subjects “pill” and the next six subjects “placebo.”\nThe specific procedure might be as follows:\n\nStep 1. Write “get well” on seven pieces of paper, “not well” on another five. Put the 12 pieces of paper into a bucket.\nStep 2. Select two groups, “pill” and “placebo”, each with six random draws (with replacement) from the 12 pieces of paper.\nStep 3. Record how many “get well” in each group.\nStep 4. Subtract the result in group “placebo” from that in group “pill” (the difference may be negative).\nStep 5. Repeat steps 1-4 perhaps 100 times.\nStep 6. 
Compute the proportion of trials in which the pill does better by three or more cases.\n\n\n\n\n\nTable 21.3: Results from 25 random trials for pill/placebo\n\n\nTrial no\n# of pill cures\n# of placebo cures\nDifference\n\n\n\n\n1\n2\n3\n-1\n\n\n2\n4\n3\n1\n\n\n3\n5\n2\n3\n\n\n4\n3\n3\n0\n\n\n5\n5\n2\n3\n\n\n6\n4\n4\n0\n\n\n7\n3\n3\n0\n\n\n8\n3\n3\n0\n\n\n9\n3\n3\n0\n\n\n10\n4\n5\n-1\n\n\n11\n4\n5\n-1\n\n\n12\n3\n4\n-1\n\n\n13\n0\n3\n-3\n\n\n14\n5\n4\n1\n\n\n15\n3\n3\n0\n\n\n16\n5\n3\n2\n\n\n17\n5\n1\n4\n\n\n18\n3\n4\n-1\n\n\n19\n4\n2\n2\n\n\n20\n2\n4\n-2\n\n\n21\n2\n6\n-4\n\n\n22\n5\n5\n0\n\n\n23\n4\n5\n-1\n\n\n24\n3\n3\n0\n\n\n25\n4\n5\n-1\n\n\n\n\n\n\n\n\nIn the trials shown in Table 21.3, in three cases (12 percent) the difference between the randomly-drawn groups is three cases or greater. Apparently it is somewhat unusual — it happens 12 percent of the time — for this universe to generate “pill” samples in which the number of recoveries exceeds the number in the “placebo” samples by three or more. Therefore the answer to the scientific question, based on these samples, is that there is some reason to think that the medicine does have a favorable effect. But the investigator might sensibly await more data before reaching a firm conclusion about the pill’s efficiency, given the 12 percent probability.\n\nStart of pill_placebo notebook\n\nDownload notebook\nInteract\n\n\nNow for a Python solution. Again, the benchmark hypothesis is that pill P has no effect, and we ask how often, on this assumption, the results that were obtained from the actual test of the pill would occur by chance.\nGiven that in the test 7 of 12 patients overall got well, the benchmark hypothesis assumes 7/12 to be the chances of any random patient being cured. We generate two similar samples of 6 patients, both taken from the same universe composed of the combined samples — the bootstrap procedure. We count (sum) the number who are “get well” in each sample. Then we subtract the number who got well in the “pill” sample from the number who got well in the “no-pill” sample. We record the resulting difference for each trial in the variable pill_betters.\nIn the actual test, 3 more patients got well in the sample given the pill than in the sample given the placebo. We therefore count how many of the trials yield results where the difference between the sample given the pill and the sample not given the pill was greater than 2 (equal to or greater than 3). This result is the probability that the results derived from the actual test would be obtained from random samples drawn from a population which has a constant cure rate, pill or no pill.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n# The bucket with the pieces of paper.\noptions = np.repeat(['get well', 'not well'], [7, 5])\n\nn = 10_000\n\npill_betters = np.zeros(n, dtype=int)\n\nfor i in range(n):\n pill = rnd.choice(options, size=6)\n pill_cures = np.sum(pill == 'get well')\n placebo = rnd.choice(options, size=6)\n placebo_cures = np.sum(placebo == 'get well')\n pill_betters[i] = pill_cures - placebo_cures\n\nplt.hist(pill_betters, bins=range(-6, 7))\nplt.title('Number of extra cures pill vs placebo in null universe')\n\n\n\n\n\n\n\n\nRecall our actual observed results: In the medicine group, three more patients were cured than in the placebo group. From the histogram, we see that in only about 8 percent of the simulated trials did the “medicine” group do as well or better. 
The results seem to suggest — but by no means conclusively — that the medicine’s performance is not due to chance. Further study would probably be warranted. The following commands added to the above program will calculate this proportion directly:\n\n# How many trials gave an advantage of 3 or greater to the pill?\nk = np.sum(pill_betters >= 3)\n# Convert to a proportion.\nkk = k / n\n# Print the result.\nprint('Proportion with advantage of 3 or more for pill:',\n np.round(kk, 2))\n\nProportion with advantage of 3 or more for pill: 0.07\n\n\nEnd of pill_placebo notebook\n\n\nAs I (JLS) wrote when I first proposed this bootstrap method in 1969, this method is not the standard way of handling the problem; it is not even analogous to the standard analytic difference-of-proportions method (though since then it has become widely accepted). Though the method shown is quite direct and satisfactory, there are also many other resampling methods that one might construct to solve the same problem. By all means, invent your own statistics rather than simply trying to copy the methods described here; the examples given here only illustrate the process of inventing statistics rather than offering solutions for all classes of problems.\n\n\n21.2.5 Example: Did Attitudes About Marijuana Change?\n\nConsider two polls, each asking 1500 Americans about marijuana legalization. One poll, taken in 1980, found 52 percent of respondents in favor of decriminalization; the other, taken in 1985, found 46 percent in favor of decriminalization (Wonnacott and Wonnacott 1990, 275). Our null (benchmark) hypothesis is that both samples came from the same universe (the universe made up of the total of the two sets of observations). If so, let us then ask how likely would be two polls to produce results as different as were observed? Hence we construct a universe with a mean of 49 percent (the mean of the two polls of 52 percent and 46 percent), and repeatedly draw pairs of samples of size 1500 from it.\nTo see how the construction of the appropriate question is much more challenging intellectually than is the actual mathematics, let us consider another possibility suggested by a student: What about considering the universe to be the earlier poll with a mean of 52 percent, and then asking the probability that the later poll of 1500 people with a mean of 46 percent would come from it? Indeed, on first thought that procedure seems reasonable.\nUpon reflection — and it takes considerable thought on these matters to get them right — that would not be an appropriate procedure. The student’s suggested procedure would be the same as assuming that we had long-run solid knowledge of the universe, as if based on millions of observations, and then asking about the probability of a particular sample drawn from it. 
That does not correspond to the facts.\nThe only way to find the approach you eventually consider best — and there is no guarantee that it is indeed correct — is by close reference to the particular facts of the case.\n\n\n21.2.6 Example: Infarction and Cholesterol: Framingham Study\nIt is so important to understand the logic of hypothesis tests, and of the resampling method of doing them, that we will now tackle another problem similar to the preceding one.\nThis will be the first of several problems that use data from the famous Framingham study (drawn from Kahn and Sempos (1989)) concerning the development of myocardial infarction 16 years after the Framingham study began, for men ages 35- 44 with serum cholesterol above 250, compared to those with serum cholesterol below 250. The raw data are shown in Table 21.4. The data are from (Shurtleff 1970), cited in (Kahn and Sempos 1989, 12:61, Table 3-8). Kahn and Sempos divided the cases into “high” and “low” cholesterol.\n\n\nTable 21.4: Development of Myocardial Infarction in Men Aged 35-44 After 16 Years\n\n\nSerum Cholesterol\nDeveloped MI\nDidn’t Develop MI\nTotal\n\n\n\n\n> 250\n10\n125\n135\n\n\n<= 250\n21\n449\n470\n\n\n\n\nThe statistical logic properly begins by asking: How likely is that the two observed groups “really” came from the same “population” with respect to infarction rates? That is, we start with this question: How sure should one be that there is a difference in myocardial infarction rates between the high and low-cholesterol groups? Operationally, we address this issue by asking how likely it is that two groups as different in disease rates as the observed groups would be produced by the same “statistical universe.”\nKey step: We assume that the relevant “benchmark” or “null hypothesis” population (universe) is the composite of the two observed groups. That is, if there were no “true” difference in infarction rates between the two serum-cholesterol groups, and the observed disease differences occurred just because of sampling variation, the most reasonable representation of the population from which they came is the composite of the two observed groups.\nTherefore, we compose a hypothetical “benchmark” universe containing (135 + 470 =) 605 men at risk, and designate (10 + 21 =) 31 of them as infarction cases. We want to determine how likely it is that a universe like this one would produce — just by chance — two groups that differ as much as do the actually observed groups. That is, how often would random sampling from this universe produce one sub-sample of 135 men containing a large enough number of infarctions, and the other sub-sample of 470 men producing few enough infarctions, that the difference in occurrence rates would be as high as the observed difference of .029? (10/135 = .074, and 21/470 = .045, and .074 - .045 = .029).\nSo far, everything that has been said applies both to the conventional formulaic method and to the “new statistics” resampling method. But the logic is seldom explained to the reader of a piece of research — if indeed the researcher her/ himself grasps what the formula is doing. And if one just grabs for a formula with a prayer that it is the right one, one need never analyze the statistical logic of the problem at hand.\nNow we tackle this problem with a method that you would think of yourself if you began with the following mind-set: How can I simulate the mechanism whose operation I wish to understand? 
These steps will do the job:\n\nStep 1: Fill a bucket with 605 balls, 31 red (infarction) and the rest (605 — 31 = 574) green (no infarction).\nStep 2: Draw a sample of 135 (simulating the high serum-cholesterol group), one ball at a time and throwing it back after it is drawn to keep the simulated probability of an infarction the same throughout the sample; record the number of reds. Then do the same with another sample of 470 (the low serum-cholesterol group).\nStep 3: Calculate the difference in infarction rates for the two simulated groups, and compare it to the actual difference of .029; if the simulated difference is that large, record “Yes” for this trial; if not, record “No.”\nStep 4: Repeat steps 2 and 3 until a total of (say) 400 or 1000 trials have been completed. Compute the frequency with which the simulated groups produce a difference as great as actually observed. This frequency is an estimate of the probability that a difference as great as actually observed in Framingham would occur even if serum cholesterol has no effect upon myocardial infarction.\n\nThe procedure above can be carried out with balls in a bucket in a few hours. Yet it is natural to seek the added convenience of the computer to draw the samples. Here is a Python program:\n\nStart of framingham_hearts notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrnd = np.random.default_rng()\n\n# Number of resampling trials.\nn = 10_000\nmen = np.repeat(['infarction', 'no infarction'], [31, 574])\n\nn_high = 135 # Number of men with high cholesterol\nn_low = 470 # Number of men with low cholesterol\n\ninfarct_differences = np.zeros(n)\n\nfor i in range(n):\n highs = rnd.choice(men, size=n_high)\n lows = rnd.choice(men, size=n_low)\n high_infarcts = np.sum(highs == 'infarction')\n low_infarcts = np.sum(lows == 'infarction')\n high_prop = high_infarcts / n_high\n low_prop = low_infarcts / n_low\n infarct_differences[i] = high_prop - low_prop\n\nplt.hist(infarct_differences, bins=np.arange(-0.1, 0.1, 0.005))\nplt.title('Infarct proportion differences in null universe')\n\n# How often was the resampled difference >= the observed difference?\nk = np.sum(infarct_differences >= 0.029)\n# Convert this result to a proportion\nkk = k / n\n\nprint('Proportion of trials with difference >= observed:',\n np.round(kk, 2))\n\nProportion of trials with difference >= observed: 0.09\n\n\nThe results of the test using this program may be seen in the histogram. We find — perhaps surprisingly — that a difference as large as observed would occur by chance around 10 percent of the time. (If we were not guided by the theoretical expectation that high serum cholesterol produces heart disease, we might include the 10 percent difference going in the other direction, giving a 20 percent chance). Even a ten percent chance is sufficient to call into question the conclusion that high serum cholesterol is dangerous. At a minimum, this statistical result should call for more research before taking any strong action clinically or otherwise.\nEnd of framingham_hearts notebook\n\n\nWhere should one look to determine which procedures should be used to deal with a problem such as set forth above? Unlike the formulaic approach, the basic source is not a manual which sets forth a menu of formulas together with sets of rules about when they are appropriate.
Rather, you consult your own understanding about what is happening in (say) the Framingham situation, and the question that needs to be answered, and then you construct a “model” that is as faithful to the facts as is possible. The bucket-sampling described above is such a model for the case at hand.\nTo connect up what we have done with the conventional approach, one could apply a z test (conceptually similar to the t test, but applicable to yes-no data; it is the Normal-distribution approximation to the binomial distribution for large samples). Doing so, we find that the results are much the same as the resampling result — an eleven percent probability.\nSomeone may ask: Why do a resampling test when you can use a standard device such as a z or t test? The great advantage of resampling is that it avoids using the wrong method. The researcher is more likely to arrive at sound conclusions with resampling because s/he can understand what s/he is doing, instead of blindly grabbing a formula which may be in error.\nThe textbook from which the problem is drawn is an excellent one; the difficulty of its presentation is an inescapable consequence of the formulaic approach to probability and statistics. The body of complex algebra and tables that only a rare expert understands down to the foundations constitutes an impenetrable wall to understanding. Yet without such understanding, there can be only rote practice, which leads to frustration and error.\n\n\n21.2.7 Example: Is One Pig Ration More Effective Than the Other?\nTesting For a Difference in Means With a Two-by-Two Classification.\nEach of two new types of ration is fed to twelve pigs. A farmer wants to know whether ration A or ration B is better.2 The weight gains in pounds for pigs fed on rations A and B are:\nA: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 31\nB: 26, 24, 28, 29, 30, 29, 31, 29, 32, 26, 28, 32\nThe statistical question may be framed as follows: should one consider that the pigs fed on the different rations come from the same universe with respect to weight gains?\nIn the actual experiment, 9 of the 12 pigs who were fed ration A were in the top half of weight gains. How likely is it that one group of 12 randomly-chosen pigs would contain 9 of the 12 top weight gainers?\nOne approach to the problem is to divide the pigs into two groups — the twelve with the highest weight gains, and the twelve with the lowest weight gains — and examine whether an unusually large number of high-weight-gain pigs were fed on one or the other of the rations.\nWe can make this test by ordering and grouping the twenty-four pigs:\nHigh-weight group:\n38 (ration A), 35 (A), 34 (A), 34 (A), 32 (B), 32 (A), 32 (A), 32 (B), 31 (A),\n31 (B), 31 (A), 31 (A)\nLow-weight group:\n30 (B), 29 (A), 29 (A), 29 (B), 29 (B), 29 (B), 28 (B), 28 (B), 26 (A), 26 (B),\n26 (B), 24 (B).\nAmong the twelve high-weight-gain pigs, nine were fed on ration A. We ask: Is this further from an even split than we are likely to get by chance? Let us take twelve red and twelve black cards, shuffle them, and deal out twelve cards (the other twelve need not be dealt out). Count the proportion of the hands in which one ration comes up nine or more times in the first twelve cards, to reflect ration A’s appearance nine times among the highest twelve weight gains. More specifically:\n\nStep 1. Constitute a deck of twelve red and twelve black cards, and shuffle.\nStep 2. Deal out twelve cards, count the number red, and record “yes” if there are nine or more of either red or black.\nStep 3. 
Repeat step 2 perhaps fifty times.\nStep 4. Compute the proportion “yes.” This proportion estimates the probability sought.\n\n\nTable 21.5: Results from 25 random trials for pig rations\n\nTrial no | # red | # black | >=9 red or black\n1 | 2 | 10 | +\n2 | 7 | 5 |\n3 | 5 | 7 |\n4 | 9 | 3 | +\n5 | 9 | 3 | +\n6 | 7 | 5 |\n7 | 6 | 6 |\n8 | 6 | 6 |\n9 | 7 | 5 |\n10 | 7 | 5 |\n11 | 7 | 5 |\n12 | 6 | 6 |\n13 | 4 | 8 |\n14 | 6 | 6 |\n15 | 5 | 7 |\n16 | 4 | 8 |\n17 | 8 | 4 |\n18 | 4 | 8 |\n19 | 8 | 4 |\n20 | 8 | 4 |\n21 | 5 | 7 |\n22 | 8 | 4 |\n23 | 8 | 4 |\n24 | 9 | 3 | +\n25 | 6 | 6 |\n\n\nTable 21.5 shows the results of 25 trials. In four (marked by + signs) of the 25 (that is, 16 percent of the trials) there were nine or more of either red or black cards in the first twelve cards. Again the results suggest that it would be slightly unusual for the results to favor one ration or the other so strongly just by chance if they come from the same universe.\nNow the Python procedure to answer the question:\n\nStart of pig_rations notebook\n\nThe ranks = np.arange(1, 25) statement creates an array of numbers 1 through 24, which will represent the rankings of weight gains for each of the 24 pigs. We repeat the following procedure for 10,000 trials. First we shuffle the elements of array ranks so that the rank numbers for weight gains are randomized and placed in array shuffled. We then select the first 12 elements of shuffled and place them in first_12; this represents the rankings of a randomly-selected group of 12 pigs. We next count (sum) in n_top the number of pigs whose rankings for weight gain were in the top half — that is, a rank of less than 13. We record that number in top_ranks, and then continue the loop, until we finish our n trials.\nSince we did not know beforehand the direction of the effect of ration A on weight gain, we want to count the times that either more than 8 of the random selection of 12 pigs were in the top half of the rankings, or that fewer than 4 of these pigs were in the top half of the weight gain rankings. (The latter is the same as counting the number of times that more than 8 of the 12 non-selected pigs were in the top half in weight gain.)\nWe do so with the final two sum statements. By adding the two results n_gte_9 and n_lte_3 together, we have the number of times out of 10,000 that differences in weight gains in two groups as dramatic as those obtained in the actual experiment would occur by chance.\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrnd = np.random.default_rng()\n\n# Constitute the set of the weight gain rank orders. ranks is now a vector\n# consisting of the numbers 1 through 24, in that order.\nranks = np.arange(1, 25)\n\nn = 10_000\n\ntop_ranks = np.zeros(n, dtype=int)\n\nfor i in range(n):\n    # Shuffle the ranks of the weight gains.\n    shuffled = rnd.permuted(ranks)\n    # Take the first 12 ranks.\n    first_12 = shuffled[:12]\n    # Determine how many of these randomly selected 12 ranks are 12 or less\n    # (i.e. 1-12), put that result in n_top.\n    n_top = np.sum(first_12 <= 12)\n    # Keep track of each trial result in top_ranks.\n    top_ranks[i] = n_top\n\nplt.hist(top_ranks, bins=np.arange(1, 12))\nplt.title('Number of top 12 ranks in pig-ration trials')\n\n\nWe see from the histogram that, in about 3 percent of the trials, either more than 8 or fewer than 4 top half ranks (1-12) made it into the random group of twelve that we selected. 
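\n(As an aside, and not part of the pig_rations notebook: because we are drawing 12 of the 24 ranks without replacement, the same probability can be computed exactly from the hypergeometric distribution. A minimal check, assuming SciPy is available, might read:\n\nfrom scipy.stats import hypergeom\n\n# 24 ranks in all, 12 of them in the top half, 12 drawn for the random group.\ndist = hypergeom(24, 12, 12)\n# Probability of 9 or more top-half ranks, plus probability of 3 or fewer.\nexact = dist.sf(8) + dist.cdf(3)\nprint('Exact probability of 9 or more top ranks in either group:',\n    round(exact, 3))\n\nThe exact value is a little under 4 percent, in line with the simulated proportion computed next.)\n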
Python will calculate this for us as follows:\n\n# Determine how many of the trials yielded 9 or more top ranks.\nn_gte_9 = np.sum(top_ranks >= 9)\n# Determine how many trials yielded 3 or fewer of the top ranks.\n# If there were 3 or fewer, then 9 or more of the top ranks must\n# have been in the other group (not selected).\nn_lte_3 = np.sum(top_ranks <= 3)\n# Add the two together.\nn_both = n_gte_9 + n_lte_3\n# Convert to a proportion.\nprop_both = n_both / n\n\nprint('Trial proportion >=9 top ranks in either group:',\n    np.round(prop_both, 2))\n\nTrial proportion >=9 top ranks in either group: 0.04\n\n\nThe decisions that are warranted on the basis of the results depend upon one’s purpose. If writing a scientific paper on the merits of ration A is the ultimate purpose, it would be sensible to test another batch of pigs to get further evidence. (Or you could proceed to employ another sort of test for a slightly more precise evaluation.) But if the goal is a decision on which type of ration to buy for a small farm and they are the same price, just go ahead and buy ration A because, even if it is no better than ration B, you have strong evidence that it is no worse.\nEnd of pig_rations notebook\n\n\n21.2.8 Example: Do Planet Densities Differ?\nConsider the five planets known to the ancient world.\nMosteller and Rourke (1973, 17–19) ask us to compare the densities of the three planets farther from the sun than is the earth (Mars, density 0.71; Jupiter, 0.24; and Saturn, 0.12) against the densities of the planets closer to the sun than is the earth (Mercury, 0.68; Venus, 0.94).\nThe average density of the distant planets is .357; of the closer planets, .81. Is this difference (.453) statistically surprising, or is it likely to occur in a chance ordering of these planets?\nWe can answer this question with a permutation test; such sampling without replacement makes sense here because we are considering the entire set of planets, rather than a sample drawn from a larger population of planets (the word “population” is used here, rather than “universe,” to avoid confusion). And because the number of objects is so small, one could examine all possible arrangements (permutations), and see how many have (say) differences in mean densities between the two groups as large as observed.\nAnother method that Mosteller and Rourke suggest is by a comparison of the density ranks of the two sets, where Saturn has rank 1 and Venus has rank 5. This might have a scientific advantage if the sample data are dominated by a single “outlier,” whose domination is removed when we rank the data.\nWe see that the sum of the ranks for the “closer” set is 3+5=8. We can then ask: If the ranks were assigned at random, how likely is it that a set of two planets would have a sum as large as 8? Again, because the sample is small, we can examine all the possible permutations, as Mosteller and Rourke do in Table 3-1 (Mosteller and Rourke 1973, 56) (Substitute “Closer” for “B,” “Further” for “A”). In two of the ten permutations, a sum of ranks as great as 8 is observed, so the probability of a result as great as observed happening by chance is 20 percent, using these data. (We could just as well consider the difference in mean ranks between the two groups: 8/2 - 7/3 = 10/6 = 1.67.)\n\n\nTo illuminate the logic of this test, consider comparing the heights of two samples of trees. 
If sample A has the five tallest trees, and sample B has the five shortest trees, the difference in rank sums will be (6+7+8+9+10=) 40 - (1+2+3+4+5=) 15 = 25, the largest possible difference. If the groups are less sharply differentiated — for example, if sample A has #3 and sample B has #8 — the difference in rank sums will be less than this maximum of 25, as you can quickly verify.\nThe method we have just used is called a Mann-Whitney test, though that label is usually applied when the data are too many to examine all the possible permutations; in that case one conventionally uses a table prepared by formula. In the case where there are too many for a complete permutation test, our resampling algorithm is as follows (though we’ll continue with the planets example):\n\nCompute the mean ranks of the two groups.\nCalculate the difference between the means computed in step 1.\nCreate a bucket containing the ranks from 1 to the number of observations (5, in the case of the planets).\nShuffle the ranks.\nSince we are working with the ranked data, we must draw without replacement, because there can only be one #3, one #7, and so on. So draw as many ranks as there are observations in each group — 2 “Closer” and 3 “Further.”\nCompute the mean ranks of the two simulated groups of planets.\nCalculate the difference between the means computed in step 6 and record.\nRepeat steps 4 through 7 perhaps 1000 times.\nCount how often the shuffled difference in ranks equals or exceeds the observed difference from step 2 (1.67).\n\n\nStart of planet_densities notebook\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\n# Steps 1 and 2.\nactual_mean_diff = 8 / 2 - 7 / 3\n\n# Step 3\nranks = np.arange(1, 6)\n\nn = 10_000\n\nmean_differences = np.zeros(n)\n\nfor i in range(n):\n    # Step 4\n    shuffled = rnd.permuted(ranks)\n    # Step 5\n    closer = shuffled[:2]  # First 2\n    further = shuffled[2:]  # Last 3\n    # Step 6\n    mean_close = np.mean(closer)\n    mean_far = np.mean(further)\n    # Step 7\n    mean_differences[i] = mean_close - mean_far\n\n# Step 9\nk = np.sum(mean_differences >= actual_mean_diff)\nprob = k / n\n\nprint('Proportion of trials with mean difference >= 1.67:',\n    np.round(prob, 2))\n\nProportion of trials with mean difference >= 1.67: 0.19\n\n\nInterpretation: 19 percent of the time, random shufflings produced a difference in ranks as great as or greater than observed. Hence, on the strength of this evidence, we should not conclude that there is a statistically surprising difference in densities between the further planets and the closer planets.\nEnd of planet_densities notebook"
  },
  {
    "objectID": "testing_counts_1.html#conclusion",
    "href": "testing_counts_1.html#conclusion",
    "title": "21  Hypothesis-Testing with Counted Data, Part 1",
    "section": "21.3 Conclusion",
    "text": "21.3 Conclusion\nThis chapter has begun the actual work of testing hypotheses. The next chapter continues with discussion of somewhat more complex problems with counted data — more complex to think about, but no more difficult to actually treat mathematically with resampling simulation. If you have understood the general logic of the procedures used up until this point, you are in command of all the necessary conceptual knowledge to construct your own tests to answer any statistical question. A lot more practice, working on a variety of problems, obviously would help. 
But the key elements are simple: 1) Model the real situation accurately, 2) experiment with the model, and 3) compare the results of the model with the observed results.\n\n\n\n\nDixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to Statistical Analysis.”\n\n\nHodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic Concepts of Probability and Statistics. 2nd ed. San Francisco, California: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9.\n\n\nKahn, Harold A, and Christopher T Sempos. 1989. Statistical Methods in Epidemiology. Vol. 12. Monographs in Epidemiology and Biostatistics. New York: Oxford University Press. https://www.google.co.uk/books/edition/Statistical_Methods_in_Epidemiology/YERYAgAAQBAJ.\n\n\nMosteller, Frederick, and Robert E. K. Rourke. 1973. Sturdy Statistics: Nonparametrics and Order Statistics. Addison-Wesley Publishing Company.\n\n\nShurtleff, Dewey. 1970. “Some Characteristics Related to the Incidence of Cardiovascular Disease and Death: Framingham Study, 16-Year Follow-up.” Section 26. Edited by William B. Kannel and Tavia Gordon. The Framingham Study: An Epidemiological Investigation of Cardiovascular Disease. Washington, D.C.: U.S. Government Printing Office. https://upload.wikimedia.org/wikipedia/commons/6/6d/The_Framingham_study_-_an_epidemiological_investigation_of_cardiovascular_disease_sec.26_1970_%28IA_framinghamstudye00kann_25%29.pdf.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory Statistics. 5th ed. New York: John Wiley & Sons." + }, + { + "objectID": "significance.html#the-logic-of-hypothesis-tests", + "href": "significance.html#the-logic-of-hypothesis-tests", + "title": "22  The Concept of Statistical Significance in Testing Hypotheses", + "section": "22.1 The logic of hypothesis tests", + "text": "22.1 The logic of hypothesis tests\nLet’s address the logic of hypothesis tests by considering a variety of examples in everyday thinking:\nConsider the nine-year-old who tells the teacher that the dog ate the homework. Why does the teacher not accept the child’s excuse? Clearly it is because the event would be too “unusual.” But why do we think that way?\nLet’s speculate that you survey a million adults, and only three report that they have ever heard of a real case where a dog ate somebody’s homework. You are a teacher, and a student comes in without homework and says that a dog ate the homework. It could have happened — your survey reports that it really has happened in three lifetimes out of a million. But the event happens only very infrequently .\nTherefore, you probably conclude that because the event is so unlikely, something else must have happened — and the likeliest alternative is that the student did not do the homework. The logic is that if an event seems very unlikely, it would therefore surprise us greatly if it were to actually happen, and therefore we assume that there must be a better explanation. This is why we look askance at unlikely coincidences when they are to someone’s benefit.\nThe same line of reasoning was the logic of John Arbuthnot’s hypothesis test (1710) about the ratio of births by sex in the first published hypothesis test, though his extension of logic to God’s design as an alternative hypothesis goes beyond the standard modern framework. 
It is also the implicit logic in the research on puerperal fever, cholera, and beri-beri, the data for which were shown in Chapter 17, though no explicit mention was made of probability in those cases.\nTwo students sat next to each other at an ACT college-entrance examination in Kentucky in 1987. Out of 219 questions, 211 of the answers were identical, including many that were wrong. Student A was a high school athlete in Kentucky who had failed two previous SAT exams, and Student B thought he saw Student A copying from him. Should one believe that Student A cheated? (The Washington Post , April 19, 1992, p. D2.)\nYou say to yourself: It would be most unlikely that the two test-takers would answer that many questions identically by chance — and we can compute how unlikely that event would be. Because that event is so unlikely, we therefore conclude that one or both cheated. And indeed, the testing service invalidated the athlete’s exam. On the other hand, if all the questions that were answered identically were correct , the result might not be unreasonable. If we knew in how many cases they made the same mistakes , the inquiry would have been clearer, but the newspaper did not contain those details.\nThe court is hearing a murder case. There is no eye-witness, and the evidence consists of such facts as the height and weight and age of the person charged, and other circumstantial evidence. Only one person in 50 million has such characteristics, and you find such a person. Will you convict the person, or will you believe that the evidence was just a coincidence? Of course the evidence might have occurred by bad luck, but the probability is very, very small (1 in 50 million). Will you therefore conclude that because the chance is so small, it is reasonable to assume that the person charged committed the crime?\nSometimes the unusual really happens — the court errs by judging that the wrong person did it, and that person goes to prison or even is executed. The best we can do is to make the criterion strict: “Beyond a reasonable doubt.” (People ask: What probability does that criterion represent? But the court will not provide a numerical answer.)\nSomebody says to you: I am going to deal out five cards and it will be a royal flush — ten, jack, queen, king, and ace of the same suit. The person deals the cards and lo and behold! the royal flush appears. Do you think the occurrence happened just by chance? No, you are likely to be very dubious that it happened by chance. Therefore, you believe there must be some other explanation — that the person fixed the cards, for example.\nNote: You don’t attach the same meaning to any other permutation (say 3, 6, 7, 7, and king of various suits), even though that permutation is just as rare — unless the person announced exactly that permutation in advance.\nIndeed, even if the person says nothing , you will be surprised at a royal flush, because this hand has meaning , whereas another given set of five cards do not have any special meaning.\nYou see six Volvos in one home’s driveway, and you conclude that it is a Volvo club meeting, or a Volvo salesperson’s meeting. Why? Because it is unlikely that six people not connected formally by Volvo ownership would be friends of the same person.\nTwo important points complicate the concept of statistical significance:\n\nWith a large enough sample, every treatment or variable will seem different from every other. 
Two faces of even a good die (say, “1” and “2”) will produce different results in the very very long run.\nStatistical significance does not imply economic or social significance. Two faces of a die may be statistically different in a huge sample of throws, but a 1/10,000 difference between them is too small to make an economic difference in betting. Statistical significance is only a filter . If it appears, one should then proceed to decide whether there is substantive significance.\n\nInterpreting statistical significance is sometimes complex, especially when the interpretation depends heavily upon your prior expectations — as it often does. For example, how should a basketball coach decide whether or not to bench a player for poor performance after a series of missed shots at the basket?\nConsider Coach John Thompson who, after Charles Smith missed 10 of 12 shots in the 1989 Georgetown-Notre Dame NCAA game, took Smith out of the game for a time (The Washington Post, March 20, 1989, p. C1). The scientific or decision problem is: Should the coach consider that Smith is not now a 47 percent shooter as he normally is, and therefore the coach should bench him? The statistical question is: How likely is a shooter with a 47 percent average to produce 10 of 12 misses? The key issue in the statistical question concerns the total number of shot attempts we should consider.\nWould Coach Thompson take Smith out of the game after he missed one shot? Clearly not. Why not? Because one “expects” Smith to miss a shot half the time, and missing one shot therefore does not seem unusual.\nHow about after Smith misses two shots in a row? For the same reason the coach still would not bench him, because this event happens “often” — more specifically, about once in every sequence of four shots.\nHow about after 9 misses out of ten shots? Notice the difference between this case and 9 females among ten calves. In the case of the calves, we expected half females because the experiment is a single isolated trial. The event considered by itself has a small enough probability that it seems unexpected rather than expected. (“Unexpected” seems to be closely related to “happens seldom” or “unusual” in our psychology.) And an event that happens seldom seems to call for explanation, and also seems to promise that it will yield itself to explanation by some unusual concatenation of forces. That is, unusual events lead us to think that they have unusual causes; that is the nub of the matter. (But on the other hand, one can sometimes benefit by paying attention to unusual events, as scientists know when they investigate outliers.)\nIn basketball shooting, we expect 47 percent of Smith’s individual shots to be successful, and we also expect that average for each set of shots. But we also expect some sets of shots to be far from that average because we observe many sets; such variation is inevitable. So when we see a single set of 9 misses in ten shots, we are not very surprised.\nBut how about 29 misses in 30 shots? At some point, one must start to pay attention. (And of course we would pay more attention if beforehand, and never at any other time, the player said, “I can’t see the basket today. My eyes are dim.”)\nSo, how should one proceed? Perhaps proceed the same way as with a coin that keeps coming down heads a very large proportion of the throws, over a long series of tosses: At some point you examine it to see if it has two heads. 
But if your investigation is negative, in the absence of an indication other than the behavior in question , you continue to believe that there is no explanation and you assume that the event is “chance” and should not be acted upon . In the same way, a coach might ask a player if there is an explanation for the many misses. But if the player answers “no,” the coach should not bench him. (There are difficulties here with truth-telling, of course, but let that go for now.)\nThe key point for the basketball case and other repetitive situations is not to judge that there is an unusual explanation from the behavior of a single sample alone , just as with a short sequence of stock-price changes.\nWe all need to learn that “irregular” (a good word here) sequences are less unusual than they seem to the naked intuition. A streak of 10 out of 12 misses for a 47 percent shooter occurs about 3 percent of the time. That is, about every 33 shots Smith takes, he will begin a sequence of 12 shots that will end with 3 or fewer baskets — perhaps once in every couple of games. This does not seem “very” unusual, perhaps. And if the coach treats each such case as unusual, he will be losing some of the services of a better player than he replaces him with.\nIn brief, how hard one should search for an explanation should depend on the probability of the event. But one should (almost) assume the absence of an explanation unless one actually finds it.\nBayesian analysis (Chapter 31) could be brought to bear upon the matter, bringing in your prior probabilities based on the knowledge of research that has shown that there is no such thing as a “hot hand” in basketball (see Chapter 14), together with some sort of cost-benefit error-loss calculation comparing Smith and the next best available player." + }, + { + "objectID": "significance.html#the-concept-of-statistical-significance", + "href": "significance.html#the-concept-of-statistical-significance", + "title": "22  The Concept of Statistical Significance in Testing Hypotheses", + "section": "22.2 The concept of statistical significance", + "text": "22.2 The concept of statistical significance\n“Significance level” is a common term in probability statistics. It corresponds roughly to the probability that the assumed benchmark universe could give rise to a sample as extreme as the observed sample by chance. The results of Example 16-1 would be phrased as follows: The hypothesis that the radiation treatment affects the sex of the fruit fly offspring is accepted as true at the probability level of .16 (sometimes stated as the 16 percent level of significance). (A more common way of expressing this idea would be to say that the hypothesis is not rejected at the .16 probability level or the 16 percent level of significance. But “not rejected” and “accepted” really do mean much the same thing, despite some arguments to the contrary.) This kind of statistical work is called hypothesis testing.\nThe question of which significance level should be considered “significant” is difficult. How great must a coincidence be before you refuse to believe that it is only a coincidence? It has been conventional in social science to say that if the probability that something happens by chance is less than 5 percent, it is significant. But sometimes the stiffer standard of 1 percent is used. Actually, any fixed cut-off significance level is arbitrary. (And even the whole notion of saying that a hypothesis “is true” or “is not true” is sometimes not useful.) 
Whether a one-tailed or two-tailed test is used will influence your significance level, and this is why care must be taken in making that choice.\n\n\n\n\nArbuthnot, John. 1710. “An Argument for Divine Providence, Taken from the Constant Regularity Observ’d in the Births of Both Sexes. By Dr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of the College of Physitians and the Royal Society.” Philosophical Transactions of the Royal Society of London 27 (328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011." + }, + { + "objectID": "testing_counts_2.html#comparisons-among-more-than-two-samples-of-counted-data", + "href": "testing_counts_2.html#comparisons-among-more-than-two-samples-of-counted-data", + "title": "23  The Statistics of Hypothesis-Testing with Counted Data, Part 2", + "section": "23.1 Comparisons among more than two samples of counted data", + "text": "23.1 Comparisons among more than two samples of counted data\nExample 17-1: Do Any of Four Treatments Affect the Sex Ratio in Fruit Flies? (When the Benchmark Universe Proportion is Known, Is the Propor tion of the Binomial Population Affected by Any of the Treatments?) (Program “4treat”)\nSuppose that, instead of experimenting with just one type of radiation treatment on the flies (as in Example 15-1), you try four different treatments, which we shall label A, B, C, and D. Treatment A produces fourteen males and six females, but treatments B, C, and D produce ten, eleven, and ten males, respectively. It is immediately obvious that there is no reason to think that treatment B, C, or D affects the sex ratio. But what about treatment A?\nA frequent and dangerous mistake made by young scientists is to scrounge around in the data for the most extreme result, and then treat it as if it were the only result. In the context of this example, it would be fallacious to think that the probability of the fourteen-males-to-six females split observed for treatment A is the same as the probability that we figured for a single experiment in Example 15-1. Instead, we must consider that our benchmark universe is composed of four sets of twenty trials, each trial having a 50-50 probability of being male. We can consider that our previous trials 1-4 in Example 15-1 constitute a single new trial, and each subsequent set of four previous trials constitute another new trial. We then ask how likely a new trial of our sets of twenty flips is to produce one set with fourteen or more of one or the other sex.\nLet us make the procedure explicit, but using random numbers instead of coins this time:\nStep 1. Let “1-5” = males, “6-0” = females\nStep 2. Choose four groups of twenty numbers. If for any group there are 14 or more males, record “yes”; if 13 or less, record “no.”\nStep 3. Repeat perhaps 1000 times.\nStep 4. Calculate the proportion “yes” in the 1000 trials. This proportion estimates the probability that a fruit fly population with a proportion of 50 percent males will produce as many as 14 males in at least one of four samples of 20 flies.\nWe begin the trials with data as in Table 17-1. In two of the six simulation trials, more than one sample shows 14 or more males. Another trial shows fourteen or more females . Without even concerning ourselves about whether we should be looking at males or females, or just males, or needing to do more trials, we can see that it would be very common indeed to have one of four treatments show fourteen or more of one sex just by chance. 
This discovery clearly indicates that a result that would be fairly unusual (three in twenty-five) for a single sample alone is commonplace in one of four observed samples.\nTable 17-1\nNumber of “Males” in Groups of 20 (Based on Random Numbers)\n\nTrial | Group A | Group B | Group C | Group D | Yes / No (>= 14 or <= 6)\n1 | 11 | 12 | 8 | 12 | No\n2 | 12 | 7 | 9 | 8 | No\n3 | 6 | 10 | 10 | 10 | Yes\n4 | 9 | 9 | 12 | 7 | No\n5 | 14 | 12 | 13 | 10 | Yes\n6 | 11 | 14 | 9 | 7 | Yes\n\nA key point of the RESAMPLING STATS program “4TREAT” is that each sample consists of four sets of 20 randomly generated hypothetical fruit flies. And if we consider 1000 trials, we will be examining 4000 sets of 20 fruit flies.\nIn each trial we GENERATE up to 4 random samples of 20 fruit flies, and for each, we count the number of males (“1”s) and then check whether that group has more than 13 of either sex (actually, more than 13 “1”s or less than 7 “1”s). If it does, then we change J to 1, which informs us that for this sample, at least 1 group of 20 fruit flies had results as unusual as the results from the fruit flies exposed to the four treatments.\nAfter the 1000 runs are made, we count the number of trials where one sample had a group of fruit flies with 14 or more of either sex, and PRINT the results.\n\n' Program file: \"4treat.rss\"\n\nREPEAT 1000\n ' Do 1000 experiments.\n COPY (0) j\n ' j indicates whether we have obtained a trial group with 14 or more of\n ' either sex. We start at \"0\" (= no).\n REPEAT 4\n ' Repeat the following steps 4 times to constitute 4 trial groups of 20\n ' flies each.\n GENERATE 20 1,2 a\n ' Generate randomly 20 \"1\"s and \"2\"s and put them in a; let \"1\" = male.\n COUNT a =1 b\n ' Count the number of males, put the result in b.\n IF b >= 14\n ' If the result is 14 or more males, then\n COPY (1) j\n ' Set the indicator to \"1.\"\n END\n ' End the IF condition.\n IF b <= 6\n ' If the result is 6 or fewer males (the same as 14 or more females), then\n COPY (1) j\n ' Set the indicator to \"1.\"\n END\n ' End the IF condition.\n END\n ' End the procedure for one group, go back and repeat until all four\n ' groups have been done.\n SCORE j z\n ' j now tells us whether we got a result as extreme as that observed (j =\n ' \"1\" if we did, j = \"0\" if not). We must keep track in z of this result\n ' for each experiment.\nEND\n' End one experiment, go back and repeat until all 1000 are complete.\nCOUNT z =1 k\n' Count the number of experiments in which we had results as extreme as\n' those observed.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the result.\n\n' Note: The file \"4treat\" on the Resampling Stats software disk contains\n' this set of commands.\nIn one set of 1000 trials, there were more than 13 or less than 7 males 33 percent of the time — clearly not an unusual occurrence.\nExample 17-2: Do Four Psychological Treatments Differ in Effectiveness? (Do Several Two-Outcome Samples Differ Among Themselves in Their Proportions?) (Program “4treat1”)\nConsider four different psychological treatments designed to rehabilitate juvenile delinquents. Instead of a numerical test score, there is only a “yes” or a “no” answer as to whether the juvenile has been rehabilitated or has gotten into trouble again. Label the treatments P, R, S, and T, each of which is administered to a separate group of twenty juvenile delinquents. The number of rehabilitations per group has been: P, 17; R, 10; S, 10; T, 7. 
Is it improbable that all four groups come from the same universe?\nThis problem is like the placebo vs. cancer-cure problem, but now there are more than two samples. It is also like the four-sample irradiated-fruit flies example (Example 17-1), except that now we are not asking whether any or some of the samples differ from a given universe (50-50 sex ratio in that case). Rather, we are now asking whether there are differences among the samples themselves. Please keep in mind that we are still dealing with two-outcome (yes-or-no, well-or-sick) problems. Later we shall take up problems that are similar except that the outcomes are “quantitative.”\nIf all four groups were drawn from the same universe, that universe has an estimated rehabilitation rate of 17/20 + 10/20 + 10/20 + 7/20 = 44/80 = 55/100, because the observed data taken as a whole constitute our best guess as to the nature of the universe from which they come — again, if they all come from the same universe. (Please think this matter over a bit, because it is important and subtle. It may help you to notice the absence of any other information about the universe from which they have all come, if they have come from the same universe.)\nTherefore, select twenty two-digit numbers for each group from the random-number table, marking “yes” for each number “1-55” and “no” for each number “56-100.” Conduct a number of such trials. Then count the proportion of times that the difference between the highest and lowest groups is larger than the widest observed difference, the difference between P and T (17-7 = 10). In Table 17-2, none of the first six trials shows anywhere near as large a difference as the observed range of 10, suggesting that it would be rare for four treatments that are “really” similar to show so great a difference. There is thus reason to believe that P and T differ in their effects.\nTable 7-2\nResults of Six Random Trials for Problem “Delinquents”\n\n\n\nTrial\nP\nR\nS\nT\nLargest Minus Smallest\n\n\n1\n11\n9\n8\n12\n4\n\n\n2\n10\n10\n12\n12\n2\n\n\n3\n9\n12\n8\n12\n4\n\n\n4\n9\n11\n12\n10\n3\n\n\n5\n10\n10\n11\n12\n1\n\n\n6\n11\n11\n9\n11\n2\n\n\n\nThe strategy of the RESAMPLING STATS solution to “Delinquents” is similar to the strategy for previous problems in this chapter. The benchmark (null) hypothesis is that the treatments do not differ in their effects observed, and we estimate the probability that the observed results would occur by chance using the benchmark universe. The only new twist is that we must instruct the computer to find the groups with the highest and the lowest numbers of rehabilitations.\nUsing RESAMPLING STATS we GENERATE four “treatments,” each represented by 20 numbers, each number randomly selected between 1 and 100. We let 1-55 = success, 56-100\n= failure. 
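\n(Readers working in Python rather than RESAMPLING STATS can sketch the same test roughly as follows before turning to the program below; this sketch is not part of the original program, the variable names are ours, and a binomial draw stands in for generating 20 random numbers per group and counting those of 55 or below.\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\nn = 10_000\nmax_diffs = np.zeros(n)\n\nfor i in range(n):\n    # Four groups of 20 from a universe with a 55 percent rehabilitation rate.\n    counts = rnd.binomial(20, 0.55, size=4)\n    # The largest pairwise difference is the largest count minus the smallest.\n    max_diffs[i] = np.max(counts) - np.min(counts)\n\nk = np.sum(max_diffs >= 10)\nprint('Proportion of trials with a maximum difference of 10 or more:', k / n)\n\nThe proportion should come out near the one percent reported below.)\n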
Follow along in the program for the rest of the procedure:\n\n' Program file: \"4treat1.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 20 1,100 a\n ' The first treatment group, where \"1-55\" = success, \"56-100\" = failure\n GENERATE 20 1,100 b\n ' The second group\n GENERATE 20 1,100 c\n ' The third group\n GENERATE 20 1,100 d\n ' The fourth group\n COUNT a <=55 aa\n ' Count the first group's successes\n COUNT b <=55 bb\n ' Same for second, third & fourth groups\n COUNT c <=55 cc\n COUNT d <=55 dd\n SUBTRACT aa bb ab\n ' Now find all the pairwise differences in successes among the groups\n SUBTRACT aa cc ac\n SUBTRACT aa dd ad\n SUBTRACT bb cc bc\n SUBTRACT bb dd bd\n SUBTRACT cc dd cd\n CONCAT ab ac ad bc bd cd e\n ' Concatenate, or join, all the differences in a single vector e\n ABS e f\n ' Since we are interested only in the magnitude of the difference, not its\n ' direction, we take the ABSolute value of all the differences.\n MAX f g\n ' Find the largest of all the differences\n SCORE g z\n ' Keep score of the largest\nEND\n' End a trial, go back and repeat until all 1000 are complete.\nCOUNT z >=10 k\n' How many of the trials yielded a maximum difference as great as or greater\n' than the observed maximum difference?\nDIVIDE k 1000 kk\n' Convert to a proportion\nPRINT kk\n' Note: The file \"4treat1\" on the Resampling Stats software disk contains\n' this set of commands.\nOne percent of the experiments with randomly generated treatments from a common success rate of .55 produced differences as great as or greater than the observed maximum difference (10).\nAn alternative approach to this problem would be to deal with each result’s departure from the mean, rather than the largest difference among the pairs. Once again, we want to deal with absolute departures, since we are interested only in magnitude of difference. We could take the absolute value of the differences, as above, but we will try something different here. Squaring the differences also renders them all positive: this is a common approach in statistics.\nThe first step is to examine our data and calculate this measure: The mean is 11, the differences are 6, 1, 1, and 4, the squared differences are 36, 1, 1, and 16, and their sum is 54. Our experiment will be, as before, to constitute four groups of 20 at random from a universe with a 55 percent rehabilitation rate. We then calculate this same measure for the random groups. If it is frequently larger than 54, then we conclude that a uniform cure rate of 55 percent could easily have produced the observed results. The program that follows also GENERATES the four treatments by using a REPEAT loop, rather than spelling out the GENERATE command 4 times as above. 
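\n(A rough Python sketch of this squared-deviation measure, along the same lines as the earlier Python sketch and again not part of the original program, with variable names of our own choosing:\n\nimport numpy as np\n\nrnd = np.random.default_rng()\n\nn = 10_000\nsum_sq_devs = np.zeros(n)\n\nfor i in range(n):\n    # Four groups of 20 from a universe with a 55 percent rehabilitation rate.\n    counts = rnd.binomial(20, 0.55, size=4)\n    # Sum of squared deviations of the four counts from their own mean.\n    sum_sq_devs[i] = np.sum((counts - np.mean(counts)) ** 2)\n\nk = np.sum(sum_sq_devs >= 54)\nprint('Proportion of trials with sum of squared deviations of 54 or more:', k / n)\n\nThe RESAMPLING STATS version is given next.)\n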
In RESAMPLING STATS:\n\n' Program file: \"testing_counts_2_02.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n REPEAT 4\n ' Repeat the following steps 4 times to constitute 4 groups of 20 and\n ' count their rehabilitation rates.\n GENERATE 20 1,100 a\n ' Randomly generate 20 numbers between 1 and 100 and put them in a; let\n ' 1-55 = rehabilitation, 56-100 no rehab.\n COUNT a between 1 55 b\n ' Count the number of rehabs, put the result in b.\n SCORE b w\n ' Keep track of the 4 rehab rates for the group of 20.\n END\n ' End the procedure for one group of 20, go back and repeat until all 4\n ' are done.\n MEAN w x\n ' Calculate the mean\n SUMSQRDEV w x y\n ' Find the sum of squared deviations between group rehab rates (w) and the\n ' overall rate (x).\n SCORE y z\n ' Keep track of the result for each trial.\n CLEAR w\n ' Erase the contents of w to prepare for the next trial.\nEND\n' End one experiment, go back and repeat until all 1000 are complete.\nHISTOGRAM z\n' Produce a histogram of trial results.\n4 Treatments\n\nsum of squared differences\nFrom this histogram, we see that in only 1 percent of the cases did our trial sum of squared differences equal or exceed 54, confirming our conclusion that this is an unusual result. We can have RESAMPLING STATS calculate this proportion:\n\n' Program file: \"4treat2.rss\"\n\nCOUNT z >= 54 k\n' Determine how many trials produced differences as great as those\n' observed.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the results.\n\n' Note: The file \"4treat2\" on the Resampling Stats software disk contains\n' this set of commands.\nThe conventional way to approach this problem would be with what is known as a “chi-square test.”\nExample 17-3: Three-way Comparison\nIn a national election poll of 750 respondents in May, 1992, George Bush got 36 percent of the preferences (270 voters), Ross Perot got 30 percent (225 voters), and Bill Clinton got 28 percent (210 voters) ( Wall Street Journal, October 29, 1992, A16). Assuming that the poll was representative of actual voting, how likely is it that Bush was actually behind and just came out ahead in this poll by chance? Or to put it differently, what was the probability that Bush actually had a plurality of support, rather than that his apparent advantage was a matter of sampling variability? We test this by constructing a universe in which Bush is slightly behind (in practice, just equal), and then drawing samples to see how likely it is that those samples will show Bush ahead.\nWe must first find that universe — among all possible universes that yield a conclusion contrary to the conclusion shown by the data, and one in which we are interested — that has the highest probability of producing the observed sample. With a two-person race the universe is obvious: a universe that is evenly split except for a single vote against “our” candidate who is now in the lead, i.e. in practice a 50-50 universe. In that simple case we then ask the probability that that universe would produce a sample as far out in the direction of the conclusion drawn from the observed sample as the observed sample.\nWith a three-person race, however, the decision is not obvious (and if this problem becomes too murky for you, skip over it; it is included here more for fun than anything else). And there is no standard method for handling this problem in conventional statistics (a solution in terms of a confidence interval was first offered in 1992, and that one is very complicated and not very satisfactory to me). 
But the sort of thinking that we must labor to accomplish is also required for any conventional solution; the difficulty is inherent in the problem, rather than being inherent in resampling, and resampling will be at least as simple and understandable as any formulaic approach.\nThe relevant universe is (or so I think) a universe that is 35 Bush — 35 Perot — 30 Clinton (for a race where the poll indicates a 36-30-28 split); the 35-35-30 universe is of interest because it is the universe that is closest to the observed sample that does not provide a win for Bush (leaving out the “undecideds” for convenience); it is roughly analogous to the 50-50 split in the two-person race, though a clear-cut argument would require a lot more discussion. A universe that is split 34-34-32, or any of the other possible universes, is less likely to produce a 36-30-28 sample (such as was observed) than is a 35-35-30 universe, I believe, but that is a checkable matter. (In technical terms, it might be a “maximum likelihood universe” that we are looking for.)\nWe might also try a 36-36-28 universe to see if that produces a result very different than the 35-35-30 universe.\nAmong those universes where Bush is behind (or equal), a universe that is split 50-50-0 (with just one extra vote for the closest opponent to Bush) would be the most likely to produce a 6 percent difference between the top two candidates by chance, but we are not prepared to believe that the voters are split in such a fashion. This assumption shows that we are bringing some judgments to bear from outside the observed data.\nFor now, the point is not how to discover the appropriate benchmark hypothesis, but rather its criterion — which is, I repeat, that universe (among all possible universes) that yields a conclusion contrary to the conclusion shown by the data (and in which we are interested) and that (among such universes that yield such a conclusion) has the highest probability of producing the observed sample.\nLet’s go through the logic again: 1) Bush apparently has a 6 percent lead over the second-place candidate. 2) We ask if the second-place candidate might be ahead if all voters were polled. We test that by setting up a universe in which the second-place candidate is infinitesimally ahead (in practice, we make the two top candidates equal in our hypothetical universe). And we make the third-place candidate somewhere close to the top two candidates. 3) We then draw samples from this universe and observe how often the result is a 6 percent lead for the top candidate (who starts off just below equal in the universe).\nFrom here on, the procedure is straightforward: Determine how likely that universe is to produce a sample as far (or further) away in the direction of “our” candidate winning. (One could do something like this even if the candidate of interest were not now in the lead.)\nThis problem teaches again that one must think explicitly about the choice of a benchmark hypothesis. 
The grounds for the choice of the benchmark hypothesis should precede the program, or should be included as an extended comment within the program.\nThis program embodies the previous line of thought.\n\n' Program file: \"testing_counts_2_04.rss\"\n\nURN 35#1 35#2 30#3 univ 1= Bush, 2= Perot, 3=Clinton\nREPEAT 1000\n SAMPLE 750 univ samp\n ' Take a sample of 750 votes\n COUNT samp =1 bush\n ' Count the Bush voters, etc.\n COUNT samp =2 pero\n ' Perot voters\n COUNT samp =3 clin\n ' Clinton voters\n CONCAT pero clin others\n ' Join Perot & Clinton votes\n MAX others second\n ' Find the larger of the other two\n SUBTRACT bush second d\n ' Find Bush's margin over 2nd\n SCORE d z\nEND\nHISTOGRAM z\nCOUNT z >=46 m\n' Compare to the observed margin in the sample of 750 corresponding to a 6\n' percent margin by Bush over 2nd place finisher (rounded)\nDIVIDE m 1000 mm\nPRINT mm\n\n\n\nFigure 23.1: Samples of 750 Voters:\n\n\nThe result is — Bush’s margin over 2nd (mm) = 0.018.\nWhen we run this program with a 36-36-28 split, we also get a similar result — 2.6 percent. That is, the analysis shows a probability of only 2.6 percent that Bush would score a 6 percentage point “victory” in the sample, by chance, if the universe were split as specified. So Bush could feels reasonably confident that at the time the poll was taken, he was ahead of the other two candidates." + }, + { + "objectID": "testing_counts_2.html#paired-comparisons-with-counted-data", + "href": "testing_counts_2.html#paired-comparisons-with-counted-data", + "title": "23  The Statistics of Hypothesis-Testing with Counted Data, Part 2", + "section": "23.2 Paired Comparisons With Counted Data", + "text": "23.2 Paired Comparisons With Counted Data\nExample 17-4: The Pig Rations Again, But Comparing Pairs of Pigs (Paired-Comparison Test) (Program “Pigs2”)\nTo illustrate how several different procedures can reasonably be used to deal with a given problem, here is another way to decide whether pig ration A is “really” better: We can assume that the order of the pig scores listed within each ration group is random — perhaps the order of the stalls the pigs were kept in, or their alphabetical-name order, or any other random order not related to their weights . Match the first pig eating ration A with the first pig eating ration B, and also match the second pigs, the third pigs, and so forth. Then count the number of matched pairs on which ration A does better. On nine of twelve pairings ration A does better, that is, 31.0 > 26.0, 34.0 > 24.0, and so forth.\nNow we can ask: If the two rations are equally good, how often will one ration exceed the other nine or more times out of twelve, just by chance? This is the same as asking how often either heads or tails will come up nine or more times in twelve tosses. (This is a “two-tailed” test because, as far as we know, either ration may be as good as or better than the other.) Once we have decided to treat the problem in this manner, it is quite similar to Example 15-1 (the first fruitfly irradiation problem). We ask how likely it is that the outcome will be as far away as the observed outcome (9 “heads” of 12) from 6 of 12 (which is what we expect to get by chance in this case if the two rations are similar).\nSo we conduct perhaps fifty trials as in Table 17-3, where an asterisk denotes nine or more heads or tails.\nStep 1. Let odd numbers equal “A better” and even numbers equal “B better.”\nStep 2. Examine 12 random digits and check whether 9 or more, or 3 or less, are odd. 
If so, record “yes,” otherwise “no.”\nStep 3. Repeat step 2 fifty times.\nStep 4. Compute the proportion “yes,” which estimates the probability sought.\nThe results are shown in Table 17-3.\nIn 8 of 50 simulation trials, one or the other ration had nine or more tosses in its favor. Therefore, we estimate the probability to be .16 (eight of fifty) that samples this different would be generated by chance if the samples came from the same universe.\nTable 17-3\nResults From Fifty Simulation Trials Of The Problem “Pigs2”\n\n\n\n\n\n\n\n\n\n\n\nTrial\nHeads” or Odds”\n(Ration A)\n“Tails” or “Evems”\n(Ration B)\nTrial\n“Heads” or Odds”\n(Ration A)\n“Tails” or “Evens”\n(Ration B)\n\n\n1\n6\n6\n26\n6\n6\n\n\n2\n4\n8\n27\n5\n7\n\n\n3\n6\n6\n28\n7\n5\n\n\n4\n7\n5\n29\n4\n8\n\n\n* 5\n3\n9\n30\n6\n6\n\n\n6\n5\n7\n* 31\n9\n3\n\n\n7\n8\n4\n* 32\n2\n10\n\n\n8\n6\n6\n33\n7\n5\n\n\n9\n7\n5\n34\n5\n7\n\n\n*10\n9\n3\n35\n6\n6\n\n\n11\n7\n5\n36\n8\n4\n\n\n*12\n3\n9\n37\n6\n6\n\n\n13\n5\n7\n38\n4\n8\n\n\n14\n6\n6\n39\n5\n7\n\n\n15\n6\n6\n40\n8\n4\n\n\n16\n8\n4\n41\n5\n7\n\n\n17\n5\n7\n42\n6\n6\n\n\n*18\n9\n3\n43\n5\n7\n\n\n19\n6\n6\n44\n7\n5\n\n\n20\n7\n5\n45\n6\n6\n\n\n21\n4\n8\n46\n4\n8\n\n\n* 22\n10\n2\n47\n5\n7\n\n\n23\n6\n6\n48\n5\n7\n\n\n24\n5\n7\n49\n8\n4\n\n\n*25\n3\n9\n50\n7\n5\n\n\n\nNow for a RESAMPLING STATS program and results. “Pigs2” is different from “Pigs1” in that it compares the weight-gain results of pairs of pigs, instead of simply looking at the rankings for weight gains.\nThe key to “Pigs2” is the GENERATE statement. If we assume that ration A does not have an effect on weight gain (which is the “benchmark” or “null” hypothesis), then the results of the actual experiment would be no different than if we randomly GENERATE numbers “1” and “2” and treat a “1” as a larger weight gain for the ration A pig, and a “2” as a larger weight gain for the ration B pig. Both events have a .5 chance of occurring for each pair of pigs because if the rations had no effect on weight gain (the null hypothesis), ration A pigs would have larger weight gains about half of the time. The next step is to COUNT the number of times that the weight gains of one group (call it the group fed with ration A) were larger than the weight gains of the other (call it the group fed with ration B). The complete program follows:\n\n' Program file: \"pigs2.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 12 1,2 a\n ' Generate randomly 12 \"1\"s and \"2\"s, put them in a. This represents 12\n ' \"pairings\" where \"1\" = ration a \"wins,\" \"2\" = ration b = \"wins.\"\n COUNT a =1 b\n ' Count the number of \"pairings\" where ration a won, put the result in b.\n SCORE b z\n ' Keep track of the result in z\nEND\n' End the trial, go back and repeat until all 100 trials are complete.\nCOUNT z >= 9 j\n' Determine how often we got 9 or more \"wins\" for ration a.\nCOUNT z <= 3 k\n' Determine how often we got 3 or fewer \"wins\" for ration a.\nADD j k m\n' Add the two together\nDIVIDE m 100 mm\n' Convert to a proportion\nPRINT mm\n' Print the result.\n\n' Note: The file \"pigs2\" on the Resampling Stats software disk contains\n' this set of commands.\nNotice how we proceeded in Examples 15-6 and 17-4. The data were originally quantitative — weight gains in pounds for each pig. But for simplicity we classified the data into simpler counted-data formats. The first format (Example 15-6) was a rank order, from highest to lowest. 
The second format (Example 17-4) was simply higher-lower, obtained by randomly pairing the observations (using alphabetical letter, or pig’s stall number, or whatever was the cause of the order in which the data were presented to be random). Classifying the data in either of these ways loses some information and makes the subsequent tests somewhat cruder than more refined analysis could provide (as we shall see in the next chapter), but the loss of efficiency is not crucial in many such cases. We shall see how to deal directly with the quantitative data in Chapter 24.\nExample 17-5: Merged Firms Compared to Two Non-Merged Groups\nIn a study by Simon, Mokhtari, and Simon (1996), a set of 33 advertising agencies that merged over a period of years were each compared to entities within two groups (each also of 33 firms) that did not merge; one non-merging group contained firms of roughly the same size as the final merged entities, and the other non-merging group contained pairs of non-merging firms whose total size was roughly the same as the total size of the merging entities.\nThe idea behind the matching was that each pair of merged firms was compared against\n\na pair of contemporaneous firms that were roughly the same size as the merging firms before the merger, and\na single firm that was roughly the same size as the merged entity after the merger.\nHere (Table 17-4) are the data (provided by the authors):\nTable 17-4\nRevenue Growth In Year 1 Following Merger\nSet # Merged Match1 Match2\n\n\n\n1\n-0.20000\n0.02564\n0.000000\n\n\n2\n-0.34831\n-0.12500\n0.080460\n\n\n3\n0.07514\n0.06322\n-0.023121\n\n\n4\n0.12613\n-0.04199\n0.164671\n\n\n5\n-0.10169\n0.08000\n0.277778\n\n\n6\n0.03784\n0.14907\n0.430168\n\n\n7\n0.11616\n0.15183\n0.142857\n\n\n8\n-0.09836\n0.03774\n0.040000\n\n\n9\n0.02137\n0.07661\n.0111111\n\n\n10\n-0.01711\n0.28434\n0.189139\n\n\n11\n-0.36478\n0.13907\n0.038869\n\n\n12\n0.08814\n0.03874\n0.094792\n\n\n13\n-0.26316\n0.05641\n0.045139\n\n\n14\n-0.04938\n0.05371\n0.008333\n\n\n15\n0.01146\n0.04805\n0.094817\n\n\n16\n0.00975\n0.19816\n0.060929\n\n\n17\n0.07143\n0.42083\n-0.024823\n\n\n18\n0.00183\n0.07432\n0.053191\n\n\n19\n0.00482\n-0.00707\n0.050083\n\n\n20\n-0.05399\n0.17152\n0.109524\n\n\n21\n0.02270\n0.02788\n-0.022456\n\n\n22\n0.05984\n0.04857\n0.167064\n\n\n23\n-0.05987\n0.02643\n0.020676\n\n\n24\n-0.08861\n-0.05927\n0.077067\n\n\n25\n-0.02483\n-0.01839\n0.059633\n\n\n26\n0.07643\n0.01262\n0.034635\n\n\n27\n-0.00170\n-0.04549\n0.053571\n\n\n28\n-0.21975\n0.34309\n0.042789\n\n\n29\n0.38237\n0.22105\n0.115773\n\n\n30\n-0.00676\n0.25494\n0.237047\n\n\n31\n-0.16298\n0.01124\n0.190476\n\n\n32\n0.19182\n0.15048\n0.151994\n\n\n33\n0.06116\n0.17045\n0.093525\n\n\n\nComparisons were made in several years before and after the mergings to see whether the merged entities did better or worse than the non-merging entities they were matched with by the researchers, but for simplicity we may focus on just one of the more important years in which they were compared — say, the revenue growth rates in the year after the merger.\nHere are those average revenue growth rates for the three groups:\nYear’s rev. growth\n\n\n\nMERGED\n-0.0213\n\n\nMATCH 1\n0.092085\n\n\nMATCH 2\n0.095931\n\n\n\nWe could do a general test to determine whether there are differences among the means of the three groups, as was done in the “Differences Among 4 Pig Rations” problem (Section 24.0.1). 
However, we note that there may be considerable variation from one matched set to another — variation which can obscure the overall results if we resample from a large general bucket.\nTherefore, we use the following resampling procedure that maintains the separation between matched sets by converting each observation into a rank (1, 2 or 3) within the matched set.\nHere (Table 17-5) are those ranks:\nTable 17-5\nRanked Within Matched Set (1 = worst, 3 = best)\nSet # Merged Match1 Match2\n\n\n\n1\n1\n3\n2\n\n\n2\n1\n2\n3\n\n\n3\n3\n2\n1\n\n\n4\n2\n1\n3\n\n\n5\n1\n2\n3\n\n\n6\n1\n3\n2\n\n\n7\n1\n3\n2\n\n\n8\n1\n2\n3\n\n\n9\n1\n2\n3\n\n\n10\n1\n2\n3\n\n\n11\n1\n3\n2\n\n\n12\n2\n1\n3\n\n\n13\n1\n3\n2\n\n\n14\n1\n3\n2\n\n\n15\n1\n2\n3\n\n\n16\n1\n3\n2\n\n\n17\n2\n3\n1\n\n\n18\n1\n3\n2\n\n\n\n\n\n\nSet #\nMerged\nMatch1\nMatch2\n\n\n19\n2\n1\n3\n\n\n20\n1\n3\n2\n\n\n21\n2\n2\n3\n\n\n22\n2\n2\n3\n\n\n23\n1\n3\n2\n\n\n24\n1\n2\n3\n\n\n25\n1\n2\n3\n\n\n26\n3\n1\n2\n\n\n27\n2\n1\n3\n\n\n28\n1\n3\n2\n\n\n29\n3\n2\n1\n\n\n30\n1\n3\n2\n\n\n31\n1\n2\n3\n\n\n32\n3\n1\n2\n\n\n33\n1\n3\n2\n\n\n\nThese are the average ranks for the three groups (1 = worst, 3\n= best):\n\n\n\nMERGED\n1.45\n\n\nMATCH 1\n2.18\n\n\nMATCH 2\n2.36\n\n\n\nIs it possible that the merged group received such a low (poor) average ranking just by chance? The null hypothesis is that the ranks within each set were assigned randomly, and that “merged” came out so poorly just by chance. The following procedure simulates random assignment of ranks to the “merged” group:\n\nRandomly select 33 integers between “1” and “3” (inclusive).\nFind the average rank & record.\nRepeat steps 1 and 2, say, 1000 times.\nFind out how often the average rank is as low as 1.45\n\n\nHere’s a RESAMPLING STATS program (“merge.sta”):\n\n' Program file: \"testing_counts_2_06.rss\"\n\nREPEAT 1000\n GENERATE 33 (1 2 3) ranks\n MEAN ranks ranksum\n SCORE ranksum z\nEND\nHISTOGRAM z\nCOUNT z <=1.45 k\nDIVIDE k 1000 kk\nPRINT kk\n\nResult: kk = 0\nInterpretation: 1000 random selections of 33 ranks never produced an average as low as the observed average. Therefore we rule out chance as an explanation for the poor ranking of the merged firms.\nExactly the same technique might be used in experimental medical studies wherein subjects in an experimental group are matched with two different entities that receive placebos or control treatments.\nFor example, there have been several recent three-way tests of treatments for depression: drug therapy versus cognitive therapy versus combined drug and cognitive therapy. If we are interested in the combined drug-therapy treatment in particular, comparing it to standard existing treatments, we can proceed in the same fashion as in the merger problem.\nWe might just as well consider the real data from the merger as hypothetical data for a proposed test in 33 triplets of people that have been matched within triplet by sex, age, and years of education. The three treatments were to be chosen randomly within each triplet.\nAssume that we now switch scales from the merger data, so that #1 = best and #3 = worst, and that the outcomes on a series of tests were ranked from best (#1) to worst (#3) within each triplet. Assume that the combined drug-and-therapy regime has the best average rank. How sure can we be that the observed result would not occur by chance? 
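The simulation procedure is the same as for the merger data above. For readers following the Python edition, a rough NumPy sketch of that rank-resampling logic (the "merge.sta" program) might look like the following; the variable names and the 10,000-trial count are our own choices, and the identical sketch serves for the therapy framing that follows.

import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
observed_mean_rank = 1.45  # observed average rank for the group of interest

mean_ranks = np.zeros(n_trials)
for i in range(n_trials):
    # Randomly assign a rank of 1, 2 or 3 to each of the 33 matched sets.
    ranks = rnd.choice([1, 2, 3], size=33, replace=True)
    mean_ranks[i] = np.mean(ranks)

k = np.sum(mean_ranks <= observed_mean_rank)
print('Proportion of trials with mean rank <= 1.45:', k / n_trials)

As with the RESAMPLING STATS run reported above, an average rank as low as 1.45 should turn up rarely, if at all.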
Here are the data from the merger study, seen here as Table 17-5-b:\nTable 17-5-b\nRanked Therapies Within Matched Patient Triplets\n(hypothetical data identical to merger data) (1 = best, 3 = worst)\nTriplet # Therapy Only Combined Drug Only\n\n\n\n1\n1\n3\n2\n\n\n2\n1\n2\n3\n\n\n3\n3\n2\n1\n\n\n4\n2\n1\n3\n\n\n5\n1\n2\n3\n\n\n6\n1\n3\n2\n\n\n7\n1\n3\n2\n\n\n8\n1\n2\n3\n\n\n9\n1\n2\n3\n\n\n10\n1\n2\n3\n\n\n11\n1\n3\n2\n\n\n12\n2\n1\n3\n\n\n13\n1\n3\n2\n\n\n14\n1\n3\n2\n\n\n15\n1\n2\n3\n\n\n16\n1\n3\n2\n\n\n17\n2\n3\n1\n\n\n18\n1\n3\n2\n\n\n19\n2\n1\n3\n\n\n20\n1\n3\n2\n\n\n21\n2\n1\n3\n\n\n22\n2\n1\n3\n\n\n23\n1\n3\n2\n\n\n24\n1\n2\n3\n\n\n25\n1\n2\n3\n\n\n26\n3\n1\n2\n\n\n27\n2\n1\n3\n\n\n28\n1\n3\n2\n\n\n29\n3\n2\n1\n\n\n30\n1\n3\n2\n\n\n31\n1\n2\n3\n\n\n32\n3\n1\n2\n\n\n33\n1\n3\n2\n\n\n\nThese are the average ranks for the three groups (“1” = best, “3”= worst):\n\n\n\nCombined\n1.45\n\n\nDrug\n2.18\n\n\nTherapy\n2.36\n\n\n\nIn these hypothetical data, the average rank for the drug and therapy regime is 1.45. Is it likely that the regimes do not “really” differ with respect to effectiveness, and that the drug and therapy regime came out with the best rank just by the luck of the draw? We test by asking, “If there is no difference, what is the probability that the treatment of interest will get an average rank this good, just by chance?”\nWe proceed exactly as with the solution for the merger problem (see above).\nIn the above problems, we did not concern ourselves with chance outcomes for the other therapies (or the matched firms) because they were not our primary focus. If, in actual fact, one of them had done exceptionally well or poorly, we would have paid little notice because their performance was not the object of the study. We needed, therefore, only to guard against the possibility that chance good luck for our therapy of interest might have led us to a hasty conclusion.\nSuppose now that we are not interested primarily in the combined drug-therapy treatment, and that we have three treatments being tested, all on equal footing. 
It is no longer sufficient to ask the question “What is the probability that the combined therapy could come out this well just by chance?” We must now ask “What is the probability that any of the therapies could have come out this well by chance?” (Perhaps you can guess that this probability will be higher than the probability that our chosen therapy will do so well by chance.)\nHere is a resampling procedure that will answer this question:\n\nPut the numbers “1”, “2” and “3” (corresponding to ranks) in a bucket\nShuffle the numbers and deal them out to three locations that correspond to treatments (call the locations “t1,” “t2,” and “t3”)\nRepeat step two another 32 times (for a total of 33 repetitions, for 33 matched triplets)\nFind the average rank for each location (treatment.\nRecord the minimum (best) score.\nRepeat steps 2-4, say, 1000 times.\nFind out how often the minimum average rank for any treatment is as low as 1.45\n\n\n' Program file: \"testing_counts_2_07.rss\"\n\nNUMBERS (1 2 3) a\n' Step 1 above\nREPEAT 1000\n ' Step 6\n REPEAT 33\n ' Step 3\n SHUFFLE a a\n ' Step 2\n SCORE a t1 t2 t3\n ' Step 2\n END\n ' Step 3\n MEAN t1 tt1\n ' Step 4\n MEAN t2 tt2\n MEAN t3 tt3\n CLEAR t1\n ' Clear the vectors where we've stored the ranks for this trial (must do\n ' this whenever we have a SCORE statement that's part of a \"nested\" repeat\n ' loop)\n CLEAR t2\n CLEAR t3\n CONCAT tt1 tt2 tt3 b\n ' Part of step 5\n MIN b bb\n ' Part of step 5\n SCORE bb z\n ' Part of step 5\nEND\n' Step 6\nHISTOGRAM z\nCOUNT z <=1.45 k\n' Step 7\nDIVIDE k 1000 kk\nPRINT kk\nInterpretation: 1000 random shufflings of 33 ranks, apportioned to three “treatments,” never produced for the best treatment in the three an average as low as the observed average, therefore we rule out chance as an explanation for the success of the combined therapy.\nAn interesting feature of the mergers (or depression treatment) problem is that it would be hard to find a conventional test that would handle this three-way comparison in an efficient manner. Certainly it would be impossible to find a test that does not require formulae and tables that only a talented professional statistician could manage satisfactorily, and even s/ he is not likely to fully understand those formulaic procedures.\n\nResult: kk = 0" + }, + { + "objectID": "testing_counts_2.html#technical-note", + "href": "testing_counts_2.html#technical-note", + "title": "23  The Statistics of Hypothesis-Testing with Counted Data, Part 2", + "section": "23.3 Technical note", + "text": "23.3 Technical note\nSome of the tests introduced in this chapter are similar to standard nonparametric rank and sign tests. They differ less in the structure of the test statistic than in the way in which significance is assessed (the comparison is to multiple simulations of a model based on the benchmark hypothesis, rather than to critical values calculated analytically).\n\n\n\n\nSimon, Julian Lincoln, Manouchehr Mokhtari, and Daniel H Simon. 1996. “Are Mergers Beneficial or Detrimental? Evidence from Advertising Agencies.” International Journal of the Economics of Business 3 (1): 69–82." 
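For readers following the Python edition, here is a rough NumPy sketch of the three-way shuffling procedure of Example 17-5 (the "testing_counts_2_07" program above). The variable names and the 10,000-trial count are illustrative choices only.

import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
observed_best = 1.45  # best (lowest) observed average rank among the three groups

best_means = np.zeros(n_trials)
for i in range(n_trials):
    # For each of the 33 matched triplets, shuffle the ranks 1, 2, 3 across
    # the three treatments; rows are triplets, columns are treatments.
    ranks = np.array([rnd.permutation([1, 2, 3]) for _ in range(33)])
    # Average rank for each treatment (column), keeping only the best (minimum).
    best_means[i] = np.min(np.mean(ranks, axis=0))

k = np.sum(best_means <= observed_best)
print('Proportion of trials where the best of the three averaged <= 1.45:', k / n_trials)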
+ }, + { + "objectID": "testing_measured.html#differences-among-four-means", + "href": "testing_measured.html#differences-among-four-means", + "title": "24  The Statistics of Hypothesis-Testing With Measured Data", + "section": "24.1 Differences among four means", + "text": "24.1 Differences among four means\nExample 18-6: Differences Among Four Pig Rations (Test for Differences Among Means of More Than Two Samples of Measured Data) (File “PIGS4”)\nIn Examples 15-1 and 15-4 we investigated whether or not the results shown by a single sample are sufficiently different from a null (benchmark) hypothesis so that the sample is unlikely to have come from the null-hypothesis benchmark universe. In Examples 15-7, 17-1, and 18-1 we then investigated whether or not the results shown by two samples suggest that both had come from the same universe, a universe that was assumed to be the composite of the two samples. Now as in Example 17-2 we investigate whether or not several samples come from the same universe, except that now we work with measured data rather than with counted data.\nIf one experiments with each of 100 different pig foods on twelve pigs, some of the foods will show much better results than will others just by chance , just as one family in sixteen is likely to have the very “high” number of 4 daughters in its first four children. Therefore, it is wrong reasoning to try out the 100 pig foods, select the food that shows the best results, and then compare it statistically with the average (sum) of all the other foods (or worse, with the poorest food). With such a procedure and enough samples, you will surely find one (or more) that seems very atypical statistically. A bridge hand with 12 or 13 spades seems very atypical, too, but if you deal enough bridge hands you will sooner or later get one with 12 or 13 spades — as a purely chance phenomenon, dealt randomly from a standard deck. Therefore we need a test that prevents our falling into such traps. Such a test usually operates by taking into account the differences among all the foods that were tried.\nThe method of Example 18-1 can be extended to handle this problem. Assume that four foods were each tested on twelve pigs. The weight gains in pounds for the pigs fed on foods A and B were as before. For foods C and D the weight gains were:\nRation C: 30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33\nRation D: 32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25\nNow construct a benchmark universe of forty-eight index cards, one for each weight gain. Then deal out sets of four hands randomly. More specifically:\nStep 1. Constitute a universe of the forty-eight observed weight gains in the four samples, writing the weight gains on cards.\nStep 2. Draw four groups of twelve weight gains, with replacement, since we are drawing from a hypothesized infinite universe in which consecutive draws are independent. Determine whether the difference between the lowest and highest group means is as large or larger than the observed difference. If so write “yes,” otherwise “no.”\nStep 3. Repeat step 2 fifty times.\nStep 4. Count the trials in which the differences between the simulated groups with the highest and lowest means are as large or larger than the differences between the means of the highest and lowest observed samples. 
The proportion of such trials to the total number of trials is the probability that all four samples would differ as much as do the observed samples if they (in technical terms) come from the same universe.\nThe problem “Pigs4,” as handled by the steps given above, is quite similar to the way we handled Example TKTK, except that the data are measured (in pounds of weight gain) rather than simply counted (the number of rehabilitations).\nInstead of working through a program for the procedure outlined above, let us consider a different approach to the problem — computing the difference between each pair of foods, six differences in all, converting all minus (-) signs to (+) differences. Then we can total the six differences, and compare the total with the sum of the six differences in the observed sample. The proportion of the resampling trials in which the observed sample sum is exceeded by the sum of the differences in the trials is the probability that the observed samples would differ as much as they do if they come from the same universe.5\nOne naturally wonders whether this latter test statistic is better than the range, as discussed above. It would seem obvious that using the information contained in all four samples should increase the precision of the estimate. And indeed it is so, as you can confirm for yourself by comparing the results of the two approaches. But in the long run, the estimate provided by the two approaches would be much the same. That is, there is no reason to think that one or another of the estimates is biased . However, successive samples from the population would steady down faster to the true value using the four-groupbased estimate than they would using the range. That is, the four-group-based estimate would require a smaller sample of pigs.\nIs there reason to prefer one or the other approach from the point of view of some decision that might be made? One might think that the range procedure throws light on which one of the foods is best in a way that the four-group-based approach does not. But this is not correct. Both approaches answer this question, and only this question: Are the results from the four foods likely to have resulted from the same “universe” of weight gains or not? If one wants to know whether the best food is similar to, say, all the other three, the appropriate approach would be a two -sample approach similar to various two -sample examples discussed earlier. (It would be still another question to ask whether the best food is different from the worst. One would then use a procedure different from either of those discussed above.)\nIf the foods cost the same, one would not need even a twosample analysis to decide which food to feed. Feed the one whose results are best in the experiment, without bothering to ask whether it is “really” the best; you can’t go wrong as long as it doesn’t cost more to use it. (One could inquire about the probability that the food yielding the best results in the experiment would attain those results by chance even if it was worse than the others by some stipulated amount, but pursuing that line of thought may be left to the student as an exercise.)\nIn the problem “Pigs4,” we want a measure of how the groups differ. The obvious first step is to add up the total weight gains for each group: 382, 344, 364, 335. The next step is to calculate the differences between all the possible combinations of groups: 382-344=38, 382-364=18, 382-335=47, 344-364= -20, 344-335=9, 364-335=29." 
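As a quick check on this arithmetic, a few lines of Python reproduce the six differences from the group totals quoted above (the dictionary layout and names are just for illustration):

from itertools import combinations

# Total weight gains for the four rations, as given above.
totals = {'A': 382, 'B': 344, 'C': 364, 'D': 335}

# Differences between all possible pairs of groups.
for (name1, t1), (name2, t2) in combinations(totals.items(), 2):
    print(f'{name1}-{name2}: {t1 - t2}')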
+ }, + { + "objectID": "testing_measured.html#using-squared-differences", + "href": "testing_measured.html#using-squared-differences", + "title": "24  The Statistics of Hypothesis-Testing With Measured Data", + "section": "24.2 Using Squared Differences", + "text": "24.2 Using Squared Differences\nHere we face a choice. We could work with the absolute differences — that is, the results of the subtractions — treating each result as a positive number even if it is negative. We have seen this approach before. Therefore let us now take the opportunity of showing another approach. Instead of working with the absolute differences, we square each difference, and then SUM the squares. An advantage of working with the squares is that they are positive — a negative number squared is positive — which is convenient. Additionally, conventional statistics works mainly with squared quantities, and therefore it is worth getting familiar with that point of view. The squared differences in this case add up to 5096.\nUsing RESAMPLING STATS, we shuffle all the weight gains together, select four random groups, and determine whether the squared differences in the resample exceed 5096. If they do so with regularity, then we conclude that the observed differences could easily have occurred by chance.\nWith the CONCAT command, we string the four vectors into a single vector. After SHUFFLEing the 48-pig weight-gain vector G into H, we TAKE four randomized samples. And we compute the squared differences between the pairs of groups and SUM the squared differences just as we did above for the observed groups.\nLast, we examine how often the simulated-trials data produce differences among the groups as large as (or larger than) the actually observed data — 5096.\n\n' Program file: \"pigs4.rss\"\n\nNUMBERS (34 29 26 32 35 38 31 34 30 29 32 31) a\nNUMBERS (26 24 28 29 30 29 32 26 31 29 32 28) b\nNUMBERS (30 30 32 31 29 27 25 30 31 32 34 33) c\nNUMBERS (32 25 31 26 32 27 28 29 29 28 23 25) d\n' (Record the data for the 4 foods)\nCONCAT a b c d g\n' Combine the four vectors into g\nREPEAT 1000\n ' Do 1000 trials\n SHUFFLE g h\n ' Shuffle all the weight gains.\n SAMPLE 12 h p\n ' Take 4 random samples, with replacement.\n SAMPLE 12 h q\n SAMPLE 12 h r\n SAMPLE 12 h s\n SUM p i\n ' Sum the weight gains for the 4 resamples.\n SUM q j\n SUM r k\n SUM s l\n SUBTRACT i j ij\n ' Find the differences between all the possible pairs of resamples.\n SUBTRACT i k ik\n SUBTRACT i l il\n SUBTRACT j k jk\n SUBTRACT j l jl\n SUBTRACT k l kl\n MULTIPLY ij ij ijsq\n ' Find the squared differences.\n MULTIPLY ik ik iksq\n MULTIPLY il il ilsq\n MULTIPLY jk jk jksq\n MULTIPLY jl jl jlsq\n MULTIPLY kl kl klsq\n ADD ijsq iksq ilsq jksq jlsq klsq total\n ' Add them together.\n SCORE total z\n ' Keep track of the total for each trial.\nEND\n' End one trial, go back and repeat until 1000 trials are complete.\nHISTOGRAM z\n' Produce a histogram of the trial results.\nCOUNT z >= 5096 k\n' Find out how many trials produced differences among groups as great as\n' or greater than those observed.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the result.\n\n' Note: The file \"pigs4\" on the Resampling Stats software disk contains\n' this set of commands.\nPIGS4: Differences Among Four Pig Rations\n\nsums of squares\nWe find that our observed sum of squares — 5096 — was exceeded by randomly-drawn sums of squares in only 3 percent of our trials. We conclude that the four treatments are likely not all similar." 
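For the Python edition, a rough NumPy rendering of the "pigs4" procedure follows. The helper function, variable names, and the 10,000-trial count are our own choices, but the data and the with-replacement resampling mirror the program above; with these data the observed sum of squared differences works out to 5096, the figure used in the text.

import numpy as np
from itertools import combinations

rnd = np.random.default_rng()

# Weight gains for the four rations.
a = np.array([34, 29, 26, 32, 35, 38, 31, 34, 30, 29, 32, 31])
b = np.array([26, 24, 28, 29, 30, 29, 32, 26, 31, 29, 32, 28])
c = np.array([30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33])
d = np.array([32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25])

def sum_sq_diffs(groups):
    # Sum of squared differences between the totals of every pair of groups.
    totals = [np.sum(g) for g in groups]
    return sum((t1 - t2) ** 2 for t1, t2 in combinations(totals, 2))

observed = sum_sq_diffs([a, b, c, d])
pooled = np.concatenate([a, b, c, d])

n_trials = 10_000
results = np.zeros(n_trials)
for i in range(n_trials):
    # Draw four resamples of 12, with replacement, from the pooled weight gains.
    resamples = [rnd.choice(pooled, size=12, replace=True) for _ in range(4)]
    results[i] = sum_sq_diffs(resamples)

k = np.sum(results >= observed)
print('Observed sum of squared differences:', observed)
print('Proportion of trials at least as large:', k / n_trials)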
+ }, + { + "objectID": "testing_measured.html#exercises", + "href": "testing_measured.html#exercises", + "title": "24  The Statistics of Hypothesis-Testing With Measured Data", + "section": "24.3 Exercises", + "text": "24.3 Exercises\nSolutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.\nExercise 18-1\nThe data shown in Table 18-3 (Hollander and Wolfe 1999, 39, Table 3.1) might be data for the outcomes of two different mechanics, showing the length of time until the next overhaul is needed for nine pairs of similar vehicles. Or they could be two readings made by different instruments on the same sample of rock. In fact, they represent data for two successive tests for depression on the Hamilton scale, before and after drug therapy.\n\nTable 18-3\nHamilton Depression Scale Values\n\n\n\n\n\n\n\n\nPatient #\nScore Before\nScore After\n\n\n\n\n1 2 3 4 5 6 7 8 9\n1.83 .50 1.62 2.48 1.68 1.88 1.55 3.06 1.3\n.878 .647 .598 2.05 1.06 1.29 1.06 3.14 1.29\n\n\n\nThe task is to perform a test that will help decide whether there is a difference in the depression scores at the two visits (or the performances of the two mechanics). Perform both a bootstrap test and a permutation test, and give some reason for preferring one to the other in principle. How much do they differ in practice?\nExercise 18-2\nThirty-six of 72 (.5) taxis surveyed in Pittsburgh had visible seatbelts. Seventy-seven of 129 taxis in Chicago (.597) had visible seatbelts. Calculate a confidence interval for the difference in proportions, estimated at -.097. (Source: Peskun, Peter H., “A New Confidence Interval Method Based on the Normal Approximation for the Difference of Two Binomial Probabilities,” Journal of the American Statistical Association , 6/93 p. 656).\n\n\n\n\nChung, James H, and Donald AS Fraser. 1958. “Randomization Tests for a Multivariate Two-Sample Problem.” Journal of the American Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf.\n\n\nDwass, Meyer. 1957. “Modified Randomization Tests for Nonparametric Hypotheses.” The Annals of Mathematical Statistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf.\n\n\nEfron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to the Bootstrap.” In Monographs on Statistics and Applied Probability, edited by David R Cox, David V Hinkley, Nancy Reid, Donald B Rubin, and Bernard W Silverman. Vol. 57. New York: Chapman & Hall.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nHollander, Myles, and Douglas A Wolfe. 1999. Nonparametric Statistical Methods. 2nd ed. Wiley Series in Probability and Statistics: Applied Probability and Statistics. New York: John Wiley & Sons, Inc. https://archive.org/details/nonparametricsta0000ed2holl.\n\n\nPitman, Edwin JG. 1937. “Significance Tests Which May Be Applied to Samples from Any Populations.” Supplement to the Journal of the Royal Statistical Society 4 (1): 119–30. https://www.jstor.org/stable/pdf/2984124.pdf.\n\n\nSimon, Julian Lincoln, and David M Simon. 1996. “The Effects of Regulations on State Liquor Prices.” Empirica 23: 303–16." 
+ }, + { + "objectID": "testing_procedures.html#introduction", + "href": "testing_procedures.html#introduction", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.1 Introduction", + "text": "25.1 Introduction\nThe previous chapters have presented procedures for making statistical inferences that apply to both testing hypotheses and constructing confidence intervals: This chapter focuses on specific procedures for testing hypotheses.\n`The general idea in testing hypotheses is to ask: Is there some other universe which might well have produced the observed sample? So we consider alternative hypotheses. This is a straightforward exercise in probability, asking about behavior of one or more universes. The choice of another universe(s) to examine depends upon purposes and other considerations." + }, + { + "objectID": "testing_procedures.html#canonical-question-and-answer-procedure-for-testing-hypotheses", + "href": "testing_procedures.html#canonical-question-and-answer-procedure-for-testing-hypotheses", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.2 Canonical question-and-answer procedure for testing hypotheses", + "text": "25.2 Canonical question-and-answer procedure for testing hypotheses" + }, + { + "objectID": "testing_procedures.html#skeleton-procedure-for-testing-hypotheses", + "href": "testing_procedures.html#skeleton-procedure-for-testing-hypotheses", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.3 Skeleton procedure for testing hypotheses", + "text": "25.3 Skeleton procedure for testing hypotheses\nAkin to skeleton procedure for questions in probability and confidence intervals shown elsewhere\nThe following series of questions will be repeated below in the context of a specific inference.\nWhat is the question? What is the purpose to be served by answering the question?\nIs this a “probability” or a “statistics” question?\nAssuming the Question is a Statistical Inference Question\nWhat is the form of the statistics question?\nHypothesis test, or confidence interval, or other inference? One must first decide whether the conceptual-scientific question is of the form a) a test about the probability that some sample is likely to happen by chance rather than being very surprising (a test of a hypothesis), or b) a question about the accuracy of the estimate of a parameter of the population based upon sample evidence (a confidence interval):\nAssuming the Question Concerns Testing Hypotheses\nWill you state the costs and benefits of various outcomes, perhaps in the form of a “loss function”? If “yes,” what are they?\nHow many samples of data have been observed?\nOne, two, more than two?\nWhat is the description of the observed sample(s)?\nRaw data?\nWhich characteristic(s) (parameters) of the population are of interest to you?\nWhat are the statistics of the sample(s) that refer to this (these) characteristics(s) in which you are interested?\nWhat comparison(s) to make?\nSamples to each other?\nSample to particular universe(s)? If so, which?\nWhat is the benchmark (null) universe?\nThis may include presenting the raw data and/or such summary statistics as the computed mean, median, standard deviation, range, interquartile range, other:\nIf there is to be a Neyman-Pearson-type alternative universe, what is it? 
(In most cases the answer to this technical question is “no.”)\nWhich symbols for the observed entities?\nDiscrete or continuous?\nWhat values or ranges of values?\nWhich sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? (Answer: samples the same size as has been observed)\n[Here one may continue with the conventional method, using perhaps a t or f or chi-square test or whatever: Everything up to now is the same whether continuing with resampling or with standard parametric test.]\nWhat procedure will be used to produce the resampled entities?\nRandomly drawn?\nSimple (single step) or complex (multiple “if” drawings)?\nWhat procedure to produce resample?\nWhich universe will you draw them from? With or without replacement?\nWhat size resamples? Number of resample trials?\nWhat to record as outcome of each resample trial?\nMean, median, or whatever of resample?\nClassifying the outcomes\nWhat is the criterion of significance to be used in evaluating the results of the test?\nStating the distribution of results\nGraph of each statistic recorded — occurrences for each value.\nCount the outcomes that exceed criterion and divide by number of trials." + }, + { + "objectID": "testing_procedures.html#an-example-can-the-bio-engineer-increase-the-female-calf-rate", + "href": "testing_procedures.html#an-example-can-the-bio-engineer-increase-the-female-calf-rate", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.4 An example: can the bio-engineer increase the female calf rate?", + "text": "25.4 An example: can the bio-engineer increase the female calf rate?\nThe question. (from (Hodges Jr and Lehmann 1970, 310): Female calves are more valuable than male calves. A bio-engineer claims to have a method that can produce more females. He tests the procedure on ten of your pregnant cows, and the result is nine females. Should you believe that his method has some effect? That is, what is the probability of a result this surprising occurring by chance?\nThe purpose: Female calves are more valuable than male.\nInference? Yes.\nTest of hypothesis? Yes.\nWill you state the costs and benefits of various outcomes (or a loss function)? We need only say that the benefits of a method that works are very large, and if the results are promising, it is worth gathering more data to confirm results.\nHow many samples of data are part of the significance test? One\nWhat is the size of the first sample about which you wish to make significance statements? Ten.\nWhat comparison(s) to make? Compare sample to benchmark universe.\nWhat is the benchmark universe that embodies the null hypothesis? 50-50 female, or 100/206 female.\nIf there is to be a Neyman-Pearson alternative universe , what is it? None.\nWhich symbols for the observed entities? Balls in bucket, or numbers.\nWhat values or ranges of values? 0-1, (1-100), or 101-206.\nFinite or infinite? Infinite.\nWhich sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? Ten calves compared to universe.\nWhat procedure to produce entities? Sampling with replacement,\nSimple (single step) or complex (multiple “if” drawings)? One can think of it either way.\nWhat to record as outcome of each resample trial? The proportion (or number) of females.\nWhat is the criterion to be used in the test? The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of half females. 
(Or frame in terms of a significance level.)\n“One-tail” or “two-tail” test? One tail, because the farmer is only interested in females: Finding a large proportion of males would not be of interest, and would not cause one to reject the null hypothesis.\nComputation of the probability sought. The actual computation of probability may be done with several formulaic or sample-space methods, and with several resampling methods: I will first show a resampling method and then several conventional methods. The following material, which allows one to compare resampling and conventional methods, is more germane to the earlier explication of resampling taken altogether in earlier chapters than it is to the theory of hypothesis tests discussed in this chapter, but it is more expedient to present it here." + }, + { + "objectID": "testing_procedures.html#computation-of-probabilities-with-resampling", + "href": "testing_procedures.html#computation-of-probabilities-with-resampling", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.5 Computation of Probabilities with Resampling", + "text": "25.5 Computation of Probabilities with Resampling\nWe can do the problem by hand as follows:\n\nConstitute a bucket with either one blue and one pink ball, or 106 blue and 100 pink balls.\nDraw ten balls with replacement, count pinks, and record.\nRepeat step (2) say 400 times.\nCalculate proportion of results with 9 or 10 pinks.\n\nOr, we can take advantage of the speed and efficiency of the computer as follows:\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrnd = np.random.default_rng()\n\nn = 10000\n\nfemales = np.zeros(n)\n\nfor i in range(n):\n samp = rnd.choice(['female', 'male'], size=10, replace=True)\n females[i] = np.sum(samp == 'female')\n\nplt.hist(females, bins='auto')\n\nk = np.sum(females >= 9)\nkk = k / n\nprint('Proportion with >= 9 females:', kk)\n\nProportion with >= 9 females: 0.0127\n\n\n\n\n\n\n\n\n\nThis outcome implies that there is roughly a one percent chance that one would observe 9 or 10 female births in a single sample of 10 calves if the probability of a female on each birth is .5. This outcome should help the decision-maker decide about the plausibility of the bio-engineer’s claim to be able to increase the probability of female calves being born." + }, + { + "objectID": "testing_procedures.html#conventional-methods", + "href": "testing_procedures.html#conventional-methods", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.6 Conventional methods", + "text": "25.6 Conventional methods\n\n25.6.1 The Sample Space and First Principles\nAssume for a moment that our problem is a smaller one and therefore much easier — the probability of getting two females in two calves if the probability of a female is .5. One could then map out what mathematicians call the “sample space,” a technique that (in its simplest form) assigns to each outcome a single point, and find the proportion of points that correspond to a “success.” We list all four possible combinations — FF, FM, MF, MM. Now we look at the ratio of the number of combinations that have 2 females to the total, which is 1/4. We may then interpret this probability.\nWe might also use this method for (say) five female calves in a row. We can make a list of possibilities such as FFFFF, MFFFF, MMFFF, MMMFFF … MFMFM … MMMMM. There will be 2*2*2*2*2 = 32 possibilities, and 64 and 128 possibilities for six and seven calves respectively. 
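To make the sample-space idea concrete, here is a small sketch that lets the computer do the listing; itertools is simply a convenient tool for the enumeration, and the five-calf case mirrors the listing above.

from itertools import product

# Enumerate every possible birth sequence for five calves.
outcomes = list(product('FM', repeat=5))
print('Number of possible sequences for 5 calves:', len(outcomes))  # 32
print('For 6 and 7 calves:', 2 ** 6, 'and', 2 ** 7)  # 64 and 128

# Proportion of the sequences that are all female.
all_female = sum(1 for seq in outcomes if seq.count('F') == 5)
print('Probability of five females in a row:', all_female / len(outcomes))  # 1/32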
But when we get as high as ten calves, this method would become very troublesome.\n\n\n25.6.2 Sample Space Calculations\nFor two females in a row, we could use the well known, and very simple, multiplication rule; we could do so even for ten females in a row. But calculating the probability of nine females in ten is a bit more complex.\n\n\n25.6.3 Pascal’s Triangle\nOne can use Pascal’s Triangle to obtain binomial coefficients for p = .5 and a sample size of 10, focusing on those for 9 or 10 successes. Then calculate the proportion of the total cases with 9 or 10 “successes” in one direction, to find the proportion of cases that pass beyond the criterion of 9 females. The method of Pascal’s Triangle requires more complete understanding of the probabilistic system than does the resampling simulation described above because Pascal’s Triangle requires that one understand the entire structure; simulation requires only that you follow the rules of the model.\n\n\n25.6.4 The Quincunx\nThe quincunx — a device that filters tiny balls through a set of bumper points not unlike a pinball machine, mentioned here simply for completeness — is more a simulation method than theoretical, but it may be considered “conventional.” Hence, it is included here.\n\n\n25.6.5 Table of Binomial Coefficients\nPascal’s Triangle becomes cumbersome or impractical with large numbers — say, 17 females of 20 births — or with probabilities other than .5. One might produce the binomial coefficients by algebraic multiplication, but that, too, becomes tedious even with small sample sizes. One can also use the pre-computed table of binomial coefficients found in any standard text. But the probabilities for n = 10 and 9 or 10 females are too small to be shown.\n\n\n25.6.6 Binomial Formula\nFor larger sample sizes, one can use the binomial formula. The binomial formula gives no deeper understanding of the statistical structure than does the Triangle (but it does yield a deeper understanding of the pure mathematics). With very large numbers, even the binomial formula is cumbersome.\n\n\n25.6.7 The Normal Approximation\nWhen the sample size becomes too large for any of the above methods, one can then use the Normal approximation, which yields results close to the binomial (as seen very nicely in the output of the quincunx). But use of the Normal distribution requires an estimate of the standard deviation, which can be derived either by formula or by resampling. (See a more extended parallel discussion in Chapter 27 on confidence intervals for the Bush-Dukakis comparison.)\nThe desired probability can be obtained from the Z formula and a standard table of the Normal distribution found in every elementary text.\nThe Z table can be made less mysterious if we generate it with simulation, or with graph paper or Archimedes’ method, using as raw material (say) five “continuous” (that is, non-binomial) distributions, many of which are skewed: 1) Draw samples of (say) 50 or 100. 2) Plot the means to see that the Normal shape is the outcome. Then 3) standardize with the standard deviation by marking the standard deviations onto the histograms.\nThe aim of the above exercise and the heart of the conventional parametric method is to compare the sample result — the mean — to a standardized plot of the means of samples drawn from the universe of interest to see how likely it is that that universe produces means deviating as much from the universe mean as does our observed sample mean. 
The steps are:\n\nEstablish the Normal shape — from the exercise above, or from the quincunx or Pascal’s Triangle or the binomial formula or the formula for the Normal approximation or some other device.\nStandardize that shape in standard deviations.\nCompute the Z score for the sample mean — that is, its deviation from the universe mean in standard deviations.\nExamine the Normal (or really, tables computed from graph paper, etc.) to find the probability of a mean deviating that far by chance.\n\nThis is the canon of the procedure for most parametric work in statistics. (For some small samples, accuracy is improved with an adjustment.)" + }, + { + "objectID": "testing_procedures.html#choice-of-the-benchmark-universebruce", + "href": "testing_procedures.html#choice-of-the-benchmark-universebruce", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.7 Choice of the benchmark universe1", + "text": "25.7 Choice of the benchmark universe1\nIn the example of the ten calves, the choice of a benchmark universe — a universe that (on average) produces equal proportions of males and females — seems rather straightforward and even automatic, requiring no difficult judgments. But in other cases the process requires more judgments.\nLet’s consider another case where the choice of a benchmark universe requires no difficult judgments. Assume the U.S. Department of Labor’s Bureau of Labor Statistics (BLS) takes a very large sample — say, 20,000 persons — and finds a 10 percent unemployment rate. At some later time another but smaller sample is drawn — 2,000 persons — showing an 11 percent unemployment rate. Should BLS conclude that unemployment has risen, or is there a large chance that the difference between 10 percent and 11 percent is due to sample variability? In this case, it makes rather obvious sense to ask how often a sample of 2,000 drawn from a universe of 10 percent unemployment (ignoring the variability in the larger sample) will be as different as 11 percent due solely to sample variability? This problem differs from that of the calves only in the proportions and the sizes of the samples.\nLet’s change the facts and assume that a very large sample had not been drawn and only a sample of 2,000 had been taken, indicating 11 percent unemployment. A policy-maker asks the probability that unemployment is above ten percent. It would still seem rather straightforward to ask how often a universe of 10 percent unemployment would produce a sample of 2000 with a proportion of 11 percent unemployed.\nStill another problem where the choice of benchmark hypothesis is relatively straightforward: Say that BLS takes two samples of 2000 persons a month apart, and asks whether there is a difference in the results. Pooling the two samples and examining how often two samples drawn from the pooled universe would be as different as observed seems obvious.\nOne of the reasons that the above cases — especially the two-sample case — seem so clear-cut is that the variance of the benchmark hypothesis is not an issue, being implied by the fact that the samples deal with proportions. If the data were continuous, however, this issue would quickly arise. Consider, for example, that the BLS might take the same sorts of samples and ask unemployed persons the lengths of time they had been unemployed. Comparing a small sample to a very large one would be easy to decide about. 
And even comparing two small samples might be straightforward — simply pooling them as is.\nBut what about if you have a sample of 2,000 with data on lengths of unemployment spells with a mean of 30 days, and you are asked the probability that it comes from a universe with a mean of 25 days? Now there arises the question about the amount of variability to assume for that benchmark universe. Should it be the variability observed in the sample? That is probably an overestimate, because a universe with a smaller mean would probably have a smaller variance, too. So some judgment is required; there cannot be an automatic “objective” process here, whether one proceeds with the conventional or the resampling method.\nThe example of the comparison of liquor retailing systems in Section 24.0.2 provides more material on this subject." + }, + { + "objectID": "testing_procedures.html#why-is-statistics-and-hypothesis-testing-so-difficult", + "href": "testing_procedures.html#why-is-statistics-and-hypothesis-testing-so-difficult", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.8 Why is statistics — and hypothesis testing — so difficult?", + "text": "25.8 Why is statistics — and hypothesis testing — so difficult?\nWhy is statistics such a difficult subject? The aforegoing procedural outline provides a window to the explanation. Hypothesis testing — as is also true of the construction of confidence intervals (but unlike simple probability problems) — involves a very long chain of reasoning, perhaps longer than in any other realm of systematic thinking. Furthermore, many decisions in the process require judgment that goes beyond technical analysis. All this emerges as one proceeds through the skeleton procedure above with any specific example.\n(Bayes’ rule also is very difficult intuitively, but that probably is a result of the twists and turns required in all complex problems in conditional probability. Decision-tree analysis is counter-intuitive, too, probably because it starts at the end instead of the beginning of the story, as we are usually accustomed to doing.)\n\n\n\n\nHodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic Concepts of Probability and Statistics. 2nd ed. San Francisco, California: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9." + }, + { + "objectID": "confidence_1.html#introduction", + "href": "confidence_1.html#introduction", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.1 Introduction", + "text": "26.1 Introduction\nThis chapter discusses how to assess the accuracy of a point estimate of the mean, median, or other statistic of a sample. We want to know: How close is our estimate of (say) the sample mean likely to be to the population mean? The chapter begins with an intuitive discussion of the relationship between a) a statistic derived from sample data, and b) a parameter of a universe from which the sample is drawn. Then we discuss the actual construction of confidence intervals using two different approaches which produce the same numbers though they have different logic. The following chapter shows illustrations of these procedures.\nThe accuracy of an estimate is a hard intellectual nut to crack, so hard that for hundreds of years statisticians and scientists wrestled with the problem with little success; it was not until the last century or two that much progress was made. The kernel of the problem is learning the extent of the variation in the population. 
But whereas the sample mean can be used straightforwardly to estimate the population mean, the extent of variation in the sample does not directly estimate the extent of the variation in the population, because the variation differs at different places in the distribution, and there is no reason to expect it to be symmetrical around the estimate or the mean.\nThe intellectual difficulty of confidence intervals is one reason why they are less prominent in statistics literature and practice than are tests of hypotheses (though statisticians often favor confidence intervals). Another reason is that tests of hypotheses are more fundamental for pure science because they address the question that is at the heart of all knowledge-getting: “Should these groups be considered different or the same ?” The statistical inference represented by confidence limits addresses what seems to be a secondary question in most sciences (though not in astronomy or perhaps physics): “How reliable is the estimate?” Still, confidence intervals are very important in some applied sciences such as geology — estimating the variation in grades of ores, for example — and in some parts of business and industry.\nConfidence intervals and hypothesis tests are not disjoint ideas. Indeed, hypothesis testing of a single sample against a benchmark value is (in all schools of thought, I believe) operationally identical with the most common way (Approach 1 below) of constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different for confidence limits and hypothesis tests.\nThe logic of confidence intervals is on shakier ground, in my judgment, than that of hypothesis testing, though there are many thoughtful and respected statisticians who argue that the logic of confidence intervals is better grounded and leads less often to error.\nConfidence intervals are considered by many to be part of the same topic as estimation , being an estimation of accuracy, in their view. And confidence intervals and hypothesis testing are seen as sub-cases of each other by some people. Whatever the importance of these distinctions among these intellectual tasks in other contexts, they need not concern us here." + }, + { + "objectID": "confidence_1.html#estimating-the-accuracy-of-a-sample-mean", + "href": "confidence_1.html#estimating-the-accuracy-of-a-sample-mean", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.2 Estimating the accuracy of a sample mean", + "text": "26.2 Estimating the accuracy of a sample mean\nIf one draws a sample that is very, very large — large enough so that one need not worry about sample size and dispersion in the case at hand — from a universe whose characteristics one knows , one then can deduce the probability that the sample mean will fall within a given distance of the population mean. Intuitively, it seems as if one should also be able to reverse the process — to infer something about the location of the population mean from the sample mean . But this inverse inference turns out to be a slippery business indeed.\nLet’s put it differently: It is all very well to say — as one logically may — that on average the sample mean (or other point estimator) equals a population parameter in most situations.\nBut what about the result of any particular sample? 
How accurate or inaccurate an estimate of the population mean is the sample likely to produce?\nBecause the logic of confidence intervals is subtle, most statistics texts skim right past the conceptual difficulties, and go directly to computation. Indeed, the topic of confidence intervals has been so controversial that some eminent statisticians refuse to discuss it at all. And when the concept is combined with the conventional algebraic treatment, the composite is truly baffling; the formal mathematics makes impossible any intuitive understanding. For students, “pluginski” is the only viable option for passing exams.\nWith the resampling method, however, the estimation of confidence intervals is easy. The topic then is manageable though subtle and challenging — sometimes pleasurably so. Even beginning undergraduates can enjoy the subtlety and find that it feels good to stretch the brain and get down to fundamentals.\nOne thing is clear: Despite the subtlety of the topic, the accuracy of estimates must be dealt with, one way or another.\nI hope the discussion below resolves much of the confusion of the topic." + }, + { + "objectID": "confidence_1.html#the-logic-of-confidence-intervals", + "href": "confidence_1.html#the-logic-of-confidence-intervals", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.3 The logic of confidence intervals", + "text": "26.3 The logic of confidence intervals\nTo preview the treatment of confidence intervals presented below: We do not learn about the reliability of sample estimates of the mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because this cannot be done in principle . Instead, we investigate the behavior of various universes in the neighborhood of the sample, universes whose characteristics are chosen on the basis of their similarity to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes that are implicitly suggested by the sample evidence but are not logically implied by that evidence.\nThe examples worked in the following chapter help explain why statistics is a difficult subject. The procedure required to transit successfully from the original question to a statistical probability, and then through a sensible interpretation of the probability, involves a great many choices about the appropriate model based on analysis of the problem at hand; a wrong choice at any point dooms the procedure. The actual computation of the probability — whether done with formulaic probability theory or with resampling simulation — is only a very small part of the procedure, and it is the least difficult part if one proceeds with resampling. The difficulties in the statistical process are not mathematical but rather stem from the hard clear thinking needed to understand the nature of the situation and to ascertain the appropriate way to model it.\nAgain, the purpose of a confidence interval is to help us assess the reliability of a statistic of the sample — for example, its mean or median — as an estimator of the parameter of the universe. 
The line of thought runs as follows: It is possible to map the distribution of the means (or other such parameter) of samples of any given size (the size of interest in any investigation usually being the size of the observed sample) and of any given pattern of dispersion (which we will assume for now can be estimated from the sample) that a universe in the neighborhood of the sample will produce. For example, we can compute how large an interval to the right and left of a postulated universe’s mean is required to include 45 percent of the samples on either side of the mean.\nWhat cannot be done is to draw conclusions from sample evidence about the nature of the universe from which it was drawn, in the absence of some information about the set of universes from which it might have been drawn. That is, one can investigate the behavior of one or more specified universes, and discover the absolute and relative probabilities that the given specified universe(s) might produce such a sample. But the universe(s) to be so investigated must be specified in advance (which is consistent with the Bayesian view of statistics). To put it differently, we can employ probability theory to learn the pattern(s) of results produced by samples drawn from a particular specified universe, and then compare that pattern to the observed sample. But we cannot infer the probability that that sample was drawn from any given universe in the absence of knowledge of the other possible sources of the sample. That is a subtle difference, I know, but I hope that the following discussion makes it understandable." + }, + { + "objectID": "confidence_1.html#computing-confidence-intervals", + "href": "confidence_1.html#computing-confidence-intervals", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.4 Computing confidence intervals", + "text": "26.4 Computing confidence intervals\nIn the first part of the discussion we shall leave aside the issue of estimating the extent of the dispersion — a troublesome matter, but one which seldom will result in unsound conclusions even if handled crudely. To start from scratch again: The first — and seemingly straightforward — step is to estimate the mean of the population based on the sample data. The next and more complex step is to ask about the range of values (and their probabilities) that the estimate of the mean might take — that is, the construction of confidence intervals. It seems natural to assume that if our best guess about the population mean is the value of the sample mean, our best guesses about the various values that the population mean might take if unbiased sampling error causes discrepancies between population parameters and sample statistics, should be values clustering around the sample mean in a symmetrical fashion (assuming that asymmetry is not forced by the distribution — as for example, the binomial is close to symmetric near its middle values). But how far away from the sample mean might the population mean be?\nLet’s walk slowly through the logic, going back to basics to enhance intuition. Let’s start with the familiar saying, “The apple doesn’t fall far from the tree.” Imagine that you are in a very hypothetical place where an apple tree is above you, and you are not allowed to look up at the tree, whose trunk has an infinitely thin diameter. You see an apple on the ground. You must now guess where the trunk (center) of the tree is. The obvious guess for the location of the trunk is right above the apple. 
But the trunk is not likely to be exactly above the apple because of the small probability of the trunk being at any particular location, due to sampling dispersion.\nThough you find it easy to make a best guess about where the mean is (the true trunk), with the given information alone you have no way of making an estimate of the probability that the mean is one place or another, other than that the probability is the same that the tree is to the north or south, east or west, of you. You have no idea about how far the center of the tree is from you. You cannot even put a maximum on the distance it is from you, and without a maximum you could not even reasonably assume a rectangular distribution, or a Normal distribution, or any other.\nNext you see two apples. What guesses do you make now? The midpoint between the two obviously is your best guess about the location of the center of the tree. But still there is no way to estimate the probability distribution of the location of the center of the tree.\nNow assume you are given still another piece of information: The outermost spread of the tree’s branches (the range) equals the distance between the two apples you see. With this information, you could immediately locate the boundaries of the location of the center of the tree. But this is only because the answer you sought was given to you in disguised form.\nYou could, however, come up with some statements of relative probabilities. In the absence of prior information on where the tree might be, you would offer higher odds that the center (the trunk) is in any unit of area close to the center of your two apples than in a unit of area far from the center. That is, if you are told that either one apple, or two apples, came from one of two specified trees whose locations are given , with no reason to believe it is one tree or the other (later, we can put other prior probabilities on the two trees), and you are also told the dispersions, you now can put relative probabilities on one tree or the other being the source. (Note to the advanced student: This is like the Neyman-Pearson procedure, and it is easily reconciled with the Bayesian point of view to be explored later. One can also connect this concept of relative probability to the Fisherian concept of maximum likelihood — which is a probability relative to all others). And you could list from high to low the probabilities for each unit of area in the neighborhood of your apple sample. But this procedure is quite different from making any single absolute numerical probability estimate of the location of the mean.\nNow let’s say you see 10 apples on the ground. Of course your best estimate is that the trunk of the tree is at their arithmetic center. But how close to the actual tree trunk (the population mean) is your estimate likely to be? This is the question involved in confidence intervals. We want to estimate a range (around the center, which we estimate with the center mean of the sample, we said) within which we are pretty sure that the trunk lies.\nTo simplify, we consider variation along only one dimension — that is, on (say) a north-south line rather than on two dimensions (the entire surface).\nWe first note that you have no reason to estimate the trunk’s location to be outside the sample pattern, or at its edge, though it could be so in principle.\nIf the pattern of the 10 apples is tight, you imagine the pattern of the likely locations of the population mean to be tight; if not, not. 
That is, it is intuitively clear that there is some connection between how spread out are the sample observations and your confidence about the location of the population mean . For example, consider two patterns of a thousand apples, one with twice the spread of another, where we measure spread by (say) the diameter of the circle that holds the inner half of the apples for each tree, or by the standard deviation. It makes sense that if the two patterns have the same center point (mean), you would put higher odds on the tree with the smaller spread being within some given distance — say, a foot — of the estimated mean. But what odds would you give on that bet?" + }, + { + "objectID": "confidence_1.html#procedure-for-estimating-confidence-intervals", + "href": "confidence_1.html#procedure-for-estimating-confidence-intervals", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.5 Procedure for estimating confidence intervals", + "text": "26.5 Procedure for estimating confidence intervals\nHere is a canonical list of questions that help organize one’s thinking when constructing confidence intervals. The list is comparable to the lists for questions in probability and for hypothesis testing provided in earlier chapters. This set of questions will be applied operationally in Chapter 27.\nWhat Is The Question?\nWhat is the purpose to be served by answering the question? Is this a “probability” or a “statistics” question?\nIf the Question Is a Statistical Inference Question:\nWhat is the form of the statistics question?\nHypothesis test or confidence limits or other inference?\nAssuming Question Is About Confidence Limits:\nWhat is the description of the sample that has been observed?\nRaw data?\nStatistics of the sample?\nWhich universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess of the properties of the universe whose parameter you wish to make statements about? Finite or infinite? Bayesian possibilities?\nWhich parameter do you wish to make statements about?\nMean, median, standard deviation, range, interquartile range, other?\nWhich symbols for the observed entities?\nDiscrete or continuous?\nWhat values or ranges of values?\nIf the universe is as guessed at, for which samples do you wish to estimate the variation? (Answer: samples the same size as has been observed)\nHere one may continue with the conventional method, using perhaps a t or F or chi-square test or whatever. Everything up to now is the same whether continuing with resampling or with standard parametric test.\nWhat procedure to produce the original entities in the sample?\nWhat universe will you draw them from? Random selection?\nWhat size resample?\nSimple (single step) or complex (multiple “if” drawings)?\nWhat procedure to produce resamples?\nWith or without replacement? Number of drawings?\nWhat to record as result of resample drawing?\nMean, median, or whatever of resample\nStating the Distribution of Results\nHistogram, frequency distribution, other?\nChoice Of Confidence Bounds\nOne or two-tailed?\n90%, 95%, etc.?\nComputation of Probabilities Within Chosen Bounds" + }, + { + "objectID": "confidence_1.html#summary", + "href": "confidence_1.html#summary", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.6 Summary", + "text": "26.6 Summary\nThis chapter discussed the theoretical basis for assessing the accuracy of population averages from sample data. 
The following chapter shows two very different approaches to confidence intervals, and provides examples of the computations." + }, + { + "objectID": "confidence_2.html#approach-1-the-distance-between-sample-and-population-mean", + "href": "confidence_2.html#approach-1-the-distance-between-sample-and-population-mean", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.1 Approach 1: The distance between sample and population mean", + "text": "27.1 Approach 1: The distance between sample and population mean\nIf the study of probability can tell us the probability that a given population will produce a sample with a mean at a given distance x from the population mean, and if a sample is an unbiased estimator of the population, then it seems natural to turn the matter around and interpret the same sort of data as telling us the probability that the estimate of the population mean is that far from the “actual” population mean. A fly in the ointment is our lack of knowledge of the dispersion, but we can safely put that aside for now. (See below, however.)\nThis first approach begins by assuming that the universe that actually produced the sample has the same amount of dispersion (but not necessarily the same mean) that one would estimate from the sample. One then produces (either with resampling or with Normal distribution theory) the distribution of sample means that would occur with repeated sampling from that designated universe with samples the size of the observed sample. One can then compute the distance between the (assumed) population mean and (say) the inner 45 percent of sample means on each side of the actually observed sample mean.\nThe crucial step is to shift vantage points. We look from the sample to the universe, instead of from a hypothesized universe to simulated samples (as we have done so far). This same interval as computed above must be the relevant distance as when one looks from the sample to the universe. Putting this algebraically, we can state (on the basis of either simulation or formal calculation) that for any given population S, and for any given distance \\(d\\) from its mean \\(\\mu\\), that \\(P((\\mu - \\bar{x}) < d) = \\alpha\\), where \\(\\bar{x}\\) is a randomly generated sample mean and \\(\\alpha\\) is the probability resulting from the simulation or calculation.\nThe above equation focuses on the deviation of various sample means (\\(\\bar{x}\\)) from a stated population mean (\\(\\mu\\)). But we are logically entitled to read the algebra in another fashion, focusing on the deviation of \\(\\mu\\) from a randomly generated sample mean. This implies that for any given randomly generated sample mean we observe, the same probability (\\(\\alpha\\)) describes the probability that \\(\\mu\\) will be at a distance \\(d\\) or less from the observed \\(\\bar{x}\\). (I believe that this is the logic underlying the conventional view of confidence intervals, but I have yet to find a clear-cut statement of it; in any case, it appears to be logically correct.)\nTo repeat this difficult idea in slightly different words: If one draws a sample (large enough to not worry about sample size and dispersion), one can say in advance that there is a probability \\(p\\) that the sample mean (\\(\\bar{x}\\)) will fall within \\(z\\) standard deviations of the population mean (\\(\\mu\\)). One estimates the population dispersion from the sample. 
If there is a probability \\(p\\) that \\(\\bar{x}\\) is within \\(z\\) standard deviations of \\(\\mu\\), then with probability \\(p\\), \\(\\mu\\) must be within that same \\(z\\) standard deviations of \\(\\bar{x}\\). To repeat, this is, I believe, the heart of the standard concept of the confidence interval, to the extent that there is thought through consensus on the matter.\nSo we can state for such populations the probability that the distance between the population and sample means will be \\(d\\) or less. Or with respect to a given distance, we can say that the probability that the population and sample means will be that close together is \\(p\\).\nThat is, we start by focusing on how much the sample mean diverges from the known population mean. But then — and to repeat once more this key conceptual step — we refocus our attention to begin with the sample mean and then discuss the probability that the population mean will be within a given distance. The resulting distance is what we call the “confidence interval.”\nPlease notice that the distribution (universe) assumed at the beginning of this approach did not include the assumption that the distribution is centered on the sample mean or anywhere else. It is true that the sample mean is used for purposes of reporting the location of the estimated universe mean . But despite how the subject is treated in the conventional approach, the estimated population mean is not part of the work of constructing confidence intervals. Rather, the calculations apply in the same way to all universes in the neighborhood of the sample (which are assumed, for the purpose of the work, to have the same dispersion). And indeed, it must be so, because the probability that the universe from which the sample was drawn is centered exactly at the sample mean is very small.\nThis independence of the confidence-intervals construction from the mean of the sample (and the mean of the estimated universe) is surprising at first, but after a bit of thought it makes sense.\nIn this first approach, as noted more generally above, we do not make estimates of the confidence intervals on the basis of any logical inference from any one particular sample to any one particular universe, because this cannot be done in principle ; it is the futile search for this connection that for decades roiled the brains of so many statisticians and now continues to trouble the minds of so many students. Instead, we investigate the behavior of (in this first approach) the universe that has a higher probability of producing the observed sample than does any other universe (in the absence of any additional evidence to the contrary), and whose characteristics are chosen on the basis of its resemblance to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes, the universe(s) being implicitly suggested by the sample evidence but not logically implied by that evidence. And there are no grounds for dispute about exactly what is being done — only about how to interpret the results.\nOne difficulty with the above approach is that the estimate of the population dispersion does not rest on sound foundations; this matter will be discussed later, but it is not likely to lead to a seriously misleading conclusion.\nA second difficulty with this approach is in interpreting the result. 
What is the justification for focusing our attention on a universe centered on the sample mean? While this particular universe may be more likely than any other, it undoubtedly has a low probability. And indeed, the statement of the confidence intervals refers to the probabilities that the sample has come from universes other than the universe centered at the sample mean, and quite a distance from it.\nMy answer to this question does not rest on a set of meaningful mathematical axioms, and I assert that a meaningful axiomatic answer is impossible in principle. Rather, I reason that we should consider the behavior of this universe because other universes near it will produce much the same results, differing only in dispersion from this one, and this difference is not likely to be crucial; this last assumption is all-important, of course. True, we do not know what the dispersion might be for the “true” universe. But elsewhere (Simon, forthcoming) I argue that the concept of the “true universe” is not helpful — or maybe even worse than nothing — and should be forsworn. And we can postulate a dispersion for any other universe we choose to investigate. That is, for this postulation we unabashedly bring in any other knowledge we may have. The defense for such an almost-arbitrary move would be that this is a second-order matter relative to the location of the estimated universe mean, and therefore it is not likely to lead to serious error. (This sort of approximative guessing sticks in the throats of many trained mathematicians, of course, who want to feel an unbroken logic leading backwards into the mists of axiom formation. But the axioms themselves inevitably are chosen arbitrarily just as there is arbitrariness in the practice at hand, though the choice process for axioms is less obvious and more hallowed by having been done by the masterminds of the past. (See Simon (1998), on the necessity for judgment.) The absence of a sequence of equations leading from some first principles to the procedure described in the paragraph above is evidence of what is felt to be missing by those who crave logical justification. The key equation in this approach is formally unassailable, but it seems to come from nowhere.)\nIn the examples in the following chapter may be found computations for two population distributions — one binomial and one quantitative — of the histograms of the sample means produced with this procedure.\nOperationally, we use the observed sample mean, together with an estimate of the dispersion from the sample, to estimate a mean and dispersion for the population. Then with reference to the sample mean we state a combination of a distance (on each side) and a probability pertaining to the population mean. The computational examples will illustrate this procedure.\nOnce we have obtained a numerical answer, we must decide how to interpret it. There is a natural and almost irresistible tendency to talk about the probability that the mean of the universe lies within the intervals, but this has proven confusing and controversial. Interpretation in terms of a repeated process is not very satisfying intuitively.1\nIn my view, it is not worth arguing about any “true” interpretation of these computations. 
One could sensibly interpret the computations in terms of the odds a decision maker, given the evidence, would reasonably offer about the relative probabilities that the sample came from one of two specified universes (one of them probably being centered on the sample); this does provide some information on reliability, but this procedure departs from the concept of confidence intervals.\n\n27.1.1 Example: Counted Data: The Accuracy of Political Polls\nConsider the reliability of a randomly selected 1988 presidential election poll, showing 840 intended votes for Bush and 660 intended votes for Dukakis out of 1500 (Wonnacott and Wonnacott 1990, 5). Let us work through the logic of this example.\n\n\nWhat is the question? Stated technically, what are the 95% confidence limits for the proportion of Bush supporters in the population? (The proportion is the mean of a binomial population or sample, of course.) More broadly, within which bounds could one confidently believe that the population proportion was likely to lie? At this stage of the work, we must already have translated the conceptual question (in this case, a decision-making question from the point of view of the candidates) into a statistical question. (See Chapter 20 on translating questions into statistical form.)\nWhat is the purpose to be served by answering this question? There is no sharp and clear answer in this case. The goal could be to satisfy public curiosity, or strategy planning for a candidate (though a national proportion is not as helpful for planning strategy as state data would be). A secondary goal might be to help guide decisions about the sample size of subsequent polls.\nIs this a “probability” or a “probability-statistics” question? The latter; we wish to infer from sample to population rather than the converse.\nGiven that this is a statistics question: What is the form of the statistics question — confidence limits or hypothesis testing? Confidence limits.\nGiven that the question is about confidence limits: What is the description of the sample that has been observed? a) The raw sample data — the observed numbers of interviewees are 840 for Bush and 660 for Dukakis — constitutes the best description of the universe. The statistics of the sample are the given proportions — 56 percent for Bush, 44 percent for Dukakis.\nWhich universe? (Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe about whose parameter you wish to make statements? The best guess is that the population proportion is the sample proportion — that is, the population contains 56 percent Bush votes, 44 percent Dukakis votes.\nPossibilities for Bayesian analysis? Not in this case, unless you believe that the sample was biased somehow.\nWhich parameter(s) do you wish to make statements about? Mean, median, standard deviation, range, interquartile range, other? We wish to estimate the proportion in favor of Bush (or Dukakis).\nWhich symbols for the observed entities? Perhaps 56 green and 44 yellow balls, if a bucket is used, or “0” and “1” if the computer is used.\nDiscrete or continuous distribution? In principle, discrete. (All distributions must be discrete in practice.)\nWhat values or ranges of values?* “0” or “1.”\nFinite or infinite? Infinite — the sample is small relative to the population.\nIf the universe is what you guess it to be, for which samples do you wish to estimate the variation? 
A sample the same size as the observed poll.\n\nHere one may continue either with resampling or with the conventional method. Everything done up to now would be the same whether continuing with resampling or with a standard parametric test." + }, + { + "objectID": "confidence_2.html#conventional-calculational-methods", + "href": "confidence_2.html#conventional-calculational-methods", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.2 Conventional Calculational Methods", + "text": "27.2 Conventional Calculational Methods\nEstimating the Distribution of Differences Between Sample and Population Means With the Normal Distribution.\nIn the conventional approach, one could in principle work from first principles with lists and sample space, but that would surely be too cumbersome. One could work with binomial proportions, but this problem has too large a sample for tree-drawing and quincunx techniques; even the ordinary textbook table of binomial coefficients is too small for this job. Calculating binomial coefficients also is a big job. So instead one would use the Normal approximation to the binomial formula.\n(Note to the beginner: The distribution of means that we manipulate has the Normal shape because of the operation of the Law of Large Numbers (The Central Limit theorem). Sums and averages, when the sample is reasonably large, take on this shape even if the underlying distribution is not Normal. This is a truly astonishing property of randomly drawn samples — the distribution of their means quickly comes to resemble a “Normal” distribution, no matter the shape of the underlying distribution. We then standardize it with the standard deviation or other devices so that we can state the probability distribution of the sampling error of the mean for any sample of reasonable size.)\nThe exercise of creating the Normal shape empirically is simply a generalization of particular cases such as we will later create here for the poll by resampling simulation. One can also go one step further and use the formula of de Moivre-Laplace-Gauss to describe the empirical distributions, and to serve instead of the empirical distributions. Looking ahead now, the difference between resampling and the conventional approach can be said to be that in the conventional approach we simply plot the Gaussian distribution very carefully, and use a formula instead of the empirical histograms, afterwards putting the results in a standardized table so that we can read them quickly without having to recreate the curve each time we use it. More about the nature of the Normal distribution may be found in Simon (forthcoming).\nAll the work done above uses the information specified previously — the sample size of 1500, the drawing with replacement, the observed proportion as the criterion." 
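To make the conventional route concrete, here is a minimal sketch (not part of the original text) of the Normal-approximation calculation for the Bush-Dukakis poll. The only ingredient beyond the observed counts above is the conventional 1.96 multiplier that encloses the central 95 percent of a Normal curve.

import numpy as np

n_voters = 1500
p_hat = 840 / n_voters  # observed proportion for Bush, 0.56

# Standard error of a proportion, using the Normal approximation to the binomial.
std_error = np.sqrt(p_hat * (1 - p_hat) / n_voters)

# 1.96 standard errors enclose the central 95 percent of a Normal distribution.
lower = p_hat - 1.96 * std_error
upper = p_hat + 1.96 * std_error
print('Normal-approximation 95 percent interval:', lower, upper)

This comes out to roughly 0.535 to 0.585, about 2.5 percentage points on either side of the observed 56 percent, which is the benchmark that the resampling procedure in the next section should roughly reproduce.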
+ }, + { + "objectID": "confidence_2.html#confidence-intervals-empirically-with-resampling", + "href": "confidence_2.html#confidence-intervals-empirically-with-resampling", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.3 Confidence Intervals Empirically — With Resampling", + "text": "27.3 Confidence Intervals Empirically — With Resampling\nEstimating the Distribution of Differences Between Sample and Population Means By Resampling\n\nWhat procedure to produce entities?: Random selection from bucket or computer.\nSimple (single step) or complex (multiple “if” drawings)?: Simple.\nWhat procedure to produce resamples? That is, with or without replacement? With replacement.\nNumber of drawings? The number of observations in the actual sample, and hence the number of drawings in each resample: 1500.\nWhat to record as result of each resample drawing? Mean, median, or whatever of resample? The proportion is what we seek.\nStating the distribution of results: The distribution of proportions for the trial samples.\nChoice of confidence bounds? 95%, two tails (choice made by the textbook that posed the problem).\nComputation of probabilities within chosen bounds: Read the probabilistic result from the histogram of results.\nComputation of upper and lower confidence bounds: Locate the values corresponding to the 2.5th and 97.5th percentile of the resampled proportions.\n\nBecause the theory of confidence intervals is so abstract (even with the resampling method of computation), let us now walk through this resampling demonstration slowly, using the conventional Approach 1 described previously. We first produce a sample, and then see how the process works in reverse to estimate the reliability of the sample, using the Bush-Dukakis poll as an example. The computer program follows below.\n\nStep 1: Draw a sample of 1500 voters from a universe that, based on the observed sample, is 56 percent for Bush, 44 percent for Dukakis. The first such sample produced by the computer happens to be 53 percent for Bush; it might have been 58 percent, or 55 percent, or very rarely, 49 percent for Bush.\nStep 2: Repeat step 1 perhaps 400 or 1000 times.\nStep 3: Estimate the distribution of means (proportions) of samples of size 1500 drawn from this 56-44 percent Bush-Dukakis universe; the resampling result is shown below.\nStep 4: In a fashion similar to what was done in steps 1-3, now compute the 95 percent confidence intervals for some other postulated universe mean — say 53% for Bush, 47% for Dukakis. This step produces a confidence interval that is not centered on the sample mean and the estimated universe mean, and hence it shows the independence of the procedure from that magnitude. And we now compare the breadth of the estimated confidence interval generated with the 53-47 percent universe against the confidence interval derived from the corresponding distribution of sample means generated by the “true” Bush-Dukakis population of 56 percent — 44 percent. If the procedure works well, the results of the two procedures should be similar.\n\nNow we interpret the results using this first approach. The histogram shows the probability that the difference between the sample mean and the population mean — the error in the sample result — will be at most about 2.5 percentage points too low. It follows that about 47.5 percent (half of 95 percent) of the time, a sample like this one will be between the population mean and 2.5 percent too low. 
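Before continuing with the interpretation, here is a rough sketch of Steps 1 to 3 above, written in the style of the other programs in this chapter; it is not the original program referred to above, and the 10,000 trials are an arbitrary choice.

import numpy as np
import matplotlib.pyplot as plt

rnd = np.random.default_rng()

# The postulated universe: 56 percent Bush ("1") and 44 percent Dukakis ("0").
universe = np.repeat([1, 0], repeats=[56, 44])

n = 10000
proportions = np.zeros(n)

for i in range(n):
    # Step 1: draw one poll of 1500 voters from the postulated universe.
    sample = rnd.choice(universe, size=1500, replace=True)
    proportions[i] = np.sum(sample == 1) / 1500

# Step 3: the distribution of sample proportions from this universe.
plt.hist(proportions, bins='auto')

pp = np.percentile(proportions, (2.5, 97.5))
print('Central 95 percent of sample proportions:', pp)

The 2.5th and 97.5th percentiles of these proportions typically fall about 2.5 percentage points on either side of 0.56.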
We do not know the actual population mean. But for any observed sample like this one, we can say that there is a 47.5 percent chance that the distance between it and the mean of the population that generated it is between zero and minus 2.5 percentage points.\nNow a crucial step: We turn around the statement just above, and say that there is a 47.5 percent chance that the population mean is no more than 2.5 percentage points higher than the mean of a sample drawn like this one, but at or above the sample mean. (And we do the same for the other side of the sample mean.) So to recapitulate: We observe a sample and its mean. We estimate the error by experimenting with one or more universes in that neighborhood, and we then give the probability that the population mean is within that margin of error from the sample mean.\n\n27.3.1 Example: Measured Data Example — the Bootstrap\nA feed merchant decides to experiment with a new pig ration — ration A — on twelve pigs. To obtain a random sample, he provides twelve customers (selected at random) with sufficient food for one pig. After 4 weeks, the 12 pigs experience an average gain of 508 ounces. The weight gains of the individual pigs are as follows: 496, 544, 464, 416, 512, 560, 608, 544, 480, 466, 512, 496.\nThe merchant sees that the ration produces results that are quite variable (from a low of 416 ounces to a high of 608 ounces) and is therefore reluctant to advertise an average weight gain of 508 ounces. He speculates that a different sample of pigs might well produce a different average weight gain.\nUnfortunately, it is impractical to sample additional pigs to gain additional information about the universe of weight gains. The merchant must rely on the data already gathered. How can these data be used to tell us more about the sampling variability of the average weight gain?\nRecalling that all we know about the universe of weight gains is the sample we have observed, we can replicate that sample millions of times, creating a “pseudo-universe” that embodies all our knowledge about the real universe. We can then draw additional samples from this pseudo-universe and see how they behave.\nMore specifically, we replicate each observed weight gain millions of times — we can imagine writing each result that many times on separate pieces of paper — then shuffle those weight gains and pick out a sample of 12. Average the weight gain for that sample, and record the result. Take repeated samples, and record the result for each. We can then make a histogram of the results; it might look something like this:\n\nThough we do not know the true average weight gain, we can use this histogram to estimate the bounds within which it falls. The merchant can consider various weight gains for advertising purposes, and estimate the probability that the true weight gain falls below that value. For example, he might wish to advertise a weight gain of 500 ounces. Examining the histogram, we see that about 36% of our samples yielded weight gains less than 500 ounces. The merchant might wish to choose a lower weight gain to advertise, to reduce the risk of overstating the effectiveness of the ration.\nThis illustrates the “bootstrap” method. By re-using our original sample many times (and using nothing else), we are able to make inferences about the population from which the sample came. This problem would conventionally be addressed with the “t-test.”\n\n\n27.3.2 Example: Measured Data Example: Estimating Tree Diameters\n\nWhat is the question? 
A horticulturist is experimenting with a new type of tree. She plants 20 of them on a plot of land, and measures their trunk diameter after two years. She wants to establish a 90% confidence interval for the population average trunk diameter. For the data given below, calculate the mean of the sample and calculate (or describe a simulation procedure for calculating) a 90% confidence interval around the mean. Here are the 20 diameters, in centimeters and in no particular order (Table 27.1):\n\n\nTable 27.1: Tree Diameters, in Centimeters\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n8.5\n7.6\n9.3\n5.5\n11.4\n6.9\n6.5\n12.9\n8.7\n4.8\n\n\n4.2\n8.1\n6.5\n5.8\n6.7\n2.4\n11.1\n7.1\n8.8\n7.2\n\n\n\n\nWhat is the purpose to be served by answering the question? Either research & development, or pure science.\nIs this a “probability” or a “statistics” question? Statistics.\nWhat is the form of the statistics question? Confidence limits.\nWhat is the description of the sample that has been observed? The raw data as shown above.\nStatistics of the sample ? Mean of the tree data.\nWhich universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe whose parameter you wish to make statements about? Answer: The universe is like the sample above but much, much bigger. That is, in the absence of other information, we imagine this “bootstrap” universe as a collection of (say) one million trees of 8.5 centimeters width, one million of 7.2 centimeters, and so on. We’ll see in a moment that the device of sampling with replacement makes it unnecessary for us to work with such a large universe; by replacing each element after we draw it in a resample, we achieve the same effect as creating an almost-infinite universe from which to draw the resamples. (Are there possibilities for Bayesian analysis?) No Bayesian prior information will be included.\nWhich parameter do you wish to make statements about? The mean.\nWhich symbols for the observed entities? Cards or computer entries with numbers 8.5…7.2, sample of an infinite size.\nIf the universe is as guessed at, for which samples do you wish to estimate the variation? Samples of size 20.\n\nHere one may continue with the conventional method. Everything up to now is the same whether continuing with resampling or with a standard parametric test. The information listed above is the basis for a conventional test.\nContinuing with resampling:\n\nWhat procedure will be used to produce the trial entities? Random selection: simple (single step), not complex (multiple “if”) sample drawings).\nWhat procedure to produce resamples? With replacement. As noted above, sampling with replacement allows us to forego creating a very large bootstrap universe; replacing the elements after we draw them achieves the same effect as would an infinite universe.\nNumber of drawings? 20 trees\nWhat to record as result of resample drawing? The mean.\nHow to state the distribution of results? See histogram.\nChoice of confidence bounds? 90%, two-tailed.\nComputation of values of the resample statistic corresponding to chosen confidence bounds? Read from histogram.\n\nAs has been discussed in Chapter 19, it often is more appropriate to work with the median than with the mean. One reason is that the median is not so sensitive to the extreme observations as is the mean. 
Another reason is that one need not assume a Normal distribution for the universe under study: this consideration affects conventional statistics but usually does not affect resampling, but it is worth keeping mind when a statistician is making a choice between a parametric (that is, Normal-based) and a non-parametric procedure.\n\n\n27.3.3 Example: Determining a Confidence Interval for the Median Aluminum Content in Theban Jars\nData for the percentages of aluminum content in a sample of 18 ancient Theban jars (Catling and Jones 1977) are as follows, arranged in ascending order: 11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5, 15.0, 15.1, 15.8, 16.0, 16.3, 16.5, 16.9, 17.0, 17.2, 17.5, 19.0. Consider now putting a confidence interval around the median of 15.45 (halfway between the middle observations 15.1 and 15.8).\nOne may simply estimate a confidence interval around the median with a bootstrap procedure by substituting the median for the mean in the usual bootstrap procedure for estimating a confidence limit around the mean, as follows:\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrnd = np.random.default_rng()\n\ndata = np.array(\n [11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5, 15.0, 15.1, 15.8, 16.0, 16.3,\n 16.5, 16.9, 17.0, 17.2, 17.5, 19.0]\n)\nobserved_median = np.median(data)\n\nn = 10000\nmedians = np.zeros(n)\n\nfor i in range(n):\n sample = rnd.choice(data, size=18, replace=True)\n # In the line above, replace=True is the default, so we could leave it out to\n # get the same result. We added it just to emphasize that bootstrap samples\n # are samples _with_ replacement.\n medians[i] = np.median(sample)\n\nplt.hist(medians, bins='auto')\n\nprint('Observed median aluminum content:', observed_median)\n\nObserved median aluminum content: 15.45\n\npp = np.percentile(medians, (2.5, 97.5))\nprint('Estimate of 95 percent confidence interval:', pp)\n\nEstimate of 95 percent confidence interval: [14.15 16.7 ]\n\n\n\n\n\n\n\n\n\n(This problem would be approached conventionally with a binomial procedure leading to quite wide confidence intervals (Deshpande, Gore, and Shanubhogue 1995, 32)).\n\n\n\n27.3.4 Example: Confidence Interval for the Median Price Elasticity of Demand for Cigarettes\nThe data for a measure of responsiveness of demand to a price change (the “elasticity” — percent change in demand divided by percent change in price) are shown for cigarette price changes as follows (Table 27.2). I (JLS) computed the data from cigarette sales data preceding and following a tax change in a state (Lyon and Simon 1968).\n\n\nTable 27.2: Price elasticity of demand in various states at various dates\n\n\n\n\n\n\n\n\n\n\n\n\n1.725\n1.139\n.957\n.863\n.802\n.517\n.407\n.304\n\n\n.204\n.125\n.122\n.106\n.031\n-.032\n-.1\n-.142\n\n\n-.174\n-.234\n-.240\n-.251\n-.277\n-.301\n-.302\n-.302\n\n\n-.307\n-.328\n-.329\n-.346\n-.357\n-.376\n-.377\n-.383\n\n\n-.385\n-.393\n-.444\n-.482\n-.511\n-.538\n-.541\n-.549\n\n\n-.554\n-.600\n-.613\n-.644\n-.692\n-.713\n-.724\n-.734\n\n\n-.749\n-.752\n-.753\n-.766\n-.805\n-.866\n-.926\n-.971\n\n\n-.972\n-.975\n-1.018\n-1.024\n-1.066\n-1.118\n-1.145\n-1.146\n\n\n-1.157\n-1.282\n-1.339\n-1.420\n-1.443\n-1.478\n-2.041\n-2.092\n\n\n-7.100\n\n\n\n\n\n\n\n\n\n\n\nThe positive observations (implying an increase in demand when the price rises) run against all theory, but can be considered to be the result simply of measurement errors, and treated as they stand. Aside from this minor complication, the reader may work this example similarly to the case of the Theban jars. 
Consider this program:\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrnd = np.random.default_rng()\n\ndata = np.array([\n 1.725, 1.139, 0.957, 0.863, 0.802, 0.517, 0.407, 0.304,\n 0.204, 0.125, 0.122, 0.106, 0.031, -0.032, -0.1, -0.142,\n -0.174, -0.234, -0.240, -0.251, -0.277, -0.301, -0.302, -0.302,\n -0.307, -0.328, -0.329, -0.346, -0.357, -0.376, -0.377, -0.383,\n -0.385, -0.393, -0.444, -0.482, -0.511, -0.538, -0.541, -0.549,\n -0.554, -0.600, -0.613, -0.644, -0.692, -0.713, -0.724, -0.734,\n -0.749, -0.752, -0.753, -0.766, -0.805, -0.866, -0.926, -0.971,\n -0.972, -0.975, -1.018, -1.024, -1.066, -1.118, -1.145, -1.146,\n -1.157, -1.282, -1.339, -1.420, -1.443, -1.478, -2.041, -2.092,\n -7.100\n])\ndata_median = np.median(data)\n\nn = 10000\n\nmedians = np.zeros(n)\n\nfor i in range(n):\n sample = np.random.choice(data, size=73, replace=True)\n medians[i] = np.median(sample)\n\nplt.hist(medians, bins='auto')\n\nprint('Observed median elasticity', data_median)\n\nObserved median elasticity -0.511\n\npp = np.percentile(medians, (2.5, 97.5))\nprint('Estimate of 95 percent confidence interval', pp)\n\nEstimate of 95 percent confidence interval [-0.692 -0.357]" + }, + { + "objectID": "confidence_2.html#measured-data-example-confidence-intervals-for-a-difference-between-two-means", + "href": "confidence_2.html#measured-data-example-confidence-intervals-for-a-difference-between-two-means", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.4 Measured Data Example: Confidence Intervals For a Difference Between Two Means", + "text": "27.4 Measured Data Example: Confidence Intervals For a Difference Between Two Means\nThis is another example from the mice data.\nReturning to the data on the survival times of the two groups of mice in Section 24.0.3. It is the view of this book that confidence intervals should be calculated for a difference between two groups only if one is reasonably satisfied that the difference is not due to chance. Some statisticians might choose to compute a confidence interval in this case nevertheless, some because they believe that the confidence-interval machinery is more appropriate to deciding whether the difference is the likely outcome of chance than is the machinery of a hypothesis test in which you are concerned with the behavior of a benchmark or null universe. So let us calculate a confidence interval for these data, which will in any case demonstrate the technique for determining a confidence interval for a difference between two samples.\nOur starting point is our estimate for the difference in mean survival times between the two samples — 30.63 days. We ask “How much might this estimate be in error? If we drew additional samples from the control universe and additional samples from the treatment universe, how much might they differ from this result?”\nWe do not have the ability to go back to these universes and draw more samples, but from the samples themselves we can create hypothetical universes that embody all that we know about the treatment and control universes. We imagine replicating each element in each sample millions of times to create a hypothetical control universe and (separately) a hypothetical treatment universe. 
Then we can draw samples (separately) from these hypothetical universes to see how reliable is our original estimate of the difference in means (30.63 days).\nActually, we use a shortcut — instead of copying each sample element a million times, we simply replace it after drawing it for our resample, thus creating a universe that is effectively infinite.\nHere are the steps:\n\nStep 1: Consider the two samples separately as the relevant universes.\nStep 2: Draw a sample of 7 with replacement from the treatment group and calculate the mean.\nStep 3: Draw a sample of 9 with replacement from the control group and calculate the mean.\nStep 4: Calculate the difference in means (treatment minus control) & record.\nStep 5: Repeat steps 2-4 many times.\nStep 6: Review the distribution of resample means; the 5th and 95th percentiles are estimates of the endpoints of a 90 percent confidence interval.\n\nHere is a Python example:\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrnd = np.random.default_rng()\n\ntreatment = np.array([94, 38, 23, 197, 99, 16, 141])\ncontrol = np.array([52, 10, 40, 104, 51, 27, 146, 30, 46])\n\nobserved_diff = np.mean(treatment) - np.mean(control)\n\nn = 10000\nmean_delta = np.zeros(n)\n\nfor i in range(n):\n treatment_sample = rnd.choice(treatment, size=7, replace=True)\n control_sample = rnd.choice(control, size=9, replace=True)\n mean_delta[i] = np.mean(treatment_sample) - np.mean(control_sample)\n\nplt.hist(mean_delta, bins='auto')\n\nprint('Observed difference in means:', observed_diff)\n\nObserved difference in means: 30.63492063492064\n\npp = np.percentile(mean_delta, (5, 95))\nprint('Estimate of 90 percent confidence interval:', pp)\n\nEstimate of 90 percent confidence interval: [-12.6515873 74.7484127]\n\n\n\n\n\n\n\n\n\nInterpretation: This means that one can be 90 percent confident that the mean of the difference (which is estimated to be 30.635) falls between -12.652) and 74.748). So the reliability of the estimate of the mean is very small." + }, + { + "objectID": "confidence_2.html#count-data-example-confidence-limit-on-a-proportion-framingham-cholesterol-data", + "href": "confidence_2.html#count-data-example-confidence-limit-on-a-proportion-framingham-cholesterol-data", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.5 Count Data Example: Confidence Limit on a Proportion, Framingham Cholesterol Data", + "text": "27.5 Count Data Example: Confidence Limit on a Proportion, Framingham Cholesterol Data\nThe Framingham cholesterol data were used in Section 21.2.6 to illustrate the first classic question in statistical inference — interpretation of sample data for testing hypotheses. Now we use the same data for the other main theme in statistical inference — the estimation of confidence intervals. Indeed, the bootstrap method discussed above was originally devised for estimation of confidence intervals. The bootstrap method may also be used to calculate the appropriate sample size for experiments and surveys, another important topic in statistics.\nConsider for now just the data for the sub-group of 135 high-cholesterol men in Table 21.4. Our second classic statistical question is as follows: How much confidence should we have that if we were to take a much larger sample than was actually obtained, the sample mean (that is, the proportion 10/135 = .07) would be in some close vicinity of the observed sample mean? 
Let us first carry out a resampling procedure to answer the questions, waiting until afterwards to discuss the logic of the inference.\n\nConstruct a bucket containing 135 balls — 10 red (infarction) and 125 green (no infarction) to simulate the universe as we guess it to be.\nMix, choose a ball, record its color, replace it, and repeat 135 times (to simulate a sample of 135 men).\nRecord the number of red balls among the 135 balls drawn.\nRepeat steps 2-3 perhaps 10000 times, and observe how much the total number of reds varies from sample to sample. We arbitrarily denote the boundary lines that include 47.5 percent of the hypothetical samples on each side of the sample mean as the 95 percent “confidence limits” around the mean of the actual population.\n\nHere is a Python program:\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nrnd = np.random.default_rng()\n\nmen = np.repeat([1, 0], repeats=[10, 125])\n\nn = 10000\nz = np.zeros(n)\n\nfor i in range(n):\n sample = rnd.choice(men, size=135, replace=True)\n infarctions = np.sum(sample == 1)\n z[i] = infarctions / 135\n\nplt.hist(z, bins='auto')\n\npp = np.percentile(z, (2.5, 97.5))\nprint('Estimate of 95 percent confidence interval', pp)\n\nEstimate of 95 percent confidence interval [0.02962963 0.11851852]\n\n\n\n\n\n\n\n\n\n(The result is the 95 percent confidence interval, enclosing 95 percent of the resample results)\nThe variation in the histogram above highlights the fact that a sample containing only 10 cases of infarction is very small, and the number of observed cases — or the proportion of cases — necessarily varies greatly from sample to sample. Perhaps the most important implication of this statistical analysis, then, is that we badly need to collect additional data.\nAgain, this is a classic problem in confidence intervals, found in all subject fields. The language used in the cholesterol-infarction example is exactly the same as the language used for the Bush-Dukakis poll above except for labels and numbers.\nAs noted above, the philosophic logic of confidence intervals is quite deep and controversial, less obvious than for the hypothesis test. The key idea is that we can estimate for any given universe the probability P that a sample’s mean will fall within any given distance D of the universe’s mean; we then turn this around and assume that if we know the sample mean, the probability is P that the universe mean is within distance D of it. This inversion is more slippery than it may seem. But the logic is exactly the same for the formulaic method and for resampling. The only difference is how one estimates the probabilities — either with a numerical resampling simulation (as here), or with a formula or other deductive mathematical device (such as counting and partitioning all the possibilities, as Galileo did when he answered a gambler’s question about three dice). And when one uses the resampling method, the probabilistic calculations are the least demanding part of the work. One then has mental capacity available to focus on the crucial part of the job — framing the original question soundly, choosing a model for the facts so as to properly resemble the actual situation, and drawing appropriate inferences from the simulation." 
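As a footnote to that last point, here is a small sketch (not in the original text) of the other direction of the same calculation for the infarction data: instead of asking which distance D encloses 95 percent of the sample proportions, we fix a distance D and estimate the probability P that a sample proportion falls within it. The distance of 0.03 is an arbitrary choice for illustration.

import numpy as np

rnd = np.random.default_rng()

# The same postulated universe as above: 10 infarctions among 135 men.
men = np.repeat([1, 0], repeats=[10, 125])
universe_proportion = 10 / 135

n = 10000
z = np.zeros(n)
for i in range(n):
    sample = rnd.choice(men, size=135, replace=True)
    z[i] = np.sum(sample == 1) / 135

d = 0.03  # the chosen distance D
p = np.mean(np.abs(z - universe_proportion) <= d)
print('Proportion of samples within', d, 'of the universe proportion:', p)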
+ }, + { + "objectID": "confidence_2.html#approach-2-probability-of-various-universes-producing-this-sample", + "href": "confidence_2.html#approach-2-probability-of-various-universes-producing-this-sample", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.6 Approach 2: Probability of various universes producing this sample", + "text": "27.6 Approach 2: Probability of various universes producing this sample\nA second approach to the general question of estimate accuracy is to analyze the behavior of a variety of universes centered at other points on the line, rather than the universe centered on the sample mean. One can ask the probability that a distribution centered away from the sample mean, with a given dispersion, would produce (say) a 10-apple scatter having a mean as far away from the given point as the observed sample mean. If we assume the situation to be symmetric, we can find a point at which we can say that a distribution centered there would have only a (say) 5 percent chance of producing the observed sample. And we can also say that a distribution even further away from the sample mean would have an even lower probability of producing the given sample. But we cannot turn the matter around and say that there is any particular chance that the distribution that actually produced the observed sample is between that point and the center of the sample.\nImagine a situation where you are standing on one side of a canyon, and you are hit by a baseball, the only ball in the vicinity that day. Based on experiments, you can estimate that a baseball thrower who you see standing on the other side of the canyon has only a 5 percent chance of hitting you with a single throw. But this does not imply that the source of the ball that hit you was someone else standing in the middle of the canyon, because that is patently impossible. That is, your knowledge about the behavior of the “boundary” universe does not logically imply anything about the existence and behavior of any other universes. But just as in the discussion of testing hypotheses, if you know that one possibility is unlikely, it is reasonable that as a result you will draw conclusions about other possibilities in the context of your general knowledge and judgment.\nWe can find the “boundary” distribution(s) we seek if we a) specify a measure of dispersion, and b) try every point along the line leading away from the sample mean, until we find that distribution that produces samples such as that observed with a (say) 5 percent probability or less.\nTo estimate the dispersion, in many cases we can safely use an estimate based on the sample dispersion, using either resampling or Normal distribution theory. The hardest cases for resampling are a) a very small sample of data, and b) a proportion near 0 or near 1.0 (because the presence or absence in the sample of a small number of observations can change the estimate radically, and therefore a large sample is needed for reliability). In such situations one should use additional outside information, or Normal distribution theory, or both.\nWe can also create a confidence interval in the following fashion: We can first estimate the dispersion for a universe in the general neighborhood of the sample mean, using various devices to be “conservative,” if we like.2 Given the estimated dispersion, we then estimate the probability distribution of various amounts of error between observed sample means and the population mean. 
We can do this with resampling simulation as follows: a) Create other universes at various distances from the sample mean, but with other characteristics similar to the universe that we postulate for the immediate neighborhood of the sample, and b) experiment with those universes. One can also apply the same logic with a more conventional parametric approach, using general knowledge of the sampling distribution of the mean, based on Normal distribution theory or previous experience with resampling. We shall not discuss the latter method here.\nAs with approach 1, we do not make any probability statements about where the population mean may be found. Rather, we discuss only what various hypothetical universes might produce , and make inferences about the “actual” population’s characteristics by comparison with those hypothesized universes.\nIf we are interested in (say) a 95 percent confidence interval, we want to find the distribution on each side of the sample mean that would produce a sample with a mean that far away only 2.5 percent of the time (2 * .025 = 1-.95). A shortcut to find these “border distributions” is to plot the sampling distribution of the mean at the center of the sample, as in Approach 1. Then find the (say) 2.5 percent cutoffs at each end of that distribution. On the assumption of equal dispersion at the two points along the line, we now reproduce the previously-plotted distribution with its centroid (mean) at those 2.5 percent points on the line. The new distributions will have 2.5 percent of their areas on the other side of the mean of the sample.\n\n27.6.1 Example: Approach 2 for Counted Data: the Bush-Dukakis Poll\nLet’s implement Approach 2 for counted data, using for comparison the Bush-Dukakis poll data discussed earlier in the context of Approach 1.\nWe seek to state, for universes that we select on the basis that their results will interest us, the probability that they (or it, for a particular universe) would produce a sample as far or farther away from the mean of the universe in question as the mean of the observed sample — 56 percent for Bush. The most interesting universe is that which produces such a sample only about 5 percent of the time, simply because of the correspondence of this value to a conventional breakpoint in statistical inference. So we could experiment with various universes by trial and error to find this universe.\nWe can learn from our previous simulations of the Bush — Dukakis poll in Approach 1 that about 95 percent of the samples fall within .025 on either side of the sample mean (which we had been implicitly assuming is the location of the population mean). If we assume (and there seems no reason not to) that the dispersions of the universes we experiment with are the same, we will find (by symmetry) that the universe we seek is centered on those points .025 away from .56, or .535 and .585.\nFrom the standpoint of Approach 2, then, the conventional sample formula that is centered at the mean can be considered a shortcut to estimating the boundary distributions. We say that the boundary is at the point that centers a distribution which has only a (say) 2.5 percent chance of producing the observed sample; it is that distribution which is the subject of the discussion, and not the distribution which is centered at \\(\\mu = \\bar{x}\\). 
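Here is a minimal sketch (not the original program) of that kind of boundary-universe simulation for the Bush-Dukakis poll: how often would a universe that is only 53.5 percent for Bush produce a sample of 1500 that is 56 percent or more for Bush?

import numpy as np

rnd = np.random.default_rng()

# A trial universe at the postulated lower boundary: 53.5 percent for Bush.
boundary_universe = np.repeat([1, 0], repeats=[535, 465])

n = 10000
proportions = np.zeros(n)
for i in range(n):
    sample = rnd.choice(boundary_universe, size=1500, replace=True)
    proportions[i] = np.sum(sample == 1) / 1500

p_as_high = np.mean(proportions >= 0.56)
print('Proportion of samples at least as pro-Bush as 0.56:', p_as_high)

If 0.535 really is the lower boundary universe, this proportion should come out near 0.025.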
Results of these simulations are shown in Figure 27.1.\n\n\n\nFigure 27.1: Approach 2 for Bush-Dukakis problem\n\n\nAbout these distributions centered at .535 and .585 — or more importantly for understanding an election situation, the universe centered at .535 — one can say: Even if the “true” value is as low as 53.5 percent for Bush, there is only a 2 ½ percent chance that a sample as high as 56 percent pro-Bush would be observed. (The values of a 2 ½ percent probability and a 2 ½ percent difference between 56 percent and 53.5 percent coincide only by chance in this case.) It would be even more revealing in an election situation to make a similar statement about the universe located at 50-50, but this would bring us almost entirely within the intellectual ambit of hypothesis testing.\nTo restate, then: Moving progressively farther away from the sample mean, we can eventually find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can then say that this point represents a “limit” or “boundary” so that the interval between it and the sample mean may be called a confidence interval.\n\n\n27.6.2 Example: Approach 2 for Measured Data: The Diameters of Trees\nTo implement Approach 2 for measured data, one may proceed exactly as with Approach 1 above except that the output of the simulation with the sample mean as midpoint will be used for guidance about where to locate trial universes for Approach 2. The results for the tree diameter data (Table 27.1) are shown in Figure 27.2.\n\n\n\nFigure 27.2: Approach 2 for tree diameters" + }, + { + "objectID": "confidence_2.html#interpretation-of-approach-2", + "href": "confidence_2.html#interpretation-of-approach-2", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.7 Interpretation of Approach 2", + "text": "27.7 Interpretation of Approach 2\nNow to interpret the results of the second approach: Assume that the sample is not drawn in a biased fashion (such as the wind blowing all the apples in the same direction), and that the population has the same dispersion as the sample. We can then say that distributions centered at the two endpoints of the 95 percent confidence interval (each of them including a tail in the direction of the observed sample mean with 2.5 percent of the area), or even further away from the sample mean, will produce the observed sample only 5 percent of the time or less .\nThe result of the second approach is more in the spirit of a hypothesis test than of the usual interpretation of confidence intervals. Another statement of the result of the second approach is: We postulate a given universe — say, a universe at (say) the two-tailed 95 percent boundary line. We then say: The probability that the observed sample would be produced by a universe with a mean as far (or further) from the observed sample’s mean as the universe under investigation is only 2.5 percent. This is similar to the probability value interpretation of a hypothesis-test framework. It is not a direct statement about the location of the mean of the universe from which the sample has been drawn. But it is certainly reasonable to derive a betting-odds interpretation of the statement just above, to wit: The chances are 2½ in 100 (or, the odds are 2½ to 97½ ) that a population located here would generate a sample with a mean as far away as the observed sample. 
And it would seem legitimate to proceed to the further betting-odds statement that (assuming we have no additional information) the odds are 97 ½ to 2 ½ that the mean of the universe that generated this sample is no farther away from the sample mean than the mean of the boundary universe under discussion. About this statement there is nothing slippery, and its meaning should not be controversial.\nHere again the tactic for interpreting the statistical procedure is to restate the facts of the behavior of the universe that we are manipulating and examining at that moment. We use a heuristic device to find a particular distribution — the one that is at (say) the 97 ½ –2 ½ percent boundary — and simply state explicitly what the distribution tells us implicitly: The probability of this distribution generating the observed sample (or a sample even further removed) is 2 ½ percent. We could go on to say (if it were of interest to us at the moment) that because the probability of this universe generating the observed sample is as low as it is, we “reject” the “hypothesis” that the sample came from a universe this far away or further. Or in other words, we could say that because we would be very surprised if the sample were to have come from this universe, we instead believe that another hypothesis is true. The “other” hypothesis often is that the universe that generated the sample has a mean located at the sample mean or closer to it than the boundary universe.\nThe behavior of the universe at the 97 ½ –2 ½ percent boundary line can also be interpreted in terms of our “confidence” about the location of the mean of the universe that generated the observed sample. We can say: At this boundary point lies the end of the region within which we would bet 97 ½ to 2 ½ that the mean of the universe that generated this sample lies to the (say) right of it.\nAs noted in the preview to this chapter, we do not learn about the reliability of sample estimates of the population mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because in principle this cannot be done . Instead, in this second approach we investigate the behavior of various universes at the borderline of the neighborhood of the sample, those universes being chosen on the basis of their resemblances to the sample. We seek, for example, to find the universes that would produce samples with the mean of the observed sample less than (say) 5 percent of the time. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of hypothesized universes, the hypotheses being implicitly suggested by the sample evidence but not logically implied by that evidence.\nApproaches 1 and 2 may (if one chooses) be seen as identical conceptually as well as (in many cases) computationally (except for the asymmetric distributions mentioned earlier). But as I see it, the interpretation of them is rather different, and distinguishing them helps one’s intuitive understanding." + }, + { + "objectID": "confidence_2.html#exercises", + "href": "confidence_2.html#exercises", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.8 Exercises", + "text": "27.8 Exercises\nSolutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.\n\n27.8.1 Exercise 1\nIn a sample of 200 people, 7 percent are found to be unemployed. 
Determine a 95 percent confidence interval for the true population proportion.\n\n\n27.8.2 Exercise 2\nA sample of 20 batteries is tested, and the average lifetime is 28.85 months. Establish a 95 percent confidence interval for the true average value. The sample values (lifetimes in months) are listed below.\n30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31\n\n\n27.8.3 Exercise 3\nSuppose we have 10 measurements of Optical Density on a batch of HIV negative control:\n.02 .026 .023 .017 .022 .019 .018 .018 .017 .022\nDerive a 95 percent confidence interval for the sample mean. Are there enough measurements to produce a satisfactory answer?\n\n\n\n\nCatling, HW, and RE Jones. 1977. “A Reinvestigation of the Provenance of the Inscribed Stirrup Jars Found at Thebes.” Archaeometry 19 (2): 137–46.\n\n\nDeshpande, Jayant V, AP Gore, and A Shanubhogue. 1995. Statistical Analysis of Nonnormal Data. Taylor & Francis. https://www.google.co.uk/books/edition/Statistical_Analysis_of_Nonnormal_Data/sS0on2XqwwoC.\n\n\nLee, Peter M. 2012. Bayesian Statistics: An Introduction. 4th ed. Wiley Online Library. https://www.york.ac.uk/depts/maths/histstat/pml1/bayes/book.htm.\n\n\nLyon, Herbert L, and Julian Lincoln Simon. 1968. “Price Elasticity of the Demand for Cigarettes in the United States.” American Journal of Agricultural Economics 50 (4): 888–95.\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New York: Dover Publications, Inc.\n\n\nSimon, Julian Lincoln. 1998. “The Philosophy and Practice of Resampling Statistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory Statistics. 5th ed. New York: John Wiley & Sons." + }, + { + "objectID": "reliability_average.html#the-problem-of-uncertainty-about-the-dispersion", + "href": "reliability_average.html#the-problem-of-uncertainty-about-the-dispersion", + "title": "28  Some Last Words About the Reliability of Sample Averages", + "section": "28.1 The problem of uncertainty about the dispersion", + "text": "28.1 The problem of uncertainty about the dispersion\nThe inescapable difficulty of estimating the amount of dispersion in the population has greatly exercised statisticians over the years. Hence I must try to clarify the matter. Yet in practice this issue turns out not to be the likely source of much error even if one is somewhat wrong about the extent of dispersion, and therefore we should not let it be a stumbling block in the way of our producing estimates of the accuracy of samples in estimating population parameters.\nStudent’s t test was designed to get around the problem of the lack of knowledge of the population dispersion. But Wallis and Roberts wrote about the t test: “[F]ar-reaching as have been the consequences of the t distribution for technical statistics, in elementary applications it does not differ enough from the normal distribution…to justify giving beginners this added complexity.” [wallis1956statistics], p. x) “Although Student’s t and the F ratio are explained…the student…is advised not ordinarily to use them himself but to use the shortcut methods… These, being non-parametric and involving simpler computations, are more nearly foolproof in the hands of the beginner — and, ordinarily, only a little less powerful.” (p. xi)1\nIf we knew the population parameter — the proportion, in the case we will discuss — we could easily determine how inaccurate the sample proportion is likely to be. 
If, for example, we wanted to know about the likely inaccuracy of the proportion of a sample of 100 voters drawn from a population of a million that is 60% Democratic, we could simply simulate drawing (say) 200 samples of 100 voters from such a universe, and examine the average inaccuracy of the 200 sample proportions.\nBut in fact we do not know the characteristics of the actual universe. Rather, the nature of the actual universe is what we seek to learn about. Of course, if the amount of variation among samples were the same no matter what the Republican-Democrat proportions in the universe, the issue would still be simple, because we could then estimate the average inaccuracy of the sample proportion for any universe and then assume that it would hold for our universe. But it is reasonable to suppose that the amount of variation among samples will be different for different Democrat-Republican proportions in the universe.\nLet us first see why the amount of variation among samples drawn from a given universe is different with different relative proportions of the events in the universe. Consider a universe of 999,999 Democrats and one Republican. Most samples of 100 taken from this universe will contain 100 Democrats. A few (and only a very, very few) samples will contain 99 Democrats and one Republican. So the biggest possible difference between the sample proportion and the population proportion (99.9999%) is less than one percent (for the very few samples of 99% Democrats). And most of the time the difference will only be the tiny difference between a sample of 100 Democrats (sample proportion = 100%), and the population proportion of 99.9999%.\nCompare the above to the possible difference between a sample of 100 from a universe of half a million Republicans and half a million Democrats. At worst a sample could be off by as much as 50% (if it got zero Republicans or zero Democrats), and at best it is unlikely to get exactly 50 of each. So it will almost always be off by 1% or more.\nIt seems, therefore, intuitively reasonable (and in fact it is true) that the likely difference between a sample proportion and the population proportion is greatest with a 50%-50% universe, least with a 0%-100% universe, and somewhere in between for probabilities, in the fashion of Figure 28.1.\n\n\n\n\n\nFigure 28.1: Relationship Between the Population Proportion and the Likely Error In a Sample\n\n\n\n\nPerhaps it will help to clarify the issue of estimating dispersion if we consider this: If we compare estimates for a second sample based on a) the population , versus b) the first sample , the former will be more accurate than the latter, because of the sampling variation in the first sample that affects the latter estimate. But we cannot estimate that sampling variation without knowing more about the population." 
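The argument above can be checked directly by simulation. The following Python sketch (an illustration, not the book's own code) draws 200 samples of 100 voters from universes with different Democrat proportions — including the 60 percent Democratic universe mentioned earlier — and reports the average error of the sample proportion.

```python
import numpy as np

rng = np.random.default_rng()

def average_error(true_p, n_samples=200, sample_size=100):
    # Draw repeated samples and measure how far, on average, the
    # sample proportion falls from the population proportion.
    props = rng.binomial(sample_size, true_p, size=n_samples) / sample_size
    return np.mean(np.abs(props - true_p))

for p in (0.999999, 0.9, 0.6, 0.5):
    print('Universe', p, '-> average error of sample proportion:',
          round(average_error(p), 4))
# The average error is greatest near a 50%-50% universe and shrinks
# toward zero as the universe approaches 0% or 100% of one party.
```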
+ }, + { + "objectID": "reliability_average.html#notes-on-the-use-of-confidence-intervals", + "href": "reliability_average.html#notes-on-the-use-of-confidence-intervals", + "title": "28  Some Last Words About the Reliability of Sample Averages", + "section": "28.2 Notes on the use of confidence intervals", + "text": "28.2 Notes on the use of confidence intervals\n\nConfidence intervals are used more frequently in the physical sciences — indeed, the concept was developed for use in astronomy — than in bio-statistics and in the social sciences; in these latter fields, measurement is less often the main problem and the distinction between hypotheses often is difficult.\nSome statisticians suggest that one can do hypothesis tests with the confidence-interval concept. But that seems to me equivalent to suggesting that one can get from New York to Chicago by flying first to Los Angeles. Additionally, the logic of hypothesis tests is much clearer than the logic of confidence intervals, and it corresponds to our intuitions so much more easily.\nDiscussions of confidence intervals sometimes assert that one cannot make a probability statement about where the population mean may be, yet can make statements about the probability that a particular set of samples may bound that mean.\n\nIf we agree that our interest is upcoming events and probably decision-making, then we obviously are interested in putting betting odds on the location of the population mean (and subsequent samples). And a statement about process will not help us with that, but only a probability statement.\nMoving progressively farther away from the sample mean, we can find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can say that this point represents a “limit” or “boundary” between which and the sample mean may be called a confidence interval, I suppose.\nThis issue is discussed in more detail in Simon (1998, published online)." + }, + { + "objectID": "reliability_average.html#overall-summary-and-conclusions-about-confidence-intervals", + "href": "reliability_average.html#overall-summary-and-conclusions-about-confidence-intervals", + "title": "28  Some Last Words About the Reliability of Sample Averages", + "section": "28.3 Overall summary and conclusions about confidence intervals", + "text": "28.3 Overall summary and conclusions about confidence intervals\nThe first task in statistics is to measure how much — to make a quantitative estimate of the universe from which a given sample has been drawn, including especially the average and the dispersion; the theory of point estimation is discussed in Chapter 19.\nThe next task is to make inferences about the meaning of the estimates. A hypothesis test helps us decide whether two or more universes are the same or different from each other. In contrast, the confidence interval concept helps us decide on the reliability of an estimate.\nConfidence intervals and hypothesis tests are not entirely disjoint. In fact, hypothesis testing of a single sample against a benchmark value is, under all interpretations, I think, operationally identical with constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different because the questions which they are designed to answer are different.\nHaving now worked through the entire procedure of producing a confidence interval, it should be glaringly obvious why statistics is such a difficult subject. 
The procedure is very long, and involves a very large number of logical steps. Such a long logical train is very hard to control intellectually, and very hard to follow with one’s intuition. The actual computation of the probabilities is the very least of it, almost a trivial exercise.\n\n\n\n\nSimon, Julian Lincoln. 1998. “The Philosophy and Practice of Resampling Statistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New Approach. New York: The Free Press." + }, + { + "objectID": "correlation_causation.html#preview", + "href": "correlation_causation.html#preview", + "title": "29  Correlation and Causation", + "section": "29.1 Preview", + "text": "29.1 Preview\nThe correlation (speaking in a loose way for now) between two variables measures the strength of the relationship between them. A positive “linear” correlation between two variables x and y implies that high values of x are associated with high values of y, and that low values of x are associated with low values of y. A negative correlation implies the opposite; high values of x are associated with low values of y. By definition a “correlation coefficient” close to zero indicates little or no linear relationship between two variables; correlation coefficients close to 1 and -1 denote a strong positive or negative relationship. We will generally use a simpler measure of correlation than the correlation coefficient, however.\nOne way to measure correlation with the resampling method is to rank both variables from highest to lowest, and investigate how often in randomly-generated samples the rankings of the two variables are as close to each other as the rankings in the observed variables. A better approach, because it uses more of the quantitative information contained in the data though it requires more computation, is to multiply the values for the corresponding pairs of values for the two variables, and compare the sum of the resulting products to the analogous sum for randomly-generated pairs of the observed variable values. The last section of the chapter shows how the strength of a relationship can be determined when the data are counted, rather than measured. First comes some discussion of the philosophical issues involved in correlation and causation." + }, + { + "objectID": "correlation_causation.html#introduction-to-correlation-and-causation", + "href": "correlation_causation.html#introduction-to-correlation-and-causation", + "title": "29  Correlation and Causation", + "section": "29.2 Introduction to correlation and causation", + "text": "29.2 Introduction to correlation and causation\nThe questions in examples Section 12.1 to Section 13.3.3 have been stated in the following form: Does the independent variable (say, irradiation; or type of pig ration) have an effect upon the dependent variable (say, sex of fruit flies; or weight gain of pigs)? This is another way to state the following question: Is there a causal relationship between the independent variable(s) and the dependent variable? (“Independent” or “control” is the name we give to the variable(s) the researcher believes is (are) responsible for changes in the other variable, which we call the “dependent” or “response” variable.)\nA causal relationship cannot be defined perfectly neatly. Even an experiment does not determine perfectly whether a relationship deserves to be called “causal” because, among other reasons, the independent variable may not be clear-cut. 
For example, even if cigarette smoking experimentally produces cancer in rats, it might be the paper and not the tobacco that causes the cancer. Or consider the fabled gentlemen who got experimentally drunk on bourbon and soda on Monday night, scotch and soda on Tuesday night, and brandy and soda on Wednesday night — and stayed sober Thursday night by drinking nothing. With a vast inductive leap of scientific imagination, they treated their experience as an empirical demonstration that soda, the common element each evening, was the cause of the inebriated state they had experienced. Notice that their deduction was perfectly sound, given only the recent evidence they had. Other knowledge of the world is necessary to set them straight. That is, even in a controlled experiment there is often no way except subject-matter knowledge to avoid erroneous conclusions about causality. Nothing except substantive knowledge or scientific intuition would have led them to the recognition that it is the alcohol rather than the soda that made them drunk, as long as they always took soda with their drinks . And no statistical procedure can suggest to them that they ought to experiment with the presence and absence of soda. If this is true for an experiment, it must also be true for an uncontrolled study.\nHere are some tests that a relationship usually must pass to be called causal. That is, a working definition of a particular causal relationship is expressed in a statement that has these important characteristics:\n\nIt is an association that is strong enough so that the observer believes it to have a predictive (explanatory) power great enough to be scientifically useful or interesting. For example, he is not likely to say that wearing glasses causes (or is a cause of) auto accidents if the observed correlation is .07, even if the sample is large enough to make the correlation statistically significant. In other words, unimportant relationships are not likely to be labeled causal.\nVarious observers may well differ in judging whether or not an association is strong enough to be important and therefore “causal.” And the particular field in which the observer works may affect this judgment. This is an indication that whether or not a relationship is dubbed “causal” involves a good deal of human judgment and is subject to dispute.\nThe “side conditions” must be sufficiently few and sufficiently observable so that the relationship will apply under a wide enough range of conditions to be considered useful or interesting. In other words, the relationship must not require too many “if”s, “and”s, and “but”s in order to hold . For example, one might say that an increase in income caused an increase in the birth rate if this relationship were observed everywhere. But, if the relationship were found to hold only in developed countries, among the educated classes, and among the higher-income groups, then it would be less likely to be called “causal” — even if the correlation were extremely high once the specified conditions had been met. A similar example can be made of the relationship between income and happiness.\nFor a relationship to be called “causal,” there should be sound reason to believe that, even if the control variable were not the “real” cause (and it never is), other relevant “hidden” and “real” cause variables must also change consistently with changes in the control variables. 
That is, a variable being manipulated may reasonably be called “causal” if the real variable for which it is believed to be a proxy must always be tied intimately to it. (Between two variables, v and w, v may be said to be the “more real” cause and w a “spurious” cause, if v and w require the same side conditions, except that v does not require w as a side condition.) This third criterion (non-spuriousness) is of particular importance to policy makers. The difference between it and the previous criterion for side conditions is that a plenitude of very restrictive side conditions may take the relationship out of the class of causal relationships, even though the effects of the side conditions are known . This criterion of nonspuriousness concerns variables that are as yet unknown and unevaluated but that have a possible ability to upset the observed association.\nExamples of spurious relationships and hidden-third-factor causation are commonplace. For a single example, toy sales rise in December. There is no danger in saying that December causes an increase in toy sales, even though it is “really” Christmas that causes the increase, because Christmas and December practically always accompany each other.\nBelief that the relationship is not spurious is increased if many likely variables have been investigated and none removes the relationship. This is further demonstration that the test of whether or not an association should be called “causal” cannot be a logical one; there is no way that one can express in symbolic logic the fact that many other variables have been tried without changing the relationship in question.\nThe more tightly a relationship is bound into (that is, deduced from, compatible with, and logically connected to) a general framework of theory, the stronger is its claim to be called “causal.” For an economics example, observed positive relationships between the interest rate and business investment and between profits and investment are more likely to be called “causal” than is the relationship between liquid assets and investment. This is so because the first two statements can be deduced from classical price theory, whereas the third statement cannot. Connection to a theoretical framework provides support for belief that the side conditions necessary for the statement to hold true are not restrictive and that the likelihood of spurious correlation is not great; because a statement is logically connected to the rest of the system, the statement tends to stand or fall as the rest of the system stands or falls. And, because the rest of the system of economic theory has, over a long period of time and in a wide variety of tests, been shown to have predictive power, a statement connected with it is cloaked in this mantle.\n\nThe social sciences other than economics do not have such well-developed bodies of deductive theory, and therefore this criterion of causality does not weigh as heavily in sociology, for instance, as in economics. Rather, the other social sciences seem to substitute a weaker and more general criterion, that is, whether or not the statement of the relationship is accompanied by other statements that seem to “explain” the “mechanism” by which the relationship operates. Consider, for example, the relationship between the phases of the moon and the suicide rate. The reason that sociologists do not call it causal is that there are no auxiliary propositions that explain the relationship and describe an operative mechanism. 
On the other hand, the relationship between broken homes and juvenile delinquency is often referred to as “causal,” in large part because a large body of psychoanalytic theory serves to explain why a child raised without one or the other parent, or in the presence of parental strife, should not adjust readily.\nFurthermore, one can never decide with perfect certainty whether in any given situation one variable “causes” a particular change in another variable. At best, given your particular purposes in investigating a phenomena, you may be safe in judging that very likely there is causal influence.\nIn brief, it is correct to say (as it is so often said) that correlation does not prove causation — if we add the word “completely” to make it “correlation does not completely prove causation.” On the other hand, causation can never be “proven” completely by correlation or any other tool or set of tools, including experimentation. The best we can do is make informed judgments about whether to call a relationship causal.\nIt is clear, however, that in any situation where we are interested in the possibility of causation, we must at least know whether there is a relationship (correlation) between the variables of interest; the existence of a relationship is necessary for a relationship to be judged causal even if it is not sufficient to receive the causal label. And in other situations where we are not even interested in causality, but rather simply want to predict events or understand the structure of a system, we may be interested in the existence of relationships quite apart from questions about causations. Therefore our next set of problems deals with the probability of there being a relationship between two measured variables, variables that can take on any values (say, the values on a test of athletic scores) rather than just two values (say, whether or not there has been irradiation.)1\nAnother way to think about such problems is to ask whether two variables are independent of each other — that is, whether you know anything about the value of one variable if you know the value of the other in a particular case — or whether they are not independent but rather are related." + }, + { + "objectID": "correlation_causation.html#a-note-on-association-compared-to-testing-a-hypothesis", + "href": "correlation_causation.html#a-note-on-association-compared-to-testing-a-hypothesis", + "title": "29  Correlation and Causation", + "section": "29.3 A Note on Association Compared to Testing a Hypothesis", + "text": "29.3 A Note on Association Compared to Testing a Hypothesis\nProblems in which we investigate a) whether there is an association , versus b) whether there is a difference between just two groups, often look very similar, especially when the data constitute a 2-by-2 table. There is this important difference between the two types of analysis, however: Questions about association refer to variables — say weight and age — and it never makes sense to ask whether there is a difference between variables (except when asking whether they measure the same quantity). Questions about similarity or difference refer to groups of individuals , and in such a situation it does make sense to ask whether or not two groups are observably different from each other.\nExample 23-1: Is Athletic Ability Directly Related to Intelligence? (Is There Correlation Between Two Variables or Are They Independent?) 
(Program “Ability1”)\nA scientist often wants to know whether or not two characteristics go together, that is, whether or not they are correlated (that is, related or associated). For example, do youths with high athletic ability tend to also have high I.Q.s?\nHypothetical physical-education scores of a group of ten high-school boys are shown in Table 23-1, ordered from high to low, along with the I.Q. score for each boy. The ranks for each student’s athletic and I.Q. scores are then shown in columns 3 and 4.\nTable 23-1\nHypothetical Athletic and I.Q. Scores for High School Boys\n\n\n\nAthletic Score\nI.Q. Score\nAthletic Rank\nI.Q.Rank\n\n\n(1)\n(2)\n(3)\n(4)\n\n\n97\n114\n1\n3\n\n\n94\n120\n2\n1\n\n\n93\n107\n3\n7\n\n\n90\n113\n4\n4\n\n\n87\n118\n5\n2\n\n\n86\n101\n6\n8\n\n\n86\n109\n7\n6\n\n\n85\n110\n8\n5\n\n\n81\n100\n9\n9\n\n\n76\n99\n10\n10\n\n\n\nWe want to know whether a high score on athletic ability tends to be found along with a high I.Q. score more often than would be expected by chance. Therefore, our strategy is to see how often high scores on both variables are found by chance. We do this by disassociating the two variables and making two separate and independent universes, one composed of the athletic scores and another of the I.Q. scores. Then we draw pairs of observations from the two universes at random, and compare the experimental patterns that occur by chance to what actually is observed to occur in the world.\nThe first testing scheme we shall use is similar to our first approach to the pig rations — splitting the results into just “highs” and “lows.” We take ten cards, one of each denomination from “ace” to “10,” shuffle, and deal five cards to correspond to the first five athletic ranks. The face values then correspond to the\nI.Q. ranks. Under the benchmark hypothesis the athletic ranks will not be associated with the I.Q. ranks. Add the face values in the first five cards in each trial; the first hand includes 2, 4, 5, 6, and 9, so the sum is 26. Record, shuffle, and repeat perhaps ten times. Then compare the random results to the sum of the observed ranks of the five top athletes, which equals 17.\nThe following steps describe a slightly different procedure than that just described, because this one may be easier to understand:\nStep 1. Convert the athletic and I.Q. scores to ranks. Then constitute a universe of spades, “ace” to “10,” to correspond to the athletic ranks, and a universe of hearts, “ace” to “10,” to correspond to the IQ ranks.\nStep 2. Deal out the well-shuffled cards into pairs, each pair with an athletic score and an I.Q. score.\nStep 3. Locate the cards with the top five athletic ranks, and add the I.Q. rank scores on their paired cards. Compare this sum to the observed sum of 17. If 17 or less, indicate “yes,” otherwise “no.” (Why do we use “17 or less” rather than “less than 17”? Because we are asking the probability of a score this low or lower .)\nStep 4. Repeat steps 2 and 3 ten times.\nStep 5. Calculate the proportion “yes.” This estimates the probability sought.\nIn Table 23-2 we see that the observed sum (17) is lower than the sum of the top 5 ranks in all but one (shown by an asterisk) of the ten random trials (trial 5), which suggests that there is a good chance (9 in 10) that the five best athletes will not have I.Q. scores that high by chance. But it might be well to deal some more to get a more reliable average. 
We add thirty hands, and thirty-nine of the total forty hands exceed the observed rank value, so the probability that the observed correlation of athletic and I.Q. scores would occur by chance is about\n.025. In other words, if there is no real association between the variables, the probability that the top 5 ranks would sum to a number this low or lower is only 1 in 40, and it therefore seems reasonable to believe that high athletic ability tends to accompany a high I.Q.\nTable 23-2\nResults of 40 Random Trials of The Problem “Ability”\n(Note: Observed sum of IQ ranks: 17)\n\n\n\nTrial\nSum of IQ Ranks\nYes or No\n\n\n1\n26\nNo\n\n\n2\n23\nNo\n\n\n3\n22\nNo\n\n\n4\n37\nNo\n\n\n* 5\n16\nYes\n\n\n6\n22\nNo\n\n\n7\n22\nNo\n\n\n8\n28\nNo\n\n\n9\n38\nNo\n\n\n10\n22\nNo\n\n\n11\n35\nNo\n\n\n12\n36\nNo\n\n\n13\n31\nNo\n\n\n14\n29\nNo\n\n\n15\n32\nNo\n\n\n16\n25\nNo\n\n\n17\n25\nNo\n\n\n18\n29\nNo\n\n\n19\n25\nNo\n\n\n20\n22\nNo\n\n\n21\n30\nNo\n\n\n22\n31\nNo\n\n\n23\n35\nNo\n\n\n24\n25\nNo\n\n\n25\n33\nNo\n\n\n26\n30\nNo\n\n\n27\n24\nNo\n\n\n28\n29\nNo\n\n\n29\n30\nNo\n\n\n30\n31\nNo\n\n\n31\n30\nNo\n\n\n32\n21\nNo\n\n\n33\n25\nNo\n\n\n34\n19\nNo\n\n\n35\n29\nNo\n\n\n36\n23\nNo\n\n\n37\n23\nNo\n\n\n38\n34\nNo\n\n\n39\n23\nNo\n\n\n40\n26\nNo\n\n\n\nThe RESAMPLING STATS program “Ability1” creates an array containing the I.Q. rankings of the top 5 students in athletics. The SUM of these I.Q. rankings constitutes the observed result to be tested against randomly-drawn samples. We observe that the actual I.Q. rankings of the top five athletes sums to 17. The more frequently that the sum of 5 randomly-generated rankings (out of 10) is as low as this observed number, the higher is the probability that there is no relationship between athletic performance and I.Q. based on these data.\nFirst we record the NUMBERS “1” through “10” into vector\nA. Then we SHUFFLE the numbers so the rankings are in a random order. Then TAKE the first 5 of these numbers and put them in another array, D, and SUM them, putting the result in E. We repeat this procedure 1000 times, recording each result in a scorekeeping vector: Z. Graphing Z, we get a HIS- TOGRAM that shows us how often our randomly assigned sums are equal to or below 17.\n\n' Program file: \"correlation_causation_00.rss\"\n\nREPEAT 1000\n ' Repeat the experiment 1000 times.\n NUMBERS 1,10 a\n ' Constitute the set of I.Q. ranks.\n SHUFFLE a b\n ' Shuffle them.\n TAKE b 1,5 d\n ' Take the first 5 ranks.\n SUM d e\n ' Sum those ranks.\n SCORE e z\n ' Keep track of the result of each trial.\nEND\n' End the experiment, go back and repeat.\nHISTOGRAM z\n' Produce a histogram of trial results.\nABILITY1: Random Selection of 5 Out of 10 Ranks\n\nSum of top 5 ranks\nWe see that in only about 2% of the trials did random selection of ranks produce a total of 17 or lower. RESAMPLING STATS will calculate this for us directly:\n\n' Program file: \"ability1.rss\"\n\nCOUNT z <= 17 k\n' Determine how many trials produced sums of ranks \\<= 17 by chance.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the results.\n\n' Note: The file \"ability1\" on the Resampling Stats software disk contains\n' this set of commands.\nWhy do we sum the ranks of the first five athletes and compare them with the second five athletes, rather than comparing the top three, say, with the bottom seven? Indeed, we could have looked at the top three, two, four, or even six or seven. 
The first reason for splitting the group in half is that an even split uses the available information more fully, and therefore we obtain greater efficiency. (I cannot prove this formally here, but perhaps it makes intuitive sense to you.) A second reason is that getting into the habit of always looking at an even split reduces the chances that you will pick and choose in such a manner as to fool yourself. For example, if the I.Q. ranks of the top five athletes were 3, 2, 1, 10, and 9, we would be deceiving ourselves if, after looking the data over, we drew the line between athletes 3 and 4. (More generally, choosing an appropriate measure before examining the data will help you avoid fooling yourself in such matters.)\nA simpler but less efficient approach to this same problem is to classify the top-half athletes by whether or not they were also in the top half of the I.Q. scores. Of the first five athletes actually observed, four were in the top five I.Q. scores. We can then shuffle five black and five red cards and see how often four or more (that is, four or five) blacks come up with the first five cards. The proportion of times that four or more blacks occurs in the trial is the probability that an association as strong as that observed might occur by chance even if there is no association. Table 23-3 shows a proportion of five trials out of twenty.\nIn the RESAMPLING STATS program “Ability2” we first note that the top 5 athletes had 4 of the top 5 I.Q. scores. So we constitute the set of 10 IQ rankings (vector A). We then SHUFFLE A and TAKE 5 I.Q. rankings (out of 10). We COUNT how many are in the top 5, and keep SCORE of the result. After REPEATing 1000 times, we find out how often we select 4 of the top 5.\nTable 23-3\nResults of 20 Random Trials of the Problem “ABILITY2”\nObserved Score: 4\n\n\n\nTrial\nScore\nYes or No\n\n\n1\n4\nYes\n\n\n2\n2\nNo\n\n\n3\n2\nNo\n\n\n4\n2\nNo\n\n\n5\n3\nNo\n\n\n6\n2\nNo\n\n\n7\n4\nYes\n\n\n8\n3\nNo\n\n\n9\n3\nNo\n\n\n10\n4\nYes\n\n\n11\n3\nNo\n\n\n12\n1\nNo\n\n\n13\n3\nNo\n\n\n14\n3\nNo\n\n\n15\n4\nYes\n\n\n16\n3\nNo\n\n\n17\n2\nNo\n\n\n18\n2\nNo\n\n\n19\n2\nNo\n\n\n20\n4\nYes\n\n\n\n\n' Program file: \"ability2.rss\"\n\nREPEAT 1000\n ' Do 1000 experiments.\n NUMBERS 1,10 a\n ' Constitute the set of I.Q. ranks.\n SHUFFLE a b\n ' Shuffle them.\n TAKE b 1,5 c\n ' Take the first 5 ranks.\n COUNT c between 1 5 d\n ' Of those 5, count how many are among the top half of the ranks (1-5).\n SCORE d z\n ' Keep track of that result in z\nEND\n' End one experiment, go back and repeat until all 1000 are complete.\nCOUNT z >= 4 k\n' Determine how many trials produced 4 or more top ranks by chance.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the result.\n\n' Note: The file \"ability2\" on the Resampling Stats software disk contains\n' this set of commands.\nSo far we have proceeded on the theory that if there is any relationship between athletics and I.Q., then the better athletes have higher rather than lower I.Q. scores. The justification for this assumption is that past research suggests that it is probably true. But if we had not had the benefit of that past research, we would then have had to proceed somewhat differently; we would have had to consider the possibility that the top five athletes could have I.Q. scores either higher or lower than those of the other students. The results of the “two-tail” test would have yielded odds weaker than those we observed.\nExample 23-2: Athletic Ability and I.Q. 
a Third Way.\n(Program “Ability3”).\nExample 23-1 investigated the relationship between I.Q. and athletic score by ranking the two sets of scores. But ranking of scores loses some efficiency because it uses only an “ordinal” (rank-ordered) rather than a “cardinal” (measured) scale; the numerical shadings and relative relationships are lost when we convert to ranks. Therefore let us consider a test of correlation that uses the original cardinal numerical scores.\nFirst a little background: Figure 29.1 and Figure 29.2 show two hypothetical cases of very high association among the I.Q. and athletic scores used in previous examples. Figure 29.1 indicates that the higher the I.Q. score, the higher the athletic score. With a boy’s athletic score you can thus predict quite well his I.Q. score by means of a hand-drawn line — or vice versa. The same is true of Figure 29.2, but in the opposite direction. Notice that even though athletic score is on the x-axis (horizontal) and I.Q. score is on the y-axis (vertical), the athletic score does not cause the I.Q. score. (It is an unfortunate deficiency of such diagrams that some variable must arbitrarily be placed on the x-axis, whether you intend to suggest causation or not.)\n\n\n\n\n\nFigure 29.1: Hypothetical Scores for I.Q. and Athletic Ability — 1\n\n\n\n\n\n\n\n\n\nFigure 29.2: Hypothetical Scores for I.Q. and Athletic Ability — 2\n\n\n\n\nIn Figure 29.3, which plots the scores as given in table 23-1 the prediction of athletic score given I.Q. score, or vice versa, is less clear-cut than in Figure 29.1. On the basis of Figure 29.3 alone, one can say only that there might be some association between the two variables.\n\n\n\n\n\nFigure 29.3: Given Scores for I.Q. and Athletic Ability" + }, + { + "objectID": "correlation_causation.html#correlation-sum-of-products", + "href": "correlation_causation.html#correlation-sum-of-products", + "title": "29  Correlation and Causation", + "section": "29.4 Correlation: sum of products", + "text": "29.4 Correlation: sum of products\nNow let us take advantage of a handy property of numbers. The more closely two sets of numbers match each other in order, the higher the sums of their products. Consider the following arrays of the numbers 1, 2, and 3:\n1 x 1 = 1\n2 x 2 = 4 (columns in matching order) 3 x 3 = 9\nSUM = 14\n1 x 2 = 2\n2 x 3 = 6 (columns not in matching order) 3 x 1 = 3\nSUM = 11\nI will not attempt a mathematical proof, but the reader is encouraged to try additional combinations to be sure that the highest sum is obtained when the order of the two columns is the same. Likewise, the lowest sum is obtained when the two columns are in perfectly opposite order:\n1 x 3 = 3\n2 x 2 = 4 (columns in opposite order) 3 x 1 = 3\nSUM = 10\nConsider the cases in Table 23-4 which are chosen to illustrate a perfect (linear) association between x (Column 1) and y 1 (Column 2), and also between x (Column 1) and y 2 (Column 4); the numbers shown in Columns 3 and 5 are those that would be consistent with perfect associations. Notice the sum of the multiples of the x and y values in the two cases. It is either higher ( xy 1) or lower ( xy 2) than for any other possible way of arranging the y ’s. 
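Readers who would like to try the additional combinations suggested above can let the computer enumerate them. This short Python sketch (not part of the original text) checks every possible ordering of the numbers 1, 2, 3 against a fixed x column.

```python
import numpy as np
from itertools import permutations

x = np.array([1, 2, 3])

# Compute the sum of products for every possible ordering of the y values.
for y in permutations([1, 2, 3]):
    print(y, int(np.sum(x * np.array(y))))
# The matching order (1, 2, 3) gives the largest sum, 14; the reversed
# order (3, 2, 1) gives the smallest, 10; every other ordering falls between.
```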
Any other arrangement of the y’s ( y 3, in Column 6, for example, chosen at random), when multiplied by the x ’s in Column 1, ( xy 3), produces a sum that falls somewhere between the sums of xy 1 and xy 2, as is the case with any other set of y 3’s which is not perfectly correlated with the x ’s.\nTable 23-5, below, shows that the sum of the products of the observed I.Q. scores multiplied by athletic scores (column 7) is between the sums that would occur if the I.Q. scores were ranked from best to worst (column 3) and worst to best (column 5). The extent of correlation (association) can thus be measured by whether the sum of the multiples of the observed x\nand y values is relatively much higher or much lower than are sums of randomly-chosen pairs of x and y .\nTable 23-4\nComparison of Sums of Multiplications\n\n\n\nStrong Positive Relationship\nStrong Negative Relationship\nRandom Pairings\n\n\n\n\n\n\nX\nY1\nX*Y1\nY2\nX*Y2\nY3\nX*Y3\n\n\n2\n2\n4\n10\n20\n4\n8\n\n\n4\n4\n16\n8\n32\n8\n32\n\n\n6\n6\n36\n6\n36\n6\n36\n\n\n8\n8\n64\n4\n48\n2\n16\n\n\n10\n10\n100\n2\n20\n10\n100\n\n\nSUMS:\n\n220\n\n156\n\n192\n\n\n\nTable 23-5\nSums of Products: IQ and Athletic Scores\n\n\n\n1\n2\n3\n4\n5\n6\n7\n\n\nAthletic\nHypothetical\nCol. 1 x\nHypothetical\nCol. 1 x\nActual\nCol. 1 x\n\n\nScore\nI.Q.\nCol.2\nI.Q.\nCol. 4\nI.Q.\nCol.6\n\n\n97\n120\n11640\n99\n9603\n114\n11058\n\n\n94\n118\n11092\n100\n9400\n120\n11280\n\n\n93\n114\n10602\n101\n9393\n107\n9951\n\n\n90\n113\n10170\n107\n9630\n113\n10170\n\n\n87\n110\n9570\n109\n9483\n118\n10266\n\n\n86\n109\n9374\n110\n8460\n101\n8686\n\n\n86\n107\n9202\n113\n9718\n109\n9374\n\n\n85\n101\n8585\n114\n9690\n110\n9350\n\n\n81\n100\n8100\n118\n9558\n100\n8100\n\n\n76\n99\n7524\n120\n9120\n99\n7524\n\n\nSUMS:\n\n95859\n\n95055\n\n95759\n\n\n\n3 Cases:\n\nPerfect positive correlation (hypothetical); column 3\nPerfect negative correlation (hypothetical); column 5\nObserved; column 7\n\nNow we attack the I.Q. and athletic-score problem using the property of numbers just discussed. First multiply the x and y values of the actual observations, and sum them to be 95,759 (Table 23-5). Then write the ten observed I.Q. scores on cards, and assign the cards in random order to the ten athletes, as shown in column 1 in Table 23-6.\nMultiply by the x’s, and sum as in Table 23-7. If the I.Q. scores and athletic scores are positively associated , that is, if high I.Q.s and high athletic scores go together, then the sum of the multiplications for the observed sample will be higher than for most of the random trials. (If high I.Q.s go with low athletic scores, the sum of the multiplications for the observed sample will be lower than most of the random trials.)\nTable 23-6\nRandom Drawing of I.Q. 
Scores and Pairing (Randomly) Against Athletic Scores (20 Trials)\nTrial Number\nAthletic 1 2 3 4 5 6 7 8 9 10\nScore\n\n\n\n97\n114\n109\n110\n118\n107\n114\n107\n120\n100\n114\n\n\n94\n101\n113\n113\n101\n118\n100\n110\n109\n120\n107\n\n\n93\n107\n118\n100\n99\n120\n101\n114\n99\n110\n113\n\n\n90\n113\n101\n118\n114\n101\n113\n100\n118\n99\n99\n\n\n87\n120\n100\n101\n100\n110\n107\n113\n114\n101\n118\n\n\n86\n100\n110\n120\n107\n113\n110\n118\n101\n118\n101\n\n\n86\n110\n107\n99\n109\n100\n120\n120\n113\n114\n120\n\n\n85\n99\n99\n104\n120\n99\n109\n101\n107\n109\n109\n\n\n81\n118\n120\n114\n110\n114\n99\n99\n100\n107\n109\n\n\n76\n109\n114\n109\n113\n109\n118\n109\n110\n113\n110\n\n\nTrial Number\n\n\n\n\n\n\n\n\n\n\n\n\nAthletic Score\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n\n\n97\n109\n118\n101\n109\n107\n100\n99\n113\n99\n110\n\n\n94\n101\n110\n114\n118\n101\n107\n114\n101\n109\n113\n\n\n93\n120\n120\n100\n120\n114\n113\n100\n100\n120\n100\n\n\n90\n110\n118\n109\n110\n99\n109\n107\n109\n110\n99\n\n\n87\n100\n100\n120\n99\n118\n114\n110\n110\n107\n101\n\n\n86\n118\n99\n107\n100\n109\n118\n113\n118\n100\n118\n\n\n86\n99\n101\n99\n101\n100\n99\n101\n107\n114\n120\n\n\n85\n107\n114\n110\n114\n120\n110\n120\n120\n118\n100\n\n\n81\n114\n107\n113\n113\n110\n101\n109\n114\n101\n100\n\n\n76\n113\n109\n118\n107\n113\n120\n118\n99\n118\n107\n\n\n\nTable 23-7\nResults of Sum Products for Above 20 Random Trials\n\n\n\nTrial\nSum of Multiplications\nTrial\nSum of Multiplications\n\n\n1\n95,430\n11\n95,406\n\n\n2\n95,426\n12\n95,622\n\n\n3\n95,446\n13\n95,250\n\n\n4\n95,381\n14\n95,599\n\n\n5\n95,542\n15\n95,323\n\n\n6\n95,362\n16\n95,308\n\n\n7\n95,508\n17\n95,220\n\n\n8\n95,590\n18\n95,443\n\n\n9\n95,379\n19\n95,421\n\n\n10\n95,532\n20\n95,528\n\n\n\nMore specifically, by the steps:\nStep 1. Write the ten I.Q. scores on one set of cards, and the ten athletic scores on another set of cards.\nStep 2. Pair the I.Q. and athletic-score cards at random. Multiply the scores in each pair, and add the results of the ten multiplications.\nStep 3. Subtract the experimental sum in step 2 from the observed sum, 95,759.\nStep 4. Repeat steps 2 and 3 twenty times.\nStep 5. Compute the proportion of trials where the difference is negative, which estimates the probability that an association as strong as the observed would occur by chance.\nThe sums of the multiplications for 20 trials are shown in Table 23-7. No random-trial sum was as high as the observed sum, which suggests that the probability of an association this strong happening by chance is so low as to approach zero. (An empirically-observed probability is never actually zero.)\nThis program can be solved particularly easily with RESAMPLING STATS. The arrays A and B in program “Ability3” list the athletic scores and the I.Q. scores respectively of 10 “actual” students ordered from highest to lowest athletic score. We MULTIPLY the corresponding elements of these arrays and proceed to compare the sum of these multiplications to the sums of experimental multiplications in which the elements are selected randomly.\nFinally, we COUNT the trials in which the sum of the products of the randomly-paired athletic and I.Q. 
scores equals or exceeds the sum of the products in the observed data.\n\n' Program file: \"correlation_causation_03.rss\"\n\nNUMBERS (97 94 93 90 87 86 86 85 81 76) a\n' Record athletic scores, highest to lowest.\nNUMBERS (114 120 107 113 118 101 109 110 100 99) b\n' Record corresponding IQ scores for those students.\nMULTIPLY a b c\n' Multiply the two sets of scores together.\nSUM c d\n' Sum the results — the \"observed value.\"\nREPEAT 1000\n ' Do 1000 experiments.\n SHUFFLE a e\n ' Shuffle the athletic scores so we can pair them against IQ scores.\n MULTIPLY e b f\n ' Multiply the shuffled athletic scores by the I.Q. scores. (Note that we\n ' could shuffle the I.Q. scores too but it would not achieve any greater\n ' randomization.)\n SUM f j\n ' Sum the randomized multiplications.\n SUBTRACT d j k\n ' Subtract the sum from the sum of the \"observed\" multiplication.\n SCORE k z\n ' Keep track of the result in z.\nEND\n' End one trial, go back and repeat until 1000 trials are complete.\nHISTOGRAM z\n' Obtain a histogram of the trial results.\nRandom Sums of Products\nATHLETES & IQ SCORES\n\nobserved sum less random sum\nWe see that obtaining a chance trial result as great as that observed was rare. RESAMPLING STATS will calculate this proportion for us:\n\n' Program file: \"ability3.rss\"\n\nCOUNT z <= 0 k\n' Determine in how many trials the random sum of products was less than\n' the observed sum of products.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Note: The file \"ability3\" on the Resampling Stats software disk contains\n' this set of commands.\nExample 23-3: Correlation Between Adherence to Medication Regime and Change in Cholesterol\nEfron and Tibshirani (1993, 72) show data on the extents to which 164 men a) took the drug prescribed to them (cholostyramine), and b) showed a decrease in total plasma cholesterol. 
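Before turning to that example, here is a rough Python rendering of the "Ability3" shuffling test just described, using the athletic and I.Q. scores listed in the program above (a sketch for orientation, not the book's own code).

```python
import numpy as np

rng = np.random.default_rng()

athletic = np.array([97, 94, 93, 90, 87, 86, 86, 85, 81, 76])
iq = np.array([114, 120, 107, 113, 118, 101, 109, 110, 100, 99])

observed = np.sum(athletic * iq)   # the observed sum of products, 95,759

n_trials = 1000
count_as_high = 0
for _ in range(n_trials):
    shuffled = rng.permutation(athletic)      # re-pair the scores at random
    if np.sum(shuffled * iq) >= observed:     # as strong as the observed result?
        count_as_high += 1

print('Proportion of shuffled pairings at least as strong as observed:',
      count_as_high / n_trials)
# As with the card procedure, this proportion comes out at or near zero.
```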
Table 23-8 shows these values (note that a positive value in the “decrease in cholesterol” column denotes a decrease in cholesterol, while a negative value denotes an increase.)\nTable 23-8\n\n\n\nTaken\nTaken\nTaken\n\nTaken\n\n\n0 -5.25\n27\n-1.50 71\n59.50\n95 32.50\n\n\n0 -7.25\n28\n23.50 71\n14.75\n95 70.75\n\n\n0 -6.25\n29\n33.00 72\n63.00\n95 18.25\n\n\n0 11.50\n31\n4.25 72\n0.00\n95 76.00\n\n\n2 21.00\n32\n18.75 73\n42.00\n95 75.75\n\n\n2 -23.00\n32\n8.50 74\n41.25\n95 78.75\n\n\n2 5.75\n33\n3.25 75\n36.25\n95 54.75\n\n\n3 3.25\n33\n27.75 76\n66.50\n95 77.00\n\n\n3 8.75\n34\n30.75 77\n61.75\n96 68.00\n\n\n4 8.75\n34\n-1.50 77\n14.00\n96 73.00\n\n\n4 -10.25\n34\n1.00 78\n36.00\n96 28.75\n\n\n7 -10.50\n34\n7.75 78\n39.50\n96 26.75\n\n\n8 19.75\n35\n-15.75 81\n1.00\n96 56.00\n\n\n8 -0.50\n36\n33.50 82\n53.50\n96 47.50\n\n\n8 29.25\n36\n36.25 84\n46.50\n96 30.25\n\n\n8 36.25\n37\n5.50 85\n51.00\n96 21.00\n\n\n9 10.75\n38\n25.50 85\n39.00\n97 79.00\n\n\n9 19.50\n41\n20.25 87\n-0.25\n97 69.00\n\n\n9 17.25\n43\n33.25 87\n1.00\n97 80.00\n\n\n10 3.50\n45\n56.75 87\n46.75\n97 86.00\n\n\n10 11.25\n45\n4.25 87\n11.50\n98 54.75\n\n\n11 -13.00\n47\n32.50 87\n2.75\n98 26.75\n\n\n12 24.00\n50\n54.50 88\n48.75\n98 80.00\n\n\n13 2.50\n50\n-4.25 89\n56.75\n98 42.25\n\n\n15 3.00\n51\n42.75 90\n29.25\n98 6.00\n\n\n15 5.50\n51\n62.75 90\n72.50\n98 104.75\n\n\n16 21.25\n52\n64.25 91\n41.75\n98 94.25\n\n\n16 29.75\n53\n30.25 92\n48.50\n98 41.25\n\n\n17 7.50\n54\n14.75 92\n61.25\n98 40.25\n\n\n18 -16.50\n54\n47.25 92\n29.50\n99 51.50\n\n\n20 4.50\n56\n18.00 92\n59.75\n99 82.75\n\n\n20 39.00\n57\n13.75 93\n71.00\n99 85.00\n\n\n21 -5.75\n57\n48.75 93\n37.75\n99 70.00\n\n\n21 -21.00\n58\n43.00 93\n41.00\n100 92.00\n\n\n21 0.25\n60\n27.75 93\n9.75\n100 73.75\n\n\n22 -10.25\n62\n44.50 93\n53.75\n100 54.00\n\n\n24 -0.50\n64\n22.50 94\n62.50\n100 69.50\n\n\n25 -19.00\n64\n-14.50 94\n39.00\n100 101.50\n\n\n25 15.75\n64\n-20.75 94\n3.25\n100 68.00\n\n\n26 6.00\n67\n46.25 94\n60.00\n100 44.75\n\n\n27 10.50\n68\n39.50 95\n113.25\n100 86.75\n\n\n\n% Prescribed Dosage\nDecrease in Cholesterol\n% Prescribed Dosage\nDecrease in Cholesterol\n% Prescribed Dosage\nDecrease in Cholesterol\n% Prescribed Dosage\nDecrease in Cholesterol\nThe aim is to assess the effect of the compliance on the improvement. There are two related issues:\n\nWhat form of regression should be fitted to these data, which we address later, and\nIs there reason to believe that the relationship is meaningful? That is, we wish to ascertain if there is any meaningful correlation between the variables — because if there is no relationship between the variables, there is no basis for regressing one on the other. Sometimes people jump ahead in the latter question to first run the regression and then ask whether the regression slope coefficient(s) is (are) different than zero, but this usually is not sound practice. The sensible way to proceed is first to graph the data to see whether there is visible indication of a relationship.\n\nEfron and Tibshirani do this, and they find sufficient intuitive basis in the graph to continue the analysis. 
The next step is to investigate whether a measure of relationship is statistically significant; this we do as follows (program “inp10”):\n\nMultiply the observed values for each of the 164 participants on the independent x variable (cholostyramine — percent of prescribed dosage actually taken) and the dependent y variable (cholesterol), and sum the results — it’s 439,140.\nRandomly shuffle the dependent variable y values among the participants. The sampling is being done without replacement, though an equally good argument could be made for sampling with replacement; the results do not differ meaningfully, however, because the sample size is so large.\nThen multiply these x and y hypothetical values for each of the 164 participants, sum the results and record.\nRepeat steps 2 and 3 perhaps 1000 times.\nDetermine how often the shuffled sum-of-products exceeds the observed value (439,140).\n\nThe following program in RESAMPLING STATS provides the solution:\n\n' Program file: \"correlation_causation_05.rss\"\n\nREAD FILE “inp10” x y\n' Data\nMULTIPLY x y xy\n' Step 1 above\nSUM xy xysum\n' Note: xysum = 439,140 (4.3914e+05)\nREPEAT 1000\n ' Do 1000 simulations (step 4 above)\n SHUFFLE x xrandom\n ' Step 2 above\n MULTIPLY xrandom y xy\n ' Step 3 above\n SUM xy newsum\n ' Step 3 above\n SCORE newsum scrboard\n ' Step 3 above\nEND\n' Step 4 above\nCOUNT scrboard >=439140 prob\n' Step 5 above\nPRINT xysum prob\n' Result: prob = 0. Interpretation: 1000 simulated random shufflings never\n' produced a sum-of-products as high as the observed value. Hence we rule\n' out random chance as an explanation for the observed correlation.\nExample 23-3: Is There A Relationship Between Drinking Beer And Being In Favor of Selling Beer? (Testing for a Relationship Between Counted-Data Variables.) (Program “Beerpoll”)\nThe data for athletic ability and I.Q. were measured. Therefore, we could use them in their original “cardinal” form, or we could split them up into “high” and “low” groups. Often, however, the individual observations are recorded only as “yes” or “no,” which makes it more difficult to ascertain the existence of a relationship. Consider the poll responses in Table 23-9 to two public-opinion survey questions: “Do you drink beer?” and “Are you in favor of local option on the sale of beer?”.2\n\nTable 23-9\nResults of Observed Sample For Problem “Beerpoll”\n\n\n\nDo you favor local option on the sale of beer?\nDo you drink beer?\n\n\n\n\n\nYes\nNo\nTotal\n\n\nFavor\n45\n20\n65\n\n\nDon’t Favor\n7\n6\n13\n\n\nTotal\n52\n26\n78\n\n\n\nHere is the statistical question: Is a person’s opinion on “local option” related to whether or not he drinks beer? Our resampling solution begins by noting that there are seventy-eight respondents, sixty-five of whom approve local option and thirteen of whom do not. Therefore write “approve” on sixty-five index cards and “not approve” on thirteen index cards. Now take another set of seventy-eight index cards, preferably of a different color, and write “yes” on fifty-two of them and “no” on twenty-six of them, corresponding to the numbers of people who do and do not drink beer in the sample. Now lay them down in random pairs, one from each pile.\nIf there is a high association between the variables, then real life observations will bunch up in the two diagonal cells in the upper left and lower right in Table 23-9. (Ignore the “total” data for now.) Therefore, subtract one sum of two diagonal cells from the other sum for the observed data: (45 + 6) - (20 + 7) = 24. 
Then compare this difference to the comparable differences found in random trials. The proportion of times that the simulated-trial difference exceeds the observed difference is the probability that the observed difference of +24 might occur by chance, even if there is no relationship between the two variables. (Notice that, in this case, we are working on the assumption that beer drinking is positively associated with approval of local option and not the inverse. We are interested only in differences that are equal to or exceed +24 when the northeast-southwest diagonal is subtracted from the northwest-southeast diagonal.)\nWe can carry out a resampling test with this procedure:\nStep 1. Write “approve” on 65 and “disapprove” on 13 red index cards, respectively; write “Drink” and “Don’t drink” on 52 and 26 white cards, respectively.\nStep 2. Pair the two sets of cards randomly. Count the numbers of the four possible pairs: (1) “approve-drink,” (2) “disapprove-don’t drink,” (3) “disapprove-drink,” and (4) “approve-don’t drink.” Record the number of these combinations, as in Table 23-10, where columns 1-4 correspond to the four cells in Table 23-9.\nStep 3. Add (column 1 plus column 4) and (column 2 plus column 3), and subtract the result in the second parenthesis from the result in the first parenthesis. If the difference is equal to or greater than 24, record “yes,” otherwise “no.”\nStep 4. Repeat steps 2 and 3 perhaps a hundred times.\nStep 5. Calculate the proportion “yes,” which estimates the probability that an association this great or greater would be observed by chance.\nTable 23-10\nResults of One Random Trial of the Problem “Beerpoll”\n\n\n\n\n\n\n\n\n\n\n\n\n(1)\n(2)\n(3)\n(4)\n(5)\n\n\nTrial\nApprove Yes\nApprove No\nDisappr ove Yes\nDisappr ove No\n(Col 1 + Col 4) -\n(Col 2 + Col 3)\n\n\n\n1 43 22 9 4 47-31=16\nA series of ten trials in this case (see Table 23-9) indicates that the observed difference is very often exceeded, which suggests that there is no relationship between beer drinking and opinion.\nThe RESAMPLING STATS program “Beerpoll” does this repetitively. From the “actual” sample results we know that 52 respondents drink beer and 26 do not. We create the vector “drink” with 52 “1”s for those who drink beer, and 26 “2”s for those who do not. We also create the vector “sale” with 65 “1”s (approve) and 13 “2”s (disapprove). In the actual sample, 51 of the 78 respondents had “consistent” responses to the two questions — that is, people who both favor the sale of beer and drink beer, or who are against the sale of beer and do not drink beer. We want to randomly pair the responses to the two questions to compare against that observed result to test the relationship.\nTo accomplish this aim, we REPEAT the following procedure 1000 times. We SHUFFLE drink to drink$ so that the responses are randomly ordered. Now when we SUBTRACT the corresponding elements of the two arrays, a “0” will appear in each element of the new array c for which there was consistency in the response of the two questions. We therefore COUNT the times that c equals “0” and place this result in d, and the number of times c does not equal 0, and place this result in e. Find the difference (d minus e), and SCORE this to z.\nSCORE Z stores for each trial the number of consistent responses minus inconsistent responses. 
To determine whether the results of the actual sample indicate a relationship between the responses to the two questions, we check how often the random trials had a difference (between consistent and inconsistent responses) as great as 24, the value in the observed sample.\n\n' Program file: \"beerpoll.rss\"\n\nURN 52#1 26#0 drink\n' Constitute the set of 52 beer drinkers, represented by 52 \"1\"s, and the\n' set of 26 non-drinkers, represented by \"0\"s.\nURN 65#1 13#0 sale\n' The same set of individuals classified by whether they favor (\"1\") or\n' don't favor (\"0\") the sale of beer.\n\n' Note: sale is now the vector {1 1 1 1 1 1 \\... 0 0 0 0 0 \\...} where 1 =\n' people in favor, 0 = people opposed.\nREPEAT 1000\n ' Repeat the experiment 1000 times.\n SHUFFLE drink drink$\n ' Shuffle the beer drinkers/non-drinker, call the shuffled set drink\\*.\n\n ' Note: drink\\$ is now a vector like {1 1 1 0 1 0 0 1 0 1 1 0 0 \\...}\n ' where 1 = drinker, 0 = non-drinker.\n SUBTRACT drink$ sale c\n ' Subtract the favor/don't favor set from the drink/don't drink set.\n ' Consistent responses are someone who drinks favoring the sale of beer (a\n ' \"1\" and a \"1\") or someone who doesn't drink opposing the sale of beer.\n ' When subtracted, consistent responses *(and only consistent responses)*\n ' produce a \"0.\"\n COUNT c =0 d\n ' Count the number of consistent responses (those equal to \"0\").\n COUNT c <> 0 e\n ' Count the \"inconsistent\" responses (those not equal to \"0\").\n SUBTRACT d e f\n ' Find the difference\n SCORE f z\n ' Keep track of the results of each trial.\nEND\n' End one trial, go back and repeat until all 1000 trials are complete.\nHISTOGRAM z\n' Produce a histogram of the trial result.\n\n' Note: The file \"beerpoll\" on the Resampling Stats software disk contains\n' this set of commands.\nAre Drinkers More Likely to Favor Local Option & Vice Versa\n\n# consistent responses thru chance draw\nThe actual results showed a difference of 24. In the histogram we see that a difference that large or larger happened just by chance pairing — without any relationship between the two variables — 23% of the time. Hence, we conclude that there is little evidence of a relationship between the two variables.\nThough the test just described may generally be appropriate for data of this sort, it may well not be appropriate in some particular case. Let’s consider a set of data where even if the test showed that an association existed, we would not believe the test result to be meaningful.\nSuppose the survey results had been as presented in Table 23-11. We see that non-beer drinkers have a higher rate of approval of allowing beer drinking, which does not accord with experience or reason. Hence, without additional explanation we would not believe that a meaningful relationship exists among these variables even if the test showed one to exist. (Still another reason to doubt that a relationship exists is that the absolute differences are too small — there is only a 6% difference in disapproval between drink and don’t drink groups — to mean anything to anyone. 
On both grounds, then, it makes sense simply to act as if there were no difference between the two groups and to run no test .).\nTable 23-11\nBeer Poll In Which Results Are Not In Accord With Expectation Or Reason\n\n\n\n\n% Approve\n% Disapprove\nTotal\n\n\nBeer Drinkers\n71%\n29%\n100%\n\n\nNon-Beer Drinkers\n77%\n23%\n100%\n\n\n\nThe lesson to be learned from this is that one should inspect the data carefully before applying a statistical test, and only test for “significance” if the apparent relationships accord with theory, general understanding, and common sense.\nExample 23-4: Do Athletes Really Have “Slumps”? (Are Successive Events in a Series Independent, or is There a Relationship Between Them?)\nThe important concept of independent events was introduced earlier. Various scientific and statistical decisions depend upon whether or not a series of events is independent. But how does one know whether or not the events are independent? Let us consider a baseball example.\nBaseball players and their coaches believe that on some days and during some weeks a player will bat better than on other days and during other weeks. And team managers and coaches act on the belief that there are periods in which players do poorly — slumps — by temporarily replacing the player with another after a period of poor performance. The underlying belief is that a series of failures indicates a temporary (or permanent) change in the player’s capacity to play well, and it therefore makes sense to replace him until the evil spirit passes on, either of its own accord or by some change in the player’s style.\nBut even if his hits come randomly, a player will have runs of good luck and runs of bad luck just by chance — just as does a card player. The problem, then, is to determine whether (a) the runs of good and bad batting are merely runs of chance, and the probability of success for each event remains the same throughout the series of events — which would imply that the batter’s ability is the same at all times, and coaches should not take recent performance heavily into account when deciding which players should play; or (b) whether a batter really does have a tendency to do better at some times than at others, which would imply that there is some relationship between the occurrence of success in one trial event and the probability of success in the next trial event, and therefore that it is reasonable to replace players from time to time.\nLet’s analyze the batting of a player we shall call “Slug.” Here are the results of Slug’s first 100 times at bat during the 1987 season (“H” = hit, “X” = out):\nX X X X X X H X X H X H H X X X X X X X X H X X X X X H X X X X H H X X X X X H X X H X H X X X H H X X X X X H X H X X X X H H X H H X X X X X X X X X X H X X X H X X H X X H X H X X H X X X H X X X.\nNow, do Slug’s hits tend to come in bunches? That would be the case if he really did have a tendency to do better at some times than at others. Therefore, let us compare Slug’s results with those of a deck of cards or a set of random numbers that we know has no tendency to do better at some times than at others.\nDuring this period of 100 times at bat, Slug has averaged one hit in every four times at bat — a .250 batting average. This average is the same as the chance of one card suit’s coming up. We designate hearts as “hits” and prepare a deck of 100 cards, twenty-five “H”s (hearts, or “hit”) and seventy-five “X”s (other suit, or “out”). 
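(For concreteness, here is a rough Python sketch of shuffling one such 25-hit, 75-out deck and counting its clusters — the "runs" of consecutive identical outcomes. The names and details are our own; the card layout from the text follows.)

import numpy as np

rnd = np.random.default_rng()

# A deck of 25 hits ('H') and 75 outs ('X'), shuffled once.
deck = np.repeat(['H', 'X'], [25, 75])
shuffled = rnd.permutation(deck)

# A new cluster (run) begins at the first card and at every card that
# differs from the one before it.
n_clusters = 1 + np.sum(shuffled[1:] != shuffled[:-1])
print(n_clusters)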
Here is the sequence in which the 100 randomly-shuffled cards fell:\nX X H X X X X H H X X X H H H X X X X X H X X X H X X H X X X X H X H H X X X X X X X X X H X X X X X X H H X X X X X H H H X X X X X X H X H X H X X H X H X X X X X X X X X H X X X X X X X H H H X X.\nNow we can compare whether or not Slug’s hits are bunched up more than they would be by random chance; we can do so by counting the clusters (also called “runs”) of consecutive hits and outs for Slug and for the cards. Slug had forty-three clusters, which is more than the thirty-seven clusters in the cards; it therefore does not seem that there is a tendency for Slug’s hits to cluster together. (A larger number of clusters indicates a lower tendency to cluster.)\nOf course, the single trial of 100 cards shown above might have an unusually high or low number of clusters. To be safer, lay out, (say,) ten trials of 100 cards each, and compare Slug’s number of clusters with the various trials. The proportion of trials with more clusters than Slug’s indicates whether or not Slug’s hits have a tendency to bunch up. (But caution: This proportion cannot be interpreted directly as a probability.)\nNow the steps:\nStep 1. Constitute a bucket with 3 slips of paper that say “out” and one that says “hit.” Or “01-25” = hits (H), “26-00” = outs (X), Slug’s long-run average.\nStep 2. Sample 100 slips of paper, with replacement, record “hit” or “out” each time, or write a series of “H’s” or “X’s” corresponding to 100 numbers, each selected randomly between 1 and 100.\nStep 3. Count the number of “clusters,” that is, the number of “runs” of the same event, “H”s or “X”s.\nStep 4. Compare the outcome in step 3 with Slug’s outcome, 43 clusters. If 43 or fewer; write “yes,” otherwise “no.”\nStep 5. Repeat steps 2-4 a hundred times.\nStep 6. Compute the proportion “yes.” This estimates the probability that Slug’s record is not characterized by more “slumps” than would be caused by chance. A very low proportion of “yeses” indicates longer (and hence fewer) “streaks” and “slumps” than would result by chance.\nIn RESAMPLING STATS, we can do this experiment 1000 times.\n\n' Program file: \"sluggo.rss\"\n\nREPEAT 1000\n URN 3#0 1#1 a\n SAMPLE 100 a b\n ' Sample 100 \"at-bats\" from a\n RUNS b >=1 c\n ' How many runs (of any length \\>=1) are there in the 100 at-bats?\n SCORE c z\nEND\nHISTOGRAM z\n' Note: The file \"sluggo\" on the Resampling Stats software disk contains\n' this set of commands.\nExamining the histogram, we see that 43 runs is not at all an unusual occurrence:\n“Runs” in 100 At-Bats\n\n# “runs” of same outcome\nThe manager wants to look at this matter in a somewhat different fashion, however. He insists that the existence of slumps is proven by the fact that the player sometimes does not get a hit for an abnormally long period of time. One way of testing whether or not the coach is right is by comparing an average player’s longest slump in a 100-at-bat season with the longest run of outs in the first card trial. Assume that Slug is a player picked at random . Then compare Slug’s longest slump — say, 10 outs in a row — with the longest cluster of a single simulated 100-at-bat trial with the cards, 9 outs. This result suggests that Slug’s apparent slump might well have resulted by chance.\nThe estimate can be made more accurate by taking the average longest slump (cluster of outs) in ten simulated 400-at-bat trials. But notice that we do not compare Slug’s slump against the longest slump found in ten such simulated trials. 
We want to know the longest cluster of outs that would be found under average conditions, and the hand with the longest slump is not average or typical. Determining whether to compare Slug’s slump with the average longest slump or with the longest of the ten longest slumps is a decision of crucial importance. There are no mathematical or logical rules to help you. What is required is hard, clear thinking. Experience can help you think clearly, of course, but these decisions are not easy or obvious even to the most experienced statisticians.\nThe coach may then refer to the protracted slump of one of the twenty-five players on his team to prove that slumps really occur. But, of twenty-five random 100-at-bat trials, one will contain a slump longer than any of the other twenty-four, and that slump will be considerably longer than average. A fair comparison, then, would be between the longest slump of his longest-slumping player, and the longest run of outs found among twenty-five random trials. In fact, the longest run among twenty-five hands of 100 cards was fifteen outs in a row. And, if we had set some of the hands for lower (and higher) batting averages than .250, the longest slump in the cards would have been even longer.\nResearch by Roberts and his students at the University of Chicago shows that in fact slumps do not exist, as I conjectured in the first publication of this material in 1969. (Of course, a batter feels as if he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, sandlot players and professionals alike feel confident — just as gamblers often feel that they’re on a “streak.” But there seems to be no connection between a player’s performance and whether he feels hot or cold, astonishing as that may be.)\nAverages over longer periods may vary systematically, as Ty Cobb’s annual batting average varied non-randomly from season to season, Roberts found. But short-run analyses of dayto-day and week-to-week individual and team performances in most sports have shown results similar to the outcomes that a lottery-type random-number machine would produce.\nRemember, too, the study by Gilovich, Vallone, and Twersky of basketball mentioned in Chapter 14. To repeat, their analyses “provided no evidence for a positive correlation between the outcomes of successive shots.” That is, knowing whether a shooter has or has not scored on the previous sheet — or in any previous sequence of shots — is useless for predicting whether he will score again.\nThe species homo sapiens apparently has a powerful propensity to believe that one can find a pattern even when there is no pattern to be found. Two decades ago I cooked up several series of random numbers that looked like weekly prices of publicly-traded stocks. Players in the experiment were told to buy and sell stocks as they chose. Then I repeatedly gave them “another week’s prices,” and allowed them to buy and sell again. The players did all kinds of fancy calculating, using a wild variety of assumptions — although there was no possible way that the figuring could help them.\nWhen I stopped the game before completing the 10 buy-andsell sessions they expected, subjects would ask that the game go on. Then I would tell them that there was no basis to believe that there were patterns in the data, because the “prices” were just randomly-generated numbers. Winning or losing therefore did not depend upon the subjects’ skill. 
Nevertheless, they demanded that the game not stop until the 10 “weeks” had been played, so they could find out whether they “won” or “lost.”\nThis study of batting illustrates how one can test for independence among various trials. The trials are independent if each observation is randomly chosen with replacement from the universe, in which case there is no reason to believe that one observation will be related to the observations directly before and after; as it is said, “the coin has no memory.”\nThe year-to-year level of Lake Michigan is an example in which observations are not independent. If Lake Michigan is very high in one year, it is likely to be higher than average the following year because some of the high level carries over from one year into the next.3 We could test this hypothesis by writing down whether the level in each year from, say, 1860 to 1975 was higher or lower than the median level for those years. We would then count the number of runs of “higher” and “lower” and compare the number of runs of “black” and “red” with a deck of that many cards; we would find fewer runs in the lake level than in an average hand of 116 (1976-1860) cards, though this test is hardly necessary. (But are the changes in Lake Michigan’s level independent from year to year? If the level went up last year, is there a better than 50-50 chance that the level will also go up this year? The answer to this question is not so obvious. One could compare the numbers of runs of ups and downs against an average hand of cards, just as with the hits and outs in baseball.)\nExercise for students: How could one check whether the successive numbers in a random-number table are independent?" + }, + { + "objectID": "correlation_causation.html#exercises", + "href": "correlation_causation.html#exercises", + "title": "29  Correlation and Causation", + "section": "29.5 Exercises", + "text": "29.5 Exercises\nSolutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.\nExercise 23-1\nTable 23-12 shows voter participation rates in the various states in the 1844 presidential election. Should we conclude that there was a negative relationship between the participation rate (a) and the vote spread (b) between the parties in the election? (Adapted from (Noreen 1989, 20, Table 2-4):\nTable 23-12\nVoter Participation In The 1844 Presidential Election\n\n\n\nState\nParticipation (a)\nSpread (b)\n\n\nMaine\n67.5\n13\n\n\nNew Hampshire\n65.6\n19\n\n\nVermont\n65.7\n18\n\n\nMassachusetts\n59.3\n12\n\n\nRhode Island\n39.8\n20\n\n\nConnecticut\n76.1\n5\n\n\nNew York\n73.6\n1\n\n\nNew Jersey\n81.6\n1\n\n\nPennsylvania\n75.5\n2\n\n\nDelaware\n85.0\n3\n\n\nMaryland\n80.3\n5\n\n\nVirginia\n54.5\n6\n\n\nNorth Carolina\n79.1\n5\n\n\nGeorgia\n94.0\n4\n\n\nKentucky\n80.3\n8\n\n\nTennessee\n89.6\n1\n\n\nLouisiana\n44.7\n3\n\n\nAlabama\n82.7\n8\n\n\nMississippi\n89.7\n13\n\n\nOhio\n83.6\n2\n\n\nIndiana\n84.9\n2\n\n\nIllinois\n76.3\n12\n\n\nMissouri\n74.7\n17\n\n\nArkansas\n68.8\n26\n\n\nMichigan\n79.3\n6\n\n\nNational Average\n74.9\n9\n\n\n\nThe observed correlation coefficient between voter participation and spread is -.37398. Is this more negative that what might occur by chance, if no correlation exists?\nExercise 23-2\nWe would like to know whether, among major-league baseball players, home runs (per 500 at-bats) and strikeouts (per 500 at-bat’s) are correlated. We first use the procedure as used above for I.Q. and athletic ability — multiplying the elements within each pair. 
(We will later use a more “sophisticated” measure, the correlation coefficient.)\nThe data for 18 randomly-selected players in the 1989 season are as follows, as they would appear in the first lines of the program.\n\n' Program file: \"correlation_causation_08.rss\"\n\nNUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32) homeruns\nNUMBERS (135 153 120 161 138 175 126 200 205 147 165 124 169 156 36 98 82 131) strikeout\n' Exercise: Complete this program.\nExercise 23-3\nIn the previous example relating strikeouts and home runs, we used the procedure of multiplying the elements within each pair. Now we use a more “sophisticated” measure, the correlation coefficient, which is simply a standardized form of the multiplicands, but sufficiently well known that we calculate it with a pre-set command.\nExercise: Write a program that uses the correlation coefficient to test the significance of the association between home runs and strikeouts.\nExercise 23-4\nAll the other things equal, an increase in a country’s money supply is inflationary and should have a negative impact on the exchange rate for the country’s currency. The data in the following table were computed using data from tables in the 1983/1984 Statistical Yearbook of the United Nations :\nTable 23-13\nMoney Supply and Exchange Rate Changes\n% Change % Change % Change % Change\nExch. Rate Money Supply Exch. Rate Money Supply\n\n\n\nAustralia\n0.089\n0.035\nBelgium\n0.134\n0.003\n\n\nBotswana\n0.351\n0.085\nBurma\n0.064\n0.155\n\n\nBurundi\n0.064\n0.064\nCanada\n0.062\n0.209\n\n\nChile\n0.465\n0.126\nChina\n0.411\n0.555\n\n\nCosta Rica\n0.100\n0.100\nCyprus\n0.158\n0.044\n\n\nDenmark\n0.140\n0.351\nEcuador\n0.242\n0.356\n\n\nFiji\n0.093\n0.000\nFinland\n0.124\n0.164\n\n\nFrance\n0.149\n0.090\nGermany\n0.156\n0.061\n\n\nGreece\n0.302\n0.202\nHungary\n0.133\n0.049\n\n\nIndia\n0.187\n0.184\nIndonesia\n0.080\n0.132\n\n\nItaly\n0.167\n0.124\nJamaica\n0.504\n0.237\n\n\nJapan\n0.081\n0.069\nJordan\n0.092\n0.010\n\n\nKenya\n0.144\n0.141\nKorea\n0.040\n0.006\n\n\nKuwait\n0.038\n-0.180\nLebanon\n0.619\n0.065\n\n\nMadagascar\n0.337\n0.244\nMalawi\n0.205\n0.203\n\n\nMalaysia\n0.037\n-0.006\nMalta\n0.003\n0.003\n\n\nMauritania\n0.180\n0.192\nMauritius\n0.226\n0.136\n\n\nMexico\n0.338\n0.599\nMorocco\n0.076\n0.076\n\n\nNetherlands\n0.158\n0.078\nNew Zealand\n0.370\n0.098\n\n\nNigeria\n0.079\n0.082\nNorway\n0.177\n0.242\n\n\nPapua\n0.075\n0.209\nPhilippines\n0.411\n0.035\n\n\nPortugal\n0.288\n0.166\nRomania\n-0.029\n0.039\n\n\nRwanda\n0.059\n0.083\nSamoa\n0.348\n0.118\n\n\nSaudi Arabia\n0.023\n0.023\nSeychelles\n0.063\n0.031\n\n\nSingapore\n0.024\n0.030\nSolomon Is\n0.101\n0.526\n\n\nSomalia\n0.481\n0.238\nSouth Africa\n0.624\n0.412\n\n\nSpain\n0.107\n0.086\nSri Lanka\n0.051\n0.141\n\n\nSwitzerland\n0.186\n0.186\nTunisia\n0.193\n0.068\n\n\nTurkey\n0.573\n0.181\nUK\n0.255\n0.154\n\n\nUSA\n0.000\n0.156\nVanatuva\n0.008\n0.331\n\n\nYemen\n0.253\n0.247\nYugoslavia\n0.685\n0.432\n\n\nZaire\n0.343\n0.244\nZambia\n0.457\n0.094\n\n\nZimbabwe\n0.359\n0.164\n\n\n\n\n\n\nPercentage changes in exchange rates and money supply between 1983 and 1984 for various countries.\nAre changes in the exchange rates and in money supplies related to each other? That is, are they correlated?\n\nExercise: Should the algorithm of non-computer resampling steps be similar to the algorithm for I.Q. and athletic ability shown in the text? 
One can also work with the correlation coefficient rather then the sum-of-products method, and expect to get the same result.\n\nWrite a series of non-computer resampling steps to solve this problem.\nWrite a computer program to implement those steps.\n\n\n\n\n\nDixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to Statistical Analysis.”\n\n\nEfron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to the Bootstrap.” In Monographs on Statistics and Applied Probability, edited by David R Cox, David V Hinkley, Nancy Reid, Donald B Rubin, and Bernard W Silverman. Vol. 57. New York: Chapman & Hall.\n\n\nNoreen, Eric W. 1989. Computer-Intensive Methods for Testing Hypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research Methods in Social Science. 3rd ed. New York: Random House.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New Approach. New York: The Free Press." + }, + { + "objectID": "how_big_sample.html#issues-in-determining-sample-size", + "href": "how_big_sample.html#issues-in-determining-sample-size", + "title": "30  How Large a Sample?", + "section": "30.1 Issues in determining sample size", + "text": "30.1 Issues in determining sample size\nSometime in the course of almost every study — preferably early in the planning stage — the researcher must decide how large a sample to take. Deciding the size of sample to take is likely to puzzle and distress you at the beginning of your research career. You have to decide somehow, but there are no simple, obvious guides for the decision.\nFor example, one of the first studies I worked on was a study of library economics (Fussler and Simon 1961), which required taking a sample of the books from the library’s collections. Sampling was expensive, and we wanted to take a correctly sized sample. But how large should the sample be? The longer we searched the literature, and the more people we asked, the more frustrated we got because there just did not seem to be a clear-cut answer. Eventually we found out that, even though there are some fairly rational ways of fixing the sample size, most sample sizes in most studies are fixed simply (and irrationally) by the amount of money that is available or by the sample size that similar research has used in the past.\nThe rational way to choose a sample size is by weighing the benefits you can expect in information against the cost of increasing the sample size. In principle you should continue to increase the sample size until the benefit and cost of an additional sampled unit are equal.1\nThe benefit of additional information is not easy to estimate even in applied research, and it is extraordinarily difficult to estimate in basic research. Therefore, it has been the practice of researchers to set up target goals of the degree of accuracy they wish to achieve, or to consider various degrees of accuracy that might be achieved with various sample sizes, and then to balance the degree of accuracy with the cost of achieving that accuracy. 
The bulk of this chapter is devoted to learning how the sample size is related to accuracy in simple situations.\nIn complex situations, however, and even in simple situations for beginners, you are likely to feel frustrated by the difficulties of relating accuracy to sample size, in which case you cry out to a supervisor, “Don’t give me complicated methods, just give me a rough number based on your greatest experience.” My inclination is to reply to you, “Sometimes life is hard and there is no shortcut.” On the other hand, perhaps you can get more information than misinformation out of knowing sample sizes that have been used in other studies. Table 24-1 shows the middle (modal), 25th percentile, and 75th percentile scores for — please keep this in mind — National Opinion Surveys in the top panel. The bottom panel shows how subgroup analyses affect sample size.\nPretest sample sizes are smaller, of course, perhaps 25-100 observations. Samples in research for Master’s and Ph.D. theses are likely to be closer to a pretest than to national samples.\nTable 24-1\nMost Common Sample Sizes Used for National and Regional Studies By Subject Matter\nSubject Matter National Regional\n\n\n\nSubject Matter\nMode\nQ3\nQ1\nMode\nQ3\nQ1\n\n\nFinancial\n1000+\n—\n—\n100 40\n0 50\n\n\n\nMedical\n1000+\n1000+\n500\n1000+ 10\n00+ 25\n0\n\n\nOther Behavior\n1000+\n—\n—\n700 10\n00 30\n0\n\n\nAttitudes\n1000+\n1000+\n500\n700 10\n00 40\n0\n\n\nLaboratory Experiments\n—\n—\n—\n100 20\n0 50\n\n\n\n\nTypical Sample Sizes for Studies of Human and Institutional Populations\nPeople or Households Institutions\n\n\n\n\nPeople or house\nholds\nInstitutions\n\n\n\nSubgroup Analyses\nNational\nSpecial\nNational\nSpecial\n\n\nNone or few\n1000-1500\n200-500\n200-500\n50-200\n\n\nAverage\n1500-2500\n500-1000\n500-1000\n200-500\n\n\nMany\n2500+\n1000+\n1000+\n500+\n\n\n\nSOURCE: From Applied Sampling, by Seymour Sudman (1976, 86 — 87) copyright Academic Press, reprinted by permission.\nOnce again, the sample size ought to depend on the proportions of the sample that have the characteristics you are interested in, the extent to which you want to learn about subgroups as well as the universe as a whole, and of course the purpose of your study, the value of the information, and the cost. Also, keep in mind that the added information that you obtain from an additional sample observation tends to be smaller as the sample size gets larger. You must quadruple the sample to halve the error.\nNow let us consider some specific cases. The first examples taken up here are from the descriptive type of study, and the latter deal with sample sizes in relationship research." + }, + { + "objectID": "how_big_sample.html#some-practical-examples", + "href": "how_big_sample.html#some-practical-examples", + "title": "30  How Large a Sample?", + "section": "30.2 Some practical examples", + "text": "30.2 Some practical examples\nExample 24-1\nWhat proportion of the homes in Countryville are tuned into television station WCNT’s ten o’clock news program? That is the question your telephone survey aims to answer, and you want to know how many randomly selected homes you must telephone to obtain a sufficiently large sample.\nBegin by guessing the likeliest answer, say 30 percent in this case. Do not worry if you are off by 5 per cent or even 10 per cent; and you will probably not be further off than that. Select a first-approximation sample size of perhaps 400; this number is selected from my general experience, but it is just a starting point. 
Then proceed through the first 400 numbers in the random-number table, marking down a yes for numbers 1-3 and no for numbers 4-10 (because 3/10 was your estimate of the proportion listening). Then add the number of yes and no . Carry out perhaps ten sets of such trials, the results of which are in Table 24-2.\nTable 24-2\n% DIFFERENCE FROM\nTrial Number “Yes” Number “No” Expected Mean of 30%\n\n\n\n\n(120 “Yes”)\n\n\n\n\n1\n115\n285\n1.25\n\n\n2\n119\n281\n0.25\n\n\n3\n116\n284\n1.00\n\n\n4\n114\n286\n1.50\n\n\n5\n107\n293\n3.25\n\n\n6\n116\n284\n1.00\n\n\n7\n132\n268\n3.00\n\n\n8\n123\n277\n0.75\n\n\n9\n121\n279\n0.25\n\n\n10\n114\n286\n1.50\n\n\nMean\n\n\n1.37\n\n\n\nBased on these ten trials, you can estimate that if you take a sample of 400 and if the “real” viewing level is 30 percent, your average percentage error will be 1.375 percent on either side of 30 percent. That is, with a sample of 400, half the time your error will be greater than 1.375 percent if 3/10 of the universe is listening.\nNow you must decide whether the estimated error is small enough for your needs. If you want greater accuracy than a sample of 400 will give you, increase the sample size, using this important rule of thumb: To cut the error in half, you must quadruple the sample size. In other words, if you want a sample that will give you an error of only 0.55 percent on the average, you must increase the sample size to 1,600 interviews. Similarly, if you cut the sample size to 100, the average error will be only 2.75 percent (double 1.375 percent) on either side of 30 percent. If you distrust this rule of thumb, run ten or so trials on sample sizes of 100 or 1,600, and see what error you can expect to obtain on the average.\nIf the “real” viewership is 20 percent or 40 percent, instead of 30 percent, the accuracy you will obtain from a sample size of 400 will not be very different from an “actual” viewership of 30 percent, so do not worry about that too much, as long as you are in the right general vicinity.\nAccuracy is slightly greater in smaller universes but only slightly. For example, a sample of 400 would give perfect accuracy if Countryville had only 400 residents. And a sample of 400 will give slightly greater accuracy for a town of 800 residents than for a city of 80,000 residents. But, beyond the point at which the sample is a large fraction of the total universe, there is no difference in accuracy with increases in the size of universe. This point is very important. For any given level of accuracy, identical sample sizes give the same level of accuracy for Podunk (population 8,000) or New York City (population 8 million). The ratio of the sample size to the population of Podunk or New York City means nothing at all, even though it intuitively seems to be important.\nThe size of the sample must depend upon which population or subpopulations you wish to describe. For example, Alfred Kinsey’s sample size for the classic “Sexual Behavior in the Human Male” (1948) would have seemed large, by customary practice, for generalizations about the United States population as a whole. But, as Kinsey explains: “… the chief concern of the present study is an understanding of the sexual behavior of each segment of the population, and that it is only secondarily concerned with generalization for the population as a whole.” (1948, 82, italics added). Therefore Kinsey’s sample had to include subsamples large enough to obtain the desired accuracy in each of these sub-universes. The U.S. 
Census offers a similar illustration. When the U.S. Bureau of the Census aims to estimate only a total or an average for the United States as a whole — as, for example, in the Current Population Survey estimate of unemployment — a sample of perhaps 50,000 is big enough. But the decennial census aims to make estimates for all the various communities in the country, estimates that require adequate subsamples in each of these sub-universes; such is the justification for the decennial census’ sample size of so many millions. Television ratings illustrate both types of purpose. Nielsen ratings, for example, are sold primarily to national network advertisers. These advertisers on national television networks usually sell their goods all across the country and are therefore interested primarily in the total United States viewership for a program, rather than in the viewership in various demographic subgroups. The appropriate calculations for Nielsen sample size will therefore refer to the total United States sample. But other organizations sell rating services to local television and radio stations for use in soliciting advertising over the local stations rather than over the network as a whole. Each local sample must then be large enough to provide reasonable accuracy, and, considered as a whole, the samples for the local stations therefore add up to a much larger sample than the Nielsen and other nationwide samples.\nThe problem may be handled with the following Python program. This program represents viewers with the string 'viewers' and non-viewers as 'not viewers'. It then asks rnd.choice to choose randomly between 'viewer' and 'not viewer' with a 30% (p=0.3) chance of getting a 'viewer' and a 70% chance of getting a 'not viewer'. It gets a sample of 400 such numbers, counts (with np.sum the “viewers” then finds how much this sample diverges from the expected number of viewers (30% of 400 = 120). It repeats this procedure 10000 times, and then calculates the average divergence.\n\nStart of viewer_numbers notebook\n\nDownload notebook\nInteract\n\n\n\nimport numpy as np\n\n# set up the random number generator\nrnd = np.random.default_rng()\n\n\n# set the number of trials\nn_trials = 10000\n\n# an empty array to store the scores\nscores = np.zeros(n_trials)\n\n# What are the options to choose from?\noptions = ['viewer', 'not viewer']\n\n# do n_trials trials\nfor i in range(n_trials):\n\n # Choose 'viewer' 30% of the time.\n a = rnd.choice(options, size=400, p=[0.3, 0.7])\n\n # count the viewers\n b = np.sum(a == 'viewer')\n\n # how different from expected?\n c = 120 - b\n\n # absolute value of the difference\n d = np.abs(c)\n\n # express as a proportion of sample\n e = d / 400\n\n # keep score of the result\n scores[i] = e\n\n# find the mean divergence\nk = np.mean(scores)\n\n# Show the result\nk\n\n0.018184000000000002\n\n\n\nEnd of viewer_numbers notebook\n\nIt is a simple matter to go back and try a sample size of (say) 1600 rather than 400, and examine the effect on the mean difference.\nExample 24-2\nThis example, like Example 24-1, illustrates the choice of sample size for estimating a summarization statistic. Later examples deal with sample sizes for probability statistics.\nHark back to the pig-ration problems presented earlier, and consider the following set of pig weight-gains recorded for ration A: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 30. 
Assume that\nour purpose now is to estimate the average weight gain for ration A, so that the feed company can advertise to farmers how much weight gain to expect from ration A. If the universe is made up of pig weight-gains like those we observed, we can simulate the universe with, say, 1 million weight gains of thirty-one pounds, 1 million of thirty-four pounds, and so on for the twelve observed weight gains. Or, more conveniently, as accuracy will not be affected much, we can make up a universe of say, thirty cards for each thirty-one-pound gain, thirty cards for each thirty-four-pound gains and so forth, yielding a deck of 30 x 12 = 360 cards. Then shuffle, and, just for a starting point, try sample sizes of twelve pigs. The means of the samples for twenty such trials are as in Table 24-3.\nNow ask yourself whether a sample size of twelve pigs gives you enough accuracy. There is a .5 chance that the mean for the sample will be more than .65 or .92 pound (the two median deviations) or (say) .785 pound (the midpoint of the two medians) from the mean of the universe that generates such samples, which in this situation is 31.75 pounds. Is this close enough? That is up to you to decide in light of the purposes for which you are running the experiment. (The logic of the inference you make here is inevitably murky, and use of the term “real mean” can make it even murkier, as is seen in the discussion in Chapters 20-22 on confidence intervals.)\nTo see how accuracy is affected by larger samples, try a sample size of forty-eight “pigs” dealt from the same deck. (But, if the sample size were to be much larger than forty-eight, you might need a “universe” greater than 360 cards.) The results of twenty trials are in Table 24-4.\nIn half the trials with a sample size of forty-eight the difference between the sample mean and the “real” mean of 31.75 will be .36 or .37 pound (the median deviations), smaller than with the values of .65 and .92 for samples of 12 pigs. Again, is this too little accuracy for you? If so, increase the sample size further.\nTable 24-3\n\n\n\n\n\n\n\n\n\n\n\nTrial\nMean\nAbsolut e Devisatio n of Trial Mean\nfrom Actual Mean\nTrial\nMean\nAbsolut e Deviation of Trial Mean\nfrom Actual Mean\n\n\n1\n31.77\n.02\n11\n32.10\n.35\n\n\n2\n32.27\n1.52\n12\n30.67\n1.08\n\n\n3\n31.75\n.00\n13\n32.42\n.67\n\n\n4\n30.83\n.92\n14\n30.67\n1.08\n\n\n5\n30.52\n1.23\n15\n32.25\n.50\n\n\n6\n31.60\n.15\n16\n31.60\n.15\n\n\n7\n32.46\n.71\n17\n32.33\n.58\n\n\n8\n31.10\n.65\n18\n33.08\n1.33\n\n\n9\n32.42\n.35\n19\n33.01\n1.26\n\n\n10\n30.60\n1.15\n20\n30.60\n1.15\n\n\nMean\n\n\n\n\n31.75\n\n\n\nThe attentive reader of this example may have been troubled by this question: How do you know what kind of a distribution of values is contained in the universe before the sample is taken? The answer is that you guess, just as in Example 24-1 you guessed at the mean of the universe. If you guess wrong, you will get either more accuracy or less accuracy than you expected from a given sample size, but the results will not be fatal; if you obtain more accuracy than you wanted, you have wasted some money, and, if you obtain less accuracy, your sample dispersion will tell you so, and you can then augment the sample to boost the accuracy. 
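As a rough Python counterpart to the card-deck experiment just described (the RESAMPLING STATS version appears further below), the following sketch draws repeated samples with replacement — which stands in for the large card deck — and reports the typical deviation of the sample mean from the universe mean of 31.75 pounds. The variable names are our own.

import numpy as np

rnd = np.random.default_rng()

# Observed weight gains for ration A, used as a stand-in for the universe.
gains = np.array([31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 30])

sampsize = 12        # change to 48 to see how accuracy improves
n_trials = 1000
means = np.zeros(n_trials)

for i in range(n_trials):
    # Sampling with replacement plays the role of the large card deck.
    sample = rnd.choice(gains, size=sampsize, replace=True)
    means[i] = np.mean(sample)

# Typical (median) deviation of a sample mean from the universe mean of 31.75.
print(np.median(np.abs(means - 31.75)))

Re-running the sketch with sampsize set to 48 shows, as in the tables, that the typical deviation shrinks roughly by half when the sample size is quadrupled.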
But an error in guessing will not introduce error into your final results.\nTable 24-4\n\n\n\n\n\n\n\n\n\n\n\nTrial\nMean\nAbsolut e Deviation of Trial Mean\nfrom Actual Mean\nTrial\nMean\nAbsolut e Deviation of Trial Mean\nfrom Actual Mean\n\n\n1\n31.80\n.05\n11\n31.93\n.18\n\n\n2\n32.27\n.52\n12\n32.40\n.65\n\n\n3\n31.82\n.07\n13\n31.32\n.43\n\n\n4\n31.39\n.36\n14\n32.07\n.68\n\n\n5\n31.22\n.53\n15\n32.03\n.28\n\n\n6\n31.88\n.13\n16\n31.95\n.20\n\n\n7\n31.37\n.38\n17\n31.75\n.00\n\n\n8\n31.48\n.27\n18\n31.11\n.64\n\n\n9\n31.20\n.55\n19\n31.96\n.21\n\n\n10\n32.01\n.26\n20\n31.32\n.43\n\n\nMean\n\n\n\n\n31.75\n\n\n\nThe guess should be based on something, however. One source for guessing is your general knowledge of the likely dispersion; for example, if you were estimating male heights in Rhode Island, you would be able to guess what proportion of observations would fall within 2 inches, 4 inches, 6 inches, and 8 inches, perhaps, of the real value. Or, much better yet, a very small pretest will yield quite satisfactory estimates of the dispersion.\nHere is a RESAMPLING STATS program that will let you try different sample sizes, and then take bootstrap samples to determine the range of sampling error. You set the sample size with the DATA command, and the NUMBERS command records the data. Above I noted that we could sample without replacement from a “deck” of thirty “31”’s, thirty “34”’s, etc, as a substitute for creating a universe of a million “31”’s, a million “34”’s, etc. We can achieve the same effect if we replace each card after we sample it; this is equivalent to creating a “deck” of an infinite number of “31”’s, “34”’s, etc. That is what the SAMPLE command does, below. Note that the sample size is determined by the value of the “sampsize” variable, which you set at the beginning. From here on the program takes the MEAN of each sample, keeps SCORE of that result, and produces a HISTOGRAM. The PERCENTILE command will also tell you what values enclose 90% of all sample results, excluding those below the 5th percentile and above the 95th percentile.\nHere is a program for a sample size of 12.\n\n' Program file: \"how_big_sample_01.rss\"\n\nDATA (12) sampsize\nNUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a\nREPEAT 1000\n SAMPLE sampsize a b\n MEAN b c\n SCORE c z\nEND\nHISTOGRAM z\nPERCENTILE z (5 95) k\nPRINT k\n' **Bin Center Freq Pct Cum Pct**\n\n\n\n\n29.0\n\n2\n0.2\n0.2\n\n\n29.5\n\n4\n0.4\n0.6\n\n\n30.0\n\n30\n3.0\n3.6\n\n\n30.5\n\n71\n7.1\n10.7\n\n\n31.0\n\n162\n16.2\n26.9\n\n\n31.5\n\n209\n20.9\n47.8\n\n\n32.0\n\n237\n23.7\n71.5\n\n\n32.5\n\n143\n14.3\n85.8\n\n\n33.0\n\n90\n9.0\n94.8\n\n\n33.5\n\n37\n3.7\n98.5\n\n\n34.0\n\n12\n1.2\n99.7\n\n\n34.5\n\n3\n0.3\n100.0\n\n\nk = 30.417\n33.25\n\n\n\n\n\n\nExample 24-3\nThis is the first example of sample-size estimation for probability (testing) statistics, rather than the summarization statistics dealt with above.\nRecall the problem of the sex of fruit-fly offspring discussed in Example 15-1. The question now is, how large a sample is needed to determine whether the radiation treatment results in a sex ratio other than a 50-50 male-female split?\nThe first step is, as usual, difficult but necessary. As the researcher, you must guess what the sex ratio will be if the treatment does have an effect. 
Let’s say that you use all your general knowledge of genetics and of this treatment and that you guess the sex ratio will be 75 percent males and 25 percent females if the treatment alters the ratio from 50-50.\nIn the random-number table let “01-25” stand for females and “26-00” for males. Take twenty successive pairs of numbers for each trial, and run perhaps fifty trials, as in Table 24-5.\nTable 24-5\n\n\n\n1\n4\n16\n18\n7\n13\n34\n4\n16\n\n\n2\n6\n14\n19\n3\n17\n35\n6\n14\n\n\n3\n6\n14\n20\n7\n13\n36\n3\n17\n\n\n4\n5\n15\n21\n4\n16\n37\n8\n12\n\n\n5\n5\n15\n22\n4\n16\n38\n4\n16\n\n\n6\n3\n17\n23\n5\n15\n39\n3\n17\n\n\n7\n7\n13\n24\n8\n12\n40\n6\n14\n\n\n8\n6\n14\n25\n4\n16\n41\n5\n15\n\n\n9\n3\n17\n26\n1\n19\n42\n2\n18\n\n\n10\n2\n18\n27\n5\n15\n43\n8\n12\n\n\n11\n6\n14\n28\n3\n17\n44\n4\n16\n\n\n12\n1\n19\n29\n8\n12\n45\n6\n14\n\n\n13\n6\n14\n30\n8\n12\n46\n5\n15\n\n\n14\n3\n17\n31\n5\n15\n47\n3\n17\n\n\n15\n1\n19\n32\n3\n17\n48\n5\n15\n\n\n16\n5\n15\n33\n4\n16\n49\n3\n17\n\n\n17\n5\n15\n\n\n\n50\n5\n15\n\n\n\nTrial Females Males Trial Females Males Trial Females Males\nIn Example 15-1 with a sample of twenty flies that contained fourteen or more males, we found only an 8% probability that such an extreme sample would result from a 50-50 universe. Therefore, if we observe such an extreme sample, we rule out a 50-50 universe.\nNow Table 24-5 tells us that, if the ratio is really 75 to 25, then a sample of twenty will show fourteen or more males forty-two of fifty times (84 percent of the time). If we take a sample of twenty flies and if the ratio is really 75-25, we will make the correct decision by deciding that the split is not 50-50 84 percent of the time.\nPerhaps you are not satisfied with reaching the right conclusion only 84 percent of the time. In that case, still assuming that the ratio will really be 75-25 if it is not 50-50, you need to take a sample larger than twenty flies. How much larger? That depends on how much surer you want to be. Follow the same procedure for a sample size of perhaps eighty flies. First work out for a sample of eighty, as was done in Example 15-1 for a sample of twenty, the number of males out of eighty that you would need to find for the odds to be, say, 9 to 1 that the universe is not 50-50; your estimate turns out to be forty-eight males. Then run fifty trials of eighty flies each on the basis of 75-25 probability, and see how often you would not get as many as forty-eight males in the sample. Table 24-6 shows the results we got. 
No trial was anywhere near as low as forty-eight, which suggests that a sample of eighty is larger than necessary if the split is really 75-25.\nTable 24-6\n\n\nTrial Females Males Trial Females Males Trial Females Males\n\n\n\n1\n21\n59\n18\n13\n67\n34\n21\n59\n\n\n2\n22\n58\n19\n19\n61\n35\n17\n63\n\n\n3\n13\n67\n20\n17\n63\n36\n22\n58\n\n\n4\n15\n65\n21\n17\n63\n37\n19\n61\n\n\n5\n22\n58\n22\n18\n62\n38\n21\n59\n\n\n6\n21\n59\n23\n26\n54\n39\n21\n59\n\n\n7\n13\n67\n24\n20\n60\n40\n21\n59\n\n\n8\n24\n56\n25\n16\n64\n41\n21\n59\n\n\n9\n16\n64\n26\n22\n58\n42\n18\n62\n\n\n10\n21\n59\n27\n16\n64\n43\n19\n61\n\n\n11\n20\n60\n28\n21\n59\n44\n17\n63\n\n\n12\n19\n61\n29\n22\n58\n45\n13\n67\n\n\n13\n21\n59\n30\n21\n59\n46\n16\n64\n\n\n14\n17\n63\n31\n22\n58\n47\n21\n59\n\n\n15\n22\n68\n32\n19\n61\n48\n16\n64\n\n\n16\n22\n68\n33\n10\n70\n49\n17\n63\n\n\n17\n17\n63\n\n\n\n50\n21\n59\n\n\n\nTable 24-7\nTrial Females Males Trial Females Males Trial Females Males\n\n\n\n1\n35\n45\n18\n32\n48\n34\n35\n45\n\n\n2\n36\n44\n19\n28\n52\n35\n36\n44\n\n\n3\n35\n45\n20\n32\n48\n36\n29\n51\n\n\n4\n35\n45\n21\n33\n47\n37\n36\n44\n\n\n5\n36\n44\n22\n37\n43\n38\n36\n44\n\n\n6\n36\n44\n23\n36\n44\n39\n31\n49\n\n\n7\n36\n44\n24\n31\n49\n40\n29\n51\n\n\n8\n34\n46\n25\n27\n53\n41\n30\n50\n\n\n9\n34\n46\n26\n30\n50\n42\n35\n45\n\n\n10\n29\n51\n27\n31\n49\n43\n32\n48\n\n\n11\n29\n51\n28\n33\n47\n44\n30\n50\n\n\n12\n32\n48\n29\n37\n43\n45\n37\n43\n\n\n13\n29\n51\n30\n30\n50\n46\n31\n49\n\n\n14\n31\n49\n31\n31\n49\n47\n36\n44\n\n\n15\n28\n52\n32\n32\n48\n48\n34\n64\n\n\n16\n33\n47\n33\n34\n46\n49\n29\n51\n\n\n17\n36\n44\n\n\n\n50\n37\n43\n\n\n\n\nIt is obvious that, if the split you guess at is 60 to 40 rather than 75 to 25, you will need a bigger sample to obtain the “correct” result with the same probability. For example, run some eighty-fly random-number trials with 1-40 representing males and 51-100 representing females. Table 24-7 shows that only twenty-four of fifty (48 percent) of the trials reach the necessary cut-off at which one would judge that a sample of eighty really does not come from a universe that is split 50-50; therefore, a sample of eighty is not big enough if the split is 60-40.\nTo review the main principles of this example: First, the closer together the two possible universes from which you think the sample might have come (50-50 and 60-40 are closer together than are 50-50 and 75-25), the larger the sample needed to distinguish between them. Second, the surer you want to be that you reach the right decision based upon the sample evidence, the larger the sample you need.\nThe problem may be handled with the following RESAMPLING STATS program. We construct a benchmark universe that is 60-40 male-female, and take samples of size 80, observing whether the numbers of males and females differs enough in these resamples to rule out a 50-50 universe. 
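A Python sketch of the same check — how often an 80-fly sample from a 60-40 male-female universe reaches the 48-male cutoff worked out above — might look like the following. This is our own translation; the RESAMPLING STATS listing appears below.

import numpy as np

rnd = np.random.default_rng()

n_trials = 1000
n_males = np.zeros(n_trials)

for i in range(n_trials):
    # 80 flies drawn from a universe that is 60% male, 40% female.
    flies = rnd.choice(['male', 'female'], size=80, p=[0.6, 0.4])
    n_males[i] = np.sum(flies == 'male')

# Proportion of trials reaching the 48-male cutoff needed to rule out a 50-50 split.
print(np.sum(n_males >= 48) / n_trials)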
Recall that we need at least 48 males to say that the proportion of males is not 50%.\n\n' Program file: \"how_big_sample_02.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 80 1,10 a\n ' Generate 80 \"flies,\" each represented by a number between 1 and 10 where\n ' \\<= 6 is a male\n COUNT a <=6 b\n ' Count the males\n SCORE b z\n ' Keep score\nEND\nCOUNT z >=48 k\n' How many of the trials produced more than 48 males?\nDIVIDE k 1000 kk\n' Convert to a proportion\nPRINT kk\n' If the result \"kk\" is close to 1, we then know that samples of size 80\n' will almost always produce samples with enough males to avoid misleading\n' us into thinking that they could have come from a universe in which\n' males and females are split 50-50.\nExample 24-3\nReferring back to Example 15-3, on the cable-television poll, how large a sample should you have taken? Pretend that the data have not yet been collected. You need some estimate of how the results will turn out before you can select a sample size. But you have not the foggiest idea how the results will turn out. Therefore, go out and take a very small sample, maybe ten people, to give you some idea of whether people will split quite evenly or unevenly. Seven of your ten initial interviews say they are for CATV. How large a sample do you now need to provide an answer of which you can be fairly sure?\nUsing the techniques of the previous chapter, we estimate roughly that from a sample of fifty people at least thirty-two would have to vote the same way for you to believe that the odds are at least 19 to 1 that the sample does not misrepresent the universe, that is, that the sample does not show a majority different from that of the whole universe if you polled everyone. This estimate is derived from the resampling experiment described in example 15-3. The table shows that if half the people (or more) are against cable television, only one in twenty times will thirty-two (or more) people of a sample of fifty say that they are for cable television; that is, only one of twenty trials with a 50-50 universe will produce as many as thirty-two yeses if a majority of the population is against it.\nTherefore, designate numbers 1-30 as no and 31-00 as yes in the random-number table (that is, 70 percent, as in your estimate based on your presample of ten), work through a trial sample size of fifty, and count the number of yeses . Run through perhaps ten or fifteen trials, and reckon how often the observed number of yeses exceeds thirty-two, the number you must exceed for a result you can rely on. 
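In Python, the same check might be sketched as follows — 50 respondents drawn from a 70-30 universe, counting how often the sample reaches the 32-"yes" cutoff. This is our own translation; the RESAMPLING STATS listing appears further below.

import numpy as np

rnd = np.random.default_rng()

n_trials = 1000
n_yes = np.zeros(n_trials)

for i in range(n_trials):
    # 50 respondents drawn from a universe that is 70% "yes".
    answers = rnd.choice(['yes', 'no'], size=50, p=[0.7, 0.3])
    n_yes[i] = np.sum(answers == 'yes')

# How often does the sample reach the 32-"yes" cutoff?
print(np.sum(n_yes >= 32) / n_trials)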
In Table 24-8 we see that a sample of fifty respondents, from a universe split 70-30, will show that many yeses a preponderant proportion of the time — in fact, in fifteen of fifteen experiments; therefore, the sample size of fifty is large enough if the split is “really” 70-30.\nTable 24-8\n\n\n\nTrial\nNo\nYes\nTrial\nNo\nYes\n\n\n1\n13\n37\n9\n15\n35\n\n\n2\n14\n36\n10\n9\n41\n\n\n3\n18\n32\n11\n15\n35\n\n\n4\n10\n40\n12\n15\n35\n\n\n5\n13\n37\n13\n9\n41\n\n\n6\n15\n35\n14\n16\n34\n\n\n7\n14\n36\n15\n17\n33\n\n\n\nThe following RESAMPLING STATS program takes samples of size 50 from a universe that is 70% “yes.” It then observes how often such samples produce more than 32 “yeses” — the number we must get if we are to be sure that the sample is not from a 50/50 universe.\n\n' Program file: \"how_big_sample_03.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 50 1,10 a\n ' Generate 50 numbers between 1 and 10, let 1-7 = yes.\n COUNT a <=7 b\n ' Count the \"yeses\"\n SCORE b z\n ' Keep score of the result\nEND\nCOUNT z >=32 k\n' Count how often the sample result \\>= our 32 cutoff (recall that samples\n' with 32 or fewer \"yeses\" cannot be ruled out of a 50/50 universe)\nDIVIDE k 1000 kk\n' Convert to a proportion\nIf “kk” is close to 1, we can be confident that this sample will be large enough to avoid a result that we might mistakenly think comes from a 50/50 universe (provided that the real universe is 70% favorable).\nExample 24-4\nHow large a sample is needed to determine whether there is any difference between the two pig rations in Example 15-7? The first step is to guess the results of the tests. You estimate that the average for ration A will be a weight gain of thirty-two pounds. You further guess that twelve pigs on ration A might gain thirty-six, thirty-five, thirty-four, thirty-three, thirty-three, thirty-two, thirty-two, thirty-one, thirty-one, thirty, twentynine, and twenty-eight pounds. This set of guesses has an equal number of pigs above and below the average and more pigs close to the average than farther away. That is, there are more pigs at 33 and 31 pounds than at 36 and 28 pounds. This would seem to be a reasonable distribution of pigs around an average of 32 pounds. In similar fashion, you guess an average weight gain of 28 pounds for ration B and a distribution of 32, 31, 30, 29, 29, 28, 28, 27, 27, 26, 25, and 24 pounds.\nLet us review the basic strategy. We want to find a sample size large enough so that a large proportion of the time it will reveal a difference between groups big enough to be accepted as not attributable to chance. First, then, we need to find out how big the difference must be to be accepted as evidence that the difference is not attributable to chance. We do so from trials with samples that size from the benchmark universe. We state that a difference larger than the benchmark universe will usually produce is not attributable to chance.\nIn this case, let us try samples of 12 pigs on each ration. First we draw two samples from a combined benchmark universe made up of the results that we have guessed will come from ration A and ration B. (The procedure is the same as was followed in Example 15-7.) We find that in 19 out of 20 trials the difference between the two observed groups of 12 pigs was 3 pounds or less. Now we investigate how often samples of 12 pigs, drawn from the separate universes, will show a mean difference as large as 3 pounds. 
We do so by making up a deck of 25 or 50 cards for each of the 12 hypothesized A’s and each of the 12 B’s, with the ration name and the weight gain written on it — that is, a deck of, say, 300 cards for each ration. Then from each deck we draw a set of 12 cards at random, record the group averages, and find the difference.\nHere is the same work done with more runs on the computer:\n\n' Program file: \"how_big_sample_04.rss\"\n\nNUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a\nNUMBERS (32 32 31 30 29 29 29 28 28 26 26 24) b\nREPEAT 1000\n SAMPLE 12 a aa\n MEAN aa aaa\n SAMPLE 12 b bb\n MEAN bb bbb\n SUBTRACT aaa bbb c\n SCORE c z\nEND\nHISTOGRAM z\n' **Difference in mean weights between resamples**\n\nTherefore, two samples of twelve pigs each are clearly large enough, and, in fact, even smaller samples might be sufficient if the universes are really like those we guessed at. If, on the other hand, the differences in the guessed universes had been smaller, then twelve-pig groups would have seemed too small and we would then have had to try out larger sample sizes, say forty-eight pigs in each group and perhaps 200 pigs in each group if forty-eight were not enough. And so on until the sample size is large enough to promise the accuracy we want. (In that case, the decks would also have to be much larger, of course.)\nIf we had guessed different universes for the two rations, then the sample sizes required would have been larger or smaller. If we had guessed the averages for the two samples to be closer together, then we would have needed larger samples. Also, if we had guessed the weight gains within each universe to be less spread out, the samples could have been smaller and vice versa.\nThe following RESAMPLING STATS program first records the data from the two samples, and then draws from decks of infinite size by sampling with replacement from the original samples.\n\n' Program file: \"how_big_sample_05.rss\"\n\nDATA (36 35 34 33 33 32 32 31 31 30 29 28) a\nDATA (32 31 30 29 29 28 28 27 27 26 25 24) b\nREPEAT 1000\n SAMPLE 12 a aa\n ' Draw a sample of 12 from ration a with replacement (this is like drawing\n ' from a large deck made up of many replicates of the elements in a)\n SAMPLE 12 b bb\n ' Same for b\n MEAN aa aaa\n ' Find the averages of the resamples\n MEAN bb bbb\n SUBTRACT aaa bbb c\n ' Find the difference\n SCORE c z\nEND\nCOUNT z >=3 k\n' How often did the difference exceed the cutoff point for our\n' significance test of 3 pounds?\nDIVIDE k 1000 kk\nPRINT kk\n' If kk is close to zero, we know that the sample size is large enough\n' that samples drawn from the universes we have hypothesized will not\n' mislead us into thinking that they could come from the same universe." + }, + { + "objectID": "how_big_sample.html#step-wise-sample-size-determination", + "href": "how_big_sample.html#step-wise-sample-size-determination", + "title": "30  How Large a Sample?", + "section": "30.3 Step-wise sample-size determination", + "text": "30.3 Step-wise sample-size determination\nOften it is wisest to determine the sample size as you go along, rather than fixing it firmly in advance. In sequential sampling, you continue sampling until the split is sufficiently even to make you believe you have a reliable answer.\nRelated techniques work in a series of jumps from sample size to sample size. Step-wise sampling makes it less likely that you will take a sample that is much larger than necessary. 
For example, in the cable-television case, if you took a sample of perhaps fifty you could see whether the split was as wide as 32-18, which you figure you need for 9 to 1 odds that your answer is right. If the split were not that wide, you would sample another fifty, another 100, or however large a sample you needed until you reached a split wide enough to satisfy you that your answer was reliable and that you really knew which way the entire universe would vote.\nStep-wise sampling is not always practical, however, and the cable-television telephone-survey example is unusually favorable for its use. One major pitfall is that the early responses to a mail survey, for example, do not provide a random sample of the whole, and therefore it is a mistake simply to look at the early returns when the split is not wide enough to justify a verdict. If you have listened to early radio or television reports of election returns, you know how misleading the reports from the first precincts can be if we regard them as a fair sample of the whole.2\nStratified sampling is another device that helps reduce the sample size required, by balancing the amounts of information you obtain in the various strata. (Cluster sampling does not reduce the sample size. Rather, it aims to reduce the cost of obtaining a sample that will produce a given level of accuracy.)" + }, + { + "objectID": "how_big_sample.html#summary", + "href": "how_big_sample.html#summary", + "title": "30  How Large a Sample?", + "section": "30.4 Summary", + "text": "30.4 Summary\nSample sizes are too often determined on the basis of convention or of the available budget. A more rational method of choosing the size of the sample is by balancing the diminution of error expected with a larger sample, and its value, against the cost of increasing the sample size. The relationship of various sample sizes to various degrees of accuracy can be estimated with resampling methods, which are illustrated here.\n\n\n\n\nFussler, Herman Howe, and Julian Lincoln Simon. 1961. Patterns in the Use of Books in Large Research Libraries. Chicago: University of Chicago Library.\n\n\nHansen, Morris H, William N Hurwitz, and William G Madow. 1953. “Sample Survey Methods and Theory. Vol. I. Methods and Applications.” https://archive.org/details/SampleSurveyMethodsAndTheoryVol1.\n\n\nKinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948. “Sexual Behavior in the Human Male.” W. B. Saunders Company. https://books.google.co.uk/books?id=pfMKrY3VvigC.\n\n\nLorie, James Hirsch, and Harry V Roberts. 1951. Basic Methods of Marketing Research. McGraw-Hill.\n\n\nSchlaifer, Robert. 1961. Introduction to Statistics for Business Decisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl.\n\n\nSudman, Seymour. 1976. Applied Sampling. New York: Academic Press. https://archive.org/details/appliedsampling0000unse." + }, + { + "objectID": "bayes_simulation.html#simple-decision-problems", + "href": "bayes_simulation.html#simple-decision-problems", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.1 Simple decision problems", + "text": "31.1 Simple decision problems\n\n31.1.1 Assessing the Likelihood That a Used Car Will Be Sound\nConsider a problem in estimating the soundness of a used car one considers purchasing (after (Wonnacott and Wonnacott 1990, 93–94)). Seventy percent of the cars are known to be OK on average, and 30 percent are faulty. 
Of the cars that are really OK, a mechanic correctly identifies 80 percent as “OK” but says that 20 percent are “faulty”; of those that are faulty, the mechanic correctly identifies 90 percent as faulty and says (incorrectly) that 10 percent are OK.\nWe wish to know the probability that if the mechanic says a car is “OK,” it really is faulty. Phrased differently, what is the probability of a car being faulty if the mechanic said it was OK?\nWe can get the desired probabilities directly by simulation without knowing Bayes’ rule, as we shall see. But one must be able to model the physical problem correctly in order to proceed with the simulation; this requirement of a clearly visualized model is a strong point in favor of simulation.\n\nNote that we are only interested in outcomes where the mechanic approved a car.\nFor each car, generate a label of either “faulty” or “working” with probabilities of 0.3 and 0.7, respectively.\nFor each faulty car, we generate one of two labels, “approved” or “not approved” with probabilities 0.1 and 0.9, respectively.\nFor each working car, we generate one of two labels, “approved” or “not approved” with probabilities 0.7 and 0.3, respectively.\nOut of all cars “approved”, count how many are “faulty”. The ratio between these numbers is our answer.\n\nHere is the whole thing:\n\nimport numpy as np\n\nN = 10000 # number of cars\n\n# Counters for number of approved, number of approved and faulty\napproved = 0\napproved_and_faulty = 0\n\nfor i in range(N):\n\n # Decide whether the car is faulty or working, with a probability of\n # 0.3 and 0.7 respectively\n car = np.random.choice(['faulty', 'working'], p=[0.3, 0.7])\n\n if car == 'faulty':\n # What the mechanic says of a faulty car\n mechanic_says = np.random.choice(['approved', 'not approved'], p=[0.1, 0.9])\n else:\n # What the mechanic says of a working car\n mechanic_says = np.random.choice(['approved', 'not approved'], p=[0.7, 0.3])\n\n if mechanic_says == 'approved':\n approved += 1\n\n if car == 'faulty':\n approved_and_faulty += 1\n\nk = approved_and_faulty / approved\n\nprint(f'{k * 100:.2}%')\n\n5.7%\n\n\nThe answer looks to be somewhere between 5 and 6%. The code clearly follows the description step by step, but it is also quite slow. 
If we can improve the code, we may be able to do our simulation with more cars, and get a more accurate answer.\nLet’s use arrays to store the states of all cars in the lot simultaneously:\n\nN = 1000000 # number of cars; we made this number larger by a factor of 100\n\n# Generate an array with as many entries as there are cars, each\n# being either 'working' or 'faulty'\ncars = np.random.choice(['working', 'faulty'], p=[0.7, 0.3], size=N)\n\n# Count how many cars are working\nN_working = np.sum(cars == 'working')\n\n# All the rest are faulty\nN_faulty = N - N_working\n\n# Create a new array in which to store what a mechanic says\n# about the car: 'approved' or 'not approved'\nmechanic_says = np.empty_like(cars, dtype=object)\n\n# We start with the working cars; what does the mechanic say about them?\n# Generate 'approved' or 'not approved' labels with the given probabilities.\nmechanic_says[cars == 'working'] = np.random.choice(\n ['approved', 'not approved'], p=[0.8, 0.2], size=N_working\n)\n\n# Similarly, for each faulty car, generate 'approved'/'not approved'\n# labels with the given probabilities.\nmechanic_says[cars == 'faulty'] = np.random.choice(\n ['approved', 'not approved'], p=[0.1, 0.9], size=N_faulty\n)\n\n# Identify all cars that were approved\n# This produces a binary mask, an array that looks like:\n# [True, False, False, True, ... ]\napproved = (mechanic_says == 'approved')\n\n# Identify cars that are faulty AND were approved\nfaulty_but_approved = (cars == 'faulty') & approved\n\n# Count the number of cars that are faulty but approved, as well as\n# the total number of cars that were approved\nN_faulty_but_approved = np.sum(faulty_but_approved)\nN_approved = np.sum(approved)\n\n# Calculate the ratio, which is the answer we seek\nk = N_faulty_but_approved / N_approved\n\nprint(f'{k * 100:.2}%')\n\n5.1%\n\n\nThe code now runs much faster, and with a larger number of cars we see that the answer is closer to a 5% chance of a car being broken after it has been approved by a mechanic.\n\n\n31.1.2 Calculation without simulation\nSimulation forces us to model our problem clearly and concretely in code. Such code is most often easier to reason about than opaque statistical methods. Running the simulation gives a good sense of what the correct answer should be. Thereafter, we can still look into different — sometimes more elegant or accurate — ways of modeling and solving the problem.\nLet’s examine the following diagram of our car selection:\n\nWe see that there are two paths, highlighted, that results in a car being approved by a mechanic. Either a car can be working, and correctly identified as such by a mechanic; or the car can be broken, while the mechanic mistakenly determines it to be working. Our question only pertains to these two paths, so we do not need to study the rest of the tree.\nIn the long run, in our simulation, about 70% of the cars will end with the label “working”, and about 30% will end up with the label “faulty”. We just took 10000 sample cars above but, in fact, the larger the number of cars we take, the closer we will get to 70% “working” and 30% “faulty”. So, with many samples, we can think of 70% of these samples flowing down the “working” path, and 30% flowing along the “faulty” path.\nNow, we want to know, of all the cars approved by a mechanic, how many are faulty:\n\\[ \\frac{\\mathrm{cars_{\\mathrm{faulty}}}}{\\mathrm{cars}_{\\mathrm{approved}}} \\]\nWe follow the two highlighted paths in the tree:\n\nOf a large sample of cars, 30% are faulty. 
Of these, 10% are approved by a mechanic. That is, 30% * 10% = 3% of all cars.\nOf all cars, 70% work. Of these, 80% are approved by a mechanic. That is, 70% * 80% = 56% of all cars.\n\nThe percentage of faulty cars, out of approved cars, becomes:\n\\[\n3\\% / (56\\% + 3\\%) = 5.08\\%\n\\]\nNotation-wise, it is a bit easier to calculate these sums using proportions rather than percentages:\n\nFaulty cars approved by a mechanic: 0.3 * 0.1 = 0.03\nWorking cars approved by a mechanic: 0.7 * 0.8 = 0.56\n\nFraction of faulty cars out of approved cars: 0.03 / (0.03 + 0.56) = 0.0508\nWe see that every time the tree branches, it filters the cars: some go to one branch, the rest to another. In our code, we used the AND (&) operator to find the intersection between faulty AND approved cars, i.e., to filter out from all faulty cars only the cars that were ALSO approved." + }, + { + "objectID": "bayes_simulation.html#probability-interpretation", + "href": "bayes_simulation.html#probability-interpretation", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.2 Probability interpretation", + "text": "31.2 Probability interpretation\n\n31.2.1 Probability from proportion\nIn these examples, we often calculate proportions. In the given simulation:\n\nHow many cars are approved by a mechanic? 59/100.\nHow many of those 59 were faulty? 3/59.\n\nWe often also count how commonly events occur: “it rained 4 out of the 10 days”.\nAn extension of this idea is to predict the probability of an event occurring, based on what we had seen in the past. We can say “out of 100 days, there was some rain on 20 of them; we therefore estimate that the probability of rain occurring is 20/100”. Of course, this is not a complex or very accurate weather model; for that, we’d need to take other factors—such as season—into consideration. Overall, the more observations we have, the better our probability estimates become. We discussed this idea previously in “The Law of Large Numbers”.\n\n\n31.2.1.1 Ratios of proportions\nAt our mechanic’s yard, we can ask “how many red cars here are faulty”? 
To calculate that, we’d first count the number of red cars, then the number of those red cars that are also broken, then calculate the ratio: red_cars_faulty / red_cars.\nWe could just as well have worked in percentages: percentage_of_red_cars_broken / percentage_of_cars_that_are_red, since that is (red_cars_broken / 100) / (red_cars / 100)—the same ratio calculated before.\nOur point is that the denominator doesn’t matter when calculating ratios, so we could just as well have written:\n(red_cars_broken / all_cars) / (red_cars / all_cars)\nor\n\\[\nP(\\text{cars that are red and that are broken}) / P(\\text{red cars})\n\\]\n\n\n\n\n31.2.2 Probability relationships: conditional probability\nHere’s one way of writing the probability that a car is broken:\n\\[\nP(\\text{car is broken})\n\\]\nWe can shorten “car is broken” to B, and write the same thing as:\n\\[\nP(B)\n\\]\nSimilarly, we could write the probability that a car is red as:\n\\[\nP(R)\n\\]\nWe might also want to express the conditional probability, as in the probability that the car is broken, given that we already know that the car is red:\n\\[\nP(\\text{car is broken GIVEN THAT car is red})\n\\]\nThat is getting getting pretty verbose, so we will shorten this as we did above:\n\\[\nP(B \\text{ GIVEN THAT } R)\n\\]\nTo make things even more compact, we write “GIVEN THAT” as a vertical bar | — so the whole thing becomes:\n\\[\nP(B | R)\n\\]\nWe read this as “the probability that the car is broken given that the car is red”. Such a probability is known as a conditional probability. We discuss these in more details in Ch TKTK.\n\nIn our original problem, we ask what the chance is of a car being broken given that a mechanic approved it. As discussed under “Ratios of proportions”, it can be calculated with:\n\\[\nP(\\text{car broken | mechanic approved})\n= P(\\text{car broken and mechanic approved}) / P(\\text{mechanic approved})\n\\]\nWe have already used \\(B\\) to mean “broken” (above), so let us use \\(A\\) to mean “mechanic approved”. Then we can write the statement above in a more compact way:\n\\[\nP(B | A) = P(B \\text{ and } A) / P(A)\n\\]\nTo put this generally, conditional probabilities for two events \\(X\\) and \\(Y\\) can be written as:\n\\(P(X | Y) = P(X \\text{ and } Y) / P(Y)\\)\nWhere (again) \\(\\text{ and }\\) means that both events occur.\n\n\n31.2.3 Example: conditional probability\nLet’s discuss a very relevant example. You get a COVID test, and the test is negative. 
Now, you would like to know what the chance is of you having COVID.\nWe have the following information:\n\n1.5% of people in your area have COVID\nThe false positive rate of the tests (i.e., that they detect COVID when it is absent) is very low at 0.5%\nThe false negative rate (i.e., that they fail to detect COVID when it is present) is quite high at 40%\n\n\nAgain, we start with our simulation.\n\n# The number of people\nN = 1000000\n\n# For each person, generate a True or False label,\n# indicating that they have / don't have COVID\nperson_has_covid = np.random.choice(\n [True, False], p=[0.015, 0.985],\n size=N\n)\n\n# Calculate the numbers of people with and without COVID\nN_with_covid = np.sum(person_has_covid)\nN_without_covid = N - N_with_covid\n\n# In this array, we will store, for each person, whether they\n# had a positive or a negative test\ntest_result = np.zeros_like(person_has_covid, dtype=bool)\n\n# Draw test results for people with COVID\ntest_result[person_has_covid] = np.random.choice(\n [True, False], p=[0.6, 0.4],\n size=N_with_covid\n)\n\n# Draw test results for people without COVID\ntest_result[~person_has_covid] = np.random.choice(\n [True, False], p=[0.005, 0.995],\n size=N_without_covid\n)\n\n# Get the COVID statuses of all those with negative tests\n# (`test_result` is a boolean mask, like `[True, False, False, True, ...]`,\n# and `~test_result` flips all boolean values to `[False, True, True, False, ...]`.\ncovid_status_negative_test = person_has_covid[~test_result]\n\n# Now, count how many with COVID had a negative test results\nN_with_covid_and_negative_test = np.sum(covid_status_negative_test)\n\n# And how many people, overall, had negative test results\nN_with_negative_test = len(covid_status_negative_test)\n\nk = N_with_covid_and_negative_test / N_with_negative_test\n\nprint(k)\n\n0.0061110186992100815\n\n\nThis gives around 0.006 or 0.6%.\nNow that we have a rough indication of what the answer should be, let’s try and calculate it directly, based on the tree of informatiom shown earlier.\nWe will use these abbreviations:\n\n\\(C^+\\) means Covid positive (you do actually have Covid).\n\\(C^-\\) means Covid negative (you do not actually have Covid).\n\\(T^+\\) means the Covid test was positive.\n\\(T^-\\) means the Covid test was negative.\n\nFor example \\(P(C^+ | T^-)\\) is the probability (\\(P\\)) that you do actually have Covid (\\(C^+\\)) given that (\\(|\\)) the test was negative (\\(T^-\\)).\nWe would like to know the probability of having COVID given that your test was negative (\\(P(C^+ | T^-)\\)). Using the conditional probability relationship from above, we can write:\n\\[\nP(C^+ | T^-) = P(C^+ \\text{ and } T^-) / P(T^-)\n\\]\nWe see from the tree diagram that \\(P(C^+ \\text{ and } T^-) = P(T^- | C^+) * P(C^+) = .4 * .015 = 0.006\\).\n\nWe observe that \\(P(T^-) = P(T^- \\text{ and } C^-) + P(T^- \\text{ and } C^+)\\), i.e. that we can obtain a negative test result through two paths, having COVID or not having COVID. 
We expand these further as conditional probabilities:\n\\(P(T^- \\text{ and } C^-) = P(T^- | C^-) * P(C^-)\\)\nand\n\\(P(T^- \\text{ and } C^+) = P(T^- | C^+) * P(C^+)\\).\nWe can now calculate\n\\[\nP(T^-) = P(T^- | C^-) * P(C^-) + P(T^- | C^+) * P(C^+)\n\\]\n\\[\n= .995 * .985 + .4 * .015 = 0.986\n\\]\nThe answer, then, is:\n\\(P(C^+ | T^-) = 0.006 / 0.986 = 0.0061\\) or 0.61%.\nThis matches very closely our simulation result, so we have some confidence that we have done the calculation correctly.\n\n\n31.2.4 Estimating Driving Risk for Insurance Purposes\nAnother sort of introductory problem, following after (Feller 1968, p 122):\nA mutual insurance company charges its members according to the risk of having an car accident. It is known that there are two classes of people — 80 percent of the population with good driving judgment and with a probability of .06 of having an accident each year, and 20 percent with poor judgment and a probability of .6 of having an accident each year. The company’s policy is to charge $100 for each percent of risk, i. e., a driver with a probability of .6 should pay 60*$100 = $6000.\nIf nothing is known of a driver except that they had an accident last year, what fee should they pay?\nAnother way to phrase this question is: given that a driver had an accident last year, what is the probability of them having an accident overall?\nWe will proceed as follows:\n\nGenerate a population of N people. Label each as good driver or poor driver.\nSimulate the last year for each person: did they have an accident or not?\nSelect only the ones that had an accident last year.\nAmong those, calculate what their average risk is of making an accident. This will indicate the appropriate insurance premium.\n\n\nN = 100000\ncost_per_percent = 100\n\npeople = np.random.choice(\n ['good driver', 'poor driver'], p=[0.8, 0.2],\n size=N\n)\n\ngood_driver = (people == 'good driver')\npoor_driver = ~good_driver\n\n# Did they have an accident last year?\nhad_accident = np.zeros(N, dtype=bool)\nhad_accident[good_driver] = np.random.choice(\n [True, False], p=[0.06, 0.94],\n size=np.sum(good_driver)\n)\nhad_accident[poor_driver] = np.random.choice(\n [True, False], p=[0.6, 0.4],\n size=np.sum(poor_driver)\n)\n\nppl_with_accidents = people[had_accident]\nN_good_driver_accidents = np.sum(ppl_with_accidents == 'good driver')\nN_poor_driver_accidents = np.sum(ppl_with_accidents == 'poor driver')\nN_all_with_accidents = N_good_driver_accidents + N_poor_driver_accidents\n\navg_risk_percent = (N_good_driver_accidents * 0.06 +\n N_poor_driver_accidents * 0.6) / N_all_with_accidents * 100\n\npremium = avg_risk_percent * cost_per_percent\n\nprint(f'{premium:.0f} USD')\n\n4484 USD\n\n\nThe answer should be around 4450 USD.\n\n\n31.2.5 Screening for Disease\n\nThis is a classic Bayesian problem (quoted by Tversky and Kahneman (1982, 154), from Cascells et al. 
(1978, 999)):\n\nIf a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?\n\nTversky and Kahneman note that among the respondents — students and staff at Harvard Medical School — “the most common response, given by almost half of the participants, was 95%” — very much the wrong answer.\nTo obtain an answer by simulation, we may rephrase the question above with (hypothetical) absolute numbers as follows:\nIf a test to detect a disease whose prevalence has been estimated to be about 100,000 in the population of 100 million persons over age 40 (that is, about 1 in a thousand) has been observed to have a false positive rate of 60 in 1200 observations, and never gives a negative result if a person really has the disease, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?\nIf the raw numbers are not available, the problem can be phrased in such terms as “about 1 case in 1000” and “about 5 false positives in 100 cases.”)\nOne may obtain an answer as follows:\n\nConstruct bucket A with 999 white beads and 1 black bead, and bucket B with 95 green beads and 5 red beads. A more complete problem that also discusses false negatives would need a third bucket.\nPick a bead from bucket A. If black, record “T,” replace the bead, and end the trial. If white, continue to step 3.\nIf a white bead is drawn from bucket A, select a bead from bucket B. If red, record “F” and replace the bead, and if green record “N” and replace the bead.\nRepeat steps 2-4 perhaps 10,000 times, and in the results count the proportion of “T”s to (“T”s plus “F”s) ignoring the “N”s).\nOf course 10,000 draws would be tedious, but even after a few hundred draws a person would be likely to draw the correct conclusion that the proportion of “T”s to (“T”s plus “F”s) would be small. And it is easy with a computer to do 10,000 trials very quickly.\nNote that the respondents in the Cascells et al. study were not naive; the medical staff members were supposed to understand statistics. Yet most doctors and other personnel offered wrong answers. If simulation can do better than the standard deductive method, then simulation would seem to be the method of choice. And only one piece of training for simulation is required: Teach the habit of saying “I’ll simulate it” and then actually doing so." + }, + { + "objectID": "bayes_simulation.html#fundamental-problems-in-statistical-practice", + "href": "bayes_simulation.html#fundamental-problems-in-statistical-practice", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.3 Fundamental problems in statistical practice", + "text": "31.3 Fundamental problems in statistical practice\nBox and Tiao (1992) begin their classic exposition of Bayesian statistics with the analysis of a famous problem first published by Fisher (1959, 18).\n\n…there are mice of two colors, black and brown. The black mice are of two genetic kinds, homozygotes (BB) and heterozygotes (Bb), and the brown mice are of one kind (bb). 
It is known from established genetic theory that the probabilities associated with offspring from various matings are as listed in Table 31.1.\n\n\n\nTable 31.1: Probabilities for Genetic Character of Mice Offspring (Box and Tiao 1992, 12–14)\n\n\n\nBB (black)\nBb (black)\nbb (brown)\n\n\n\n\nBB mated with bb\n0\n1\n0\n\n\nBb mated with bb\n0\n½\n½\n\n\nBb mated with Bb\n¼\n½\n¼\n\n\n\n\nSuppose we have a “test” mouse which has been produced by a mating between two (Bb) mice and is black. What is the genetic kind of this mouse?\nTo answer that, we look at the information in the last line of the table: it shows that the probabilities of a test mouse is of kind BB and Bb are precisely known, and are 1/3 and 2/3 respectively ((1/4)/(1/4 + 1/2) vs (1/2)/(1/4 + 1/2)). We call this our “prior” estimate — in other words, our estimate before seeing data.\nSuppose the test mouse is now mated with a brown mouse (of kind bb) and produces seven black offspring. Before, we thought that it was more likely for the parent to be of kind Bb than of kind BB. But if that were true, then we would have expected to have seen some brown offspring (the probability of mating Bb with bb resulting in brown offspring is given as 0.5). Therefore, we sense that it may now be more likely that the parent was of type BB instead. How do we quantify that?\nOne can calculate, as Fisher (1959, 19) did, the probabilities after seeing the data (we call this the posterior probability). This is typically done using using Bayes’ rule.\nBut instead of doing that, let’s take the easy route out and simulate the situation instead.\n\nWe begin, as do Box and Tiao, by restricting our attention to the third line in Table Table 31.1. We draw a mouse with label ‘BB’, ‘Bb’, or ‘bb’, using those probabilities. We were told that the “test mouse” is black, so if we draw ‘bb’, we try again. (Alternatively, we could draw ‘BB’ and ‘Bb’ with probabilities of 1/3 and 2/3 respectively.)\nWe now want to examine the offspring of the test mouse when mated with a brown “bb” mouse. Specifically, we are only interested in cases where all offspring were black. We will store the genetic kind of the parents of such offspring so that we can count them later.\nIf our test mouse is “BB”, we already know that all their offspring will be black (“Bb”). Thus, store “BB” in the parent list.\nIf our test mouse is “Bb”, we have a bit more work to do. Draw seven offspring from the middle row of Table tbl-mice-genetics. 
If all the offspring are black, store “Bb” in the parent list.\nRepeat steps 1-3 perhaps 10000 times.\nNow, out of all parents count the numbers of “BB” vs “Bb”.\n\nWe will do a naïve implementation that closely follows the logic described above, followed by a slightly optimized version.\n\nN = 100000\n\nparents = []\n\nfor i in range(N):\n test_mouse = np.random.choice(['BB', 'Bb', 'bb'], p=[0.25, 0.5, 0.25])\n\n # The test mouse is black; since we drew a brown mouse skip this trial\n if test_mouse == 'bb':\n continue\n\n # If the test mouse is 'BB', all 7 children are guaranteed to\n # be 'Bb' black.\n # Therefore, add 'BB' to the parent list.\n if test_mouse == 'BB':\n parents.append('BB')\n\n # If the parent mouse is 'Bb', we draw 7 children to\n # see whether all of them are black ('Bb').\n # The probabilities come from the middle row of the table.\n if test_mouse == 'Bb':\n children = np.random.choice(['Bb', 'bb'], p=[0.5, 0.5], size=7)\n if np.all(children == 'Bb'):\n parents.append('Bb')\n\n# Now, count how many parents were 'BB' vs 'Bb'\nparents = np.array(parents)\n\nparents_BB = (parents == 'BB')\nparents_Bb = (parents == 'Bb')\nN_B = len(parents)\n\np_BB = np.sum(parents_BB) / N_B\np_Bb = np.sum(parents_Bb) / N_B\n\nprint(f'p_BB = {p_BB:.3f}')\n\np_BB = 0.986\n\nprint(f'p_Bb = {p_Bb:.3f}')\n\np_Bb = 0.014\n\nprint(f'Ratio: {p_BB/p_Bb:.1f}')\n\nRatio: 69.4\n\n\nWe see that all the offspring being black considerably changes the situation! We started with the odds being 2:1 in favor of Bb vs BB. The “posterior” or “after the evidence” ratio is closer to 64:1 in favor of BB! (1973, pp. 12-14)\nLet’s tune the code a bit to run faster. Instead of doing the trials one mouse at a time, we will do the whole bunch together.\n\nN = 1000000\n\n# In N trials, pair two Bb mice and generate a child\ntest_mice = np.random.choice(['BB', 'Bb', 'bb'], p=[0.25, 0.5, 0.25], size=N)\n\n# The resulting test mouse is black, so filter out all brown ones\ntest_mice = test_mice[test_mice != 'bb']\nM = len(test_mice)\n\n# Each test mouse will now be mated with a brown mouse, producing 7 offspring.\n# We then store whether all the offspring were black or not.\nall_offspring_black = np.zeros(M, dtype=bool)\n\n# If a test mouse is 'BB', we are assured that all its offspring\n# will be black\nall_offspring_black[test_mice == 'BB'] = True\n\n# If a test mouse is 'Bb', we have to generate its offspring and\n# see whether they are all black or not\ntest_mice_Bb = (test_mice == 'Bb')\nN_test_mice_Bb = np.sum(test_mice_Bb)\n\n# Generate all offspring of all 'Bb' test mice\noffspring = np.random.choice(\n ['Bb', 'bb'], p=[0.5, 0.5], size=(N_test_mice_Bb, 7)\n)\nall_offspring_black[test_mice_Bb] = np.all(offspring == 'Bb', axis=1)\n\n# Find the genetic types of the parents of all-black offspring\nparents = test_mice[all_offspring_black]\n\n# Calculate what fraction of parents were 'BB' vs 'Bb'\nparents_BB = (parents == 'BB')\nparents_Bb = (parents == 'Bb')\nN_B = np.sum(all_offspring_black)\n\np_BB = np.sum(parents_BB) / N_B\np_Bb = np.sum(parents_Bb) / N_B\n\nprint(f'p_BB = {p_BB:.3f}')\n\np_BB = 0.985\n\nprint(f'p_Bb = {p_Bb:.3f}')\n\np_Bb = 0.015\n\nprint(f'Ratio: {p_BB/p_Bb:.1f}')\n\nRatio: 64.1\n\n\nThis yields a similar result, but in much shorter time — which means we can increase the number of trials and get a more accurate result.\n\nCreating the correct simulation procedure is not trivial, because Bayesian reasoning is subtle — a reason it has been the cause of controversy for more than two centuries. 
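For this small problem the simulated ratio can also be checked by hand — an aside we add here, not part of the original text. The prior odds are 2 to 1 in favor of Bb over BB; a BB parent mated with bb produces seven black offspring with certainty, while a Bb parent does so with probability (1/2)^7 = 1/128; multiplying prior probability by likelihood gives posterior odds of 64 to 1 in favor of BB, which is the value the simulation is converging to.

# Hand check of the posterior odds (not part of the original text).
prior_BB = 1 / 3         # prior probability that the black test mouse is BB
prior_Bb = 2 / 3         # prior probability that it is Bb

like_BB = 1.0            # P(7 black offspring | BB mated with bb)
like_Bb = (1 / 2) ** 7   # P(7 black offspring | Bb mated with bb) = 1/128

post_odds = (prior_BB * like_BB) / (prior_Bb * like_Bb)
print(f'{post_odds:.1f}')
# 64.0

This hand check is easy only because the problem is tiny.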
But it certainly is not easier to create a correct procedure using analytic tools (except in the cookbook sense of plug-and-pray). And the difficult mathematics that underlie the analytic method (see e.g. (Box and Tiao 1992, Appendix A1.1) make it almost impossible for the statistician to fully understand the procedure from beginning to end. If one is interested in insight, the simulation procedure might well be preferred.1" + }, + { + "objectID": "bayes_simulation.html#problems-based-on-normal-and-other-distributions", + "href": "bayes_simulation.html#problems-based-on-normal-and-other-distributions", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.4 Problems based on normal and other distributions", + "text": "31.4 Problems based on normal and other distributions\nThis section should be skipped by all except advanced practitioners of statistics.\nMuch of the work in Bayesian analysis for scientific purposes treats the combining of prior distributions having Normal and other standard shapes with sample evidence which may also be represented with such standard functions. The mathematics involved often is formidable, though some of the calculational formulas are fairly simple and even intuitive.\nThese problems may be handled with simulation by replacing the Normal (or other) distribution with the original raw data when data are available, or by a set of discrete sub-universes when distributions are subjective.\nMeasured data from a continuous distribution present a special problem because the probability of any one observed value is very low, often approaching zero, and hence the probability of a given set of observed values usually cannot be estimated sensibly; this is the reason for the conventional practice of working with a continuous distribution itself, of course. But a simulation necessarily works with discrete values. A feasible procedure must bridge this gulf.\nThe logic for a problem of Schlaifer’s (1961, example 17.1) will only be sketched out. The procedure is rather novel, but it has not heretofore been published and therefore must be considered tentative and requiring particular scrutiny.\n\n31.4.1 An Intermediate Problem in Conditional Probability\nSchlaifer employs a quality-control problem for his leading example of Bayesian estimation with Normal sampling. A chemical manufacturer wants to estimate the amount of yield of a crucial ingredient X in a batch of raw material in order to decide whether it should receive special handling. The yield ranges between 2 and 3 pounds (per gallon), and the manufacturer has compiled the distribution of the last 100 batches.\nThe manufacturer currently uses the decision rule that if the mean of nine samples from the batch (which vary only because of measurement error, which is the reason that he takes nine samples rather than just one) indicates that the batch mean is greater than 2.5 gallons, the batch is accepted. The first question Schlaifer asks, as a sampling-theory waystation to the more general question, is the likelihood that a given batch with any given yield — say 2.3 gallons — will produce a set of samples with a mean as great or greater than 2.5 gallons.\nWe are told that the manufacturer has in hand nine samples from a given batch; they are 1.84, 1.75, 1.39, 1.65, 3.53, 1.03,\n2.73, 2.86, and 1.96, with a mean of 2.08. 
Because we are also told that the manufacturer considers the extent of sample variation to be the same at all yield levels, we may — if we are again working with 2.3 as our example of a possible universe — therefore add (2.3 minus 2.08 =) 0.22 to each of these nine observations, so as to constitute a bootstrap-type universe; we do this on the grounds that this is our best guess about the constitution of that distribution with a mean at (say) 2.3.\nWe then repeatedly draw samples of nine observations from this distribution (centered at 2.3) to see how frequently its mean exceeds 2.5. This work is so straightforward that we need not even state the steps in the procedure.\n\n\n31.4.2 Estimating the Posterior Distribution\nNext we estimate the posterior distribution. Figure 31.1 shows the prior distribution of batch yields, based on 100 previous batches.\n\n\n\n\n\nFigure 31.1: Posterior distribution of batch yields\n\n\n\n\nNotation: S m = set of batches (where total S = 100) with a particular mean m (say, m = 2.1). x i = particular observation (say, x 3 = 1.03). s = the set of x i .\nWe now perform for each of the S m (categorized into the tenth-of-gallon divisions between 2.1 and 3.0 gallons), each corresponding to one of the yields ranging from 2.1 to 3.0, the same sort of sampling operation performed for S m=2.3 in the previous problem. But now, instead of using the manufacturer’s decision criterion of 2.5, we construct an interval of arbitrary width around the sample mean of 2.08 — say at .1 intervals from 2.03 to 2.13 — and then work with the weighted proportions of sample means that fall into this interval.\n\nUsing a bootstrap-like approach, we presume that the sub-universe of observations related to each S m equals the mean of that S m — say, 2.1) plus (minus) the mean of the x i (equals 2.05) added to (subtracted from) each of the nine x i , say, 1.03 + .05 = 1.08. For a distribution centered at 2.3, the values would be (1.84 + .22 = 2.06, 1.75 + .22 = 1.97…).\nWorking with the distribution centered at 2.3 as an example: Constitute a universe of the values (1.84+.22=2.06, 1.75 + .22 = 1.97…). Here we may notice that the variability in the sample enters into the analysis at this point, rather than when the sample evidence is combined with the prior distribution; this is in contrast to conventional Bayesian practice where the posterior is the result of the prior and sample means weighted by the reciprocals of the variances (see e.g. (Box and Tiao 1992, 17 and Appendix A1.1)).\nDraw nine observations from this universe (with replacement, of course), compute the mean, and record.\nRepeat step 2 perhaps 1000 times and plot the distribution of outcomes.\nCompute the percentages of the means within (say) .5 on each side of the sample mean, i. e. from 2.03–2.13. The resulting number — call it UP i — is the un-standardized (un-normalized) effect of this sub-distribution in the posterior distribution.\nRepeat steps 1-5 to cover each other possible batch yield from 2.0 to 3.0 (2.3 was just done).\nWeight each of these sub-distributions — actually, its UP i — by its prior probability, and call that WP i -.\nStandardize the WP i s to a total probability of 1.0. The result is the posterior distribution. 
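Since the description above is quite abstract, here is a rough code sketch of steps 1 through 8 — an addition of ours, not the author's procedure. Two loud caveats: the prior weights below are invented placeholders standing in for the 100-batch distribution of Figure 31.1, whose actual numbers are not reproduced here, and the interval width and number of trials are arbitrary choices. The sketch therefore will not reproduce the 2.283 quoted next, which comes from running the procedure with the real prior. The inner resampling loop is the same straightforward operation described earlier for the universe centered at 2.3.

import numpy as np

# The nine measurements from the batch in hand (mean 2.08).
sample = np.array([1.84, 1.75, 1.39, 1.65, 3.53, 1.03, 2.73, 2.86, 1.96])
sample_mean = np.mean(sample)

# Candidate batch yields, in tenth-of-gallon steps.
yields = np.arange(2.0, 3.01, 0.1)

# Hypothetical prior weights -- a stand-in for the distribution of the
# manufacturer's last 100 batches, which we do not have as numbers here.
prior = np.ones(len(yields)) / len(yields)

up = np.zeros(len(yields))  # un-standardized weight for each candidate yield

for i, m in enumerate(yields):
    # Steps 1-2: shift the nine observations so that their mean equals m.
    shifted = sample + (m - sample_mean)
    # Steps 3-4: draw 1000 resamples of nine observations, with replacement,
    # and record the mean of each resample.
    means = np.random.choice(shifted, size=(1000, 9)).mean(axis=1)
    # Step 5: proportion of resample means within 0.05 of the observed mean,
    # i.e. in the interval 2.03 to 2.13.  This is UP for this yield.
    up[i] = np.mean(np.abs(means - sample_mean) <= 0.05)

# Steps 7-8: weight by the prior and standardize to a total probability of 1.
posterior = up * prior
posterior = posterior / np.sum(posterior)

print(np.round(posterior, 3))
print(f'Posterior mean: {np.sum(yields * posterior):.3f}')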
The value found is 2.283, which the reader may wish to compare with a theoretically-obtained result (which Schlaifer does not give).\n\nThis procedure must be biased because the numbers of “hits” will differ between the two sides of the mean for all sub-distributions except that one centered at the same point as the sample, but the extent and properties of this bias are as-yet unknown. The bias would seem to be smaller as the interval is smaller, but a small interval requires a large number of simulations; a satisfactorily narrow interval surely will contain relatively few trials, which is a practical problem of still-unknown dimensions.\nAnother procedure — less theoretically justified and probably more biased — intended to get around the problem of the narrowness of the interval, is as follows:\n\n(5a.) Compute the percentages of the means on each side of the sample mean, and note the smaller of the two (or in another possible process, the difference of the two). The resulting number — call it UP i — is the un-standardized (un-normalized) weight of this sub-distribution in the posterior distribution.\n\nAnother possible criterion — a variation on the procedure in 5a — is the difference between the two tails; for a universe with the same mean as the sample, this difference would be zero." + }, + { + "objectID": "bayes_simulation.html#conclusion", + "href": "bayes_simulation.html#conclusion", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.5 Conclusion", + "text": "31.5 Conclusion\nAll but the simplest problems in conditional probability are confusing to the intuition even if not difficult mathematically. But when one tackles Bayesian and other problems in probability with experimental simulation methods rather than with logic, neither simple nor complex problems need be difficult for experts or beginners.\nThis chapter shows how simulation can be a helpful and illuminating way to approach problems in Bayesian analysis.\nSimulation has two valuable properties for Bayesian analysis:\n\nIt can provide an effective way to handle problems whose analytic solution may be difficult or impossible.\nSimulation can provide insight to problems that otherwise are difficult to understand fully, as is peculiarly the case with Bayesian analysis.\n\nBayesian problems of updating estimates can be handled easily and straightforwardly with simulation, whether the data are discrete or continuous. The process and the results tend to be intuitive and transparent. Simulation works best with the original raw data rather than with abstractions from them via percentages and distributions. This can aid the understanding as well as facilitate computation.\n\n\n\n\nBox, George E. P., and George C. Tiao. 1992. Bayesian Inference in Statistical Analysis. New York: Wiley & Sons, Inc. https://www.google.co.uk/books/edition/Bayesian_Inference_in_Statistical_Analys/T8Askeyk1k4C.\n\n\nCascells, Ward, Arno Schoenberger, and Thomas B. Grayboys. 1978. “Interpretation by Physicians of Clinical Laboratory Results.” New England Journal of Medicine 299: 999–1001. https://www.nejm.org/doi/full/10.1056/NEJM197811022991808.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its Applications: Volume i. 3rd ed. Vol. 1. New York: John Wiley & Sons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nFisher, Ronald Aylmer. 1959. 
“Statistical Methods and Scientific Inference.” https://archive.org/details/statisticalmetho0000fish.\n\n\nPeirce, Charles Sanders. 1923. Chance, Love, and Logic: Philosophical Essays. New York: Harcourt Brace & Company, Inc. https://www.gutenberg.org/files/65274/65274-h/65274-h.htm.\n\n\nSchlaifer, Robert. 1961. Introduction to Statistics for Business Decisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl.\n\n\nTversky, Amos, and Daniel Kahneman. 1982. “Evidential Impact of Base Rates.” In Judgement Under Uncertainty: Heuristics and Biases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky. Cambridge: Cambridge University Press. https://www.google.co.uk/books/edition/Judgment_Under_Uncertainty/_0H8gwj4a1MC.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory Statistics. 5th ed. New York: John Wiley & Sons." + }, + { + "objectID": "exercise_solutions.html#solution-18-2", + "href": "exercise_solutions.html#solution-18-2", + "title": "32  Exercise Solutions", + "section": "32.1 Solution 18-2", + "text": "32.1 Solution 18-2\n\nURN 36#1 36#0 pit\nURN 77#1 52#0 chi\nREPEAT 1000\n SAMPLE 72 pit pit$\n SAMPLE 129 chi chi$\n MEAN pit$ p\n MEAN chi$ c\n SUBTRACT p c d\n SCORE d scrboard\nEND\nHISTOGRAM scrboard\nPERCENTILE scrboard (2.5 97.5) interval\nPRINT interval\n\nResults:\nINTERVAL = -0.25921 0.039083 (estimated 95 percent confidence interval)." + }, + { + "objectID": "exercise_solutions.html#solution-21-1", + "href": "exercise_solutions.html#solution-21-1", + "title": "32  Exercise Solutions", + "section": "32.2 Solution 21-1", + "text": "32.2 Solution 21-1\n\nREPEAT 1000\n GENERATE 200 1,100 a\n COUNT a <= 7 b\n DIVIDE b 200 c\n SCORE c scrboard\nEND\nHISTOGRAM scrboard\nPERCENTILE z (2.5 97.5) interval\nPRINT interval\n\nResult:\nINTERVAL = 0.035 0.105 [estimated 95 percent confidence interval]" + }, + { + "objectID": "exercise_solutions.html#solution-21-2", + "href": "exercise_solutions.html#solution-21-2", + "title": "32  Exercise Solutions", + "section": "32.3 Solution 21-2", + "text": "32.3 Solution 21-2\nWe use the “bootstrap” technique of drawing many bootstrap re-samples with replacement from the original sample, and observing how the re-sample means are distributed.\n\nNUMBERS (30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31) a\n\nREPEAT 1000\n ' Do 1000 trials or simulations\n SAMPLE 20 a b\n ' Draw 20 lifetimes from a, randomly and with replacement\n MEAN b c\n ' Find the average lifetime of the 20\n SCORE c scrboard\n ' Keep score\nEND\n\nHISTOGRAM scrboard\n' Graph the experiment results\n\nPERCENTILE scrboard (2.5 97.5) interval\n' Identify the 2.5th and 97.5th percentiles. 
These percentiles will\n' enclose 95 percent of the resample means.\n\nResult:\nINTERVAL = 27.7 30.05 [estimated 95 percent confidence interval]" + }, + { + "objectID": "exercise_solutions.html#solution-21-3", + "href": "exercise_solutions.html#solution-21-3", + "title": "32  Exercise Solutions", + "section": "32.4 Solution 21-3", + "text": "32.4 Solution 21-3\n\nNUMBERS (.02 .026 .023 .017 .022 .019 .018 .018 .017 .022) a\nREPEAT 1000\n SAMPLE 10 a b\n MEAN b c\n SCORE c scrboard\nEND\nHISTOGRAM scrboard\nPERCENTILE scrboard (2.5 97.5) interval\nPRINT interval\n\nResult:\nINTERVAL = 0.0187 0.0219 [estimated 95 percent confidence interval]" + }, + { + "objectID": "exercise_solutions.html#solution-23-1", + "href": "exercise_solutions.html#solution-23-1", + "title": "32  Exercise Solutions", + "section": "32.5 Solution 23-1", + "text": "32.5 Solution 23-1\n\nCreate two groups of paper cards: 25 with participation rates, and 25 with the spread values. Arrange the cards in pairs in accordance with the table, and compute the correlation coefficient between the shuffled participation and spread variables.\nShuffle one of the sets, say that with participation, and compute correlation between shuffled participation and spread.\nRepeat step 2 many, say 1000, times. Compute the proportion of the trials in which correlation was at least as negative as that for the original data.\n\n\nDATA (67.5 65.6 65.7 59.3 39.8 76.1 73.6 81.6 75.5 85.0 80.3\n54.5 79.1 94.0 80.3 89.6 44.7 82.7 89.7 83.6 84.9 76.3 74.7\n68.8 79.3) partic1\n\nDATA (13 19 18 12 20 5 1 1 2 3 5 6 5 4 8 1 3 18 13 2 2 12 17 26 6)\nspread1\n\nCORR partic1 spread1 corr\n\n' compute correlation - it’s -.37\nREPEAT 1000\n SHUFFLE partic1 partic2\n ' shuffle the participation rates\n CORR partic2 spread1 corrtria\n ' compute re-sampled correlation\n SCORE corrtria z\n ' keep the value in the scoreboard\nEND\nHISTOGRAM z\nCOUNT z <= -.37 n\n' count the trials when result <= -.37\nDIVIDE n 1000 prob\n' compute the proportion of such trials\nPRINT prob\nConclusion: The results of 5 Monte Carlo experiments each of a thousand such simulations are as follows:\nprob = 0.028, 0.045, 0.036, 0.04, 0.025.\nFrom this we may conclude that the voter participation rates probably are negatively related to the vote spread in the election. The actual value of the correlation (-.37398) cannot be explained by chance alone. In our Monte Carlo simulation of the null-hypothesis a correlation that negative is found only 3 percent — 4 percent of the time.\nDistribution of the test statistic’s value in 1000 independent trials corresponding to the null-hypothesis:" + }, + { + "objectID": "exercise_solutions.html#solution-23-2", + "href": "exercise_solutions.html#solution-23-2", + "title": "32  Exercise Solutions", + "section": "32.6 Solution 23-2", + "text": "32.6 Solution 23-2\n\nNUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)\nhomeruns\nNUMBERS (135 153 120 161 138 175 126 200 205 147 165 124\n169 156 36 98 82 131) strikeout\nMULTIPLY homerun strikeout r\nSUM r s\nREPEAT 1000\n SHUFFLE strikeout strikout2\n MULTIPLY strikout2 homeruns c\n SUM c cc\n SUBTRACT s cc d\n SCORE d scrboard\nEND\nHISTOGRAM scrboard\nCOUNT scrboard >=s k\nDIVIDE k 1000 kk\nPRINT kk\n\nResult: kk = 0\nInterpretation: In 1000 simulations, random shuffling never produced a value as high as observed. Therefore, we conclude that random chance could not be responsible for the observed degree of correlation." 
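The solutions in this chapter are written in the older RESAMPLING STATS language. As an aside that is not part of the original solutions, here is roughly how the Solution 23-2 procedure might look in Python with NumPy, using the same home-run and strikeout data. Note one deliberate change: we count shuffled sums of products at least as large as the observed sum directly, rather than scoring the difference as the listing above does.

import numpy as np

homeruns = np.array([14, 20, 0, 38, 9, 38, 22, 31, 33, 11, 40, 5,
                     15, 32, 3, 29, 5, 32])
strikeouts = np.array([135, 153, 120, 161, 138, 175, 126, 200, 205,
                       147, 165, 124, 169, 156, 36, 98, 82, 131])

# The test statistic: the sum of products of the paired values.
observed = np.sum(homeruns * strikeouts)

n_trials = 1000
count_as_big = 0

for i in range(n_trials):
    # Break the pairing by shuffling one of the two variables.
    shuffled = np.random.permutation(strikeouts)
    if np.sum(homeruns * shuffled) >= observed:
        count_as_big += 1

print(count_as_big / n_trials)

A shuffled sum of products at least as large as the observed one should be very rare — comparable to the roughly 1-in-1000 result that Solution 23-3 reports for the correlation version of the same test — which supports the interpretation given above.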
+ }, + { + "objectID": "exercise_solutions.html#solution-23-3", + "href": "exercise_solutions.html#solution-23-3", + "title": "32  Exercise Solutions", + "section": "32.7 Solution 23-3", + "text": "32.7 Solution 23-3\n\nNUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)\nhomeruns\nNUMBERS (135 153 120 161 138 175 126 200 205 147 165 124\n169 156 36 98 82 131) strikeou\nCORR homeruns strikeou r\n REPEAT 1000\n SHUFFLE strikeou strikou2\n CORR strikou2 homeruns r$\n SCORE r$ scrboard\nEND\nHISTOGRAM scrboard\nCOUNT scrboard >=0.62 k\nDIVIDE k 1000 kk\nPRINT kk r\n\nResult: kk = .001\nInterpretation: A correlation coefficient as high as the observed value (.62) occurred only 1 out of 1000 times by chance. Hence, we rule out chance as an explanation for such a high value of the correlation coefficient." + }, + { + "objectID": "exercise_solutions.html#solution-23-4", + "href": "exercise_solutions.html#solution-23-4", + "title": "32  Exercise Solutions", + "section": "32.8 Solution 23-4", + "text": "32.8 Solution 23-4\n\nREAD FILE “noreen2.dat” exrate msuppl\n' read data from file\nCORR exrate msuppl stat\n' compute correlation stat (it’s .419)\nREPEAT 1000\n SHUFFLE msuppl msuppl$\n ' shuffle money supply values\n CORR exrate msuppl$ stat$\n ' compute correlation\n SCORE stat$ scrboard\n ' keep the value in a scoreboard\nEND\nPRINT stat\nHISTOGRAM scrboard\nCOUNT scrboard >=0.419 k\nDIVIDE k 1000 prob\nPRINT prob\nDistribution of the correlation after permutation of the data:\n\nResult: prob = .001\nInterpretation: The observed correlation (.419) between the exchange rate and the money supply is seldom exceeded by random experiments with these data. Thus, the observed result 0.419 cannot be explained by chance alone and we conclude that it is statistically significant." + }, + { + "objectID": "acknowlegements.html#for-the-second-edition", + "href": "acknowlegements.html#for-the-second-edition", + "title": "33  Acknowledgements", + "section": "33.1 For the second edition", + "text": "33.1 For the second edition\nMany people have helped in the long evolution of this work. First was the late Max Beberman, who in 1967 immediately recognized the potential of resampling statistics for high school students as well as for all others. Louis Guttman and Joseph Doob provided important encouragement about the theoretical and practical value of resampling statistics. Allen Holmes cooperated with me in teaching the first class at University High School in Urbana, Illinois, in 1967. Kenneth Travers found and supervised several PhD students — David Atkinson and Carolyn Shevokas outstanding among them — who experimented with resampling statistics in high school and college classrooms and proved its effectiveness; Travers also carried the message to many secondary school teachers in person and in his texts. In 1973 Dan Weidenfield efficiently wrote the first program for the mainframe (then called “Simple Stats”). Derek Kumar wrote the first interactive program for the Apple II. Chad McDaniel developed the IBM version, with touchup by Henry van Kuijk and Yoram Kochavi. Carlos Puig developed the powerful 1990 version of the program. William E. Kirwan, Robert Dorfman, and Rudolf Lamone have provided their good offices for us to harness the resources of the University of Maryland and, in particular, the College of Business and Management. Terry Oswald worked day and night with great dedication on the program and on commercial details to start the marketing of RESAMPLING STATS. 
In mid-1989, Peter Bruce assumed the overall stewardship of RESAMPLING STATS, and has been proceeding with energy, good judgment, and courage. He has contributed to this volume in many ways, always excellently (including the writing and re-writing of programs, as well as explanations of the bootstrap and of the interpretation of p-values). Vladimir Koliadin wrote the code for several of the problems in this edition, and Cheinan Marks programmed the Windows and Macintosh versions of Resampling Stats. Toni York handled the typesetting and desktop publishing through various iterations, Barbara Shaw provided expert proofreading and desktop publishing services for the second printing of the second edition, and Chris Brest produced many of the figures. Thanks to all of you, and to others who should be added to the list." + }, + { + "objectID": "technical_note.html", + "href": "technical_note.html", + "title": "34  Technical Note to the Professional Reader", + "section": "", + "text": "The material presented in this book fits together with the technical literature as follows: Though I (JLS) had proceeded from first principles rather than from the literature, I have from the start cited work by Chung and Fraser (1958) and Meyer Dwass (1957) They suggested taking samples of permutations in a two-sample test as a way of extending the applicability of Fisher’s randomization test (1935; 1960, chap. III, section 21). Resampling with replacement from a single sample to determine sample statistic variability was suggested by Simon (1969). Independent work by Efron (1979) explored the properties of this technique (Efron termed it the “bootstrap”) and lent it theoretical support. The notion of using these techniques routinely and in preference to conventional techniques based on Gaussian assumptions was suggested by Simon (1969) and by Simon, Atkinson, and Shevokas (1976).\n\n\n\n\nChung, James H, and Donald AS Fraser. 1958. “Randomization Tests for a Multivariate Two-Sample Problem.” Journal of the American Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf.\n\n\nDwass, Meyer. 1957. “Modified Randomization Tests for Nonparametric Hypotheses.” The Annals of Mathematical Statistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf.\n\n\nEfron, Bradley. 1979. “Bootstrap Methods; Another Look at the Jackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nSimon, Julian Lincoln. 1969. Basic Research Methods in Social Science. 1st ed. New York: Random House.\n\n\nSimon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976. “Probability and Statistics: Experimental Results of a Radically Different Teaching Method.” The American Mathematical Monthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf." + }, + { + "objectID": "references.html", + "href": "references.html", + "title": "References", + "section": "", + "text": "Ani Adhikari, John DeNero, and David Wagner. 2021. Computational and\nInferential Thinking: The Foundations of Data Science. https://inferentialthinking.com. https://inferentialthinking.com.\n\n\nArbuthnot, John. 1710. 
“An Argument for Divine Providence, Taken\nfrom the Constant Regularity Observ’d in the Births of Both Sexes. By\nDr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of\nthe College of Physitians and the Royal Society.”\nPhilosophical Transactions of the Royal Society of London 27\n(328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011.\n\n\nBarnett, Vic. 1982. Comparative Statistical Inference. 2nd ed.\nWiley Series in Probability and Mathematical Statistics. Chichester:\nJohn Wiley & Sons. https://archive.org/details/comparativestati0000barn.\n\n\nBox, George E. P., and George C. Tiao. 1992. Bayesian Inference in\nStatistical Analysis. New York: Wiley & Sons, Inc.\nhttps://www.google.co.uk/books/edition/Bayesian_Inference_in_Statistical_Analys/T8Askeyk1k4C.\n\n\nBrooks, Charles Ernest Pelham. 1928. “Periodicities in the Nile\nFloods.” Memoirs of the Royal Meteorological Society 2\n(12): 9--26. https://www.rmets.org/sites/default/files/papers/brooksmem2-12.pdf.\n\n\nBulmer, M. G. 1979. Principles of Statistics. New York, NY:\nDover Publications, inc. https://archive.org/details/principlesofstat0000bulm.\n\n\nBurnett, Ed. 1988. The Complete Direct Mail List Handbook:\nEverything You Need to Know about Lists and How to Use Them for Greater\nProfit. Englewood Cliffs, New Jersey: Prentice Hall. https://archive.org/details/completedirectma00burn.\n\n\nCascells, Ward, Arno Schoenberger, and Thomas B. Grayboys. 1978.\n“Interpretation by Physicians of Clinical Laboratory\nResults.” New England Journal of Medicine 299: 999–1001.\nhttps://www.nejm.org/doi/full/10.1056/NEJM197811022991808.\n\n\nCatling, HW, and RE Jones. 1977. “A Reinvestigation of the\nProvenance of the Inscribed Stirrup Jars Found at Thebes.”\nArchaeometry 19 (2): 137–46.\n\n\nChung, James H, and Donald AS Fraser. 1958. “Randomization Tests\nfor a Multivariate Two-Sample Problem.” Journal of the\nAmerican Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf.\n\n\nCipolla, C. M. 1981. Fighting the Plague in Seventeenth-Century\nItaly. Merle Curti Lectures. Madison, Wisconsin: University of\nWisconsin Press. https://books.google.co.uk/books?id=Ct\\_OJYgnKCsC.\n\n\nCobb, George W. 2007. “The Introductory Statistics Course: A\nPtolemaic Curriculum?” Technology Innovations in Statistics\nEducation 1 (1). https://escholarship.org/uc/item/6hb3k0nz.\n\n\nColeman, William. 1987. “Experimental Physiology and Statistical\nInference: The Therapeutic Trial in Nineteenth Century\nGermany.” In The Probabilistic Revolution:\nVolume 2: Ideas in the Sciences, edited by Lorenz Krüger, Gerd\nGigerenzer, and Mary S. Morgan. An MIT Press Classic. MIT Press. https://books.google.co.uk/books?id=SLftmgEACAAJ.\n\n\nCook, Earl. 1976. “Limits to Exploitation of Nonrenewable\nResources.” Science 191 (4228): 677–82. https://www.jstor.org/stable/pdf/1741483.pdf.\n\n\nDavenport, Thomas H, and DJ Patil. 2012. “Data Scientist: The\nSexiest Job of the 21st Century.” Harvard Business\nReview 90 (10): 70–76. https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century.\n\n\nDeshpande, Jayant V, AP Gore, and A Shanubhogue. 1995. Statistical\nAnalysis of Nonnormal Data. Taylor & Francis. https://www.google.co.uk/books/edition/Statistical_Analysis_of_Nonnormal_Data/sS0on2XqwwoC.\n\n\nDixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to\nStatistical Analysis.”\n\n\nDonoho, David. 2017. 
“50 Years of Data Science.”\nJournal of Computational and Graphical Statistics 26 (4):\n745–66. http://courses.csail.mit.edu/18.337/2015/docs/50YearsDataScience.pdf.\n\n\nDunleavy, Kieron, Stefania Pittaluga, John Janik, Nicole Grant, Margaret\nShovlin, Richard Little, Robert Yarchoan, Seth Steinberg, Elaine S.\nJaffe, and Wyndham H. Wilson. 2006. “Novel\nTreatment of Burkitt Lymphoma with Dose-Adjusted EPOCH-Rituximab:\nPreliminary Results Showing Excellent Outcome.”\nBlood 108 (11): 2736–36. https://doi.org/10.1182/blood.V108.11.2736.2736.\n\n\nDwass, Meyer. 1957. “Modified Randomization Tests for\nNonparametric Hypotheses.” The Annals of Mathematical\nStatistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf.\n\n\nEfron, Bradley. 1979. “Bootstrap Methods; Another Look at the\nJackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf.\n\n\nEfron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to\nthe Bootstrap.” In Monographs on Statistics and Applied\nProbability, edited by David R Cox, David V Hinkley, Nancy Reid,\nDonald B Rubin, and Bernard W Silverman. Vol. 57. New York:\nChapman & Hall.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its\nApplications: Volume i. 3rd ed. Vol. 1. New York: John Wiley &\nSons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nFeynman, Richard P., and Ralph Leighton. 1988. What Do You\nCare What Other People Think? Further Adventures of a Curious\nCharacter. New York, NY: W. W. Norton; Company, Inc. https://archive.org/details/whatdoyoucarewha0000feyn_x5w7.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed.\nEdinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1959. “Statistical Methods and Scientific Inference.”\nhttps://archive.org/details/statisticalmetho0000fish.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh:\nOliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nFussler, Herman Howe, and Julian Lincoln Simon. 1961. Patterns in\nthe Use of Books in Large Research Libraries. Chicago: University\nof Chicago Library.\n\n\nGardner, Martin. 1985. Mathematical Magic Show. Penguin Books\nLtd, Harmondsworth.\n\n\n———. 2001. The Colossal Book of Mathematics. W.W. Norton &\nCompany Inc., New York. https://archive.org/details/B-001-001-265.\n\n\nGilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot\nHand in Basketball: On the Misperception of Random Sequences.”\nCognitive Psychology 17 (3): 295–314. https://www.joelvelasco.net/teaching/122/Gilo.Vallone.Tversky.pdf.\n\n\nGnedenko, Boris Vladimirovich, I Aleksandr, and Akovlevich Khinchin.\n1962. An Elementary Introduction to the Theory of Probability.\nNew York, NY, USA: Dover Publications, Inc. https://archive.org/details/gnedenko-khinchin-an-elementary-introduction-to-the-theory-of-probability.\n\n\nGoldberg, Samuel. 1986. Probability: An Introduction. Courier\nCorporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC.\n\n\nGraunt, John. 1759. “Natural and Political Observations Mentioned\nin a Following Index and Made Upon the Bills of Mortality.” In\nCollection of Yearly Bills of Mortality, from 1657 to 1758\nInclusive, edited by Thomas Birch. London: A. Miller. https://archive.org/details/collectionyearl00hebegoog.\n\n\nHald, Anders. 1990. A History of Probability and Statistics and\nTheir Applications Before 1750. 
New York: John Wiley & Sons. https://archive.org/details/historyofprobabi0000hald.\n\n\nHansen, Morris H, William N Hurwitz, and William G Madow. 1953.\n“Sample Survey Methods and Theory. Vol. I. Methods and\nApplications.” https://archive.org/details/SampleSurveyMethodsAndTheoryVol1.\n\n\nHodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic\nConcepts of Probability and Statistics. 2nd ed. San Francisco,\nCalifornia: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9.\n\n\nHollander, Myles, and Douglas A Wolfe. 1999. Nonparametric\nStatistical Methods. 2nd ed. Wiley Series in Probability and\nStatistics: Applied Probability and Statistics. New York: John Wiley\n& Sons, Inc. https://archive.org/details/nonparametricsta0000ed2holl.\n\n\nHyndman, Rob J, and Yanan Fan. 1996. “Sample Quantiles in\nStatistical Packages.” The American Statistician 50 (4):\n361–65. https://www.jstor.org/stable/pdf/2684934.pdf.\n\n\nKahn, Harold A, and Christopher T Sempos. 1989. Statistical Methods\nin Epidemiology. Vol. 12. Monographs in Epidemiology and\nBiostatistics. New York: Oxford University Press. https://www.google.co.uk/books/edition/Statistical_Methods_in_Epidemiology/YERYAgAAQBAJ.\n\n\nKinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948.\n“Sexual Behavior in the Human Male.” W. B. Saunders\nCompany. https://books.google.co.uk/books?id=pfMKrY3VvigC.\n\n\nKornberg, Arthur. 1991. For the Love of Enzymes: The Odyssey of a\nBiochemist. Cambridge, Massachusetts: Harvard University Press. https://archive.org/details/forloveofenzymes00arth.\n\n\nKotz, Samuel, and Norman Lloyd Johnson. 1992. Breakthroughs in\nStatistics. New York: Springer-Verlag.\n\n\nLee, Peter M. 2012. Bayesian Statistics: An Introduction. 4th\ned. Wiley Online Library. https://www.york.ac.uk/depts/maths/histstat/pml1/bayes/book.htm.\n\n\nLorie, James Hirsch, and Harry V Roberts. 1951. Basic Methods of\nMarketing Research. McGraw-Hill.\n\n\nLyon, Herbert L, and Julian Lincoln Simon. 1968. “Price Elasticity\nof the Demand for Cigarettes in the United States.” American\nJournal of Agricultural Economics 50 (4): 888–95.\n\n\nMartineau, Adrian R, David A Jolliffe, Richard L Hooper, Lauren\nGreenberg, John F Aloia, Peter Bergman, Gal Dubnov-Raz, et al. 2017.\n“Vitamin D Supplementation to Prevent Acute\nRespiratory Tract Infections: Systematic Review and Meta-Analysis of\nIndividual Participant Data.” Bmj 356.\n\n\nMcCabe, George P, and Linda Doyle McCabe. 1989. Instructor’s Guide\nwith Solutions for Introduction to the Practice of Statistics. New\nYork: W. H. Freeman.\n\n\nMosteller, Frederick. 1987. Fifty Challenging Problems in\nProbability with Solutions. Courier Corporation.\n\n\nMosteller, Frederick, and Robert E. K. Rourke. 1973. Sturdy\nStatistics: Nonparametrics and Order Statistics. Addison-Wesley\nPublishing Company.\n\n\nMosteller, Frederick, Robert E. K. Rourke, and George Brinton Thomas Jr.\n1961. Probability with Statistical Applications. 2nd ed. https://archive.org/details/probabilitywiths0000most.\n\n\nNoreen, Eric W. 1989. Computer-Intensive Methods for Testing\nHypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore.\n\n\nPeirce, Charles Sanders. 1923. Chance, Love, and Logic:\nPhilosophical Essays. New York: Harcourt Brace & Company, Inc.\nhttps://www.gutenberg.org/files/65274/65274-h/65274-h.htm.\n\n\nPiketty, Thomas. 2018. “Brahmin Left Vs Merchant Right: Rising\nInequality & the Changing Structure of Political Conflict.”\n2018. 
https://www.prsinstitute.org/downloads/related/economics/RisingInequalityandtheChangingStructureofPoliticalConflict1.pdf.\n\n\nPitman, Edwin JG. 1937. “Significance Tests Which May Be Applied\nto Samples from Any Populations.” Supplement to the Journal\nof the Royal Statistical Society 4 (1): 119–30. https://www.jstor.org/stable/pdf/2984124.pdf.\n\n\nRaiffa, Howard. 1968. “Decision Analysis: Introductory Lectures on\nChoices Under Uncertainty.” https://archive.org/details/decisionanalysis0000raif.\n\n\nRuark, Arthur Edward, and Harold Clayton Urey. 1930. Atoms,\nMoleculues and Quanta. New York, NY: McGraw-Hill book\ncompany, inc. https://archive.org/details/atomsmoleculesqu00ruar.\n\n\nRussell, Bertrand. 1945. A History of Western\nPhilosophy. New York: Simon; Schuster.\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New\nYork: Dover Publications, Inc.\n\n\nSavant, Marilyn vos. 1990. “Ask Marilyn.” 1990. https://web.archive.org/web/20160318182523/http://marilynvossavant.com/game-show-problem.\n\n\nSchlaifer, Robert. 1961. Introduction to Statistics for Business\nDecisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl.\n\n\nSelvin, Steve. 1975. “Letters to the Editor.” The\nAmerican Statistician 29 (1): 67. http://www.jstor.org/stable/2683689.\n\n\nSemmelweis, Ignác Fülöp. 1983. The Etiology, Concept, and\nProphylaxis of Childbed Fever. Translated by K. Codell Carter.\nMadison, Wisconsin: University of Wisconsin Press. https://archive.org/details/etiologyconcepta0000unse.\n\n\nShurtleff, Dewey. 1970. “Some Characteristics Related to the\nIncidence of Cardiovascular Disease and Death: Framingham Study, 16-Year\nFollow-up.” Section 26. Edited by William B. Kannel and Tavia\nGordon. The Framingham Study: An Epidemiological Investigation of\nCardiovascular Disease. Washington, D.C.: U.S. Government Printing\nOffice. https://upload.wikimedia.org/wikipedia/commons/6/6d/The_Framingham_study_-_an_epidemiological_investigation_of_cardiovascular_disease_sec.26_1970_%28IA_framinghamstudye00kann_25%29.pdf.\n\n\nSimon, Julian Lincoln. 1967. “Doctors, Smoking, and Reference\nGroups.” Public Opinion Quarterly 31 (4): 646–47.\n\n\n———. 1969. Basic Research Methods in Social Science. 1st ed.\nNew York: Random House.\n\n\n———. 1992. Resampling: The New Statistics. 1st ed.\nArlington, VA: Resampling Stats Inc.\n\n\n———. 1998. “The Philosophy and Practice of Resampling\nStatistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy.\n\n\nSimon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976.\n“Probability and Statistics: Experimental Results of a Radically\nDifferent Teaching Method.” The American Mathematical\nMonthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research\nMethods in Social Science. 3rd ed. New York: Random House.\n\n\nSimon, Julian Lincoln, and Allen Holmes. 1969. “A New Way to Teach\nProbability Statistics.” The Mathematics Teacher 62 (4):\n283–88.\n\n\nSimon, Julian Lincoln, Manouchehr Mokhtari, and Daniel H Simon. 1996.\n“Are Mergers Beneficial or Detrimental? Evidence from Advertising\nAgencies.” International Journal of the Economics of\nBusiness 3 (1): 69–82.\n\n\nSimon, Julian Lincoln, and David M Simon. 1996. “The Effects of\nRegulations on State Liquor Prices.” Empirica 23:\n303–16.\n\n\nStøvring, H. 1999. 
“On Radicke and His Method for Testing Mean\nDifferences.” Journal of the Royal Statistical Society:\nSeries D (The Statistician) 48 (2): 189–201. https://www.jstor.org/stable/pdf/2681185.pdf.\n\n\nSudman, Seymour. 1976. Applied Sampling. New York:\nAcademic Press. https://archive.org/details/appliedsampling0000unse.\n\n\nTukey, John W. 1977. Exploratory Data Analysis. Reading, MA,\nUSA: Addison-Wesley.\n\n\nTversky, Amos, and Daniel Kahneman. 1982. “Evidential Impact of\nBase Rates.” In Judgement Under Uncertainty: Heuristics and\nBiases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky.\nCambridge: Cambridge University Press. https://www.google.co.uk/books/edition/Judgment_Under_Uncertainty/_0H8gwj4a1MC.\n\n\nVazsonyi, Andrew. 1999. “Which Door Has the Cadillac.”\nDecision Line 30 (1): 17–19. https://web.archive.org/web/20140413131827/http://www.decisionsciences.org/DecisionLine/Vol30/30_1/vazs30_1.pdf.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New\nApproach. New York: The Free Press.\n\n\nWhitworth, William Allen. 1897. DCC Exercises in Choice\nand Chance. Cambridge, UK: Deighton Bell; Co. https://archive.org/details/dccexerciseschoi00whit.\n\n\nWinslow, Charles-Edward Amory. 1980. The Conquest of Epidemic\nDisease: A Chapter in the History of Ideas. Madison, Wisconsin:\nUniversity of Wisconsin Press. https://archive.org/details/conquestofepidem0000wins_p3k0.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory\nStatistics. 5th ed. New York: John Wiley & Sons.\n\n\nZhou, Qixing, Christopher E Gibson, and Robert H Foy. 2000.\n“Long-Term Changes of Nitrogen and Phosphorus Loadings to a Large\nLake in North-West Ireland.” Water Research 34 (3):\n922–26. https://doi.org/10.1016/S0043-1354(99)00199-2." + } +] \ No newline at end of file diff --git a/python-book/significance.html b/python-book/significance.html new file mode 100644 index 00000000..b5880c94 --- /dev/null +++ b/python-book/significance.html @@ -0,0 +1,688 @@ + + + + + + + + + +Resampling statistics - 22  The Concept of Statistical Significance in Testing Hypotheses + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

22  The Concept of Statistical Significance in Testing Hypotheses

+
+ + + +
+ + + + +
+ + +
+ +

This chapter interprets the concept of statistical significance and the term “significant” in connection with the logic of significance tests. It also discusses the concept of “level of significance.”

+
+

22.1 The logic of hypothesis tests

+

Let’s address the logic of hypothesis tests by considering a variety of examples in everyday thinking:

+

Consider the nine-year-old who tells the teacher that the dog ate the homework. Why does the teacher not accept the child’s excuse? Clearly it is because the event would be too “unusual.” But why do we think that way?

+

Let’s speculate that you survey a million adults, and only three report that they have ever heard of a real case where a dog ate somebody’s homework. You are a teacher, and a student comes in without homework and says that a dog ate the homework. It could have happened — your survey reports that it really has happened in three lifetimes out of a million. But the event happens only very infrequently.

+

Therefore, you probably conclude that because the event is so unlikely, something else must have happened — and the likeliest alternative is that the student did not do the homework. The logic is that if an event seems very unlikely, it would therefore surprise us greatly if it were to actually happen, and therefore we assume that there must be a better explanation. This is why we look askance at unlikely coincidences when they are to someone’s benefit.

+

The same line of reasoning was the logic of John Arbuthnot’s (1710) test of the ratio of births by sex, the first published hypothesis test, though his extension of the logic to God’s design as an alternative hypothesis goes beyond the standard modern framework. It is also the implicit logic in the research on puerperal fever, cholera, and beri-beri, the data for which were shown in Chapter 17, though no explicit mention was made of probability in those cases.

+

Two students sat next to each other at an ACT college-entrance examination in Kentucky in 1987. Out of 219 questions, 211 of the answers were identical, including many that were wrong. Student A was a high school athlete in Kentucky who had failed two previous SAT exams, and Student B thought he saw Student A copying from him. Should one believe that Student A cheated? (The Washington Post, April 19, 1992, p. D2.)

+

You say to yourself: It would be most unlikely that the two test-takers would answer that many questions identically by chance — and we can compute how unlikely that event would be. Because that event is so unlikely, we therefore conclude that one or both cheated. And indeed, the testing service invalidated the athlete’s exam. On the other hand, if all the questions that were answered identically were correct, the result might not be unreasonable. If we knew in how many cases they made the same mistakes, the inquiry would have been clearer, but the newspaper did not contain those details.
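To make the intuition concrete, here is a rough simulation sketch (not part of the original example). It assumes, purely for illustration, that two test-takers working independently would agree on any single question with probability 0.5, and asks how often 211 or more of 219 answers would then match by chance.

```python
# Simulation sketch with a hypothetical per-question agreement probability.
import numpy as np

rng = np.random.default_rng()

n_questions = 219
match_prob = 0.5     # assumed chance that two independent students agree on one question
n_trials = 10_000

# For each simulated pair of students, count how many of the 219 answers match.
matches = rng.binomial(n_questions, match_prob, size=n_trials)

proportion = np.mean(matches >= 211)
print('Proportion of trials with 211 or more matching answers:', proportion)
```

Under this assumption the proportion is essentially zero, and it stays very small even if the assumed agreement probability is raised well above one half; that is the sense in which so many identical answers call for some explanation other than chance.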

+

The court is hearing a murder case. There is no eye-witness, and the evidence consists of such facts as the height and weight and age of the person charged, and other circumstantial evidence. Only one person in 50 million has such characteristics, and you find such a person. Will you convict the person, or will you believe that the evidence was just a coincidence? Of course the evidence might have occurred by bad luck, but the probability is very, very small (1 in 50 million). Will you therefore conclude that because the chance is so small, it is reasonable to assume that the person charged committed the crime?

+

Sometimes the unusual really happens — the court errs by judging that the wrong person did it, and that person goes to prison or even is executed. The best we can do is to make the criterion strict: “Beyond a reasonable doubt.” (People ask: What probability does that criterion represent? But the court will not provide a numerical answer.)

+

Somebody says to you: I am going to deal out five cards and it will be a royal flush — ten, jack, queen, king, and ace of the same suit. The person deals the cards and lo and behold! the royal flush appears. Do you think the occurrence happened just by chance? No, you are likely to be very dubious that it happened by chance. Therefore, you believe there must be some other explanation — that the person fixed the cards, for example.

+

Note: You don’t attach the same meaning to any other permutation (say 3, 6, 7, 7, and king of various suits), even though that permutation is just as rare — unless the person announced exactly that permutation in advance.

+

Indeed, even if the person says nothing, you will be surprised at a royal flush, because this hand has meaning, whereas another given set of five cards does not have any special meaning.
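For the curious, a short computation (not in the original text) shows just how rare these hands are: any one fully specified five-card hand is equally improbable, and a royal flush can be made in only four ways, one per suit.

```python
# Exact probabilities for five-card hands dealt from a well-shuffled deck.
from math import comb

n_hands = comb(52, 5)        # 2,598,960 possible five-card hands
p_royal = 4 / n_hands        # four suits each give one royal flush
p_one_hand = 1 / n_hands     # any single fully specified hand, special or not

print('Possible hands:', n_hands)
print('P(royal flush):', p_royal)             # about 1.5 in a million
print('P(one particular hand):', p_one_hand)  # about 0.4 in a million
```

The royal flush is no rarer than any other fully specified hand; it stands out only because it carries meaning for us.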

+

You see six Volvos in one home’s driveway, and you conclude that it is a Volvo club meeting, or a Volvo salesperson’s meeting. Why? Because it is unlikely that six people not connected formally by Volvo ownership would be friends of the same person.

+

Two important points complicate the concept of statistical significance:

+
    +
  1. With a large enough sample, every treatment or variable will seem different from every other. Two faces of even a good die (say, “1” and “2”) will produce different results in the very, very long run, as the simulation sketch after this list illustrates.
  2. Statistical significance does not imply economic or social significance. Two faces of a die may be statistically different in a huge sample of throws, but a 1/10,000 difference between them is too small to make an economic difference in betting. Statistical significance is only a filter. If it appears, one should then proceed to decide whether there is substantive significance.
+
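Here is a simulation sketch of those two points, using made-up numbers: a die whose “1” face comes up with probability 1/6 + 1/10,000, a bias far too small to matter to any bettor, still produces a clearly “significant” result once the number of throws is huge.

```python
# Sketch: a trivially small bias becomes statistically detectable in a huge sample.
import numpy as np

rng = np.random.default_rng()

n_throws = 1_000_000_000          # a (hypothetically) enormous number of throws
p_fair = 1 / 6
p_biased = p_fair + 1 / 10_000    # the "1" face is very slightly favored

# One very large sample from the slightly biased die.
observed_ones = rng.binomial(n_throws, p_biased)

# Benchmark universe: a perfectly fair die, resampled 10,000 times.
benchmark_ones = rng.binomial(n_throws, p_fair, size=10_000)

# How often does the fair die produce at least as many "1"s as we observed?
prob = np.mean(benchmark_ones >= observed_ones)
print('Observed proportion of "1"s:', observed_ones / n_throws)
print('Proportion of fair-die samples as extreme:', prob)
```

The printed proportion is essentially zero, so the difference is “statistically significant”; whether a 1/10,000 edge matters is a separate, substantive question.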

Interpreting statistical significance is sometimes complex, especially when the interpretation depends heavily upon your prior expectations — as it often does. For example, how should a basketball coach decide whether or not to bench a player for poor performance after a series of missed shots at the basket?

+

Consider Coach John Thompson who, after Charles Smith missed 10 of 12 shots in the 1989 Georgetown-Notre Dame NCAA game, took Smith out of the game for a time (The Washington Post, March 20, 1989, p. C1). The scientific or decision problem is: Should the coach consider that Smith is not now a 47 percent shooter as he normally is, and therefore the coach should bench him? The statistical question is: How likely is a shooter with a 47 percent average to produce 10 of 12 misses? The key issue in the statistical question concerns the total number of shot attempts we should consider.

+

Would Coach Thompson take Smith out of the game after he missed one shot? Clearly not. Why not? Because one “expects” Smith to miss a shot half the time, and missing one shot therefore does not seem unusual.

+

How about after Smith misses two shots in a row? For the same reason the coach still would not bench him, because this event happens “often” — more specifically, about once in every sequence of four shots.
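A one-line check (not in the original text) of that figure: if each shot misses with probability 0.53 for a 47 percent shooter, and the shots are assumed independent, two misses in a row occur with probability 0.53 × 0.53.

```python
# Probability of two consecutive misses for a 47 percent shooter,
# assuming independent shots.
print(0.53 ** 2)   # about 0.28, roughly one two-shot sequence in four
```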

+

How about after 9 misses out of ten shots? Notice the difference between this case and 9 females among ten calves. In the case of the calves, we expected half females because the experiment is a single isolated trial. The event considered by itself has a small enough probability that it seems unexpected rather than expected. (“Unexpected” seems to be closely related to “happens seldom” or “unusual” in our psychology.) And an event that happens seldom seems to call for explanation, and also seems to promise that it will yield itself to explanation by some unusual concatenation of forces. That is, unusual events lead us to think that they have unusual causes; that is the nub of the matter. (But on the other hand, one can sometimes benefit by paying attention to unusual events, as scientists know when they investigate outliers.)

+

In basketball shooting, we expect 47 percent of Smith’s individual shots to be successful, and we also expect that average for each set of shots. But we also expect some sets of shots to be far from that average because we observe many sets; such variation is inevitable. So when we see a single set of 9 misses in ten shots, we are not very surprised.

+

But how about 29 misses in 30 shots? At some point, one must start to pay attention. (And of course we would pay more attention if beforehand, and never at any other time, the player said, “I can’t see the basket today. My eyes are dim.”)

+

So, how should one proceed? Perhaps proceed the same way as with a coin that keeps coming down heads a very large proportion of the throws, over a long series of tosses: At some point you examine it to see if it has two heads. But if your investigation is negative, in the absence of an indication other than the behavior in question, you continue to believe that there is no explanation and you assume that the event is “chance” and should not be acted upon. In the same way, a coach might ask a player if there is an explanation for the many misses. But if the player answers “no,” the coach should not bench him. (There are difficulties here with truth-telling, of course, but let that go for now.)

+

The key point for the basketball case and other repetitive situations is not to judge that there is an unusual explanation from the behavior of a single sample alone, just as with a short sequence of stock-price changes.

+

We all need to learn that “irregular” (a good word here) sequences are less unusual than they seem to the naked intuition. A streak of 10 out of 12 misses for a 47 percent shooter occurs about 3 percent of the time. That is, about every 33 shots Smith takes, he will begin a sequence of 12 shots that will end with 2 or fewer baskets — perhaps once in every couple of games. This does not seem “very” unusual, perhaps. And if the coach treats each such case as unusual, he will be losing some of the services of a player who is better than his replacement.
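That 3 percent figure is easy to check by simulation; here is a minimal sketch (not taken from the original text) that counts how often a 47 percent shooter makes 2 or fewer baskets in a set of 12 shots.

```python
# How often does a 47 percent shooter miss 10 or more of 12 shots?
import numpy as np

rng = np.random.default_rng()

n_trials = 100_000
baskets = rng.binomial(12, 0.47, size=n_trials)   # baskets made in each set of 12 shots

proportion = np.mean(baskets <= 2)
print('Proportion of 12-shot sets with 10 or more misses:', proportion)
# Typically prints a value close to 0.03, about 3 percent.
```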

+

In brief, how hard one should search for an explanation should depend on the probability of the event. But one should (almost) assume the absence of an explanation unless one actually finds it.

+

Bayesian analysis (Chapter 31) could be brought to bear upon the matter, bringing in your prior probabilities based on the knowledge of research that has shown that there is no such thing as a “hot hand” in basketball (see Chapter 14), together with some sort of cost-benefit error-loss calculation comparing Smith and the next best available player.

+
+
+

22.2 The concept of statistical significance

+

“Significance level” is a common term in probability statistics. It corresponds roughly to the probability that the assumed benchmark universe could give rise to a sample as extreme as the observed sample by chance. The results of Example 16-1 would be phrased as follows: The hypothesis that the radiation treatment affects the sex of the fruit fly offspring is accepted as true at the probability level of .16 (sometimes stated as the 16 percent level of significance). (A more common way of expressing this idea would be to say that the hypothesis is not rejected at the .16 probability level or the 16 percent level of significance. But “not rejected” and “accepted” really do mean much the same thing, despite some arguments to the contrary.) This kind of statistical work is called hypothesis testing.
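As a concrete illustration of the mechanics (with hypothetical counts, not the actual data of Example 16-1), here is a minimal simulation sketch: the benchmark universe is one in which radiation has no effect, so each offspring is equally likely to be male or female, and the significance level is the proportion of benchmark samples at least as lopsided as the observed one.

```python
# Estimating a significance level by simulation, with made-up counts.
import numpy as np

rng = np.random.default_rng()

n_offspring = 24        # hypothetical number of offspring observed
observed_males = 15     # hypothetical number of males among them
n_trials = 10_000

# Draw 10,000 samples of 24 offspring from the no-effect benchmark universe.
males = rng.binomial(n_offspring, 0.5, size=n_trials)

# Proportion of benchmark samples at least as extreme as the observed sample
# (one-sided, for simplicity).
sig_level = np.mean(males >= observed_males)
print('Estimated significance level:', sig_level)
```

With these made-up numbers the estimate comes out near 0.15; the .16 level quoted above comes from the actual data of Example 16-1.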

+

The question of which significance level should be considered “significant” is difficult. How great must a coincidence be before you refuse to believe that it is only a coincidence? It has been conventional in social science to say that if the probability that something happens by chance is less than 5 percent, it is significant. But sometimes the stiffer standard of 1 percent is used. Actually, any fixed cut-off significance level is arbitrary. (And even the whole notion of saying that a hypothesis “is true” or “is not true” is sometimes not useful.) Whether a one-tailed or two-tailed test is used will influence your significance level, and this is why care must be taken in making that choice.
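To see how the one-tailed versus two-tailed choice matters, the hypothetical fruit-fly sketch above can be extended: a two-tailed test counts samples that are lopsided in either direction, so its significance level is roughly double the one-tailed value.

```python
# One-tailed versus two-tailed significance level for the hypothetical
# fruit-fly counts used in the sketch above.
import numpy as np

rng = np.random.default_rng()

males = rng.binomial(24, 0.5, size=10_000)   # benchmark universe: no effect

one_tailed = np.mean(males >= 15)
two_tailed = np.mean((males >= 15) | (males <= 9))   # as lopsided in either direction

print('One-tailed:', one_tailed)   # about 0.15
print('Two-tailed:', two_tailed)   # about 0.31
```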

+ + + +
+ +
+.bi-moon::before { content: "\f497"; } +.bi-mouse-fill::before { content: "\f498"; } +.bi-mouse::before { content: "\f499"; } +.bi-mouse2-fill::before { content: "\f49a"; } +.bi-mouse2::before { content: "\f49b"; } +.bi-mouse3-fill::before { content: "\f49c"; } +.bi-mouse3::before { content: "\f49d"; } +.bi-music-note-beamed::before { content: "\f49e"; } +.bi-music-note-list::before { content: "\f49f"; } +.bi-music-note::before { content: "\f4a0"; } +.bi-music-player-fill::before { content: "\f4a1"; } +.bi-music-player::before { content: "\f4a2"; } +.bi-newspaper::before { content: "\f4a3"; } +.bi-node-minus-fill::before { content: "\f4a4"; } +.bi-node-minus::before { content: "\f4a5"; } +.bi-node-plus-fill::before { content: "\f4a6"; } +.bi-node-plus::before { content: "\f4a7"; } +.bi-nut-fill::before { content: "\f4a8"; } +.bi-nut::before { content: "\f4a9"; } +.bi-octagon-fill::before { content: "\f4aa"; } +.bi-octagon-half::before { content: "\f4ab"; } +.bi-octagon::before { content: "\f4ac"; } +.bi-option::before { content: "\f4ad"; } +.bi-outlet::before { content: "\f4ae"; } +.bi-paint-bucket::before { content: "\f4af"; } +.bi-palette-fill::before { content: "\f4b0"; } +.bi-palette::before { content: "\f4b1"; } +.bi-palette2::before { content: "\f4b2"; } +.bi-paperclip::before { content: "\f4b3"; } +.bi-paragraph::before { content: "\f4b4"; } +.bi-patch-check-fill::before { content: "\f4b5"; } +.bi-patch-check::before { content: "\f4b6"; } +.bi-patch-exclamation-fill::before { content: "\f4b7"; } +.bi-patch-exclamation::before { content: "\f4b8"; } +.bi-patch-minus-fill::before { content: "\f4b9"; } +.bi-patch-minus::before { content: "\f4ba"; } +.bi-patch-plus-fill::before { content: "\f4bb"; } +.bi-patch-plus::before { content: "\f4bc"; } +.bi-patch-question-fill::before { content: "\f4bd"; } +.bi-patch-question::before { content: "\f4be"; } +.bi-pause-btn-fill::before { content: "\f4bf"; } +.bi-pause-btn::before { content: "\f4c0"; } +.bi-pause-circle-fill::before { content: "\f4c1"; } +.bi-pause-circle::before { content: "\f4c2"; } +.bi-pause-fill::before { content: "\f4c3"; } +.bi-pause::before { content: "\f4c4"; } +.bi-peace-fill::before { content: "\f4c5"; } +.bi-peace::before { content: "\f4c6"; } +.bi-pen-fill::before { content: "\f4c7"; } +.bi-pen::before { content: "\f4c8"; } +.bi-pencil-fill::before { content: "\f4c9"; } +.bi-pencil-square::before { content: "\f4ca"; } +.bi-pencil::before { content: "\f4cb"; } +.bi-pentagon-fill::before { content: "\f4cc"; } +.bi-pentagon-half::before { content: "\f4cd"; } +.bi-pentagon::before { content: "\f4ce"; } +.bi-people-fill::before { content: "\f4cf"; } +.bi-people::before { content: "\f4d0"; } +.bi-percent::before { content: "\f4d1"; } +.bi-person-badge-fill::before { content: "\f4d2"; } +.bi-person-badge::before { content: "\f4d3"; } +.bi-person-bounding-box::before { content: "\f4d4"; } +.bi-person-check-fill::before { content: "\f4d5"; } +.bi-person-check::before { content: "\f4d6"; } +.bi-person-circle::before { content: "\f4d7"; } +.bi-person-dash-fill::before { content: "\f4d8"; } +.bi-person-dash::before { content: "\f4d9"; } +.bi-person-fill::before { content: "\f4da"; } +.bi-person-lines-fill::before { content: "\f4db"; } +.bi-person-plus-fill::before { content: "\f4dc"; } +.bi-person-plus::before { content: "\f4dd"; } +.bi-person-square::before { content: "\f4de"; } +.bi-person-x-fill::before { content: "\f4df"; } +.bi-person-x::before { content: "\f4e0"; } +.bi-person::before { content: "\f4e1"; } 
+.bi-phone-fill::before { content: "\f4e2"; } +.bi-phone-landscape-fill::before { content: "\f4e3"; } +.bi-phone-landscape::before { content: "\f4e4"; } +.bi-phone-vibrate-fill::before { content: "\f4e5"; } +.bi-phone-vibrate::before { content: "\f4e6"; } +.bi-phone::before { content: "\f4e7"; } +.bi-pie-chart-fill::before { content: "\f4e8"; } +.bi-pie-chart::before { content: "\f4e9"; } +.bi-pin-angle-fill::before { content: "\f4ea"; } +.bi-pin-angle::before { content: "\f4eb"; } +.bi-pin-fill::before { content: "\f4ec"; } +.bi-pin::before { content: "\f4ed"; } +.bi-pip-fill::before { content: "\f4ee"; } +.bi-pip::before { content: "\f4ef"; } +.bi-play-btn-fill::before { content: "\f4f0"; } +.bi-play-btn::before { content: "\f4f1"; } +.bi-play-circle-fill::before { content: "\f4f2"; } +.bi-play-circle::before { content: "\f4f3"; } +.bi-play-fill::before { content: "\f4f4"; } +.bi-play::before { content: "\f4f5"; } +.bi-plug-fill::before { content: "\f4f6"; } +.bi-plug::before { content: "\f4f7"; } +.bi-plus-circle-dotted::before { content: "\f4f8"; } +.bi-plus-circle-fill::before { content: "\f4f9"; } +.bi-plus-circle::before { content: "\f4fa"; } +.bi-plus-square-dotted::before { content: "\f4fb"; } +.bi-plus-square-fill::before { content: "\f4fc"; } +.bi-plus-square::before { content: "\f4fd"; } +.bi-plus::before { content: "\f4fe"; } +.bi-power::before { content: "\f4ff"; } +.bi-printer-fill::before { content: "\f500"; } +.bi-printer::before { content: "\f501"; } +.bi-puzzle-fill::before { content: "\f502"; } +.bi-puzzle::before { content: "\f503"; } +.bi-question-circle-fill::before { content: "\f504"; } +.bi-question-circle::before { content: "\f505"; } +.bi-question-diamond-fill::before { content: "\f506"; } +.bi-question-diamond::before { content: "\f507"; } +.bi-question-octagon-fill::before { content: "\f508"; } +.bi-question-octagon::before { content: "\f509"; } +.bi-question-square-fill::before { content: "\f50a"; } +.bi-question-square::before { content: "\f50b"; } +.bi-question::before { content: "\f50c"; } +.bi-rainbow::before { content: "\f50d"; } +.bi-receipt-cutoff::before { content: "\f50e"; } +.bi-receipt::before { content: "\f50f"; } +.bi-reception-0::before { content: "\f510"; } +.bi-reception-1::before { content: "\f511"; } +.bi-reception-2::before { content: "\f512"; } +.bi-reception-3::before { content: "\f513"; } +.bi-reception-4::before { content: "\f514"; } +.bi-record-btn-fill::before { content: "\f515"; } +.bi-record-btn::before { content: "\f516"; } +.bi-record-circle-fill::before { content: "\f517"; } +.bi-record-circle::before { content: "\f518"; } +.bi-record-fill::before { content: "\f519"; } +.bi-record::before { content: "\f51a"; } +.bi-record2-fill::before { content: "\f51b"; } +.bi-record2::before { content: "\f51c"; } +.bi-reply-all-fill::before { content: "\f51d"; } +.bi-reply-all::before { content: "\f51e"; } +.bi-reply-fill::before { content: "\f51f"; } +.bi-reply::before { content: "\f520"; } +.bi-rss-fill::before { content: "\f521"; } +.bi-rss::before { content: "\f522"; } +.bi-rulers::before { content: "\f523"; } +.bi-save-fill::before { content: "\f524"; } +.bi-save::before { content: "\f525"; } +.bi-save2-fill::before { content: "\f526"; } +.bi-save2::before { content: "\f527"; } +.bi-scissors::before { content: "\f528"; } +.bi-screwdriver::before { content: "\f529"; } +.bi-search::before { content: "\f52a"; } +.bi-segmented-nav::before { content: "\f52b"; } +.bi-server::before { content: "\f52c"; } +.bi-share-fill::before { content: 
"\f52d"; } +.bi-share::before { content: "\f52e"; } +.bi-shield-check::before { content: "\f52f"; } +.bi-shield-exclamation::before { content: "\f530"; } +.bi-shield-fill-check::before { content: "\f531"; } +.bi-shield-fill-exclamation::before { content: "\f532"; } +.bi-shield-fill-minus::before { content: "\f533"; } +.bi-shield-fill-plus::before { content: "\f534"; } +.bi-shield-fill-x::before { content: "\f535"; } +.bi-shield-fill::before { content: "\f536"; } +.bi-shield-lock-fill::before { content: "\f537"; } +.bi-shield-lock::before { content: "\f538"; } +.bi-shield-minus::before { content: "\f539"; } +.bi-shield-plus::before { content: "\f53a"; } +.bi-shield-shaded::before { content: "\f53b"; } +.bi-shield-slash-fill::before { content: "\f53c"; } +.bi-shield-slash::before { content: "\f53d"; } +.bi-shield-x::before { content: "\f53e"; } +.bi-shield::before { content: "\f53f"; } +.bi-shift-fill::before { content: "\f540"; } +.bi-shift::before { content: "\f541"; } +.bi-shop-window::before { content: "\f542"; } +.bi-shop::before { content: "\f543"; } +.bi-shuffle::before { content: "\f544"; } +.bi-signpost-2-fill::before { content: "\f545"; } +.bi-signpost-2::before { content: "\f546"; } +.bi-signpost-fill::before { content: "\f547"; } +.bi-signpost-split-fill::before { content: "\f548"; } +.bi-signpost-split::before { content: "\f549"; } +.bi-signpost::before { content: "\f54a"; } +.bi-sim-fill::before { content: "\f54b"; } +.bi-sim::before { content: "\f54c"; } +.bi-skip-backward-btn-fill::before { content: "\f54d"; } +.bi-skip-backward-btn::before { content: "\f54e"; } +.bi-skip-backward-circle-fill::before { content: "\f54f"; } +.bi-skip-backward-circle::before { content: "\f550"; } +.bi-skip-backward-fill::before { content: "\f551"; } +.bi-skip-backward::before { content: "\f552"; } +.bi-skip-end-btn-fill::before { content: "\f553"; } +.bi-skip-end-btn::before { content: "\f554"; } +.bi-skip-end-circle-fill::before { content: "\f555"; } +.bi-skip-end-circle::before { content: "\f556"; } +.bi-skip-end-fill::before { content: "\f557"; } +.bi-skip-end::before { content: "\f558"; } +.bi-skip-forward-btn-fill::before { content: "\f559"; } +.bi-skip-forward-btn::before { content: "\f55a"; } +.bi-skip-forward-circle-fill::before { content: "\f55b"; } +.bi-skip-forward-circle::before { content: "\f55c"; } +.bi-skip-forward-fill::before { content: "\f55d"; } +.bi-skip-forward::before { content: "\f55e"; } +.bi-skip-start-btn-fill::before { content: "\f55f"; } +.bi-skip-start-btn::before { content: "\f560"; } +.bi-skip-start-circle-fill::before { content: "\f561"; } +.bi-skip-start-circle::before { content: "\f562"; } +.bi-skip-start-fill::before { content: "\f563"; } +.bi-skip-start::before { content: "\f564"; } +.bi-slack::before { content: "\f565"; } +.bi-slash-circle-fill::before { content: "\f566"; } +.bi-slash-circle::before { content: "\f567"; } +.bi-slash-square-fill::before { content: "\f568"; } +.bi-slash-square::before { content: "\f569"; } +.bi-slash::before { content: "\f56a"; } +.bi-sliders::before { content: "\f56b"; } +.bi-smartwatch::before { content: "\f56c"; } +.bi-snow::before { content: "\f56d"; } +.bi-snow2::before { content: "\f56e"; } +.bi-snow3::before { content: "\f56f"; } +.bi-sort-alpha-down-alt::before { content: "\f570"; } +.bi-sort-alpha-down::before { content: "\f571"; } +.bi-sort-alpha-up-alt::before { content: "\f572"; } +.bi-sort-alpha-up::before { content: "\f573"; } +.bi-sort-down-alt::before { content: "\f574"; } +.bi-sort-down::before { content: 
"\f575"; } +.bi-sort-numeric-down-alt::before { content: "\f576"; } +.bi-sort-numeric-down::before { content: "\f577"; } +.bi-sort-numeric-up-alt::before { content: "\f578"; } +.bi-sort-numeric-up::before { content: "\f579"; } +.bi-sort-up-alt::before { content: "\f57a"; } +.bi-sort-up::before { content: "\f57b"; } +.bi-soundwave::before { content: "\f57c"; } +.bi-speaker-fill::before { content: "\f57d"; } +.bi-speaker::before { content: "\f57e"; } +.bi-speedometer::before { content: "\f57f"; } +.bi-speedometer2::before { content: "\f580"; } +.bi-spellcheck::before { content: "\f581"; } +.bi-square-fill::before { content: "\f582"; } +.bi-square-half::before { content: "\f583"; } +.bi-square::before { content: "\f584"; } +.bi-stack::before { content: "\f585"; } +.bi-star-fill::before { content: "\f586"; } +.bi-star-half::before { content: "\f587"; } +.bi-star::before { content: "\f588"; } +.bi-stars::before { content: "\f589"; } +.bi-stickies-fill::before { content: "\f58a"; } +.bi-stickies::before { content: "\f58b"; } +.bi-sticky-fill::before { content: "\f58c"; } +.bi-sticky::before { content: "\f58d"; } +.bi-stop-btn-fill::before { content: "\f58e"; } +.bi-stop-btn::before { content: "\f58f"; } +.bi-stop-circle-fill::before { content: "\f590"; } +.bi-stop-circle::before { content: "\f591"; } +.bi-stop-fill::before { content: "\f592"; } +.bi-stop::before { content: "\f593"; } +.bi-stoplights-fill::before { content: "\f594"; } +.bi-stoplights::before { content: "\f595"; } +.bi-stopwatch-fill::before { content: "\f596"; } +.bi-stopwatch::before { content: "\f597"; } +.bi-subtract::before { content: "\f598"; } +.bi-suit-club-fill::before { content: "\f599"; } +.bi-suit-club::before { content: "\f59a"; } +.bi-suit-diamond-fill::before { content: "\f59b"; } +.bi-suit-diamond::before { content: "\f59c"; } +.bi-suit-heart-fill::before { content: "\f59d"; } +.bi-suit-heart::before { content: "\f59e"; } +.bi-suit-spade-fill::before { content: "\f59f"; } +.bi-suit-spade::before { content: "\f5a0"; } +.bi-sun-fill::before { content: "\f5a1"; } +.bi-sun::before { content: "\f5a2"; } +.bi-sunglasses::before { content: "\f5a3"; } +.bi-sunrise-fill::before { content: "\f5a4"; } +.bi-sunrise::before { content: "\f5a5"; } +.bi-sunset-fill::before { content: "\f5a6"; } +.bi-sunset::before { content: "\f5a7"; } +.bi-symmetry-horizontal::before { content: "\f5a8"; } +.bi-symmetry-vertical::before { content: "\f5a9"; } +.bi-table::before { content: "\f5aa"; } +.bi-tablet-fill::before { content: "\f5ab"; } +.bi-tablet-landscape-fill::before { content: "\f5ac"; } +.bi-tablet-landscape::before { content: "\f5ad"; } +.bi-tablet::before { content: "\f5ae"; } +.bi-tag-fill::before { content: "\f5af"; } +.bi-tag::before { content: "\f5b0"; } +.bi-tags-fill::before { content: "\f5b1"; } +.bi-tags::before { content: "\f5b2"; } +.bi-telegram::before { content: "\f5b3"; } +.bi-telephone-fill::before { content: "\f5b4"; } +.bi-telephone-forward-fill::before { content: "\f5b5"; } +.bi-telephone-forward::before { content: "\f5b6"; } +.bi-telephone-inbound-fill::before { content: "\f5b7"; } +.bi-telephone-inbound::before { content: "\f5b8"; } +.bi-telephone-minus-fill::before { content: "\f5b9"; } +.bi-telephone-minus::before { content: "\f5ba"; } +.bi-telephone-outbound-fill::before { content: "\f5bb"; } +.bi-telephone-outbound::before { content: "\f5bc"; } +.bi-telephone-plus-fill::before { content: "\f5bd"; } +.bi-telephone-plus::before { content: "\f5be"; } +.bi-telephone-x-fill::before { content: "\f5bf"; } 
+.bi-telephone-x::before { content: "\f5c0"; } +.bi-telephone::before { content: "\f5c1"; } +.bi-terminal-fill::before { content: "\f5c2"; } +.bi-terminal::before { content: "\f5c3"; } +.bi-text-center::before { content: "\f5c4"; } +.bi-text-indent-left::before { content: "\f5c5"; } +.bi-text-indent-right::before { content: "\f5c6"; } +.bi-text-left::before { content: "\f5c7"; } +.bi-text-paragraph::before { content: "\f5c8"; } +.bi-text-right::before { content: "\f5c9"; } +.bi-textarea-resize::before { content: "\f5ca"; } +.bi-textarea-t::before { content: "\f5cb"; } +.bi-textarea::before { content: "\f5cc"; } +.bi-thermometer-half::before { content: "\f5cd"; } +.bi-thermometer-high::before { content: "\f5ce"; } +.bi-thermometer-low::before { content: "\f5cf"; } +.bi-thermometer-snow::before { content: "\f5d0"; } +.bi-thermometer-sun::before { content: "\f5d1"; } +.bi-thermometer::before { content: "\f5d2"; } +.bi-three-dots-vertical::before { content: "\f5d3"; } +.bi-three-dots::before { content: "\f5d4"; } +.bi-toggle-off::before { content: "\f5d5"; } +.bi-toggle-on::before { content: "\f5d6"; } +.bi-toggle2-off::before { content: "\f5d7"; } +.bi-toggle2-on::before { content: "\f5d8"; } +.bi-toggles::before { content: "\f5d9"; } +.bi-toggles2::before { content: "\f5da"; } +.bi-tools::before { content: "\f5db"; } +.bi-tornado::before { content: "\f5dc"; } +.bi-trash-fill::before { content: "\f5dd"; } +.bi-trash::before { content: "\f5de"; } +.bi-trash2-fill::before { content: "\f5df"; } +.bi-trash2::before { content: "\f5e0"; } +.bi-tree-fill::before { content: "\f5e1"; } +.bi-tree::before { content: "\f5e2"; } +.bi-triangle-fill::before { content: "\f5e3"; } +.bi-triangle-half::before { content: "\f5e4"; } +.bi-triangle::before { content: "\f5e5"; } +.bi-trophy-fill::before { content: "\f5e6"; } +.bi-trophy::before { content: "\f5e7"; } +.bi-tropical-storm::before { content: "\f5e8"; } +.bi-truck-flatbed::before { content: "\f5e9"; } +.bi-truck::before { content: "\f5ea"; } +.bi-tsunami::before { content: "\f5eb"; } +.bi-tv-fill::before { content: "\f5ec"; } +.bi-tv::before { content: "\f5ed"; } +.bi-twitch::before { content: "\f5ee"; } +.bi-twitter::before { content: "\f5ef"; } +.bi-type-bold::before { content: "\f5f0"; } +.bi-type-h1::before { content: "\f5f1"; } +.bi-type-h2::before { content: "\f5f2"; } +.bi-type-h3::before { content: "\f5f3"; } +.bi-type-italic::before { content: "\f5f4"; } +.bi-type-strikethrough::before { content: "\f5f5"; } +.bi-type-underline::before { content: "\f5f6"; } +.bi-type::before { content: "\f5f7"; } +.bi-ui-checks-grid::before { content: "\f5f8"; } +.bi-ui-checks::before { content: "\f5f9"; } +.bi-ui-radios-grid::before { content: "\f5fa"; } +.bi-ui-radios::before { content: "\f5fb"; } +.bi-umbrella-fill::before { content: "\f5fc"; } +.bi-umbrella::before { content: "\f5fd"; } +.bi-union::before { content: "\f5fe"; } +.bi-unlock-fill::before { content: "\f5ff"; } +.bi-unlock::before { content: "\f600"; } +.bi-upc-scan::before { content: "\f601"; } +.bi-upc::before { content: "\f602"; } +.bi-upload::before { content: "\f603"; } +.bi-vector-pen::before { content: "\f604"; } +.bi-view-list::before { content: "\f605"; } +.bi-view-stacked::before { content: "\f606"; } +.bi-vinyl-fill::before { content: "\f607"; } +.bi-vinyl::before { content: "\f608"; } +.bi-voicemail::before { content: "\f609"; } +.bi-volume-down-fill::before { content: "\f60a"; } +.bi-volume-down::before { content: "\f60b"; } +.bi-volume-mute-fill::before { content: "\f60c"; } 
+.bi-volume-mute::before { content: "\f60d"; } +.bi-volume-off-fill::before { content: "\f60e"; } +.bi-volume-off::before { content: "\f60f"; } +.bi-volume-up-fill::before { content: "\f610"; } +.bi-volume-up::before { content: "\f611"; } +.bi-vr::before { content: "\f612"; } +.bi-wallet-fill::before { content: "\f613"; } +.bi-wallet::before { content: "\f614"; } +.bi-wallet2::before { content: "\f615"; } +.bi-watch::before { content: "\f616"; } +.bi-water::before { content: "\f617"; } +.bi-whatsapp::before { content: "\f618"; } +.bi-wifi-1::before { content: "\f619"; } +.bi-wifi-2::before { content: "\f61a"; } +.bi-wifi-off::before { content: "\f61b"; } +.bi-wifi::before { content: "\f61c"; } +.bi-wind::before { content: "\f61d"; } +.bi-window-dock::before { content: "\f61e"; } +.bi-window-sidebar::before { content: "\f61f"; } +.bi-window::before { content: "\f620"; } +.bi-wrench::before { content: "\f621"; } +.bi-x-circle-fill::before { content: "\f622"; } +.bi-x-circle::before { content: "\f623"; } +.bi-x-diamond-fill::before { content: "\f624"; } +.bi-x-diamond::before { content: "\f625"; } +.bi-x-octagon-fill::before { content: "\f626"; } +.bi-x-octagon::before { content: "\f627"; } +.bi-x-square-fill::before { content: "\f628"; } +.bi-x-square::before { content: "\f629"; } +.bi-x::before { content: "\f62a"; } +.bi-youtube::before { content: "\f62b"; } +.bi-zoom-in::before { content: "\f62c"; } +.bi-zoom-out::before { content: "\f62d"; } +.bi-bank::before { content: "\f62e"; } +.bi-bank2::before { content: "\f62f"; } +.bi-bell-slash-fill::before { content: "\f630"; } +.bi-bell-slash::before { content: "\f631"; } +.bi-cash-coin::before { content: "\f632"; } +.bi-check-lg::before { content: "\f633"; } +.bi-coin::before { content: "\f634"; } +.bi-currency-bitcoin::before { content: "\f635"; } +.bi-currency-dollar::before { content: "\f636"; } +.bi-currency-euro::before { content: "\f637"; } +.bi-currency-exchange::before { content: "\f638"; } +.bi-currency-pound::before { content: "\f639"; } +.bi-currency-yen::before { content: "\f63a"; } +.bi-dash-lg::before { content: "\f63b"; } +.bi-exclamation-lg::before { content: "\f63c"; } +.bi-file-earmark-pdf-fill::before { content: "\f63d"; } +.bi-file-earmark-pdf::before { content: "\f63e"; } +.bi-file-pdf-fill::before { content: "\f63f"; } +.bi-file-pdf::before { content: "\f640"; } +.bi-gender-ambiguous::before { content: "\f641"; } +.bi-gender-female::before { content: "\f642"; } +.bi-gender-male::before { content: "\f643"; } +.bi-gender-trans::before { content: "\f644"; } +.bi-headset-vr::before { content: "\f645"; } +.bi-info-lg::before { content: "\f646"; } +.bi-mastodon::before { content: "\f647"; } +.bi-messenger::before { content: "\f648"; } +.bi-piggy-bank-fill::before { content: "\f649"; } +.bi-piggy-bank::before { content: "\f64a"; } +.bi-pin-map-fill::before { content: "\f64b"; } +.bi-pin-map::before { content: "\f64c"; } +.bi-plus-lg::before { content: "\f64d"; } +.bi-question-lg::before { content: "\f64e"; } +.bi-recycle::before { content: "\f64f"; } +.bi-reddit::before { content: "\f650"; } +.bi-safe-fill::before { content: "\f651"; } +.bi-safe2-fill::before { content: "\f652"; } +.bi-safe2::before { content: "\f653"; } +.bi-sd-card-fill::before { content: "\f654"; } +.bi-sd-card::before { content: "\f655"; } +.bi-skype::before { content: "\f656"; } +.bi-slash-lg::before { content: "\f657"; } +.bi-translate::before { content: "\f658"; } +.bi-x-lg::before { content: "\f659"; } +.bi-safe::before { content: "\f65a"; } 
+.bi-apple::before { content: "\f65b"; } +.bi-microsoft::before { content: "\f65d"; } +.bi-windows::before { content: "\f65e"; } +.bi-behance::before { content: "\f65c"; } +.bi-dribbble::before { content: "\f65f"; } +.bi-line::before { content: "\f660"; } +.bi-medium::before { content: "\f661"; } +.bi-paypal::before { content: "\f662"; } +.bi-pinterest::before { content: "\f663"; } +.bi-signal::before { content: "\f664"; } +.bi-snapchat::before { content: "\f665"; } +.bi-spotify::before { content: "\f666"; } +.bi-stack-overflow::before { content: "\f667"; } +.bi-strava::before { content: "\f668"; } +.bi-wordpress::before { content: "\f669"; } +.bi-vimeo::before { content: "\f66a"; } +.bi-activity::before { content: "\f66b"; } +.bi-easel2-fill::before { content: "\f66c"; } +.bi-easel2::before { content: "\f66d"; } +.bi-easel3-fill::before { content: "\f66e"; } +.bi-easel3::before { content: "\f66f"; } +.bi-fan::before { content: "\f670"; } +.bi-fingerprint::before { content: "\f671"; } +.bi-graph-down-arrow::before { content: "\f672"; } +.bi-graph-up-arrow::before { content: "\f673"; } +.bi-hypnotize::before { content: "\f674"; } +.bi-magic::before { content: "\f675"; } +.bi-person-rolodex::before { content: "\f676"; } +.bi-person-video::before { content: "\f677"; } +.bi-person-video2::before { content: "\f678"; } +.bi-person-video3::before { content: "\f679"; } +.bi-person-workspace::before { content: "\f67a"; } +.bi-radioactive::before { content: "\f67b"; } +.bi-webcam-fill::before { content: "\f67c"; } +.bi-webcam::before { content: "\f67d"; } +.bi-yin-yang::before { content: "\f67e"; } +.bi-bandaid-fill::before { content: "\f680"; } +.bi-bandaid::before { content: "\f681"; } +.bi-bluetooth::before { content: "\f682"; } +.bi-body-text::before { content: "\f683"; } +.bi-boombox::before { content: "\f684"; } +.bi-boxes::before { content: "\f685"; } +.bi-dpad-fill::before { content: "\f686"; } +.bi-dpad::before { content: "\f687"; } +.bi-ear-fill::before { content: "\f688"; } +.bi-ear::before { content: "\f689"; } +.bi-envelope-check-1::before { content: "\f68a"; } +.bi-envelope-check-fill::before { content: "\f68b"; } +.bi-envelope-check::before { content: "\f68c"; } +.bi-envelope-dash-1::before { content: "\f68d"; } +.bi-envelope-dash-fill::before { content: "\f68e"; } +.bi-envelope-dash::before { content: "\f68f"; } +.bi-envelope-exclamation-1::before { content: "\f690"; } +.bi-envelope-exclamation-fill::before { content: "\f691"; } +.bi-envelope-exclamation::before { content: "\f692"; } +.bi-envelope-plus-fill::before { content: "\f693"; } +.bi-envelope-plus::before { content: "\f694"; } +.bi-envelope-slash-1::before { content: "\f695"; } +.bi-envelope-slash-fill::before { content: "\f696"; } +.bi-envelope-slash::before { content: "\f697"; } +.bi-envelope-x-1::before { content: "\f698"; } +.bi-envelope-x-fill::before { content: "\f699"; } +.bi-envelope-x::before { content: "\f69a"; } +.bi-explicit-fill::before { content: "\f69b"; } +.bi-explicit::before { content: "\f69c"; } +.bi-git::before { content: "\f69d"; } +.bi-infinity::before { content: "\f69e"; } +.bi-list-columns-reverse::before { content: "\f69f"; } +.bi-list-columns::before { content: "\f6a0"; } +.bi-meta::before { content: "\f6a1"; } +.bi-mortorboard-fill::before { content: "\f6a2"; } +.bi-mortorboard::before { content: "\f6a3"; } +.bi-nintendo-switch::before { content: "\f6a4"; } +.bi-pc-display-horizontal::before { content: "\f6a5"; } +.bi-pc-display::before { content: "\f6a6"; } +.bi-pc-horizontal::before { content: 
"\f6a7"; } +.bi-pc::before { content: "\f6a8"; } +.bi-playstation::before { content: "\f6a9"; } +.bi-plus-slash-minus::before { content: "\f6aa"; } +.bi-projector-fill::before { content: "\f6ab"; } +.bi-projector::before { content: "\f6ac"; } +.bi-qr-code-scan::before { content: "\f6ad"; } +.bi-qr-code::before { content: "\f6ae"; } +.bi-quora::before { content: "\f6af"; } +.bi-quote::before { content: "\f6b0"; } +.bi-robot::before { content: "\f6b1"; } +.bi-send-check-fill::before { content: "\f6b2"; } +.bi-send-check::before { content: "\f6b3"; } +.bi-send-dash-fill::before { content: "\f6b4"; } +.bi-send-dash::before { content: "\f6b5"; } +.bi-send-exclamation-1::before { content: "\f6b6"; } +.bi-send-exclamation-fill::before { content: "\f6b7"; } +.bi-send-exclamation::before { content: "\f6b8"; } +.bi-send-fill::before { content: "\f6b9"; } +.bi-send-plus-fill::before { content: "\f6ba"; } +.bi-send-plus::before { content: "\f6bb"; } +.bi-send-slash-fill::before { content: "\f6bc"; } +.bi-send-slash::before { content: "\f6bd"; } +.bi-send-x-fill::before { content: "\f6be"; } +.bi-send-x::before { content: "\f6bf"; } +.bi-send::before { content: "\f6c0"; } +.bi-steam::before { content: "\f6c1"; } +.bi-terminal-dash-1::before { content: "\f6c2"; } +.bi-terminal-dash::before { content: "\f6c3"; } +.bi-terminal-plus::before { content: "\f6c4"; } +.bi-terminal-split::before { content: "\f6c5"; } +.bi-ticket-detailed-fill::before { content: "\f6c6"; } +.bi-ticket-detailed::before { content: "\f6c7"; } +.bi-ticket-fill::before { content: "\f6c8"; } +.bi-ticket-perforated-fill::before { content: "\f6c9"; } +.bi-ticket-perforated::before { content: "\f6ca"; } +.bi-ticket::before { content: "\f6cb"; } +.bi-tiktok::before { content: "\f6cc"; } +.bi-window-dash::before { content: "\f6cd"; } +.bi-window-desktop::before { content: "\f6ce"; } +.bi-window-fullscreen::before { content: "\f6cf"; } +.bi-window-plus::before { content: "\f6d0"; } +.bi-window-split::before { content: "\f6d1"; } +.bi-window-stack::before { content: "\f6d2"; } +.bi-window-x::before { content: "\f6d3"; } +.bi-xbox::before { content: "\f6d4"; } +.bi-ethernet::before { content: "\f6d5"; } +.bi-hdmi-fill::before { content: "\f6d6"; } +.bi-hdmi::before { content: "\f6d7"; } +.bi-usb-c-fill::before { content: "\f6d8"; } +.bi-usb-c::before { content: "\f6d9"; } +.bi-usb-fill::before { content: "\f6da"; } +.bi-usb-plug-fill::before { content: "\f6db"; } +.bi-usb-plug::before { content: "\f6dc"; } +.bi-usb-symbol::before { content: "\f6dd"; } +.bi-usb::before { content: "\f6de"; } +.bi-boombox-fill::before { content: "\f6df"; } +.bi-displayport-1::before { content: "\f6e0"; } +.bi-displayport::before { content: "\f6e1"; } +.bi-gpu-card::before { content: "\f6e2"; } +.bi-memory::before { content: "\f6e3"; } +.bi-modem-fill::before { content: "\f6e4"; } +.bi-modem::before { content: "\f6e5"; } +.bi-motherboard-fill::before { content: "\f6e6"; } +.bi-motherboard::before { content: "\f6e7"; } +.bi-optical-audio-fill::before { content: "\f6e8"; } +.bi-optical-audio::before { content: "\f6e9"; } +.bi-pci-card::before { content: "\f6ea"; } +.bi-router-fill::before { content: "\f6eb"; } +.bi-router::before { content: "\f6ec"; } +.bi-ssd-fill::before { content: "\f6ed"; } +.bi-ssd::before { content: "\f6ee"; } +.bi-thunderbolt-fill::before { content: "\f6ef"; } +.bi-thunderbolt::before { content: "\f6f0"; } +.bi-usb-drive-fill::before { content: "\f6f1"; } +.bi-usb-drive::before { content: "\f6f2"; } +.bi-usb-micro-fill::before { content: 
"\f6f3"; } +.bi-usb-micro::before { content: "\f6f4"; } +.bi-usb-mini-fill::before { content: "\f6f5"; } +.bi-usb-mini::before { content: "\f6f6"; } +.bi-cloud-haze2::before { content: "\f6f7"; } +.bi-device-hdd-fill::before { content: "\f6f8"; } +.bi-device-hdd::before { content: "\f6f9"; } +.bi-device-ssd-fill::before { content: "\f6fa"; } +.bi-device-ssd::before { content: "\f6fb"; } +.bi-displayport-fill::before { content: "\f6fc"; } +.bi-mortarboard-fill::before { content: "\f6fd"; } +.bi-mortarboard::before { content: "\f6fe"; } +.bi-terminal-x::before { content: "\f6ff"; } +.bi-arrow-through-heart-fill::before { content: "\f700"; } +.bi-arrow-through-heart::before { content: "\f701"; } +.bi-badge-sd-fill::before { content: "\f702"; } +.bi-badge-sd::before { content: "\f703"; } +.bi-bag-heart-fill::before { content: "\f704"; } +.bi-bag-heart::before { content: "\f705"; } +.bi-balloon-fill::before { content: "\f706"; } +.bi-balloon-heart-fill::before { content: "\f707"; } +.bi-balloon-heart::before { content: "\f708"; } +.bi-balloon::before { content: "\f709"; } +.bi-box2-fill::before { content: "\f70a"; } +.bi-box2-heart-fill::before { content: "\f70b"; } +.bi-box2-heart::before { content: "\f70c"; } +.bi-box2::before { content: "\f70d"; } +.bi-braces-asterisk::before { content: "\f70e"; } +.bi-calendar-heart-fill::before { content: "\f70f"; } +.bi-calendar-heart::before { content: "\f710"; } +.bi-calendar2-heart-fill::before { content: "\f711"; } +.bi-calendar2-heart::before { content: "\f712"; } +.bi-chat-heart-fill::before { content: "\f713"; } +.bi-chat-heart::before { content: "\f714"; } +.bi-chat-left-heart-fill::before { content: "\f715"; } +.bi-chat-left-heart::before { content: "\f716"; } +.bi-chat-right-heart-fill::before { content: "\f717"; } +.bi-chat-right-heart::before { content: "\f718"; } +.bi-chat-square-heart-fill::before { content: "\f719"; } +.bi-chat-square-heart::before { content: "\f71a"; } +.bi-clipboard-check-fill::before { content: "\f71b"; } +.bi-clipboard-data-fill::before { content: "\f71c"; } +.bi-clipboard-fill::before { content: "\f71d"; } +.bi-clipboard-heart-fill::before { content: "\f71e"; } +.bi-clipboard-heart::before { content: "\f71f"; } +.bi-clipboard-minus-fill::before { content: "\f720"; } +.bi-clipboard-plus-fill::before { content: "\f721"; } +.bi-clipboard-pulse::before { content: "\f722"; } +.bi-clipboard-x-fill::before { content: "\f723"; } +.bi-clipboard2-check-fill::before { content: "\f724"; } +.bi-clipboard2-check::before { content: "\f725"; } +.bi-clipboard2-data-fill::before { content: "\f726"; } +.bi-clipboard2-data::before { content: "\f727"; } +.bi-clipboard2-fill::before { content: "\f728"; } +.bi-clipboard2-heart-fill::before { content: "\f729"; } +.bi-clipboard2-heart::before { content: "\f72a"; } +.bi-clipboard2-minus-fill::before { content: "\f72b"; } +.bi-clipboard2-minus::before { content: "\f72c"; } +.bi-clipboard2-plus-fill::before { content: "\f72d"; } +.bi-clipboard2-plus::before { content: "\f72e"; } +.bi-clipboard2-pulse-fill::before { content: "\f72f"; } +.bi-clipboard2-pulse::before { content: "\f730"; } +.bi-clipboard2-x-fill::before { content: "\f731"; } +.bi-clipboard2-x::before { content: "\f732"; } +.bi-clipboard2::before { content: "\f733"; } +.bi-emoji-kiss-fill::before { content: "\f734"; } +.bi-emoji-kiss::before { content: "\f735"; } +.bi-envelope-heart-fill::before { content: "\f736"; } +.bi-envelope-heart::before { content: "\f737"; } +.bi-envelope-open-heart-fill::before { content: "\f738"; } 
+.bi-envelope-open-heart::before { content: "\f739"; } +.bi-envelope-paper-fill::before { content: "\f73a"; } +.bi-envelope-paper-heart-fill::before { content: "\f73b"; } +.bi-envelope-paper-heart::before { content: "\f73c"; } +.bi-envelope-paper::before { content: "\f73d"; } +.bi-filetype-aac::before { content: "\f73e"; } +.bi-filetype-ai::before { content: "\f73f"; } +.bi-filetype-bmp::before { content: "\f740"; } +.bi-filetype-cs::before { content: "\f741"; } +.bi-filetype-css::before { content: "\f742"; } +.bi-filetype-csv::before { content: "\f743"; } +.bi-filetype-doc::before { content: "\f744"; } +.bi-filetype-docx::before { content: "\f745"; } +.bi-filetype-exe::before { content: "\f746"; } +.bi-filetype-gif::before { content: "\f747"; } +.bi-filetype-heic::before { content: "\f748"; } +.bi-filetype-html::before { content: "\f749"; } +.bi-filetype-java::before { content: "\f74a"; } +.bi-filetype-jpg::before { content: "\f74b"; } +.bi-filetype-js::before { content: "\f74c"; } +.bi-filetype-jsx::before { content: "\f74d"; } +.bi-filetype-key::before { content: "\f74e"; } +.bi-filetype-m4p::before { content: "\f74f"; } +.bi-filetype-md::before { content: "\f750"; } +.bi-filetype-mdx::before { content: "\f751"; } +.bi-filetype-mov::before { content: "\f752"; } +.bi-filetype-mp3::before { content: "\f753"; } +.bi-filetype-mp4::before { content: "\f754"; } +.bi-filetype-otf::before { content: "\f755"; } +.bi-filetype-pdf::before { content: "\f756"; } +.bi-filetype-php::before { content: "\f757"; } +.bi-filetype-png::before { content: "\f758"; } +.bi-filetype-ppt-1::before { content: "\f759"; } +.bi-filetype-ppt::before { content: "\f75a"; } +.bi-filetype-psd::before { content: "\f75b"; } +.bi-filetype-py::before { content: "\f75c"; } +.bi-filetype-raw::before { content: "\f75d"; } +.bi-filetype-rb::before { content: "\f75e"; } +.bi-filetype-sass::before { content: "\f75f"; } +.bi-filetype-scss::before { content: "\f760"; } +.bi-filetype-sh::before { content: "\f761"; } +.bi-filetype-svg::before { content: "\f762"; } +.bi-filetype-tiff::before { content: "\f763"; } +.bi-filetype-tsx::before { content: "\f764"; } +.bi-filetype-ttf::before { content: "\f765"; } +.bi-filetype-txt::before { content: "\f766"; } +.bi-filetype-wav::before { content: "\f767"; } +.bi-filetype-woff::before { content: "\f768"; } +.bi-filetype-xls-1::before { content: "\f769"; } +.bi-filetype-xls::before { content: "\f76a"; } +.bi-filetype-xml::before { content: "\f76b"; } +.bi-filetype-yml::before { content: "\f76c"; } +.bi-heart-arrow::before { content: "\f76d"; } +.bi-heart-pulse-fill::before { content: "\f76e"; } +.bi-heart-pulse::before { content: "\f76f"; } +.bi-heartbreak-fill::before { content: "\f770"; } +.bi-heartbreak::before { content: "\f771"; } +.bi-hearts::before { content: "\f772"; } +.bi-hospital-fill::before { content: "\f773"; } +.bi-hospital::before { content: "\f774"; } +.bi-house-heart-fill::before { content: "\f775"; } +.bi-house-heart::before { content: "\f776"; } +.bi-incognito::before { content: "\f777"; } +.bi-magnet-fill::before { content: "\f778"; } +.bi-magnet::before { content: "\f779"; } +.bi-person-heart::before { content: "\f77a"; } +.bi-person-hearts::before { content: "\f77b"; } +.bi-phone-flip::before { content: "\f77c"; } +.bi-plugin::before { content: "\f77d"; } +.bi-postage-fill::before { content: "\f77e"; } +.bi-postage-heart-fill::before { content: "\f77f"; } +.bi-postage-heart::before { content: "\f780"; } +.bi-postage::before { content: "\f781"; } +.bi-postcard-fill::before 
{ content: "\f782"; } +.bi-postcard-heart-fill::before { content: "\f783"; } +.bi-postcard-heart::before { content: "\f784"; } +.bi-postcard::before { content: "\f785"; } +.bi-search-heart-fill::before { content: "\f786"; } +.bi-search-heart::before { content: "\f787"; } +.bi-sliders2-vertical::before { content: "\f788"; } +.bi-sliders2::before { content: "\f789"; } +.bi-trash3-fill::before { content: "\f78a"; } +.bi-trash3::before { content: "\f78b"; } +.bi-valentine::before { content: "\f78c"; } +.bi-valentine2::before { content: "\f78d"; } +.bi-wrench-adjustable-circle-fill::before { content: "\f78e"; } +.bi-wrench-adjustable-circle::before { content: "\f78f"; } +.bi-wrench-adjustable::before { content: "\f790"; } +.bi-filetype-json::before { content: "\f791"; } +.bi-filetype-pptx::before { content: "\f792"; } +.bi-filetype-xlsx::before { content: "\f793"; } +.bi-1-circle-1::before { content: "\f794"; } +.bi-1-circle-fill-1::before { content: "\f795"; } +.bi-1-circle-fill::before { content: "\f796"; } +.bi-1-circle::before { content: "\f797"; } +.bi-1-square-fill::before { content: "\f798"; } +.bi-1-square::before { content: "\f799"; } +.bi-2-circle-1::before { content: "\f79a"; } +.bi-2-circle-fill-1::before { content: "\f79b"; } +.bi-2-circle-fill::before { content: "\f79c"; } +.bi-2-circle::before { content: "\f79d"; } +.bi-2-square-fill::before { content: "\f79e"; } +.bi-2-square::before { content: "\f79f"; } +.bi-3-circle-1::before { content: "\f7a0"; } +.bi-3-circle-fill-1::before { content: "\f7a1"; } +.bi-3-circle-fill::before { content: "\f7a2"; } +.bi-3-circle::before { content: "\f7a3"; } +.bi-3-square-fill::before { content: "\f7a4"; } +.bi-3-square::before { content: "\f7a5"; } +.bi-4-circle-1::before { content: "\f7a6"; } +.bi-4-circle-fill-1::before { content: "\f7a7"; } +.bi-4-circle-fill::before { content: "\f7a8"; } +.bi-4-circle::before { content: "\f7a9"; } +.bi-4-square-fill::before { content: "\f7aa"; } +.bi-4-square::before { content: "\f7ab"; } +.bi-5-circle-1::before { content: "\f7ac"; } +.bi-5-circle-fill-1::before { content: "\f7ad"; } +.bi-5-circle-fill::before { content: "\f7ae"; } +.bi-5-circle::before { content: "\f7af"; } +.bi-5-square-fill::before { content: "\f7b0"; } +.bi-5-square::before { content: "\f7b1"; } +.bi-6-circle-1::before { content: "\f7b2"; } +.bi-6-circle-fill-1::before { content: "\f7b3"; } +.bi-6-circle-fill::before { content: "\f7b4"; } +.bi-6-circle::before { content: "\f7b5"; } +.bi-6-square-fill::before { content: "\f7b6"; } +.bi-6-square::before { content: "\f7b7"; } +.bi-7-circle-1::before { content: "\f7b8"; } +.bi-7-circle-fill-1::before { content: "\f7b9"; } +.bi-7-circle-fill::before { content: "\f7ba"; } +.bi-7-circle::before { content: "\f7bb"; } +.bi-7-square-fill::before { content: "\f7bc"; } +.bi-7-square::before { content: "\f7bd"; } +.bi-8-circle-1::before { content: "\f7be"; } +.bi-8-circle-fill-1::before { content: "\f7bf"; } +.bi-8-circle-fill::before { content: "\f7c0"; } +.bi-8-circle::before { content: "\f7c1"; } +.bi-8-square-fill::before { content: "\f7c2"; } +.bi-8-square::before { content: "\f7c3"; } +.bi-9-circle-1::before { content: "\f7c4"; } +.bi-9-circle-fill-1::before { content: "\f7c5"; } +.bi-9-circle-fill::before { content: "\f7c6"; } +.bi-9-circle::before { content: "\f7c7"; } +.bi-9-square-fill::before { content: "\f7c8"; } +.bi-9-square::before { content: "\f7c9"; } +.bi-airplane-engines-fill::before { content: "\f7ca"; } +.bi-airplane-engines::before { content: "\f7cb"; } 
+.bi-airplane-fill::before { content: "\f7cc"; } +.bi-airplane::before { content: "\f7cd"; } +.bi-alexa::before { content: "\f7ce"; } +.bi-alipay::before { content: "\f7cf"; } +.bi-android::before { content: "\f7d0"; } +.bi-android2::before { content: "\f7d1"; } +.bi-box-fill::before { content: "\f7d2"; } +.bi-box-seam-fill::before { content: "\f7d3"; } +.bi-browser-chrome::before { content: "\f7d4"; } +.bi-browser-edge::before { content: "\f7d5"; } +.bi-browser-firefox::before { content: "\f7d6"; } +.bi-browser-safari::before { content: "\f7d7"; } +.bi-c-circle-1::before { content: "\f7d8"; } +.bi-c-circle-fill-1::before { content: "\f7d9"; } +.bi-c-circle-fill::before { content: "\f7da"; } +.bi-c-circle::before { content: "\f7db"; } +.bi-c-square-fill::before { content: "\f7dc"; } +.bi-c-square::before { content: "\f7dd"; } +.bi-capsule-pill::before { content: "\f7de"; } +.bi-capsule::before { content: "\f7df"; } +.bi-car-front-fill::before { content: "\f7e0"; } +.bi-car-front::before { content: "\f7e1"; } +.bi-cassette-fill::before { content: "\f7e2"; } +.bi-cassette::before { content: "\f7e3"; } +.bi-cc-circle-1::before { content: "\f7e4"; } +.bi-cc-circle-fill-1::before { content: "\f7e5"; } +.bi-cc-circle-fill::before { content: "\f7e6"; } +.bi-cc-circle::before { content: "\f7e7"; } +.bi-cc-square-fill::before { content: "\f7e8"; } +.bi-cc-square::before { content: "\f7e9"; } +.bi-cup-hot-fill::before { content: "\f7ea"; } +.bi-cup-hot::before { content: "\f7eb"; } +.bi-currency-rupee::before { content: "\f7ec"; } +.bi-dropbox::before { content: "\f7ed"; } +.bi-escape::before { content: "\f7ee"; } +.bi-fast-forward-btn-fill::before { content: "\f7ef"; } +.bi-fast-forward-btn::before { content: "\f7f0"; } +.bi-fast-forward-circle-fill::before { content: "\f7f1"; } +.bi-fast-forward-circle::before { content: "\f7f2"; } +.bi-fast-forward-fill::before { content: "\f7f3"; } +.bi-fast-forward::before { content: "\f7f4"; } +.bi-filetype-sql::before { content: "\f7f5"; } +.bi-fire::before { content: "\f7f6"; } +.bi-google-play::before { content: "\f7f7"; } +.bi-h-circle-1::before { content: "\f7f8"; } +.bi-h-circle-fill-1::before { content: "\f7f9"; } +.bi-h-circle-fill::before { content: "\f7fa"; } +.bi-h-circle::before { content: "\f7fb"; } +.bi-h-square-fill::before { content: "\f7fc"; } +.bi-h-square::before { content: "\f7fd"; } +.bi-indent::before { content: "\f7fe"; } +.bi-lungs-fill::before { content: "\f7ff"; } +.bi-lungs::before { content: "\f800"; } +.bi-microsoft-teams::before { content: "\f801"; } +.bi-p-circle-1::before { content: "\f802"; } +.bi-p-circle-fill-1::before { content: "\f803"; } +.bi-p-circle-fill::before { content: "\f804"; } +.bi-p-circle::before { content: "\f805"; } +.bi-p-square-fill::before { content: "\f806"; } +.bi-p-square::before { content: "\f807"; } +.bi-pass-fill::before { content: "\f808"; } +.bi-pass::before { content: "\f809"; } +.bi-prescription::before { content: "\f80a"; } +.bi-prescription2::before { content: "\f80b"; } +.bi-r-circle-1::before { content: "\f80c"; } +.bi-r-circle-fill-1::before { content: "\f80d"; } +.bi-r-circle-fill::before { content: "\f80e"; } +.bi-r-circle::before { content: "\f80f"; } +.bi-r-square-fill::before { content: "\f810"; } +.bi-r-square::before { content: "\f811"; } +.bi-repeat-1::before { content: "\f812"; } +.bi-repeat::before { content: "\f813"; } +.bi-rewind-btn-fill::before { content: "\f814"; } +.bi-rewind-btn::before { content: "\f815"; } +.bi-rewind-circle-fill::before { content: "\f816"; } 
+.bi-rewind-circle::before { content: "\f817"; } +.bi-rewind-fill::before { content: "\f818"; } +.bi-rewind::before { content: "\f819"; } +.bi-train-freight-front-fill::before { content: "\f81a"; } +.bi-train-freight-front::before { content: "\f81b"; } +.bi-train-front-fill::before { content: "\f81c"; } +.bi-train-front::before { content: "\f81d"; } +.bi-train-lightrail-front-fill::before { content: "\f81e"; } +.bi-train-lightrail-front::before { content: "\f81f"; } +.bi-truck-front-fill::before { content: "\f820"; } +.bi-truck-front::before { content: "\f821"; } +.bi-ubuntu::before { content: "\f822"; } +.bi-unindent::before { content: "\f823"; } +.bi-unity::before { content: "\f824"; } +.bi-universal-access-circle::before { content: "\f825"; } +.bi-universal-access::before { content: "\f826"; } +.bi-virus::before { content: "\f827"; } +.bi-virus2::before { content: "\f828"; } +.bi-wechat::before { content: "\f829"; } +.bi-yelp::before { content: "\f82a"; } +.bi-sign-stop-fill::before { content: "\f82b"; } +.bi-sign-stop-lights-fill::before { content: "\f82c"; } +.bi-sign-stop-lights::before { content: "\f82d"; } +.bi-sign-stop::before { content: "\f82e"; } +.bi-sign-turn-left-fill::before { content: "\f82f"; } +.bi-sign-turn-left::before { content: "\f830"; } +.bi-sign-turn-right-fill::before { content: "\f831"; } +.bi-sign-turn-right::before { content: "\f832"; } +.bi-sign-turn-slight-left-fill::before { content: "\f833"; } +.bi-sign-turn-slight-left::before { content: "\f834"; } +.bi-sign-turn-slight-right-fill::before { content: "\f835"; } +.bi-sign-turn-slight-right::before { content: "\f836"; } +.bi-sign-yield-fill::before { content: "\f837"; } +.bi-sign-yield::before { content: "\f838"; } +.bi-ev-station-fill::before { content: "\f839"; } +.bi-ev-station::before { content: "\f83a"; } +.bi-fuel-pump-diesel-fill::before { content: "\f83b"; } +.bi-fuel-pump-diesel::before { content: "\f83c"; } +.bi-fuel-pump-fill::before { content: "\f83d"; } +.bi-fuel-pump::before { content: "\f83e"; } +.bi-0-circle-fill::before { content: "\f83f"; } +.bi-0-circle::before { content: "\f840"; } +.bi-0-square-fill::before { content: "\f841"; } +.bi-0-square::before { content: "\f842"; } +.bi-rocket-fill::before { content: "\f843"; } +.bi-rocket-takeoff-fill::before { content: "\f844"; } +.bi-rocket-takeoff::before { content: "\f845"; } +.bi-rocket::before { content: "\f846"; } +.bi-stripe::before { content: "\f847"; } +.bi-subscript::before { content: "\f848"; } +.bi-superscript::before { content: "\f849"; } +.bi-trello::before { content: "\f84a"; } +.bi-envelope-at-fill::before { content: "\f84b"; } +.bi-envelope-at::before { content: "\f84c"; } +.bi-regex::before { content: "\f84d"; } +.bi-text-wrap::before { content: "\f84e"; } +.bi-sign-dead-end-fill::before { content: "\f84f"; } +.bi-sign-dead-end::before { content: "\f850"; } +.bi-sign-do-not-enter-fill::before { content: "\f851"; } +.bi-sign-do-not-enter::before { content: "\f852"; } +.bi-sign-intersection-fill::before { content: "\f853"; } +.bi-sign-intersection-side-fill::before { content: "\f854"; } +.bi-sign-intersection-side::before { content: "\f855"; } +.bi-sign-intersection-t-fill::before { content: "\f856"; } +.bi-sign-intersection-t::before { content: "\f857"; } +.bi-sign-intersection-y-fill::before { content: "\f858"; } +.bi-sign-intersection-y::before { content: "\f859"; } +.bi-sign-intersection::before { content: "\f85a"; } +.bi-sign-merge-left-fill::before { content: "\f85b"; } +.bi-sign-merge-left::before { content: "\f85c"; } 
+.bi-sign-merge-right-fill::before { content: "\f85d"; } +.bi-sign-merge-right::before { content: "\f85e"; } +.bi-sign-no-left-turn-fill::before { content: "\f85f"; } +.bi-sign-no-left-turn::before { content: "\f860"; } +.bi-sign-no-parking-fill::before { content: "\f861"; } +.bi-sign-no-parking::before { content: "\f862"; } +.bi-sign-no-right-turn-fill::before { content: "\f863"; } +.bi-sign-no-right-turn::before { content: "\f864"; } +.bi-sign-railroad-fill::before { content: "\f865"; } +.bi-sign-railroad::before { content: "\f866"; } +.bi-building-add::before { content: "\f867"; } +.bi-building-check::before { content: "\f868"; } +.bi-building-dash::before { content: "\f869"; } +.bi-building-down::before { content: "\f86a"; } +.bi-building-exclamation::before { content: "\f86b"; } +.bi-building-fill-add::before { content: "\f86c"; } +.bi-building-fill-check::before { content: "\f86d"; } +.bi-building-fill-dash::before { content: "\f86e"; } +.bi-building-fill-down::before { content: "\f86f"; } +.bi-building-fill-exclamation::before { content: "\f870"; } +.bi-building-fill-gear::before { content: "\f871"; } +.bi-building-fill-lock::before { content: "\f872"; } +.bi-building-fill-slash::before { content: "\f873"; } +.bi-building-fill-up::before { content: "\f874"; } +.bi-building-fill-x::before { content: "\f875"; } +.bi-building-fill::before { content: "\f876"; } +.bi-building-gear::before { content: "\f877"; } +.bi-building-lock::before { content: "\f878"; } +.bi-building-slash::before { content: "\f879"; } +.bi-building-up::before { content: "\f87a"; } +.bi-building-x::before { content: "\f87b"; } +.bi-buildings-fill::before { content: "\f87c"; } +.bi-buildings::before { content: "\f87d"; } +.bi-bus-front-fill::before { content: "\f87e"; } +.bi-bus-front::before { content: "\f87f"; } +.bi-ev-front-fill::before { content: "\f880"; } +.bi-ev-front::before { content: "\f881"; } +.bi-globe-americas::before { content: "\f882"; } +.bi-globe-asia-australia::before { content: "\f883"; } +.bi-globe-central-south-asia::before { content: "\f884"; } +.bi-globe-europe-africa::before { content: "\f885"; } +.bi-house-add-fill::before { content: "\f886"; } +.bi-house-add::before { content: "\f887"; } +.bi-house-check-fill::before { content: "\f888"; } +.bi-house-check::before { content: "\f889"; } +.bi-house-dash-fill::before { content: "\f88a"; } +.bi-house-dash::before { content: "\f88b"; } +.bi-house-down-fill::before { content: "\f88c"; } +.bi-house-down::before { content: "\f88d"; } +.bi-house-exclamation-fill::before { content: "\f88e"; } +.bi-house-exclamation::before { content: "\f88f"; } +.bi-house-gear-fill::before { content: "\f890"; } +.bi-house-gear::before { content: "\f891"; } +.bi-house-lock-fill::before { content: "\f892"; } +.bi-house-lock::before { content: "\f893"; } +.bi-house-slash-fill::before { content: "\f894"; } +.bi-house-slash::before { content: "\f895"; } +.bi-house-up-fill::before { content: "\f896"; } +.bi-house-up::before { content: "\f897"; } +.bi-house-x-fill::before { content: "\f898"; } +.bi-house-x::before { content: "\f899"; } +.bi-person-add::before { content: "\f89a"; } +.bi-person-down::before { content: "\f89b"; } +.bi-person-exclamation::before { content: "\f89c"; } +.bi-person-fill-add::before { content: "\f89d"; } +.bi-person-fill-check::before { content: "\f89e"; } +.bi-person-fill-dash::before { content: "\f89f"; } +.bi-person-fill-down::before { content: "\f8a0"; } +.bi-person-fill-exclamation::before { content: "\f8a1"; } +.bi-person-fill-gear::before { 
content: "\f8a2"; } +.bi-person-fill-lock::before { content: "\f8a3"; } +.bi-person-fill-slash::before { content: "\f8a4"; } +.bi-person-fill-up::before { content: "\f8a5"; } +.bi-person-fill-x::before { content: "\f8a6"; } +.bi-person-gear::before { content: "\f8a7"; } +.bi-person-lock::before { content: "\f8a8"; } +.bi-person-slash::before { content: "\f8a9"; } +.bi-person-up::before { content: "\f8aa"; } +.bi-scooter::before { content: "\f8ab"; } +.bi-taxi-front-fill::before { content: "\f8ac"; } +.bi-taxi-front::before { content: "\f8ad"; } +.bi-amd::before { content: "\f8ae"; } +.bi-database-add::before { content: "\f8af"; } +.bi-database-check::before { content: "\f8b0"; } +.bi-database-dash::before { content: "\f8b1"; } +.bi-database-down::before { content: "\f8b2"; } +.bi-database-exclamation::before { content: "\f8b3"; } +.bi-database-fill-add::before { content: "\f8b4"; } +.bi-database-fill-check::before { content: "\f8b5"; } +.bi-database-fill-dash::before { content: "\f8b6"; } +.bi-database-fill-down::before { content: "\f8b7"; } +.bi-database-fill-exclamation::before { content: "\f8b8"; } +.bi-database-fill-gear::before { content: "\f8b9"; } +.bi-database-fill-lock::before { content: "\f8ba"; } +.bi-database-fill-slash::before { content: "\f8bb"; } +.bi-database-fill-up::before { content: "\f8bc"; } +.bi-database-fill-x::before { content: "\f8bd"; } +.bi-database-fill::before { content: "\f8be"; } +.bi-database-gear::before { content: "\f8bf"; } +.bi-database-lock::before { content: "\f8c0"; } +.bi-database-slash::before { content: "\f8c1"; } +.bi-database-up::before { content: "\f8c2"; } +.bi-database-x::before { content: "\f8c3"; } +.bi-database::before { content: "\f8c4"; } +.bi-houses-fill::before { content: "\f8c5"; } +.bi-houses::before { content: "\f8c6"; } +.bi-nvidia::before { content: "\f8c7"; } +.bi-person-vcard-fill::before { content: "\f8c8"; } +.bi-person-vcard::before { content: "\f8c9"; } +.bi-sina-weibo::before { content: "\f8ca"; } +.bi-tencent-qq::before { content: "\f8cb"; } +.bi-wikipedia::before { content: "\f8cc"; } diff --git a/python-book/site_libs/bootstrap/bootstrap-icons.woff b/python-book/site_libs/bootstrap/bootstrap-icons.woff new file mode 100644 index 00000000..18d21d45 Binary files /dev/null and b/python-book/site_libs/bootstrap/bootstrap-icons.woff differ diff --git a/python-book/site_libs/bootstrap/bootstrap.min.css b/python-book/site_libs/bootstrap/bootstrap.min.css new file mode 100644 index 00000000..a519bf95 --- /dev/null +++ b/python-book/site_libs/bootstrap/bootstrap.min.css @@ -0,0 +1,10 @@ +/*! + * Bootstrap v5.1.3 (https://getbootstrap.com/) + * Copyright 2011-2021 The Bootstrap Authors + * Copyright 2011-2021 Twitter, Inc. 
+ * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) + */@import"https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@300;400;700&display=swap";:root{--bs-blue: #2780e3;--bs-indigo: #6610f2;--bs-purple: #613d7c;--bs-pink: #e83e8c;--bs-red: #ff0039;--bs-orange: #f0ad4e;--bs-yellow: #ff7518;--bs-green: #3fb618;--bs-teal: #20c997;--bs-cyan: #9954bb;--bs-white: #fff;--bs-gray: #6c757d;--bs-gray-dark: #373a3c;--bs-gray-100: #f8f9fa;--bs-gray-200: #e9ecef;--bs-gray-300: #dee2e6;--bs-gray-400: #ced4da;--bs-gray-500: #adb5bd;--bs-gray-600: #6c757d;--bs-gray-700: #495057;--bs-gray-800: #373a3c;--bs-gray-900: #212529;--bs-default: #373a3c;--bs-primary: #2780e3;--bs-secondary: #373a3c;--bs-success: #3fb618;--bs-info: #9954bb;--bs-warning: #ff7518;--bs-danger: #ff0039;--bs-light: #f8f9fa;--bs-dark: #373a3c;--bs-default-rgb: 55, 58, 60;--bs-primary-rgb: 39, 128, 227;--bs-secondary-rgb: 55, 58, 60;--bs-success-rgb: 63, 182, 24;--bs-info-rgb: 153, 84, 187;--bs-warning-rgb: 255, 117, 24;--bs-danger-rgb: 255, 0, 57;--bs-light-rgb: 248, 249, 250;--bs-dark-rgb: 55, 58, 60;--bs-white-rgb: 255, 255, 255;--bs-black-rgb: 0, 0, 0;--bs-body-color-rgb: 55, 58, 60;--bs-body-bg-rgb: 255, 255, 255;--bs-font-sans-serif: "Source Sans Pro", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";--bs-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;--bs-gradient: linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0));--bs-root-font-size: 17px;--bs-body-font-family: var(--bs-font-sans-serif);--bs-body-font-size: 1rem;--bs-body-font-weight: 400;--bs-body-line-height: 1.5;--bs-body-color: #373a3c;--bs-body-bg: #fff}*,*::before,*::after{box-sizing:border-box}:root{font-size:var(--bs-root-font-size)}body{margin:0;font-family:var(--bs-body-font-family);font-size:var(--bs-body-font-size);font-weight:var(--bs-body-font-weight);line-height:var(--bs-body-line-height);color:var(--bs-body-color);text-align:var(--bs-body-text-align);background-color:var(--bs-body-bg);-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:rgba(0,0,0,0)}hr{margin:1rem 0;color:inherit;background-color:currentColor;border:0;opacity:.25}hr:not([size]){height:1px}h6,.h6,h5,.h5,h4,.h4,h3,.h3,h2,.h2,h1,.h1{margin-top:0;margin-bottom:.5rem;font-weight:400;line-height:1.2}h1,.h1{font-size:calc(1.325rem + 0.9vw)}@media(min-width: 1200px){h1,.h1{font-size:2rem}}h2,.h2{font-size:calc(1.29rem + 0.48vw)}@media(min-width: 1200px){h2,.h2{font-size:1.65rem}}h3,.h3{font-size:calc(1.27rem + 0.24vw)}@media(min-width: 1200px){h3,.h3{font-size:1.45rem}}h4,.h4{font-size:1.25rem}h5,.h5{font-size:1.1rem}h6,.h6{font-size:1rem}p{margin-top:0;margin-bottom:1rem}abbr[title],abbr[data-bs-original-title]{text-decoration:underline dotted;-webkit-text-decoration:underline dotted;-moz-text-decoration:underline dotted;-ms-text-decoration:underline dotted;-o-text-decoration:underline dotted;cursor:help;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}ol,ul{padding-left:2rem}ol,ul,dl{margin-top:0;margin-bottom:1rem}ol ol,ul ul,ol ul,ul ol{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 1rem;padding:.625rem 1.25rem;border-left:.25rem solid #e9ecef}blockquote p:last-child,blockquote ul:last-child,blockquote 
ol:last-child{margin-bottom:0}b,strong{font-weight:bolder}small,.small{font-size:0.875em}mark,.mark{padding:.2em;background-color:#fcf8e3}sub,sup{position:relative;font-size:0.75em;line-height:0;vertical-align:baseline}sub{bottom:-0.25em}sup{top:-0.5em}a{color:#2780e3;text-decoration:underline;-webkit-text-decoration:underline;-moz-text-decoration:underline;-ms-text-decoration:underline;-o-text-decoration:underline}a:hover{color:#1f66b6}a:not([href]):not([class]),a:not([href]):not([class]):hover{color:inherit;text-decoration:none}pre,code,kbd,samp{font-family:var(--bs-font-monospace);font-size:1em;direction:ltr /* rtl:ignore */;unicode-bidi:bidi-override}pre{display:block;margin-top:0;margin-bottom:1rem;overflow:auto;font-size:0.875em;color:#000;background-color:#f7f7f7;padding:.5rem;border:1px solid #dee2e6}pre code{background-color:rgba(0,0,0,0);font-size:inherit;color:inherit;word-break:normal}code{font-size:0.875em;color:#9753b8;background-color:#f7f7f7;padding:.125rem .25rem;word-wrap:break-word}a>code{color:inherit}kbd{padding:.4rem .4rem;font-size:0.875em;color:#fff;background-color:#212529}kbd kbd{padding:0;font-size:1em;font-weight:700}figure{margin:0 0 1rem}img,svg{vertical-align:middle}table{caption-side:bottom;border-collapse:collapse}caption{padding-top:.5rem;padding-bottom:.5rem;color:#6c757d;text-align:left}th{text-align:inherit;text-align:-webkit-match-parent}thead,tbody,tfoot,tr,td,th{border-color:inherit;border-style:solid;border-width:0}label{display:inline-block}button{border-radius:0}button:focus:not(:focus-visible){outline:0}input,button,select,optgroup,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}select:disabled{opacity:1}[list]::-webkit-calendar-picker-indicator{display:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button}button:not(:disabled),[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled){cursor:pointer}::-moz-focus-inner{padding:0;border-style:none}textarea{resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{float:left;width:100%;padding:0;margin-bottom:.5rem;font-size:calc(1.275rem + 0.3vw);line-height:inherit}@media(min-width: 1200px){legend{font-size:1.5rem}}legend+*{clear:left}::-webkit-datetime-edit-fields-wrapper,::-webkit-datetime-edit-text,::-webkit-datetime-edit-minute,::-webkit-datetime-edit-hour-field,::-webkit-datetime-edit-day-field,::-webkit-datetime-edit-month-field,::-webkit-datetime-edit-year-field{padding:0}::-webkit-inner-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:textfield}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-color-swatch-wrapper{padding:0}::file-selector-button{font:inherit}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}iframe{border:0}summary{display:list-item;cursor:pointer}progress{vertical-align:baseline}[hidden]{display:none !important}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:calc(1.625rem + 4.5vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-1{font-size:5rem}}.display-2{font-size:calc(1.575rem + 3.9vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-2{font-size:4.5rem}}.display-3{font-size:calc(1.525rem + 3.3vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-3{font-size:4rem}}.display-4{font-size:calc(1.475rem + 
2.7vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-4{font-size:3.5rem}}.display-5{font-size:calc(1.425rem + 2.1vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-5{font-size:3rem}}.display-6{font-size:calc(1.375rem + 1.5vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-6{font-size:2.5rem}}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:0.875em;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote>:last-child{margin-bottom:0}.blockquote-footer{margin-top:-1rem;margin-bottom:1rem;font-size:0.875em;color:#6c757d}.blockquote-footer::before{content:"— "}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid #dee2e6;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:0.875em;color:#6c757d}.grid{display:grid;grid-template-rows:repeat(var(--bs-rows, 1), 1fr);grid-template-columns:repeat(var(--bs-columns, 12), 1fr);gap:var(--bs-gap, 1.5rem)}.grid .g-col-1{grid-column:auto/span 1}.grid .g-col-2{grid-column:auto/span 2}.grid .g-col-3{grid-column:auto/span 3}.grid .g-col-4{grid-column:auto/span 4}.grid .g-col-5{grid-column:auto/span 5}.grid .g-col-6{grid-column:auto/span 6}.grid .g-col-7{grid-column:auto/span 7}.grid .g-col-8{grid-column:auto/span 8}.grid .g-col-9{grid-column:auto/span 9}.grid .g-col-10{grid-column:auto/span 10}.grid .g-col-11{grid-column:auto/span 11}.grid .g-col-12{grid-column:auto/span 12}.grid .g-start-1{grid-column-start:1}.grid .g-start-2{grid-column-start:2}.grid .g-start-3{grid-column-start:3}.grid .g-start-4{grid-column-start:4}.grid .g-start-5{grid-column-start:5}.grid .g-start-6{grid-column-start:6}.grid .g-start-7{grid-column-start:7}.grid .g-start-8{grid-column-start:8}.grid .g-start-9{grid-column-start:9}.grid .g-start-10{grid-column-start:10}.grid .g-start-11{grid-column-start:11}@media(min-width: 576px){.grid .g-col-sm-1{grid-column:auto/span 1}.grid .g-col-sm-2{grid-column:auto/span 2}.grid .g-col-sm-3{grid-column:auto/span 3}.grid .g-col-sm-4{grid-column:auto/span 4}.grid .g-col-sm-5{grid-column:auto/span 5}.grid .g-col-sm-6{grid-column:auto/span 6}.grid .g-col-sm-7{grid-column:auto/span 7}.grid .g-col-sm-8{grid-column:auto/span 8}.grid .g-col-sm-9{grid-column:auto/span 9}.grid .g-col-sm-10{grid-column:auto/span 10}.grid .g-col-sm-11{grid-column:auto/span 11}.grid .g-col-sm-12{grid-column:auto/span 12}.grid .g-start-sm-1{grid-column-start:1}.grid .g-start-sm-2{grid-column-start:2}.grid .g-start-sm-3{grid-column-start:3}.grid .g-start-sm-4{grid-column-start:4}.grid .g-start-sm-5{grid-column-start:5}.grid .g-start-sm-6{grid-column-start:6}.grid .g-start-sm-7{grid-column-start:7}.grid .g-start-sm-8{grid-column-start:8}.grid .g-start-sm-9{grid-column-start:9}.grid .g-start-sm-10{grid-column-start:10}.grid .g-start-sm-11{grid-column-start:11}}@media(min-width: 768px){.grid .g-col-md-1{grid-column:auto/span 1}.grid .g-col-md-2{grid-column:auto/span 2}.grid .g-col-md-3{grid-column:auto/span 3}.grid .g-col-md-4{grid-column:auto/span 4}.grid .g-col-md-5{grid-column:auto/span 5}.grid .g-col-md-6{grid-column:auto/span 6}.grid .g-col-md-7{grid-column:auto/span 7}.grid .g-col-md-8{grid-column:auto/span 8}.grid .g-col-md-9{grid-column:auto/span 9}.grid .g-col-md-10{grid-column:auto/span 10}.grid 
.g-col-md-11{grid-column:auto/span 11}.grid .g-col-md-12{grid-column:auto/span 12}.grid .g-start-md-1{grid-column-start:1}.grid .g-start-md-2{grid-column-start:2}.grid .g-start-md-3{grid-column-start:3}.grid .g-start-md-4{grid-column-start:4}.grid .g-start-md-5{grid-column-start:5}.grid .g-start-md-6{grid-column-start:6}.grid .g-start-md-7{grid-column-start:7}.grid .g-start-md-8{grid-column-start:8}.grid .g-start-md-9{grid-column-start:9}.grid .g-start-md-10{grid-column-start:10}.grid .g-start-md-11{grid-column-start:11}}@media(min-width: 992px){.grid .g-col-lg-1{grid-column:auto/span 1}.grid .g-col-lg-2{grid-column:auto/span 2}.grid .g-col-lg-3{grid-column:auto/span 3}.grid .g-col-lg-4{grid-column:auto/span 4}.grid .g-col-lg-5{grid-column:auto/span 5}.grid .g-col-lg-6{grid-column:auto/span 6}.grid .g-col-lg-7{grid-column:auto/span 7}.grid .g-col-lg-8{grid-column:auto/span 8}.grid .g-col-lg-9{grid-column:auto/span 9}.grid .g-col-lg-10{grid-column:auto/span 10}.grid .g-col-lg-11{grid-column:auto/span 11}.grid .g-col-lg-12{grid-column:auto/span 12}.grid .g-start-lg-1{grid-column-start:1}.grid .g-start-lg-2{grid-column-start:2}.grid .g-start-lg-3{grid-column-start:3}.grid .g-start-lg-4{grid-column-start:4}.grid .g-start-lg-5{grid-column-start:5}.grid .g-start-lg-6{grid-column-start:6}.grid .g-start-lg-7{grid-column-start:7}.grid .g-start-lg-8{grid-column-start:8}.grid .g-start-lg-9{grid-column-start:9}.grid .g-start-lg-10{grid-column-start:10}.grid .g-start-lg-11{grid-column-start:11}}@media(min-width: 1200px){.grid .g-col-xl-1{grid-column:auto/span 1}.grid .g-col-xl-2{grid-column:auto/span 2}.grid .g-col-xl-3{grid-column:auto/span 3}.grid .g-col-xl-4{grid-column:auto/span 4}.grid .g-col-xl-5{grid-column:auto/span 5}.grid .g-col-xl-6{grid-column:auto/span 6}.grid .g-col-xl-7{grid-column:auto/span 7}.grid .g-col-xl-8{grid-column:auto/span 8}.grid .g-col-xl-9{grid-column:auto/span 9}.grid .g-col-xl-10{grid-column:auto/span 10}.grid .g-col-xl-11{grid-column:auto/span 11}.grid .g-col-xl-12{grid-column:auto/span 12}.grid .g-start-xl-1{grid-column-start:1}.grid .g-start-xl-2{grid-column-start:2}.grid .g-start-xl-3{grid-column-start:3}.grid .g-start-xl-4{grid-column-start:4}.grid .g-start-xl-5{grid-column-start:5}.grid .g-start-xl-6{grid-column-start:6}.grid .g-start-xl-7{grid-column-start:7}.grid .g-start-xl-8{grid-column-start:8}.grid .g-start-xl-9{grid-column-start:9}.grid .g-start-xl-10{grid-column-start:10}.grid .g-start-xl-11{grid-column-start:11}}@media(min-width: 1400px){.grid .g-col-xxl-1{grid-column:auto/span 1}.grid .g-col-xxl-2{grid-column:auto/span 2}.grid .g-col-xxl-3{grid-column:auto/span 3}.grid .g-col-xxl-4{grid-column:auto/span 4}.grid .g-col-xxl-5{grid-column:auto/span 5}.grid .g-col-xxl-6{grid-column:auto/span 6}.grid .g-col-xxl-7{grid-column:auto/span 7}.grid .g-col-xxl-8{grid-column:auto/span 8}.grid .g-col-xxl-9{grid-column:auto/span 9}.grid .g-col-xxl-10{grid-column:auto/span 10}.grid .g-col-xxl-11{grid-column:auto/span 11}.grid .g-col-xxl-12{grid-column:auto/span 12}.grid .g-start-xxl-1{grid-column-start:1}.grid .g-start-xxl-2{grid-column-start:2}.grid .g-start-xxl-3{grid-column-start:3}.grid .g-start-xxl-4{grid-column-start:4}.grid .g-start-xxl-5{grid-column-start:5}.grid .g-start-xxl-6{grid-column-start:6}.grid .g-start-xxl-7{grid-column-start:7}.grid .g-start-xxl-8{grid-column-start:8}.grid .g-start-xxl-9{grid-column-start:9}.grid .g-start-xxl-10{grid-column-start:10}.grid .g-start-xxl-11{grid-column-start:11}}.table{--bs-table-bg: transparent;--bs-table-accent-bg: 
transparent;--bs-table-striped-color: #373a3c;--bs-table-striped-bg: rgba(0, 0, 0, 0.05);--bs-table-active-color: #373a3c;--bs-table-active-bg: rgba(0, 0, 0, 0.1);--bs-table-hover-color: #373a3c;--bs-table-hover-bg: rgba(0, 0, 0, 0.075);width:100%;margin-bottom:1rem;color:#373a3c;vertical-align:top;border-color:#dee2e6}.table>:not(caption)>*>*{padding:.5rem .5rem;background-color:var(--bs-table-bg);border-bottom-width:1px;box-shadow:inset 0 0 0 9999px var(--bs-table-accent-bg)}.table>tbody{vertical-align:inherit}.table>thead{vertical-align:bottom}.table>:not(:first-child){border-top:2px solid #b6babc}.caption-top{caption-side:top}.table-sm>:not(caption)>*>*{padding:.25rem .25rem}.table-bordered>:not(caption)>*{border-width:1px 0}.table-bordered>:not(caption)>*>*{border-width:0 1px}.table-borderless>:not(caption)>*>*{border-bottom-width:0}.table-borderless>:not(:first-child){border-top-width:0}.table-striped>tbody>tr:nth-of-type(odd)>*{--bs-table-accent-bg: var(--bs-table-striped-bg);color:var(--bs-table-striped-color)}.table-active{--bs-table-accent-bg: var(--bs-table-active-bg);color:var(--bs-table-active-color)}.table-hover>tbody>tr:hover>*{--bs-table-accent-bg: var(--bs-table-hover-bg);color:var(--bs-table-hover-color)}.table-primary{--bs-table-bg: #d4e6f9;--bs-table-striped-bg: #c9dbed;--bs-table-striped-color: #000;--bs-table-active-bg: #bfcfe0;--bs-table-active-color: #000;--bs-table-hover-bg: #c4d5e6;--bs-table-hover-color: #000;color:#000;border-color:#bfcfe0}.table-secondary{--bs-table-bg: #d7d8d8;--bs-table-striped-bg: #cccdcd;--bs-table-striped-color: #000;--bs-table-active-bg: #c2c2c2;--bs-table-active-color: #000;--bs-table-hover-bg: #c7c8c8;--bs-table-hover-color: #000;color:#000;border-color:#c2c2c2}.table-success{--bs-table-bg: #d9f0d1;--bs-table-striped-bg: #cee4c7;--bs-table-striped-color: #000;--bs-table-active-bg: #c3d8bc;--bs-table-active-color: #000;--bs-table-hover-bg: #c9dec1;--bs-table-hover-color: #000;color:#000;border-color:#c3d8bc}.table-info{--bs-table-bg: #ebddf1;--bs-table-striped-bg: #dfd2e5;--bs-table-striped-color: #000;--bs-table-active-bg: #d4c7d9;--bs-table-active-color: #000;--bs-table-hover-bg: #d9ccdf;--bs-table-hover-color: #000;color:#000;border-color:#d4c7d9}.table-warning{--bs-table-bg: #ffe3d1;--bs-table-striped-bg: #f2d8c7;--bs-table-striped-color: #000;--bs-table-active-bg: #e6ccbc;--bs-table-active-color: #000;--bs-table-hover-bg: #ecd2c1;--bs-table-hover-color: #000;color:#000;border-color:#e6ccbc}.table-danger{--bs-table-bg: #ffccd7;--bs-table-striped-bg: #f2c2cc;--bs-table-striped-color: #000;--bs-table-active-bg: #e6b8c2;--bs-table-active-color: #000;--bs-table-hover-bg: #ecbdc7;--bs-table-hover-color: #000;color:#000;border-color:#e6b8c2}.table-light{--bs-table-bg: #f8f9fa;--bs-table-striped-bg: #ecedee;--bs-table-striped-color: #000;--bs-table-active-bg: #dfe0e1;--bs-table-active-color: #000;--bs-table-hover-bg: #e5e6e7;--bs-table-hover-color: #000;color:#000;border-color:#dfe0e1}.table-dark{--bs-table-bg: #373a3c;--bs-table-striped-bg: #414446;--bs-table-striped-color: #fff;--bs-table-active-bg: #4b4e50;--bs-table-active-color: #fff;--bs-table-hover-bg: #46494b;--bs-table-hover-color: #fff;color:#fff;border-color:#4b4e50}.table-responsive{overflow-x:auto;-webkit-overflow-scrolling:touch}@media(max-width: 575.98px){.table-responsive-sm{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 767.98px){.table-responsive-md{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 
991.98px){.table-responsive-lg{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 1199.98px){.table-responsive-xl{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 1399.98px){.table-responsive-xxl{overflow-x:auto;-webkit-overflow-scrolling:touch}}.form-label,.shiny-input-container .control-label{margin-bottom:.5rem}.col-form-label{padding-top:calc(0.375rem + 1px);padding-bottom:calc(0.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(0.5rem + 1px);padding-bottom:calc(0.5rem + 1px);font-size:1.25rem}.col-form-label-sm{padding-top:calc(0.25rem + 1px);padding-bottom:calc(0.25rem + 1px);font-size:0.875rem}.form-text{margin-top:.25rem;font-size:0.875em;color:#6c757d}.form-control{display:block;width:100%;padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#373a3c;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;border-radius:0;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control{transition:none}}.form-control[type=file]{overflow:hidden}.form-control[type=file]:not(:disabled):not([readonly]){cursor:pointer}.form-control:focus{color:#373a3c;background-color:#fff;border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-control::-webkit-date-and-time-value{height:1.5em}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}.form-control::file-selector-button{padding:.375rem .75rem;margin:-0.375rem -0.75rem;margin-inline-end:.75rem;color:#373a3c;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control::file-selector-button{transition:none}}.form-control:hover:not(:disabled):not([readonly])::file-selector-button{background-color:#dde0e3}.form-control::-webkit-file-upload-button{padding:.375rem .75rem;margin:-0.375rem -0.75rem;margin-inline-end:.75rem;color:#373a3c;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control::-webkit-file-upload-button{transition:none}}.form-control:hover:not(:disabled):not([readonly])::-webkit-file-upload-button{background-color:#dde0e3}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;line-height:1.5;color:#373a3c;background-color:rgba(0,0,0,0);border:solid rgba(0,0,0,0);border-width:1px 0}.form-control-plaintext.form-control-sm,.form-control-plaintext.form-control-lg{padding-right:0;padding-left:0}.form-control-sm{min-height:calc(1.5em + 0.5rem + 2px);padding:.25rem .5rem;font-size:0.875rem}.form-control-sm::file-selector-button{padding:.25rem .5rem;margin:-0.25rem -0.5rem;margin-inline-end:.5rem}.form-control-sm::-webkit-file-upload-button{padding:.25rem .5rem;margin:-0.25rem -0.5rem;margin-inline-end:.5rem}.form-control-lg{min-height:calc(1.5em + 1rem + 2px);padding:.5rem 
1rem;font-size:1.25rem}.form-control-lg::file-selector-button{padding:.5rem 1rem;margin:-0.5rem -1rem;margin-inline-end:1rem}.form-control-lg::-webkit-file-upload-button{padding:.5rem 1rem;margin:-0.5rem -1rem;margin-inline-end:1rem}textarea.form-control{min-height:calc(1.5em + 0.75rem + 2px)}textarea.form-control-sm{min-height:calc(1.5em + 0.5rem + 2px)}textarea.form-control-lg{min-height:calc(1.5em + 1rem + 2px)}.form-control-color{width:3rem;height:auto;padding:.375rem}.form-control-color:not(:disabled):not([readonly]){cursor:pointer}.form-control-color::-moz-color-swatch{height:1.5em}.form-control-color::-webkit-color-swatch{height:1.5em}.form-select{display:block;width:100%;padding:.375rem 2.25rem .375rem .75rem;-moz-padding-start:calc(0.75rem - 3px);font-size:1rem;font-weight:400;line-height:1.5;color:#373a3c;background-color:#fff;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23373a3c' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right .75rem center;background-size:16px 12px;border:1px solid #ced4da;border-radius:0;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none}@media(prefers-reduced-motion: reduce){.form-select{transition:none}}.form-select:focus{border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-select[multiple],.form-select[size]:not([size="1"]){padding-right:.75rem;background-image:none}.form-select:disabled{background-color:#e9ecef}.form-select:-moz-focusring{color:rgba(0,0,0,0);text-shadow:0 0 0 #373a3c}.form-select-sm{padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:0.875rem}.form-select-lg{padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}.form-check,.shiny-input-container .checkbox,.shiny-input-container .radio{display:block;min-height:1.5rem;padding-left:0;margin-bottom:.125rem}.form-check .form-check-input,.form-check .shiny-input-container .checkbox input,.form-check .shiny-input-container .radio input,.shiny-input-container .checkbox .form-check-input,.shiny-input-container .checkbox .shiny-input-container .checkbox input,.shiny-input-container .checkbox .shiny-input-container .radio input,.shiny-input-container .radio .form-check-input,.shiny-input-container .radio .shiny-input-container .checkbox input,.shiny-input-container .radio .shiny-input-container .radio input{float:left;margin-left:0}.form-check-input,.shiny-input-container .checkbox input,.shiny-input-container .checkbox-inline input,.shiny-input-container .radio input,.shiny-input-container .radio-inline input{width:1em;height:1em;margin-top:.25em;vertical-align:top;background-color:#fff;background-repeat:no-repeat;background-position:center;background-size:contain;border:1px solid rgba(0,0,0,.25);appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;color-adjust:exact;-webkit-print-color-adjust:exact}.form-check-input[type=radio],.shiny-input-container .checkbox input[type=radio],.shiny-input-container .checkbox-inline input[type=radio],.shiny-input-container .radio input[type=radio],.shiny-input-container .radio-inline input[type=radio]{border-radius:50%}.form-check-input:active,.shiny-input-container .checkbox input:active,.shiny-input-container .checkbox-inline 
input:active,.shiny-input-container .radio input:active,.shiny-input-container .radio-inline input:active{filter:brightness(90%)}.form-check-input:focus,.shiny-input-container .checkbox input:focus,.shiny-input-container .checkbox-inline input:focus,.shiny-input-container .radio input:focus,.shiny-input-container .radio-inline input:focus{border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-check-input:checked,.shiny-input-container .checkbox input:checked,.shiny-input-container .checkbox-inline input:checked,.shiny-input-container .radio input:checked,.shiny-input-container .radio-inline input:checked{background-color:#2780e3;border-color:#2780e3}.form-check-input:checked[type=checkbox],.shiny-input-container .checkbox input:checked[type=checkbox],.shiny-input-container .checkbox-inline input:checked[type=checkbox],.shiny-input-container .radio input:checked[type=checkbox],.shiny-input-container .radio-inline input:checked[type=checkbox]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10l3 3l6-6'/%3e%3c/svg%3e")}.form-check-input:checked[type=radio],.shiny-input-container .checkbox input:checked[type=radio],.shiny-input-container .checkbox-inline input:checked[type=radio],.shiny-input-container .radio input:checked[type=radio],.shiny-input-container .radio-inline input:checked[type=radio]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='2' fill='%23fff'/%3e%3c/svg%3e")}.form-check-input[type=checkbox]:indeterminate,.shiny-input-container .checkbox input[type=checkbox]:indeterminate,.shiny-input-container .checkbox-inline input[type=checkbox]:indeterminate,.shiny-input-container .radio input[type=checkbox]:indeterminate,.shiny-input-container .radio-inline input[type=checkbox]:indeterminate{background-color:#2780e3;border-color:#2780e3;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10h8'/%3e%3c/svg%3e")}.form-check-input:disabled,.shiny-input-container .checkbox input:disabled,.shiny-input-container .checkbox-inline input:disabled,.shiny-input-container .radio input:disabled,.shiny-input-container .radio-inline input:disabled{pointer-events:none;filter:none;opacity:.5}.form-check-input[disabled]~.form-check-label,.form-check-input[disabled]~span,.form-check-input:disabled~.form-check-label,.form-check-input:disabled~span,.shiny-input-container .checkbox input[disabled]~.form-check-label,.shiny-input-container .checkbox input[disabled]~span,.shiny-input-container .checkbox input:disabled~.form-check-label,.shiny-input-container .checkbox input:disabled~span,.shiny-input-container .checkbox-inline input[disabled]~.form-check-label,.shiny-input-container .checkbox-inline input[disabled]~span,.shiny-input-container .checkbox-inline input:disabled~.form-check-label,.shiny-input-container .checkbox-inline input:disabled~span,.shiny-input-container .radio input[disabled]~.form-check-label,.shiny-input-container .radio input[disabled]~span,.shiny-input-container .radio input:disabled~.form-check-label,.shiny-input-container .radio input:disabled~span,.shiny-input-container .radio-inline input[disabled]~.form-check-label,.shiny-input-container .radio-inline 
input[disabled]~span,.shiny-input-container .radio-inline input:disabled~.form-check-label,.shiny-input-container .radio-inline input:disabled~span{opacity:.5}.form-check-label,.shiny-input-container .checkbox label,.shiny-input-container .checkbox-inline label,.shiny-input-container .radio label,.shiny-input-container .radio-inline label{cursor:pointer}.form-switch{padding-left:2.5em}.form-switch .form-check-input{width:2em;margin-left:-2.5em;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='rgba%280, 0, 0, 0.25%29'/%3e%3c/svg%3e");background-position:left center;transition:background-position .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-switch .form-check-input{transition:none}}.form-switch .form-check-input:focus{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%2393c0f1'/%3e%3c/svg%3e")}.form-switch .form-check-input:checked{background-position:right center;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.form-check-inline,.shiny-input-container .checkbox-inline,.shiny-input-container .radio-inline{display:inline-block;margin-right:1rem}.btn-check{position:absolute;clip:rect(0, 0, 0, 0);pointer-events:none}.btn-check[disabled]+.btn,.btn-check:disabled+.btn{pointer-events:none;filter:none;opacity:.65}.form-range{width:100%;height:1.5rem;padding:0;background-color:rgba(0,0,0,0);appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none}.form-range:focus{outline:0}.form-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(39,128,227,.25)}.form-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(39,128,227,.25)}.form-range::-moz-focus-outer{border:0}.form-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-0.25rem;background-color:#2780e3;border:0;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none}@media(prefers-reduced-motion: reduce){.form-range::-webkit-slider-thumb{transition:none}}.form-range::-webkit-slider-thumb:active{background-color:#bed9f7}.form-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:rgba(0,0,0,0);cursor:pointer;background-color:#dee2e6;border-color:rgba(0,0,0,0)}.form-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#2780e3;border:0;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none}@media(prefers-reduced-motion: reduce){.form-range::-moz-range-thumb{transition:none}}.form-range::-moz-range-thumb:active{background-color:#bed9f7}.form-range::-moz-range-track{width:100%;height:.5rem;color:rgba(0,0,0,0);cursor:pointer;background-color:#dee2e6;border-color:rgba(0,0,0,0)}.form-range:disabled{pointer-events:none}.form-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.form-range:disabled::-moz-range-thumb{background-color:#adb5bd}.form-floating{position:relative}.form-floating>.form-control,.form-floating>.form-select{height:calc(3.5rem + 2px);line-height:1.25}.form-floating>label{position:absolute;top:0;left:0;height:100%;padding:1rem 
.75rem;pointer-events:none;border:1px solid rgba(0,0,0,0);transform-origin:0 0;transition:opacity .1s ease-in-out,transform .1s ease-in-out}@media(prefers-reduced-motion: reduce){.form-floating>label{transition:none}}.form-floating>.form-control{padding:1rem .75rem}.form-floating>.form-control::placeholder{color:rgba(0,0,0,0)}.form-floating>.form-control:focus,.form-floating>.form-control:not(:placeholder-shown){padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:-webkit-autofill{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-select{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:focus~label,.form-floating>.form-control:not(:placeholder-shown)~label,.form-floating>.form-select~label{opacity:.65;transform:scale(0.85) translateY(-0.5rem) translateX(0.15rem)}.form-floating>.form-control:-webkit-autofill~label{opacity:.65;transform:scale(0.85) translateY(-0.5rem) translateX(0.15rem)}.input-group{position:relative;display:flex;display:-webkit-flex;flex-wrap:wrap;-webkit-flex-wrap:wrap;align-items:stretch;-webkit-align-items:stretch;width:100%}.input-group>.form-control,.input-group>.form-select{position:relative;flex:1 1 auto;-webkit-flex:1 1 auto;width:1%;min-width:0}.input-group>.form-control:focus,.input-group>.form-select:focus{z-index:3}.input-group .btn{position:relative;z-index:2}.input-group .btn:focus{z-index:3}.input-group-text{display:flex;display:-webkit-flex;align-items:center;-webkit-align-items:center;padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#373a3c;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da}.input-group-lg>.form-control,.input-group-lg>.form-select,.input-group-lg>.input-group-text,.input-group-lg>.btn{padding:.5rem 1rem;font-size:1.25rem}.input-group-sm>.form-control,.input-group-sm>.form-select,.input-group-sm>.input-group-text,.input-group-sm>.btn{padding:.25rem .5rem;font-size:0.875rem}.input-group-lg>.form-select,.input-group-sm>.form-select{padding-right:3rem}.input-group>:not(:first-child):not(.dropdown-menu):not(.valid-tooltip):not(.valid-feedback):not(.invalid-tooltip):not(.invalid-feedback){margin-left:-1px}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:0.875em;color:#3fb618}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:0.875rem;color:#fff;background-color:rgba(63,182,24,.9)}.was-validated :valid~.valid-feedback,.was-validated :valid~.valid-tooltip,.is-valid~.valid-feedback,.is-valid~.valid-tooltip{display:block}.was-validated .form-control:valid,.form-control.is-valid{border-color:#3fb618;padding-right:calc(1.5em + 0.75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%233fb618' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(0.375em + 0.1875rem) center;background-size:calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-control:valid:focus,.form-control.is-valid:focus{border-color:#3fb618;box-shadow:0 0 0 .25rem rgba(63,182,24,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-right:calc(1.5em + 0.75rem);background-position:top calc(0.375em + 0.1875rem) right calc(0.375em + 0.1875rem)}.was-validated 
.form-select:valid,.form-select.is-valid{border-color:#3fb618}.was-validated .form-select:valid:not([multiple]):not([size]),.was-validated .form-select:valid:not([multiple])[size="1"],.form-select.is-valid:not([multiple]):not([size]),.form-select.is-valid:not([multiple])[size="1"]{padding-right:4.125rem;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23373a3c' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e"),url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%233fb618' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-position:right .75rem center,center right 2.25rem;background-size:16px 12px,calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-select:valid:focus,.form-select.is-valid:focus{border-color:#3fb618;box-shadow:0 0 0 .25rem rgba(63,182,24,.25)}.was-validated .form-check-input:valid,.form-check-input.is-valid{border-color:#3fb618}.was-validated .form-check-input:valid:checked,.form-check-input.is-valid:checked{background-color:#3fb618}.was-validated .form-check-input:valid:focus,.form-check-input.is-valid:focus{box-shadow:0 0 0 .25rem rgba(63,182,24,.25)}.was-validated .form-check-input:valid~.form-check-label,.form-check-input.is-valid~.form-check-label{color:#3fb618}.form-check-inline .form-check-input~.valid-feedback{margin-left:.5em}.was-validated .input-group .form-control:valid,.input-group .form-control.is-valid,.was-validated .input-group .form-select:valid,.input-group .form-select.is-valid{z-index:1}.was-validated .input-group .form-control:valid:focus,.input-group .form-control.is-valid:focus,.was-validated .input-group .form-select:valid:focus,.input-group .form-select.is-valid:focus{z-index:3}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:0.875em;color:#ff0039}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:0.875rem;color:#fff;background-color:rgba(255,0,57,.9)}.was-validated :invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip,.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip{display:block}.was-validated .form-control:invalid,.form-control.is-invalid{border-color:#ff0039;padding-right:calc(1.5em + 0.75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23ff0039'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23ff0039' stroke='none'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right calc(0.375em + 0.1875rem) center;background-size:calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-control:invalid:focus,.form-control.is-invalid:focus{border-color:#ff0039;box-shadow:0 0 0 .25rem rgba(255,0,57,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-right:calc(1.5em + 0.75rem);background-position:top calc(0.375em + 0.1875rem) right calc(0.375em + 0.1875rem)}.was-validated .form-select:invalid,.form-select.is-invalid{border-color:#ff0039}.was-validated .form-select:invalid:not([multiple]):not([size]),.was-validated 
.form-select:invalid:not([multiple])[size="1"],.form-select.is-invalid:not([multiple]):not([size]),.form-select.is-invalid:not([multiple])[size="1"]{padding-right:4.125rem;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23373a3c' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e"),url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23ff0039'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23ff0039' stroke='none'/%3e%3c/svg%3e");background-position:right .75rem center,center right 2.25rem;background-size:16px 12px,calc(0.75em + 0.375rem) calc(0.75em + 0.375rem)}.was-validated .form-select:invalid:focus,.form-select.is-invalid:focus{border-color:#ff0039;box-shadow:0 0 0 .25rem rgba(255,0,57,.25)}.was-validated .form-check-input:invalid,.form-check-input.is-invalid{border-color:#ff0039}.was-validated .form-check-input:invalid:checked,.form-check-input.is-invalid:checked{background-color:#ff0039}.was-validated .form-check-input:invalid:focus,.form-check-input.is-invalid:focus{box-shadow:0 0 0 .25rem rgba(255,0,57,.25)}.was-validated .form-check-input:invalid~.form-check-label,.form-check-input.is-invalid~.form-check-label{color:#ff0039}.form-check-inline .form-check-input~.invalid-feedback{margin-left:.5em}.was-validated .input-group .form-control:invalid,.input-group .form-control.is-invalid,.was-validated .input-group .form-select:invalid,.input-group .form-select.is-invalid{z-index:2}.was-validated .input-group .form-control:invalid:focus,.input-group .form-control.is-invalid:focus,.was-validated .input-group .form-select:invalid:focus,.input-group .form-select.is-invalid:focus{z-index:3}.btn{display:inline-block;font-weight:400;line-height:1.5;color:#373a3c;text-align:center;text-decoration:none;-webkit-text-decoration:none;-moz-text-decoration:none;-ms-text-decoration:none;-o-text-decoration:none;vertical-align:middle;cursor:pointer;user-select:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;-o-user-select:none;background-color:rgba(0,0,0,0);border:1px solid rgba(0,0,0,0);padding:.375rem .75rem;font-size:1rem;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.btn{transition:none}}.btn:hover{color:#373a3c}.btn-check:focus+.btn,.btn:focus{outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.btn:disabled,.btn.disabled,fieldset:disabled .btn{pointer-events:none;opacity:.65}.btn-default{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-default:hover{color:#fff;background-color:#2f3133;border-color:#2c2e30}.btn-check:focus+.btn-default,.btn-default:focus{color:#fff;background-color:#2f3133;border-color:#2c2e30;box-shadow:0 0 0 .25rem rgba(85,88,89,.5)}.btn-check:checked+.btn-default,.btn-check:active+.btn-default,.btn-default:active,.btn-default.active,.show>.btn-default.dropdown-toggle{color:#fff;background-color:#2c2e30;border-color:#292c2d}.btn-check:checked+.btn-default:focus,.btn-check:active+.btn-default:focus,.btn-default:active:focus,.btn-default.active:focus,.show>.btn-default.dropdown-toggle:focus{box-shadow:0 0 0 .25rem 
rgba(85,88,89,.5)}.btn-default:disabled,.btn-default.disabled{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-primary{color:#fff;background-color:#2780e3;border-color:#2780e3}.btn-primary:hover{color:#fff;background-color:#216dc1;border-color:#1f66b6}.btn-check:focus+.btn-primary,.btn-primary:focus{color:#fff;background-color:#216dc1;border-color:#1f66b6;box-shadow:0 0 0 .25rem rgba(71,147,231,.5)}.btn-check:checked+.btn-primary,.btn-check:active+.btn-primary,.btn-primary:active,.btn-primary.active,.show>.btn-primary.dropdown-toggle{color:#fff;background-color:#1f66b6;border-color:#1d60aa}.btn-check:checked+.btn-primary:focus,.btn-check:active+.btn-primary:focus,.btn-primary:active:focus,.btn-primary.active:focus,.show>.btn-primary.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(71,147,231,.5)}.btn-primary:disabled,.btn-primary.disabled{color:#fff;background-color:#2780e3;border-color:#2780e3}.btn-secondary{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-secondary:hover{color:#fff;background-color:#2f3133;border-color:#2c2e30}.btn-check:focus+.btn-secondary,.btn-secondary:focus{color:#fff;background-color:#2f3133;border-color:#2c2e30;box-shadow:0 0 0 .25rem rgba(85,88,89,.5)}.btn-check:checked+.btn-secondary,.btn-check:active+.btn-secondary,.btn-secondary:active,.btn-secondary.active,.show>.btn-secondary.dropdown-toggle{color:#fff;background-color:#2c2e30;border-color:#292c2d}.btn-check:checked+.btn-secondary:focus,.btn-check:active+.btn-secondary:focus,.btn-secondary:active:focus,.btn-secondary.active:focus,.show>.btn-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(85,88,89,.5)}.btn-secondary:disabled,.btn-secondary.disabled{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-success{color:#fff;background-color:#3fb618;border-color:#3fb618}.btn-success:hover{color:#fff;background-color:#369b14;border-color:#329213}.btn-check:focus+.btn-success,.btn-success:focus{color:#fff;background-color:#369b14;border-color:#329213;box-shadow:0 0 0 .25rem rgba(92,193,59,.5)}.btn-check:checked+.btn-success,.btn-check:active+.btn-success,.btn-success:active,.btn-success.active,.show>.btn-success.dropdown-toggle{color:#fff;background-color:#329213;border-color:#2f8912}.btn-check:checked+.btn-success:focus,.btn-check:active+.btn-success:focus,.btn-success:active:focus,.btn-success.active:focus,.show>.btn-success.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(92,193,59,.5)}.btn-success:disabled,.btn-success.disabled{color:#fff;background-color:#3fb618;border-color:#3fb618}.btn-info{color:#fff;background-color:#9954bb;border-color:#9954bb}.btn-info:hover{color:#fff;background-color:#82479f;border-color:#7a4396}.btn-check:focus+.btn-info,.btn-info:focus{color:#fff;background-color:#82479f;border-color:#7a4396;box-shadow:0 0 0 .25rem rgba(168,110,197,.5)}.btn-check:checked+.btn-info,.btn-check:active+.btn-info,.btn-info:active,.btn-info.active,.show>.btn-info.dropdown-toggle{color:#fff;background-color:#7a4396;border-color:#733f8c}.btn-check:checked+.btn-info:focus,.btn-check:active+.btn-info:focus,.btn-info:active:focus,.btn-info.active:focus,.show>.btn-info.dropdown-toggle:focus{box-shadow:0 0 0 .25rem 
rgba(168,110,197,.5)}.btn-info:disabled,.btn-info.disabled{color:#fff;background-color:#9954bb;border-color:#9954bb}.btn-warning{color:#fff;background-color:#ff7518;border-color:#ff7518}.btn-warning:hover{color:#fff;background-color:#d96314;border-color:#cc5e13}.btn-check:focus+.btn-warning,.btn-warning:focus{color:#fff;background-color:#d96314;border-color:#cc5e13;box-shadow:0 0 0 .25rem rgba(255,138,59,.5)}.btn-check:checked+.btn-warning,.btn-check:active+.btn-warning,.btn-warning:active,.btn-warning.active,.show>.btn-warning.dropdown-toggle{color:#fff;background-color:#cc5e13;border-color:#bf5812}.btn-check:checked+.btn-warning:focus,.btn-check:active+.btn-warning:focus,.btn-warning:active:focus,.btn-warning.active:focus,.show>.btn-warning.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(255,138,59,.5)}.btn-warning:disabled,.btn-warning.disabled{color:#fff;background-color:#ff7518;border-color:#ff7518}.btn-danger{color:#fff;background-color:#ff0039;border-color:#ff0039}.btn-danger:hover{color:#fff;background-color:#d90030;border-color:#cc002e}.btn-check:focus+.btn-danger,.btn-danger:focus{color:#fff;background-color:#d90030;border-color:#cc002e;box-shadow:0 0 0 .25rem rgba(255,38,87,.5)}.btn-check:checked+.btn-danger,.btn-check:active+.btn-danger,.btn-danger:active,.btn-danger.active,.show>.btn-danger.dropdown-toggle{color:#fff;background-color:#cc002e;border-color:#bf002b}.btn-check:checked+.btn-danger:focus,.btn-check:active+.btn-danger:focus,.btn-danger:active:focus,.btn-danger.active:focus,.show>.btn-danger.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(255,38,87,.5)}.btn-danger:disabled,.btn-danger.disabled{color:#fff;background-color:#ff0039;border-color:#ff0039}.btn-light{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:hover{color:#000;background-color:#f9fafb;border-color:#f9fafb}.btn-check:focus+.btn-light,.btn-light:focus{color:#000;background-color:#f9fafb;border-color:#f9fafb;box-shadow:0 0 0 .25rem rgba(211,212,213,.5)}.btn-check:checked+.btn-light,.btn-check:active+.btn-light,.btn-light:active,.btn-light.active,.show>.btn-light.dropdown-toggle{color:#000;background-color:#f9fafb;border-color:#f9fafb}.btn-check:checked+.btn-light:focus,.btn-check:active+.btn-light:focus,.btn-light:active:focus,.btn-light.active:focus,.show>.btn-light.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(211,212,213,.5)}.btn-light:disabled,.btn-light.disabled{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-dark{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-dark:hover{color:#fff;background-color:#2f3133;border-color:#2c2e30}.btn-check:focus+.btn-dark,.btn-dark:focus{color:#fff;background-color:#2f3133;border-color:#2c2e30;box-shadow:0 0 0 .25rem rgba(85,88,89,.5)}.btn-check:checked+.btn-dark,.btn-check:active+.btn-dark,.btn-dark:active,.btn-dark.active,.show>.btn-dark.dropdown-toggle{color:#fff;background-color:#2c2e30;border-color:#292c2d}.btn-check:checked+.btn-dark:focus,.btn-check:active+.btn-dark:focus,.btn-dark:active:focus,.btn-dark.active:focus,.show>.btn-dark.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(85,88,89,.5)}.btn-dark:disabled,.btn-dark.disabled{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-outline-default{color:#373a3c;border-color:#373a3c;background-color:rgba(0,0,0,0)}.btn-outline-default:hover{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-check:focus+.btn-outline-default,.btn-outline-default:focus{box-shadow:0 0 0 .25rem 
rgba(55,58,60,.5)}.btn-check:checked+.btn-outline-default,.btn-check:active+.btn-outline-default,.btn-outline-default:active,.btn-outline-default.active,.btn-outline-default.dropdown-toggle.show{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-check:checked+.btn-outline-default:focus,.btn-check:active+.btn-outline-default:focus,.btn-outline-default:active:focus,.btn-outline-default.active:focus,.btn-outline-default.dropdown-toggle.show:focus{box-shadow:0 0 0 .25rem rgba(55,58,60,.5)}.btn-outline-default:disabled,.btn-outline-default.disabled{color:#373a3c;background-color:rgba(0,0,0,0)}.btn-outline-primary{color:#2780e3;border-color:#2780e3;background-color:rgba(0,0,0,0)}.btn-outline-primary:hover{color:#fff;background-color:#2780e3;border-color:#2780e3}.btn-check:focus+.btn-outline-primary,.btn-outline-primary:focus{box-shadow:0 0 0 .25rem rgba(39,128,227,.5)}.btn-check:checked+.btn-outline-primary,.btn-check:active+.btn-outline-primary,.btn-outline-primary:active,.btn-outline-primary.active,.btn-outline-primary.dropdown-toggle.show{color:#fff;background-color:#2780e3;border-color:#2780e3}.btn-check:checked+.btn-outline-primary:focus,.btn-check:active+.btn-outline-primary:focus,.btn-outline-primary:active:focus,.btn-outline-primary.active:focus,.btn-outline-primary.dropdown-toggle.show:focus{box-shadow:0 0 0 .25rem rgba(39,128,227,.5)}.btn-outline-primary:disabled,.btn-outline-primary.disabled{color:#2780e3;background-color:rgba(0,0,0,0)}.btn-outline-secondary{color:#373a3c;border-color:#373a3c;background-color:rgba(0,0,0,0)}.btn-outline-secondary:hover{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-check:focus+.btn-outline-secondary,.btn-outline-secondary:focus{box-shadow:0 0 0 .25rem rgba(55,58,60,.5)}.btn-check:checked+.btn-outline-secondary,.btn-check:active+.btn-outline-secondary,.btn-outline-secondary:active,.btn-outline-secondary.active,.btn-outline-secondary.dropdown-toggle.show{color:#fff;background-color:#373a3c;border-color:#373a3c}.btn-check:checked+.btn-outline-secondary:focus,.btn-check:active+.btn-outline-secondary:focus,.btn-outline-secondary:active:focus,.btn-outline-secondary.active:focus,.btn-outline-secondary.dropdown-toggle.show:focus{box-shadow:0 0 0 .25rem rgba(55,58,60,.5)}.btn-outline-secondary:disabled,.btn-outline-secondary.disabled{color:#373a3c;background-color:rgba(0,0,0,0)}.btn-outline-success{color:#3fb618;border-color:#3fb618;background-color:rgba(0,0,0,0)}.btn-outline-success:hover{color:#fff;background-color:#3fb618;border-color:#3fb618}.btn-check:focus+.btn-outline-success,.btn-outline-success:focus{box-shadow:0 0 0 .25rem rgba(63,182,24,.5)}.btn-check:checked+.btn-outline-success,.btn-check:active+.btn-outline-success,.btn-outline-success:active,.btn-outline-success.active,.btn-outline-success.dropdown-toggle.show{color:#fff;background-color:#3fb618;border-color:#3fb618}.btn-check:checked+.btn-outline-success:focus,.btn-check:active+.btn-outline-success:focus,.btn-outline-success:active:focus,.btn-outline-success.active:focus,.btn-outline-success.dropdown-toggle.show:focus{box-shadow:0 0 0 .25rem rgba(63,182,24,.5)}.btn-outline-success:disabled,.btn-outline-success.disabled{color:#3fb618;background-color:rgba(0,0,0,0)}.btn-outline-info{color:#9954bb;border-color:#9954bb;background-color:rgba(0,0,0,0)}.btn-outline-info:hover{color:#fff;background-color:#9954bb;border-color:#9954bb}.btn-check:focus+.btn-outline-info,.btn-outline-info:focus{box-shadow:0 0 0 .25rem 
!important}.rounded-top{border-top-left-radius:.25rem !important;border-top-right-radius:.25rem !important}.rounded-end{border-top-right-radius:.25rem !important;border-bottom-right-radius:.25rem !important}.rounded-bottom{border-bottom-right-radius:.25rem !important;border-bottom-left-radius:.25rem !important}.rounded-start{border-bottom-left-radius:.25rem !important;border-top-left-radius:.25rem !important}.visible{visibility:visible !important}.invisible{visibility:hidden !important}@media(min-width: 576px){.float-sm-start{float:left !important}.float-sm-end{float:right !important}.float-sm-none{float:none !important}.d-sm-inline{display:inline !important}.d-sm-inline-block{display:inline-block !important}.d-sm-block{display:block !important}.d-sm-grid{display:grid !important}.d-sm-table{display:table !important}.d-sm-table-row{display:table-row !important}.d-sm-table-cell{display:table-cell !important}.d-sm-flex{display:flex !important}.d-sm-inline-flex{display:inline-flex !important}.d-sm-none{display:none !important}.flex-sm-fill{flex:1 1 auto !important}.flex-sm-row{flex-direction:row !important}.flex-sm-column{flex-direction:column !important}.flex-sm-row-reverse{flex-direction:row-reverse !important}.flex-sm-column-reverse{flex-direction:column-reverse !important}.flex-sm-grow-0{flex-grow:0 !important}.flex-sm-grow-1{flex-grow:1 !important}.flex-sm-shrink-0{flex-shrink:0 !important}.flex-sm-shrink-1{flex-shrink:1 !important}.flex-sm-wrap{flex-wrap:wrap !important}.flex-sm-nowrap{flex-wrap:nowrap !important}.flex-sm-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-sm-0{gap:0 !important}.gap-sm-1{gap:.25rem !important}.gap-sm-2{gap:.5rem !important}.gap-sm-3{gap:1rem !important}.gap-sm-4{gap:1.5rem !important}.gap-sm-5{gap:3rem !important}.justify-content-sm-start{justify-content:flex-start !important}.justify-content-sm-end{justify-content:flex-end !important}.justify-content-sm-center{justify-content:center !important}.justify-content-sm-between{justify-content:space-between !important}.justify-content-sm-around{justify-content:space-around !important}.justify-content-sm-evenly{justify-content:space-evenly !important}.align-items-sm-start{align-items:flex-start !important}.align-items-sm-end{align-items:flex-end !important}.align-items-sm-center{align-items:center !important}.align-items-sm-baseline{align-items:baseline !important}.align-items-sm-stretch{align-items:stretch !important}.align-content-sm-start{align-content:flex-start !important}.align-content-sm-end{align-content:flex-end !important}.align-content-sm-center{align-content:center !important}.align-content-sm-between{align-content:space-between !important}.align-content-sm-around{align-content:space-around !important}.align-content-sm-stretch{align-content:stretch !important}.align-self-sm-auto{align-self:auto !important}.align-self-sm-start{align-self:flex-start !important}.align-self-sm-end{align-self:flex-end !important}.align-self-sm-center{align-self:center !important}.align-self-sm-baseline{align-self:baseline !important}.align-self-sm-stretch{align-self:stretch !important}.order-sm-first{order:-1 !important}.order-sm-0{order:0 !important}.order-sm-1{order:1 !important}.order-sm-2{order:2 !important}.order-sm-3{order:3 !important}.order-sm-4{order:4 !important}.order-sm-5{order:5 !important}.order-sm-last{order:6 !important}.m-sm-0{margin:0 !important}.m-sm-1{margin:.25rem !important}.m-sm-2{margin:.5rem !important}.m-sm-3{margin:1rem !important}.m-sm-4{margin:1.5rem !important}.m-sm-5{margin:3rem 
!important}.m-sm-auto{margin:auto !important}.mx-sm-0{margin-right:0 !important;margin-left:0 !important}.mx-sm-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-sm-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-sm-3{margin-right:1rem !important;margin-left:1rem !important}.mx-sm-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-sm-5{margin-right:3rem !important;margin-left:3rem !important}.mx-sm-auto{margin-right:auto !important;margin-left:auto !important}.my-sm-0{margin-top:0 !important;margin-bottom:0 !important}.my-sm-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-sm-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-sm-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-sm-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-sm-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-sm-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-sm-0{margin-top:0 !important}.mt-sm-1{margin-top:.25rem !important}.mt-sm-2{margin-top:.5rem !important}.mt-sm-3{margin-top:1rem !important}.mt-sm-4{margin-top:1.5rem !important}.mt-sm-5{margin-top:3rem !important}.mt-sm-auto{margin-top:auto !important}.me-sm-0{margin-right:0 !important}.me-sm-1{margin-right:.25rem !important}.me-sm-2{margin-right:.5rem !important}.me-sm-3{margin-right:1rem !important}.me-sm-4{margin-right:1.5rem !important}.me-sm-5{margin-right:3rem !important}.me-sm-auto{margin-right:auto !important}.mb-sm-0{margin-bottom:0 !important}.mb-sm-1{margin-bottom:.25rem !important}.mb-sm-2{margin-bottom:.5rem !important}.mb-sm-3{margin-bottom:1rem !important}.mb-sm-4{margin-bottom:1.5rem !important}.mb-sm-5{margin-bottom:3rem !important}.mb-sm-auto{margin-bottom:auto !important}.ms-sm-0{margin-left:0 !important}.ms-sm-1{margin-left:.25rem !important}.ms-sm-2{margin-left:.5rem !important}.ms-sm-3{margin-left:1rem !important}.ms-sm-4{margin-left:1.5rem !important}.ms-sm-5{margin-left:3rem !important}.ms-sm-auto{margin-left:auto !important}.p-sm-0{padding:0 !important}.p-sm-1{padding:.25rem !important}.p-sm-2{padding:.5rem !important}.p-sm-3{padding:1rem !important}.p-sm-4{padding:1.5rem !important}.p-sm-5{padding:3rem !important}.px-sm-0{padding-right:0 !important;padding-left:0 !important}.px-sm-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-sm-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-sm-3{padding-right:1rem !important;padding-left:1rem !important}.px-sm-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-sm-5{padding-right:3rem !important;padding-left:3rem !important}.py-sm-0{padding-top:0 !important;padding-bottom:0 !important}.py-sm-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-sm-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-sm-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-sm-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-sm-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-sm-0{padding-top:0 !important}.pt-sm-1{padding-top:.25rem !important}.pt-sm-2{padding-top:.5rem !important}.pt-sm-3{padding-top:1rem !important}.pt-sm-4{padding-top:1.5rem !important}.pt-sm-5{padding-top:3rem !important}.pe-sm-0{padding-right:0 !important}.pe-sm-1{padding-right:.25rem !important}.pe-sm-2{padding-right:.5rem !important}.pe-sm-3{padding-right:1rem !important}.pe-sm-4{padding-right:1.5rem !important}.pe-sm-5{padding-right:3rem 
!important}.pb-sm-0{padding-bottom:0 !important}.pb-sm-1{padding-bottom:.25rem !important}.pb-sm-2{padding-bottom:.5rem !important}.pb-sm-3{padding-bottom:1rem !important}.pb-sm-4{padding-bottom:1.5rem !important}.pb-sm-5{padding-bottom:3rem !important}.ps-sm-0{padding-left:0 !important}.ps-sm-1{padding-left:.25rem !important}.ps-sm-2{padding-left:.5rem !important}.ps-sm-3{padding-left:1rem !important}.ps-sm-4{padding-left:1.5rem !important}.ps-sm-5{padding-left:3rem !important}.text-sm-start{text-align:left !important}.text-sm-end{text-align:right !important}.text-sm-center{text-align:center !important}}@media(min-width: 768px){.float-md-start{float:left !important}.float-md-end{float:right !important}.float-md-none{float:none !important}.d-md-inline{display:inline !important}.d-md-inline-block{display:inline-block !important}.d-md-block{display:block !important}.d-md-grid{display:grid !important}.d-md-table{display:table !important}.d-md-table-row{display:table-row !important}.d-md-table-cell{display:table-cell !important}.d-md-flex{display:flex !important}.d-md-inline-flex{display:inline-flex !important}.d-md-none{display:none !important}.flex-md-fill{flex:1 1 auto !important}.flex-md-row{flex-direction:row !important}.flex-md-column{flex-direction:column !important}.flex-md-row-reverse{flex-direction:row-reverse !important}.flex-md-column-reverse{flex-direction:column-reverse !important}.flex-md-grow-0{flex-grow:0 !important}.flex-md-grow-1{flex-grow:1 !important}.flex-md-shrink-0{flex-shrink:0 !important}.flex-md-shrink-1{flex-shrink:1 !important}.flex-md-wrap{flex-wrap:wrap !important}.flex-md-nowrap{flex-wrap:nowrap !important}.flex-md-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-md-0{gap:0 !important}.gap-md-1{gap:.25rem !important}.gap-md-2{gap:.5rem !important}.gap-md-3{gap:1rem !important}.gap-md-4{gap:1.5rem !important}.gap-md-5{gap:3rem !important}.justify-content-md-start{justify-content:flex-start !important}.justify-content-md-end{justify-content:flex-end !important}.justify-content-md-center{justify-content:center !important}.justify-content-md-between{justify-content:space-between !important}.justify-content-md-around{justify-content:space-around !important}.justify-content-md-evenly{justify-content:space-evenly !important}.align-items-md-start{align-items:flex-start !important}.align-items-md-end{align-items:flex-end !important}.align-items-md-center{align-items:center !important}.align-items-md-baseline{align-items:baseline !important}.align-items-md-stretch{align-items:stretch !important}.align-content-md-start{align-content:flex-start !important}.align-content-md-end{align-content:flex-end !important}.align-content-md-center{align-content:center !important}.align-content-md-between{align-content:space-between !important}.align-content-md-around{align-content:space-around !important}.align-content-md-stretch{align-content:stretch !important}.align-self-md-auto{align-self:auto !important}.align-self-md-start{align-self:flex-start !important}.align-self-md-end{align-self:flex-end !important}.align-self-md-center{align-self:center !important}.align-self-md-baseline{align-self:baseline !important}.align-self-md-stretch{align-self:stretch !important}.order-md-first{order:-1 !important}.order-md-0{order:0 !important}.order-md-1{order:1 !important}.order-md-2{order:2 !important}.order-md-3{order:3 !important}.order-md-4{order:4 !important}.order-md-5{order:5 !important}.order-md-last{order:6 !important}.m-md-0{margin:0 !important}.m-md-1{margin:.25rem 
!important}.m-md-2{margin:.5rem !important}.m-md-3{margin:1rem !important}.m-md-4{margin:1.5rem !important}.m-md-5{margin:3rem !important}.m-md-auto{margin:auto !important}.mx-md-0{margin-right:0 !important;margin-left:0 !important}.mx-md-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-md-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-md-3{margin-right:1rem !important;margin-left:1rem !important}.mx-md-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-md-5{margin-right:3rem !important;margin-left:3rem !important}.mx-md-auto{margin-right:auto !important;margin-left:auto !important}.my-md-0{margin-top:0 !important;margin-bottom:0 !important}.my-md-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-md-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-md-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-md-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-md-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-md-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-md-0{margin-top:0 !important}.mt-md-1{margin-top:.25rem !important}.mt-md-2{margin-top:.5rem !important}.mt-md-3{margin-top:1rem !important}.mt-md-4{margin-top:1.5rem !important}.mt-md-5{margin-top:3rem !important}.mt-md-auto{margin-top:auto !important}.me-md-0{margin-right:0 !important}.me-md-1{margin-right:.25rem !important}.me-md-2{margin-right:.5rem !important}.me-md-3{margin-right:1rem !important}.me-md-4{margin-right:1.5rem !important}.me-md-5{margin-right:3rem !important}.me-md-auto{margin-right:auto !important}.mb-md-0{margin-bottom:0 !important}.mb-md-1{margin-bottom:.25rem !important}.mb-md-2{margin-bottom:.5rem !important}.mb-md-3{margin-bottom:1rem !important}.mb-md-4{margin-bottom:1.5rem !important}.mb-md-5{margin-bottom:3rem !important}.mb-md-auto{margin-bottom:auto !important}.ms-md-0{margin-left:0 !important}.ms-md-1{margin-left:.25rem !important}.ms-md-2{margin-left:.5rem !important}.ms-md-3{margin-left:1rem !important}.ms-md-4{margin-left:1.5rem !important}.ms-md-5{margin-left:3rem !important}.ms-md-auto{margin-left:auto !important}.p-md-0{padding:0 !important}.p-md-1{padding:.25rem !important}.p-md-2{padding:.5rem !important}.p-md-3{padding:1rem !important}.p-md-4{padding:1.5rem !important}.p-md-5{padding:3rem !important}.px-md-0{padding-right:0 !important;padding-left:0 !important}.px-md-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-md-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-md-3{padding-right:1rem !important;padding-left:1rem !important}.px-md-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-md-5{padding-right:3rem !important;padding-left:3rem !important}.py-md-0{padding-top:0 !important;padding-bottom:0 !important}.py-md-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-md-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-md-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-md-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-md-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-md-0{padding-top:0 !important}.pt-md-1{padding-top:.25rem !important}.pt-md-2{padding-top:.5rem !important}.pt-md-3{padding-top:1rem !important}.pt-md-4{padding-top:1.5rem !important}.pt-md-5{padding-top:3rem !important}.pe-md-0{padding-right:0 !important}.pe-md-1{padding-right:.25rem !important}.pe-md-2{padding-right:.5rem 
!important}.pe-md-3{padding-right:1rem !important}.pe-md-4{padding-right:1.5rem !important}.pe-md-5{padding-right:3rem !important}.pb-md-0{padding-bottom:0 !important}.pb-md-1{padding-bottom:.25rem !important}.pb-md-2{padding-bottom:.5rem !important}.pb-md-3{padding-bottom:1rem !important}.pb-md-4{padding-bottom:1.5rem !important}.pb-md-5{padding-bottom:3rem !important}.ps-md-0{padding-left:0 !important}.ps-md-1{padding-left:.25rem !important}.ps-md-2{padding-left:.5rem !important}.ps-md-3{padding-left:1rem !important}.ps-md-4{padding-left:1.5rem !important}.ps-md-5{padding-left:3rem !important}.text-md-start{text-align:left !important}.text-md-end{text-align:right !important}.text-md-center{text-align:center !important}}@media(min-width: 992px){.float-lg-start{float:left !important}.float-lg-end{float:right !important}.float-lg-none{float:none !important}.d-lg-inline{display:inline !important}.d-lg-inline-block{display:inline-block !important}.d-lg-block{display:block !important}.d-lg-grid{display:grid !important}.d-lg-table{display:table !important}.d-lg-table-row{display:table-row !important}.d-lg-table-cell{display:table-cell !important}.d-lg-flex{display:flex !important}.d-lg-inline-flex{display:inline-flex !important}.d-lg-none{display:none !important}.flex-lg-fill{flex:1 1 auto !important}.flex-lg-row{flex-direction:row !important}.flex-lg-column{flex-direction:column !important}.flex-lg-row-reverse{flex-direction:row-reverse !important}.flex-lg-column-reverse{flex-direction:column-reverse !important}.flex-lg-grow-0{flex-grow:0 !important}.flex-lg-grow-1{flex-grow:1 !important}.flex-lg-shrink-0{flex-shrink:0 !important}.flex-lg-shrink-1{flex-shrink:1 !important}.flex-lg-wrap{flex-wrap:wrap !important}.flex-lg-nowrap{flex-wrap:nowrap !important}.flex-lg-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-lg-0{gap:0 !important}.gap-lg-1{gap:.25rem !important}.gap-lg-2{gap:.5rem !important}.gap-lg-3{gap:1rem !important}.gap-lg-4{gap:1.5rem !important}.gap-lg-5{gap:3rem !important}.justify-content-lg-start{justify-content:flex-start !important}.justify-content-lg-end{justify-content:flex-end !important}.justify-content-lg-center{justify-content:center !important}.justify-content-lg-between{justify-content:space-between !important}.justify-content-lg-around{justify-content:space-around !important}.justify-content-lg-evenly{justify-content:space-evenly !important}.align-items-lg-start{align-items:flex-start !important}.align-items-lg-end{align-items:flex-end !important}.align-items-lg-center{align-items:center !important}.align-items-lg-baseline{align-items:baseline !important}.align-items-lg-stretch{align-items:stretch !important}.align-content-lg-start{align-content:flex-start !important}.align-content-lg-end{align-content:flex-end !important}.align-content-lg-center{align-content:center !important}.align-content-lg-between{align-content:space-between !important}.align-content-lg-around{align-content:space-around !important}.align-content-lg-stretch{align-content:stretch !important}.align-self-lg-auto{align-self:auto !important}.align-self-lg-start{align-self:flex-start !important}.align-self-lg-end{align-self:flex-end !important}.align-self-lg-center{align-self:center !important}.align-self-lg-baseline{align-self:baseline !important}.align-self-lg-stretch{align-self:stretch !important}.order-lg-first{order:-1 !important}.order-lg-0{order:0 !important}.order-lg-1{order:1 !important}.order-lg-2{order:2 !important}.order-lg-3{order:3 !important}.order-lg-4{order:4 
!important}.order-lg-5{order:5 !important}.order-lg-last{order:6 !important}.m-lg-0{margin:0 !important}.m-lg-1{margin:.25rem !important}.m-lg-2{margin:.5rem !important}.m-lg-3{margin:1rem !important}.m-lg-4{margin:1.5rem !important}.m-lg-5{margin:3rem !important}.m-lg-auto{margin:auto !important}.mx-lg-0{margin-right:0 !important;margin-left:0 !important}.mx-lg-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-lg-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-lg-3{margin-right:1rem !important;margin-left:1rem !important}.mx-lg-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-lg-5{margin-right:3rem !important;margin-left:3rem !important}.mx-lg-auto{margin-right:auto !important;margin-left:auto !important}.my-lg-0{margin-top:0 !important;margin-bottom:0 !important}.my-lg-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-lg-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-lg-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-lg-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-lg-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-lg-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-lg-0{margin-top:0 !important}.mt-lg-1{margin-top:.25rem !important}.mt-lg-2{margin-top:.5rem !important}.mt-lg-3{margin-top:1rem !important}.mt-lg-4{margin-top:1.5rem !important}.mt-lg-5{margin-top:3rem !important}.mt-lg-auto{margin-top:auto !important}.me-lg-0{margin-right:0 !important}.me-lg-1{margin-right:.25rem !important}.me-lg-2{margin-right:.5rem !important}.me-lg-3{margin-right:1rem !important}.me-lg-4{margin-right:1.5rem !important}.me-lg-5{margin-right:3rem !important}.me-lg-auto{margin-right:auto !important}.mb-lg-0{margin-bottom:0 !important}.mb-lg-1{margin-bottom:.25rem !important}.mb-lg-2{margin-bottom:.5rem !important}.mb-lg-3{margin-bottom:1rem !important}.mb-lg-4{margin-bottom:1.5rem !important}.mb-lg-5{margin-bottom:3rem !important}.mb-lg-auto{margin-bottom:auto !important}.ms-lg-0{margin-left:0 !important}.ms-lg-1{margin-left:.25rem !important}.ms-lg-2{margin-left:.5rem !important}.ms-lg-3{margin-left:1rem !important}.ms-lg-4{margin-left:1.5rem !important}.ms-lg-5{margin-left:3rem !important}.ms-lg-auto{margin-left:auto !important}.p-lg-0{padding:0 !important}.p-lg-1{padding:.25rem !important}.p-lg-2{padding:.5rem !important}.p-lg-3{padding:1rem !important}.p-lg-4{padding:1.5rem !important}.p-lg-5{padding:3rem !important}.px-lg-0{padding-right:0 !important;padding-left:0 !important}.px-lg-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-lg-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-lg-3{padding-right:1rem !important;padding-left:1rem !important}.px-lg-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-lg-5{padding-right:3rem !important;padding-left:3rem !important}.py-lg-0{padding-top:0 !important;padding-bottom:0 !important}.py-lg-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-lg-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-lg-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-lg-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-lg-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-lg-0{padding-top:0 !important}.pt-lg-1{padding-top:.25rem !important}.pt-lg-2{padding-top:.5rem !important}.pt-lg-3{padding-top:1rem !important}.pt-lg-4{padding-top:1.5rem !important}.pt-lg-5{padding-top:3rem 
!important}.pe-lg-0{padding-right:0 !important}.pe-lg-1{padding-right:.25rem !important}.pe-lg-2{padding-right:.5rem !important}.pe-lg-3{padding-right:1rem !important}.pe-lg-4{padding-right:1.5rem !important}.pe-lg-5{padding-right:3rem !important}.pb-lg-0{padding-bottom:0 !important}.pb-lg-1{padding-bottom:.25rem !important}.pb-lg-2{padding-bottom:.5rem !important}.pb-lg-3{padding-bottom:1rem !important}.pb-lg-4{padding-bottom:1.5rem !important}.pb-lg-5{padding-bottom:3rem !important}.ps-lg-0{padding-left:0 !important}.ps-lg-1{padding-left:.25rem !important}.ps-lg-2{padding-left:.5rem !important}.ps-lg-3{padding-left:1rem !important}.ps-lg-4{padding-left:1.5rem !important}.ps-lg-5{padding-left:3rem !important}.text-lg-start{text-align:left !important}.text-lg-end{text-align:right !important}.text-lg-center{text-align:center !important}}@media(min-width: 1200px){.float-xl-start{float:left !important}.float-xl-end{float:right !important}.float-xl-none{float:none !important}.d-xl-inline{display:inline !important}.d-xl-inline-block{display:inline-block !important}.d-xl-block{display:block !important}.d-xl-grid{display:grid !important}.d-xl-table{display:table !important}.d-xl-table-row{display:table-row !important}.d-xl-table-cell{display:table-cell !important}.d-xl-flex{display:flex !important}.d-xl-inline-flex{display:inline-flex !important}.d-xl-none{display:none !important}.flex-xl-fill{flex:1 1 auto !important}.flex-xl-row{flex-direction:row !important}.flex-xl-column{flex-direction:column !important}.flex-xl-row-reverse{flex-direction:row-reverse !important}.flex-xl-column-reverse{flex-direction:column-reverse !important}.flex-xl-grow-0{flex-grow:0 !important}.flex-xl-grow-1{flex-grow:1 !important}.flex-xl-shrink-0{flex-shrink:0 !important}.flex-xl-shrink-1{flex-shrink:1 !important}.flex-xl-wrap{flex-wrap:wrap !important}.flex-xl-nowrap{flex-wrap:nowrap !important}.flex-xl-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-xl-0{gap:0 !important}.gap-xl-1{gap:.25rem !important}.gap-xl-2{gap:.5rem !important}.gap-xl-3{gap:1rem !important}.gap-xl-4{gap:1.5rem !important}.gap-xl-5{gap:3rem !important}.justify-content-xl-start{justify-content:flex-start !important}.justify-content-xl-end{justify-content:flex-end !important}.justify-content-xl-center{justify-content:center !important}.justify-content-xl-between{justify-content:space-between !important}.justify-content-xl-around{justify-content:space-around !important}.justify-content-xl-evenly{justify-content:space-evenly !important}.align-items-xl-start{align-items:flex-start !important}.align-items-xl-end{align-items:flex-end !important}.align-items-xl-center{align-items:center !important}.align-items-xl-baseline{align-items:baseline !important}.align-items-xl-stretch{align-items:stretch !important}.align-content-xl-start{align-content:flex-start !important}.align-content-xl-end{align-content:flex-end !important}.align-content-xl-center{align-content:center !important}.align-content-xl-between{align-content:space-between !important}.align-content-xl-around{align-content:space-around !important}.align-content-xl-stretch{align-content:stretch !important}.align-self-xl-auto{align-self:auto !important}.align-self-xl-start{align-self:flex-start !important}.align-self-xl-end{align-self:flex-end !important}.align-self-xl-center{align-self:center !important}.align-self-xl-baseline{align-self:baseline !important}.align-self-xl-stretch{align-self:stretch !important}.order-xl-first{order:-1 !important}.order-xl-0{order:0 !important}.order-xl-1{order:1 
!important}.order-xl-2{order:2 !important}.order-xl-3{order:3 !important}.order-xl-4{order:4 !important}.order-xl-5{order:5 !important}.order-xl-last{order:6 !important}.m-xl-0{margin:0 !important}.m-xl-1{margin:.25rem !important}.m-xl-2{margin:.5rem !important}.m-xl-3{margin:1rem !important}.m-xl-4{margin:1.5rem !important}.m-xl-5{margin:3rem !important}.m-xl-auto{margin:auto !important}.mx-xl-0{margin-right:0 !important;margin-left:0 !important}.mx-xl-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-xl-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-xl-3{margin-right:1rem !important;margin-left:1rem !important}.mx-xl-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-xl-5{margin-right:3rem !important;margin-left:3rem !important}.mx-xl-auto{margin-right:auto !important;margin-left:auto !important}.my-xl-0{margin-top:0 !important;margin-bottom:0 !important}.my-xl-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-xl-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-xl-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-xl-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-xl-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-xl-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-xl-0{margin-top:0 !important}.mt-xl-1{margin-top:.25rem !important}.mt-xl-2{margin-top:.5rem !important}.mt-xl-3{margin-top:1rem !important}.mt-xl-4{margin-top:1.5rem !important}.mt-xl-5{margin-top:3rem !important}.mt-xl-auto{margin-top:auto !important}.me-xl-0{margin-right:0 !important}.me-xl-1{margin-right:.25rem !important}.me-xl-2{margin-right:.5rem !important}.me-xl-3{margin-right:1rem !important}.me-xl-4{margin-right:1.5rem !important}.me-xl-5{margin-right:3rem !important}.me-xl-auto{margin-right:auto !important}.mb-xl-0{margin-bottom:0 !important}.mb-xl-1{margin-bottom:.25rem !important}.mb-xl-2{margin-bottom:.5rem !important}.mb-xl-3{margin-bottom:1rem !important}.mb-xl-4{margin-bottom:1.5rem !important}.mb-xl-5{margin-bottom:3rem !important}.mb-xl-auto{margin-bottom:auto !important}.ms-xl-0{margin-left:0 !important}.ms-xl-1{margin-left:.25rem !important}.ms-xl-2{margin-left:.5rem !important}.ms-xl-3{margin-left:1rem !important}.ms-xl-4{margin-left:1.5rem !important}.ms-xl-5{margin-left:3rem !important}.ms-xl-auto{margin-left:auto !important}.p-xl-0{padding:0 !important}.p-xl-1{padding:.25rem !important}.p-xl-2{padding:.5rem !important}.p-xl-3{padding:1rem !important}.p-xl-4{padding:1.5rem !important}.p-xl-5{padding:3rem !important}.px-xl-0{padding-right:0 !important;padding-left:0 !important}.px-xl-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-xl-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-xl-3{padding-right:1rem !important;padding-left:1rem !important}.px-xl-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-xl-5{padding-right:3rem !important;padding-left:3rem !important}.py-xl-0{padding-top:0 !important;padding-bottom:0 !important}.py-xl-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-xl-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-xl-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-xl-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-xl-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-xl-0{padding-top:0 !important}.pt-xl-1{padding-top:.25rem !important}.pt-xl-2{padding-top:.5rem !important}.pt-xl-3{padding-top:1rem 
!important}.pt-xl-4{padding-top:1.5rem !important}.pt-xl-5{padding-top:3rem !important}.pe-xl-0{padding-right:0 !important}.pe-xl-1{padding-right:.25rem !important}.pe-xl-2{padding-right:.5rem !important}.pe-xl-3{padding-right:1rem !important}.pe-xl-4{padding-right:1.5rem !important}.pe-xl-5{padding-right:3rem !important}.pb-xl-0{padding-bottom:0 !important}.pb-xl-1{padding-bottom:.25rem !important}.pb-xl-2{padding-bottom:.5rem !important}.pb-xl-3{padding-bottom:1rem !important}.pb-xl-4{padding-bottom:1.5rem !important}.pb-xl-5{padding-bottom:3rem !important}.ps-xl-0{padding-left:0 !important}.ps-xl-1{padding-left:.25rem !important}.ps-xl-2{padding-left:.5rem !important}.ps-xl-3{padding-left:1rem !important}.ps-xl-4{padding-left:1.5rem !important}.ps-xl-5{padding-left:3rem !important}.text-xl-start{text-align:left !important}.text-xl-end{text-align:right !important}.text-xl-center{text-align:center !important}}@media(min-width: 1400px){.float-xxl-start{float:left !important}.float-xxl-end{float:right !important}.float-xxl-none{float:none !important}.d-xxl-inline{display:inline !important}.d-xxl-inline-block{display:inline-block !important}.d-xxl-block{display:block !important}.d-xxl-grid{display:grid !important}.d-xxl-table{display:table !important}.d-xxl-table-row{display:table-row !important}.d-xxl-table-cell{display:table-cell !important}.d-xxl-flex{display:flex !important}.d-xxl-inline-flex{display:inline-flex !important}.d-xxl-none{display:none !important}.flex-xxl-fill{flex:1 1 auto !important}.flex-xxl-row{flex-direction:row !important}.flex-xxl-column{flex-direction:column !important}.flex-xxl-row-reverse{flex-direction:row-reverse !important}.flex-xxl-column-reverse{flex-direction:column-reverse !important}.flex-xxl-grow-0{flex-grow:0 !important}.flex-xxl-grow-1{flex-grow:1 !important}.flex-xxl-shrink-0{flex-shrink:0 !important}.flex-xxl-shrink-1{flex-shrink:1 !important}.flex-xxl-wrap{flex-wrap:wrap !important}.flex-xxl-nowrap{flex-wrap:nowrap !important}.flex-xxl-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-xxl-0{gap:0 !important}.gap-xxl-1{gap:.25rem !important}.gap-xxl-2{gap:.5rem !important}.gap-xxl-3{gap:1rem !important}.gap-xxl-4{gap:1.5rem !important}.gap-xxl-5{gap:3rem !important}.justify-content-xxl-start{justify-content:flex-start !important}.justify-content-xxl-end{justify-content:flex-end !important}.justify-content-xxl-center{justify-content:center !important}.justify-content-xxl-between{justify-content:space-between !important}.justify-content-xxl-around{justify-content:space-around !important}.justify-content-xxl-evenly{justify-content:space-evenly !important}.align-items-xxl-start{align-items:flex-start !important}.align-items-xxl-end{align-items:flex-end !important}.align-items-xxl-center{align-items:center !important}.align-items-xxl-baseline{align-items:baseline !important}.align-items-xxl-stretch{align-items:stretch !important}.align-content-xxl-start{align-content:flex-start !important}.align-content-xxl-end{align-content:flex-end !important}.align-content-xxl-center{align-content:center !important}.align-content-xxl-between{align-content:space-between !important}.align-content-xxl-around{align-content:space-around !important}.align-content-xxl-stretch{align-content:stretch !important}.align-self-xxl-auto{align-self:auto !important}.align-self-xxl-start{align-self:flex-start !important}.align-self-xxl-end{align-self:flex-end !important}.align-self-xxl-center{align-self:center !important}.align-self-xxl-baseline{align-self:baseline 
!important}.align-self-xxl-stretch{align-self:stretch !important}.order-xxl-first{order:-1 !important}.order-xxl-0{order:0 !important}.order-xxl-1{order:1 !important}.order-xxl-2{order:2 !important}.order-xxl-3{order:3 !important}.order-xxl-4{order:4 !important}.order-xxl-5{order:5 !important}.order-xxl-last{order:6 !important}.m-xxl-0{margin:0 !important}.m-xxl-1{margin:.25rem !important}.m-xxl-2{margin:.5rem !important}.m-xxl-3{margin:1rem !important}.m-xxl-4{margin:1.5rem !important}.m-xxl-5{margin:3rem !important}.m-xxl-auto{margin:auto !important}.mx-xxl-0{margin-right:0 !important;margin-left:0 !important}.mx-xxl-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-xxl-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-xxl-3{margin-right:1rem !important;margin-left:1rem !important}.mx-xxl-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-xxl-5{margin-right:3rem !important;margin-left:3rem !important}.mx-xxl-auto{margin-right:auto !important;margin-left:auto !important}.my-xxl-0{margin-top:0 !important;margin-bottom:0 !important}.my-xxl-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-xxl-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-xxl-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-xxl-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-xxl-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-xxl-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-xxl-0{margin-top:0 !important}.mt-xxl-1{margin-top:.25rem !important}.mt-xxl-2{margin-top:.5rem !important}.mt-xxl-3{margin-top:1rem !important}.mt-xxl-4{margin-top:1.5rem !important}.mt-xxl-5{margin-top:3rem !important}.mt-xxl-auto{margin-top:auto !important}.me-xxl-0{margin-right:0 !important}.me-xxl-1{margin-right:.25rem !important}.me-xxl-2{margin-right:.5rem !important}.me-xxl-3{margin-right:1rem !important}.me-xxl-4{margin-right:1.5rem !important}.me-xxl-5{margin-right:3rem !important}.me-xxl-auto{margin-right:auto !important}.mb-xxl-0{margin-bottom:0 !important}.mb-xxl-1{margin-bottom:.25rem !important}.mb-xxl-2{margin-bottom:.5rem !important}.mb-xxl-3{margin-bottom:1rem !important}.mb-xxl-4{margin-bottom:1.5rem !important}.mb-xxl-5{margin-bottom:3rem !important}.mb-xxl-auto{margin-bottom:auto !important}.ms-xxl-0{margin-left:0 !important}.ms-xxl-1{margin-left:.25rem !important}.ms-xxl-2{margin-left:.5rem !important}.ms-xxl-3{margin-left:1rem !important}.ms-xxl-4{margin-left:1.5rem !important}.ms-xxl-5{margin-left:3rem !important}.ms-xxl-auto{margin-left:auto !important}.p-xxl-0{padding:0 !important}.p-xxl-1{padding:.25rem !important}.p-xxl-2{padding:.5rem !important}.p-xxl-3{padding:1rem !important}.p-xxl-4{padding:1.5rem !important}.p-xxl-5{padding:3rem !important}.px-xxl-0{padding-right:0 !important;padding-left:0 !important}.px-xxl-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-xxl-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-xxl-3{padding-right:1rem !important;padding-left:1rem !important}.px-xxl-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-xxl-5{padding-right:3rem !important;padding-left:3rem !important}.py-xxl-0{padding-top:0 !important;padding-bottom:0 !important}.py-xxl-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-xxl-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-xxl-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-xxl-4{padding-top:1.5rem 
!important;padding-bottom:1.5rem !important}.py-xxl-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-xxl-0{padding-top:0 !important}.pt-xxl-1{padding-top:.25rem !important}.pt-xxl-2{padding-top:.5rem !important}.pt-xxl-3{padding-top:1rem !important}.pt-xxl-4{padding-top:1.5rem !important}.pt-xxl-5{padding-top:3rem !important}.pe-xxl-0{padding-right:0 !important}.pe-xxl-1{padding-right:.25rem !important}.pe-xxl-2{padding-right:.5rem !important}.pe-xxl-3{padding-right:1rem !important}.pe-xxl-4{padding-right:1.5rem !important}.pe-xxl-5{padding-right:3rem !important}.pb-xxl-0{padding-bottom:0 !important}.pb-xxl-1{padding-bottom:.25rem !important}.pb-xxl-2{padding-bottom:.5rem !important}.pb-xxl-3{padding-bottom:1rem !important}.pb-xxl-4{padding-bottom:1.5rem !important}.pb-xxl-5{padding-bottom:3rem !important}.ps-xxl-0{padding-left:0 !important}.ps-xxl-1{padding-left:.25rem !important}.ps-xxl-2{padding-left:.5rem !important}.ps-xxl-3{padding-left:1rem !important}.ps-xxl-4{padding-left:1.5rem !important}.ps-xxl-5{padding-left:3rem !important}.text-xxl-start{text-align:left !important}.text-xxl-end{text-align:right !important}.text-xxl-center{text-align:center !important}}.bg-default{color:#fff}.bg-primary{color:#fff}.bg-secondary{color:#fff}.bg-success{color:#fff}.bg-info{color:#fff}.bg-warning{color:#fff}.bg-danger{color:#fff}.bg-light{color:#000}.bg-dark{color:#fff}@media(min-width: 1200px){.fs-1{font-size:2rem !important}.fs-2{font-size:1.65rem !important}.fs-3{font-size:1.45rem !important}}@media print{.d-print-inline{display:inline !important}.d-print-inline-block{display:inline-block !important}.d-print-block{display:block !important}.d-print-grid{display:grid !important}.d-print-table{display:table !important}.d-print-table-row{display:table-row !important}.d-print-table-cell{display:table-cell !important}.d-print-flex{display:flex !important}.d-print-inline-flex{display:inline-flex !important}.d-print-none{display:none !important}}.sidebar-item .chapter-number{color:#373a3c}.quarto-container{min-height:calc(100vh - 132px)}footer.footer .nav-footer,#quarto-header>nav{padding-left:1em;padding-right:1em}nav[role=doc-toc]{padding-left:.5em}#quarto-content>*{padding-top:14px}@media(max-width: 991.98px){#quarto-content>*{padding-top:0}#quarto-content .subtitle{padding-top:14px}#quarto-content section:first-of-type h2:first-of-type,#quarto-content section:first-of-type .h2:first-of-type{margin-top:1rem}}.headroom-target,header.headroom{will-change:transform;transition:position 200ms linear;transition:all 200ms linear}header.headroom--pinned{transform:translateY(0%)}header.headroom--unpinned{transform:translateY(-100%)}.navbar-container{width:100%}.navbar-brand{overflow:hidden;text-overflow:ellipsis}.navbar-brand-container{max-width:calc(100% - 115px);min-width:0;display:flex;align-items:center}@media(min-width: 992px){.navbar-brand-container{margin-right:1em}}.navbar-brand.navbar-brand-logo{margin-right:4px;display:inline-flex}.navbar-toggler{flex-basis:content;flex-shrink:0}.navbar .navbar-brand-container{order:2}.navbar .navbar-toggler{order:1}.navbar .navbar-collapse{order:4}.navbar #quarto-search{order:3}.navbar .navbar-toggler{margin-right:.5em}.navbar-logo{max-height:24px;width:auto;padding-right:4px}nav .nav-item:not(.compact){padding-top:1px}nav .nav-link i,nav .dropdown-item i{padding-right:1px}.navbar-expand-lg .navbar-nav .nav-link{padding-left:.6rem;padding-right:.6rem}nav .nav-item.compact .nav-link{padding-left:.5rem;padding-right:.5rem;font-size:1.1rem}.navbar 
.quarto-navbar-tools div.dropdown{display:inline-block}.navbar .quarto-navbar-tools .quarto-navigation-tool{color:#545555}.navbar .quarto-navbar-tools .quarto-navigation-tool:hover{color:#1a5698}@media(max-width: 991.98px){.navbar .quarto-navbar-tools{margin-top:.25em;padding-top:.75em;display:block;color:solid #d4d4d4 1px;text-align:center;vertical-align:middle;margin-right:auto}}.navbar-nav .dropdown-menu{min-width:220px;font-size:.9rem}.navbar .navbar-nav .nav-link.dropdown-toggle::after{opacity:.75;vertical-align:.175em}.navbar ul.dropdown-menu{padding-top:0;padding-bottom:0}.navbar .dropdown-header{text-transform:uppercase;font-size:.8rem;padding:0 .5rem}.navbar .dropdown-item{padding:.4rem .5rem}.navbar .dropdown-item>i.bi{margin-left:.1rem;margin-right:.25em}.sidebar #quarto-search{margin-top:-1px}.sidebar #quarto-search svg.aa-SubmitIcon{width:16px;height:16px}.sidebar-navigation a{color:inherit}.sidebar-title{margin-top:.25rem;padding-bottom:.5rem;font-size:1.3rem;line-height:1.6rem;visibility:visible}.sidebar-title>a{font-size:inherit;text-decoration:none}.sidebar-title .sidebar-tools-main{margin-top:-6px}@media(max-width: 991.98px){#quarto-sidebar div.sidebar-header{padding-top:.2em}}.sidebar-header-stacked .sidebar-title{margin-top:.6rem}.sidebar-logo{max-width:90%;padding-bottom:.5rem}.sidebar-logo-link{text-decoration:none}.sidebar-navigation li a{text-decoration:none}.sidebar-navigation .quarto-navigation-tool{opacity:.7;font-size:.875rem}#quarto-sidebar>nav>.sidebar-tools-main{margin-left:14px}.sidebar-tools-main{display:inline-flex;margin-left:0px;order:2}.sidebar-tools-main:not(.tools-wide){vertical-align:middle}.sidebar-navigation .quarto-navigation-tool.dropdown-toggle::after{display:none}.sidebar.sidebar-navigation>*{padding-top:1em}.sidebar-item{margin-bottom:.2em}.sidebar-section{margin-top:.2em;padding-left:.5em;padding-bottom:.2em}.sidebar-item .sidebar-item-container{display:flex;justify-content:space-between}.sidebar-item-toggle:hover{cursor:pointer}.sidebar-item .sidebar-item-toggle .bi{font-size:.7rem;text-align:center}.sidebar-item .sidebar-item-toggle .bi-chevron-right::before{transition:transform 200ms ease}.sidebar-item .sidebar-item-toggle[aria-expanded=false] .bi-chevron-right::before{transform:none}.sidebar-item .sidebar-item-toggle[aria-expanded=true] .bi-chevron-right::before{transform:rotate(90deg)}.sidebar-navigation .sidebar-divider{margin-left:0;margin-right:0;margin-top:.5rem;margin-bottom:.5rem}@media(max-width: 991.98px){.quarto-secondary-nav{display:block}.quarto-secondary-nav button.quarto-search-button{padding-right:0em;padding-left:2em}.quarto-secondary-nav button.quarto-btn-toggle{margin-left:-0.75rem;margin-right:.15rem}.quarto-secondary-nav nav.quarto-page-breadcrumbs{display:flex;align-items:center;padding-right:1em;margin-left:-0.25em}.quarto-secondary-nav nav.quarto-page-breadcrumbs a{text-decoration:none}.quarto-secondary-nav nav.quarto-page-breadcrumbs ol.breadcrumb{margin-bottom:0}}@media(min-width: 992px){.quarto-secondary-nav{display:none}}.quarto-secondary-nav .quarto-btn-toggle{color:#595959}.quarto-secondary-nav[aria-expanded=false] .quarto-btn-toggle .bi-chevron-right::before{transform:none}.quarto-secondary-nav[aria-expanded=true] .quarto-btn-toggle .bi-chevron-right::before{transform:rotate(90deg)}.quarto-secondary-nav .quarto-btn-toggle .bi-chevron-right::before{transition:transform 200ms ease}.quarto-secondary-nav{cursor:pointer}.quarto-secondary-nav-title{margin-top:.3em;color:#595959;padding-top:4px}.quarto-secondary-nav 
nav.quarto-page-breadcrumbs{color:#595959}.quarto-secondary-nav nav.quarto-page-breadcrumbs a{color:#595959}.quarto-secondary-nav nav.quarto-page-breadcrumbs a:hover{color:rgba(27,88,157,.8)}.quarto-secondary-nav nav.quarto-page-breadcrumbs .breadcrumb-item::before{color:#8c8c8c}div.sidebar-item-container{color:#595959}div.sidebar-item-container:hover,div.sidebar-item-container:focus{color:rgba(27,88,157,.8)}div.sidebar-item-container.disabled{color:rgba(89,89,89,.75)}div.sidebar-item-container .active,div.sidebar-item-container .show>.nav-link,div.sidebar-item-container .sidebar-link>code{color:#1b589d}div.sidebar.sidebar-navigation.rollup.quarto-sidebar-toggle-contents,nav.sidebar.sidebar-navigation:not(.rollup){background-color:#fff}@media(max-width: 991.98px){.sidebar-navigation .sidebar-item a,.nav-page .nav-page-text,.sidebar-navigation{font-size:1rem}.sidebar-navigation ul.sidebar-section.depth1 .sidebar-section-item{font-size:1.1rem}.sidebar-logo{display:none}.sidebar.sidebar-navigation{position:static;border-bottom:1px solid #dee2e6}.sidebar.sidebar-navigation.collapsing{position:fixed;z-index:1000}.sidebar.sidebar-navigation.show{position:fixed;z-index:1000}.sidebar.sidebar-navigation{min-height:100%}nav.quarto-secondary-nav{background-color:#fff;border-bottom:1px solid #dee2e6}.sidebar .sidebar-footer{visibility:visible;padding-top:1rem;position:inherit}.sidebar-tools-collapse{display:block}}#quarto-sidebar{transition:width .15s ease-in}#quarto-sidebar>*{padding-right:1em}@media(max-width: 991.98px){#quarto-sidebar .sidebar-menu-container{white-space:nowrap;min-width:225px}#quarto-sidebar.show{transition:width .15s ease-out}}@media(min-width: 992px){#quarto-sidebar{display:flex;flex-direction:column}.nav-page .nav-page-text,.sidebar-navigation .sidebar-section .sidebar-item{font-size:.875rem}.sidebar-navigation .sidebar-item{font-size:.925rem}.sidebar.sidebar-navigation{display:block;position:sticky}.sidebar-search{width:100%}.sidebar .sidebar-footer{visibility:visible}}@media(max-width: 991.98px){#quarto-sidebar-glass{position:fixed;top:0;bottom:0;left:0;right:0;background-color:rgba(255,255,255,0);transition:background-color .15s ease-in;z-index:-1}#quarto-sidebar-glass.collapsing{z-index:1000}#quarto-sidebar-glass.show{transition:background-color .15s ease-out;background-color:rgba(102,102,102,.4);z-index:1000}}.sidebar .sidebar-footer{padding:.5rem 1rem;align-self:flex-end;color:#6c757d;width:100%}.quarto-page-breadcrumbs .breadcrumb-item+.breadcrumb-item,.quarto-page-breadcrumbs .breadcrumb-item{padding-right:.33em;padding-left:0}.quarto-page-breadcrumbs .breadcrumb-item::before{padding-right:.33em}.quarto-sidebar-footer{font-size:.875em}.sidebar-section .bi-chevron-right{vertical-align:middle}.sidebar-section .bi-chevron-right::before{font-size:.9em}.notransition{-webkit-transition:none !important;-moz-transition:none !important;-o-transition:none !important;transition:none !important}.btn:focus:not(:focus-visible){box-shadow:none}.page-navigation{display:flex;justify-content:space-between}.nav-page{padding-bottom:.75em}.nav-page .bi{font-size:1.8rem;vertical-align:middle}.nav-page .nav-page-text{padding-left:.25em;padding-right:.25em}.nav-page a{color:#6c757d;text-decoration:none;display:flex;align-items:center}.nav-page a:hover{color:#1f66b6}.toc-actions{display:flex}.toc-actions p{margin-block-start:0;margin-block-end:0}.toc-actions a{text-decoration:none;color:inherit;font-weight:400}.toc-actions a:hover{color:#1f66b6}.toc-actions .action-links{margin-left:4px}.sidebar 
nav[role=doc-toc] .toc-actions .bi{margin-left:-4px;font-size:.7rem;color:#6c757d}.sidebar nav[role=doc-toc] .toc-actions .bi:before{padding-top:3px}#quarto-margin-sidebar .toc-actions .bi:before{margin-top:.3rem;font-size:.7rem;color:#6c757d;vertical-align:top}.sidebar nav[role=doc-toc] .toc-actions>div:first-of-type{margin-top:-3px}#quarto-margin-sidebar .toc-actions p,.sidebar nav[role=doc-toc] .toc-actions p{font-size:.875rem}.nav-footer .toc-actions{padding-bottom:.5em;padding-top:.5em}.nav-footer .toc-actions :first-child{margin-left:auto}.nav-footer .toc-actions :last-child{margin-right:auto}.nav-footer .toc-actions .action-links{display:flex}.nav-footer .toc-actions .action-links p{padding-right:1.5em}.nav-footer .toc-actions .action-links p:last-of-type{padding-right:0}.nav-footer{display:flex;flex-direction:row;flex-wrap:wrap;justify-content:space-between;align-items:baseline;text-align:center;padding-top:.5rem;padding-bottom:.5rem;background-color:#fff}body.nav-fixed{padding-top:64px}.nav-footer-contents{color:#6c757d;margin-top:.25rem}.nav-footer{min-height:3.5em;color:#757575}.nav-footer a{color:#757575}.nav-footer .nav-footer-left{font-size:.825em}.nav-footer .nav-footer-center{font-size:.825em}.nav-footer .nav-footer-right{font-size:.825em}.nav-footer-left .footer-items,.nav-footer-center .footer-items,.nav-footer-right .footer-items{display:inline-flex;padding-top:.3em;padding-bottom:.3em;margin-bottom:0em}.nav-footer-left .footer-items .nav-link,.nav-footer-center .footer-items .nav-link,.nav-footer-right .footer-items .nav-link{padding-left:.6em;padding-right:.6em}.nav-footer-left{flex:1 1 0px;text-align:left}.nav-footer-right{flex:1 1 0px;text-align:right}.nav-footer-center{flex:1 1 0px;min-height:3em;text-align:center}.nav-footer-center .footer-items{justify-content:center}@media(max-width: 767.98px){.nav-footer-center{margin-top:3em}}.navbar .quarto-reader-toggle.reader .quarto-reader-toggle-btn{background-color:#545555;border-radius:3px}.quarto-reader-toggle.reader.quarto-navigation-tool .quarto-reader-toggle-btn{background-color:#595959;border-radius:3px}.quarto-reader-toggle .quarto-reader-toggle-btn{display:inline-flex;padding-left:.2em;padding-right:.2em;margin-left:-0.2em;margin-right:-0.2em;text-align:center}.navbar .quarto-reader-toggle:not(.reader) .bi::before{background-image:url('data:image/svg+xml,')}.navbar .quarto-reader-toggle.reader .bi::before{background-image:url('data:image/svg+xml,')}.sidebar-navigation .quarto-reader-toggle:not(.reader) .bi::before{background-image:url('data:image/svg+xml,')}.sidebar-navigation .quarto-reader-toggle.reader .bi::before{background-image:url('data:image/svg+xml,')}#quarto-back-to-top{display:none;position:fixed;bottom:50px;background-color:#fff;border-radius:.25rem;box-shadow:0 .2rem .5rem #6c757d,0 0 .05rem #6c757d;color:#6c757d;text-decoration:none;font-size:.9em;text-align:center;left:50%;padding:.4rem .8rem;transform:translate(-50%, 0)}.aa-DetachedOverlay ul.aa-List,#quarto-search-results ul.aa-List{list-style:none;padding-left:0}.aa-DetachedOverlay .aa-Panel,#quarto-search-results .aa-Panel{background-color:#fff;position:absolute;z-index:2000}#quarto-search-results .aa-Panel{max-width:400px}#quarto-search input{font-size:.925rem}@media(min-width: 992px){.navbar #quarto-search{margin-left:.25rem;order:999}}@media(max-width: 991.98px){#quarto-sidebar .sidebar-search{display:none}}#quarto-sidebar .sidebar-search .aa-Autocomplete{width:100%}.navbar .aa-Autocomplete .aa-Form{width:180px}.navbar 
#quarto-search.type-overlay .aa-Autocomplete{width:40px}.navbar #quarto-search.type-overlay .aa-Autocomplete .aa-Form{background-color:inherit;border:none}.navbar #quarto-search.type-overlay .aa-Autocomplete .aa-Form:focus-within{box-shadow:none;outline:none}.navbar #quarto-search.type-overlay .aa-Autocomplete .aa-Form .aa-InputWrapper{display:none}.navbar #quarto-search.type-overlay .aa-Autocomplete .aa-Form .aa-InputWrapper:focus-within{display:inherit}.navbar #quarto-search.type-overlay .aa-Autocomplete .aa-Form .aa-Label svg,.navbar #quarto-search.type-overlay .aa-Autocomplete .aa-Form .aa-LoadingIndicator svg{width:26px;height:26px;color:#545555;opacity:1}.navbar #quarto-search.type-overlay .aa-Autocomplete svg.aa-SubmitIcon{width:26px;height:26px;color:#545555;opacity:1}.aa-Autocomplete .aa-Form,.aa-DetachedFormContainer .aa-Form{align-items:center;background-color:#fff;border:1px solid #ced4da;border-radius:.25rem;color:#373a3c;display:flex;line-height:1em;margin:0;position:relative;width:100%}.aa-Autocomplete .aa-Form:focus-within,.aa-DetachedFormContainer .aa-Form:focus-within{box-shadow:rgba(39,128,227,.6) 0 0 0 1px;outline:currentColor none medium}.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix{align-items:center;display:flex;flex-shrink:0;order:1}.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix .aa-Label,.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix .aa-Label,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator{cursor:initial;flex-shrink:0;padding:0;text-align:left}.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix .aa-Label svg,.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator svg,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix .aa-Label svg,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator svg{color:#373a3c;opacity:.5}.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix .aa-SubmitButton,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix .aa-SubmitButton{appearance:none;background:none;border:0;margin:0}.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator{align-items:center;display:flex;justify-content:center}.aa-Autocomplete .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator[hidden],.aa-DetachedFormContainer .aa-Form .aa-InputWrapperPrefix .aa-LoadingIndicator[hidden]{display:none}.aa-Autocomplete .aa-Form .aa-InputWrapper,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper{order:3;position:relative;width:100%}.aa-Autocomplete .aa-Form .aa-InputWrapper .aa-Input,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper .aa-Input{appearance:none;background:none;border:0;color:#373a3c;font:inherit;height:calc(1.5em + .1rem + 2px);padding:0;width:100%}.aa-Autocomplete .aa-Form .aa-InputWrapper .aa-Input::placeholder,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper .aa-Input::placeholder{color:#373a3c;opacity:.8}.aa-Autocomplete .aa-Form .aa-InputWrapper .aa-Input:focus,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper .aa-Input:focus{border-color:none;box-shadow:none;outline:none}.aa-Autocomplete .aa-Form .aa-InputWrapper .aa-Input::-webkit-search-decoration,.aa-Autocomplete .aa-Form .aa-InputWrapper .aa-Input::-webkit-search-cancel-button,.aa-Autocomplete .aa-Form .aa-InputWrapper .aa-Input::-webkit-search-results-button,.aa-Autocomplete .aa-Form 
.aa-InputWrapper .aa-Input::-webkit-search-results-decoration,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper .aa-Input::-webkit-search-decoration,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper .aa-Input::-webkit-search-cancel-button,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper .aa-Input::-webkit-search-results-button,.aa-DetachedFormContainer .aa-Form .aa-InputWrapper .aa-Input::-webkit-search-results-decoration{display:none}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix{align-items:center;display:flex;order:4}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-ClearButton,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-ClearButton{align-items:center;background:none;border:0;color:#373a3c;opacity:.8;cursor:pointer;display:flex;margin:0;width:calc(1.5em + .1rem + 2px)}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-ClearButton:hover,.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-ClearButton:focus,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-ClearButton:hover,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-ClearButton:focus{color:#373a3c;opacity:.8}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-ClearButton[hidden],.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-ClearButton[hidden]{display:none}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-ClearButton svg,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-ClearButton svg{width:calc(1.5em + 0.75rem + 2px)}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-CopyButton,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-CopyButton{border:none;align-items:center;background:none;color:#373a3c;opacity:.4;font-size:.7rem;cursor:pointer;display:none;margin:0;width:calc(1em + .1rem + 2px)}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-CopyButton:hover,.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-CopyButton:focus,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-CopyButton:hover,.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-CopyButton:focus{color:#373a3c;opacity:.8}.aa-Autocomplete .aa-Form .aa-InputWrapperSuffix .aa-CopyButton[hidden],.aa-DetachedFormContainer .aa-Form .aa-InputWrapperSuffix .aa-CopyButton[hidden]{display:none}.aa-PanelLayout:empty{display:none}.quarto-search-no-results.no-query{display:none}.aa-Source:has(.no-query){display:none}#quarto-search-results .aa-Panel{border:solid #ced4da 1px}#quarto-search-results .aa-SourceNoResults{width:398px}.aa-DetachedOverlay .aa-Panel,#quarto-search-results .aa-Panel{max-height:65vh;overflow-y:auto;font-size:.925rem}.aa-DetachedOverlay .aa-SourceNoResults,#quarto-search-results .aa-SourceNoResults{height:60px;display:flex;justify-content:center;align-items:center}.aa-DetachedOverlay .search-error,#quarto-search-results .search-error{padding-top:10px;padding-left:20px;padding-right:20px;cursor:default}.aa-DetachedOverlay .search-error .search-error-title,#quarto-search-results .search-error .search-error-title{font-size:1.1rem;margin-bottom:.5rem}.aa-DetachedOverlay .search-error .search-error-title .search-error-icon,#quarto-search-results .search-error .search-error-title .search-error-icon{margin-right:8px}.aa-DetachedOverlay .search-error .search-error-text,#quarto-search-results .search-error .search-error-text{font-weight:300}.aa-DetachedOverlay .search-result-text,#quarto-search-results 
.search-result-text{font-weight:300;overflow:hidden;text-overflow:ellipsis;display:-webkit-box;-webkit-line-clamp:2;-webkit-box-orient:vertical;line-height:1.2rem;max-height:2.4rem}.aa-DetachedOverlay .aa-SourceHeader .search-result-header,#quarto-search-results .aa-SourceHeader .search-result-header{font-size:.875rem;background-color:#f2f2f2;padding-left:14px;padding-bottom:4px;padding-top:4px}.aa-DetachedOverlay .aa-SourceHeader .search-result-header-no-results,#quarto-search-results .aa-SourceHeader .search-result-header-no-results{display:none}.aa-DetachedOverlay .aa-SourceFooter .algolia-search-logo,#quarto-search-results .aa-SourceFooter .algolia-search-logo{width:110px;opacity:.85;margin:8px;float:right}.aa-DetachedOverlay .search-result-section,#quarto-search-results .search-result-section{font-size:.925em}.aa-DetachedOverlay a.search-result-link,#quarto-search-results a.search-result-link{color:inherit;text-decoration:none}.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item,#quarto-search-results li.aa-Item[aria-selected=true] .search-item{background-color:#2780e3}.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item.search-result-more,.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item .search-result-section,.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item .search-result-text,.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item .search-result-title-container,.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item .search-result-text-container,#quarto-search-results li.aa-Item[aria-selected=true] .search-item.search-result-more,#quarto-search-results li.aa-Item[aria-selected=true] .search-item .search-result-section,#quarto-search-results li.aa-Item[aria-selected=true] .search-item .search-result-text,#quarto-search-results li.aa-Item[aria-selected=true] .search-item .search-result-title-container,#quarto-search-results li.aa-Item[aria-selected=true] .search-item .search-result-text-container{color:#fff;background-color:#2780e3}.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item mark.search-match,.aa-DetachedOverlay li.aa-Item[aria-selected=true] .search-item .search-match.mark,#quarto-search-results li.aa-Item[aria-selected=true] .search-item mark.search-match,#quarto-search-results li.aa-Item[aria-selected=true] .search-item .search-match.mark{color:#fff;background-color:#4b95e8}.aa-DetachedOverlay li.aa-Item[aria-selected=false] .search-item,#quarto-search-results li.aa-Item[aria-selected=false] .search-item{background-color:#fff}.aa-DetachedOverlay li.aa-Item[aria-selected=false] .search-item.search-result-more,.aa-DetachedOverlay li.aa-Item[aria-selected=false] .search-item .search-result-section,.aa-DetachedOverlay li.aa-Item[aria-selected=false] .search-item .search-result-text,.aa-DetachedOverlay li.aa-Item[aria-selected=false] .search-item .search-result-title-container,.aa-DetachedOverlay li.aa-Item[aria-selected=false] .search-item .search-result-text-container,#quarto-search-results li.aa-Item[aria-selected=false] .search-item.search-result-more,#quarto-search-results li.aa-Item[aria-selected=false] .search-item .search-result-section,#quarto-search-results li.aa-Item[aria-selected=false] .search-item .search-result-text,#quarto-search-results li.aa-Item[aria-selected=false] .search-item .search-result-title-container,#quarto-search-results li.aa-Item[aria-selected=false] .search-item .search-result-text-container{color:#373a3c}.aa-DetachedOverlay li.aa-Item[aria-selected=false] 
.search-item mark.search-match,.aa-DetachedOverlay li.aa-Item[aria-selected=false] .search-item .search-match.mark,#quarto-search-results li.aa-Item[aria-selected=false] .search-item mark.search-match,#quarto-search-results li.aa-Item[aria-selected=false] .search-item .search-match.mark{color:inherit;background-color:#e5effc}.aa-DetachedOverlay .aa-Item .search-result-doc:not(.document-selectable) .search-result-title-container,#quarto-search-results .aa-Item .search-result-doc:not(.document-selectable) .search-result-title-container{background-color:#fff;color:#373a3c}.aa-DetachedOverlay .aa-Item .search-result-doc:not(.document-selectable) .search-result-text-container,#quarto-search-results .aa-Item .search-result-doc:not(.document-selectable) .search-result-text-container{padding-top:0px}.aa-DetachedOverlay li.aa-Item .search-result-doc.document-selectable .search-result-text-container,#quarto-search-results li.aa-Item .search-result-doc.document-selectable .search-result-text-container{margin-top:-4px}.aa-DetachedOverlay .aa-Item,#quarto-search-results .aa-Item{cursor:pointer}.aa-DetachedOverlay .aa-Item .search-item,#quarto-search-results .aa-Item .search-item{border-left:none;border-right:none;border-top:none;background-color:#fff;border-color:#ced4da;color:#373a3c}.aa-DetachedOverlay .aa-Item .search-item p,#quarto-search-results .aa-Item .search-item p{margin-top:0;margin-bottom:0}.aa-DetachedOverlay .aa-Item .search-item i.bi,#quarto-search-results .aa-Item .search-item i.bi{padding-left:8px;padding-right:8px;font-size:1.3em}.aa-DetachedOverlay .aa-Item .search-item .search-result-title,#quarto-search-results .aa-Item .search-item .search-result-title{margin-top:.3em;margin-bottom:.1rem}.aa-DetachedOverlay .aa-Item .search-result-title-container,#quarto-search-results .aa-Item .search-result-title-container{font-size:1em;display:flex;padding:6px 4px 6px 4px}.aa-DetachedOverlay .aa-Item .search-result-text-container,#quarto-search-results .aa-Item .search-result-text-container{padding-bottom:8px;padding-right:8px;margin-left:44px}.aa-DetachedOverlay .aa-Item .search-result-doc-section,.aa-DetachedOverlay .aa-Item .search-result-more,#quarto-search-results .aa-Item .search-result-doc-section,#quarto-search-results .aa-Item .search-result-more{padding-top:8px;padding-bottom:8px;padding-left:44px}.aa-DetachedOverlay .aa-Item .search-result-more,#quarto-search-results .aa-Item .search-result-more{font-size:.8em;font-weight:400}.aa-DetachedOverlay .aa-Item .search-result-doc,#quarto-search-results .aa-Item .search-result-doc{border-top:1px solid #ced4da}.aa-DetachedSearchButton{background:none;border:none}.aa-DetachedSearchButton .aa-DetachedSearchButtonPlaceholder{display:none}.navbar .aa-DetachedSearchButton .aa-DetachedSearchButtonIcon{color:#545555}.sidebar-tools-collapse #quarto-search,.sidebar-tools-main #quarto-search{display:inline}.sidebar-tools-collapse #quarto-search .aa-Autocomplete,.sidebar-tools-main #quarto-search .aa-Autocomplete{display:inline}.sidebar-tools-collapse #quarto-search .aa-DetachedSearchButton,.sidebar-tools-main #quarto-search .aa-DetachedSearchButton{padding-left:4px;padding-right:4px}.sidebar-tools-collapse #quarto-search .aa-DetachedSearchButton .aa-DetachedSearchButtonIcon,.sidebar-tools-main #quarto-search .aa-DetachedSearchButton .aa-DetachedSearchButtonIcon{color:#595959}.sidebar-tools-collapse #quarto-search .aa-DetachedSearchButton .aa-DetachedSearchButtonIcon .aa-SubmitIcon,.sidebar-tools-main #quarto-search .aa-DetachedSearchButton 
.aa-DetachedSearchButtonIcon .aa-SubmitIcon{margin-top:-3px}.aa-DetachedContainer{background:rgba(255,255,255,.65);width:90%;bottom:0;box-shadow:rgba(206,212,218,.6) 0 0 0 1px;outline:currentColor none medium;display:flex;flex-direction:column;left:0;margin:0;overflow:hidden;padding:0;position:fixed;right:0;top:0;z-index:1101}.aa-DetachedContainer::after{height:32px}.aa-DetachedContainer .aa-SourceHeader{margin:var(--aa-spacing-half) 0 var(--aa-spacing-half) 2px}.aa-DetachedContainer .aa-Panel{background-color:#fff;border-radius:0;box-shadow:none;flex-grow:1;margin:0;padding:0;position:relative}.aa-DetachedContainer .aa-PanelLayout{bottom:0;box-shadow:none;left:0;margin:0;max-height:none;overflow-y:auto;position:absolute;right:0;top:0;width:100%}.aa-DetachedFormContainer{background-color:#fff;border-bottom:1px solid #ced4da;display:flex;flex-direction:row;justify-content:space-between;margin:0;padding:.5em}.aa-DetachedCancelButton{background:none;font-size:.8em;border:0;border-radius:3px;color:#373a3c;cursor:pointer;margin:0 0 0 .5em;padding:0 .5em}.aa-DetachedCancelButton:hover,.aa-DetachedCancelButton:focus{box-shadow:rgba(39,128,227,.6) 0 0 0 1px;outline:currentColor none medium}.aa-DetachedContainer--modal{bottom:inherit;height:auto;margin:0 auto;position:absolute;top:100px;border-radius:6px;max-width:850px}@media(max-width: 575.98px){.aa-DetachedContainer--modal{width:100%;top:0px;border-radius:0px;border:none}}.aa-DetachedContainer--modal .aa-PanelLayout{max-height:var(--aa-detached-modal-max-height);padding-bottom:var(--aa-spacing-half);position:static}.aa-Detached{height:100vh;overflow:hidden}.aa-DetachedOverlay{background-color:rgba(55,58,60,.4);position:fixed;left:0;right:0;top:0;margin:0;padding:0;height:100vh;z-index:1100}.quarto-listing{padding-bottom:1em}.listing-pagination{padding-top:.5em}ul.pagination{float:right;padding-left:8px;padding-top:.5em}ul.pagination li{padding-right:.75em}ul.pagination li.disabled a,ul.pagination li.active a{color:#373a3c;text-decoration:none}ul.pagination li:last-of-type{padding-right:0}.listing-actions-group{display:flex}.quarto-listing-filter{margin-bottom:1em;width:200px;margin-left:auto}.quarto-listing-sort{margin-bottom:1em;margin-right:auto;width:auto}.quarto-listing-sort .input-group-text{font-size:.8em}.input-group-text{border-right:none}.quarto-listing-sort select.form-select{font-size:.8em}.listing-no-matching{text-align:center;padding-top:2em;padding-bottom:3em;font-size:1em}#quarto-margin-sidebar .quarto-listing-category{padding-top:0;font-size:1rem}#quarto-margin-sidebar .quarto-listing-category-title{cursor:pointer;font-weight:600;font-size:1rem}.quarto-listing-category .category{cursor:pointer}.quarto-listing-category .category.active{font-weight:600}.quarto-listing-category.category-cloud{display:flex;flex-wrap:wrap;align-items:baseline}.quarto-listing-category.category-cloud .category{padding-right:5px}.quarto-listing-category.category-cloud .category-cloud-1{font-size:.75em}.quarto-listing-category.category-cloud .category-cloud-2{font-size:.95em}.quarto-listing-category.category-cloud .category-cloud-3{font-size:1.15em}.quarto-listing-category.category-cloud .category-cloud-4{font-size:1.35em}.quarto-listing-category.category-cloud .category-cloud-5{font-size:1.55em}.quarto-listing-category.category-cloud .category-cloud-6{font-size:1.75em}.quarto-listing-category.category-cloud .category-cloud-7{font-size:1.95em}.quarto-listing-category.category-cloud .category-cloud-8{font-size:2.15em}.quarto-listing-category.category-cloud 
.category-cloud-9{font-size:2.35em}.quarto-listing-category.category-cloud .category-cloud-10{font-size:2.55em}.quarto-listing-cols-1{grid-template-columns:repeat(1, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-1{grid-template-columns:repeat(1, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-1{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-2{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-2{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-2{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-3{grid-template-columns:repeat(3, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-3{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-3{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-4{grid-template-columns:repeat(4, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-4{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-4{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-5{grid-template-columns:repeat(5, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-5{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-5{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-6{grid-template-columns:repeat(6, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-6{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-6{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-7{grid-template-columns:repeat(7, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-7{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-7{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-8{grid-template-columns:repeat(8, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-8{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-8{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-9{grid-template-columns:repeat(9, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-9{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-9{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-10{grid-template-columns:repeat(10, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-10{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-10{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-11{grid-template-columns:repeat(11, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-11{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 575.98px){.quarto-listing-cols-11{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-cols-12{grid-template-columns:repeat(12, minmax(0, 1fr));gap:1.5em}@media(max-width: 767.98px){.quarto-listing-cols-12{grid-template-columns:repeat(2, minmax(0, 1fr));gap:1.5em}}@media(max-width: 
575.98px){.quarto-listing-cols-12{grid-template-columns:minmax(0, 1fr);gap:1.5em}}.quarto-listing-grid{gap:1.5em}.quarto-grid-item.borderless{border:none}.quarto-grid-item.borderless .listing-categories .listing-category:last-of-type,.quarto-grid-item.borderless .listing-categories .listing-category:first-of-type{padding-left:0}.quarto-grid-item.borderless .listing-categories .listing-category{border:0}.quarto-grid-link{text-decoration:none;color:inherit}.quarto-grid-link:hover{text-decoration:none;color:inherit}.quarto-grid-item h5.title,.quarto-grid-item .title.h5{margin-top:0;margin-bottom:0}.quarto-grid-item .card-footer{display:flex;justify-content:space-between;font-size:.8em}.quarto-grid-item .card-footer p{margin-bottom:0}.quarto-grid-item p.card-img-top{margin-bottom:0}.quarto-grid-item p.card-img-top>img{object-fit:cover}.quarto-grid-item .card-other-values{margin-top:.5em;font-size:.8em}.quarto-grid-item .card-other-values tr{margin-bottom:.5em}.quarto-grid-item .card-other-values tr>td:first-of-type{font-weight:600;padding-right:1em;padding-left:1em;vertical-align:top}.quarto-grid-item div.post-contents{display:flex;flex-direction:column;text-decoration:none;height:100%}.quarto-grid-item .listing-item-img-placeholder{background-color:#adb5bd;flex-shrink:0}.quarto-grid-item .card-attribution{padding-top:1em;display:flex;gap:1em;text-transform:uppercase;color:#6c757d;font-weight:500;flex-grow:10;align-items:flex-end}.quarto-grid-item .description{padding-bottom:1em}.quarto-grid-item .card-attribution .date{align-self:flex-end}.quarto-grid-item .card-attribution.justify{justify-content:space-between}.quarto-grid-item .card-attribution.start{justify-content:flex-start}.quarto-grid-item .card-attribution.end{justify-content:flex-end}.quarto-grid-item .card-title{margin-bottom:.1em}.quarto-grid-item .card-subtitle{padding-top:.25em}.quarto-grid-item .card-text{font-size:.9em}.quarto-grid-item .listing-reading-time{padding-bottom:.25em}.quarto-grid-item .card-text-small{font-size:.8em}.quarto-grid-item .card-subtitle.subtitle{font-size:.9em;font-weight:600;padding-bottom:.5em}.quarto-grid-item .listing-categories{display:flex;flex-wrap:wrap;padding-bottom:5px}.quarto-grid-item .listing-categories .listing-category{color:#6c757d;border:solid 1px #dee2e6;border-radius:.25rem;text-transform:uppercase;font-size:.65em;padding-left:.5em;padding-right:.5em;padding-top:.15em;padding-bottom:.15em;cursor:pointer;margin-right:4px;margin-bottom:4px}.quarto-grid-item.card-right{text-align:right}.quarto-grid-item.card-right .listing-categories{justify-content:flex-end}.quarto-grid-item.card-left{text-align:left}.quarto-grid-item.card-center{text-align:center}.quarto-grid-item.card-center .listing-description{text-align:justify}.quarto-grid-item.card-center .listing-categories{justify-content:center}table.quarto-listing-table td.image{padding:0px}table.quarto-listing-table td.image img{width:100%;max-width:50px;object-fit:contain}table.quarto-listing-table a{text-decoration:none}table.quarto-listing-table th a{color:inherit}table.quarto-listing-table th a.asc:after{margin-bottom:-2px;margin-left:5px;display:inline-block;height:1rem;width:1rem;background-repeat:no-repeat;background-size:1rem 1rem;background-image:url('data:image/svg+xml,');content:""}table.quarto-listing-table th a.desc:after{margin-bottom:-2px;margin-left:5px;display:inline-block;height:1rem;width:1rem;background-repeat:no-repeat;background-size:1rem 
1rem;background-image:url('data:image/svg+xml,');content:""}table.quarto-listing-table.table-hover td{cursor:pointer}.quarto-post.image-left{flex-direction:row}.quarto-post.image-right{flex-direction:row-reverse}@media(max-width: 767.98px){.quarto-post.image-right,.quarto-post.image-left{gap:0em;flex-direction:column}.quarto-post .metadata{padding-bottom:1em;order:2}.quarto-post .body{order:1}.quarto-post .thumbnail{order:3}}.list.quarto-listing-default div:last-of-type{border-bottom:none}@media(min-width: 992px){.quarto-listing-container-default{margin-right:2em}}div.quarto-post{display:flex;gap:2em;margin-bottom:1.5em;border-bottom:1px solid #dee2e6}@media(max-width: 767.98px){div.quarto-post{padding-bottom:1em}}div.quarto-post .metadata{flex-basis:20%;flex-grow:0;margin-top:.2em;flex-shrink:10}div.quarto-post .thumbnail{flex-basis:30%;flex-grow:0;flex-shrink:0}div.quarto-post .thumbnail img{margin-top:.4em;width:100%;object-fit:cover}div.quarto-post .body{flex-basis:45%;flex-grow:1;flex-shrink:0}div.quarto-post .body h3.listing-title,div.quarto-post .body .listing-title.h3{margin-top:0px;margin-bottom:0px;border-bottom:none}div.quarto-post .body .listing-subtitle{font-size:.875em;margin-bottom:.5em;margin-top:.2em}div.quarto-post .body .description{font-size:.9em}div.quarto-post a{color:#373a3c;display:flex;flex-direction:column;text-decoration:none}div.quarto-post a div.description{flex-shrink:0}div.quarto-post .metadata{display:flex;flex-direction:column;font-size:.8em;font-family:var(--bs-font-sans-serif);flex-basis:33%}div.quarto-post .listing-categories{display:flex;flex-wrap:wrap;padding-bottom:5px}div.quarto-post .listing-categories .listing-category{color:#6c757d;border:solid 1px #dee2e6;border-radius:.25rem;text-transform:uppercase;font-size:.65em;padding-left:.5em;padding-right:.5em;padding-top:.15em;padding-bottom:.15em;cursor:pointer;margin-right:4px;margin-bottom:4px}div.quarto-post .listing-description{margin-bottom:.5em}div.quarto-about-jolla{display:flex !important;flex-direction:column;align-items:center;margin-top:10%;padding-bottom:1em}div.quarto-about-jolla .about-image{object-fit:cover;margin-left:auto;margin-right:auto;margin-bottom:1.5em}div.quarto-about-jolla img.round{border-radius:50%}div.quarto-about-jolla img.rounded{border-radius:10px}div.quarto-about-jolla .quarto-title h1.title,div.quarto-about-jolla .quarto-title .title.h1{text-align:center}div.quarto-about-jolla .quarto-title .description{text-align:center}div.quarto-about-jolla h2,div.quarto-about-jolla .h2{border-bottom:none}div.quarto-about-jolla .about-sep{width:60%}div.quarto-about-jolla main{text-align:center}div.quarto-about-jolla .about-links{display:flex}@media(min-width: 992px){div.quarto-about-jolla .about-links{flex-direction:row;column-gap:.8em;row-gap:15px;flex-wrap:wrap}}@media(max-width: 991.98px){div.quarto-about-jolla .about-links{flex-direction:column;row-gap:1em;width:100%;padding-bottom:1.5em}}div.quarto-about-jolla .about-link{color:#686d71;text-decoration:none;border:solid 1px}@media(min-width: 992px){div.quarto-about-jolla .about-link{font-size:.8em;padding:.25em .5em;border-radius:4px}}@media(max-width: 991.98px){div.quarto-about-jolla .about-link{font-size:1.1em;padding:.5em .5em;text-align:center;border-radius:6px}}div.quarto-about-jolla .about-link:hover{color:#2780e3}div.quarto-about-jolla .about-link i.bi{margin-right:.15em}div.quarto-about-solana{display:flex !important;flex-direction:column;padding-top:3em !important;padding-bottom:1em}div.quarto-about-solana 
.about-entity{display:flex !important;align-items:start;justify-content:space-between}@media(min-width: 992px){div.quarto-about-solana .about-entity{flex-direction:row}}@media(max-width: 991.98px){div.quarto-about-solana .about-entity{flex-direction:column-reverse;align-items:center;text-align:center}}div.quarto-about-solana .about-entity .entity-contents{display:flex;flex-direction:column}@media(max-width: 767.98px){div.quarto-about-solana .about-entity .entity-contents{width:100%}}div.quarto-about-solana .about-entity .about-image{object-fit:cover}@media(max-width: 991.98px){div.quarto-about-solana .about-entity .about-image{margin-bottom:1.5em}}div.quarto-about-solana .about-entity img.round{border-radius:50%}div.quarto-about-solana .about-entity img.rounded{border-radius:10px}div.quarto-about-solana .about-entity .about-links{display:flex;justify-content:left;padding-bottom:1.2em}@media(min-width: 992px){div.quarto-about-solana .about-entity .about-links{flex-direction:row;column-gap:.8em;row-gap:15px;flex-wrap:wrap}}@media(max-width: 991.98px){div.quarto-about-solana .about-entity .about-links{flex-direction:column;row-gap:1em;width:100%;padding-bottom:1.5em}}div.quarto-about-solana .about-entity .about-link{color:#686d71;text-decoration:none;border:solid 1px}@media(min-width: 992px){div.quarto-about-solana .about-entity .about-link{font-size:.8em;padding:.25em .5em;border-radius:4px}}@media(max-width: 991.98px){div.quarto-about-solana .about-entity .about-link{font-size:1.1em;padding:.5em .5em;text-align:center;border-radius:6px}}div.quarto-about-solana .about-entity .about-link:hover{color:#2780e3}div.quarto-about-solana .about-entity .about-link i.bi{margin-right:.15em}div.quarto-about-solana .about-contents{padding-right:1.5em;flex-basis:0;flex-grow:1}div.quarto-about-solana .about-contents main.content{margin-top:0}div.quarto-about-solana .about-contents h2,div.quarto-about-solana .about-contents .h2{border-bottom:none}div.quarto-about-trestles{display:flex !important;flex-direction:row;padding-top:3em !important;padding-bottom:1em}@media(max-width: 991.98px){div.quarto-about-trestles{flex-direction:column;padding-top:0em !important}}div.quarto-about-trestles .about-entity{display:flex !important;flex-direction:column;align-items:center;text-align:center;padding-right:1em}@media(min-width: 992px){div.quarto-about-trestles .about-entity{flex:0 0 42%}}div.quarto-about-trestles .about-entity .about-image{object-fit:cover;margin-bottom:1.5em}div.quarto-about-trestles .about-entity img.round{border-radius:50%}div.quarto-about-trestles .about-entity img.rounded{border-radius:10px}div.quarto-about-trestles .about-entity .about-links{display:flex;justify-content:center}@media(min-width: 992px){div.quarto-about-trestles .about-entity .about-links{flex-direction:row;column-gap:.8em;row-gap:15px;flex-wrap:wrap}}@media(max-width: 991.98px){div.quarto-about-trestles .about-entity .about-links{flex-direction:column;row-gap:1em;width:100%;padding-bottom:1.5em}}div.quarto-about-trestles .about-entity .about-link{color:#686d71;text-decoration:none;border:solid 1px}@media(min-width: 992px){div.quarto-about-trestles .about-entity .about-link{font-size:.8em;padding:.25em .5em;border-radius:4px}}@media(max-width: 991.98px){div.quarto-about-trestles .about-entity .about-link{font-size:1.1em;padding:.5em .5em;text-align:center;border-radius:6px}}div.quarto-about-trestles .about-entity .about-link:hover{color:#2780e3}div.quarto-about-trestles .about-entity .about-link 
i.bi{margin-right:.15em}div.quarto-about-trestles .about-contents{flex-basis:0;flex-grow:1}div.quarto-about-trestles .about-contents h2,div.quarto-about-trestles .about-contents .h2{border-bottom:none}@media(min-width: 992px){div.quarto-about-trestles .about-contents{border-left:solid 1px #dee2e6;padding-left:1.5em}}div.quarto-about-trestles .about-contents main.content{margin-top:0}div.quarto-about-marquee{padding-bottom:1em}div.quarto-about-marquee .about-contents{display:flex;flex-direction:column}div.quarto-about-marquee .about-image{max-height:550px;margin-bottom:1.5em;object-fit:cover}div.quarto-about-marquee img.round{border-radius:50%}div.quarto-about-marquee img.rounded{border-radius:10px}div.quarto-about-marquee h2,div.quarto-about-marquee .h2{border-bottom:none}div.quarto-about-marquee .about-links{display:flex;justify-content:center;padding-top:1.5em}@media(min-width: 992px){div.quarto-about-marquee .about-links{flex-direction:row;column-gap:.8em;row-gap:15px;flex-wrap:wrap}}@media(max-width: 991.98px){div.quarto-about-marquee .about-links{flex-direction:column;row-gap:1em;width:100%;padding-bottom:1.5em}}div.quarto-about-marquee .about-link{color:#686d71;text-decoration:none;border:solid 1px}@media(min-width: 992px){div.quarto-about-marquee .about-link{font-size:.8em;padding:.25em .5em;border-radius:4px}}@media(max-width: 991.98px){div.quarto-about-marquee .about-link{font-size:1.1em;padding:.5em .5em;text-align:center;border-radius:6px}}div.quarto-about-marquee .about-link:hover{color:#2780e3}div.quarto-about-marquee .about-link i.bi{margin-right:.15em}@media(min-width: 992px){div.quarto-about-marquee .about-link{border:none}}div.quarto-about-broadside{display:flex;flex-direction:column;padding-bottom:1em}div.quarto-about-broadside .about-main{display:flex !important;padding-top:0 !important}@media(min-width: 992px){div.quarto-about-broadside .about-main{flex-direction:row;align-items:flex-start}}@media(max-width: 991.98px){div.quarto-about-broadside .about-main{flex-direction:column}}@media(max-width: 991.98px){div.quarto-about-broadside .about-main .about-entity{flex-shrink:0;width:100%;height:450px;margin-bottom:1.5em;background-size:cover;background-repeat:no-repeat}}@media(min-width: 992px){div.quarto-about-broadside .about-main .about-entity{flex:0 10 50%;margin-right:1.5em;width:100%;height:100%;background-size:100%;background-repeat:no-repeat}}div.quarto-about-broadside .about-main .about-contents{padding-top:14px;flex:0 0 50%}div.quarto-about-broadside h2,div.quarto-about-broadside .h2{border-bottom:none}div.quarto-about-broadside .about-sep{margin-top:1.5em;width:60%;align-self:center}div.quarto-about-broadside .about-links{display:flex;justify-content:center;column-gap:20px;padding-top:1.5em}@media(min-width: 992px){div.quarto-about-broadside .about-links{flex-direction:row;column-gap:.8em;row-gap:15px;flex-wrap:wrap}}@media(max-width: 991.98px){div.quarto-about-broadside .about-links{flex-direction:column;row-gap:1em;width:100%;padding-bottom:1.5em}}div.quarto-about-broadside .about-link{color:#686d71;text-decoration:none;border:solid 1px}@media(min-width: 992px){div.quarto-about-broadside .about-link{font-size:.8em;padding:.25em .5em;border-radius:4px}}@media(max-width: 991.98px){div.quarto-about-broadside .about-link{font-size:1.1em;padding:.5em .5em;text-align:center;border-radius:6px}}div.quarto-about-broadside .about-link:hover{color:#2780e3}div.quarto-about-broadside .about-link i.bi{margin-right:.15em}@media(min-width: 992px){div.quarto-about-broadside 
.about-link{border:none}}.tippy-box[data-theme~=quarto]{background-color:#fff;border:solid 1px #dee2e6;border-radius:.25rem;color:#373a3c;font-size:.875rem}.tippy-box[data-theme~=quarto]>.tippy-backdrop{background-color:#fff}.tippy-box[data-theme~=quarto]>.tippy-arrow:after,.tippy-box[data-theme~=quarto]>.tippy-svg-arrow:after{content:"";position:absolute;z-index:-1}.tippy-box[data-theme~=quarto]>.tippy-arrow:after{border-color:rgba(0,0,0,0);border-style:solid}.tippy-box[data-placement^=top]>.tippy-arrow:before{bottom:-6px}.tippy-box[data-placement^=bottom]>.tippy-arrow:before{top:-6px}.tippy-box[data-placement^=right]>.tippy-arrow:before{left:-6px}.tippy-box[data-placement^=left]>.tippy-arrow:before{right:-6px}.tippy-box[data-theme~=quarto][data-placement^=top]>.tippy-arrow:before{border-top-color:#fff}.tippy-box[data-theme~=quarto][data-placement^=top]>.tippy-arrow:after{border-top-color:#dee2e6;border-width:7px 7px 0;top:17px;left:1px}.tippy-box[data-theme~=quarto][data-placement^=top]>.tippy-svg-arrow>svg{top:16px}.tippy-box[data-theme~=quarto][data-placement^=top]>.tippy-svg-arrow:after{top:17px}.tippy-box[data-theme~=quarto][data-placement^=bottom]>.tippy-arrow:before{border-bottom-color:#fff;bottom:16px}.tippy-box[data-theme~=quarto][data-placement^=bottom]>.tippy-arrow:after{border-bottom-color:#dee2e6;border-width:0 7px 7px;bottom:17px;left:1px}.tippy-box[data-theme~=quarto][data-placement^=bottom]>.tippy-svg-arrow>svg{bottom:15px}.tippy-box[data-theme~=quarto][data-placement^=bottom]>.tippy-svg-arrow:after{bottom:17px}.tippy-box[data-theme~=quarto][data-placement^=left]>.tippy-arrow:before{border-left-color:#fff}.tippy-box[data-theme~=quarto][data-placement^=left]>.tippy-arrow:after{border-left-color:#dee2e6;border-width:7px 0 7px 7px;left:17px;top:1px}.tippy-box[data-theme~=quarto][data-placement^=left]>.tippy-svg-arrow>svg{left:11px}.tippy-box[data-theme~=quarto][data-placement^=left]>.tippy-svg-arrow:after{left:12px}.tippy-box[data-theme~=quarto][data-placement^=right]>.tippy-arrow:before{border-right-color:#fff;right:16px}.tippy-box[data-theme~=quarto][data-placement^=right]>.tippy-arrow:after{border-width:7px 7px 7px 0;right:17px;top:1px;border-right-color:#dee2e6}.tippy-box[data-theme~=quarto][data-placement^=right]>.tippy-svg-arrow>svg{right:11px}.tippy-box[data-theme~=quarto][data-placement^=right]>.tippy-svg-arrow:after{right:12px}.tippy-box[data-theme~=quarto]>.tippy-svg-arrow{fill:#373a3c}.tippy-box[data-theme~=quarto]>.tippy-svg-arrow:after{background-image:url(data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTYiIGhlaWdodD0iNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj48cGF0aCBkPSJNMCA2czEuNzk2LS4wMTMgNC42Ny0zLjYxNUM1Ljg1MS45IDYuOTMuMDA2IDggMGMxLjA3LS4wMDYgMi4xNDguODg3IDMuMzQzIDIuMzg1QzE0LjIzMyA2LjAwNSAxNiA2IDE2IDZIMHoiIGZpbGw9InJnYmEoMCwgOCwgMTYsIDAuMikiLz48L3N2Zz4=);background-size:16px 6px;width:16px;height:6px}.top-right{position:absolute;top:1em;right:1em}.hidden{display:none !important}.zindex-bottom{z-index:-1 !important}.quarto-layout-panel{margin-bottom:1em}.quarto-layout-panel>figure{width:100%}.quarto-layout-panel>figure>figcaption,.quarto-layout-panel>.panel-caption{margin-top:10pt}.quarto-layout-panel>.table-caption{margin-top:0px}.table-caption 
p{margin-bottom:.5em}.quarto-layout-row{display:flex;flex-direction:row;align-items:flex-start}.quarto-layout-valign-top{align-items:flex-start}.quarto-layout-valign-bottom{align-items:flex-end}.quarto-layout-valign-center{align-items:center}.quarto-layout-cell{position:relative;margin-right:20px}.quarto-layout-cell:last-child{margin-right:0}.quarto-layout-cell figure,.quarto-layout-cell>p{margin:.2em}.quarto-layout-cell img{max-width:100%}.quarto-layout-cell .html-widget{width:100% !important}.quarto-layout-cell div figure p{margin:0}.quarto-layout-cell figure{display:inline-block;margin-inline-start:0;margin-inline-end:0}.quarto-layout-cell table{display:inline-table}.quarto-layout-cell-subref figcaption,figure .quarto-layout-row figure figcaption{text-align:center;font-style:italic}.quarto-figure{position:relative;margin-bottom:1em}.quarto-figure>figure{width:100%;margin-bottom:0}.quarto-figure-left>figure>p,.quarto-figure-left>figure>div{text-align:left}.quarto-figure-center>figure>p,.quarto-figure-center>figure>div{text-align:center}.quarto-figure-right>figure>p,.quarto-figure-right>figure>div{text-align:right}figure>p:empty{display:none}figure>p:first-child{margin-top:0;margin-bottom:0}figure>figcaption{margin-top:.5em}div[id^=tbl-]{position:relative}.quarto-figure>.anchorjs-link{position:absolute;top:.6em;right:.5em}div[id^=tbl-]>.anchorjs-link{position:absolute;top:.7em;right:.3em}.quarto-figure:hover>.anchorjs-link,div[id^=tbl-]:hover>.anchorjs-link,h2:hover>.anchorjs-link,.h2:hover>.anchorjs-link,h3:hover>.anchorjs-link,.h3:hover>.anchorjs-link,h4:hover>.anchorjs-link,.h4:hover>.anchorjs-link,h5:hover>.anchorjs-link,.h5:hover>.anchorjs-link,h6:hover>.anchorjs-link,.h6:hover>.anchorjs-link,.reveal-anchorjs-link>.anchorjs-link{opacity:1}#title-block-header{margin-block-end:1rem;position:relative;margin-top:-1px}#title-block-header .abstract{margin-block-start:1rem}#title-block-header .abstract .abstract-title{font-weight:600}#title-block-header a{text-decoration:none}#title-block-header .author,#title-block-header .date,#title-block-header .doi{margin-block-end:.2rem}#title-block-header .quarto-title-block>div{display:flex}#title-block-header .quarto-title-block>div>h1,#title-block-header .quarto-title-block>div>.h1{flex-grow:1}#title-block-header .quarto-title-block>div>button{flex-shrink:0;height:2.25rem;margin-top:0}@media(min-width: 992px){#title-block-header .quarto-title-block>div>button{margin-top:5px}}tr.header>th>p:last-of-type{margin-bottom:0px}table,.table{caption-side:top;margin-bottom:1.5rem}caption,.table-caption{padding-top:.5rem;padding-bottom:.5rem;text-align:center}.utterances{max-width:none;margin-left:-8px}iframe{margin-bottom:1em}details{margin-bottom:1em}details[show]{margin-bottom:0}details>summary{color:#6c757d}details>summary>p:only-child{display:inline}pre.sourceCode,code.sourceCode{position:relative}p code:not(.sourceCode){white-space:pre-wrap}code{white-space:pre}@media print{code{white-space:pre-wrap}}pre>code{display:block}pre>code.sourceCode{white-space:pre}pre>code.sourceCode>span>a:first-child::before{text-decoration:none}pre.code-overflow-wrap>code.sourceCode{white-space:pre-wrap}pre.code-overflow-scroll>code.sourceCode{white-space:pre}code a:any-link{color:inherit;text-decoration:none}code a:hover{color:inherit;text-decoration:underline}ul.task-list{padding-left:1em}[data-tippy-root]{display:inline-block}.tippy-content 
.footnote-back{display:none}.quarto-embedded-source-code{display:none}.quarto-unresolved-ref{font-weight:600}.quarto-cover-image{max-width:35%;float:right;margin-left:30px}.cell-output-display .widget-subarea{margin-bottom:1em}.cell-output-display:not(.no-overflow-x),.knitsql-table:not(.no-overflow-x){overflow-x:auto}.panel-input{margin-bottom:1em}.panel-input>div,.panel-input>div>div{display:inline-block;vertical-align:top;padding-right:12px}.panel-input>p:last-child{margin-bottom:0}.layout-sidebar{margin-bottom:1em}.layout-sidebar .tab-content{border:none}.tab-content>.page-columns.active{display:grid}div.sourceCode>iframe{width:100%;height:300px;margin-bottom:-0.5em}div.ansi-escaped-output{font-family:monospace;display:block}/*! +* +* ansi colors from IPython notebook's +* +*/.ansi-black-fg{color:#3e424d}.ansi-black-bg{background-color:#3e424d}.ansi-black-intense-fg{color:#282c36}.ansi-black-intense-bg{background-color:#282c36}.ansi-red-fg{color:#e75c58}.ansi-red-bg{background-color:#e75c58}.ansi-red-intense-fg{color:#b22b31}.ansi-red-intense-bg{background-color:#b22b31}.ansi-green-fg{color:#00a250}.ansi-green-bg{background-color:#00a250}.ansi-green-intense-fg{color:#007427}.ansi-green-intense-bg{background-color:#007427}.ansi-yellow-fg{color:#ddb62b}.ansi-yellow-bg{background-color:#ddb62b}.ansi-yellow-intense-fg{color:#b27d12}.ansi-yellow-intense-bg{background-color:#b27d12}.ansi-blue-fg{color:#208ffb}.ansi-blue-bg{background-color:#208ffb}.ansi-blue-intense-fg{color:#0065ca}.ansi-blue-intense-bg{background-color:#0065ca}.ansi-magenta-fg{color:#d160c4}.ansi-magenta-bg{background-color:#d160c4}.ansi-magenta-intense-fg{color:#a03196}.ansi-magenta-intense-bg{background-color:#a03196}.ansi-cyan-fg{color:#60c6c8}.ansi-cyan-bg{background-color:#60c6c8}.ansi-cyan-intense-fg{color:#258f8f}.ansi-cyan-intense-bg{background-color:#258f8f}.ansi-white-fg{color:#c5c1b4}.ansi-white-bg{background-color:#c5c1b4}.ansi-white-intense-fg{color:#a1a6b2}.ansi-white-intense-bg{background-color:#a1a6b2}.ansi-default-inverse-fg{color:#fff}.ansi-default-inverse-bg{background-color:#000}.ansi-bold{font-weight:bold}.ansi-underline{text-decoration:underline}:root{--quarto-body-bg: #fff;--quarto-body-color: #373a3c;--quarto-text-muted: #6c757d;--quarto-border-color: #dee2e6;--quarto-border-width: 1px;--quarto-border-radius: 0.25rem}table.gt_table{color:var(--quarto-body-color);font-size:1em;width:100%;background-color:rgba(0,0,0,0);border-top-width:inherit;border-bottom-width:inherit;border-color:var(--quarto-border-color)}table.gt_table th.gt_column_spanner_outer{color:var(--quarto-body-color);background-color:rgba(0,0,0,0);border-top-width:inherit;border-bottom-width:inherit;border-color:var(--quarto-border-color)}table.gt_table th.gt_col_heading{color:var(--quarto-body-color);font-weight:bold;background-color:rgba(0,0,0,0)}table.gt_table thead.gt_col_headings{border-bottom:1px solid currentColor;border-top-width:inherit;border-top-color:var(--quarto-border-color)}table.gt_table thead.gt_col_headings:not(:first-child){border-top-width:1px;border-top-color:var(--quarto-border-color)}table.gt_table td.gt_row{border-bottom-width:1px;border-bottom-color:var(--quarto-border-color);border-top-width:0px}table.gt_table 
tbody.gt_table_body{border-top-width:1px;border-bottom-width:1px;border-bottom-color:var(--quarto-border-color);border-top-color:currentColor}div.columns{display:initial;gap:initial}div.column{display:inline-block;overflow-x:initial;vertical-align:top;width:50%}.code-annotation-tip-content{word-wrap:break-word}.code-annotation-container-hidden{display:none !important}dl.code-annotation-container-grid{display:grid;grid-template-columns:min-content auto}dl.code-annotation-container-grid dt{grid-column:1}dl.code-annotation-container-grid dd{grid-column:2}pre.sourceCode.code-annotation-code{padding-right:0}code.sourceCode .code-annotation-anchor{z-index:100;position:absolute;right:.5em;left:inherit;background-color:rgba(0,0,0,0)}:root{--mermaid-bg-color: #fff;--mermaid-edge-color: #373a3c;--mermaid-node-fg-color: #373a3c;--mermaid-fg-color: #373a3c;--mermaid-fg-color--lighter: #4f5457;--mermaid-fg-color--lightest: #686d71;--mermaid-font-family: Source Sans Pro, -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol;--mermaid-label-bg-color: #fff;--mermaid-label-fg-color: #2780e3;--mermaid-node-bg-color: rgba(39, 128, 227, 0.1);--mermaid-node-fg-color: #373a3c}@media print{:root{font-size:11pt}#quarto-sidebar,#TOC,.nav-page{display:none}.page-columns .content{grid-column-start:page-start}.fixed-top{position:relative}.panel-caption,.figure-caption,figcaption{color:#666}}.code-copy-button{position:absolute;top:0;right:0;border:0;margin-top:5px;margin-right:5px;background-color:rgba(0,0,0,0);z-index:3}.code-copy-button:focus{outline:none}.code-copy-button-tooltip{font-size:.75em}pre.sourceCode:hover>.code-copy-button>.bi::before{display:inline-block;height:1rem;width:1rem;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:1rem 1rem}pre.sourceCode:hover>.code-copy-button-checked>.bi::before{background-image:url('data:image/svg+xml,')}pre.sourceCode:hover>.code-copy-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}pre.sourceCode:hover>.code-copy-button-checked:hover>.bi::before{background-image:url('data:image/svg+xml,')}main ol ol,main ul ul,main ol ul,main ul ol{margin-bottom:1em}ul>li:not(:has(>p))>ul,ol>li:not(:has(>p))>ul,ul>li:not(:has(>p))>ol,ol>li:not(:has(>p))>ol{margin-bottom:0}ul>li:not(:has(>p))>ul>li:has(>p),ol>li:not(:has(>p))>ul>li:has(>p),ul>li:not(:has(>p))>ol>li:has(>p),ol>li:not(:has(>p))>ol>li:has(>p){margin-top:1rem}body{margin:0}main.page-columns>header>h1.title,main.page-columns>header>.title.h1{margin-bottom:0}@media(min-width: 992px){body .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.fullcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] 35px [page-end-inset page-end] 5fr [screen-end-inset] 1.5em}body.slimcontent:not(.floating):not(.docked) 
.page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.listing:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 3em [body-end] 50px [body-end-outset] minmax(0px, 250px) [page-end-inset] minmax(50px, 100px) [page-end] 1fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 175px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 175px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] minmax(25px, 50px) [page-start-inset] minmax(50px, 150px) [body-start-outset] minmax(25px, 50px) [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] minmax(25px, 50px) [body-end-outset] minmax(50px, 150px) [page-end-inset] minmax(25px, 50px) [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(50px, 100px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 50px [page-start-inset] minmax(50px, 150px) [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.slimcontent 
.page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(450px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 50px [page-start-inset] minmax(50px, 150px) [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(450px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(50px, 150px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] minmax(25px, 50px) [page-start-inset] minmax(50px, 150px) [body-start-outset] minmax(25px, 50px) [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] minmax(25px, 50px) [body-end-outset] minmax(50px, 150px) [page-end-inset] minmax(25px, 50px) [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}}@media(max-width: 991.98px){body .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.fullcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.slimcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.listing:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 1250px - 3em )) [body-content-end body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 145px) 
[body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 145px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1.5em [body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(75px, 150px) [page-end-inset] 25px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 4fr [screen-end-inset] 1.5em [screen-end]}body.docked.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 4fr [screen-end-inset] 1.5em [screen-end]}body.floating.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 
1.5em [body-end] 50px [body-end-outset] minmax(75px, 150px) [page-end-inset] 25px [page-end] 4fr [screen-end-inset] 1.5em [screen-end]}}@media(max-width: 767.98px){body .page-columns,body.fullcontent:not(.floating):not(.docked) .page-columns,body.slimcontent:not(.floating):not(.docked) .page-columns,body.docked .page-columns,body.docked.slimcontent .page-columns,body.docked.fullcontent .page-columns,body.floating .page-columns,body.floating.slimcontent .page-columns,body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}nav[role=doc-toc]{display:none}}body,.page-row-navigation{grid-template-rows:[page-top] max-content [contents-top] max-content [contents-bottom] max-content [page-bottom]}.page-rows-contents{grid-template-rows:[content-top] minmax(max-content, 1fr) [content-bottom] minmax(60px, max-content) [page-bottom]}.page-full{grid-column:screen-start/screen-end !important}.page-columns>*{grid-column:body-content-start/body-content-end}.page-columns.column-page>*{grid-column:page-start/page-end}.page-columns.column-page-left>*{grid-column:page-start/body-content-end}.page-columns.column-page-right>*{grid-column:body-content-start/page-end}.page-rows{grid-auto-rows:auto}.header{grid-column:screen-start/screen-end;grid-row:page-top/contents-top}#quarto-content{padding:0;grid-column:screen-start/screen-end;grid-row:contents-top/contents-bottom}body.floating .sidebar.sidebar-navigation{grid-column:page-start/body-start;grid-row:content-top/page-bottom}body.docked .sidebar.sidebar-navigation{grid-column:screen-start/body-start;grid-row:content-top/page-bottom}.sidebar.toc-left{grid-column:page-start/body-start;grid-row:content-top/page-bottom}.sidebar.margin-sidebar{grid-column:body-end/page-end;grid-row:content-top/page-bottom}.page-columns .content{grid-column:body-content-start/body-content-end;grid-row:content-top/content-bottom;align-content:flex-start}.page-columns .page-navigation{grid-column:body-content-start/body-content-end;grid-row:content-bottom/page-bottom}.page-columns .footer{grid-column:screen-start/screen-end;grid-row:contents-bottom/page-bottom}.page-columns .column-body{grid-column:body-content-start/body-content-end}.page-columns .column-body-fullbleed{grid-column:body-start/body-end}.page-columns .column-body-outset{grid-column:body-start-outset/body-end-outset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-body-outset table{background:#fff}.page-columns .column-body-outset-left{grid-column:body-start-outset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-body-outset-left 
table{background:#fff}.page-columns .column-body-outset-right{grid-column:body-content-start/body-end-outset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-body-outset-right table{background:#fff}.page-columns .column-page{grid-column:page-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page table{background:#fff}.page-columns .column-page-inset{grid-column:page-start-inset/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-inset table{background:#fff}.page-columns .column-page-inset-left{grid-column:page-start-inset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-inset-left table{background:#fff}.page-columns .column-page-inset-right{grid-column:body-content-start/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-inset-right figcaption table{background:#fff}.page-columns .column-page-left{grid-column:page-start/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-left table{background:#fff}.page-columns .column-page-right{grid-column:body-content-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-right figcaption table{background:#fff}#quarto-content.page-columns #quarto-margin-sidebar,#quarto-content.page-columns #quarto-sidebar{z-index:1}@media(max-width: 991.98px){#quarto-content.page-columns #quarto-margin-sidebar.collapse,#quarto-content.page-columns #quarto-sidebar.collapse,#quarto-content.page-columns #quarto-margin-sidebar.collapsing,#quarto-content.page-columns #quarto-sidebar.collapsing{z-index:1055}}#quarto-content.page-columns main.column-page,#quarto-content.page-columns main.column-page-right,#quarto-content.page-columns main.column-page-left{z-index:0}.page-columns .column-screen-inset{grid-column:screen-start-inset/screen-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset table{background:#fff}.page-columns .column-screen-inset-left{grid-column:screen-start-inset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-left table{background:#fff}.page-columns .column-screen-inset-right{grid-column:body-content-start/screen-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-right table{background:#fff}.page-columns .column-screen{grid-column:screen-start/screen-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen table{background:#fff}.page-columns .column-screen-left{grid-column:screen-start/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-left table{background:#fff}.page-columns .column-screen-right{grid-column:body-content-start/screen-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-right table{background:#fff}.page-columns .column-screen-inset-shaded{grid-column:screen-start/screen-end;padding:1em;background:#f8f9fa;z-index:998;transform:translate3d(0, 0, 0);margin-bottom:1em}.zindex-content{z-index:998;transform:translate3d(0, 0, 0)}.zindex-modal{z-index:1055;transform:translate3d(0, 0, 0)}.zindex-over-content{z-index:999;transform:translate3d(0, 0, 0)}img.img-fluid.column-screen,img.img-fluid.column-screen-inset-shaded,img.img-fluid.column-screen-inset,img.img-fluid.column-screen-inset-left,img.img-fluid.column-screen-inset-right,img.img-fluid.column-screen-left,img.img-fluid.column-screen-right{width:100%}@media(min-width: 
992px){.margin-caption,div.aside,aside:not(.footnotes),.column-margin{grid-column:body-end/page-end !important;z-index:998}.column-sidebar{grid-column:page-start/body-start !important;z-index:998}.column-leftmargin{grid-column:screen-start-inset/body-start !important;z-index:998}.no-row-height{height:1em;overflow:visible}}@media(max-width: 991.98px){.margin-caption,div.aside,aside:not(.footnotes),.column-margin{grid-column:body-end/page-end !important;z-index:998}.no-row-height{height:1em;overflow:visible}.page-columns.page-full{overflow:visible}.page-columns.toc-left .margin-caption,.page-columns.toc-left div.aside,.page-columns.toc-left aside:not(.footnotes),.page-columns.toc-left .column-margin{grid-column:body-content-start/body-content-end !important;z-index:998;transform:translate3d(0, 0, 0)}.page-columns.toc-left .no-row-height{height:initial;overflow:initial}}@media(max-width: 767.98px){.margin-caption,div.aside,aside:not(.footnotes),.column-margin{grid-column:body-content-start/body-content-end !important;z-index:998;transform:translate3d(0, 0, 0)}.no-row-height{height:initial;overflow:initial}#quarto-margin-sidebar{display:none}#quarto-sidebar-toc-left{display:none}.hidden-sm{display:none}}.panel-grid{display:grid;grid-template-rows:repeat(1, 1fr);grid-template-columns:repeat(24, 1fr);gap:1em}.panel-grid .g-col-1{grid-column:auto/span 1}.panel-grid .g-col-2{grid-column:auto/span 2}.panel-grid .g-col-3{grid-column:auto/span 3}.panel-grid .g-col-4{grid-column:auto/span 4}.panel-grid .g-col-5{grid-column:auto/span 5}.panel-grid .g-col-6{grid-column:auto/span 6}.panel-grid .g-col-7{grid-column:auto/span 7}.panel-grid .g-col-8{grid-column:auto/span 8}.panel-grid .g-col-9{grid-column:auto/span 9}.panel-grid .g-col-10{grid-column:auto/span 10}.panel-grid .g-col-11{grid-column:auto/span 11}.panel-grid .g-col-12{grid-column:auto/span 12}.panel-grid .g-col-13{grid-column:auto/span 13}.panel-grid .g-col-14{grid-column:auto/span 14}.panel-grid .g-col-15{grid-column:auto/span 15}.panel-grid .g-col-16{grid-column:auto/span 16}.panel-grid .g-col-17{grid-column:auto/span 17}.panel-grid .g-col-18{grid-column:auto/span 18}.panel-grid .g-col-19{grid-column:auto/span 19}.panel-grid .g-col-20{grid-column:auto/span 20}.panel-grid .g-col-21{grid-column:auto/span 21}.panel-grid .g-col-22{grid-column:auto/span 22}.panel-grid .g-col-23{grid-column:auto/span 23}.panel-grid .g-col-24{grid-column:auto/span 24}.panel-grid .g-start-1{grid-column-start:1}.panel-grid .g-start-2{grid-column-start:2}.panel-grid .g-start-3{grid-column-start:3}.panel-grid .g-start-4{grid-column-start:4}.panel-grid .g-start-5{grid-column-start:5}.panel-grid .g-start-6{grid-column-start:6}.panel-grid .g-start-7{grid-column-start:7}.panel-grid .g-start-8{grid-column-start:8}.panel-grid .g-start-9{grid-column-start:9}.panel-grid .g-start-10{grid-column-start:10}.panel-grid .g-start-11{grid-column-start:11}.panel-grid .g-start-12{grid-column-start:12}.panel-grid .g-start-13{grid-column-start:13}.panel-grid .g-start-14{grid-column-start:14}.panel-grid .g-start-15{grid-column-start:15}.panel-grid .g-start-16{grid-column-start:16}.panel-grid .g-start-17{grid-column-start:17}.panel-grid .g-start-18{grid-column-start:18}.panel-grid .g-start-19{grid-column-start:19}.panel-grid .g-start-20{grid-column-start:20}.panel-grid .g-start-21{grid-column-start:21}.panel-grid .g-start-22{grid-column-start:22}.panel-grid .g-start-23{grid-column-start:23}@media(min-width: 576px){.panel-grid .g-col-sm-1{grid-column:auto/span 1}.panel-grid 
.g-col-sm-2{grid-column:auto/span 2}.panel-grid .g-col-sm-3{grid-column:auto/span 3}.panel-grid .g-col-sm-4{grid-column:auto/span 4}.panel-grid .g-col-sm-5{grid-column:auto/span 5}.panel-grid .g-col-sm-6{grid-column:auto/span 6}.panel-grid .g-col-sm-7{grid-column:auto/span 7}.panel-grid .g-col-sm-8{grid-column:auto/span 8}.panel-grid .g-col-sm-9{grid-column:auto/span 9}.panel-grid .g-col-sm-10{grid-column:auto/span 10}.panel-grid .g-col-sm-11{grid-column:auto/span 11}.panel-grid .g-col-sm-12{grid-column:auto/span 12}.panel-grid .g-col-sm-13{grid-column:auto/span 13}.panel-grid .g-col-sm-14{grid-column:auto/span 14}.panel-grid .g-col-sm-15{grid-column:auto/span 15}.panel-grid .g-col-sm-16{grid-column:auto/span 16}.panel-grid .g-col-sm-17{grid-column:auto/span 17}.panel-grid .g-col-sm-18{grid-column:auto/span 18}.panel-grid .g-col-sm-19{grid-column:auto/span 19}.panel-grid .g-col-sm-20{grid-column:auto/span 20}.panel-grid .g-col-sm-21{grid-column:auto/span 21}.panel-grid .g-col-sm-22{grid-column:auto/span 22}.panel-grid .g-col-sm-23{grid-column:auto/span 23}.panel-grid .g-col-sm-24{grid-column:auto/span 24}.panel-grid .g-start-sm-1{grid-column-start:1}.panel-grid .g-start-sm-2{grid-column-start:2}.panel-grid .g-start-sm-3{grid-column-start:3}.panel-grid .g-start-sm-4{grid-column-start:4}.panel-grid .g-start-sm-5{grid-column-start:5}.panel-grid .g-start-sm-6{grid-column-start:6}.panel-grid .g-start-sm-7{grid-column-start:7}.panel-grid .g-start-sm-8{grid-column-start:8}.panel-grid .g-start-sm-9{grid-column-start:9}.panel-grid .g-start-sm-10{grid-column-start:10}.panel-grid .g-start-sm-11{grid-column-start:11}.panel-grid .g-start-sm-12{grid-column-start:12}.panel-grid .g-start-sm-13{grid-column-start:13}.panel-grid .g-start-sm-14{grid-column-start:14}.panel-grid .g-start-sm-15{grid-column-start:15}.panel-grid .g-start-sm-16{grid-column-start:16}.panel-grid .g-start-sm-17{grid-column-start:17}.panel-grid .g-start-sm-18{grid-column-start:18}.panel-grid .g-start-sm-19{grid-column-start:19}.panel-grid .g-start-sm-20{grid-column-start:20}.panel-grid .g-start-sm-21{grid-column-start:21}.panel-grid .g-start-sm-22{grid-column-start:22}.panel-grid .g-start-sm-23{grid-column-start:23}}@media(min-width: 768px){.panel-grid .g-col-md-1{grid-column:auto/span 1}.panel-grid .g-col-md-2{grid-column:auto/span 2}.panel-grid .g-col-md-3{grid-column:auto/span 3}.panel-grid .g-col-md-4{grid-column:auto/span 4}.panel-grid .g-col-md-5{grid-column:auto/span 5}.panel-grid .g-col-md-6{grid-column:auto/span 6}.panel-grid .g-col-md-7{grid-column:auto/span 7}.panel-grid .g-col-md-8{grid-column:auto/span 8}.panel-grid .g-col-md-9{grid-column:auto/span 9}.panel-grid .g-col-md-10{grid-column:auto/span 10}.panel-grid .g-col-md-11{grid-column:auto/span 11}.panel-grid .g-col-md-12{grid-column:auto/span 12}.panel-grid .g-col-md-13{grid-column:auto/span 13}.panel-grid .g-col-md-14{grid-column:auto/span 14}.panel-grid .g-col-md-15{grid-column:auto/span 15}.panel-grid .g-col-md-16{grid-column:auto/span 16}.panel-grid .g-col-md-17{grid-column:auto/span 17}.panel-grid .g-col-md-18{grid-column:auto/span 18}.panel-grid .g-col-md-19{grid-column:auto/span 19}.panel-grid .g-col-md-20{grid-column:auto/span 20}.panel-grid .g-col-md-21{grid-column:auto/span 21}.panel-grid .g-col-md-22{grid-column:auto/span 22}.panel-grid .g-col-md-23{grid-column:auto/span 23}.panel-grid .g-col-md-24{grid-column:auto/span 24}.panel-grid .g-start-md-1{grid-column-start:1}.panel-grid .g-start-md-2{grid-column-start:2}.panel-grid 
.g-start-md-3{grid-column-start:3}.panel-grid .g-start-md-4{grid-column-start:4}.panel-grid .g-start-md-5{grid-column-start:5}.panel-grid .g-start-md-6{grid-column-start:6}.panel-grid .g-start-md-7{grid-column-start:7}.panel-grid .g-start-md-8{grid-column-start:8}.panel-grid .g-start-md-9{grid-column-start:9}.panel-grid .g-start-md-10{grid-column-start:10}.panel-grid .g-start-md-11{grid-column-start:11}.panel-grid .g-start-md-12{grid-column-start:12}.panel-grid .g-start-md-13{grid-column-start:13}.panel-grid .g-start-md-14{grid-column-start:14}.panel-grid .g-start-md-15{grid-column-start:15}.panel-grid .g-start-md-16{grid-column-start:16}.panel-grid .g-start-md-17{grid-column-start:17}.panel-grid .g-start-md-18{grid-column-start:18}.panel-grid .g-start-md-19{grid-column-start:19}.panel-grid .g-start-md-20{grid-column-start:20}.panel-grid .g-start-md-21{grid-column-start:21}.panel-grid .g-start-md-22{grid-column-start:22}.panel-grid .g-start-md-23{grid-column-start:23}}@media(min-width: 992px){.panel-grid .g-col-lg-1{grid-column:auto/span 1}.panel-grid .g-col-lg-2{grid-column:auto/span 2}.panel-grid .g-col-lg-3{grid-column:auto/span 3}.panel-grid .g-col-lg-4{grid-column:auto/span 4}.panel-grid .g-col-lg-5{grid-column:auto/span 5}.panel-grid .g-col-lg-6{grid-column:auto/span 6}.panel-grid .g-col-lg-7{grid-column:auto/span 7}.panel-grid .g-col-lg-8{grid-column:auto/span 8}.panel-grid .g-col-lg-9{grid-column:auto/span 9}.panel-grid .g-col-lg-10{grid-column:auto/span 10}.panel-grid .g-col-lg-11{grid-column:auto/span 11}.panel-grid .g-col-lg-12{grid-column:auto/span 12}.panel-grid .g-col-lg-13{grid-column:auto/span 13}.panel-grid .g-col-lg-14{grid-column:auto/span 14}.panel-grid .g-col-lg-15{grid-column:auto/span 15}.panel-grid .g-col-lg-16{grid-column:auto/span 16}.panel-grid .g-col-lg-17{grid-column:auto/span 17}.panel-grid .g-col-lg-18{grid-column:auto/span 18}.panel-grid .g-col-lg-19{grid-column:auto/span 19}.panel-grid .g-col-lg-20{grid-column:auto/span 20}.panel-grid .g-col-lg-21{grid-column:auto/span 21}.panel-grid .g-col-lg-22{grid-column:auto/span 22}.panel-grid .g-col-lg-23{grid-column:auto/span 23}.panel-grid .g-col-lg-24{grid-column:auto/span 24}.panel-grid .g-start-lg-1{grid-column-start:1}.panel-grid .g-start-lg-2{grid-column-start:2}.panel-grid .g-start-lg-3{grid-column-start:3}.panel-grid .g-start-lg-4{grid-column-start:4}.panel-grid .g-start-lg-5{grid-column-start:5}.panel-grid .g-start-lg-6{grid-column-start:6}.panel-grid .g-start-lg-7{grid-column-start:7}.panel-grid .g-start-lg-8{grid-column-start:8}.panel-grid .g-start-lg-9{grid-column-start:9}.panel-grid .g-start-lg-10{grid-column-start:10}.panel-grid .g-start-lg-11{grid-column-start:11}.panel-grid .g-start-lg-12{grid-column-start:12}.panel-grid .g-start-lg-13{grid-column-start:13}.panel-grid .g-start-lg-14{grid-column-start:14}.panel-grid .g-start-lg-15{grid-column-start:15}.panel-grid .g-start-lg-16{grid-column-start:16}.panel-grid .g-start-lg-17{grid-column-start:17}.panel-grid .g-start-lg-18{grid-column-start:18}.panel-grid .g-start-lg-19{grid-column-start:19}.panel-grid .g-start-lg-20{grid-column-start:20}.panel-grid .g-start-lg-21{grid-column-start:21}.panel-grid .g-start-lg-22{grid-column-start:22}.panel-grid .g-start-lg-23{grid-column-start:23}}@media(min-width: 1200px){.panel-grid .g-col-xl-1{grid-column:auto/span 1}.panel-grid .g-col-xl-2{grid-column:auto/span 2}.panel-grid .g-col-xl-3{grid-column:auto/span 3}.panel-grid .g-col-xl-4{grid-column:auto/span 4}.panel-grid .g-col-xl-5{grid-column:auto/span 5}.panel-grid 
.g-col-xl-6{grid-column:auto/span 6}.panel-grid .g-col-xl-7{grid-column:auto/span 7}.panel-grid .g-col-xl-8{grid-column:auto/span 8}.panel-grid .g-col-xl-9{grid-column:auto/span 9}.panel-grid .g-col-xl-10{grid-column:auto/span 10}.panel-grid .g-col-xl-11{grid-column:auto/span 11}.panel-grid .g-col-xl-12{grid-column:auto/span 12}.panel-grid .g-col-xl-13{grid-column:auto/span 13}.panel-grid .g-col-xl-14{grid-column:auto/span 14}.panel-grid .g-col-xl-15{grid-column:auto/span 15}.panel-grid .g-col-xl-16{grid-column:auto/span 16}.panel-grid .g-col-xl-17{grid-column:auto/span 17}.panel-grid .g-col-xl-18{grid-column:auto/span 18}.panel-grid .g-col-xl-19{grid-column:auto/span 19}.panel-grid .g-col-xl-20{grid-column:auto/span 20}.panel-grid .g-col-xl-21{grid-column:auto/span 21}.panel-grid .g-col-xl-22{grid-column:auto/span 22}.panel-grid .g-col-xl-23{grid-column:auto/span 23}.panel-grid .g-col-xl-24{grid-column:auto/span 24}.panel-grid .g-start-xl-1{grid-column-start:1}.panel-grid .g-start-xl-2{grid-column-start:2}.panel-grid .g-start-xl-3{grid-column-start:3}.panel-grid .g-start-xl-4{grid-column-start:4}.panel-grid .g-start-xl-5{grid-column-start:5}.panel-grid .g-start-xl-6{grid-column-start:6}.panel-grid .g-start-xl-7{grid-column-start:7}.panel-grid .g-start-xl-8{grid-column-start:8}.panel-grid .g-start-xl-9{grid-column-start:9}.panel-grid .g-start-xl-10{grid-column-start:10}.panel-grid .g-start-xl-11{grid-column-start:11}.panel-grid .g-start-xl-12{grid-column-start:12}.panel-grid .g-start-xl-13{grid-column-start:13}.panel-grid .g-start-xl-14{grid-column-start:14}.panel-grid .g-start-xl-15{grid-column-start:15}.panel-grid .g-start-xl-16{grid-column-start:16}.panel-grid .g-start-xl-17{grid-column-start:17}.panel-grid .g-start-xl-18{grid-column-start:18}.panel-grid .g-start-xl-19{grid-column-start:19}.panel-grid .g-start-xl-20{grid-column-start:20}.panel-grid .g-start-xl-21{grid-column-start:21}.panel-grid .g-start-xl-22{grid-column-start:22}.panel-grid .g-start-xl-23{grid-column-start:23}}@media(min-width: 1400px){.panel-grid .g-col-xxl-1{grid-column:auto/span 1}.panel-grid .g-col-xxl-2{grid-column:auto/span 2}.panel-grid .g-col-xxl-3{grid-column:auto/span 3}.panel-grid .g-col-xxl-4{grid-column:auto/span 4}.panel-grid .g-col-xxl-5{grid-column:auto/span 5}.panel-grid .g-col-xxl-6{grid-column:auto/span 6}.panel-grid .g-col-xxl-7{grid-column:auto/span 7}.panel-grid .g-col-xxl-8{grid-column:auto/span 8}.panel-grid .g-col-xxl-9{grid-column:auto/span 9}.panel-grid .g-col-xxl-10{grid-column:auto/span 10}.panel-grid .g-col-xxl-11{grid-column:auto/span 11}.panel-grid .g-col-xxl-12{grid-column:auto/span 12}.panel-grid .g-col-xxl-13{grid-column:auto/span 13}.panel-grid .g-col-xxl-14{grid-column:auto/span 14}.panel-grid .g-col-xxl-15{grid-column:auto/span 15}.panel-grid .g-col-xxl-16{grid-column:auto/span 16}.panel-grid .g-col-xxl-17{grid-column:auto/span 17}.panel-grid .g-col-xxl-18{grid-column:auto/span 18}.panel-grid .g-col-xxl-19{grid-column:auto/span 19}.panel-grid .g-col-xxl-20{grid-column:auto/span 20}.panel-grid .g-col-xxl-21{grid-column:auto/span 21}.panel-grid .g-col-xxl-22{grid-column:auto/span 22}.panel-grid .g-col-xxl-23{grid-column:auto/span 23}.panel-grid .g-col-xxl-24{grid-column:auto/span 24}.panel-grid .g-start-xxl-1{grid-column-start:1}.panel-grid .g-start-xxl-2{grid-column-start:2}.panel-grid .g-start-xxl-3{grid-column-start:3}.panel-grid .g-start-xxl-4{grid-column-start:4}.panel-grid .g-start-xxl-5{grid-column-start:5}.panel-grid .g-start-xxl-6{grid-column-start:6}.panel-grid 
.g-start-xxl-7{grid-column-start:7}.panel-grid .g-start-xxl-8{grid-column-start:8}.panel-grid .g-start-xxl-9{grid-column-start:9}.panel-grid .g-start-xxl-10{grid-column-start:10}.panel-grid .g-start-xxl-11{grid-column-start:11}.panel-grid .g-start-xxl-12{grid-column-start:12}.panel-grid .g-start-xxl-13{grid-column-start:13}.panel-grid .g-start-xxl-14{grid-column-start:14}.panel-grid .g-start-xxl-15{grid-column-start:15}.panel-grid .g-start-xxl-16{grid-column-start:16}.panel-grid .g-start-xxl-17{grid-column-start:17}.panel-grid .g-start-xxl-18{grid-column-start:18}.panel-grid .g-start-xxl-19{grid-column-start:19}.panel-grid .g-start-xxl-20{grid-column-start:20}.panel-grid .g-start-xxl-21{grid-column-start:21}.panel-grid .g-start-xxl-22{grid-column-start:22}.panel-grid .g-start-xxl-23{grid-column-start:23}}main{margin-top:1em;margin-bottom:1em}h1,.h1,h2,.h2{color:#4b4f51;margin-top:2rem;margin-bottom:1rem;font-weight:600}h1.title,.title.h1{margin-top:0}h2,.h2{border-bottom:1px solid #dee2e6;padding-bottom:.5rem}h3,.h3{font-weight:600}h3,.h3,h4,.h4{opacity:.9;margin-top:1.5rem}h5,.h5,h6,.h6{opacity:.9}.header-section-number{color:#747a7f}.nav-link.active .header-section-number{color:inherit}mark,.mark{padding:0em}.panel-caption,caption,.figure-caption{font-size:.9rem}.panel-caption,.figure-caption,figcaption{color:#747a7f}.table-caption,caption{color:#373a3c}.quarto-layout-cell[data-ref-parent] caption{color:#747a7f}.column-margin figcaption,.margin-caption,div.aside,aside,.column-margin{color:#747a7f;font-size:.825rem}.panel-caption.margin-caption{text-align:inherit}.column-margin.column-container p{margin-bottom:0}.column-margin.column-container>*:not(.collapse){padding-top:.5em;padding-bottom:.5em;display:block}.column-margin.column-container>*.collapse:not(.show){display:none}@media(min-width: 768px){.column-margin.column-container .callout-margin-content:first-child{margin-top:4.5em}.column-margin.column-container .callout-margin-content-simple:first-child{margin-top:3.5em}}.margin-caption>*{padding-top:.5em;padding-bottom:.5em}@media(max-width: 767.98px){.quarto-layout-row{flex-direction:column}}.nav-tabs .nav-item{margin-top:1px;cursor:pointer}.tab-content{margin-top:0px;border-left:#dee2e6 1px solid;border-right:#dee2e6 1px solid;border-bottom:#dee2e6 1px solid;margin-left:0;padding:1em;margin-bottom:1em}@media(max-width: 767.98px){.layout-sidebar{margin-left:0;margin-right:0}}.panel-sidebar,.panel-sidebar .form-control,.panel-input,.panel-input .form-control,.selectize-dropdown{font-size:.9rem}.panel-sidebar .form-control,.panel-input .form-control{padding-top:.1rem}.tab-pane div.sourceCode{margin-top:0px}.tab-pane>p{padding-top:1em}.tab-content>.tab-pane:not(.active){display:none !important}div.sourceCode{background-color:rgba(233,236,239,.65);border:1px solid rgba(233,236,239,.65);border-radius:.25rem}pre.sourceCode{background-color:rgba(0,0,0,0)}pre.sourceCode{border:none;font-size:.875em;overflow:visible !important;padding:.4em}.callout pre.sourceCode{padding-left:0}div.sourceCode{overflow-y:hidden}.callout div.sourceCode{margin-left:initial}.blockquote{font-size:inherit;padding-left:1rem;padding-right:1.5rem;color:#747a7f}.blockquote h1:first-child,.blockquote .h1:first-child,.blockquote h2:first-child,.blockquote .h2:first-child,.blockquote h3:first-child,.blockquote .h3:first-child,.blockquote h4:first-child,.blockquote .h4:first-child,.blockquote h5:first-child,.blockquote .h5:first-child{margin-top:0}pre{background-color:initial;padding:initial;border:initial}p 
code:not(.sourceCode),li code:not(.sourceCode),td code:not(.sourceCode){background-color:#f7f7f7;padding:.2em}nav p code:not(.sourceCode),nav li code:not(.sourceCode),nav td code:not(.sourceCode){background-color:rgba(0,0,0,0);padding:0}td code:not(.sourceCode){white-space:pre-wrap}#quarto-embedded-source-code-modal>.modal-dialog{max-width:1000px;padding-left:1.75rem;padding-right:1.75rem}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-body{padding:0}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-body div.sourceCode{margin:0;padding:.2rem .2rem;border-radius:0px;border:none}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-header{padding:.7rem}.code-tools-button{font-size:1rem;padding:.15rem .15rem;margin-left:5px;color:#6c757d;background-color:rgba(0,0,0,0);transition:initial;cursor:pointer}.code-tools-button>.bi::before{display:inline-block;height:1rem;width:1rem;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:1rem 1rem}.code-tools-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}#quarto-embedded-source-code-modal .code-copy-button>.bi::before{background-image:url('data:image/svg+xml,')}#quarto-embedded-source-code-modal .code-copy-button-checked>.bi::before{background-image:url('data:image/svg+xml,')}.sidebar{will-change:top;transition:top 200ms linear;position:sticky;overflow-y:auto;padding-top:1.2em;max-height:100vh}.sidebar.toc-left,.sidebar.margin-sidebar{top:0px;padding-top:1em}.sidebar.toc-left>*,.sidebar.margin-sidebar>*{padding-top:.5em}.sidebar.quarto-banner-title-block-sidebar>*{padding-top:1.65em}figure .quarto-notebook-link{margin-top:.5em}.quarto-notebook-link{font-size:.75em;color:#6c757d;margin-bottom:1em;text-decoration:none;display:block}.quarto-notebook-link:hover{text-decoration:underline;color:#2780e3}.quarto-notebook-link::before{display:inline-block;height:.75rem;width:.75rem;margin-bottom:0em;margin-right:.25em;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:.75rem .75rem}.quarto-alternate-notebooks i.bi,.quarto-alternate-formats i.bi{margin-right:.4em}.quarto-notebook .cell-container{display:flex}.quarto-notebook .cell-container .cell{flex-grow:4}.quarto-notebook .cell-container .cell-decorator{padding-top:1.5em;padding-right:1em;text-align:right}.quarto-notebook .cell-code code{white-space:pre-wrap}.quarto-notebook h2,.quarto-notebook .h2{border-bottom:none}.sidebar .quarto-alternate-formats a,.sidebar .quarto-alternate-notebooks a{text-decoration:none}.sidebar .quarto-alternate-formats a:hover,.sidebar .quarto-alternate-notebooks a:hover{color:#2780e3}.sidebar .quarto-alternate-notebooks h2,.sidebar .quarto-alternate-notebooks .h2,.sidebar .quarto-alternate-formats h2,.sidebar .quarto-alternate-formats .h2,.sidebar nav[role=doc-toc]>h2,.sidebar nav[role=doc-toc]>.h2{font-size:.875rem;font-weight:400;margin-bottom:.5rem;margin-top:.3rem;font-family:inherit;border-bottom:0;padding-bottom:0;padding-top:0px}.sidebar .quarto-alternate-notebooks h2,.sidebar .quarto-alternate-notebooks .h2,.sidebar .quarto-alternate-formats h2,.sidebar .quarto-alternate-formats .h2{margin-top:1rem}.sidebar nav[role=doc-toc]>ul a{border-left:1px solid #e9ecef;padding-left:.6rem}.sidebar .quarto-alternate-notebooks h2>ul a,.sidebar .quarto-alternate-notebooks .h2>ul a,.sidebar .quarto-alternate-formats h2>ul a,.sidebar .quarto-alternate-formats .h2>ul 
a{border-left:none;padding-left:.6rem}.sidebar .quarto-alternate-notebooks ul a:empty,.sidebar .quarto-alternate-formats ul a:empty,.sidebar nav[role=doc-toc]>ul a:empty{display:none}.sidebar .quarto-alternate-notebooks ul,.sidebar .quarto-alternate-formats ul,.sidebar nav[role=doc-toc] ul{padding-left:0;list-style:none;font-size:.875rem;font-weight:300}.sidebar .quarto-alternate-notebooks ul li a,.sidebar .quarto-alternate-formats ul li a,.sidebar nav[role=doc-toc]>ul li a{line-height:1.1rem;padding-bottom:.2rem;padding-top:.2rem;color:inherit}.sidebar nav[role=doc-toc] ul>li>ul>li>a{padding-left:1.2em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>a{padding-left:2.4em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>ul>li>a{padding-left:3.6em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>ul>li>ul>li>a{padding-left:4.8em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>ul>li>ul>li>ul>li>a{padding-left:6em}.sidebar nav[role=doc-toc] ul>li>a.active,.sidebar nav[role=doc-toc] ul>li>ul>li>a.active{border-left:1px solid #2780e3;color:#2780e3 !important}.sidebar nav[role=doc-toc] ul>li>a:hover,.sidebar nav[role=doc-toc] ul>li>ul>li>a:hover{color:#2780e3 !important}kbd,.kbd{color:#373a3c;background-color:#f8f9fa;border:1px solid;border-radius:5px;border-color:#dee2e6}div.hanging-indent{margin-left:1em;text-indent:-1em}.citation a,.footnote-ref{text-decoration:none}.footnotes ol{padding-left:1em}.tippy-content>*{margin-bottom:.7em}.tippy-content>*:last-child{margin-bottom:0}.table a{word-break:break-word}.table>thead{border-top-width:1px;border-top-color:#dee2e6;border-bottom:1px solid #b6babc}.callout{margin-top:1.25rem;margin-bottom:1.25rem;border-radius:.25rem;overflow-wrap:break-word}.callout .callout-title-container{overflow-wrap:anywhere}.callout.callout-style-simple{padding:.4em .7em;border-left:5px solid;border-right:1px solid #dee2e6;border-top:1px solid #dee2e6;border-bottom:1px solid #dee2e6}.callout.callout-style-default{border-left:5px solid;border-right:1px solid #dee2e6;border-top:1px solid #dee2e6;border-bottom:1px solid #dee2e6}.callout .callout-body-container{flex-grow:1}.callout.callout-style-simple .callout-body{font-size:.9rem;font-weight:400}.callout.callout-style-default .callout-body{font-size:.9rem;font-weight:400}.callout.callout-titled .callout-body{margin-top:.2em}.callout:not(.no-icon).callout-titled.callout-style-simple .callout-body{padding-left:1.6em}.callout.callout-titled>.callout-header{padding-top:.2em;margin-bottom:-0.2em}.callout.callout-style-simple>div.callout-header{border-bottom:none;font-size:.9rem;font-weight:600;opacity:75%}.callout.callout-style-default>div.callout-header{border-bottom:none;font-weight:600;opacity:85%;font-size:.9rem;padding-left:.5em;padding-right:.5em}.callout.callout-style-default div.callout-body{padding-left:.5em;padding-right:.5em}.callout.callout-style-default div.callout-body>:first-child{margin-top:.5em}.callout>div.callout-header[data-bs-toggle=collapse]{cursor:pointer}.callout.callout-style-default .callout-header[aria-expanded=false],.callout.callout-style-default .callout-header[aria-expanded=true]{padding-top:0px;margin-bottom:0px;align-items:center}.callout.callout-titled .callout-body>:last-child:not(.sourceCode),.callout.callout-titled .callout-body>div>:last-child:not(.sourceCode){margin-bottom:.5rem}.callout:not(.callout-titled) .callout-body>:first-child,.callout:not(.callout-titled) .callout-body>div>:first-child{margin-top:.25rem}.callout:not(.callout-titled) .callout-body>:last-child,.callout:not(.callout-titled) 
.callout-body>div>:last-child{margin-bottom:.2rem}.callout.callout-style-simple .callout-icon::before,.callout.callout-style-simple .callout-toggle::before{height:1rem;width:1rem;display:inline-block;content:"";background-repeat:no-repeat;background-size:1rem 1rem}.callout.callout-style-default .callout-icon::before,.callout.callout-style-default .callout-toggle::before{height:.9rem;width:.9rem;display:inline-block;content:"";background-repeat:no-repeat;background-size:.9rem .9rem}.callout.callout-style-default .callout-toggle::before{margin-top:5px}.callout .callout-btn-toggle .callout-toggle::before{transition:transform .2s linear}.callout .callout-header[aria-expanded=false] .callout-toggle::before{transform:rotate(-90deg)}.callout .callout-header[aria-expanded=true] .callout-toggle::before{transform:none}.callout.callout-style-simple:not(.no-icon) div.callout-icon-container{padding-top:.2em;padding-right:.55em}.callout.callout-style-default:not(.no-icon) div.callout-icon-container{padding-top:.1em;padding-right:.35em}.callout.callout-style-default:not(.no-icon) div.callout-title-container{margin-top:-1px}.callout.callout-style-default.callout-caution:not(.no-icon) div.callout-icon-container{padding-top:.3em;padding-right:.35em}.callout>.callout-body>.callout-icon-container>.no-icon,.callout>.callout-header>.callout-icon-container>.no-icon{display:none}div.callout.callout{border-left-color:#6c757d}div.callout.callout-style-default>.callout-header{background-color:#6c757d}div.callout-note.callout{border-left-color:#2780e3}div.callout-note.callout-style-default>.callout-header{background-color:#e9f2fc}div.callout-note:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-note.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-note .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-tip.callout{border-left-color:#3fb618}div.callout-tip.callout-style-default>.callout-header{background-color:#ecf8e8}div.callout-tip:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-tip.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-tip .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-warning.callout{border-left-color:#ff7518}div.callout-warning.callout-style-default>.callout-header{background-color:#fff1e8}div.callout-warning:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-warning.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-warning .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-caution.callout{border-left-color:#f0ad4e}div.callout-caution.callout-style-default>.callout-header{background-color:#fef7ed}div.callout-caution:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-caution.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-caution .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-important.callout{border-left-color:#ff0039}div.callout-important.callout-style-default>.callout-header{background-color:#ffe6eb}div.callout-important:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-important.callout-titled 
.callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-important .callout-toggle::before{background-image:url('data:image/svg+xml,')}.quarto-toggle-container{display:flex;align-items:center}.quarto-reader-toggle .bi::before,.quarto-color-scheme-toggle .bi::before{display:inline-block;height:1rem;width:1rem;content:"";background-repeat:no-repeat;background-size:1rem 1rem}.sidebar-navigation{padding-left:20px}.navbar .quarto-color-scheme-toggle:not(.alternate) .bi::before{background-image:url('data:image/svg+xml,')}.navbar .quarto-color-scheme-toggle.alternate .bi::before{background-image:url('data:image/svg+xml,')}.sidebar-navigation .quarto-color-scheme-toggle:not(.alternate) .bi::before{background-image:url('data:image/svg+xml,')}.sidebar-navigation .quarto-color-scheme-toggle.alternate .bi::before{background-image:url('data:image/svg+xml,')}.quarto-sidebar-toggle{border-color:#dee2e6;border-bottom-left-radius:.25rem;border-bottom-right-radius:.25rem;border-style:solid;border-width:1px;overflow:hidden;border-top-width:0px;padding-top:0px !important}.quarto-sidebar-toggle-title{cursor:pointer;padding-bottom:2px;margin-left:.25em;text-align:center;font-weight:400;font-size:.775em}#quarto-content .quarto-sidebar-toggle{background:#fafafa}#quarto-content .quarto-sidebar-toggle-title{color:#373a3c}.quarto-sidebar-toggle-icon{color:#dee2e6;margin-right:.5em;float:right;transition:transform .2s ease}.quarto-sidebar-toggle-icon::before{padding-top:5px}.quarto-sidebar-toggle.expanded .quarto-sidebar-toggle-icon{transform:rotate(-180deg)}.quarto-sidebar-toggle.expanded .quarto-sidebar-toggle-title{border-bottom:solid #dee2e6 1px}.quarto-sidebar-toggle-contents{background-color:#fff;padding-right:10px;padding-left:10px;margin-top:0px !important;transition:max-height .5s ease}.quarto-sidebar-toggle.expanded .quarto-sidebar-toggle-contents{padding-top:1em;padding-bottom:10px}.quarto-sidebar-toggle:not(.expanded) .quarto-sidebar-toggle-contents{padding-top:0px !important;padding-bottom:0px}nav[role=doc-toc]{z-index:1020}#quarto-sidebar>*,nav[role=doc-toc]>*{transition:opacity .1s ease,border .1s ease}#quarto-sidebar.slow>*,nav[role=doc-toc].slow>*{transition:opacity .4s ease,border .4s ease}.quarto-color-scheme-toggle:not(.alternate).top-right .bi::before{background-image:url('data:image/svg+xml,')}.quarto-color-scheme-toggle.alternate.top-right .bi::before{background-image:url('data:image/svg+xml,')}#quarto-appendix.default{border-top:1px solid #dee2e6}#quarto-appendix.default{background-color:#fff;padding-top:1.5em;margin-top:2em;z-index:998}#quarto-appendix.default .quarto-appendix-heading{margin-top:0;line-height:1.4em;font-weight:600;opacity:.9;border-bottom:none;margin-bottom:0}#quarto-appendix.default .footnotes ol,#quarto-appendix.default .footnotes ol li>p:last-of-type,#quarto-appendix.default .quarto-appendix-contents>p:last-of-type{margin-bottom:0}#quarto-appendix.default .quarto-appendix-secondary-label{margin-bottom:.4em}#quarto-appendix.default .quarto-appendix-bibtex{font-size:.7em;padding:1em;border:solid 1px #dee2e6;margin-bottom:1em}#quarto-appendix.default .quarto-appendix-bibtex code.sourceCode{white-space:pre-wrap}#quarto-appendix.default .quarto-appendix-citeas{font-size:.9em;padding:1em;border:solid 1px #dee2e6;margin-bottom:1em}#quarto-appendix.default .quarto-appendix-heading{font-size:1em !important}#quarto-appendix.default *[role=doc-endnotes]>ol,#quarto-appendix.default 
.quarto-appendix-contents>*:not(h2):not(.h2){font-size:.9em}#quarto-appendix.default section{padding-bottom:1.5em}#quarto-appendix.default section *[role=doc-endnotes],#quarto-appendix.default section>*:not(a){opacity:.9;word-wrap:break-word}.btn.btn-quarto,div.cell-output-display .btn-quarto{color:#cbcccc;background-color:#373a3c;border-color:#373a3c}.btn.btn-quarto:hover,div.cell-output-display .btn-quarto:hover{color:#cbcccc;background-color:#555859;border-color:#4b4e50}.btn-check:focus+.btn.btn-quarto,.btn.btn-quarto:focus,.btn-check:focus+div.cell-output-display .btn-quarto,div.cell-output-display .btn-quarto:focus{color:#cbcccc;background-color:#555859;border-color:#4b4e50;box-shadow:0 0 0 .25rem rgba(77,80,82,.5)}.btn-check:checked+.btn.btn-quarto,.btn-check:active+.btn.btn-quarto,.btn.btn-quarto:active,.btn.btn-quarto.active,.show>.btn.btn-quarto.dropdown-toggle,.btn-check:checked+div.cell-output-display .btn-quarto,.btn-check:active+div.cell-output-display .btn-quarto,div.cell-output-display .btn-quarto:active,div.cell-output-display .btn-quarto.active,.show>div.cell-output-display .btn-quarto.dropdown-toggle{color:#fff;background-color:#5f6163;border-color:#4b4e50}.btn-check:checked+.btn.btn-quarto:focus,.btn-check:active+.btn.btn-quarto:focus,.btn.btn-quarto:active:focus,.btn.btn-quarto.active:focus,.show>.btn.btn-quarto.dropdown-toggle:focus,.btn-check:checked+div.cell-output-display .btn-quarto:focus,.btn-check:active+div.cell-output-display .btn-quarto:focus,div.cell-output-display .btn-quarto:active:focus,div.cell-output-display .btn-quarto.active:focus,.show>div.cell-output-display .btn-quarto.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(77,80,82,.5)}.btn.btn-quarto:disabled,.btn.btn-quarto.disabled,div.cell-output-display .btn-quarto:disabled,div.cell-output-display .btn-quarto.disabled{color:#fff;background-color:#373a3c;border-color:#373a3c}nav.quarto-secondary-nav.color-navbar{background-color:#f8f9fa;color:#545555}nav.quarto-secondary-nav.color-navbar h1,nav.quarto-secondary-nav.color-navbar .h1,nav.quarto-secondary-nav.color-navbar .quarto-btn-toggle{color:#545555}@media(max-width: 991.98px){body.nav-sidebar .quarto-title-banner{margin-bottom:0;padding-bottom:0}body.nav-sidebar #title-block-header{margin-block-end:0}}p.subtitle{margin-top:.25em;margin-bottom:.5em}code a:any-link{color:inherit;text-decoration-color:#6c757d}/*! 
light */div.observablehq table thead tr th{background-color:var(--bs-body-bg)}input,button,select,optgroup,textarea{background-color:var(--bs-body-bg)}.code-annotated .code-copy-button{margin-right:1.25em;margin-top:0;padding-bottom:0;padding-top:3px}.code-annotation-gutter-bg{background-color:#fff}.code-annotation-gutter{background-color:rgba(233,236,239,.65)}.code-annotation-gutter,.code-annotation-gutter-bg{height:100%;width:calc(20px + .5em);position:absolute;top:0;right:0}dl.code-annotation-container-grid dt{margin-right:1em;margin-top:.25rem}dl.code-annotation-container-grid dt{font-family:var(--bs-font-monospace);color:#4f5457;border:solid #4f5457 1px;border-radius:50%;height:22px;width:22px;line-height:22px;font-size:11px;text-align:center;vertical-align:middle;text-decoration:none}dl.code-annotation-container-grid dt[data-target-cell]{cursor:pointer}dl.code-annotation-container-grid dt[data-target-cell].code-annotation-active{color:#fff;border:solid #aaa 1px;background-color:#aaa}pre.code-annotation-code{padding-top:0;padding-bottom:0}pre.code-annotation-code code{z-index:3}#code-annotation-line-highlight-gutter{width:100%;border-top:solid rgba(170,170,170,.2666666667) 1px;border-bottom:solid rgba(170,170,170,.2666666667) 1px;z-index:2;background-color:rgba(170,170,170,.1333333333)}#code-annotation-line-highlight{margin-left:-4em;width:calc(100% + 4em);border-top:solid rgba(170,170,170,.2666666667) 1px;border-bottom:solid rgba(170,170,170,.2666666667) 1px;z-index:2;background-color:rgba(170,170,170,.1333333333)}code.sourceCode .code-annotation-anchor.code-annotation-active{background-color:var(--quarto-hl-normal-color, #aaaaaa);border:solid var(--quarto-hl-normal-color, #aaaaaa) 1px;color:#e9ecef;font-weight:bolder}code.sourceCode .code-annotation-anchor{font-family:var(--bs-font-monospace);color:var(--quarto-hl-co-color);border:solid var(--quarto-hl-co-color) 1px;border-radius:50%;height:18px;width:18px;font-size:9px;margin-top:2px}code.sourceCode button.code-annotation-anchor{padding:2px}code.sourceCode a.code-annotation-anchor{line-height:18px;text-align:center;vertical-align:middle;cursor:default;text-decoration:none}@media print{.page-columns .column-screen-inset{grid-column:page-start-inset/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset table{background:#fff}.page-columns .column-screen-inset-left{grid-column:page-start-inset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-left table{background:#fff}.page-columns .column-screen-inset-right{grid-column:body-content-start/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-right table{background:#fff}.page-columns .column-screen{grid-column:page-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen table{background:#fff}.page-columns .column-screen-left{grid-column:page-start/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-left table{background:#fff}.page-columns .column-screen-right{grid-column:body-content-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-right table{background:#fff}.page-columns .column-screen-inset-shaded{grid-column:page-start-inset/page-end-inset;padding:1em;background:#f8f9fa;z-index:998;transform:translate3d(0, 0, 
0);margin-bottom:1em}}.quarto-video{margin-bottom:1em}.table>thead{border-top-width:0}.table>:not(caption)>*:not(:last-child)>*{border-bottom-color:#ebeced;border-bottom-style:solid;border-bottom-width:1px}.table>:not(:first-child){border-top:1px solid #b6babc;border-bottom:1px solid inherit}.table tbody{border-bottom-color:#b6babc}a.external:after{display:inline-block;height:.75rem;width:.75rem;margin-bottom:.15em;margin-left:.25em;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:.75rem .75rem}div.sourceCode code a.external:after{content:none}a.external:after:hover{cursor:pointer}.quarto-ext-icon{display:inline-block;font-size:.75em;padding-left:.3em}.code-with-filename .code-with-filename-file{margin-bottom:0;padding-bottom:2px;padding-top:2px;padding-left:.7em;border:var(--quarto-border-width) solid var(--quarto-border-color);border-radius:var(--quarto-border-radius);border-bottom:0;border-bottom-left-radius:0%;border-bottom-right-radius:0%}.code-with-filename div.sourceCode,.reveal .code-with-filename div.sourceCode{margin-top:0;border-top-left-radius:0%;border-top-right-radius:0%}.code-with-filename .code-with-filename-file pre{margin-bottom:0}.code-with-filename .code-with-filename-file,.code-with-filename .code-with-filename-file pre{background-color:rgba(219,219,219,.8)}.quarto-dark .code-with-filename .code-with-filename-file,.quarto-dark .code-with-filename .code-with-filename-file pre{background-color:#555}.code-with-filename .code-with-filename-file strong{font-weight:400}.quarto-title-banner{margin-bottom:1em;color:#545555;background:#f8f9fa}.quarto-title-banner .code-tools-button{color:#878888}.quarto-title-banner .code-tools-button:hover{color:#545555}.quarto-title-banner .code-tools-button>.bi::before{background-image:url('data:image/svg+xml,')}.quarto-title-banner .code-tools-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}.quarto-title-banner .quarto-title .title{font-weight:600}.quarto-title-banner .quarto-categories{margin-top:.75em}@media(min-width: 992px){.quarto-title-banner{padding-top:2.5em;padding-bottom:2.5em}}@media(max-width: 991.98px){.quarto-title-banner{padding-top:1em;padding-bottom:1em}}main.quarto-banner-title-block>section:first-child>h2,main.quarto-banner-title-block>section:first-child>.h2,main.quarto-banner-title-block>section:first-child>h3,main.quarto-banner-title-block>section:first-child>.h3,main.quarto-banner-title-block>section:first-child>h4,main.quarto-banner-title-block>section:first-child>.h4{margin-top:0}.quarto-title .quarto-categories{display:flex;flex-wrap:wrap;row-gap:.5em;column-gap:.4em;padding-bottom:.5em;margin-top:.75em}.quarto-title .quarto-categories .quarto-category{padding:.25em .75em;font-size:.65em;text-transform:uppercase;border:solid 1px;border-radius:.25rem;opacity:.6}.quarto-title .quarto-categories .quarto-category a{color:inherit}#title-block-header.quarto-title-block.default .quarto-title-meta{display:grid;grid-template-columns:repeat(2, 1fr)}#title-block-header.quarto-title-block.default .quarto-title .title{margin-bottom:0}#title-block-header.quarto-title-block.default .quarto-title-author-orcid img{margin-top:-0.2em;height:.8em;width:.8em}#title-block-header.quarto-title-block.default .quarto-description p:last-of-type{margin-bottom:0}#title-block-header.quarto-title-block.default .quarto-title-meta-contents p,#title-block-header.quarto-title-block.default .quarto-title-authors 
p,#title-block-header.quarto-title-block.default .quarto-title-affiliations p{margin-bottom:.1em}#title-block-header.quarto-title-block.default .quarto-title-meta-heading{text-transform:uppercase;margin-top:1em;font-size:.8em;opacity:.8;font-weight:400}#title-block-header.quarto-title-block.default .quarto-title-meta-contents{font-size:.9em}#title-block-header.quarto-title-block.default .quarto-title-meta-contents a{color:#373a3c}#title-block-header.quarto-title-block.default .quarto-title-meta-contents p.affiliation:last-of-type{margin-bottom:.1em}#title-block-header.quarto-title-block.default p.affiliation{margin-bottom:.1em}#title-block-header.quarto-title-block.default .description,#title-block-header.quarto-title-block.default .abstract{margin-top:0}#title-block-header.quarto-title-block.default .description>p,#title-block-header.quarto-title-block.default .abstract>p{font-size:.9em}#title-block-header.quarto-title-block.default .description>p:last-of-type,#title-block-header.quarto-title-block.default .abstract>p:last-of-type{margin-bottom:0}#title-block-header.quarto-title-block.default .description .abstract-title,#title-block-header.quarto-title-block.default .abstract .abstract-title{margin-top:1em;text-transform:uppercase;font-size:.8em;opacity:.8;font-weight:400}#title-block-header.quarto-title-block.default .quarto-title-meta-author{display:grid;grid-template-columns:1fr 1fr}.quarto-title-tools-only{display:flex;justify-content:right}body{-webkit-font-smoothing:antialiased}.badge.bg-light{color:#373a3c}.progress .progress-bar{font-size:8px;line-height:8px}/*# sourceMappingURL=038018dfc50d695214e8253e62c2ede5.css.map */ diff --git a/python-book/site_libs/bootstrap/bootstrap.min.js b/python-book/site_libs/bootstrap/bootstrap.min.js new file mode 100644 index 00000000..cc0a2556 --- /dev/null +++ b/python-book/site_libs/bootstrap/bootstrap.min.js @@ -0,0 +1,7 @@ +/*! 
+ * Bootstrap v5.1.3 (https://getbootstrap.com/) + * Copyright 2011-2021 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors) + * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) + */ +!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):(t="undefined"!=typeof globalThis?globalThis:t||self).bootstrap=e()}(this,(function(){"use strict";const t="transitionend",e=t=>{let e=t.getAttribute("data-bs-target");if(!e||"#"===e){let i=t.getAttribute("href");if(!i||!i.includes("#")&&!i.startsWith("."))return null;i.includes("#")&&!i.startsWith("#")&&(i=`#${i.split("#")[1]}`),e=i&&"#"!==i?i.trim():null}return e},i=t=>{const i=e(t);return i&&document.querySelector(i)?i:null},n=t=>{const i=e(t);return i?document.querySelector(i):null},s=e=>{e.dispatchEvent(new Event(t))},o=t=>!(!t||"object"!=typeof t)&&(void 0!==t.jquery&&(t=t[0]),void 0!==t.nodeType),r=t=>o(t)?t.jquery?t[0]:t:"string"==typeof t&&t.length>0?document.querySelector(t):null,a=(t,e,i)=>{Object.keys(i).forEach((n=>{const s=i[n],r=e[n],a=r&&o(r)?"element":null==(l=r)?`${l}`:{}.toString.call(l).match(/\s([a-z]+)/i)[1].toLowerCase();var l;if(!new RegExp(s).test(a))throw new TypeError(`${t.toUpperCase()}: Option "${n}" provided type "${a}" but expected type "${s}".`)}))},l=t=>!(!o(t)||0===t.getClientRects().length)&&"visible"===getComputedStyle(t).getPropertyValue("visibility"),c=t=>!t||t.nodeType!==Node.ELEMENT_NODE||!!t.classList.contains("disabled")||(void 0!==t.disabled?t.disabled:t.hasAttribute("disabled")&&"false"!==t.getAttribute("disabled")),h=t=>{if(!document.documentElement.attachShadow)return null;if("function"==typeof t.getRootNode){const e=t.getRootNode();return e instanceof ShadowRoot?e:null}return t instanceof ShadowRoot?t:t.parentNode?h(t.parentNode):null},d=()=>{},u=t=>{t.offsetHeight},f=()=>{const{jQuery:t}=window;return t&&!document.body.hasAttribute("data-bs-no-jquery")?t:null},p=[],m=()=>"rtl"===document.documentElement.dir,g=t=>{var e;e=()=>{const e=f();if(e){const i=t.NAME,n=e.fn[i];e.fn[i]=t.jQueryInterface,e.fn[i].Constructor=t,e.fn[i].noConflict=()=>(e.fn[i]=n,t.jQueryInterface)}},"loading"===document.readyState?(p.length||document.addEventListener("DOMContentLoaded",(()=>{p.forEach((t=>t()))})),p.push(e)):e()},_=t=>{"function"==typeof t&&t()},b=(e,i,n=!0)=>{if(!n)return void _(e);const o=(t=>{if(!t)return 0;let{transitionDuration:e,transitionDelay:i}=window.getComputedStyle(t);const n=Number.parseFloat(e),s=Number.parseFloat(i);return n||s?(e=e.split(",")[0],i=i.split(",")[0],1e3*(Number.parseFloat(e)+Number.parseFloat(i))):0})(i)+5;let r=!1;const a=({target:n})=>{n===i&&(r=!0,i.removeEventListener(t,a),_(e))};i.addEventListener(t,a),setTimeout((()=>{r||s(i)}),o)},v=(t,e,i,n)=>{let s=t.indexOf(e);if(-1===s)return t[!i&&n?t.length-1:0];const o=t.length;return s+=i?1:-1,n&&(s=(s+o)%o),t[Math.max(0,Math.min(s,o-1))]},y=/[^.]*(?=\..*)\.|.*/,w=/\..*/,E=/::\d+$/,A={};let T=1;const O={mouseenter:"mouseover",mouseleave:"mouseout"},C=/^(mouseenter|mouseleave)/i,k=new 
Set(["click","dblclick","mouseup","mousedown","contextmenu","mousewheel","DOMMouseScroll","mouseover","mouseout","mousemove","selectstart","selectend","keydown","keypress","keyup","orientationchange","touchstart","touchmove","touchend","touchcancel","pointerdown","pointermove","pointerup","pointerleave","pointercancel","gesturestart","gesturechange","gestureend","focus","blur","change","reset","select","submit","focusin","focusout","load","unload","beforeunload","resize","move","DOMContentLoaded","readystatechange","error","abort","scroll"]);function L(t,e){return e&&`${e}::${T++}`||t.uidEvent||T++}function x(t){const e=L(t);return t.uidEvent=e,A[e]=A[e]||{},A[e]}function D(t,e,i=null){const n=Object.keys(t);for(let s=0,o=n.length;sfunction(e){if(!e.relatedTarget||e.relatedTarget!==e.delegateTarget&&!e.delegateTarget.contains(e.relatedTarget))return t.call(this,e)};n?n=t(n):i=t(i)}const[o,r,a]=S(e,i,n),l=x(t),c=l[a]||(l[a]={}),h=D(c,r,o?i:null);if(h)return void(h.oneOff=h.oneOff&&s);const d=L(r,e.replace(y,"")),u=o?function(t,e,i){return function n(s){const o=t.querySelectorAll(e);for(let{target:r}=s;r&&r!==this;r=r.parentNode)for(let a=o.length;a--;)if(o[a]===r)return s.delegateTarget=r,n.oneOff&&j.off(t,s.type,e,i),i.apply(r,[s]);return null}}(t,i,n):function(t,e){return function i(n){return n.delegateTarget=t,i.oneOff&&j.off(t,n.type,e),e.apply(t,[n])}}(t,i);u.delegationSelector=o?i:null,u.originalHandler=r,u.oneOff=s,u.uidEvent=d,c[d]=u,t.addEventListener(a,u,o)}function I(t,e,i,n,s){const o=D(e[i],n,s);o&&(t.removeEventListener(i,o,Boolean(s)),delete e[i][o.uidEvent])}function P(t){return t=t.replace(w,""),O[t]||t}const j={on(t,e,i,n){N(t,e,i,n,!1)},one(t,e,i,n){N(t,e,i,n,!0)},off(t,e,i,n){if("string"!=typeof e||!t)return;const[s,o,r]=S(e,i,n),a=r!==e,l=x(t),c=e.startsWith(".");if(void 0!==o){if(!l||!l[r])return;return void I(t,l,r,o,s?i:null)}c&&Object.keys(l).forEach((i=>{!function(t,e,i,n){const s=e[i]||{};Object.keys(s).forEach((o=>{if(o.includes(n)){const n=s[o];I(t,e,i,n.originalHandler,n.delegationSelector)}}))}(t,l,i,e.slice(1))}));const h=l[r]||{};Object.keys(h).forEach((i=>{const n=i.replace(E,"");if(!a||e.includes(n)){const e=h[i];I(t,l,r,e.originalHandler,e.delegationSelector)}}))},trigger(t,e,i){if("string"!=typeof e||!t)return null;const n=f(),s=P(e),o=e!==s,r=k.has(s);let a,l=!0,c=!0,h=!1,d=null;return o&&n&&(a=n.Event(e,i),n(t).trigger(a),l=!a.isPropagationStopped(),c=!a.isImmediatePropagationStopped(),h=a.isDefaultPrevented()),r?(d=document.createEvent("HTMLEvents"),d.initEvent(s,l,!0)):d=new CustomEvent(e,{bubbles:l,cancelable:!0}),void 0!==i&&Object.keys(i).forEach((t=>{Object.defineProperty(d,t,{get:()=>i[t]})})),h&&d.preventDefault(),c&&t.dispatchEvent(d),d.defaultPrevented&&void 0!==a&&a.preventDefault(),d}},M=new Map,H={set(t,e,i){M.has(t)||M.set(t,new Map);const n=M.get(t);n.has(e)||0===n.size?n.set(e,i):console.error(`Bootstrap doesn't allow more than one instance per element. 
Bound instance: ${Array.from(n.keys())[0]}.`)},get:(t,e)=>M.has(t)&&M.get(t).get(e)||null,remove(t,e){if(!M.has(t))return;const i=M.get(t);i.delete(e),0===i.size&&M.delete(t)}};class B{constructor(t){(t=r(t))&&(this._element=t,H.set(this._element,this.constructor.DATA_KEY,this))}dispose(){H.remove(this._element,this.constructor.DATA_KEY),j.off(this._element,this.constructor.EVENT_KEY),Object.getOwnPropertyNames(this).forEach((t=>{this[t]=null}))}_queueCallback(t,e,i=!0){b(t,e,i)}static getInstance(t){return H.get(r(t),this.DATA_KEY)}static getOrCreateInstance(t,e={}){return this.getInstance(t)||new this(t,"object"==typeof e?e:null)}static get VERSION(){return"5.1.3"}static get NAME(){throw new Error('You have to implement the static method "NAME", for each component!')}static get DATA_KEY(){return`bs.${this.NAME}`}static get EVENT_KEY(){return`.${this.DATA_KEY}`}}const R=(t,e="hide")=>{const i=`click.dismiss${t.EVENT_KEY}`,s=t.NAME;j.on(document,i,`[data-bs-dismiss="${s}"]`,(function(i){if(["A","AREA"].includes(this.tagName)&&i.preventDefault(),c(this))return;const o=n(this)||this.closest(`.${s}`);t.getOrCreateInstance(o)[e]()}))};class W extends B{static get NAME(){return"alert"}close(){if(j.trigger(this._element,"close.bs.alert").defaultPrevented)return;this._element.classList.remove("show");const t=this._element.classList.contains("fade");this._queueCallback((()=>this._destroyElement()),this._element,t)}_destroyElement(){this._element.remove(),j.trigger(this._element,"closed.bs.alert"),this.dispose()}static jQueryInterface(t){return this.each((function(){const e=W.getOrCreateInstance(this);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}R(W,"close"),g(W);const $='[data-bs-toggle="button"]';class z extends B{static get NAME(){return"button"}toggle(){this._element.setAttribute("aria-pressed",this._element.classList.toggle("active"))}static jQueryInterface(t){return this.each((function(){const e=z.getOrCreateInstance(this);"toggle"===t&&e[t]()}))}}function q(t){return"true"===t||"false"!==t&&(t===Number(t).toString()?Number(t):""===t||"null"===t?null:t)}function F(t){return t.replace(/[A-Z]/g,(t=>`-${t.toLowerCase()}`))}j.on(document,"click.bs.button.data-api",$,(t=>{t.preventDefault();const e=t.target.closest($);z.getOrCreateInstance(e).toggle()})),g(z);const U={setDataAttribute(t,e,i){t.setAttribute(`data-bs-${F(e)}`,i)},removeDataAttribute(t,e){t.removeAttribute(`data-bs-${F(e)}`)},getDataAttributes(t){if(!t)return{};const e={};return Object.keys(t.dataset).filter((t=>t.startsWith("bs"))).forEach((i=>{let n=i.replace(/^bs/,"");n=n.charAt(0).toLowerCase()+n.slice(1,n.length),e[n]=q(t.dataset[i])})),e},getDataAttribute:(t,e)=>q(t.getAttribute(`data-bs-${F(e)}`)),offset(t){const e=t.getBoundingClientRect();return{top:e.top+window.pageYOffset,left:e.left+window.pageXOffset}},position:t=>({top:t.offsetTop,left:t.offsetLeft})},V={find:(t,e=document.documentElement)=>[].concat(...Element.prototype.querySelectorAll.call(e,t)),findOne:(t,e=document.documentElement)=>Element.prototype.querySelector.call(e,t),children:(t,e)=>[].concat(...t.children).filter((t=>t.matches(e))),parents(t,e){const i=[];let n=t.parentNode;for(;n&&n.nodeType===Node.ELEMENT_NODE&&3!==n.nodeType;)n.matches(e)&&i.push(n),n=n.parentNode;return i},prev(t,e){let i=t.previousElementSibling;for(;i;){if(i.matches(e))return[i];i=i.previousElementSibling}return[]},next(t,e){let 
i=t.nextElementSibling;for(;i;){if(i.matches(e))return[i];i=i.nextElementSibling}return[]},focusableChildren(t){const e=["a","button","input","textarea","select","details","[tabindex]",'[contenteditable="true"]'].map((t=>`${t}:not([tabindex^="-"])`)).join(", ");return this.find(e,t).filter((t=>!c(t)&&l(t)))}},K="carousel",X={interval:5e3,keyboard:!0,slide:!1,pause:"hover",wrap:!0,touch:!0},Y={interval:"(number|boolean)",keyboard:"boolean",slide:"(boolean|string)",pause:"(string|boolean)",wrap:"boolean",touch:"boolean"},Q="next",G="prev",Z="left",J="right",tt={ArrowLeft:J,ArrowRight:Z},et="slid.bs.carousel",it="active",nt=".active.carousel-item";class st extends B{constructor(t,e){super(t),this._items=null,this._interval=null,this._activeElement=null,this._isPaused=!1,this._isSliding=!1,this.touchTimeout=null,this.touchStartX=0,this.touchDeltaX=0,this._config=this._getConfig(e),this._indicatorsElement=V.findOne(".carousel-indicators",this._element),this._touchSupported="ontouchstart"in document.documentElement||navigator.maxTouchPoints>0,this._pointerEvent=Boolean(window.PointerEvent),this._addEventListeners()}static get Default(){return X}static get NAME(){return K}next(){this._slide(Q)}nextWhenVisible(){!document.hidden&&l(this._element)&&this.next()}prev(){this._slide(G)}pause(t){t||(this._isPaused=!0),V.findOne(".carousel-item-next, .carousel-item-prev",this._element)&&(s(this._element),this.cycle(!0)),clearInterval(this._interval),this._interval=null}cycle(t){t||(this._isPaused=!1),this._interval&&(clearInterval(this._interval),this._interval=null),this._config&&this._config.interval&&!this._isPaused&&(this._updateInterval(),this._interval=setInterval((document.visibilityState?this.nextWhenVisible:this.next).bind(this),this._config.interval))}to(t){this._activeElement=V.findOne(nt,this._element);const e=this._getItemIndex(this._activeElement);if(t>this._items.length-1||t<0)return;if(this._isSliding)return void j.one(this._element,et,(()=>this.to(t)));if(e===t)return this.pause(),void this.cycle();const i=t>e?Q:G;this._slide(i,this._items[t])}_getConfig(t){return t={...X,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(K,t,Y),t}_handleSwipe(){const t=Math.abs(this.touchDeltaX);if(t<=40)return;const e=t/this.touchDeltaX;this.touchDeltaX=0,e&&this._slide(e>0?J:Z)}_addEventListeners(){this._config.keyboard&&j.on(this._element,"keydown.bs.carousel",(t=>this._keydown(t))),"hover"===this._config.pause&&(j.on(this._element,"mouseenter.bs.carousel",(t=>this.pause(t))),j.on(this._element,"mouseleave.bs.carousel",(t=>this.cycle(t)))),this._config.touch&&this._touchSupported&&this._addTouchEventListeners()}_addTouchEventListeners(){const t=t=>this._pointerEvent&&("pen"===t.pointerType||"touch"===t.pointerType),e=e=>{t(e)?this.touchStartX=e.clientX:this._pointerEvent||(this.touchStartX=e.touches[0].clientX)},i=t=>{this.touchDeltaX=t.touches&&t.touches.length>1?0:t.touches[0].clientX-this.touchStartX},n=e=>{t(e)&&(this.touchDeltaX=e.clientX-this.touchStartX),this._handleSwipe(),"hover"===this._config.pause&&(this.pause(),this.touchTimeout&&clearTimeout(this.touchTimeout),this.touchTimeout=setTimeout((t=>this.cycle(t)),500+this._config.interval))};V.find(".carousel-item 
img",this._element).forEach((t=>{j.on(t,"dragstart.bs.carousel",(t=>t.preventDefault()))})),this._pointerEvent?(j.on(this._element,"pointerdown.bs.carousel",(t=>e(t))),j.on(this._element,"pointerup.bs.carousel",(t=>n(t))),this._element.classList.add("pointer-event")):(j.on(this._element,"touchstart.bs.carousel",(t=>e(t))),j.on(this._element,"touchmove.bs.carousel",(t=>i(t))),j.on(this._element,"touchend.bs.carousel",(t=>n(t))))}_keydown(t){if(/input|textarea/i.test(t.target.tagName))return;const e=tt[t.key];e&&(t.preventDefault(),this._slide(e))}_getItemIndex(t){return this._items=t&&t.parentNode?V.find(".carousel-item",t.parentNode):[],this._items.indexOf(t)}_getItemByOrder(t,e){const i=t===Q;return v(this._items,e,i,this._config.wrap)}_triggerSlideEvent(t,e){const i=this._getItemIndex(t),n=this._getItemIndex(V.findOne(nt,this._element));return j.trigger(this._element,"slide.bs.carousel",{relatedTarget:t,direction:e,from:n,to:i})}_setActiveIndicatorElement(t){if(this._indicatorsElement){const e=V.findOne(".active",this._indicatorsElement);e.classList.remove(it),e.removeAttribute("aria-current");const i=V.find("[data-bs-target]",this._indicatorsElement);for(let e=0;e{j.trigger(this._element,et,{relatedTarget:o,direction:d,from:s,to:r})};if(this._element.classList.contains("slide")){o.classList.add(h),u(o),n.classList.add(c),o.classList.add(c);const t=()=>{o.classList.remove(c,h),o.classList.add(it),n.classList.remove(it,h,c),this._isSliding=!1,setTimeout(f,0)};this._queueCallback(t,n,!0)}else n.classList.remove(it),o.classList.add(it),this._isSliding=!1,f();a&&this.cycle()}_directionToOrder(t){return[J,Z].includes(t)?m()?t===Z?G:Q:t===Z?Q:G:t}_orderToDirection(t){return[Q,G].includes(t)?m()?t===G?Z:J:t===G?J:Z:t}static carouselInterface(t,e){const i=st.getOrCreateInstance(t,e);let{_config:n}=i;"object"==typeof e&&(n={...n,...e});const s="string"==typeof e?e:n.slide;if("number"==typeof e)i.to(e);else if("string"==typeof s){if(void 0===i[s])throw new TypeError(`No method named "${s}"`);i[s]()}else n.interval&&n.ride&&(i.pause(),i.cycle())}static jQueryInterface(t){return this.each((function(){st.carouselInterface(this,t)}))}static dataApiClickHandler(t){const e=n(this);if(!e||!e.classList.contains("carousel"))return;const i={...U.getDataAttributes(e),...U.getDataAttributes(this)},s=this.getAttribute("data-bs-slide-to");s&&(i.interval=!1),st.carouselInterface(e,i),s&&st.getInstance(e).to(s),t.preventDefault()}}j.on(document,"click.bs.carousel.data-api","[data-bs-slide], [data-bs-slide-to]",st.dataApiClickHandler),j.on(window,"load.bs.carousel.data-api",(()=>{const t=V.find('[data-bs-ride="carousel"]');for(let e=0,i=t.length;et===this._element));null!==s&&o.length&&(this._selector=s,this._triggerArray.push(e))}this._initializeChildren(),this._config.parent||this._addAriaAndCollapsedClass(this._triggerArray,this._isShown()),this._config.toggle&&this.toggle()}static get Default(){return rt}static get NAME(){return ot}toggle(){this._isShown()?this.hide():this.show()}show(){if(this._isTransitioning||this._isShown())return;let t,e=[];if(this._config.parent){const t=V.find(ut,this._config.parent);e=V.find(".collapse.show, .collapse.collapsing",this._config.parent).filter((e=>!t.includes(e)))}const i=V.findOne(this._selector);if(e.length){const n=e.find((t=>i!==t));if(t=n?pt.getInstance(n):null,t&&t._isTransitioning)return}if(j.trigger(this._element,"show.bs.collapse").defaultPrevented)return;e.forEach((e=>{i!==e&&pt.getOrCreateInstance(e,{toggle:!1}).hide(),t||H.set(e,"bs.collapse",null)}));const 
n=this._getDimension();this._element.classList.remove(ct),this._element.classList.add(ht),this._element.style[n]=0,this._addAriaAndCollapsedClass(this._triggerArray,!0),this._isTransitioning=!0;const s=`scroll${n[0].toUpperCase()+n.slice(1)}`;this._queueCallback((()=>{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct,lt),this._element.style[n]="",j.trigger(this._element,"shown.bs.collapse")}),this._element,!0),this._element.style[n]=`${this._element[s]}px`}hide(){if(this._isTransitioning||!this._isShown())return;if(j.trigger(this._element,"hide.bs.collapse").defaultPrevented)return;const t=this._getDimension();this._element.style[t]=`${this._element.getBoundingClientRect()[t]}px`,u(this._element),this._element.classList.add(ht),this._element.classList.remove(ct,lt);const e=this._triggerArray.length;for(let t=0;t{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct),j.trigger(this._element,"hidden.bs.collapse")}),this._element,!0)}_isShown(t=this._element){return t.classList.contains(lt)}_getConfig(t){return(t={...rt,...U.getDataAttributes(this._element),...t}).toggle=Boolean(t.toggle),t.parent=r(t.parent),a(ot,t,at),t}_getDimension(){return this._element.classList.contains("collapse-horizontal")?"width":"height"}_initializeChildren(){if(!this._config.parent)return;const t=V.find(ut,this._config.parent);V.find(ft,this._config.parent).filter((e=>!t.includes(e))).forEach((t=>{const e=n(t);e&&this._addAriaAndCollapsedClass([t],this._isShown(e))}))}_addAriaAndCollapsedClass(t,e){t.length&&t.forEach((t=>{e?t.classList.remove(dt):t.classList.add(dt),t.setAttribute("aria-expanded",e)}))}static jQueryInterface(t){return this.each((function(){const e={};"string"==typeof t&&/show|hide/.test(t)&&(e.toggle=!1);const i=pt.getOrCreateInstance(this,e);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t]()}}))}}j.on(document,"click.bs.collapse.data-api",ft,(function(t){("A"===t.target.tagName||t.delegateTarget&&"A"===t.delegateTarget.tagName)&&t.preventDefault();const e=i(this);V.find(e).forEach((t=>{pt.getOrCreateInstance(t,{toggle:!1}).toggle()}))})),g(pt);var mt="top",gt="bottom",_t="right",bt="left",vt="auto",yt=[mt,gt,_t,bt],wt="start",Et="end",At="clippingParents",Tt="viewport",Ot="popper",Ct="reference",kt=yt.reduce((function(t,e){return t.concat([e+"-"+wt,e+"-"+Et])}),[]),Lt=[].concat(yt,[vt]).reduce((function(t,e){return t.concat([e,e+"-"+wt,e+"-"+Et])}),[]),xt="beforeRead",Dt="read",St="afterRead",Nt="beforeMain",It="main",Pt="afterMain",jt="beforeWrite",Mt="write",Ht="afterWrite",Bt=[xt,Dt,St,Nt,It,Pt,jt,Mt,Ht];function Rt(t){return t?(t.nodeName||"").toLowerCase():null}function Wt(t){if(null==t)return window;if("[object Window]"!==t.toString()){var e=t.ownerDocument;return e&&e.defaultView||window}return t}function $t(t){return t instanceof Wt(t).Element||t instanceof Element}function zt(t){return t instanceof Wt(t).HTMLElement||t instanceof HTMLElement}function qt(t){return"undefined"!=typeof ShadowRoot&&(t instanceof Wt(t).ShadowRoot||t instanceof ShadowRoot)}const Ft={name:"applyStyles",enabled:!0,phase:"write",fn:function(t){var e=t.state;Object.keys(e.elements).forEach((function(t){var i=e.styles[t]||{},n=e.attributes[t]||{},s=e.elements[t];zt(s)&&Rt(s)&&(Object.assign(s.style,i),Object.keys(n).forEach((function(t){var e=n[t];!1===e?s.removeAttribute(t):s.setAttribute(t,!0===e?"":e)})))}))},effect:function(t){var 
e=t.state,i={popper:{position:e.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};return Object.assign(e.elements.popper.style,i.popper),e.styles=i,e.elements.arrow&&Object.assign(e.elements.arrow.style,i.arrow),function(){Object.keys(e.elements).forEach((function(t){var n=e.elements[t],s=e.attributes[t]||{},o=Object.keys(e.styles.hasOwnProperty(t)?e.styles[t]:i[t]).reduce((function(t,e){return t[e]="",t}),{});zt(n)&&Rt(n)&&(Object.assign(n.style,o),Object.keys(s).forEach((function(t){n.removeAttribute(t)})))}))}},requires:["computeStyles"]};function Ut(t){return t.split("-")[0]}function Vt(t,e){var i=t.getBoundingClientRect();return{width:i.width/1,height:i.height/1,top:i.top/1,right:i.right/1,bottom:i.bottom/1,left:i.left/1,x:i.left/1,y:i.top/1}}function Kt(t){var e=Vt(t),i=t.offsetWidth,n=t.offsetHeight;return Math.abs(e.width-i)<=1&&(i=e.width),Math.abs(e.height-n)<=1&&(n=e.height),{x:t.offsetLeft,y:t.offsetTop,width:i,height:n}}function Xt(t,e){var i=e.getRootNode&&e.getRootNode();if(t.contains(e))return!0;if(i&&qt(i)){var n=e;do{if(n&&t.isSameNode(n))return!0;n=n.parentNode||n.host}while(n)}return!1}function Yt(t){return Wt(t).getComputedStyle(t)}function Qt(t){return["table","td","th"].indexOf(Rt(t))>=0}function Gt(t){return(($t(t)?t.ownerDocument:t.document)||window.document).documentElement}function Zt(t){return"html"===Rt(t)?t:t.assignedSlot||t.parentNode||(qt(t)?t.host:null)||Gt(t)}function Jt(t){return zt(t)&&"fixed"!==Yt(t).position?t.offsetParent:null}function te(t){for(var e=Wt(t),i=Jt(t);i&&Qt(i)&&"static"===Yt(i).position;)i=Jt(i);return i&&("html"===Rt(i)||"body"===Rt(i)&&"static"===Yt(i).position)?e:i||function(t){var e=-1!==navigator.userAgent.toLowerCase().indexOf("firefox");if(-1!==navigator.userAgent.indexOf("Trident")&&zt(t)&&"fixed"===Yt(t).position)return null;for(var i=Zt(t);zt(i)&&["html","body"].indexOf(Rt(i))<0;){var n=Yt(i);if("none"!==n.transform||"none"!==n.perspective||"paint"===n.contain||-1!==["transform","perspective"].indexOf(n.willChange)||e&&"filter"===n.willChange||e&&n.filter&&"none"!==n.filter)return i;i=i.parentNode}return null}(t)||e}function ee(t){return["top","bottom"].indexOf(t)>=0?"x":"y"}var ie=Math.max,ne=Math.min,se=Math.round;function oe(t,e,i){return ie(t,ne(e,i))}function re(t){return Object.assign({},{top:0,right:0,bottom:0,left:0},t)}function ae(t,e){return e.reduce((function(e,i){return e[i]=t,e}),{})}const le={name:"arrow",enabled:!0,phase:"main",fn:function(t){var e,i=t.state,n=t.name,s=t.options,o=i.elements.arrow,r=i.modifiersData.popperOffsets,a=Ut(i.placement),l=ee(a),c=[bt,_t].indexOf(a)>=0?"height":"width";if(o&&r){var h=function(t,e){return re("number"!=typeof(t="function"==typeof t?t(Object.assign({},e.rects,{placement:e.placement})):t)?t:ae(t,yt))}(s.padding,i),d=Kt(o),u="y"===l?mt:bt,f="y"===l?gt:_t,p=i.rects.reference[c]+i.rects.reference[l]-r[l]-i.rects.popper[c],m=r[l]-i.rects.reference[l],g=te(o),_=g?"y"===l?g.clientHeight||0:g.clientWidth||0:0,b=p/2-m/2,v=h[u],y=_-d[c]-h[f],w=_/2-d[c]/2+b,E=oe(v,w,y),A=l;i.modifiersData[n]=((e={})[A]=E,e.centerOffset=E-w,e)}},effect:function(t){var e=t.state,i=t.options.element,n=void 0===i?"[data-popper-arrow]":i;null!=n&&("string"!=typeof n||(n=e.elements.popper.querySelector(n)))&&Xt(e.elements.popper,n)&&(e.elements.arrow=n)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function ce(t){return t.split("-")[1]}var he={top:"auto",right:"auto",bottom:"auto",left:"auto"};function de(t){var 
e,i=t.popper,n=t.popperRect,s=t.placement,o=t.variation,r=t.offsets,a=t.position,l=t.gpuAcceleration,c=t.adaptive,h=t.roundOffsets,d=!0===h?function(t){var e=t.x,i=t.y,n=window.devicePixelRatio||1;return{x:se(se(e*n)/n)||0,y:se(se(i*n)/n)||0}}(r):"function"==typeof h?h(r):r,u=d.x,f=void 0===u?0:u,p=d.y,m=void 0===p?0:p,g=r.hasOwnProperty("x"),_=r.hasOwnProperty("y"),b=bt,v=mt,y=window;if(c){var w=te(i),E="clientHeight",A="clientWidth";w===Wt(i)&&"static"!==Yt(w=Gt(i)).position&&"absolute"===a&&(E="scrollHeight",A="scrollWidth"),w=w,s!==mt&&(s!==bt&&s!==_t||o!==Et)||(v=gt,m-=w[E]-n.height,m*=l?1:-1),s!==bt&&(s!==mt&&s!==gt||o!==Et)||(b=_t,f-=w[A]-n.width,f*=l?1:-1)}var T,O=Object.assign({position:a},c&&he);return l?Object.assign({},O,((T={})[v]=_?"0":"",T[b]=g?"0":"",T.transform=(y.devicePixelRatio||1)<=1?"translate("+f+"px, "+m+"px)":"translate3d("+f+"px, "+m+"px, 0)",T)):Object.assign({},O,((e={})[v]=_?m+"px":"",e[b]=g?f+"px":"",e.transform="",e))}const ue={name:"computeStyles",enabled:!0,phase:"beforeWrite",fn:function(t){var e=t.state,i=t.options,n=i.gpuAcceleration,s=void 0===n||n,o=i.adaptive,r=void 0===o||o,a=i.roundOffsets,l=void 0===a||a,c={placement:Ut(e.placement),variation:ce(e.placement),popper:e.elements.popper,popperRect:e.rects.popper,gpuAcceleration:s};null!=e.modifiersData.popperOffsets&&(e.styles.popper=Object.assign({},e.styles.popper,de(Object.assign({},c,{offsets:e.modifiersData.popperOffsets,position:e.options.strategy,adaptive:r,roundOffsets:l})))),null!=e.modifiersData.arrow&&(e.styles.arrow=Object.assign({},e.styles.arrow,de(Object.assign({},c,{offsets:e.modifiersData.arrow,position:"absolute",adaptive:!1,roundOffsets:l})))),e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-placement":e.placement})},data:{}};var fe={passive:!0};const pe={name:"eventListeners",enabled:!0,phase:"write",fn:function(){},effect:function(t){var e=t.state,i=t.instance,n=t.options,s=n.scroll,o=void 0===s||s,r=n.resize,a=void 0===r||r,l=Wt(e.elements.popper),c=[].concat(e.scrollParents.reference,e.scrollParents.popper);return o&&c.forEach((function(t){t.addEventListener("scroll",i.update,fe)})),a&&l.addEventListener("resize",i.update,fe),function(){o&&c.forEach((function(t){t.removeEventListener("scroll",i.update,fe)})),a&&l.removeEventListener("resize",i.update,fe)}},data:{}};var me={left:"right",right:"left",bottom:"top",top:"bottom"};function ge(t){return t.replace(/left|right|bottom|top/g,(function(t){return me[t]}))}var _e={start:"end",end:"start"};function be(t){return t.replace(/start|end/g,(function(t){return _e[t]}))}function ve(t){var e=Wt(t);return{scrollLeft:e.pageXOffset,scrollTop:e.pageYOffset}}function ye(t){return Vt(Gt(t)).left+ve(t).scrollLeft}function we(t){var e=Yt(t),i=e.overflow,n=e.overflowX,s=e.overflowY;return/auto|scroll|overlay|hidden/.test(i+s+n)}function Ee(t){return["html","body","#document"].indexOf(Rt(t))>=0?t.ownerDocument.body:zt(t)&&we(t)?t:Ee(Zt(t))}function Ae(t,e){var i;void 0===e&&(e=[]);var n=Ee(t),s=n===(null==(i=t.ownerDocument)?void 0:i.body),o=Wt(n),r=s?[o].concat(o.visualViewport||[],we(n)?n:[]):n,a=e.concat(r);return s?a:a.concat(Ae(Zt(r)))}function Te(t){return Object.assign({},t,{left:t.x,top:t.y,right:t.x+t.width,bottom:t.y+t.height})}function Oe(t,e){return e===Tt?Te(function(t){var e=Wt(t),i=Gt(t),n=e.visualViewport,s=i.clientWidth,o=i.clientHeight,r=0,a=0;return 
n&&(s=n.width,o=n.height,/^((?!chrome|android).)*safari/i.test(navigator.userAgent)||(r=n.offsetLeft,a=n.offsetTop)),{width:s,height:o,x:r+ye(t),y:a}}(t)):zt(e)?function(t){var e=Vt(t);return e.top=e.top+t.clientTop,e.left=e.left+t.clientLeft,e.bottom=e.top+t.clientHeight,e.right=e.left+t.clientWidth,e.width=t.clientWidth,e.height=t.clientHeight,e.x=e.left,e.y=e.top,e}(e):Te(function(t){var e,i=Gt(t),n=ve(t),s=null==(e=t.ownerDocument)?void 0:e.body,o=ie(i.scrollWidth,i.clientWidth,s?s.scrollWidth:0,s?s.clientWidth:0),r=ie(i.scrollHeight,i.clientHeight,s?s.scrollHeight:0,s?s.clientHeight:0),a=-n.scrollLeft+ye(t),l=-n.scrollTop;return"rtl"===Yt(s||i).direction&&(a+=ie(i.clientWidth,s?s.clientWidth:0)-o),{width:o,height:r,x:a,y:l}}(Gt(t)))}function Ce(t){var e,i=t.reference,n=t.element,s=t.placement,o=s?Ut(s):null,r=s?ce(s):null,a=i.x+i.width/2-n.width/2,l=i.y+i.height/2-n.height/2;switch(o){case mt:e={x:a,y:i.y-n.height};break;case gt:e={x:a,y:i.y+i.height};break;case _t:e={x:i.x+i.width,y:l};break;case bt:e={x:i.x-n.width,y:l};break;default:e={x:i.x,y:i.y}}var c=o?ee(o):null;if(null!=c){var h="y"===c?"height":"width";switch(r){case wt:e[c]=e[c]-(i[h]/2-n[h]/2);break;case Et:e[c]=e[c]+(i[h]/2-n[h]/2)}}return e}function ke(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=void 0===n?t.placement:n,o=i.boundary,r=void 0===o?At:o,a=i.rootBoundary,l=void 0===a?Tt:a,c=i.elementContext,h=void 0===c?Ot:c,d=i.altBoundary,u=void 0!==d&&d,f=i.padding,p=void 0===f?0:f,m=re("number"!=typeof p?p:ae(p,yt)),g=h===Ot?Ct:Ot,_=t.rects.popper,b=t.elements[u?g:h],v=function(t,e,i){var n="clippingParents"===e?function(t){var e=Ae(Zt(t)),i=["absolute","fixed"].indexOf(Yt(t).position)>=0&&zt(t)?te(t):t;return $t(i)?e.filter((function(t){return $t(t)&&Xt(t,i)&&"body"!==Rt(t)})):[]}(t):[].concat(e),s=[].concat(n,[i]),o=s[0],r=s.reduce((function(e,i){var n=Oe(t,i);return e.top=ie(n.top,e.top),e.right=ne(n.right,e.right),e.bottom=ne(n.bottom,e.bottom),e.left=ie(n.left,e.left),e}),Oe(t,o));return r.width=r.right-r.left,r.height=r.bottom-r.top,r.x=r.left,r.y=r.top,r}($t(b)?b:b.contextElement||Gt(t.elements.popper),r,l),y=Vt(t.elements.reference),w=Ce({reference:y,element:_,strategy:"absolute",placement:s}),E=Te(Object.assign({},_,w)),A=h===Ot?E:y,T={top:v.top-A.top+m.top,bottom:A.bottom-v.bottom+m.bottom,left:v.left-A.left+m.left,right:A.right-v.right+m.right},O=t.modifiersData.offset;if(h===Ot&&O){var C=O[s];Object.keys(T).forEach((function(t){var e=[_t,gt].indexOf(t)>=0?1:-1,i=[mt,gt].indexOf(t)>=0?"y":"x";T[t]+=C[i]*e}))}return T}function Le(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=i.boundary,o=i.rootBoundary,r=i.padding,a=i.flipVariations,l=i.allowedAutoPlacements,c=void 0===l?Lt:l,h=ce(n),d=h?a?kt:kt.filter((function(t){return ce(t)===h})):yt,u=d.filter((function(t){return c.indexOf(t)>=0}));0===u.length&&(u=d);var f=u.reduce((function(e,i){return e[i]=ke(t,{placement:i,boundary:s,rootBoundary:o,padding:r})[Ut(i)],e}),{});return Object.keys(f).sort((function(t,e){return f[t]-f[e]}))}const xe={name:"flip",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name;if(!e.modifiersData[n]._skip){for(var s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0===r||r,l=i.fallbackPlacements,c=i.padding,h=i.boundary,d=i.rootBoundary,u=i.altBoundary,f=i.flipVariations,p=void 0===f||f,m=i.allowedAutoPlacements,g=e.options.placement,_=Ut(g),b=l||(_!==g&&p?function(t){if(Ut(t)===vt)return[];var e=ge(t);return[be(t),e,be(e)]}(g):[ge(g)]),v=[g].concat(b).reduce((function(t,i){return 
t.concat(Ut(i)===vt?Le(e,{placement:i,boundary:h,rootBoundary:d,padding:c,flipVariations:p,allowedAutoPlacements:m}):i)}),[]),y=e.rects.reference,w=e.rects.popper,E=new Map,A=!0,T=v[0],O=0;O=0,D=x?"width":"height",S=ke(e,{placement:C,boundary:h,rootBoundary:d,altBoundary:u,padding:c}),N=x?L?_t:bt:L?gt:mt;y[D]>w[D]&&(N=ge(N));var I=ge(N),P=[];if(o&&P.push(S[k]<=0),a&&P.push(S[N]<=0,S[I]<=0),P.every((function(t){return t}))){T=C,A=!1;break}E.set(C,P)}if(A)for(var j=function(t){var e=v.find((function(e){var i=E.get(e);if(i)return i.slice(0,t).every((function(t){return t}))}));if(e)return T=e,"break"},M=p?3:1;M>0&&"break"!==j(M);M--);e.placement!==T&&(e.modifiersData[n]._skip=!0,e.placement=T,e.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function De(t,e,i){return void 0===i&&(i={x:0,y:0}),{top:t.top-e.height-i.y,right:t.right-e.width+i.x,bottom:t.bottom-e.height+i.y,left:t.left-e.width-i.x}}function Se(t){return[mt,_t,gt,bt].some((function(e){return t[e]>=0}))}const Ne={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(t){var e=t.state,i=t.name,n=e.rects.reference,s=e.rects.popper,o=e.modifiersData.preventOverflow,r=ke(e,{elementContext:"reference"}),a=ke(e,{altBoundary:!0}),l=De(r,n),c=De(a,s,o),h=Se(l),d=Se(c);e.modifiersData[i]={referenceClippingOffsets:l,popperEscapeOffsets:c,isReferenceHidden:h,hasPopperEscaped:d},e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-reference-hidden":h,"data-popper-escaped":d})}},Ie={name:"offset",enabled:!0,phase:"main",requires:["popperOffsets"],fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.offset,o=void 0===s?[0,0]:s,r=Lt.reduce((function(t,i){return t[i]=function(t,e,i){var n=Ut(t),s=[bt,mt].indexOf(n)>=0?-1:1,o="function"==typeof i?i(Object.assign({},e,{placement:t})):i,r=o[0],a=o[1];return r=r||0,a=(a||0)*s,[bt,_t].indexOf(n)>=0?{x:a,y:r}:{x:r,y:a}}(i,e.rects,o),t}),{}),a=r[e.placement],l=a.x,c=a.y;null!=e.modifiersData.popperOffsets&&(e.modifiersData.popperOffsets.x+=l,e.modifiersData.popperOffsets.y+=c),e.modifiersData[n]=r}},Pe={name:"popperOffsets",enabled:!0,phase:"read",fn:function(t){var e=t.state,i=t.name;e.modifiersData[i]=Ce({reference:e.rects.reference,element:e.rects.popper,strategy:"absolute",placement:e.placement})},data:{}},je={name:"preventOverflow",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0!==r&&r,l=i.boundary,c=i.rootBoundary,h=i.altBoundary,d=i.padding,u=i.tether,f=void 0===u||u,p=i.tetherOffset,m=void 0===p?0:p,g=ke(e,{boundary:l,rootBoundary:c,padding:d,altBoundary:h}),_=Ut(e.placement),b=ce(e.placement),v=!b,y=ee(_),w="x"===y?"y":"x",E=e.modifiersData.popperOffsets,A=e.rects.reference,T=e.rects.popper,O="function"==typeof m?m(Object.assign({},e.rects,{placement:e.placement})):m,C={x:0,y:0};if(E){if(o||a){var k="y"===y?mt:bt,L="y"===y?gt:_t,x="y"===y?"height":"width",D=E[y],S=E[y]+g[k],N=E[y]-g[L],I=f?-T[x]/2:0,P=b===wt?A[x]:T[x],j=b===wt?-T[x]:-A[x],M=e.elements.arrow,H=f&&M?Kt(M):{width:0,height:0},B=e.modifiersData["arrow#persistent"]?e.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},R=B[k],W=B[L],$=oe(0,A[x],H[x]),z=v?A[x]/2-I-$-R-O:P-$-R-O,q=v?-A[x]/2+I+$+W+O:j+$+W+O,F=e.elements.arrow&&te(e.elements.arrow),U=F?"y"===y?F.clientTop||0:F.clientLeft||0:0,V=e.modifiersData.offset?e.modifiersData.offset[e.placement][y]:0,K=E[y]+z-V-U,X=E[y]+q-V;if(o){var Y=oe(f?ne(S,K):S,D,f?ie(N,X):N);E[y]=Y,C[y]=Y-D}if(a){var 
Q="x"===y?mt:bt,G="x"===y?gt:_t,Z=E[w],J=Z+g[Q],tt=Z-g[G],et=oe(f?ne(J,K):J,Z,f?ie(tt,X):tt);E[w]=et,C[w]=et-Z}}e.modifiersData[n]=C}},requiresIfExists:["offset"]};function Me(t,e,i){void 0===i&&(i=!1);var n=zt(e);zt(e)&&function(t){var e=t.getBoundingClientRect();e.width,t.offsetWidth,e.height,t.offsetHeight}(e);var s,o,r=Gt(e),a=Vt(t),l={scrollLeft:0,scrollTop:0},c={x:0,y:0};return(n||!n&&!i)&&(("body"!==Rt(e)||we(r))&&(l=(s=e)!==Wt(s)&&zt(s)?{scrollLeft:(o=s).scrollLeft,scrollTop:o.scrollTop}:ve(s)),zt(e)?((c=Vt(e)).x+=e.clientLeft,c.y+=e.clientTop):r&&(c.x=ye(r))),{x:a.left+l.scrollLeft-c.x,y:a.top+l.scrollTop-c.y,width:a.width,height:a.height}}function He(t){var e=new Map,i=new Set,n=[];function s(t){i.add(t.name),[].concat(t.requires||[],t.requiresIfExists||[]).forEach((function(t){if(!i.has(t)){var n=e.get(t);n&&s(n)}})),n.push(t)}return t.forEach((function(t){e.set(t.name,t)})),t.forEach((function(t){i.has(t.name)||s(t)})),n}var Be={placement:"bottom",modifiers:[],strategy:"absolute"};function Re(){for(var t=arguments.length,e=new Array(t),i=0;ij.on(t,"mouseover",d))),this._element.focus(),this._element.setAttribute("aria-expanded",!0),this._menu.classList.add(Je),this._element.classList.add(Je),j.trigger(this._element,"shown.bs.dropdown",t)}hide(){if(c(this._element)||!this._isShown(this._menu))return;const t={relatedTarget:this._element};this._completeHide(t)}dispose(){this._popper&&this._popper.destroy(),super.dispose()}update(){this._inNavbar=this._detectNavbar(),this._popper&&this._popper.update()}_completeHide(t){j.trigger(this._element,"hide.bs.dropdown",t).defaultPrevented||("ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._popper&&this._popper.destroy(),this._menu.classList.remove(Je),this._element.classList.remove(Je),this._element.setAttribute("aria-expanded","false"),U.removeDataAttribute(this._menu,"popper"),j.trigger(this._element,"hidden.bs.dropdown",t))}_getConfig(t){if(t={...this.constructor.Default,...U.getDataAttributes(this._element),...t},a(Ue,t,this.constructor.DefaultType),"object"==typeof t.reference&&!o(t.reference)&&"function"!=typeof t.reference.getBoundingClientRect)throw new TypeError(`${Ue.toUpperCase()}: Option "reference" provided type "object" without a required "getBoundingClientRect" method.`);return t}_createPopper(t){if(void 0===Fe)throw new TypeError("Bootstrap's dropdowns require Popper (https://popper.js.org)");let e=this._element;"parent"===this._config.reference?e=t:o(this._config.reference)?e=r(this._config.reference):"object"==typeof this._config.reference&&(e=this._config.reference);const i=this._getPopperConfig(),n=i.modifiers.find((t=>"applyStyles"===t.name&&!1===t.enabled));this._popper=qe(e,this._menu,i),n&&U.setDataAttribute(this._menu,"popper","static")}_isShown(t=this._element){return t.classList.contains(Je)}_getMenuElement(){return V.next(this._element,ei)[0]}_getPlacement(){const t=this._element.parentNode;if(t.classList.contains("dropend"))return ri;if(t.classList.contains("dropstart"))return ai;const e="end"===getComputedStyle(this._menu).getPropertyValue("--bs-position").trim();return t.classList.contains("dropup")?e?ni:ii:e?oi:si}_detectNavbar(){return null!==this._element.closest(".navbar")}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_getPopperConfig(){const 
t={placement:this._getPlacement(),modifiers:[{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"offset",options:{offset:this._getOffset()}}]};return"static"===this._config.display&&(t.modifiers=[{name:"applyStyles",enabled:!1}]),{...t,..."function"==typeof this._config.popperConfig?this._config.popperConfig(t):this._config.popperConfig}}_selectMenuItem({key:t,target:e}){const i=V.find(".dropdown-menu .dropdown-item:not(.disabled):not(:disabled)",this._menu).filter(l);i.length&&v(i,e,t===Ye,!i.includes(e)).focus()}static jQueryInterface(t){return this.each((function(){const e=hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}static clearMenus(t){if(t&&(2===t.button||"keyup"===t.type&&"Tab"!==t.key))return;const e=V.find(ti);for(let i=0,n=e.length;ie+t)),this._setElementAttributes(di,"paddingRight",(e=>e+t)),this._setElementAttributes(ui,"marginRight",(e=>e-t))}_disableOverFlow(){this._saveInitialAttribute(this._element,"overflow"),this._element.style.overflow="hidden"}_setElementAttributes(t,e,i){const n=this.getWidth();this._applyManipulationCallback(t,(t=>{if(t!==this._element&&window.innerWidth>t.clientWidth+n)return;this._saveInitialAttribute(t,e);const s=window.getComputedStyle(t)[e];t.style[e]=`${i(Number.parseFloat(s))}px`}))}reset(){this._resetElementAttributes(this._element,"overflow"),this._resetElementAttributes(this._element,"paddingRight"),this._resetElementAttributes(di,"paddingRight"),this._resetElementAttributes(ui,"marginRight")}_saveInitialAttribute(t,e){const i=t.style[e];i&&U.setDataAttribute(t,e,i)}_resetElementAttributes(t,e){this._applyManipulationCallback(t,(t=>{const i=U.getDataAttribute(t,e);void 0===i?t.style.removeProperty(e):(U.removeDataAttribute(t,e),t.style[e]=i)}))}_applyManipulationCallback(t,e){o(t)?e(t):V.find(t,this._element).forEach(e)}isOverflowing(){return this.getWidth()>0}}const pi={className:"modal-backdrop",isVisible:!0,isAnimated:!1,rootElement:"body",clickCallback:null},mi={className:"string",isVisible:"boolean",isAnimated:"boolean",rootElement:"(element|string)",clickCallback:"(function|null)"},gi="show",_i="mousedown.bs.backdrop";class bi{constructor(t){this._config=this._getConfig(t),this._isAppended=!1,this._element=null}show(t){this._config.isVisible?(this._append(),this._config.isAnimated&&u(this._getElement()),this._getElement().classList.add(gi),this._emulateAnimation((()=>{_(t)}))):_(t)}hide(t){this._config.isVisible?(this._getElement().classList.remove(gi),this._emulateAnimation((()=>{this.dispose(),_(t)}))):_(t)}_getElement(){if(!this._element){const t=document.createElement("div");t.className=this._config.className,this._config.isAnimated&&t.classList.add("fade"),this._element=t}return this._element}_getConfig(t){return(t={...pi,..."object"==typeof t?t:{}}).rootElement=r(t.rootElement),a("backdrop",t,mi),t}_append(){this._isAppended||(this._config.rootElement.append(this._getElement()),j.on(this._getElement(),_i,(()=>{_(this._config.clickCallback)})),this._isAppended=!0)}dispose(){this._isAppended&&(j.off(this._element,_i),this._element.remove(),this._isAppended=!1)}_emulateAnimation(t){b(t,this._getElement(),this._config.isAnimated)}}const vi={trapElement:null,autofocus:!0},yi={trapElement:"element",autofocus:"boolean"},wi=".bs.focustrap",Ei="backward";class 
Ai{constructor(t){this._config=this._getConfig(t),this._isActive=!1,this._lastTabNavDirection=null}activate(){const{trapElement:t,autofocus:e}=this._config;this._isActive||(e&&t.focus(),j.off(document,wi),j.on(document,"focusin.bs.focustrap",(t=>this._handleFocusin(t))),j.on(document,"keydown.tab.bs.focustrap",(t=>this._handleKeydown(t))),this._isActive=!0)}deactivate(){this._isActive&&(this._isActive=!1,j.off(document,wi))}_handleFocusin(t){const{target:e}=t,{trapElement:i}=this._config;if(e===document||e===i||i.contains(e))return;const n=V.focusableChildren(i);0===n.length?i.focus():this._lastTabNavDirection===Ei?n[n.length-1].focus():n[0].focus()}_handleKeydown(t){"Tab"===t.key&&(this._lastTabNavDirection=t.shiftKey?Ei:"forward")}_getConfig(t){return t={...vi,..."object"==typeof t?t:{}},a("focustrap",t,yi),t}}const Ti="modal",Oi="Escape",Ci={backdrop:!0,keyboard:!0,focus:!0},ki={backdrop:"(boolean|string)",keyboard:"boolean",focus:"boolean"},Li="hidden.bs.modal",xi="show.bs.modal",Di="resize.bs.modal",Si="click.dismiss.bs.modal",Ni="keydown.dismiss.bs.modal",Ii="mousedown.dismiss.bs.modal",Pi="modal-open",ji="show",Mi="modal-static";class Hi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._dialog=V.findOne(".modal-dialog",this._element),this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._isShown=!1,this._ignoreBackdropClick=!1,this._isTransitioning=!1,this._scrollBar=new fi}static get Default(){return Ci}static get NAME(){return Ti}toggle(t){return this._isShown?this.hide():this.show(t)}show(t){this._isShown||this._isTransitioning||j.trigger(this._element,xi,{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._isAnimated()&&(this._isTransitioning=!0),this._scrollBar.hide(),document.body.classList.add(Pi),this._adjustDialog(),this._setEscapeEvent(),this._setResizeEvent(),j.on(this._dialog,Ii,(()=>{j.one(this._element,"mouseup.dismiss.bs.modal",(t=>{t.target===this._element&&(this._ignoreBackdropClick=!0)}))})),this._showBackdrop((()=>this._showElement(t))))}hide(){if(!this._isShown||this._isTransitioning)return;if(j.trigger(this._element,"hide.bs.modal").defaultPrevented)return;this._isShown=!1;const t=this._isAnimated();t&&(this._isTransitioning=!0),this._setEscapeEvent(),this._setResizeEvent(),this._focustrap.deactivate(),this._element.classList.remove(ji),j.off(this._element,Si),j.off(this._dialog,Ii),this._queueCallback((()=>this._hideModal()),this._element,t)}dispose(){[window,this._dialog].forEach((t=>j.off(t,".bs.modal"))),this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}handleUpdate(){this._adjustDialog()}_initializeBackDrop(){return new bi({isVisible:Boolean(this._config.backdrop),isAnimated:this._isAnimated()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_getConfig(t){return t={...Ci,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Ti,t,ki),t}_showElement(t){const 
e=this._isAnimated(),i=V.findOne(".modal-body",this._dialog);this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE||document.body.append(this._element),this._element.style.display="block",this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.scrollTop=0,i&&(i.scrollTop=0),e&&u(this._element),this._element.classList.add(ji),this._queueCallback((()=>{this._config.focus&&this._focustrap.activate(),this._isTransitioning=!1,j.trigger(this._element,"shown.bs.modal",{relatedTarget:t})}),this._dialog,e)}_setEscapeEvent(){this._isShown?j.on(this._element,Ni,(t=>{this._config.keyboard&&t.key===Oi?(t.preventDefault(),this.hide()):this._config.keyboard||t.key!==Oi||this._triggerBackdropTransition()})):j.off(this._element,Ni)}_setResizeEvent(){this._isShown?j.on(window,Di,(()=>this._adjustDialog())):j.off(window,Di)}_hideModal(){this._element.style.display="none",this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._isTransitioning=!1,this._backdrop.hide((()=>{document.body.classList.remove(Pi),this._resetAdjustments(),this._scrollBar.reset(),j.trigger(this._element,Li)}))}_showBackdrop(t){j.on(this._element,Si,(t=>{this._ignoreBackdropClick?this._ignoreBackdropClick=!1:t.target===t.currentTarget&&(!0===this._config.backdrop?this.hide():"static"===this._config.backdrop&&this._triggerBackdropTransition())})),this._backdrop.show(t)}_isAnimated(){return this._element.classList.contains("fade")}_triggerBackdropTransition(){if(j.trigger(this._element,"hidePrevented.bs.modal").defaultPrevented)return;const{classList:t,scrollHeight:e,style:i}=this._element,n=e>document.documentElement.clientHeight;!n&&"hidden"===i.overflowY||t.contains(Mi)||(n||(i.overflowY="hidden"),t.add(Mi),this._queueCallback((()=>{t.remove(Mi),n||this._queueCallback((()=>{i.overflowY=""}),this._dialog)}),this._dialog),this._element.focus())}_adjustDialog(){const t=this._element.scrollHeight>document.documentElement.clientHeight,e=this._scrollBar.getWidth(),i=e>0;(!i&&t&&!m()||i&&!t&&m())&&(this._element.style.paddingLeft=`${e}px`),(i&&!t&&!m()||!i&&t&&m())&&(this._element.style.paddingRight=`${e}px`)}_resetAdjustments(){this._element.style.paddingLeft="",this._element.style.paddingRight=""}static jQueryInterface(t,e){return this.each((function(){const i=Hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t](e)}}))}}j.on(document,"click.bs.modal.data-api",'[data-bs-toggle="modal"]',(function(t){const e=n(this);["A","AREA"].includes(this.tagName)&&t.preventDefault(),j.one(e,xi,(t=>{t.defaultPrevented||j.one(e,Li,(()=>{l(this)&&this.focus()}))}));const i=V.findOne(".modal.show");i&&Hi.getInstance(i).hide(),Hi.getOrCreateInstance(e).toggle(this)})),R(Hi),g(Hi);const Bi="offcanvas",Ri={backdrop:!0,keyboard:!0,scroll:!1},Wi={backdrop:"boolean",keyboard:"boolean",scroll:"boolean"},$i="show",zi=".offcanvas.show",qi="hidden.bs.offcanvas";class Fi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._isShown=!1,this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._addEventListeners()}static get NAME(){return Bi}static get Default(){return Ri}toggle(t){return 
this._isShown?this.hide():this.show(t)}show(t){this._isShown||j.trigger(this._element,"show.bs.offcanvas",{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._element.style.visibility="visible",this._backdrop.show(),this._config.scroll||(new fi).hide(),this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.classList.add($i),this._queueCallback((()=>{this._config.scroll||this._focustrap.activate(),j.trigger(this._element,"shown.bs.offcanvas",{relatedTarget:t})}),this._element,!0))}hide(){this._isShown&&(j.trigger(this._element,"hide.bs.offcanvas").defaultPrevented||(this._focustrap.deactivate(),this._element.blur(),this._isShown=!1,this._element.classList.remove($i),this._backdrop.hide(),this._queueCallback((()=>{this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._element.style.visibility="hidden",this._config.scroll||(new fi).reset(),j.trigger(this._element,qi)}),this._element,!0)))}dispose(){this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}_getConfig(t){return t={...Ri,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Bi,t,Wi),t}_initializeBackDrop(){return new bi({className:"offcanvas-backdrop",isVisible:this._config.backdrop,isAnimated:!0,rootElement:this._element.parentNode,clickCallback:()=>this.hide()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_addEventListeners(){j.on(this._element,"keydown.dismiss.bs.offcanvas",(t=>{this._config.keyboard&&"Escape"===t.key&&this.hide()}))}static jQueryInterface(t){return this.each((function(){const e=Fi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}j.on(document,"click.bs.offcanvas.data-api",'[data-bs-toggle="offcanvas"]',(function(t){const e=n(this);if(["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this))return;j.one(e,qi,(()=>{l(this)&&this.focus()}));const i=V.findOne(zi);i&&i!==e&&Fi.getInstance(i).hide(),Fi.getOrCreateInstance(e).toggle(this)})),j.on(window,"load.bs.offcanvas.data-api",(()=>V.find(zi).forEach((t=>Fi.getOrCreateInstance(t).show())))),R(Fi),g(Fi);const Ui=new Set(["background","cite","href","itemtype","longdesc","poster","src","xlink:href"]),Vi=/^(?:(?:https?|mailto|ftp|tel|file|sms):|[^#&/:?]*(?:[#/?]|$))/i,Ki=/^data:(?:image\/(?:bmp|gif|jpeg|jpg|png|tiff|webp)|video\/(?:mpeg|mp4|ogg|webm)|audio\/(?:mp3|oga|ogg|opus));base64,[\d+/a-z]+=*$/i,Xi=(t,e)=>{const i=t.nodeName.toLowerCase();if(e.includes(i))return!Ui.has(i)||Boolean(Vi.test(t.nodeValue)||Ki.test(t.nodeValue));const n=e.filter((t=>t instanceof RegExp));for(let t=0,e=n.length;t{Xi(t,r)||i.removeAttribute(t.nodeName)}))}return n.body.innerHTML}const Qi="tooltip",Gi=new Set(["sanitize","allowList","sanitizeFn"]),Zi={animation:"boolean",template:"string",title:"(string|element|function)",trigger:"string",delay:"(number|object)",html:"boolean",selector:"(string|boolean)",placement:"(string|function)",offset:"(array|string|function)",container:"(string|element|boolean)",fallbackPlacements:"array",boundary:"(string|element)",customClass:"(string|function)",sanitize:"boolean",sanitizeFn:"(null|function)",allowList:"object",popperConfig:"(null|object|function)"},Ji={AUTO:"auto",TOP:"top",RIGHT:m()?"left":"right",BOTTOM:"bottom",LEFT:m()?"right":"left"},tn={animation:!0,template:'',trigger:"hover 
focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:[0,0],container:!1,fallbackPlacements:["top","right","bottom","left"],boundary:"clippingParents",customClass:"",sanitize:!0,sanitizeFn:null,allowList:{"*":["class","dir","id","lang","role",/^aria-[\w-]*$/i],a:["target","href","title","rel"],area:[],b:[],br:[],col:[],code:[],div:[],em:[],hr:[],h1:[],h2:[],h3:[],h4:[],h5:[],h6:[],i:[],img:["src","srcset","alt","title","width","height"],li:[],ol:[],p:[],pre:[],s:[],small:[],span:[],sub:[],sup:[],strong:[],u:[],ul:[]},popperConfig:null},en={HIDE:"hide.bs.tooltip",HIDDEN:"hidden.bs.tooltip",SHOW:"show.bs.tooltip",SHOWN:"shown.bs.tooltip",INSERTED:"inserted.bs.tooltip",CLICK:"click.bs.tooltip",FOCUSIN:"focusin.bs.tooltip",FOCUSOUT:"focusout.bs.tooltip",MOUSEENTER:"mouseenter.bs.tooltip",MOUSELEAVE:"mouseleave.bs.tooltip"},nn="fade",sn="show",on="show",rn="out",an=".tooltip-inner",ln=".modal",cn="hide.bs.modal",hn="hover",dn="focus";class un extends B{constructor(t,e){if(void 0===Fe)throw new TypeError("Bootstrap's tooltips require Popper (https://popper.js.org)");super(t),this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this._config=this._getConfig(e),this.tip=null,this._setListeners()}static get Default(){return tn}static get NAME(){return Qi}static get Event(){return en}static get DefaultType(){return Zi}enable(){this._isEnabled=!0}disable(){this._isEnabled=!1}toggleEnabled(){this._isEnabled=!this._isEnabled}toggle(t){if(this._isEnabled)if(t){const e=this._initializeOnDelegatedTarget(t);e._activeTrigger.click=!e._activeTrigger.click,e._isWithActiveTrigger()?e._enter(null,e):e._leave(null,e)}else{if(this.getTipElement().classList.contains(sn))return void this._leave(null,this);this._enter(null,this)}}dispose(){clearTimeout(this._timeout),j.off(this._element.closest(ln),cn,this._hideModalHandler),this.tip&&this.tip.remove(),this._disposePopper(),super.dispose()}show(){if("none"===this._element.style.display)throw new Error("Please use show on visible elements");if(!this.isWithContent()||!this._isEnabled)return;const t=j.trigger(this._element,this.constructor.Event.SHOW),e=h(this._element),i=null===e?this._element.ownerDocument.documentElement.contains(this._element):e.contains(this._element);if(t.defaultPrevented||!i)return;"tooltip"===this.constructor.NAME&&this.tip&&this.getTitle()!==this.tip.querySelector(an).innerHTML&&(this._disposePopper(),this.tip.remove(),this.tip=null);const n=this.getTipElement(),s=(t=>{do{t+=Math.floor(1e6*Math.random())}while(document.getElementById(t));return t})(this.constructor.NAME);n.setAttribute("id",s),this._element.setAttribute("aria-describedby",s),this._config.animation&&n.classList.add(nn);const o="function"==typeof this._config.placement?this._config.placement.call(this,n,this._element):this._config.placement,r=this._getAttachment(o);this._addAttachmentClass(r);const{container:a}=this._config;H.set(n,this.constructor.DATA_KEY,this),this._element.ownerDocument.documentElement.contains(this.tip)||(a.append(n),j.trigger(this._element,this.constructor.Event.INSERTED)),this._popper?this._popper.update():this._popper=qe(this._element,n,this._getPopperConfig(r)),n.classList.add(sn);const l=this._resolvePossibleFunction(this._config.customClass);l&&n.classList.add(...l.split(" ")),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>{j.on(t,"mouseover",d)}));const c=this.tip.classList.contains(nn);this._queueCallback((()=>{const 
t=this._hoverState;this._hoverState=null,j.trigger(this._element,this.constructor.Event.SHOWN),t===rn&&this._leave(null,this)}),this.tip,c)}hide(){if(!this._popper)return;const t=this.getTipElement();if(j.trigger(this._element,this.constructor.Event.HIDE).defaultPrevented)return;t.classList.remove(sn),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._activeTrigger.click=!1,this._activeTrigger.focus=!1,this._activeTrigger.hover=!1;const e=this.tip.classList.contains(nn);this._queueCallback((()=>{this._isWithActiveTrigger()||(this._hoverState!==on&&t.remove(),this._cleanTipClass(),this._element.removeAttribute("aria-describedby"),j.trigger(this._element,this.constructor.Event.HIDDEN),this._disposePopper())}),this.tip,e),this._hoverState=""}update(){null!==this._popper&&this._popper.update()}isWithContent(){return Boolean(this.getTitle())}getTipElement(){if(this.tip)return this.tip;const t=document.createElement("div");t.innerHTML=this._config.template;const e=t.children[0];return this.setContent(e),e.classList.remove(nn,sn),this.tip=e,this.tip}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),an)}_sanitizeAndSetContent(t,e,i){const n=V.findOne(i,t);e||!n?this.setElementContent(n,e):n.remove()}setElementContent(t,e){if(null!==t)return o(e)?(e=r(e),void(this._config.html?e.parentNode!==t&&(t.innerHTML="",t.append(e)):t.textContent=e.textContent)):void(this._config.html?(this._config.sanitize&&(e=Yi(e,this._config.allowList,this._config.sanitizeFn)),t.innerHTML=e):t.textContent=e)}getTitle(){const t=this._element.getAttribute("data-bs-original-title")||this._config.title;return this._resolvePossibleFunction(t)}updateAttachment(t){return"right"===t?"end":"left"===t?"start":t}_initializeOnDelegatedTarget(t,e){return e||this.constructor.getOrCreateInstance(t.delegateTarget,this._getDelegateConfig())}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_resolvePossibleFunction(t){return"function"==typeof t?t.call(this._element):t}_getPopperConfig(t){const e={placement:t,modifiers:[{name:"flip",options:{fallbackPlacements:this._config.fallbackPlacements}},{name:"offset",options:{offset:this._getOffset()}},{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"arrow",options:{element:`.${this.constructor.NAME}-arrow`}},{name:"onChange",enabled:!0,phase:"afterWrite",fn:t=>this._handlePopperPlacementChange(t)}],onFirstUpdate:t=>{t.options.placement!==t.placement&&this._handlePopperPlacementChange(t)}};return{...e,..."function"==typeof this._config.popperConfig?this._config.popperConfig(e):this._config.popperConfig}}_addAttachmentClass(t){this.getTipElement().classList.add(`${this._getBasicClassPrefix()}-${this.updateAttachment(t)}`)}_getAttachment(t){return Ji[t.toUpperCase()]}_setListeners(){this._config.trigger.split(" ").forEach((t=>{if("click"===t)j.on(this._element,this.constructor.Event.CLICK,this._config.selector,(t=>this.toggle(t)));else if("manual"!==t){const 
e=t===hn?this.constructor.Event.MOUSEENTER:this.constructor.Event.FOCUSIN,i=t===hn?this.constructor.Event.MOUSELEAVE:this.constructor.Event.FOCUSOUT;j.on(this._element,e,this._config.selector,(t=>this._enter(t))),j.on(this._element,i,this._config.selector,(t=>this._leave(t)))}})),this._hideModalHandler=()=>{this._element&&this.hide()},j.on(this._element.closest(ln),cn,this._hideModalHandler),this._config.selector?this._config={...this._config,trigger:"manual",selector:""}:this._fixTitle()}_fixTitle(){const t=this._element.getAttribute("title"),e=typeof this._element.getAttribute("data-bs-original-title");(t||"string"!==e)&&(this._element.setAttribute("data-bs-original-title",t||""),!t||this._element.getAttribute("aria-label")||this._element.textContent||this._element.setAttribute("aria-label",t),this._element.setAttribute("title",""))}_enter(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusin"===t.type?dn:hn]=!0),e.getTipElement().classList.contains(sn)||e._hoverState===on?e._hoverState=on:(clearTimeout(e._timeout),e._hoverState=on,e._config.delay&&e._config.delay.show?e._timeout=setTimeout((()=>{e._hoverState===on&&e.show()}),e._config.delay.show):e.show())}_leave(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusout"===t.type?dn:hn]=e._element.contains(t.relatedTarget)),e._isWithActiveTrigger()||(clearTimeout(e._timeout),e._hoverState=rn,e._config.delay&&e._config.delay.hide?e._timeout=setTimeout((()=>{e._hoverState===rn&&e.hide()}),e._config.delay.hide):e.hide())}_isWithActiveTrigger(){for(const t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1}_getConfig(t){const e=U.getDataAttributes(this._element);return Object.keys(e).forEach((t=>{Gi.has(t)&&delete e[t]})),(t={...this.constructor.Default,...e,..."object"==typeof t&&t?t:{}}).container=!1===t.container?document.body:r(t.container),"number"==typeof t.delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),a(Qi,t,this.constructor.DefaultType),t.sanitize&&(t.template=Yi(t.template,t.allowList,t.sanitizeFn)),t}_getDelegateConfig(){const t={};for(const e in this._config)this.constructor.Default[e]!==this._config[e]&&(t[e]=this._config[e]);return t}_cleanTipClass(){const t=this.getTipElement(),e=new RegExp(`(^|\\s)${this._getBasicClassPrefix()}\\S+`,"g"),i=t.getAttribute("class").match(e);null!==i&&i.length>0&&i.map((t=>t.trim())).forEach((e=>t.classList.remove(e)))}_getBasicClassPrefix(){return"bs-tooltip"}_handlePopperPlacementChange(t){const{state:e}=t;e&&(this.tip=e.elements.popper,this._cleanTipClass(),this._addAttachmentClass(this._getAttachment(e.placement)))}_disposePopper(){this._popper&&(this._popper.destroy(),this._popper=null)}static jQueryInterface(t){return this.each((function(){const e=un.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(un);const fn={...un.Default,placement:"right",offset:[0,8],trigger:"click",content:"",template:''},pn={...un.DefaultType,content:"(string|element|function)"},mn={HIDE:"hide.bs.popover",HIDDEN:"hidden.bs.popover",SHOW:"show.bs.popover",SHOWN:"shown.bs.popover",INSERTED:"inserted.bs.popover",CLICK:"click.bs.popover",FOCUSIN:"focusin.bs.popover",FOCUSOUT:"focusout.bs.popover",MOUSEENTER:"mouseenter.bs.popover",MOUSELEAVE:"mouseleave.bs.popover"};class gn extends un{static get Default(){return fn}static get NAME(){return"popover"}static get 
Event(){return mn}static get DefaultType(){return pn}isWithContent(){return this.getTitle()||this._getContent()}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),".popover-header"),this._sanitizeAndSetContent(t,this._getContent(),".popover-body")}_getContent(){return this._resolvePossibleFunction(this._config.content)}_getBasicClassPrefix(){return"bs-popover"}static jQueryInterface(t){return this.each((function(){const e=gn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(gn);const _n="scrollspy",bn={offset:10,method:"auto",target:""},vn={offset:"number",method:"string",target:"(string|element)"},yn="active",wn=".nav-link, .list-group-item, .dropdown-item",En="position";class An extends B{constructor(t,e){super(t),this._scrollElement="BODY"===this._element.tagName?window:this._element,this._config=this._getConfig(e),this._offsets=[],this._targets=[],this._activeTarget=null,this._scrollHeight=0,j.on(this._scrollElement,"scroll.bs.scrollspy",(()=>this._process())),this.refresh(),this._process()}static get Default(){return bn}static get NAME(){return _n}refresh(){const t=this._scrollElement===this._scrollElement.window?"offset":En,e="auto"===this._config.method?t:this._config.method,n=e===En?this._getScrollTop():0;this._offsets=[],this._targets=[],this._scrollHeight=this._getScrollHeight(),V.find(wn,this._config.target).map((t=>{const s=i(t),o=s?V.findOne(s):null;if(o){const t=o.getBoundingClientRect();if(t.width||t.height)return[U[e](o).top+n,s]}return null})).filter((t=>t)).sort(((t,e)=>t[0]-e[0])).forEach((t=>{this._offsets.push(t[0]),this._targets.push(t[1])}))}dispose(){j.off(this._scrollElement,".bs.scrollspy"),super.dispose()}_getConfig(t){return(t={...bn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}}).target=r(t.target)||document.documentElement,a(_n,t,vn),t}_getScrollTop(){return this._scrollElement===window?this._scrollElement.pageYOffset:this._scrollElement.scrollTop}_getScrollHeight(){return this._scrollElement.scrollHeight||Math.max(document.body.scrollHeight,document.documentElement.scrollHeight)}_getOffsetHeight(){return this._scrollElement===window?window.innerHeight:this._scrollElement.getBoundingClientRect().height}_process(){const t=this._getScrollTop()+this._config.offset,e=this._getScrollHeight(),i=this._config.offset+e-this._getOffsetHeight();if(this._scrollHeight!==e&&this.refresh(),t>=i){const t=this._targets[this._targets.length-1];this._activeTarget!==t&&this._activate(t)}else{if(this._activeTarget&&t0)return this._activeTarget=null,void this._clear();for(let e=this._offsets.length;e--;)this._activeTarget!==this._targets[e]&&t>=this._offsets[e]&&(void 0===this._offsets[e+1]||t`${e}[data-bs-target="${t}"],${e}[href="${t}"]`)),i=V.findOne(e.join(","),this._config.target);i.classList.add(yn),i.classList.contains("dropdown-item")?V.findOne(".dropdown-toggle",i.closest(".dropdown")).classList.add(yn):V.parents(i,".nav, .list-group").forEach((t=>{V.prev(t,".nav-link, .list-group-item").forEach((t=>t.classList.add(yn))),V.prev(t,".nav-item").forEach((t=>{V.children(t,".nav-link").forEach((t=>t.classList.add(yn)))}))})),j.trigger(this._scrollElement,"activate.bs.scrollspy",{relatedTarget:t})}_clear(){V.find(wn,this._config.target).filter((t=>t.classList.contains(yn))).forEach((t=>t.classList.remove(yn)))}static jQueryInterface(t){return this.each((function(){const e=An.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method 
named "${t}"`);e[t]()}}))}}j.on(window,"load.bs.scrollspy.data-api",(()=>{V.find('[data-bs-spy="scroll"]').forEach((t=>new An(t)))})),g(An);const Tn="active",On="fade",Cn="show",kn=".active",Ln=":scope > li > .active";class xn extends B{static get NAME(){return"tab"}show(){if(this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE&&this._element.classList.contains(Tn))return;let t;const e=n(this._element),i=this._element.closest(".nav, .list-group");if(i){const e="UL"===i.nodeName||"OL"===i.nodeName?Ln:kn;t=V.find(e,i),t=t[t.length-1]}const s=t?j.trigger(t,"hide.bs.tab",{relatedTarget:this._element}):null;if(j.trigger(this._element,"show.bs.tab",{relatedTarget:t}).defaultPrevented||null!==s&&s.defaultPrevented)return;this._activate(this._element,i);const o=()=>{j.trigger(t,"hidden.bs.tab",{relatedTarget:this._element}),j.trigger(this._element,"shown.bs.tab",{relatedTarget:t})};e?this._activate(e,e.parentNode,o):o()}_activate(t,e,i){const n=(!e||"UL"!==e.nodeName&&"OL"!==e.nodeName?V.children(e,kn):V.find(Ln,e))[0],s=i&&n&&n.classList.contains(On),o=()=>this._transitionComplete(t,n,i);n&&s?(n.classList.remove(Cn),this._queueCallback(o,t,!0)):o()}_transitionComplete(t,e,i){if(e){e.classList.remove(Tn);const t=V.findOne(":scope > .dropdown-menu .active",e.parentNode);t&&t.classList.remove(Tn),"tab"===e.getAttribute("role")&&e.setAttribute("aria-selected",!1)}t.classList.add(Tn),"tab"===t.getAttribute("role")&&t.setAttribute("aria-selected",!0),u(t),t.classList.contains(On)&&t.classList.add(Cn);let n=t.parentNode;if(n&&"LI"===n.nodeName&&(n=n.parentNode),n&&n.classList.contains("dropdown-menu")){const e=t.closest(".dropdown");e&&V.find(".dropdown-toggle",e).forEach((t=>t.classList.add(Tn))),t.setAttribute("aria-expanded",!0)}i&&i()}static jQueryInterface(t){return this.each((function(){const e=xn.getOrCreateInstance(this);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}j.on(document,"click.bs.tab.data-api",'[data-bs-toggle="tab"], [data-bs-toggle="pill"], [data-bs-toggle="list"]',(function(t){["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this)||xn.getOrCreateInstance(this).show()})),g(xn);const Dn="toast",Sn="hide",Nn="show",In="showing",Pn={animation:"boolean",autohide:"boolean",delay:"number"},jn={animation:!0,autohide:!0,delay:5e3};class Mn extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._timeout=null,this._hasMouseInteraction=!1,this._hasKeyboardInteraction=!1,this._setListeners()}static get DefaultType(){return Pn}static get Default(){return jn}static get NAME(){return Dn}show(){j.trigger(this._element,"show.bs.toast").defaultPrevented||(this._clearTimeout(),this._config.animation&&this._element.classList.add("fade"),this._element.classList.remove(Sn),u(this._element),this._element.classList.add(Nn),this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.remove(In),j.trigger(this._element,"shown.bs.toast"),this._maybeScheduleHide()}),this._element,this._config.animation))}hide(){this._element.classList.contains(Nn)&&(j.trigger(this._element,"hide.bs.toast").defaultPrevented||(this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.add(Sn),this._element.classList.remove(In),this._element.classList.remove(Nn),j.trigger(this._element,"hidden.bs.toast")}),this._element,this._config.animation)))}dispose(){this._clearTimeout(),this._element.classList.contains(Nn)&&this._element.classList.remove(Nn),super.dispose()}_getConfig(t){return 
t={...jn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}},a(Dn,t,this.constructor.DefaultType),t}_maybeScheduleHide(){this._config.autohide&&(this._hasMouseInteraction||this._hasKeyboardInteraction||(this._timeout=setTimeout((()=>{this.hide()}),this._config.delay)))}_onInteraction(t,e){switch(t.type){case"mouseover":case"mouseout":this._hasMouseInteraction=e;break;case"focusin":case"focusout":this._hasKeyboardInteraction=e}if(e)return void this._clearTimeout();const i=t.relatedTarget;this._element===i||this._element.contains(i)||this._maybeScheduleHide()}_setListeners(){j.on(this._element,"mouseover.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"mouseout.bs.toast",(t=>this._onInteraction(t,!1))),j.on(this._element,"focusin.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"focusout.bs.toast",(t=>this._onInteraction(t,!1)))}_clearTimeout(){clearTimeout(this._timeout),this._timeout=null}static jQueryInterface(t){return this.each((function(){const e=Mn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}return R(Mn),g(Mn),{Alert:W,Button:z,Carousel:st,Collapse:pt,Dropdown:hi,Modal:Hi,Offcanvas:Fi,Popover:gn,ScrollSpy:An,Tab:xn,Toast:Mn,Tooltip:un}})); +//# sourceMappingURL=bootstrap.bundle.min.js.map \ No newline at end of file diff --git a/python-book/site_libs/clipboard/clipboard.min.js b/python-book/site_libs/clipboard/clipboard.min.js new file mode 100644 index 00000000..1103f811 --- /dev/null +++ b/python-book/site_libs/clipboard/clipboard.min.js @@ -0,0 +1,7 @@ +/*! + * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */ +!function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e():"function"==typeof define&&define.amd?define([],e):"object"==typeof exports?exports.ClipboardJS=e():t.ClipboardJS=e()}(this,function(){return n={686:function(t,e,n){"use strict";n.d(e,{default:function(){return b}});var e=n(279),i=n.n(e),e=n(370),u=n.n(e),e=n(817),r=n.n(e);function c(t){try{return document.execCommand(t)}catch(t){return}}var a=function(t){t=r()(t);return c("cut"),t};function o(t,e){var n,o,t=(n=t,o="rtl"===document.documentElement.getAttribute("dir"),(t=document.createElement("textarea")).style.fontSize="12pt",t.style.border="0",t.style.padding="0",t.style.margin="0",t.style.position="absolute",t.style[o?"right":"left"]="-9999px",o=window.pageYOffset||document.documentElement.scrollTop,t.style.top="".concat(o,"px"),t.setAttribute("readonly",""),t.value=n,t);return e.container.appendChild(t),e=r()(t),c("copy"),t.remove(),e}var f=function(t){var 
e=1.anchorjs-link,.anchorjs-link:focus{opacity:1}",u.sheet.cssRules.length),u.sheet.insertRule("[data-anchorjs-icon]::after{content:attr(data-anchorjs-icon)}",u.sheet.cssRules.length),u.sheet.insertRule('@font-face{font-family:anchorjs-icons;src:url(data:n/a;base64,AAEAAAALAIAAAwAwT1MvMg8yG2cAAAE4AAAAYGNtYXDp3gC3AAABpAAAAExnYXNwAAAAEAAAA9wAAAAIZ2x5ZlQCcfwAAAH4AAABCGhlYWQHFvHyAAAAvAAAADZoaGVhBnACFwAAAPQAAAAkaG10eASAADEAAAGYAAAADGxvY2EACACEAAAB8AAAAAhtYXhwAAYAVwAAARgAAAAgbmFtZQGOH9cAAAMAAAAAunBvc3QAAwAAAAADvAAAACAAAQAAAAEAAHzE2p9fDzz1AAkEAAAAAADRecUWAAAAANQA6R8AAAAAAoACwAAAAAgAAgAAAAAAAAABAAADwP/AAAACgAAA/9MCrQABAAAAAAAAAAAAAAAAAAAAAwABAAAAAwBVAAIAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAMCQAGQAAUAAAKZAswAAACPApkCzAAAAesAMwEJAAAAAAAAAAAAAAAAAAAAARAAAAAAAAAAAAAAAAAAAAAAQAAg//0DwP/AAEADwABAAAAAAQAAAAAAAAAAAAAAIAAAAAAAAAIAAAACgAAxAAAAAwAAAAMAAAAcAAEAAwAAABwAAwABAAAAHAAEADAAAAAIAAgAAgAAACDpy//9//8AAAAg6cv//f///+EWNwADAAEAAAAAAAAAAAAAAAAACACEAAEAAAAAAAAAAAAAAAAxAAACAAQARAKAAsAAKwBUAAABIiYnJjQ3NzY2MzIWFxYUBwcGIicmNDc3NjQnJiYjIgYHBwYUFxYUBwYGIwciJicmNDc3NjIXFhQHBwYUFxYWMzI2Nzc2NCcmNDc2MhcWFAcHBgYjARQGDAUtLXoWOR8fORYtLTgKGwoKCjgaGg0gEhIgDXoaGgkJBQwHdR85Fi0tOAobCgoKOBoaDSASEiANehoaCQkKGwotLXoWOR8BMwUFLYEuehYXFxYugC44CQkKGwo4GkoaDQ0NDXoaShoKGwoFBe8XFi6ALjgJCQobCjgaShoNDQ0NehpKGgobCgoKLYEuehYXAAAADACWAAEAAAAAAAEACAAAAAEAAAAAAAIAAwAIAAEAAAAAAAMACAAAAAEAAAAAAAQACAAAAAEAAAAAAAUAAQALAAEAAAAAAAYACAAAAAMAAQQJAAEAEAAMAAMAAQQJAAIABgAcAAMAAQQJAAMAEAAMAAMAAQQJAAQAEAAMAAMAAQQJAAUAAgAiAAMAAQQJAAYAEAAMYW5jaG9yanM0MDBAAGEAbgBjAGgAbwByAGoAcwA0ADAAMABAAAAAAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAH//wAP) format("truetype")}',u.sheet.cssRules.length)),u=document.querySelectorAll("[id]"),t=[].map.call(u,function(A){return A.id}),i=0;i\]./()*\\\n\t\b\v\u00A0]/g,"-").replace(/-{2,}/g,"-").substring(0,this.options.truncate).replace(/^-+|-+$/gm,"").toLowerCase()},this.hasAnchorJSLink=function(A){var e=A.firstChild&&-1<(" "+A.firstChild.className+" ").indexOf(" anchorjs-link "),A=A.lastChild&&-1<(" "+A.lastChild.className+" ").indexOf(" anchorjs-link ");return e||A||!1}}}); +// @license-end \ No newline at end of file diff --git a/python-book/site_libs/quarto-html/popper.min.js b/python-book/site_libs/quarto-html/popper.min.js new file mode 100644 index 00000000..2269d669 --- /dev/null +++ b/python-book/site_libs/quarto-html/popper.min.js @@ -0,0 +1,6 @@ +/** + * @popperjs/core v2.11.4 - MIT License + */ + +!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t((e="undefined"!=typeof globalThis?globalThis:e||self).Popper={})}(this,(function(e){"use strict";function t(e){if(null==e)return window;if("[object Window]"!==e.toString()){var t=e.ownerDocument;return t&&t.defaultView||window}return e}function n(e){return e instanceof t(e).Element||e instanceof Element}function r(e){return e instanceof t(e).HTMLElement||e instanceof HTMLElement}function o(e){return"undefined"!=typeof ShadowRoot&&(e instanceof t(e).ShadowRoot||e instanceof ShadowRoot)}var i=Math.max,a=Math.min,s=Math.round;function f(e,t){void 0===t&&(t=!1);var n=e.getBoundingClientRect(),o=1,i=1;if(r(e)&&t){var a=e.offsetHeight,f=e.offsetWidth;f>0&&(o=s(n.width)/f||1),a>0&&(i=s(n.height)/a||1)}return{width:n.width/o,height:n.height/i,top:n.top/i,right:n.right/o,bottom:n.bottom/i,left:n.left/o,x:n.left/o,y:n.top/i}}function c(e){var n=t(e);return{scrollLeft:n.pageXOffset,scrollTop:n.pageYOffset}}function p(e){return e?(e.nodeName||"").toLowerCase():null}function 
u(e){return((n(e)?e.ownerDocument:e.document)||window.document).documentElement}function l(e){return f(u(e)).left+c(e).scrollLeft}function d(e){return t(e).getComputedStyle(e)}function h(e){var t=d(e),n=t.overflow,r=t.overflowX,o=t.overflowY;return/auto|scroll|overlay|hidden/.test(n+o+r)}function m(e,n,o){void 0===o&&(o=!1);var i,a,d=r(n),m=r(n)&&function(e){var t=e.getBoundingClientRect(),n=s(t.width)/e.offsetWidth||1,r=s(t.height)/e.offsetHeight||1;return 1!==n||1!==r}(n),v=u(n),g=f(e,m),y={scrollLeft:0,scrollTop:0},b={x:0,y:0};return(d||!d&&!o)&&(("body"!==p(n)||h(v))&&(y=(i=n)!==t(i)&&r(i)?{scrollLeft:(a=i).scrollLeft,scrollTop:a.scrollTop}:c(i)),r(n)?((b=f(n,!0)).x+=n.clientLeft,b.y+=n.clientTop):v&&(b.x=l(v))),{x:g.left+y.scrollLeft-b.x,y:g.top+y.scrollTop-b.y,width:g.width,height:g.height}}function v(e){var t=f(e),n=e.offsetWidth,r=e.offsetHeight;return Math.abs(t.width-n)<=1&&(n=t.width),Math.abs(t.height-r)<=1&&(r=t.height),{x:e.offsetLeft,y:e.offsetTop,width:n,height:r}}function g(e){return"html"===p(e)?e:e.assignedSlot||e.parentNode||(o(e)?e.host:null)||u(e)}function y(e){return["html","body","#document"].indexOf(p(e))>=0?e.ownerDocument.body:r(e)&&h(e)?e:y(g(e))}function b(e,n){var r;void 0===n&&(n=[]);var o=y(e),i=o===(null==(r=e.ownerDocument)?void 0:r.body),a=t(o),s=i?[a].concat(a.visualViewport||[],h(o)?o:[]):o,f=n.concat(s);return i?f:f.concat(b(g(s)))}function x(e){return["table","td","th"].indexOf(p(e))>=0}function w(e){return r(e)&&"fixed"!==d(e).position?e.offsetParent:null}function O(e){for(var n=t(e),i=w(e);i&&x(i)&&"static"===d(i).position;)i=w(i);return i&&("html"===p(i)||"body"===p(i)&&"static"===d(i).position)?n:i||function(e){var t=-1!==navigator.userAgent.toLowerCase().indexOf("firefox");if(-1!==navigator.userAgent.indexOf("Trident")&&r(e)&&"fixed"===d(e).position)return null;var n=g(e);for(o(n)&&(n=n.host);r(n)&&["html","body"].indexOf(p(n))<0;){var i=d(n);if("none"!==i.transform||"none"!==i.perspective||"paint"===i.contain||-1!==["transform","perspective"].indexOf(i.willChange)||t&&"filter"===i.willChange||t&&i.filter&&"none"!==i.filter)return n;n=n.parentNode}return null}(e)||n}var j="top",E="bottom",D="right",A="left",L="auto",P=[j,E,D,A],M="start",k="end",W="viewport",B="popper",H=P.reduce((function(e,t){return e.concat([t+"-"+M,t+"-"+k])}),[]),T=[].concat(P,[L]).reduce((function(e,t){return e.concat([t,t+"-"+M,t+"-"+k])}),[]),R=["beforeRead","read","afterRead","beforeMain","main","afterMain","beforeWrite","write","afterWrite"];function S(e){var t=new Map,n=new Set,r=[];function o(e){n.add(e.name),[].concat(e.requires||[],e.requiresIfExists||[]).forEach((function(e){if(!n.has(e)){var r=t.get(e);r&&o(r)}})),r.push(e)}return e.forEach((function(e){t.set(e.name,e)})),e.forEach((function(e){n.has(e.name)||o(e)})),r}function C(e){return e.split("-")[0]}function q(e,t){var n=t.getRootNode&&t.getRootNode();if(e.contains(t))return!0;if(n&&o(n)){var r=t;do{if(r&&e.isSameNode(r))return!0;r=r.parentNode||r.host}while(r)}return!1}function V(e){return Object.assign({},e,{left:e.x,top:e.y,right:e.x+e.width,bottom:e.y+e.height})}function N(e,r){return r===W?V(function(e){var n=t(e),r=u(e),o=n.visualViewport,i=r.clientWidth,a=r.clientHeight,s=0,f=0;return o&&(i=o.width,a=o.height,/^((?!chrome|android).)*safari/i.test(navigator.userAgent)||(s=o.offsetLeft,f=o.offsetTop)),{width:i,height:a,x:s+l(e),y:f}}(e)):n(r)?function(e){var t=f(e);return 
t.top=t.top+e.clientTop,t.left=t.left+e.clientLeft,t.bottom=t.top+e.clientHeight,t.right=t.left+e.clientWidth,t.width=e.clientWidth,t.height=e.clientHeight,t.x=t.left,t.y=t.top,t}(r):V(function(e){var t,n=u(e),r=c(e),o=null==(t=e.ownerDocument)?void 0:t.body,a=i(n.scrollWidth,n.clientWidth,o?o.scrollWidth:0,o?o.clientWidth:0),s=i(n.scrollHeight,n.clientHeight,o?o.scrollHeight:0,o?o.clientHeight:0),f=-r.scrollLeft+l(e),p=-r.scrollTop;return"rtl"===d(o||n).direction&&(f+=i(n.clientWidth,o?o.clientWidth:0)-a),{width:a,height:s,x:f,y:p}}(u(e)))}function I(e,t,o){var s="clippingParents"===t?function(e){var t=b(g(e)),o=["absolute","fixed"].indexOf(d(e).position)>=0&&r(e)?O(e):e;return n(o)?t.filter((function(e){return n(e)&&q(e,o)&&"body"!==p(e)})):[]}(e):[].concat(t),f=[].concat(s,[o]),c=f[0],u=f.reduce((function(t,n){var r=N(e,n);return t.top=i(r.top,t.top),t.right=a(r.right,t.right),t.bottom=a(r.bottom,t.bottom),t.left=i(r.left,t.left),t}),N(e,c));return u.width=u.right-u.left,u.height=u.bottom-u.top,u.x=u.left,u.y=u.top,u}function _(e){return e.split("-")[1]}function F(e){return["top","bottom"].indexOf(e)>=0?"x":"y"}function U(e){var t,n=e.reference,r=e.element,o=e.placement,i=o?C(o):null,a=o?_(o):null,s=n.x+n.width/2-r.width/2,f=n.y+n.height/2-r.height/2;switch(i){case j:t={x:s,y:n.y-r.height};break;case E:t={x:s,y:n.y+n.height};break;case D:t={x:n.x+n.width,y:f};break;case A:t={x:n.x-r.width,y:f};break;default:t={x:n.x,y:n.y}}var c=i?F(i):null;if(null!=c){var p="y"===c?"height":"width";switch(a){case M:t[c]=t[c]-(n[p]/2-r[p]/2);break;case k:t[c]=t[c]+(n[p]/2-r[p]/2)}}return t}function z(e){return Object.assign({},{top:0,right:0,bottom:0,left:0},e)}function X(e,t){return t.reduce((function(t,n){return t[n]=e,t}),{})}function Y(e,t){void 0===t&&(t={});var r=t,o=r.placement,i=void 0===o?e.placement:o,a=r.boundary,s=void 0===a?"clippingParents":a,c=r.rootBoundary,p=void 0===c?W:c,l=r.elementContext,d=void 0===l?B:l,h=r.altBoundary,m=void 0!==h&&h,v=r.padding,g=void 0===v?0:v,y=z("number"!=typeof g?g:X(g,P)),b=d===B?"reference":B,x=e.rects.popper,w=e.elements[m?b:d],O=I(n(w)?w:w.contextElement||u(e.elements.popper),s,p),A=f(e.elements.reference),L=U({reference:A,element:x,strategy:"absolute",placement:i}),M=V(Object.assign({},x,L)),k=d===B?M:A,H={top:O.top-k.top+y.top,bottom:k.bottom-O.bottom+y.bottom,left:O.left-k.left+y.left,right:k.right-O.right+y.right},T=e.modifiersData.offset;if(d===B&&T){var R=T[i];Object.keys(H).forEach((function(e){var t=[D,E].indexOf(e)>=0?1:-1,n=[j,E].indexOf(e)>=0?"y":"x";H[e]+=R[n]*t}))}return H}var G={placement:"bottom",modifiers:[],strategy:"absolute"};function J(){for(var e=arguments.length,t=new Array(e),n=0;n=0?-1:1,i="function"==typeof n?n(Object.assign({},t,{placement:e})):n,a=i[0],s=i[1];return a=a||0,s=(s||0)*o,[A,D].indexOf(r)>=0?{x:s,y:a}:{x:a,y:s}}(n,t.rects,i),e}),{}),s=a[t.placement],f=s.x,c=s.y;null!=t.modifiersData.popperOffsets&&(t.modifiersData.popperOffsets.x+=f,t.modifiersData.popperOffsets.y+=c),t.modifiersData[r]=a}},ie={left:"right",right:"left",bottom:"top",top:"bottom"};function ae(e){return e.replace(/left|right|bottom|top/g,(function(e){return ie[e]}))}var se={start:"end",end:"start"};function fe(e){return e.replace(/start|end/g,(function(e){return se[e]}))}function ce(e,t){void 0===t&&(t={});var n=t,r=n.placement,o=n.boundary,i=n.rootBoundary,a=n.padding,s=n.flipVariations,f=n.allowedAutoPlacements,c=void 0===f?T:f,p=_(r),u=p?s?H:H.filter((function(e){return _(e)===p})):P,l=u.filter((function(e){return 
c.indexOf(e)>=0}));0===l.length&&(l=u);var d=l.reduce((function(t,n){return t[n]=Y(e,{placement:n,boundary:o,rootBoundary:i,padding:a})[C(n)],t}),{});return Object.keys(d).sort((function(e,t){return d[e]-d[t]}))}var pe={name:"flip",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name;if(!t.modifiersData[r]._skip){for(var o=n.mainAxis,i=void 0===o||o,a=n.altAxis,s=void 0===a||a,f=n.fallbackPlacements,c=n.padding,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.flipVariations,h=void 0===d||d,m=n.allowedAutoPlacements,v=t.options.placement,g=C(v),y=f||(g===v||!h?[ae(v)]:function(e){if(C(e)===L)return[];var t=ae(e);return[fe(e),t,fe(t)]}(v)),b=[v].concat(y).reduce((function(e,n){return e.concat(C(n)===L?ce(t,{placement:n,boundary:p,rootBoundary:u,padding:c,flipVariations:h,allowedAutoPlacements:m}):n)}),[]),x=t.rects.reference,w=t.rects.popper,O=new Map,P=!0,k=b[0],W=0;W=0,S=R?"width":"height",q=Y(t,{placement:B,boundary:p,rootBoundary:u,altBoundary:l,padding:c}),V=R?T?D:A:T?E:j;x[S]>w[S]&&(V=ae(V));var N=ae(V),I=[];if(i&&I.push(q[H]<=0),s&&I.push(q[V]<=0,q[N]<=0),I.every((function(e){return e}))){k=B,P=!1;break}O.set(B,I)}if(P)for(var F=function(e){var t=b.find((function(t){var n=O.get(t);if(n)return n.slice(0,e).every((function(e){return e}))}));if(t)return k=t,"break"},U=h?3:1;U>0;U--){if("break"===F(U))break}t.placement!==k&&(t.modifiersData[r]._skip=!0,t.placement=k,t.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function ue(e,t,n){return i(e,a(t,n))}var le={name:"preventOverflow",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name,o=n.mainAxis,s=void 0===o||o,f=n.altAxis,c=void 0!==f&&f,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.padding,h=n.tether,m=void 0===h||h,g=n.tetherOffset,y=void 0===g?0:g,b=Y(t,{boundary:p,rootBoundary:u,padding:d,altBoundary:l}),x=C(t.placement),w=_(t.placement),L=!w,P=F(x),k="x"===P?"y":"x",W=t.modifiersData.popperOffsets,B=t.rects.reference,H=t.rects.popper,T="function"==typeof y?y(Object.assign({},t.rects,{placement:t.placement})):y,R="number"==typeof T?{mainAxis:T,altAxis:T}:Object.assign({mainAxis:0,altAxis:0},T),S=t.modifiersData.offset?t.modifiersData.offset[t.placement]:null,q={x:0,y:0};if(W){if(s){var V,N="y"===P?j:A,I="y"===P?E:D,U="y"===P?"height":"width",z=W[P],X=z+b[N],G=z-b[I],J=m?-H[U]/2:0,K=w===M?B[U]:H[U],Q=w===M?-H[U]:-B[U],Z=t.elements.arrow,$=m&&Z?v(Z):{width:0,height:0},ee=t.modifiersData["arrow#persistent"]?t.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},te=ee[N],ne=ee[I],re=ue(0,B[U],$[U]),oe=L?B[U]/2-J-re-te-R.mainAxis:K-re-te-R.mainAxis,ie=L?-B[U]/2+J+re+ne+R.mainAxis:Q+re+ne+R.mainAxis,ae=t.elements.arrow&&O(t.elements.arrow),se=ae?"y"===P?ae.clientTop||0:ae.clientLeft||0:0,fe=null!=(V=null==S?void 0:S[P])?V:0,ce=z+ie-fe,pe=ue(m?a(X,z+oe-fe-se):X,z,m?i(G,ce):G);W[P]=pe,q[P]=pe-z}if(c){var le,de="x"===P?j:A,he="x"===P?E:D,me=W[k],ve="y"===k?"height":"width",ge=me+b[de],ye=me-b[he],be=-1!==[j,A].indexOf(x),xe=null!=(le=null==S?void 0:S[k])?le:0,we=be?ge:me-B[ve]-H[ve]-xe+R.altAxis,Oe=be?me+B[ve]+H[ve]-xe-R.altAxis:ye,je=m&&be?function(e,t,n){var r=ue(e,t,n);return r>n?n:r}(we,me,Oe):ue(m?we:ge,me,m?Oe:ye);W[k]=je,q[k]=je-me}t.modifiersData[r]=q}},requiresIfExists:["offset"]};var de={name:"arrow",enabled:!0,phase:"main",fn:function(e){var t,n=e.state,r=e.name,o=e.options,i=n.elements.arrow,a=n.modifiersData.popperOffsets,s=C(n.placement),f=F(s),c=[A,D].indexOf(s)>=0?"height":"width";if(i&&a){var p=function(e,t){return 
z("number"!=typeof(e="function"==typeof e?e(Object.assign({},t.rects,{placement:t.placement})):e)?e:X(e,P))}(o.padding,n),u=v(i),l="y"===f?j:A,d="y"===f?E:D,h=n.rects.reference[c]+n.rects.reference[f]-a[f]-n.rects.popper[c],m=a[f]-n.rects.reference[f],g=O(i),y=g?"y"===f?g.clientHeight||0:g.clientWidth||0:0,b=h/2-m/2,x=p[l],w=y-u[c]-p[d],L=y/2-u[c]/2+b,M=ue(x,L,w),k=f;n.modifiersData[r]=((t={})[k]=M,t.centerOffset=M-L,t)}},effect:function(e){var t=e.state,n=e.options.element,r=void 0===n?"[data-popper-arrow]":n;null!=r&&("string"!=typeof r||(r=t.elements.popper.querySelector(r)))&&q(t.elements.popper,r)&&(t.elements.arrow=r)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function he(e,t,n){return void 0===n&&(n={x:0,y:0}),{top:e.top-t.height-n.y,right:e.right-t.width+n.x,bottom:e.bottom-t.height+n.y,left:e.left-t.width-n.x}}function me(e){return[j,D,E,A].some((function(t){return e[t]>=0}))}var ve={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(e){var t=e.state,n=e.name,r=t.rects.reference,o=t.rects.popper,i=t.modifiersData.preventOverflow,a=Y(t,{elementContext:"reference"}),s=Y(t,{altBoundary:!0}),f=he(a,r),c=he(s,o,i),p=me(f),u=me(c);t.modifiersData[n]={referenceClippingOffsets:f,popperEscapeOffsets:c,isReferenceHidden:p,hasPopperEscaped:u},t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-reference-hidden":p,"data-popper-escaped":u})}},ge=K({defaultModifiers:[Z,$,ne,re]}),ye=[Z,$,ne,re,oe,pe,le,de,ve],be=K({defaultModifiers:ye});e.applyStyles=re,e.arrow=de,e.computeStyles=ne,e.createPopper=be,e.createPopperLite=ge,e.defaultModifiers=ye,e.detectOverflow=Y,e.eventListeners=Z,e.flip=pe,e.hide=ve,e.offset=oe,e.popperGenerator=K,e.popperOffsets=$,e.preventOverflow=le,Object.defineProperty(e,"__esModule",{value:!0})})); + diff --git a/python-book/site_libs/quarto-html/quarto-syntax-highlighting.css b/python-book/site_libs/quarto-html/quarto-syntax-highlighting.css new file mode 100644 index 00000000..d9fd98f0 --- /dev/null +++ b/python-book/site_libs/quarto-html/quarto-syntax-highlighting.css @@ -0,0 +1,203 @@ +/* quarto syntax highlight colors */ +:root { + --quarto-hl-ot-color: #003B4F; + --quarto-hl-at-color: #657422; + --quarto-hl-ss-color: #20794D; + --quarto-hl-an-color: #5E5E5E; + --quarto-hl-fu-color: #4758AB; + --quarto-hl-st-color: #20794D; + --quarto-hl-cf-color: #003B4F; + --quarto-hl-op-color: #5E5E5E; + --quarto-hl-er-color: #AD0000; + --quarto-hl-bn-color: #AD0000; + --quarto-hl-al-color: #AD0000; + --quarto-hl-va-color: #111111; + --quarto-hl-bu-color: inherit; + --quarto-hl-ex-color: inherit; + --quarto-hl-pp-color: #AD0000; + --quarto-hl-in-color: #5E5E5E; + --quarto-hl-vs-color: #20794D; + --quarto-hl-wa-color: #5E5E5E; + --quarto-hl-do-color: #5E5E5E; + --quarto-hl-im-color: #00769E; + --quarto-hl-ch-color: #20794D; + --quarto-hl-dt-color: #AD0000; + --quarto-hl-fl-color: #AD0000; + --quarto-hl-co-color: #5E5E5E; + --quarto-hl-cv-color: #5E5E5E; + --quarto-hl-cn-color: #8f5902; + --quarto-hl-sc-color: #5E5E5E; + --quarto-hl-dv-color: #AD0000; + --quarto-hl-kw-color: #003B4F; +} + +/* other quarto variables */ +:root { + --quarto-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; +} + +pre > code.sourceCode > span { + color: #003B4F; +} + +code span { + color: #003B4F; +} + +code.sourceCode > span { + color: #003B4F; +} + +div.sourceCode, +div.sourceCode pre.sourceCode { + color: #003B4F; +} + +code span.ot { + color: #003B4F; + font-style: inherit; 
+} + +code span.at { + color: #657422; + font-style: inherit; +} + +code span.ss { + color: #20794D; + font-style: inherit; +} + +code span.an { + color: #5E5E5E; + font-style: inherit; +} + +code span.fu { + color: #4758AB; + font-style: inherit; +} + +code span.st { + color: #20794D; + font-style: inherit; +} + +code span.cf { + color: #003B4F; + font-style: inherit; +} + +code span.op { + color: #5E5E5E; + font-style: inherit; +} + +code span.er { + color: #AD0000; + font-style: inherit; +} + +code span.bn { + color: #AD0000; + font-style: inherit; +} + +code span.al { + color: #AD0000; + font-style: inherit; +} + +code span.va { + color: #111111; + font-style: inherit; +} + +code span.bu { + font-style: inherit; +} + +code span.ex { + font-style: inherit; +} + +code span.pp { + color: #AD0000; + font-style: inherit; +} + +code span.in { + color: #5E5E5E; + font-style: inherit; +} + +code span.vs { + color: #20794D; + font-style: inherit; +} + +code span.wa { + color: #5E5E5E; + font-style: italic; +} + +code span.do { + color: #5E5E5E; + font-style: italic; +} + +code span.im { + color: #00769E; + font-style: inherit; +} + +code span.ch { + color: #20794D; + font-style: inherit; +} + +code span.dt { + color: #AD0000; + font-style: inherit; +} + +code span.fl { + color: #AD0000; + font-style: inherit; +} + +code span.co { + color: #5E5E5E; + font-style: inherit; +} + +code span.cv { + color: #5E5E5E; + font-style: italic; +} + +code span.cn { + color: #8f5902; + font-style: inherit; +} + +code span.sc { + color: #5E5E5E; + font-style: inherit; +} + +code span.dv { + color: #AD0000; + font-style: inherit; +} + +code span.kw { + color: #003B4F; + font-style: inherit; +} + +.prevent-inlining { + content: " { + // Find any conflicting margin elements and add margins to the + // top to prevent overlap + const marginChildren = window.document.querySelectorAll( + ".column-margin.column-container > * " + ); + + let lastBottom = 0; + for (const marginChild of marginChildren) { + if (marginChild.offsetParent !== null) { + // clear the top margin so we recompute it + marginChild.style.marginTop = null; + const top = marginChild.getBoundingClientRect().top + window.scrollY; + console.log({ + childtop: marginChild.getBoundingClientRect().top, + scroll: window.scrollY, + top, + lastBottom, + }); + if (top < lastBottom) { + const margin = lastBottom - top; + marginChild.style.marginTop = `${margin}px`; + } + const styles = window.getComputedStyle(marginChild); + const marginTop = parseFloat(styles["marginTop"]); + + console.log({ + top, + height: marginChild.getBoundingClientRect().height, + marginTop, + total: top + marginChild.getBoundingClientRect().height + marginTop, + }); + lastBottom = top + marginChild.getBoundingClientRect().height + marginTop; + } + } +}; + +window.document.addEventListener("DOMContentLoaded", function (_event) { + // Recompute the position of margin elements anytime the body size changes + if (window.ResizeObserver) { + const resizeObserver = new window.ResizeObserver( + throttle(layoutMarginEls, 50) + ); + resizeObserver.observe(window.document.body); + } + + const tocEl = window.document.querySelector('nav.toc-active[role="doc-toc"]'); + const sidebarEl = window.document.getElementById("quarto-sidebar"); + const leftTocEl = window.document.getElementById("quarto-sidebar-toc-left"); + const marginSidebarEl = window.document.getElementById( + "quarto-margin-sidebar" + ); + // function to determine whether the element has a previous sibling that is active + const 
prevSiblingIsActiveLink = (el) => { + const sibling = el.previousElementSibling; + if (sibling && sibling.tagName === "A") { + return sibling.classList.contains("active"); + } else { + return false; + } + }; + + // fire slideEnter for bootstrap tab activations (for htmlwidget resize behavior) + function fireSlideEnter(e) { + const event = window.document.createEvent("Event"); + event.initEvent("slideenter", true, true); + window.document.dispatchEvent(event); + } + const tabs = window.document.querySelectorAll('a[data-bs-toggle="tab"]'); + tabs.forEach((tab) => { + tab.addEventListener("shown.bs.tab", fireSlideEnter); + }); + + // fire slideEnter for tabby tab activations (for htmlwidget resize behavior) + document.addEventListener("tabby", fireSlideEnter, false); + + // Track scrolling and mark TOC links as active + // get table of contents and sidebar (bail if we don't have at least one) + const tocLinks = tocEl + ? [...tocEl.querySelectorAll("a[data-scroll-target]")] + : []; + const makeActive = (link) => tocLinks[link].classList.add("active"); + const removeActive = (link) => tocLinks[link].classList.remove("active"); + const removeAllActive = () => + [...Array(tocLinks.length).keys()].forEach((link) => removeActive(link)); + + // activate the anchor for a section associated with this TOC entry + tocLinks.forEach((link) => { + link.addEventListener("click", () => { + if (link.href.indexOf("#") !== -1) { + const anchor = link.href.split("#")[1]; + const heading = window.document.querySelector( + `[data-anchor-id=${anchor}]` + ); + if (heading) { + // Add the class + heading.classList.add("reveal-anchorjs-link"); + + // function to show the anchor + const handleMouseout = () => { + heading.classList.remove("reveal-anchorjs-link"); + heading.removeEventListener("mouseout", handleMouseout); + }; + + // add a function to clear the anchor when the user mouses out of it + heading.addEventListener("mouseout", handleMouseout); + } + } + }); + }); + + const sections = tocLinks.map((link) => { + const target = link.getAttribute("data-scroll-target"); + if (target.startsWith("#")) { + return window.document.getElementById(decodeURI(`${target.slice(1)}`)); + } else { + return window.document.querySelector(decodeURI(`${target}`)); + } + }); + + const sectionMargin = 200; + let currentActive = 0; + // track whether we've initialized state the first time + let init = false; + + const updateActiveLink = () => { + // The index from bottom to top (e.g. 
reversed list) + let sectionIndex = -1; + if ( + window.innerHeight + window.pageYOffset >= + window.document.body.offsetHeight + ) { + sectionIndex = 0; + } else { + sectionIndex = [...sections].reverse().findIndex((section) => { + if (section) { + return window.pageYOffset >= section.offsetTop - sectionMargin; + } else { + return false; + } + }); + } + if (sectionIndex > -1) { + const current = sections.length - sectionIndex - 1; + if (current !== currentActive) { + removeAllActive(); + currentActive = current; + makeActive(current); + if (init) { + window.dispatchEvent(sectionChanged); + } + init = true; + } + } + }; + + const inHiddenRegion = (top, bottom, hiddenRegions) => { + for (const region of hiddenRegions) { + if (top <= region.bottom && bottom >= region.top) { + return true; + } + } + return false; + }; + + const categorySelector = "header.quarto-title-block .quarto-category"; + const activateCategories = (href) => { + // Find any categories + // Surround them with a link pointing back to: + // #category=Authoring + try { + const categoryEls = window.document.querySelectorAll(categorySelector); + for (const categoryEl of categoryEls) { + const categoryText = categoryEl.textContent; + if (categoryText) { + const link = `${href}#category=${encodeURIComponent(categoryText)}`; + const linkEl = window.document.createElement("a"); + linkEl.setAttribute("href", link); + for (const child of categoryEl.childNodes) { + linkEl.append(child); + } + categoryEl.appendChild(linkEl); + } + } + } catch { + // Ignore errors + } + }; + function hasTitleCategories() { + return window.document.querySelector(categorySelector) !== null; + } + + function offsetRelativeUrl(url) { + const offset = getMeta("quarto:offset"); + return offset ? offset + url : url; + } + + function offsetAbsoluteUrl(url) { + const offset = getMeta("quarto:offset"); + const baseUrl = new URL(offset, window.location); + + const projRelativeUrl = url.replace(baseUrl, ""); + if (projRelativeUrl.startsWith("/")) { + return projRelativeUrl; + } else { + return "/" + projRelativeUrl; + } + } + + // read a meta tag value + function getMeta(metaName) { + const metas = window.document.getElementsByTagName("meta"); + for (let i = 0; i < metas.length; i++) { + if (metas[i].getAttribute("name") === metaName) { + return metas[i].getAttribute("content"); + } + } + return ""; + } + + async function findAndActivateCategories() { + const currentPagePath = offsetAbsoluteUrl(window.location.href); + const response = await fetch(offsetRelativeUrl("listings.json")); + if (response.status == 200) { + return response.json().then(function (listingPaths) { + const listingHrefs = []; + for (const listingPath of listingPaths) { + const pathWithoutLeadingSlash = listingPath.listing.substring(1); + for (const item of listingPath.items) { + if ( + item === currentPagePath || + item === currentPagePath + "index.html" + ) { + // Resolve this path against the offset to be sure + // we already are using the correct path to the listing + // (this adjusts the listing urls to be rooted against + // whatever root the page is actually running against) + const relative = offsetRelativeUrl(pathWithoutLeadingSlash); + const baseUrl = window.location; + const resolvedPath = new URL(relative, baseUrl); + listingHrefs.push(resolvedPath.pathname); + break; + } + } + } + + // Look up the tree for a nearby linting and use that if we find one + const nearestListing = findNearestParentListing( + offsetAbsoluteUrl(window.location.pathname), + listingHrefs + ); + if 
(nearestListing) { + activateCategories(nearestListing); + } else { + // See if the referrer is a listing page for this item + const referredRelativePath = offsetAbsoluteUrl(document.referrer); + const referrerListing = listingHrefs.find((listingHref) => { + const isListingReferrer = + listingHref === referredRelativePath || + listingHref === referredRelativePath + "index.html"; + return isListingReferrer; + }); + + if (referrerListing) { + // Try to use the referrer if possible + activateCategories(referrerListing); + } else if (listingHrefs.length > 0) { + // Otherwise, just fall back to the first listing + activateCategories(listingHrefs[0]); + } + } + }); + } + } + if (hasTitleCategories()) { + findAndActivateCategories(); + } + + const findNearestParentListing = (href, listingHrefs) => { + if (!href || !listingHrefs) { + return undefined; + } + // Look up the tree for a nearby linting and use that if we find one + const relativeParts = href.substring(1).split("/"); + while (relativeParts.length > 0) { + const path = relativeParts.join("/"); + for (const listingHref of listingHrefs) { + if (listingHref.startsWith(path)) { + return listingHref; + } + } + relativeParts.pop(); + } + + return undefined; + }; + + const manageSidebarVisiblity = (el, placeholderDescriptor) => { + let isVisible = true; + let elRect; + + return (hiddenRegions) => { + if (el === null) { + return; + } + + // Find the last element of the TOC + const lastChildEl = el.lastElementChild; + + if (lastChildEl) { + // Converts the sidebar to a menu + const convertToMenu = () => { + for (const child of el.children) { + child.style.opacity = 0; + child.style.overflow = "hidden"; + } + + nexttick(() => { + const toggleContainer = window.document.createElement("div"); + toggleContainer.style.width = "100%"; + toggleContainer.classList.add("zindex-over-content"); + toggleContainer.classList.add("quarto-sidebar-toggle"); + toggleContainer.classList.add("headroom-target"); // Marks this to be managed by headeroom + toggleContainer.id = placeholderDescriptor.id; + toggleContainer.style.position = "fixed"; + + const toggleIcon = window.document.createElement("i"); + toggleIcon.classList.add("quarto-sidebar-toggle-icon"); + toggleIcon.classList.add("bi"); + toggleIcon.classList.add("bi-caret-down-fill"); + + const toggleTitle = window.document.createElement("div"); + const titleEl = window.document.body.querySelector( + placeholderDescriptor.titleSelector + ); + if (titleEl) { + toggleTitle.append( + titleEl.textContent || titleEl.innerText, + toggleIcon + ); + } + toggleTitle.classList.add("zindex-over-content"); + toggleTitle.classList.add("quarto-sidebar-toggle-title"); + toggleContainer.append(toggleTitle); + + const toggleContents = window.document.createElement("div"); + toggleContents.classList = el.classList; + toggleContents.classList.add("zindex-over-content"); + toggleContents.classList.add("quarto-sidebar-toggle-contents"); + for (const child of el.children) { + if (child.id === "toc-title") { + continue; + } + + const clone = child.cloneNode(true); + clone.style.opacity = 1; + clone.style.display = null; + toggleContents.append(clone); + } + toggleContents.style.height = "0px"; + const positionToggle = () => { + // position the element (top left of parent, same width as parent) + if (!elRect) { + elRect = el.getBoundingClientRect(); + } + toggleContainer.style.left = `${elRect.left}px`; + toggleContainer.style.top = `${elRect.top}px`; + toggleContainer.style.width = `${elRect.width}px`; + }; + positionToggle(); + + 
toggleContainer.append(toggleContents); + el.parentElement.prepend(toggleContainer); + + // Process clicks + let tocShowing = false; + // Allow the caller to control whether this is dismissed + // when it is clicked (e.g. sidebar navigation supports + // opening and closing the nav tree, so don't dismiss on click) + const clickEl = placeholderDescriptor.dismissOnClick + ? toggleContainer + : toggleTitle; + + const closeToggle = () => { + if (tocShowing) { + toggleContainer.classList.remove("expanded"); + toggleContents.style.height = "0px"; + tocShowing = false; + } + }; + + // Get rid of any expanded toggle if the user scrolls + window.document.addEventListener( + "scroll", + throttle(() => { + closeToggle(); + }, 50) + ); + + // Handle positioning of the toggle + window.addEventListener( + "resize", + throttle(() => { + elRect = undefined; + positionToggle(); + }, 50) + ); + + window.addEventListener("quarto-hrChanged", () => { + elRect = undefined; + }); + + // Process the click + clickEl.onclick = () => { + if (!tocShowing) { + toggleContainer.classList.add("expanded"); + toggleContents.style.height = null; + tocShowing = true; + } else { + closeToggle(); + } + }; + }); + }; + + // Converts a sidebar from a menu back to a sidebar + const convertToSidebar = () => { + for (const child of el.children) { + child.style.opacity = 1; + child.style.overflow = null; + } + + const placeholderEl = window.document.getElementById( + placeholderDescriptor.id + ); + if (placeholderEl) { + placeholderEl.remove(); + } + + el.classList.remove("rollup"); + }; + + if (isReaderMode()) { + convertToMenu(); + isVisible = false; + } else { + // Find the top and bottom o the element that is being managed + const elTop = el.offsetTop; + const elBottom = + elTop + lastChildEl.offsetTop + lastChildEl.offsetHeight; + + if (!isVisible) { + // If the element is current not visible reveal if there are + // no conflicts with overlay regions + if (!inHiddenRegion(elTop, elBottom, hiddenRegions)) { + convertToSidebar(); + isVisible = true; + } + } else { + // If the element is visible, hide it if it conflicts with overlay regions + // and insert a placeholder toggle (or if we're in reader mode) + if (inHiddenRegion(elTop, elBottom, hiddenRegions)) { + convertToMenu(); + isVisible = false; + } + } + } + } + }; + }; + + const tabEls = document.querySelectorAll('a[data-bs-toggle="tab"]'); + for (const tabEl of tabEls) { + const id = tabEl.getAttribute("data-bs-target"); + if (id) { + const columnEl = document.querySelector( + `${id} .column-margin, .tabset-margin-content` + ); + if (columnEl) + tabEl.addEventListener("shown.bs.tab", function (event) { + const el = event.srcElement; + if (el) { + const visibleCls = `${el.id}-margin-content`; + // walk up until we find a parent tabset + let panelTabsetEl = el.parentElement; + while (panelTabsetEl) { + if (panelTabsetEl.classList.contains("panel-tabset")) { + break; + } + panelTabsetEl = panelTabsetEl.parentElement; + } + + if (panelTabsetEl) { + const prevSib = panelTabsetEl.previousElementSibling; + if ( + prevSib && + prevSib.classList.contains("tabset-margin-container") + ) { + const childNodes = prevSib.querySelectorAll( + ".tabset-margin-content" + ); + for (const childEl of childNodes) { + if (childEl.classList.contains(visibleCls)) { + childEl.classList.remove("collapse"); + } else { + childEl.classList.add("collapse"); + } + } + } + } + } + + layoutMarginEls(); + }); + } + } + + // Manage the visibility of the toc and the sidebar + const marginScrollVisibility = 
manageSidebarVisiblity(marginSidebarEl, { + id: "quarto-toc-toggle", + titleSelector: "#toc-title", + dismissOnClick: true, + }); + const sidebarScrollVisiblity = manageSidebarVisiblity(sidebarEl, { + id: "quarto-sidebarnav-toggle", + titleSelector: ".title", + dismissOnClick: false, + }); + let tocLeftScrollVisibility; + if (leftTocEl) { + tocLeftScrollVisibility = manageSidebarVisiblity(leftTocEl, { + id: "quarto-lefttoc-toggle", + titleSelector: "#toc-title", + dismissOnClick: true, + }); + } + + // Find the first element that uses formatting in special columns + const conflictingEls = window.document.body.querySelectorAll( + '[class^="column-"], [class*=" column-"], aside, [class*="margin-caption"], [class*=" margin-caption"], [class*="margin-ref"], [class*=" margin-ref"]' + ); + + // Filter all the possibly conflicting elements into ones + // the do conflict on the left or ride side + const arrConflictingEls = Array.from(conflictingEls); + const leftSideConflictEls = arrConflictingEls.filter((el) => { + if (el.tagName === "ASIDE") { + return false; + } + return Array.from(el.classList).find((className) => { + return ( + className !== "column-body" && + className.startsWith("column-") && + !className.endsWith("right") && + !className.endsWith("container") && + className !== "column-margin" + ); + }); + }); + const rightSideConflictEls = arrConflictingEls.filter((el) => { + if (el.tagName === "ASIDE") { + return true; + } + + const hasMarginCaption = Array.from(el.classList).find((className) => { + return className == "margin-caption"; + }); + if (hasMarginCaption) { + return true; + } + + return Array.from(el.classList).find((className) => { + return ( + className !== "column-body" && + !className.endsWith("container") && + className.startsWith("column-") && + !className.endsWith("left") + ); + }); + }); + + const kOverlapPaddingSize = 10; + function toRegions(els) { + return els.map((el) => { + const boundRect = el.getBoundingClientRect(); + const top = + boundRect.top + + document.documentElement.scrollTop - + kOverlapPaddingSize; + return { + top, + bottom: top + el.scrollHeight + 2 * kOverlapPaddingSize, + }; + }); + } + + let hasObserved = false; + const visibleItemObserver = (els) => { + let visibleElements = [...els]; + const intersectionObserver = new IntersectionObserver( + (entries, _observer) => { + entries.forEach((entry) => { + if (entry.isIntersecting) { + if (visibleElements.indexOf(entry.target) === -1) { + visibleElements.push(entry.target); + } + } else { + visibleElements = visibleElements.filter((visibleEntry) => { + return visibleEntry !== entry; + }); + } + }); + + if (!hasObserved) { + hideOverlappedSidebars(); + } + hasObserved = true; + }, + {} + ); + els.forEach((el) => { + intersectionObserver.observe(el); + }); + + return { + getVisibleEntries: () => { + return visibleElements; + }, + }; + }; + + const rightElementObserver = visibleItemObserver(rightSideConflictEls); + const leftElementObserver = visibleItemObserver(leftSideConflictEls); + + const hideOverlappedSidebars = () => { + marginScrollVisibility(toRegions(rightElementObserver.getVisibleEntries())); + sidebarScrollVisiblity(toRegions(leftElementObserver.getVisibleEntries())); + if (tocLeftScrollVisibility) { + tocLeftScrollVisibility( + toRegions(leftElementObserver.getVisibleEntries()) + ); + } + }; + + window.quartoToggleReader = () => { + // Applies a slow class (or removes it) + // to update the transition speed + const slowTransition = (slow) => { + const manageTransition = (id, slow) => { + 
const el = document.getElementById(id); + if (el) { + if (slow) { + el.classList.add("slow"); + } else { + el.classList.remove("slow"); + } + } + }; + + manageTransition("TOC", slow); + manageTransition("quarto-sidebar", slow); + }; + const readerMode = !isReaderMode(); + setReaderModeValue(readerMode); + + // If we're entering reader mode, slow the transition + if (readerMode) { + slowTransition(readerMode); + } + highlightReaderToggle(readerMode); + hideOverlappedSidebars(); + + // If we're exiting reader mode, restore the non-slow transition + if (!readerMode) { + slowTransition(!readerMode); + } + }; + + const highlightReaderToggle = (readerMode) => { + const els = document.querySelectorAll(".quarto-reader-toggle"); + if (els) { + els.forEach((el) => { + if (readerMode) { + el.classList.add("reader"); + } else { + el.classList.remove("reader"); + } + }); + } + }; + + const setReaderModeValue = (val) => { + if (window.location.protocol !== "file:") { + window.localStorage.setItem("quarto-reader-mode", val); + } else { + localReaderMode = val; + } + }; + + const isReaderMode = () => { + if (window.location.protocol !== "file:") { + return window.localStorage.getItem("quarto-reader-mode") === "true"; + } else { + return localReaderMode; + } + }; + let localReaderMode = null; + + const tocOpenDepthStr = tocEl?.getAttribute("data-toc-expanded"); + const tocOpenDepth = tocOpenDepthStr ? Number(tocOpenDepthStr) : 1; + + // Walk the TOC and collapse/expand nodes + // Nodes are expanded if: + // - they are top level + // - they have children that are 'active' links + // - they are directly below an link that is 'active' + const walk = (el, depth) => { + // Tick depth when we enter a UL + if (el.tagName === "UL") { + depth = depth + 1; + } + + // It this is active link + let isActiveNode = false; + if (el.tagName === "A" && el.classList.contains("active")) { + isActiveNode = true; + } + + // See if there is an active child to this element + let hasActiveChild = false; + for (child of el.children) { + hasActiveChild = walk(child, depth) || hasActiveChild; + } + + // Process the collapse state if this is an UL + if (el.tagName === "UL") { + if (tocOpenDepth === -1 && depth > 1) { + el.classList.add("collapse"); + } else if ( + depth <= tocOpenDepth || + hasActiveChild || + prevSiblingIsActiveLink(el) + ) { + el.classList.remove("collapse"); + } else { + el.classList.add("collapse"); + } + + // untick depth when we leave a UL + depth = depth - 1; + } + return hasActiveChild || isActiveNode; + }; + + // walk the TOC and expand / collapse any items that should be shown + + if (tocEl) { + walk(tocEl, 0); + updateActiveLink(); + } + + // Throttle the scroll event and walk peridiocally + window.document.addEventListener( + "scroll", + throttle(() => { + if (tocEl) { + updateActiveLink(); + walk(tocEl, 0); + } + if (!isReaderMode()) { + hideOverlappedSidebars(); + } + }, 5) + ); + window.addEventListener( + "resize", + throttle(() => { + if (!isReaderMode()) { + hideOverlappedSidebars(); + } + }, 10) + ); + hideOverlappedSidebars(); + highlightReaderToggle(isReaderMode()); +}); + +// grouped tabsets +window.addEventListener("pageshow", (_event) => { + function getTabSettings() { + const data = localStorage.getItem("quarto-persistent-tabsets-data"); + if (!data) { + localStorage.setItem("quarto-persistent-tabsets-data", "{}"); + return {}; + } + if (data) { + return JSON.parse(data); + } + } + + function setTabSettings(data) { + localStorage.setItem( + "quarto-persistent-tabsets-data", + 
JSON.stringify(data) + ); + } + + function setTabState(groupName, groupValue) { + const data = getTabSettings(); + data[groupName] = groupValue; + setTabSettings(data); + } + + function toggleTab(tab, active) { + const tabPanelId = tab.getAttribute("aria-controls"); + const tabPanel = document.getElementById(tabPanelId); + if (active) { + tab.classList.add("active"); + tabPanel.classList.add("active"); + } else { + tab.classList.remove("active"); + tabPanel.classList.remove("active"); + } + } + + function toggleAll(selectedGroup, selectorsToSync) { + for (const [thisGroup, tabs] of Object.entries(selectorsToSync)) { + const active = selectedGroup === thisGroup; + for (const tab of tabs) { + toggleTab(tab, active); + } + } + } + + function findSelectorsToSyncByLanguage() { + const result = {}; + const tabs = Array.from( + document.querySelectorAll(`div[data-group] a[id^='tabset-']`) + ); + for (const item of tabs) { + const div = item.parentElement.parentElement.parentElement; + const group = div.getAttribute("data-group"); + if (!result[group]) { + result[group] = {}; + } + const selectorsToSync = result[group]; + const value = item.innerHTML; + if (!selectorsToSync[value]) { + selectorsToSync[value] = []; + } + selectorsToSync[value].push(item); + } + return result; + } + + function setupSelectorSync() { + const selectorsToSync = findSelectorsToSyncByLanguage(); + Object.entries(selectorsToSync).forEach(([group, tabSetsByValue]) => { + Object.entries(tabSetsByValue).forEach(([value, items]) => { + items.forEach((item) => { + item.addEventListener("click", (_event) => { + setTabState(group, value); + toggleAll(value, selectorsToSync[group]); + }); + }); + }); + }); + return selectorsToSync; + } + + const selectorsToSync = setupSelectorSync(); + for (const [group, selectedName] of Object.entries(getTabSettings())) { + const selectors = selectorsToSync[group]; + // it's possible that stale state gives us empty selections, so we explicitly check here. 
+ if (selectors) { + toggleAll(selectedName, selectors); + } + } +}); + +function throttle(func, wait) { + let waiting = false; + return function () { + if (!waiting) { + func.apply(this, arguments); + waiting = true; + setTimeout(function () { + waiting = false; + }, wait); + } + }; +} + +function nexttick(func) { + return setTimeout(func, 0); +} diff --git a/python-book/site_libs/quarto-html/tippy.css b/python-book/site_libs/quarto-html/tippy.css new file mode 100644 index 00000000..e6ae635c --- /dev/null +++ b/python-book/site_libs/quarto-html/tippy.css @@ -0,0 +1 @@ +.tippy-box[data-animation=fade][data-state=hidden]{opacity:0}[data-tippy-root]{max-width:calc(100vw - 10px)}.tippy-box{position:relative;background-color:#333;color:#fff;border-radius:4px;font-size:14px;line-height:1.4;white-space:normal;outline:0;transition-property:transform,visibility,opacity}.tippy-box[data-placement^=top]>.tippy-arrow{bottom:0}.tippy-box[data-placement^=top]>.tippy-arrow:before{bottom:-7px;left:0;border-width:8px 8px 0;border-top-color:initial;transform-origin:center top}.tippy-box[data-placement^=bottom]>.tippy-arrow{top:0}.tippy-box[data-placement^=bottom]>.tippy-arrow:before{top:-7px;left:0;border-width:0 8px 8px;border-bottom-color:initial;transform-origin:center bottom}.tippy-box[data-placement^=left]>.tippy-arrow{right:0}.tippy-box[data-placement^=left]>.tippy-arrow:before{border-width:8px 0 8px 8px;border-left-color:initial;right:-7px;transform-origin:center left}.tippy-box[data-placement^=right]>.tippy-arrow{left:0}.tippy-box[data-placement^=right]>.tippy-arrow:before{left:-7px;border-width:8px 8px 8px 0;border-right-color:initial;transform-origin:center right}.tippy-box[data-inertia][data-state=visible]{transition-timing-function:cubic-bezier(.54,1.5,.38,1.11)}.tippy-arrow{width:16px;height:16px;color:#333}.tippy-arrow:before{content:"";position:absolute;border-color:transparent;border-style:solid}.tippy-content{position:relative;padding:5px 9px;z-index:1} \ No newline at end of file diff --git a/python-book/site_libs/quarto-html/tippy.umd.min.js b/python-book/site_libs/quarto-html/tippy.umd.min.js new file mode 100644 index 00000000..ca292be3 --- /dev/null +++ b/python-book/site_libs/quarto-html/tippy.umd.min.js @@ -0,0 +1,2 @@ +!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?module.exports=t(require("@popperjs/core")):"function"==typeof define&&define.amd?define(["@popperjs/core"],t):(e=e||self).tippy=t(e.Popper)}(this,(function(e){"use strict";var t={passive:!0,capture:!0},n=function(){return document.body};function r(e,t,n){if(Array.isArray(e)){var r=e[t];return null==r?Array.isArray(n)?n[t]:n:r}return e}function o(e,t){var n={}.toString.call(e);return 0===n.indexOf("[object")&&n.indexOf(t+"]")>-1}function i(e,t){return"function"==typeof e?e.apply(void 0,t):e}function a(e,t){return 0===t?e:function(r){clearTimeout(n),n=setTimeout((function(){e(r)}),t)};var n}function s(e,t){var n=Object.assign({},e);return t.forEach((function(e){delete n[e]})),n}function u(e){return[].concat(e)}function c(e,t){-1===e.indexOf(t)&&e.push(t)}function p(e){return e.split("-")[0]}function f(e){return[].slice.call(e)}function l(e){return Object.keys(e).reduce((function(t,n){return void 0!==e[n]&&(t[n]=e[n]),t}),{})}function d(){return document.createElement("div")}function v(e){return["Element","Fragment"].some((function(t){return o(e,t)}))}function m(e){return o(e,"MouseEvent")}function g(e){return!(!e||!e._tippy||e._tippy.reference!==e)}function h(e){return v(e)?[e]:function(e){return 
o(e,"NodeList")}(e)?f(e):Array.isArray(e)?e:f(document.querySelectorAll(e))}function b(e,t){e.forEach((function(e){e&&(e.style.transitionDuration=t+"ms")}))}function y(e,t){e.forEach((function(e){e&&e.setAttribute("data-state",t)}))}function w(e){var t,n=u(e)[0];return null!=n&&null!=(t=n.ownerDocument)&&t.body?n.ownerDocument:document}function E(e,t,n){var r=t+"EventListener";["transitionend","webkitTransitionEnd"].forEach((function(t){e[r](t,n)}))}function O(e,t){for(var n=t;n;){var r;if(e.contains(n))return!0;n=null==n.getRootNode||null==(r=n.getRootNode())?void 0:r.host}return!1}var x={isTouch:!1},C=0;function T(){x.isTouch||(x.isTouch=!0,window.performance&&document.addEventListener("mousemove",A))}function A(){var e=performance.now();e-C<20&&(x.isTouch=!1,document.removeEventListener("mousemove",A)),C=e}function L(){var e=document.activeElement;if(g(e)){var t=e._tippy;e.blur&&!t.state.isVisible&&e.blur()}}var D=!!("undefined"!=typeof window&&"undefined"!=typeof document)&&!!window.msCrypto,R=Object.assign({appendTo:n,aria:{content:"auto",expanded:"auto"},delay:0,duration:[300,250],getReferenceClientRect:null,hideOnClick:!0,ignoreAttributes:!1,interactive:!1,interactiveBorder:2,interactiveDebounce:0,moveTransition:"",offset:[0,10],onAfterUpdate:function(){},onBeforeUpdate:function(){},onCreate:function(){},onDestroy:function(){},onHidden:function(){},onHide:function(){},onMount:function(){},onShow:function(){},onShown:function(){},onTrigger:function(){},onUntrigger:function(){},onClickOutside:function(){},placement:"top",plugins:[],popperOptions:{},render:null,showOnCreate:!1,touch:!0,trigger:"mouseenter focus",triggerTarget:null},{animateFill:!1,followCursor:!1,inlinePositioning:!1,sticky:!1},{allowHTML:!1,animation:"fade",arrow:!0,content:"",inertia:!1,maxWidth:350,role:"tooltip",theme:"",zIndex:9999}),k=Object.keys(R);function P(e){var t=(e.plugins||[]).reduce((function(t,n){var r,o=n.name,i=n.defaultValue;o&&(t[o]=void 0!==e[o]?e[o]:null!=(r=R[o])?r:i);return t}),{});return Object.assign({},e,t)}function j(e,t){var n=Object.assign({},t,{content:i(t.content,[e])},t.ignoreAttributes?{}:function(e,t){return(t?Object.keys(P(Object.assign({},R,{plugins:t}))):k).reduce((function(t,n){var r=(e.getAttribute("data-tippy-"+n)||"").trim();if(!r)return t;if("content"===n)t[n]=r;else try{t[n]=JSON.parse(r)}catch(e){t[n]=r}return t}),{})}(e,t.plugins));return n.aria=Object.assign({},R.aria,n.aria),n.aria={expanded:"auto"===n.aria.expanded?t.interactive:n.aria.expanded,content:"auto"===n.aria.content?t.interactive?null:"describedby":n.aria.content},n}function M(e,t){e.innerHTML=t}function V(e){var t=d();return!0===e?t.className="tippy-arrow":(t.className="tippy-svg-arrow",v(e)?t.appendChild(e):M(t,e)),t}function I(e,t){v(t.content)?(M(e,""),e.appendChild(t.content)):"function"!=typeof t.content&&(t.allowHTML?M(e,t.content):e.textContent=t.content)}function S(e){var t=e.firstElementChild,n=f(t.children);return{box:t,content:n.find((function(e){return e.classList.contains("tippy-content")})),arrow:n.find((function(e){return e.classList.contains("tippy-arrow")||e.classList.contains("tippy-svg-arrow")})),backdrop:n.find((function(e){return e.classList.contains("tippy-backdrop")}))}}function N(e){var t=d(),n=d();n.className="tippy-box",n.setAttribute("data-state","hidden"),n.setAttribute("tabindex","-1");var r=d();function o(n,r){var o=S(t),i=o.box,a=o.content,s=o.arrow;r.theme?i.setAttribute("data-theme",r.theme):i.removeAttribute("data-theme"),"string"==typeof 
r.animation?i.setAttribute("data-animation",r.animation):i.removeAttribute("data-animation"),r.inertia?i.setAttribute("data-inertia",""):i.removeAttribute("data-inertia"),i.style.maxWidth="number"==typeof r.maxWidth?r.maxWidth+"px":r.maxWidth,r.role?i.setAttribute("role",r.role):i.removeAttribute("role"),n.content===r.content&&n.allowHTML===r.allowHTML||I(a,e.props),r.arrow?s?n.arrow!==r.arrow&&(i.removeChild(s),i.appendChild(V(r.arrow))):i.appendChild(V(r.arrow)):s&&i.removeChild(s)}return r.className="tippy-content",r.setAttribute("data-state","hidden"),I(r,e.props),t.appendChild(n),n.appendChild(r),o(e.props,e.props),{popper:t,onUpdate:o}}N.$$tippy=!0;var B=1,H=[],U=[];function _(o,s){var v,g,h,C,T,A,L,k,M=j(o,Object.assign({},R,P(l(s)))),V=!1,I=!1,N=!1,_=!1,F=[],W=a(we,M.interactiveDebounce),X=B++,Y=(k=M.plugins).filter((function(e,t){return k.indexOf(e)===t})),$={id:X,reference:o,popper:d(),popperInstance:null,props:M,state:{isEnabled:!0,isVisible:!1,isDestroyed:!1,isMounted:!1,isShown:!1},plugins:Y,clearDelayTimeouts:function(){clearTimeout(v),clearTimeout(g),cancelAnimationFrame(h)},setProps:function(e){if($.state.isDestroyed)return;ae("onBeforeUpdate",[$,e]),be();var t=$.props,n=j(o,Object.assign({},t,l(e),{ignoreAttributes:!0}));$.props=n,he(),t.interactiveDebounce!==n.interactiveDebounce&&(ce(),W=a(we,n.interactiveDebounce));t.triggerTarget&&!n.triggerTarget?u(t.triggerTarget).forEach((function(e){e.removeAttribute("aria-expanded")})):n.triggerTarget&&o.removeAttribute("aria-expanded");ue(),ie(),J&&J(t,n);$.popperInstance&&(Ce(),Ae().forEach((function(e){requestAnimationFrame(e._tippy.popperInstance.forceUpdate)})));ae("onAfterUpdate",[$,e])},setContent:function(e){$.setProps({content:e})},show:function(){var e=$.state.isVisible,t=$.state.isDestroyed,o=!$.state.isEnabled,a=x.isTouch&&!$.props.touch,s=r($.props.duration,0,R.duration);if(e||t||o||a)return;if(te().hasAttribute("disabled"))return;if(ae("onShow",[$],!1),!1===$.props.onShow($))return;$.state.isVisible=!0,ee()&&(z.style.visibility="visible");ie(),de(),$.state.isMounted||(z.style.transition="none");if(ee()){var u=re(),p=u.box,f=u.content;b([p,f],0)}A=function(){var e;if($.state.isVisible&&!_){if(_=!0,z.offsetHeight,z.style.transition=$.props.moveTransition,ee()&&$.props.animation){var t=re(),n=t.box,r=t.content;b([n,r],s),y([n,r],"visible")}se(),ue(),c(U,$),null==(e=$.popperInstance)||e.forceUpdate(),ae("onMount",[$]),$.props.animation&&ee()&&function(e,t){me(e,t)}(s,(function(){$.state.isShown=!0,ae("onShown",[$])}))}},function(){var e,t=$.props.appendTo,r=te();e=$.props.interactive&&t===n||"parent"===t?r.parentNode:i(t,[r]);e.contains(z)||e.appendChild(z);$.state.isMounted=!0,Ce()}()},hide:function(){var e=!$.state.isVisible,t=$.state.isDestroyed,n=!$.state.isEnabled,o=r($.props.duration,1,R.duration);if(e||t||n)return;if(ae("onHide",[$],!1),!1===$.props.onHide($))return;$.state.isVisible=!1,$.state.isShown=!1,_=!1,V=!1,ee()&&(z.style.visibility="hidden");if(ce(),ve(),ie(!0),ee()){var 
i=re(),a=i.box,s=i.content;$.props.animation&&(b([a,s],o),y([a,s],"hidden"))}se(),ue(),$.props.animation?ee()&&function(e,t){me(e,(function(){!$.state.isVisible&&z.parentNode&&z.parentNode.contains(z)&&t()}))}(o,$.unmount):$.unmount()},hideWithInteractivity:function(e){ne().addEventListener("mousemove",W),c(H,W),W(e)},enable:function(){$.state.isEnabled=!0},disable:function(){$.hide(),$.state.isEnabled=!1},unmount:function(){$.state.isVisible&&$.hide();if(!$.state.isMounted)return;Te(),Ae().forEach((function(e){e._tippy.unmount()})),z.parentNode&&z.parentNode.removeChild(z);U=U.filter((function(e){return e!==$})),$.state.isMounted=!1,ae("onHidden",[$])},destroy:function(){if($.state.isDestroyed)return;$.clearDelayTimeouts(),$.unmount(),be(),delete o._tippy,$.state.isDestroyed=!0,ae("onDestroy",[$])}};if(!M.render)return $;var q=M.render($),z=q.popper,J=q.onUpdate;z.setAttribute("data-tippy-root",""),z.id="tippy-"+$.id,$.popper=z,o._tippy=$,z._tippy=$;var G=Y.map((function(e){return e.fn($)})),K=o.hasAttribute("aria-expanded");return he(),ue(),ie(),ae("onCreate",[$]),M.showOnCreate&&Le(),z.addEventListener("mouseenter",(function(){$.props.interactive&&$.state.isVisible&&$.clearDelayTimeouts()})),z.addEventListener("mouseleave",(function(){$.props.interactive&&$.props.trigger.indexOf("mouseenter")>=0&&ne().addEventListener("mousemove",W)})),$;function Q(){var e=$.props.touch;return Array.isArray(e)?e:[e,0]}function Z(){return"hold"===Q()[0]}function ee(){var e;return!(null==(e=$.props.render)||!e.$$tippy)}function te(){return L||o}function ne(){var e=te().parentNode;return e?w(e):document}function re(){return S(z)}function oe(e){return $.state.isMounted&&!$.state.isVisible||x.isTouch||C&&"focus"===C.type?0:r($.props.delay,e?0:1,R.delay)}function ie(e){void 0===e&&(e=!1),z.style.pointerEvents=$.props.interactive&&!e?"":"none",z.style.zIndex=""+$.props.zIndex}function ae(e,t,n){var r;(void 0===n&&(n=!0),G.forEach((function(n){n[e]&&n[e].apply(n,t)})),n)&&(r=$.props)[e].apply(r,t)}function se(){var e=$.props.aria;if(e.content){var t="aria-"+e.content,n=z.id;u($.props.triggerTarget||o).forEach((function(e){var r=e.getAttribute(t);if($.state.isVisible)e.setAttribute(t,r?r+" "+n:n);else{var o=r&&r.replace(n,"").trim();o?e.setAttribute(t,o):e.removeAttribute(t)}}))}}function ue(){!K&&$.props.aria.expanded&&u($.props.triggerTarget||o).forEach((function(e){$.props.interactive?e.setAttribute("aria-expanded",$.state.isVisible&&e===te()?"true":"false"):e.removeAttribute("aria-expanded")}))}function ce(){ne().removeEventListener("mousemove",W),H=H.filter((function(e){return e!==W}))}function pe(e){if(!x.isTouch||!N&&"mousedown"!==e.type){var t=e.composedPath&&e.composedPath()[0]||e.target;if(!$.props.interactive||!O(z,t)){if(u($.props.triggerTarget||o).some((function(e){return O(e,t)}))){if(x.isTouch)return;if($.state.isVisible&&$.props.trigger.indexOf("click")>=0)return}else ae("onClickOutside",[$,e]);!0===$.props.hideOnClick&&($.clearDelayTimeouts(),$.hide(),I=!0,setTimeout((function(){I=!1})),$.state.isMounted||ve())}}}function fe(){N=!0}function le(){N=!1}function de(){var e=ne();e.addEventListener("mousedown",pe,!0),e.addEventListener("touchend",pe,t),e.addEventListener("touchstart",le,t),e.addEventListener("touchmove",fe,t)}function ve(){var e=ne();e.removeEventListener("mousedown",pe,!0),e.removeEventListener("touchend",pe,t),e.removeEventListener("touchstart",le,t),e.removeEventListener("touchmove",fe,t)}function me(e,t){var n=re().box;function 
r(e){e.target===n&&(E(n,"remove",r),t())}if(0===e)return t();E(n,"remove",T),E(n,"add",r),T=r}function ge(e,t,n){void 0===n&&(n=!1),u($.props.triggerTarget||o).forEach((function(r){r.addEventListener(e,t,n),F.push({node:r,eventType:e,handler:t,options:n})}))}function he(){var e;Z()&&(ge("touchstart",ye,{passive:!0}),ge("touchend",Ee,{passive:!0})),(e=$.props.trigger,e.split(/\s+/).filter(Boolean)).forEach((function(e){if("manual"!==e)switch(ge(e,ye),e){case"mouseenter":ge("mouseleave",Ee);break;case"focus":ge(D?"focusout":"blur",Oe);break;case"focusin":ge("focusout",Oe)}}))}function be(){F.forEach((function(e){var t=e.node,n=e.eventType,r=e.handler,o=e.options;t.removeEventListener(n,r,o)})),F=[]}function ye(e){var t,n=!1;if($.state.isEnabled&&!xe(e)&&!I){var r="focus"===(null==(t=C)?void 0:t.type);C=e,L=e.currentTarget,ue(),!$.state.isVisible&&m(e)&&H.forEach((function(t){return t(e)})),"click"===e.type&&($.props.trigger.indexOf("mouseenter")<0||V)&&!1!==$.props.hideOnClick&&$.state.isVisible?n=!0:Le(e),"click"===e.type&&(V=!n),n&&!r&&De(e)}}function we(e){var t=e.target,n=te().contains(t)||z.contains(t);"mousemove"===e.type&&n||function(e,t){var n=t.clientX,r=t.clientY;return e.every((function(e){var t=e.popperRect,o=e.popperState,i=e.props.interactiveBorder,a=p(o.placement),s=o.modifiersData.offset;if(!s)return!0;var u="bottom"===a?s.top.y:0,c="top"===a?s.bottom.y:0,f="right"===a?s.left.x:0,l="left"===a?s.right.x:0,d=t.top-r+u>i,v=r-t.bottom-c>i,m=t.left-n+f>i,g=n-t.right-l>i;return d||v||m||g}))}(Ae().concat(z).map((function(e){var t,n=null==(t=e._tippy.popperInstance)?void 0:t.state;return n?{popperRect:e.getBoundingClientRect(),popperState:n,props:M}:null})).filter(Boolean),e)&&(ce(),De(e))}function Ee(e){xe(e)||$.props.trigger.indexOf("click")>=0&&V||($.props.interactive?$.hideWithInteractivity(e):De(e))}function Oe(e){$.props.trigger.indexOf("focusin")<0&&e.target!==te()||$.props.interactive&&e.relatedTarget&&z.contains(e.relatedTarget)||De(e)}function xe(e){return!!x.isTouch&&Z()!==e.type.indexOf("touch")>=0}function Ce(){Te();var t=$.props,n=t.popperOptions,r=t.placement,i=t.offset,a=t.getReferenceClientRect,s=t.moveTransition,u=ee()?S(z).arrow:null,c=a?{getBoundingClientRect:a,contextElement:a.contextElement||te()}:o,p=[{name:"offset",options:{offset:i}},{name:"preventOverflow",options:{padding:{top:2,bottom:2,left:5,right:5}}},{name:"flip",options:{padding:5}},{name:"computeStyles",options:{adaptive:!s}},{name:"$$tippy",enabled:!0,phase:"beforeWrite",requires:["computeStyles"],fn:function(e){var t=e.state;if(ee()){var n=re().box;["placement","reference-hidden","escaped"].forEach((function(e){"placement"===e?n.setAttribute("data-placement",t.placement):t.attributes.popper["data-popper-"+e]?n.setAttribute("data-"+e,""):n.removeAttribute("data-"+e)})),t.attributes.popper={}}}}];ee()&&u&&p.push({name:"arrow",options:{element:u,padding:3}}),p.push.apply(p,(null==n?void 0:n.modifiers)||[]),$.popperInstance=e.createPopper(c,z,Object.assign({},n,{placement:r,onFirstUpdate:A,modifiers:p}))}function Te(){$.popperInstance&&($.popperInstance.destroy(),$.popperInstance=null)}function Ae(){return f(z.querySelectorAll("[data-tippy-root]"))}function Le(e){$.clearDelayTimeouts(),e&&ae("onTrigger",[$,e]),de();var t=oe(!0),n=Q(),r=n[0],o=n[1];x.isTouch&&"hold"===r&&o&&(t=o),t?v=setTimeout((function(){$.show()}),t):$.show()}function 
De(e){if($.clearDelayTimeouts(),ae("onUntrigger",[$,e]),$.state.isVisible){if(!($.props.trigger.indexOf("mouseenter")>=0&&$.props.trigger.indexOf("click")>=0&&["mouseleave","mousemove"].indexOf(e.type)>=0&&V)){var t=oe(!1);t?g=setTimeout((function(){$.state.isVisible&&$.hide()}),t):h=requestAnimationFrame((function(){$.hide()}))}}else ve()}}function F(e,n){void 0===n&&(n={});var r=R.plugins.concat(n.plugins||[]);document.addEventListener("touchstart",T,t),window.addEventListener("blur",L);var o=Object.assign({},n,{plugins:r}),i=h(e).reduce((function(e,t){var n=t&&_(t,o);return n&&e.push(n),e}),[]);return v(e)?i[0]:i}F.defaultProps=R,F.setDefaultProps=function(e){Object.keys(e).forEach((function(t){R[t]=e[t]}))},F.currentInput=x;var W=Object.assign({},e.applyStyles,{effect:function(e){var t=e.state,n={popper:{position:t.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};Object.assign(t.elements.popper.style,n.popper),t.styles=n,t.elements.arrow&&Object.assign(t.elements.arrow.style,n.arrow)}}),X={mouseover:"mouseenter",focusin:"focus",click:"click"};var Y={name:"animateFill",defaultValue:!1,fn:function(e){var t;if(null==(t=e.props.render)||!t.$$tippy)return{};var n=S(e.popper),r=n.box,o=n.content,i=e.props.animateFill?function(){var e=d();return e.className="tippy-backdrop",y([e],"hidden"),e}():null;return{onCreate:function(){i&&(r.insertBefore(i,r.firstElementChild),r.setAttribute("data-animatefill",""),r.style.overflow="hidden",e.setProps({arrow:!1,animation:"shift-away"}))},onMount:function(){if(i){var e=r.style.transitionDuration,t=Number(e.replace("ms",""));o.style.transitionDelay=Math.round(t/10)+"ms",i.style.transitionDuration=e,y([i],"visible")}},onShow:function(){i&&(i.style.transitionDuration="0ms")},onHide:function(){i&&y([i],"hidden")}}}};var $={clientX:0,clientY:0},q=[];function z(e){var t=e.clientX,n=e.clientY;$={clientX:t,clientY:n}}var J={name:"followCursor",defaultValue:!1,fn:function(e){var t=e.reference,n=w(e.props.triggerTarget||t),r=!1,o=!1,i=!0,a=e.props;function s(){return"initial"===e.props.followCursor&&e.state.isVisible}function u(){n.addEventListener("mousemove",f)}function c(){n.removeEventListener("mousemove",f)}function p(){r=!0,e.setProps({getReferenceClientRect:null}),r=!1}function f(n){var r=!n.target||t.contains(n.target),o=e.props.followCursor,i=n.clientX,a=n.clientY,s=t.getBoundingClientRect(),u=i-s.left,c=a-s.top;!r&&e.props.interactive||e.setProps({getReferenceClientRect:function(){var e=t.getBoundingClientRect(),n=i,r=a;"initial"===o&&(n=e.left+u,r=e.top+c);var s="horizontal"===o?e.top:r,p="vertical"===o?e.right:n,f="horizontal"===o?e.bottom:r,l="vertical"===o?e.left:n;return{width:p-l,height:f-s,top:s,right:p,bottom:f,left:l}}})}function l(){e.props.followCursor&&(q.push({instance:e,doc:n}),function(e){e.addEventListener("mousemove",z)}(n))}function d(){0===(q=q.filter((function(t){return t.instance!==e}))).filter((function(e){return e.doc===n})).length&&function(e){e.removeEventListener("mousemove",z)}(n)}return{onCreate:l,onDestroy:d,onBeforeUpdate:function(){a=e.props},onAfterUpdate:function(t,n){var i=n.followCursor;r||void 0!==i&&a.followCursor!==i&&(d(),i?(l(),!e.state.isMounted||o||s()||u()):(c(),p()))},onMount:function(){e.props.followCursor&&!o&&(i&&(f($),i=!1),s()||u())},onTrigger:function(e,t){m(t)&&($={clientX:t.clientX,clientY:t.clientY}),o="focus"===t.type},onHidden:function(){e.props.followCursor&&(p(),c(),i=!0)}}}};var G={name:"inlinePositioning",defaultValue:!1,fn:function(e){var 
t,n=e.reference;var r=-1,o=!1,i=[],a={name:"tippyInlinePositioning",enabled:!0,phase:"afterWrite",fn:function(o){var a=o.state;e.props.inlinePositioning&&(-1!==i.indexOf(a.placement)&&(i=[]),t!==a.placement&&-1===i.indexOf(a.placement)&&(i.push(a.placement),e.setProps({getReferenceClientRect:function(){return function(e){return function(e,t,n,r){if(n.length<2||null===e)return t;if(2===n.length&&r>=0&&n[0].left>n[1].right)return n[r]||t;switch(e){case"top":case"bottom":var o=n[0],i=n[n.length-1],a="top"===e,s=o.top,u=i.bottom,c=a?o.left:i.left,p=a?o.right:i.right;return{top:s,bottom:u,left:c,right:p,width:p-c,height:u-s};case"left":case"right":var f=Math.min.apply(Math,n.map((function(e){return e.left}))),l=Math.max.apply(Math,n.map((function(e){return e.right}))),d=n.filter((function(t){return"left"===e?t.left===f:t.right===l})),v=d[0].top,m=d[d.length-1].bottom;return{top:v,bottom:m,left:f,right:l,width:l-f,height:m-v};default:return t}}(p(e),n.getBoundingClientRect(),f(n.getClientRects()),r)}(a.placement)}})),t=a.placement)}};function s(){var t;o||(t=function(e,t){var n;return{popperOptions:Object.assign({},e.popperOptions,{modifiers:[].concat(((null==(n=e.popperOptions)?void 0:n.modifiers)||[]).filter((function(e){return e.name!==t.name})),[t])})}}(e.props,a),o=!0,e.setProps(t),o=!1)}return{onCreate:s,onAfterUpdate:s,onTrigger:function(t,n){if(m(n)){var o=f(e.reference.getClientRects()),i=o.find((function(e){return e.left-2<=n.clientX&&e.right+2>=n.clientX&&e.top-2<=n.clientY&&e.bottom+2>=n.clientY})),a=o.indexOf(i);r=a>-1?a:r}},onHidden:function(){r=-1}}}};var K={name:"sticky",defaultValue:!1,fn:function(e){var t=e.reference,n=e.popper;function r(t){return!0===e.props.sticky||e.props.sticky===t}var o=null,i=null;function a(){var s=r("reference")?(e.popperInstance?e.popperInstance.state.elements.reference:t).getBoundingClientRect():null,u=r("popper")?n.getBoundingClientRect():null;(s&&Q(o,s)||u&&Q(i,u))&&e.popperInstance&&e.popperInstance.update(),o=s,i=u,e.state.isMounted&&requestAnimationFrame(a)}return{onMount:function(){e.props.sticky&&a()}}}};function Q(e,t){return!e||!t||(e.top!==t.top||e.right!==t.right||e.bottom!==t.bottom||e.left!==t.left)}return F.setDefaultProps({plugins:[Y,J,G,K],render:N}),F.createSingleton=function(e,t){var n;void 0===t&&(t={});var r,o=e,i=[],a=[],c=t.overrides,p=[],f=!1;function l(){a=o.map((function(e){return u(e.props.triggerTarget||e.reference)})).reduce((function(e,t){return e.concat(t)}),[])}function v(){i=o.map((function(e){return e.reference}))}function m(e){o.forEach((function(t){e?t.enable():t.disable()}))}function g(e){return o.map((function(t){var n=t.setProps;return t.setProps=function(o){n(o),t.reference===r&&e.setProps(o)},function(){t.setProps=n}}))}function h(e,t){var n=a.indexOf(t);if(t!==r){r=t;var s=(c||[]).concat("content").reduce((function(e,t){return e[t]=o[n].props[t],e}),{});e.setProps(Object.assign({},s,{getReferenceClientRect:"function"==typeof s.getReferenceClientRect?s.getReferenceClientRect:function(){var e;return null==(e=i[n])?void 0:e.getBoundingClientRect()}}))}}m(!1),v(),l();var 
b={fn:function(){return{onDestroy:function(){m(!0)},onHidden:function(){r=null},onClickOutside:function(e){e.props.showOnCreate&&!f&&(f=!0,r=null)},onShow:function(e){e.props.showOnCreate&&!f&&(f=!0,h(e,i[0]))},onTrigger:function(e,t){h(e,t.currentTarget)}}}},y=F(d(),Object.assign({},s(t,["overrides"]),{plugins:[b].concat(t.plugins||[]),triggerTarget:a,popperOptions:Object.assign({},t.popperOptions,{modifiers:[].concat((null==(n=t.popperOptions)?void 0:n.modifiers)||[],[W])})})),w=y.show;y.show=function(e){if(w(),!r&&null==e)return h(y,i[0]);if(!r||null!=e){if("number"==typeof e)return i[e]&&h(y,i[e]);if(o.indexOf(e)>=0){var t=e.reference;return h(y,t)}return i.indexOf(e)>=0?h(y,e):void 0}},y.showNext=function(){var e=i[0];if(!r)return y.show(0);var t=i.indexOf(r);y.show(i[t+1]||e)},y.showPrevious=function(){var e=i[i.length-1];if(!r)return y.show(e);var t=i.indexOf(r),n=i[t-1]||e;y.show(n)};var E=y.setProps;return y.setProps=function(e){c=e.overrides||c,E(e)},y.setInstances=function(e){m(!0),p.forEach((function(e){return e()})),o=e,m(!1),v(),l(),p=g(y),y.setProps({triggerTarget:a})},p=g(y),y},F.delegate=function(e,n){var r=[],o=[],i=!1,a=n.target,c=s(n,["target"]),p=Object.assign({},c,{trigger:"manual",touch:!1}),f=Object.assign({touch:R.touch},c,{showOnCreate:!0}),l=F(e,p);function d(e){if(e.target&&!i){var t=e.target.closest(a);if(t){var r=t.getAttribute("data-tippy-trigger")||n.trigger||R.trigger;if(!t._tippy&&!("touchstart"===e.type&&"boolean"==typeof f.touch||"touchstart"!==e.type&&r.indexOf(X[e.type])<0)){var s=F(t,f);s&&(o=o.concat(s))}}}}function v(e,t,n,o){void 0===o&&(o=!1),e.addEventListener(t,n,o),r.push({node:e,eventType:t,handler:n,options:o})}return u(l).forEach((function(e){var n=e.destroy,a=e.enable,s=e.disable;e.destroy=function(e){void 0===e&&(e=!0),e&&o.forEach((function(e){e.destroy()})),o=[],r.forEach((function(e){var t=e.node,n=e.eventType,r=e.handler,o=e.options;t.removeEventListener(n,r,o)})),r=[],n()},e.enable=function(){a(),o.forEach((function(e){return e.enable()})),i=!1},e.disable=function(){s(),o.forEach((function(e){return e.disable()})),i=!0},function(e){var n=e.reference;v(n,"touchstart",d,t),v(n,"mouseover",d),v(n,"focusin",d),v(n,"click",d)}(e)})),l},F.hideAll=function(e){var t=void 0===e?{}:e,n=t.exclude,r=t.duration;U.forEach((function(e){var t=!1;if(n&&(t=g(n)?e.reference===n:e.popper===n.popper),!t){var o=e.props.duration;e.setProps({duration:r}),e.hide(),e.state.isDestroyed||e.setProps({duration:o})}}))},F.roundArrow='',F})); + diff --git a/python-book/site_libs/quarto-nav/headroom.min.js b/python-book/site_libs/quarto-nav/headroom.min.js new file mode 100644 index 00000000..b08f1dff --- /dev/null +++ b/python-book/site_libs/quarto-nav/headroom.min.js @@ -0,0 +1,7 @@ +/*! + * headroom.js v0.12.0 - Give your page some headroom. 
Hide your header until you need it + * Copyright (c) 2020 Nick Williams - http://wicky.nillia.ms/headroom.js + * License: MIT + */ + +!function(t,n){"object"==typeof exports&&"undefined"!=typeof module?module.exports=n():"function"==typeof define&&define.amd?define(n):(t=t||self).Headroom=n()}(this,function(){"use strict";function t(){return"undefined"!=typeof window}function d(t){return function(t){return t&&t.document&&function(t){return 9===t.nodeType}(t.document)}(t)?function(t){var n=t.document,o=n.body,s=n.documentElement;return{scrollHeight:function(){return Math.max(o.scrollHeight,s.scrollHeight,o.offsetHeight,s.offsetHeight,o.clientHeight,s.clientHeight)},height:function(){return t.innerHeight||s.clientHeight||o.clientHeight},scrollY:function(){return void 0!==t.pageYOffset?t.pageYOffset:(s||o.parentNode||o).scrollTop}}}(t):function(t){return{scrollHeight:function(){return Math.max(t.scrollHeight,t.offsetHeight,t.clientHeight)},height:function(){return Math.max(t.offsetHeight,t.clientHeight)},scrollY:function(){return t.scrollTop}}}(t)}function n(t,s,e){var n,o=function(){var n=!1;try{var t={get passive(){n=!0}};window.addEventListener("test",t,t),window.removeEventListener("test",t,t)}catch(t){n=!1}return n}(),i=!1,r=d(t),l=r.scrollY(),a={};function c(){var t=Math.round(r.scrollY()),n=r.height(),o=r.scrollHeight();a.scrollY=t,a.lastScrollY=l,a.direction=ls.tolerance[a.direction],e(a),l=t,i=!1}function h(){i||(i=!0,n=requestAnimationFrame(c))}var u=!!o&&{passive:!0,capture:!1};return t.addEventListener("scroll",h,u),c(),{destroy:function(){cancelAnimationFrame(n),t.removeEventListener("scroll",h,u)}}}function o(t){return t===Object(t)?t:{down:t,up:t}}function s(t,n){n=n||{},Object.assign(this,s.options,n),this.classes=Object.assign({},s.options.classes,n.classes),this.elem=t,this.tolerance=o(this.tolerance),this.offset=o(this.offset),this.initialised=!1,this.frozen=!1}return s.prototype={constructor:s,init:function(){return s.cutsTheMustard&&!this.initialised&&(this.addClass("initial"),this.initialised=!0,setTimeout(function(t){t.scrollTracker=n(t.scroller,{offset:t.offset,tolerance:t.tolerance},t.update.bind(t))},100,this)),this},destroy:function(){this.initialised=!1,Object.keys(this.classes).forEach(this.removeClass,this),this.scrollTracker.destroy()},unpin:function(){!this.hasClass("pinned")&&this.hasClass("unpinned")||(this.addClass("unpinned"),this.removeClass("pinned"),this.onUnpin&&this.onUnpin.call(this))},pin:function(){this.hasClass("unpinned")&&(this.addClass("pinned"),this.removeClass("unpinned"),this.onPin&&this.onPin.call(this))},freeze:function(){this.frozen=!0,this.addClass("frozen")},unfreeze:function(){this.frozen=!1,this.removeClass("frozen")},top:function(){this.hasClass("top")||(this.addClass("top"),this.removeClass("notTop"),this.onTop&&this.onTop.call(this))},notTop:function(){this.hasClass("notTop")||(this.addClass("notTop"),this.removeClass("top"),this.onNotTop&&this.onNotTop.call(this))},bottom:function(){this.hasClass("bottom")||(this.addClass("bottom"),this.removeClass("notBottom"),this.onBottom&&this.onBottom.call(this))},notBottom:function(){this.hasClass("notBottom")||(this.addClass("notBottom"),this.removeClass("bottom"),this.onNotBottom&&this.onNotBottom.call(this))},shouldUnpin:function(t){return"down"===t.direction&&!t.top&&t.toleranceExceeded},shouldPin:function(t){return"up"===t.direction&&t.toleranceExceeded||t.top},addClass:function(t){this.elem.classList.add.apply(this.elem.classList,this.classes[t].split(" 
"))},removeClass:function(t){this.elem.classList.remove.apply(this.elem.classList,this.classes[t].split(" "))},hasClass:function(t){return this.classes[t].split(" ").every(function(t){return this.classList.contains(t)},this.elem)},update:function(t){t.isOutOfBounds||!0!==this.frozen&&(t.top?this.top():this.notTop(),t.bottom?this.bottom():this.notBottom(),this.shouldUnpin(t)?this.unpin():this.shouldPin(t)&&this.pin())}},s.options={tolerance:{up:0,down:0},offset:0,scroller:t()?window:null,classes:{frozen:"headroom--frozen",pinned:"headroom--pinned",unpinned:"headroom--unpinned",top:"headroom--top",notTop:"headroom--not-top",bottom:"headroom--bottom",notBottom:"headroom--not-bottom",initial:"headroom"}},s.cutsTheMustard=!!(t()&&function(){}.bind&&"classList"in document.documentElement&&Object.assign&&Object.keys&&requestAnimationFrame),s}); diff --git a/python-book/site_libs/quarto-nav/quarto-nav.js b/python-book/site_libs/quarto-nav/quarto-nav.js new file mode 100644 index 00000000..3b21201f --- /dev/null +++ b/python-book/site_libs/quarto-nav/quarto-nav.js @@ -0,0 +1,277 @@ +const headroomChanged = new CustomEvent("quarto-hrChanged", { + detail: {}, + bubbles: true, + cancelable: false, + composed: false, +}); + +window.document.addEventListener("DOMContentLoaded", function () { + let init = false; + + // Manage the back to top button, if one is present. + let lastScrollTop = window.pageYOffset || document.documentElement.scrollTop; + const scrollDownBuffer = 5; + const scrollUpBuffer = 35; + const btn = document.getElementById("quarto-back-to-top"); + const hideBackToTop = () => { + btn.style.display = "none"; + }; + const showBackToTop = () => { + btn.style.display = "inline-block"; + }; + if (btn) { + window.document.addEventListener( + "scroll", + function () { + const currentScrollTop = + window.pageYOffset || document.documentElement.scrollTop; + + // Shows and hides the button 'intelligently' as the user scrolls + if (currentScrollTop - scrollDownBuffer > lastScrollTop) { + hideBackToTop(); + lastScrollTop = currentScrollTop <= 0 ? 0 : currentScrollTop; + } else if (currentScrollTop < lastScrollTop - scrollUpBuffer) { + showBackToTop(); + lastScrollTop = currentScrollTop <= 0 ? 
0 : currentScrollTop; + } + + // Show the button at the bottom, hides it at the top + if (currentScrollTop <= 0) { + hideBackToTop(); + } else if ( + window.innerHeight + currentScrollTop >= + document.body.offsetHeight + ) { + showBackToTop(); + } + }, + false + ); + } + + function throttle(func, wait) { + var timeout; + return function () { + const context = this; + const args = arguments; + const later = function () { + clearTimeout(timeout); + timeout = null; + func.apply(context, args); + }; + + if (!timeout) { + timeout = setTimeout(later, wait); + } + }; + } + + function headerOffset() { + // Set an offset if there is are fixed top navbar + const headerEl = window.document.querySelector("header.fixed-top"); + if (headerEl) { + return headerEl.clientHeight; + } else { + return 0; + } + } + + function footerOffset() { + const footerEl = window.document.querySelector("footer.footer"); + if (footerEl) { + return footerEl.clientHeight; + } else { + return 0; + } + } + + function updateDocumentOffsetWithoutAnimation() { + updateDocumentOffset(false); + } + + function updateDocumentOffset(animated) { + // set body offset + const topOffset = headerOffset(); + const bodyOffset = topOffset + footerOffset(); + const bodyEl = window.document.body; + bodyEl.setAttribute("data-bs-offset", topOffset); + bodyEl.style.paddingTop = topOffset + "px"; + + // deal with sidebar offsets + const sidebars = window.document.querySelectorAll( + ".sidebar, .headroom-target" + ); + sidebars.forEach((sidebar) => { + if (!animated) { + sidebar.classList.add("notransition"); + // Remove the no transition class after the animation has time to complete + setTimeout(function () { + sidebar.classList.remove("notransition"); + }, 201); + } + + if (window.Headroom && sidebar.classList.contains("sidebar-unpinned")) { + sidebar.style.top = "0"; + sidebar.style.maxHeight = "100vh"; + } else { + sidebar.style.top = topOffset + "px"; + sidebar.style.maxHeight = "calc(100vh - " + topOffset + "px)"; + } + }); + + // allow space for footer + const mainContainer = window.document.querySelector(".quarto-container"); + if (mainContainer) { + mainContainer.style.minHeight = "calc(100vh - " + bodyOffset + "px)"; + } + + // link offset + let linkStyle = window.document.querySelector("#quarto-target-style"); + if (!linkStyle) { + linkStyle = window.document.createElement("style"); + linkStyle.setAttribute("id", "quarto-target-style"); + window.document.head.appendChild(linkStyle); + } + while (linkStyle.firstChild) { + linkStyle.removeChild(linkStyle.firstChild); + } + if (topOffset > 0) { + linkStyle.appendChild( + window.document.createTextNode(` + section:target::before { + content: ""; + display: block; + height: ${topOffset}px; + margin: -${topOffset}px 0 0; + }`) + ); + } + if (init) { + window.dispatchEvent(headroomChanged); + } + init = true; + } + + // initialize headroom + var header = window.document.querySelector("#quarto-header"); + if (header && window.Headroom) { + const headroom = new window.Headroom(header, { + tolerance: 5, + onPin: function () { + const sidebars = window.document.querySelectorAll( + ".sidebar, .headroom-target" + ); + sidebars.forEach((sidebar) => { + sidebar.classList.remove("sidebar-unpinned"); + }); + updateDocumentOffset(); + }, + onUnpin: function () { + const sidebars = window.document.querySelectorAll( + ".sidebar, .headroom-target" + ); + sidebars.forEach((sidebar) => { + sidebar.classList.add("sidebar-unpinned"); + }); + updateDocumentOffset(); + }, + }); + headroom.init(); + + let frozen = 
false; + window.quartoToggleHeadroom = function () { + if (frozen) { + headroom.unfreeze(); + frozen = false; + } else { + headroom.freeze(); + frozen = true; + } + }; + } + + window.addEventListener( + "hashchange", + function (e) { + if ( + getComputedStyle(document.documentElement).scrollBehavior !== "smooth" + ) { + window.scrollTo(0, window.pageYOffset - headerOffset()); + } + }, + false + ); + + // Observe size changed for the header + const headerEl = window.document.querySelector("header.fixed-top"); + if (headerEl && window.ResizeObserver) { + const observer = new window.ResizeObserver( + updateDocumentOffsetWithoutAnimation + ); + observer.observe(headerEl, { + attributes: true, + childList: true, + characterData: true, + }); + } else { + window.addEventListener( + "resize", + throttle(updateDocumentOffsetWithoutAnimation, 50) + ); + } + setTimeout(updateDocumentOffsetWithoutAnimation, 250); + + // fixup index.html links if we aren't on the filesystem + if (window.location.protocol !== "file:") { + const links = window.document.querySelectorAll("a"); + for (let i = 0; i < links.length; i++) { + if (links[i].href) { + links[i].href = links[i].href.replace(/\/index\.html/, "/"); + } + } + + // Fixup any sharing links that require urls + // Append url to any sharing urls + const sharingLinks = window.document.querySelectorAll( + "a.sidebar-tools-main-item" + ); + for (let i = 0; i < sharingLinks.length; i++) { + const sharingLink = sharingLinks[i]; + const href = sharingLink.getAttribute("href"); + if (href) { + sharingLink.setAttribute( + "href", + href.replace("|url|", window.location.href) + ); + } + } + + // Scroll the active navigation item into view, if necessary + const navSidebar = window.document.querySelector("nav#quarto-sidebar"); + if (navSidebar) { + // Find the active item + const activeItem = navSidebar.querySelector("li.sidebar-item a.active"); + if (activeItem) { + // Wait for the scroll height and height to resolve by observing size changes on the + // nav element that is scrollable + const resizeObserver = new ResizeObserver((_entries) => { + // The bottom of the element + const elBottom = activeItem.offsetTop; + const viewBottom = navSidebar.scrollTop + navSidebar.clientHeight; + + // The element height and scroll height are the same, then we are still loading + if (viewBottom !== navSidebar.scrollHeight) { + // Determine if the item isn't visible and scroll to it + if (elBottom >= viewBottom) { + navSidebar.scrollTop = elBottom; + } + + // stop observing now since we've completed the scroll + resizeObserver.unobserve(navSidebar); + } + }); + resizeObserver.observe(navSidebar); + } + } + } +}); diff --git a/python-book/site_libs/quarto-search/autocomplete.umd.js b/python-book/site_libs/quarto-search/autocomplete.umd.js new file mode 100644 index 00000000..619c57cc --- /dev/null +++ b/python-book/site_libs/quarto-search/autocomplete.umd.js @@ -0,0 +1,3 @@ +/*! @algolia/autocomplete-js 1.7.3 | MIT License | © Algolia, Inc. 
and contributors | https://github.com/algolia/autocomplete */ +!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t((e="undefined"!=typeof globalThis?globalThis:e||self)["@algolia/autocomplete-js"]={})}(this,(function(e){"use strict";function t(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function n(e){for(var n=1;n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function a(e,t){return function(e){if(Array.isArray(e))return e}(e)||function(e,t){var n=null==e?null:"undefined"!=typeof Symbol&&e[Symbol.iterator]||e["@@iterator"];if(null==n)return;var r,o,i=[],u=!0,a=!1;try{for(n=n.call(e);!(u=(r=n.next()).done)&&(i.push(r.value),!t||i.length!==t);u=!0);}catch(e){a=!0,o=e}finally{try{u||null==n.return||n.return()}finally{if(a)throw o}}return i}(e,t)||l(e,t)||function(){throw new TypeError("Invalid attempt to destructure non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function c(e){return function(e){if(Array.isArray(e))return s(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||l(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function l(e,t){if(e){if("string"==typeof e)return s(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);return"Object"===n&&e.constructor&&(n=e.constructor.name),"Map"===n||"Set"===n?Array.from(e):"Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)?s(e,t):void 0}}function s(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n=n?null===r?null:0:o}function S(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function I(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function E(e,t){var n=[];return Promise.resolve(e(t)).then((function(e){return Promise.all(e.filter((function(e){return Boolean(e)})).map((function(e){if(e.sourceId,n.includes(e.sourceId))throw new Error("[Autocomplete] The `sourceId` ".concat(JSON.stringify(e.sourceId)," is not unique."));n.push(e.sourceId);var t=function(e){for(var t=1;te.length)&&(t=e.length);for(var n=0,r=new Array(t);ne.length)&&(t=e.length);for(var n=0,r=new Array(t);n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}var ae,ce,le,se=null,pe=(ae=-1,ce=-1,le=void 0,function(e){var t=++ae;return Promise.resolve(e).then((function(e){return le&&t=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}var ye=["props","refresh","store"],be=["inputElement","formElement","panelElement"],Oe=["inputElement"],_e=["inputElement","maxLength"],Pe=["item","source"];function je(e,t){var 
n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function we(e){for(var t=1;t=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function Ee(e){var t=e.props,n=e.refresh,r=e.store,o=Ie(e,ye);return{getEnvironmentProps:function(e){var n=e.inputElement,o=e.formElement,i=e.panelElement;function u(e){!r.getState().isOpen&&r.pendingRequests.isEmpty()||e.target===n||!1===[o,i].some((function(t){return n=t,r=e.target,n===r||n.contains(r);var n,r}))&&(r.dispatch("blur",null),t.debug||r.pendingRequests.cancelAll())}return we({onTouchStart:u,onMouseDown:u,onTouchMove:function(e){!1!==r.getState().isOpen&&n===t.environment.document.activeElement&&e.target!==n&&n.blur()}},Ie(e,be))},getRootProps:function(e){return we({role:"combobox","aria-expanded":r.getState().isOpen,"aria-haspopup":"listbox","aria-owns":r.getState().isOpen?"".concat(t.id,"-list"):void 0,"aria-labelledby":"".concat(t.id,"-label")},e)},getFormProps:function(e){return e.inputElement,we({action:"",noValidate:!0,role:"search",onSubmit:function(i){var u;i.preventDefault(),t.onSubmit(we({event:i,refresh:n,state:r.getState()},o)),r.dispatch("submit",null),null===(u=e.inputElement)||void 0===u||u.blur()},onReset:function(i){var u;i.preventDefault(),t.onReset(we({event:i,refresh:n,state:r.getState()},o)),r.dispatch("reset",null),null===(u=e.inputElement)||void 0===u||u.focus()}},Ie(e,Oe))},getLabelProps:function(e){return we({htmlFor:"".concat(t.id,"-input"),id:"".concat(t.id,"-label")},e)},getInputProps:function(e){var i;function u(e){(t.openOnFocus||Boolean(r.getState().query))&&fe(we({event:e,props:t,query:r.getState().completion||r.getState().query,refresh:n,store:r},o)),r.dispatch("focus",null)}var a=e||{};a.inputElement;var c=a.maxLength,l=void 0===c?512:c,s=Ie(a,_e),p=A(r.getState()),f=function(e){return Boolean(e&&e.match(C))}((null===(i=t.environment.navigator)||void 0===i?void 0:i.userAgent)||""),d=null!=p&&p.itemUrl&&!f?"go":"search";return we({"aria-autocomplete":"both","aria-activedescendant":r.getState().isOpen&&null!==r.getState().activeItemId?"".concat(t.id,"-item-").concat(r.getState().activeItemId):void 0,"aria-controls":r.getState().isOpen?"".concat(t.id,"-list"):void 0,"aria-labelledby":"".concat(t.id,"-label"),value:r.getState().completion||r.getState().query,id:"".concat(t.id,"-input"),autoComplete:"off",autoCorrect:"off",autoCapitalize:"off",enterKeyHint:d,spellCheck:"false",autoFocus:t.autoFocus,placeholder:t.placeholder,maxLength:l,type:"search",onChange:function(e){fe(we({event:e,props:t,query:e.currentTarget.value.slice(0,l),refresh:n,store:r},o))},onKeyDown:function(e){!function(e){var t=e.event,n=e.props,r=e.refresh,o=e.store,i=ge(e,de);if("ArrowUp"===t.key||"ArrowDown"===t.key){var u=function(){var e=n.environment.document.getElementById("".concat(n.id,"-item-").concat(o.getState().activeItemId));e&&(e.scrollIntoViewIfNeeded?e.scrollIntoViewIfNeeded(!1):e.scrollIntoView(!1))},a=function(){var e=A(o.getState());if(null!==o.getState().activeItemId&&e){var 
n=e.item,u=e.itemInputValue,a=e.itemUrl,c=e.source;c.onActive(ve({event:t,item:n,itemInputValue:u,itemUrl:a,refresh:r,source:c,state:o.getState()},i))}};t.preventDefault(),!1===o.getState().isOpen&&(n.openOnFocus||Boolean(o.getState().query))?fe(ve({event:t,props:n,query:o.getState().query,refresh:r,store:o},i)).then((function(){o.dispatch(t.key,{nextActiveItemId:n.defaultActiveItemId}),a(),setTimeout(u,0)})):(o.dispatch(t.key,{}),a(),u())}else if("Escape"===t.key)t.preventDefault(),o.dispatch(t.key,null),o.pendingRequests.cancelAll();else if("Tab"===t.key)o.dispatch("blur",null),o.pendingRequests.cancelAll();else if("Enter"===t.key){if(null===o.getState().activeItemId||o.getState().collections.every((function(e){return 0===e.items.length})))return void(n.debug||o.pendingRequests.cancelAll());t.preventDefault();var c=A(o.getState()),l=c.item,s=c.itemInputValue,p=c.itemUrl,f=c.source;if(t.metaKey||t.ctrlKey)void 0!==p&&(f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),n.navigator.navigateNewTab({itemUrl:p,item:l,state:o.getState()}));else if(t.shiftKey)void 0!==p&&(f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),n.navigator.navigateNewWindow({itemUrl:p,item:l,state:o.getState()}));else if(t.altKey);else{if(void 0!==p)return f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),void n.navigator.navigate({itemUrl:p,item:l,state:o.getState()});fe(ve({event:t,nextState:{isOpen:!1},props:n,query:s,refresh:r,store:o},i)).then((function(){f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i))}))}}}(we({event:e,props:t,refresh:n,store:r},o))},onFocus:u,onBlur:y,onClick:function(n){e.inputElement!==t.environment.document.activeElement||r.getState().isOpen||u(n)}},s)},getPanelProps:function(e){return we({onMouseDown:function(e){e.preventDefault()},onMouseLeave:function(){r.dispatch("mouseleave",null)}},e)},getListProps:function(e){return we({role:"listbox","aria-labelledby":"".concat(t.id,"-label"),id:"".concat(t.id,"-list")},e)},getItemProps:function(e){var i=e.item,u=e.source,a=Ie(e,Pe);return we({id:"".concat(t.id,"-item-").concat(i.__autocomplete_id),role:"option","aria-selected":r.getState().activeItemId===i.__autocomplete_id,onMouseMove:function(e){if(i.__autocomplete_id!==r.getState().activeItemId){r.dispatch("mousemove",i.__autocomplete_id);var t=A(r.getState());if(null!==r.getState().activeItemId&&t){var u=t.item,a=t.itemInputValue,c=t.itemUrl,l=t.source;l.onActive(we({event:e,item:u,itemInputValue:a,itemUrl:c,refresh:n,source:l,state:r.getState()},o))}}},onMouseDown:function(e){e.preventDefault()},onClick:function(e){var a=u.getItemInputValue({item:i,state:r.getState()}),c=u.getItemUrl({item:i,state:r.getState()});(c?Promise.resolve():fe(we({event:e,nextState:{isOpen:!1},props:t,query:a,refresh:n,store:r},o))).then((function(){u.onSelect(we({event:e,item:i,itemInputValue:a,itemUrl:c,refresh:n,source:u,state:r.getState()},o))}))}},a)}}}function Ae(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function Ce(e){for(var t=1;t0},reshape:function(e){return e.sources}},e),{},{id:null!==(n=e.id)&&void 
0!==n?n:v(),plugins:o,initialState:H({activeItemId:null,query:"",completion:null,collections:[],isOpen:!1,status:"idle",context:{}},e.initialState),onStateChange:function(t){var n;null===(n=e.onStateChange)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onStateChange)||void 0===n?void 0:n.call(e,t)}))},onSubmit:function(t){var n;null===(n=e.onSubmit)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onSubmit)||void 0===n?void 0:n.call(e,t)}))},onReset:function(t){var n;null===(n=e.onReset)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onReset)||void 0===n?void 0:n.call(e,t)}))},getSources:function(n){return Promise.all([].concat(F(o.map((function(e){return e.getSources}))),[e.getSources]).filter(Boolean).map((function(e){return E(e,n)}))).then((function(e){return d(e)})).then((function(e){return e.map((function(e){return H(H({},e),{},{onSelect:function(n){e.onSelect(n),t.forEach((function(e){var t;return null===(t=e.onSelect)||void 0===t?void 0:t.call(e,n)}))},onActive:function(n){e.onActive(n),t.forEach((function(e){var t;return null===(t=e.onActive)||void 0===t?void 0:t.call(e,n)}))}})}))}))},navigator:H({navigate:function(e){var t=e.itemUrl;r.location.assign(t)},navigateNewTab:function(e){var t=e.itemUrl,n=r.open(t,"_blank","noopener");null==n||n.focus()},navigateNewWindow:function(e){var t=e.itemUrl;r.open(t,"_blank","noopener")}},e.navigator)})}(e,t),r=R(Te,n,(function(e){var t=e.prevState,r=e.state;n.onStateChange(Be({prevState:t,state:r,refresh:u},o))})),o=function(e){var t=e.store;return{setActiveItemId:function(e){t.dispatch("setActiveItemId",e)},setQuery:function(e){t.dispatch("setQuery",e)},setCollections:function(e){var n=0,r=e.map((function(e){return L(L({},e),{},{items:d(e.items).map((function(e){return L(L({},e),{},{__autocomplete_id:n++})}))})}));t.dispatch("setCollections",r)},setIsOpen:function(e){t.dispatch("setIsOpen",e)},setStatus:function(e){t.dispatch("setStatus",e)},setContext:function(e){t.dispatch("setContext",e)}}}({store:r}),i=Ee(Be({props:n,refresh:u,store:r},o));function u(){return fe(Be({event:new Event("input"),nextState:{isOpen:r.getState().isOpen},props:n,query:r.getState().query,refresh:u,store:r},o))}return n.plugins.forEach((function(e){var n;return null===(n=e.subscribe)||void 0===n?void 0:n.call(e,Be(Be({},o),{},{refresh:u,onSelect:function(e){t.push({onSelect:e})},onActive:function(e){t.push({onActive:e})}}))})),function(e){var t,n,r=e.metadata,o=e.environment;if(null===(t=o.navigator)||void 0===t||null===(n=t.userAgent)||void 0===n?void 0:n.includes("Algolia Crawler")){var i=o.document.createElement("meta"),u=o.document.querySelector("head");i.name="algolia:metadata",setTimeout((function(){i.content=JSON.stringify(r),u.appendChild(i)}),0)}}({metadata:ke({plugins:n.plugins,options:e}),environment:n.environment}),Be(Be({refresh:u},i),o)}var Ue=function(e,t,n,r){var o;t[0]=0;for(var i=1;i=5&&((o||!e&&5===r)&&(u.push(r,0,o,n),r=6),e&&(u.push(r,e,0,n),r=6)),o=""},c=0;c"===t?(r=1,o=""):o=t+o[0]:i?t===i?i="":o+=t:'"'===t||"'"===t?i=t:">"===t?(a(),r=1):r&&("="===t?(r=5,n=o,o=""):"/"===t&&(r<5||">"===e[c][l+1])?(a(),3===r&&(u=u[0]),r=u,(u=u[0]).push(2,0,r),r=0):" "===t||"\t"===t||"\n"===t||"\r"===t?(a(),r=2):o+=t),3===r&&"!--"===o&&(r=4,u=u[0])}return a(),u}(e)),t),arguments,[])).length>1?t:t[0]}var We=function(e){var t=e.environment,n=t.document.createElementNS("http://www.w3.org/2000/svg","svg");n.setAttribute("class","aa-ClearIcon"),n.setAttribute("viewBox","0 0 24 
24"),n.setAttribute("width","18"),n.setAttribute("height","18"),n.setAttribute("fill","currentColor");var r=t.document.createElementNS("http://www.w3.org/2000/svg","path");return r.setAttribute("d","M5.293 6.707l5.293 5.293-5.293 5.293c-0.391 0.391-0.391 1.024 0 1.414s1.024 0.391 1.414 0l5.293-5.293 5.293 5.293c0.391 0.391 1.024 0.391 1.414 0s0.391-1.024 0-1.414l-5.293-5.293 5.293-5.293c0.391-0.391 0.391-1.024 0-1.414s-1.024-0.391-1.414 0l-5.293 5.293-5.293-5.293c-0.391-0.391-1.024-0.391-1.414 0s-0.391 1.024 0 1.414z"),n.appendChild(r),n};function Qe(e,t){if("string"==typeof t){var n=e.document.querySelector(t);return"The element ".concat(JSON.stringify(t)," is not in the document."),n}return t}function $e(){for(var e=arguments.length,t=new Array(e),n=0;n2&&(u.children=arguments.length>3?lt.call(arguments,2):n),"function"==typeof e&&null!=e.defaultProps)for(i in e.defaultProps)void 0===u[i]&&(u[i]=e.defaultProps[i]);return _t(e,u,r,o,null)}function _t(e,t,n,r,o){var i={type:e,props:t,key:n,ref:r,__k:null,__:null,__b:0,__e:null,__d:void 0,__c:null,__h:null,constructor:void 0,__v:null==o?++pt:o};return null==o&&null!=st.vnode&&st.vnode(i),i}function Pt(e){return e.children}function jt(e,t){this.props=e,this.context=t}function wt(e,t){if(null==t)return e.__?wt(e.__,e.__.__k.indexOf(e)+1):null;for(var n;t0?_t(d.type,d.props,d.key,null,d.__v):d)){if(d.__=n,d.__b=n.__b+1,null===(f=g[s])||f&&d.key==f.key&&d.type===f.type)g[s]=void 0;else for(p=0;p0&&void 0!==arguments[0]?arguments[0]:[];return{get:function(){return e},add:function(t){var n=e[e.length-1];(null==n?void 0:n.isHighlighted)===t.isHighlighted?e[e.length-1]={value:n.value+t.value,isHighlighted:n.isHighlighted}:e.push(t)}}}(n?[{value:n,isHighlighted:!1}]:[]);return t.forEach((function(e){var t=e.split(Ht);r.add({value:t[0],isHighlighted:!0}),""!==t[1]&&r.add({value:t[1],isHighlighted:!1})})),r.get()}function Wt(e){return function(e){if(Array.isArray(e))return Qt(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||function(e,t){if(!e)return;if("string"==typeof e)return Qt(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);"Object"===n&&e.constructor&&(n=e.constructor.name);if("Map"===n||"Set"===n)return Array.from(e);if("Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n))return Qt(e,t)}(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function Qt(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n",""":'"',"'":"'"},Gt=new RegExp(/\w/i),Kt=/&(amp|quot|lt|gt|#39);/g,Jt=RegExp(Kt.source);function Yt(e,t){var n,r,o,i=e[t],u=(null===(n=e[t+1])||void 0===n?void 0:n.isHighlighted)||!0,a=(null===(r=e[t-1])||void 0===r?void 0:r.isHighlighted)||!0;return Gt.test((o=i.value)&&Jt.test(o)?o.replace(Kt,(function(e){return zt[e]})):o)||a!==u?i.isHighlighted:a}function Xt(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function Zt(e){for(var t=1;te.length)&&(t=e.length);for(var n=0,r=new Array(t);n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function mn(e){return function(e){if(Array.isArray(e))return 
vn(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||function(e,t){if(!e)return;if("string"==typeof e)return vn(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);"Object"===n&&e.constructor&&(n=e.constructor.name);if("Map"===n||"Set"===n)return Array.from(e);if("Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n))return vn(e,t)}(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function vn(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n0;if(!O.value.core.openOnFocus&&!t.query)return n;var r=Boolean(h.current||O.value.renderer.renderNoResults);return!n&&r||n},__autocomplete_metadata:{userAgents:Sn,options:e}}))})),j=p(n({collections:[],completion:null,context:{},isOpen:!1,query:"",activeItemId:null,status:"idle"},O.value.core.initialState)),w={getEnvironmentProps:O.value.renderer.getEnvironmentProps,getFormProps:O.value.renderer.getFormProps,getInputProps:O.value.renderer.getInputProps,getItemProps:O.value.renderer.getItemProps,getLabelProps:O.value.renderer.getLabelProps,getListProps:O.value.renderer.getListProps,getPanelProps:O.value.renderer.getPanelProps,getRootProps:O.value.renderer.getRootProps},S={setActiveItemId:P.value.setActiveItemId,setQuery:P.value.setQuery,setCollections:P.value.setCollections,setIsOpen:P.value.setIsOpen,setStatus:P.value.setStatus,setContext:P.value.setContext,refresh:P.value.refresh},I=d((function(){return Ve.bind(O.value.renderer.renderer.createElement)})),E=d((function(){return ct({autocomplete:P.value,autocompleteScopeApi:S,classNames:O.value.renderer.classNames,environment:O.value.core.environment,isDetached:_.value,placeholder:O.value.core.placeholder,propGetters:w,setIsModalOpen:k,state:j.current,translations:O.value.renderer.translations})}));function A(){tt(E.value.panel,{style:_.value?{}:wn({panelPlacement:O.value.renderer.panelPlacement,container:E.value.root,form:E.value.form,environment:O.value.core.environment})})}function C(e){j.current=e;var t={autocomplete:P.value,autocompleteScopeApi:S,classNames:O.value.renderer.classNames,components:O.value.renderer.components,container:O.value.renderer.container,html:I.value,dom:E.value,panelContainer:_.value?E.value.detachedContainer:O.value.renderer.panelContainer,propGetters:w,state:j.current,renderer:O.value.renderer.renderer},r=!g(e)&&!h.current&&O.value.renderer.renderNoResults||O.value.renderer.render;!function(e){var t=e.autocomplete,r=e.autocompleteScopeApi,o=e.dom,i=e.propGetters,u=e.state;nt(o.root,i.getRootProps(n({state:u,props:t.getRootProps({})},r))),nt(o.input,i.getInputProps(n({state:u,props:t.getInputProps({inputElement:o.input}),inputElement:o.input},r))),tt(o.label,{hidden:"stalled"===u.status}),tt(o.loadingIndicator,{hidden:"stalled"!==u.status}),tt(o.clearButton,{hidden:!u.query})}(t),function(e,t){var r=t.autocomplete,o=t.autocompleteScopeApi,u=t.classNames,a=t.html,c=t.dom,l=t.panelContainer,s=t.propGetters,p=t.state,f=t.components,d=t.renderer;if(p.isOpen){l.contains(c.panel)||"loading"===p.status||l.appendChild(c.panel),c.panel.classList.toggle("aa-Panel--stalled","stalled"===p.status);var m=p.collections.filter((function(e){var t=e.source,n=e.items;return t.templates.noResults||n.length>0})).map((function(e,t){var c=e.source,l=e.items;return 
d.createElement("section",{key:t,className:u.source,"data-autocomplete-source-id":c.sourceId},c.templates.header&&d.createElement("div",{className:u.sourceHeader},c.templates.header({components:f,createElement:d.createElement,Fragment:d.Fragment,items:l,source:c,state:p,html:a})),c.templates.noResults&&0===l.length?d.createElement("div",{className:u.sourceNoResults},c.templates.noResults({components:f,createElement:d.createElement,Fragment:d.Fragment,source:c,state:p,html:a})):d.createElement("ul",i({className:u.list},s.getListProps(n({state:p,props:r.getListProps({})},o))),l.map((function(e){var t=r.getItemProps({item:e,source:c});return d.createElement("li",i({key:t.id,className:u.item},s.getItemProps(n({state:p,props:t},o))),c.templates.item({components:f,createElement:d.createElement,Fragment:d.Fragment,item:e,state:p,html:a}))}))),c.templates.footer&&d.createElement("div",{className:u.sourceFooter},c.templates.footer({components:f,createElement:d.createElement,Fragment:d.Fragment,items:l,source:c,state:p,html:a})))})),v=d.createElement(d.Fragment,null,d.createElement("div",{className:u.panelLayout},m),d.createElement("div",{className:"aa-GradientBottom"})),h=m.reduce((function(e,t){return e[t.props["data-autocomplete-source-id"]]=t,e}),{});e(n(n({children:v,state:p,sections:m,elements:h},d),{},{components:f,html:a},o),c.panel)}else l.contains(c.panel)&&l.removeChild(c.panel)}(r,t)}function D(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};c();var t=O.value.renderer,n=t.components,r=u(t,In);y.current=Ge(r,O.value.core,{components:Ke(n,(function(e){return!e.value.hasOwnProperty("__autocomplete_componentName")})),initialState:j.current},e),m(),l(),P.value.refresh().then((function(){C(j.current)}))}function k(e){requestAnimationFrame((function(){var t=O.value.core.environment.document.body.contains(E.value.detachedOverlay);e!==t&&(e?(O.value.core.environment.document.body.appendChild(E.value.detachedOverlay),O.value.core.environment.document.body.classList.add("aa-Detached"),E.value.input.focus()):(O.value.core.environment.document.body.removeChild(E.value.detachedOverlay),O.value.core.environment.document.body.classList.remove("aa-Detached"),P.value.setQuery(""),P.value.refresh()))}))}return a((function(){var e=P.value.getEnvironmentProps({formElement:E.value.form,panelElement:E.value.panel,inputElement:E.value.input});return tt(O.value.core.environment,e),function(){tt(O.value.core.environment,Object.keys(e).reduce((function(e,t){return n(n({},e),{},o({},t,void 0))}),{}))}})),a((function(){var e=_.value?O.value.core.environment.document.body:O.value.renderer.panelContainer,t=_.value?E.value.detachedOverlay:E.value.panel;return _.value&&j.current.isOpen&&k(!0),C(j.current),function(){e.contains(t)&&e.removeChild(t)}})),a((function(){var e=O.value.renderer.container;return e.appendChild(E.value.root),function(){e.removeChild(E.value.root)}})),a((function(){var e=f((function(e){C(e.state)}),0);return b.current=function(t){var n=t.state,r=t.prevState;(_.value&&r.isOpen!==n.isOpen&&k(n.isOpen),_.value||!n.isOpen||r.isOpen||A(),n.query!==r.query)&&O.value.core.environment.document.querySelectorAll(".aa-Panel--scrollable").forEach((function(e){0!==e.scrollTop&&(e.scrollTop=0)}));e({state:n})},function(){b.current=void 0}})),a((function(){var e=f((function(){var e=_.value;_.value=O.value.core.environment.matchMedia(O.value.renderer.detachedMediaQuery).matches,e!==_.value?D({}):requestAnimationFrame(A)}),20);return 
O.value.core.environment.addEventListener("resize",e),function(){O.value.core.environment.removeEventListener("resize",e)}})),a((function(){if(!_.value)return function(){};function e(e){E.value.detachedContainer.classList.toggle("aa-DetachedContainer--modal",e)}function t(t){e(t.matches)}var n=O.value.core.environment.matchMedia(getComputedStyle(O.value.core.environment.document.documentElement).getPropertyValue("--aa-detached-modal-media-query"));e(n.matches);var r=Boolean(n.addEventListener);return r?n.addEventListener("change",t):n.addListener(t),function(){r?n.removeEventListener("change",t):n.removeListener(t)}})),a((function(){return requestAnimationFrame(A),function(){}})),n(n({},S),{},{update:D,destroy:function(){c()}})},e.getAlgoliaFacets=function(e){var t=En({transformResponse:function(e){return e.facetHits}}),r=e.queries.map((function(e){return n(n({},e),{},{type:"facet"})}));return t(n(n({},e),{},{queries:r}))},e.getAlgoliaResults=An,Object.defineProperty(e,"__esModule",{value:!0})})); + diff --git a/python-book/site_libs/quarto-search/fuse.min.js b/python-book/site_libs/quarto-search/fuse.min.js new file mode 100644 index 00000000..adc28356 --- /dev/null +++ b/python-book/site_libs/quarto-search/fuse.min.js @@ -0,0 +1,9 @@ +/** + * Fuse.js v6.6.2 - Lightweight fuzzy-search (http://fusejs.io) + * + * Copyright (c) 2022 Kiro Risk (http://kiro.me) + * All Rights Reserved. Apache Software License 2.0 + * + * http://www.apache.org/licenses/LICENSE-2.0 + */ +var e,t;e=this,t=function(){"use strict";function e(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function t(t){for(var n=1;ne.length)&&(t=e.length);for(var n=0,r=new Array(t);n0&&void 0!==arguments[0]?arguments[0]:1,t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:3,n=new Map,r=Math.pow(10,t);return{get:function(t){var i=t.match(C).length;if(n.has(i))return n.get(i);var o=1/Math.pow(i,.5*e),c=parseFloat(Math.round(o*r)/r);return n.set(i,c),c},clear:function(){n.clear()}}}var $=function(){function e(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},n=t.getFn,i=void 0===n?I.getFn:n,o=t.fieldNormWeight,c=void 0===o?I.fieldNormWeight:o;r(this,e),this.norm=E(c,3),this.getFn=i,this.isCreated=!1,this.setIndexRecords()}return o(e,[{key:"setSources",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[];this.docs=e}},{key:"setIndexRecords",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[];this.records=e}},{key:"setKeys",value:function(){var e=this,t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[];this.keys=t,this._keysMap={},t.forEach((function(t,n){e._keysMap[t.id]=n}))}},{key:"create",value:function(){var e=this;!this.isCreated&&this.docs.length&&(this.isCreated=!0,g(this.docs[0])?this.docs.forEach((function(t,n){e._addString(t,n)})):this.docs.forEach((function(t,n){e._addObject(t,n)})),this.norm.clear())}},{key:"add",value:function(e){var t=this.size();g(e)?this._addString(e,t):this._addObject(e,t)}},{key:"removeAt",value:function(e){this.records.splice(e,1);for(var t=e,n=this.size();t2&&void 0!==arguments[2]?arguments[2]:{},r=n.getFn,i=void 0===r?I.getFn:r,o=n.fieldNormWeight,c=void 0===o?I.fieldNormWeight:o,a=new $({getFn:i,fieldNormWeight:c});return a.setKeys(e.map(_)),a.setSources(t),a.create(),a}function R(e){var t=arguments.length>1&&void 
0!==arguments[1]?arguments[1]:{},n=t.errors,r=void 0===n?0:n,i=t.currentLocation,o=void 0===i?0:i,c=t.expectedLocation,a=void 0===c?0:c,s=t.distance,u=void 0===s?I.distance:s,h=t.ignoreLocation,l=void 0===h?I.ignoreLocation:h,f=r/e.length;if(l)return f;var d=Math.abs(a-o);return u?f+d/u:d?1:f}function N(){for(var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[],t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:I.minMatchCharLength,n=[],r=-1,i=-1,o=0,c=e.length;o=t&&n.push([r,i]),r=-1)}return e[o-1]&&o-r>=t&&n.push([r,o-1]),n}var P=32;function W(e){for(var t={},n=0,r=e.length;n1&&void 0!==arguments[1]?arguments[1]:{},o=i.location,c=void 0===o?I.location:o,a=i.threshold,s=void 0===a?I.threshold:a,u=i.distance,h=void 0===u?I.distance:u,l=i.includeMatches,f=void 0===l?I.includeMatches:l,d=i.findAllMatches,v=void 0===d?I.findAllMatches:d,g=i.minMatchCharLength,y=void 0===g?I.minMatchCharLength:g,p=i.isCaseSensitive,m=void 0===p?I.isCaseSensitive:p,k=i.ignoreLocation,M=void 0===k?I.ignoreLocation:k;if(r(this,e),this.options={location:c,threshold:s,distance:h,includeMatches:f,findAllMatches:v,minMatchCharLength:y,isCaseSensitive:m,ignoreLocation:M},this.pattern=m?t:t.toLowerCase(),this.chunks=[],this.pattern.length){var b=function(e,t){n.chunks.push({pattern:e,alphabet:W(e),startIndex:t})},x=this.pattern.length;if(x>P){for(var w=0,L=x%P,S=x-L;w3&&void 0!==arguments[3]?arguments[3]:{},i=r.location,o=void 0===i?I.location:i,c=r.distance,a=void 0===c?I.distance:c,s=r.threshold,u=void 0===s?I.threshold:s,h=r.findAllMatches,l=void 0===h?I.findAllMatches:h,f=r.minMatchCharLength,d=void 0===f?I.minMatchCharLength:f,v=r.includeMatches,g=void 0===v?I.includeMatches:v,y=r.ignoreLocation,p=void 0===y?I.ignoreLocation:y;if(t.length>P)throw new Error(w(P));for(var m,k=t.length,M=e.length,b=Math.max(0,Math.min(o,M)),x=u,L=b,S=d>1||g,_=S?Array(M):[];(m=e.indexOf(t,L))>-1;){var O=R(t,{currentLocation:m,expectedLocation:b,distance:a,ignoreLocation:p});if(x=Math.min(O,x),L=m+k,S)for(var j=0;j=z;q-=1){var B=q-1,J=n[e.charAt(B)];if(S&&(_[B]=+!!J),K[q]=(K[q+1]<<1|1)&J,F&&(K[q]|=(A[q+1]|A[q])<<1|1|A[q+1]),K[q]&$&&(C=R(t,{errors:F,currentLocation:B,expectedLocation:b,distance:a,ignoreLocation:p}))<=x){if(x=C,(L=B)<=b)break;z=Math.max(1,2*b-L)}}if(R(t,{errors:F+1,currentLocation:b,expectedLocation:b,distance:a,ignoreLocation:p})>x)break;A=K}var U={isMatch:L>=0,score:Math.max(.001,C)};if(S){var V=N(_,d);V.length?g&&(U.indices=V):U.isMatch=!1}return U}(e,n,i,{location:c+o,distance:a,threshold:s,findAllMatches:u,minMatchCharLength:h,includeMatches:r,ignoreLocation:l}),p=y.isMatch,m=y.score,k=y.indices;p&&(g=!0),v+=m,p&&k&&(d=[].concat(f(d),f(k)))}));var y={isMatch:g,score:g?v/this.chunks.length:1};return g&&r&&(y.indices=d),y}}]),e}(),z=function(){function e(t){r(this,e),this.pattern=t}return o(e,[{key:"search",value:function(){}}],[{key:"isMultiMatch",value:function(e){return D(e,this.multiRegex)}},{key:"isSingleMatch",value:function(e){return D(e,this.singleRegex)}}]),e}();function D(e,t){var n=e.match(t);return n?n[1]:null}var K=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){var t=e===this.pattern;return{isMatch:t,score:t?0:1,indices:[0,this.pattern.length-1]}}}],[{key:"type",get:function(){return"exact"}},{key:"multiRegex",get:function(){return/^="(.*)"$/}},{key:"singleRegex",get:function(){return/^=(.*)$/}}]),n}(z),q=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return 
diff --git a/python-book/site_libs/quarto-search/quarto-search.js b/python-book/site_libs/quarto-search/quarto-search.js new file mode 100644 index 00000000..f5d852d1 --- /dev/null +++ b/python-book/site_libs/quarto-search/quarto-search.js @@ -0,0 +1,1140 @@
diff --git a/python-book/standard_scores.html b/python-book/standard_scores.html new file mode 100644 index 00000000..0cfb0f51 --- /dev/null +++ b/python-book/standard_scores.html @@ -0,0 +1,1928 @@
+Resampling statistics - 16  Ranks, Quantiles and Standard Scores
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

16  Ranks, Quantiles and Standard Scores

+
+ + + +
+ + + + +
+ + +
+ +

Imagine we have a set of measures, in some particular units. We may want some way to see quickly how these measures compare to one another, and how they may compare to other measures, in different units.

+

Ranks are one way of having an implicit comparison between values.1 Is the value large in terms of the other values (with high rank) — or is it small (low rank)?

+

We can convert ranks to quantile positions. Quantile positions are values from 0 through 1 that are closer to 1 for high rank values, and closer to 0 for low rank values. Each value in the data has a rank, and a corresponding quantile position. We can also look at the value corresponding to each quantile position, and these are the quantiles. You will see what we mean later in the chapter.

+

Ranks and quantile positions give an idea of whether the measure is high or low compared to the other values, but they do not immediately tell us whether the measure is exceptional or unusual. To do that, we may want to ask whether the measure falls outside the typical range of values — that is, how the measure compares to the distribution of values. One common way of doing this is to re-express the measures (values) as standard scores, where the standard score for a particular value tells you how far the value is from the center of the distribution, in terms of the typical spread of the distribution. (We will say more about what we mean by “typical” later.) Standard scores are particularly useful because they allow us to compare different types of measures on a standard scale. They translate the units of measurement into standard and comparable units.

+
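To preview the idea in code, here is a minimal sketch with made-up values (not the data we will use below); it uses the mean as the center and NumPy's np.std function for the typical spread, a calculation the chapter builds up step by step later.

import numpy as np

# A sketch of the standard score idea, using made-up values.
values = np.array([3.2, 4.5, 1.0, 7.3, 5.5])
# How far is each value from the center (here, the mean)?
deviations = values - np.mean(values)
# Divide by a measure of typical spread (here, the standard deviation).
standard_scores = deviations / np.std(values)
print(standard_scores)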
+

16.1 Household income and congressional districts

+

Democratic congresswoman Marcy Kaptur has represented the 9th district of Ohio since 1983. Ohio’s 9th district is relatively working class, and the Democratic party has, traditionally, represented people with lower income. However, Kaptur has pointed out that this pattern appears to be changing; more of the high-income congressional districts now lean Democrat, and the Republican party is now more likely to represent lower-income districts. The French economist Thomas Piketty has described this phenomenon across several Western countries. Voters for left parties are now more likely to be highly educated and wealthy. He terms this shift “Brahmin Left Vs Merchant Right” (Piketty 2018). The data below come from a table Kaptur prepared that shows this pattern in the 2023 US congress. The table lists the top 20 districts by the median income of the households in that district, along with their representatives and their party.2

+
+
+
Table 16.1: 20 most wealthy 2023 Congressional districts by household income

| Ascending_Rank | District | Median Income | Representative | Party |
| 422 | MD-3 | 114804 | J. Sarbanes | Democrat |
| 423 | MA-5 | 115618 | K. Clark | Democrat |
| 424 | NY-12 | 116070 | J. Nadler | Democrat |
| 425 | VA-8 | 116332 | D. Beyer | Democrat |
| 426 | MD-5 | 117049 | S. Hoyer | Democrat |
| 427 | NJ-11 | 117198 | M. Sherrill | Democrat |
| 428 | NY-3 | 119185 | G. Santos | Republican |
| 429 | CA-14 | 119209 | E. Swalwell | Democrat |
| 430 | NJ-7 | 119567 | T. Kean | Republican |
| 431 | NY-1 | 120031 | N. LaLota | Republican |
| 432 | WA-1 | 120671 | S. DelBene | Democrat |
| 433 | MD-8 | 120948 | J. Raskin | Democrat |
| 434 | NY-4 | 121979 | A. D’Esposito | Republican |
| 435 | CA-11 | 124456 | N. Pelosi | Democrat |
| 436 | CA-15 | 125855 | K. Mullin | Democrat |
| 437 | CA-10 | 135150 | M. DeSaulnier | Democrat |
| 438 | VA-11 | 139003 | G. Connolly | Democrat |
| 439 | VA-10 | 140815 | J. Wexton | Democrat |
| 440 | CA-16 | 150720 | A. Eshoo | Democrat |
| 441 | CA-17 | 157049 | R. Khanna | Democrat |
+
+ + +
+
+

You may notice right away that many of the 20 richest districts have Democratic Party representatives.

+

In fact, if we look at all 441 congressional districts in Kaptur’s table, we find a large difference in the average median household income for Democrat and Republican districts; the Democrat districts are, on average, about 14% richer (Table 16.2).

+
+
+
Table 16.2: Means for median household income by party

| Party | Mean of median household income |
| Democrat | $76,933 |
| Republican | $67,474 |
+
+ + +
+
+

Next we are going to tip our hand, and show how we got these data. In previous chapters, we had cells like the one below, in which we entered the values we wanted to analyze. These values come from the example we introduced in Section 12.16:

+
+
# Liquor prices for US states with private market.
+priv = np.array([
+    4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54, 4.75,
+    4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75, 5.20, 5.10,
+    4.80, 4.29])
+
+

Now we have 441 values to enter, and it is time to introduce Python’s standard tools for loading data.

+
+

16.1.1 Comma-separated-values (CSV) format

+

The data we will load is in a file on disk called data/congress_2023.csv. These are data from Kaptur’s table in a comma-separated-values (CSV) format file. We refer to this file by its filename: the directory (data/) followed by the name of the file (congress_2023.csv), giving data/congress_2023.csv.

+

The CSV format is a very simple text format for storing table data. Usually, the first line of the CSV file contains the column names of the table, and the rest of the lines contain the row values. As the name suggests, commas (,) separate the column names in the first line, and the row values in the following lines. If you opened the data/congress_2023.csv file in some editor, such as Notepad on Windows or TextEdit on Mac, you would find that the first few lines looked like this:

+
+
Ascending_Rank,District,Median_Income,Representative,Party
+1,PR-At Large,22237,J. González-Colón,Republican
+2,AS-At Large,28352,A. Coleman,Republican
+3,MP-At Large,31362,G. Sablan,Democrat
+4,KY-5,37910,H. Rogers,Republican
+5,MS-2,37933,B. G. Thompson,Democrat
+
+
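Although we will use Pandas to load this file in a moment, it can be useful to see that a CSV file really is just plain text. Here is a minimal sketch using Python's built-in file reading; it assumes you are working in a directory that contains the data folder.

# Peek at the raw text of the CSV file (a sketch, not the method we use below).
with open('data/congress_2023.csv') as csv_file:
    for line in csv_file.readlines()[:3]:
        print(line.rstrip())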
+
+

16.1.2 Introducing the Pandas library

+

Here we start using the Pandas library to load table data into Python.

+

Thus far we have used the Numpy library to work with data in arrays. Pandas is a library for working with tables of data. As always with Python, when we want to use a library like Pandas, we have to import it first.

+

We have used the term library here, but Python uses the term module to refer to libraries of code and data that you import.

+

When using Numpy, we write:

+
+
# Import the Numpy library (module), name it "np".
+import numpy as np
+
+

Now we will use the Pandas library (module).

+

We can import Pandas like this:

+
+
# Import the Pandas library (module)
+import pandas
+
+

As Numpy has a standard abbreviation np that almost everyone writing Python code will recognize and use, so Pandas has the standard abbreviation pd:

+
+
# Import the Pandas library (module), name it "pd".
+import pandas as pd
+
+

Pandas is the standard data science library for Python. It is particularly good at loading data files, and presenting them to us as a useful table-like structure, called a data frame.

+

We start by using Pandas to load our data file:

+
+
district_income = pd.read_csv('data/congress_2023.csv')
+
+

We have thus far done many operations that returned Numpy arrays. pd.read_csv returns a Pandas data frame:

+
+
type(district_income)
+
+
<class 'pandas.core.frame.DataFrame'>
+
+
+

A data frame is Pandas’ own way of representing a table, with columns and rows. You can think of it as Python’s version of a spreadsheet. As strings or Numpy arrays have methods (functions attached to the array), so Pandas data frames have methods. These methods do things with the data frame to which they are attached. For example, the head method of the data frame shows (by default) the first five rows in the table:

+
+
# Show the first five rows in the data frame
+district_income.head()
+
+
   Ascending_Rank     District  Median_Income     Representative       Party
+0               1  PR-At Large          22237  J. González-Colón  Republican
+1               2  AS-At Large          28352         A. Coleman  Republican
+2               3  MP-At Large          31362          G. Sablan    Democrat
+3               4         KY-5          37910          H. Rogers  Republican
+4               5         MS-2          37933     B. G. Thompson    Democrat
+
+
+

The data are in income order, from lowest to highest, so the first five districts are those with the lowest household income.

+
+
+
+ +
+
+Sorting +
+
+
+
+

If the data were not already in income order, we could have sorted them with Numpy’s np.sort function.

+
+
+
+
+

We are particularly interested in the column named Median_Income.

+

You may remember the idea of indexing, introduced in Section 7.6. Indexing occurs when we fetch data from within a container, such as a string or an array. We do this by putting square brackets [] after the value we want to index into, and put something inside the brackets to say what we want.

+

For example, to get the first element of the priv array above, we use indexing:

+
+
# Fetch the first element of the priv array with indexing.
+# This is the element at position 0.
+priv[0]
+
+
4.82
+
+
+

As you can index into strings and Numpy arrays, by using square brackets, so you can index into Pandas data frames. Instead of putting the position between the square brackets, we can put the column name. This fetches the data from that column, returning a new type of value called a Pandas Series.

+
+
# Index into Pandas data frame to get one column of data.
+# Notice we use a string between the square brackets, giving the column name.
+income_col = district_income['Median_Income']
+# The value that comes back is of type Series.  A Series represents the
+# data from a single column.
+type(income_col)
+
+
<class 'pandas.core.series.Series'>
+
+
+

We want to go straight to our familiar Numpy arrays, so we convert the column of data into a Numpy array, using the np.array function you have already seen:

+ +
+
# Convert column data into a Numpy array.
+incomes = np.array(income_col)
+# Show the first five values, by indexing with a slice.
+incomes[:5]
+
+
array([22237, 28352, 31362, 37910, 37933])
+
+
+


+
+

16.1.3 Incomes and Ranks

+

We now have the incomes values as an array.

+

There are 441 values in the whole array, one for each congressional district:

+
+
len(incomes)
+
+
441
+
+
+

While we are at it, let us also get the values from the “Ascending_Rank” column, with the same procedure. These are ranks from low to high, meaning 1 is the lowest median income, and 441 is the highest median income.

+
+
lo_to_hi_ranks = np.array(district_income['Ascending_Rank'])
+# Show the first five values, by indexing with a slice.
+lo_to_hi_ranks[:5]
+
+
array([1, 2, 3, 4, 5])
+
+
+

In our case, the DataFrame has the Ascending_Rank column with the ranks we need, but if we need the ranks and we don’t have them, we can calculate them using the rankdata function from the Scipy stats package.

+
+
+

16.1.4 Introducing Scipy

+

Earlier in this chapter we introduced the Pandas module. We used Pandas to load the CSV data into Python.

+

Now we introduce another fundamental Python library for working with data called Scipy. The name Scipy comes from the compression of SCIentific PYthon, and the library is nearly as broad as the name suggests — it is a huge collection of functions and data that implement a wide range of scientific algorithms. Scipy is an umbrella package, in that it contains sub-packages, each covering a particular field of scientific computing. One of those sub-packages is called stats, and, yes, it covers statistics.

+

We can get the Scipy stats sub-package with:

+
+
import scipy.stats
+
+

but, as for Numpy and Pandas, we often import the package with an abbreviation, such as:

+
+
# Import the scipy.stats package with the name "sps".
+import scipy.stats as sps
+
+

One of the many functions in scipy.stats is the rankdata function.

+
+
+

16.1.5 Calculating ranks

+

As you might expect, sps.rankdata accepts an array as an input argument. Let’s say that there are n = len(data) values in the array that we pass to sps.rankdata. The function returns an array of length \(n\), where the elements are the ranks of the corresponding elements in the input data array. A rank value of 1 corresponds to the lowest value in data (closest to negative infinity), and a rank of \(n\) corresponds to the highest value (closest to positive infinity).

+

Here’s an example data array to show how sps.rankdata works.

+
+
# The data.
+data = np.array([3, -1, 5, -2])
+# Corresponding ranks for the data.
+sps.rankdata(data)
+
+
array([3., 2., 4., 1.])
+
+
+

We can use sps.rankdata to recalculate the ranks for the congressional median household income values.

+
+
# Recalculate the ranks.
+recalculated_ranks = sps.rankdata(incomes)
+# Show the first 5 ranks.
+recalculated_ranks[:5]
+
+
array([1., 2., 3., 4., 5.])
+
+
+
+
+
+

16.2 Comparing two values in the district income data

+

Let us say that we have taken an interest in two particular members of Congress: the Speaker of the House of Representatives, Republican Kevin McCarthy, and the progressive activist and Democrat Alexandria Ocasio-Cortez. We will refer to both using their initials: KM for Kevin Owen McCarthy and AOC for Alexandria Ocasio-Cortez.

+

By scrolling through the CSV file, or (in our case) using some simple Pandas code that we won’t cover now, we find the rows corresponding to McCarthy (KM) and Ocasio-Cortez (AOC) — Table 16.3.

+
+
+
Table 16.3: Rows for Kevin McCarthy and Alexandria Ocasio-Cortez

| Ascending_Rank | District | Median Income | Representative | Party |
| 81 | NY-14 | 56129 | A. Ocasio-Cortez | Democrat |
| 295 | CA-20 | 77205 | K. McCarthy | Republican |
+
+ + +
+
+

The rows show the rank of each congressional district in terms of median household income. The districts are ordered by this rank, so we can get their respective indices (positions) in the incomes array from their rank. Remember, Python’s indices start at 0, whereas the ranks start at 1, so we need to subtract 1 from the rank to get the index.

+
+
# Rank of McCarthy's district in terms of median household income.
+km_rank = 295
+# Index (position) of McCarthy's value in the "incomes" array.
+# Subtract one from rank, because Python starts indices at 0 rather than 1.
+km_index = km_rank - 1
+
+

Now we have the index (position) of KM’s value, we can find the household income for his district from the incomes array:

+
+
# Show the median household income from McCarthy's district
+# by indexing into the "incomes" array:
+km_income = incomes[km_index]
+km_income
+
+
77205
+
+
+

Here is the corresponding index and incomes value for AOC:

+
+
# Index (position) of AOC's value in the "incomes" array.
+aoc_rank = 81
+aoc_index = aoc_rank - 1
+# Show the median household income from AOC's district
+# by indexing into the "incomes" array:
+aoc_income = incomes[aoc_index]
+aoc_income
+
+
56129
+
+
+

Notice that we fetch the same value for median household income from incomes as you see in the corresponding rows.

+
+
+

16.3 Comparing values with ranks and quantile positions

+

We have KM’s and AOC’s district median household income values, but our next question might be — how unusual are these values?

+

Of course, it depends what we mean by unusual. We might mean, are they greater or smaller than most of the other values?

+

One way of answering that question is simply looking at the rank of the values. If the rank is lower than \(\frac{441}{2} = 220.5\), then this is a district with lower median income than most districts. If it is greater than \(220.5\), then it has higher median income than most districts. We see that KM’s district, with rank 295, is wealthier than most, whereas AOC’s district (rank 81) is poorer than most.

+
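As a quick check on this reading of the ranks, we can count how many districts have a lower median household income than each of these two districts. This is a sketch; it assumes no other district has exactly the same median income, so each count should be the district's rank minus 1.

# Count districts with lower median income than KM's and AOC's districts.
print('Districts below KM: ', np.sum(incomes < km_income))   # expect 294
print('Districts below AOC:', np.sum(incomes < aoc_income))  # expect 80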

But we can’t interpret the ranks without remembering that there are 441 values, so, for example, a rank of 81 represents a relatively low value, whereas one of 295 is relatively high.

+

We would like some scale that tells us immediately whether this is a relatively low or a relatively high value, without having to remember how many values there are.

+

This is a good use for quantile positions (QPs). The QP of a value tells you where the value ranks relative to the other values, on a scale from \(0\) through \(1\). A QP of \(0\) tells you this is the lowest-ranking value, and a QP of \(1\) tells you this is the highest-ranking value.

+

We can calculate the QP for each rank. Think of the low-to-high ranks as being a line starting at 1 (the lowest rank — for the lowest median income) and going up to 441 (the highest rank — for the highest median income).

+

The QP corresponding to any particular rank tells you how far along this line the rank is. Notice that the length of the line is the distance from the first to the last value, so 441 - 1 = 440.

+

So, if the rank was \(1\), then the value is at the start of the line. It has got \(\frac{0}{440}\) of the way along the line, and the QP is \(0\). If the rank is \(441\), the value is at the end of the line, it has got \(\frac{440}{440}\) of the way along the line and the QP is \(1\).

+

Now consider the rank of \(100\). It has got \(\frac{(100 - 1)}{440}\) of the way along the line, so the QP is \(99 / 440 = 0.225\).

+

More generally, we can translate the low-to-high ranks to QPs with:

+
+
# Length of the line defining quantile positions.
+# Start of line is rank 1 (quantile position 0).
+# End of line is rank 441 (quantile position 1).
+distance = len(lo_to_hi_ranks) - 1  # 440 in our case.
+# What proportion along the line does each value get to?
+quantile_positions = (lo_to_hi_ranks - 1) / distance
+# Show the first five.
+quantile_positions[:5]
+
+
array([0.        , 0.00227273, 0.00454545, 0.00681818, 0.00909091])
+
+
+

Let’s plot the ranks and the QPs together on the x-axis:

+
+
+
+
+

+
+
+
+
+

The QPs for KM and AOC tell us where their districts’ incomes are in the ranks, on a 0 to 1 scale:

+
+
km_quantile_position = quantile_positions[km_index]
+km_quantile_position
+
+
0.6681818181818182
+
+
+
+
aoc_quantile_position = quantile_positions[aoc_index]
+aoc_quantile_position
+
+
0.18181818181818182
+
+
+

If we multiply the QP by 100, we get the percentile positions — so the percentile position ranges from 0 through 100.

+
+
# Percentile positions are just quantile positions * 100
+print('KM percentile position:', km_quantile_position * 100)
+
+
KM percentile position: 66.81818181818183
+
+
print('AOC percentile position:', aoc_quantile_position * 100)
+
+
AOC percentile position: 18.181818181818183
+
+
+

Now consider one particular QP: \(0.5\). The \(0.5\) QP is exactly half-way along the line from rank \(1\) to rank \(441\). In our case this corresponds to rank \(\frac{441 - 1}{2} + 1 = 221\).

+
+
# For rank 221 we need index 220, because Python indices start at 0
+print('Middle rank:', lo_to_hi_ranks[220])
+
+
Middle rank: 221
+
+
print('Quantile position:', quantile_positions[220])
+
+
Quantile position: 0.5
+
+
+

The value corresponding to any particular QP is the quantile value, or just the quantile for short. For a QP of 0.5, the quantile (quantile value) is:

+
+
# Quantile value for 0.5
+print('Quantile value for QP of 0.5:', incomes[220])
+
+
Quantile value for QP of 0.5: 67407
+
+
+

In fact we can ask Python for this value (quantile) directly, using the quantile function:

+
+
np.quantile(incomes, 0.5)
+
+
67407.0
+
+
+
+
+
+ +
+
+quantile and sorting +
+
+
+

In our case, the incomes data is already sorted from lowest (at position 0 in the array) to highest (at position 440 in the array). The quantile function does not need the data to be sorted; it does its own internal sorting to do the calculation.

+

For example, we could shuffle incomes into a random order, and still get the same values from quantile.

+
+
rnd = np.random.default_rng()
+shuffled_incomes = rnd.permuted(incomes)
+# Quantile still gives the same value.
+np.quantile(shuffled_incomes, 0.5)
+
+
67407.0
+
+
+
+
+

Above we have the 0.5 quantile — the value corresponding to the QP of 0.5.

+

The 0.5 quantile is an interesting value. By the definition of QP, exactly half of the remaining values (after excluding the 0.5 quantile value) have lower rank, and are therefore less than the 0.5 quantile value. Similarly exactly half of the remaining values are greater than the 0.5 quantile. You may recognize this as the median value. This is such a common quantile value that NumPy has a function np.median as a shortcut for np.quantile(data, 0.5).

+
+
np.median(incomes)
+
+
67407.0
+
+
+

Another interesting QP is 0.25. We find the QP of 0.25 at rank:

+
+
qp25_rank = (441 - 1) * 0.25 + 1
+qp25_rank
+
+
111.0
+
+
+
+
# Therefore, index 110 (Python indices start from 0)
+print('Rank corresponding to QP 0.25:', qp25_rank)
+
+
Rank corresponding to QP 0.25: 111.0
+
+
print('0.25 quantile value:', incomes[110])
+
+
0.25 quantile value: 58961
+
+
print('0.25 quantile value using np.quantile:',
+      np.quantile(incomes, 0.25))
+
+
0.25 quantile value using np.quantile: 58961.0
+
+
+
+
+
+
+

+
+
+
+
+

Call the 0.25 quantile value \(V\). \(V\) is the number such that 25% of the remaining values are less than \(V\), and 75% are greater.

+
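We can check this description directly, at least approximately. The sketch below finds the proportion of district incomes below and above the 0.25 quantile value; it assumes no district has exactly that value.

# Check the 0.25 quantile claim (a sketch).
v = np.quantile(incomes, 0.25)
print('Proportion below:', np.mean(incomes < v))  # close to 0.25
print('Proportion above:', np.mean(incomes > v))  # close to 0.75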

Now let’s think about the 0.01 quantile. We don’t have an income value exactly corresponding to this QP, because there is no rank exactly corresponding to the 0.01 QP.

+
+
rank_for_qp001 = (441 - 1) * 0.01 + 1
+rank_for_qp001
+
+
5.4
+
+
+

Let’s have a look at the first 10 values for rank / QP and incomes:

+
+
+
+
+

+
+
+
+
+

What then, is the quantile value for QP = 0.01? There are various ways to answer that question (Hyndman and Fan 1996), but one obvious way, and the default for NumPy, is to draw a straight line up from the matching rank — or equivalently, down from the QP — then note where that line crosses the lines joining the values to the left and right of the QP on the graph above, and look across to the y-axis for the corresponding value:

+
+
+
+
+

+
+
+
+
+
+
np.quantile(incomes, 0.01)
+
+
38887.4
+
+
+

This is called the linear method — because it uses straight lines joining the points to estimate the quantile value for a QP that does not correspond to a whole-number rank.

+
+
+
+ +
+
+Calculating quantiles using the linear method +
+
+
+

We gave a graphical explanation of how to calculate the quantile for a QP that does not correspond to whole-number rank in the data. A more formal way of getting the value using the numerical equivalent of the graphical method is linear interpolation. Linear interpolation calculates the quantile value as a weighted average of the quantile values for the QPs of the whole number ranks just less than, and just greater than the QP we are interested in. For example, let us return to the QP of \(0.01\). Let us remind ourselves of the QPs, whole-number ranks and corresponding values either side of the QP \(0.01\):

Ranks, QPs and corresponding values around QP of 0.01

| Rank | Quantile position | Quantile value |
| 5 | 0.0099 | 37933 |
| 5.4 | 0.01 | V |
| 6 | 0.0113 | 40319 |
+

What value should we should give \(V\) in the table? One answer is to take the average of the two values either side of the desired QP — in this case \((37933 + 40319) / 2\). We could write this same calculation as \(37933 * 0.5 + 40319 * 0.5\) — showing that we are giving equal weight (\(0.5\)) to the two values either side.

+

But giving both values equal weight doesn’t seem quite right, because the QP we want is closer to the QP for rank 5 (and corresponding value 37933) than it is to the QP for rank 6 (and corresponding value 40319). We should give more weight to the rank 5 value than the rank 6 value. Specifically the lower value is 0.4 rank units away from the QP rank we want, and the higher is 0.6 rank units away. So we give higher weight for shorter distance, and multiply the rank 5 value by \(1 - 0.4 = 0.6\), and the rank 6 value by \(1 - 0.6 = 0.4\). Therefore the weighted average is \(37933 * 0.6 + 40319 * 0.4 = 38887.4\). This is a mathematical way to get the value we described graphically, of tracking up from the rank of 5.4 to the line drawn between the values for rank 5 and 6, and reading off the y-value at which this track crosses that line.

+
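Here is the same interpolation written out in code, as a sketch we can check against np.quantile. It assumes, as above, that incomes is sorted from lowest to highest.

# Linear interpolation for the QP of 0.01, done by hand (a sketch).
qp = 0.01
rank = (len(incomes) - 1) * qp + 1   # 5.4
low_rank = int(np.floor(rank))       # 5
frac = rank - low_rank               # 0.4
low_value = incomes[low_rank - 1]    # value at rank 5 (37933)
high_value = incomes[low_rank]       # value at rank 6 (40319)
by_hand = low_value * (1 - frac) + high_value * frac
print(by_hand)                       # 38887.4 (bar tiny floating point error)
print(np.quantile(incomes, qp))      # 38887.4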
+
+
+
+

16.4 Unusual values compared to the distribution

+

Now we return to the problem of whether KM’s and AOC’s districts are unusual in terms of their median household incomes. From what we have so far, we might conclude that AOC’s district is fairly poor, and KM’s district is relatively wealthy. But — are either of their districts unusual in their wealth or poverty?

+

To answer that question, we have to think about the distribution of values. Are either AOC’s or KM’s district outside the typical spread of values for districts?

+

The rest of this section is an attempt to answer what we could mean by outside and typical spread.

+

Let us start with a histogram of the district incomes, marking the position of the KM and AOC districts.

+
+
+
+
+

+
+
+
+
+

What could we mean by “outside” the “typical spread”? By outside, we mean somewhere away from the center of the distribution. Let us take the mean of the distribution to be its center, and add that to the plot.

+
+
mean_income = np.mean(incomes)
+
+
+
+
+
+

+
+
+
+
+
+
+

16.5 On deviations

+

Now let us ask what we could mean by typical spread. By spread we mean deviation either side of the center.

+

We can calculate how far away each income is from the mean, by subtracting the mean from all the income values. Call the result the deviations from the mean, or deviations for short.

+
+
deviations = incomes - np.mean(incomes)
+
+

The deviation values give, for each district, how far that district’s income is from the mean. Values near the mean will have small (positive or negative) values, and values further from the mean will have large (positive or negative) values. Here is a histogram of the deviation values.

+
+
+
+
+

+
+
+
+
+

Notice that the shape of the distribution has not changed — all that changed is the position of the distribution on the x-axis. In fact, the distribution of deviations centers on zero — the deviations have a mean of (as near as the computer can accurately calculate) zero:

+
+
# Show the mean of the deviations, rounded to 8 decimal places.
+np.round(np.mean(deviations), 8)
+
+
0.0
+
+
+
+
+

16.6 The mean absolute deviation

+

Now let us consider the deviation value for KM and AOC:

+
+
print('Deviation for KM:', deviations[km_index])
+
+
Deviation for KM: 5098.036281179142
+
+
print('Deviation for AOC:', deviations[aoc_index])
+
+
Deviation for AOC: -15977.963718820858
+
+
+

We have the same problem as before. Yes, we see that KM has a positive deviation, and therefore, that his district is more wealthy than average across the 441 districts. Conversely AOC’s district has a negative deviation, and is poorer than average. But we still lack a standard measure of how far away from the mean each district is, in terms of the spread of values in the histogram.

+

To get such a standard measure, we would like some idea of a typical or average deviation. Then we will compare KM’s and AOC’s deviations to the average deviation, to see if they are unusually far from the mean.

+

You have just seen above that we cannot use the literal average (mean) of the deviations for this purpose because the positive and negative deviations will exactly cancel out, and the mean deviation will always be as near as the computer can calculate to zero.

+

To stop the negatives canceling the positives, we can simply knock the minus signs off all the negative deviations.

+

This is the job of the NumPy abs function — where abs is short for absolute. The abs function will knock minus signs off negative values, like this:

+
+
np.abs([-1, 0, 1, -2])
+
+
array([1, 0, 1, 2])
+
+
+

To get an average of the deviations, regardless of whether they are positive or negative, we can take the mean of the absolute deviations, like this:

+
+
# The Mean Absolute Deviation (MAD)
+abs_deviations = np.abs(deviations)
+mad = np.mean(abs_deviations)
+# Show the result
+mad
+
+
15101.657570662428
+
+
+

This is the Mean Absolute Deviation (MAD). It is one measure of the typical spread. MAD is the average distance (regardless of positive or negative) of a value from the mean of the values.

+

We can get an idea of how typical a particular deviation is by dividing the deviation by the MAD value, like this:

+
+
print('Deviation in MAD units for KM:', deviations[km_index] / mad)
+
+
Deviation in MAD units for KM: 0.33758123949803737
+
+
print('Deviation in MAD units for AOC:', deviations[aoc_index] / mad)
+
+
Deviation in MAD units for AOC: -1.0580271499375542
+
+
+
+
+

16.7 The standard deviation

+

We are interested in the average deviation, but we find that a simple average of the deviations from the mean always gives 0 (perhaps with some tiny calculation error), because the positive and negative deviations cancel exactly.

+

The MAD calculation solves this problem by knocking the signs off the negative values before we take the mean.

+

Another very popular way of solving the same problem is to square all the deviations before taking the mean, like this:

+
+
squared_deviations = deviations ** 2
+# Show the first five values.
+squared_deviations[:5]
+
+
array([2.48701328e+09, 1.91449685e+09, 1.66015207e+09, 1.16943233e+09,
+       1.16785980e+09])
+
+
+
+
+
+ +
+
+Exponential format for showing very large and very small numbers +
+
+
+

The squared_deviations values above appear in exponential notation (E-notation). Other terms for E-notation are scientific notation, scientific form, or standard form. E-notation is a useful way to express very large (far from 0) or very small (close to 0) numbers in a more compact form.

+

E-notation represents a value as a floating point value \(m\) multiplied by 10 to the power of an exponent \(n\):

+

\[ +m * 10^n +\]

+

\(m\) is a floating point number with one digit before the decimal point — so it can be any value from 1.0 through 9.9999… \(n\) is an integer (positive or negative whole number).

+

For example, the median household income of KM’s district is 77205 (dollars). We can express that same number in E-notation as \(7.7205 * 10^4\). Python writes this as 7.7205e4, where the number before the e is \(m\) and the number after the e is the exponent value \(n\). E-notation is another way of writing the number, because \(7.7205 * 10^4 = 77205\).

+
+
7.7205e4 == 77205
+
+
True
+
+
+

It is no great advantage to use E-notation in this case; 77205 is probably easier to read and understand than 7.7205e4. The notation comes into its own where you start to lose track of the powers of 10 when you read a number — and that does happen when the number becomes very long without E-notation. For example, \(77205^2 = 5960612025\). \(5960612025\) is long enough that you start having to count the digits to see how large it is. In E-notation, that number is 5.960612025e9. If you remember that \(10^9\) is one US billion, then the E-notation tells you at a glance that the value is about \(5.9\) billion.
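As a small aside (not from the book's notebooks), Python can translate between the two forms for us; the e format code in an f-string prints a value in E-notation:

x = 77205 ** 2
# The usual fixed notation: 5960612025
print(x)
# The same value in E-notation (roughly 5.960612e+09).
print(f"{x:e}")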

+

Python and NumPy make their own decisions about whether to display numbers using E-notation. This only affects the display of the numbers; the underlying values remain the same whether they are shown in E-notation or not.

+
+
+

The process of squaring the deviations turns all the negative values into positive values.

+

We can then take the average (mean) of the squared deviations to give a measure of the typical squared deviation:

+
+
mean_squared_deviation = np.mean(squared_deviations)
+mean_squared_deviation
+
+
385971462.1165975
+
+
+

Rather confusingly, the field of statistics uses the term variance to refer to the mean squared deviation value. Just to emphasize that naming, let’s do the same calculation, but using “variance” as the variable name.

+
+
# Statistics calls the mean squared deviation - the "variance"
+variance = np.mean(squared_deviations)
+variance
+
+
385971462.1165975
+
+
+
+

It will come as no surprise to find that NumPy has a function to do the whole variance calculation — subtracting the mean, and returning the average squared deviation — np.var:

+
+
# Use np.var to calculate the mean squared deviation directly.
+np.var(incomes)
+
+
385971462.1165975
+
+
+
+

The variance is the typical (in the sense of the mean) squared deviation. The units for the variance, in our case, would be squared dollars. But we are more interested in the typical deviation, in our original units – dollars rather than squared dollars.

+

So we take the square root of the mean squared deviation (the square root of the variance), to get the standard deviation. It is the standard deviation in the sense that it is a measure of typical deviation, in the specific sense of the square root of the mean squared deviations.

+
+
# The standard deviation is the square root of the mean squared deviation.
+# (and therefore, the square root of the variance).
+standard_deviation = np.sqrt(mean_squared_deviation)
+standard_deviation
+
+
19646.156420954136
+
+
+
+

Again, NumPy has a function, np.std, to do this calculation directly:

+
+
# Use np.std to calculate the square root of the mean squared deviation
+# directly.
+np.std(incomes)
+
+
19646.156420954136
+
+
+
+
# Of course, np.std(incomes) is the same as:
+np.sqrt(np.var(incomes))
+
+
19646.156420954136
+
+
+
+

The standard deviation (the square root of the mean squared deviation) is a popular alternative to the Mean Absolute Deviation, as a measure of typical spread.

+

Figure 16.1 shows another histogram of the income values, marking the mean, the mean plus or minus one standard deviation, and the mean plus or minus two standard deviations. You can see that the mean plus or minus one standard deviation includes a fairly large proportion of the data. The mean plus or minus two standard deviations includes a much larger proportion.

+
+
+
+
+

+
Figure 16.1: Income histogram plus or minus 1 and 2 standard deviations
+
+
+
+
+
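We have not shown the exact proportions, but here is a minimal sketch of how one could calculate them, using the incomes array and the standard_deviation value from above (the variable names below are ours):

# Deviations from the mean, as before.
devs = incomes - np.mean(incomes)
# Proportion of districts within one standard deviation of the mean.
prop_within_1 = np.mean(np.abs(devs) <= standard_deviation)
# Proportion of districts within two standard deviations of the mean.
prop_within_2 = np.mean(np.abs(devs) <= 2 * standard_deviation)
print('Within 1 standard deviation:', np.round(prop_within_1, 2))
print('Within 2 standard deviations:', np.round(prop_within_2, 2))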

Now let us return to the question of how unusual our two congressional districts are in terms of the distribution. First we calculate the number of standard deviations of each district from the mean:

+
+
km_std_devs = deviations[km_index] / standard_deviation
+print('Deviation in standard deviation units for KM:',
+      np.round(km_std_devs, 2))
+
+
Deviation in standard deviation units for KM: 0.26
+
+
aoc_std_devs = deviations[aoc_index] / standard_deviation
+print('Deviation in standard deviation units for AOC:',
+      np.round(aoc_std_devs, 2))
+
+
Deviation in standard deviation units for AOC: -0.81
+
+
+

The values for each district are a re-expression of the income values in terms of the distribution. They give the distance from the mean (positive or negative) in units of standard deviation.

+
+
+

16.8 Standard scores

+

We will often find uses for the procedure we have just applied, where we take the original values (here, incomes) and:

+
    +
  • Subtract the mean to convert to deviations, then
  • +
  • Divide by the standard deviation
  • +
+

Let’s apply that procedure to all the income values.

+

First we calculate the standard deviation:

+
+
deviations = incomes - np.mean(incomes)
+income_std = np.sqrt(np.mean(deviations ** 2))
+
+

Then we calculate standard scores:

+
+
deviations_in_stds = deviations / income_std
+deviations_in_stds[:5]
+
+
array([-2.53840816, -2.22715135, -2.07394072, -1.74064397, -1.73947326])
+
+
+

This procedure converts the original data (here incomes) to deviations from the mean in terms of the standard deviation. The resulting values are called standard scores or z-scores. One name for this procedure is “z-scoring”.

+

If you plot a histogram of the standard scores, you will see they have a mean of (actually exactly) 0, and a standard deviation of (actually exactly) 1.

+
+
+
+
+

+
+
+
+
+
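We can check that claim directly. This short sketch (using the deviations_in_stds array above) prints the mean and standard deviation of the standard scores; the mean should be vanishingly close to 0, and the standard deviation very close to 1:

# Mean of the standard scores, rounded to 8 decimal places.
print(np.round(np.mean(deviations_in_stds), 8))
# Standard deviation of the standard scores, rounded to 8 decimal places.
print(np.round(np.std(deviations_in_stds), 8))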

With all this information — what should we conclude about the two districts in question? KM’s district is 0.26 standard deviations above the mean, but that’s not enough to conclude that it is unusual. We see from the histogram that a large proportion of the districts are at least this distance from the mean. We can calculate that proportion directly.

+
+
# Distances (negative or positive) from the mean.
+abs_std_devs = np.abs(deviations_in_stds)
+# Number where distance greater than KM distance.
+n_gt_km = np.sum(abs_std_devs > km_std_devs)
+prop_gt_km = n_gt_km / len(deviations_in_stds)
+print("Proportion of districts further from mean than KM:",
+      np.round(prop_gt_km, 2))
+
+
Proportion of districts further from mean than KM: 0.82
+
+
+

A full 82% of districts are further from the mean than is KM’s district. KM’s district is richer than average, but not unusual. The benefit of the standard deviation distance is that we can see this directly from the value, without doing the calculation of proportions, because the standard deviation is a measure of typical spread, and KM’s district is well within this measure.

+

AOC’s district is -0.81 standard deviations from the mean. This is a little more unusual than KM’s score.

+
+
# Number where distance greater than AOC distance.
+# Make AOC's distance positive to correspond to distance from the mean.
+n_gt_aoc = np.sum(abs_std_devs > np.abs(aoc_std_devs))
+prop_gt_aoc = n_gt_aoc / len(deviations_in_stds)
+print("Proportion of districts further from mean than AOC:",
+      np.round(prop_gt_aoc, 2))
+
+
Proportion of districts further from mean than AOC: 0.35
+
+
+

Only 35% of districts are further from the mean than AOC’s district, but this is still a reasonable proportion. We see from the standard score that AOC is within one standard deviation. AOC’s district is poorer than average, but not to a remarkable degree.

+
+
+

16.9 Standard scores to compare values on different scales

+

Why are standard scores so useful? They allow us to compare values on very different scales.

+

Consider the values in Table 16.4. Each row of the table corresponds to a team competing in the English Premier League (EPL) for the 2021-2022 season. For those of you with absolutely no interest in sports, the EPL is the league of the top 20 teams in English football, or soccer to our North American friends. The points column of the table gives the total number of points at the end of the 2021-2022 season (from 38 games). The team gets 3 points for a win, and 1 point for a draw, so the maximum possible number of points from 38 games is \(3 * 38 = 114\). The wages column gives the estimated total wage bill in thousands of British Pounds (£1000).

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 16.4: 2021 points and wage bills (£1000s) for EPL teams

team                        points   wages
Manchester City               93    168572
Liverpool                     92    148772
Chelsea                       74    187340
Tottenham Hotspur             71    110416
Arsenal                       69    118074
Manchester United             58    238780
West Ham United               56     77936
Leicester City                52     81590
Brighton and Hove Albion      51     49820
Wolverhampton Wanderers       51     62756
Newcastle United              49     73308
Crystal Palace                48     71910
Brentford                     46     28606
Aston Villa                   45     85330
Southampton                   40     58657
Everton                       39    110202
Leeds United                  38     37354
Burnley                       35     40830
Watford                       23     42030
Norwich City                  22     31750
+
+ + +
+
+

Let’s say we own Crystal Palace Football Club. Crystal Palace was a bit below average in the league in terms of points. Now we are thinking about whether we should invest in higher-paid players for the coming season, to improve our points score, and therefore, league position.

+

One thing we might like to know is whether there is an association between the wage bill and the points scored.

+

To look at that, we can do a scatter plot. This is a plot with — say — wages on the x-axis, and points on the y-axis. For each team we have a pair of values — their wage bill and their points scored. For each team, we put a marker on the scatter plot at the coordinates given by the wage value (on the x-axis) and the points value (on the y-axis).

+

Here is that plot for our EPL data in Table 16.4, with the Crystal Palace marker picked out in red.

+
+
+
+
+

+
+
+
+
+

It looks like there is a rough association of wages and points; teams that spend more in wages tend to have more points.

+

At the moment, the points and wages are in very different units. Points are on a possible scale of 0 (lose every game) to 38 * 3 = 114 (win every game). Wages are in thousands of pounds. Maybe we are not interested in the values in these units, but in how unusual the values are, in terms of wages, and in terms of points.

+

This is a good application of standard scores. Standard scores convert the original values to values on a standard scale, where 0 corresponds to an average value, 1 to a value one standard deviation above the mean, and -1 to a value one standard deviation below the mean. If we follow the standard score process for both points and wages, the values will be in the same standard units.

+

To do this calculation, we need the values from the table. We follow the same recipe as before, in loading the data with Pandas, and converting to arrays.

+
+
import numpy as np
+import pandas as pd
+
+points_wages = pd.read_csv('data/premier_league.csv')
+points = np.array(points_wages['points'])
+wages = np.array(points_wages['wages'])
+
+
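For reference, here is a minimal sketch of how one could reproduce the scatter plot shown earlier, now that the wages and points arrays are loaded (the book's own figure code is not shown, and its styling will differ):

import matplotlib.pyplot as plt

# Scatter plot of points against wages.
plt.scatter(wages, points)
plt.xlabel('Wage bill (£1000s)')
plt.ylabel('Points in season')
plt.show()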

As you recall, the standard deviation is the square root of the mean squared deviation. In code:

+
+
# The standard deviation is the square root of the
+# mean squared deviation.
+wage_deviations = wages - np.mean(wages)
+wage_std = np.sqrt(np.mean(wage_deviations ** 2))
+wage_std
+
+
55523.946071289814
+
+
+

Now we can apply the standard score procedure to wages. We divide the deviations by the standard deviation.

+
+
standard_wages = (wages - np.mean(wages)) / wage_std
+
+

We apply the same procedure to the points:

+
+
point_deviations = points - np.mean(points)
+point_std = np.sqrt(np.mean(point_deviations ** 2))
+standard_points = point_deviations / point_std
+
+

Now, when we plot the standard score version of the points against the standard score version of the wages, we see that they are in comparable units, each with a mean of 0, and a spread (a standard deviation) of 1.

+
+
+
+
+

+
+
+
+
+

Let us go back to our concerns as the owners of Crystal Palace. Counting down from the top in the table above, we see that Crystal Palace is the 12th row. Therefore, we can get the Crystal Palace wage value with:

+
+
# In Python the 12th value is at position (index) 11
+cp_index = 11
+cp_wages = wages[cp_index]
+cp_wages
+
+
71910
+
+
+

We can get our wage bill in standard units in the same way:

+
+
cp_standard_wages = standard_wages[cp_index]
+cp_standard_wages
+
+
-0.3474473873890471
+
+
+

Our wage bill is a little below average, but it is still within striking distance of the mean.

+

We know that we are comparing ourselves against the other teams, so perhaps we want to increase our wage bill by one standard deviation, to push us above the mean, and somewhat away from the center of the pack. If we add one standard deviation to our wage bill, that increases the standard score of our wages by 1.

+

But — if we increase our wages by one standard deviation — how much can we expect that to increase our points, in standard units?

+

That is a question about the strength of the association between two measures — here wages and points — and we will cover that topic in much more detail in Chapter 29. But, racing ahead — here is the answer to the question we have just posed — the amount we expect to gain in points, in standard units, if we increase our wages by one standard deviation (and therefore, 1 in standard units).

+

For reasons we won’t justify now, we calculate the \(r\) value of association between wages and points, like this:

+
+
standards_multiplied = standard_wages * standard_points
+r = np.mean(standards_multiplied)
+r
+
+
0.7080086644844557
+
+
+

The \(r\) value is the answer to our question. For every one unit increase in standard scores in wages, we expect an increase of \(r\) (0.708) standard score units in points.
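Racing ahead a little further, here is a hedged sketch (ours, not the book's) of what that means for Crystal Palace, using the cp_index, standard_points and r values defined above:

# Crystal Palace's current points total, in standard units.
cp_standard_points = standard_points[cp_index]
# Predicted standard-score points if our wage bill rose by one standard
# deviation (that is, by 1 in standard units).
predicted_standard_points = cp_standard_points + r * 1
print('Current points (standard units):', np.round(cp_standard_points, 2))
print('Predicted points (standard units):', np.round(predicted_standard_points, 2))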

+
+
+

16.10 Conclusion

+

When we look at a set of values, we often ask questions about whether individual values are unusual or surprising. One way of doing that is to look at where the values are in the sorted order — for example, using the raw rank of values, or the proportion of values below this value — the quantiles or percentiles of a value. Another measure of interest is where a value is in comparison to the spread of all values either side of the mean. We use the term “deviations” to refer to the original values after we have subtracted the mean of the values. We can measure spread either side of the mean with metrics such as the mean of the absolute deviations (MAD) and the square root of the mean squared deviations (the standard deviation). One common use of the deviations and the standard deviation is to transform values into standard scores. These are the deviations divided by the standard deviation, and they transform values to have a standard mean (zero) and spread (standard deviation of 1). This can make it easier to compare sets of values with very different ranges and means.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/standard_scores_files/figure-html/fig-mean-stds-1.png b/python-book/standard_scores_files/figure-html/fig-mean-stds-1.png new file mode 100644 index 00000000..1ae75dab Binary files /dev/null and b/python-book/standard_scores_files/figure-html/fig-mean-stds-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-104-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-104-1.png new file mode 100644 index 00000000..5810390a Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-104-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-111-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-111-1.png new file mode 100644 index 00000000..95ee4925 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-111-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-120-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-120-1.png new file mode 100644 index 00000000..0e121c96 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-120-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-41-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-41-1.png new file mode 100644 index 00000000..1fd4ecf6 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-41-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-61-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-61-1.png new file mode 100644 index 00000000..d76ece82 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-61-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-64-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-64-1.png new file mode 100644 index 00000000..e7ac08d8 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-64-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-65-3.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-65-3.png new file mode 100644 index 00000000..5eab5ac4 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-65-3.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-68-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-68-1.png new file mode 100644 index 00000000..db6e5524 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-68-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-71-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-71-1.png new file mode 100644 index 00000000..0e045692 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-71-1.png differ diff --git a/python-book/standard_scores_files/figure-html/unnamed-chunk-74-1.png b/python-book/standard_scores_files/figure-html/unnamed-chunk-74-1.png new file mode 100644 index 00000000..dd9b28a1 Binary files /dev/null and b/python-book/standard_scores_files/figure-html/unnamed-chunk-74-1.png differ diff --git a/python-book/style.css b/python-book/style.css new file mode 100644 index 00000000..aad2ca98 --- /dev/null +++ b/python-book/style.css @@ -0,0 +1,76 @@ 
+.rmdcomment { + padding: 1em 1em 1em 4em; + margin-bottom: 10px; + background: #f5f5f5; + position:relative; +} + +.rmdcomment:before { + content: "\f075"; + font-family: FontAwesome; + left:10px; + position:absolute; + top:0px; + font-size: 45px; + } + +/* Unfortunately we need !important because of the + * extreme specificity of the Bookdown CSS rule for a elements + * at this level of the class heirarchy */ +.nb-links a { + background-color:#477DCA !important; + color:#FFF !important; + border-radius:3px; + display:inline-block; + font-size:1.2em; + font-weight:700; + padding:.4em 1em; + margin-bottom: .5em; + } + +.interact-button:hover { + text-decoration:none; +} + +div.interact-context { + display: inline; + padding-left: 1em; + font-weight: 600; +} + +.notebook-link:hover { + text-decoration:none; +} + +table { + font-size: 80%; + border-bottom: 1px solid darkgray; + margin-bottom: 4rem !important; +} + +table caption { + text-align: left; + font-size: 125%; + font-weight: bold; + overflow-x: visible; + white-space: pre; + border-bottom: 1px solid darkgray; + margin-bottom: 1.5rem; +} + +.lightable-paper { + width: auto; +} + +.question::before { + content: "Question:"; + font-weight: bold; +} + +.question { + border: 1px solid black; + background: #F5E1FD; + padding-left: 1rem; + padding-right: 1rem; + padding-top: 1rem; +} diff --git a/python-book/technical_note.html b/python-book/technical_note.html new file mode 100644 index 00000000..2a115063 --- /dev/null +++ b/python-book/technical_note.html @@ -0,0 +1,661 @@ + + + + + + + + + +Resampling statistics - 34  Technical Note to the Professional Reader + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

34  Technical Note to the Professional Reader

+
+ + + +
+ + + + +
+ + +
+ +

The material presented in this book fits together with the technical literature as follows: Though I (JLS) had proceeded from first principles rather than from the literature, I have from the start cited work by Chung and Fraser (1958) and Meyer Dwass (1957). They suggested taking samples of permutations in a two-sample test as a way of extending the applicability of Fisher’s randomization test (1935; 1960, chap. III, section 21). Resampling with replacement from a single sample to determine sample statistic variability was suggested by Simon (1969). Independent work by Efron (1979) explored the properties of this technique (Efron termed it the “bootstrap”) and lent it theoretical support. The notion of using these techniques routinely and in preference to conventional techniques based on Gaussian assumptions was suggested by Simon (1969) and by Simon, Atkinson, and Shevokas (1976).

+ + + + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/testing_counts_1.html b/python-book/testing_counts_1.html new file mode 100644 index 00000000..31b74f1c --- /dev/null +++ b/python-book/testing_counts_1.html @@ -0,0 +1,2068 @@ + + + + + + + + + +Resampling statistics - 21  Hypothesis-Testing with Counted Data, Part 1 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

21  Hypothesis-Testing with Counted Data, Part 1

+
+ + + +
+ + + + +
+ + +
+ +
+

21.1 Introduction

+

The first task in inferential statistics is to make one or more point estimates — that is, to make one or more statements about how much there is of something we are interested in — including especially the mean and the dispersion. (That work goes under the label “estimation” and is discussed in Chapter 19.) Frequently the next step, after making such quantitative estimation of the universe from which a sample has been drawn, is to consider whether two or more samples are different from each other, or whether the single sample is different from a specified value; this work goes under the label “hypothesis testing.” We ask: Did something happen? Or: Is there a difference between two universes? These are yes-no questions.

+

In other cases, the next step is to inquire into the reliability of the estimates; this goes under the label “confidence intervals.” (Some writers include assessing reliability under the rubric of estimation, but I judge it better not to do so).

+

So: Having reviewed how to convert hypothesis-testing problems into statistically testable questions in Chapter 20, we now must ask: How does one employ resampling methods to make the statistical test? As is always the case when using resampling techniques, there is no unique series of steps by which to proceed. The crucial criterion in assessing the model is whether it accurately simulates the actual event. With hypothesis-testing problems, any number of models may be correct. Generally speaking, though, the model that makes fullest use of the quantitative information available from the data is the best model.

+

When attempting to deduce the characteristics of a universe from sample data, or when asking whether a sample was drawn from a particular universe, a crucial issue is whether a “one-tailed test” or a “two-tailed test” should be applied. That is, in examining the results of our resampling experiment based on the benchmark universe, do we examine both ends of the frequency distribution, or just one? If there is strong reason to believe a priori that the difference between the benchmark (null) universe and the sample will be in a given direction — for example if you hypothesize that the sample mean will be smaller than the mean of the benchmark universe — you should then employ a one-tailed test. If you do not have a strong basis for such a prediction, use the two-tailed test. As an example, when a scientist tests a new medication, his/her hypothesis would be that the number of patients who get well will be higher in the treated group than in the control group. Thus, s/he applies the one-tailed test. See the text below for more detail on one- and two-tailed tests.

+

Some language first:

+

Hypothesis: In inferential statistics, a statement or claim about a universe that can be tested and that you wish to investigate.

+

Testing: The process of investigating the validity of a hypothesis.

+

Benchmark (or null) hypothesis: A particular hypothesis chosen for convenience when testing hypotheses in inferential statistics. For example, we could test the hypothesis that there is no difference between a sample and a given universe, or between two samples, or that a parameter is less than or greater than a certain value. The benchmark universe refers to this hypothesis. (The concept of the benchmark or null hypothesis was discussed in Chapter 9 and Chapter 20.)

+

Now let us begin the actual statistical testing of various sorts of hypotheses about samples and populations.

+
+
+

21.2 Should a single sample of counted data be considered different from a benchmark universe?

+
+

21.2.0.1 Example: Does Irradiation Affect the Sex Ratio in Fruit Flies?

+

Where the Benchmark Universe Mean (in this case, the Proportion) is Known, is the Mean (Proportion) of the Population Affected by the Treatment?

+

You think you have developed a technique for irradiating the genes of fruit flies so that the sex ratio of the offspring will not be half males and half females. In the first twenty cases you treat, there are fourteen males and six females. Does this experimental result confirm that the irradiation does work?

+

First convert the scientific question — whether or not the treatment affects the sex distribution — into a probability-statistical question: Is the observed sample likely to have come from a benchmark universe in which the sex ratio is one male to one female? The benchmark (null) hypothesis, then, is that the treatment makes no difference and the sample comes from the one-male-to-one-female universe. Therefore, we investigate how likely a one-to-one universe is to produce a distribution of fourteen or more of just one sex.

+

A coin has a one-to-one (one out of two) chance of coming up tails. Therefore, we might flip a coin in groups of twenty flips, and count the number of heads in each twenty flips. Or we can use a random number table. The following steps will produce a sound estimate:

+
    +
  • Step 1. Let heads = male, tails = female.
  • +
  • Step 2. Flip twenty coins and count the number of males. If 14 or more males occur, record “yes.” Also, if 6 or fewer males occur, record “yes” because this means we have gotten 14 or more females. Otherwise, record “no.”
  • +
  • Step 3. Repeat step 2 perhaps 100 times.
  • +
  • Step 4. Calculate the proportion “yes” in the 100 trials. This proportion estimates the probability that a fruit-fly population with a propensity to produce 50 percent males will by chance produce as many as 14 or as few as 6 males in a sample of 20 flies.
  • +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 21.1: Results from 25 random trials for Fruitfly problem
Trial no    # of heads    >= 14 or <= 6
1            8            No
2            8            No
3           12            No
4            9            No
5           12            No
6           10            No
7            9            No
8           14            Yes
9           14            Yes
10          10            No
11           9            No
12           8            No
13          13            No
14           5            Yes
15           7            No
16          11            No
17          11            No
18          10            No
19          10            No
20          11            No
21           8            No
22           9            No
23          16            Yes
24           4            Yes
25          13            No
+
+ + +
+
+

Table 21.1 shows the results obtained in twenty-five trials of twenty flips each. In three of the twenty-five trials (12 percent) there were fourteen or more heads, which we call “males,” and in two of the twenty-five trials (8 percent) there were six or fewer heads, meaning there were fourteen or more tails (“females”). We can therefore estimate that, even if the treatment does not affect the sex and the births over a long period really are one to one, five out of twenty-five times (20 percent) we would get fourteen or more of one sex or the other. Therefore, finding fourteen males out of twenty births is not overwhelming evidence that the treatment has any effect, even though the result is suggestive.

+

How accurate is the estimate? Seventy-five more trials were made, and of the 100 trials eight contained fourteen or more “males” (8 percent), and 9 trials contained fourteen or more “females” (9 percent), a total of 17 percent. So the first twenty-five trials gave a fairly reliable indication. As a matter of fact, analytically-based computation (not explained here) shows that the probability of getting fourteen or more females out of twenty births is .057 and, of course, the same for fourteen or more males from a one-to-one universe, implying a total probability of .114 of getting fourteen or more males or females.

+

Now let us obtain larger and more accurate simulation samples with the computer. The key step in the Python notebook below represents male fruit flies with the string 'male' and female fruit flies with the string 'female'. The rnd.choice function is then used to generate 20 of these strings with an equal probability that either string is selected. This simulates randomly choosing 20 fruit flies on the benchmark assumption — the “null hypothesis” — that each fruit fly has an equal chance of being a male or female. Now we want to discover the chances of getting more than 13 (i.e., 14 or more) males or more than 13 females under these conditions. So we use np.sum to count the number of males in each random sample, and then store that count in the scores array, one value per sample. We repeat these steps 10,000 times.

+

After ten thousand samples have been drawn, we count (sum) how often there were more than 13 males and then count the number of times there were fewer than 7 males (because if there were fewer than 7 males there must have been more than 13 females). When we add the two results together we have the probability that the results obtained from the sample of irradiated fruit flies would be obtained from a random sample of fruit flies.

+
+

Start of fruit_fly notebook

+ + +
+
import numpy as np
+import matplotlib.pyplot as plt
+
+# set up the random number generator
+rnd = np.random.default_rng()
+
+
+
# Set the number of trials
+n_trials = 10000
+
+# set the sample size for each trial
+sample_size = 20
+
+# An empty array to store the trials
+scores = np.zeros(n_trials)
+
+# Do n_trials (10,000) trials
+for i in range(n_trials):
+
+    # Generate 20 simulated fruit flies, where each has an equal chance of being
+    # male or female
+    a = rnd.choice(['male', 'female'], size = sample_size, p = [0.5, 0.5], replace = True)
+
+    # count the number of males in the sample
+    b = np.sum(a == 'male')
+
+    # store the result of this trial
+    scores[i] = b
+
+# Produce a histogram of the trial results
+plt.title(f"Number of males in {n_trials} samples of \n{sample_size} simulated fruit flies")
+plt.hist(scores)
+plt.xlabel('Number of Males')
+plt.ylabel('Frequency')
+plt.show()
+
+
+
+

+
+
+
+
+

In the histogram above, we see that in 16 percent of the trials, the number of males was 14 or more, or 6 or fewer. Or instead of reading the results from the histogram, we can calculate the result by tacking on the following commands to the above program:

+
+
# Determine the number of trials in which we had 14 or more males.
+j = np.sum(scores >= 14)
+
+# Determine the number of trials in which we had 6 or fewer males.
+k = np.sum(scores <= 6)
+
+# Add the two results together.
+m = j + k
+
+# Convert to a proportion.
+mm = m / n_trials
+
+# Print the results.
+print(mm)
+
+
0.1191
+
+
+

End of fruit_fly notebook

+
+ +

Notice that the strength of the evidence for the effectiveness of the radiation treatment depends upon the original question: whether or not the treatment had any effect on the sex of the fruit fly, which is a two-tailed question. If there were reason to believe at the start that the treatment could increase only the number of males , then we would focus our attention on the result that in only three of the twenty-five trials were fourteen or more males. There would then be only a 3/25 = 0.12 probability of getting the observed results by chance if the treatment really has no effect, rather than the weaker odds against obtaining fourteen or more of either males or females.

+

Therefore, whether you decide to figure the odds of just fourteen or more males (what is called a “one-tail test”) or the odds for fourteen or more males plus fourteen or more females (a “two-tail test”), depends upon your advance knowledge of the subject. If you have no reason to believe that the treatment will have an effect only in the direction of creating more males and if you figure the odds for the one-tail test anyway, then you will be kidding yourself. Theory comes to bear here. If you have a strong hypothesis, deduced from a strong theory, that there will be more males, then you should figure one-tail odds, but if you have no such theory you should figure the weaker two-tail odds.1
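To make the distinction concrete, here is a minimal sketch (not part of the notebook above, though it uses the scores array and n_trials value from it) that calculates both the one-tailed and the two-tailed proportions:

# One-tailed: count only trials with 14 or more males.
one_tail_prop = np.sum(scores >= 14) / n_trials
# Two-tailed: also count trials with 6 or fewer males (14 or more females).
two_tail_prop = (np.sum(scores >= 14) + np.sum(scores <= 6)) / n_trials
print('One-tailed proportion:', one_tail_prop)
print('Two-tailed proportion:', two_tail_prop)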

+

In the case of the next problem concerning calves, we shall see that a one-tail test is appropriate because we have no interest in producing more male calves. Before leaving this example, let us review our intellectual strategy in handling the problem. First we observe a result (14 males in 20 flies) which differs from the proportion of the benchmark population (50 percent males). Because we have treated this sample with irradiation and observed a result that differs from the untreated benchmark-population’s mean, we speculate that the irradiation caused the sample to differ from the untreated population. We wish to check on whether this speculation is correct.

+

When asking whether this speculation is correct, we are implicitly asking whether future irradiation would also produce a proportion of males higher than 50 percent. That is, we are implicitly asking whether irradiated flies would produce more samples with male proportions as high as 14/20 than would occur by chance in the absence of irradiation.

+

If samples as far away as 14/20 from the benchmark population mean of 10/20 would occur frequently by chance, then we would not be impressed with that experimental evidence as proof that irradiation does affect the sex ratio. Hence we set up a model that will tell us the frequency with which samples of 14 or more males out of 20 births would be observed by chance. Carrying out the resampling procedure tells us that perhaps a tenth of the time such samples would be observed by chance. That is not extremely frequent, but it is not infrequent either. Hence we would probably conclude that the evidence is provocative enough to justify further experimentation, but not so strong that we should immediately believe in the truth of this speculation.

+

The logic of attaching meaning to the probabilistic outcome of a test of a hypothesis is discussed in Chapter 22. There also is more about the concept of the level of significance in Chapter 22.

+

Because of the great importance of this sort of case, which brings out the basic principles particularly clearly, let us consider another example:

+
+
+

21.2.1 Example: Does a treatment increase the female calf rate?

+

What is the probability that among 10 calves born, 9 or more will be female?

+

Let’s consider this question in the context of a set of queries for performing statistical inference that will be discussed further in Chapter 25.

+

The question: (From Hodges Jr and Lehmann (1970)): Female calves are more valuable than males. A bio-engineer claims to be able to cause more females to be born than the expected 50 percent rate. He conducts his procedure, and nine females are born out of the next 10 pregnancies among the treated cows. Should you believe his claim? That is, what is the probability of a result this (or more) surprising occurring by chance if his procedure has no effect? In this problem, we assume that on average 100 of 206 births are female, in contrast to the 50-50 benchmark universe in the previous problem.

+

What is the purpose of the work?: Female calves are more valuable than male calves.

+

Statistical inference?: Yes.

+

Confidence interval or Test of hypothesis?: Test of hypothesis.

+

Will you state the costs and benefits of various outcomes, or a loss function?: Yes. One need only say that the benefits are very large, and if the results are promising, it is worth gathering more data to confirm results.

+

How many samples of data are part of the hypothesis test?: One.

+

What is the size of the first sample about which you wish to make significance statements?: Ten.

+

What comparison(s) to make?: Compare the sample to the benchmark universe.

+

What is the benchmark universe that embodies the null hypothesis?: 100/206 female.

+

Which symbols for the observed entities?: Balls in bucket, or numbers.

+

What values or ranges of values?: We could write numbers 1 through 206 on pieces of paper, and take numbers 1-100 as “female” and 101-206 as “male”. Or we could use some other mechanism to give us a 100/206 chance of any one calf being female.

+

Finite or infinite universe?: Infinite.

+

Which sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)?: Ten calves.

+

What procedure to produce the sample entities?: Sampling with replacement.

+

Simple (single step) or complex (multiple “if” drawings)?: Can think of it either way.

+

What to record as the outcome of each resample trial?: The proportion (or number) of females.

+

What is the criterion to be used in the test?: The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of 100/206 females.

+

“One tail” or “two tail” test?: One tail, because the farmer is only interested in females. Finding a large proportion of males would not be of interest; it would not cause us to reject the null hypothesis.

+

The actual computation of probability may be done in several ways, as discussed earlier for four children and for ten cows. Conventional methods are discussed for comparison in Chapter 25. Here is the resampling solution in Python.

+
+

Start of female_calves notebook

+ + +
+
import numpy as np
+import matplotlib.pyplot as plt
+
+# set up the random number generator
+rnd = np.random.default_rng()
+
+# set the number of trials
+n_trials = 10000
+
+# set the size of each sample
+sample_size = 10
+
+# Probability of any one calf being female.
+p_female = 100 / 206
+
+# an array to store the results
+scores = np.zeros(n_trials)
+
+# for 10000 repeats
+for i in range(n_trials):
+
+    a = rnd.choice(['female', 'male'],
+        p=[p_female, 1 - p_female],
+        size = sample_size)
+    b = np.sum(a == 'female')
+
+    # store the result of the current trial
+    scores[i] = b
+
+# plot a histogram of the scores
+plt.title(f"Number of females in {n_trials} samples of \n{sample_size} simulated calves")
+plt.hist(scores)
+plt.xlabel('Number of Females')
+plt.ylabel('Frequency')
+plt.show()
+
+
+
+

+
+
+
+
# count the number of scores that were greater than or equal to 9
+k = np.sum(scores >= 9)
+
+# express as a proportion
+kk = k / n_trials
+
+# show the proportion
+print(f"The probability of 9 or 10 females occurring by chance is {kk}")
+
+
The probability of 9 or 10 females occurring by chance is 0.0084
+
+
+

We read from the kk variable in the “calves” program that the probability of 9 or 10 females occurring by chance is a little under one percent.

+

End of female_calves notebook

+
+ +
+
+

21.2.2 Example: A Public-Opinion Poll

+

Is the Proportion of a Population Greater Than a Given Value?

+

A municipal official wants to determine whether a majority of the town’s residents are for or against the awarding of a high-speed broadband internet contract, and he asks you to take a poll. You judge that the voter registration records are a fair representation of the universe in which the politician was interested, and you therefore decided to interview a random selection of registered voters. Of a sample of fifty people who expressed opinions, thirty said “yes” they were for the plan and twenty said “no,” they were against it. How conclusively do the results show that the people in town want this internet contract?

+

Now comes some necessary subtle thinking in the interpretation of what seems like a simple problem. Notice that our aim in the analysis is to avoid the mistake of saying that the town favors the plan when in fact it does not favor the plan. Our chance of making this mistake is greatest when the voters are evenly split, so we choose as the benchmark (null) hypothesis that 50 percent of the town does not want the plan. This statement really means that “50 percent or more do not want the plan.” We could assess the probability of obtaining our result from a population that is split (say) 52-48 against, but such a probability would necessarily be even smaller, and we are primarily interested in assessing the maximum probability of being wrong. If the maximum probability of error turns out to be inconsequential, then we need not worry about less likely errors.

+

This problem is very much like the one-group fruit fly irradiation problem above. The only difference is that now we are comparing the observed sample against an arbitrary value of 50 percent (because that is the break-point in a situation where the majority decides) whereas in Section 21.2.0.1 we compared the observed sample against the normal population proportion (also 50 percent, because that is the normal proportion of males). But it really does not matter why we are comparing the observed sample to the figure of 50 percent; the procedure is the same in both cases. (Please notice that there is nothing special about the 50 percent figure; the same procedure would be followed for 20 percent or 85 percent.)

+

In brief, we: a) take two pieces of paper, write “Yes” on one and “No” on the other, and put them in a bucket; b) draw a piece of paper from the bucket, record whether it was “Yes” or “No”, replace it, and repeat 50 times; c) count the number of “yeses” and “noes” in those fifty draws; d) repeat steps b) and c) for perhaps a hundred trials; then e) count the proportion of the trials in which a 50-50 universe would produce thirty or more “yes” answers.

+

In operational steps, the procedure is as follows:

+
    +
  • Step 1. “1-5” = no, “6-0” = yes.
  • +
  • Step 2. In 50 random numbers, count the “yeses,” and record “false positive” if 30 or more “yeses.”
  • +
  • Step 3. Repeat step 2 perhaps 100 times.
  • +
  • Step 4. Calculate the proportion of experimental trials showing “false positive.” This estimates the probability that as many as 30 “yeses” would be observed by chance in a sample of 50 people if half (or more) are really against the plan.
  • +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 21.2: Results from 20 random trials for contract poll problem
Trial no    # of "Noes"    # of "Yeses"    >= 30 "Yeses"
1           21             29              No
2           25             25              No
3           25             25              No
4           25             25              No
5           28             22              No
6           28             22              No
7           25             25              No
8           28             22              No
9           26             24              No
10          22             28              No
11          27             23              No
12          25             25              No
13          22             28              No
14          24             26              No
15          27             23              No
16          27             23              No
17          28             22              No
18          26             24              No
19          33             17              No
20          23             27              No
+
+ + +
+
+

In Table 21.2, we see the results of twenty trials; in none of them (0 of 20) were 30 or more “yeses” observed by chance. Twenty trials is a small number, though, and the larger simulation below puts the “significance level” or “prob value” at roughly 10 percent, which is normally too high to feel confident that our poll results are reliable. This is the probability that as many as thirty of fifty people would say “yes” by chance if the population were “really” split evenly. (If the population were split so that more than 50 percent were against the plan, the probability that the observed results would occur by chance would be even less. In this sense, the benchmark hypothesis is conservative.) On the other hand, if we had been counting the number of times there were 30 or more “No” votes (which, in our setup, have the same odds as 30 or more “Yes” votes), there would have been one such trial. This indicates how much samples can vary just by chance.

+

Taken together, the evidence suggests that the mayor would be wise not to place very much confidence in the poll results, but rather ought to act with caution or else take a larger sample of voters.

+
+

Start of contract_poll notebook

+ + +

This Python notebook generates samples of 50 simulated voters on the assumption that only 50 percent are in favor of the contract. Then it counts (sums) the number of samples where over 29 (30 or more) of the 50 respondents said they were in favor of the contract. (That is, we use a “one-tailed test.”) The result in the kk variable is the chance of a “false positive,” that is, 30 or more people saying they favor a contract when support for the proposal is actually split evenly down the middle.

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+# We will do 10,000 iterations.
+n = 10_000
+
+# Make an array of integers to store the "Yes" counts.
+yeses = np.zeros(n, dtype=int)
+
+for i in range(n):
+    answers = rnd.choice(['No', 'Yes'], size=50)
+    yeses[i] = np.sum(answers == 'Yes')
+
+# Produce a histogram of the trial results.
+# Use integer bins for histogram, from 10 through 40.
+plt.hist(yeses, bins=range(10, 41))
+plt.title('Number of yes votes out of 50, in null universe')
+
+
+
+

+
+
+
+
+

In the histogram above, we see that about 11 percent of our trials had 30 or more voters in favor, despite the fact that they were drawn from a population that was split 50-50. Python will calculate this proportion directly if we add the following commands to the above:

+
+
k = np.sum(yeses >= 30)
+kk = k / n
+print('Proportion >= 30:', np.round(kk, 2))
+
+
Proportion >= 30: 0.1
+
+
+

End of contract_poll notebook

+
+ +

The section above discusses testing hypotheses about a single sample of counted data relative to a benchmark universe. This section discusses the issue of whether two samples with counted data should be considered the same or different.

+
+
+

21.2.3 Example: Did the Trump-Clinton Poll Indicate that Trump Would Win?

+
+

Start of trump_clinton notebook

+ + +

What is the probability that a sample outcome such as actually observed (840 Trump, 660 Clinton) would occur by chance if Clinton is “really” ahead — that is, if Clinton has 50 percent (or more) of the support? To restate in sharper statistical language: What is the probability that the observed sample or one even more favorable to Trump would occur if the universe has a mean of 50 percent or below?

+

Here is a procedure that responds to that question:

+
    +
1. Create a benchmark universe with one ball marked “Trump” and another marked “Clinton”.
2. Draw a ball, record its marking, and replace. (We sample with replacement to simulate the practically-infinite population of U. S. voters.)
3. Repeat step 2 1500 times and count the number of “Trump”s. If 840 or greater, record “Y”; otherwise, record “N.”
4. Repeat steps 2 and 3 perhaps 1000 or 10,000 times, and count the number of “Y”s. The outcome estimates the probability that 840 or more Trump choices would occur if the universe is “really” half or more in favor of Clinton.
+

This procedure may be done as follows with Python.

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+# Number of repeats we will run.
+n = 10_000
+
+# Make an integer array to store the counts.
+trumps = np.zeros(n, dtype=int)
+
+for i in range(n):
+    votes = rnd.choice(['Trump', 'Clinton'], size=1500)
+    trumps[i] = np.sum(votes == 'Trump')
+
+# Integer bins from 675 through 825 in steps of 5.
+plt.hist(trumps, bins=range(675, 826, 5))
+plt.title('Number of Trump voters of 1500 in null-world simulation')
+
+# How often >= 840 Trump votes in random draw?
+k = np.sum(trumps >= 840)
+# As a proportion of simulated resamples.
+kk = k / n
+
+print('Proportion voting for Trump:', kk)
+
+
Proportion voting for Trump: 0.0
+
+
+
+
+

+
+
+
+
+

The value for kk is our estimate of the probability that Trump’s “victory” in the sample would occur by chance if he really were behind. In this case, our probability estimate is less than 1 in 10,000 (< 0.0001).

+

End of trump_clinton notebook

+
+ + +
+
+

21.2.4 Example: Comparison of Possible Cancer Cure to Placebo

+

Do Two Binomial Populations Differ in Their Proportions?

+

Section 21.2.0.1 used an observed sample of male and female fruit flies to test the benchmark (null) hypothesis that the flies came from a universe with a one-to-one sex ratio, and the poll data problem also compared results to a 50-50 hypothesis. The calves problem also compared the results to a single benchmark universe — a proportion of 100/206 females. Now we want to compare two samples with each other, rather than comparing one sample with a hypothesized universe. That is, in this example we are not comparing one sample to a benchmark universe, but rather asking whether both samples come from the same universe. The universe from which both samples come, if both belong to the same universe, may be thought of as the benchmark universe, in this case.

+

The scientific question is whether pill P cures a rare cancer. A researcher gave pill P to six patients selected randomly from a group of twelve cancer patients; of the six, five got well. He gave an inactive placebo to the other six patients, and two of them got well. Does the evidence justify a conclusion that the pill has a curative effect?

+

(An identical statistical example would serve for an experiment on methods of teaching reading to children. In such a situation the researcher would respond to inconclusive results by running the experiment on more subjects, but in cases like the cancer-pill example the researcher often cannot obtain more subjects.)

+

We can answer the stated question by combining the two samples and testing both samples against the resulting combined universe. In this case, the universe is twelve subjects, seven (5 + 2) of whom got well. How likely would such a universe produce two samples as far apart as five of six, and two of six, patients who get well? In other words, how often will two samples of six subjects, each drawn from a universe in which 7/12 of the patients get well, be as far apart as 5 - 2 = 3 patients in favor of the sample designated “pill”? This is obviously a one-tail test, for we have no reason to believe that the pill group might do less well than the placebo group.

+

We might construct a twelve-sided die, seven of whose sides are marked “get well.” Or put 12 pieces of paper in a bucket, seven with “get well” and five with “not well”. Or we would use pairs of numbers from the random-number table, with numbers “01-07” corresponding to get well, numbers “08-12” corresponding to “not get well,” and all other numbers omitted. (If you wish to save time, you can work out a system that uses more numbers and skips fewer, but that is up to you.) Designate the first six subjects “pill” and the next six subjects “placebo.”

+

The specific procedure might be as follows:

+
    +
  • Step 1. Write “get well” on seven pieces of paper, “not well” on another five. Put the 12 pieces of paper into a bucket.
  • +
  • Step 2. Select two groups, “pill” and “placebo”, each with six random draws (with replacement) from the 12 pieces of paper.
  • +
  • Step 3. Record how many “get well” in each group.
  • +
  • Step 4. Subtract the result in group “placebo” from that in group “pill” (the difference may be negative).
  • +
  • Step 5. Repeat steps 1-4 perhaps 100 times.
  • +
  • Step 6. Compute the proportion of trials in which the pill does better by three or more cases.
  • +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 21.3: Results from 25 random trials for pill/placebo
Trial no    # of pill cures    # of placebo cures    Difference
1           2                  3                     -1
2           4                  3                      1
3           5                  2                      3
4           3                  3                      0
5           5                  2                      3
6           4                  4                      0
7           3                  3                      0
8           3                  3                      0
9           3                  3                      0
10          4                  5                     -1
11          4                  5                     -1
12          3                  4                     -1
13          0                  3                     -3
14          5                  4                      1
15          3                  3                      0
16          5                  3                      2
17          5                  1                      4
18          3                  4                     -1
19          4                  2                      2
20          2                  4                     -2
21          2                  6                     -4
22          5                  5                      0
23          4                  5                     -1
24          3                  3                      0
25          4                  5                     -1
+
+ + +
+
+

In the trials shown in Table 21.3, in three cases (12 percent) the difference between the randomly-drawn groups is three cases or greater. Apparently it is somewhat unusual — it happens 12 percent of the time — for this universe to generate “pill” samples in which the number of recoveries exceeds the number in the “placebo” samples by three or more. Therefore the answer to the scientific question, based on these samples, is that there is some reason to think that the medicine does have a favorable effect. But the investigator might sensibly await more data before reaching a firm conclusion about the pill’s efficiency, given the 12 percent probability.

+
+

Start of pill_placebo notebook

+ + +

Now for a Python solution. Again, the benchmark hypothesis is that pill P has no effect, and we ask how often, on this assumption, the results that were obtained from the actual test of the pill would occur by chance.

+

Given that in the test 7 of 12 patients overall got well, the benchmark hypothesis assumes 7/12 to be the chances of any random patient being cured. We generate two similar samples of 6 patients, both taken from the same universe composed of the combined samples — the bootstrap procedure. We count (sum) the number who are “get well” in each sample. Then we subtract the number who got well in the “no-pill” (placebo) sample from the number who got well in the “pill” sample. We record the resulting difference for each trial in the variable pill_betters.

+

In the actual test, 3 more patients got well in the sample given the pill than in the sample given the placebo. We therefore count how many of the trials yield results where the difference between the sample given the pill and the sample not given the pill was greater than 2 (equal to or greater than 3). This result is the probability that the results derived from the actual test would be obtained from random samples drawn from a population which has a constant cure rate, pill or no pill.

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+# The bucket with the pieces of paper.
+options = np.repeat(['get well', 'not well'], [7, 5])
+
+n = 10_000
+
+pill_betters = np.zeros(n, dtype=int)
+
+for i in range(n):
+    pill = rnd.choice(options, size=6)
+    pill_cures = np.sum(pill == 'get well')
+    placebo = rnd.choice(options, size=6)
+    placebo_cures = np.sum(placebo == 'get well')
+    pill_betters[i] = pill_cures - placebo_cures
+
+plt.hist(pill_betters, bins=range(-6, 7))
+plt.title('Number of extra cures pill vs placebo in null universe')
+
+
+
+

+
+
+
+
+

Recall our actual observed results: In the medicine group, three more patients were cured than in the placebo group. From the histogram, we see that in only about 8 percent of the simulated trials did the “medicine” group do as well or better. The results seem to suggest — but by no means conclusively — that the medicine’s performance is not due to chance. Further study would probably be warranted. The following commands added to the above program will calculate this proportion directly:

+
+
# How many trials gave an advantage of 3 or greater to the pill?
+k = np.sum(pill_betters >= 3)
+# Convert to a proportion.
+kk = k / n
+# Print the result.
+print('Proportion with advantage of 3 or more for pill:',
+      np.round(kk, 2))
+
+
Proportion with advantage of 3 or more for pill: 0.07
+
+
+

End of pill_placebo notebook

+
+ +

As I (JLS) wrote when I first proposed this bootstrap method in 1969, this method is not the standard way of handling the problem; it is not even analogous to the standard analytic difference-of-proportions method (though since then it has become widely accepted). Though the method shown is quite direct and satisfactory, there are also many other resampling methods that one might construct to solve the same problem. By all means, invent your own statistics rather than simply trying to copy the methods described here; the examples given here only illustrate the process of inventing statistics rather than offering solutions for all classes of problems.

+
+
+

21.2.5 Example: Did Attitudes About Marijuana Change?

+ +

Consider two polls, each asking 1500 Americans about marijuana legalization. One poll, taken in 1980, found 52 percent of respondents in favor of decriminalization; the other, taken in 1985, found 46 percent in favor of decriminalization (Wonnacott and Wonnacott 1990, 275). Our null (benchmark) hypothesis is that both samples came from the same universe (the universe made up of the total of the two sets of observations). If so, how likely would it be for two such polls to produce results as different as those observed? Hence we construct a universe with a mean of 49 percent (the mean of the two polls of 52 percent and 46 percent), and repeatedly draw pairs of samples of size 1500 from it.
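The simulation itself is short. Here is a minimal Python sketch of that procedure (our own illustration, not code from the original text; the names n and diffs are ours), drawing repeated pairs of 1500-person polls from a universe in which 49 percent favor decriminalization and recording the difference between each pair:

import numpy as np

rnd = np.random.default_rng()

n = 10_000
diffs = np.zeros(n)

for i in range(n):
    # Two hypothetical polls of 1500 respondents, each drawn from the same
    # universe in which 49 percent favor decriminalization.
    poll_1980 = rnd.random(1500) < 0.49
    poll_1985 = rnd.random(1500) < 0.49
    diffs[i] = np.mean(poll_1980) - np.mean(poll_1985)

# How often do two such polls differ by 6 percentage points or more
# (the observed gap, 52 - 46), in either direction?
k = np.sum(np.abs(diffs) >= 0.06)
print('Proportion of poll pairs differing by 6 points or more:',
      np.round(k / n, 3))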

+

To see how the construction of the appropriate question is much more challenging intellectually than is the actual mathematics, let us consider another possibility suggested by a student: What about considering the universe to be the earlier poll with a mean of 52 percent, and then asking the probability that the later poll of 1500 people with a mean of 46 percent would come from it? Indeed, on first thought that procedure seems reasonable.

+

Upon reflection — and it takes considerable thought on these matters to get them right — that would not be an appropriate procedure. The student’s suggested procedure would be the same as assuming that we had long-run solid knowledge of the universe, as if based on millions of observations, and then asking about the probability of a particular sample drawn from it. That does not correspond to the facts.

+

The only way to find the approach you eventually consider best — and there is no guarantee that it is indeed correct — is by close reference to the particular facts of the case.

+
+
+

21.2.6 Example: Infarction and Cholesterol: Framingham Study

+

It is so important to understand the logic of hypothesis tests, and of the resampling method of doing them, that we will now tackle another problem similar to the preceding one.

+

This will be the first of several problems that use data from the famous Framingham study (drawn from Kahn and Sempos (1989)) concerning the development of myocardial infarction 16 years after the Framingham study began, for men ages 35-44 with serum cholesterol above 250, compared to those with serum cholesterol below 250. The raw data are shown in Table 21.4. The data are from (Shurtleff 1970), cited in (Kahn and Sempos 1989, 12:61, Table 3-8). Kahn and Sempos divided the cases into “high” and “low” cholesterol.

+
Table 21.4: Development of Myocardial Infarction in Men Aged 35-44 After 16 Years

Serum Cholesterol   Developed MI   Didn’t Develop MI   Total
> 250               10             125                 135
<= 250              21             449                 470
+
+

The statistical logic properly begins by asking: How likely is it that the two observed groups “really” came from the same “population” with respect to infarction rates? That is, we start with this question: How sure should one be that there is a difference in myocardial infarction rates between the high and low-cholesterol groups? Operationally, we address this issue by asking how likely it is that two groups as different in disease rates as the observed groups would be produced by the same “statistical universe.”

+

Key step: We assume that the relevant “benchmark” or “null hypothesis” population (universe) is the composite of the two observed groups. That is, if there were no “true” difference in infarction rates between the two serum-cholesterol groups, and the observed disease differences occurred just because of sampling variation, the most reasonable representation of the population from which they came is the composite of the two observed groups.

+

Therefore, we compose a hypothetical “benchmark” universe containing (135 + 470 =) 605 men at risk, and designate (10 + 21 =) 31 of them as infarction cases. We want to determine how likely it is that a universe like this one would produce — just by chance — two groups that differ as much as do the actually observed groups. That is, how often would random sampling from this universe produce one sub-sample of 135 men containing a large enough number of infarctions, and the other sub-sample of 470 men producing few enough infarctions, that the difference in occurrence rates would be as high as the observed difference of .029? (10/135 = .074, and 21/470 = .045, and .074 - .045 = .029).

+

So far, everything that has been said applies both to the conventional formulaic method and to the “new statistics” resampling method. But the logic is seldom explained to the reader of a piece of research — if indeed the researcher her/himself grasps what the formula is doing. And if one just grabs for a formula with a prayer that it is the right one, one need never analyze the statistical logic of the problem at hand.

+

Now we tackle this problem with a method that you would think of yourself if you began with the following mind-set: How can I simulate the mechanism whose operation I wish to understand? These steps will do the job:

+
    +
  • Step 1: Fill a bucket with 605 balls, 31 red (infarction) and the rest (605 - 31 = 574) green (no infarction).
  • +
  • Step 2: Draw a sample of 135 (simulating the high serum-cholesterol group), one ball at a time and throwing it back after it is drawn to keep the simulated probability of an infarction the same throughout the sample; record the number of reds. Then do the same with another sample of 470 (the low serum-cholesterol group).
  • +
  • Step 3: Calculate the difference in infarction rates for the two simulated groups, and compare it to the actual difference of .029; if the simulated difference is that large, record “Yes” for this trial; if not, record “No.”
  • +
  • Step 4: Repeat steps 2 and 3 until a total of (say) 400 or 1000 trials have been completed. Compute the frequency with which the simulated groups produce a difference as great as actually observed. This frequency is an estimate of the probability that a difference as great as actually observed in Framingham would occur even if serum cholesterol has no effect upon myocardial infarction.
  • +
+

The procedure above can be carried out with balls in a bucket in a few hours. Yet it is natural to seek the added convenience of the computer to draw the samples. Here is a Python program:

+
+

Start of framingham_hearts notebook

+ + +
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+n = 10_000
+men = np.repeat(['infarction', 'no infarction'], [31, 574])
+
+n_high = 135  # Number of men with high cholesterol
+n_low = 470  # Number of men with low cholesterol
+
+infarct_differences = np.zeros(n)
+
+for i in range(n):
+    highs = rnd.choice(men, size=n_high)
+    lows = rnd.choice(men, size=n_low)
+    high_infarcts = np.sum(highs == 'infarction')
+    low_infarcts = np.sum(lows == 'infarction')
+    high_prop = high_infarcts / n_high
+    low_prop = low_infarcts / n_low
+    infarct_differences[i] = high_prop - low_prop
+
+plt.hist(infarct_differences, bins=np.arange(-0.1, 0.1, 0.005))
+plt.title('Infarct proportion differences in null universe')
+
+# How often was the resampled difference >= the observed difference?
+k = np.sum(infarct_differences >= 0.029)
+# Convert this result to a proportion
+kk = k / n
+
+print('Proportion of trials with difference >= observed:',
+      np.round(kk, 2))
+
+
Proportion of trials with difference >= observed: 0.09
+
+
+
+
+

+
+
+
+
+

The results of the test using this program may be seen in the histogram. We find — perhaps surprisingly — that a difference as large as observed would occur by chance around 10 percent of the time. (If we were not guided by the theoretical expectation that high serum cholesterol produces heart disease, we might also count the roughly 10 percent of trials in which the difference went as far in the other direction, giving about a 20 percent chance.) Even a ten percent chance is sufficient to call into question the conclusion that high serum cholesterol is dangerous. At a minimum, this statistical result should call for more research before taking any strong action clinically or otherwise.

+

End of framingham_hearts notebook

+
+ +

Where should one look to determine which procedures should be used to deal with a problem such as set forth above? Unlike the formulaic approach, the basic source is not a manual which sets forth a menu of formulas together with sets of rules about when they are appropriate. Rather, you consult your own understanding about what is happening in (say) the Framingham situation, and the question that needs to be answered, and then you construct a “model” that is as faithful to the facts as is possible. The bucket-sampling described above is such a model for the case at hand.

+

To connect up what we have done with the conventional approach, one could apply a z test (conceptually similar to the t test, but applicable to yes-no data; it is the Normal-distribution approximation to the large binomial distribution). Doing so, we find that the results are much the same as the resampling result — an eleven percent probability.
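For readers who want to see the conventional calculation alongside the resampling one, the two-proportion z test is available in the Statsmodels package mentioned earlier. The sketch below is ours, not part of the original text, and assumes Statsmodels is installed; it simply feeds in the counts from Table 21.4.

from statsmodels.stats.proportion import proportions_ztest

# Infarction counts and group sizes from Table 21.4.
counts = [10, 21]   # high cholesterol, low cholesterol
nobs = [135, 470]
# One-sided test: is the infarction proportion larger in the
# high-cholesterol group?
z_stat, p_value = proportions_ztest(counts, nobs, alternative='larger')
print('z statistic:', round(z_stat, 2))
print('One-sided p value:', round(p_value, 3))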

+

Someone may ask: Why do a resampling test when you can use a standard device such as a z or t test? The great advantage of resampling is that it avoids using the wrong method. The researcher is more likely to arrive at sound conclusions with resampling because s/he can understand what s/he is doing, instead of blindly grabbing a formula which may be in error.

+

The textbook from which the problem is drawn is an excellent one; the difficulty of its presentation is an inescapable consequence of the formulaic approach to probability and statistics. The body of complex algebra and tables that only a rare expert understands down to the foundations constitutes an impenetrable wall to understanding. Yet without such understanding, there can be only rote practice, which leads to frustration and error.

+
+
+

21.2.7 Example: Is One Pig Ration More Effective Than the Other?

+

Testing For a Difference in Means With a Two-by-Two Classification.

+

Each of two new types of ration is fed to twelve pigs. A farmer wants to know whether ration A or ration B is better.2 The weight gains in pounds for pigs fed on rations A and B are:

+

A: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 31

+

B: 26, 24, 28, 29, 30, 29, 31, 29, 32, 26, 28, 32

+

The statistical question may be framed as follows: should one consider that the pigs fed on the different rations come from the same universe with respect to weight gains?

+

In the actual experiment, 9 of the 12 pigs who were fed ration A also were in the top half of weight gains. How likely is it that one group of 12 randomly-chosen pigs would contain 9 of the 12 top weight gainers?

+

One approach to the problem is to divide the pigs into two groups — the twelve with the highest weight gains, and the twelve with the lowest weight gains — and examine whether an unusually large number of high-weight-gain pigs were fed on one or the other of the rations.

+

We can make this test by ordering and grouping the twenty four pigs:

+

High-weight group:

+

38 (ration A), 35 (A), 34 (A), 34 (A), 32 (B), 32 (A), 32 (A), 32 (B), 31 (A),

+

31 (B), 31 (A), 31 (A)

+

Low-weight group:

+

30 (B), 29 (A), 29 (A), 29 (B), 29 (B), 29 (B), 28 (B), 28 (B), 26 (A), 26 (B),

+

26 (B), 24 (B).

+

Among the twelve high-weight-gain pigs, nine were fed on ration A. We ask: Is this further from an even split than we are likely to get by chance? Let us take twelve red and twelve black cards, shuffle them, and deal out twelve cards (the other twelve need not be dealt out). Count the proportion of the hands in which one ration comes up nine or more times in the first twelve cards, to reflect ration A’s appearance nine times among the highest twelve weight gains. More specifically:

+
    +
  • Step 1. Constitute a deck of twelve red and twelve black cards, and shuffle.
  • +
  • Step 2. Deal out twelve cards, count the number red, and record “yes” if there are nine or more of either red or black.
  • +
  • Step 3. Repeat step 2 perhaps fifty times.
  • +
  • Step 4. Compute the proportion “yes.” This proportion estimates the probability sought.
  • +
+
+
+
Table 21.5: Results from 25 random trials for pig rations

Trial no   # red   # black   >= 9 red or black
1          2       10        +
2          7       5
3          5       7
4          9       3         +
5          9       3         +
6          7       5
7          6       6
8          6       6
9          7       5
10         7       5
11         7       5
12         6       6
13         4       8
14         6       6
15         5       7
16         4       8
17         8       4
18         4       8
19         8       4
20         8       4
21         5       7
22         8       4
23         8       4
24         9       3         +
25         6       6
+
+ + +
+
+

Table 21.5 shows the results of 25 trials. In four (marked by + signs) of the 25 (that is, 16 percent of the trials) there were nine or more either red or black cards in the first twelve cards. Again the results suggest that it would be slightly unusual for the results to favor one ration or the other so strongly just by chance if they come from the same universe.

+

Now the Python procedure to answer the question:

+
+

Start of pig_rations notebook

+ + +

The ranks = np.arange(1, 25) statement creates an array of numbers 1 through 24, which will represent the rankings of weight gains for each of the 24 pigs. We repeat the following procedure for 10000 trials. First we shuffle the elements of array ranks so that the rank numbers for weight gains are randomized and placed in array shuffled. We then select the first 12 elements of shuffled and place them in first_12; this represents the rankings of a randomly-selected group of 12 pigs. We next count (sum) in n_top the number of pigs whose rankings for weight gain were in the top half — that is, a rank of less than 13. We record that number in top_ranks, and then continue the loop, until we finish our n trials.

+

Since we did not know beforehand the direction of the effect of ration A on weight gain, we want to count the times that either more than 8 of the random selection of 12 pigs were in the top half of the rankings, or that fewer than 4 of these pigs were in the top half of the weight gain rankings. (The latter is the same as counting the number of times that more than 8 of the 12 non-selected random pigs were in the top half in weight gain.)

+

We do so with the final two sum statements. By adding the two results n_gte_9 and n_lte_3 together, we have the number of times out of 10,000 that differences in weight gains in two groups as dramatic as those obtained in the actual experiment would occur by chance.

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+# Constitute the set of the weight gain rank orders. ranks is now a vector
+# consisting of the numbers 1 through 24, in that order.
+ranks = np.arange(1, 25)
+
+n = 10_000
+
+top_ranks = np.zeros(n, dtype=int)
+
+for i in range(n):
+    # Shuffle the ranks of the weight gains.
+    shuffled = rnd.permuted(ranks)
+    # Take the first 12 ranks.
+    first_12 = shuffled[:12]
+    # Determine how many of these randomly selected 12 ranks are 12 or
+    # less (i.e. 1 through 12), and put that result in n_top.
+    n_top = np.sum(first_12 <= 12)
+    # Keep track of each trial result in top_ranks
+    top_ranks[i] = n_top
+
+plt.hist(top_ranks, bins=np.arange(1, 12))
+plt.title('Number of top 12 ranks in pig-ration trials')
+
+
+
+

+
+
+
+
+

We see from the histogram that, in about 3 percent of the trials, either more than 8 or fewer than 4 top half ranks (1-12) made it into the random group of twelve that we selected. Python will calculate this for us as follows:

+
+
# Determine how many of the trials yielded 9 or more top ranks.
+n_gte_9 = np.sum(top_ranks >= 9)
+# Determine how many trials yielded 3 or fewer of the top ranks.
+# If there were 3 or fewer, then 9 or more of the top ranks must
+# have been in the other group (not selected).
+n_lte_3 = np.sum(top_ranks <= 3)
+# Add the two together.
+n_both = n_gte_9 + n_lte_3
+# Convert to a proportion.
+prop_both = n_both / n
+
+print('Trial proportion >=9 top ranks in either group:',
+      np.round(prop_both, 2))
+
+
Trial proportion >=9 top ranks in either group: 0.04
+
+
+

The decisions that are warranted on the basis of the results depend upon one’s purpose. If writing a scientific paper on the merits of ration A is the ultimate purpose, it would be sensible to test another batch of pigs to get further evidence. (Or you could proceed to employ another sort of test for a slightly more precise evaluation.) But if the goal is a decision on which type of ration to buy for a small farm and they are the same price, just go ahead and buy ration A because, even if it is no better than ration B, you have strong evidence that it is no worse.

+

End of pig_rations notebook

+
+ +
+
+

21.2.8 Example: Do Planet Densities Differ?

+

Consider the five planets known to the ancient world.

+

Mosteller and Rourke (1973, 17–19) ask us to compare the densities of the three planets farther from the sun than is the earth (Mars, density 0.71; Jupiter, 0.24; and Saturn, 0.12) against the densities of the planets closer to the sun than is the earth (Mercury, 0.68; Venus, 0.94).

+

The average density of the distant planets is .357, of the closer planets is .81. Is this difference (.453) statistically surprising, or is it likely to occur in a chance ordering of these planets?

+

We can answer this question with a permutation test; such sampling without replacement makes sense here because we are considering the entire set of planets, rather than a sample drawn from a larger population of planets (the word “population” is used here, rather than “universe,” to avoid confusion.) And because the number of objects is so small, one could examine all possible arrangements (permutations), and see how many have (say) differences in mean densities between the two groups as large as observed.

+

Another method that Mosteller and Rourke suggest is by a comparison of the density ranks of the two sets, where Saturn has rank 1 and Venus has rank 5. This might have a scientific advantage if the sample data are dominated by a single “outlier,” whose domination is removed when we rank the data.

+

We see that the sum of the ranks for the “closer” set is 3 + 5 = 8. We can then ask: If the ranks were assigned at random, how likely is it that a set of two planets would have a sum as large as 8? Again, because the sample is small, we can examine all the possible permutations, as Mosteller and Rourke do in Table 3-1 (Mosteller and Rourke 1973, 56) (Substitute “Closer” for “B,” “Further” for “A”). In two of the ten permutations, a sum of ranks as great as 8 is observed, so the probability of a result as great as observed happening by chance is 20 percent, using these data. (We could just as well consider the difference in mean ranks between the two groups: 8/2 - 7/3 = 10/6 = 1.67.)
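Because there are only ten ways to choose which two of the five ranks go to the “closer” planets, we can check the 2-in-10 figure by brute force. The following short Python fragment (our illustration, not from the original text) lists every possible assignment and counts those with a rank sum of 8 or more:

from itertools import combinations

ranks = [1, 2, 3, 4, 5]
# Every way of picking which 2 of the 5 density ranks belong to the
# "closer" planets.
all_pairs = list(combinations(ranks, 2))
n_extreme = sum(1 for pair in all_pairs if sum(pair) >= 8)
print(n_extreme, 'of', len(all_pairs),
      'arrangements have a rank sum of 8 or more:',
      n_extreme / len(all_pairs))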

+ + +

To illuminate the logic of this test, consider comparing the heights of two samples of trees. If sample A has the five tallest trees, and sample B has the five shortest trees, the difference in rank sums will be (6+7+8+9+10=) 40 minus (1+2+3+4+5=) 15, or 25, the largest possible difference. If the groups are less sharply differentiated — for example, if sample A has #3 and sample B has #8 — the difference in rank sums will be less than this maximum of 25, as you can quickly verify.

+

The method we have just used is called a Mann-Whitney test, though that label is usually applied when the data are too many to examine all the possible permutations; in that case one conventionally uses a table prepared by formula. In the case where there are too many for a complete permutation test, our resampling algorithm is as follows (though we’ll continue with the planets example):

+
    +
1. Compute the mean ranks of the two groups.
2. Calculate the difference between the means computed in step 1.
3. Create a bucket containing the ranks from 1 to the number of observations (5, in the case of the planets).
4. Shuffle the ranks.
5. Since we are working with the ranked data, we must draw without replacement, because there can only be one #3, one #7, and so on. So draw groups of the observed sizes: 2 “Closer” and 3 “Further.”
6. Compute the mean ranks of the two simulated groups of planets.
7. Calculate the difference between the means computed in step 6 and record it.
8. Repeat steps 4 through 7 perhaps 1000 times.
9. Count how often the shuffled difference in ranks exceeds the observed difference from step 2 (1.67).
+
+

Start of planet_densities notebook

+ + +
+
import numpy as np
+
+rnd = np.random.default_rng()
+
+# Steps 1 and 2.
+actual_mean_diff = 8 / 2 - 7 / 3
+
+# Step 3
+ranks = np.arange(1, 6)
+
+n = 10_000
+
+mean_differences = np.zeros(n)
+
+for i in range(n):
+    # Step 4
+    shuffled = rnd.permuted(ranks)
+    # Step 5
+    closer = shuffled[:2]  # First 2
+    further = shuffled[2:] # Last 3
+    # Step 6
+    mean_close = np.mean(closer)
+    mean_far = np.mean(further)
+    # Step 7
+    mean_differences[i] = mean_close - mean_far
+
+# Step 9
+k = np.sum(mean_differences >= actual_mean_diff)
+prob = k / n
+
+print('Proportion of trials with mean difference >= 1.67:',
+      np.round(prob, 2))
+
+
Proportion of trials with mean difference >= 1.67: 0.19
+
+
+

Interpretation: 19 percent of the time, random shufflings produced a difference in ranks as great as or greater than observed. Hence, on the strength of this evidence, we should not conclude that there is a statistically surprising difference in densities between the further planets and the closer planets.

+

End of planet_densities notebook

+
+ +
+
+
+

21.3 Conclusion

+

This chapter has begun the actual work of testing hypotheses. The next chapter continues with discussion of somewhat more complex problems with counted data — more complex to think about, but no more difficult to actually treat mathematically with resampling simulation. If you have understood the general logic of the procedures used up until this point, you are in command of all the necessary conceptual knowledge to construct your own tests to answer any statistical question. A lot more practice, working on a variety of problems, obviously would help. But the key elements are simple: 1) Model the real situation accurately, 2) experiment with the model, and 3) compare the results of the model with the observed results.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/testing_counts_1_files/figure-html/unnamed-chunk-12-1.png b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-12-1.png new file mode 100644 index 00000000..bea18e38 Binary files /dev/null and b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-12-1.png differ diff --git a/python-book/testing_counts_1_files/figure-html/unnamed-chunk-16-1.png b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-16-1.png new file mode 100644 index 00000000..5af3bca3 Binary files /dev/null and b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-16-1.png differ diff --git a/python-book/testing_counts_1_files/figure-html/unnamed-chunk-20-1.png b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-20-1.png new file mode 100644 index 00000000..281b9eed Binary files /dev/null and b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-20-1.png differ diff --git a/python-book/testing_counts_1_files/figure-html/unnamed-chunk-23-1.png b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-23-1.png new file mode 100644 index 00000000..020a649c Binary files /dev/null and b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-23-1.png differ diff --git a/python-book/testing_counts_1_files/figure-html/unnamed-chunk-27-1.png b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-27-1.png new file mode 100644 index 00000000..6176e1f3 Binary files /dev/null and b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-27-1.png differ diff --git a/python-book/testing_counts_1_files/figure-html/unnamed-chunk-4-1.png b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-4-1.png new file mode 100644 index 00000000..366a4908 Binary files /dev/null and b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-4-1.png differ diff --git a/python-book/testing_counts_1_files/figure-html/unnamed-chunk-8-1.png b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-8-1.png new file mode 100644 index 00000000..6bacfd9f Binary files /dev/null and b/python-book/testing_counts_1_files/figure-html/unnamed-chunk-8-1.png differ diff --git a/python-book/testing_counts_2.html b/python-book/testing_counts_2.html new file mode 100644 index 00000000..1c7c0669 --- /dev/null +++ b/python-book/testing_counts_2.html @@ -0,0 +1,2089 @@ + + + + + + + + + +Resampling statistics - 23  The Statistics of Hypothesis-Testing with Counted Data, Part 2 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

23  The Statistics of Hypothesis-Testing with Counted Data, Part 2

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+

Here’s the bad-news-good-news message again: The bad news is that the subject of inferential statistics is extremely difficult — not because it is complex but rather because it is subtle. The cause of the difficulty is that the world around us is difficult to understand, and spoon-fed mathematical simplifications which you manipulate mechanically simply mislead you into thinking you understand that about which you have not got a clue.

+

The good news is that you — and that means you , even if you say you are “no good at math” — can understand these problems with a layperson’s hard thinking, even if you have no mathematical background beyond arithmetic and you think that you have no mathematical capability. That’s because the difficulty lies in such matters as pin-pointing the right question, and understanding how to interpret your results.

+

The problems in the previous chapter were tough enough. But this chapter considers problems with additional complications, such as when there are more than two groups, or paired comparisons for the same units of observation.

+
+

23.1 Comparisons among more than two samples of counted data

+

Example 17-1: Do Any of Four Treatments Affect the Sex Ratio in Fruit Flies? (When the Benchmark Universe Proportion is Known, Is the Proportion of the Binomial Population Affected by Any of the Treatments?) (Program “4treat”)

+

Suppose that, instead of experimenting with just one type of radiation treatment on the flies (as in Example 15-1), you try four different treatments, which we shall label A, B, C, and D. Treatment A produces fourteen males and six females, but treatments B, C, and D produce ten, eleven, and ten males, respectively. It is immediately obvious that there is no reason to think that treatment B, C, or D affects the sex ratio. But what about treatment A?

+

A frequent and dangerous mistake made by young scientists is to scrounge around in the data for the most extreme result, and then treat it as if it were the only result. In the context of this example, it would be fallacious to think that the probability of the fourteen-males-to-six-females split observed for treatment A is the same as the probability that we figured for a single experiment in Example 15-1. Instead, we must consider that our benchmark universe is composed of four sets of twenty trials, each trial having a 50-50 probability of being male. We can consider that our previous trials 1-4 in Example 15-1 constitute a single new trial, and each subsequent set of four previous trials constitutes another new trial. We then ask how likely a new trial of our sets of twenty flips is to produce one set with fourteen or more of one or the other sex.

+

Let us make the procedure explicit, but using random numbers instead of coins this time:

+

Step 1. Let “1-5” = males, “6-0” = females

+

Step 2. Choose four groups of twenty numbers. If for any group there are 14 or more males, record “yes”; if 13 or less, record “no.”

+

Step 3. Repeat perhaps 1000 times.

+

Step 4. Calculate the proportion “yes” in the 1000 trials. This proportion estimates the probability that a fruit fly population with a proportion of 50 percent males will produce as many as 14 males in at least one of four samples of 20 flies.

+

We begin the trials with data as in Table 17-1. In two of the six simulation trials, at least one sample shows 14 or more males. Another trial shows fourteen or more females. Without even concerning ourselves about whether we should be looking at males or females, or just males, or needing to do more trials, we can see that it would be very common indeed to have one of four treatments show fourteen or more of one sex just by chance. This discovery clearly indicates that a result that would be fairly unusual (three in twenty-five) for a single sample alone is commonplace in one of four observed samples.

+

Table 17-1

+

Number of “Males” in Groups of 20 (Based on Random Numbers)

+

Trial   Group A   Group B   Group C   Group D   Yes / No (>= 14 or <= 6)
1       11        12        8         12        No
2       12        7         9         8         No
3       6         10        10        10        Yes
4       9         9         12        7         No
5       14        12        13        10        Yes
6       11        14        9         7         Yes
+

A key point of the RESAMPLING STATS program “4TREAT” is that each sample consists of four sets of 20 randomly generated hypothetical fruit flies. And if we consider 1000 trials, we will be examining 4000 sets of 20 fruit flies.

+

In each trial we GENERATE up to 4 random samples of 20 fruit flies, and for each, we count the number of males (“1”s) and then check whether that group has more than 13 of either sex (actually, more than 13 “1”s or less than 7 “1”s). If it does, then we change J to 1, which informs us that for this sample, at least 1 group of 20 fruit flies had results as unusual as the results from the fruit flies exposed to the four treatments.

+

After the 1000 runs are made, we count the number of trials where one sample had a group of fruit flies with 14 or more of either sex, and PRINT the results.

+ +
' Program file: "4treat.rss"
+
+REPEAT 1000
+    ' Do 1000 experiments.
+    COPY (0) j
+    ' j indicates whether we have obtained a trial group with 14 or more of
+    ' either sex. We start at "0" (= no).
+    REPEAT 4
+        ' Repeat the following steps 4 times to constitute 4 trial groups of 20
+        ' flies each.
+        GENERATE 20 1,2 a
+        ' Generate randomly 20 "1"s and "2"s and put them in a; let "1"
+
+        ' = male.
+        COUNT a =1 b
+        ' Count the number of males, put the result in b.
+        IF b >= 14
+            ' If the result is 14 or more males, then
+            COPY (1) j
+            ' Set the indicator to "1."
+        END
+        ' End the IF condition.
+        IF b <= 6
+            ' If the result is 6 or fewer males (the same as 14 or more females), then
+            COPY (1) j
+            ' Set the indicator to "1."
+        END
+        ' End the IF condition.
+    END
+    ' End the procedure for one group, go back and repeat until all four
+    ' groups have been done.
+    SCORE j z
+    ' j now tells us whether we got a result as extreme as that observed (j =
+    ' "1" if we did, j = "0" if not). We must keep track in z of this result
+    ' for each experiment.
+END
+' End one experiment, go back and repeat until all 1000 are complete.
+COUNT z =1 k
+' Count the number of experiments in which we had results as extreme as
+' those observed.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the result.
+
+' Note: The file "4treat" on the Resampling Stats software disk contains
+' this set of commands.
+
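Since the RESAMPLING STATS code on this page has not yet been ported, here is a rough Python equivalent of the same simulation (our sketch, not the book's original program; the names n_trials and extreme_count are ours):

import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
extreme_count = 0

for i in range(n_trials):
    # Four groups of 20 flies; each fly is male (1) or female (0) with
    # probability 0.5.
    males_per_group = rnd.integers(0, 2, size=(4, 20)).sum(axis=1)
    # Did any of the four groups have 14 or more of either sex?
    if np.any((males_per_group >= 14) | (males_per_group <= 6)):
        extreme_count += 1

print('Proportion of trials with at least one extreme group:',
      np.round(extreme_count / n_trials, 2))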

In one set of 1000 trials, there were more than 13 or less than 7 males 33 percent of the time — clearly not an unusual occurrence.

+

Example 17-2: Do Four Psychological Treatments Differ in Effectiveness? (Do Several Two-Outcome Samples Differ Among Themselves in Their Proportions?) (Program “4treat1”)

+

Consider four different psychological treatments designed to rehabilitate juvenile delinquents. Instead of a numerical test score, there is only a “yes” or a “no” answer as to whether the juvenile has been rehabilitated or has gotten into trouble again. Label the treatments P, R, S, and T, each of which is administered to a separate group of twenty juvenile delinquents. The number of rehabilitations per group has been: P, 17; R, 10; S, 10; T, 7. Is it improbable that all four groups come from the same universe?

+

This problem is like the placebo vs. cancer-cure problem, but now there are more than two samples. It is also like the four-sample irradiated-fruit flies example (Example 17-1), except that now we are not asking whether any or some of the samples differ from a given universe (50-50 sex ratio in that case). Rather, we are now asking whether there are differences among the samples themselves. Please keep in mind that we are still dealing with two-outcome (yes-or-no, well-or-sick) problems. Later we shall take up problems that are similar except that the outcomes are “quantitative.”

+

If all four groups were drawn from the same universe, that universe has an estimated rehabilitation rate of (17 + 10 + 10 + 7)/80 = 44/80 = 55/100, because the observed data taken as a whole constitute our best guess as to the nature of the universe from which they come — again, if they all come from the same universe. (Please think this matter over a bit, because it is important and subtle. It may help you to notice the absence of any other information about the universe from which they have all come, if they have come from the same universe.)

+

Therefore, select twenty two-digit numbers for each group from the random-number table, marking “yes” for each number “1-55” and “no” for each number “56-100.” Conduct a number of such trials. Then count the proportion of times that the difference between the highest and lowest groups is larger than the widest observed difference, the difference between P and T (17-7 = 10). In Table 17-2, none of the first six trials shows anywhere near as large a difference as the observed range of 10, suggesting that it would be rare for four treatments that are “really” similar to show so great a difference. There is thus reason to believe that P and T differ in their effects.

+

Table 17-2

+

Results of Six Random Trials for Problem “Delinquents”

Trial   P    R    S    T    Largest Minus Smallest
1       11   9    8    12   4
2       10   10   12   12   2
3       9    12   8    12   4
4       9    11   12   10   3
5       10   10   11   12   2
6       11   11   9    11   2
+

The strategy of the RESAMPLING STATS solution to “Delinquents” is similar to the strategy for previous problems in this chapter. The benchmark (null) hypothesis is that the treatments do not differ in their effects observed, and we estimate the probability that the observed results would occur by chance using the benchmark universe. The only new twist is that we must instruct the computer to find the groups with the highest and the lowest numbers of rehabilitations.

+

Using RESAMPLING STATS we GENERATE four “treatments,” each represented by 20 numbers, each number randomly selected between 1 and 100. We let 1-55 = success, 56-100 = failure. Follow along in the program for the rest of the procedure:

+ +
' Program file: "4treat1.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 20 1,100 a
+    ' The first treatment group, where "1-55" = success, "56-100" = failure
+    GENERATE 20 1,100 b
+    ' The second group
+    GENERATE 20 1,100 c
+    ' The third group
+    GENERATE 20 1,100 d
+    ' The fourth group
+    COUNT a <=55 aa
+    ' Count the first group's successes
+    COUNT b <=55 bb
+    ' Same for second, third & fourth groups
+    COUNT c <=55 cc
+    COUNT d <=55 dd
+    SUBTRACT aa bb ab
+    ' Now find all the pairwise differences in successes among the groups
+    SUBTRACT aa cc ac
+    SUBTRACT aa dd ad
+    SUBTRACT bb cc bc
+    SUBTRACT bb dd bd
+    SUBTRACT cc dd cd
+    CONCAT ab ac ad bc bd cd e
+    ' Concatenate, or join, all the differences in a single vector e
+    ABS e f
+    ' Since we are interested only in the magnitude of the difference, not its
+    ' direction, we take the ABSolute value of all the differences.
+    MAX f g
+    ' Find the largest of all the differences
+    SCORE g z
+    ' Keep score of the largest
+END
+' End a trial, go back and repeat until all 1000 are complete.
+COUNT z >=10 k
+' How many of the trials yielded a maximum difference greater than the
+' observed maximum difference?
+DIVIDE k 1000 kk
+' Convert to a proportion
+PRINT kk
+' Note: The file "4treat1" on the Resampling Stats software disk contains
+' this set of commands.
+
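Again, a rough Python translation of this simulation (ours, not the original program; max_diffs is our name) may be easier to follow than the legacy code:

import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
max_diffs = np.zeros(n_trials, dtype=int)

for i in range(n_trials):
    # Four groups of 20, each subject rehabilitated with probability 0.55.
    n_cured = (rnd.random((4, 20)) < 0.55).sum(axis=1)
    # Largest pairwise difference is the maximum count minus the minimum.
    max_diffs[i] = n_cured.max() - n_cured.min()

prop = np.sum(max_diffs >= 10) / n_trials
print('Proportion of trials with a largest difference of 10 or more:',
      np.round(prop, 2))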

One percent of the experiments with randomly generated treatments from a common success rate of .55 produced differences in excess of the observed maximum difference (10).

+

An alternative approach to this problem would be to deal with each result’s departure from the mean, rather than the largest difference among the pairs. Once again, we want to deal with absolute departures, since we are interested only in magnitude of difference. We could take the absolute value of the differences, as above, but we will try something different here. Squaring the differences also renders them all positive: this is a common approach in statistics.

+

The first step is to examine our data and calculate this measure: The mean is 11, the differences are 6, 1, 1, and 4, the squared differences are 36, 1, 1, and 16, and their sum is 54. Our experiment will be, as before, to constitute four groups of 20 at random from a universe with a 55 percent rehabilitation rate. We then calculate this same measure for the random groups. If it is frequently larger than 54, then we conclude that a uniform cure rate of 55 percent could easily have produced the observed results. The program that follows also GENERATES the four treatments by using a REPEAT loop, rather than spelling out the GENERATE command 4 times as above. In RESAMPLING STATS:

+ +
' Program file: "testing_counts_2_02.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    REPEAT 4
+        ' Repeat the following steps 4 times to constitute 4 groups of 20 and
+        ' count their rehabilitation rates.
+        GENERATE 20 1,100 a
+        ' Randomly generate 20 numbers between 1 and 100 and put them in a; let
+        ' 1-55 = rehabilitation, 56-100 no rehab.
+        COUNT a between 1 55 b
+        ' Count the number of rehabs, put the result in b.
+        SCORE b w
+        ' Keep track of the 4 rehab rates for the group of 20.
+    END
+    ' End the procedure for one group of 20, go back and repeat until all 4
+    ' are done.
+    MEAN w x
+    ' Calculate the mean
+    SUMSQRDEV w x y
+    ' Find the sum of squared deviations between group rehab rates (w) and the
+    ' overall rate (x).
+    SCORE y z
+    ' Keep track of the result for each trial.
+    CLEAR w
+    ' Erase the contents of w to prepare for the next trial.
+END
+' End one experiment, go back and repeat until all 1000 are complete.
+HISTOGRAM z
+' Produce a histogram of trial results.
+

(Histogram of trial results. Title: “4 Treatments”; x axis: sum of squared differences.)

+

From this histogram, we see that in only 1 percent of the cases did our trial sum of squared differences equal or exceed 54, confirming our conclusion that this is an unusual result. We can have RESAMPLING STATS calculate this proportion:

+ +
' Program file: "4treat2.rss"
+
+COUNT z >= 54 k
+' Determine how many trials produced differences as great as those
+' observed.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the results.
+
+' Note: The file "4treat2" on the Resampling Stats software disk contains
+' this set of commands.
+
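A Python version of this squared-deviation test might look like the following (our sketch, not the original program, using the same 55 percent benchmark universe; sum_sq_devs is our name):

import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
sum_sq_devs = np.zeros(n_trials)

for i in range(n_trials):
    # Four groups of 20 drawn from a universe with a 55 percent rehab rate.
    rehabs = (rnd.random((4, 20)) < 0.55).sum(axis=1)
    # Sum of squared deviations of the four counts from their mean.
    sum_sq_devs[i] = np.sum((rehabs - np.mean(rehabs)) ** 2)

prop = np.sum(sum_sq_devs >= 54) / n_trials
print('Proportion of trials with sum of squared deviations >= 54:',
      np.round(prop, 2))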

The conventional way to approach this problem would be with what is known as a “chi-square test.”

+

Example 17-3: Three-way Comparison

+

In a national election poll of 750 respondents in May, 1992, George Bush got 36 percent of the preferences (270 voters), Ross Perot got 30 percent (225 voters), and Bill Clinton got 28 percent (210 voters) (Wall Street Journal, October 29, 1992, A16). Assuming that the poll was representative of actual voting, how likely is it that Bush was actually behind and just came out ahead in this poll by chance? Or to put it differently, what was the probability that Bush actually had a plurality of support, rather than that his apparent advantage was a matter of sampling variability? We test this by constructing a universe in which Bush is slightly behind (in practice, just equal), and then drawing samples to see how likely it is that those samples will show Bush ahead.

+

We must first find that universe — among all possible universes that yield a conclusion contrary to the conclusion shown by the data, and one in which we are interested — that has the highest probability of producing the observed sample. With a two-person race the universe is obvious: a universe that is evenly split except for a single vote against “our” candidate who is now in the lead, i.e. in practice a 50-50 universe. In that simple case we then ask the probability that that universe would produce a sample as far out in the direction of the conclusion drawn from the observed sample as the observed sample.

+

With a three-person race, however, the decision is not obvious (and if this problem becomes too murky for you, skip over it; it is included here more for fun than anything else). And there is no standard method for handling this problem in conventional statistics (a solution in terms of a confidence interval was first offered in 1992, and that one is very complicated and not very satisfactory to me). But the sort of thinking that we must labor to accomplish is also required for any conventional solution; the difficulty is inherent in the problem, rather than being inherent in resampling, and resampling will be at least as simple and understandable as any formulaic approach.

+

The relevant universe is (or so I think) a universe that is 35 Bush — 35 Perot — 30 Clinton (for a race where the poll indicates a 36-30-28 split); the 35-35-30 universe is of interest because it is the universe that is closest to the observed sample that does not provide a win for Bush (leaving out the “undecideds” for convenience); it is roughly analogous to the 50-50 split in the two-person race, though a clear-cut argument would require a lot more discussion. A universe that is split 34-34-32, or any of the other possible universes, is less likely to produce a 36-30-28 sample (such as was observed) than is a 35-35-30 universe, I believe, but that is a checkable matter. (In technical terms, it might be a “maximum likelihood universe” that we are looking for.)

+

We might also try a 36-36-28 universe to see if that produces a result very different than the 35-35-30 universe.

+

Among those universes where Bush is behind (or equal), a universe that is split 50-50-0 (with just one extra vote for the closest opponent to Bush) would be the most likely to produce a 6 percent difference between the top two candidates by chance, but we are not prepared to believe that the voters are split in such a fashion. This assumption shows that we are bringing some judgments to bear from outside the observed data.

+

For now, the point is not how to discover the appropriate benchmark hypothesis, but rather its criterion — which is, I repeat, that universe (among all possible universes) that yields a conclusion contrary to the conclusion shown by the data (and in which we are interested) and that (among such universes that yield such a conclusion) has the highest probability of producing the observed sample.

+

Let’s go through the logic again: 1) Bush apparently has a 6 percent lead over the second-place candidate. 2) We ask if the second-place candidate might be ahead if all voters were polled. We test that by setting up a universe in which the second-place candidate is infinitesimally ahead (in practice, we make the two top candidates equal in our hypothetical universe). And we make the third-place candidate somewhere close to the top two candidates. 3) We then draw samples from this universe and observe how often the result is a 6 percent lead for the top candidate (who starts off just below equal in the universe).

+

From here on, the procedure is straightforward: Determine how likely that universe is to produce a sample as far (or further) away in the direction of “our” candidate winning. (One could do something like this even if the candidate of interest were not now in the lead.)

+

This problem teaches again that one must think explicitly about the choice of a benchmark hypothesis. The grounds for the choice of the benchmark hypothesis should precede the program, or should be included as an extended comment within the program.

+

This program embodies the previous line of thought.

+ +
' Program file: "testing_counts_2_04.rss"
+
+URN 35#1 35#2 30#3 univ
+' 1 = Bush, 2 = Perot, 3 = Clinton
+REPEAT 1000
+    SAMPLE 750 univ samp
+    ' Take a sample of 750 votes
+    COUNT samp =1 bush
+    ' Count the Bush voters, etc.
+    COUNT samp =2 pero
+    ' Perot voters
+    COUNT samp =3 clin
+    ' Clinton voters
+    CONCAT pero clin others
+    ' Join Perot & Clinton votes
+    MAX others second
+    ' Find the larger of the other two
+    SUBTRACT bush second d
+    ' Find Bush's margin over 2nd
+    SCORE d z
+END
+HISTOGRAM z
+COUNT z >=46 m
+' Compare to the observed margin in the sample of 750 corresponding to a 6
+' percent margin by Bush over 2nd place finisher (rounded)
+DIVIDE m 1000 mm
+PRINT mm
+
+
+

+
Figure 23.1: Samples of 750 Voters:
+
+
+

The result: the proportion of trials in which Bush’s margin over the second-place candidate was 46 votes or more (mm) is 0.018.

+

When we run this program with a 36-36-28 split, we also get a similar result — 2.6 percent. That is, the analysis shows a probability of only 2.6 percent that Bush would score a 6 percentage point “victory” in the sample, by chance, if the universe were split as specified. So Bush could feel reasonably confident that at the time the poll was taken, he was ahead of the other two candidates.
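A Python rendering of the same simulation for the 35-35-30 universe might look like this (our sketch, not the original program; univ and margins are our names):

import numpy as np

rnd = np.random.default_rng()

# Benchmark universe: 35 percent Bush, 35 percent Perot, 30 percent Clinton.
univ = np.repeat(['Bush', 'Perot', 'Clinton'], [35, 35, 30])

n_trials = 10_000
margins = np.zeros(n_trials, dtype=int)

for i in range(n_trials):
    samp = rnd.choice(univ, size=750)
    bush = np.sum(samp == 'Bush')
    # Bush's margin over whichever of the other two did better.
    second = max(np.sum(samp == 'Perot'), np.sum(samp == 'Clinton'))
    margins[i] = bush - second

# 46 votes out of 750 corresponds (rounded) to the observed 6 point margin.
k = np.sum(margins >= 46)
print('Proportion of samples with a Bush margin of 6 points or more:',
      np.round(k / n_trials, 3))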

+
+
+

23.2 Paired Comparisons With Counted Data

+

Example 17-4: The Pig Rations Again, But Comparing Pairs of Pigs (Paired-Comparison Test) (Program “Pigs2”)

+

To illustrate how several different procedures can reasonably be used to deal with a given problem, here is another way to decide whether pig ration A is “really” better: We can assume that the order of the pig scores listed within each ration group is random — perhaps the order of the stalls the pigs were kept in, or their alphabetical-name order, or any other random order not related to their weights . Match the first pig eating ration A with the first pig eating ration B, and also match the second pigs, the third pigs, and so forth. Then count the number of matched pairs on which ration A does better. On nine of twelve pairings ration A does better, that is, 31.0 > 26.0, 34.0 > 24.0, and so forth.

+

Now we can ask: If the two rations are equally good, how often will one ration exceed the other nine or more times out of twelve, just by chance? This is the same as asking how often either heads or tails will come up nine or more times in twelve tosses. (This is a “two-tailed” test because, as far as we know, either ration may be as good as or better than the other.) Once we have decided to treat the problem in this manner, it is quite similar to Example 15-1 (the first fruitfly irradiation problem). We ask how likely it is that the outcome will be as far away as the observed outcome (9 “heads” of 12) from 6 of 12 (which is what we expect to get by chance in this case if the two rations are similar).

+

So we conduct perhaps fifty trials as in Table 17-3, where an asterisk denotes nine or more heads or tails.

+

Step 1. Let odd numbers equal “A better” and even numbers equal “B better.”

+

Step 2. Examine 12 random digits and check whether 9 or more, or 3 or less, are odd. If so, record “yes,” otherwise “no.”

+

Step 3. Repeat step 2 fifty times.

+

Step 4. Compute the proportion “yes,” which estimates the probability sought.

+

The results are shown in Table 17-3.

+

In 8 of 50 simulation trials, one or the other ration had nine or more tosses in its favor. Therefore, we estimate the probability to be .16 (eight of fifty) that samples this different would be generated by chance if the samples came from the same universe.

+

Table 17-3

+

Results From Fifty Simulation Trials Of The Problem “Pigs2”

Trial   “Heads” or “Odds”   “Tails” or “Evens”   Trial   “Heads” or “Odds”   “Tails” or “Evens”
        (Ration A)          (Ration B)                   (Ration A)          (Ration B)
  1        6                   6                    26       6                   6
  2        4                   8                    27       5                   7
  3        6                   6                    28       7                   5
  4        7                   5                    29       4                   8
* 5        3                   9                    30       6                   6
  6        5                   7                  * 31       9                   3
  7        8                   4                  * 32       2                  10
  8        6                   6                    33       7                   5
  9        7                   5                    34       5                   7
*10        9                   3                    35       6                   6
 11        7                   5                    36       8                   4
*12        3                   9                    37       6                   6
 13        5                   7                    38       4                   8
 14        6                   6                    39       5                   7
 15        6                   6                    40       8                   4
 16        8                   4                    41       5                   7
 17        5                   7                    42       6                   6
*18        9                   3                    43       5                   7
 19        6                   6                    44       7                   5
 20        7                   5                    45       6                   6
 21        4                   8                    46       4                   8
*22       10                   2                    47       5                   7
 23        6                   6                    48       5                   7
 24        5                   7                    49       8                   4
*25        3                   9                    50       7                   5
+

Now for a RESAMPLING STATS program and results. “Pigs2” is different from “Pigs1” in that it compares the weight-gain results of pairs of pigs, instead of simply looking at the rankings for weight gains.

+

The key to “Pigs2” is the GENERATE statement. If we assume that ration A does not have an effect on weight gain (which is the “benchmark” or “null” hypothesis), then the results of the actual experiment would be no different than if we randomly GENERATE numbers “1” and “2” and treat a “1” as a larger weight gain for the ration A pig, and a “2” as a larger weight gain for the ration B pig. Both events have a .5 chance of occurring for each pair of pigs because if the rations had no effect on weight gain (the null hypothesis), ration A pigs would have larger weight gains about half of the time. The next step is to COUNT the number of times that the weight gains of one group (call it the group fed with ration A) were larger than the weight gains of the other (call it the group fed with ration B). The complete program follows:

+ +
' Program file: "pigs2.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 12 1,2 a
+    ' Generate randomly 12 "1"s and "2"s, put them in a. This represents 12
+    ' "pairings" where "1" = ration a "wins," "2" = ration b = "wins."
+    COUNT a =1 b
+    ' Count the number of "pairings" where ration a won, put the result in b.
+    SCORE b z
+    ' Keep track of the result in z
+END
+' End the trial, go back and repeat until all 1000 trials are complete.
+COUNT z >= 9 j
+' Determine how often we got 9 or more "wins" for ration a.
+COUNT z <= 3 k
+' Determine how often we got 3 or fewer "wins" for ration a.
+ADD j k m
+' Add the two together
+DIVIDE m 1000 mm
+' Convert to a proportion
+PRINT mm
+' Print the result.
+
+' Note: The file "pigs2" on the Resampling Stats software disk contains
+' this set of commands.
+
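Here is a rough Python counterpart to “Pigs2” (our sketch, not the original program; a_wins is our name). Under the null hypothesis, ration A wins each of the 12 pairings with probability one half, and we count how often either ration wins 9 or more pairings:

import numpy as np

rnd = np.random.default_rng()

n_trials = 10_000
a_wins = np.zeros(n_trials, dtype=int)

for i in range(n_trials):
    # For each of the 12 pairs, ration A "wins" with probability 0.5.
    wins = rnd.integers(0, 2, size=12)
    a_wins[i] = np.sum(wins)

# Two-tailed count: 9 or more wins for ration A, or 3 or fewer (which means
# 9 or more wins for ration B).
k = np.sum((a_wins >= 9) | (a_wins <= 3))
print('Proportion of trials with 9 or more wins for either ration:',
      np.round(k / n_trials, 2))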

Notice how we proceeded in Examples 15-6 and 17-4. The data were originally quantitative — weight gains in pounds for each pig. But for simplicity we classified the data into simpler counted-data formats. The first format (Example 15-6) was a rank order, from highest to lowest. The second format (Example 17-4) was simply higher-lower, obtained by randomly pairing the observations (using alphabetical letter, or pig’s stall number, or whatever was the cause of the order in which the data were presented to be random). Classifying the data in either of these ways loses some information and makes the subsequent tests somewhat cruder than more refined analysis could provide (as we shall see in the next chapter), but the loss of efficiency is not crucial in many such cases. We shall see how to deal directly with the quantitative data in Chapter 24.

+

Example 17-5: Merged Firms Compared to Two Non-Merged Groups

+

In a study by Simon, Mokhtari, and Simon (1996), a set of 33 advertising agencies that merged over a period of years were each compared to entities within two groups (each also of 33 firms) that did not merge; one non-merging group contained firms of roughly the same size as the final merged entities, and the other non-merging group contained pairs of non-merging firms whose total size was roughly the same as the total size of the merging entities.

+

The idea behind the matching was that each pair of merged firms was compared against

+
    +
1. a pair of contemporaneous firms that were roughly the same size as the merging firms before the merger, and

2. a single firm that was roughly the same size as the merged entity after the merger.

    +

    Here (Table 17-4) are the data (provided by the authors):

    +

    Table 17-4

    +

    Revenue Growth In Year 1 Following Merger

    +

Set #   Merged     Match1     Match2
1       -0.20000    0.02564    0.000000
2       -0.34831   -0.12500    0.080460
3        0.07514    0.06322   -0.023121
4        0.12613   -0.04199    0.164671
5       -0.10169    0.08000    0.277778
6        0.03784    0.14907    0.430168
7        0.11616    0.15183    0.142857
8       -0.09836    0.03774    0.040000
9        0.02137    0.07661    0.111111
10      -0.01711    0.28434    0.189139
11      -0.36478    0.13907    0.038869
12       0.08814    0.03874    0.094792
13      -0.26316    0.05641    0.045139
14      -0.04938    0.05371    0.008333
15       0.01146    0.04805    0.094817
16       0.00975    0.19816    0.060929
17       0.07143    0.42083   -0.024823
18       0.00183    0.07432    0.053191
19       0.00482   -0.00707    0.050083
20      -0.05399    0.17152    0.109524
21       0.02270    0.02788   -0.022456
22       0.05984    0.04857    0.167064
23      -0.05987    0.02643    0.020676
24      -0.08861   -0.05927    0.077067
25      -0.02483   -0.01839    0.059633
26       0.07643    0.01262    0.034635
27      -0.00170   -0.04549    0.053571
28      -0.21975    0.34309    0.042789
29       0.38237    0.22105    0.115773
30      -0.00676    0.25494    0.237047
31      -0.16298    0.01124    0.190476
32       0.19182    0.15048    0.151994
33       0.06116    0.17045    0.093525
    +

    Comparisons were made in several years before and after the mergings to see whether the merged entities did better or worse than the non-merging entities they were matched with by the researchers, but for simplicity we may focus on just one of the more important years in which they were compared — say, the revenue growth rates in the year after the merger.

    +

    Here are those average revenue growth rates for the three groups:

    +

    Year’s rev. growth

    MERGED     -0.0213
    MATCH 1     0.092085
    MATCH 2     0.095931

    We could do a general test to determine whether there are differences among the means of the three groups, as was done in the “Differences Among 4 Pig Rations” problem (Section 24.0.1). However, we note that there may be considerable variation from one matched set to another — variation which can obscure the overall results if we resample from a large general bucket.

    +

    Therefore, we use the following resampling procedure that maintains the separation between matched sets by converting each observation into a rank (1, 2 or 3) within the matched set.

    +

    Here (Table 17-5) are those ranks:

    +

    Table 17-5

    +

    Ranked Within Matched Set (1 = worst, 3 = best)

    +

    Set #   Merged   Match1   Match2
      1       1        3        2
      2       1        2        3
      3       3        2        1
      4       2        1        3
      5       1        2        3
      6       1        3        2
      7       1        3        2
      8       1        2        3
      9       1        2        3
     10       1        2        3
     11       1        3        2
     12       2        1        3
     13       1        3        2
     14       1        3        2
     15       1        2        3
     16       1        3        2
     17       2        3        1
     18       1        3        2
     19       2        1        3
     20       1        3        2
     21       2        2        3
     22       2        2        3
     23       1        3        2
     24       1        2        3
     25       1        2        3
     26       3        1        2
     27       2        1        3
     28       1        3        2
     29       3        2        1
     30       1        3        2
     31       1        2        3
     32       3        1        2
     33       1        3        2

    These are the average ranks for the three groups (1 = worst, 3 = best):

    MERGED     1.45
    MATCH 1    2.18
    MATCH 2    2.36

    Is it possible that the merged group received such a low (poor) average ranking just by chance? The null hypothesis is that the ranks within each set were assigned randomly, and that “merged” came out so poorly just by chance. The following procedure simulates random assignment of ranks to the “merged” group:

    +
  1. Randomly select 33 integers between “1” and “3” (inclusive).

  2. Find the average rank & record.

  3. Repeat steps 1 and 2, say, 1000 times.

  4. Find out how often the average rank is as low as 1.45.
+

Here’s a RESAMPLING STATS program (“merge.sta”):

+ +
' Program file: "testing_counts_2_06.rss"
+
+REPEAT 1000
+    GENERATE 33 (1 2 3) ranks
+    MEAN ranks ranksum
+    SCORE ranksum z
+END
+HISTOGRAM z
+COUNT z <=1.45 k
+DIVIDE k 1000 kk
+PRINT kk
+

+

Result: kk = 0

+

Interpretation: 1000 random selections of 33 ranks never produced an average as low as the observed average. Therefore we rule out chance as an explanation for the poor ranking of the merged firms.
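A minimal Python sketch of the same idea (our own names, not the original program): draw 33 ranks at random from 1, 2, 3 and record how often their mean is as low as the observed 1.45.

import numpy as np

rng = np.random.default_rng()
n_trials = 1000
means = np.zeros(n_trials)

for i in range(n_trials):
    ranks = rng.integers(1, 4, size=33)   # 33 random ranks, each 1, 2 or 3
    means[i] = ranks.mean()

print(np.sum(means <= 1.45) / n_trials)   # proportion as low as observed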

+

Exactly the same technique might be used in experimental medical studies wherein subjects in an experimental group are matched with two different entities that receive placebos or control treatments.

+

For example, there have been several recent three-way tests of treatments for depression: drug therapy versus cognitive therapy versus combined drug and cognitive therapy. If we are interested in the combined drug-therapy treatment in particular, comparing it to standard existing treatments, we can proceed in the same fashion as in the merger problem.

+

We might just as well consider the real data from the merger as hypothetical data for a proposed test in 33 triplets of people that have been matched within triplet by sex, age, and years of education. The three treatments were to be chosen randomly within each triplet.

+

Assume that we now switch scales from the merger data, so that #1 = best and #3 = worst, and that the outcomes on a series of tests were ranked from best (#1) to worst (#3) within each triplet. Assume that the combined drug-and-therapy regime has the best average rank. How sure can we be that the observed result would not occur by chance? Here are the data from the merger study, seen here as Table 17-5-b:

+

Table 17-5-b

+

Ranked Therapies Within Matched Patient Triplets

+

(hypothetical data identical to merger data) (1 = best, 3 = worst)

+

Triplet #   Therapy Only   Combined   Drug Only
     1           1             3          2
     2           1             2          3
     3           3             2          1
     4           2             1          3
     5           1             2          3
     6           1             3          2
     7           1             3          2
     8           1             2          3
     9           1             2          3
    10           1             2          3
    11           1             3          2
    12           2             1          3
    13           1             3          2
    14           1             3          2
    15           1             2          3
    16           1             3          2
    17           2             3          1
    18           1             3          2
    19           2             1          3
    20           1             3          2
    21           2             1          3
    22           2             1          3
    23           1             3          2
    24           1             2          3
    25           1             2          3
    26           3             1          2
    27           2             1          3
    28           1             3          2
    29           3             2          1
    30           1             3          2
    31           1             2          3
    32           3             1          2
    33           1             3          2

These are the average ranks for the three groups (“1” = best, “3” = worst):

Combined   1.45
Drug       2.18
Therapy    2.36

In these hypothetical data, the average rank for the drug and therapy regime is 1.45. Is it likely that the regimes do not “really” differ with respect to effectiveness, and that the drug and therapy regime came out with the best rank just by the luck of the draw? We test by asking, “If there is no difference, what is the probability that the treatment of interest will get an average rank this good, just by chance?”

+

We proceed exactly as with the solution for the merger problem (see above).

+

In the above problems, we did not concern ourselves with chance outcomes for the other therapies (or the matched firms) because they were not our primary focus. If, in actual fact, one of them had done exceptionally well or poorly, we would have paid little notice because their performance was not the object of the study. We needed, therefore, only to guard against the possibility that chance good luck for our therapy of interest might have led us to a hasty conclusion.

+

Suppose now that we are not interested primarily in the combined drug-therapy treatment, and that we have three treatments being tested, all on equal footing. It is no longer sufficient to ask the question “What is the probability that the combined therapy could come out this well just by chance?” We must now ask “What is the probability that any of the therapies could have come out this well by chance?” (Perhaps you can guess that this probability will be higher than the probability that our chosen therapy will do so well by chance.)

+

Here is a resampling procedure that will answer this question:

+
  1. Put the numbers “1”, “2” and “3” (corresponding to ranks) in a bucket.

  2. Shuffle the numbers and deal them out to three locations that correspond to treatments (call the locations “t1,” “t2,” and “t3”).

  3. Repeat step two another 32 times (for a total of 33 repetitions, for 33 matched triplets).

  4. Find the average rank for each location (treatment).

  5. Record the minimum (best) score.

  6. Repeat steps 2-4, say, 1000 times.

  7. Find out how often the minimum average rank for any treatment is as low as 1.45.
+ +
' Program file: "testing_counts_2_07.rss"
+
+NUMBERS (1 2 3) a
+' Step 1 above
+REPEAT 1000
+    ' Step 6
+    REPEAT 33
+        ' Step 3
+        SHUFFLE a a
+        ' Step 2
+        SCORE a t1 t2 t3
+        ' Step 2
+    END
+    ' Step 3
+    MEAN t1 tt1
+    ' Step 4
+    MEAN t2 tt2
+    MEAN t3 tt3
+    CLEAR t1
+    ' Clear the vectors where we've stored the ranks for this trial (must do
+    ' this whenever we have a SCORE statement that's part of a "nested" repeat
+    ' loop)
+    CLEAR t2
+    CLEAR t3
+    CONCAT tt1 tt2 tt3 b
+    ' Part of step 5
+    MIN b bb
+    ' Part of step 5
+    SCORE bb z
+    ' Part of step 5
+END
+' Step 6
+HISTOGRAM z
+COUNT z <=1.45 k
+' Step 7
+DIVIDE k 1000 kk
+PRINT kk
+

Result: kk = 0

Interpretation: 1000 random shufflings of 33 ranks, apportioned to three “treatments,” never produced for the best treatment of the three an average as low as the observed average; therefore we rule out chance as an explanation for the success of the combined therapy.
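Here is a Python sketch of the same procedure (names are ours, not the original program): for each trial, shuffle the ranks 1, 2, 3 within each of the 33 triplets, take the average rank of each treatment, keep the best (lowest) average, and see how often it is as low as 1.45.

import numpy as np

rng = np.random.default_rng()
n_trials = 1000
best = np.zeros(n_trials)

for i in range(n_trials):
    # One shuffled row of ranks (1, 2, 3) per matched triplet.
    ranks = np.array([rng.permutation([1, 2, 3]) for _ in range(33)])
    col_means = ranks.mean(axis=0)   # average rank for each of the 3 treatments
    best[i] = col_means.min()        # best (lowest) average rank in this trial

print(np.sum(best <= 1.45) / n_trials)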

+

An interesting feature of the mergers (or depression treatment) problem is that it would be hard to find a conventional test that would handle this three-way comparison in an efficient manner. Certainly it would be impossible to find a test that does not require formulae and tables that only a talented professional statistician could manage satisfactorily, and even s/he is not likely to fully understand those formulaic procedures.

+

+


+
+
+

23.3 Technical note

+

Some of the tests introduced in this chapter are similar to standard nonparametric rank and sign tests. They differ less in the structure of the test statistic than in the way in which significance is assessed (the comparison is to multiple simulations of a model based on the benchmark hypothesis, rather than to critical values calculated analytically).

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/testing_measured.html b/python-book/testing_measured.html new file mode 100644 index 00000000..41cf9fc2 --- /dev/null +++ b/python-book/testing_measured.html @@ -0,0 +1,1617 @@ + + + + + + + + + +Resampling statistics - 24  The Statistics of Hypothesis-Testing With Measured Data + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

24  The Statistics of Hypothesis-Testing With Measured Data

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+ +

Chapter 21 and Chapter 19 discussed testing a hypothesis with data that either arrive in dichotomized (yes-no) form, or come as data in situations where it is convenient to dichotomize. We next consider hypothesis testing using measured data. Conventional statistical practice employs such devices as the “t-test” and “analysis of variance.” In contrast to those complex devices, the resampling method does not differ greatly from what has been discussed in previous chapters.

+
+

24.0.1 Example: The Pig Rations Still Once Again, Using Measured Data

+

(Testing for the Difference Between Means of Two Equal-Sized Samples of Measured-Data Observations) (Program “Pigs3”)

+

Let us now treat the pig-food problem without converting the quantitative data into qualitative data, because a conversion always loses information.

+

The term “lose information” can be understood intuitively. Consider two sets of three sacks of corn. Set A includes sacks containing, respectively, one pound, two pounds, and three pounds. Set B includes sacks of one pound, two pounds, and a hundred pounds. If we rank the sacks by weight, the two sets can no longer be distinguished. The one-pound and two-pound sacks have ranks one and two in both cases, and their relative places in their sets are the same. But if we know not only that the one-pound sack is the smallest of its set and the three-pound or hundred-pound sack is the largest, but also that the largest sack is three pounds (or a hundred pounds), we have more information about a set than if we only know the ranks of its sacks.

+

Rank data are also known as “ordinal” data, whereas data measured in (say) pounds are known as “cardinal” data. Even though converting from cardinal (measured) to ordinal (ranked) data loses information, the conversion may increase convenience, and may therefore be worth doing in some cases.

+

+

We begin a measured-data procedure by noting that if the two pig foods are the same, then each of the observed weight gains came from the same benchmark universe. This is the basic tactic in our statistical strategy. That is, if the two foods came from the same universe, our best guess about the composition of that universe is that it includes weight gains just like the twenty-four we have observed, and in the same proportions, because that is all the information that we have about the universe; this is the bootstrap method. Since ours is (by definition) a sample from an infinite (or at least, a very large) universe of possible weight gains, we assume that there are many weight gains in the universe just like the ones we have observed, in the same proportion as we have observed them. For example, we assume that 2/24 of the universe is composed of 34-pound weight gains, as seen in Figure 18-1:

+

Figure 18-1

+

We recognize, of course, that weight gains other than the exact ones we observed certainly would occur in repeated experiments. And if we thought it reasonable to do so, we could assume that the “distribution” of the weight gains would follow a regular “smooth” shape such as Figure 18-2. But deciding just how to draw Figure 18-2 from the data in Figure 18-1 requires that we make arbitrary assumptions about unknown conditions. And if we were to draw Figure 18-2 in a form that would be sufficiently regular for conventional mathematical analysis, we might have to make some very strong assumptions going far beyond the observed data.

+

Drawing a smooth curve such as Figure 18-2 from the raw data in Figure 18-1 might be satisfactory — if done with wisdom and good judgment. But there is no necessity to draw such a smooth curve, in this case or in most cases. We can proceed by assuming simply that the benchmark universe — the universe to which we shall compare our samples, conventionally called the “null” or “hypothetical” universe — is composed only of elements similar to the observations we have in hand. We thereby lose no efficiency and avoid making unsound assumptions.

+

Figure 18-2 (y axis: Relative Probability; x axis: Size of Weight Gain, 30.2 = Mean)

+

To carry out our procedure in practice: 1) Write down each of the twenty-four weight gains on a blank index card. We then have one card each for 31, 34, 29, 26, and so on. 2) Shuffle the twenty-four cards thoroughly, and pick one card. 3) Record the weight gain, and replace the card. (Recall that we are treating the weight gains as if they come from an infinite universe — that is, as if the probability of selecting any amount is the same no matter which others are selected randomly. Another way to say this is to state that each selection is independent of each other selection. If we did not replace the card before selecting the next weight gain, the selections would no longer be independent. See Chapter 11 for further discussion of this issue.) 4) Repeat this process until you have made two sets of 12 observations. 5) Call the first hand “food A” and the second hand “food B.” Determine the average weight gain for the two hands, and record it as in Table 18-1. Repeat this procedure many times.

+

In operational steps:

+

Step 1. Write down each observed weight gain on a card, e.g. 31, 34, 29...

+

Step 2. Shuffle and deal a card.

+

Step 3. Record the weight and replace the card.

+

Step 4. Repeat steps 2 and 3 eleven more times; call this group A.

+

Step 5. Repeat steps 2-3 another twelve times; call this group B.

+

Step 6. Calculate the mean weight gain of each group.

+

Step 7. Subtract the mean of group A from the mean of group B and record. If larger (more positive) than 3.16 (the difference between the observed means) or more negative than -3.16, record “more.” Otherwise record “less.”

+

Step 8. Repeat this procedure perhaps fifty times, and calculate the proportion “more.” This estimates the probability sought.

+

In none of the first ten simulated trials did the difference in the means of the random hands exceed the observed difference (3.16 pounds, in the top line in the table) between foods A and B. (The difference between group totals tells the same story and is faster, requiring no division calculations.)

+

In the old days before a computer was always easily available, I would quit making trials at such a point, confident that a difference in means as great as observed is not likely to happen by chance. (Using the convenient “multiplication rule” described in Chapter 9, we can estimate the probability of such an occurrence happening by chance in 10 successive trials as \(\frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} \cdots = \left(\frac{1}{2}\right)^{10} = 1/1024 \approx .001\) = .1 percent, a small chance indeed.) Nevertheless, let us press on to do 50 trials.

+

Table 18-1

Results of Fifty Random Samples for the Problem “PIGS3”

Trial #    Mean of First 12        Mean of Second 12       Difference   Greater or Less Than
           Observations            Observations                         Observed Difference
           (First Hand)            (Second Hand)

Observed   382 / 12 = 31.83        344 / 12 = 28.67          3.16
    1      368 / 12 = 30.67        357 / 12 = 29.75           .87       Less
    2      364 / 12 = 30.33        361 / 12 = 30.08           .25       Less
    3      352 / 12 = 29.33        373 / 12 = 31.08         (1.75)      Less
    4      378 / 12 = 31.50        347 / 12 = 28.92          2.58       Less
    5      365 / 12 = 30.42        360 / 12 = 30.00           .42       Less
    6      352 / 12 = 29.33        373 / 12 = 31.08         (1.75)      Less
    7      355 / 12 = 29.58        370 / 12 = 30.83         (1.25)      Less
    8      366 / 12 = 30.50        359 / 12 = 29.92           .58       Less
    9      360 / 12 = 30.00        365 / 12 = 30.42          (.42)      Less
   10      355 / 12 = 29.58        370 / 12 = 30.83         (1.25)      Less
   11      359 / 12 = 29.92        366 / 12 = 30.50          (.58)      Less
   12      369 / 12 = 30.75        356 / 12 = 29.67          1.08       Less
   13      360 / 12 = 30.00        365 / 12 = 30.42          (.42)      Less
   14      377 / 12 = 31.42        348 / 12 = 29.00          2.42       Less
   15      365 / 12 = 30.42        360 / 12 = 30.00           .42       Less
   16      364 / 12 = 30.33        361 / 12 = 30.08           .25       Less
   17      363 / 12 = 30.25        362 / 12 = 30.17           .08       Less
   18      365 / 12 = 30.42        360 / 12 = 30.00           .42       Less
   19      369 / 12 = 30.75        356 / 12 = 29.67          1.08       Less
   20      369 / 12 = 30.75        356 / 12 = 29.67          1.08       Less
   21      369 / 12 = 30.75        356 / 12 = 29.67          1.08       Less
   22      364 / 12 = 30.33        361 / 12 = 30.08           .25       Less
   23      363 / 12 = 30.25        362 / 12 = 30.17           .08       Less
   24      363 / 12 = 30.25        362 / 12 = 30.17           .08       Less
   25      364 / 12 = 30.33        361 / 12 = 30.08           .25       Less
   26      359 / 12 = 29.92        366 / 12 = 30.50          (.58)      Less
   27      362 / 12 = 30.17        363 / 12 = 30.25          (.08)      Less
   28      362 / 12 = 30.17        363 / 12 = 30.25          (.08)      Less
   29      373 / 12 = 31.08        352 / 12 = 29.33          1.75       Less
   30      367 / 12 = 30.58        358 / 12 = 29.83           .75       Less
   31      376 / 12 = 31.33        349 / 12 = 29.08          2.25       Less
   32      365 / 12 = 30.42        360 / 12 = 30.00           .42       Less
   33      357 / 12 = 29.75        368 / 12 = 30.67         (1.42)      Less
   34      349 / 12 = 29.08        376 / 12 = 31.33         (2.25)      Less
   35      356 / 12 = 29.67        369 / 12 = 30.75         (1.08)      Less
   36      359 / 12 = 29.92        366 / 12 = 30.50          (.58)      Less
   37      372 / 12 = 31.00        353 / 12 = 29.42          1.58       Less
   38      368 / 12 = 30.67        357 / 12 = 29.75           .92       Less
   39      344 / 12 = 28.67        382 / 12 = 31.83         (3.16)      Equal
   40      365 / 12 = 30.42        360 / 12 = 30.00           .42       Less
   41      375 / 12 = 31.25        350 / 12 = 29.17          2.08       Less
   42      353 / 12 = 29.42        372 / 12 = 31.00         (1.58)      Less
   43      357 / 12 = 29.75        368 / 12 = 30.67          (.92)      Less
   44      363 / 12 = 30.25        362 / 12 = 30.17           .08       Less
   45      353 / 12 = 29.42        372 / 12 = 31.00         (1.58)      Less
   46      354 / 12 = 29.50        371 / 12 = 30.92         (1.42)      Less
   47      353 / 12 = 29.42        372 / 12 = 31.00         (1.58)      Less
   48      366 / 12 = 30.50        359 / 12 = 29.92           .58       Less
   49      364 / 12 = 30.33        361 / 12 = 30.08           .25       Less
   50      370 / 12 = 30.83        355 / 12 = 29.58          1.25       Less

+

Table 18-1 shows fifty trials of which only one (the thirty-ninth) is as “far out” as the observed samples. These data give us an estimate of the probability that, if the two foods come from the same universe, a difference this great or greater would occur just by chance. (Compare this 2 percent estimate with the probability of roughly 1 percent estimated with the conventional t test — a “significance level” of 1 percent.) On the average, the test described in this section yields a significance level as high as such mathematical-probability tests as the t test — that is, it is just as efficient — though the tests described in Examples 15-6 and 17-1 are likely to be less efficient because they convert measured data to ranked or classified data. 1

+

It is not appropriate to say that these data give us an estimate of the probability that the foods “do not come” from the same universe. This is because we can never state a probability that a sample came from a given universe unless the alternatives are fully specified in advance.2

+

This example also illustrates how the dispersion within samples affects the difficulty of finding out whether the samples differ from each other. For example, the average weight gain for food A was 32 pounds, versus 29 pounds for food B. If all the food A-fed pigs had gained weight within a range of say 29.9 and 30.1 pounds, and if all the food B-fed pigs had gained weight within a range of 28.9 and 29.1 pounds — that is, if the highest weight gain in food B had been lower than the lowest weight gain in food A — then there would be no question that food A is better, and even fewer observations would have made this statistically conclusive. Variation (dispersion) is thus of great importance in statistics and in the social sciences. The larger the dispersion among the observations within the samples, the larger the sample size necessary to make a conclusive comparison between two groups or reliable estimates of summarization statistics. (The dispersion might be measured by the mean absolute deviation (the average absolute difference between the mean and the individual observations, treating both plus and minus differences as positive), the variance (the average squared difference between the mean and the observations), the standard deviation (the square root of the variance), the range (the difference between the smallest and largest observations), or some other device.)
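As a small illustration of those dispersion measures, here is a sketch of ours that applies the definitions in the paragraph above to the food A weight gains:

import numpy as np

gains = np.array([31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 31])  # food A

mean_abs_dev = np.mean(np.abs(gains - gains.mean()))   # mean absolute deviation
variance = np.mean((gains - gains.mean()) ** 2)        # average squared deviation
std_dev = np.sqrt(variance)                            # standard deviation
data_range = gains.max() - gains.min()                 # largest minus smallest

print(mean_abs_dev, variance, std_dev, data_range)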

+ +

If you are performing your tests by hand rather than using a computer (a good exercise even nowadays when computers are so accessible), you might prefer to work with the median instead of the mean, because the median requires less computation. (The median also has the advantage of being less influenced by a single far-out observation that might be quite atypical; all measures have their special advantages and disadvantages.) Simply compare the difference in medians of the twelve-pig resamples to the difference in medians of the actual samples, just as was done with the means. The only operational difference is to substitute the word “median” for the word “mean” in the steps listed above. You may need a somewhat larger number of trials when working with medians, however, for they tend to be less precise than means.

+ +

The RESAMPLING STATS program compares the difference in the sums of the weight gains for the actual pigs against the difference resulting from two randomly-chosen groups of pigs, using the same numerical weight gains of individual pigs as were obtained in the actual experiment. If the differences in average weight gains of the randomly ordered groups are rarely as large as the difference in weight gains from the actual sets of pigs fed food A-alpha and food B-beta, then we can conclude that the foods do make a difference in pigs’ weight gains.

+

Note first that pigs in group A gained a total of 382 pounds while group B gained a total of 344 pounds — 38 fewer. To minimize computations, we will deal with totals like these, not averages.

+

First we construct vectors A and B of the weight gains of the pigs fed with the two foods. Then we combine the two vectors into one long vector and select two groups of 12 randomly and with replacement (the two SAMPLE commands). We SUM the weight gains for the two resamples, and calculate the difference. We keep SCORE of those differences, graph them on a HISTOGRAM, and see how many times resample A exceeded resample B by at least 38 pounds, or vice versa (we are testing whether the two are different, not whether food A produces larger weight gains).

+ +
' Program file: "testing_measured_00.rss"
+
+NUMBERS (31 34 29 26 32 35 38 34 31 29 32 31) a
+' Record group a's weight gains.
+NUMBERS (26 24 28 29 30 29 31 29 32 26 28 32) b
+' Record group b's weight gains.
+CONCAT a b c
+' Combine a and b together in one long vector.
+REPEAT 1000
+    ' Do 1000 experiments.
+    SAMPLE 12 c d
+    ' Take a "resample" of 12 with replacement from c and put it in d.
+    SAMPLE 12 c e
+    ' Take another "resample."
+    SUM d dd
+    ' Sum the first "resample."
+    SUM e ee
+    ' Sum the second "resample."
+    SUBTRACT dd ee f
+    ' Calculate the difference between the two resamples.
+    SCORE f z
+    ' Keep track of each trial result.
+END
+' End one experiment, go back and repeat until all trials are complete,
+' then proceed.
+HISTOGRAM z
+' Produce a histogram of trial results.
+

[Histogram] PIGS3: Difference Between Two Resamples. X axis: sum of weight gains, 1st resample less 2nd.

+

From this histogram we see that none of the trials produced a difference between groups as large as that observed (or larger). RESAMPLING STATS will calculate this for us with the following commands:

+ +
' Program file: "pigs3.rss"
+
+COUNT z >= 38 k
+' Determine how many of the trials produced a difference between resamples
+
+' >= 38.
+COUNT z <= -38 l
+' Likewise for a difference of -38.
+ADD k l m
+' Add the two together.
+DIVIDE m 1000 mm
+' Convert to a proportion.
+PRINT mm
+' Print the result.
+
+' Note: The file "pigs3" on the Resampling Stats software disk contains
+' this set of commands.
+
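A Python sketch of the whole PIGS3 procedure (not the book's own listing; names are ours): resample two groups of 12 with replacement from the combined weight gains, take the difference in sums, and count how often it is at least 38 pounds in either direction.

import numpy as np

rng = np.random.default_rng()
a = np.array([31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 31])   # food A gains
b = np.array([26, 24, 28, 29, 30, 29, 31, 29, 32, 26, 28, 32])   # food B gains
c = np.concatenate([a, b])

n_trials = 1000
diffs = np.zeros(n_trials)
for i in range(n_trials):
    d = rng.choice(c, size=12, replace=True)   # resample "food A"
    e = rng.choice(c, size=12, replace=True)   # resample "food B"
    diffs[i] = d.sum() - e.sum()

k = np.sum((diffs >= 38) | (diffs <= -38))
print(k / n_trials)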
+
+

24.0.2 Example: Is There a Difference in Liquor Prices Between State-Run and Privately-Run Systems?

+

This is an example of testing for differences between means of unequal-sized samples of measured data.

+

In the 1960s I studied the price of liquor in the sixteen “monopoly” states (where the state government owns the retail liquor stores) compared to the twenty-six states in which retail liquor stores are privately owned. (Some states were omitted for technical reasons. And it is interesting to note that the situation and the price pattern has changed radically since then.) These data were introduced in the context of a problem in probability in Chapter 12.

+

These were the representative 1961 prices of a fifth of Seagram 7 Crown whiskey in the two sets of states:3

+

16 monopoly states: $4.65, $4.55, $4.11, $4.15, $4.20, $4.55, $3.80,

+

$4.00, $4.19, $4.75, $4.74, $4.50, $4.10, $4.00, $5.05, $4.20

+

Mean = $4.35

+

26 private-ownership states: $4.82, $5.29, $4.89, $4.95, $4.55, $4.90,

+

$5.25, $5.30, $4.29, $4.85, $4.54, $4.75, $4.85, $4.85, $4.50, $4.75,

+

$4.79, $4.85, $4.79, $4.95, $4.95, $4.75, $5.20, $5.10, $4.80, $4.29.

+

Mean = $4.84

+

The economic question that underlay the investigation — having both theoretical and policy ramifications — is as follows: Does state ownership affect prices? The empirical question is whether the prices in the two sets of states were systematically different. In statistical terms, we wish to test the hypothesis that there was a difference between the groups of states related to their mode of liquor distribution, or whether the observed $.49 differential in means might well have occurred by happenstance. In other words, we want to know whether the two sub-groups of states differed systematically in their liquor prices, or whether the observed pattern could well have been produced by chance variability.

+

The first step is to examine the two sets of data graphically to see whether there was such a clear-cut difference between them — of the order of Snow’s data on cholera, or the Japanese Navy data on beri-beri — that no test was necessary. The separate displays, and then the two combined together, are shown in Figure 24.1; the answer is not clear-cut and hence a formal test is necessary.

+ + +
+
+
+
+

+
Figure 24.1: Liquor prices by government and private
+
+
+
+
+

At first I used a resampling permutation test as follows: Assuming that the entire universe of possible prices consists of the set of events that were observed, because that is all the information available about the universe, I wrote each of the forty-two observed state prices on a separate card. The shuffled deck simulated a situation in which each state has an equal chance for each price.

+

On the “null hypothesis” that the two groups’ prices do not reflect different price-setting mechanisms, but rather differ only by chance, I then examined how often that simulated universe stochastically produces groups with results as different as observed in 1961. I repeatedly dealt groups of 16 and 26 cards, without replacing the cards, to simulate hypothetical monopoly-state and private-state samples, each time calculating the difference in mean prices.

+

The probability that the benchmark null-hypothesis universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state-ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices. If the simulated difference between the randomly-chosen groups was frequently equal to or greater than observed in 1961, one would not conclude that the observed difference was due to the type of retailing system because it could well have been due to chance variation.

+

The results — not even one “success” in 10,000 trials — imply that there is a very small probability that two groups with mean prices as different as were observed would happen by chance if drawn from the universe of 42 observed prices. So we “reject the null hypothesis” and instead find persuasive the proposition that the type of liquor distribution system influences the prices that consumers pay.4
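Here is a Python sketch of that permutation test, under the stated setup (the price lists above, groups of 16 and 26, dealt without replacement); the code and names are ours, not the original program.

import numpy as np

rng = np.random.default_rng()
monopoly = np.array([4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00,
                     4.19, 4.75, 4.74, 4.50, 4.10, 4.00, 5.05, 4.20])
private = np.array([4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30,
                    4.29, 4.85, 4.54, 4.75, 4.85, 4.85, 4.50, 4.75,
                    4.79, 4.85, 4.79, 4.95, 4.95, 4.75, 5.20, 5.10,
                    4.80, 4.29])
observed = private.mean() - monopoly.mean()       # about $0.49

both = np.concatenate([monopoly, private])
n_trials = 10000
count = 0
for _ in range(n_trials):
    shuffled = rng.permutation(both)              # deal the 42 prices without replacement
    fake_monopoly, fake_private = shuffled[:16], shuffled[16:]
    if fake_private.mean() - fake_monopoly.mean() >= observed:
        count += 1
print(count / n_trials)

Replacing the shuffle-and-deal with two independent draws of 16 and 26 made with replacement (for example, rng.choice(both, size=16, replace=True)) would give the bootstrap variant mentioned below.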

+

As I shall discuss later, the logical framework of this resampling version of the permutation test differs greatly from the formulaic version, which would have required heavy computation. The standard conventional alternative would be a Student’s t-test, in which the user simply plugs into an unintuitive formula and reads the result from a table. And because of the unequal numbers of cases and unequal dispersions in the two samples, an appropriate t-test is far from obvious, whereas resampling is not made more difficult by such realistic complications.

+ +

A program to handle the liquor problem with an infinite-universe bootstrap distribution simply substitutes the random sampling command SAMPLE for the SHUFFLE/TAKE commands. The results of the new test are indistinguishable from those in the program given above.

+

Still another difficult question is whether any hypothesis test is appropriate, because the states were not randomly selected for inclusion in one group or another, and the results could be caused by factors other than the liquor system; this applies to both the above methods. The states constitute the entire universe in which we are interested, rather than being a sample taken from some larger universe as with a biological experiment or a small survey sample. But this objection pertains to a conventional test as well as to resampling methods. And a similar question arises throughout medical and social science — to the two water suppliers between which John Snow detected vast differences in cholera rates, to rates of lung cancer in human smokers, to analyses of changes in speeding laws, and so on.

+

The appropriate question is not whether the units were assigned randomly, however, but whether there is strong reason to believe that the results are not meaningful because they are the result of a particular “hidden” variable.

+

These debates about fundamentals illustrate the unsettled state of statistical thinking about basic issues. Other disciplines also have their controversies about fundamentals. But in statistics these issues arise as early as the introductory course, because all but the most contrived problems are shot through with these questions. Instructors and researchers usually gloss over these matters, as Gigerenzer et al., show ( The Empire of Chance ). Again, because with resampling one does not become immersed in the difficult mathematical techniques that underlie conventional methods, one is quicker to see these difficult questions, which apply equally to conventional methods and resampling.

+ +

Example 18-3: Is There a Difference Between Treatments to Prevent Low Birthweights?

+

Next we consider the use of resampling with measured data to test the hypothesis that drug A prevents low birthweights (Rosner, 1982, p. 257). The data for the treatment and control groups are shown in Table 18-2.

+

Table 18-2

+

Birthweights in a Clinical Trial to Test a Drug for Preventing Low Birthweights

Treatment Group    Control Group
6.9                6.4
7.6                6.7
7.3                5.4
7.6                8.2
6.8                5.3
7.2                6.6
8.0                5.8
5.5                5.7
5.8                6.2
7.3                7.1
8.2                7.0
6.9                6.9
6.8                5.6
5.7                4.2
8.6                6.8
Average: 7.08      6.26

Source: Rosner, Table 8.7

+

The treatment group averaged .82 pounds more than the control group. Here is a resampling approach to the problem:

+
  1. If the drug has no effect, our best guess about the “universe” of birthweights is that it is composed of (say) a million each of the observed weights, all lumped together. In other words, in the absence of any other information or compelling theory, we assume that the combination of our samples is our best estimate of the universe. Hence let us write each of the birthweights on a card, and put them into a hat. Drawing them one by one and then replacing them is the operational equivalent of a very large (but equal) number of each birthweight.

  2. Repeatedly draw two samples of 15 birthweights each, and check how frequently the observed difference is as large as, or larger than, the actual difference of .82 pounds.
+

We find in the RESAMPLING STATS program below that only 1 percent of the pairs of hypothetical resamples produced means that differed by as much as .82. We therefore conclude that the observed difference is unlikely to have occurred by chance.

+ +
' Program file: "testing_measured_02.rss"
+
+NUMBERS (6.9 7.6 7.3 7.6 6.8 7.2 8.0 5.5 5.8 7.3 8.2 6.9 6.8 5.7 8.6) treat
+NUMBERS (6.4 6.7 5.4 8.2 5.3 6.6 5.8 5.7 6.2 7.1 7.0 6.9 5.6 4.2 6.8) control
+CONCAT treat control all
+' Combine all birthweight observations in same vector
+REPEAT 1000
+    ' Do 1000 simulations
+    SAMPLE 15 all treat$
+    ' Take a resample of 15 from all birth weights (the \$ indicates a
+    ' resampling counterpart to a real sample)
+    SAMPLE 15 all control$
+    ' Take a second, similar resample
+    MEAN treat$ mt
+    ' Find the means of the two resamples
+    MEAN control$ mc
+    SUBTRACT mt mc dif
+    ' Find the difference between the means of the two resamples
+    SCORE dif z
+    ' Keep score of the result
+END
+' End the simulation experiment, go back and repeat
+HISTOGRAM z
+' Produce a histogram of the resample differences
+COUNT z >= 0.82 k
+' How often did resample differences exceed the observed difference of
+' .82?
+

+

[Histogram] X axis: resample differences in pounds.

+

Result: Only 1.3 percent of the pairs of resamples produced means that differed by as much as .82. We can conclude that the observed difference is unlikely to have occurred by chance.
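A rough Python counterpart to the program above (our names; the data are those of Table 18-2):

import numpy as np

rng = np.random.default_rng()
treat = np.array([6.9, 7.6, 7.3, 7.6, 6.8, 7.2, 8.0, 5.5, 5.8, 7.3,
                  8.2, 6.9, 6.8, 5.7, 8.6])
control = np.array([6.4, 6.7, 5.4, 8.2, 5.3, 6.6, 5.8, 5.7, 6.2, 7.1,
                    7.0, 6.9, 5.6, 4.2, 6.8])
both = np.concatenate([treat, control])

n_trials = 1000
diffs = np.zeros(n_trials)
for i in range(n_trials):
    t = rng.choice(both, size=15, replace=True)   # resample "treatment"
    c = rng.choice(both, size=15, replace=True)   # resample "control"
    diffs[i] = t.mean() - c.mean()

print(np.sum(diffs >= 0.82) / n_trials)   # proportion as large as observed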

+
+
+

24.0.3 Example: Bootstrap Sampling with Replacement

+

Efron and Tibshirani (1993, 11) present this as their basic problem illustrating the bootstrap method: Seven mice were given a new medical treatment intended to improve their survival rates after surgery, and nine mice were not treated. The numbers of days the treated mice survived were 94, 38, 23, 197, 99, 16 and 141, whereas the numbers of days the untreated mice (the control group) survived were 52, 10, 40, 104, 51, 27, 146, 30, and 46. The question we ask is: Did the treatment prolong survival, or might chance variation be responsible for the observed difference in mean survival times?

+

We start by supposing the treatment did NOT prolong survival and that chance was responsible. If that is so, then we consider that the two groups came from the same universe. Now we’d like to know how likely it is that two groups drawn from this common universe would differ as much as the two observed groups differ.

+

If we had unlimited time and money, we would seek additional samples in the same way that we obtained these. Lacking time and money, we create a hypothetical universe that embodies everything we know about such a common universe. We imagine replicating each sample element millions of times to create an almost infinite universe that looks just like our samples. Then we can take resamples from this hypothetical universe and see how they behave.

+

Even on a computer, creating such a large universe is tedious so we use a shortcut. We replace each element after we pick it for a resample. That way, our hypothetical (bootstrap) universe is effectively infinite.

+

The following procedure will serve:

+
  1. Calculate the difference between the means of the two observed samples – it’s 30.63 days in favor of the treated mice.

  2. Consider the two samples combined (16 observations) as the relevant universe to resample from.

  3. Draw 7 hypothetical observations with replacement and designate them “Treatment”; draw 9 hypothetical observations with replacement and designate them “Control.”

  4. Compute and record the difference between the means of the two samples.

  5. Repeat steps 2 and 3 perhaps 1000 times.

  6. Determine how often the resampled difference exceeds the observed difference of 30.63.
+

The following program (“mice2smp”) follows the above procedure:

+ +
' Program file: "testing_measured_03.rss"
+
+NUMBERS (94 38 23 197 99 16 141) treatmt
+' treatment group
+NUMBERS (52 10 40 104 51 27 146 30 46) control
+' control group
+CONCAT treatmt control u
+' U is our universe (step 2 above)
+REPEAT 1000
+    ' step 5 above
+    SAMPLE 7 u treatmt$
+    ' step 3 above
+    SAMPLE 9 u control$
+    ' step 3
+    MEAN treatmt$ tmean
+    ' step 4
+    MEAN control$ cmean
+    ' step 4
+    SUBTRACT tmean cmean diff
+    ' step 4
+    SCORE diff scrboard
+    ' step 4
+END
+' step 5
+HISTOGRAM scrboard
+COUNT scrboard >=30.63 k
+' step 6
+DIVIDE k 1000 prob
+PRINT prob
+

+

Result: PROB = 0.112

+

Interpretation: 1000 simulated resamples (of sizes 7 and 9) from a combined universe produced a difference as big as 30.63 more than 11 percent of the time. We cannot rule out the possibility that chance might be responsible for the observed advantage of the treatment group.
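The same bootstrap procedure, sketched in Python (our names, not the original listing):

import numpy as np

rng = np.random.default_rng()
treatmt = np.array([94, 38, 23, 197, 99, 16, 141])              # treated mice
control = np.array([52, 10, 40, 104, 51, 27, 146, 30, 46])      # control mice
u = np.concatenate([treatmt, control])                          # combined universe

n_trials = 1000
diffs = np.zeros(n_trials)
for i in range(n_trials):
    t = rng.choice(u, size=7, replace=True)
    c = rng.choice(u, size=9, replace=True)
    diffs[i] = t.mean() - c.mean()

print(np.sum(diffs >= 30.63) / n_trials)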

+

Example 18-5: Permutation Sampling Without Replacement

+

This section discusses at some length the question of when sampling with replacement (the bootstrap), and sampling without replacement (permutation or “exact” test) are the appropriate resampling methods. The case at hand seems like a clearcut case where the bootstrap is appropriate. (Note that in this case we draw both samples from a combined universe consisting of all observations, whether we do so with or without replacement.) Nevertheless, let us see how the technique would differ if one were to consider that the permutation test is appropriate. The algorithm would then be as follows (with the steps that are the same as above labeled “a” and those that are different labeled “b”):

+

1a. Calculate the difference between the means of the two observed samples – it’s 30.63 days in favor of the treated mice.

+

2a. Consider the two samples combined (16 observations) as the relevant universe to resample from.

+

3b. Draw 7 hypothetical observations without replacement and designate them “Treatment”; draw 9 hypothetical observations without replacement and designate them “Control.”

+

4a. Compute and record the difference between the means of the two samples.

+

5a. Repeat steps 2 and 3 perhaps 1000 times

+

6a. Determine how often the resampled difference exceeds the observed difference of 30.63.

+

Here is the RESAMPLING STATS program:

+ +
' Program file: "testing_measured_04.rss"
+
+NUMBERS (94 38 23 197 99 16 141) treatmt
+' treatment group
+NUMBERS (52 10 40 104 51 27 146 30 46) control
+' control group
+CONCAT treatmt control u
+' U is our universe (step 2 above)
+REPEAT 1000
+    ' step 5 above
+    SHUFFLE u ushuf
+    TAKE ushuf 1,7 treatmt$
+    ' step 3 above
+    TAKE ushuf 8,16 control$
+    ' step 3
+    MEAN treatmt$ tmean
+    ' step 4
+    MEAN control$ cmean
+    ' step 4
+    SUBTRACT tmean cmean diff
+    ' step 4
+    SCORE diff scrboard
+    ' step 4
+END
+' step 5
+HISTOGRAM scrboard
+COUNT scrboard >=30.63 k
+' step 6
+DIVIDE k 1000 prob
+PRINT prob
+

+

Result: prob = 0.145

+

Interpretation: 1000 simulated resamples (of sizes 7 and 9) from a combined universe produced a difference as big as 30.63 more than 14 percent of the time. We therefore should not rule out the possibility that chance might be responsible for the observed advantage of the treatment group.
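And the permutation (shuffle-and-deal) variant in Python, again only a sketch with our own names:

import numpy as np

rng = np.random.default_rng()
treatmt = np.array([94, 38, 23, 197, 99, 16, 141])
control = np.array([52, 10, 40, 104, 51, 27, 146, 30, 46])
u = np.concatenate([treatmt, control])

n_trials = 1000
diffs = np.zeros(n_trials)
for i in range(n_trials):
    shuffled = rng.permutation(u)          # shuffle all 16, deal without replacement
    diffs[i] = shuffled[:7].mean() - shuffled[7:].mean()

print(np.sum(diffs >= 30.63) / n_trials)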

+
+
+

24.1 Differences among four means

+

Example 18-6: Differences Among Four Pig Rations (Test for Differences Among Means of More Than Two Samples of Measured Data) (File “PIGS4”)

+

In Examples 15-1 and 15-4 we investigated whether or not the results shown by a single sample are sufficiently different from a null (benchmark) hypothesis so that the sample is unlikely to have come from the null-hypothesis benchmark universe. In Examples 15-7, 17-1, and 18-1 we then investigated whether or not the results shown by two samples suggest that both had come from the same universe, a universe that was assumed to be the composite of the two samples. Now as in Example 17-2 we investigate whether or not several samples come from the same universe, except that now we work with measured data rather than with counted data.

+

If one experiments with each of 100 different pig foods on twelve pigs, some of the foods will show much better results than will others just by chance , just as one family in sixteen is likely to have the very “high” number of 4 daughters in its first four children. Therefore, it is wrong reasoning to try out the 100 pig foods, select the food that shows the best results, and then compare it statistically with the average (sum) of all the other foods (or worse, with the poorest food). With such a procedure and enough samples, you will surely find one (or more) that seems very atypical statistically. A bridge hand with 12 or 13 spades seems very atypical, too, but if you deal enough bridge hands you will sooner or later get one with 12 or 13 spades — as a purely chance phenomenon, dealt randomly from a standard deck. Therefore we need a test that prevents our falling into such traps. Such a test usually operates by taking into account the differences among all the foods that were tried.

+

The method of Example 18-1 can be extended to handle this problem. Assume that four foods were each tested on twelve pigs. The weight gains in pounds for the pigs fed on foods A and B were as before. For foods C and D the weight gains were:

+

Ration C: 30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33

+

Ration D: 32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25

+

Now construct a benchmark universe of forty-eight index cards, one for each weight gain. Then deal out sets of four hands randomly. More specifically:

+

Step 1. Constitute a universe of the forty-eight observed weight gains in the four samples, writing the weight gains on cards.

+

Step 2. Draw four groups of twelve weight gains, with replacement, since we are drawing from a hypothesized infinite universe in which consecutive draws are independent. Determine whether the difference between the lowest and highest group means is as large or larger than the observed difference. If so write “yes,” otherwise “no.”

+

Step 3. Repeat step 2 fifty times.

+

Step 4. Count the trials in which the differences between the simulated groups with the highest and lowest means are as large or larger than the differences between the means of the highest and lowest observed samples. The proportion of such trials to the total number of trials is the probability that all four samples would differ as much as do the observed samples if they (in technical terms) come from the same universe.

+

The problem “Pigs4,” as handled by the steps given above, is quite similar to the way we handled Example TKTK, except that the data are measured (in pounds of weight gain) rather than simply counted (the number of rehabilitations).

+

Instead of working through a program for the procedure outlined above, let us consider a different approach to the problem — computing the difference between each pair of foods, six differences in all, converting all minus (-) signs to (+) differences. Then we can total the six differences, and compare the total with the sum of the six differences in the observed sample. The proportion of the resampling trials in which the observed sample sum is exceeded by the sum of the differences in the trials is the probability that the observed samples would differ as much as they do if they come from the same universe.5

+

One naturally wonders whether this latter test statistic is better than the range, as discussed above. It would seem obvious that using the information contained in all four samples should increase the precision of the estimate. And indeed it is so, as you can confirm for yourself by comparing the results of the two approaches. But in the long run, the estimate provided by the two approaches would be much the same. That is, there is no reason to think that one or another of the estimates is biased . However, successive samples from the population would steady down faster to the true value using the four-group-based estimate than they would using the range. That is, the four-group-based estimate would require a smaller sample of pigs.

+

Is there reason to prefer one or the other approach from the point of view of some decision that might be made? One might think that the range procedure throws light on which one of the foods is best in a way that the four-group-based approach does not. But this is not correct. Both approaches answer this question, and only this question: Are the results from the four foods likely to have resulted from the same “universe” of weight gains or not? If one wants to know whether the best food is similar to, say, all the other three, the appropriate approach would be a two-sample approach similar to various two-sample examples discussed earlier. (It would be still another question to ask whether the best food is different from the worst. One would then use a procedure different from either of those discussed above.)

+

If the foods cost the same, one would not need even a two-sample analysis to decide which food to feed. Feed the one whose results are best in the experiment, without bothering to ask whether it is “really” the best; you can’t go wrong as long as it doesn’t cost more to use it. (One could inquire about the probability that the food yielding the best results in the experiment would attain those results by chance even if it was worse than the others by some stipulated amount, but pursuing that line of thought may be left to the student as an exercise.)

+

In the problem “Pigs4,” we want a measure of how the groups differ. The obvious first step is to add up the total weight gains for each group: 382, 344, 364, 335. The next step is to calculate the differences between all the possible combinations of groups: 382-344=38, 382-364=18, 382-335=47, 344-364= -20, 344-335=9, 364-335=29.

+
+
+

24.2 Using Squared Differences

+

Here we face a choice. We could work with the absolute differences — that is, the results of the subtractions — treating each result as a positive number even if it is negative. We have seen this approach before. Therefore let us now take the opportunity of showing another approach. Instead of working with the absolute differences, we square each difference, and then SUM the squares. An advantage of working with the squares is that they are positive — a negative number squared is positive — which is convenient. Additionally, conventional statistics works mainly with squared quantities, and therefore it is worth getting familiar with that point of view. The squared differences in this case add up to 5096.

+

Using RESAMPLING STATS, we shuffle all the weight gains together, select four random groups, and determine whether the squared differences in the resample exceed 5096. If they do so with regularity, then we conclude that the observed differences could easily have occurred by chance.

+

With the CONCAT command, we string the four vectors into a single vector. After SHUFFLEing the 48-pig weight-gain vector G into H, we draw four randomized resamples of 12 (the SAMPLE command, which draws with replacement). And we compute the squared differences between the pairs of groups and SUM the squared differences just as we did above for the observed groups.

+

Last, we examine how often the simulated-trials data produce differences among the groups as large as (or larger than) the actually observed data — 5096.

+ +
' Program file: "pigs4.rss"
+
+NUMBERS (34 29 26 32 35 38 31 34 30 29 32 31) a
+NUMBERS (26 24 28 29 30 29 32 26 31 29 32 28) b
+NUMBERS (30 30 32 31 29 27 25 30 31 32 34 33) c
+NUMBERS (32 25 31 26 32 27 28 29 29 28 23 25) d
+' (Record the data for the 4 foods)
+CONCAT a b c d g
+' Combine the four vectors into g
+REPEAT 1000
+    ' Do 1000 trials
+    SHUFFLE g h
+    ' Shuffle all the weight gains.
+    SAMPLE 12 h p
+    ' Take 4 random samples, with replacement.
+    SAMPLE 12 h q
+    SAMPLE 12 h r
+    SAMPLE 12 h s
+    SUM p i
+    ' Sum the weight gains for the 4 resamples.
+    SUM q j
+    SUM r k
+    SUM s l
+    SUBTRACT i j ij
+    ' Find the differences between all the possible pairs of resamples.
+    SUBTRACT i k ik
+    SUBTRACT i l il
+    SUBTRACT j k jk
+    SUBTRACT j l jl
+    SUBTRACT k l kl
+    MULTIPLY ij ij ijsq
+    ' Find the squared differences.
+    MULTIPLY ik ik iksq
+    MULTIPLY il il ilsq
+    MULTIPLY jk jk jksq
+    MULTIPLY jl jl jlsq
+    MULTIPLY kl kl klsq
+    ADD ijsq iksq ilsq jksq jlsq klsq total
+    ' Add them together.
+    SCORE total z
+    ' Keep track of the total for each trial.
+END
+' End one trial, go back and repeat until 1000 trials are complete.
+HISTOGRAM z
+' Produce a histogram of the trial results.
+COUNT z >= 5096 k
+' Find out how many trials produced differences among groups as great as
+' or greater than those observed.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the result.
+
+' Note: The file "pigs4" on the Resampling Stats software disk contains
+' this set of commands.
+

[Histogram] PIGS4: Differences Among Four Pig Rations. X axis: sums of squares.

+

We find that our observed sum of squares — 5096 — was exceeded by randomly-drawn sums of squares in only 3 percent of our trials. We conclude that the four treatments are likely not all similar.
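A Python sketch of the PIGS4 test (our names; we compute the observed sum of squared differences directly from the data as listed in the program above, rather than hard-coding it):

import numpy as np
from itertools import combinations

rng = np.random.default_rng()
a = np.array([34, 29, 26, 32, 35, 38, 31, 34, 30, 29, 32, 31])
b = np.array([26, 24, 28, 29, 30, 29, 32, 26, 31, 29, 32, 28])
c = np.array([30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33])
d = np.array([32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25])
g = np.concatenate([a, b, c, d])

def sum_sq_diffs(groups):
    # Sum of squared differences between the totals of every pair of groups.
    totals = [grp.sum() for grp in groups]
    return sum((x - y) ** 2 for x, y in combinations(totals, 2))

observed = sum_sq_diffs([a, b, c, d])

n_trials = 1000
count = 0
for _ in range(n_trials):
    resamples = [rng.choice(g, size=12, replace=True) for _ in range(4)]
    if sum_sq_diffs(resamples) >= observed:
        count += 1
print(observed, count / n_trials)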

+
+
+

24.3 Exercises

+

Solutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.

+

Exercise 18-1

+

The data shown in Table 18-3 (Hollander and Wolfe 1999, 39, Table 3.1) might be data for the outcomes of two different mechanics, showing the length of time until the next overhaul is needed for nine pairs of similar vehicles. Or they could be two readings made by different instruments on the same sample of rock. In fact, they represent data for two successive tests for depression on the Hamilton scale, before and after drug therapy.

+ +

Table 18-3

+

Hamilton Depression Scale Values

Patient #   Score Before   Score After
    1           1.83           .878
    2            .50           .647
    3           1.62           .598
    4           2.48           2.05
    5           1.68           1.06
    6           1.88           1.29
    7           1.55           1.06
    8           3.06           3.14
    9           1.3            1.29

The task is to perform a test that will help decide whether there is a difference in the depression scores at the two visits (or the performances of the two mechanics). Perform both a bootstrap test and a permutation test, and give some reason for preferring one to the other in principle. How much do they differ in practice?

+

Exercise 18-2

+

Thirty-six of 72 (.5) taxis surveyed in Pittsburgh had visible seatbelts. Seventy-seven of 129 taxis in Chicago (.597) had visible seatbelts. Calculate a confidence interval for the difference in proportions, estimated at -.097. (Source: Peskun, Peter H., “A New Confidence Interval Method Based on the Normal Approximation for the Difference of Two Binomial Probabilities,” Journal of the American Statistical Association , 6/93 p. 656).

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/python-book/testing_procedures.html b/python-book/testing_procedures.html new file mode 100644 index 00000000..bd83690a --- /dev/null +++ b/python-book/testing_procedures.html @@ -0,0 +1,876 @@ + + + + + + + + + +Resampling statistics - 25  General Procedures for Testing Hypotheses + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

25  General Procedures for Testing Hypotheses

+
+ + + +
+ + + + +
+ + +
+ +
+

25.1 Introduction

+

The previous chapters have presented procedures for making statistical inferences that apply to both testing hypotheses and constructing confidence intervals: This chapter focuses on specific procedures for testing hypotheses.

+

The general idea in testing hypotheses is to ask: Is there some other universe which might well have produced the observed sample? So we consider alternative hypotheses. This is a straightforward exercise in probability, asking about behavior of one or more universes. The choice of another universe(s) to examine depends upon purposes and other considerations.

+
+
+

25.2 Canonical question-and-answer procedure for testing hypotheses

+
+
+

25.3 Skeleton procedure for testing hypotheses

+

Akin to skeleton procedure for questions in probability and confidence intervals shown elsewhere

+

The following series of questions will be repeated below in the context of a specific inference.

+

What is the question? What is the purpose to be served by answering the question?

+

Is this a “probability” or a “statistics” question?

+

Assuming the Question is a Statistical Inference Question

+

What is the form of the statistics question?

+

Hypothesis test, or confidence interval, or other inference? One must first decide whether the conceptual-scientific question is of the form a) a test about the probability that some sample is likely to happen by chance rather than being very surprising (a test of a hypothesis), or b) a question about the accuracy of the estimate of a parameter of the population based upon sample evidence (a confidence interval):

+

Assuming the Question Concerns Testing Hypotheses

+

Will you state the costs and benefits of various outcomes, perhaps in the form of a “loss function”? If “yes,” what are they?

+

How many samples of data have been observed?

+

One, two, more than two?

+

What is the description of the observed sample(s)?

+

Raw data?

+

Which characteristic(s) (parameters) of the population are of interest to you?

+

What are the statistics of the sample(s) that refer to this (these) characteristics(s) in which you are interested?

+

What comparison(s) to make?

+

Samples to each other?

+

Sample to particular universe(s)? If so, which?

+

What is the benchmark (null) universe?

+

This may include presenting the raw data and/or such summary statistics as the computed mean, median, standard deviation, range, interquartile range, or others.

+

If there is to be a Neyman-Pearson-type alternative universe, what is it? (In most cases the answer to this technical question is “no.”)

+

Which symbols for the observed entities?

+

Discrete or continuous?

+

What values or ranges of values?

+

Which sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? (Answer: samples the same size as has been observed)

+

[Here one may continue with the conventional method, using perhaps a t or F or chi-square test or whatever. Everything up to now is the same whether continuing with resampling or with a standard parametric test.]

+

What procedure will be used to produce the resampled entities?

+

Randomly drawn?

+

Simple (single step) or complex (multiple “if” drawings)?

+

What procedure to produce resample?

+

Which universe will you draw them from? With or without replacement?

+

What size resamples? Number of resample trials?

+

What to record as outcome of each resample trial?

+

Mean, median, or whatever of resample?

+

Classifying the outcomes

+

What is the criterion of significance to be used in evaluating the results of the test?

+

Stating the distribution of results

+

Graph of each statistic recorded — occurrences for each value.

+

Count the outcomes that exceed criterion and divide by number of trials.

+
+
+

25.4 An example: can the bio-engineer increase the female calf rate?

+

The question (from Hodges Jr and Lehmann 1970, 310): Female calves are more valuable than male calves. A bio-engineer claims to have a method that can produce more females. He tests the procedure on ten of your pregnant cows, and the result is nine females. Should you believe that his method has some effect? That is, what is the probability of a result this surprising occurring by chance?

+

The purpose: Female calves are more valuable than males.

+

Inference? Yes.

+

Test of hypothesis? Yes.

+

Will you state the costs and benefits of various outcomes (or a loss function)? We need only say that the benefits of a method that works are very large, and if the results are promising, it is worth gathering more data to confirm results.

+

How many samples of data are part of the significance test? One.

+

What is the size of the first sample about which you wish to make significance statements? Ten.

+

What comparison(s) to make? Compare sample to benchmark universe.

+

What is the benchmark universe that embodies the null hypothesis? 50-50 female, or 100/206 female.

+

If there is to be a Neyman-Pearson alternative universe, what is it? None.

+

Which symbols for the observed entities? Balls in bucket, or numbers.

+

What values or ranges of values? 0 or 1 (for the 50-50 universe), or 1-100 versus 101-206 (for the 100/206 universe).

+

Finite or infinite? Infinite.

+

Which sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? Ten calves compared to universe.

+

What procedure to produce entities? Sampling with replacement.

+

Simple (single step) or complex (multiple “if” drawings)? One can think of it either way.

+

What to record as outcome of each resample trial? The proportion (or number) of females.

+

What is the criterion to be used in the test? The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of half females. (Or frame in terms of a significance level.)

+

“One-tail” or “two-tail” test? One tail, because the farmer is only interested in females: Finding a large proportion of males would not be of interest, and would not cause one to reject the null hypothesis.

+

Computation of the probability sought. The actual computation of probability may be done with several formulaic or sample-space methods, and with several resampling methods. I will first show a resampling method and then several conventional methods. The following material, which allows one to compare resampling and conventional methods, is more germane to the explication of resampling in earlier chapters than it is to the theory of hypothesis tests discussed in this chapter, but it is more expedient to present it here.

+
+
+

25.5 Computation of Probabilities with Resampling

+

We can do the problem by hand as follows:

+
    +
  1. Constitute a bucket with either one blue and one pink ball, or 106 blue and 100 pink balls.
  2. +
  3. Draw ten balls with replacement, count pinks, and record.
  4. +
  5. Repeat step (2) say 400 times.
  6. +
  7. Calculate proportion of results with 9 or 10 pinks.
  8. +
+

Or, we can take advantage of the speed and efficiency of the computer as follows:

+
+
import numpy as np
+import matplotlib.pyplot as plt
+
+rnd = np.random.default_rng()
+
+# Number of simulated samples of ten calves.
+n = 10000
+
+# One count of females for each simulated sample.
+females = np.zeros(n)
+
+for i in range(n):
+    # Ten births, each equally likely to be female or male.
+    samp = rnd.choice(['female', 'male'], size=10, replace=True)
+    females[i] = np.sum(samp == 'female')
+
+plt.hist(females, bins='auto')
+
+# Count the trials with nine or more females, and convert to a proportion.
+k = np.sum(females >= 9)
+kk = k / n
+print('Proportion with >= 9 females:', kk)
+
+
Proportion with >= 9 females: 0.0127
(Figure: histogram of the number of females in each of the 10,000 simulated samples.)

This outcome implies that there is roughly a one percent chance that one would observe 9 or 10 female births in a single sample of 10 calves if the probability of a female on each birth is .5. This outcome should help the decision-maker decide about the plausibility of the bio-engineer’s claim to be able to increase the probability of female calves being born.

+
+
+

25.6 Conventional methods

+
+

25.6.1 The Sample Space and First Principles

+

Assume for a moment that our problem is a smaller one and therefore much easier — the probability of getting two females in two calves if the probability of a female is .5. One could then map out what mathematicians call the “sample space,” a technique that (in its simplest form) assigns to each outcome a single point, and find the proportion of points that correspond to a “success.” We list all four possible combinations — FF, FM, MF, MM. Now we look at the ratio of the number of combinations that have 2 females to the total, which is 1/4. We may then interpret this probability.
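As a check, a few lines of Python can enumerate this tiny sample space directly (a sketch of our own; the labels 'F' and 'M' are arbitrary):

from itertools import product

# All equally likely outcomes for two calves: FF, FM, MF, MM.
outcomes = list(product(['F', 'M'], repeat=2))
n_two_females = sum(1 for outcome in outcomes if outcome == ('F', 'F'))
print(n_two_females / len(outcomes))  # 1/4 = 0.25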

+

We might also use this method for (say) five female calves in a row. We can make a list of possibilities such as FFFFF, MFFFF, MMFFF, MMMFF … MFMFM … MMMMM. There will be 2*2*2*2*2 = 32 possibilities, and 64 and 128 possibilities for six and seven calves respectively. But when we get as high as ten calves, this method would become very troublesome.

+
+
+

25.6.2 Sample Space Calculations

+

For two females in a row, we could use the well known, and very simple, multiplication rule; we could do so even for ten females in a row. But calculating the probability of nine females in ten is a bit more complex.
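For instance, the multiplication rule gives \(.5 \times .5 = .25\) for two females in a row, and \(.5^{10} = 1/1024\) for ten in a row. But “nine or more females out of ten” can happen in several different orders, and counting those orders is what makes the calculation more complex.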

+
+
+

25.6.3 Pascal’s Triangle

+

One can use Pascal’s Triangle to obtain binomial coefficients for p = .5 and a sample size of 10, focusing on those for 9 or 10 successes. Then calculate the proportion of the total cases with 9 or 10 “successes” in one direction, to find the proportion of cases that pass beyond the criterion of 9 females. The method of Pascal’s Triangle requires more complete understanding of the probabilistic system than does the resampling simulation described above because Pascal’s Triangle requires that one understand the entire structure; simulation requires only that you follow the rules of the model.
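For instance, the row of the Triangle for \(n = 10\) is 1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1, whose entries sum to \(2^{10} = 1024\) equally likely arrangements. The two entries at one end, for 9 and for 10 females, sum to 11, so the probability is \(11/1024 \approx .011\), in line with the simulation result above.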

+
+
+

25.6.4 The Quincunx

+

The quincunx — a device that filters tiny balls through a set of bumper points not unlike a pinball machine, mentioned here simply for completeness — is more a simulation method than theoretical, but it may be considered “conventional.” Hence, it is included here.

+
+
+

25.6.5 Table of Binomial Coefficients

+

Pascal’s Triangle becomes cumbersome or impractical with large numbers — say, 17 females of 20 births — or with probabilities other than .5. One might produce the binomial coefficients by algebraic multiplication, but that, too, becomes tedious even with small sample sizes. One can also use the pre-computed table of binomial coefficients found in any standard text. But the probabilities for n = 10 and 9 or 10 females are too small to be shown.

+
+
+

25.6.6 Binomial Formula

+

For larger sample sizes, one can use the binomial formula. The binomial formula gives no deeper understanding of the statistical structure than does the Triangle (but it does yield a deeper understanding of the pure mathematics). With very large numbers, even the binomial formula is cumbersome.
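For the calf example the exact binomial answer is easy to get by computer; here is a minimal sketch of our own using the SciPy package mentioned earlier in the book.

from scipy.stats import binom

# P(9 or 10 females out of 10 births) when P(female) = .5 on each birth.
p_exact = binom.pmf(9, 10, 0.5) + binom.pmf(10, 10, 0.5)
print(p_exact)  # 11/1024, about 0.011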

+
+
+

25.6.7 The Normal Approximation

+

When the sample size becomes too large for any of the above methods, one can then use the Normal approximation, which yields results close to the binomial (as seen very nicely in the output of the quincunx). But use of the Normal distribution requires an estimate of the standard deviation, which can be derived either by formula or by resampling. (See a more extended parallel discussion in Chapter 27 on confidence intervals for the Bush-Dukakis comparison.)

+

The desired probability can be obtained from the Z formula and a standard table of the Normal distribution found in every elementary text.
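For the calf example the arithmetic runs roughly as follows (a sketch of our own, using the usual continuity correction of half a unit): the benchmark universe gives a mean of \(10 \times .5 = 5\) females and a standard deviation of \(\sqrt{10 \times .5 \times .5} \approx 1.58\), so \(z = (8.5 - 5) / 1.58 \approx 2.2\), and the Normal table gives a one-tail probability of about .013, close to the simulated and exact binomial results above.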

+

The Z table can be made less mysterious if we generate it with simulation, or with graph paper or Archimedes’ method, using as raw material (say) five “continuous” (that is, non-binomial) distributions, many of which are skewed: 1) Draw samples of (say) 50 or 100. 2) Plot the means to see that the Normal shape is the outcome. Then 3) standardize with the standard deviation by marking the standard deviations onto the histograms.

+

The aim of the above exercise and the heart of the conventional parametric method is to compare the sample result — the mean — to a standardized plot of the means of samples drawn from the universe of interest to see how likely it is that that universe produces means deviating as much from the universe mean as does our observed sample mean. The steps are:

+
    +
  1. Establish the Normal shape — from the exercise above, or from the quincunx or Pascal’s Triangle or the binomial formula or the formula for the Normal approximation or some other device.
  2. +
  3. Standardize that shape in standard deviations.
  4. +
  5. Compute the Z score for the sample mean — that is, its deviation from the universe mean in standard deviations.
  6. +
  7. Examine the Normal (or really, tables computed from graph paper, etc.) to find the probability of a mean deviating that far by chance.
  8. +
+

This is the canon of the procedure for most parametric work in statistics. (For some small samples, accuracy is improved with an adjustment.)

+
+
+
+

25.7 Choice of the benchmark universe1

+

In the example of the ten calves, the choice of a benchmark universe — a universe that (on average) produces equal proportions of males and females — seems rather straightforward and even automatic, requiring no difficult judgments. But in other cases the process requires more judgments.

+

Let’s consider another case where the choice of a benchmark universe requires no difficult judgments. Assume the U.S. Department of Labor’s Bureau of Labor Statistics (BLS) takes a very large sample — say, 20,000 persons — and finds a 10 percent unemployment rate. At some later time another but smaller sample is drawn — 2,000 persons — showing an 11 percent unemployment rate. Should BLS conclude that unemployment has risen, or is there a large chance that the difference between 10 percent and 11 percent is due to sample variability? In this case, it makes rather obvious sense to ask how often a sample of 2,000 drawn from a universe of 10 percent unemployment (ignoring the variability in the larger sample) will be as different as 11 percent due solely to sample variability? This problem differs from that of the calves only in the proportions and the sizes of the samples.
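A minimal simulation sketch of that question (our own illustration; the 10,000-trial count is arbitrary):

import numpy as np

rnd = np.random.default_rng()

n_trials = 10000
rates = np.zeros(n_trials)

for i in range(n_trials):
    # Draw 2,000 people from a universe with 10 percent unemployment.
    samp = rnd.choice(['unemployed', 'employed'], size=2000, p=[0.1, 0.9])
    rates[i] = np.mean(samp == 'unemployed')

# How often does such a sample show 11 percent or more unemployed?
print(np.mean(rates >= 0.11))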

+

Let’s change the facts and assume that a very large sample had not been drawn and only a sample of 2,000 had been taken, indicating 11 percent unemployment. A policy-maker asks the probability that unemployment is above ten percent. It would still seem rather straightforward to ask how often a universe of 10 percent unemployment would produce a sample of 2000 with a proportion of 11 percent unemployed.

+

Still another problem where the choice of benchmark hypothesis is relatively straightforward: Say that BLS takes two samples of 2000 persons a month apart, and asks whether there is a difference in the results. Pooling the two samples and examining how often two samples drawn from the pooled universe would be as different as observed seems obvious.

+

One of the reasons that the above cases — especially the two-sample case — seem so clear-cut is that the variance of the benchmark hypothesis is not an issue, being implied by the fact that the samples deal with proportions. If the data were continuous, however, this issue would quickly arise. Consider, for example, that the BLS might take the same sorts of samples and ask unemployed persons the lengths of time they had been unemployed. Comparing a small sample to a very large one would be easy to decide about. And even comparing two small samples might be straightforward — simply pooling them as is.

+

But what about if you have a sample of 2,000 with data on lengths of unemployment spells with a mean of 30 days, and you are asked the probability that it comes from a universe with a mean of 25 days? Now there arises the question about the amount of variability to assume for that benchmark universe. Should it be the variability observed in the sample? That is probably an overestimate, because a universe with a smaller mean would probably have a smaller variance, too. So some judgment is required; there cannot be an automatic “objective” process here, whether one proceeds with the conventional or the resampling method.

+

The example of the comparison of liquor retailing systems in Section 24.0.2 provides more material on this subject.

+
+
+

25.8 Why is statistics — and hypothesis testing — so difficult?

+

Why is statistics such a difficult subject? The foregoing procedural outline provides a window to the explanation. Hypothesis testing — as is also true of the construction of confidence intervals (but unlike simple probability problems) — involves a very long chain of reasoning, perhaps longer than in any other realm of systematic thinking. Furthermore, many decisions in the process require judgment that goes beyond technical analysis. All this emerges as one proceeds through the skeleton procedure above with any specific example.

+

(Bayes’ rule also is very difficult intuitively, but that probably is a result of the twists and turns required in all complex problems in conditional probability. Decision-tree analysis is counter-intuitive, too, probably because it starts at the end instead of the beginning of the story, as we are usually accustomed to doing.)


3  What is probability?

+
+ + + +
+ + + + +
+ + +
+ +
+

“Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales.” — Bertrand Russell (1945, p. xiv).

+
+
+

3.1 Introduction

+

The central concept for dealing with uncertainty is probability. Hence we must inquire into the “meaning” of the term probability. (The term “meaning” is in quotes because it can be a confusing word.)

+

You have been using the notion of probability all your life when drawing conclusions about what you expect to happen, and in reaching decisions in your public and personal lives.

+

You wonder: Will the kick from the 45 yard line go through the uprights? How much oil can you expect from the next well you drill, and what value should you assign to that prospect? Will you make money if you invest in tech stocks for the medium term, or should you spread your investments across the stock market? Will the next Space-X launch end in disaster? Your answers to these questions rest on the probabilities you estimate.

+

And you act on the basis of probabilities: You pay extra for a low-interest loan if you think that interest rates are going to go up. You bet heavily on a poker hand if there is a high probability that you have the best hand. A hospital decides not to buy another ambulance when the administrator judges that there is a low probability that all the other ambulances will ever be in use at once. NASA decides whether or not to send off the space shuttle this morning as scheduled.

+

The idea of probability is essential when we reason about uncertainty, and so this chapter discusses what is meant by such key terms as “probability,” “chance”, “sample,” and “universe.” It discusses the nature and the usefulness of the concept of probability as used in this book, and it touches on the source of basic estimates of probability that are the raw material of statistical inferences.

+
+
+

3.2 The “Meaning” of “Probability”

+

Probability is difficult to define (Feller 1968), but here is a useful informal starting point:

+
+

A probability is a number from 0 through 1 that reflects how likely it is that a particular event will happen.

+
+

Any particular stated probability is an assertion that indicates how likely you believe it is that an event will occur.

+

If you give an event a probability of 0 you mean that you are certain it will not happen. If you give probability 1 to an event, you mean you are certain that it will happen. For example, if I give you one card from a deck that you know contains only the standard 52 cards — before you look at the card, you can give probability 0 to the card being a joker, because you are certain the pack does not contain any joker cards. If I then select only the 13 spades from that deck, and give you a card from that selection, you will say there is probability 1 that the card is a black card, because all the spades are black cards.

+

A probability estimate of .2 indicates that you think there is twice as great a chance of the event happening as if you had estimated a probability of .1. This is the rock-bottom interpretation of the term “probability,” and the heart of the concept. 1

+

The idea of probability arises when you are not sure about what will happen in an uncertain situation. For example, you may lack information and therefore can only make an estimate. If someone asks you your name, you do not use the concept of probability to answer; you know the answer to a very high degree of surety. To be sure, there is some chance that you do not know your own name, but for all practical purposes you can be quite sure of the answer. If someone asks you who will win tomorrow’s baseball game, however, there is a considerable chance that you will be wrong no matter what you say. Whenever there is a reasonable chance that your prediction will be wrong, the concept of probability can help you.

+

The concept of probability helps you to answer the question, “How likely is it that…?” The purpose of the study of probability and statistics is to help you make sound appraisals of statements about the future, and good decisions based upon those appraisals. The concept of probability is especially useful when you have a sample from a larger set of data — a “universe” — and you want to know the probability of various degrees of likeness between the sample and the universe. (The universe of events you are sampling from is also called the “population,” a concept to be discussed below.) Perhaps the universe of your study is all high school graduates in 2018. You might then want to know, for example, the probability that the universe’s average SAT (university entrance) score will not differ from your sample’s average SAT by more than some arbitrary number of SAT points — say, ten points.

+

We have said that a probability statement is about the future. Well, usually. Occasionally you might state a probability about your future knowledge of past events — that is, “I think I’ll find out that…” — or even about the unknown past. (Historians use probabilities to measure their uncertainty about whether events occurred in the past, and the courts do, too, though the courts hesitate to say so explicitly.)

+

Sometimes one knows a probability, such as in the case of a gambler playing black on an honest roulette wheel, or an insurance company issuing a policy on an event with which it has had a lot of experience, such as a life insurance policy. But often one does not know the probability of a future event. Therefore, our concept of probability must include situations where extensive data are not available.

+

All of the many techniques used to estimate probabilities should be thought of as proxies for the actual probability. For example, if Mission Control at Space Central simulates what should and probably will happen in space if a valve is turned aboard a space craft just now being built, the test result on the ground is a proxy for the real probability of what will happen when the crew turn the valve in the planned mission.

+

In some cases, it is difficult to conceive of any data that can serve as a proxy. For example, the director of the CIA, Robert Gates, said in 1993 “that in May 1989, the CIA reported that the problems in the Soviet Union were so serious and the situation so volatile that Gorbachev had only a 50-50 chance of surviving the next three to four years unless he retreated from his reform policies” (The Washington Post , January 17, 1993, p. A42). Can such a statement be based on solid enough data to be more than a crude guess?

+

The conceptual probability in any specific situation is an interpretation of all the evidence that is then available . For example, a wise biomedical worker’s estimate of the chance that a given therapy will have a positive effect on a sick patient should be an interpretation of the results of not just one study in isolation, but of the results of that study plus everything else that is known about the disease and the therapy. A wise policymaker in business, government, or the military will base a probability estimate on a wide variety of information and knowledge. The same is even true of an insurance underwriter who bases a life-insurance or shipping-insurance rate not only on extensive tables of long-time experience but also on recent knowledge of other kinds. Each situation asks us to make a choice of the best method of estimating a probability — whether that estimate is objective — from a frequency series — or subjective, from the distillation of other experience.

+
+
+

3.3 The nature and meaning of the concept of probability

+

It is confusing and unnecessary to inquire what probability “really” is. (Indeed, the terms “really” and “is,” alone or in combination, are major sources of confusion in statistics and in other logical and scientific discussions, and it is often wise to avoid their use.) Various concepts of probability — which correspond to various common definitions of the term — are useful in particular contexts. This book contains many examples of the use of probability. Work with them will gradually develop a sound understanding of the concept.

+

There are two major concepts and points of view about probability — frequency and degrees of belief. Each is useful in some situations but not in others. Though they may seem incompatible in principle, there almost never is confusion about which is appropriate in a given situation.

+
    +
  1. Frequency . The probability of an event can be said to be the proportion of times that the event has taken place in the past, usually based on a long series of trials. Insurance companies use this when they estimate the probability that a thirty-five-year-old teacher will die during a period for which he wants to buy an insurance policy. (Notice this shortcoming: Sometimes you must bet upon events that have never or only infrequently taken place before, and so you cannot reasonably reckon the proportion of times they occurred one way or the other in the past.)

  2. +
  3. Degree of belief . The probability that an event will take place or that a statement is true can be said to correspond to the odds at which you would bet that the event will take place. (Notice a shortcoming of this concept: You might be willing to accept a five-dollar bet at 2-1 odds that your team will win the game, but you might be unwilling to bet a hundred dollars at the same odds.)

  4. +
+

See (Barnett 1982, chap. 3) for an in-depth discussion of different approaches to probability.

+

The connection between gambling and immorality or vice troubles some people about gambling examples. On the other hand, the immediacy and consequences of the decisions that the gambler has to make give the subject a special tang. There are several reasons why statistics use so many gambling examples — and especially tossing coins, throwing dice, and playing cards:

+
    +
  1. Historical . The theory of probability began with gambling examples of dice analyzed by Cardano, Galileo, and then by Pascal and Fermat.
  2. +
  3. Generality . These examples are not related to any particular walk of life, and therefore they can be generalized to applications in any walk of life. Students in any field — business, medicine, science — can feel equally at home with gambling examples.
  4. +
  5. Sharpness . These examples are particularly stark, and unencumbered by the baggage of particular walks of life or special uses.
  6. +
  7. Universality . Many other texts use these same examples, and therefore the use of them connects up this book with the main body of writing about probability and statistics.
  8. +
+

Often we’ll begin with a gambling example and then consider an example in one of the professional fields — such as business and other decision-making activities, biostatistics and medicine, social science and natural science — and everyday living. People in one field often can benefit from examples in others; for example, medical students should understand the need for business decision-making in terms of medical practice, as well as the biostatistical examples. And social scientists should understand the decision-making aspects of statistics if they have any interest in the use of their work in public policy.

+
+
+

3.4 Back to Proxies

+

Example of a proxy: The “probability risk assessments” (PRAs) that are made for the chances of failures of nuclear power plants are based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. A PRA can cost a nuclear facility $5 million.

+

Another example: If a manager of a high-street store looks at the sales of a particular brand of smart watches in the last two Decembers, and on that basis guesses how likely it is that she will run out of stock if she orders 200 smart watches, then the last two years’ experience is serving as a proxy for future experience. If a sales manager just “intuits” that the odds are 3 to 1 (a probability of .75) that the main local competitor will not meet a price cut, then all her past experience summed into her intuition is a proxy for the probability that it will really happen. Whether any proxy is a good or bad one depends on the wisdom of the person choosing the proxy and making the probability estimates.

+

How does one estimate a probability in practice? This involves practical skills not very different from the practical skills required to estimate with accuracy the length of a golf shot, the number of carpenters you will need to build a house, or the time it will take you to walk to a friend’s house; we will consider elsewhere some ways to improve your practical skills in estimating probabilities. For now, let us simply categorize and consider in the next section various ways of estimating an ordinary garden variety of probability, which is called an “unconditional” probability.

+
+
+

3.5 The various ways of estimating probabilities

+

Consider the probability of drawing an even-numbered spade from a deck of poker cards (consider the queen as even and the jack and king as odd). Here are several general methods of estimation, where we define each method in terms of the operations we use to make the estimate:

+
    +
  1. Experience.

    +

    The first possible source for an estimate of the probability of drawing an even-numbered spade is the purely empirical method of experience . If you have watched card games casually from time to time, you might simply guess at the proportion of times you have seen even-numbered spades appear — say, “about 1 in 15” or “about 1 in 9” (which is almost correct) or something like that. (If you watch long enough you might come to estimate something like 6 in 52.)

    +

    General information and experience are also the source for estimating the probability that the sales of a particular brand of smart watch this December will be between 200 and 250, based on sales the last two Decembers; that your team will win the football game tomorrow; that war will break out next year; or that a United States astronaut will reach Mars before a Russian astronaut. You simply put together all your relevant prior experience and knowledge, and then make an educated guess.

    +

    Observation of repeated events can help you estimate the probability that a machine will turn out a defective part or that a child can memorize four nonsense syllables correctly in one attempt. You watch repeated trials of similar events and record the results.

    +

    Data on the mortality rates for people of various ages in a particular country in a given decade are the basis for estimating the probabilities of death, which are then used by the actuaries of an insurance company to set life insurance rates. This is systematized experience — called a frequency series .

    +

    No frequency series can speak for itself in a perfectly objective manner. Many judgments inevitably enter into compiling every frequency series — deciding which frequency series to use for an estimate, choosing which part of the frequency series to use, and so on. For example, should the insurance company use only its records from last year, which will be too few to provide as much data as is preferable, or should it also use death records from years further back, when conditions were slightly different, together with data from other sources? (Of course, no two deaths — indeed, no events of any kind — are exactly the same. But under many circumstances they are practically the same, and science is only interested in such “practical” considerations.)

    +

    Given that we have to use judgment in probability estimates, the reader may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities; the two terms are working synonyms.

    +

    There is no logical difference between the sort of probability that the life insurance company estimates on the basis of its “frequency series” of past death rates, and the manager’s estimates of the sales of smart watches in December, based on sales in that month in the past two years. 2

    +

The concept of a probability based on a frequency series can be rendered almost useless when all the observations are repetitions of a single magnitude — for example, the case of all successes and zero failures of space-shuttle launches prior to the Challenger shuttle tragedy in the 1980s; in those data alone there was almost no basis to estimate the probability of a shuttle failure. (Probabilists have made some rather peculiar attempts over the centuries to estimate probabilities from the length of a zero-defect time series — such as the fact that the sun has never failed to rise (foggy days aside!) — based on the undeniable fact that the longer such a series is, the smaller the probability of a failure; see e.g., (Whitworth 1897, xix–xli). However, one surely has more information on which to act when one has a long series of observations of the same magnitude rather than a short series.)

  2. +
  3. Simulated experience.

    +

    A second possible source of probability estimates is empirical scientific investigation with repeated trials of the phenomenon. This is an empirical method even when the empirical trials are simulations. In the case of the even-numbered spades, the empirical scientific procedure is to shuffle the cards, deal one card, record whether or not the card is an even-number spade, replace the card, and repeat the steps a good many times. The proportions of times you observe an even-numbered spade come up is a probability estimate based on a frequency series.

    +

    You might reasonably ask why we do not just count the number of even-numbered spades in the deck of fifty-two cards — using the sample space analysis you see below. No reason at all. But that procedure would not work if you wanted to estimate the probability of a baseball batter getting a hit or a cigarette lighter producing flame.

    +

    Some varieties of poker are so complex that experiment is the only feasible way to estimate the probabilities a player needs to know.

    +

The resampling approach to statistics produces estimates of most probabilities with this sort of experimental “Monte Carlo” method (see the short sketch just after this list). More about this later.

  4. +
  5. Sample space analysis and first principles.

    +

    A third source of probability estimates is counting the possibilities — the quintessential theoretical method. For example, by examination of an ordinary die one can determine that there are six different numbers that can come up. One can then determine that the probability of getting (say) either a “1” or a “2,” on a single throw, is 2/6 = 1/3, because two among the six possibilities are “1” or “2.” One can similarly determine that there are two possibilities of getting a “1” plus a “6” out of thirty-six possibilities when rolling two dice, yielding a probability estimate of 2/36 = 1/18.

    +

    Estimating probabilities by counting the possibilities has two requirements: 1) that the possibilities all be known (and therefore limited), and few enough to be studied easily; and 2) that the probability of each particular possibility be known, for example, that the probabilities of all sides of the dice coming up are equal, that is, equal to 1/6.

  6. +
  7. Mathematical shortcuts to sample-space analysis.

    +

    A fourth source of probability estimates is mathematical calculations . If one knows by other means that the probability of a spade is 1/4 and the probability of an even-numbered card is 6/13, one can use probability calculation rules to calculate that the probability of turning up an even-numbered spade is 6/52 (that is, 1/4 x 6/13). If one knows that the probability of a spade is 1/4 and the probability of a heart is 1/4, one can then calculate that the probability of getting a heart or a spade is 1/2 (that is 1/4 + 1/4). The point here is not the particular calculation procedures, which we will touch on later, but rather that one can often calculate the desired probability on the basis of already-known probabilities.

    +

    It is possible to estimate probabilities with mathematical calculation only if one knows by other means the probabilities of some related events. For example, there is no possible way of mathematically calculating that a child will memorize four nonsense syllables correctly in one attempt; empirical knowledge is necessary.

  8. +
  9. Kitchen-sink methods.

    +

    In addition to the above four categories of estimation procedures, the statistical imagination may produce estimates in still other ways such as a) the salesman’s seat-of-the-pants estimate of what the competition’s price will be next quarter, based on who-knows-what gossip, long-time acquaintance with the competitors, and so on, and b) the probability risk assessments (PRAs) that are made for the chances of failures of nuclear power plants based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. Any of these methods may be a combination of theoretical and empirical methods.

  10. +
+
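To make the “simulated experience” method above concrete, here is a short sketch of our own for the even-numbered-spade example; later chapters of the book develop such simulations in detail.

import numpy as np

rnd = np.random.default_rng()

# Build a 52-card deck as (rank, suit) pairs; jack=11, queen=12, king=13,
# so the queen counts as even, the jack and king as odd.
deck = [(rank, suit)
        for rank in range(1, 14)
        for suit in ['spade', 'heart', 'diamond', 'club']]

n = 10000
hits = 0
for _ in range(n):
    # Deal one card at random, note it, and (in effect) replace it.
    rank, suit = deck[rnd.integers(len(deck))]
    if suit == 'spade' and rank % 2 == 0:
        hits = hits + 1

print(hits / n)  # should settle near 6/52, about 0.115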

As an example of an organization struggling with kitchen-sink methods, consider the estimation of the probability of failure for the tragic flight of the Challenger shuttle, as described by the famous physicist and Nobel laureate Richard Feynman. This is a very real case that includes just about every sort of complication that enters into estimating probabilities.

+
+

…Mr. Ullian told us that 5 out of 127 rockets that he had looked at had failed — a rate of about 4 percent. He took that 4 percent and divided it by 4, because he assumed a manned flight would be safer than an unmanned one. He came out with about a 1 percent chance of failure, and that was enough to warrant the destruct charges.

+

But NASA [the space agency in charge] told Mr. Ullian that the probability of failure was more like 1 in \(10^5\).

+

I tried to make sense out of that number. “Did you say 1 in \(10^5\)?”

+

“That’s right; 1 in 100,000.”

+

“That means you could fly the shuttle every day for an average of 300 years between accidents — every day, one flight, for 300 years — which is obviously crazy!”

+

“Yes, I know,” said Mr. Ullian. “I moved my number up to 1 in 1000 to answer all of NASA’s claims — that they were much more careful with manned flights, that the typical rocket isn’t a valid comparison, etcetera.”

+

But then a new problem came up: the Jupiter probe, Galileo , was going to use a power supply that runs on heat generated by radioactivity. If the shuttle carrying Galileo failed, radioactivity could be spread over a large area. So the argument continued: NASA kept saying 1 in 100,000 and Mr. Ullian kept saying 1 in 1000, at best.

+

Mr. Ullian also told us about the problems he had in trying to talk to the man in charge, Mr. Kingsbury: he could get appointments with underlings, but he never could get through to Kingsbury and find out how NASA got its figure of 1 in 100,000 (Feynman and Leighton 1988, 179–80).

+
+

Feynman tried to ascertain more about the origins of the figure of 1 in 100,000 that entered into NASA’s calculations. He performed an experiment with the engineers:

+
+

…“Here’s a piece of paper each. Please write on your paper the answer to this question: what do you think is the probability that a flight would be uncompleted due to a failure in this engine?”

+

They write down their answers and hand in their papers. One guy wrote “99-44/100% pure” (copying the Ivory soap slogan), meaning about 1 in 200. Another guy wrote something very technical and highly quantitative in the standard statistical way, carefully defining everything, that I had to translate — which also meant about 1 in 200. The third guy wrote, simply, “1 in 300.”

+

Mr. Lovingood’s paper, however, said:

+

“Cannot quantify. Reliability is judged from:

+
    +
  • past experience
  • +
  • quality control in manufacturing
  • +
  • engineering judgment”
  • +
+

“Well,” I said, “I’ve got four answers, and one of them weaseled.” I turned to Mr. Lovingood: “I think you weaseled.”

+

“I don’t think I weaseled.”

+

“You didn’t tell me what your confidence was, sir; you told me how you determined it. What I want to know is: after you determined it, what was it?”

+

He says, “100 percent” — the engineers’ jaws drop, my jaw drops; I look at him, everybody looks at him — “uh, uh, minus epsilon!”

+

So I say, “Well, yes; that’s fine. Now, the only problem is, WHAT IS EPSILON?”

+

He says, “\(10^{-5}\).” It was the same number that Mr. Ullian had told us about: 1 in 100,000.

+

I showed Mr. Lovingood the other answers and said, “You’ll be interested to know that there is a difference between engineers and management here — a factor of more than 300.”

+

He says, “Sir, I’ll be glad to send you the document that contains this estimate, so you can understand it.”

+

Later, Mr. Lovingood sent me that report. It said things like “The probability of mission success is necessarily very close to 1.0” — does that mean it is close to 1.0, or it ought to be close to 1.0? — and “Historically, this high degree of mission success has given rise to a difference in philosophy between unmanned and manned space flight programs; i.e., numerical probability versus engineering judgment.” As far as I can tell, “engineering judgment” means they’re just going to make up numbers! The probability of an engine-blade failure was given as a universal constant, as if all the blades were exactly the same, under the same conditions. The whole paper was quantifying everything. Just about every nut and bolt was in there: “The chance that a HPHTP pipe will burst is \(10^{-7}\).” You can’t estimate things like that; a probability of 1 in 10,000,000 is almost impossible to estimate. It was clear that the numbers for each part of the engine were chosen so that when you add everything together you get 1 in 100,000. (Feynman and Leighton 1988, 182–83).

+
+

We see in the Challenger shuttle case very mixed kinds of inputs to actual estimates of probabilities. They include frequency series of past flights of other rockets, judgments about the relevance of experience with that different sort of rocket, adjustments for special temperature conditions (cold), and much much more. There also were complex computational processes in arriving at the probabilities that were made the basis for the launch decision. And most impressive of all, of course, are the extraordinary differences in estimates made by various persons (or perhaps we should talk of various statuses and roles) which make a mockery of the notion of objective estimation in this case.

+

Working with different sorts of estimation methods in different sorts of situations is not new; practical statisticians do so all the time. We argue that we should make no apology for doing so.

+

The concept of probability varies from one field of endeavor to another; it is different in the law, in science, and in business. The concept is most straightforward in decision-making situations such as business and gambling; there it is crystal-clear that one’s interest is entirely in making accurate predictions so as to advance the interests of oneself and one’s group. The concept is most difficult in social science, where there is considerable doubt about the aims and values of an investigation. In sum, one should not think of what a probability “is” but rather how best to estimate it. In practice, neither in actual decision-making situations nor in scientific work — nor in classes — do people experience difficulties estimating probabilities because of philosophical confusions. Only philosophers and mathematicians worry — and even they really do not need to worry — about the “meaning” of probability3.

+
+
+

3.6 The relationship of probability to other magnitudes

+

An important argument in favor of approaching the concept of probability as an estimate is that an estimate of a probability often (though not always) is the opposite side of the coin from an estimate of a physical quantity such as time or space.

+

For example, uncertainty about the probability that one will finish a task within 9 minutes is another way of labeling the uncertainty that the time required to finish the task will be less than 9 minutes. Hence, if estimation is appropriate for time in this case, it should be equally appropriate for probability. The same is true for the probability that the quantity of smart watches sold will be between 200 and 250 units.

+

Hence the concept of probability, and its estimation in any particular case, should be no more puzzling than is the “dual” concept of time or distance or quantities of smart watches. That is, lack of certainty about the probability that an event will occur is not different in nature from lack of certainty about the amount of time or distance in the event. There is no essential difference between whether a part 2 inches in length will be the next to emerge from the machine, or what the length of the next part will be, or the length of the part that just emerged (if it has not yet been measured).

+

The information available for the measurement of (say) the length of a car or the location of a star is exactly the same information that is available with respect to the concept of probability in those situations. That is, one may have ten disparate observations of a car’s length which then constitute a probability distribution, and the same for the altitude of a star in the heavens.

+

In a book of puzzles about probability (Mosteller 1987, problem 42), this problem appears: “If a stick is broken in two at random, what is the average length of the smaller piece?” This particular puzzle does not even mention probability explicitly, and no one would feel the need to write a scholarly treatise on the meaning of the word “length” here, any more than one would do so if the question were about an astronomer’s average observation of the angle of a star at a given time or place, or the average height of boards cut by a carpenter, or the average size of a basketball team. Nor would one write a treatise about the “meaning” of “time” if a similar puzzle involved the average time between two bird calls. Yet a rephrasing of the problem reveals its tie to the concept of probability, to wit: What is the probability that the smaller piece will be (say) more than half the length of the larger piece? Or, what is the probability distribution of the sizes of the shorter piece?

+

The duality of the concepts of probability and physical entities also emerges in Whitworth’s discussion (1897) of fair betting odds:

+
+

…What sum ought you fairly give or take now, while the event is undetermined, in exchange for the assurance that you shall receive a stated sum (say $1,000) if the favourable event occur? The chance of receiving $1,000 is worth something. It is not as good as the certainty of receiving $1,000, and therefore it is worth less than $1,000. But the prospect or expectation or chance, however slight, is a commodity which may be bought and sold. It must have its price somewhere between zero and $1,000. (p. xix.)

+
+
+

…And the ratio of the expectation to the full sum to be received is what is called the chance of the favourable event. For instance, if we say that the chance is 1/5, it is equivalent to saying that $200 is the fair price of the contingent $1,000. (p. xx.)…

+
+
+

The fair price can sometimes be calculated mathematically from a priori considerations: sometimes it can be deduced from statistics, that is, from the recorded results of observation and experiment. Sometimes it can only be estimated generally, the estimate being founded on a limited knowledge or experience. If your expectation depends on the drawing of a ticket in a raffle, the fair price can be calculated from abstract considerations: if it depend upon your outliving another person, the fair price can be inferred from recorded statistics: if it depend upon a benefactor not revoking his will, the fair price depends upon the character of your benefactor, his habit of changing his mind, and other circumstances upon the knowledge of which you base your estimate. But if in any of these cases you determine that $300 is the sum which you ought fairly to accept for your prospect, this is equivalent to saying that your chance, whether calculated or estimated, is 3/10... (p. xx.)

+
+

It is indubitable that along with frequency data, a wide variety of other information will affect the odds at which a reasonable person will bet. If the two concepts of probability stand on a similar footing here, why should they not be on a similar footing in all discussion of probability? I can think of no reason that they should not be so treated.

+

Scholars write about the “discovery” of the concept of probability in one century or another. But is it not likely that even in pre-history, when a fisherperson was asked how long the big fish was, s/he sometimes extended her/his arms and said, “About this long, but I’m not exactly sure,” and when a scout was asked how many of the enemy there were, s/he answered, “I don’t know for sure...probably about fifty.” The uncertainty implicit in these statements is the functional equivalent of probability statements. There simply is no need to make such heavy work of the probability concept as the philosophers and mathematicians and historians have done.

+
+
+

3.7 What is “chance”?

+

The study of probability focuses on events with randomness — that is, events about which there is uncertainty whether or not they will occur. And the uncertainty refers to your knowledge rather than to the event itself. For example, consider this physical illustration with a remote control. The remote control has a front end that should point at the TV that it controls, and a back end that will usually be pointing at me, the user of the remote control. Call the front the TV end, and the back the sofa end of the remote control.

+

I spin the remote control like a baton twirler. If I hold it at the sofa end and attempt to flip it so that it turns only half a revolution, I can be almost sure that I will correctly get the TV end and not the sofa end. And if I attempt to flip it a full revolution, again I can almost surely get the sofa end successfully. It is not a random event whether I catch the sofa end or the TV end (here ignoring those throws when I catch neither end) when doing only half a revolution or one revolution. The result is quite predictable in both these simple maneuvers so far.

+

When I say the result is “predictable,” I mean that you would not bet with me about whether this time I’ll get the TV or the sofa end. So we say that the outcome of my flip aiming at half a revolution is not “random.”

+

When I twirl the remote control so little, I control (almost completely) whether the sofa end or the TV end comes down to my hand; this is the same as saying that the outcome does not occur by chance.

+

The terms “random” and “chance” implicitly mean that you believe that I cannot control or cannot know in advance what will happen.

+

Whether this twirl will be the rare time I miss, however, should be considered chance. Though you would not bet at even odds on my catching the sofa end versus the TV end if there is to be only a half or one full revolution, you might bet — at (say) odds of 50 to 1 — that I will make a mistake and get it wrong, or drop it. So the very same flip can be seen as random or determined depending on what aspect of it we are looking at.

+

Of course you would not bet against me about my not making a mistake, because the bet might cause me to make a mistake purposely. This “moral hazard” is a problem that emerges when a person buys life insurance and may commit suicide, or when a boxer may lose a fight purposely. The people who stake money on those events say that such an outcome is “fixed” (a very appropriate word) and not random.

+

Now I attempt more difficult maneuvers with the remote control. I can do \(1\frac{1}{2}\) flips pretty well, and two full revolutions with some success — maybe even \(2\frac{1}{2}\) flips on a good day. But when I get much beyond that, I cannot determine very well whether I’ll get the sofa or the TV end. The outcome gradually becomes less and less predictable — that is, more and more random.

+

If I flip the remote control so that it revolves three or more times, I can hardly control the process at all, and hence I cannot predict well whether I’ll get the sofa end or the TV end. With 5 revolutions I have absolutely no control over the outcome; I cannot predict the outcome better than 50-50. At that point, getting the sofa end or the TV end has become a completely random event for our purposes, just like flipping a coin high in the air. So at that point we say that “chance” controls the outcome, though that word is just a synonym for my lack of ability to control and predict the outcome. “Chance” can be thought to stand for the myriad small factors that influence the outcome.

+

We see the same gradual increase in randomness with increasing numbers of shuffles of cards. After one shuffle, a skilled magician can know where every card is, and after two shuffles there is still much order that s/he can work with. But after (say) five shuffles, the magician no longer has any power to predict and control, and the outcome of any draw can then be thought of as random chance.

+

At what point do we say that the outcome is “random” or “pure chance” as to whether my hand will grasp the TV end, the sofa end, or at some other spot? There is no sharp boundary to this transition. Rather, the transition is gradual; this is the crucial idea, and one that I have not seen stated before.

+

Whether or not we refer to the outcome as random depends upon the twirler’s skill, which influences how predictable the event is. A baton twirler or juggler might be able to do ten flips with a non-random outcome; if the twirler is an expert and the outcome is highly predictable, we say it is not random but rather is determined.

+

Again, this shows that the randomness is not a property of the physical event, but rather of a person’s knowledge and skill.

+
+
+

3.8 What Do We Mean by “Random”?

+

We have defined “chance” and “random” as the absence of predictive power and/or explanation and/or control. Here we should not confuse the concepts of determinacy-indeterminacy and predictable-unpredictable. What matters for decision purposes is whether you can predict. Whether the process is “really” determinate is largely a matter of definition and labeling, an unnecessary philosophical controversy for our purposes (and perhaps for any other purpose) 4.

+

The remote control in the previous demonstration becomes unpredictable — that is, random — even though it still is subject to similar physical processes as when it is predictable. I do not deny in principle that these processes can be “understood,” or that one could produce a machine that would — like a baton twirler — make the course of the remote control predictable for many turns. But in practice we cannot make the predictions — and it is the practical reality, rather than the principle, that matters here.

+

When I flip the remote control half a turn or one turn, I control (almost completely) whether it comes down at the sofa end or the TV end, so we do not say that the outcome is chance. Much the same can be said about what happens to the predictability of drawing a given card as one increases the number of times one shuffles a deck of cards.

+

Consider, too, a set of fake dice that I roll. Before you know they are fake, you assume that the probabilities of various outcomes are a matter of chance. But after you know that the dice are loaded, you no longer assume that the outcome is chance. This illustrates how the probabilities you work with are influenced by your knowledge of the facts of the situation.

+

Admittedly, this way of thinking about probability takes some getting used to. Events may appear to be random, but in fact, we can predict them — and vice versa. For example, suppose a magician does a simple trick with dice such as this one:

+
+

The magician turns her back while a spectator throws three dice on the table. He is instructed to add the faces. He then picks up any one die, adding the number on the bottom to the previous total. This same die is rolled again. The number it now shows is also added to the total. The magician turns around. She calls attention to the fact that she has no way of knowing which of the three dice was used for the second roll. She picks up the dice, shakes them in her hand a moment, then correctly announces the final sum.

+
+

Method: When the spectator rolls the dice, they get three numbers, one from each of the three dice. Call these numbers \(a\), \(b\) and \(c\). Then he chooses one die — it doesn’t matter which, but let’s say he chooses the third die, with value \(c\). He adds the bottom of the third die to the total. Here’s the trick — opposite faces on a standard die always add up to 7 — 1 is opposite 6, 2 is opposite 5, and 3 is opposite 4. So the total is now \(a + b + 7\). Then the spectator rolls the third die again, to get a new number \(d\). The total is now \(a + b + 7 + d\). When the magician turns around she can see what \(a\) and \(b\) and \(d\) are, so to get the right final total, she just needs to add 7 (Gardner 1985, p259). Ben Sparks does a nice demonstration of the trick in a Numberphile video on YouTube.

+

The point here is that, until you know the trick, you cannot predict the final sum, so for you the result is random. If you do know the trick, as the magician does, you can predict the result, and it is not random. Whether something is “random” or not depends on what you know.

+

Consider the distributions of heights of various groups of living things (including people). When we consider all living things taken together, the shape of the overall distribution — many individuals at the tiny end where the viruses are found, and very few individuals at the tall end where the giraffes are — is determined mostly by the distribution of species that have different mean heights. Hence we can explain the shape of that distribution, and we do not say that it is determined by “chance.” But with a homogeneous cohort of a single species — say, all 25-year-old human females in the U.S. — our best description of the shape of the distribution is “chance.” With situations in between, the shape is partly due to identifiable factors — e.g. age — and partly due to “chance.”

+

Or consider the case of a basketball shooter: What causes her or him to make (or not make) a basket this shot, after a string of successes? Much must be ascribed to chance variation. But what causes a given shooter to be very good or very poor relative to other players? For that explanation we can point to such factors as the amount of practice or natural talent.

+

Again, all this has nothing to do with whether the mechanism is “really” chance, unlike the arguments that have been raging in physics for a century. That is the point of the remote control demonstration. Our knowledge and our power to predict the outcome shift gradually from non-chance (that is, “determined”) to chance (“not determined”), even though the same sort of physical mechanism produces each throw of the remote control.

+

Earlier I mentioned that when we say that chance controls the outcome of the remote control flip after (say) five revolutions, we mean that there are many small forces that affect the outcome. The effect of each force is not known, and each is independent of the other. None of these forces is large enough for me (as the remote control twirler) to deal with, or else I would deal with it and be able to improve my control and my ability to predict the outcome. This concept of many small, independent influences — “small” meaning in practice those whose effects cannot be identified and allowed for — is important in statistical inference. For example, as we will see later, when we add many unpredictable deviations together, and plot the distribution of the result, we end up with the famous and very common bell-shaped normal distribution — this striking result comes about because of a mathematical phenomenon called the Central Limit Theorem. We will show this at work, later in the book.
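To see this effect in miniature, here is a small simulation sketch of our own (written in R, with made-up names such as n_trials): we add up 100 small, independent, unpredictable deviations, repeat that many times, and look at the shape of the resulting totals.

# Add up many small, independent random deviations, and repeat many times.
n_trials <- 10000
totals <- numeric(n_trials)
for (i in 1:n_trials) {
    # 100 small influences, each somewhere between -0.5 and 0.5.
    influences <- runif(100, min=-0.5, max=0.5)
    totals[i] <- sum(influences)
}
# The histogram of the totals is close to the bell-shaped normal curve.
hist(totals)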

+
+
+

3.9 Randomness from the computer

+

We now have the idea of random variation as being variation we cannot predict. For example, when we flip the remote control through many rotations, we can no longer easily predict which end will land in our hand. We can call the result of any particular flip — random — because we cannot predict whether the result will be TV end or sofa end.

+

We still know some things about the result — it will be one of two options — TV or sofa (unless we drop it). But we cannot predict which. We say the result of each flip is random if we cannot do anything to improve our prediction of 50% for TV (or sofa) end on the next flip.

+

We are not saying the result is random in any deep, non-deterministic sense — we are only saying we can treat the result as random, because we cannot predict it.

+

Now consider getting random numbers from the computer, where the numbers can either be 0 or 1. This is rather like tossing a fair coin, where the results are 0 and 1 rather than “heads” and “tails”.

+

When we ask the computer for a random choice between 0 and 1, we accept it is random-enough, or random-like, if we can’t do anything to predict which of 0 or 1 we will get on any one trial. We can’t do better than guessing that the next value will be — say — 0 — and whichever number we guess, we will only ever have a 50% chance of being correct. We are not saying the computer is giving truly random numbers in some deep sense, only numbers we cannot distinguish from truly random numbers, because we cannot do anything to predict them. The technical term for random numbers from the computer is therefore pseudo-random — meaning, like random numbers, in the sense they are effectively unpredictable. Effectively unpredictable means there is no practical way for you, or even a very powerful computer, to do anything to improve your prediction of the next number in the series.

+
+
+

3.10 The philosophers’ dispute about the concept of probability

+

Those who call themselves “objectivists” or “frequentists” and those who call themselves “personalists” or “Bayesians” have been arguing for hundreds or even thousands of years about the “nature” of probability. The objectivists insist (correctly) that any estimation not based on a series of observations is subject to potential bias, from which they conclude (incorrectly) that we should never think of probability that way. They are worried about the perversion of science, the substitution of arbitrary assessments for value-free data-gathering. The personalists argue (correctly) that in many situations it is not possible to obtain sufficient data to avoid considerable judgment. Indeed, if a probability is about the future, some judgment is always required — about which observations will be relevant, and so on. They sometimes conclude (incorrectly) that the objectivists’ worries are unimportant.

+

As is so often the case, the various sides in the argument have different sorts of situations in mind. As we have seen, the arguments disappear if one thinks operationally with respect to the purpose of the work, rather than in terms of properties, as mentioned earlier.

+

Here is an example of the difficulty of focusing on the supposed properties of the mechanism or situation: The mathematical theorist asserts that the probability of a die falling with the “5” side up is 1/6, on the basis of the physics of equally-weighted sides. But if one rolls a particular die a million times, and it turns up “5” less than 1/6 of the time, one surely would use the observed proportion as the practical estimate. The probabilities of various outcomes with cheap dice may depend upon the number of pips drilled out on a side. In 20,000 throws of a red die and 20,000 throws of a white die, the proportions of 3’s and 4’s were, respectively, .159 and .146, .145 and .142 — all far below the expected proportions of .167. That is, 3’s and 4’s occurred about 11 percent less often than if the dice had been perfectly formed, a difference that could make a big difference in a gambling game (Bulmer 1979, 18).

+

It is reasonable to think of both the engineering method (the theoretical approach) and the empirical method (experimentation and data collection) as two alternative ways to estimate a probability. The two methods use different processes and different proxies for the probability you wish to estimate. One must adduce additional knowledge to decide which method to use in any given situation. It is sensible to use the empirical method when data are available. (But use both together whenever possible.)

+

In view of the inevitably subjective nature of probability estimates, you may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities. The two terms are working synonyms.

+

Most important: One cannot sensibly talk about probabilities in the abstract, without reference to some set of facts. The topic then loses its meaning, and invites confusion and argument. This also is a reason why a general formalization of the probability concept does not make sense.

+
+
+

3.11 The relationship of probability to the concept of resampling

+

There is no generally agreed definition of the concept of the resampling method in statistics. Unlike some other writers, I prefer to apply the term to problems in both pure probability and statistics. This set of examples may illustrate:

+
    +
  1. Consider asking about the number of hits one would expect from a 0.250 (25 percent) batter in a 400 at-bat season. One would call this a problem in “probability.” The sampling distribution of the batter’s results can be calculated by formula or produced by Monte Carlo simulation.

  2. +
  3. Now consider examining the number of hits in a given batter’s season, and asking how likely that number (or fewer) is to occur by chance if the batter’s long-run batting average is 0.250. One would call this a problem in “statistics.” But just as in example (1) above, the answer can be calculated by formula or produced by Monte Carlo simulation. And the calculation or simulation is exactly the same as used in (1).

    +

    Here the term “resampling” might be applied to the simulation with considerable agreement among people familiar with the term, but perhaps not by all such persons.

  4. +
  5. Next consider an observed distribution of distances that a batter’s hits travel in a season with 100 hits, with an observed mean of 150 feet per hit. One might ask how likely it is that a sample of 10 hits drawn with replacement from the observed distribution of hit lengths (with a mean of 150 feet) would have a mean greater than 160 feet, and one could easily produce an answer with repeated Monte Carlo samples. Traditionally this would be called a problem in probability.

  6. +
  7. Next consider that a batter gets 10 hits with a mean of 160 feet, and one wishes to estimate the probability that the sample would be produced by a distribution as specified in (3). This is a problem in statistics, and by 1996, it is common statistical practice to treat it with a resampling method. The actual simulation would, however, be identical to the work described in (3).

  8. +
+

Because the work in (4) and (2) differs only in that question (4) involves measured data and question (2) involves counted data, there seems no reason to discriminate between the two cases with respect to the term “resampling.” With respect to the pairs of cases (1) and (2), and (3) and (4), there is no difference in the actual work performed, though there is a difference in the way the question is framed. I would therefore urge that the label “resampling” be applied to (1) and (3) as well as to (2) and (4), to bring out the important fact that the procedure is the same as in resampling questions in statistics.
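As a concrete sketch of the simulation that examples (1) and (2) share, here is one way it might look in R. This is our own code and names, not taken from the original text.

# Simulate many 400 at-bat seasons for a long-run 0.250 hitter.
n_trials <- 10000
hits_per_season <- numeric(n_trials)
for (i in 1:n_trials) {
    # 1 means a hit (probability 0.25), 0 means no hit.
    at_bats <- sample(c(1, 0), size=400, replace=TRUE, prob=c(0.25, 0.75))
    hits_per_season[i] <- sum(at_bats)
}
# Example (1): the sampling distribution of season hit totals.
hist(hits_per_season)
# Example (2): how often does a season with 85 hits or fewer occur by
# chance? (85 is an arbitrary observed total chosen only for illustration.)
mean(hits_per_season <= 85)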

+

One could easily produce examples like (1) and (2) for cases that are similar except that the drawing is without replacement, as in the sampling version of Fisher’s permutation test — for example, a tea taster (Fisher 1935; Fisher 1960, chap. II, section 5). And one could adduce the example of prices in different state liquor control systems (see Section 12.16) which is similar to cases (3) and (4) except that sampling without replacement seems appropriate. Again, the analogs to cases (2) and (4) would generally be called “resampling.”

+

The concept of resampling is defined in a more precise way in Section 8.9.

+
+
+

3.12 Conclusion

+

We define “chance” as the absence of predictive power and/or explanation and/or control.

+

When the remote control rotates more than three or four turns I cannot control the outcome — whether TV or sofa end — with any accuracy. That is to say, I cannot predict much better than 50-50 with more than four rotations. So we then say that the outcome is determined by “chance.”

+

As to those persons who wish to inquire into what the situation “really” is: I hope they agree that we do not need to do so to proceed with our work. I hope all will agree that the outcome of flipping the remote control gradually becomes unpredictable (random) though still subject to similar physical processes as when predictable. I do not deny in principle that these processes can be “understood”; certainly one can develop a machine (or a baton twirler) that will make the outcome predictable for many turns. But this has nothing to do with whether the mechanism is “really” something one wants to say is influenced by “chance.” This is the point of the remote control demonstration. The outcome traverses from non-chance (determined) to chance (not determined) in a smooth way even though the physical mechanism that produces the revolutions remains much the same over the traverse.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/about_technology.html b/r-book/about_technology.html new file mode 100644 index 00000000..2672dcd9 --- /dev/null +++ b/r-book/about_technology.html @@ -0,0 +1,891 @@ + + + + + + + + + +Resampling statistics - 4  Introducing R and the RStudio notebook + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

4  Introducing R and the RStudio notebook

+
+ + + +
+ + + + +
+ + +
+ +

This chapter introduces you to the technology we will use throughout the book. By technology, we mean two things:

+
    +
  • The R programming language, along with some important add-on libraries for data analysis.
  • +
  • The RStudio notebook system for running and editing R code in a graphical interface.
  • +
+

The chapter introduces the R language, and then gives an example to introduce R and the RStudio Notebook. If you have not used R before, the example notebook will get you started. The example also shows how we will be using notebooks through the rest of the book.

+
+

This version of the book uses the R programming language to implement resampling algorithms.

+

The current title of the main website for R [^R-lang] is “The R Project for Statistical Computing”, and this is a good summary of how R started, and what R is particularly good at. The people who designed R designed it for themselves, and for people like you and me — those of us who are working to find and understand the patterns in data. Over the last 20 years, it has gained wide use for data analysis across many fields, especially in life sciences, data science and statistics.

+

Although many people use R as a simple way of exploring data and doing standard statistical tests, it is a full-fledged programming language.

+
+

It is very important that R is a programming language and not a set of canned routines for “doing statistics”. It means that we can explore the ideas of probability and statistics using the language of R to express those ideas. It also means that you, and we, and anyone else in the world, can write new code to share with others, so they can benefit from our work, understand it, and improve it. This book is one example; we have written the R code in this book as clearly as we can to make it easy to follow, and to explain the underlying ideas. We hope you will help us by testing what we have done and sending us suggestions for ways we could improve. Please see the preface for more information about how to do that.

+
+

4.1 The environment

+

Many of the chapters have sections with code for you to run, and experiment with. These sections contain Jupyter notebooks.1 Jupyter notebooks are interactive web pages that allow you to read, write and run R code. We mark the start of each notebook in the text with a note and link heading like the one you see below. In the web edition of this book, you can click on the Download link in this header to download the section as a notebook. You can also click on the Interact link in this header to open the notebook on a cloud computer. This allows you to interact with the notebook on the cloud computer. You can run the code, and experiment by making changes.

+

In the print version of the book, we point you to the web version, to get the links.

+

At the end of this chapter, we explain how to run these notebooks on your own computer. In the next section you will see an example notebook; you might want to run this in the cloud to get started.

+
+
+

4.2 Getting started with the notebook

+

The next section contains a notebook called “Billie’s Bill”. If you are looking at the web edition, you will see links to interact with this notebook in the cloud, or download it to your computer.

+
+

Start of billies_bill notebook

+ + +

The text in this notebook section assumes you have opened the page as an interactive notebook, on your own computer, or one of the RStudio web interfaces.

+

A notebook can contain blocks of text — like this one — as well as code, and the results from running the code.

+

If you are in the notebook interface (rather than reading this in the textbook), you will see the RStudio menu near the top of the page, with headings “File”, “Edit” and so on.

+
+

Underneath that, by default, you may see a row of icons - the “Toolbar”.

+

In the toolbar, you may see a list box that will allow you to run the code in the notebook, among other icons.

+

When we get to code chunks, you will also see a green play icon at the right edge of the interface, in the chunk. This will allow you to run the code chunk.

+

Although you can use this “run” button, we suggest you get used to using the keyboard shortcut. The default shortcut on Windows or Linux is to hold down the Control key and the Shift key and the Enter (Return) key at the same time. We will call this Control-Shift-Enter. On Mac the default combination is Command-Shift-Enter, where Command is the key with the four-leaf-clover-like icon to the left of the space-bar. To save us having to say this each time, we will call this combination Ctl/Cmd-Shift-Enter.

+
+

In this, our first notebook, we will be using R to solve one of those difficult and troubling problems in life — working out the bill in a restaurant.

+
+

4.3 The meal in question

+

Alex and Billie are at a restaurant, getting ready to order. They do not have much money, so they are calculating the expected bill before they order.

+

Alex is thinking of having the fish for £10.50, and Billie is leaning towards the chicken, at £9.25. First they calculate their combined bill.

+

Below this text you see a code chunk. It contains the R code to calculate the total bill. Press Control-Shift-Enter or Cmd-Shift-Enter (on Mac) in the chunk below, to see the total.

+
+
10.50 + 9.25
+
+
[1] 19.8
+
+
+

The contents of the chunk above is R code. As you would predict, R understands numbers like 10.50, and it understands + between the numbers as an instruction to add the numbers.

+

When you press Ctl/Cmd-Shift-Enter, R finds 10.50, realizes it is a number, and stores that number somewhere in memory. It does the same thing for 9.25, and then it runs the addition operation on these two numbers in memory, which gives the number 19.75.

+

Finally, R sends the resulting number (19.75) back to the notebook for display. The notebook detects that R sent back a value, and shows it to us.

+

This is exactly what a calculator would do.

+
+
+

4.4 Comments

+

Unlike a calculator, we can also put notes next to our calculations, to remind us what they are for. One way of doing this is to use a “comment”. You have already seen comments in the previous chapter.

+

A comment is some text that the computer will ignore. In R, you can make a comment by starting a line with the # (hash) character. For example, the next cell is a code cell, but when you run it, it does not show any result. In this case, that is because the computer sees the # at the beginning of the line, and then ignores the rest.
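A comment-only chunk of that kind might look like the following sketch (run it and notice that nothing is displayed):

# This is a comment. R ignores this line, so the chunk shows no result.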

+

Many of the code cells you see will have comments in them, to explain what the code is doing.

+

Practice writing comments for your own code. It is a very good habit to get into. You will find that experienced programmers write many comments on their code. They do not do this to show off, but because they have a lot of experience in reading code, and they know that comments make it much easier to read and understand code.

+
+
+

4.5 More calculations

+

Let us continue with the struggle that Alex and Billie are having with their bill.

+

They realize that they will also need to pay a tip.

+

They think it would be reasonable to leave a 15% tip. Now they need to multiply their total bill by 0.15, to get the tip. The bill is about £20, so they know that the tip will be about £3.

+

In R * means multiplication. This is the equivalent of the “×” key on a calculator.

+

What about this, for the correct calculation?

+
+
# The tip - with a nasty mistake.
+10.50 + 9.25 * 0.15
+
+
[1] 11.9
+
+
+

Oh dear, no, that isn’t doing the right calculation.

+

R follows the normal rules of precedence with calculations. These rules tell us to do multiplication before addition.

+

See https://en.wikipedia.org/wiki/Order_of_operations for more detail on the standard rules.

+

In the case above the rules tell R to first calculate 9.25 * 0.15 (to get 1.3875) and then to add the result to 10.50, giving 11.8875.

+

We need to tell R we want it to do the addition and then the multiplication. We do this with round brackets (parentheses):

+
+
+
+ +
+
+ +
+
+
+

There are three types of brackets in R.

+

These are:

+
    +
  • round brackets or parentheses: ();
  • +
  • square brackets: [];
  • +
  • curly brackets: {}.
  • +
+

Each type of bracket has a different meaning in R. In the examples, pay close attention to the type of brackets we are using.

+
+
+
+
# The tip - mistake fixed.
+(10.50 + 9.25) * 0.15
+
+
[1] 2.96
+
+
+

The obvious next step is to calculate the bill including the tip.

+
+
# The bill, including the tip
+10.50 + 9.25 + (10.50 + 9.25) * 0.15
+
+
[1] 22.7
+
+
+

At this stage we start to feel that we are doing too much typing. Notice that we had to type out 10.50 + 9.25 twice there. That is a little boring, but it also makes it easier to make mistakes. The more we have to type, the greater the chance we have to make a mistake.

+

To make things simpler, we would like to be able to store the result of the calculation 10.50 + 9.25, and then re-use this value, to calculate the tip.

+

This is the role of variables. A variable is a value with a name.

+

Here is a variable:

+
+
# The cost of Alex's meal.
+a <- 10.50
+
+

a is a name we give to the value 10.50. You can read the line above as “The variable a gets the value 10.50”. We can also talk of setting the variable. Here we are setting a to equal 10.50.

+

Now, when we use a in code, it refers to the value we gave it. For example, we can put a on a line on its own, and R will show us the value of a:

+
+
# The value of a
+a
+
+
[1] 10.5
+
+
+

We did not have to use the name a — we can choose almost any name we like. For example, we could have chosen alex_meal instead:

+
+
# The cost of Alex's meal.
+# alex_meal gets the value 10.50
+alex_meal <- 10.50
+
+

We often set variables like this, and then display the result, all in the same chunk. We do this by first setting the variable, as above, and then, on the final line of the chunk, we put the variable name on a line on its own, to ask R to show us the value of the variable. Here we set billie_meal to have the value 9.25, and then show the value of billie_meal, all in the same chunk.

+
+
# The cost of Billie's meal.
+# billie_meal gets the value 9.25
+billie_meal <- 9.25
+# Show the value of billie_meal
+billie_meal
+
+
[1] 9.25
+
+
+

Of course, here, we did not learn much, but we often set variable values with the results of a calculation. For example:

+
+
# The cost of both meals, before tip.
+bill_before_tip <- 10.50 + 9.25
+# Show the value of both meals.
+bill_before_tip
+
+
[1] 19.8
+
+
+

But wait — we can do better than typing in the calculation like this. We can use the values of our variables, instead of typing in the values again.

+
+
# The cost of both meals, before tip, using variables.
+bill_before_tip <- alex_meal + billie_meal
+# Show the value of both meals.
+bill_before_tip
+
+
[1] 19.8
+
+
+

We make the calculation clearer by writing the calculation this way — we are calculating the bill before the tip by adding the cost of Alex’s and Billie’s meal — and that’s what the code looks like. But this also allows us to change the variable value, and recalculate. For example, say Alex decided to go for the hummus plate, at £7.75. Now we can tell R that we want alex_meal to have the value 7.75 instead of 10.50:

+
+
# The new cost of Alex's meal.
+# alex_meal gets the value 7.75
+alex_meal <- 7.75
+# Show the value of alex_meal
+alex_meal
+
+
[1] 7.75
+
+
+

Notice that alex_meal now has a new value. It was 10.50, but now it is 7.75. We have reset the value of alex_meal. In order to use the new value for alex_meal, we must recalculate the bill before tip with exactly the same code as before:

+
+
# The new cost of both meals, before tip.
+bill_before_tip <- alex_meal + billie_meal
+# Show the value of both meals.
+bill_before_tip
+
+
[1] 17
+
+
+

Notice that, now we have rerun this calculation, we have reset the value for bill_before_tip to the correct value corresponding to the new value for alex_meal.

+

All that remains is to recalculate the bill plus tip, using the new value for the variable:

+
+
# The cost of both meals, after tip.
+bill_after_tip <- bill_before_tip + bill_before_tip * 0.15
+# Show the value of both meals, after tip.
+bill_after_tip
+
+
[1] 19.6
+
+
+

Now we are using variables with relevant names, the calculation looks right to our eye. The code expresses the calculation as we mean it: the bill after tip is equal to the bill before the tip, plus the bill before the tip times 0.15.

+
+
+

4.6 And so, on

+

Now you have done some practice with the notebook, and with variables, you are ready for a new problem in probability and statistics, in the next chapter.

+

End of billies_bill notebook

+
+
+
+
+

4.7 Running the code on your own computer

+

Many people, including your humble authors, like to be able to run code examples on their own computers. This section explains how you can set up to run the notebooks on your own computer.

+

Once you have done this setup, you can use the “Download” link at the start of each notebook section to save the notebook to your computer, and then open and run it there.

+
+

To run the R notebook, you will need two software packages on your computer. These are:

+
    +
  • The base R language
  • +
  • The RStudio graphical interface to R.
  • +
+

The base R language gives you the software to run R code and show results. You can use the base R language on its own, but, in order to interact with the R notebook on your computer, you will need the RStudio interface. RStudio gives you a richer interface to interact with the R language, including the ability to open, edit and run R notebooks, like the notebook in this chapter. RStudio uses the base R language to run R code from the notebook, and show the results.

+

Install the base R language by going to the main R website at https://www.r-project.org, following the links to the package for your system (Windows, Mac, or Linux), and install according to the instructions on the website.

+

Then install the RStudio interface by visiting the RStudio website at https://www.rstudio.com, and navigating to the download links for the free edition of the “RStudio IDE”. IDE stands for Integrated Development Environment; it refers to the RStudio application’s ability to make it easier to interact with, and develop, R code. You only need the free version; it has all the features you will need. The free version is the only version that we, your humble authors, have used for this book, and for all our own work and teaching.

+
+ + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/acknowlegements.html b/r-book/acknowlegements.html new file mode 100644 index 00000000..60c4dcf1 --- /dev/null +++ b/r-book/acknowlegements.html @@ -0,0 +1,628 @@ + + + + + + + + + +Resampling statistics - 33  Acknowledgements + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

33  Acknowledgements

+
+ + + +
+ + + + +
+ + +
+ +
+

33.1 For the second edition

+

Many people have helped in the long evolution of this work. First was the late Max Beberman, who in 1967 immediately recognized the potential of resampling statistics for high school students as well as for all others. Louis Guttman and Joseph Doob provided important encouragement about the theoretical and practical value of resampling statistics. Allen Holmes cooperated with me in teaching the first class at University High School in Urbana, Illinois, in 1967. Kenneth Travers found and supervised several PhD students — David Atkinson and Carolyn Shevokas outstanding among them — who experimented with resampling statistics in high school and college classrooms and proved its effectiveness; Travers also carried the message to many secondary school teachers in person and in his texts. In 1973 Dan Weidenfield efficiently wrote the first program for the mainframe (then called “Simple Stats”). Derek Kumar wrote the first interactive program for the Apple II. Chad McDaniel developed the IBM version, with touchup by Henry van Kuijk and Yoram Kochavi. Carlos Puig developed the powerful 1990 version of the program. William E. Kirwan, Robert Dorfman, and Rudolf Lamone have provided their good offices for us to harness the resources of the University of Maryland and, in particular, the College of Business and Management. Terry Oswald worked day and night with great dedication on the program and on commercial details to start the marketing of RESAMPLING STATS. In mid-1989, Peter Bruce assumed the overall stewardship of RESAMPLING STATS, and has been proceeding with energy, good judgment, and courage. He has contributed to this volume in many ways, always excellently (including the writing and re-writing of programs, as well as explanations of the bootstrap and of the interpretation of p-values). Vladimir Koliadin wrote the code for several of the problems in this edition, and Cheinan Marks programmed the Windows and Macintosh versions of Resampling Stats. Toni York handled the typesetting and desktop publishing through various iterations, Barbara Shaw provided expert proofreading and desktop publishing services for the second printing of the second edition, and Chris Brest produced many of the figures. Thanks to all of you, and to others who should be added to the list.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/bayes_simulation.html b/r-book/bayes_simulation.html new file mode 100644 index 00000000..fdc8e55a --- /dev/null +++ b/r-book/bayes_simulation.html @@ -0,0 +1,1038 @@ + + + + + + + + + +Resampling statistics - 31  Bayesian Analysis by Simulation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

31  Bayesian Analysis by Simulation

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

This branch of mathematics [probability] is the only one, I believe, in which good writers frequently get results entirely erroneous. (Peirce 1923, Doctrine of Chances, II)

+
+

Bayesian analysis is a way of thinking about problems in probability and statistics that can help one reach otherwise-difficult decisions. It also can sometimes be used in science. The range of its recommended uses is controversial, but this chapter deals only with those uses of Bayesian analysis that are uncontroversial.

+

Better than defining Bayesian analysis in formal terms is to demonstrate its use. We shall start with the simplest sort of problem, and proceed gradually from there.

+
+

31.1 Simple decision problems

+
+

31.1.1 Assessing the Likelihood That a Used Car Will Be Sound

+

Consider a problem in estimating the soundness of a used car one considers purchasing (after (Wonnacott and Wonnacott 1990, 93–94)). Seventy percent of the cars are known to be OK on average, and 30 percent are faulty. Of the cars that are really OK, a mechanic correctly identifies 80 percent as “OK” but says that 20 percent are “faulty”; of those that are faulty, the mechanic correctly identifies 90 percent as faulty and says (incorrectly) that 10 percent are OK.

+

We wish to know the probability that if the mechanic says a car is “OK,” it really is faulty. Phrased differently, what is the probability of a car being faulty if the mechanic said it was OK?

+

We can get the desired probabilities directly by simulation without knowing Bayes’ rule, as we shall see. But one must be able to model the physical problem correctly in order to proceed with the simulation; this requirement of a clearly visualized model is a strong point in favor of simulation.

+
    +
  1. Note that we are only interested in outcomes where the mechanic approved a car.

  2. +
  3. For each car, generate a label of either “faulty” or “working” with probabilities of 0.3 and 0.7, respectively.

  4. +
  5. For each faulty car, we generate one of two labels, “approved” or “not approved” with probabilities 0.1 and 0.9, respectively.

  6. +
  7. For each working car, we generate one of two labels, “approved” or “not approved” with probabilities 0.8 and 0.2, respectively.

  8. +
  9. Out of all cars “approved”, count how many are “faulty”. The ratio between these numbers is our answer.

  10. +
+

Here is the whole thing:
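A step-by-step R sketch of this procedure follows; it is our own code, with our own variable names such as n_trials and n_approved, rather than the notebook's exact chunk.

n_trials <- 10000
n_faulty_and_approved <- 0
n_approved <- 0
for (i in 1:n_trials) {
    # Step 2: is this car faulty or working?
    car <- sample(c("faulty", "working"), size=1, prob=c(0.3, 0.7))
    # Steps 3 and 4: does the mechanic approve it?
    if (car == "faulty") {
        verdict <- sample(c("approved", "not approved"), size=1, prob=c(0.1, 0.9))
    } else {
        verdict <- sample(c("approved", "not approved"), size=1, prob=c(0.8, 0.2))
    }
    # Steps 1 and 5: we only count cars that the mechanic approved.
    if (verdict == "approved") {
        n_approved <- n_approved + 1
        if (car == "faulty") {
            n_faulty_and_approved <- n_faulty_and_approved + 1
        }
    }
}
# Proportion of approved cars that are in fact faulty.
n_faulty_and_approved / n_approved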

+

The answer looks to be somewhere between 5 and 6%. The code clearly follows the description step by step, but it is also quite slow. If we can improve the code, we may be able to do our simulation with more cars, and get a more accurate answer.

+

Let’s use arrays to store the states of all cars in the lot simultaneously:
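For instance, a vectorized sketch along those lines (again our own code, not necessarily the book's) might be:

n_cars <- 1000000
# Generate the state of every car in one call.
cars <- sample(c("faulty", "working"), size=n_cars, replace=TRUE, prob=c(0.3, 0.7))
# The mechanic's approval probability depends on the car's state.
p_approve <- ifelse(cars == "faulty", 0.1, 0.8)
approved <- runif(n_cars) < p_approve
# Among approved cars, what proportion are faulty?
sum(cars == "faulty" & approved) / sum(approved)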

+

The code now runs much faster, and with a larger number of cars we see that the answer is closer to a 5% chance of a car being broken after it has been approved by a mechanic.

+
+
+

31.1.2 Calculation without simulation

+

Simulation forces us to model our problem clearly and concretely in code. Such code is most often easier to reason about than opaque statistical methods. Running the simulation gives a good sense of what the correct answer should be. Thereafter, we can still look into different — sometimes more elegant or accurate — ways of modeling and solving the problem.

+

Let’s examine the following diagram of our car selection:

+

+

We see that there are two paths, highlighted, that result in a car being approved by a mechanic. Either a car can be working, and correctly identified as such by a mechanic; or the car can be broken, while the mechanic mistakenly determines it to be working. Our question only pertains to these two paths, so we do not need to study the rest of the tree.

+

In the long run, in our simulation, about 70% of the cars will end with the label “working”, and about 30% will end up with the label “faulty”. We just took 10000 sample cars above but, in fact, the larger the number of cars we take, the closer we will get to 70% “working” and 30% “faulty”. So, with many samples, we can think of 70% of these samples flowing down the “working” path, and 30% flowing along the “faulty” path.

+

Now, we want to know, of all the cars approved by a mechanic, how many are faulty:

+

\[ \frac{\mathrm{cars_{\mathrm{faulty}}}}{\mathrm{cars}_{\mathrm{approved}}} \]

+

We follow the two highlighted paths in the tree:

+
    +
  1. Of a large sample of cars, 30% are faulty. Of these, 10% are approved by a mechanic. That is, 30% * 10% = 3% of all cars.
  2. +
  3. Of all cars, 70% work. Of these, 80% are approved by a mechanic. That is, 70% * 80% = 56% of all cars.
  4. +
+

The percentage of faulty cars, out of approved cars, becomes:

+

\[ +3\% / (56\% + 3\%) = 5.08\% +\]

+

Notation-wise, it is a bit easier to calculate these sums using proportions rather than percentages:

+
    +
  1. Faulty cars approved by a mechanic: 0.3 * 0.1 = 0.03
  2. +
  3. Working cars approved by a mechanic: 0.7 * 0.8 = 0.56
  4. +
+

Fraction of faulty cars out of approved cars: 0.03 / (0.03 + 0.56) = 0.0508

+

We see that every time the tree branches, it filters the cars: some go to one branch, the rest to another. In our code, we used the AND (&) operator to find the intersection between faulty AND approved cars, i.e., to filter out from all faulty cars only the cars that were ALSO approved.

+
+
+
+

31.2 Probability interpretation

+
+

31.2.1 Probability from proportion

+

In these examples, we often calculate proportions. In the given simulation:

+
    +
  • How many cars are approved by a mechanic? 59/100.
  • +
  • How many of those 59 were faulty? 3/59.
  • +
+

We often also count how commonly events occur: “it rained 4 out of the 10 days”.

+

An extension of this idea is to predict the probability of an event occurring, based on what we had seen in the past. We can say “out of 100 days, there was some rain on 20 of them; we therefore estimate that the probability of rain occurring is 20/100”. Of course, this is not a complex or very accurate weather model; for that, we’d need to take other factors—such as season—into consideration. Overall, the more observations we have, the better our probability estimates become. We discussed this idea previously in “The Law of Large Numbers”.

+ +
+

31.2.1.1 Ratios of proportions

+

At our mechanic’s yard, we can ask “how many red cars here are faulty”? To calculate that, we’d first count the number of red cars, then the number of those red cars that are also broken, then calculate the ratio: red_cars_faulty / red_cars.

+

We could just as well have worked in percentages: percentage_of_red_cars_broken / percentage_of_cars_that_are_red, since that is (red_cars_broken / 100) / (red_cars / 100)—the same ratio calculated before.

+

Our point is that the denominator doesn’t matter when calculating ratios, so we could just as well have written:

+

(red_cars_broken / all_cars) / (red_cars / all_cars)

+

or

+

\[ +P(\text{cars that are red and that are broken}) / P(\text{red cars}) +\]

+ +
+
+
+

31.2.2 Probability relationships: conditional probability

+

Here’s one way of writing the probability that a car is broken:

+

\[ +P(\text{car is broken}) +\]

+

We can shorten “car is broken” to B, and write the same thing as:

+

\[ +P(B) +\]

+

Similarly, we could write the probability that a car is red as:

+

\[ +P(R) +\]

+

We might also want to express the conditional probability, as in the probability that the car is broken, given that we already know that the car is red:

+

\[ +P(\text{car is broken GIVEN THAT car is red}) +\]

+

That is getting pretty verbose, so we will shorten this as we did above:

+

\[ +P(B \text{ GIVEN THAT } R) +\]

+

To make things even more compact, we write “GIVEN THAT” as a vertical bar | — so the whole thing becomes:

+

\[ +P(B | R) +\]

+

We read this as “the probability that the car is broken given that the car is red”. Such a probability is known as a conditional probability. We discuss these in more detail in Ch TKTK.

+ +

In our original problem, we ask what the chance is of a car being broken given that a mechanic approved it. As discussed under “Ratios of proportions”, it can be calculated with:

+

\[ +P(\text{car broken | mechanic approved}) += P(\text{car broken and mechanic approved}) / P(\text{mechanic approved}) +\]

+

We have already used \(B\) to mean “broken” (above), so let us use \(A\) to mean “mechanic approved”. Then we can write the statement above in a more compact way:

+

\[ +P(B | A) = P(B \text{ and } A) / P(A) +\]

+

To put this generally, conditional probabilities for two events \(X\) and \(Y\) can be written as:

+

\(P(X | Y) = P(X \text{ and } Y) / P(Y)\)

+

Where (again) \(\text{ and }\) means that both events occur.

+
+
+

31.2.3 Example: conditional probability

+

Let’s discuss a very relevant example. You get a COVID test, and the test is negative. Now, you would like to know what the chance is of you having COVID.

+

We have the following information:

+
    +
  • 1.5% of people in your area have COVID
  • +
  • The false positive rate of the tests (i.e., that they detect COVID when it is absent) is very low at 0.5%
  • +
  • The false negative rate (i.e., that they fail to detect COVID when it is present) is quite high at 40%
  • +
+

+

Again, we start with our simulation.
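A minimal R sketch of such a simulation (our own code and variable names) might be:

n_people <- 1000000
# 1.5% of people actually have COVID.
has_covid <- runif(n_people) < 0.015
# Test result: 40% false negatives for those with COVID,
# 0.5% false positives for those without.
p_positive <- ifelse(has_covid, 0.6, 0.005)
test_positive <- runif(n_people) < p_positive
# Among people with a negative test, what proportion have COVID?
sum(has_covid & !test_positive) / sum(!test_positive)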

+

This gives around 0.006 or 0.6%.

+

Now that we have a rough indication of what the answer should be, let’s try and calculate it directly, based on the tree of information shown earlier.

+

We will use these abbreviations:

+
    +
  • \(C^+\) means Covid positive (you do actually have Covid).
  • +
  • \(C^-\) means Covid negative (you do not actually have Covid).
  • +
  • \(T^+\) means the Covid test was positive.
  • +
  • \(T^-\) means the Covid test was negative.
  • +
+

For example \(P(C^+ | T^-)\) is the probability (\(P\)) that you do actually have Covid (\(C^+\)) given that (\(|\)) the test was negative (\(T^-\)).

+

We would like to know the probability of having COVID given that your test was negative (\(P(C^+ | T^-)\)). Using the conditional probability relationship from above, we can write:

+

\[ +P(C^+ | T^-) = P(C^+ \text{ and } T^-) / P(T^-) +\]

+

We see from the tree diagram that \(P(C^+ \text{ and } T^-) = P(T^- | C^+) * P(C^+) = .4 * .015 = 0.006\).

+ +

We observe that \(P(T^-) = P(T^- \text{ and } C^-) + P(T^- \text{ and } C^+)\), i.e. that we can obtain a negative test result through two paths, having COVID or not having COVID. We expand these further as conditional probabilities:

+

\(P(T^- \text{ and } C^-) = P(T^- | C^-) * P(C^-)\)

+

and

+

\(P(T^- \text{ and } C^+) = P(T^- | C^+) * P(C^+)\).

+

We can now calculate

+

\[ +P(T^-) = P(T^- | C^-) * P(C^-) + P(T^- | C^+) * P(C^+) +\]

+

\[ += .995 * .985 + .4 * .015 = 0.986 +\]

+

The answer, then, is:

+

\(P(C^+ | T^-) = 0.006 / 0.986 = 0.0061\) or 0.61%.

+

This matches very closely our simulation result, so we have some confidence that we have done the calculation correctly.

+
+
+

31.2.4 Estimating Driving Risk for Insurance Purposes

+

Another sort of introductory problem, following after (Feller 1968, p 122):

+

A mutual insurance company charges its members according to the risk of having a car accident. It is known that there are two classes of people — 80 percent of the population with good driving judgment and with a probability of .06 of having an accident each year, and 20 percent with poor judgment and a probability of .6 of having an accident each year. The company’s policy is to charge $100 for each percent of risk, i. e., a driver with a probability of .6 should pay 60*$100 = $6000.

+

If nothing is known of a driver except that they had an accident last year, what fee should they pay?

+

Another way to phrase this question is: given that a driver had an accident last year, what is the probability that they will have an accident in the coming year?

+

We will proceed as follows:

+
    +
  1. Generate a population of N people. Label each as good driver or poor driver.
  2. +
  3. Simulate the last year for each person: did they have an accident or not?
  4. +
  5. Select only the ones that had an accident last year.
  6. +
  7. Among those, calculate their average risk of having an accident. This will indicate the appropriate insurance premium.
  8. +
+
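One possible R sketch of these steps (our own code; names such as p_accident are ours):

n_drivers <- 100000
# Step 1: label each driver as a good (80%) or poor (20%) driver.
good_driver <- runif(n_drivers) < 0.8
# Step 2: simulate last year; accident probability depends on the label.
p_accident <- ifelse(good_driver, 0.06, 0.6)
had_accident <- runif(n_drivers) < p_accident
# Steps 3 and 4: among those who had an accident, what is the average risk?
mean(p_accident[had_accident])
# Fee: $100 for each percent of risk.
mean(p_accident[had_accident]) * 100 * 100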

The answer should be around 4450 USD.

+
+
+

31.2.5 Screening for Disease

+ +

This is a classic Bayesian problem (quoted by Tversky and Kahneman (1982, 154), from Cascells et al. (1978, 999)):

+
+

If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?

+
+

Tversky and Kahneman note that among the respondents — students and staff at Harvard Medical School — “the most common response, given by almost half of the participants, was 95%” — very much the wrong answer.

+

To obtain an answer by simulation, we may rephrase the question above with (hypothetical) absolute numbers as follows:

+

If a test to detect a disease whose prevalence has been estimated to be about 100,000 in the population of 100 million persons over age 40 (that is, about 1 in a thousand) has been observed to have a false positive rate of 60 in 1200 observations, and never gives a negative result if a person really has the disease, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?

+

(If the raw numbers are not available, the problem can be phrased in such terms as “about 1 case in 1000” and “about 5 false positives in 100 cases.”)

+

One may obtain an answer as follows:

+
    +
  1. Construct bucket A with 999 white beads and 1 black bead, and bucket B with 95 green beads and 5 red beads. A more complete problem that also discusses false negatives would need a third bucket.

  2. +
  3. Pick a bead from bucket A. If black, record “T,” replace the bead, and end the trial. If white, continue to step 3.

  4. +
  5. If a white bead is drawn from bucket A, select a bead from bucket B. If red, record “F” and replace the bead, and if green record “N” and replace the bead.

  6. +
  7. Repeat steps 2-4 perhaps 10,000 times, and in the results count the proportion of “T”s to (“T”s plus “F”s), ignoring the “N”s.

    +

    Of course 10,000 draws would be tedious, but even after a few hundred draws a person would be likely to draw the correct conclusion that the proportion of “T”s to (“T”s plus “F”s) would be small. And it is easy with a computer to do 10,000 trials very quickly.

    +

    Note that the respondents in the Cascells et al. study were not naive; the medical staff members were supposed to understand statistics. Yet most doctors and other personnel offered wrong answers. If simulation can do better than the standard deductive method, then simulation would seem to be the method of choice. And only one piece of training for simulation is required: Teach the habit of saying “I’ll simulate it” and then actually doing so.

  8. +
+
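For instance, the bucket scheme above might be sketched in R as follows (our own code, not taken from the original text):

n_trials <- 10000
results <- character(n_trials)
for (i in 1:n_trials) {
    # Bucket A: 1 black bead (disease) among 1000 beads.
    bead_a <- sample(c("black", "white"), size=1, prob=c(1, 999))
    if (bead_a == "black") {
        results[i] <- "T"   # a true positive
    } else {
        # Bucket B: 5 red beads (false positive) among 100 beads.
        bead_b <- sample(c("red", "green"), size=1, prob=c(5, 95))
        results[i] <- ifelse(bead_b == "red", "F", "N")
    }
}
# Proportion of "T"s among the positive results ("T"s plus "F"s).
sum(results == "T") / sum(results %in% c("T", "F"))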
+
+
+

31.3 Fundamental problems in statistical practice

+

Box and Tiao (1992) begin their classic exposition of Bayesian statistics with the analysis of a famous problem first published by Fisher (1959, 18).

+
+

…there are mice of two colors, black and brown. The black mice are of two genetic kinds, homozygotes (BB) and heterozygotes (Bb), and the brown mice are of one kind (bb). It is known from established genetic theory that the probabilities associated with offspring from various matings are as listed in Table 31.1.

+
+
Table 31.1: Probabilities for Genetic Character of Mice Offspring (Box and Tiao 1992, 12–14)

                    BB (black)   Bb (black)   bb (brown)
BB mated with bb    0            1            0
Bb mated with bb    0            ½            ½
Bb mated with Bb    ¼            ½            ¼
+

Suppose we have a “test” mouse which has been produced by a mating between two (Bb) mice and is black. What is the genetic kind of this mouse?

+

To answer that, we look at the information in the last line of the table: it shows that the probabilities that a test mouse is of kind BB or Bb are precisely known, and are 1/3 and 2/3 respectively ((1/4)/(1/4 + 1/2) vs (1/2)/(1/4 + 1/2)). We call this our “prior” estimate — in other words, our estimate before seeing data.

+

Suppose the test mouse is now mated with a brown mouse (of kind bb) and produces seven black offspring. Before, we thought that it was more likely for the parent to be of kind Bb than of kind BB. But if that were true, then we would have expected to have seen some brown offspring (the probability of mating Bb with bb resulting in brown offspring is given as 0.5). Therefore, we sense that it may now be more likely that the parent was of type BB instead. How do we quantify that?

+

One can calculate, as Fisher (1959, 19) did, the probabilities after seeing the data (we call this the posterior probability). This is typically done using Bayes’ rule.

+

But instead of doing that, let’s take the easy route out and simulate the situation instead.

+
    +
  1. We begin, as do Box and Tiao, by restricting our attention to the third line in Table 31.1. We draw a mouse with label ‘BB’, ‘Bb’, or ‘bb’, using those probabilities. We were told that the “test mouse” is black, so if we draw ‘bb’, we try again. (Alternatively, we could draw ‘BB’ and ‘Bb’ with probabilities of 1/3 and 2/3 respectively.)

  2. +
  3. We now want to examine the offspring of the test mouse when mated with a brown “bb” mouse. Specifically, we are only interested in cases where all offspring were black. We will store the genetic kind of the parents of such offspring so that we can count them later.

    +

    If our test mouse is “BB”, we already know that all their offspring will be black (“Bb”). Thus, store “BB” in the parent list.

  4. +
  5. If our test mouse is “Bb”, we have a bit more work to do. Draw seven offspring from the middle row of Table 31.1. If all the offspring are black, store “Bb” in the parent list.

  6. +
  7. Repeat steps 1-3 perhaps 10000 times.

  8. +
  9. Now, out of all parents count the numbers of “BB” vs “Bb”.

  10. +
+

We will do a naïve implementation that closely follows the logic described above, followed by a slightly optimized version.
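The naïve version might be sketched in R like this (our own code, one test mouse at a time):

n_trials <- 10000
parents <- character(0)
for (i in 1:n_trials) {
    # Step 1: draw a test mouse from a Bb x Bb mating; discard brown (bb) mice.
    test_mouse <- sample(c("BB", "Bb", "bb"), size=1, prob=c(0.25, 0.5, 0.25))
    if (test_mouse == "bb") {
        next
    }
    # Steps 2 and 3: mate with a bb mouse and look at seven offspring.
    if (test_mouse == "BB") {
        # All offspring of BB x bb are black (Bb).
        parents <- c(parents, "BB")
    } else {
        # For Bb x bb, each offspring is black (Bb) with probability 1/2.
        offspring <- sample(c("black", "brown"), size=7, replace=TRUE)
        if (all(offspring == "black")) {
            parents <- c(parents, "Bb")
        }
    }
}
# Step 5: compare the counts of BB and Bb parents.
table(parents)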

+

We see that all the offspring being black considerably changes the situation! We started with the odds being 2:1 in favor of Bb vs BB. The “posterior” or “after the evidence” ratio is closer to 64:1 in favor of BB! (1973, pp. 12-14)

+

Let’s tune the code a bit to run faster. Instead of doing the trials one mouse at a time, we will do the whole bunch together.
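One vectorized sketch of that idea (again our own code) is:

n_trials <- 1000000
# Draw all the black test mice at once; BB and Bb occur in the ratio 1:2.
test_mice <- sample(c("BB", "Bb"), size=n_trials, replace=TRUE, prob=c(1/3, 2/3))
# Draw seven offspring for every trial in one go. For a Bb parent each
# offspring is black with probability 1/2; TRUE below means "black".
offspring <- matrix(runif(n_trials * 7) < 0.5, nrow=n_trials)
n_black <- rowSums(offspring)
# A trial counts if the parent is BB (all offspring black automatically),
# or if the parent is Bb and all seven simulated offspring were black.
all_black <- (test_mice == "BB") | (n_black == 7)
# Among parents whose seven offspring were all black, count BB vs Bb.
table(test_mice[all_black])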

+

This yields a similar result, but in much shorter time — which means we can increase the number of trials and get a more accurate result.

+ +

Creating the correct simulation procedure is not trivial, because Bayesian reasoning is subtle — a reason it has been the cause of controversy for more than two centuries. But it certainly is not easier to create a correct procedure using analytic tools (except in the cookbook sense of plug-and-pray). And the difficult mathematics that underlie the analytic method (see e.g. (Box and Tiao 1992, Appendix A1.1)) make it almost impossible for the statistician to fully understand the procedure from beginning to end. If one is interested in insight, the simulation procedure might well be preferred.1

+
+
+

31.4 Problems based on normal and other distributions

+

This section should be skipped by all except advanced practitioners of statistics.

+

Much of the work in Bayesian analysis for scientific purposes treats the combining of prior distributions having Normal and other standard shapes with sample evidence which may also be represented with such standard functions. The mathematics involved often is formidable, though some of the calculational formulas are fairly simple and even intuitive.

+

These problems may be handled with simulation by replacing the Normal (or other) distribution with the original raw data when data are available, or by a set of discrete sub-universes when distributions are subjective.

+

Measured data from a continuous distribution present a special problem because the probability of any one observed value is very low, often approaching zero, and hence the probability of a given set of observed values usually cannot be estimated sensibly; this is the reason for the conventional practice of working with a continuous distribution itself, of course. But a simulation necessarily works with discrete values. A feasible procedure must bridge this gulf.

+

The logic for a problem of Schlaifer’s (1961, example 17.1) will only be sketched out. The procedure is rather novel, but it has not heretofore been published and therefore must be considered tentative and requiring particular scrutiny.

+
+

31.4.1 An Intermediate Problem in Conditional Probability

+

Schlaifer employs a quality-control problem for his leading example of Bayesian estimation with Normal sampling. A chemical manufacturer wants to estimate the amount of yield of a crucial ingredient X in a batch of raw material in order to decide whether it should receive special handling. The yield ranges between 2 and 3 pounds (per gallon), and the manufacturer has compiled the distribution of the last 100 batches.

+

The manufacturer currently uses the decision rule that if the mean of nine samples from the batch (which vary only because of measurement error, which is the reason that he takes nine samples rather than just one) indicates that the batch mean is greater than 2.5 gallons, the batch is accepted. The first question Schlaifer asks, as a sampling-theory waystation to the more general question, is the likelihood that a given batch with any given yield — say 2.3 gallons — will produce a set of samples with a mean as great or greater than 2.5 gallons.

+

We are told that the manufacturer has in hand nine samples from a given batch; they are 1.84, 1.75, 1.39, 1.65, 3.53, 1.03, 2.73, 2.86, and 1.96, with a mean of 2.08. Because we are also told that the manufacturer considers the extent of sample variation to be the same at all yield levels, we may — if we are again working with 2.3 as our example of a possible universe — therefore add (2.3 minus 2.08 =) 0.22 to each of these nine observations, so as to constitute a bootstrap-type universe; we do this on the grounds that this is our best guess about the constitution of that distribution with a mean at (say) 2.3.

+

We then repeatedly draw samples of nine observations from this distribution (centered at 2.3) to see how frequently its mean exceeds 2.5. This work is so straightforward that we need not even state the steps in the procedure.

+
+
+

31.4.2 Estimating the Posterior Distribution

+

Next we estimate the posterior distribution. Figure 31.1 shows the prior distribution of batch yields, based on 100 previous batches.

+
+
+
+
+

+
Figure 31.1: Prior distribution of batch yields
+
+
+
+
+

Notation: \(S_m\) = the set of batches (where total \(S\) = 100) with a particular mean \(m\) (say, \(m = 2.1\)). \(x_i\) = a particular observation (say, \(x_3 = 1.03\)). \(s\) = the set of the \(x_i\).

+

We now perform for each of the \(S_m\) (categorized into the tenth-of-gallon divisions between 2.1 and 3.0 gallons), each corresponding to one of the yields ranging from 2.1 to 3.0, the same sort of sampling operation performed for \(S_{m=2.3}\) in the previous problem. But now, instead of using the manufacturer’s decision criterion of 2.5, we construct an interval of arbitrary width around the sample mean of 2.08 — say at .1 intervals from 2.03 to 2.13 — and then work with the weighted proportions of sample means that fall into this interval.

+
    +
  1. Using a bootstrap-like approach, we presume that the sub-universe of observations related to each \(S_m\) consists of the nine \(x_i\), each shifted by the difference between the mean of that \(S_m\) (say, 2.1) and the mean of the observed \(x_i\) (2.08); for example, 1.03 + .02 = 1.05. For a distribution centered at 2.3, the values would be (1.84 + .22 = 2.06, 1.75 + .22 = 1.97…).
  2. Working with the distribution centered at 2.3 as an example: Constitute a universe of the values (1.84 + .22 = 2.06, 1.75 + .22 = 1.97…). Here we may notice that the variability in the sample enters into the analysis at this point, rather than when the sample evidence is combined with the prior distribution; this is in contrast to conventional Bayesian practice, where the posterior is the result of the prior and sample means weighted by the reciprocals of the variances (see e.g. (Box and Tiao 1992, 17 and Appendix A1.1)).
  3. Draw nine observations from this universe (with replacement, of course), compute the mean, and record it.
  4. Repeat step 3 perhaps 1000 times and plot the distribution of outcomes.
  5. Compute the percentages of the means within (say) .05 on each side of the sample mean, i.e. from 2.03–2.13. The resulting number — call it \(UP_i\) — is the un-standardized (un-normalized) effect of this sub-distribution in the posterior distribution.
  6. Repeat steps 1-5 to cover each other possible batch yield from 2.0 to 3.0 (2.3 was just done).
  7. Weight each of these sub-distributions — actually, its \(UP_i\) — by its prior probability, and call that \(WP_i\).
  8. Standardize the \(WP_i\)s to a total probability of 1.0. The result is the posterior distribution. The value found is 2.283, which the reader may wish to compare with a theoretically-obtained result (which Schlaifer does not give).
+
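Here is a minimal sketch of the steps above in R. The grid of candidate batch means and the number of trials are illustrative choices, and the prior probabilities are placeholders; in practice they would be read off the manufacturer's record of the last 100 batches (Figure 31.1).

# The observed sample of nine measurements (mean about 2.08).
observed <- c(1.84, 1.75, 1.39, 1.65, 3.53, 1.03, 2.73, 2.86, 1.96)
sample_mean <- mean(observed)

# Candidate batch means, at tenth-of-a-unit divisions.
batch_means <- seq(2.0, 3.0, by = 0.1)
# Placeholder prior probabilities, one per candidate mean; replace these
# with the proportions observed in the last 100 batches.
prior <- rep(1 / length(batch_means), length(batch_means))

n <- 10000
up <- numeric(length(batch_means))   # the un-normalized weights UP_i

for (j in seq_along(batch_means)) {
    # Steps 1-2: shift the observations to form a universe centered at this mean.
    universe <- observed + (batch_means[j] - sample_mean)
    means <- numeric(n)
    for (i in 1:n) {
        # Step 3: draw nine observations with replacement and record the mean.
        means[i] <- mean(sample(universe, size = 9, replace = TRUE))
    }
    # Step 5: proportion of means within .05 on each side of the sample mean.
    up[j] <- mean(means >= sample_mean - 0.05 & means <= sample_mean + 0.05)
}

# Steps 7-8: weight by the prior probabilities and standardize to a total of 1.
wp <- up * prior
posterior <- wp / sum(wp)
message('Estimated posterior mean: ', round(sum(batch_means * posterior), 3))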

This procedure must be biased because the numbers of “hits” will differ between the two sides of the mean for all sub-distributions except that one centered at the same point as the sample, but the extent and properties of this bias are as-yet unknown. The bias would seem to be smaller as the interval is smaller, but a small interval requires a large number of simulations; a satisfactorily narrow interval surely will contain relatively few trials, which is a practical problem of still-unknown dimensions.

+

Another procedure — less theoretically justified and probably more biased — intended to get around the problem of the narrowness of the interval, is as follows:

+
    +
  5a. Compute the percentages of the means on each side of the sample mean, and note the smaller of the two (or, in another possible process, the difference of the two). The resulting number — call it \(UP_i\) — is the un-standardized (un-normalized) weight of this sub-distribution in the posterior distribution.
+

Another possible criterion — a variation on the procedure in 5a — is the difference between the two tails; for a universe with the same mean as the sample, this difference would be zero.

+
+
+
+

31.5 Conclusion

+

All but the simplest problems in conditional probability are confusing to the intuition even if not difficult mathematically. But when one tackles Bayesian and other problems in probability with experimental simulation methods rather than with logic, neither simple nor complex problems need be difficult for experts or beginners.

+

This chapter shows how simulation can be a helpful and illuminating way to approach problems in Bayesian analysis.

+

Simulation has two valuable properties for Bayesian analysis:

+
    +
  1. It can provide an effective way to handle problems whose analytic solution may be difficult or impossible.
  2. Simulation can provide insight into problems that otherwise are difficult to understand fully, as is peculiarly the case with Bayesian analysis.
+

Bayesian problems of updating estimates can be handled easily and straightforwardly with simulation, whether the data are discrete or continuous. The process and the results tend to be intuitive and transparent. Simulation works best with the original raw data rather than with abstractions from them via percentages and distributions. This can aid the understanding as well as facilitate computation.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/confidence_1.html b/r-book/confidence_1.html new file mode 100644 index 00000000..8980bcf1 --- /dev/null +++ b/r-book/confidence_1.html @@ -0,0 +1,707 @@ + + + + + + + + + +Resampling statistics - 26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples

+
+ + + +
+ + + + +
+ + +
+ +
+

26.1 Introduction

+

This chapter discusses how to assess the accuracy of a point estimate of the mean, median, or other statistic of a sample. We want to know: How close is our estimate of (say) the sample mean likely to be to the population mean? The chapter begins with an intuitive discussion of the relationship between a) a statistic derived from sample data, and b) a parameter of a universe from which the sample is drawn. Then we discuss the actual construction of confidence intervals using two different approaches which produce the same numbers though they have different logic. The following chapter shows illustrations of these procedures.

+

The accuracy of an estimate is a hard intellectual nut to crack, so hard that for hundreds of years statisticians and scientists wrestled with the problem with little success; it was not until the last century or two that much progress was made. The kernel of the problem is learning the extent of the variation in the population. But whereas the sample mean can be used straightforwardly to estimate the population mean, the extent of variation in the sample does not directly estimate the extent of the variation in the population, because the variation differs at different places in the distribution, and there is no reason to expect it to be symmetrical around the estimate or the mean.

+

The intellectual difficulty of confidence intervals is one reason why they are less prominent in statistics literature and practice than are tests of hypotheses (though statisticians often favor confidence intervals). Another reason is that tests of hypotheses are more fundamental for pure science because they address the question that is at the heart of all knowledge-getting: “Should these groups be considered different or the same?” The statistical inference represented by confidence limits addresses what seems to be a secondary question in most sciences (though not in astronomy or perhaps physics): “How reliable is the estimate?” Still, confidence intervals are very important in some applied sciences such as geology — estimating the variation in grades of ores, for example — and in some parts of business and industry.

+

Confidence intervals and hypothesis tests are not disjoint ideas. Indeed, hypothesis testing of a single sample against a benchmark value is (in all schools of thought, I believe) operationally identical with the most common way (Approach 1 below) of constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different for confidence limits and hypothesis tests.

+

The logic of confidence intervals is on shakier ground, in my judgment, than that of hypothesis testing, though there are many thoughtful and respected statisticians who argue that the logic of confidence intervals is better grounded and leads less often to error.

+

Confidence intervals are considered by many to be part of the same topic as estimation, being an estimation of accuracy, in their view. And confidence intervals and hypothesis testing are seen as sub-cases of each other by some people. Whatever the importance of these distinctions among these intellectual tasks in other contexts, they need not concern us here.

+
+
+

26.2 Estimating the accuracy of a sample mean

+

If one draws a sample that is very, very large — large enough so that one need not worry about sample size and dispersion in the case at hand — from a universe whose characteristics one knows, one then can deduce the probability that the sample mean will fall within a given distance of the population mean. Intuitively, it seems as if one should also be able to reverse the process — to infer something about the location of the population mean from the sample mean. But this inverse inference turns out to be a slippery business indeed.

+

Let’s put it differently: It is all very well to say — as one logically may — that on average the sample mean (or other point estimator) equals a population parameter in most situations.

+

But what about the result of any particular sample? How accurate or inaccurate an estimate of the population mean is the sample likely to produce?

+

Because the logic of confidence intervals is subtle, most statistics texts skim right past the conceptual difficulties, and go directly to computation. Indeed, the topic of confidence intervals has been so controversial that some eminent statisticians refuse to discuss it at all. And when the concept is combined with the conventional algebraic treatment, the composite is truly baffling; the formal mathematics makes impossible any intuitive understanding. For students, “pluginski” is the only viable option for passing exams.

+

With the resampling method, however, the estimation of confidence intervals is easy. The topic then is manageable though subtle and challenging — sometimes pleasurably so. Even beginning undergraduates can enjoy the subtlety and find that it feels good to stretch the brain and get down to fundamentals.

+

One thing is clear: Despite the subtlety of the topic, the accuracy of estimates must be dealt with, one way or another.

+

I hope the discussion below resolves much of the confusion of the topic.

+
+
+

26.3 The logic of confidence intervals

+

To preview the treatment of confidence intervals presented below: We do not learn about the reliability of sample estimates of the mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because this cannot be done in principle. Instead, we investigate the behavior of various universes in the neighborhood of the sample, universes whose characteristics are chosen on the basis of their similarity to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes that are implicitly suggested by the sample evidence but are not logically implied by that evidence.

+

The examples worked in the following chapter help explain why statistics is a difficult subject. The procedure required to transit successfully from the original question to a statistical probability, and then through a sensible interpretation of the probability, involves a great many choices about the appropriate model based on analysis of the problem at hand; a wrong choice at any point dooms the procedure. The actual computation of the probability — whether done with formulaic probability theory or with resampling simulation — is only a very small part of the procedure, and it is the least difficult part if one proceeds with resampling. The difficulties in the statistical process are not mathematical but rather stem from the hard clear thinking needed to understand the nature of the situation and to ascertain the appropriate way to model it.

+

Again, the purpose of a confidence interval is to help us assess the reliability of a statistic of the sample — for example, its mean or median — as an estimator of the parameter of the universe. The line of thought runs as follows: It is possible to map the distribution of the means (or other such parameter) of samples of any given size (the size of interest in any investigation usually being the size of the observed sample) and of any given pattern of dispersion (which we will assume for now can be estimated from the sample) that a universe in the neighborhood of the sample will produce. For example, we can compute how large an interval to the right and left of a postulated universe’s mean is required to include 45 percent of the samples on either side of the mean.
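As a concrete illustration of that kind of computation, here is a minimal sketch in R; the postulated universe, the sample size, and the number of trials are all arbitrary choices made only for illustration.

# A postulated universe, represented as a bucket of values (hypothetical
# numbers chosen only for illustration).
universe <- c(1, 2, 2, 3, 3, 3, 4, 4, 5, 6)

n <- 10000
means <- numeric(n)
for (i in 1:n) {
    # Samples of the size of interest; here, say, samples of size 10.
    means[i] <- mean(sample(universe, size = 10, replace = TRUE))
}

# Distances to the left and right of the postulated universe's mean that
# include 45 percent of the sample means on each side (the 5th and 95th
# percentiles of the simulated means).
quantile(means, c(0.05, 0.95)) - mean(universe)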

+

What cannot be done is to draw conclusions from sample evidence about the nature of the universe from which it was drawn, in the absence of some information about the set of universes from which it might have been drawn. That is, one can investigate the behavior of one or more specified universes, and discover the absolute and relative probabilities that the given specified universe(s) might produce such a sample. But the universe(s) to be so investigated must be specified in advance (which is consistent with the Bayesian view of statistics). To put it differently, we can employ probability theory to learn the pattern(s) of results produced by samples drawn from a particular specified universe, and then compare that pattern to the observed sample. But we cannot infer the probability that that sample was drawn from any given universe in the absence of knowledge of the other possible sources of the sample. That is a subtle difference, I know, but I hope that the following discussion makes it understandable.

+
+
+

26.4 Computing confidence intervals

+

In the first part of the discussion we shall leave aside the issue of estimating the extent of the dispersion — a troublesome matter, but one which seldom will result in unsound conclusions even if handled crudely. To start from scratch again: The first — and seemingly straightforward — step is to estimate the mean of the population based on the sample data. The next and more complex step is to ask about the range of values (and their probabilities) that the estimate of the mean might take — that is, the construction of confidence intervals. It seems natural to assume that if our best guess about the population mean is the value of the sample mean, our best guesses about the various values that the population mean might take if unbiased sampling error causes discrepancies between population parameters and sample statistics, should be values clustering around the sample mean in a symmetrical fashion (assuming that asymmetry is not forced by the distribution — as for example, the binomial is close to symmetric near its middle values). But how far away from the sample mean might the population mean be?

+

Let’s walk slowly through the logic, going back to basics to enhance intuition. Let’s start with the familiar saying, “The apple doesn’t fall far from the tree.” Imagine that you are in a very hypothetical place where an apple tree is above you, and you are not allowed to look up at the tree, whose trunk has an infinitely thin diameter. You see an apple on the ground. You must now guess where the trunk (center) of the tree is. The obvious guess for the location of the trunk is right above the apple. But the trunk is not likely to be exactly above the apple because of the small probability of the trunk being at any particular location, due to sampling dispersion.

+

Though you find it easy to make a best guess about where the mean is (the true trunk), with the given information alone you have no way of making an estimate of the probability that the mean is one place or another, other than that the probability is the same that the tree is to the north or south, east or west, of you. You have no idea about how far the center of the tree is from you. You cannot even put a maximum on the distance it is from you, and without a maximum you could not even reasonably assume a rectangular distribution, or a Normal distribution, or any other.

+

Next you see two apples. What guesses do you make now? The midpoint between the two obviously is your best guess about the location of the center of the tree. But still there is no way to estimate the probability distribution of the location of the center of the tree.

+

Now assume you are given still another piece of information: The outermost spread of the tree’s branches (the range) equals the distance between the two apples you see. With this information, you could immediately locate the boundaries of the location of the center of the tree. But this is only because the answer you sought was given to you in disguised form.

+

You could, however, come up with some statements of relative probabilities. In the absence of prior information on where the tree might be, you would offer higher odds that the center (the trunk) is in any unit of area close to the center of your two apples than in a unit of area far from the center. That is, if you are told that either one apple, or two apples, came from one of two specified trees whose locations are given, with no reason to believe it is one tree or the other (later, we can put other prior probabilities on the two trees), and you are also told the dispersions, you now can put relative probabilities on one tree or the other being the source. (Note to the advanced student: This is like the Neyman-Pearson procedure, and it is easily reconciled with the Bayesian point of view to be explored later. One can also connect this concept of relative probability to the Fisherian concept of maximum likelihood — which is a probability relative to all others). And you could list from high to low the probabilities for each unit of area in the neighborhood of your apple sample. But this procedure is quite different from making any single absolute numerical probability estimate of the location of the mean.

+

Now let’s say you see 10 apples on the ground. Of course your best estimate is that the trunk of the tree is at their arithmetic center. But how close to the actual tree trunk (the population mean) is your estimate likely to be? This is the question involved in confidence intervals. We want to estimate a range (around the center, which we estimate with the mean of the sample, as we said) within which we are pretty sure that the trunk lies.

+

To simplify, we consider variation along only one dimension — that is, on (say) a north-south line rather than on two dimensions (the entire surface).

+

We first note that you have no reason to estimate the trunk’s location to be outside the sample pattern, or at its edge, though it could be so in principle.

+

If the pattern of the 10 apples is tight, you imagine the pattern of the likely locations of the population mean to be tight; if not, not. That is, it is intuitively clear that there is some connection between how spread out are the sample observations and your confidence about the location of the population mean. For example, consider two patterns of a thousand apples, one with twice the spread of another, where we measure spread by (say) the diameter of the circle that holds the inner half of the apples for each tree, or by the standard deviation. It makes sense that if the two patterns have the same center point (mean), you would put higher odds on the tree with the smaller spread being within some given distance — say, a foot — of the estimated mean. But what odds would you give on that bet?

+
+
+

26.5 Procedure for estimating confidence intervals

+

Here is a canonical list of questions that help organize one’s thinking when constructing confidence intervals. The list is comparable to the lists for questions in probability and for hypothesis testing provided in earlier chapters. This set of questions will be applied operationally in Chapter 27.

+

What Is The Question?

+

What is the purpose to be served by answering the question? Is this a “probability” or a “statistics” question?

+

If the Question Is a Statistical Inference Question:

+

What is the form of the statistics question?

+

Hypothesis test or confidence limits or other inference?

+

Assuming Question Is About Confidence Limits:

+

What is the description of the sample that has been observed?

+

Raw data?

+

Statistics of the sample?

+

Which universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess of the properties of the universe whose parameter you wish to make statements about? Finite or infinite? Bayesian possibilities?

+

Which parameter do you wish to make statements about?

+

Mean, median, standard deviation, range, interquartile range, other?

+

Which symbols for the observed entities?

+

Discrete or continuous?

+

What values or ranges of values?

+

If the universe is as guessed at, for which samples do you wish to estimate the variation? (Answer: samples the same size as has been observed)

+

Here one may continue with the conventional method, using perhaps a t or F or chi-square test or whatever. Everything up to now is the same whether continuing with resampling or with a standard parametric test.

+

What procedure to produce the original entities in the sample?

+

What universe will you draw them from? Random selection?

+

What size resample?

+

Simple (single step) or complex (multiple “if” drawings)?

+

What procedure to produce resamples?

+

With or without replacement? Number of drawings?

+

What to record as result of resample drawing?

+

Mean, median, or whatever of resample

+

Stating the Distribution of Results

+

Histogram, frequency distribution, other?

+

Choice Of Confidence Bounds

+

One or two-tailed?

+

90%, 95%, etc.?

+

Computation of Probabilities Within Chosen Bounds

+
+
+

26.6 Summary

+

This chapter discussed the theoretical basis for assessing the accuracy of estimates of population averages made from sample data. The following chapter shows two very different approaches to confidence intervals, and provides examples of the computations.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/confidence_2.html b/r-book/confidence_2.html new file mode 100644 index 00000000..d9d99676 --- /dev/null +++ b/r-book/confidence_2.html @@ -0,0 +1,1290 @@ + + + + + + + + + +Resampling statistics - 27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals

+
+ + + +
+ + + + +
+ + +
+ +

There are two broad conceptual approaches to the question at hand: 1) Study the probability of various distances between the sample mean and the likeliest population mean; and 2) study the behavior of particular border universes. Computationally, both approaches often yield the same result, but their interpretations differ. Approach 1 follows the conventional logic, although it carries out the calculations with resampling simulation.

+
+

27.1 Approach 1: The distance between sample and population mean

+

If the study of probability can tell us the probability that a given population will produce a sample with a mean at a given distance x from the population mean, and if a sample is an unbiased estimator of the population, then it seems natural to turn the matter around and interpret the same sort of data as telling us the probability that the estimate of the population mean is that far from the “actual” population mean. A fly in the ointment is our lack of knowledge of the dispersion, but we can safely put that aside for now. (See below, however.)

+

This first approach begins by assuming that the universe that actually produced the sample has the same amount of dispersion (but not necessarily the same mean) that one would estimate from the sample. One then produces (either with resampling or with Normal distribution theory) the distribution of sample means that would occur with repeated sampling from that designated universe with samples the size of the observed sample. One can then compute the distance between the (assumed) population mean and (say) the inner 45 percent of sample means on each side of the actually observed sample mean.

+

The crucial step is to shift vantage points. We look from the sample to the universe, instead of from a hypothesized universe to simulated samples (as we have done so far). The same interval as computed above must then be the relevant distance when one looks from the sample to the universe. Putting this algebraically, we can state (on the basis of either simulation or formal calculation) that, for any given population S and any given distance \(d\) from its mean \(\mu\), \(P((\mu - \bar{x}) < d) = \alpha\), where \(\bar{x}\) is a randomly generated sample mean and \(\alpha\) is the probability resulting from the simulation or calculation.

+

The above equation focuses on the deviation of various sample means (\(\bar{x}\)) from a stated population mean (\(\mu\)). But we are logically entitled to read the algebra in another fashion, focusing on the deviation of \(\mu\) from a randomly generated sample mean. This implies that for any given randomly generated sample mean we observe, the same probability (\(\alpha\)) describes the probability that \(\mu\) will be at a distance \(d\) or less from the observed \(\bar{x}\). (I believe that this is the logic underlying the conventional view of confidence intervals, but I have yet to find a clear-cut statement of it; in any case, it appears to be logically correct.)

+

To repeat this difficult idea in slightly different words: If one draws a sample (large enough to not worry about sample size and dispersion), one can say in advance that there is a probability \(p\) that the sample mean (\(\bar{x}\)) will fall within \(z\) standard deviations of the population mean (\(\mu\)). One estimates the population dispersion from the sample. If there is a probability \(p\) that \(\bar{x}\) is within \(z\) standard deviations of \(\mu\), then with probability \(p\), \(\mu\) must be within that same \(z\) standard deviations of \(\bar{x}\). To repeat, this is, I believe, the heart of the standard concept of the confidence interval, to the extent that there is a thought-through consensus on the matter.

+

So we can state for such populations the probability that the distance between the population and sample means will be \(d\) or less. Or with respect to a given distance, we can say that the probability that the population and sample means will be that close together is \(p\).

+

That is, we start by focusing on how much the sample mean diverges from the known population mean. But then — and to repeat once more this key conceptual step — we refocus our attention to begin with the sample mean and then discuss the probability that the population mean will be within a given distance. The resulting distance is what we call the “confidence interval.”

+

Please notice that the distribution (universe) assumed at the beginning of this approach did not include the assumption that the distribution is centered on the sample mean or anywhere else. It is true that the sample mean is used for purposes of reporting the location of the estimated universe mean. But despite how the subject is treated in the conventional approach, the estimated population mean is not part of the work of constructing confidence intervals. Rather, the calculations apply in the same way to all universes in the neighborhood of the sample (which are assumed, for the purpose of the work, to have the same dispersion). And indeed, it must be so, because the probability that the universe from which the sample was drawn is centered exactly at the sample mean is very small.

+

This independence of the confidence-intervals construction from the mean of the sample (and the mean of the estimated universe) is surprising at first, but after a bit of thought it makes sense.

+

In this first approach, as noted more generally above, we do not make estimates of the confidence intervals on the basis of any logical inference from any one particular sample to any one particular universe, because this cannot be done in principle; it is the futile search for this connection that for decades roiled the brains of so many statisticians and now continues to trouble the minds of so many students. Instead, we investigate the behavior of (in this first approach) the universe that has a higher probability of producing the observed sample than does any other universe (in the absence of any additional evidence to the contrary), and whose characteristics are chosen on the basis of its resemblance to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes, the universe(s) being implicitly suggested by the sample evidence but not logically implied by that evidence. And there are no grounds for dispute about exactly what is being done — only about how to interpret the results.

+

One difficulty with the above approach is that the estimate of the population dispersion does not rest on sound foundations; this matter will be discussed later, but it is not likely to lead to a seriously misleading conclusion.

+

A second difficulty with this approach is in interpreting the result. What is the justification for focusing our attention on a universe centered on the sample mean? While this particular universe may be more likely than any other, it undoubtedly has a low probability. And indeed, the statement of the confidence intervals refers to the probabilities that the sample has come from universes other than the universe centered at the sample mean, and quite a distance from it.

+

My answer to this question does not rest on a set of meaningful mathematical axioms, and I assert that a meaningful axiomatic answer is impossible in principle. Rather, I reason that we should consider the behavior of this universe because other universes near it will produce much the same results, differing only in dispersion from this one, and this difference is not likely to be crucial; this last assumption is all-important, of course. True, we do not know what the dispersion might be for the “true” universe. But elsewhere (Simon, forthcoming) I argue that the concept of the “true universe” is not helpful — or maybe even worse than nothing — and should be forsworn. And we can postulate a dispersion for any other universe we choose to investigate. That is, for this postulation we unabashedly bring in any other knowledge we may have. The defense for such an almost-arbitrary move would be that this is a second-order matter relative to the location of the estimated universe mean, and therefore it is not likely to lead to serious error. (This sort of approximative guessing sticks in the throats of many trained mathematicians, of course, who want to feel an unbroken logic leading backwards into the mists of axiom formation. But the axioms themselves inevitably are chosen arbitrarily just as there is arbitrariness in the practice at hand, though the choice process for axioms is less obvious and more hallowed by having been done by the masterminds of the past. (See Simon (1998), on the necessity for judgment.) The absence of a sequence of equations leading from some first principles to the procedure described in the paragraph above is evidence of what is felt to be missing by those who crave logical justification. The key equation in this approach is formally unassailable, but it seems to come from nowhere.)

+

In the examples later in this chapter may be found computations, for two population distributions — one binomial and one quantitative — of the histograms of the sample means produced with this procedure.

+

Operationally, we use the observed sample mean, together with an estimate of the dispersion from the sample, to estimate a mean and dispersion for the population. Then with reference to the sample mean we state a combination of a distance (on each side) and a probability pertaining to the population mean. The computational examples will illustrate this procedure.

+

Once we have obtained a numerical answer, we must decide how to interpret it. There is a natural and almost irresistible tendency to talk about the probability that the mean of the universe lies within the intervals, but this has proven confusing and controversial. Interpretation in terms of a repeated process is not very satisfying intuitively.1

+

In my view, it is not worth arguing about any “true” interpretation of these computations. One could sensibly interpret the computations in terms of the odds a decision maker, given the evidence, would reasonably offer about the relative probabilities that the sample came from one of two specified universes (one of them probably being centered on the sample); this does provide some information on reliability, but this procedure departs from the concept of confidence intervals.

+
+

27.1.1 Example: Counted Data: The Accuracy of Political Polls

+

Consider the reliability of a randomly selected 1988 presidential election poll, showing 840 intended votes for Bush and 660 intended votes for Dukakis out of 1500 (Wonnacott and Wonnacott 1990, 5). Let us work through the logic of this example.

+ +
    +
  • What is the question? Stated technically, what are the 95% confidence limits for the proportion of Bush supporters in the population? (The proportion is the mean of a binomial population or sample, of course.) More broadly, within which bounds could one confidently believe that the population proportion was likely to lie? At this stage of the work, we must already have translated the conceptual question (in this case, a decision-making question from the point of view of the candidates) into a statistical question. (See Chapter 20 on translating questions into statistical form.)
  • +
  • What is the purpose to be served by answering this question? There is no sharp and clear answer in this case. The goal could be to satisfy public curiosity, or strategy planning for a candidate (though a national proportion is not as helpful for planning strategy as state data would be). A secondary goal might be to help guide decisions about the sample size of subsequent polls.
  • +
  • Is this a “probability” or a “probability-statistics” question? The latter; we wish to infer from sample to population rather than the converse.
  • +
  • Given that this is a statistics question: What is the form of the statistics question — confidence limits or hypothesis testing? Confidence limits.
  • +
  • Given that the question is about confidence limits: What is the description of the sample that has been observed? a) The raw sample data — the observed numbers of interviewees are 840 for Bush and 660 for Dukakis — constitutes the best description of the universe. The statistics of the sample are the given proportions — 56 percent for Bush, 44 percent for Dukakis.
  • +
  • Which universe? (Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe about whose parameter you wish to make statements?) The best guess is that the population proportion is the sample proportion — that is, the population contains 56 percent Bush votes, 44 percent Dukakis votes.
  • +
  • Possibilities for Bayesian analysis? Not in this case, unless you believe that the sample was biased somehow.
  • +
  • Which parameter(s) do you wish to make statements about? Mean, median, standard deviation, range, interquartile range, other? We wish to estimate the proportion in favor of Bush (or Dukakis).
  • +
  • Which symbols for the observed entities? Perhaps 56 green and 44 yellow balls, if a bucket is used, or “0” and “1” if the computer is used.
  • +
  • Discrete or continuous distribution? In principle, discrete. (All distributions must be discrete in practice.)
  • +
  • What values or ranges of values? “0” or “1.”
  • +
  • Finite or infinite? Infinite — the sample is small relative to the population.
  • +
  • If the universe is what you guess it to be, for which samples do you wish to estimate the variation? A sample the same size as the observed poll.
  • +
+

Here one may continue either with resampling or with the conventional method. Everything done up to now would be the same whether continuing with resampling or with a standard parametric test.

+
+
+
+

27.2 Conventional Calculational Methods

+

Estimating the Distribution of Differences Between Sample and Population Means With the Normal Distribution.

+

In the conventional approach, one could in principle work from first principles with lists and sample space, but that would surely be too cumbersome. One could work with binomial proportions, but this problem has too large a sample for tree-drawing and quincunx techniques; even the ordinary textbook table of binomial coefficients is too small for this job. Calculating binomial coefficients also is a big job. So instead one would use the Normal approximation to the binomial formula.
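For instance, here is a minimal sketch of that conventional calculation in R, using the observed 840 out of 1500 and the usual 1.96 standard errors for a 95 percent interval:

# Normal approximation to the binomial for the poll result.
p_hat <- 840 / 1500                      # observed proportion for Bush
se <- sqrt(p_hat * (1 - p_hat) / 1500)   # standard error of the proportion
# Conventional 95 percent interval: 1.96 standard errors on each side.
p_hat + c(-1.96, 1.96) * se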

+

(Note to the beginner: The distribution of means that we manipulate has the Normal shape because of the operation of the Central Limit Theorem. Sums and averages, when the sample is reasonably large, take on this shape even if the underlying distribution is not Normal. This is a truly astonishing property of randomly drawn samples — the distribution of their means quickly comes to resemble a “Normal” distribution, no matter the shape of the underlying distribution. We then standardize it with the standard deviation or other devices so that we can state the probability distribution of the sampling error of the mean for any sample of reasonable size.)

+

The exercise of creating the Normal shape empirically is simply a generalization of particular cases such as we will later create here for the poll by resampling simulation. One can also go one step further and use the formula of de Moivre-Laplace-Gauss to describe the empirical distributions, and to serve instead of the empirical distributions. Looking ahead now, the difference between resampling and the conventional approach can be said to be that in the conventional approach we simply plot the Gaussian distribution very carefully, and use a formula instead of the empirical histograms, afterwards putting the results in a standardized table so that we can read them quickly without having to recreate the curve each time we use it. More about the nature of the Normal distribution may be found in Simon (forthcoming).

+

All the work done above uses the information specified previously — the sample size of 1500, the drawing with replacement, the observed proportion as the criterion.

+
+
+

27.3 Confidence Intervals Empirically — With Resampling

+

Estimating the Distribution of Differences Between Sample and Population Means By Resampling

+
    +
  • What procedure to produce entities?: Random selection from bucket or computer.
  • +
  • Simple (single step) or complex (multiple “if” drawings)?: Simple.
  • +
  • What procedure to produce resamples? That is, with or without replacement? With replacement.
  • +
  • Number of drawings? The number of observations in the actual sample, and hence the number of drawings in each resample: 1500.
  • +
  • What to record as result of each resample drawing? Mean, median, or whatever of resample? The proportion is what we seek.
  • +
  • Stating the distribution of results: The distribution of proportions for the trial samples.
  • +
  • Choice of confidence bounds?: 95%, two tails (choice made by the textbook that posed the problem).
  • +
  • Computation of probabilities within chosen bounds: Read the probabilistic result from the histogram of results.
  • +
  • Computation of upper and lower confidence bounds: Locate the values corresponding to the 2.5th and 97.5th percentile of the resampled proportions.
  • +
+

Because the theory of confidence intervals is so abstract (even with the resampling method of computation), let us now walk through this resampling demonstration slowly, using the conventional Approach 1 described previously. We first produce a sample, and then see how the process works in reverse to estimate the reliability of the sample, using the Bush-Dukakis poll as an example. The computer program follows below.

+
    +
  • Step 1: Draw a sample of 1500 voters from a universe that, based on the observed sample, is 56 percent for Bush, 44 percent for Dukakis. The first such sample produced by the computer happens to be 53 percent for Bush; it might have been 58 percent, or 55 percent, or very rarely, 49 percent for Bush.
  • +
  • Step 2: Repeat step 1 perhaps 400 or 1000 times.
  • +
  • Step 3: Estimate the distribution of means (proportions) of samples of size 1500 drawn from this 56-44 percent Bush-Dukakis universe; the resampling result is shown below.
  • +
  • Step 4: In a fashion similar to what was done in steps 1-3, now compute the 95 percent confidence intervals for some other postulated universe mean — say 53% for Bush, 47% for Dukakis. This step produces a confidence interval that is not centered on the sample mean and the estimated universe mean, and hence it shows the independence of the procedure from that magnitude. And we now compare the breadth of the estimated confidence interval generated with the 53-47 percent universe against the confidence interval derived from the corresponding distribution of sample means generated by the “true” Bush-Dukakis population of 56 percent — 44 percent. If the procedure works well, the results of the two procedures should be similar.
  • +
+
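Here is a minimal sketch of those steps in R; the 1000 trials and the 2.5th and 97.5th percentile bounds are illustrative choices.

# Universe built from the observed sample: 56 percent Bush (1), 44 percent Dukakis (0).
universe <- rep(c(1, 0), c(56, 44))

n <- 1000
proportions <- numeric(n)
for (i in 1:n) {
    # Step 1: draw a sample of 1500 voters with replacement from this universe.
    votes <- sample(universe, size = 1500, replace = TRUE)
    proportions[i] <- mean(votes)        # proportion for Bush in this trial
}

hist(proportions)

# The 2.5th and 97.5th percentiles mark off the inner 47.5 percent of sample
# proportions on each side of the universe proportion of 0.56.
quantile(proportions, c(0.025, 0.975))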

Now we interpret the results using this first approach. The histogram shows the probability that the difference between the sample mean and the population mean — the error in the sample result — will be about 2.5 percentage points too low. It follows that about 47.5 percent (half of 95 percent) of the time, a sample like this one will be between the population mean and 2.5 percent too low. We do not know the actual population mean. But for any observed sample like this one, we can say that there is a 47.5 percent chance that the distance between it and the mean of the population that generated it is minus 2.5 percent or less.

+

Now a crucial step: We turn around the statement just above, and say that there is a 47.5 percent chance that the population mean is less than 2.5 percentage points higher than the mean of a sample drawn like this one, but at or above the sample mean. (And we do the same for the other side of the sample mean.) So to recapitulate: We observe a sample and its mean. We estimate the error by experimenting with one or more universes in that neighborhood, and we then give the probability that the population mean is within that margin of error from the sample mean.

+
+

27.3.1 Example: Measured Data Example — the Bootstrap

+

A feed merchant decides to experiment with a new pig ration — ration A — on twelve pigs. To obtain a random sample, he provides twelve customers (selected at random) with sufficient food for one pig. After 4 weeks, the 12 pigs experience an average gain of 508 ounces. The weight gain of the individual pigs are as follows: 496, 544, 464, 416, 512, 560, 608, 544, 480, 466, 512, 496.

+

The merchant sees that the ration produces results that are quite variable (from a low of 416 ounces to a high of 608 ounces) and is therefore reluctant to advertise an average weight gain of 508 ounces. He speculates that a different sample of pigs might well produce a different average weight gain.

+

Unfortunately, it is impractical to sample additional pigs to gain additional information about the universe of weight gains. The merchant must rely on the data already gathered. How can these data be used to tell us more about the sampling variability of the average weight gain?

+

Recalling that all we know about the universe of weight gains is the sample we have observed, we can replicate that sample millions of times, creating a “pseudo-universe” that embodies all our knowledge about the real universe. We can then draw additional samples from this pseudo-universe and see how they behave.

+

More specifically, we replicate each observed weight gain millions of times — we can imagine writing each result that many times on separate pieces of paper — then shuffle those weight gains and pick out a sample of 12. Average the weight gain for that sample, and record the result. Take repeated samples, and record the result for each. We can then make a histogram of the results; it might look something like this:

+
+
+
+
+

+
+
+
+
+
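A minimal sketch of this bootstrap in R follows; the 10,000 trials are an arbitrary choice, and the 500-ounce figure simply anticipates the discussion below.

# The twelve observed weight gains, in ounces.
gains <- c(496, 544, 464, 416, 512, 560, 608, 544, 480, 466, 512, 496)

n <- 10000
means <- numeric(n)
for (i in 1:n) {
    # Draw 12 weight gains with replacement from the observed sample.
    resample <- sample(gains, replace = TRUE)
    means[i] <- mean(resample)
}

hist(means)

# Proportion of resampled means falling below a candidate advertising
# figure of 500 ounces.
message('Proportion of means below 500: ', mean(means < 500))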

Though we do not know the true average weight gain, we can use this histogram to estimate the bounds within which it falls. The merchant can consider various weight gains for advertising purposes, and estimate the probability that the true weight gain falls below the value. For example, he might wish to advertise a weight gain of 500 ounces. Examining the histogram, we see that about 36% of our samples yielded weight gains less than 500 ounces. The merchant might wish to choose a lower weight gain to advertise, to reduce the risk of overstating the effectiveness of the ration.

+

This illustrates the “bootstrap” method. By re-using our original sample many times (and using nothing else), we are able to make inferences about the population from which the sample came. This problem would conventionally be addressed with the “t-test.”

+
+
+

27.3.2 Example: Measured Data Example: Estimating Tree Diameters

+
    +
  • What is the question? A horticulturist is experimenting with a new type of tree. She plants 20 of them on a plot of land, and measures their trunk diameter after two years. She wants to establish a 90% confidence interval for the population average trunk diameter. For the data given below, calculate the mean of the sample and calculate (or describe a simulation procedure for calculating) a 90% confidence interval around the mean. Here are the 20 diameters, in centimeters and in no particular order (Table 27.1):

    +
    Table 27.1: Tree Diameters, in Centimeters

    8.5   7.6   9.3   5.5  11.4   6.9   6.5  12.9   8.7   4.8
    4.2   8.1   6.5   5.8   6.7   2.4  11.1   7.1   8.8   7.2
    +
  • +
  • What is the purpose to be served by answering the question? Either research & development, or pure science.

  • +
  • Is this a “probability” or a “statistics” question? Statistics.

  • +
  • What is the form of the statistics question? Confidence limits.

  • +
  • What is the description of the sample that has been observed? The raw data as shown above.

  • +
  • Statistics of the sample ? Mean of the tree data.

  • +
  • Which universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe whose parameter you wish to make statements about? Answer: The universe is like the sample above but much, much bigger. That is, in the absence of other information, we imagine this “bootstrap” universe as a collection of (say) one million trees of 8.5 centimeters width, one million of 7.2 centimeters, and so on. We’ll see in a moment that the device of sampling with replacement makes it unnecessary for us to work with such a large universe; by replacing each element after we draw it in a resample, we achieve the same effect as creating an almost-infinite universe from which to draw the resamples. (Are there possibilities for Bayesian analysis?) No Bayesian prior information will be included.

  • +
  • Which parameter do you wish to make statements about? The mean.

  • +
  • Which symbols for the observed entities? Cards or computer entries with numbers 8.5…7.2, sample of an infinite size.

  • +
  • If the universe is as guessed at, for which samples do you wish to estimate the variation? Samples of size 20.

  • +
+

Here one may continue with the conventional method. Everything up to now is the same whether continuing with resampling or with a standard parametric test. The information listed above is the basis for a conventional test.

+

Continuing with resampling:

+
    +
  • What procedure will be used to produce the trial entities? Random selection: simple (single step), not complex (multiple “if” sample drawings).
  • +
  • What procedure to produce resamples? With replacement. As noted above, sampling with replacement allows us to forego creating a very large bootstrap universe; replacing the elements after we draw them achieves the same effect as would an infinite universe.
  • +
  • Number of drawings? 20 trees
  • +
  • What to record as result of resample drawing? The mean.
  • +
  • How to state the distribution of results? See histogram.
  • +
  • Choice of confidence bounds? 90%, two-tailed.
  • +
  • Computation of values of the resample statistic corresponding to chosen confidence bounds? Read from histogram.
  • +
+
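Following those choices, here is a minimal sketch in R (10,000 resamples is an arbitrary number of trials):

diameters <- c(8.5, 7.6, 9.3, 5.5, 11.4, 6.9, 6.5, 12.9, 8.7, 4.8,
               4.2, 8.1, 6.5, 5.8, 6.7, 2.4, 11.1, 7.1, 8.8, 7.2)

n <- 10000
means <- numeric(n)
for (i in 1:n) {
    # Draw 20 diameters with replacement and record the mean.
    resample <- sample(diameters, replace = TRUE)
    means[i] <- mean(resample)
}

hist(means)
message('Observed mean diameter: ', round(mean(diameters), 2))

# 90 percent confidence interval: the 5th and 95th percentiles of the resample means.
pp <- quantile(means, c(0.05, 0.95))
message('Estimate of 90 percent confidence interval: ',
        round(pp[1], 2), ' - ', round(pp[2], 2))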

As has been discussed in Chapter 19, it often is more appropriate to work with the median than with the mean. One reason is that the median is not so sensitive to extreme observations as is the mean. Another reason is that one need not assume a Normal distribution for the universe under study: this consideration affects conventional statistics but usually does not affect resampling; still, it is worth keeping in mind when a statistician is making a choice between a parametric (that is, Normal-based) and a non-parametric procedure.

+
+
+

27.3.3 Example: Determining a Confidence Interval for the Median Aluminum Content in Theban Jars

+

Data for the percentages of aluminum content in a sample of 18 ancient Theban jars (Catling and Jones 1977) are as follows, arranged in ascending order: 11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5, 15.0, 15.1, 15.8, 16.0, 16.3, 16.5, 16.9, 17.0, 17.2, 17.5, 19.0. Consider now putting a confidence interval around the median of 15.45 (halfway between the middle observations 15.1 and 15.8).

+

One may simply estimate a confidence interval around the median with a bootstrap procedure by substituting the median for the mean in the usual bootstrap procedure for estimating a confidence limit around the mean, as follows:

+
+
data = c(11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5,
+         15.0, 15.1, 15.8, 16.0, 16.3, 16.5, 16.9,
+         17.0, 17.2, 17.5, 19.0)
+
+observed_median <- median(data)
+
+n <- 10000
+medians <- numeric(n)
+
+for (i in 1:n) {
+    sample <- sample(data, replace=TRUE)
+    medians[i] <- median(sample)
+}
+
+hist(medians)
+
+message('Observed median aluminum content: ', observed_median)
+
+
Observed median aluminum content: 15.45
+
+
pp <- quantile(medians, c(0.025, 0.975))
+message('Estimate of 95 percent confidence interval: ', pp[1], ' - ', pp[2])
+
+
Estimate of 95 percent confidence interval: 14.15 - 16.6
+
+
+
+
+

+
+
+
+
+

(This problem would be approached conventionally with a binomial procedure leading to quite wide confidence intervals (Deshpande, Gore, and Shanubhogue 1995, 32)).

+ +
+
+

27.3.4 Example: Confidence Interval for the Median Price Elasticity of Demand for Cigarettes

+

The data for a measure of responsiveness of demand to a price change (the “elasticity” — percent change in demand divided by percent change in price) are shown for cigarette price changes as follows (Table 27.2). I (JLS) computed the data from cigarette sales data preceding and following a tax change in a state (Lyon and Simon 1968).

+
Table 27.2: Price elasticity of demand in various states at various dates

 1.725   1.139    .957    .863    .802    .517    .407    .304
  .204    .125    .122    .106    .031   -.032   -.1     -.142
 -.174   -.234   -.240   -.251   -.277   -.301   -.302   -.302
 -.307   -.328   -.329   -.346   -.357   -.376   -.377   -.383
 -.385   -.393   -.444   -.482   -.511   -.538   -.541   -.549
 -.554   -.600   -.613   -.644   -.692   -.713   -.724   -.734
 -.749   -.752   -.753   -.766   -.805   -.866   -.926   -.971
 -.972   -.975  -1.018  -1.024  -1.066  -1.118  -1.145  -1.146
-1.157  -1.282  -1.339  -1.420  -1.443  -1.478  -2.041  -2.092
-7.100
+
+

The positive observations (implying an increase in demand when the price rises) run against all theory, but can be considered to be the result simply of measurement errors, and treated as they stand. Aside from this minor complication, the reader may work this example similarly to the case of the Theban jars. Consider this program:

+
+
data = c(
+    1.725, 1.139, 0.957, 0.863, 0.802, 0.517, 0.407, 0.304,
+    0.204, 0.125, 0.122, 0.106, 0.031, -0.032, -0.1,  -0.142,
+    -0.174, -0.234, -0.240, -0.251, -0.277, -0.301, -0.302, -0.302,
+    -0.307, -0.328, -0.329, -0.346, -0.357, -0.376, -0.377, -0.383,
+    -0.385, -0.393, -0.444, -0.482, -0.511, -0.538, -0.541, -0.549,
+    -0.554, -0.600, -0.613, -0.644, -0.692, -0.713, -0.724, -0.734,
+    -0.749, -0.752, -0.753, -0.766, -0.805, -0.866, -0.926, -0.971,
+    -0.972, -0.975, -1.018, -1.024, -1.066, -1.118, -1.145, -1.146,
+    -1.157, -1.282, -1.339, -1.420, -1.443, -1.478, -2.041, -2.092,
+    -7.100)
+
+data_median <- median(data)
+
+n <- 10000
+
+medians <- numeric(n)
+
+for (i in 1:n) {
+    sample <- sample(data, replace=TRUE)
+    medians[i] <- median(sample)
+}
+
+hist(medians)
+
+message('Observed median elasticity: ', data_median)
+
+
Observed median elasticity: -0.511
+
+
pp <- quantile(medians, c(0.025, 0.975))
+message('Estimate of 95 percent confidence interval: ',
+        pp[1], ' - ', pp[2])
+
+
Estimate of 95 percent confidence interval: -0.692 - -0.357
+
+
+
+
+

+
+
+
+
+
+
+
+

27.4 Measured Data Example: Confidence Intervals For a Difference Between Two Means

+

This is another example from the mice data.

+

We return to the data on the survival times of the two groups of mice in Section 24.0.3. It is the view of this book that confidence intervals should be calculated for a difference between two groups only if one is reasonably satisfied that the difference is not due to chance. Some statisticians might choose to compute a confidence interval in this case nevertheless, some because they believe that the confidence-interval machinery is more appropriate for deciding whether the difference is the likely outcome of chance than is the machinery of a hypothesis test, in which you are concerned with the behavior of a benchmark or null universe. So let us calculate a confidence interval for these data, which will in any case demonstrate the technique for determining a confidence interval for a difference between two samples.

+

Our starting point is our estimate for the difference in mean survival times between the two samples — 30.63 days. We ask “How much might this estimate be in error? If we drew additional samples from the control universe and additional samples from the treatment universe, how much might they differ from this result?”

+

We do not have the ability to go back to these universes and draw more samples, but from the samples themselves we can create hypothetical universes that embody all that we know about the treatment and control universes. We imagine replicating each element in each sample millions of times to create a hypothetical control universe and (separately) a hypothetical treatment universe. Then we can draw samples (separately) from these hypothetical universes to see how reliable is our original estimate of the difference in means (30.63 days).

+

Actually, we use a shortcut — instead of copying each sample element a million times, we simply replace it after drawing it for our resample, thus creating a universe that is effectively infinite.

+

Here are the steps:

+
    +
  • Step 1: Consider the two samples separately as the relevant universes.
  • +
  • Step 2: Draw a sample of 7 with replacement from the treatment group and calculate the mean.
  • +
  • Step 3: Draw a sample of 9 with replacement from the control group and calculate the mean.
  • +
  • Step 4: Calculate the difference in means (treatment minus control) & record.
  • +
  • Step 5: Repeat steps 2-4 many times.
  • +
  • Step 6: Review the distribution of resample means; the 5th and 95th percentiles are estimates of the endpoints of a 90 percent confidence interval.
  • +
+

Here is a R example:

+
+
+# Survival times (days) for the treatment and control mice.
+treatment <- c(94, 38, 23, 197, 99, 16, 141)
+control <- c(52, 10, 40, 104, 51, 27, 146, 30, 46)
+
+# Observed difference in mean survival times.
+observed_diff <- mean(treatment) - mean(control)
+
+# Number of bootstrap trials.
+n <- 10000
+mean_delta <- numeric(n)
+
+for (i in 1:n) {
+    # Resample each group with replacement, keeping the original group sizes.
+    treatment_sample <- sample(treatment, replace=TRUE)
+    control_sample <- sample(control, replace=TRUE)
+    # Record the difference in means for this trial.
+    mean_delta[i] <- mean(treatment_sample) - mean(control_sample)
+}
+
+# Histogram of the resampled differences in means.
+hist(mean_delta)
+
+message('Observed difference in means: ', round(observed_diff, 2))
+
+
Observed difference in means: 30.63
+
+
pp <- quantile(mean_delta, c(0.05, 0.95))
+message('Estimate of 90 percent confidence interval: ',
+        round(pp[1], 2), ' - ', round(pp[2], 2))
+
+
Estimate of 90 percent confidence interval: -13.76 - 75.43
+
+
+
+
+

+
+
+
+
+

Interpretation: This means that one can be 90 percent confident that the mean of the difference (which is estimated to be 30.635) falls between -13.763 and 75.429. Such a wide interval tells us that the reliability of the estimated mean difference is low.

+
+
+

27.5 Count Data Example: Confidence Limit on a Proportion, Framingham Cholesterol Data

+

The Framingham cholesterol data were used in Section 21.2.6 to illustrate the first classic question in statistical inference — interpretation of sample data for testing hypotheses. Now we use the same data for the other main theme in statistical inference — the estimation of confidence intervals. Indeed, the bootstrap method discussed above was originally devised for estimation of confidence intervals. The bootstrap method may also be used to calculate the appropriate sample size for experiments and surveys, another important topic in statistics.

+

Consider for now just the data for the sub-group of 135 high-cholesterol men in Table 21.4. Our second classic statistical question is as follows: How much confidence should we have that if we were to take a much larger sample than was actually obtained, the sample mean (that is, the proportion 10/135 = .07) would be in some close vicinity of the observed sample mean? Let us first carry out a resampling procedure to answer the questions, waiting until afterwards to discuss the logic of the inference.

+
    +
  1. Construct a bucket containing 135 balls — 10 red (infarction) and 125 green (no infarction) to simulate the universe as we guess it to be.
  2. +
  3. Mix, choose a ball, record its color, replace it, and repeat 135 times (to simulate a sample of 135 men).
  4. +
  5. Record the number of red balls among the 135 balls drawn.
  6. +
  7. Repeat steps 2-3 perhaps 10000 times, and observe how much the total number of reds varies from sample to sample. We arbitrarily denote the boundary lines that include 47.5 percent of the hypothetical samples on each side of the sample mean as the 95 percent “confidence limits” around the mean of the actual population.
  8. +
+

Here is an R program:

+
+
+# 135 men: 1 codes an infarction (10 men), 0 codes no infarction (125 men).
+men <- rep(c(1, 0), c(10, 125))
+
+# Number of bootstrap trials.
+n <- 10000
+z <- numeric(n)
+
+for (i in 1:n) {
+    # Draw 135 men with replacement from the simulated universe.
+    resample <- sample(men, replace=TRUE)
+    # Count the infarctions in this resample.
+    infarctions <- sum(resample == 1)
+    # Record the proportion of infarctions.
+    z[i] <- infarctions / 135
+}
+
+# Histogram of the resampled proportions.
+hist(z)
+
+pp <- quantile(z, c(0.025, 0.975))
+message('Estimate of 95 percent confidence interval: ',
+        round(pp[1], 2), ' - ', round(pp[2], 2))
+
+
Estimate of 95 percent confidence interval: 0.04 - 0.12
+
+
+
+
+

+
+
+
+
+

(The result is the 95 percent confidence interval, enclosing 95 percent of the resample results)

+

The variation in the histogram above highlights the fact that a sample containing only 10 cases of infarction is very small, and the number of observed cases — or the proportion of cases — necessarily varies greatly from sample to sample. Perhaps the most important implication of this statistical analysis, then, is that we badly need to collect additional data.

+

Again, this is a classic problem in confidence intervals, found in all subject fields. The language used in the cholesterol-infarction example is exactly the same as the language used for the Bush-Dukakis poll above except for labels and numbers.

+

As noted above, the philosophic logic of confidence intervals is quite deep and controversial, less obvious than for the hypothesis test. The key idea is that we can estimate for any given universe the probability P that a sample’s mean will fall within any given distance D of the universe’s mean; we then turn this around and assume that if we know the sample mean, the probability is P that the universe mean is within distance D of it. This inversion is more slippery than it may seem. But the logic is exactly the same for the formulaic method and for resampling. The only difference is how one estimates the probabilities — either with a numerical resampling simulation (as here), or with a formula or other deductive mathematical device (such as counting and partitioning all the possibilities, as Galileo did when he answered a gambler’s question about three dice). And when one uses the resampling method, the probabilistic calculations are the least demanding part of the work. One then has mental capacity available to focus on the crucial part of the job — framing the original question soundly, choosing a model for the facts so as to properly resemble the actual situation, and drawing appropriate inferences from the simulation.

+
+
+

27.6 Approach 2: Probability of various universes producing this sample

+

A second approach to the general question of estimate accuracy is to analyze the behavior of a variety of universes centered at other points on the line, rather than the universe centered on the sample mean. One can ask the probability that a distribution centered away from the sample mean, with a given dispersion, would produce (say) a 10-apple scatter having a mean as far away from the given point as the observed sample mean. If we assume the situation to be symmetric, we can find a point at which we can say that a distribution centered there would have only a (say) 5 percent chance of producing the observed sample. And we can also say that a distribution even further away from the sample mean would have an even lower probability of producing the given sample. But we cannot turn the matter around and say that there is any particular chance that the distribution that actually produced the observed sample is between that point and the center of the sample.

+

Imagine a situation where you are standing on one side of a canyon, and you are hit by a baseball, the only ball in the vicinity that day. Based on experiments, you can estimate that a baseball thrower who you see standing on the other side of the canyon has only a 5 percent chance of hitting you with a single throw. But this does not imply that the source of the ball that hit you was someone else standing in the middle of the canyon, because that is patently impossible. That is, your knowledge about the behavior of the “boundary” universe does not logically imply anything about the existence and behavior of any other universes. But just as in the discussion of testing hypotheses, if you know that one possibility is unlikely, it is reasonable that as a result you will draw conclusions about other possibilities in the context of your general knowledge and judgment.

+

We can find the “boundary” distribution(s) we seek if we a) specify a measure of dispersion, and b) try every point along the line leading away from the sample mean, until we find that distribution that produces samples such as that observed with a (say) 5 percent probability or less.

+

To estimate the dispersion, in many cases we can safely use an estimate based on the sample dispersion, using either resampling or Normal distribution theory. The hardest cases for resampling are a) a very small sample of data, and b) a proportion near 0 or near 1.0 (because the presence or absence in the sample of a small number of observations can change the estimate radically, and therefore a large sample is needed for reliability). In such situations one should use additional outside information, or Normal distribution theory, or both.

+

We can also create a confidence interval in the following fashion: We can first estimate the dispersion for a universe in the general neighborhood of the sample mean, using various devices to be “conservative,” if we like.2 Given the estimated dispersion, we then estimate the probability distribution of various amounts of error between observed sample means and the population mean. We can do this with resampling simulation as follows: a) Create other universes at various distances from the sample mean, but with other characteristics similar to the universe that we postulate for the immediate neighborhood of the sample, and b) experiment with those universes. One can also apply the same logic with a more conventional parametric approach, using general knowledge of the sampling distribution of the mean, based on Normal distribution theory or previous experience with resampling. We shall not discuss the latter method here.

+

As with approach 1, we do not make any probability statements about where the population mean may be found. Rather, we discuss only what various hypothetical universes might produce , and make inferences about the “actual” population’s characteristics by comparison with those hypothesized universes.

+

If we are interested in (say) a 95 percent confidence interval, we want to find the distribution on each side of the sample mean that would produce a sample with a mean that far away only 2.5 percent of the time (2 * .025 = 1-.95). A shortcut to find these “border distributions” is to plot the sampling distribution of the mean at the center of the sample, as in Approach 1. Then find the (say) 2.5 percent cutoffs at each end of that distribution. On the assumption of equal dispersion at the two points along the line, we now reproduce the previously-plotted distribution with its centroid (mean) at those 2.5 percent points on the line. The new distributions will have 2.5 percent of their areas on the other side of the mean of the sample.

+
+

27.6.1 Example: Approach 2 for Counted Data: the Bush-Dukakis Poll

+

Let’s implement Approach 2 for counted data, using for comparison the Bush-Dukakis poll data discussed earlier in the context of Approach 1.

+

We seek to state, for universes that we select on the basis that their results will interest us, the probability that they (or it, for a particular universe) would produce a sample as far or farther away from the mean of the universe in question as the mean of the observed sample — 56 percent for Bush. The most interesting universe is that which produces such a sample only about 5 percent of the time, simply because of the correspondence of this value to a conventional breakpoint in statistical inference. So we could experiment with various universes by trial and error to find this universe.

+

We can learn from our previous simulations of the Bush — Dukakis poll in Approach 1 that about 95 percent of the samples fall within .025 on either side of the sample mean (which we had been implicitly assuming is the location of the population mean). If we assume (and there seems no reason not to) that the dispersions of the universes we experiment with are the same, we will find (by symmetry) that the universe we seek is centered on those points .025 away from .56, or .535 and .585.

+

From the standpoint of Approach 2, then, the conventional sample formula that is centered at the mean can be considered a shortcut to estimating the boundary distributions. We say that the boundary is at the point that centers a distribution which has only a (say) 2.5 percent chance of producing the observed sample; it is that distribution which is the subject of the discussion, and not the distribution which is centered at \(\mu = \bar{x}\). Results of these simulations are shown in Figure 27.1.

+
+
+

+
Figure 27.1: Approach 2 for Bush-Dukakis problem
+
+
+

About these distributions centered at .535 and .585 — or more importantly for understanding an election situation, the universe centered at .535 — one can say: Even if the “true” value is as low as 53.5 percent for Bush, there is only a 2 ½ percent chance that a sample as high as 56 percent pro-Bush would be observed. (The values of a 2 ½ percent probability and a 2 ½ percent difference between 56 percent and 53.5 percent coincide only by chance in this case.) It would be even more revealing in an election situation to make a similar statement about the universe located at 50-50, but this would bring us almost entirely within the intellectual ambit of hypothesis testing.
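Because this page is still being ported to R, here is a minimal R sketch (not from the original text) of the kind of simulation behind Figure 27.1. The universes centered at .535 and .585 come from the discussion above; the number of poll respondents is not restated in this section, so the sketch assumes a poll of 1,500, a size consistent with the roughly .025 spread on either side quoted above.

+# Sketch of Approach 2 for the Bush-Dukakis poll (assumptions noted above).
+n_trials <- 10000
+poll_size <- 1500  # assumed number of respondents
+
+for (universe in c(0.535, 0.585)) {
+    props <- numeric(n_trials)
+    for (i in 1:n_trials) {
+        # Draw one poll of poll_size voters from a universe with the given
+        # proportion for Bush (1 = Bush, 0 = Dukakis).
+        votes <- sample(c(1, 0), size=poll_size, replace=TRUE,
+                        prob=c(universe, 1 - universe))
+        props[i] <- mean(votes)
+    }
+    # How often does this universe produce a sample at least as far from its
+    # own center as the observed 56 percent?
+    if (universe < 0.56) {
+        p <- mean(props >= 0.56)
+    } else {
+        p <- mean(props <= 0.56)
+    }
+    message('Universe centered at ', universe,
+            ': proportion of samples as extreme as 0.56: ', round(p, 3))
+}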

+

To restate, then: Moving progressively farther away from the sample mean, we can eventually find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can then say that this point represents a “limit” or “boundary” so that the interval between it and the sample mean may be called a confidence interval.

+
+
+

27.6.2 Example: Approach 2 for Measured Data: The Diameters of Trees

+

To implement Approach 2 for measured data, one may proceed exactly as with Approach 1 above except that the output of the simulation with the sample mean as midpoint will be used for guidance about where to locate trial universes for Approach 2. The results for the tree diameter data (Table 27.1) are shown in Figure 27.2.

+
+
+

+
Figure 27.2: Approach 2 for tree diameters
+
+
+
+
+
+

27.7 Interpretation of Approach 2

+

Now to interpret the results of the second approach: Assume that the sample is not drawn in a biased fashion (such as the wind blowing all the apples in the same direction), and that the population has the same dispersion as the sample. We can then say that distributions centered at the two endpoints of the 95 percent confidence interval (each of them including a tail in the direction of the observed sample mean with 2.5 percent of the area), or even further away from the sample mean, will produce the observed sample only 5 percent of the time or less .

+

The result of the second approach is more in the spirit of a hypothesis test than of the usual interpretation of confidence intervals. Another statement of the result of the second approach is: We postulate a given universe — say, a universe at (say) the two-tailed 95 percent boundary line. We then say: The probability that the observed sample would be produced by a universe with a mean as far (or further) from the observed sample’s mean as the universe under investigation is only 2.5 percent. This is similar to the probability value interpretation of a hypothesis-test framework. It is not a direct statement about the location of the mean of the universe from which the sample has been drawn. But it is certainly reasonable to derive a betting-odds interpretation of the statement just above, to wit: The chances are 2½ in 100 (or, the odds are 2½ to 97½ ) that a population located here would generate a sample with a mean as far away as the observed sample. And it would seem legitimate to proceed to the further betting-odds statement that (assuming we have no additional information) the odds are 97 ½ to 2 ½ that the mean of the universe that generated this sample is no farther away from the sample mean than the mean of the boundary universe under discussion. About this statement there is nothing slippery, and its meaning should not be controversial.

+

Here again the tactic for interpreting the statistical procedure is to restate the facts of the behavior of the universe that we are manipulating and examining at that moment. We use a heuristic device to find a particular distribution — the one that is at (say) the 97 ½ –2 ½ percent boundary — and simply state explicitly what the distribution tells us implicitly: The probability of this distribution generating the observed sample (or a sample even further removed) is 2 ½ percent. We could go on to say (if it were of interest to us at the moment) that because the probability of this universe generating the observed sample is as low as it is, we “reject” the “hypothesis” that the sample came from a universe this far away or further. Or in other words, we could say that because we would be very surprised if the sample were to have come from this universe, we instead believe that another hypothesis is true. The “other” hypothesis often is that the universe that generated the sample has a mean located at the sample mean or closer to it than the boundary universe.

+

The behavior of the universe at the 97 ½ –2 ½ percent boundary line can also be interpreted in terms of our “confidence” about the location of the mean of the universe that generated the observed sample. We can say: At this boundary point lies the end of the region within which we would bet 97 ½ to 2 ½ that the mean of the universe that generated this sample lies to the (say) right of it.

+

As noted in the preview to this chapter, we do not learn about the reliability of sample estimates of the population mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because in principle this cannot be done . Instead, in this second approach we investigate the behavior of various universes at the borderline of the neighborhood of the sample, those universes being chosen on the basis of their resemblances to the sample. We seek, for example, to find the universes that would produce samples with the mean of the observed sample less than (say) 5 percent of the time. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of hypothesized universes, the hypotheses being implicitly suggested by the sample evidence but not logically implied by that evidence.

+

Approaches 1 and 2 may (if one chooses) be seen as identical conceptually as well as (in many cases) computationally (except for the asymmetric distributions mentioned earlier). But as I see it, the interpretation of them is rather different, and distinguishing them helps one’s intuitive understanding.

+
+
+

27.8 Exercises

+

Solutions for problems may be found in the section titled "Exercise Solutions" at the back of this book.

+
+

27.8.1 Exercise 1

+

In a sample of 200 people, 7 percent are found to be unemployed. Determine a 95 percent confidence interval for the true population proportion.

+
+
+

27.8.2 Exercise 2

+

A sample of 20 batteries is tested, and the average lifetime is 28.85 months. Establish a 95 percent confidence interval for the true average value. The sample values (lifetimes in months) are listed below.

+

30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31

+
+
+

27.8.3 Exercise 3

+

Suppose we have 10 measurements of Optical Density on a batch of HIV negative control:

+

.02 .026 .023 .017 .022 .019 .018 .018 .017 .022

+

Derive a 95 percent confidence interval for the sample mean. Are there enough measurements to produce a satisfactory answer?

+ + + +
+
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/confidence_2_files/figure-html/unnamed-chunk-1-1.png b/r-book/confidence_2_files/figure-html/unnamed-chunk-1-1.png new file mode 100644 index 00000000..5b18743a Binary files /dev/null and b/r-book/confidence_2_files/figure-html/unnamed-chunk-1-1.png differ diff --git a/r-book/confidence_2_files/figure-html/unnamed-chunk-3-3.png b/r-book/confidence_2_files/figure-html/unnamed-chunk-3-3.png new file mode 100644 index 00000000..8a0864ae Binary files /dev/null and b/r-book/confidence_2_files/figure-html/unnamed-chunk-3-3.png differ diff --git a/r-book/confidence_2_files/figure-html/unnamed-chunk-5-1.png b/r-book/confidence_2_files/figure-html/unnamed-chunk-5-1.png new file mode 100644 index 00000000..4b513571 Binary files /dev/null and b/r-book/confidence_2_files/figure-html/unnamed-chunk-5-1.png differ diff --git a/r-book/confidence_2_files/figure-html/unnamed-chunk-7-1.png b/r-book/confidence_2_files/figure-html/unnamed-chunk-7-1.png new file mode 100644 index 00000000..042c5564 Binary files /dev/null and b/r-book/confidence_2_files/figure-html/unnamed-chunk-7-1.png differ diff --git a/r-book/confidence_2_files/figure-html/unnamed-chunk-9-1.png b/r-book/confidence_2_files/figure-html/unnamed-chunk-9-1.png new file mode 100644 index 00000000..d0711cc8 Binary files /dev/null and b/r-book/confidence_2_files/figure-html/unnamed-chunk-9-1.png differ diff --git a/r-book/correlation_causation.html b/r-book/correlation_causation.html new file mode 100644 index 00000000..7320f9de --- /dev/null +++ b/r-book/correlation_causation.html @@ -0,0 +1,2934 @@ + + + + + + + + + +Resampling statistics - 29  Correlation and Causation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

29  Correlation and Causation

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

29.1 Preview

+

The correlation (speaking in a loose way for now) between two variables measures the strength of the relationship between them. A positive “linear” correlation between two variables x and y implies that high values of x are associated with high values of y, and that low values of x are associated with low values of y. A negative correlation implies the opposite; high values of x are associated with low values of y. By definition a “correlation coefficient” close to zero indicates little or no linear relationship between two variables; correlation coefficients close to 1 and -1 denote a strong positive or negative relationship. We will generally use a simpler measure of correlation than the correlation coefficient, however.

+

One way to measure correlation with the resampling method is to rank both variables from highest to lowest, and investigate how often in randomly-generated samples the rankings of the two variables are as close to each other as the rankings in the observed variables. A better approach, because it uses more of the quantitative information contained in the data though it requires more computation, is to multiply the values for the corresponding pairs of values for the two variables, and compare the sum of the resulting products to the analogous sum for randomly-generated pairs of the observed variable values. The last section of the chapter shows how the strength of a relationship can be determined when the data are counted, rather than measured. First comes some discussion of the philosophical issues involved in correlation and causation.

+
+
+

29.2 Introduction to correlation and causation

+

The questions in examples Section 12.1 to Section 13.3.3 have been stated in the following form: Does the independent variable (say, irradiation; or type of pig ration) have an effect upon the dependent variable (say, sex of fruit flies; or weight gain of pigs)? This is another way to state the following question: Is there a causal relationship between the independent variable(s) and the dependent variable? (“Independent” or “control” is the name we give to the variable(s) the researcher believes is (are) responsible for changes in the other variable, which we call the “dependent” or “response” variable.)

+

A causal relationship cannot be defined perfectly neatly. Even an experiment does not determine perfectly whether a relationship deserves to be called “causal” because, among other reasons, the independent variable may not be clear-cut. For example, even if cigarette smoking experimentally produces cancer in rats, it might be the paper and not the tobacco that causes the cancer. Or consider the fabled gentlemen who got experimentally drunk on bourbon and soda on Monday night, scotch and soda on Tuesday night, and brandy and soda on Wednesday night — and stayed sober Thursday night by drinking nothing. With a vast inductive leap of scientific imagination, they treated their experience as an empirical demonstration that soda, the common element each evening, was the cause of the inebriated state they had experienced. Notice that their deduction was perfectly sound, given only the recent evidence they had. Other knowledge of the world is necessary to set them straight. That is, even in a controlled experiment there is often no way except subject-matter knowledge to avoid erroneous conclusions about causality. Nothing except substantive knowledge or scientific intuition would have led them to the recognition that it is the alcohol rather than the soda that made them drunk, as long as they always took soda with their drinks . And no statistical procedure can suggest to them that they ought to experiment with the presence and absence of soda. If this is true for an experiment, it must also be true for an uncontrolled study.

+

Here are some tests that a relationship usually must pass to be called causal. That is, a working definition of a particular causal relationship is expressed in a statement that has these important characteristics:

+
    +
  1. It is an association that is strong enough so that the observer believes it to have a predictive (explanatory) power great enough to be scientifically useful or interesting. For example, he is not likely to say that wearing glasses causes (or is a cause of) auto accidents if the observed correlation is .07, even if the sample is large enough to make the correlation statistically significant. In other words, unimportant relationships are not likely to be labeled causal.

    +

    Various observers may well differ in judging whether or not an association is strong enough to be important and therefore “causal.” And the particular field in which the observer works may affect this judgment. This is an indication that whether or not a relationship is dubbed “causal” involves a good deal of human judgment and is subject to dispute.

  2. +
  3. The “side conditions” must be sufficiently few and sufficiently observable so that the relationship will apply under a wide enough range of conditions to be considered useful or interesting. In other words, the relationship must not require too many “if”s, “and”s, and “but”s in order to hold . For example, one might say that an increase in income caused an increase in the birth rate if this relationship were observed everywhere. But, if the relationship were found to hold only in developed countries, among the educated classes, and among the higher-income groups, then it would be less likely to be called “causal” — even if the correlation were extremely high once the specified conditions had been met. A similar example can be made of the relationship between income and happiness.

  4. +
  5. For a relationship to be called “causal,” there should be sound reason to believe that, even if the control variable were not the “real” cause (and it never is), other relevant “hidden” and “real” cause variables must also change consistently with changes in the control variables. That is, a variable being manipulated may reasonably be called “causal” if the real variable for which it is believed to be a proxy must always be tied intimately to it. (Between two variables, v and w, v may be said to be the “more real” cause and w a “spurious” cause, if v and w require the same side conditions, except that v does not require w as a side condition.) This third criterion (non-spuriousness) is of particular importance to policy makers. The difference between it and the previous criterion for side conditions is that a plenitude of very restrictive side conditions may take the relationship out of the class of causal relationships, even though the effects of the side conditions are known . This criterion of nonspuriousness concerns variables that are as yet unknown and unevaluated but that have a possible ability to upset the observed association.

    +

    Examples of spurious relationships and hidden-third-factor causation are commonplace. For a single example, toy sales rise in December. There is no danger in saying that December causes an increase in toy sales, even though it is “really” Christmas that causes the increase, because Christmas and December practically always accompany each other.

    +

    Belief that the relationship is not spurious is increased if many likely variables have been investigated and none removes the relationship. This is further demonstration that the test of whether or not an association should be called “causal” cannot be a logical one; there is no way that one can express in symbolic logic the fact that many other variables have been tried without changing the relationship in question.

  6. +
  7. The more tightly a relationship is bound into (that is, deduced from, compatible with, and logically connected to) a general framework of theory, the stronger is its claim to be called “causal.” For an economics example, observed positive relationships between the interest rate and business investment and between profits and investment are more likely to be called “causal” than is the relationship between liquid assets and investment. This is so because the first two statements can be deduced from classical price theory, whereas the third statement cannot. Connection to a theoretical framework provides support for belief that the side conditions necessary for the statement to hold true are not restrictive and that the likelihood of spurious correlation is not great; because a statement is logically connected to the rest of the system, the statement tends to stand or fall as the rest of the system stands or falls. And, because the rest of the system of economic theory has, over a long period of time and in a wide variety of tests, been shown to have predictive power, a statement connected with it is cloaked in this mantle.

  8. +
+

The social sciences other than economics do not have such well-developed bodies of deductive theory, and therefore this criterion of causality does not weigh as heavily in sociology, for instance, as in economics. Rather, the other social sciences seem to substitute a weaker and more general criterion, that is, whether or not the statement of the relationship is accompanied by other statements that seem to “explain” the “mechanism” by which the relationship operates. Consider, for example, the relationship between the phases of the moon and the suicide rate. The reason that sociologists do not call it causal is that there are no auxiliary propositions that explain the relationship and describe an operative mechanism. On the other hand, the relationship between broken homes and juvenile delinquency is often referred to as “causal,” in large part because a large body of psychoanalytic theory serves to explain why a child raised without one or the other parent, or in the presence of parental strife, should not adjust readily.

+

Furthermore, one can never decide with perfect certainty whether in any given situation one variable “causes” a particular change in another variable. At best, given your particular purposes in investigating a phenomenon, you may be safe in judging that very likely there is causal influence.

+

In brief, it is correct to say (as it is so often said) that correlation does not prove causation — if we add the word “completely” to make it “correlation does not completely prove causation.” On the other hand, causation can never be “proven” completely by correlation or any other tool or set of tools, including experimentation. The best we can do is make informed judgments about whether to call a relationship causal.

+

It is clear, however, that in any situation where we are interested in the possibility of causation, we must at least know whether there is a relationship (correlation) between the variables of interest; the existence of a relationship is necessary for a relationship to be judged causal even if it is not sufficient to receive the causal label. And in other situations where we are not even interested in causality, but rather simply want to predict events or understand the structure of a system, we may be interested in the existence of relationships quite apart from questions about causations. Therefore our next set of problems deals with the probability of there being a relationship between two measured variables, variables that can take on any values (say, the values on a test of athletic scores) rather than just two values (say, whether or not there has been irradiation.)1

+

Another way to think about such problems is to ask whether two variables are independent of each other — that is, whether you know anything about the value of one variable if you know the value of the other in a particular case — or whether they are not independent but rather are related.

+
+
+

29.3 A Note on Association Compared to Testing a Hypothesis

+

Problems in which we investigate a) whether there is an association , versus b) whether there is a difference between just two groups, often look very similar, especially when the data constitute a 2-by-2 table. There is this important difference between the two types of analysis, however: Questions about association refer to variables — say weight and age — and it never makes sense to ask whether there is a difference between variables (except when asking whether they measure the same quantity). Questions about similarity or difference refer to groups of individuals , and in such a situation it does make sense to ask whether or not two groups are observably different from each other.

+

Example 23-1: Is Athletic Ability Directly Related to Intelligence? (Is There Correlation Between Two Variables or Are They Independent?) (Program “Ability1”)

+

A scientist often wants to know whether or not two characteristics go together, that is, whether or not they are correlated (that is, related or associated). For example, do youths with high athletic ability tend to also have high I.Q.s?

+

Hypothetical physical-education scores of a group of ten high-school boys are shown in Table 23-1, ordered from high to low, along with the I.Q. score for each boy. The ranks for each student’s athletic and I.Q. scores are then shown in columns 3 and 4.

+

Table 23-1

+

Hypothetical Athletic and I.Q. Scores for High School Boys

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Athletic Score   I.Q. Score   Athletic Rank   I.Q. Rank
(1)              (2)          (3)             (4)
97               114          1               3
94               120          2               1
93               107          3               7
90               113          4               4
87               118          5               2
86               101          6               8
86               109          7               6
85               110          8               5
81               100          9               9
76               99           10              10
+

We want to know whether a high score on athletic ability tends to be found along with a high I.Q. score more often than would be expected by chance. Therefore, our strategy is to see how often high scores on both variables are found by chance. We do this by disassociating the two variables and making two separate and independent universes, one composed of the athletic scores and another of the I.Q. scores. Then we draw pairs of observations from the two universes at random, and compare the experimental patterns that occur by chance to what actually is observed to occur in the world.

+

The first testing scheme we shall use is similar to our first approach to the pig rations — splitting the results into just “highs” and “lows.” We take ten cards, one of each denomination from “ace” to “10,” shuffle, and deal five cards to correspond to the first five athletic ranks. The face values then correspond to the I.Q. ranks. Under the benchmark hypothesis the athletic ranks will not be associated with the I.Q. ranks. Add the face values in the first five cards in each trial; the first hand includes 2, 4, 5, 6, and 9, so the sum is 26. Record, shuffle, and repeat perhaps ten times. Then compare the random results to the sum of the observed ranks of the five top athletes, which equals 17.

+

The following steps describe a slightly different procedure than that just described, because this one may be easier to understand:

+

Step 1. Convert the athletic and I.Q. scores to ranks. Then constitute a universe of spades, “ace” to “10,” to correspond to the athletic ranks, and a universe of hearts, “ace” to “10,” to correspond to the IQ ranks.

+

Step 2. Deal out the well-shuffled cards into pairs, each pair with an athletic score and an I.Q. score.

+

Step 3. Locate the cards with the top five athletic ranks, and add the I.Q. rank scores on their paired cards. Compare this sum to the observed sum of 17. If 17 or less, indicate “yes,” otherwise “no.” (Why do we use “17 or less” rather than “less than 17”? Because we are asking the probability of a score this low or lower .)

+

Step 4. Repeat steps 2 and 3 ten times.

+

Step 5. Calculate the proportion “yes.” This estimates the probability sought.

+

In Table 23-2 we see that the observed sum (17) is lower than the sum of the top 5 ranks in all but one (shown by an asterisk) of the ten random trials (trial 5), which suggests that there is a good chance (9 in 10) that the five best athletes will not have I.Q. scores that high by chance. But it might be well to deal some more to get a more reliable average. We add thirty hands, and thirty-nine of the total forty hands exceed the observed rank value, so the probability that the observed correlation of athletic and I.Q. scores would occur by chance is about .025. In other words, if there is no real association between the variables, the probability that the top 5 ranks would sum to a number this low or lower is only 1 in 40, and it therefore seems reasonable to believe that high athletic ability tends to accompany a high I.Q.

+

Table 23-2

+

Results of 40 Random Trials of The Problem “Ability”

+

(Note: Observed sum of IQ ranks: 17)

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Trial   Sum of IQ Ranks   Yes or No
1       26                No
2       23                No
3       22                No
4       37                No
* 5     16                Yes
6       22                No
7       22                No
8       28                No
9       38                No
10      22                No
11      35                No
12      36                No
13      31                No
14      29                No
15      32                No
16      25                No
17      25                No
18      29                No
19      25                No
20      22                No
21      30                No
22      31                No
23      35                No
24      25                No
25      33                No
26      30                No
27      24                No
28      29                No
29      30                No
30      31                No
31      30                No
32      21                No
33      25                No
34      19                No
35      29                No
36      23                No
37      23                No
38      34                No
39      23                No
40      26                No
+

The RESAMPLING STATS program “Ability1” creates an array containing the I.Q. rankings of the top 5 students in athletics. The SUM of these I.Q. rankings constitutes the observed result to be tested against randomly-drawn samples. We observe that the actual I.Q. rankings of the top five athletes sums to 17. The more frequently that the sum of 5 randomly-generated rankings (out of 10) is as low as this observed number, the higher is the probability that there is no relationship between athletic performance and I.Q. based on these data.

+

First we record the NUMBERS “1” through “10” into vector A. Then we SHUFFLE the numbers so the rankings are in a random order. Then TAKE the first 5 of these numbers and put them in another array, D, and SUM them, putting the result in E. We repeat this procedure 1000 times, recording each result in a scorekeeping vector: Z. Graphing Z, we get a HISTOGRAM that shows us how often our randomly assigned sums are equal to or below 17.

+ +
' Program file: "correlation_causation_00.rss"
+
+REPEAT 1000
+    ' Repeat the experiment 1000 times.
+    NUMBERS 1,10 a
+    ' Constitute the set of I.Q. ranks.
+    SHUFFLE a b
+    ' Shuffle them.
+    TAKE b 1,5 d
+    ' Take the first 5 ranks.
+    SUM d e
+    ' Sum those ranks.
+    SCORE e z
+    ' Keep track of the result of each trial.
+END
+' End the experiment, go back and repeat.
+HISTOGRAM z
+' Produce a histogram of trial results.
+

[Histogram of trial results. Title: ABILITY1: Random Selection of 5 Out of 10 Ranks. X-axis: Sum of top 5 ranks.]

+

We see that in only about 2% of the trials did random selection of ranks produce a total of 17 or lower. RESAMPLING STATS will calculate this for us directly:

+ +
' Program file: "ability1.rss"
+
+COUNT z <= 17 k
+' Determine how many trials produced sums of ranks <= 17 by chance.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the results.
+
+' Note: The file "ability1" on the Resampling Stats software disk contains
+' this set of commands.
+

Why do we sum the ranks of the first five athletes and compare them with the second five athletes, rather than comparing the top three, say, with the bottom seven? Indeed, we could have looked at the top three, two, four, or even six or seven. The first reason for splitting the group in half is that an even split uses the available information more fully, and therefore we obtain greater efficiency. (I cannot prove this formally here, but perhaps it makes intuitive sense to you.) A second reason is that getting into the habit of always looking at an even split reduces the chances that you will pick and choose in such a manner as to fool yourself. For example, if the I.Q. ranks of the top five athletes were 3, 2, 1, 10, and 9, we would be deceiving ourselves if, after looking the data over, we drew the line between athletes 3 and 4. (More generally, choosing an appropriate measure before examining the data will help you avoid fooling yourself in such matters.)

+

A simpler but less efficient approach to this same problem is to classify the top-half athletes by whether or not they were also in the top half of the I.Q. scores. Of the first five athletes actually observed, four were in the top five I.Q. scores. We can then shuffle five black and five red cards and see how often four or more (that is, four or five) blacks come up with the first five cards. The proportion of times that four or more blacks occurs in the trial is the probability that an association as strong as that observed might occur by chance even if there is no association. Table 23-3 shows a proportion of five trials out of twenty.

+

In the RESAMPLING STATS program “Ability2” we first note that the top 5 athletes had 4 of the top 5 I.Q. scores. So we constitute the set of 10 IQ rankings (vector A). We then SHUFFLE A and TAKE 5 I.Q. rankings (out of 10). We COUNT how many are in the top 5, and keep SCORE of the result. After REPEATing 1000 times, we find out how often we select 4 of the top 5.

+

Table 23-3

+

Results of 20 Random Trials of the Problem “ABILITY2”

+

Observed Score: 4

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Trial   Score   Yes or No
1       4       Yes
2       2       No
3       2       No
4       2       No
5       3       No
6       2       No
7       4       Yes
8       3       No
9       3       No
10      4       Yes
11      3       No
12      1       No
13      3       No
14      3       No
15      4       Yes
16      3       No
17      2       No
18      2       No
19      2       No
20      4       Yes
+ +
' Program file: "ability2.rss"
+
+REPEAT 1000
+    ' Do 1000 experiments.
+    NUMBERS 1,10 a
+    ' Constitute the set of I.Q. ranks.
+    SHUFFLE a b
+    ' Shuffle them.
+    TAKE b 1,5 c
+    ' Take the first 5 ranks.
+    COUNT c between 1 5 d
+    ' Of those 5, count how many are among the top half of the ranks (1-5).
+    SCORE d z
+    ' Keep track of that result in z
+END
+' End one experiment, go back and repeat until all 1000 are complete.
+COUNT z >= 4 k
+' Determine how many trials produced 4 or more top ranks by chance.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the result.
+
+' Note: The file "ability2" on the Resampling Stats software disk contains
+' this set of commands.
+
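As above, here is a minimal R sketch (not the book's own code) of the "Ability2" logic: draw 5 of the 10 ranks at random and count how many land in the top half.

+# R sketch of the "Ability2" procedure described above.
+n <- 10000
+z <- numeric(n)
+for (i in 1:n) {
+    # Draw 5 of the 10 I.Q. ranks at random.
+    taken <- sample(1:10, 5)
+    # Count how many of the 5 are in the top half of the ranks (1 to 5).
+    z[i] <- sum(taken <= 5)
+}
+# Proportion of trials with 4 or more of the top-half ranks.
+message('Proportion with 4 or more top ranks: ', sum(z >= 4) / n)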

So far we have proceeded on the theory that if there is any relationship between athletics and I.Q., then the better athletes have higher rather than lower I.Q. scores. The justification for this assumption is that past research suggests that it is probably true. But if we had not had the benefit of that past research, we would then have had to proceed somewhat differently; we would have had to consider the possibility that the top five athletes could have I.Q. scores either higher or lower than those of the other students. The results of the “two-tail” test would have yielded odds weaker than those we observed.

+

Example 23-2: Athletic Ability and I.Q. a Third Way.

+

(Program “Ability3”).

+

Example 23-1 investigated the relationship between I.Q. and athletic score by ranking the two sets of scores. But ranking of scores loses some efficiency because it uses only an “ordinal” (rank-ordered) rather than a “cardinal” (measured) scale; the numerical shadings and relative relationships are lost when we convert to ranks. Therefore let us consider a test of correlation that uses the original cardinal numerical scores.

+

First a little background: Figure 29.1 and Figure 29.2 show two hypothetical cases of very high association among the I.Q. and athletic scores used in previous examples. Figure 29.1 indicates that the higher the I.Q. score, the higher the athletic score. With a boy’s athletic score you can thus predict quite well his I.Q. score by means of a hand-drawn line — or vice versa. The same is true of Figure 29.2, but in the opposite direction. Notice that even though athletic score is on the x-axis (horizontal) and I.Q. score is on the y-axis (vertical), the athletic score does not cause the I.Q. score. (It is an unfortunate deficiency of such diagrams that some variable must arbitrarily be placed on the x-axis, whether you intend to suggest causation or not.)

+
+
+
+
+

+
Figure 29.1: Hypothetical Scores for I.Q. and Athletic Ability — 1
+
+
+
+
+
+
+
+
+

+
Figure 29.2: Hypothetical Scores for I.Q. and Athletic Ability — 2
+
+
+
+
+

In Figure 29.3, which plots the scores as given in table 23-1 the prediction of athletic score given I.Q. score, or vice versa, is less clear-cut than in Figure 29.1. On the basis of Figure 29.3 alone, one can say only that there might be some association between the two variables.

+
+
+
+
+

+
Figure 29.3: Given Scores for I.Q. and Athletic Ability
+
+
+
+
+
+
+

29.4 Correlation: sum of products

+

Now let us take advantage of a handy property of numbers. The more closely two sets of numbers match each other in order, the higher the sums of their products. Consider the following arrays of the numbers 1, 2, and 3:

+

1 x 1 = 1

+

2 x 2 = 4    (columns in matching order)
3 x 3 = 9

+

SUM = 14

+

1 x 2 = 2

+

2 x 3 = 6    (columns not in matching order)
3 x 1 = 3

+

SUM = 11

+

I will not attempt a mathematical proof, but the reader is encouraged to try additional combinations to be sure that the highest sum is obtained when the order of the two columns is the same. Likewise, the lowest sum is obtained when the two columns are in perfectly opposite order:

+

1 x 3 = 3

+

2 x 2 = 4    (columns in opposite order)
3 x 1 = 3

+

SUM = 10

+
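The paragraph above invites the reader to try other combinations; a quick R check (not part of the original text) of all orderings of 1, 2, 3 confirms that the matched ordering gives the largest sum of products and the reversed ordering the smallest.

+# Check the sum-of-products property for every ordering of 1, 2, 3.
+x <- c(1, 2, 3)
+orderings <- list(c(1, 2, 3), c(1, 3, 2), c(2, 1, 3),
+                  c(2, 3, 1), c(3, 1, 2), c(3, 2, 1))
+for (y in orderings) {
+    message(paste(y, collapse=' '), ' -> sum of products = ', sum(x * y))
+}
+# The matched order (1 2 3) gives 14, the reversed order (3 2 1) gives 10,
+# and every other ordering falls in between.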

Consider the cases in Table 23-4, which are chosen to illustrate a perfect (linear) association between x (Column 1) and y1 (Column 2), and also between x (Column 1) and y2 (Column 4); the numbers shown in Columns 3 and 5 are those that would be consistent with perfect associations. Notice the sum of the multiples of the x and y values in the two cases. It is either higher (xy1) or lower (xy2) than for any other possible way of arranging the y's. Any other arrangement of the y's (y3, in Column 6, for example, chosen at random), when multiplied by the x's in Column 1 (xy3), produces a sum that falls somewhere between the sums of xy1 and xy2, as is the case with any other set of y3's which is not perfectly correlated with the x's.

+

Table 23-5, below, shows that the sum of the products of the observed I.Q. scores multiplied by athletic scores (column 7) is between the sums that would occur if the I.Q. scores were ranked from best to worst (column 3) and worst to best (column 5). The extent of correlation (association) can thus be measured by whether the sum of the multiples of the observed x and y values is relatively much higher or much lower than are sums of randomly-chosen pairs of x and y.

+

Table 23-4

+

Comparison of Sums of Multiplications

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
       Strong Positive Relationship    Strong Negative Relationship    Random Pairings
X      Y1     X*Y1                     Y2     X*Y2                     Y3     X*Y3
2      2      4                        10     20                       4      8
4      4      16                       8      32                       8      32
6      6      36                       6      36                       6      36
8      8      64                       4      32                       2      16
10     10     100                      2      20                       10     100
SUMS:         220                             140                             192
+

Table 23-5

+

Sums of Products: IQ and Athletic Scores

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
1                2                   3                 4                   5                 6             7
Athletic Score   Hypothetical I.Q.   Col. 1 x Col. 2   Hypothetical I.Q.   Col. 1 x Col. 4   Actual I.Q.   Col. 1 x Col. 6
97               120                 11640             99                  9603              114           11058
94               118                 11092             100                 9400              120           11280
93               114                 10602             101                 9393              107           9951
90               113                 10170             107                 9630              113           10170
87               110                 9570              109                 9483              118           10266
86               109                 9374              110                 9460              101           8686
86               107                 9202              113                 9718              109           9374
85               101                 8585              114                 9690              110           9350
81               100                 8100              118                 9558              100           8100
76               99                  7524              120                 9120              99            7524
SUMS:                                95859                                 95055                           95759
+

3 Cases:

+
    +
  • Perfect positive correlation (hypothetical); column 3

  • +
  • Perfect negative correlation (hypothetical); column 5

  • +
  • Observed; column 7

  • +
+

Now we attack the I.Q. and athletic-score problem using the property of numbers just discussed. First multiply the x and y values of the actual observations, and sum them to be 95,759 (Table 23-5). Then write the ten observed I.Q. scores on cards, and assign the cards in random order to the ten athletes, as shown in column 1 in Table 23-6.

+

Multiply by the x’s, and sum as in Table 23-7. If the I.Q. scores and athletic scores are positively associated , that is, if high I.Q.s and high athletic scores go together, then the sum of the multiplications for the observed sample will be higher than for most of the random trials. (If high I.Q.s go with low athletic scores, the sum of the multiplications for the observed sample will be lower than most of the random trials.)

+

Table 23-6

+

Random Drawing of I.Q. Scores and Pairing (Randomly) Against Athletic Scores (20 Trials)

+

Trial Number

+

Athletic 1 2 3 4 5 6 7 8 9 10

+

Score

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
97114109110118107114107120100114
94101113113101118100110109120107
931071181009912010111499110113
901131011181141011131001189999
87120100101100110107113114101118
86100110120107113110118101118101
8611010799109100120120113114120
85999910412099109101107109109
811181201141101149999100107109
76109114109113109118109110113110
Trial Number
Athletic Score11121314151617181920
971091181011091071009911399110
94101110114118101107114101109113
93120120100120114113100100120100
901101181091109910910710911099
8710010012099118114110110107101
8611899107100109118113118100118
86991019910110099101107114120
85107114110114120110120120118100
81114107113113110101109114101100
7611310911810711312011899118107
+

Table 23-7

+

Results of Sum Products for Above 20 Random Trials

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Trial   Sum of Multiplications   Trial   Sum of Multiplications
1       95,430                   11      95,406
2       95,426                   12      95,622
3       95,446                   13      95,250
4       95,381                   14      95,599
5       95,542                   15      95,323
6       95,362                   16      95,308
7       95,508                   17      95,220
8       95,590                   18      95,443
9       95,379                   19      95,421
10      95,532                   20      95,528
+

More specifically, by the steps:

+

Step 1. Write the ten I.Q. scores on one set of cards, and the ten athletic scores on another set of cards.

+

Step 2. Pair the I.Q. and athletic-score cards at random. Multiply the scores in each pair, and add the results of the ten multiplications.

+

Step 3. Subtract the experimental sum in step 2 from the observed sum, 95,759.

+

Step 4. Repeat steps 2 and 3 twenty times.

+

Step 5. Compute the proportion of trials where the difference is negative, which estimates the probability that an association as strong as the observed would occur by chance.

+

The sums of the multiplications for 20 trials are shown in Table 23-7. No random-trial sum was as high as the observed sum, which suggests that the probability of an association this strong happening by chance is so low as to approach zero. (An empirically-observed probability is never actually zero.)

+

This problem can be solved particularly easily with RESAMPLING STATS. The arrays A and B in program “Ability3” list the athletic scores and the I.Q. scores respectively of 10 “actual” students ordered from highest to lowest athletic score. We MULTIPLY the corresponding elements of these arrays and proceed to compare the sum of these multiplications to the sums of experimental multiplications in which the elements are selected randomly.

+

Finally, we COUNT the trials in which the sum of the products of the randomly-paired athletic and I.Q. scores equals or exceeds the sum of the products in the observed data.

+ +
' Program file: "correlation_causation_03.rss"
+
+NUMBERS (97 94 93 90 87 86 86 85 81 76) a
+' Record athletic scores, highest to lowest.
+NUMBERS (114 120 107 113 118 101 109 110 100 99) b
+' Record corresponding IQ scores for those students.
+MULTIPLY a b c
+' Multiply the two sets of scores together.
+SUM c d
+' Sum the results — the "observed value."
+REPEAT 1000
+    ' Do 1000 experiments.
+    SHUFFLE a e
+    ' Shuffle the athletic scores so we can pair them against IQ scores.
+    MULTIPLY e b f
+    ' Multiply the shuffled athletic scores by the I.Q. scores. (Note that we
+    ' could shuffle the I.Q. scores too but it would not achieve any greater
+    ' randomization.)
+    SUM f j
+    ' Sum the randomized multiplications.
+    SUBTRACT d j k
+    ' Subtract the sum from the sum of the "observed" multiplication.
+    SCORE k z
+    ' Keep track of the result in z.
+END
+' End one trial, go back and repeat until 1000 trials are complete.
+HISTOGRAM z
+' Obtain a histogram of the trial results.
+

[Histogram of trial results. Title: Random Sums of Products, ATHLETES & IQ SCORES. X-axis: observed sum less random sum.]

+

We see that obtaining a chance trial result as great as that observed was rare. RESAMPLING STATS will calculate this proportion for us:

+ +
' Program file: "ability3.rss"
+
+COUNT z <= 0 k
+' Determine in how many trials the random sum of products was less than
+' the observed sum of products.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Note: The file "ability3" on the Resampling Stats software disk contains
+' this set of commands.
+
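Here too is a minimal R sketch (not the book's own code) of the "Ability3" procedure: shuffle the athletic scores, multiply them by the I.Q. scores, and see how often the randomized sum of products reaches the observed sum.

+# R sketch of the "Ability3" procedure described above.
+athletic <- c(97, 94, 93, 90, 87, 86, 86, 85, 81, 76)
+iq <- c(114, 120, 107, 113, 118, 101, 109, 110, 100, 99)
+
+# Observed sum of products (95,759).
+observed <- sum(athletic * iq)
+
+n <- 10000
+z <- numeric(n)
+for (i in 1:n) {
+    # Pair the athletic scores with the I.Q. scores at random.
+    shuffled <- sample(athletic)
+    # Record the observed sum less the randomized sum, as in the program above.
+    z[i] <- observed - sum(shuffled * iq)
+}
+hist(z)
+# Proportion of trials in which the randomized sum equalled or exceeded the
+# observed sum (difference of zero or less).
+message('Proportion: ', sum(z <= 0) / n)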

Example 23-3: Correlation Between Adherence to Medication Regime and Change in Cholesterol

+

Efron and Tibshirani (1993, 72) show data on the extents to which 164 men a) took the drug prescribed to them (cholostyramine), and b) showed a decrease in total plasma cholesterol. Table 23-8 shows these values (note that a positive value in the “decrease in cholesterol” column denotes a decrease in cholesterol, while a negative value denotes an increase.)

+

Table 23-8

(Each pair of columns shows the percent of prescribed dosage actually taken and the decrease in cholesterol; the 164 men are listed in four column-pairs.)

% Taken  Decrease | % Taken  Decrease | % Taken  Decrease | % Taken  Decrease
0 -5.25 | 27 -1.50 | 71 59.50 | 95 32.50
0 -7.25 | 28 23.50 | 71 14.75 | 95 70.75
0 -6.25 | 29 33.00 | 72 63.00 | 95 18.25
0 11.50 | 31 4.25 | 72 0.00 | 95 76.00
2 21.00 | 32 18.75 | 73 42.00 | 95 75.75
2 -23.00 | 32 8.50 | 74 41.25 | 95 78.75
2 5.75 | 33 3.25 | 75 36.25 | 95 54.75
3 3.25 | 33 27.75 | 76 66.50 | 95 77.00
3 8.75 | 34 30.75 | 77 61.75 | 96 68.00
4 8.75 | 34 -1.50 | 77 14.00 | 96 73.00
4 -10.25 | 34 1.00 | 78 36.00 | 96 28.75
7 -10.50 | 34 7.75 | 78 39.50 | 96 26.75
8 19.75 | 35 -15.75 | 81 1.00 | 96 56.00
8 -0.50 | 36 33.50 | 82 53.50 | 96 47.50
8 29.25 | 36 36.25 | 84 46.50 | 96 30.25
8 36.25 | 37 5.50 | 85 51.00 | 96 21.00
9 10.75 | 38 25.50 | 85 39.00 | 97 79.00
9 19.50 | 41 20.25 | 87 -0.25 | 97 69.00
9 17.25 | 43 33.25 | 87 1.00 | 97 80.00
10 3.50 | 45 56.75 | 87 46.75 | 97 86.00
10 11.25 | 45 4.25 | 87 11.50 | 98 54.75
11 -13.00 | 47 32.50 | 87 2.75 | 98 26.75
12 24.00 | 50 54.50 | 88 48.75 | 98 80.00
13 2.50 | 50 -4.25 | 89 56.75 | 98 42.25
15 3.00 | 51 42.75 | 90 29.25 | 98 6.00
15 5.50 | 51 62.75 | 90 72.50 | 98 104.75
16 21.25 | 52 64.25 | 91 41.75 | 98 94.25
16 29.75 | 53 30.25 | 92 48.50 | 98 41.25
17 7.50 | 54 14.75 | 92 61.25 | 98 40.25
18 -16.50 | 54 47.25 | 92 29.50 | 99 51.50
20 4.50 | 56 18.00 | 92 59.75 | 99 82.75
20 39.00 | 57 13.75 | 93 71.00 | 99 85.00
21 -5.75 | 57 48.75 | 93 37.75 | 99 70.00
21 -21.00 | 58 43.00 | 93 41.00 | 100 92.00
21 0.25 | 60 27.75 | 93 9.75 | 100 73.75
22 -10.25 | 62 44.50 | 93 53.75 | 100 54.00
24 -0.50 | 64 22.50 | 94 62.50 | 100 69.50
25 -19.00 | 64 -14.50 | 94 39.00 | 100 101.50
25 15.75 | 64 -20.75 | 94 3.25 | 100 68.00
26 6.00 | 67 46.25 | 94 60.00 | 100 44.75
27 10.50 | 68 39.50 | 95 113.25 | 100 86.75

(Column headers: “% Prescribed Dosage” and “Decrease in Cholesterol.”)

+

The aim is to assess the effect of the compliance on the improvement. There are two related issues:

+
  1. What form of regression should be fitted to these data, which we address later, and

  2. Is there reason to believe that the relationship is meaningful? That is, we wish to ascertain if there is any meaningful correlation between the variables — because if there is no relationship between the variables, there is no basis for regressing one on the other. Sometimes people jump ahead on the latter question: they first run the regression and then ask whether the regression slope coefficient(s) differ from zero. But this usually is not sound practice. The sensible way to proceed is first to graph the data to see whether there is visible indication of a relationship.

Efron and Tibshirani do this, and they find sufficient intuitive basis in the graph to continue the analysis. The next step is to investigate whether a measure of relationship is statistically significant; this we do as follows (program “inp10”):

+
  1. Multiply the observed values for each of the 164 participants on the independent x variable (cholostyramine — percent of prescribed dosage actually taken) and the dependent y variable (decrease in cholesterol), and sum the results — it’s 439,140.

  2. Randomly shuffle the dependent variable y values among the participants. The sampling is being done without replacement, though an equally good argument could be made for sampling with replacement; the results do not differ meaningfully, however, because the sample size is so large.

  3. Then multiply these x and y hypothetical values for each of the 164 participants, sum the results and record.

  4. Repeat steps 2 and 3 perhaps 1000 times.

  5. Determine how often the shuffled sum-of-products exceeds the observed value (439,140).

The following program in RESAMPLING STATS provides the solution:

+ +
' Program file: "correlation_causation_05.rss"
+
+READ FILE "inp10" x y
+' Data
+MULTIPLY x y xy
+' Step 1 above
+SUM xy xysum
+' Note: xysum = 439,140 (4.3914e+05)
+REPEAT 1000
+    ' Do 1000 simulations (step 4 above)
+    SHUFFLE x xrandom
+    ' Step 2 above
+    MULTIPLY xrandom y xy
+    ' Step 3 above
+    SUM xy newsum
+    ' Step 3 above
+    SCORE newsum scrboard
+    ' Step 3 above
+END
+' Step 4 above
+COUNT scrboard >= 439140 prob
+' Step 5 above
+PRINT xysum prob
+' Result: prob = 0. Interpretation: 1000 simulated random shufflings never
+' produced a sum-of-products as high as the observed value. Hence we rule
+' out random chance as an explanation for the observed correlation.
+
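A rough Python equivalent of the same procedure is sketched below. The data file is not reproduced here, so the two arrays (percent_taken, chol_decrease) are assumed to hold the 164 pairs from Table 23-8; the array names and the helper function are ours, not part of the original program.

import numpy as np

rng = np.random.default_rng()

# Assumes the 164 pairs from Table 23-8 have been loaded, e.g. from a file:
# percent_taken = np.array([...])   # % of prescribed dosage actually taken
# chol_decrease = np.array([...])   # decrease in total plasma cholesterol

def shuffle_sum_of_products_test(x, y, n_trials=1000, rng=rng):
    # The observed sum of products for the paired data.
    observed = np.sum(x * y)
    trial_sums = np.empty(n_trials)
    for i in range(n_trials):
        # Shuffle one variable to break any pairing, then recompute the sum.
        trial_sums[i] = np.sum(rng.permutation(x) * y)
    # Proportion of random pairings at least as large as the observed value.
    return observed, np.mean(trial_sums >= observed)

# observed, prob = shuffle_sum_of_products_test(percent_taken, chol_decrease)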

Example 23-3: Is There A Relationship Between Drinking Beer And Being In Favor of Selling Beer? (Testing for a Relationship Between Counted-Data Variables.) (Program “Beerpoll”)

+

The data for athletic ability and I.Q. were measured. Therefore, we could use them in their original “cardinal” form, or we could split them up into “high” and “low” groups. Often, however, the individual observations are recorded only as “yes” or “no,” which makes it more difficult to ascertain the existence of a relationship. Consider the poll responses in Table 23-9 to two public-opinion survey questions: “Do you drink beer?” and “Are you in favor of local option on the sale of beer?”.2

+ +

Table 23-9

+

Results of Observed Sample For Problem “Beerpoll”

                                           Do you drink beer?
Do you favor local option                  Yes    No    Total
on the sale of beer?
  Favor                                     45    20     65
  Don’t Favor                                7     6     13
  Total                                     52    26     78
+

Here is the statistical question: Is a person’s opinion on “local option” related to whether or not he drinks beer? Our resampling solution begins by noting that there are seventy-eight respondents, sixty-five of whom approve local option and thirteen of whom do not. Therefore write “approve” on sixty-five index cards and “not approve” on thirteen index cards. Now take another set of seventy-eight index cards, preferably of a different color, and write “yes” on fifty-two of them and “no” on twenty-six of them, corresponding to the numbers of people who do and do not drink beer in the sample. Now lay them down in random pairs, one from each pile.

+

If there is a high association between the variables, then real life observations will bunch up in the two diagonal cells in the upper left and lower right in Table 23-9. (Ignore the “total” data for now.) Therefore, subtract one sum of two diagonal cells from the other sum for the observed data: (45 + 6) - (20 + 7) = 24. Then compare this difference to the comparable differences found in random trials. The proportion of times that the simulated-trial difference exceeds the observed difference is the probability that the observed difference of +24 might occur by chance, even if there is no relationship between the two variables. (Notice that, in this case, we are working on the assumption that beer drinking is positively associated with approval of local option and not the inverse. We are interested only in differences that are equal to or exceed +24 when the northeast-southwest diagonal is subtracted from the northwest-southeast diagonal.)

+

We can carry out a resampling test with this procedure:

+

Step 1. Write “approve” on 65 and “disapprove” on 13 red index cards, respectively; write “Drink” and “Don’t drink” on 52 and 26 white cards, respectively.

+

Step 2. Pair the two sets of cards randomly. Count the numbers of the four possible pairs: (1) “approve-drink,” (2) “approve-don’t drink,” (3) “disapprove-drink,” and (4) “disapprove-don’t drink.” Record the number of these combinations, as in Table 23-10, where columns 1-4 correspond to the four cells in Table 23-9.

+

Step 3. Add (column 1 plus column 4) and (column 2 plus column 3), and subtract the result in the second parenthesis from the result in the first parenthesis. If the difference is equal to or greater than 24, record “yes,” otherwise “no.”

+

Step 4. Repeat steps 2 and 3 perhaps a hundred times.

+

Step 5. Calculate the proportion “yes,” which estimates the probability that an association this great or greater would be observed by chance.

+

Table 23-10

+

Results of One Random Trial of the Problem “Beerpoll”

Trial | (1) Approve & Yes | (2) Approve & No | (3) Disapprove & Yes | (4) Disapprove & No | (Col 1 + Col 4) - (Col 2 + Col 3)
  1   |        43         |        22        |          9           |          4          |        47 - 31 = 16

(“Yes”/“No” refer to the answer to “Do you drink beer?”)

+

A series of ten trials in this case (see Table 23-10) indicates that the observed difference is very often exceeded, which suggests that there is no relationship between beer drinking and opinion.

+

The RESAMPLING STATS program “Beerpoll” does this repetitively. From the “actual” sample results we know that 52 respondents drink beer and 26 do not. We create the vector “drink” with 52 “1”s for those who drink beer, and 26 “0”s for those who do not. We also create the vector “sale” with 65 “1”s (approve) and 13 “0”s (disapprove). In the actual sample, 51 of the 78 respondents had “consistent” responses to the two questions — that is, people who both favor the sale of beer and drink beer, or who are against the sale of beer and do not drink beer. We want to randomly pair the responses to the two questions to compare against that observed result to test the relationship.

+

To accomplish this aim, we REPEAT the following procedure 1000 times. We SHUFFLE drink to drink$ so that the responses are randomly ordered. Now when we SUBTRACT the corresponding elements of the two arrays, a “0” will appear in each element of the new array c for which there was consistency in the response of the two questions. We therefore COUNT the times that c equals “0” and place this result in d, and the number of times c does not equal 0, and place this result in e. Find the difference (d minus e), and SCORE this to z.

+

SCORE stores in z, for each trial, the number of consistent responses minus the number of inconsistent responses. To determine whether the results of the actual sample indicate a relationship between the responses to the two questions, we check how often the random trials had a difference (between consistent and inconsistent responses) as great as 24, the value in the observed sample.

+ +
' Program file: "beerpoll.rss"
+
+URN 52#1 26#0 drink
+' Constitute the set of 52 beer drinkers, represented by 52 "1"s, and the
+' set of 26 non-drinkers, represented by 26 "0"s.
+URN 65#1 13#0 sale
+' The same set of individuals classified by whether they favor ("1") or
+' don't favor ("0") the sale of beer.
+
+' Note: sale is now the vector {1 1 1 1 1 1 ... 0 0 0 0 0 ...} where 1 =
+' people in favor, 0 = people opposed.
+REPEAT 1000
+    ' Repeat the experiment 1000 times.
+    SHUFFLE drink drink$
+    ' Shuffle the beer drinkers/non-drinkers; call the shuffled vector drink$.
+
+    ' Note: drink$ is now a vector like {1 1 1 0 1 0 0 1 0 1 1 0 0 ...}
+    ' where 1 = drinker, 0 = non-drinker.
+    SUBTRACT drink$ sale c
+    ' Subtract the favor/don't favor vector from the shuffled drink/don't
+    ' drink vector. Consistent responses are someone who drinks favoring the
+    ' sale of beer (a "1" and a "1") or someone who doesn't drink opposing
+    ' the sale of beer (a "0" and a "0"). When subtracted, consistent
+    ' responses (and only consistent responses) produce a "0."
+    COUNT c =0 d
+    ' Count the number of consistent responses (those equal to "0").
+    COUNT c <> 0 e
+    ' Count the "inconsistent" responses (those not equal to "0").
+    SUBTRACT d e f
+    ' Find the difference.
+    SCORE f z
+    ' Keep track of the results of each trial.
+END
+' End one trial, go back and repeat until all 1000 trials are complete.
+HISTOGRAM z
+' Produce a histogram of the trial results.
+
+' Note: The file "beerpoll" on the Resampling Stats software disk contains
+' this set of commands.
+
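A minimal Python sketch of the same experiment follows; the array names (drink, sale) and the use of NumPy are ours, not part of the RESAMPLING STATS program.

import numpy as np

rng = np.random.default_rng()

# 52 beer drinkers (1) and 26 non-drinkers (0); 65 in favor (1) and 13 opposed (0).
drink = np.repeat([1, 0], [52, 26])
sale = np.repeat([1, 0], [65, 13])

n_trials = 1000
diffs = np.empty(n_trials)
for i in range(n_trials):
    shuffled_drink = rng.permutation(drink)
    consistent = np.sum(shuffled_drink == sale)   # (1, 1) or (0, 0) pairs
    inconsistent = len(sale) - consistent
    diffs[i] = consistent - inconsistent

# Proportion of random pairings at least as "consistent" as the observed 24.
print('Proportion of trials with difference >= 24:', np.mean(diffs >= 24))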

[Histogram of the 1000 trial results: “Are Drinkers More Likely to Favor Local Option & Vice Versa”; horizontal axis: # consistent responses thru chance draw.]

The actual results showed a difference of 24. In the histogram we see that a difference that large or larger happened just by chance pairing — without any relationship between the two variables — 23% of the time. Hence, we conclude that there is little evidence of a relationship between the two variables.

+

Though the test just described may generally be appropriate for data of this sort, it may well not be appropriate in some particular case. Let’s consider a set of data where even if the test showed that an association existed, we would not believe the test result to be meaningful.

+

Suppose the survey results had been as presented in Table 23-11. We see that non-beer drinkers have a higher rate of approval of allowing beer drinking, which does not accord with experience or reason. Hence, without additional explanation we would not believe that a meaningful relationship exists among these variables even if the test showed one to exist. (Still another reason to doubt that a relationship exists is that the absolute differences are too small — there is only a 6% difference in disapproval between drink and don’t drink groups — to mean anything to anyone. On both grounds, then, it makes sense simply to act as if there were no difference between the two groups and to run no test.)

+

Table 23-11

+

Beer Poll In Which Results Are Not In Accord With Expectation Or Reason

                      % Approve   % Disapprove   Total
Beer Drinkers            71%          29%        100%
Non-Beer Drinkers        77%          23%        100%
+

The lesson to be learned from this is that one should inspect the data carefully before applying a statistical test, and only test for “significance” if the apparent relationships accord with theory, general understanding, and common sense.

+

Example 23-4: Do Athletes Really Have “Slumps”? (Are Successive Events in a Series Independent, or is There a Relationship Between Them?)

+

The important concept of independent events was introduced earlier. Various scientific and statistical decisions depend upon whether or not a series of events is independent. But how does one know whether or not the events are independent? Let us consider a baseball example.

+

Baseball players and their coaches believe that on some days and during some weeks a player will bat better than on other days and during other weeks. And team managers and coaches act on the belief that there are periods in which players do poorly — slumps — by temporarily replacing the player with another after a period of poor performance. The underlying belief is that a series of failures indicates a temporary (or permanent) change in the player’s capacity to play well, and it therefore makes sense to replace him until the evil spirit passes on, either of its own accord or by some change in the player’s style.

+

But even if his hits come randomly, a player will have runs of good luck and runs of bad luck just by chance — just as does a card player. The problem, then, is to determine whether (a) the runs of good and bad batting are merely runs of chance, and the probability of success for each event remains the same throughout the series of events — which would imply that the batter’s ability is the same at all times, and coaches should not take recent performance heavily into account when deciding which players should play; or (b) whether a batter really does have a tendency to do better at some times than at others, which would imply that there is some relationship between the occurrence of success in one trial event and the probability of success in the next trial event, and therefore that it is reasonable to replace players from time to time.

+

Let’s analyze the batting of a player we shall call “Slug.” Here are the results of Slug’s first 100 times at bat during the 1987 season (“H” = hit, “X” = out):

+

X X X X X X H X X H X H H X X X X X X X X H X X X X X H X X X X H H X X X X X H X X H X H X X X H H X X X X X H X H X X X X H H X H H X X X X X X X X X X H X X X H X X H X X H X H X X H X X X H X X X.

+

Now, do Slug’s hits tend to come in bunches? That would be the case if he really did have a tendency to do better at some times than at others. Therefore, let us compare Slug’s results with those of a deck of cards or a set of random numbers that we know has no tendency to do better at some times than at others.

+

During this period of 100 times at bat, Slug has averaged one hit in every four times at bat — a .250 batting average. This average is the same as the chance of one card suit’s coming up. We designate hearts as “hits” and prepare a deck of 100 cards, twenty-five “H”s (hearts, or “hit”) and seventy-five “X”s (other suit, or “out”). Here is the sequence in which the 100 randomly-shuffled cards fell:

+

X X H X X X X H H X X X H H H X X X X X H X X X H X X H X X X X H X H H X X X X X X X X X H X X X X X X H H X X X X X H H H X X X X X X H X H X H X X H X H X X X X X X X X X H X X X X X X X H H H X X.

+

Now we can compare whether or not Slug’s hits are bunched up more than they would be by random chance; we can do so by counting the clusters (also called “runs”) of consecutive hits and outs for Slug and for the cards. Slug had forty-three clusters, which is more than the thirty-seven clusters in the cards; it therefore does not seem that there is a tendency for Slug’s hits to cluster together. (A larger number of clusters indicates a lower tendency to cluster.)

+

Of course, the single trial of 100 cards shown above might have an unusually high or low number of clusters. To be safer, lay out, say, ten trials of 100 cards each, and compare Slug’s number of clusters with the various trials. The proportion of trials with more clusters than Slug’s indicates whether or not Slug’s hits have a tendency to bunch up. (But caution: This proportion cannot be interpreted directly as a probability.)

+

Now the steps:

+

Step 1. Constitute a bucket with 3 slips of paper that say “out” and one that says “hit,” matching Slug’s long-run average. Or use random numbers: “01-25” = hit (H), “26-00” = out (X).

+

Step 2. Sample 100 slips of paper, with replacement, record “hit” or “out” each time, or write a series of “H’s” or “X’s” corresponding to 100 numbers, each selected randomly between 1 and 100.

+

Step 3. Count the number of “clusters,” that is, the number of “runs” of the same event, “H”s or “X”s.

+

Step 4. Compare the outcome in step 3 with Slug’s outcome, 43 clusters. If 43 or fewer, write “yes,” otherwise “no.”

+

Step 5. Repeat steps 2-4 a hundred times.

+

Step 6. Compute the proportion “yes.” This estimates the probability that Slug’s record is not characterized by more “slumps” than would be caused by chance. A very low proportion of “yeses” indicates longer (and hence fewer) “streaks” and “slumps” than would result by chance.

+

In RESAMPLING STATS, we can do this experiment 1000 times.

+ +
' Program file: "sluggo.rss"
+
+REPEAT 1000
+    URN 3#0 1#1 a
+    SAMPLE 100 a b
+    ' Sample 100 "at-bats" from a
+    RUNS b >=1 c
+    ' How many runs (of any length >= 1) are there in the 100 at-bats?
+    SCORE c z
+END
+HISTOGRAM z
+' Note: The file "sluggo" on the Resampling Stats software disk contains
+' this set of commands.
+
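Here is a minimal Python sketch of the same experiment, counting the number of runs in 1000 simulated 100-at-bat seasons. It is our own translation; the run count is computed as one plus the number of places where the outcome changes, which stands in for the RUNS command above.

import numpy as np

rng = np.random.default_rng()

n_trials = 1000
n_runs = np.empty(n_trials, dtype=int)
for i in range(n_trials):
    # 100 at-bats, each a hit (1) with probability .25, an out (0) otherwise.
    at_bats = rng.choice([1, 0], size=100, p=[0.25, 0.75])
    # A new run starts at the first at-bat and wherever the outcome changes.
    n_runs[i] = 1 + np.count_nonzero(np.diff(at_bats))

# How unusual are Slug's 43 runs?
print('Proportion of simulated seasons with 43 or fewer runs:',
      np.mean(n_runs <= 43))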

Examining the histogram, we see that 43 runs is not at all an unusual occurrence:

+

[Histogram of the 1000 trial results: “Runs” in 100 At-Bats; horizontal axis: # “runs” of same outcome.]

The manager wants to look at this matter in a somewhat different fashion, however. He insists that the existence of slumps is proven by the fact that the player sometimes does not get a hit for an abnormally long period of time. One way of testing whether or not the coach is right is by comparing an average player’s longest slump in a 100-at-bat season with the longest run of outs in the first card trial. Assume that Slug is a player picked at random. Then compare Slug’s longest slump — say, 10 outs in a row — with the longest cluster of a single simulated 100-at-bat trial with the cards, 9 outs. This result suggests that Slug’s apparent slump might well have resulted by chance.

+

The estimate can be made more accurate by taking the average longest slump (cluster of outs) in ten simulated 400-at-bat trials. But notice that we do not compare Slug’s slump against the longest slump found in ten such simulated trials. We want to know the longest cluster of outs that would be found under average conditions, and the hand with the longest slump is not average or typical. Determining whether to compare Slug’s slump with the average longest slump or with the longest of the ten longest slumps is a decision of crucial importance. There are no mathematical or logical rules to help you. What is required is hard, clear thinking. Experience can help you think clearly, of course, but these decisions are not easy or obvious even to the most experienced statisticians.

+

The coach may then refer to the protracted slump of one of the twenty-five players on his team to prove that slumps really occur. But, of twenty-five random 100-at-bat trials, one will contain a slump longer than any of the other twenty-four, and that slump will be considerably longer than average. A fair comparison, then, would be between the longest slump of his longest-slumping player, and the longest run of outs found among twenty-five random trials. In fact, the longest run among twenty-five hands of 100 cards was fifteen outs in a row. And, if we had set some of the hands for lower (and higher) batting averages than .250, the longest slump in the cards would have been even longer.

+

Research by Roberts and his students at the University of Chicago shows that in fact slumps do not exist, as I conjectured in the first publication of this material in 1969. (Of course, a batter feels as if he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, sandlot players and professionals alike feel confident — just as gamblers often feel that they’re on a “streak.” But there seems to be no connection between a player’s performance and whether he feels hot or cold, astonishing as that may be.)

+

Averages over longer periods may vary systematically, as Ty Cobb’s annual batting average varied non-randomly from season to season, Roberts found. But short-run analyses of day-to-day and week-to-week individual and team performances in most sports have shown results similar to the outcomes that a lottery-type random-number machine would produce.

+

Remember, too, the study by Gilovich, Vallone, and Tversky of basketball mentioned in Chapter 14. To repeat, their analyses “provided no evidence for a positive correlation between the outcomes of successive shots.” That is, knowing whether a shooter has or has not scored on the previous shot — or in any previous sequence of shots — is useless for predicting whether he will score again.

+

The species Homo sapiens apparently has a powerful propensity to believe that one can find a pattern even when there is no pattern to be found. Two decades ago I cooked up several series of random numbers that looked like weekly prices of publicly-traded stocks. Players in the experiment were told to buy and sell stocks as they chose. Then I repeatedly gave them “another week’s prices,” and allowed them to buy and sell again. The players did all kinds of fancy calculating, using a wild variety of assumptions — although there was no possible way that the figuring could help them.

+

When I stopped the game before completing the 10 buy-and-sell sessions they expected, subjects would ask that the game go on. Then I would tell them that there was no basis to believe that there were patterns in the data, because the “prices” were just randomly-generated numbers. Winning or losing therefore did not depend upon the subjects’ skill. Nevertheless, they demanded that the game not stop until the 10 “weeks” had been played, so they could find out whether they “won” or “lost.”

+

This study of batting illustrates how one can test for independence among various trials. The trials are independent if each observation is randomly chosen with replacement from the universe, in which case there is no reason to believe that one observation will be related to the observations directly before and after; as it is said, “the coin has no memory.”

+

The year-to-year level of Lake Michigan is an example in which observations are not independent. If Lake Michigan is very high in one year, it is likely to be higher than average the following year because some of the high level carries over from one year into the next.3 We could test this hypothesis by writing down whether the level in each year from, say, 1860 to 1975 was higher or lower than the median level for those years. We would then count the number of runs of “higher” and “lower” and compare the number of runs of “black” and “red” with a deck of that many cards; we would find fewer runs in the lake level than in an average hand of 116 (1976-1860) cards, though this test is hardly necessary. (But are the changes in Lake Michigan’s level independent from year to year? If the level went up last year, is there a better than 50-50 chance that the level will also go up this year? The answer to this question is not so obvious. One could compare the numbers of runs of ups and downs against an average hand of cards, just as with the hits and outs in baseball.)

+

Exercise for students: How could one check whether the successive numbers in a random-number table are independent?

+
+
+

29.5 Exercises

+

Solutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.

+

Exercise 23-1

+

Table 23-12 shows voter participation rates in the various states in the 1844 presidential election. Should we conclude that there was a negative relationship between the participation rate (a) and the vote spread (b) between the parties in the election? (Adapted from Noreen (1989, 20, Table 2-4).)

+

Table 23-12

+

Voter Participation In The 1844 Presidential Election

State               Participation (a)   Spread (b)
Maine                     67.5               13
New Hampshire             65.6               19
Vermont                   65.7               18
Massachusetts             59.3               12
Rhode Island              39.8               20
Connecticut               76.1                5
New York                  73.6                1
New Jersey                81.6                1
Pennsylvania              75.5                2
Delaware                  85.0                3
Maryland                  80.3                5
Virginia                  54.5                6
North Carolina            79.1                5
Georgia                   94.0                4
Kentucky                  80.3                8
Tennessee                 89.6                1
Louisiana                 44.7                3
Alabama                   82.7                8
Mississippi               89.7               13
Ohio                      83.6                2
Indiana                   84.9                2
Illinois                  76.3               12
Missouri                  74.7               17
Arkansas                  68.8               26
Michigan                  79.3                6
National Average          74.9                9
+

The observed correlation coefficient between voter participation and spread is -.37398. Is this more negative than what might occur by chance, if no correlation exists?

+

Exercise 23-2

+

We would like to know whether, among major-league baseball players, home runs (per 500 at-bats) and strikeouts (per 500 at-bats) are correlated. We first use the same procedure as used above for I.Q. and athletic ability — multiplying the elements within each pair. (We will later use a more “sophisticated” measure, the correlation coefficient.)

+

The data for 18 randomly-selected players in the 1989 season are as follows, as they would appear in the first lines of the program.

+ +
' Program file: "correlation_causation_08.rss"
+
+NUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32) homeruns
+NUMBERS (135 153 120 161 138 175 126 200 205 147 165 124 169 156 36 98 82 131) strikeout
+' Exercise: Complete this program.
+

Exercise 23-3

+

In the previous example relating strikeouts and home runs, we used the procedure of multiplying the elements within each pair. Now we use a more “sophisticated” measure, the correlation coefficient, which is simply a standardized form of the multiplicands, but sufficiently well known that we calculate it with a pre-set command.

+

Exercise: Write a program that uses the correlation coefficient to test the significance of the association between home runs and strikeouts.

+

Exercise 23-4

+

All other things equal, an increase in a country’s money supply is inflationary and should have a negative impact on the exchange rate for the country’s currency. The data in the following table were computed using data from tables in the 1983/1984 Statistical Yearbook of the United Nations:

+

Table 23-13

+

Money Supply and Exchange Rate Changes

+

Country        % Change      % Change        Country         % Change      % Change
               Exch. Rate    Money Supply                    Exch. Rate    Money Supply
Australia        0.089         0.035         Belgium           0.134         0.003
Botswana         0.351         0.085         Burma             0.064         0.155
Burundi          0.064         0.064         Canada            0.062         0.209
Chile            0.465         0.126         China             0.411         0.555
Costa Rica       0.100         0.100         Cyprus            0.158         0.044
Denmark          0.140         0.351         Ecuador           0.242         0.356
Fiji             0.093         0.000         Finland           0.124         0.164
France           0.149         0.090         Germany           0.156         0.061
Greece           0.302         0.202         Hungary           0.133         0.049
India            0.187         0.184         Indonesia         0.080         0.132
Italy            0.167         0.124         Jamaica           0.504         0.237
Japan            0.081         0.069         Jordan            0.092         0.010
Kenya            0.144         0.141         Korea             0.040         0.006
Kuwait           0.038        -0.180         Lebanon           0.619         0.065
Madagascar       0.337         0.244         Malawi            0.205         0.203
Malaysia         0.037        -0.006         Malta             0.003         0.003
Mauritania       0.180         0.192         Mauritius         0.226         0.136
Mexico           0.338         0.599         Morocco           0.076         0.076
Netherlands      0.158         0.078         New Zealand       0.370         0.098
Nigeria          0.079         0.082         Norway            0.177         0.242
Papua            0.075         0.209         Philippines       0.411         0.035
Portugal         0.288         0.166         Romania          -0.029         0.039
Rwanda           0.059         0.083         Samoa             0.348         0.118
Saudi Arabia     0.023         0.023         Seychelles        0.063         0.031
Singapore        0.024         0.030         Solomon Is        0.101         0.526
Somalia          0.481         0.238         South Africa      0.624         0.412
Spain            0.107         0.086         Sri Lanka         0.051         0.141
Switzerland      0.186         0.186         Tunisia           0.193         0.068
Turkey           0.573         0.181         UK                0.255         0.154
USA              0.000         0.156         Vanatuva          0.008         0.331
Yemen            0.253         0.247         Yugoslavia        0.685         0.432
Zaire            0.343         0.244         Zambia            0.457         0.094
Zimbabwe         0.359         0.164
+

Percentage changes in exchange rates and money supply between 1983 and 1984 for various countries.

+

Are changes in the exchange rates and in money supplies related to each other? That is, are they correlated?

+ +

Exercise: Should the algorithm of non-computer resampling steps be similar to the algorithm for I.Q. and athletic ability shown in the text? One can also work with the correlation coefficient rather than the sum-of-products method, and expect to get the same result.

+
    +
  1. Write a series of non-computer resampling steps to solve this problem.

  2. Write a computer program to implement those steps.
+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/diagrams/basketball_shots.svg b/r-book/diagrams/basketball_shots.svg new file mode 100644 index 00000000..0b0962b1 --- /dev/null +++ b/r-book/diagrams/basketball_shots.svg @@ -0,0 +1,349 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Success=1/3x1/3x1/3=1/27 + 1/3Hit + 1/3Hit + 1/3Hit + 1/3Hit + 1/3Hit + 1/3Hit + 1/3Hit + 2/3Miss + 2/3Miss + 2/3Miss + 2/3Miss + 2/3Miss + 2/3Miss + 2/3Miss + + diff --git a/r-book/diagrams/batch_posterior.svg b/r-book/diagrams/batch_posterior.svg new file mode 100644 index 00000000..b6991a3a --- /dev/null +++ b/r-book/diagrams/batch_posterior.svg @@ -0,0 +1,114 @@ + + + + + + + + + + + + + + + + + + + .2.1002.02.22.42.62.83.03.2 + + + + + + diff --git a/r-book/diagrams/car-tree.png b/r-book/diagrams/car-tree.png new file mode 100644 index 00000000..e4de635c Binary files /dev/null and b/r-book/diagrams/car-tree.png differ diff --git a/r-book/diagrams/commanders_tree.svg b/r-book/diagrams/commanders_tree.svg new file mode 100644 index 00000000..e3b37eba --- /dev/null +++ b/r-book/diagrams/commanders_tree.svg @@ -0,0 +1,114 @@ + + + + + + + + + niceday(P=.7)nastyday(P=.3)Cmdrswin(P=.65)=.455(ProbabilityofnicedayCmdrswin) + and + Cmdrslose(P=.35)Cmdrswin(P=.55)Cmdrslose(P=.45) + + diff --git a/r-book/diagrams/covid-tree.png b/r-book/diagrams/covid-tree.png new file mode 100644 index 00000000..35975f00 Binary files /dev/null and b/r-book/diagrams/covid-tree.png differ diff --git a/r-book/diagrams/drunks_walk.svg b/r-book/diagrams/drunks_walk.svg new file mode 100644 index 00000000..13dbfb67 --- /dev/null +++ b/r-book/diagrams/drunks_walk.svg @@ -0,0 +1,238 @@ + + + +image/svg+xml765432101234567765432101234567765432101234567765432101234567 +xx +Myhouse1W,4SHishouse3E,2N + \ No newline at end of file diff --git a/r-book/diagrams/given_iq_athletic.svg b/r-book/diagrams/given_iq_athletic.svg new file mode 100644 index 00000000..b2cd7d46 --- /dev/null +++ b/r-book/diagrams/given_iq_athletic.svg @@ -0,0 +1,171 @@ + + + + + + + + + xxxxxxxxxx + 120110115105100 + I.Q.Score + 85801009590AthleticScore + + + + + + diff --git a/r-book/diagrams/hypot_iq_athletic_1.svg b/r-book/diagrams/hypot_iq_athletic_1.svg new file mode 100644 index 00000000..9256d7de --- /dev/null +++ b/r-book/diagrams/hypot_iq_athletic_1.svg @@ -0,0 +1,162 @@ + + + + + + + + + xxxxxxxx + 120110100 + I.Q.Score + 758595AthleticScore + + + + + + + + + + diff --git a/r-book/diagrams/hypot_iq_athletic_2.svg b/r-book/diagrams/hypot_iq_athletic_2.svg new file mode 100644 index 00000000..3da307c5 --- /dev/null +++ b/r-book/diagrams/hypot_iq_athletic_2.svg @@ -0,0 +1,178 @@ + + + + + + + + + xxxxxxxxxxxx + 120110100 + I.Q.Score + 758595AthleticScore + + + + + + + + + + diff --git a/r-book/diagrams/liquor_price_plots.svg b/r-book/diagrams/liquor_price_plots.svg new file mode 100644 index 00000000..c4a59cda --- /dev/null +++ b/r-book/diagrams/liquor_price_plots.svg @@ -0,0 +1,632 @@ + + + + + + + + + + + 05350400450500550CentsMean:$4.84PRIVATE + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 05350400450500550CentsMean:$4.35GOVERNMENT + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 05350400450500550CentsPRIVATE+GOVERNMENT + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/r-book/diagrams/mercury_price_indexes.svg 
b/r-book/diagrams/mercury_price_indexes.svg new file mode 100644 index 00000000..77f20056 --- /dev/null +++ b/r-book/diagrams/mercury_price_indexes.svg @@ -0,0 +1,2281 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 02040608010012018501870189019101930195019701990 + Indexforwages + + + + 0502575100150125175 + IndexforCPI + DividedbywagesDividedbyCPI + + diff --git a/r-book/diagrams/mercury_reserves.svg b/r-book/diagrams/mercury_reserves.svg new file mode 100644 index 00000000..45a908c4 --- /dev/null +++ b/r-book/diagrams/mercury_reserves.svg @@ -0,0 +1,251 @@ + + + + + + + + + + + 050,000100,000150,000200,000250,000195019551960196519701975198019851990 + Metrictons + after1979reservebasechangeafter1979reservebasechange + + + + 051015202530354045195019551960196519701975198019851990 + Years + + + + + + + + + + + + + YearYear + + diff --git a/r-book/diagrams/nile_height.svg b/r-book/diagrams/nile_height.svg new file mode 100644 index 00000000..4215f6fd --- /dev/null +++ b/r-book/diagrams/nile_height.svg @@ -0,0 +1,135 @@ + + + + + + + + + + + + Height(cm) + 12501200115011001050810 + A.D. 
+ 820830840850Year + + diff --git a/r-book/diagrams/np_round_function_named.svg b/r-book/diagrams/np_round_function_named.svg new file mode 100644 index 00000000..a3d53fee --- /dev/null +++ b/r-book/diagrams/np_round_function_named.svg @@ -0,0 +1,16 @@ + + + + + +3.1415 + + + Arguments:Return value:Name:roundround togiven decimalplaces3.142a =decimals = + diff --git a/r-book/diagrams/pop_prop_disp.svg b/r-book/diagrams/pop_prop_disp.svg new file mode 100644 index 00000000..d7b920f5 --- /dev/null +++ b/r-book/diagrams/pop_prop_disp.svg @@ -0,0 +1,90 @@ + + + + + + + + + + .51.0PopulationProportion + Errorinaveragesamplein% + + diff --git a/r-book/diagrams/round_function_named.svg b/r-book/diagrams/round_function_named.svg new file mode 100644 index 00000000..ef51c0c6 --- /dev/null +++ b/r-book/diagrams/round_function_named.svg @@ -0,0 +1,16 @@ + + + + + +3.1415 + + + Arguments:Return value:Name:roundround togiven decimalplaces3.142x =digits = + diff --git a/r-book/diagrams/round_function_ndigits_pl.svg b/r-book/diagrams/round_function_ndigits_pl.svg new file mode 100644 index 00000000..b89c3f66 --- /dev/null +++ b/r-book/diagrams/round_function_ndigits_pl.svg @@ -0,0 +1,16 @@ + + + + + +3.1415 + + + Arguments:Return value:Name:roundround togiven decimalplaces3.142 + diff --git a/r-book/diagrams/round_function_pl.svg b/r-book/diagrams/round_function_pl.svg new file mode 100644 index 00000000..2b760992 --- /dev/null +++ b/r-book/diagrams/round_function_pl.svg @@ -0,0 +1,13 @@ + + + + + +3.7Argument:Return value:Name:roundround tonearest integer3 + diff --git a/r-book/diagrams/sample_pl.svg b/r-book/diagrams/sample_pl.svg new file mode 100644 index 00000000..9d2a1432 --- /dev/null +++ b/r-book/diagrams/sample_pl.svg @@ -0,0 +1,380 @@ + + + + + + + + + + + + + + + + [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] + 1 + + + + + + + + + + Arguments: + Example return value: + Name: + sample + + select specified numberof elements at random ... 
+ 8 + diff --git a/r-book/diagrams/ships_gold_silver.svg b/r-book/diagrams/ships_gold_silver.svg new file mode 100644 index 00000000..75169d0d --- /dev/null +++ b/r-book/diagrams/ships_gold_silver.svg @@ -0,0 +1,867 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 5 + + + + 4 + + + + 3 + + + + 1 + + + + 2 + + + + 9 + + + + 8 + + + + 7 + + + + 6G + a + G + a + G + b + G + b + G + c + G + c + SSSSG + a + G + b + G + c + SSSP=/ + 13 + P=/ + 13 + P=/ + 13 + P(G)=.5P(G)=/ + 23 + P(S)=.5P(S)=/ + 13 + GP=?P=?2G1SI + + + IIIII + + + II + + + II + + + IIIII + + + + + + + diff --git a/r-book/diagrams/success_case1.svg b/r-book/diagrams/success_case1.svg new file mode 100644 index 00000000..ee22ad7f --- /dev/null +++ b/r-book/diagrams/success_case1.svg @@ -0,0 +1,239 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + pick#:123 + 512313426 + + + + + diff --git a/r-book/diagrams/success_case2.svg b/r-book/diagrams/success_case2.svg new file mode 100644 index 00000000..751a4481 --- /dev/null +++ b/r-book/diagrams/success_case2.svg @@ -0,0 +1,264 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + pick#:123 + 512223331113426 + + + + + + + diff --git a/r-book/diagrams/success_case3.svg b/r-book/diagrams/success_case3.svg new file mode 100644 index 00000000..fbc4912e --- /dev/null +++ b/r-book/diagrams/success_case3.svg @@ -0,0 +1,239 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + pick#:123 + 512233113426 + + + + + + + diff --git a/r-book/diagrams/success_case4.svg b/r-book/diagrams/success_case4.svg new file mode 100644 index 00000000..27e005d6 --- /dev/null +++ b/r-book/diagrams/success_case4.svg @@ -0,0 +1,660 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + pick#:123 + 512332113426 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + pick#:123 + 512332113426 + + + + + diff --git a/r-book/diagrams/success_case5.svg b/r-book/diagrams/success_case5.svg new file mode 100644 index 00000000..3667495a --- /dev/null +++ b/r-book/diagrams/success_case5.svg @@ -0,0 +1,499 @@ + + + + + + + + pick#:pick#:pick#: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 222255533336661111444 + + + + + + + + + + + + + + + + + + + + + + 513426 + + + + + + + + + + + + + + + + + + + or(and)or(and) + + diff --git a/r-book/diagrams/white_balls_universe.svg b/r-book/diagrams/white_balls_universe.svg new file mode 100644 index 00000000..3df2eb16 --- /dev/null +++ b/r-book/diagrams/white_balls_universe.svg @@ -0,0 +1,178 @@ + + + + + + + + + + + + + + + + + + + + + + + 2468101214161820 + + .14.12.10.08.06.04.020 + Probability + + diff --git a/r-book/exercise_solutions.html b/r-book/exercise_solutions.html new file mode 100644 index 00000000..83d06971 --- /dev/null +++ b/r-book/exercise_solutions.html @@ -0,0 +1,867 @@ + + + + + + + + + +Resampling statistics - 32  Exercise Solutions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

32  Exercise Solutions

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

32.1 Solution 18-2

+ +
URN 36#1 36#0 pit
+URN 77#1 52#0 chi
+REPEAT 1000
+    SAMPLE 72 pit pit$
+    SAMPLE 129 chi chi$
+    MEAN pit$ p
+    MEAN chi$ c
+    SUBTRACT p c d
+    SCORE d scrboard
+END
+HISTOGRAM scrboard
+PERCENTILE scrboard (2.5 97.5) interval
+PRINT interval
+

+

Results:

+

INTERVAL = -0.25921 0.039083 (estimated 95 percent confidence interval).

+
+
+

32.2 Solution 21-1

+ +
REPEAT 1000
+    GENERATE 200  1,100 a
+    COUNT a <= 7 b
+    DIVIDE b 200 c
+    SCORE c scrboard
+END
+HISTOGRAM scrboard
+PERCENTILE scrboard (2.5 97.5) interval
+PRINT interval
+

+

Result:

+

INTERVAL = 0.035 0.105 [estimated 95 percent confidence interval]

+
+
+

32.3 Solution 21-2

+

We use the “bootstrap” technique of drawing many bootstrap re-samples with replacement from the original sample, and observing how the re-sample means are distributed.

+ +
NUMBERS (30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31) a
+
+REPEAT 1000
+    ' Do 1000 trials or simulations
+    SAMPLE 20 a b
+    ' Draw 20 lifetimes from a, randomly and with replacement
+    MEAN b c
+    ' Find the average lifetime of the 20
+    SCORE c scrboard
+    ' Keep score
+END
+
+HISTOGRAM scrboard
+' Graph the experiment results
+
+PERCENTILE scrboard (2.5 97.5) interval
+' Identify the 2.5th and 97.5th percentiles. These percentiles will
+' enclose 95 percent of the resample means.
+
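For comparison, here is a Python sketch of the same bootstrap procedure. It is our own translation, not part of the original solution.

import numpy as np

rng = np.random.default_rng()

lifetimes = np.array([30, 32, 31, 28, 31, 29, 29, 24, 30, 31,
                      28, 28, 32, 31, 24, 23, 31, 27, 27, 31])

n_trials = 1000
means = np.empty(n_trials)
for i in range(n_trials):
    # Draw 20 lifetimes randomly and with replacement, then take the mean.
    resample = rng.choice(lifetimes, size=len(lifetimes), replace=True)
    means[i] = resample.mean()

# The 2.5th and 97.5th percentiles enclose about 95 percent of the resample means.
print('Estimated 95 percent interval:', np.percentile(means, [2.5, 97.5]))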

+

Result:

+

INTERVAL = 27.7 30.05 [estimated 95 percent confidence interval]

+
+
+

32.4 Solution 21-3

+ +
NUMBERS (.02 .026 .023 .017 .022 .019 .018 .018 .017 .022) a
+REPEAT 1000
+    SAMPLE 10 a b
+    MEAN b c
+    SCORE c scrboard
+END
+HISTOGRAM scrboard
+PERCENTILE scrboard (2.5 97.5) interval
+PRINT interval
+

+

Result:

+

INTERVAL = 0.0187 0.0219 [estimated 95 percent confidence interval]

+
+
+

32.5 Solution 23-1

+
    +
  1. Create two groups of paper cards: 25 with participation rates, and 25 with the spread values. Arrange the cards in pairs in accordance with the table, and compute the correlation coefficient between the participation and spread variables.

  2. Shuffle one of the sets, say that with participation, and compute correlation between shuffled participation and spread.

  3. Repeat step 2 many, say 1000, times. Compute the proportion of the trials in which correlation was at least as negative as that for the original data.
+ +
DATA (67.5  65.6  65.7  59.3 39.8  76.1  73.6  81.6  75.5  85.0  80.3
+54.5  79.1  94.0  80.3  89.6  44.7  82.7 89.7  83.6 84.9  76.3  74.7
+68.8  79.3) partic1
+
+DATA (13 19 18 12 20 5 1 1 2 3 5 6 5 4 8 1 3 18 13 2 2 12 17 26 6)
+spread1
+
+CORR partic1 spread1 corr
+
+' compute correlation - it’s -.37
+REPEAT 1000
+    SHUFFLE partic1 partic2
+    ' shuffle the participation rates
+    CORR partic2 spread1 corrtria
+    ' compute re-sampled correlation
+    SCORE corrtria z
+    ' keep the value in the scoreboard
+END
+HISTOGRAM z
+COUNT z <= -.37 n
+' count the trials when result  <= -.37
+DIVIDE n 1000 prob
+' compute the proportion of such trials
+PRINT prob
+
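A Python sketch of the same shuffle test, using the data from Table 23-12, is below; it is our own translation, with np.corrcoef playing the role of the CORR command.

import numpy as np

rng = np.random.default_rng()

partic = np.array([67.5, 65.6, 65.7, 59.3, 39.8, 76.1, 73.6, 81.6, 75.5, 85.0,
                   80.3, 54.5, 79.1, 94.0, 80.3, 89.6, 44.7, 82.7, 89.7, 83.6,
                   84.9, 76.3, 74.7, 68.8, 79.3])
spread = np.array([13, 19, 18, 12, 20, 5, 1, 1, 2, 3, 5, 6, 5, 4, 8,
                   1, 3, 18, 13, 2, 2, 12, 17, 26, 6])

observed_corr = np.corrcoef(partic, spread)[0, 1]   # about -0.37

n_trials = 1000
trial_corrs = np.empty(n_trials)
for i in range(n_trials):
    # Shuffle the participation rates and recompute the correlation.
    trial_corrs[i] = np.corrcoef(rng.permutation(partic), spread)[0, 1]

print('Observed correlation:', observed_corr)
print('Proportion of shuffles with correlation <= observed:',
      np.mean(trial_corrs <= observed_corr))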

Conclusion: The results of 5 Monte Carlo experiments each of a thousand such simulations are as follows:

+

prob = 0.028, 0.045, 0.036, 0.04, 0.025.

+

From this we may conclude that the voter participation rates probably are negatively related to the vote spread in the election. The actual value of the correlation (-.37398) cannot be explained by chance alone. In our Monte Carlo simulation of the null-hypothesis a correlation that negative is found only 3 to 4 percent of the time.

+

Distribution of the test statistic’s value in 1000 independent trials corresponding to the null-hypothesis:

+

+
+
+

32.6 Solution 23-2

+ +
NUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)
+homeruns
+NUMBERS (135 153 120 161 138 175 126 200 205 147 165 124
+169 156 36 98 82 131) strikeout
+MULTIPLY homeruns strikeout r
+SUM r s
+REPEAT 1000
+    SHUFFLE strikeout  strikout2
+    MULTIPLY strikout2 homeruns c
+    SUM c cc
+    SUBTRACT s cc d
+    SCORE d scrboard
+END
+HISTOGRAM scrboard
+COUNT scrboard <= 0 k
+DIVIDE k 1000 kk
+PRINT kk
+
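A Python sketch of the same sum-of-products shuffle test (our own translation of the solution above):

import numpy as np

rng = np.random.default_rng()

homeruns = np.array([14, 20, 0, 38, 9, 38, 22, 31, 33, 11,
                     40, 5, 15, 32, 3, 29, 5, 32])
strikeouts = np.array([135, 153, 120, 161, 138, 175, 126, 200, 205, 147,
                       165, 124, 169, 156, 36, 98, 82, 131])

observed = np.sum(homeruns * strikeouts)

n_trials = 1000
count = 0
for _ in range(n_trials):
    # Shuffle one variable to break the pairing, then recompute the sum.
    if np.sum(homeruns * rng.permutation(strikeouts)) >= observed:
        count += 1

print('Proportion of shuffles with sum of products >= observed:',
      count / n_trials)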

+

Result: kk = 0

+

Interpretation: In 1000 simulations, random shuffling never produced a value as high as observed. Therefore, we conclude that random chance could not be responsible for the observed degree of correlation.

+
+
+

32.7 Solution 23-3

+ +
NUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)
+homeruns
+NUMBERS (135 153 120 161 138 175 126 200 205 147 165 124
+169 156 36 98 82 131) strikeou
+CORR homeruns strikeou r
+REPEAT 1000
+    SHUFFLE strikeou  strikou2
+    CORR strikou2 homeruns r$
+    SCORE r$ scrboard
+END
+HISTOGRAM scrboard
+COUNT scrboard >=0.62 k
+DIVIDE k 1000 kk
+PRINT kk r
+

+

Result: kk = .001

+

Interpretation: A correlation coefficient as high as the observed value (.62) occurred only 1 out of 1000 times by chance. Hence, we rule out chance as an explanation for such a high value of the correlation coefficient.

+
+
+

32.8 Solution 23-4

+ +
+READ FILE "noreen2.dat" exrate msuppl
+' read data from file
+CORR exrate msuppl stat
+' compute correlation stat (it’s .419)
+REPEAT 1000
+    SHUFFLE msuppl msuppl$
+    ' shuffle money supply values
+    CORR exrate msuppl$  stat$
+    ' compute correlation
+    SCORE stat$ scrboard
+    ' keep the value in a scoreboard
+END
+PRINT stat
+HISTOGRAM scrboard
+COUNT scrboard >=0.419 k
+DIVIDE k 1000 prob
+PRINT prob
+

Distribution of the correlation after permutation of the data:

+

+

Result: prob = .001

+

Interpretation: The observed correlation (.419) between the exchange rate and the money supply is seldom exceeded by random experiments with these data. Thus, the observed result 0.419 cannot be explained by chance alone and we conclude that it is statistically significant.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/framing_questions.html b/r-book/framing_questions.html new file mode 100644 index 00000000..a43f978c --- /dev/null +++ b/r-book/framing_questions.html @@ -0,0 +1,784 @@ + + + + + + + + + +Resampling statistics - 20  Framing Statistical Questions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

20  Framing Statistical Questions

+
+ + + +
+ + + + +
+ + +
+ +
+

20.1 Introduction

+

Chapter 3 - Chapter 15 discussed problems in probability theory. That is, we have been estimating the probability of a composite event resulting from a system in which we know the probabilities of the simple events — the “parameters” of the situation.

+

Then Chapter 17 - Chapter 19 discussed the underlying philosophy of statistical inference.

+

Now we turn to inferential-statistical problems. Up until now, we have been estimating the complex probabilities of known universes — the topic of probability . Now as we turn to problems in statistics , we seek to learn the characteristics of an unknown system — the basic probabilities of its simple events and parameters. (Here we note again, however, that in the process of dealing with them, all statistical-inferential problems eventually are converted into problems of pure probability). To assess the characteristics of the system in such problems, we employ the characteristics of the sample(s) that have been drawn from it.

+

For further discussion on the distinction between inferential statistics and probability theory, see Chapter 2 - Chapter 3.

+

This chapter begins the topic of hypothesis testing . The issue is: whether to adjudge that a particular sample (or samples) come(s) from a particular universe. A two-outcome yes-no universe is discussed first. Then we move on to “measured-data” universes, which are more complex than yes-no outcomes because the variables can take on many values, and because we ask somewhat more complex questions about the relationships of the samples to the universes. This topic is continued in subsequent chapters.

+

In a typical hypothesis-testing problem presented in this chapter, one sample of hospital patients is treated with a new drug and a second sample is not treated but rather given a “placebo.” After obtaining results from the samples, the “null” or “test” or “benchmark” hypothesis would be that the resulting drug and placebo samples are drawn from the same universe. This device of the null hypothesis is the equivalent of stating that the drug had no effect on the patients. It is a special intellectual strategy developed to handle such statistical questions.

+

We start with the scientific question: Does the medicine have an effect? We then translate it into a testable statistical question: How likely is it that the sample means come from the same universe? This process of question-translation is the crucial step in hypothesis-testing and inferential statistics. The chapter then explains how to solve these problems using resampling methods after you have formulated the proper statistical question.

+

Though the examples in the chapter mostly focus on tests of hypotheses, the procedures also apply to confidence intervals, which will be discussed later.

+
+
+

20.2 Translating scientific questions into probabilistic and statistical questions

+

The first step in using probability and statistics is to translate the scientific question into a statistical question. Once you know exactly which prob-stats question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy (though subtle). The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms.

+

Though this translation is difficult, it involves no mathematics. Rather, this step requires only hard thought. You cannot beg off by saying, “I have no brain for math!” The need is for a brain that will do clear thinking, rather than a brain especially talented in mathematics. A person who uses conventional methods can avoid this hard thinking by simply grabbing the formula for some test without understanding why s/he chooses that test. But resampling pushes you to do this thinking explicitly.

+

This crucial process of translating from a pre-statistical question to a statistical question takes place in all statistical inference. But its nature comes out most sharply with respect to testing hypotheses, so most of what will be said about it will be in that context.

+
+
+

20.3 The three types of questions

+

Let’s consider the natures of conceptual, operational, and statistical questions.

+
+

20.3.1 The Scientific Question

+

A study for either scientific or decision-making purposes properly begins with a general question about the nature of the world — that is, a conceptual or theoretical question. One must then transform this question into an operational-empirical form that one can study scientifically. Thence comes the translation into a technical-statistical question.

+

The scientific-conceptual-theoretical question can be an issue of theory, or a policy choice, or the result of curiosity at large.

+

Examples include: Can a bioengineer increase the chance of female calves being born? Is copper becoming less scarce? Are the prices of liquor systematically different in states where the liquor stores are publicly owned compared to states where they are privately owned? Does a new formulation of pig rations lead to faster hog growth? Was the rate of unemployment higher last month than the long-run average, or was the higher figure likely to be the result of sampling error? What are the margins of probable error for an unemployment survey?

+
+
+

20.3.2 The Operational-Empirical Question

+

The operational-empirical question is framed in measurable quantities in a meaningful design. Examples include: How likely is this state of affairs (say, the new pig-food formulation) to cause an event such as was observed (say, the observed increase in hog growth)? How likely is it that the mean unemployment rate of a sample taken from the universe of interest (say, the labor force, with an unemployment rate of 10 percent) will be between 11 percent and 12 percent? What is the probability of getting three girls in the first four children if the probability of a girl is .48? How unlikely is it to get nine females out of ten calves in an experiment on your farm? Did the price of copper fall between 1800 and the present? These questions are in the form of empirical questions, which have already been transformed by operationalizing from scientific-conceptual questions.

+
+
+

20.3.3 The Statistical Question

+

At this point one must decide whether the conceptual-scientific question is of one of the following two forms:

  1. A test about whether some sample will frequently happen by chance rather than being very surprising — a test of the “significance” of a hypothesis. Such hypothesis testing takes the following form: How likely is a given “universe” to produce some sample like x? This leads to interpretation about: How likely is a given universe to be the cause of this observed sample?

  2. A question about the accuracy of the estimate of a parameter of the population based upon sample evidence (an inquiry about “confidence intervals”). This sort of question is considered by some (but not by me) to be a question in estimation — that is, one’s best guess about (say) the magnitude and probable error of the mean or median of a population. This is the form of a question about confidence limits — how likely is the mean to be between x and y?
+

Notice that the statistical question is framed as a question in probability.

+
+
+
+

20.4 Illustrative translations

+

The best way to explain how to translate a scientific question into a statistical question is to illustrate the process.

+
+

20.4.1 Illustration A — beliefs about smoking

+

Were doctors’ beliefs as of 1964 about the harmfulness of cigarette smoking (and doctors’ own smoking behavior) affected by the social groups among whom the doctors live (Simon 1967)? That was the theoretical question. We decided to define the doctors’ reference groups as the states in which they live, because data about doctors and smoking were available state by state (Modern Medicine, 1964). We could then translate this question into an operational and testable scientific hypothesis by asking this question: Do doctors in tobacco-economy states differ from doctors in other states in their smoking, and in their beliefs about smoking?

+

Which numbers would help us answer this question, and how do we interpret those numbers? We now were ready to ask the statistical question: Do doctors in tobacco-economy states “belong to the same universe” (with respect to smoking) as do other doctors? That is, do doctors in tobacco-economy states have the same characteristics — at least, those characteristics we are interested in, smoking in this case — as do other doctors? Later we shall see that the way to proceed is to consider the statistical hypothesis that these doctors do indeed belong to that same universe; that hypothesis and the universe will be called “benchmark hypothesis” and “benchmark universe” respectively — or in more conventional usage, the “null hypothesis.”

+

If the tobacco-economy doctors do indeed belong to the benchmark universe — that is, if the benchmark hypothesis is correct — then there is a 49/50 chance that doctors in some state other than the state in which tobacco is most important will have the highest rate of cigarette smoking. But in fact we observe that the state in which tobacco accounts for the largest proportion of the state’s income — North Carolina — had (as of 1964) a higher proportion of doctors who smoked than any other state. (Furthermore, a lower proportion of doctors in North Carolina than in any other state said that they believed that smoking is a health hazard.)

+

Of course, it is possible that it was just chance that North Carolina doctors smoked most, but the chance is only 1 in 50 if the benchmark hypothesis is correct. Obviously, some state had to have the highest rate, and the chance for any other state was also 1 in 50. But, because our original scientific hypothesis was that North Carolina doctors’ smoking rate would be highest, and we then observed that it was highest even though the chance was only 1 in 50, the observation became interesting and meaningful to us. It means that the chances are strong that there was a connection between the importance of tobacco in the economy of a state and the rate of cigarette smoking among doctors living there (as of 1964).

+

To consider this problem from another direction, it would be rare for North Carolina to have the highest smoking rate for doctors if there were no special reason for it; in fact, it would occur only once in fifty times. But, if there were a special reason — and we hypothesize that the tobacco economy provides the reason — then it would not seem unusual or rare for North Carolina to have the highest rate; therefore we choose to believe in the not-so-unusual phenomenon, that the tobacco economy caused doctors to smoke cigarettes.

+

Like many (most? all?) actual situations, the cigarettes and doctors’ smoking issue is a rather messy business. Did I have a clear-cut, theoretically-derived prediction before I began? Maybe I did a bit of “data dredging” — that is, maybe I started with a vague expectation, and only arrived at my sharp hypothesis after I saw the data. This would weaken the probabilistic interpretation of the test of significance — but this is something that a scientific investigator does not like to do because it weakens his/her claim for attention and chance of publication. On the other hand, if one were a Bayesian, one could claim that one had a prior probability that the observed effect would occur, and the observed data strengthens that prior; but this procedure would not seem proper to many other investigators. The only wholly satisfactory conclusion is to obtain more data — but as of 1993, there does not seem to have been another data set collected since 1964, and collecting a set by myself is not feasible.

+

This clearly is a case of statistical inference that one could argue about, though perhaps it is true that all cases where the data are sufficiently ambiguous as to require a test of significance are also sufficiently ambiguous that they are properly subject to argument.

+

For some decades the hypothetico-deductive framework was the leading point of view in empirical science. It insisted that the empirical and statistical investigation should be preceded by theory, and only propositions suggested by the theory should be tested. Investigators were not supposed to go back and forth from data to theory to testing. It is now clear that this is an ivory-tower irrelevance, and no one lived by the hypothetico-deductive strictures anyway — just pretended to. Furthermore, there is no sound reason to feel constrained by it, though it strengthens your conclusions if you had theoretical reason in advance to expect the finding you obtained.

+
+
+

20.4.2 Illustration B — is it a cure?

+

Does medicine CCC cure some particular cancer? That’s the scientific question. So you give the medicine to six patients who have the cancer and you do not give it to six similar patients who have the cancer. Your sample contains only twelve people because it is not feasible for you to obtain a larger sample. Five of six “medicine” patients get well, two of six “no medicine” patients get well. Does the medicine cure the cancer? That is, if future cancer patients take the medicine, will their rate of recovery be higher than if they did not take the medicine?

+

One way to translate the scientific question into a statistical question is to ask: Do the “medicine” patients belong to the same universe as the “no medicine” patients? That is, we ask whether “medicine” patients still have the same chances of getting well from the cancer as do the “no medicine” patients, or whether the medicine has bettered the chances of those who took it and thus removed them from the original universe, with its original chances of getting well. The original universe, to which the “no medicine” patients must still belong, is the benchmark universe. Shortly we shall see that we proceed by comparing the observed results against the benchmark hypothesis that the “medicine” patients still belong to the benchmark universe — that is, they still have the same chance of getting well as the “no medicine” patients.

+

We want to know whether or not the medicine does any good. This question is the same as asking whether patients who take medicine are still in the same population (universe) as “no medicine” patients, or whether they now belong to a different population in which patients have higher chances of getting well. To recapitulate our translations, we move from asking: Does the medicine cure the cancer? to, Do “medicine” patients have the same chance of getting well as “no medicine” patients?; and finally, to: Do “medicine” patients belong to the same universe (population) as “no medicine” patients? Remember that “population” in this sense does not refer to the population at large, but rather to a group of cancer sufferers (perhaps an infinitely large group) who have given chances of getting well, on the average. Groups with different chances of getting well are called “different populations” (universes). Shortly we shall see how to answer this statistical question. We must keep in mind that our ultimate concern in cases like this one is to predict future results of the medicine, that is, to predict whether use of the medicine will lead to a higher recovery rate than would be observed without the medicine.

+
+
+

20.4.3 Illustration C — a better method for teaching reading

+

Is method Alpha a better method of teaching reading than method Beta? That is, will method Alpha produce a higher average reading score in the future than will method Beta? Twenty children taught to read with method Alpha have an average reading score of 79, whereas children taught with method Beta have an average score of 84. To translate this scientific question into a statistical question we ask: Do children taught with method Alpha come from the same universe (population) as children taught with method Beta? Again, “universe” (population) does not mean the town or social group the children come from, and indeed the experiment will make sense only if the children do come from the same population, in that sense of “population.” What we want to know is whether or not the children belong to the same statistical population (universe), defined according to their reading ability, after they have studied with method Alpha or method Beta.

+
+
+

20.4.4 Illustration D — better fertilizer

+

If one plot of ground is treated with fertilizer, and another similar plot is not treated, the benchmark (null) hypothesis is that the corn raised on the treated plot is no different than the corn raised on the untreated plot — that is, that the corn from the treated plot comes from (“belongs to”) the same universe as the corn from the untreated plot. If our statistical test makes it seem very unlikely that a universe like that from which the untreated-plot corn comes would also produce corn such as came from the treated plot, then we are willing to believe that the fertilizer has an effect. For a psychological example, substitute the words “group of children” for “plot,” “special training” for “fertilizer,” and “I.Q. score” for “corn.”

+

There is nothing sacred about the benchmark (null) hypothesis of “no difference.” You could just as well test the benchmark hypothesis that the corn comes from a universe that averages 110 bushels per acre, if you have reason to be especially interested in knowing whether or not the fertilizer produces more than 110 bushels per acre. But in many cases it is reasonable to test the probability that a sample comes from the population that does not receive the special treatment of medicine, fertilizer, or training.

+
+
+
+

20.5 Generalizing from sample to universe

+

So far we have discussed the scientific question and the statistical question. Remember that there is always a generalization question, too: Do the statistical results from this particular sample of, say, rats apply to a universe of humans? This question can be answered only with wisdom, common sense, and general knowledge, and not with probability statistics.

+

Translating from a scientific question into a statistical question is mostly a matter of asking the probability that some given benchmark universe (population) will produce one or more observed samples. Notice that we must (at least for general scientific testing purposes) ask about a given universe whose composition we assume to be known , rather than about a range of universes, or about a universe whose properties are unknown. In fact, there is really only one question that probability statistics can answer: Given some particular benchmark universe of some stated composition, what is the probability that an observed sample would come from it? (Please notice the subtle but all-important difference between the words “would come” in the previous sentence, and the word “came.”) A variation of this question is: Given two (or more) samples, what is the probability that they would come from the same universe — that is, that the same universe would produce both of them? In this latter case, the relevant benchmark universe is implicitly the universe whose composition is the two samples combined.

+

The necessity for stating the characteristics of the universe in question becomes obvious when you think about it for a moment. Probability-statistical testing adds up to comparing a sample with a particular benchmark universe, and asking whether there probably is a difference between the sample and the universe. To carry out this comparison, we ask how likely it is that the benchmark universe would produce a sample like the observed sample.

+ +

But in order to find out whether or not a universe could produce a given sample, we must ask whether or not some particular universe — with stated characteristics — could produce the sample. There is no doubt that some universe could produce the sample by a random process; in fact, some universe did. The only sensible question, then, is whether or not a particular universe, with stated (or known) characteristics, is likely to produce such a sample. In the case of the medicine, the universe with which we compare the sample who took the medicine is the benchmark universe to which that sample would belong if the medicine had had no effect. This comparison leads to the benchmark (null) hypothesis that the sample comes from a population in which the medicine (or other experimental treatment) seems to have no effect . It is to avoid confusion inherent in the term “null hypothesis” that I replace it with the term “benchmark hypothesis.”

+

The concept of the benchmark (null) hypothesis is not easy to grasp. The best way to learn its meaning is to see how it is used in practice. For example, we say we are willing to believe that the medicine has an effect if it seems very unlikely from the number who get well that the patients given the medicine still belong to the same benchmark universe as the patients given no medicine at all — that is, if the benchmark hypothesis is unlikely.

+
+
+

20.6 The steps in statistical inference

+

These are the steps in conducting statistical inference

+
  • Step 1. Frame a question in the form of: What is the chance of getting the observed sample x from some specified population X? For example, what is the probability of getting a sample of nine females and one male from a population where the probability of getting a single female is .48?

  • Step 2. Reframe the question in the form of: What kinds of samples does population X produce, with which probabilities? That is, what is the probability of the observed sample x (9 females in 10 calves), given that a population is X (composed of 48 percent females)? Or in notation, what is \(P(x | X)\)? (A small simulation sketch for this example follows the list below.)

  • Step 3. Actually investigate the behavior of X with respect to x and other samples. This can be done in two ways:

    1. Use the calculus of probability (the formulaic method), perhaps resorting to the Monte Carlo method if an appropriate formula does not exist. Or

    2. Resampling (in the larger sense), which equals the Monte Carlo method minus its use for approximations, investigation of complex functions in statistics and other theoretical mathematics, and non-resampling uses elsewhere in science. Resampling in the more restricted sense includes bootstrap, permutation, and other non-parametric methods. More about the resampling procedure follows in the paragraphs to come, and then in later chapters in the book.

  • Step 4. Interpret the probabilities that result from step 3 in terms of acceptance or rejection of hypotheses, surety of conclusions, and as inputs to decision theory.1
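To make Steps 1 and 2 concrete, here is a small simulation sketch in R (the choice of R, the variable names, and the 10,000-trial count are ours, purely for illustration). It estimates the probability that a universe in which each calf has a .48 chance of being female would produce nine or more females in ten calves; whether you count “exactly nine” or “nine or more” depends on how you framed the question in Step 1.

# Estimate P(x | X) for the calves example by simulation (illustrative sketch).
# X: a universe in which each calf is female with probability 0.48.
# x: a sample of 10 calves containing 9 (or more) females.
n_trials <- 10000
n_females <- numeric(n_trials)
for (i in 1:n_trials) {
    # One trial: 10 calves, each female with probability 0.48.
    calves <- sample(c('female', 'male'), size=10, prob=c(0.48, 0.52), replace=TRUE)
    n_females[i] <- sum(calves == 'female')
}
# Proportion of trials giving 9 or more females out of 10.
mean(n_females >= 9)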

The following short definition of statistical inference summarizes the previous four steps:

+
+

Statistical inference equals the selection of a probabilistic model to resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.

+
+

Stating the steps to be followed in a procedure is an operational definition of the procedure. My belief in the clarifying power of this device (the operational definition) is embodied in the set of steps given in Chapter 15 for the various aspects of statistical inference. A canonical question-and-answer procedure for testing hypotheses will be found in Chapter 25, and one for confidence intervals will be found in Chapter 26.

+
+
+

20.7 Summary

+

We define resampling to include problems in inferential statistics as well as problems in probability as follows: Using the entire set of data you have in hand, or using the given data-generating mechanism (such as a die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples. That’s it in a nutshell. In some cases, it may also be appropriate to amplify this procedure with additional assumptions.

+

Problems in pure probability may at first seem different in nature than problems in statistical inference. But the same logic as stated in this definition applies to both varieties of problems. The difference is that in probability problems the “model” is known in advance — say, the model implicit in a deck of poker cards plus a game’s rules for dealing and counting the results — rather than the model being assumed to be best estimated by the observed data, as in resampling statistics.

+

The hardest job in using probability statistics, and the most important, is to translate the scientific question into a form to which statistics can give a sensible answer. You must translate scientific questions into the appropriate form for statistical operations, so that you know which operations to perform. This is the part of the job that requires hard, clear thinking — though it is non-mathematical thinking — and it is the part that someone else usually cannot easily do for you.

+

Once you know exactly which probability-statistical question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy. The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms. Though this step is hard, it involves no mathematics. This step requires only hard, clear thinking. You cannot beg off by saying “I have no brain for math!” To flub this step is to admit that you have no brain for clear thinking, rather than no brain for mathematics.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/how_big_sample.html b/r-book/how_big_sample.html new file mode 100644 index 00000000..7dd0b954 --- /dev/null +++ b/r-book/how_big_sample.html @@ -0,0 +1,2097 @@ + + + + + + + + + +Resampling statistics - 30  How Large a Sample? + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

30  How Large a Sample?

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+
+

30.1 Issues in determining sample size

+

Sometime in the course of almost every study — preferably early in the planning stage — the researcher must decide how large a sample to take. Deciding the size of sample to take is likely to puzzle and distress you at the beginning of your research career. You have to decide somehow, but there are no simple, obvious guides for the decision.

+

For example, one of the first studies I worked on was a study of library economics (Fussler and Simon 1961), which required taking a sample of the books from the library’s collections. Sampling was expensive, and we wanted to take a correctly sized sample. But how large should the sample be? The longer we searched the literature, and the more people we asked, the more frustrated we got because there just did not seem to be a clear-cut answer. Eventually we found out that, even though there are some fairly rational ways of fixing the sample size, most sample sizes in most studies are fixed simply (and irrationally) by the amount of money that is available or by the sample size that similar research has used in the past.

+

The rational way to choose a sample size is by weighing the benefits you can expect in information against the cost of increasing the sample size. In principle you should continue to increase the sample size until the benefit and cost of an additional sampled unit are equal.1

+

The benefit of additional information is not easy to estimate even in applied research, and it is extraordinarily difficult to estimate in basic research. Therefore, it has been the practice of researchers to set up target goals of the degree of accuracy they wish to achieve, or to consider various degrees of accuracy that might be achieved with various sample sizes, and then to balance the degree of accuracy with the cost of achieving that accuracy. The bulk of this chapter is devoted to learning how the sample size is related to accuracy in simple situations.

+

In complex situations, however, and even in simple situations for beginners, you are likely to feel frustrated by the difficulties of relating accuracy to sample size, in which case you cry out to a supervisor, “Don’t give me complicated methods, just give me a rough number based on your greatest experience.” My inclination is to reply to you, “Sometimes life is hard and there is no shortcut.” On the other hand, perhaps you can get more information than misinformation out of knowing sample sizes that have been used in other studies. Table 24-1 shows the middle (modal), 25th percentile, and 75th percentile scores for — please keep this in mind — National Opinion Surveys in the top panel. The bottom panel shows how subgroup analyses affect sample size.

+

Pretest sample sizes are smaller, of course, perhaps 25-100 observations. Samples in research for Master’s and Ph.D. theses are likely to be closer to a pretest than to national samples.

+

Table 24-1

+

Most Common Sample Sizes Used for National and Regional Studies By Subject Matter

+

                           National               Regional
Subject Matter             Mode    Q3     Q1      Mode    Q3     Q1
Financial                  1000+                  100     400    50
Medical                    1000+   1000+  500     1000+   1000+  250
Other Behavior             1000+                  700     1000   300
Attitudes                  1000+   1000+  500     700     1000   400
Laboratory Experiments                            100     200    50
+

Typical Sample Sizes for Studies of Human and Institutional Populations

+

                       People or Households        Institutions
Subgroup Analyses      National      Special       National     Special
None or few            1000-1500     200-500       200-500      50-200
Average                1500-2500     500-1000      500-1000     200-500
Many                   2500+         1000+         1000+        500+
+

SOURCE: From Applied Sampling, by Seymour Sudman (1976, 86-87), copyright Academic Press, reprinted by permission.

+

Once again, the sample size ought to depend on the proportions of the sample that have the characteristics you are interested in, the extent to which you want to learn about subgroups as well as the universe as a whole, and of course the purpose of your study, the value of the information, and the cost. Also, keep in mind that the added information that you obtain from an additional sample observation tends to be smaller as the sample size gets larger. You must quadruple the sample to halve the error.

+

Now let us consider some specific cases. The first examples taken up here are from the descriptive type of study, and the latter deal with sample sizes in relationship research.

+
+
+

30.2 Some practical examples

+

Example 24-1

+

What proportion of the homes in Countryville are tuned into television station WCNT’s ten o’clock news program? That is the question your telephone survey aims to answer, and you want to know how many randomly selected homes you must telephone to obtain a sufficiently large sample.

+

Begin by guessing the likeliest answer, say 30 percent in this case. Do not worry if you are off by 5 per cent or even 10 per cent; and you will probably not be further off than that. Select a first-approximation sample size of perhaps 400; this number is selected from my general experience, but it is just a starting point. Then proceed through the first 400 numbers in the random-number table, marking down a yes for numbers 1-3 and no for numbers 4-10 (because 3/10 was your estimate of the proportion listening). Then add the number of yeses and noes. Carry out perhaps ten sets of such trials, the results of which are in Table 24-2.

+

Table 24-2

+

Trial    Number “Yes”    Number “No”    % Difference from Expected Mean of 30% (120 “Yes”)
1        115             285            1.25
2        119             281            0.25
3        116             284            1.00
4        114             286            1.50
5        107             293            3.25
6        116             284            1.00
7        132             268            3.00
8        123             277            0.75
9        121             279            0.25
10       114             286            1.50
Mean                                    1.37
+

Based on these ten trials, you can estimate that if you take a sample of 400 and if the “real” viewing level is 30 percent, your average percentage error will be 1.375 percent on either side of 30 percent. That is, with a sample of 400, half the time your error will be greater than 1.375 percent if 3/10 of the universe is listening.

+

Now you must decide whether the estimated error is small enough for your needs. If you want greater accuracy than a sample of 400 will give you, increase the sample size, using this important rule of thumb: To cut the error in half, you must quadruple the sample size. In other words, if you want a sample that will give you an error of only 0.55 percent on the average, you must increase the sample size to 1,600 interviews. Similarly, if you cut the sample size to 100, the average error will be only 2.75 percent (double 1.375 percent) on either side of 30 percent. If you distrust this rule of thumb, run ten or so trials on sample sizes of 100 or 1,600, and see what error you can expect to obtain on the average.

+

If the “real” viewership is 20 percent or 40 percent, instead of 30 percent, the accuracy you will obtain from a sample size of 400 will not be very different from what you would get with an “actual” viewership of 30 percent, so do not worry about that too much, as long as you are in the right general vicinity.

+

Accuracy is slightly greater in smaller universes but only slightly. For example, a sample of 400 would give perfect accuracy if Countryville had only 400 residents. And a sample of 400 will give slightly greater accuracy for a town of 800 residents than for a city of 80,000 residents. But, beyond the point at which the sample is a large fraction of the total universe, there is no difference in accuracy with increases in the size of universe. This point is very important. For any given level of accuracy, identical sample sizes give the same level of accuracy for Podunk (population 8,000) or New York City (population 8 million). The ratio of the sample size to the population of Podunk or New York City means nothing at all, even though it intuitively seems to be important.

+

The size of the sample must depend upon which population or subpopulations you wish to describe. For example, Alfred Kinsey’s sample size for the classic “Sexual Behavior in the Human Male” (1948) would have seemed large, by customary practice, for generalizations about the United States population as a whole. But, as Kinsey explains: “… the chief concern of the present study is an understanding of the sexual behavior of each segment of the population, and that it is only secondarily concerned with generalization for the population as a whole.” (1948, 82, italics added). Therefore Kinsey’s sample had to include subsamples large enough to obtain the desired accuracy in each of these sub-universes. The U.S. Census offers a similar illustration. When the U.S. Bureau of the Census aims to estimate only a total or an average for the United States as a whole — as, for example, in the Current Population Survey estimate of unemployment — a sample of perhaps 50,000 is big enough. But the decennial census aims to make estimates for all the various communities in the country, estimates that require adequate subsamples in each of these sub-universes; such is the justification for the decennial census’ sample size of so many millions. Television ratings illustrate both types of purpose. Nielsen ratings, for example, are sold primarily to national network advertisers. These advertisers on national television networks usually sell their goods all across the country and are therefore interested primarily in the total United States viewership for a program, rather than in the viewership in various demographic subgroups. The appropriate calculations for Nielsen sample size will therefore refer to the total United States sample. But other organizations sell rating services to local television and radio stations for use in soliciting advertising over the local stations rather than over the network as a whole. Each local sample must then be large enough to provide reasonable accuracy, and, considered as a whole, the samples for the local stations therefore add up to a much larger sample than the Nielsen and other nationwide samples.

+

The problem may be handled with the following R program. This program represents viewers with the string 'viewer' and non-viewers with 'not viewer'. It then asks sample to choose randomly between 'viewer' and 'not viewer' with a 30% (p=0.3) chance of getting a 'viewer' and a 70% chance of getting a 'not viewer'. It gets a sample of 400 such labels, counts (with sum) the 'viewer' labels, and then finds how much this sample diverges from the expected number of viewers (30% of 400 = 120). It repeats this procedure 10,000 times, and then calculates the average divergence.

+
+

Start of viewer_numbers notebook

+ + +
+
# set the number of trials
+n_trials <- 10000
+
+# an empty array to store the scores
+scores <- numeric(n_trials)
+
+# What are the options to choose from?
+options <- c('viewer', 'not viewer')
+
+# do n_trials trials
+for (i in 1:n_trials) {
+
+    # Choose 'viewer' 30% of the time.
+    a <- sample(options, size=400, prob=c(0.3, 0.7), replace=TRUE)
+
+    # count the viewers
+    b <- sum(a == 'viewer')
+
+    # how different from expected?
+    c <- 120 - b
+
+    # absolute value of the difference
+    d <- abs(c)
+
+    # express as a proportion of sample
+    e <- d / 400
+
+    # keep score of the result
+    scores[i] <- e
+}
+
+# find the mean divergence
+k <- mean(scores)
+
+# Show the result
+k
+
+
[1] 0.0182
+
+
+ +

End of viewer_numbers notebook

+
+

It is a simple matter to go back and try a sample size of (say) 1600 rather than 400, and examine the effect on the mean difference.
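For example, only two numbers in the program depend on the sample size, so a variant for 1,600 households might look like the sketch below (our own variant, not part of the original notebook). Because the error of a sample proportion shrinks with the square root of the sample size, we would expect the mean divergence to come out at roughly half the value found for 400 households.

# Same simulation as above, but with a sample of 1,600 households
# (an illustrative variant; 30% of 1,600 = 480 expected viewers).
n_trials <- 10000
scores <- numeric(n_trials)
options <- c('viewer', 'not viewer')
for (i in 1:n_trials) {
    a <- sample(options, size=1600, prob=c(0.3, 0.7), replace=TRUE)
    b <- sum(a == 'viewer')
    # Absolute divergence from the expected 480 viewers, as a proportion.
    scores[i] <- abs(480 - b) / 1600
}
mean(scores)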

+

Example 24-2

+

This example, like Example 24-1, illustrates the choice of sample size for estimating a summarization statistic. Later examples deal with sample sizes for probability statistics.

+

Hark back to the pig-ration problems presented earlier, and consider the following set of pig weight-gains recorded for ration A: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 30. Assume that our purpose now is to estimate the average weight gain for ration A, so that the feed company can advertise to farmers how much weight gain to expect from ration A. If the universe is made up of pig weight-gains like those we observed, we can simulate the universe with, say, 1 million weight gains of thirty-one pounds, 1 million of thirty-four pounds, and so on for the twelve observed weight gains. Or, more conveniently, as accuracy will not be affected much, we can make up a universe of, say, thirty cards for each thirty-one-pound gain, thirty cards for each thirty-four-pound gain, and so forth, yielding a deck of 30 x 12 = 360 cards. Then shuffle, and, just for a starting point, try sample sizes of twelve pigs. The means of the samples for twenty such trials are as in Table 24-3.

+

Now ask yourself whether a sample size of twelve pigs gives you enough accuracy. There is a .5 chance that the mean for the sample will be more than .65 or .92 pound (the two median deviations) or (say) .785 pound (the midpoint of the two medians) from the mean of the universe that generates such samples, which in this situation is 31.75 pounds. Is this close enough? That is up to you to decide in light of the purposes for which you are running the experiment. (The logic of the inference you make here is inevitably murky, and use of the term “real mean” can make it even murkier, as is seen in the discussion in Chapters 20-22 on confidence intervals.)

+

To see how accuracy is affected by larger samples, try a sample size of forty-eight “pigs” dealt from the same deck. (But, if the sample size were to be much larger than forty-eight, you might need a “universe” greater than 360 cards.) The results of twenty trials are in Table 24-4.

+

In half the trials with a sample size of forty-eight the difference between the sample mean and the “real” mean of 31.75 will be .36 or .37 pound (the median deviations), smaller than with the values of .65 and .92 for samples of 12 pigs. Again, is this too little accuracy for you? If so, increase the sample size further.

+

Table 24-3

Trial   Mean    Absolute deviation of      Trial   Mean    Absolute deviation of
                trial mean from                            trial mean from
                actual mean                                actual mean
1       31.77   .02                        11      32.10   .35
2       32.27   1.52                       12      30.67   1.08
3       31.75   .00                        13      32.42   .67
4       30.83   .92                        14      30.67   1.08
5       30.52   1.23                       15      32.25   .50
6       31.60   .15                        16      31.60   .15
7       32.46   .71                        17      32.33   .58
8       31.10   .65                        18      33.08   1.33
9       32.42   .35                        19      33.01   1.26
10      30.60   1.15                       20      30.60   1.15
Mean    31.75
+

The attentive reader of this example may have been troubled by this question: How do you know what kind of a distribution of values is contained in the universe before the sample is taken? The answer is that you guess, just as in Example 24-1 you guessed at the mean of the universe. If you guess wrong, you will get either more accuracy or less accuracy than you expected from a given sample size, but the results will not be fatal; if you obtain more accuracy than you wanted, you have wasted some money, and, if you obtain less accuracy, your sample dispersion will tell you so, and you can then augment the sample to boost the accuracy. But an error in guessing will not introduce error into your final results.

+

Table 24-4

Trial   Mean    Absolute deviation of      Trial   Mean    Absolute deviation of
                trial mean from                            trial mean from
                actual mean                                actual mean
1       31.80   .05                        11      31.93   .18
2       32.27   .52                        12      32.40   .65
3       31.82   .07                        13      31.32   .43
4       31.39   .36                        14      32.07   .68
5       31.22   .53                        15      32.03   .28
6       31.88   .13                        16      31.95   .20
7       31.37   .38                        17      31.75   .00
8       31.48   .27                        18      31.11   .64
9       31.20   .55                        19      31.96   .21
10      32.01   .26                        20      31.32   .43
Mean    31.75
+

The guess should be based on something, however. One source for guessing is your general knowledge of the likely dispersion; for example, if you were estimating male heights in Rhode Island, you would be able to guess what proportion of observations would fall within 2 inches, 4 inches, 6 inches, and 8 inches, perhaps, of the real value. Or, much better yet, a very small pretest will yield quite satisfactory estimates of the dispersion.

+

Here is a RESAMPLING STATS program that will let you try different sample sizes, and then take bootstrap samples to determine the range of sampling error. You set the sample size with the DATA command, and the NUMBERS command records the data. Above I noted that we could sample without replacement from a “deck” of thirty “31”’s, thirty “34”’s, etc, as a substitute for creating a universe of a million “31”’s, a million “34”’s, etc. We can achieve the same effect if we replace each card after we sample it; this is equivalent to creating a “deck” of an infinite number of “31”’s, “34”’s, etc. That is what the SAMPLE command does, below. Note that the sample size is determined by the value of the “sampsize” variable, which you set at the beginning. From here on the program takes the MEAN of each sample, keeps SCORE of that result, and produces a HISTOGRAM. The PERCENTILE command will also tell you what values enclose 90% of all sample results, excluding those below the 5th percentile and above the 95th percentile.

+

Here is a program for a sample size of 12.

+ +
' Program file: "how_big_sample_01.rss"
+
+DATA (12) sampsize
+NUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a
+REPEAT 1000
+    SAMPLE sampsize a b
+    MEAN b c
+    SCORE c z
+END
+HISTOGRAM z
+PERCENTILE z (5 95) k
+PRINT k
+' **Bin Center Freq Pct Cum Pct**
+

Bin Center   Freq    Pct     Cum Pct
29.0           2      0.2      0.2
29.5           4      0.4      0.6
30.0          30      3.0      3.6
30.5          71      7.1     10.7
31.0         162     16.2     26.9
31.5         209     20.9     47.8
32.0         237     23.7     71.5
32.5         143     14.3     85.8
33.0          90      9.0     94.8
33.5          37      3.7     98.5
34.0          12      1.2     99.7
34.5           3      0.3    100.0

k = 30.417 33.25
+
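As the note at the top of this page says, the original RESAMPLING STATS code has not yet been ported. In the meantime, here is one plausible R rendering of the program above; the names sample_size, gains and z are ours, and sampling with replacement plays the role of the effectively infinite deck. To try a different sample size, change sample_size and rerun.

# An unofficial R sketch of "how_big_sample_01.rss".
sample_size <- 12
gains <- c(31, 34, 29, 26, 32, 35, 38, 34, 32, 31, 30, 29)

n_trials <- 1000
z <- numeric(n_trials)
for (i in 1:n_trials) {
    # Resample with replacement, as if from an endless deck of these gains.
    b <- sample(gains, size=sample_size, replace=TRUE)
    z[i] <- mean(b)
}
hist(z, main='Means of resamples (n = 12) of weight gains')
# The values enclosing 90% of the resampled means.
quantile(z, c(0.05, 0.95))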

Example 24-3

+

This is the first example of sample-size estimation for probability (testing) statistics, rather than the summarization statistics dealt with above.

+

Recall the problem of the sex of fruit-fly offspring discussed in Example 15-1. The question now is, how large a sample is needed to determine whether the radiation treatment results in a sex ratio other than a 50-50 male-female split?

+

The first step is, as usual, difficult but necessary. As the researcher, you must guess what the sex ratio will be if the treatment does have an effect. Let’s say that you use all your general knowledge of genetics and of this treatment and that you guess the sex ratio will be 75 percent males and 25 percent females if the treatment alters the ratio from 50-50.

+

In the random-number table let “01-25” stand for females and “26-00” for males. Take twenty successive pairs of numbers for each trial, and run perhaps fifty trials, as in Table 24-5.

+

Table 24-5

Trial   Females   Males     Trial   Females   Males     Trial   Females   Males
1       4         16        18      7         13        34      4         16
2       6         14        19      3         17        35      6         14
3       6         14        20      7         13        36      3         17
4       5         15        21      4         16        37      8         12
5       5         15        22      4         16        38      4         16
6       3         17        23      5         15        39      3         17
7       7         13        24      8         12        40      6         14
8       6         14        25      4         16        41      5         15
9       3         17        26      1         19        42      2         18
10      2         18        27      5         15        43      8         12
11      6         14        28      3         17        44      4         16
12      1         19        29      8         12        45      6         14
13      6         14        30      8         12        46      5         15
14      3         17        31      5         15        47      3         17
15      1         19        32      3         17        48      5         15
16      5         15        33      4         16        49      3         17
17      5         15                                    50      5         15

+

In Example 15-1 with a sample of twenty flies that contained fourteen or more males, we found only an 8% probability that such an extreme sample would result from a 50-50 universe. Therefore, if we observe such an extreme sample, we rule out a 50-50 universe.

+

Now Table 24-5 tells us that, if the ratio is really 75 to 25, then a sample of twenty will show fourteen or more males forty-two of fifty times (84 percent of the time). If we take a sample of twenty flies and if the ratio is really 75-25, we will make the correct decision by deciding that the split is not 50-50 84 percent of the time.
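The fifty hand trials can be multiplied many times over on the computer. Here is a small R sketch (ours, for illustration) that draws samples of twenty from a 75 percent male universe and finds how often they contain fourteen or more males; the result should be broadly comparable to the 84 percent found in the hand trials above.

# How often does a 75-25 male-female universe give 14 or more males in 20 flies?
n_trials <- 10000
males <- numeric(n_trials)
for (i in 1:n_trials) {
    flies <- sample(c('male', 'female'), size=20, prob=c(0.75, 0.25), replace=TRUE)
    males[i] <- sum(flies == 'male')
}
# Proportion of samples of 20 showing 14 or more males.
mean(males >= 14)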

+

Perhaps you are not satisfied with reaching the right conclusion only 84 percent of the time. In that case, still assuming that the ratio will really be 75-25 if it is not 50-50, you need to take a sample larger than twenty flies. How much larger? That depends on how much surer you want to be. Follow the same procedure for a sample size of perhaps eighty flies. First work out for a sample of eighty, as was done in Example 15-1 for a sample of twenty, the number of males out of eighty that you would need to find for the odds to be, say, 9 to 1 that the universe is not 50-50; your estimate turns out to be forty-eight males. Then run fifty trials of eighty flies each on the basis of 75-25 probability, and see how often you would not get as many as forty-eight males in the sample. Table 24-6 shows the results we got. No trial was anywhere near as low as forty-eight, which suggests that a sample of eighty is larger than necessary if the split is really 75-25.

+

Table 24-6

+

+

+

Trial   Females   Males     Trial   Females   Males     Trial   Females   Males
1       21        59        18      13        67        34      21        59
2       22        58        19      19        61        35      17        63
3       13        67        20      17        63        36      22        58
4       15        65        21      17        63        37      19        61
5       22        58        22      18        62        38      21        59
6       21        59        23      26        54        39      21        59
7       13        67        24      20        60        40      21        59
8       24        56        25      16        64        41      21        59
9       16        64        26      22        58        42      18        62
10      21        59        27      16        64        43      19        61
11      20        60        28      21        59        44      17        63
12      19        61        29      22        58        45      13        67
13      21        59        30      21        59        46      16        64
14      17        63        31      22        58        47      21        59
15      22        68        32      19        61        48      16        64
16      22        68        33      10        70        49      17        63
17      17        63                                    50      21        59
+

Table 24-7

+

Trial   Females   Males     Trial   Females   Males     Trial   Females   Males
1       35        45        18      32        48        34      35        45
2       36        44        19      28        52        35      36        44
3       35        45        20      32        48        36      29        51
4       35        45        21      33        47        37      36        44
5       36        44        22      37        43        38      36        44
6       36        44        23      36        44        39      31        49
7       36        44        24      31        49        40      29        51
8       34        46        25      27        53        41      30        50
9       34        46        26      30        50        42      35        45
10      29        51        27      31        49        43      32        48
11      29        51        28      33        47        44      30        50
12      32        48        29      37        43        45      37        43
13      29        51        30      30        50        46      31        49
14      31        49        31      31        49        47      36        44
15      28        52        32      32        48        48      34        64
16      33        47        33      34        46        49      29        51
17      36        44                                    50      37        43
+

+

It is obvious that, if the split you guess at is 60 to 40 rather than 75 to 25, you will need a bigger sample to obtain the “correct” result with the same probability. For example, run some eighty-fly random-number trials with numbers 01-60 representing males and 61-00 representing females. Table 24-7 shows that only twenty-four of fifty (48 percent) of the trials reach the necessary cut-off at which one would judge that a sample of eighty really does not come from a universe that is split 50-50; therefore, a sample of eighty is not big enough if the split is 60-40.

+

To review the main principles of this example: First, the closer together the two possible universes from which you think the sample might have come (50-50 and 60-40 are closer together than are 50-50 and 75-25), the larger the sample needed to distinguish between them. Second, the surer you want to be that you reach the right decision based upon the sample evidence, the larger the sample you need.

+

The problem may be handled with the following RESAMPLING STATS program. We construct a benchmark universe that is 60-40 male-female, and take samples of size 80, observing whether the numbers of males and females differ enough in these resamples to rule out a 50-50 universe. Recall that we need at least 48 males to say that the proportion of males is not 50%.

+ +
' Program file: "how_big_sample_02.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 80 1,10 a
+    ' Generate 80 "flies," each represented by a number between 1 and 10 where
+    ' <= 6 is a male
+    COUNT a <=6 b
+    ' Count the males
+    SCORE b z
+    ' Keep score
+END
+COUNT z >=48 k
+' How many of the trials produced more than 48 males?
+DIVIDE k 1000 kk
+' Convert to a proportion
+PRINT kk
+' If the result "kk" is close to 1, we then know that samples of size 80
+' will almost always produce samples with enough males to avoid misleading
+' us into thinking that they could have come from a universe in which
+' males and females are split 50-50.
+
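A possible R translation of this program, for readers who want to run it before the official port appears (the variable names are ours). As the hand trials in Table 24-7 suggested, kk is unlikely to come out close to 1 here.

# An unofficial R sketch of "how_big_sample_02.rss".
n_trials <- 1000
z <- numeric(n_trials)
for (i in 1:n_trials) {
    # 80 "flies", each a number from 1 to 10; 1 through 6 (60%) count as male.
    a <- sample(1:10, size=80, replace=TRUE)
    z[i] <- sum(a <= 6)
}
# Proportion of trials with 48 or more males.
kk <- sum(z >= 48) / n_trials
kk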

Example 24-4

+

Referring back to Example 15-3, on the cable-television poll, how large a sample should you have taken? Pretend that the data have not yet been collected. You need some estimate of how the results will turn out before you can select a sample size. But you have not the foggiest idea how the results will turn out. Therefore, go out and take a very small sample, maybe ten people, to give you some idea of whether people will split quite evenly or unevenly. Seven of your ten initial interviews say they are for CATV. How large a sample do you now need to provide an answer of which you can be fairly sure?

+

Using the techniques of the previous chapter, we estimate roughly that from a sample of fifty people at least thirty-two would have to vote the same way for you to believe that the odds are at least 19 to 1 that the sample does not misrepresent the universe, that is, that the sample does not show a majority different from that of the whole universe if you polled everyone. This estimate is derived from the resampling experiment described in example 15-3. The table shows that if half the people (or more) are against cable television, only one in twenty times will thirty-two (or more) people of a sample of fifty say that they are for cable television; that is, only one of twenty trials with a 50-50 universe will produce as many as thirty-two yeses if a majority of the population is against it.

+

Therefore, designate numbers 1-30 as no and 31-00 as yes in the random-number table (that is, 70 percent, as in your estimate based on your presample of ten), work through a trial sample size of fifty, and count the number of yeses . Run through perhaps ten or fifteen trials, and reckon how often the observed number of yeses exceeds thirty-two, the number you must exceed for a result you can rely on. In Table 24-8 we see that a sample of fifty respondents, from a universe split 70-30, will show that many yeses a preponderant proportion of the time — in fact, in fifteen of fifteen experiments; therefore, the sample size of fifty is large enough if the split is “really” 70-30.

+

Table 24-8

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TrialNoYesTrialNoYes
1133791535
2143610941
31832111535
41040121535
5133713941
61535141634
71436151733
+

The following RESAMPLING STATS program takes samples of size 50 from a universe that is 70% “yes.” It then observes how often such samples produce 32 or more “yeses” — the number we must get if we are to be sure that the sample is not from a 50/50 universe.

+ +
' Program file: "how_big_sample_03.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 50 1,10 a
+    ' Generate 50 numbers between 1 and 10, let 1-7 = yes.
+    COUNT a <=7 b
+    ' Count the "yeses"
+    SCORE b z
+    ' Keep score of the result
+END
+COUNT z >=32 k
+' Count how often the sample result >= our 32 cutoff (recall that samples
+' with fewer than 32 "yeses" cannot be ruled out of a 50/50 universe)
+DIVIDE k 1000 kk
+' Convert to a proportion
+

If “kk” is close to 1, we can be confident that this sample will be large enough to avoid a result that we might mistakenly think comes from a 50/50 universe (provided that the real universe is 70% favorable).

+

Example 24-5

+

How large a sample is needed to determine whether there is any difference between the two pig rations in Example 15-7? The first step is to guess the results of the tests. You estimate that the average for ration A will be a weight gain of thirty-two pounds. You further guess that twelve pigs on ration A might gain thirty-six, thirty-five, thirty-four, thirty-three, thirty-three, thirty-two, thirty-two, thirty-one, thirty-one, thirty, twenty-nine, and twenty-eight pounds. This set of guesses has an equal number of pigs above and below the average and more pigs close to the average than farther away. That is, there are more pigs at 33 and 31 pounds than at 36 and 28 pounds. This would seem to be a reasonable distribution of pigs around an average of 32 pounds. In similar fashion, you guess an average weight gain of 28 pounds for ration B and a distribution of 32, 31, 30, 29, 29, 28, 28, 27, 27, 26, 25, and 24 pounds.

+

Let us review the basic strategy. We want to find a sample size large enough so that a large proportion of the time it will reveal a difference between groups big enough to be accepted as not attributable to chance. First, then, we need to find out how big the difference must be to be accepted as evidence that the difference is not attributable to chance. We do so from trials with samples that size from the benchmark universe. We state that a difference larger than the benchmark universe will usually produce is not attributable to chance.

+

In this case, let us try samples of 12 pigs on each ration. First we draw two samples from a combined benchmark universe made up of the results that we have guessed will come from ration A and ration B. (The procedure is the same as was followed in Example 15-7.) We find that in 19 out of 20 trials the difference between the two observed groups of 12 pigs was 3 pounds or less. Now we investigate how often samples of 12 pigs, drawn from the separate universes, will show a mean difference as large as 3 pounds. We do so by making up a deck of 25 or 50 cards for each of the 12 hypothesized A’s and each of the 12 B’s, with the ration name and the weight gain written on it — that is, a deck of, say, 300 cards for each ration. Then from each deck we draw a set of 12 cards at random, record the group averages, and find the difference.

+

Here is the same work done with more runs on the computer:

+ +
' Program file: "how_big_sample_04.rss"
+
+NUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a
+NUMBERS (32 32 31 30 29 29 29 28 28 26 26 24) b
+REPEAT 1000
+    SAMPLE 12 a aa
+    MEAN aa aaa
+    SAMPLE 12 b bb
+    MEAN bb bbb
+    SUBTRACT aaa bbb c
+    SCORE c z
+END
+HISTOGRAM z
+' **Difference in mean weights between resamples**
+
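An unofficial R sketch of the same computation (our variable names):

# An unofficial R sketch of "how_big_sample_04.rss".
a <- c(31, 34, 29, 26, 32, 35, 38, 34, 32, 31, 30, 29)
b <- c(32, 32, 31, 30, 29, 29, 29, 28, 28, 26, 26, 24)

n_trials <- 1000
z <- numeric(n_trials)
for (i in 1:n_trials) {
    aa <- sample(a, size=12, replace=TRUE)
    bb <- sample(b, size=12, replace=TRUE)
    # Difference in mean weight gain between the two resamples.
    z[i] <- mean(aa) - mean(bb)
}
hist(z, main='Difference in mean weights between resamples')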

+

Therefore, two samples of twelve pigs each are clearly large enough, and, in fact, even smaller samples might be sufficient if the universes are really like those we guessed at. If, on the other hand, the differences in the guessed universes had been smaller, then twelve-pig groups would have seemed too small and we would then have had to try out larger sample sizes, say forty-eight pigs in each group and perhaps 200 pigs in each group if forty-eight were not enough. And so on until the sample size is large enough to promise the accuracy we want. (In that case, the decks would also have to be much larger, of course.)

+

If we had guessed different universes for the two rations, then the sample sizes required would have been larger or smaller. If we had guessed the averages for the two samples to be closer together, then we would have needed larger samples. Also, if we had guessed the weight gains within each universe to be less spread out, the samples could have been smaller and vice versa.

+

The following RESAMPLING STATS program first records the data from the two samples, and then draws from decks of infinite size by sampling with replacement from the original samples.

+ +
' Program file: "how_big_sample_05.rss"
+
+DATA (36 35 34 33 33 32 32 31 31 30 29 28) a
+DATA (32 31 30 29 29 28 28 27 27 26 25 24) b
+REPEAT 1000
+    SAMPLE 12 a aa
+    ' Draw a sample of 12 from ration a with replacement (this is like drawing
+    ' from a large deck made up of many replicates of the elements in a)
+    SAMPLE 12 b bb
+    ' Same for b
+    MEAN aa aaa
+    ' Find the averages of the resamples
+    MEAN bb bbb
+    SUBTRACT aaa bbb c
+    ' Find the difference
+    SCORE c z
+END
+COUNT z >=3 k
+' How often did the difference exceed the cutoff point for our
+' significance test of 3 pounds?
+DIVIDE k 1000 kk
+PRINT kk
+' If kk is close to one, we know that the sample size is large enough
+' that samples drawn from the universes we have hypothesized will not
+' mislead us into thinking that they could come from the same universe.
+
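Again, one possible R rendering of the program above (unofficial; the 3-pound cutoff comes from the significance test described earlier). If kk comes out close to 1, samples of twelve pigs per ration should usually be enough to reveal a difference as large as the one we have guessed at.

# An unofficial R sketch of "how_big_sample_05.rss".
a <- c(36, 35, 34, 33, 33, 32, 32, 31, 31, 30, 29, 28)  # guessed ration A universe
b <- c(32, 31, 30, 29, 29, 28, 28, 27, 27, 26, 25, 24)  # guessed ration B universe

n_trials <- 1000
z <- numeric(n_trials)
for (i in 1:n_trials) {
    # Sampling with replacement is like drawing from a very large deck
    # made up of many replicates of the guessed weight gains.
    aa <- sample(a, size=12, replace=TRUE)
    bb <- sample(b, size=12, replace=TRUE)
    z[i] <- mean(aa) - mean(bb)
}
# How often did the difference reach the 3-pound cutoff?
kk <- sum(z >= 3) / n_trials
kk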
+
+

30.3 Step-wise sample-size determination

+

Often it is wisest to determine the sample size as you go along, rather than fixing it firmly in advance. In sequential sampling, you continue sampling until the split is wide enough to make you believe you have a reliable answer.

+

Related techniques work in a series of jumps from sample size to sample size. Step-wise sampling makes it less likely that you will take a sample that is much larger than necessary. For example, in the cable-television case, if you took a sample of perhaps fifty you could see whether the split was as wide as 32-18, which you figure you need for 9 to 1 odds that your answer is right. If the split were not that wide, you would sample another fifty, another 100, or however large a sample you needed until you reached a split wide enough to satisfy you that your answer was reliable and that you really knew which way the entire universe would vote.
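A rough sketch of such a step-wise scheme in R is below. Everything in it is an assumption made for illustration: the true 70 percent “yes” universe, the batch size of 50, the 500-person ceiling, and the fixed 64 percent stopping threshold (which corresponds to 32 out of 50; in practice the split needed for a given level of surety narrows as the total sample grows).

# Illustrative step-wise sampling for the cable-television poll (all settings
# below are assumptions made for this sketch, not prescriptions from the text).
true_prop_yes <- 0.7   # assumed true universe
batch_size <- 50
max_n <- 500
stop_split <- 0.64     # 32 out of 50, held fixed here for simplicity

n_asked <- 0
n_yes <- 0
while (n_asked < max_n) {
    batch <- sample(c('yes', 'no'), size=batch_size,
                    prob=c(true_prop_yes, 1 - true_prop_yes), replace=TRUE)
    n_asked <- n_asked + batch_size
    n_yes <- n_yes + sum(batch == 'yes')
    # Stop as soon as the observed split is wide enough.
    if (n_yes / n_asked >= stop_split) {
        break
    }
}
c(asked=n_asked, yes=n_yes)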

+

Step-wise sampling is not always practical, however, and the cable-television telephone-survey example is unusually favorable for its use. One major pitfall is that the early responses to a mail survey, for example, do not provide a random sample of the whole, and therefore it is a mistake simply to look at the early returns when the split is not wide enough to justify a verdict. If you have listened to early radio or television reports of election returns, you know how misleading the reports from the first precincts can be if we regard them as a fair sample of the whole.2

+

Stratified sampling is another device that helps reduce the sample size required, by balancing the amounts of information you obtain in the various strata. (Cluster sampling does not reduce the sample size. Rather, it aims to reduce the cost of obtaining a sample that will produce a given level of accuracy.)

+
+
+

30.4 Summary

+

Sample sizes are too often determined on the basis of convention or of the available budget. A more rational method of choosing the size of the sample is by balancing the diminution of error expected with a larger sample, and its value, against the cost of increasing the sample size. The relationship of various sample sizes to various degrees of accuracy can be estimated with resampling methods, which are illustrated here.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/images/13-Chap-9_002.png b/r-book/images/13-Chap-9_002.png new file mode 100644 index 00000000..3bdb1595 Binary files /dev/null and b/r-book/images/13-Chap-9_002.png differ diff --git a/r-book/images/17_d10s.png b/r-book/images/17_d10s.png new file mode 100644 index 00000000..32576729 Binary files /dev/null and b/r-book/images/17_d10s.png differ diff --git a/r-book/images/20_d10s.jpg b/r-book/images/20_d10s.jpg new file mode 100755 index 00000000..9bb9ac84 Binary files /dev/null and b/r-book/images/20_d10s.jpg differ diff --git a/r-book/images/21-Chap-17_000.png b/r-book/images/21-Chap-17_000.png new file mode 100644 index 00000000..7e9c7cfa Binary files /dev/null and b/r-book/images/21-Chap-17_000.png differ diff --git a/r-book/images/21-Chap-17_001.png b/r-book/images/21-Chap-17_001.png new file mode 100644 index 00000000..9e21f1ab Binary files /dev/null and b/r-book/images/21-Chap-17_001.png differ diff --git a/r-book/images/21-Chap-17_002.png b/r-book/images/21-Chap-17_002.png new file mode 100644 index 00000000..d44041ba Binary files /dev/null and b/r-book/images/21-Chap-17_002.png differ diff --git a/r-book/images/21-Chap-17_003.png b/r-book/images/21-Chap-17_003.png new file mode 100644 index 00000000..ca1ed7c4 Binary files /dev/null and b/r-book/images/21-Chap-17_003.png differ diff --git a/r-book/images/22-Chap-18_000.png b/r-book/images/22-Chap-18_000.png new file mode 100644 index 00000000..cd6ff97d Binary files /dev/null and b/r-book/images/22-Chap-18_000.png differ diff --git a/r-book/images/22-Chap-18_001.png b/r-book/images/22-Chap-18_001.png new file mode 100644 index 00000000..db94c27c Binary files /dev/null and b/r-book/images/22-Chap-18_001.png differ diff --git a/r-book/images/22-Chap-18_002.png b/r-book/images/22-Chap-18_002.png new file mode 100644 index 00000000..ddf5bf04 Binary files /dev/null and b/r-book/images/22-Chap-18_002.png differ diff --git a/r-book/images/22-Chap-18_006.png b/r-book/images/22-Chap-18_006.png new file mode 100644 index 00000000..4eeb5c4e Binary files /dev/null and b/r-book/images/22-Chap-18_006.png differ diff --git a/r-book/images/22-Chap-18_007.png b/r-book/images/22-Chap-18_007.png new file mode 100644 index 00000000..bd0269eb Binary files /dev/null and b/r-book/images/22-Chap-18_007.png differ diff --git a/r-book/images/22-Chap-18_008.png b/r-book/images/22-Chap-18_008.png new file mode 100644 index 00000000..e32fe0ca Binary files /dev/null and b/r-book/images/22-Chap-18_008.png differ diff --git a/r-book/images/22-Chap-18_009.png b/r-book/images/22-Chap-18_009.png new file mode 100644 index 00000000..74fca22a Binary files /dev/null and b/r-book/images/22-Chap-18_009.png differ diff --git a/r-book/images/25-Chap-21_004.png b/r-book/images/25-Chap-21_004.png new file mode 100644 index 00000000..c66d1a56 Binary files /dev/null and b/r-book/images/25-Chap-21_004.png differ diff --git a/r-book/images/25-Chap-21_005.png b/r-book/images/25-Chap-21_005.png new file mode 100644 index 00000000..ac7136fe Binary files /dev/null and b/r-book/images/25-Chap-21_005.png differ diff --git a/r-book/images/27-Chap-23_000.png b/r-book/images/27-Chap-23_000.png new file mode 100644 index 00000000..bc428ca0 Binary files /dev/null and b/r-book/images/27-Chap-23_000.png differ diff --git a/r-book/images/27-Chap-23_004.png b/r-book/images/27-Chap-23_004.png new file mode 100644 index 00000000..a925791c Binary files /dev/null and b/r-book/images/27-Chap-23_004.png differ diff --git 
a/r-book/images/27-Chap-23_005.png b/r-book/images/27-Chap-23_005.png new file mode 100644 index 00000000..ccdbd10f Binary files /dev/null and b/r-book/images/27-Chap-23_005.png differ diff --git a/r-book/images/27-Chap-23_006.png b/r-book/images/27-Chap-23_006.png new file mode 100644 index 00000000..9285f3b7 Binary files /dev/null and b/r-book/images/27-Chap-23_006.png differ diff --git a/r-book/images/28-Chap-24_000.png b/r-book/images/28-Chap-24_000.png new file mode 100644 index 00000000..5f0dbd7d Binary files /dev/null and b/r-book/images/28-Chap-24_000.png differ diff --git a/r-book/images/28-Chap-24_001.png b/r-book/images/28-Chap-24_001.png new file mode 100644 index 00000000..f5ef2338 Binary files /dev/null and b/r-book/images/28-Chap-24_001.png differ diff --git a/r-book/images/28-Chap-24_002.png b/r-book/images/28-Chap-24_002.png new file mode 100644 index 00000000..129e8554 Binary files /dev/null and b/r-book/images/28-Chap-24_002.png differ diff --git a/r-book/images/28-Chap-24_003.png b/r-book/images/28-Chap-24_003.png new file mode 100644 index 00000000..49f99e9a Binary files /dev/null and b/r-book/images/28-Chap-24_003.png differ diff --git a/r-book/images/28-Chap-24_004.png b/r-book/images/28-Chap-24_004.png new file mode 100644 index 00000000..36eff509 Binary files /dev/null and b/r-book/images/28-Chap-24_004.png differ diff --git a/r-book/images/30-Exercise-sol_000.png b/r-book/images/30-Exercise-sol_000.png new file mode 100644 index 00000000..70c0be45 Binary files /dev/null and b/r-book/images/30-Exercise-sol_000.png differ diff --git a/r-book/images/30-Exercise-sol_001.png b/r-book/images/30-Exercise-sol_001.png new file mode 100644 index 00000000..a6c4936e Binary files /dev/null and b/r-book/images/30-Exercise-sol_001.png differ diff --git a/r-book/images/30-Exercise-sol_002.png b/r-book/images/30-Exercise-sol_002.png new file mode 100644 index 00000000..33f896a1 Binary files /dev/null and b/r-book/images/30-Exercise-sol_002.png differ diff --git a/r-book/images/30-Exercise-sol_003.png b/r-book/images/30-Exercise-sol_003.png new file mode 100644 index 00000000..9c26c296 Binary files /dev/null and b/r-book/images/30-Exercise-sol_003.png differ diff --git a/r-book/images/30-Exercise-sol_004.png b/r-book/images/30-Exercise-sol_004.png new file mode 100644 index 00000000..82f38d7b Binary files /dev/null and b/r-book/images/30-Exercise-sol_004.png differ diff --git a/r-book/images/30-Exercise-sol_005.png b/r-book/images/30-Exercise-sol_005.png new file mode 100644 index 00000000..5a99a19f Binary files /dev/null and b/r-book/images/30-Exercise-sol_005.png differ diff --git a/r-book/images/30-Exercise-sol_006.png b/r-book/images/30-Exercise-sol_006.png new file mode 100644 index 00000000..98152f59 Binary files /dev/null and b/r-book/images/30-Exercise-sol_006.png differ diff --git a/r-book/images/30-Exercise-sol_007.png b/r-book/images/30-Exercise-sol_007.png new file mode 100644 index 00000000..93a927c5 Binary files /dev/null and b/r-book/images/30-Exercise-sol_007.png differ diff --git a/r-book/images/nile_levels.png b/r-book/images/nile_levels.png new file mode 100644 index 00000000..a4253fd6 Binary files /dev/null and b/r-book/images/nile_levels.png differ diff --git a/r-book/images/one_d10s.jpg b/r-book/images/one_d10s.jpg new file mode 100644 index 00000000..1d16ce3d Binary files /dev/null and b/r-book/images/one_d10s.jpg differ diff --git a/r-book/index.html b/r-book/index.html new file mode 100644 index 00000000..8c0c7006 --- /dev/null +++ b/r-book/index.html @@ 
-0,0 +1,653 @@ + + + + + + + + + + + + + +Resampling statistics + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Resampling statistics

+
+ + + +
+ +
+
Authors
+
+

Julian Lincoln Simon

+

Matthew Brett

+

Stéfan van der Walt

+

Ian Nimmo-Smith

+
+
+ + + +
+ + +
+ + +
+

R edition

+
+

There are two editions of this book; one with examples in the R programming language 1, and another with examples in the Python language 2.

+

This is the R edition.

+

The files on this website are free to view and download. We release the content under the Creative Commons Attribution / No Derivatives 4.0 License. If you’d like a physical copy of the book, you should be able to order it from Sage, when it is published.

+

We wrote this book in RMarkdown with Quarto. It is automatically rebuilt from source by GitHub.

+ + + + + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/inference_ideas.html b/r-book/inference_ideas.html new file mode 100644 index 00000000..0f382cf7 --- /dev/null +++ b/r-book/inference_ideas.html @@ -0,0 +1,997 @@ + + + + + + + + + +Resampling statistics - 17  The Basic Ideas in Statistical Inference + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

17  The Basic Ideas in Statistical Inference

+
+ + + +
+ + + + +
+ + +
+ +

Probabilistic statistical inference is a crucial part of the process of informing ourselves about the world around us. Statistics and statistical inference help us understand our world and make sound decisions about how to act.

+

More specifically, statistical inference is the process of drawing conclusions about populations or other collections of objects about which we have only partial knowledge from samples. Technically, inference may be defined as the selection of a probabilistic model to resemble the process you wish to investigate, investigation of that model’s behavior, and interpretation of the results. Fuller understanding of the nature of statistical inference comes with practice in handling a variety of problems.

+

Until the 18th century, humanity’s extensive knowledge of nature and technology was not based on formal probabilistic statistical inference. But now that we have already dealt with many of the big questions that are easy to answer without probabilistic statistics, and now that we live in a more ramified world than in earlier centuries, the methods of inferential statistics become ever more important.

+

Furthermore, statistical inference will surely become ever more important in the future as we voyage into realms that are increasingly difficult to comprehend. The development of an accurate chronometer to tell time on sea voyages became a crucial need when Europeans sought to travel to the New World. Similarly, probability and statistical inference become crucial as we voyage out into space and down into the depths of the ocean and the earth, as well as probe into the secrets of the microcosm and of the human mind and soul.

+

Where probabilistic statistical inference is employed, the inferential procedures may well not be the crucial element. For example, the wording of the questions asked in a public-opinion poll may be more critical than the statistical-inferential procedures used to discern the reliability of the poll results. Yet we dare not disregard the role of the statistical procedures.

+
+

17.1 Knowledge without probabilistic statistical inference

+

Let us distinguish two kinds of knowledge with which inference at large (that is, not just probabilistic statistical inference) is mainly concerned: a) one or more absolute measurements on one or more dimensions of a collection of one or more items — for example, your income, or the mean income of the people in your country; and b) comparative measurements and evaluations of two or more collections of items (especially whether they are equal or unequal)—for example, the mean income in Brazil compared to the mean income in Argentina. Types (a) and (b) both include asking whether there has been a change between one observation and another.

+

What is the conceptual basis for gathering these types of knowledge about the world? I believe that our rock-bottom conceptual tool is the assumption of what we may call sameness, or continuity, or constancy, or repetition, or equality, or persistence; “constancy” and “continuity” will be the terms used most frequently here, and I shall use them interchangeably.

+

Continuity is a non-statistical concept. It is a best guess about the next point beyond the known observations, without any idea of the accuracy of the estimate. It is like testing the ground ahead when walking in a marsh. It is local rather than global. We’ll talk a bit later about why continuity seems to be present in much of the world that we encounter.

+

The other great concept in statistical inference, and perhaps in all inference taken together, is representative (usually random) sampling, to be discussed in Chapter 18. Representative sampling — which depends upon the assumption of sameness (homogeneity) throughout the universe to be investigated — is quite different than continuity; representative sampling assumes that there is no greater chance of a connection between any two elements that might be drawn into the sample than between any other two elements; the order of drawing is immaterial. In contrast, continuity assumes that there is a greater chance of connection between two contiguous elements than between either one of the elements and any of the many other elements that are not contiguous to either. Indeed, the process of randomizing is a device for doing away with continuity and autocorrelation within some bounded closed system — the sample “frame.” It is an attempt to map (describe) the entire area ahead using the device of the systematic survey. Random representative sampling enables us to make probabilistic inferences about a population based on the evidence of a sample.

+ +

To return now to the concept of sameness: Examples of the principle are that we assume: a) our house will be in the same place tomorrow as today; b) a hammer will break an egg every time you hit the latter with the former (or even the former with the latter); c) if you observe that the first fifteen persons you see walking out of a door at the airport are male, the sixteenth probably will be male also; d) paths in the village stay much the same through a person’s life; e) religious ritual changes little through the decades; f) your best guess about tomorrow’s temperature or stock price is that it will be the same as today’s. This principle of constancy is related to David Hume’s concept of constant conjunction.

+

When my children were young, I would point to a tree on our lawn and ask: “Do you think that tree will be there tomorrow?” And when they would answer “Yes,” I’d ask, “Why doesn’t the tree fall?” That’s a tough question to answer.

+

There are two reasonable bases for predicting that the tree will be standing tomorrow. First and most compelling for most of us is that almost all trees continue standing from day to day, and this particular one has never fallen; hence, what has been in the past is likely to continue. This assessment requires no scientific knowledge of trees, yet it is a very functional way to approach most questions concerning the trees — such as whether to hang a clothesline from it, or whether to worry that it will fall on the house tonight. That is, we can predict the outcome in this case with very high likelihood of being correct even though we do not utilize anything that would be called either science or statistical inference. (But what do you reply when your child says: “Why should I wear a seat belt? I’ve never been in an accident”?)

+

A second possible basis for prediction that the tree will be standing is scientific analysis of the tree’s roots — how the tree’s weight is distributed, its sickness or health, and so on. Let’s put aside this sort of scientific-engineering analysis for now.

+

The first basis for predicting that the tree will be standing tomorrow — sameness — is the most important heuristic device in all of knowledge-gathering. It is often a weak heuristic; certainly the prediction about the tree would be better grounded (!) after a skilled forester examines the tree. But persistence alone might be a better heuristic in a particular case than an engineering-scientific analysis alone.

+

This heuristic appears more obvious if the child — or the adult — were to respond to the question about the tree with another question: Why should I expect it to fall ? In the absence of some reason to expect change, it is quite reasonable to expect no change. And the child’s new question does not duck the central question we have asked about the tree, any more than one ducks a probability estimate by estimating the complementary probability (that is, unity minus the probability sought); indeed, this is a very sound strategy in many situations.

+ +

Constancy can refer to location, time, relationship to another variable, or yet another dimension. Constancy may also be cyclical. Some cyclical changes can be charted or mapped with relative certainty — for example the life-cycles of persons, plants, and animals; the diurnal cycle of dark and light; and the yearly cycle of seasons. The courses of some diseases can also be charted. Hence these kinds of knowledge have long been well known.

+

Consider driving along a road. One can predict that the price at the next gasoline station will be within a few cents of the price at the gasoline station that you just passed. But as you drive further and further, the dispersion increases as you cross state lines and taxes differ. This illustrates continuity.

+

The attention to constancy can focus on a single event, such as leaves of similar shape appearing on the same plant. Or attention can focus on single sequences of “production,” as in the process by which a seed produces a tree. For example, let’s say you see two puppies — one that looks like a low-slung dachshund, and the other a huge mastiff. You also see two grown male dogs, also apparently dachshund and mastiff. If asked about the parentage of the small ones, you are likely — using the principle of sameness — to point — quickly and with surety — to the adult dogs of the same breed. (Here it is important to notice that this answer implicitly assumes that the fathers of the puppies are among these dogs. But the fathers might be somewhere else entirely; it is in these ways that the principle of sameness can lead you astray.)

+

When applying the concept of sameness, the object of interest may be collections of data, as in Semmelweiss’s (1983, 64) data on the consistent differences in rates of maternal deaths from childbed fever in two clinics with different conditions (see Table 17.1), or the similarities in sex ratios from year to year in Graunt’s (1759, 304) data on christenings in London (Table 17.2), or the stark effect in John Snow’s (Winslow 1980, 276) data on the numbers of cholera cases associated with two London water suppliers (Table 17.3), or Kanehiro Takaki’s (Kornberg 1991, 9) discovery of the reduction in beriberi among Japanese sailors as a result of a change in diet (Table 17.4). These data seem so overwhelmingly clear cut that our naive statistical sense makes the relationships seem deterministic, and the conclusions seem straightforward. (But the same statistical sense frequently misleads us when considering sports and stock market data.)

+
Table 17.1: Deaths of Mothers from childbed fever in two clinics

                 First clinic                    Second clinic
Year      Births   Deaths   Rate (%)      Births   Deaths   Rate (%)
1841       3,036      237       7.7        2,442       86       3.5
1842       3,287      518      15.8        2,659      202       7.5
1843       3,060      274       8.9        2,739      164       5.9
1844       3,157      260       8.2        2,956       68       2.3
1845       3,492      241       6.8        3,241       66       2.03
1846       4,010      459      11.4        3,754      105       2.7
Total     20,042    1,989                 17,791      691
Average                         9.92                             3.88
+
+
Table 17.2: Ratio of number of male to number of female christenings in London

Period       Male / Female ratio
1629-1636          1.072
1637-1640          1.073
1641-1648          1.063
1649-1656          1.095
1657-1660          1.069
+
+
Table 17.3: Rates of death from cholera for three water suppliers

Water supplier            Cholera deaths per 10,000 houses
Southwark and Vauxhall                  71
Lambeth                                  5
Rest of London                           9
+
+
Table 17.4: Takaki’s Japanese Naval Records of Deaths from Beriberi

Year   Diet                 Total Navy Personnel   Deaths from Beriberi
1880   Rice diet                           4,956                  1,725
1881   Rice diet                           4,641                  1,165
1882   Rice diet                           4,769                  1,929
1883   Rice diet                           5,346                  1,236
1884   Change to new diet                  5,638                    718
1885   New diet                            6,918                     41
1886   New diet                            8,475                      3
1887   New diet                            9,106                      0
1888   New diet                            9,184                      0
+
+

Constancy and sameness can be seen in macro structures; consider, for example, the constant location of your house. Constancy can also be seen in micro aggregations — for example, the raindrops and rain that account for the predictably fluctuating height of the Nile, or the ratio of boys to girls born in London, cases in which we can average to see the “statistical” sameness. The total sum of the raindrops produces the level of a reservoir or a river from year to year, and the sum of the behaviors of collections of persons causes the birth rates in the various years.

+

Statistical inference is only needed when a person thinks that s/he might have found a pattern but the pattern is not completely obvious to all. Probabilistic inference works to test — either to confirm or discount — the belief in the pattern’s existence. We will see such cases in the following chapter.

+

People have always been forced to think about and act in situations that have not been constant — that is, situations where the amount of variability in the phenomenon makes it impossible to draw clear cut, sensible conclusions. For example, the appearance of game animals in given places and at given times has always been uncertain to hunters, and therefore it has always been difficult to know which target to hunt in which place at what time. And of course variability of the weather has always made it a very uncertain element. The behavior of one’s enemies and friends has always been uncertain, too, though uncertain in a manner different from the behavior of wild animals; there often is a gaming element in interactions with other humans. But in earlier times, data and techniques did not exist to enable us to bring statistical inference to bear.

+
+
+

17.2 The treatment of uncertainty

+

The purpose of statistical inference is to help us peer through the veil of variability when it obscures the main thrust of the data, so as to improve the decisions we make. Statistical inference (or in most cases, simply probabilistic estimation) can help:

+
    +
  • a gambler deciding on the appropriate odds in a betting game when there seems to be little or no difference between two or more outcomes;
  • +
  • an astronomer deciding upon one or another value as the central estimate for the location of a star when there is considerable variation in the observations s/he has made of the star;
  • +
  • a basketball coach pondering whether to remove from the game her best shooter who has heretofore done poorly tonight;
  • +
  • an oil-drilling firm debating whether to follow up a test-well drilling with a full-bore drilling when the probability of success is not overwhelming but the payoff to a gusher could be large.
  • +
+

Returning to the tree near the Simon house: Let’s change the facts. Assume now that one major part of the tree is mostly dead, and we expect a big winter storm tonight. What is the danger that the tree will fall on the house? Should we spend $1500 to have the mostly-dead third of it cut down? We know that last year a good many trees fell on houses in the neighborhood during such a storm.

+

We can gather some data on the proportion of old trees this size that fell on houses — about 5 in 100, so far as we can tell. Now it is no longer an open-and-shut case about whether the tree will be standing tomorrow, and we are using statistical inference to help us with our thinking. We proceed to find a set of trees that we consider similar to this one , and study the variation in the outcomes of such trees. So far we have estimated that the average for this group of trees — the mean (proportion) that fell in the last big storm — is 5 percent. Averages are much more “stable” — that is, more similar to each other — than are individual cases.

+

Notice how we use the crucial concept of sameness: We assume that our tree is like the others we observed, or at least that it is not systematically different from most of them and it is more-or-less average.

+

How would our thinking be different if our data were that one tree in 10 had fallen instead of 5 in 100? This is a question in statistical inference.

+ +

How about if we investigate further and find that 4 of 40 elms fell, but only one of 60 oaks, and ours is an oak tree? Should we consider that oaks and elms have different chances of falling? Proceeding a bit further, we can think of the question as: Should we or should we not consider oaks and elms as different? This is the type of statistical inference called “hypothesis testing”: We apply statistical procedures to help us decide whether to treat the two classes of trees as the same or different. If we should consider them the same, our worries about the tree falling are greater than if we consider them different with respect to the chance of damage.1

+

Notice that statistical inference was not necessary for accurate prediction when I asked the kids about the likelihood of a live tree falling on a day when there would be no storm. So it is with most situations we encounter. But when the assumption of constancy becomes shaky for one reason or another, as with the sick tree falling in a storm, we need a more refined form of thinking. We collect data on a large number of instances, inquire into whether the instances in which we are interested (our tree and the chance of it falling) are representative — that is, whether it resembles what we would get if we drew a sample randomly — and we then investigate the behavior of this large class of instances to see what light it throws on the instance(s) in which we are interested.

+

The procedure in this case — which we shall discuss in greater detail later on — is to ask: If oaks and elms are not different, how likely is it that only one of 60 oaks would fall whereas 4 of 40 elms would fall? Again, notice the assumption that our tree is “representative” of the other trees about which we have information — that it is not systematically different from most of them, but rather that it is more-or-less average. Our tree certainly was not chosen randomly from the set of trees we are considering. But for purposes of our analysis, we proceed as if it had been chosen randomly — because we deem it “representative.”

+
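To make the logic concrete, here is a small sketch in R — an illustration of ours, not part of the original discussion — of how we might answer that question by resampling. The counts (4 of 40 elms fell, 1 of 60 oaks) come from the example above; the number of trials is an arbitrary choice.

```r
# A sketch of a resampling test of whether oaks and elms differ.
# Observed: 4 of 40 elms fell, 1 of 60 oaks fell (5 fallen among 100 trees).
fallen <- rep(c(1, 0), c(5, 95))  # 1 = fell, 0 = stood

n_trials <- 10000
extreme <- numeric(n_trials)
for (i in 1:n_trials) {
    shuffled <- sample(fallen)         # deal the 5 falls out at random
    elm_falls <- sum(shuffled[1:40])   # the first 40 play the role of elms
    extreme[i] <- elm_falls >= 4       # as lopsided as what we observed?
}
# Proportion of shuffles at least as lopsided as the observed split.
mean(extreme)
```

If only a small proportion of the shuffles come out that lopsided, we would lean toward treating oaks and elms as different.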

This is the first of two roles that the concept of randomness plays in statistical thinking. Here is an example of the second use of the concept of randomness: We conduct an experiment — plant elm and oak trees at randomly-selected locations on a plot of land, and then try to blow them down with a wind-making machine. (The random selection of planting spots is important because some locations on a plot of ground have different growing characteristics than do others.) Some purists object that only this sort of experimental sampling is a valid subject of statistical inference; it can never be appropriate, they say, to simply assume on the basis of other knowledge that the tree is representative. I regard that purist view as a helpful discipline on our thinking. But accepting its conclusion — that one should not apply statistical inference except to randomly-drawn or randomly-constituted samples — would take from us a tool that has proven useful in a variety of activities.

+

As discussed earlier in this chapter, the data in some (probably most) scientific situations are so overwhelming that one can proceed without probabilistic inference. Historical examples include those shown above of Semmelweiss and puerperal fever, and John Snow and cholera.2 But where there was lack of overwhelming evidence, the causation of many diseases long remained unclear for lack of statistical procedures. This led to superstitious beliefs and counter-productive behavior, as quarantines against plague often were. Some effective practices also arose despite the lack of sound theory, however — the waxed costumes of doctors, and the burning of mattresses, despite the wrong theory about the causation of plague; see (Cipolla 1981).

+

So far I have spoken only of predictability and not of other elements of statistical knowledge such as understanding and control. This is simply because statistical correlation is the bedrock of most scientific understanding, and predictability. Later we will expand the discussion beyond predictability; it holds no sacred place here.

+
+
+

17.3 Where statistical inference becomes crucial

+

There was little role for statistical inference until about three centuries ago because there existed very few scientific data. When scientific data began to appear, the need emerged for statistical inference to improve the interpretation of the data. As we saw, statistical inference is not needed when the evidence is overwhelming. A thousand cholera cases at one well and zero at another obviously does not require a statistical test. Neither would 999 cases to one, or even 700 cases to 300, because our inbred and learned statistical senses can detect that the two situations are different. But probabilistic inference is needed when the number of cases is relatively small or where for other reasons the data are somewhat ambiguous.

+

For example, when working with the 17th century data on births and deaths, John Graunt — great statistician though he was — drew wrong conclusions about some matters because he lacked modern knowledge of statistical inference. For example, he found that in the rural parish of Romsey “there were born 15 Females for 16 Males, whereas in London there were 13 for 14, which shows, that London is somewhat more apt to produce Males, then the country” (p. 71). He suggests that the “curious” inquire into the causes of this phenomenon, apparently not recognizing — and at that time he had no way to test — that the difference might be due solely to chance. He also notices (p. 94) that the variations in deaths among years in Romsey were greater than in London, and he attempted to explain this apparent fact (which is just a statistical artifact) rather than understanding that this is almost inevitable because Romsey is so much smaller than London. Because we have available to us the modern understanding of variability, we can now reach sound conclusions on these matters.3

+
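As an illustration only — Graunt’s actual christening counts are not given here, so the sample size below is invented — this R sketch shows how we could now ask whether Romsey’s ratio of 16 males to 15 females might differ from London’s 14 to 13 by chance alone.

```r
# A sketch, with made-up christening counts, of the question Graunt could
# not test: could Romsey's sex ratio differ from London's only by chance?
p_london <- 14 / 27        # proportion of males among London christenings
n_romsey <- 3000           # hypothetical number of christenings in Romsey
observed <- 16 / 31        # proportion of males observed in Romsey

n_trials <- 10000
proportions <- numeric(n_trials)
for (i in 1:n_trials) {
    # Pretend Romsey shares London's underlying proportion of males.
    boys <- sum(sample(c(1, 0), n_romsey, replace = TRUE,
                       prob = c(p_london, 1 - p_london)))
    proportions[i] <- boys / n_romsey
}
# How often does chance alone push a Romsey-sized sample at least as far
# from London's proportion as the figure Graunt reported?
mean(abs(proportions - p_london) >= abs(observed - p_london))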

Summary statistics — such as the simple mean — are devices for reducing a large mass of data (inevitably confusing unless they are absolutely clear cut) to something one can manage to understand. And probabilistic inference is a device for determining whether patterns should be considered as facts or artifacts.

+

Here is another example that illustrates the state of early quantitative research in medicine:

+
+

Exploring the effect of a common medicinal substance, Bőcker examined the effect of sasparilla on the nitrogenous and other constituents of the urine. An individual receiving a controlled diet was given a decoction of sasparilla for a period of twelve days, and the volume of urine passed daily was carefully measured. For a further twelve days that same individual, on the same diet, was given only distilled water, and the daily quantity of urine was again determined. The first series of researches gave the following figures (in cubic centimeters): 1,467, 1,744, 1,665, 1,220, 1,161, 1,369, 1,675, 2,199, 887, 1,634, 943, and 2,093 (mean = 1,499); the second series: 1,263, 1,740, 1,538, 1,526, 1,387, 1,422, 1,754, 1,320, 1,809, 2,139, 1,574, and 1,114 (mean = 1,549). Much uncertainty surrounded the exactitude of these measurements, but this played little role in the ensuing discussion. The fundamental issue was not the quality of the experimental data but how inferences were drawn from those data (Coleman 1987, 207).

+
+

The experimenter Böcker had no reliable way of judging whether the data for the two groups were or were not meaningfully different, and therefore he arrived at the unsound conclusion that there was indeed a difference. (Gustav Radicke used this example as the basis for early work on statistical significance (Støvring 1999).)

+
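With the resampling methods of this book, Böcker’s question is easy to put to the data quoted above. The following R sketch (our illustration, not Coleman’s) shuffles the 24 measurements between the two regimes and asks how often chance alone produces a difference in means as large as the one observed.

```r
# A sketch of a permutation test on Böcker's urine volumes (data as quoted
# in the passage above).
sasparilla <- c(1467, 1744, 1665, 1220, 1161, 1369,
                1675, 2199, 887, 1634, 943, 2093)
water <- c(1263, 1740, 1538, 1526, 1387, 1422,
           1754, 1320, 1809, 2139, 1574, 1114)
observed_diff <- mean(water) - mean(sasparilla)

pooled <- c(sasparilla, water)
n_trials <- 10000
diffs <- numeric(n_trials)
for (i in 1:n_trials) {
    shuffled <- sample(pooled)               # shuffle the 24 measurements
    diffs[i] <- mean(shuffled[13:24]) - mean(shuffled[1:12])
}
# How often does shuffling alone give a difference at least as large as
# the one Böcker observed?
mean(abs(diffs) >= abs(observed_diff))
```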

Another example: Joseph Lister convinced the scientific world of the germ theory of infection, and the possibility of preventing death with a disinfectant, with these data: Prior to the use of antiseptics — 16 post-operative deaths in 35 amputations; subsequent to the use of antiseptics — 6 deaths in 40 amputations (Winslow 1980, 303). But how sure could one be that a difference of that size might not occur just by chance? No one then could say, nor did anyone inquire, apparently.

+
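Here is a sketch, in R, of how that inquiry might go today; the counts are Lister’s, while the simulation itself is only our illustration.

```r
# Could a difference as large as Lister's arise by chance?
# Pool the 75 amputations: 22 deaths, 53 survivals.
outcomes <- rep(c(1, 0), c(22, 53))   # 1 = death, 0 = survival
observed <- 16 / 35 - 6 / 40          # observed gap in death rates

n_trials <- 10000
diffs <- numeric(n_trials)
for (i in 1:n_trials) {
    shuffled <- sample(outcomes)
    before <- shuffled[1:35]      # plays the pre-antiseptic group
    after <- shuffled[36:75]      # plays the antiseptic group
    diffs[i] <- mean(before) - mean(after)
}
# Proportion of shuffles giving a gap at least as large as the observed one.
mean(diffs >= observed)
```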

Here’s another example of great scientists falling into error because of a too-primitive approach to data (Feller 1968, 1:69–70): Charles Darwin wanted to compare two sets of measured data, each containing 16 observations. At Darwin’s request, Francis Galton compared the two sets of data by ranking each, and then comparing them pairwise. The a’s were ahead 13 times. Without knowledge of the actual probabilities Galton concluded that the treatment was effective. But, assuming perfect randomness, the probability that the a’s beat [the others] 13 times or more equals 3/16. This means that in three out of sixteen cases a perfectly ineffectual treatment would appear as good or better than the treatment classified as effective by Galton.

+

That is, Galton and Darwin reached an unsound conclusion. As Feller (1968, 1:70) says, “This shows that a quantitative analysis may be a valuable supplement to our rather shaky intuition”.

+

Looking ahead, the key tool in situations like Graunt’s and Böcker’s and Lister’s is creating ceteris paribus — making “everything else the same” — with random selection in experiments, or at least with statistical controls in non-experimental situations.

+
+
+

17.4 Conclusions

+

In all knowledge-seeking and decision-making, our aim is to peer into the unknown and reduce our uncertainty a bit. The two main concepts that we use — the two great concepts in all of scientific knowledge-seeking, and perhaps in all practical thinking and decision-making — are a) continuity (or non-randomness) and the extent to which it applies in a given situation, and b) random sampling, and the extent to which we can assume that our observations are indeed chosen by a random process.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/inference_intro.html b/r-book/inference_intro.html new file mode 100644 index 00000000..3cc0df2c --- /dev/null +++ b/r-book/inference_intro.html @@ -0,0 +1,738 @@ + + + + + + + + + +Resampling statistics - 18  Introduction to Statistical Inference + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

18  Introduction to Statistical Inference

+
+ + + +
+ + + + +
+ + +
+ +

The usual goal of a statistical inference is a decision about which of two or more hypotheses a person will thereafter choose to believe and act upon. The strategy of such inference is to consider the behavior of a given universe in terms of the samples it is likely to produce, and if the observed sample is not a likely outcome of sampling from that universe, we then proceed as if the sample did not in fact come from that universe. (The previous sentence is a restatement in somewhat different form of the core of statistical analysis.)

+
+

18.1 Statistical inference and random sampling

+

Continuity and sameness is the fundamental concept in inference in general, as discussed in Chapter 17. Random sampling is the second great concept in inference, and it distinguishes probabilistic statistical inference from non-statistical inference as well as from non-probabilistic inference based on statistical data.

+

Let’s begin the discussion with a simple though unrealistic situation. Your friend Arista a) looks into a cardboard carton, b) reaches in, c) pulls out her hand, and d) shows you a green ball. What might you reasonably infer?

+

You might at least be fairly sure that the green ball came from the carton, though you recognize that Arista might have had it concealed in her hand when she reached into the carton. But there is not much more you might reasonably conclude at this point except that there was at least one green ball in the carton to start with. There could be no more balls; there could be many green balls and no others; there could be a thousand red balls and just one green ball; and there could be one green ball, a hundred balls of different colors, and two pounds of mud — given that she looked in first, it is not improbable that she picked out the only green ball among other material of different sorts.

+

There is not much you could say with confidence about the probability of yourself reaching into the same carton with your eyes closed and pulling out a single green ball. To use other language (which some philosophers might say is not appropriate here as the situation is too specific), there is little basis for induction about the contents of the box. Nor is the situation very different if your friend reaches in three times in a row and hands you a green ball each time.

+

So far we have put our question rather vaguely. Let us frame a more precise inquiry: What do we predict about the next item(s) we might draw from the carton? If we assume — based on who-knows-what information or notions — that another ball will emerge, we could simply use the principle of sameness and (until we see a ball of another color) predict that the next ball will be green, whether one or three or 100 balls is (are) drawn.

+

But now what about if Arista pulls out nine green balls and one red ball? The principle of sameness cannot be applied as simply as before. Based on the last previous ball, the next one will be red. But taking into account all the balls we have seen, the next will “probably” be green. We have no solid basis on which to go further. There cannot be any “solution” to the “problem” of reaching a general conclusion on the basis of these specific pieces of evidence.

+

Now consider what you might conclude if you were told that a single green ball had been drawn with a random sampling procedure from a box containing nothing but balls. Knowledge that the sample was drawn randomly from a given universe is grounds for belief that one knows much more than if a sample were not drawn randomly. First, you would be sure — if you had reasonable basis to believe that the sampling really was random, which is not easy to guarantee — that the ball came from the box. Second, you would guess that the proportion of green balls is not very small, because if there are only a few green balls and many other-colored balls, it would be unusual — that is, the event would have a low probability — to draw a green ball. Not impossible, but unlikely. And we can compute the probability of drawing a green ball — or any other combination of colors — for different assumed compositions within the box . So the knowledge that the sampling process is random greatly increases our ability — or our confidence in our ability — to infer the contents of the box.

+

Let us note well the strategy of the previous paragraph: Ask about the probability that one or more various possible contents of the box (the “universe”) will produce the observed sample , on the assumption that the sample was drawn randomly. This is the central strategy of all statistical inference , though I do not find it so stated elsewhere. We shall come back to this idea shortly.

+

There are several kinds of questions one might ask about the contents of the box. One general category includes questions about our best guesses of the box’s contents — that is, questions of estimation . Another category includes questions about our surety of that description, and our surety that the contents are similar or different from the contents of other boxes; the consideration of surety follows after estimates are made. The estimation questions can be subtle and unexpected (Savage 1972, chap. 15), but do not cause major controversy about the foundations of statistics. So we can quickly move on to questions about the extent of surety in our estimations.

+

Consider your reaction if the sampling produces 10 green balls in a row, or 9 out of 10. If you had no other information (a very important assumption that we will leave aside for now), your best guess would be that the box contains all green balls, or a proportion of 9 of 10, in the two cases respectively. This estimation process seems natural enough.

+

You would be surprised if someone told you that instead of the box containing the proportion in the sample, it contained just half green balls. How surprised? Intuitively, the extent of your surprise would depend on the probability that a half-green “universe” would produce 10 or 9 green balls out of 10. This surprise is a key element in the logic of the hypothesis-testing branch of statistical inference.

+

We learn more about the likely contents of the box by asking about the probability that various specific populations of balls within the box would produce the particular sample that we received. That is, we can ask how likely a collection of 25 percent green balls is to produce (say) 9 of 10 green ones, and how likely collections of 50 percent, 75 percent, 90 percent (and any other collections of interest) are to produce the observed sample. That is, we ask about the consistency between any particular hypothesized collection within the box and the sample we observe. And it is reasonable to believe that those universes which have greater consistency with the observed sample — that is, those universes that are more likely to produce the observed sample — are more likely to be in the box than other universes. This (to repeat, as I shall repeat many times) is the basic strategy of statistical investigation. If we observe 9 of 10 green balls, we then determine that universes with (say) 9/10 and 10/10 green balls are more consistent with the observed evidence than are universes of 0/10 and 1/10 green balls. So by this process of considering specific universes that the box might contain, we make possible more specific inferences about the box’s probable contents based on the sample evidence than we could without this process.

+
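A small R sketch (an illustration of ours, with the candidate proportions and the number of trials chosen arbitrarily) shows how such consistency can be assessed by simulation:

```r
# How likely are various candidate universes to produce 9 green balls
# in a sample of 10?
n_trials <- 10000
for (prop_green in c(0.25, 0.50, 0.75, 0.90)) {
    n_green <- numeric(n_trials)
    for (i in 1:n_trials) {
        draws <- sample(c("green", "other"), size = 10, replace = TRUE,
                        prob = c(prop_green, 1 - prop_green))
        n_green[i] <- sum(draws == "green")
    }
    cat("Universe with proportion", prop_green, "green:",
        "chance of exactly 9 greens in 10 is about", mean(n_green == 9), "\n")
}
```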

Please notice the role of the assessment of probabilities here: By one technical means or another (either simulation or formulas), we assess the probabilities that a particular universe will produce the observed sample, and other samples as well.

+

It is of the highest importance to recognize that without additional knowledge (or assumption) one cannot make any statements about the probability of the sample having come from any particular universe , on the basis of the sample evidence. (Better read that last sentence again.) We can only speak about the probability that a particular universe will produce the observed sample, a very different matter. This issue will arise again very sharply in the context of confidence intervals.

+

Let us generalize the steps in statistical inference:

+
    +
  1. Frame the original question as: What is the chance of getting the observed sample x from population X? That is, what is the probability of (If x then X)?

  2. +
  3. Proceed to this question: What kinds of samples does X produce, with which probability? That is, what is the probability of this particular x coming from X? That is, what is p(x|X)?

  4. +
  5. Actually investigate the behavior of X with respect to x and other samples. One can do this in two ways:

    +
      +
    1. Use the formulaic calculus of probability, perhaps resorting to Monte Carlo methods if an appropriate formula does not exist. Or,
    2. +
    3. Use resampling (in the larger sense), the domain of which equals (all Monte Carlo experimentation) minus (the use of Monte Carlo methods for approximations, investigation of complex functions in statistics and other theoretical mathematics, and uses elsewhere in science). Resampling in its more restricted sense includes the bootstrap, permutation tests, and other non-parametric methods.
    4. +
  6. +
  7. Interpretation of the probabilities that result from step 3 in terms of

    +
      +
    i) acceptance or rejection of hypotheses, ii) surety of conclusions, or iii) inputs to decision theory.
    2. +
  8. +
+

Here is a short definition of statistical inference:

+
+

The selection of a probabilistic model that might resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.

+
+

We will get even more specific about the procedure when we discuss the canonical procedures for hypothesis testing and for the finding of confidence intervals in the chapters on those subjects.

+

The discussion so far has been in the spirit of what is known as hypothesis testing . The result of a hypothesis test is a decision about whether or not one believes that the sample is likely to have been drawn randomly from the “benchmark universe” X. The logic is that if the probability of such a sample coming from that universe is low, we will then choose to believe the alternative — to wit, that the sample came from the universe that resembles the sample.

+ +

The underlying idea is that if an event would be very surprising if it really happened — as it would be very surprising if the dog had really eaten the homework (see Chapter 21) — we are inclined not to believe in that possibility. (This logic will be explored further in later chapters on hypothesis testing.)

+

We have so far assumed that our only relevant knowledge is the sample. And though we almost never lack some additional information, this can be a sensible way to proceed when we wish to suppress any other information or speculation. This suppression is controversial; those known as Bayesians or subjectivists want us to take into account all the information we have. But even they would not dispute suppressing information in certain cases — such as a teacher who does not want to know students’ IQ scores because s/he might want to avoid the possibility of unconsciously being affected by that score, or an employer who wants not to know the potential employee’s ethnic or racial background even though the hiring process might be more “successful” on some metric, or a sports coach who refuses to pick the starting team each year until the players have competed for the positions.

+ +

Now consider a variant on the green-ball situation discussed above. Assume now that you are told that samples of balls are alternately drawn from one of two specified universes — two buckets of balls, one with 50 percent green balls and the other with 80 percent green balls. Now you are shown a sample of nine green and one red balls drawn from one of those buckets. On the basis of your sample you can then say how probable it is that the sample came from one or the other universe. You proceed by computing the probabilities (often called the likelihoods in this situation) that each of those two universes would individually produce the observed samples — probabilities that you could arrive at with resampling, with Pascal’s Triangle, or with a table of binomial probabilities, or with the Normal approximation and the Z distribution, or with yet other devices. Those probabilities are .01 and .27, and the ratio of the two (.01/.27) is a bit less than .04. That is, fair betting odds are about 1 to 27.

+
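Those two likelihoods are easy to check by simulation. The R sketch below is our illustration; the universes (50 percent and 80 percent green) and the observed sample (nine green, one red) are as described above.

```r
# Estimate, by simulation, the chance that each bucket produces a sample
# of exactly nine green balls out of ten.
n_trials <- 10000
nine_of_ten <- function(prop_green) {
    n_green <- numeric(n_trials)
    for (i in 1:n_trials) {
        draws <- sample(c("green", "red"), size = 10, replace = TRUE,
                        prob = c(prop_green, 1 - prop_green))
        n_green[i] <- sum(draws == "green")
    }
    mean(n_green == 9)
}
like_50 <- nine_of_ten(0.5)   # roughly .01
like_80 <- nine_of_ten(0.8)   # roughly .27
like_50 / like_80             # roughly .04, i.e. odds of about 1 to 27
```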

Let us consider a genetics problem on this model. Plant A produces 3/4 black seeds and 1/4 reds; plant B produces all reds. You get a red seed. Which plant would you guess produced it? You surely would guess plant B. Now, how about 9 reds and a black, from Plants A and C, the latter producing 50 percent reds on average?

+

To put the question more precisely: What betting odds would you give that the one red seed came from plant B? Let us reason this way: If you do this again and again, 4 of 5 of the red seeds you see will come from plant B. Therefore, reasonable (or “fair”) odds are 4 to 1, because this is in accord with the ratios with which red seeds are produced by the two plants — 4/4 to 1/4.

+

How about the sample of 9 reds and a black, and plants A and C? It would make sense that the appropriate odds would be derived from the probabilities of the two plants producing that particular sample, probabilities which we computed above.

+

Now let us move to a bit more complex problem: Consider two buckets — bucket G with 2 red and 1 black balls, and bucket H with 100 red and 100 black balls. Someone flips a coin to decide which bucket will be drawn from, reaches into that bucket, and chooses two balls without replacing the first one before drawing the second. Both are red. What are the odds that the sample came from bucket G? Clearly, the answer should derive from the probabilities that the two buckets would produce the observed sample.

+

(Now just for fun, how about if the first ball drawn is thrown back after examining? What now are the appropriate odds?)

+
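Here is one way to attack the bucket problem by simulation — a sketch for illustration, with the number of trials chosen arbitrarily:

```r
# Bucket G: 2 red, 1 black.  Bucket H: 100 red, 100 black.
# A coin flip picks the bucket; two balls are drawn without replacement.
n_trials <- 10000
bucket <- character(n_trials)
both_red <- logical(n_trials)
for (i in 1:n_trials) {
    bucket[i] <- sample(c("G", "H"), 1)           # the coin flip
    if (bucket[i] == "G") {
        balls <- rep(c("red", "black"), c(2, 1))
    } else {
        balls <- rep(c("red", "black"), c(100, 100))
    }
    draw <- sample(balls, 2)                      # two draws, no replacement
    both_red[i] <- all(draw == "red")
}
# Among the trials that produced two red balls, how often was bucket G
# the source?
mean(bucket[both_red] == "G")
```

The with-replacement variant mentioned just above can be explored by changing the two draws to `sample(balls, 2, replace = TRUE)`.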

Let’s restate the central issue. One can state the probability that a particular plant which produces on average 1 red and 3 black seeds will produce one red seed, or 5 reds among a sample of 10. But without further assumptions — such as the assumption above that the possibilities are limited to two specific universes — one cannot say how likely a given red seed is to have come from a given plant, even if we know that that plant produces only reds. (For example, it may have come from other plants producing only red seeds.)

+

When we limit the possibilities to two universes (or to a larger set of specified universes) we are able to put a probability on one hypothesis or another. But to repeat, in many or most cases, one cannot reasonably assume it is only one or the other. And then we cannot state any odds that the sample came from a particular universe. This is a very difficult point to grasp, experience shows, but a crucial one. (It is the sort of subtle issue that makes statistics so difficult.)

+

The additional assumptions necessary to talk about the probability that the red seed came from a given plant are the stuff of statistical inference. And they must be combined with such “objective” probabilistic assessments as the probability that a 1-red-3-black plant will produce one red, or 5 reds among 10 seeds.

+

Now let us move one step further. Instead of stating as a fact under our control that there is a .5 chance of the sample being drawn from each of the two buckets in the problem above, let us assume that we do not know the probability of each bucket being picked, but instead we estimate a probability of .5 for each bucket, based on a variety of other information that all is uncertain. But though the facts are now different, the most reasonable estimate of the odds that the observed sample was drawn from one or the other bucket will not be different than before — because in both situations we were working with a “prior probability” of .5.

+ +

Now let us go a step further by allowing the universes from which the sample may have come to have different assumed probabilities as well as different compositions. That is, we now consider prior probabilities other than .5.

+

How do we decide which universe(s) to investigate for the probability of producing the observed sample, and of producing samples that are even less likely, in the sense of being more surprising? That judgment depends upon the purpose of your analysis, upon your point of view of how statistics ought to be done, and upon some other factors.

+

It should be noted that the logic described so far applies in exactly the same fashion whether we do our work estimating probabilities with the resampling method or with conventional methods. We can figure the probability of nine or more green chips from a universe of (say) p = .7 with either approach.

+

So far we have discussed the comparison of various hypotheses and possible universes. We must also consider where the consideration of the reliability of estimates comes in. This leads to the concept of confidence limits, which will be discussed in Chapter 26 and Chapter 27.

+
+
+

18.2 Samples Whose Observations May Have More Than Two Values

+

So far we have discussed samples and universes that we can characterize as proportions of elements which can have only one of two characteristics — green or other, in this case, which is equivalent to “1” or “0.” This expositional choice has been solely for clarity. All the ideas discussed above pertain just as well to samples whose observations may have more than two values, and which may be either discrete or continuous.

+
+
+

18.3 Summary and conclusions

+

A statistical question asks about the probabilities of a sample having arisen from various source universes in light of the evidence of a sample. In every case, the statistical answer comes from considering the behavior of particular specified universes in relation to the sample evidence and to the behavior of other possible universes. That is, a statistical problem is an exercise in postulating universes of interest and interpreting the probabilistic distributions of results of those universes. The preceding sentence is the key operational idea in statistical inference.

+

Different sorts of realistic contexts call for different ways of framing the inquiry. For each of the established models there are types of problems which fit that model better than other models, and other types of problems for which the model is quite inappropriate.

+

Fundamental wisdom in statistics, as in all other contexts, is to employ a large tool kit rather than to apply only a hammer, screwdriver, or wrench, whatever the problem at hand. (Philosopher Abraham Kaplan once stated Kaplan’s Law of scientific method: Give a small boy a hammer and there is nothing that he will encounter that does not require pounding.) Studying the text of a poem statistically to infer whether Shakespeare or Bacon was the more likely author is quite different than inferring whether bioengineer Smythe can produce an increase in the proportion of calves, and both are different from decisions about whether to remove a basketball player from the game or to produce a new product.

+

Some key points: 1) In statistical inference as in all sound thinking, one’s purpose is central . All judgments should be made relative to that purpose, and in light of costs and benefits. (This is the spirit of the Neyman-Pearson approach). 2) One cannot avoid making judgments; the process of statistical inference cannot ever be perfectly routinized or objectified. Even in science, fitting a model to experience requires judgment. 3) The best ways to infer are different in different situations — economics, psychology, history, business, medicine, engineering, physics, and so on. 4) Different tools must be used when the situations call for them — sequential vs. fixed sampling, Neyman-Pearson vs. Fisher, and so on. 5) In statistical inference it is wise not to argue about the proper conclusion when the data and procedures are ambiguous. Instead, whenever possible, one should go back and get more data, hence lessening the importance of the efficiency of statistical tests. In some cases one cannot easily get more data, or even conduct an experiment, as in biostatistics with cancer patients. And with respect to the past one cannot produce more historical data. But one can gather more and different kinds of data, e.g. the history of research on smoking and lung cancer.

+ + + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/intro.html b/r-book/intro.html new file mode 100644 index 00000000..7f731399 --- /dev/null +++ b/r-book/intro.html @@ -0,0 +1,852 @@ + + + + + + + + + +Resampling statistics - 1  Introduction + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

1  Introduction

+
+ + + +
+ + + + +
+ + +
+ +
+

1.1 Uses of Probability and Statistics

+

This chapter introduces you to probability and statistics. First come examples of the kinds of practical problems that this knowledge can solve for us. One reason that the term “statistic” often scares and confuses people is that the term has several sorts of meanings. We discuss the meanings of “statistics” in the section “Types of statistics”. Then comes a discussion on the relationship of probabilities to decisions. Following this we talk about the limitations of probability and statistics. And last is a discussion of why statistics can be such a difficult subject. Most important, this chapter describes the types of problems the book will tackle.

+

At the foundation of sound decision-making lies the ability to make accurate estimates of the probabilities of future events. Probabilistic problems confront everyone — from the company owner considering whether to expand their business, to the scientist testing a vaccine, to the individual deciding whether to buy insurance.

+
+
+

1.2 What kinds of problems shall we solve?

+

These are some examples of the kinds of problems that we can handle with the methods described in this book:

+
    +
  1. You are a doctor trying to develop a treatment for COVID19. Currently you are working on a medicine labeled AntiAnyVir. You have data from patients to whom medicine AntiAnyVir was given. You want to judge on the basis of those results whether AntiAnyVir really improves survival or whether it is no better than a sugar pill.

  2. +
  3. You are the campaign manager for the Republicrat candidate for President of the United States. You have the results from a recent poll taken in New Hampshire. You want to know the chance that your candidate would win in New Hampshire if the election were held today.

  4. +
  5. You are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action. (One way to estimate this chance by simulation is sketched just after this list.)

  6. +
  7. You are an environmental scientist monitoring levels of phosphorus pollution in a lake. The phosphorus levels had been fluctuating around a relatively low level until recently, but they have been higher in the last few years. Do these recent higher levels indicate some important change, or can we put them down to ordinary chance variation from year to year?

  8. +
+
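For a taste of what is to come, here is a sketch in R of how the ambulance question above might be estimated by simulation. The one-in-ten chance and the 20 ambulances come from the example; the number of trials is an arbitrary choice for illustration.

```r
# Each of 20 ambulances has about a 1 in 10 chance of being unfit on a
# given day.  Estimate the chance that three or more are out of action.
n_trials <- 10000
n_unfit <- numeric(n_trials)
for (i in 1:n_trials) {
    # For each ambulance, 1 means "unfit today", 0 means "fit".
    day <- sample(c(1, 0), 20, replace = TRUE, prob = c(0.1, 0.9))
    n_unfit[i] <- sum(day)
}
# Estimated chance that three or more are out of action tomorrow
# (roughly 0.32).
mean(n_unfit >= 3)
```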

The core of all these problems, and of the others that we will deal with in this book, is that you want to know the “chance” or “probability” — different words for the same idea — that some event will or will not happen, or that something is true or false. To put it another way, we want to answer questions about “What is the probability that…?”, given the body of information that you have in hand.

+

The question “What is the probability that…?” is usually not the ultimate question that interests us at a given moment.

+

Eventually, a person wants to use the estimated probability to help make a decision concerning some action one might take. These are the kinds of decisions, related to the questions about probability stated above, that ultimately we would like to make:

+
    +
  1. Should you (the researcher) advise doctors to prescribe medicine AntiAnyVir for COVID19 patients, or, should you (the researcher) continue to study AntiAnyVir before releasing it for use? A related matter: should you and other research workers feel sufficiently encouraged by the results of medicine AntiAnyVir so that you should continue research in this general direction rather than turning to some other promising line of research? These are just two of the possible decisions that might be influenced by the answer to the question about the probability that medicine AntiAnyVir is effective in treating COVID19.

  2. +
  3. Should you advise the Republicrat presidential candidate to go to New Hampshire to campaign? If the poll tells you conclusively that she or he will not win in New Hampshire, you might decide that it is not worthwhile investing effort to campaign there. Similarly, if the poll tells you conclusively that they surely will win in New Hampshire, you probably would not want to campaign further there. But if the poll is not conclusive in one direction or the other, you might choose to invest the effort to campaign in New Hampshire. Analysis of the chances of winning in New Hampshire based on the poll data can help you make this decision sensibly.

  4. +
  5. Should your company buy more ambulances? Clearly the answer to this question is affected by the probability that a given number of your ambulances will be out of action on a given day. But of course this estimated probability will be only one part of the decision.

  6. +
  7. Should we search for new causes of phosphorus pollution as a result of the recent measurements from the lake? If the causes have not changed, and the recent higher values were just the result of ordinary variation, our search will end up wasting time and money that could have been better spent elsewhere.

  8. +
+

The kinds of questions to which we wish to find probabilistic and statistical answers may be found throughout the social, biological and physical sciences; in business; in politics; in engineering; and in most other forms of human endeavor.

+
+
+

1.3 Types of statistics

+

The term statistics sometimes causes confusion and therefore needs explanation.

+

Statistics can mean two related things. It can refer to a certain sort of number — of which more below. Or it can refer to the field of inquiry that studies these numbers.

+

A statistic is a number that we can calculate from a larger collection of numbers we are interested in. For example, Table 1.1 has some yearly measures of “soluble reactive phosphorus” (SRP) from Lough Erne — a lake in Ireland (Zhou, Gibson, and Foy 2000).

+
+
+
Table 1.1: Soluble Reactive Phosphorus in Lough Erne

Year    SRP
1974    26.2
1975    22.8
1976    37.2
1983    54.7
1984    37.7
1987    54.3
1989    35.7
1991    72.0
1992    85.1
1993    86.7
1994    93.3
1995   107.2
1996    80.3
1997    70.7
+
+ + +
+
+

We may want to summarize this set of SRP measurements. For example, we could add up all the SRP values to give the total. We could also divide the total by the number of measurements, to give the average. Or we could measure the spread of the values by finding the minimum and the maximum — see Table 1.2. All these numbers are descriptive statistics, because they are summaries that describe the collection of SRP measurements.

+
+
+
Table 1.2: Statistics for SRP levels

Descriptive statistics for SRP
Total      863.9
Mean        61.7
Minimum     22.8
Maximum    107.2
+
+ + +
+
+
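For example, the statistics in Table 1.2 can be computed with a few lines of R (a sketch; the numbers are simply the SRP values from Table 1.1):

```r
# Descriptive statistics for the SRP measurements in Table 1.1.
srp <- c(26.2, 22.8, 37.2, 54.7, 37.7, 54.3, 35.7,
         72.0, 85.1, 86.7, 93.3, 107.2, 80.3, 70.7)
sum(srp)    # total, 863.9
mean(srp)   # average, about 61.7
min(srp)    # minimum, 22.8
max(srp)    # maximum, 107.2
```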

Descriptive statistics are nothing new to you; you have been using many of them all your life.

+

We can calculate other numbers that can be useful for drawing conclusions or inferences from a collection of numbers; these are inferential statistics. Inferential statistics are often probability values that give the answer to questions like “What are the chances that …”.

+

For example, imagine we suspect there was some environmental change in 1990. We see that the average SRP value before 1990 was 38.4 and the average SRP value after 1990 was 85. That gives us a difference in the average of 46.6. But, could this difference be due to chance fluctuations from year to year? Were we just unlucky in getting a few larger measurements in later years? We could use methods that you will see in this book to calculate a probability to answer that question. The probability value is an inferential statistic, because we can use it to draw an inference about the measures.

+

Inferential statistics use descriptive statistics as their input. Inferential statistics can be used for two purposes: to aid scientific understanding by estimating the probability that a statement is true or not, and to aid in making sound decisions by estimating which alternative among a range of possibilities is most desirable.

+
+
+

1.4 Probabilities and decisions

+

There are two differences between questions about probabilities and the ultimate decision problems:

+
    +
  1. Decision problems always involve evaluation of the consequences — that is, taking into account the benefits and the costs of the consequences — whereas pure questions about probabilities are estimated without evaluations of the consequences.

  2. +
  3. Decision problems often involve a complex combination of sets of probabilities and consequences, together with their evaluations. For example: In the case of the contractor’s ambulances, it is clear that there will be a monetary loss to the contractor if she makes a commitment to have 17 ambulances available for tomorrow and then cannot produce that many. Furthermore, the contractor must take into account the further consequence that there may be a loss of goodwill for the future if she fails to meet her obligations tomorrow — and then again there may not be any such loss; and if there is such loss of goodwill it might be a loss worth $10,000 or $20,000 or $30,000. Here the decision problem involves not only the probability that there will be fewer than 17 ambulances tomorrow but also the immediate monetary loss and the subsequent possible losses of goodwill, and the valuation of all these consequences.

  4. +
+

Continuing with the decision concerning whether to do more research on medicine AntiAnyVir: If you do decide to continue research on AntiAnyVir, (a) you may, or (b) you may not, come up with an important general treatment for viral infections within, say, the next 3 years. If you do come up with such a general treatment, of course it will have very great social benefits. Furthermore, (c) if you decide not to do further research on AntiAnyVir now, you can direct your time and that of other people to research in other directions, with some chance that the other research will produce a less-general but nevertheless useful treatment for some relatively infrequent viral infections. Those three possibilities have different social benefits. The probability that medicine AntiAnyVir really has some benefit in treating COVID19, as judged by your prior research, obviously will influence your decision on whether or not to do more research on medicine AntiAnyVir. But that judgment about the probability is only one part of the overall web of consequences and evaluations that must be taken into account when making your decision whether or not to do further research on medicine AntiAnyVir.

+

Why does this book limit itself to the specific probability questions when ultimately we are interested in decisions? A first reason is division of labor. The more general aspects of the decision-making process in the face of uncertainty are treated well in other books. This book’s special contribution is its new approach to the crucial process of estimating the chances that an event will occur.

+

Second, the specific elements of the overall decision-making process taught in this book belong to the interrelated subjects of probability theory and statistics. Though probabilistic and statistical theory ultimately is intended to be part of the general decision-making process, often only the estimation of probabilities is done systematically, and the rest of the decision-making process — for example, the decision whether or not to proceed with further research on medicine AntiAnyVir — is done in an informal and unsystematic fashion. This is regrettable, but the fact that this is standard practice is an additional reason why the treatment of statistics and probability in this book is sufficiently complete.

+

A third reason that this book covers only statistics and not numerical reasoning about decisions is that most college and university statistics courses and books are limited to statistics.

+
+
+

1.5 Limitations of probability and statistics

+

Statistical testing is not equivalent to research, and research is not the same as statistical testing. Rather, statistical inference is a handmaiden of research, often but not always necessary in the research process.

+

A working knowledge of the basic ideas of statistics, especially the elements of probability, is unsurpassed in its general value to everyone in a modern society. Statistics and probability help clarify one’s thinking and improve one’s capacity to deal with practical problems and to understand the world. To be efficient, a social scientist or decision-maker is almost certain to need statistics and probability.

+

On the other hand, important research and top-notch decision-making have been done by people with absolutely no formal knowledge of statistics. And a limited study of statistics sometimes befuddles students into thinking that statistical principles are guides to research design and analysis. This mistaken belief only inhibits the exercise of sound research thinking. Alfred Kinsey long ago put it this way:

+
+

… no statistical treatment can put validity into generalizations which are based on data that were not reasonably accurate and complete to begin with. It is unfortunate that academic departments so often offer courses on the statistical manipulation of human material to students who have little understanding of the problems involved in securing the original data. … When training in these things replaces or at least precedes some of the college courses on the mathematical treatment of data, we shall come nearer to having a science of human behavior. (Kinsey, Pomeroy, and Martin 1948, p 35).

+
+

In much — even most — research in social and physical sciences, statistical testing is not necessary. Where there are large differences between different sorts of circumstances — for example, if a new medicine cures 90 patients out of 100 and the old medicine cures only 10 patients out of 100 — we do not need refined statistical tests to tell us whether or not the new medicine really has an effect. And the best research is that which shows large differences, because it is the large effects that matter. If the researcher finds that s/he must use refined statistical tests to reveal whether there are differences, this sometimes means that the differences do not matter much.

+

To repeat, then, some or even much research — especially in the physical and biological sciences — does not need the kind of statistical manipulation that will be described in this book. But most decision problems do need the kind of probabilistic and statistical input that is described in this book.

+

Another matter: If the raw data are of poor quality, probabilistic and statistical manipulation cannot be very useful. In the example of the contractor and her ambulances, if the contractor’s estimate that a given ambulance has a one-in-ten chance of being out of order on a given day is very inaccurate, then our calculation of the probability that three or more ambulances will be out of order on a given day will not be helpful, and may be misleading. To put it another way, one cannot make bread without flour, yeast, and water. And good raw data are the flour, yeast and water necessary to get an accurate estimate of a probability. The most refined statistical and probabilistic manipulations are useless if the input data are poor — the result of unrepresentative samples, uncontrolled experiments, inaccurate measurement, and the host of other ways that information gathering can go wrong. (See Simon and Burstein (1985) for a catalog of the obstacles to obtaining good data.) Therefore, we should constantly direct our attention to ensuring that the data upon which we base our calculations are the best it is possible to obtain.

+
+
+

1.6 Why is Statistics Such a Difficult Subject?

+

Why is statistics such a tough subject for so many people?

+

“Among mathematicians and statisticians who teach introductory statistics, there is a tendency to view students who are not skillful in mathematics as unintelligent,” say two of the authors of a popular introductory text (McCabe and McCabe 1989, p 2). As these authors imply, this view is out-and-out wrong; lack of general intelligence on the part of students is not the root of the problem.

+

Scan this book and you will find almost no formal mathematics. Yet nearly every student finds the subject very difficult — as difficult as anything taught at universities. The root of the difficulty is that the subject matter is extremely difficult. Let’s find out why.

+

It is easy to find out with high precision which movie is playing tonight at the local cinema; you can look it up on the web or call the cinema and ask. But consider by contrast how difficult it is to determine with accuracy:

+
    +
  1. Whether we will save lives by recommending vitamin D supplements for the whole population as protection against viral infections. Some evidence suggests that low vitamin D levels predispose to more severe lung infections, and that taking supplements can help (Martineau et al. 2017). But, how certain can we be of the evidence? How safe are the supplements? Does the benefit, and the risk, differ by ethnicity?
  2. +
  3. What will be the result of more than a hundred million Americans voting for president a month from now; the best attempt usually is a sample of 2000 people, selected in some fashion or another that is far from random, weeks before the election, asked questions that are by no means the same as the actual voting act, and so on;
  4. +
  5. How men feel about women and vice versa.
  6. +
+

The cleverest and wisest people have pondered for thousands of years how to obtain answers to questions like these, and made little progress. Dealing with uncertainty was completely outside the scope of the ancient philosophers. It was not until two or three hundred years ago that people began to make any progress at all on these sorts of questions, and it was only about one century ago that we began to have reasonably competent procedures — simply because the problems are inherently difficult. So it is no wonder that the body of these methods is difficult.

+

So: The bad news is that the subject is extremely difficult. The good news is that you — and that means you — can understand it with hard thinking, even if you have no mathematical background beyond arithmetic and you think that you have no mathematical capability. That’s because the difficulty lies in such matters as pin-pointing the right question, but not in any difficulties of mathematical manipulation.

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/monte_carlo.html b/r-book/monte_carlo.html new file mode 100644 index 00000000..c1b81aa7 --- /dev/null +++ b/r-book/monte_carlo.html @@ -0,0 +1,709 @@ + + + + + + + + + +Resampling statistics - 15  The Procedures of Monte Carlo Simulation (and Resampling) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

15  The Procedures of Monte Carlo Simulation (and Resampling)

+
+ + + +
+ + + + +
+ + +
+ +

Until now, the steps to follow in solving particular problems have been chosen to fit the specific facts of that problem. And so they always must. Now let’s generalize what we have done in the previous chapters on probability into a general procedure for such problems, which will in turn become the basis for a detailed procedure for resampling simulation in statistics. The generalized procedure describes what we are doing when we estimate a probability using Monte Carlo simulation problem-solving operations.

+
+

15.1 A definition and general procedure for Monte Carlo simulation

+

This is what we shall mean by the term Monte Carlo simulation when discussing problems in probability: Using the given data-generating mechanism (such as a coin or die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples. That’s it in a nutshell. In some cases, it may also be appropriate to amplify this procedure with additional assumptions.

+

This definition fits problems in pure probability as well as problems in statistics, but in the latter case the process is called resampling. The reason that the same definition fits is that at the core of every problem in inferential statistics lies a problem in probability; that is, the procedure for handling every statistics problem is the procedure for handling a problem in probability. (There is related discussion of definitions in Chapter 8 and Chapter 20.)

+

The following series of steps should apply to all problems in probability. I’ll first state the procedure straight through without examples, and then show how it applies to individual examples.

+
    +
  • Step A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. The term “universe” refers to the system that is relevant for a single simple event.
  • +
  • Step B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.
  • +
  • Step C Describe any composite events. If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.
  • +
  • Step D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.
  • +
+

Now let us apply the general procedure to some examples to make it more concrete.

+

Here are four problems to be used as illustrations:

+
    +
  1. Three percent gizmos — if on average 3 percent of the gizmos sent out are defective, what is the chance that there will be more than 10 defectives in a shipment of 200?
  2. +
  3. Three girls, 106 in 206 — what are the chances of getting three or more girls in the first four children, if the probability of a female birth is 106/206?
  4. +
  5. Less than 20 baskets — what are the chances of Joe Hothand scoring 20 or fewer baskets in 57 shots if his long-run average is 47 percent?
  6. +
  7. Same birthday in 25 — what is the probability of two or more people in a group of 25 persons having the same birthday — i.e., the same month and same day of the month?
  8. +
+
+
+

15.2 Apply step A — construct a simulation universe

+

As a reminder:

+
    +
  • Step A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. The term “universe” refers to the system that is relevant for a single simple event.
  • +
+

For our example problems:

+
    +
  1. Three percent gizmos: A random drawing with replacement from the set of numbers 1 through 100 with 1 through 3 designated as defective, simulates the system that produces 3 defective gizmos among 100.
  2. +
  3. Three girls, 106 in 206: You could take two decks of cards, from which you take out both Aces of spades, and replace these with a Joker. You now have 103 cards (206 / 2), of which 53 (106 / 2) are red, counting the Joker as red. You could also use a random drawing from two sets of numbers, one comprising 1 through 106 and the other 107 through 206. Either universe can simulate the system that produces a single male or female birth, when we are estimating the probability of three girls in the first four children. Notice that in this universe the probability of a girl remains the same from trial event to trial event — that is, the trials are independent — demonstrating a universe from which we sample with replacement.
  4. +
  5. Less than 20 baskets: A random drawing with replacement from a bucket containing a hundred balls, 47 red and 53 black, simulates the system that produces 47 percent baskets for Joe Hothand.
  6. +
  7. Same birthday in 25: A random drawing with replacement from the numbers 1 through 365 simulates the system that produces a birthday.
  8. +
+

This step A includes two operations:

+
    +
  1. Decide which symbols will stand for the elements of the universe you will simulate.
  2. +
  3. Determine whether the sampling will be with or without replacement. (This can be ambiguous in a complex modeling situation.)
  4. +
+

Hard thinking is required in order to determine the appropriate “real” universe whose properties interest you.
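As a minimal sketch of step A in R (our illustration only; the universe names below are not from the book’s notebooks), the simulation universes for the first and last problems might look like this:

# Step A for "three percent gizmos": the universe is the numbers 1 through
# 100, where we will treat 1 through 3 as standing for a defective gizmo.
gizmo_universe <- 1:100

# Step A for "same birthday in 25": the universe is the numbers 1 through
# 365, each number standing for a day of the year.
birthday_universe <- 1:365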

+
+
+

15.3 Apply step B — specify the procedure

+
    +
  • Step B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.
  • +
+

For example:

+
    +
  1. Three percent gizmos: For a single gizmo, you can draw a single number from an infinite universe. Or you can use a finite set with replacement and shuffling.
  2. +
  3. Three girls, 106 in 206: In the case of three or more daughters among four children, you could use the deck of 103 cards, from Step A, of which 53 count as red. To simulate one child, you can draw a card and then replace it, noting female for a red card or a Joker. Or if you are using random numbers from the computer, the random numbers automatically simulate replacement. Just as the chances of having a boy or a girl do not change depending on the sex of the preceding child, so we want to ensure through sampling with replacement that the chances do not change each time we choose from the deck of cards.
  4. +
  5. Less than 20 baskets: In the case of Joe Hothand’s shooting, the procedure is to consider the numbers 1 through 47 as “baskets,” and 48 through 100 as “misses,” with the same other considerations as the gizmos.
  6. +
  7. Same birthday in 25: In the case of the birthday problem, the drawing must be with replacement, because the fact that you have drawn — say — a 10 (the 10th day in the year) should not affect the chances of drawing 10 for a second person in the room.
  8. +
+

Recording the outcome of the sampling must be indicated as part of this step, e.g., “record ‘yes’ if girl or basket, ‘no’ if a boy or a miss.”
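For example, here is a minimal sketch in R of step B for a single simple event in the gizmo problem (an illustration only; the variable names are ours):

# Draw one number from the gizmo universe of 1 through 100.  Sampling with
# replacement matters once we draw more than one number per trial.
one_gizmo <- sample(1:100, size=1, replace=TRUE)
# Record the outcome of this simple event.
outcome <- ifelse(one_gizmo <= 3, 'defective', 'good')
outcome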

+
+
+

15.4 Apply step C — describe any composite events

+
    +
  • Step C Describe any composite events. If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.
  • +
+

For example:

+
    +
  1. Three percent gizmos: For the gizmos, draw a sample of 200.
  2. +
  3. Three girls, 106 in 206: For the three or more girls among four children, the procedure for each simple event of a single birth was described in step B. Now we must specify repeating the simple event four times, and counting whether the outcome is or is not three girls.
  4. +
  5. Less than 20 baskets: In the case of Joe Hothand’s shots, we must draw 57 numbers to make up a sample of shots, and examine whether there are 20 or fewer baskets.
  6. +
+

Recording the results as “more than ten defectives” or “ten or fewer,” “three or more girls” or “two or fewer girls,” and “20 or fewer baskets” or “21 or more,” is part of this step. This record indicates the results of all the trials and is the basis for a tabulation of the final result.

+
+
+

15.5 Apply step D — calculate the probability

+
    +
  • Step D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.
  • +
+

For example: the proportions of “yes” and “no,” and of “20 or fewer” and “21 or more,” estimate the probability we seek in step C.

+

The above procedure is similar to the procedure followed with the analytic formulaic method except that the latter method constructs notation and manipulates it.
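To make the four steps concrete in code, here is a minimal sketch in R for the first problem: the chance of more than 10 defectives in a shipment of 200, when 3 percent of gizmos are defective on average. (This is our illustration, not one of the book’s notebooks.)

# Step A: the universe is the numbers 1 through 100; 1 through 3 stand for
# a defective gizmo.
universe <- 1:100
n_trials <- 10000
defective_counts <- numeric(n_trials)
for (i in 1:n_trials) {
    # Step B: draw 200 numbers with replacement, one for each gizmo in the
    # shipment.
    shipment <- sample(universe, size=200, replace=TRUE)
    # Step C: the composite event is the number of defectives in the shipment.
    defective_counts[i] <- sum(shipment <= 3)
}
# Step D: the proportion of trials with more than 10 defectives estimates the
# probability we are looking for.
p_more_than_10 <- sum(defective_counts > 10) / n_trials
message('Estimated probability of more than 10 defectives: ', p_more_than_10)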

+
+
+

15.6 Summary

+

This chapter gives a more general description of the specific steps used in prior chapters to solve problems in probability.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/more_sampling_tools.html b/r-book/more_sampling_tools.html new file mode 100644 index 00000000..0c260778 --- /dev/null +++ b/r-book/more_sampling_tools.html @@ -0,0 +1,1970 @@ + + + + + + + + + +Resampling statistics - 10  Two puzzles and more tools + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

10  Two puzzles and more tools

+
+ + + +
+ + + + +
+ + +
+ +
+

10.1 Introduction

+

In the next chapter we will deal with some more involved problems in probability, as a preparation for statistics, where we use reasoning from probability to draw conclusions about a world like our own, where variation often appears to be more or less random.

+

Before we get down to the business of complex probabilistic problems in the next few chapters, let’s consider a couple of peculiar puzzles. These puzzles allow us to introduce some more of the key tools in R for Monte Carlo resampling, and show the power of such simulation to help solve, and then reason about, problems in probability.

+
+
+

10.2 The treasure fleet recovered

+

This is a classic problem in probability:1

+
+

A Spanish treasure fleet of three ships was sunk at sea off Mexico. One ship had a chest of gold forward and another aft, another ship had a chest of gold forward and a chest of silver aft, while a third ship had a chest of silver forward and another chest of silver aft. Divers just found one of the ships and a chest of gold in it, but they don’t know whether it was from forward or aft. They are now taking bets about whether the other chest found on the same ship will contain silver or gold. What are fair odds?

+
+

These are the logical steps one may distinguish in arriving at a correct answer with deductive logic (portrayed in Figure 10.1).

+
    +
  1. Postulate three ships — Ship I with two gold chests (G-G), ship II with one gold and one silver chest (G-S), and ship III with S-S. (Choosing notation might well be considered one or more additional steps.)

  2. +
  3. Assert equal probabilities of each ship being found.

  4. +
  5. Step 2 implies equal probabilities of being found for each of the six chests.

  6. +
  7. Fact: Diver finds a chest of gold.

  8. +
  9. Step 4 implies that S-S ship III was not found; hence remove it from subsequent analysis.

  10. +
  11. Three possibilities: 6a) Diver found chest I-Ga, 6b) diver found I-Gb, 6c) diver found II-Gc.

    +

    From step 2, the cases a, b, and c in step 6 have equal probabilities.

  12. +
  13. If possibility 6a is the case, then the other chest is I-Gb; the comparable statements for cases 6b and 6c are I-Ga and II-S.

  14. +
  15. From steps 6 and 7: From equal probabilities of the three cases, and no other possible outcome, \(P(6a) = 1/3\), \(P(6b) = 1/3\), \(P(6c) = 1/3\).

  16. +
  17. So \(P(G) = P(6a) + P(6b)\) = 1/3 + 1/3 = 2/3.

  18. +
+

See Figure 10.1.

+
+
+
+
+

+
Figure 10.1: Ships with Gold and Silver
+
+
+
+
+

The following simulation arrives at the correct answer.

+
    +
  1. Write “Gold” on three pieces of paper and “Silver” on three pieces of paper. These represent the chests.
  2. +
  3. Get three buckets each with two pieces of paper. Each bucket represents a ship, each piece of paper represents a chest in that ship. One bucket has two pieces of paper with “Gold” written on them; one has pieces of paper with “Gold” and “Silver”, and one has “Silver” and “Silver”.
  4. +
  5. Choose a bucket at random, to represent choosing a ship at random.
  6. +
  7. Shuffle the pieces of paper in the bucket and pick one, to represent choosing the first chest from that ship at random.
  8. +
  9. If the piece of paper says “Silver”, the first chest we found in this ship was silver, and we stop the trial and make no further record. If “Gold”, continue.
  10. +
  11. Get the second piece of paper from the bucket, representing the second chest on the chosen ship. Record whether this was “Silver” or “Gold” on the scoreboard.
  12. +
  13. Repeat steps (3 - 6) many times, and calculate the proportion of “Gold”s on the scoreboard. (The answer should be about \(\frac{2}{3}\).)
  14. +
+ +

Here is a notebook simulation with R:

+
+

Start of gold_silver_ships notebook

+ + +
+
# The 3 buckets.  Each bucket represents a ship.  Each has two chests.
+bucket1 <- c('Gold', 'Gold')  # Chests in first ship.
+bucket2 <- c('Gold',  'Silver')  # Chests in second ship.
+bucket3 <- c('Silver', 'Silver')  # Chests in third ship.
+
+
+
# Mark trials as not valid to start with.
+# Trials where we don't get a gold chest first will
+# keep this 'No gold in chest 1, chest 2 never opened' marker.
+second_chests <- rep('No gold in chest 1, chest 2 never opened', 10000)
+
+for (i in 1:10000) {
+    # Select a ship at random from the three ships.
+    ship_no <- sample(1:3, size=1)
+    # Get the chests from this ship (represented by a bucket).
+    if (ship_no == 1) {
+        bucket <- bucket1
+    }
+    if (ship_no == 2) {
+        bucket <- bucket2
+    }
+    if (ship_no == 3) {
+        bucket <- bucket3
+    }
+
+    # We shuffle the order of the chests in this ship, to simulate
+    # the fact that we don't know which of the two chests we have
+    # found first.
+    shuffled <- sample(bucket)
+
+    if (shuffled[1] == 'Gold') {  # We found a gold chest first.
+        # Store whether the Second chest was silver or gold.
+        second_chests[i] <- shuffled[2]
+    }
+}  # End loop, go back to beginning.
+
+# Number of times we found gold in the second chest.
+n_golds <- sum(second_chests == 'Gold')
+# Number of times we found silver in the second chest.
+n_silvers <- sum(second_chests == 'Silver')
+# As a ratio of golds to all second chests (where the first was gold).
+message(n_golds / (n_golds + n_silvers))
+
+
0.655882352941176
+
+
+

End of gold_silver_ships notebook

+
+

In the code above, we have first chosen the ship number at random, and then used a set of if ... statements to get the pair of chests corresponding to the given ship. There are simpler and more elegant ways of writing this code, but they would need some R features that we haven’t covered yet.2

+
+
+

10.3 Back to Boolean vectors

+

The code above implements the procedure we might well use if we were simulating the problem physically. We do a trial, and we record the result. We do this on a piece of paper if we are doing a physical simulation, and in the second_chests vector in code.

+

Finally we tally up the results. If we are doing a physical simulation, we go back over all the trial results and count up the “Gold” and “Silver” outcomes. In code we use the comparisons == 'Gold' and == 'Silver' to find the trials of interest, and then count them up with sum.

+

Boolean vectors are a fundamental tool in R, and we will use them in nearly all our simulations.

+

Here is a reminder of how those vectors work.

+

First, let’s slice out the first 10 values of the second_chests trial-by-trial results tally from the simulation above:

+
+
# Get values at positions 1 through 10
+first_10_chests <- second_chests[1:10]
+first_10_chests
+
+
 [1] "Gold"                                    
+ [2] "No gold in chest 1, chest 2 never opened"
+ [3] "No gold in chest 1, chest 2 never opened"
+ [4] "Silver"                                  
+ [5] "Gold"                                    
+ [6] "No gold in chest 1, chest 2 never opened"
+ [7] "Silver"                                  
+ [8] "Silver"                                  
+ [9] "Gold"                                    
+[10] "No gold in chest 1, chest 2 never opened"
+
+
+

Before we started the simulation, we set second_chests to contain 10,000 strings, where each string was “No gold in chest 1, chest 2 never opened”. In the simulation, we check whether there was gold in the first chest, and, if not, we don’t change the value in second_chests, and the value remains as “No gold in chest 1, chest 2 never opened”.

+

Only if there was gold in the first chest, do we go on to check whether the second chest contains silver or gold. Therefore, we only set a new value in second_chests where there was gold in the first chest.

+

Now let’s show the effect of running a comparison on first_10_chests:

+
+
were_gold <- (first_10_chests == 'Gold')
+were_gold
+
+
 [1]  TRUE FALSE FALSE FALSE  TRUE FALSE FALSE FALSE  TRUE FALSE
+
+
+
+
+
+ +
+
+Parentheses and Boolean comparisons +
+
+
+

Notice the round brackets (parentheses) around (first_10_chests == 'Gold'). In this particular case, we would get the same result without the parentheses, so the parentheses are optional. In general, you will see we put parentheses around all expressions that generate Boolean vectors, and we recommend you do too. It is a good habit to get into, to make it clear that this is an expression that generates a value.

+
+
+

The == 'Gold' comparison is asking a question. It is asking that question of a vector, and the vector contains multiple values. R treats this comparison as asking the question of each element in the vector. We get an answer for the question for each element. The answer for position 1 is TRUE if the element at position 1 is equal to 'Gold' and FALSE otherwise, and so on, for positions 2, 3 and so on. We started with 10 strings. After the comparison == 'Gold' we have 10 Boolean values, where a Boolean value can either be TRUE or FALSE.

+ + +

Now that we have a vector with TRUE for the “Gold” results and FALSE otherwise, we can count the number of “Gold” results by using sum on the vector. As you remember (Section 5.13), sum counts TRUE as 1 and FALSE as 0, so the sum of the Boolean vector is just the number of TRUE values in the vector — the count that we need.

+
+
# The number of True values — so the number of "Gold" chests.
+sum(were_gold)
+
+
[1] 3
+
+
+
+
+

10.4 Boolean vectors and another take on the ships problem

+

If we are doing a physical simulation, we usually want to finish up all the work for the trial during the trial, so we have one outcome from the trial. This makes it easier to tally up the results in the end.

+

We have no such constraint when we are using code, so it is sometimes easier to record several results from the trial, and do the final combinations and tallies at the end. We will show you what we mean with a slight variation on the two-ships code you saw above.

+
+

Start of gold_silver_booleans notebook

+ + +

Notice that the first part of the code is identical to the first approach to this problem. There are two key differences — see the comments for an explanation.

+
+
# The 3 buckets, each representing two chests on a ship.
+# As before.
+bucket1 <- c('Gold', 'Gold')  # Chests in first ship.
+bucket2 <- c('Gold',  'Silver')  # Chests in second ship.
+bucket3 <- c('Silver', 'Silver')  # Chests in third ship.
+
+
+
# Here is where the difference starts.  We are now going to fill in
+# the result for the first chest _and_ the result for the second chest.
+#
+# Later we will fill in all these values, so the string we put here
+# does not matter.
+
+# Whether the first chest was Gold or Silver.
+first_chests <- rep('To be announced', 10000)
+second_chests <- rep('To be announced', 10000)
+
+for (i in 1:10000) {
+    # Select a ship at random from the three ships.
+    # As before.
+    ship_no <- sample(1:3, size=1)
+    # Get the chests from this ship.
+    # As before.
+    if (ship_no == 1) {
+        bucket <- bucket1
+    }
+    if (ship_no == 2) {
+        bucket <- bucket2
+    }
+    if (ship_no == 3) {
+        bucket <- bucket3
+    }
+
+    # As before.
+    shuffled <- sample(bucket)
+
+    # Here is the big difference - we store the result for the first and second
+    # chests.
+    first_chests[i] <- shuffled[1]
+    second_chests[i] <- shuffled[2]
+}  # End loop, go back to beginning.
+
+# We will do the calculation we need in the next cell.  For now
+# just display the first 10 values.
+ten_first_chests <- first_chests[1:10]
+message('The first 10 values of "first_chests":')
+
+
The first 10 values of "first_chests":
+
+
print(ten_first_chests)
+
+
 [1] "Gold"   "Silver" "Silver" "Silver" "Gold"   "Gold"   "Gold"   "Gold"  
+ [9] "Gold"   "Gold"  
+
+
ten_second_chests <- second_chests[1:10]
+message('The first 10 values of "second_chests":')
+
+
The first 10 values of "second_chests":
+
+
print(ten_second_chests)
+
+
 [1] "Gold"   "Gold"   "Silver" "Silver" "Gold"   "Silver" "Gold"   "Silver"
+ [9] "Gold"   "Silver"
+
+
+

In this variant, we recorded the type of the first chest for each trial (“Gold” or “Silver”), and the type of the second chest (“Gold” or “Silver”).

+

We would like to count the number of times there was “Gold” in the first chest and “Gold” in the second.

+
+

10.5 Combining Boolean vectors

+

We can do the count we need by combining the Boolean vectors with the & operator. & combines Boolean vectors with a logical and. Logical and is a rule for combining two Boolean values, where the rule is: the result is TRUE if the first value is TRUE and the second value is TRUE.

+

Here we use the & operator to combine some Boolean values on the left and right of the operator:
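For example, with single TRUE and FALSE values (a minimal sketch):

TRUE & TRUE     # TRUE: both values are TRUE.
TRUE & FALSE    # FALSE: the second value is not TRUE.
FALSE & TRUE    # FALSE: the first value is not TRUE.
FALSE & FALSE   # FALSE: neither value is TRUE.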

+

Above you saw that the == operator (as in == 'Gold'), when applied to vectors, asks the question of every element in the vector.

+

First make the Boolean vectors.

+
+
ten_first_gold <- ten_first_chests == 'Gold'
+message("Ten first == 'Gold'")
+
+
Ten first == 'Gold'
+
+
print(ten_first_gold)
+
+
 [1]  TRUE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
+
+
ten_second_gold <- ten_second_chests == 'Gold'
+message("Ten second == 'Gold'")
+
+
Ten second == 'Gold'
+
+
print(ten_second_gold)
+
+
 [1]  TRUE  TRUE FALSE FALSE  TRUE FALSE  TRUE FALSE  TRUE FALSE
+
+
+

Now let us use & to combine Boolean vectors:

+
+
ten_both <- (ten_first_gold & ten_second_gold)
+ten_both
+
+
 [1]  TRUE FALSE FALSE FALSE  TRUE FALSE  TRUE FALSE  TRUE FALSE
+
+
+

Notice that R does the comparison elementwise — element by element.

+

You saw that when we did second_chests == 'Gold' this had the effect of asking the == 'Gold' question of each element, so there will be one answer per element in second_chests. In that case there was a vector to the left of == and a single value to the right. We were comparing a vector to a value.

+

Here we are asking the & question of ten_first_gold and ten_second_gold. This time there is a vector to the left and a vector to the right. We are asking the & question 10 times, but the first question we are asking is:

+
+
# First question, giving first element of result.
+(ten_first_gold[1] & ten_second_gold[1])
+
+
[1] TRUE
+
+
+

The second question is:

+
+
# Second question, giving second element of result.
+(ten_first_gold[2] & ten_second_gold[2])
+
+
[1] FALSE
+
+
+

and so on. We have ten elements on each side, and 10 answers, giving a vector (ten_both) of 10 elements. Each element in ten_both is the answer to the & question for the elements at the corresponding positions in ten_first_gold and ten_second_gold.

+

We could also create the Boolean vectors and do the & operation all in one step, like this:
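A minimal sketch of that combined expression, reusing ten_first_chests and ten_second_chests from above; it gives the same Boolean vector as ten_both:

# Compare, then combine, in a single expression.
(ten_first_chests == 'Gold') & (ten_second_chests == 'Gold')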

+ +

Remember, we wanted the answer to the question: how many trials had “Gold” in the first chest and “Gold” in the second. We can answer that question for the first 10 trials with sum:

+
+
n_ten_both <- sum(ten_both)
+n_ten_both
+
+
[1] 4
+
+
+

We can answer the same question for all the trials, in the same way:

+
+
first_gold <- first_chests == 'Gold'
+second_gold <- second_chests == 'Gold'
+n_both_gold <- sum(first_gold & second_gold)
+n_both_gold
+
+
[1] 3328
+
+
+

We could also do the same calculation all in one line:

+
+
n_both_gold <- sum((first_chests == 'Gold') & (second_chests == 'Gold'))
+n_both_gold
+
+
[1] 3328
+
+
+

We can then count all the ships where the first chest was gold:

+
+
n_first_gold <- sum(first_chests == 'Gold')
+n_first_gold
+
+
[1] 5021
+
+
+

The final calculation is the proportion of second chests that are gold, given the first chest was also gold:

+
+
p_g_given_g <- n_both_gold / n_first_gold
+p_g_given_g
+
+
[1] 0.663
+
+
+

Of course we won’t get exactly the same results from the two simulations, in the same way that we won’t get exactly the same results from any two runs of the same simulation, because of the random values we are using. But the logic of the two simulations is the same, and we are doing many trials (10,000), so the results will be very similar.

+

End of gold_silver_booleans notebook

+
+
+
+
+

10.6 The Monty Hall problem

+

The Monty Hall Problem is a puzzle in probability that is famous for its deceptive simplicity. It has its own long Wikipedia page: https://en.wikipedia.org/wiki/Monty_Hall_problem.

+

Here is the problem in the form it is best known; a letter to the columnist Marilyn vos Savant, published in Parade Magazine (1990):

+
+

Suppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?

+
+

In fact the first person to propose (and solve) this problem was Steve Selvin, a professor of public health at the University of California, Berkeley (Selvin 1975).

+

Most people, including at least one of us, your humble authors, quickly come to the wrong conclusion. The most common but incorrect answer is that it will make no difference if you switch doors or stay with your original choice. The obvious intuition is that, after Monty opens his door, there are two doors that might have the car behind them, and therefore, there is a 50% chance it will be behind any one of the two. It turns out that answer is wrong; you will double your chances of winning by switching doors. Did you get the answer right?

+

If you got the answer wrong, you are in excellent company. As you can see from the commentary in Savant (1990), many mathematicians wrote to Parade magazine to assert that the (correct) solution was wrong. Paul Erdős was one of the most famous mathematicians of the 20th century; he could not be convinced of the correct solution until he had seen a computer simulation (Vazsonyi 1999), of the type we will do below.

+

To simulate a trial of this problem, we need to select a door at random to house the car, and another door at random, to be the door the contestant chooses. We number the doors 1, 2 and 3. Now we need two random choices from the options 1, 2 or 3, one for the door with the car, the other for the contestant door. To choose a door for the car, we could throw a die, and choose door 1 if the die shows 1 or 4, door 2 if the die shows 2 or 5, and door 3 for 3 or 6. Then we throw the die again to choose the contestant door.

+

But throwing dice is a little boring; we have to find the die, then throw it many times, and record the results. Instead we can ask the computer to choose the doors at random.

+

For this simulation, let us do 25 trials. We ask the computer to create two sets of 25 random numbers from 1 through 3. The first set is the door with the car behind it (“Car door”). The second set has the door that the contestant chose at random (“Our door”). We put these in a table, and make some new, empty columns to fill in later. The first new column is “Monty opens”. In due course, we will use this column to record the door that Monty Hall will open on this trial. The last two columns express the outcome. The first is “Stay wins”. This has “Yes” if we win on this trial by sticking to our original choice of door, and “No” otherwise. The last column is “Switch wins”. This has “Yes” if we win by switching doors, and “No” otherwise. See Table 10.1.

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 10.1: 25 simulations of the Monty Hall problem
Trial  Car door  Our door  Monty opens  Stay wins  Switch wins
1      3         3
2      3         1
3      1         3
4      1         1
5      2         3
6      2         1
7      2         2
8      1         3
9      1         2
10     3         1
11     2         2
12     3         2
13     2         2
14     3         1
15     1         2
16     2         1
17     3         3
18     3         2
19     1         1
20     3         2
21     2         2
22     3         1
23     3         1
24     1         1
25     2         3
+
+
+
+

In the first trial in Table 10.1, the computer selected door 3 for the car, and door 3 for the contestant. Now Monty must open a door, and he cannot open our door (door 3), so he has the choice of opening door 1 or door 2; he chooses randomly, and opens door 2. On this trial, we win if we stay with our original choice, and we lose if we change to the remaining door, door 1.

+

Now we go to the second trial. The computer chose door 3 for the car, and door 1 for our choice. Monty cannot choose our door (door 1) or the door with the car behind it (door 3), so he must open door 2. Now if we stay with our original choice, we lose, but if we switch, we win.

+

You may want to print out Table 10.1, and fill out the blank columns, to work through the logic.

+

After doing a few more trials, and some reflection, you may see that there are two different situations here: the situation when our initial guess was right, and the situation where our initial guess was wrong. When our initial guess was right, we win by staying with our original choice, but when it was wrong, we always win by switching. The chance of our initial guess being correct is 1/3 (one door out of three). So the chances of winning by staying are 1/3, and the chances of winning by switching are 2/3. But remember, you don’t need to follow this logic to get the right answer. As you will see below, the resampling simulation shows us that the Switch strategy wins.

+

Table 10.2 is a version of Table 10.1 in which we have filled in the blank columns using the logic above.

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 10.2: 25 simulations of the Monty Hall problem, filled out
Trial  Car door  Our door  Monty opens  Stay wins  Switch wins
1      3         3         2            Yes        No
2      3         1         2            No         Yes
3      1         3         2            No         Yes
4      1         1         3            Yes        No
5      2         3         1            No         Yes
6      2         1         3            No         Yes
7      2         2         3            Yes        No
8      1         3         2            No         Yes
9      1         2         3            No         Yes
10     3         1         2            No         Yes
11     2         2         3            Yes        No
12     3         2         1            No         Yes
13     2         2         1            Yes        No
14     3         1         2            No         Yes
15     1         2         3            No         Yes
16     2         1         3            No         Yes
17     3         3         1            Yes        No
18     3         2         1            No         Yes
19     1         1         2            Yes        No
20     3         2         1            No         Yes
21     2         2         1            Yes        No
22     3         1         2            No         Yes
23     3         1         2            No         Yes
24     1         1         2            Yes        No
25     2         3         1            No         Yes
+
+
+
+

The proportion of times “Stay” wins in these 25 trials is 0.36. The proportion of times “Switch” wins is 0.64; the Switch strategy wins about twice as often as the Stay strategy.

+
+
+

10.7 Monty Hall with R

+

Now you have seen what the results might look like for a physical simulation, you can exercise some of your newly-strengthened R muscles to do the simulation with code.

+
+

Start of monty_hall notebook

+ + +

The Monty Hall problem has a slightly complicated structure, so we will start by looking at the procedure for one trial. When we have that clear, we will put that procedure into a for loop for the simulation.

+

Let’s start with some variables. Let’s call the door I choose my_door.

+

We choose that door at random from a sequence of all possible doors. Call the doors 1, 2 and 3 from left to right.

+
+
# Vector of doors to choose from.
+doors <- c(1, 2, 3)
+
+# We choose one door at random.
+my_door <- sample(doors, size=1)
+
+# Show the result
+my_door
+
+
[1] 3
+
+
+

We choose one of the doors to be the door with the car behind it:

+
+
# One door at random has the car behind it.
+car_door <- sample(doors, size=1)
+
+# Show the result
+car_door
+
+
[1] 1
+
+
+

Now we need to decide which door Monty will open.

+

By our set up, Monty cannot open our door (my_door). By the set up, he has not opened (and cannot open) the door with the car behind it (car_door).

+

my_door and car_door might be the same.

+

So, to get Monty’s choices, we want to take all doors (doors) and remove my_door and car_door. That leaves the door or doors Monty can open.

+

Here are the doors Monty cannot open. Remember, a third of the time my_door and car_door will be the same, so we will include the same door twice, as doors Monty can’t open.

+
+
cant_open <- c(my_door, car_door)
+cant_open
+
+
[1] 3 1
+
+
+

We want to find the remaining doors from doors after removing the doors named in cant_open.

+

R has a good function for this, called setdiff. It calculates the set difference between two sequences, such as vectors.

+

The set difference between two sequences is the members that are in the first sequence, but are not in the second sequence. Here are a few examples of this set difference function in R.

+
+
# Members in c(1, 2, 3) that are *not* in c(1)
+# 1, 2, 3, removing 1, if present.
+setdiff(c(1, 2, 3), c(1))
+
+
[1] 2 3
+
+
+
+
# Members in c(1, 2, 3) that are *not* in c(2, 3)
+# 1, 2, 3, removing 2 and 3, if present.
+setdiff(c(1, 2, 3), c(2, 3))
+
+
[1] 1
+
+
+
+
# Members in c(1, 2, 3) that are *not* in c(2, 2)
+# 1, 2, 3, removing 2 and 2 again, if present.
+setdiff(c(1, 2, 3), c(2, 2))
+
+
[1] 1 3
+
+
+

This logic allows us to choose the doors Monty can open:

+
+
montys_choices <- setdiff(doors, c(my_door, car_door))
+montys_choices
+
+
[1] 2
+
+
+

Notice that montys_choices will only have one element left when my_door and car_door were different, but it will have two elements if my_door and car_door were the same.

+

Let’s play out those two cases:

+
+
my_door <- 1  # For example.
+car_door <- 2  # For example.
+# Monty can only choose door 3 now.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+montys_choices
+
+
[1] 3
+
+
+
+
my_door <- 1  # For example.
+car_door <- 1  # For example.
+# Monty can choose either door 2 or door 3.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+montys_choices
+
+
[1] 2 3
+
+
+

If Monty can only choose one door, we’ll take that. Otherwise we’ll choose a door at random from the two doors available.

+
+
if (length(montys_choices) == 1) {  # Only one door available.
+    montys_door <- montys_choices[1]  # Take the first (of 1!).
+} else {  # Two doors to choose from:
+    # Choose at random.
+    montys_door <- sample(montys_choices, size=1)
+}
+montys_door
+
+
[1] 2
+
+
+

Now we know Monty’s door, we can identify the other door, by removing our door, and Monty’s door, from the available options:

+
+
remaining_doors <- setdiff(doors, c(my_door, montys_door))
+# There is only one remaining door, take that.
+other_door <- remaining_doors[1]
+other_door
+
+
[1] 3
+
+
+

The logic above gives us the full procedure for one trial.

+
+
my_door <- sample(doors, size=1)
+car_door <- sample(doors, size=1)
+# Which door will Monty open?
+montys_choices <- setdiff(doors, c(my_door, car_door))
+# Choose single door left to choose, or door at random if two.
+if (length(montys_choices) == 1) {  # Only one door available.
+    montys_door <- montys_choices[1]  # Take the first (of 1!).
+} else {  # Two doors to choose from:
+    # Choose at random.
+    montys_door <- sample(montys_choices, size=1)
+}
+# Now find the door we'll open if we switch.
+# There is only one door left.
+remaining_doors <- setdiff(doors, c(my_door, montys_door))
+other_door <- remaining_doors[1]
+# Calculate the result of this trial.
+if (my_door == car_door) {
+    stay_wins <- TRUE
+}
+if (other_door == car_door) {
+    switch_wins <- TRUE
+}
+
+

All that remains is to put that trial procedure into a loop, and collect the results as we repeat the procedure many times.

+
+
# Vectors to store the results for each trial.
+stay_wins <- rep(FALSE, 10000)
+switch_wins <- rep(FALSE, 10000)
+
+# Doors to choose from.
+doors <- c(1, 2, 3)
+
+for (i in 1:10000) {
+    # You will recognize the below as the single-trial procedure above.
+    my_door <- sample(doors, size=1)
+    car_door <- sample(doors, size=1)
+    # Which door will Monty open?
+    montys_choices <- setdiff(doors, c(my_door, car_door))
+    # Choose single door left to choose, or door at random if two.
+    if (length(montys_choices) == 1) {  # Only one door available.
+        montys_door <- montys_choices[1]  # Take the first (of 1!).
+    } else {  # Two doors to choose from:
+        # Choose at random.
+        montys_door <- sample(montys_choices, size=1)
+    }
+    # Now find the door we'll open if we switch.
+    # There is only one door left.
+    remaining_doors <- setdiff(doors, c(my_door, montys_door))
+    other_door <- remaining_doors[1]
+    # Calculate the result of this trial.
+    if (my_door == car_door) {
+        stay_wins[i] <- TRUE
+    }
+    if (other_door == car_door) {
+        switch_wins[i] <- TRUE
+    }
+}
+
+p_for_stay <- sum(stay_wins) / 10000
+p_for_switch <- sum(switch_wins) / 10000
+
+message('p for stay: ', p_for_stay)
+
+
p for stay: 0.3293
+
+
message('p for switch: ', p_for_switch)
+
+
p for switch: 0.6707
+
+
+

We can also follow the same strategy as we used for the second implementation of the two-ships problem (Section 10.4).

+

Here, as in the second two-ships implementation, we do not calculate the trial results (stay_wins, switch_wins) in each trial. Instead, we store the doors for each trial, and then use Boolean vectors to calculate the results for all trials, at the end.

+
+
# Instead of storing the trial results, we store the doors for each trial.
+my_doors <- numeric(10000)
+car_doors <- numeric(10000)
+other_doors <- numeric(10000)
+
+# Doors to choose from.
+doors <- c(1, 2, 3)
+
+for (i in 1:10000) {
+    my_door <- sample(doors, size=1)
+    car_door <- sample(doors, size=1)
+    # Which door will Monty open?
+    montys_choices <- setdiff(doors, c(my_door, car_door))
+    # Choose single door left to choose, or door at random if two.
+    if (length(montys_choices) == 1) {  # Only one door available.
+        montys_door <- montys_choices[1]  # Take the first (of 1!).
+    } else {  # Two doors to choose from:
+        # Choose at random.
+        montys_door <- sample(montys_choices, size=1)
+    }
+    # Now find the door we'll open if we switch.
+    # There is only one door left.
+    remaining_doors <- setdiff(doors, c(my_door, montys_door))
+    other_door <- remaining_doors[1]
+
+    # Store the doors we chose.
+    my_doors[i] <- my_door
+    car_doors[i] <- car_door
+    other_doors[i] <- other_door
+}
+
+# Now - at the end of all the trials, we use Boolean vectors to calculate the
+# results.
+stay_wins <- my_doors == car_doors
+switch_wins <- other_doors == car_doors
+
+p_for_stay <- sum(stay_wins) / 10000
+p_for_switch <- sum(switch_wins) / 10000
+
+message('p for stay: ', p_for_stay)
+
+
p for stay: 0.3336
+
+
message('p for switch: ', p_for_switch)
+
+
p for switch: 0.6664
+
+
+
+

10.7.1 Insight from the Monty Hall simulation

+

The code simulation gives us an estimate of the right answer, but it also forces us to set out the exact mechanics of the problem. For example, by looking at the code, we see that we can calculate “stay_wins” with this code alone:

+
+
# Just choose my door and the car door for each trial.
+my_doors <- numeric(10000)
+car_doors <- numeric(10000)
+doors <- c(1, 2, 3)
+
+for (i in 1:10000) {
+    my_doors[i] <- sample(doors, size=1)
+    car_doors[i] <- sample(doors, size=1)
+}
+
+# Calculate whether I won by staying.
+stay_wins <- my_doors == car_doors
+p_for_stay <- sum(stay_wins) / 10000
+
+message('p for stay: ', p_for_stay)
+
+
p for stay: 0.3363
+
+
+

This calculation, on its own, tells us the answer, but it also points to another insight — whatever Monty does with the doors, it doesn’t change the probability that our initial guess is right, and that must be 1 in 3 (0.333). If the probability of stay_win is 1 in 3, and we only have one other door to switch to, the probability of winning after switching must be 2 in 3 (0.666).

+
+
+

10.7.2 Simulation and a variant of Monty Hall

+

You have seen that you can avoid the silly mistakes that many of us make with probability — by asking the computer to tell you the result before you start to reason from first principles.

+

As an example, consider the following variant of the Monty Hall problem.

+

The set up to the problem has us choosing a door (my_door above), and then Monty opens one of the other two doors.

+

Sometimes (in fact, 2/3 of the time) there is a car behind one of Monty’s doors. We’ve obliged Monty to open the other door, and his choice is forced.

+

When his choice was not forced, we had Monty choose the door at random.

+

For example, let us say we chose door 1.

+

Let us say that the car is also under door 1.

+

Monty has the option of choosing door 2 or door 3, and he chooses randomly between them.

+
+
my_door <- 1  # We chose door 1 at random.
+car_door <- 1  # This trial, by chance, the car door is 1.
+# Monty is left with doors 2 and 3 to choose from.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+# He chooses randomly.
+montys_door <- sample(montys_choices, size=1)
+# Show the result
+montys_door
+
+
[1] 2
+
+
+

Now — let us say we happen to know that Monty is rather lazy, and he will always choose the left-most (lower-numbered) door of the two options.

+

In the previous example, Monty had the option of choosing door 2 or door 3. In this new scenario, we know that he will always choose door 2 (the left-most door).

+
+
my_door <- 1  # We chose door 1 at random.
+car_door <- 1  # This trial, by chance, the car door is 1.
+# Monty is left with doors 2 and 3 to choose from.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+# He chooses the left-most door, always.
+montys_door <- montys_choices[1]
+# Show the result
+montys_door
+
+
[1] 2
+
+
+

It feels as if we have more information about where the car is, when we know this. Consider the situation where we have chosen door 1, and Monty opens door 3. We know that he would have preferred to open door 2, if he was allowed. We therefore know he wasn’t allowed to open door 2, and that means the car is definitely under door 2.

+
+
my_door <- 1  # We chose door 1 at random.
+car_door <- 2  # This trial, by chance, the car door is 2.
+# Monty is left with door 3 only to choose from.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+# He chooses the left-most door, always.  But in this case, the left-most
+# available door is 3 (he can't choose 2, it is the car_door).
+# Notice the doors were in order, so the left-most door is the first door
+# in the vector.
+montys_door <- montys_choices[1]
+# Show the result
+montys_door
+
+
[1] 3
+
+
+

To take that into account, we might try a different strategy. We will stick to our own choice if Monty has chosen the left-most of the two doors he had available to him, because he might have chosen that door either because there was a car underneath the other door, or because there was a car under neither and he simply preferred the left door. But, if Monty chooses the right-most of the two doors available to him, we will switch from our own choice to the other (unopened) door, because we can be sure that the car is under the other (unopened) door.

+

Call this the “switch if Monty chooses right door” strategy, or “switch if right” for short.

+

Can you see quickly whether this will be better than the “always stay” strategy? Will it be better than the “always switch” strategy? Take a moment to think it through, and write down your answers.

+

If you can quickly see the answer to both questions — well done — but, are you sure you are right?

+

We can test by simulation.

+

For our test of the “switch if right” strategy, we need to be able to tell whether one door is to the right of another. We can do this by comparing the door numbers; higher numbers mean further to the right: 2 is to the right of 1, and 3 is to the right of 2.

+
+
# Door 3 is right of door 1.
+3 > 1
+
+
[1] TRUE
+
+
+
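
Just to confirm the comparison goes the other way when the first door is not to the right of the second, here is a quick extra check of our own:

```{r}
# Door 2 is not to the right of door 3.
2 > 3
```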
+
# A test of the switch-if-right strategy.
+# The car doors.
+car_doors <- numeric(10000)
+# The door we chose using the strategy.
+strategy_doors <- numeric(10000)
+
+doors <- c(1, 2, 3)
+
+for (i in 1:10000) {
+    my_door <- sample(doors, size=1)
+    car_door <- sample(doors, size=1)
+    # Which door will Monty open?
+    montys_choices <- setdiff(doors, c(my_door, car_door))
+    # Choose Monty's door from the remaining options.
+    # This time, he always prefers the left door.
+    montys_door <- montys_choices[1]
+    # Now find the door we'll open if we switch.
+    remaining_doors <- setdiff(doors, c(my_door, montys_door))
+    # There is only one door remaining - but is Monty's door to the
+    # right of this one?  If so, Monty was forced away from the
+    # left-hand door, so the car must be behind it.
+    other_door <- remaining_doors[1]
+    if (montys_door > other_door) {
+        # Monty's door was the right-hand door, the car is under the other one.
+        strategy_doors[i] <- other_door
+    } else {  # We stick with the door we first thought of.
+        strategy_doors[i] <- my_door
+    }
+    # Store the car door for this trial.
+    car_doors[i] <- car_door
+}
+
+strategy_wins <- strategy_doors == car_doors
+
+p_for_strategy <- sum(strategy_wins) / 10000
+
+message('p for strategy: ', p_for_strategy)
+
+
p for strategy: 0.6668
+
+
+

We find that the “switch-if-right” strategy has around the same chance of success as the “always-switch” strategy — about 66.6%, or 2 in 3. Were your initial answers right? Now you have seen the result, can you see why it should be so? It may not be obvious — the Monty Hall problem is deceptively difficult. But our point here is that the simulation first gives you an estimate of the correct answer, and then gives you a good basis for thinking more about the problem. That is:

+
  • simulation is useful for estimation, and
  • simulation is useful for reflection.
+
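
If you would like a check on that reflection, here is one way of laying out the arithmetic (our own sketch, not part of the notebook above). One third of the time Monty is forced to open the right-hand of his two possible doors; we switch, and we always win. The other two thirds of the time he opens the left-hand door; we stick, and we win only in the half of those cases where the car is behind our own door.

```{r}
# A sketch of the arithmetic behind the simulation result.
# 1/3 of trials: Monty opens the right-hand door; switching always wins.
# 2/3 of trials: Monty opens the left-hand door; sticking wins half the time.
1/3 * 1 + 2/3 * 1/2
```

That comes to 2/3, matching the proportion of about 0.667 from the simulation.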

End of monty_hall notebook

+
+
+
+
+

10.8 Why use simulation?

+

Doing these simulations has two large benefits. First, it gives us the right answer, saving us from making a mistake. Second, the process of simulation forces us to think about how the problem works. This can give us better understanding, and make it easier to reason about the solution.

+

We will soon see that these same advantages also apply to reasoning about statistics.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/notebooks/ambulances.Rmd b/r-book/notebooks/ambulances.Rmd new file mode 100644 index 00000000..c2d50a80 --- /dev/null +++ b/r-book/notebooks/ambulances.Rmd @@ -0,0 +1,279 @@ +# Ambulances + + +The first thing to say about the code you will see below is there are +some lines that do not do anything; these are the lines beginning with a +`#` character (read `#` as “hash”). Lines beginning with `#` are called +*comments*. When R sees a `#` at the start of a line, it ignores +everything else on that line, and skips to the next. Here’s an example +of a comment: + +```{r} +# R will completely ignore this text. +``` + +Because R ignores lines beginning with `#`, the text after the `#` is +just for us, the humans reading the code. The person writing the code +will often use comments to explain what the code is doing. + +Our next task is to use R to simulate a single day of ambulances. We +will again represent each ambulance by a random number from 0 through 9. +20 of these numbers represents a simulation of all 20 ambulances +available to the contractor. We call a simulation of all ambulances for +a specific day one *trial*. + +Recall that we want twenty 10-sided dice — one per ambulance. Our dice +should be 10-sided, because each ambulance has a 1-in-10 chance of being +out of order. + +The program to simulate one trial of the ambulances problem therefore +begins with these commands: + +```{r} +# Ask R to generate 20 numbers from 0 through 9. + +# These are the numbers we will ask R to select from. +numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9) + +# Get 20 values from the *numbers* sequence. +# Store the 20 numbers with the name "a" +# We will explain the replace=TRUE later. +a <- sample(numbers, 20, replace=TRUE) + +# The result is a sequence of 20 numbers. +a +``` + +The commands above ask the computer to store the results of the random +drawing in a location in the computer’s memory to which we give a name +such as “a” or “ambulances” or “aardvark” — the name is up to us. + +Next, we need to count the number of defective ambulances: + +```{r} +# Count the number of nines in the random numbers. +# The "a == 9" part identifies all the numbers equal to 9. +# The "sum" part counts how many numbers "a == 9" found. +b <- sum(a == 9) +# Show the result +b +``` + +
+ +
+ +
+ + + +
+ +
+ +Counting sequence elements + +
+ +
+ +
We see that the code uses:

```{r}
sum(a == 9)
```

What exactly happens here under the hood? First, `a == 9` creates a sequence of values that only contains `TRUE` or `FALSE` values, depending on whether each element is equal to 9 or not.

Then we ask R to add up the values with `sum`. R counts `TRUE` as 1, and `FALSE` as 0; thus we can use `sum` to count the number of `TRUE` values.

This comes down to asking “how many elements in `a` are equal to 9?”.

Don’t worry, we will go over this again in the next chapter.
+ +
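
To make that concrete, here is a tiny example of the same idea, using a short vector we have made up for illustration:

```{r}
# A short vector, just for illustration.
small <- c(9, 3, 9, 0, 7)
# The comparison gives TRUE where an element equals 9, FALSE otherwise.
small == 9
# sum counts TRUE as 1 and FALSE as 0, so this counts the 9s - here, 2.
sum(small == 9)
```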
+ +The `sum` command is a *counting* operation. It asks the computer to +*count* the number of `9`s among the twenty numbers that are in location +`a` following the random draw carried out by the `sample` operation. The +result of the `sum` operation will be somewhere between 0 and 20, the +number of simulated ambulances that were out-of-order on a given +simulated day. The result is then placed in another location in the +computer’s memory that we label `b`. + +Above you see that we have worked out how to tell the computer to do a +single trial — one simulated day. + +### 2.3.1 Repeating trials + +We could run the code above for one trial over and over, and write down +the result on a piece of paper. If we did this 100 times we would have +100 counts of the number of simulated ambulances that had broken down +for each simulated day. To answer our question, we will then count the +number of times the count was more than three, and divide by 100, to get +an estimate of the proportion of days with more than three out-of-order +ambulances. + +One of the great things about the computer is that it is very good at +repeating tasks many times, so we do not have to. Our next task is to +ask the computer to repeat the single trial many times — say 1000 times +— and count up the results for us. + +Of course R is very good at repeating things, but the instructions to +tell R to repeat things will take a little while to get used to. Soon, +we will spend some time going over it in more detail. For now though, we +show you how what it looks like, and ask you to take our word for it. + +The standard way to repeat steps in R is a `for` loop. For example, let +us say we wanted to display “Hello” five times. Here is how we would do +that with a `for` loop: + +```{r} +# Read the next line as "repeat the following steps five times". +for (i in 1:5) { + # The stuff between the curly brackets is the code we + # repeat five times. + # Print "Hello" to the screen. + message("Hello") +} +``` + +You can probably see where we are going here. We are going to put the +code for one trial inside a `for` loop, to repeat that trial code many +times. + +Our next job is to *store* the results of each trial. If we are going to +run 1000 trials, we need to store 1000 results. + +To do this, we start with a sequence of 1000 zeros, that we will fill in +later, like this: + +```{r} +# Ask R to make a sequence of 1000 zeros that we will use +# to store the results of our 1000 trials. +# Call this sequence "z" +z <- numeric(1000) +``` + +For now, `z` contains 1000 zeros, but we will soon use a `for` loop to +execute 1000 trials. For each trial we will calculate our result (the +number of broken-down ambulances), and we will store the result in the +`z` store. We end up with 1000 trial results stored in `z`. + +With these parts, we are now ready to solve the ambulance problem, using +R. + +### 2.3.2 The solution + +This is our big moment! Here we will combine the elements shown above to +perform our ambulance simulation over, say, 1000 days. Just a quick +reminder: we do not expect you to understand all the detail of the code +below; we will cover that later. For now, see if you can follow along +with the gist of it. + +To solve resampling problems, we typically proceed as we have done +above. We figure out the structure of a single trial and then place that +trial in a `for` loop that executes it multiple times (once for each +day, in our case). + +Now, let us apply this procedure to our ambulance problem. We simulate +1000 days. 
You will see that we have just taken the parts above, and put +them together. The only new part here, is the step at the end, where we +store the result of the trial. Bear with us for that; we will come to it +soon. + +```{r} +# Ask R to make a sequence of 1000 zeros that we will use +# to store the results of our 1000 trials. +# Call this sequence "z" +z <- numeric(1000) + +# These are the numbers we will ask R to select from. +numbers <- 0:9 + +# Read the next line as "repeat the following steps 1000 times". +for (i in 1:1000) { + # The stuff between the curly brackets is the code we + # repeat 1000 times. + + # Get 20 values from the *numbers* sequence. + # Store the 20 numbers with the name "a" + a <- sample(numbers, 20, replace=TRUE) + + # Count the number of nines in the random numbers. + # The "a == 9" part identifies all the numbers equal to 9. + # The "sum" part counts how many numbers "a == 9" found. + b <- sum(a == 9) + + # Store the result from this trial in the sequence "z" + z[i] <- b + + # Now go back and repeat the trial, until done. +} +``` + +The `z[i] <- b` statement that follows the `sum` *counting* operation +simply keeps track of the results of each trial, placing the number of +defective ambulances for each trial inside the sequence called `z`. The +sequence has 1000 positions: one for each trial. + +When we have run the code above, we have stored 1000 trial results in +the sequence `z`. These are 1000 counts of out-of-order ambulances, one +for each of our simulated days. Our last task is to calculate the +proportion of these days for which we had more than three broken-down +ambulances. + +Since our aim is to count the number of days in which more than 3 (4 or +more) defective ambulances occur, we use another *counting* `sum` +command at the end of the 1000 trials. This command *counts* how many +times more than 3 defects occurred in the 1000 days recorded in our `z` +sequence, and we place the result in another location, `k`. This gives +us the total number of days where 4 or more defective ambulances are +seen to occur. Then we divide the number in `k` by 1000, the number of +trials. Thus we obtain an estimate of the chance, expressed as a +probability between 0 and 1, that 4 or more ambulances will be defective +on a given day. And we store that result in a location that we call +`kk`, which R subsequently prints to the screen. + +```{r} +# How many trials resulted in more than 3 ambulances out of order? +k <- sum(z > 3) + +# Convert to a proportion. +kk <- k / 1000 + +# Show the result. +message(kk) +``` + +This is the estimate we wanted; the proportion of days where more than +three ambulances were out of action. + +We have crept up on the solution, so it might not be clear to you how +few steps you needed to do this task. Here is the whole solution to the +problem, without the comments: + +```{r} +z <- numeric(1000) +numbers <- 0:9 + +for (i in 1:1000) { + a <- sample(numbers, 20, replace=TRUE) + b <- sum(a == 9) + z[i] <- b +} + +k <- sum(z > 3) +kk <- k / 1000 +message(kk) +``` diff --git a/r-book/notebooks/basketball_shots.Rmd b/r-book/notebooks/basketball_shots.Rmd new file mode 100644 index 00000000..2a2f169d --- /dev/null +++ b/r-book/notebooks/basketball_shots.Rmd @@ -0,0 +1,36 @@ +# Three or more basketball shots + + +We simulate the probability of scoring three or more baskets from five +shots, if each shot has a 25% probability of success. + +```{r} +n_baskets <- numeric(10000) + +# Do 10000 experimental trials. 
+for (i in 1:10000) { + + # Generate 5 random numbers, each between 1 and 4, put them in "a". + # Let "1" represent a basket, "2" through "4" be a miss. + a <- sample(1:4, size=5, replace=TRUE) + + # Count the number of baskets, put that result in b. + b <- sum(a == 1) + + # Keep track of each experiment's results in z. + n_baskets[i] <- b + + # End the experiment, go back and repeat until all 10000 are completed, then + # proceed. +} + +# Determine how many experiments produced more than two baskets, put that +# result in k. +n_more_than_2 <- sum(n_baskets > 2) + +# Convert to a proportion. +prop_more_than_2 <- n_more_than_2 / 10000 + +# Print the result. +message(prop_more_than_2) +``` diff --git a/r-book/notebooks/billies_bill.Rmd b/r-book/notebooks/billies_bill.Rmd new file mode 100644 index 00000000..60c03bf4 --- /dev/null +++ b/r-book/notebooks/billies_bill.Rmd @@ -0,0 +1,306 @@ +# Billie's Bill + + +The text in this notebook section assumes you have opened the page as an +interactive notebook, on your own computer, or one of the RStudio web +interfaces. + +A notebook can contain blocks of text — like this one — as well as code, +and the results from running the code. + +RMarkdown notebooks contain text — like this, but they can also contain +snippets of code, in *code chunks*. You will see examples of code chunks +soon. + +Notebook text can have formatting, such as links. + +For example, this sentence ends with a link to the earlier [second +edition of this +book](https://resample.statistics.com/intro-text-online). + +If you are in the notebook interface (rather than reading this in the +textbook), you will see the RStudio menu near the top of the page, with +headings “File”, “Edit” and so on. + +Underneath that, by default, you may see a row of icons - the “Toolbar”. + +In the toolbar, you may see a list box that will allow you to run the +code in the notebook, among other icons. + +When we get to code chunks, you will also see a green play icon at the +right edge of the interface, in the chunk. This will allow you to run +the code chunk. + +Although you can use this “run” button, we suggest you get used to using +the keyboard shortcut. The default shortcut on Windows or Linux is to +hold down the Control key and the Shift key and the Enter (Return) key +at the same time. We will call this Control-Shift-Enter. On Mac the +default combination is Command-Shift-Enter, where Command is the key +with the four-leaf-clover-like icon to the left of the space-bar. To +save us having to say this each time, we will call this combination +Ctl/Cmd-Shift-Enter. + +In this, our first notebook, we will be using R to solve one of those +difficult and troubling problems in life — working out the bill in a +restaurant. + +## 4.3 The meal in question + +Alex and Billie are at a restaurant, getting ready to order. They do not +have much money, so they are calculating the expected bill before they +order. + +Alex is thinking of having the fish for £10.50, and Billie is leaning +towards the chicken, at £9.25. First they calculate their combined bill. + +Below this text you see a *code* chunk. It contains the R code to +calculate the total bill. Press Control-Shift-Enter or Cmd-Shift-Enter +(on Mac) in the chunk below, to see the total. + +```{r} +10.50 + 9.25 +``` + +The contents of the chunk above is R code. As you would predict, R +understands numbers like `10.50`, and it understands `+` between the +numbers as an instruction to add the numbers. 
+ +When you press Ctl/Cmd-Shift-Enter, R finds `10.50`, realizes it is a +number, and stores that number somewhere in memory. It does the same +thing for `9.25`, and then it runs the *addition* operation on these two +numbers in memory, which gives the number 19.75. + +Finally, R sends the resulting number (19.75) back to the notebook for +display. The notebook detects that R sent back a value, and shows it to +us. + +This is exactly what a calculator would do. + +## 4.4 Comments + +Unlike a calculator, we can also put notes next to our calculations, to +remind us what they are for. One way of doing this is to use a +“comment”. You have already seen comments in the previous chapter. + +A comment is some text that the computer will ignore. In R, you can make +a comment by starting a line with the `#` (hash) character. For example, +the next cell is a code cell, but when you run it, it does not show any +result. In this case, that is because the computer sees the `#` at the +beginning of the line, and then ignores the rest. + +Many of the code cells you see will have comments in them, to explain +what the code is doing. + +Practice writing comments for your own code. It is a very good habit to +get into. You will find that experienced programmers write many comments +on their code. They do not do this to show off, but because they have a +lot of experience in reading code, and they know that comments make it +much easier to read and understand code. + +## 4.5 More calculations + +Let us continue with the struggle that Alex and Billie are having with +their bill. + +They realize that they will also need to pay a tip. + +They think it would be reasonable to leave a 15% tip. Now they need to +multiply their total bill by 0.15, to get the tip. The bill is about +£20, so they know that the tip will be about £3. + +In R `*` means multiplication. This is the equivalent of the “×” key on +a calculator. + +What about this, for the correct calculation? + +```{r} +# The tip - with a nasty mistake. +10.50 + 9.25 * 0.15 +``` + +Oh dear, no, that isn’t doing the right calculation. + +R follows the normal rules of *precedence* with calculations. These +rules tell us to do multiplication before addition. + +See for more detail +on the standard rules. + +In the case above the rules tell R to first calculate `9.25 * 0.15` (to +get `1.3875`) and then to add the result to `10.50`, giving `11.8875`. + +We need to tell R we want it to do the *addition* and *then* the +multiplication. We do this with round brackets (parentheses): + +
+ +
+ +
+ + + +
+ +
+ +
+ +
+ +
There are three types of brackets in R.

These are:

- *round brackets* or *parentheses*: `()`;
- *square brackets*: `[]`;
- *curly brackets*: `{}`.

Each type of bracket has a different meaning in R. In the examples, pay close attention to the type of brackets we are using.
+ +
+ +```{r} +# The bill plus tip - mistake fixed. +(10.50 + 9.25) * 0.15 +``` + +The obvious next step is to calculate the bill *including the tip*. + +```{r} +# The bill, including the tip +10.50 + 9.25 + (10.50 + 9.25) * 0.15 +``` + +At this stage we start to feel that we are doing too much typing. Notice +that we had to type out `10.50 + 9.25` twice there. That is a little +boring, but it also makes it easier to make mistakes. The more we have +to type, the greater the chance we have to make a mistake. + +To make things simpler, we would like to be able to *store* the result +of the calculation `10.50 + 9.25`, and then re-use this value, to +calculate the tip. + +This is the role of *variables*. A *variable* is a value with a name. + +Here is a variable: + +```{r} +# The cost of Alex's meal. +a <- 10.50 +``` + +`a` is a *name* we give to the value 10.50. You can read the line above +as “The variable `a` *gets the value* 10.50”. We can also talk of +*setting* the variable. Here we are *setting* `a` to equal 10.50. + +Now, when we use `a` in code, it refers to the value we gave it. For +example, we can put `a` on a line on its own, and R will show us the +*value* of `a`: + +```{r} +# The value of a +a +``` + +We did not have to use the name `a` — we can choose almost any name we +like. For example, we could have chosen `alex_meal` instead: + +```{r} +# The cost of Alex's meal. +# alex_meal gets the value 10.50 +alex_meal <- 10.50 +``` + +We often set variables like this, and then display the result, all in +the same chunk. We do this by first setting the variable, as above, and +then, on the final line of the chunk, we put the variable name on a line +on its own, to ask R to show us the value of the variable. Here we set +`billie_meal` to have the value 9.25, and then show the value of +`billie_meal`, all in the same chunk. + +```{r} +# The cost of Alex's meal. +# billie_meal gets the value 10.50 +billie_meal <- 10.50 +# Show the value of billie_meal +billie_meal +``` + +Of course, here, we did not learn much, but we often set variable values +with the results of a calculation. For example: + +```{r} +# The cost of both meals, before tip. +bill_before_tip <- 10.50 + 9.25 +# Show the value of both meals. +bill_before_tip +``` + +But wait — we can do better than typing in the calculation like this. We +can use the values of our variables, instead of typing in the values +again. + +```{r} +# The cost of both meals, before tip, using variables. +bill_before_tip <- alex_meal + billie_meal +# Show the value of both meals. +bill_before_tip +``` + +We make the calculation clearer by writing the calculation this way — we +are calculating the bill before the tip by adding the cost of Alex’s and +Billie’s meal — and that’s what the code looks like. But this also +allows us to *change* the variable value, and recalculate. For example, +say Alex decided to go for the hummus plate, at £7.75. Now we can tell R +that we want `alex_meal` to have the value 7.75 instead of 10.50: + +```{r} +# The new cost of Alex's meal. +# alex_meal gets the value 7.75 +alex_meal = 7.75 +# Show the value of alex_meal +alex_meal +``` + +Notice that `alex_meal` now has a new value. It was 10.50, but now it is +7.75. We have *reset* the value of `alex_meal`. In order to use the new +value for `alex_meal`, we must *recalculate* the bill before tip with +*exactly the same code as before*: + +```{r} +# The new cost of both meals, before tip. +bill_before_tip <- alex_meal + billie_meal +# Show the value of both meals. 
+bill_before_tip +``` + +Notice that, now we have rerun this calculation, we have *reset* the +value for `bill_before_tip` to the correct value corresponding to the +new value for `alex_meal`. + +All that remains is to recalculate the bill plus tip, using the new +value for the variable: + +```{r} +# The cost of both meals, after tip. +bill_after_tip = bill_before_tip + bill_before_tip * 0.15 +# Show the value of both meals, after tip. +bill_after_tip +``` + +Now we are using variables with relevant names, the calculation looks +right to our eye. The code expresses the calculation as we mean it: the +bill after tip is equal to the bill before the tip, plus the bill before +the tip times 0.15. + +## 4.6 And so, on + +Now you have done some practice with the notebook, and with variables, +you are ready for a new problem in probability and statistics, in the +next chapter. diff --git a/r-book/notebooks/birthday_problem.Rmd b/r-book/notebooks/birthday_problem.Rmd new file mode 100644 index 00000000..656a8107 --- /dev/null +++ b/r-book/notebooks/birthday_problem.Rmd @@ -0,0 +1,43 @@ +# The Birthday Problem + + +Here we answer the question: “What is the probability that two or more +people among a roomful of (say) twenty-five people will have the same +birthday?” + +```{r} +n_with_same_birthday <- numeric(10000) + +# All the days of the year from "1" through "365" +all_days <- 1:365 + +# Do 10000 trials (experiments) +for (i in 1:10000) { + # Generate 25 numbers randomly between "1" and "365," put them in a. + a <- sample(all_days, size=25, replace=TRUE) + + # Looking in a, count the number of multiples and put the result in + # "counts". + counts <- tabulate(a) + + # We request multiples > 1 because we are interested in any multiple, + # whether it is a duplicate, triplicate, etc. Had we been interested only + # in duplicates, we would have put in sum(counts == 2). + n_duplicates <- sum(counts > 1) + + # Score the result of each trial to our store + n_with_same_birthday[i] <- n_duplicates + + # End the loop for the trial, go back and repeat the trial until all 10000 + # are complete, then proceed. +} + +# Determine how many trials had at least one multiple. +k <- sum(n_with_same_birthday) + +# Convert to a proportion. +kk <- k / 10000 + +# Print the result. +message(kk) +``` diff --git a/r-book/notebooks/bullseye.Rmd b/r-book/notebooks/bullseye.Rmd new file mode 100644 index 00000000..15b8105a --- /dev/null +++ b/r-book/notebooks/bullseye.Rmd @@ -0,0 +1,55 @@ +# Bullseye + + +This notebook solves the “bullseye” problem: assume from past experience +that a given archer puts 10 percent of his shots in the black +(“bullseye”) and 60 percent of his shots in the white ring around the +bullseye, but misses with 30 percent of his shots. How likely is it that +in three shots the shooter will get exactly one bullseye, two in the +white, and no misses? + +```{r} +# Make a vector to store the results of each trial. +white_counts <- numeric(10000) + +# Do 10000 experimental trials +for (i in 1:10000) { + + # To represent 3 shots, generate 3 numbers at random between "1" and "10" + # and put them in a. We will let a "1" denote a bullseye, "2"-"7" a shot in + # the white, and "8"-"10" a miss. + a <- sample(1:10, size=3, replace=TRUE) + + # Count the number of bullseyes, put that result in b. + b <- sum(a == 1) + + # If there is exactly one bullseye, we will continue with counting the + # other shots. (If there are no bullseyes, we need not bother — the + # outcome we are interested in has not occurred.) 
+ if (b == 1) { + + # Count the number of shots in the white, put them in c. (Recall we are + # doing this only if we got one bullseye.) + c <- sum((a >= 2) & (a <=7)) + + # Keep track of the results of this second count. + white_counts[i] <- c + + # End the "if" sequence — we will do the following steps without regard + # to the "if" condition. + } + + # End the above experiment and repeat it until 10000 repetitions are + # complete, then continue. +} + +# Count the number of occasions on which there are two in the white and a +# bullseye. +n_desired <- sum(white_counts == 2) + +# Convert to a proportion. +prop_desired <- n_desired / 10000 + +# Print the results. +message(prop_desired) +``` diff --git a/r-book/notebooks/cards_pennies.Rmd b/r-book/notebooks/cards_pennies.Rmd new file mode 100644 index 00000000..4a3dfcaf --- /dev/null +++ b/r-book/notebooks/cards_pennies.Rmd @@ -0,0 +1,76 @@ +# Cards and pennies + + +An answer for the following puzzle: “… shuffle a packet of four cards — +two red, two black — and deal them face down in a row. Two cards are +picked at random, say by placing a penny on each. What is the +probability that those two cards are the same color?” + +```{r} +# Numbers representing the slips in the hat. +N <- c(1, 1, 2, 2) + +# An array in which we will store the result of each trial. +z <- rep('No result yet', 10000) + +for (i in 1:10000) { + # sample, used in this way, has the effect of shuffling the vector + # into a random order. See the section linked above for an explanation. + shuffled <- sample(N) + + A <- shuffled[1] # The first slip from the shuffled array. + B <- shuffled[2] # The second slip from the shuffled array. + + # Set the result of this trial. + if (A == B) { + z[i] <- 'Yes' + } else { + z[i] <- 'No' + } +} # End of the loop. + +# How many times did we see "Yes"? +k <- sum(z == 'Yes') + +# The proportion. +kk <- k / 10000 + +message(kk) +``` + +Now let’s play the game differently, first picking one card and *putting +it back and shuffling* before picking a second card. What are the +results now? You can try it with the cards, but here is another program, +similar to the last, to run that variation. + +```{r} +# An array in which we will store the result of each trial. +z <- rep('No result yet', 10000) + +for (i in 1:10000) { + # Shuffle the numbers in N into a random order. + first_shuffle <- sample(N) + # Draw a slip of paper. + A <- first_shuffle[1] # The first slip. + + # Shuffle again (with all the slips). + second_shuffle <- sample(N) + # Draw a slip of paper. + B <- second_shuffle[1] # The second slip. + + # Set the result of this trial. + if (A == B) { + z[i] <- 'Yes' + } else { + z[i] <- 'No' + } +} # End of the loop. + +# How many times did we see "Yes"? +k <- sum(z == 'Yes') + +# The proportion. +kk <- k / 10000 + +message(kk) +``` diff --git a/r-book/notebooks/contract_poll.Rmd b/r-book/notebooks/contract_poll.Rmd new file mode 100644 index 00000000..c9f747e7 --- /dev/null +++ b/r-book/notebooks/contract_poll.Rmd @@ -0,0 +1,40 @@ +# Contract poll simulation + + +This R notebook generates samples of 50 simulated voters on the +assumption that only 50 percent are in favor of the contract. Then it +counts (`sum`s) the number of samples where over 29 (30 or more) of the +50 respondents said they were in favor of the contract. 
(That is, we use +a “one-tailed test.”) The result in the `kk` variable is the chance of a +“false positive,” that is, 30 or more people saying they favor a +contract when support for the proposal is actually split evenly down the +middle. + +```{r} +# We will do 10,000 iterations. +n <- 10000 + +# Make an array of integers to store the "Yes" counts. +yeses <- numeric(n) + +for (i in 1:n) { + answers <- sample(c('No', 'Yes'), size=50, replace=TRUE) + yeses[i] <- sum(answers == 'Yes') +} + +# Produce a histogram of the trial results. +# Use integer bins for histogram, from 10 through 40. +hist(yeses, breaks=10:40, + main='Number of yes votes out of 50, in null universe') +``` + +In the histogram above, we see that about 11 percent of our trials had +30 or more voters in favor, despite the fact that they were drawn from a +population that was split 50-50. R will calculate this proportion +directly if we add the following commands to the above: + +```{r} +k <- sum(yeses >= 30) +kk <- k / n +message('Proportion >= 30: ', round(kk, 2)) +``` diff --git a/r-book/notebooks/female_calves.Rmd b/r-book/notebooks/female_calves.Rmd new file mode 100644 index 00000000..67c56282 --- /dev/null +++ b/r-book/notebooks/female_calves.Rmd @@ -0,0 +1,46 @@ +# Female calf numbers simulation + + +This notebook uses simulation to test the null hypothesis that the +chances of any one calf being female is 100 / 206. + +```{r} +# set the number of trials +n_trials <- 10000 + +# set the size of each sample +sample_size <- 10 + +# an array to store the results +scores <- numeric(n_trials) + +# for 10000 repeats +for (i in 1:n_trials) { + + # generate 10 numbers between 1 and 206 + a <- sample(1:206, size = sample_size) + + # count how many numbers were between 101 and 206 + b <- sum((a >= 101) & ((a <= 206))) + + # store the result of the current trial + scores[i] <- b +} + +# plot a histogram of the scores +title_of_plot <- paste0("Number of females in", n_trials, " samples of \n", sample_size, " simulated calves") +hist(scores, xlab = 'Number of Females', main = title_of_plot) + +# count the number of scores that were greater than or equal to 9 +k <- sum(scores >= 9) + +# express as a proportion +kk <- k / n_trials + +# show the proportion +print(paste("The probability of 9 or 10 females occurring by chance is", kk)) +``` + +We read from the result in vector `kk` in the “calves” program that the +probability of 9 or 10 females occurring by chance is a bit more than +one percent. diff --git a/r-book/notebooks/fifteen_points_in_bridge.Rmd b/r-book/notebooks/fifteen_points_in_bridge.Rmd new file mode 100644 index 00000000..ff4e1901 --- /dev/null +++ b/r-book/notebooks/fifteen_points_in_bridge.Rmd @@ -0,0 +1,51 @@ +# Fifteen points in a bridge hand + + +Let us assume that ace counts as 4, king = 3, queen = 2, and jack = 1. + +```{r} +# Constitute a deck with 4 jacks (point value 1), 4 queens (value 2), 4 +# kings (value 3), 4 aces (value 4), and 36 other cards with no point +# value +whole_deck <- rep(c(1, 2, 3, 4, 0), c(4, 4, 4, 4, 36)) +whole_deck +``` + +```{r} +N <- 10000 +trial_results <- numeric(N) + +# Do N trials. +for (i in 1:N) { + # Shuffle the deck of cards and draw 13 + hand <- sample(whole_deck, size=13) # replace=FALSE is default. + + # Total the points. + points <- sum(hand) + + # Keep score of the result. + trial_results[i] <- points + + # End one experiment, go back and repeat until all N trials are done. +} +``` + +```{r} +# Produce a histogram of trial results. 
+hist(trial_results, breaks=0:max(trial_results), main='Points in bridge hands') +``` + +From this histogram, we see that in about 4 percent of our trials we +obtained a total of exactly 15 points. We can also compute this +directly: + +```{r} +# How many times did we have a hand with fifteen points? +k <- sum(trial_results == 15) + +# Convert to a proportion. +kk <- k / N + +# Show the result. +kk +``` diff --git a/r-book/notebooks/fine_win.Rmd b/r-book/notebooks/fine_win.Rmd new file mode 100644 index 00000000..a78ca784 --- /dev/null +++ b/r-book/notebooks/fine_win.Rmd @@ -0,0 +1,195 @@ +# Fine day and win + + +This notebook calculates the chances that the Commanders win on a fine +day. + +We also go through the logic of the `if` statement, and its associated +`else` clause. + +```{r} +# blue means "nice day", yellow means "not nice". +bucket_A <- rep(c('blue', 'yellow'), c(7, 3)) +bucket_A +``` + +Now let us draw a ball at random from bucket_A: + +```{r} +a_ball <- sample(bucket_A, size=1) +a_ball +``` + +How we run our first `if` statement. Running this code will display “The +ball was blue” if the ball was blue, otherwise it will not display +anything: + +```{r} +if (a_ball == 'blue') { + message('The ball was blue') +} +``` + +Notice that the header line has `if`, followed by an open parenthesis +`(` introducing the *conditional expression* `a_ball == 'blue'`. There +follows close parenthesis `)` to finish the conditional expression. Next +there is a open curly brace `{` to signal the start of the body of the +`if` statement. The *body* of the `if` statement is one or more lines of +code, followed by the close curly brace `}`. Here there is only one +line: `message('The ball was blue')`. R only runs the body of the if +statement if the *condition* is `TRUE`.[^1] + +To confirm we see “The ball was blue” if `a_ball` is `'blue'` and +nothing otherwise, we can set `a_ball` and re-run the code: + +```{r} +# Set value of a_ball so we know what it is. +a_ball <- 'blue' +``` + +```{r} +if (a_ball == 'blue') { + # The conditional statement is True in this case, so the body does run. + message('The ball was blue') +} +``` + +```{r} +a_ball <- 'yellow' +``` + +```{r} +if (a_ball == 'blue') { + # The conditional statement is False, so the body does not run. + message('The ball was blue') +} +``` + +We can add an `else` clause to the `if` statement. Remember the *body* +of the `if` statement runs if the *conditional expression* (here +`a_ball == 'blue')` is `TRUE`. The `else` clause runs if the conditional +statement is `FALSE`. This may be clearer with an example: + +```{r} +a_ball <- 'blue' +``` + +```{r} +if (a_ball == 'blue') { + # The conditional expression is True in this case, so the body runs. + message('The ball was blue') +} else { + # The conditional expression was True, so the else clause does not run. + message('The ball was not blue') +} +``` + +Notice that the `else` clause of the `if` statement starts with the end +of the `if` body with the closing curly brace `}`. `else` follows, +followed in turn by the opening curly brace `{` to start the body of the +`else` clause. The body of the `else` clause only runs if the initial +conditional expression is *not* `TRUE`. + +```{r} +a_ball <- 'yellow' +``` + +```{r} +if (a_ball == 'yellow') { + # The conditional expression was False, so the body does not run. + message('The ball was blue') +} else { + # but the else clause does run. 
+ message('The ball was not blue') +} +``` + +With this machinery, we can now implement the full logic of step 4 +above: + + If you have drawn a blue ball from bucket A: + Draw a ball from bucket B + if the ball is green: + record "yes" + otherwise: + record "no". + +Here is bucket B. Remember green means “win” (65% of the time) and red +means “lose” (35% of the time). We could call this the “Commanders win +when it is a nice day” bucket: + +```{r} +bucket_B <- rep(c('green', 'red'), c(65, 35)) +``` + +The full logic for step 4 is: + +Now we have everything we need to run many trials with the same logic. + +```{r} +# By default, say we have no result. +result = 'No result' +a_ball <- sample(bucket_A, size=1) +# If you have drawn a blue ball from bucket A: +if (a_ball == 'blue') { + # Draw a ball at random from bucket B + b_ball <- sample(bucket_B, size=1) + # if the ball is green: + if (b_ball == 'green') { + # record "yes" + result <- 'yes' + # otherwise: + } else { + # record "no". + result <- 'no' + } +} +# Show what we got in this case. +result +``` + +```{r} +# The result of each trial. +# To start with, say we have no result for all the trials. +z <- rep('No result', 10000) + +# Repeat trial procedure 10000 times +for (i in 1:10000) { + # draw one "ball" for the weather, store in "a_ball" + # blue is "nice day", yellow is "not nice" + a_ball <- sample(bucket_A, size=1) + if (a_ball == 'blue') { # nice day + # if no rain, check on game outcome + # green is "win" (give nice day), red is "lose" (given nice day). + b_ball <- sample(bucket_B, size=1) + if (b_ball == 'green') { # Commanders win + # Record result. + z[i] <- 'yes' + } else { + z[i] <- 'no' + } + } + # End of trial, go back to the beginning until done. +} + +# Count of the number of times we got "yes". +k <- sum(z == 'yes') +# Show the proportion of *both* fine day *and* wins +kk <- k / 10000 +kk +``` + +The above procedure gives us the probability that it will be a nice day +and the Commanders will win — about 46.1%. + +[^1]: In this case, the result of the conditional expression is in fact + either `TRUE` or `FALSE`. R is more liberal on what it allows in the + conditional expression; it will take whatever the result is, and + then force the result into either `TRUE` or `FALSE`, in fact, by + wrapping the result with the `logical` function, that takes anything + as input, and returns either `TRUE` or `FALSE`. Therefore, we could + refer to the result of the conditional expression as something + “truthy” — that is - something that comes back as `TRUE` or `FALSE` + from the `logical` function. In the case here, that does not arise, + because the result is in fact either exactly `TRUE` or exactly + `FALSE`. diff --git a/r-book/notebooks/five_spades_four_clubs.Rmd b/r-book/notebooks/five_spades_four_clubs.Rmd new file mode 100644 index 00000000..ab0a51d6 --- /dev/null +++ b/r-book/notebooks/five_spades_four_clubs.Rmd @@ -0,0 +1,57 @@ +# Five spades and four clubs + + +**This is an example of multiple-outcome sampling without replacement, +order does not matter**. + +The problem is similar to the example in +sec-four-girls-one-boy, except +that now there are four equally-likely outcomes instead of only two. An +R solution is: + +```{r} +# Constitute the deck of 52 cards. +# Repeat the suit names 13 times each, to make a 52 card deck. +deck <- rep(c('spade', 'club', 'diamond', 'heart'), c(13, 13, 13, 13)) +# Show the deck +deck +``` + +```{r} +N <- 10000 +trial_results <- numeric(N) + +# Repeat the trial N times. 
+for (i in 1:N) { + + # Shuffle the deck and draw 13 cards. + hand <- sample(deck, 13) # replace=FALSE is the default. + + # Count the number of spades in "hand", put the result in "n_spades". + n_spades <- sum(hand == 'spade') + + # If we have five spades, we'll continue on to count the clubs. If we don't + # have five spades, the number of clubs is irrelevant — we have not gotten + # the hand we are interested in. + if (n_spades == 5) { + # Count the clubs, put the result in "n_clubs" + n_clubs <- sum(hand == 'club') + # Keep track of the number of clubs in each trial + trial_results[i] <- n_clubs + } + + # End one experiment, go back and repeat until all N trials are done. +} + +# Count the number of trials where we got 4 clubs. This is the answer we want - +# the number of hands out of 1000 with 5 spades and 4 clubs. (Recall that we +# only counted the clubs if the hand already had 5 spades.) +n_5_and_4 <- sum(trial_results == 4) + +# Convert to a proportion. +prop_5_and_4 <- n_5_and_4 / N + +# Print the result +message(prop_5_and_4) +``` diff --git a/r-book/notebooks/five_spades_four_girls.Rmd b/r-book/notebooks/five_spades_four_girls.Rmd new file mode 100644 index 00000000..fc4ebb23 --- /dev/null +++ b/r-book/notebooks/five_spades_four_girls.Rmd @@ -0,0 +1,107 @@ +# Five spades, four girls + + +This is a compound problem: what are the chances of *both* five or more +spades in one bridge hand, and four girls and a boy in a five-child +family? + +“Compound” does not necessarily mean “complicated”. It means that the +problem is a compound of two or more simpler problems. + +A natural way to handle such a compound problem is in stages, as we saw +in the archery problem of +sec-one-black-archery. If a +“success” is achieved in the first stage, go on to the second stage; if +not, don’t go on. More specifically in this example: + +- **Step 1.** Use a bridge card deck, and five coins with heads = + “girl”. +- **Step 2.** Deal a 13-card bridge hand and count the spades. If 5 or + more spades, record “no” and end the experimental trial. Otherwise, + continue to step 3. +- **Step 3.** Throw five coins, and count “heads.” If four heads, record + “yes,” otherwise record “no.” +- **Step 4.** Repeat steps 2 and 3 a thousand times. +- **Step 5.** Compute the proportion of “yes” in step 3. This estimates + the probability sought. + +The R solution to this compound problem is neither long nor difficult. +We tackle it almost as if the two parts of the problem were to be dealt +with separately. We first determine, in a random bridge hand, whether 5 +spades or more are dealt, as was done in the problem +sec-five-spades-four-clubs. +Then, `if` 5 or more spades are found, we use `sample` to generate a +random family of 5 children. This means that we need not generate +families if 5 or more spades were not dealt to the bridge hand, because +a “success” is only recorded if both conditions are met. After we record +the number of girls in each sample of 5 children, we need only finish +the loop (by `}` and then use `sum` to count the number of samples that +had 4 girls, storing the result in `k`. Since we only drew samples of +children for those trials in which a bridge hand of 5 spades had already +been dealt, `k` will have the number of trials out of 10000 in which +both conditions were met. 
+ +```{r} +N <- 10000 +trial_results <- numeric(N) + +# Deck with 13 spades and 39 other cards +deck <- rep(c('spade', 'others'), c(13, 52 - 13)) + +for (i in 1:N) { + # Shuffle deck and draw 13 cards + hand <- sample(deck, 13) # replace=FALSE is default + + n_spades <- sum(hand == 'spade') + + if (n_spades >= 5) { + # Generate a family, zeros for boys, ones for girls + children <- sample(c('girl', 'boy'), 5, replace=TRUE) + n_girls <- sum(children == 'girl') + trial_results[i] <- n_girls + } +} + +k <- sum(trial_results == 4) + +kk <- k / N + +print(kk) +``` + +Here is an alternative approach to the same problem, but getting the +result at the end of the loop, by combining Boolean vectors (see +sec-combine-booleans). + +```{r} +N <- 10000 +trial_spades <- numeric(N) +trial_girls <- numeric(N) + +# Deck with 13 spades and 39 other cards +deck <- rep(c('spade', 'other'), c(13, 39)) + +for (i in 1:N) { + # Shuffle deck and draw 13 cards + hand <- sample(deck, 13) # replace=FALSE is default + # Count and store the number of spades. + n_spades <- sum(hand == 'spade') + trial_spades[i] <- n_spades + + # Generate a family, zeros for boys, ones for girls + children <- sample(c('girl', 'boy'), 5, replace=TRUE) + # Count and store the number of girls. + n_girls <- sum(children == 'girl') + trial_girls[i] <- n_girls +} + +k <- sum((trial_spades >= 5) & (trial_girls == 4)) + +kk <- k / N + +# Show the result +message(kk) +``` diff --git a/r-book/notebooks/four_girls_one_boy.Rmd b/r-book/notebooks/four_girls_one_boy.Rmd new file mode 100644 index 00000000..55834c83 --- /dev/null +++ b/r-book/notebooks/four_girls_one_boy.Rmd @@ -0,0 +1,60 @@ +# Four girls and one boy + + +What is the probability of selecting four girls and one boy when +selecting five students from any group of twenty-five girls and +twenty-five boys? + +```{r} +N <- 10000 +trial_results <- numeric(N) + +# Constitute the set of 25 girls and 25 boys. +whole_class <- rep(c('girl', 'boy'), c(25, 25)) + +# Repeat the following steps N times. +for (i in 1:N) { + + # Shuffle the numbers + shuffled <- sample(whole_class) + + # Take the first 5 numbers, call them c. + c <- shuffled[1:5] + + # Count how many girls there are, put the result in d. + d <- sum(c == 'girl') + + # Keep track of each trial result in z. + trial_results[i] <- d + + # End the experiment, go back and repeat until all 1000 trials are + # complete. +} + +# Count the number of times we got four girls, put the result in k. +k <- sum(trial_results == 4) + +# Convert to a proportion. +kk <- k / N + +# Print the result. +message(kk) +``` + +We can also find the probabilities of other outcomes from a histogram of +trial results obtained with the following command: + +```{r} +# Do histogram, with one bin for each possible number. +hist(trial_results, breaks=0:max(trial_results), main='# of girls') +``` + +In the resulting histogram we can see that in 15 percent of the trials, +4 of the 5 selected were girls. + +It should be noted that for this problem — as for most other problems — +there are several other resampling procedures that will also do the job +correctly. 
+ +In analytic probability theory this problem is worked with a formula for +“combinations.” diff --git a/r-book/notebooks/four_girls_then_one_boy_25.Rmd b/r-book/notebooks/four_girls_then_one_boy_25.Rmd new file mode 100644 index 00000000..4d92f045 --- /dev/null +++ b/r-book/notebooks/four_girls_then_one_boy_25.Rmd @@ -0,0 +1,170 @@ +# Four girls then one boy from 25/25 + + +**In this problem, order matters; we are sampling without replacement, +with two outcomes, several of each item.** + +What is the probability of getting an ordered series of *four girls and +then one boy* , from a universe of 25 girls and 25 boys? This +illustrates Case 3 above. Clearly we can use the same sampling mechanism +as in the example +sec-four-girls-one-boy, but now +we record “yes” for a smaller number of composite events. + +We record “no” even if a single one boy is chosen but he is chosen 1st, +2nd, 3rd, or 4th, whereas in +sec-four-girls-one-boy, such +outcomes are recorded as “yes”-es. + +- **Step 1.** Generate a class (vector) of length 50, consisting of 25 + strings valued “boy” and 25 strings valued “girl”. +- **Step 2.** Shuffle the class array, and select the first five + elements. +- **Step 3.** If the first five elements are exactly + `'girl', 'girl', 'girl', 'girl', 'boy'`, write “yes,” otherwise + “no.” +- **Step 4.** Repeat steps 2 and 3, say, 10,000 times, and count the + proportion of “yes” results, which estimates the probability sought. + +Let us start the single trial procedure like so: + +```{r} +# Constitute the set of 25 girls and 25 boys. +whole_class <- rep(c('girl', 'boy'), c(25, 25)) + +# Shuffle the class into a random order. +shuffled <- sample(whole_class) +# Take the first 5 class members, call them c. +c <- shuffled[1:5] +# Show the result. +c +``` + +Our next step (step 3) is to check whether `c` is exactly equal to the +result of interest. The result of interest is: + +```{r} +# The result we are looking for - four girls and then a boy. +result_of_interest <- rep(c('girl', 'boy'), c(4, 1 )) +result_of_interest +``` + +We can then use a vector *comparison* with `==` to do an element by +element (*elementwise*) check, asking whether the corresponding elements +are equal: + +```{r} +# A Boolean array, with True where corresponding elements are equal, False +# otherwise. +are_equal <- c == result_of_interest +are_equal +``` + +We are nearly finished with step 3 — it only remains to check whether +*all* of the elements were equal, by checking whether *all* of the +values in `are_equal` are `TRUE`. + +We know that there are 5 elements, so we could check whether there are 5 +`TRUE` values with `sum`: + +```{r} +# Are there exactly 5 TRUE values in `are_equal`? +sum(are_equal) == 5 +``` + +Another way to ask the same question is by using the `all` function on +`are_equal`. This returns `TRUE` if *all* the elements in `are_equal` +are `TRUE`, and `FALSE` otherwise. + +
+ +
+ +
+ + + +
+ +
+ +Testing whether all elements of a vector are the same + +
+ +
+ +
+ +The `all`, applied to a Boolean vector (as here), checks whether *all* +of the elements in the Boolean vector are `TRUE`. If so, it returns +`TRUE`, otherwise, it returns `FALSE`. + +For example: + +```{r} +# All elements are TRUE, `all` returns TRUE +all(c(TRUE, TRUE, TRUE, TRUE)) +``` + +```{r} +# At least one element is FALSE, `all` returns FALSE +all(c(TRUE, TRUE, FALSE, TRUE)) +``` + +
+ +
+ +Here is the full procedure for steps 2 and 3 (a single trial): + +```{r} +# Shuffle the class into a random order. +shuffled <- sample(whole_class) +# Take the first 5 class members, call them c. +c <- shuffled[1:5] +# For each element, test whether the result is the result of interest. +are_equal <- c == result_of_interest +# Check whether we have the result we are looking for. +is_four_girls_then_one_boy <- all(are_equal) +``` + +All that remains is to put the single trial procedure into a loop. + +```{r} +N <- 10000 +trial_results <- numeric(N) + +# Repeat the following steps 1000 times. +for (i in 1:N) { + + # Shuffle the class into a random order. + shuffled <- sample(whole_class) + # Take the first 5 class members, call them c. + c <- shuffled[1:5] + # For each element, test whether the result is the result of interest. + are_equal <- c == result_of_interest + # Check whether we have the result we are looking for. + is_four_girls_then_one_boy <- all(are_equal) + + # Store the result of this trial. + trial_results[i] <- is_four_girls_then_one_boy + + # End the experiment, go back and repeat until all N trials are + # complete. +} + +# Count the number of times we got four girls then a boy +k <- sum(trial_results) + +# Convert to a proportion. +kk <- k / N + +# Print the result. +message(kk) +``` + +This type of problem is conventionally done with a *permutation* +formula. diff --git a/r-book/notebooks/framingham_hearts.Rmd b/r-book/notebooks/framingham_hearts.Rmd new file mode 100644 index 00000000..9ca57c9a --- /dev/null +++ b/r-book/notebooks/framingham_hearts.Rmd @@ -0,0 +1,48 @@ +# Framingham heart data + + +We use simulation to investigate the relationship between serum +cholesterol and heart attacks in the Framingham data. + +```{r} +n <- 10000 + +men <- rep(c('infarction', 'no infarction'), c(31, 574)) + +n_high <- 135 # Number of men with high cholesterol +n_low <- 470 # Number of men with low cholesterol + +infarct_differences <- numeric(n) + +for (i in 1:n) { + highs <- sample(men, size=n_high, replace=TRUE) + lows <- sample(men, size=n_low, replace=TRUE) + high_infarcts <- sum(highs == 'infarction') + low_infarcts <- sum(lows == 'infarction') + high_prop <- high_infarcts / n_high + low_prop <- low_infarcts / n_low + infarct_differences[i] <- high_prop - low_prop +} + +hist(infarct_differences, breaks=seq(-0.1, 0.1, by=0.005), + main='Infarct proportion differences in null universe') + +# How often was the resampled difference >= the observed difference? +k <- sum(infarct_differences >= 0.029) +# Convert this result to a proportion +kk <- k / n + +message('Proportion of trials with difference >= observed: ', + round(kk, 2)) +``` + +The results of the test using this program may be seen in the histogram. +We find — perhaps surprisingly — that a difference as large as observed +would occur by chance around 10 percent of the time. (If we were not +guided by the theoretical expectation that high serum cholesterol +produces heart disease, we might include the 10 percent difference going +in the other direction, giving a 20 percent chance). Even a ten percent +chance is sufficient to call into question the conclusion that high +serum cholesterol is dangerous. At a minimum, this statistical result +should call for more research before taking any strong action clinically +or otherwise. 
diff --git a/r-book/notebooks/fruit_fly.Rmd b/r-book/notebooks/fruit_fly.Rmd new file mode 100644 index 00000000..026ec14c --- /dev/null +++ b/r-book/notebooks/fruit_fly.Rmd @@ -0,0 +1,56 @@ +# Fruit fly simulation + + +This notebook uses simulation to test the null hypothesis that it is +equally likely that new fruit files are male or female. + +```{r} +# Set the number of trials +n_trials <- 10000 + +# set the sample size for each trial +sample_size <- 20 + +# An empty array to store the trials +scores <- numeric(n_trials) + +# Do 1000 trials +for (i in 1:n_trials) { + # Generate 20 simulated fruit flies, where each has an equal chance of being + # male or female + a <- sample(c('male', 'female'), size = sample_size, prob = c(0.5, 0.5), + replace = TRUE) + + # count the number of males in the sample + b <- sum(a == 'male') + + # store the result of this trial + scores[i] <- b +} + +# Produce a histogram of the trial results +title_of_plot <- paste0("Number of males in", n_trials, " samples of \n", sample_size, " simulated fruit flies") +hist(scores, xlab = 'Number of Males', main = title_of_plot) +``` + +In the histogram above, we see that in 16 percent of the trials, the +number of males was 14 or more, or 6 or fewer. Or instead of reading the +results from the histogram, we can calculate the result by tacking on +the following commands to the above program: + +```{r} +# Determine the number of trials in which we had 14 or more males. +j <- sum(scores >= 14) + +# Determine the number of trials in which we had 6 or fewer males. +k <- sum(scores <= 6) + +# Add the two results together. +m <- j + k + +# Convert to a proportion. +mm <- m/n_trials + +# Print the results. +print(mm) +``` diff --git a/r-book/notebooks/gold_silver_booleans.Rmd b/r-book/notebooks/gold_silver_booleans.Rmd new file mode 100644 index 00000000..2f5c6bdc --- /dev/null +++ b/r-book/notebooks/gold_silver_booleans.Rmd @@ -0,0 +1,202 @@ +# Another approach to ships with gold and silver + + +This notebook is a variation on the problem with gold and silver chests +in ships. It shows how we can count and tally the results at the end, +rather than in the trial itself. + +Notice that the first part of the code is identical to the first +approach to this problem. There are two key differences — see the +comments for an explanation. + +```{r} +# The 3 buckets, each representing two chests on a ship. +# As before. +bucket1 <- c('Gold', 'Gold') # Chests in first ship. +bucket2 <- c('Gold', 'Silver') # Chests in second ship. +bucket3 <- c('Silver', 'Silver') # Chests in third ship. +``` + +```{r} +# Here is where the difference starts. We are now going to fill in +# the result for the first chest _and_ the result for the second chest. +# +# Later we will fill in all these values, so the string we put here +# does not matter. + +# Whether the first chest was Gold or Silver. +first_chests <- rep('To be announced', 10000) +second_chests <- rep('To be announced', 10000) + +for (i in 1:10000) { + # Select a ship at random from the three ships. + # As before. + ship_no <- sample(1:3, size=1) + # Get the chests from this ship. + # As before. + if (ship_no == 1) { + bucket <- bucket1 + } + if (ship_no == 2) { + bucket <- bucket2 + } + if (ship_no == 3) { + bucket <- bucket3 + } + + # As before. + shuffled <- sample(bucket) + + # Here is the big difference - we store the result for the first and second + # chests. + first_chests[i] <- shuffled[1] + second_chests[i] <- shuffled[2] +} # End loop, go back to beginning. 
+ +# We will do the calculation we need in the next cell. For now +# just display the first 10 values. +ten_first_chests <- first_chests[1:10] +message('The first 10 values of "first_chests:') +``` + +```{r} +print(ten_first_chests) +``` + +```{r} +ten_second_chests <- second_chests[1:10] +message('The first 10 values of "second_chests:') +``` + +```{r} +print(ten_second_chests) +``` + +In this variant, we recorded the type of first chest for each trial +(“Gold” or “Silver”), and the type of second chest of the second chest +(“Gold” or “Silver”). + +**We would like to count the number of times there was “Gold” in the +first chest *and* “Gold” in the second.** + +## 10.5 Combining Boolean arrays + +We can do the count we need by *combining* the Boolean vectors with the +`&` operator. `&` combines Boolean vectors with a *logical and*. +*Logical and* is a rule for combining two Boolean values, where the rule +is: the result is `TRUE` if the first value is `TRUE` *and* the second +value if `TRUE`. + +Here we use the `&` *operator* to combine some Boolean values on the +left and right of the operator: + +Above you saw that the `==` operator (as in `== 'Gold'`), when applied +to vectors, asks the question of every element in the vector. + +First make the Boolean vectors. + +```{r} +ten_first_gold <- ten_first_chests == 'Gold' +message("Ten first == 'Gold'") +``` + +```{r} +print(ten_first_gold) +``` + +```{r} +ten_second_gold <- ten_second_chests == 'Gold' +message("Ten second == 'Gold'") +``` + +```{r} +print(ten_second_gold) +``` + +Now let us use `&` to combine Boolean vectors: + +```{r} +ten_both <- (ten_first_gold & ten_second_gold) +ten_both +``` + +Notice that R does the comparison *elementwise* — element by element. + +You saw that when we did `second_chests == 'Gold'` this had the effect +of asking the `== 'Gold'` question of *each element*, so there will be +one answer per element in `second_chests`. In that case there was a +vector to the *left* of `==` and a single value to the *right*. We were +comparing a vector to a value. + +Here we are asking the `&` question of `ten_first_gold` and +`ten_second_gold`. Here there is a vector to the *left* and a vector to +the *right*. We are asking the `&` question 10 times, but the first +question we are asking is: + +```{r} +# First question, giving first element of result. +(ten_first_gold[1] & ten_second_gold[1]) +``` + +The second question is: + +```{r} +# Second question, giving second element of result. +(ten_first_gold[2] & ten_second_gold[2]) +``` + +and so on. We have ten elements on *each side*, and 10 answers, giving a +vector (`ten_both`) of 10 elements. Each element in `ten_both` is the +answer to the `&` question for the elements at the corresponding +positions in `ten_first_gold` and `ten_second_gold`. + +We could also create the Boolean vectors and do the `&` operation all in +one step, like this: + + + +Remember, we wanted the answer to the question: how many trials had +“Gold” in the first chest *and* “Gold” in the second. 
We can answer that +question for the first 10 trials with `sum`: + +```{r} +n_ten_both <- sum(ten_both) +n_ten_both +``` + +We can answer the same question for *all* the trials, in the same way: + +```{r} +first_gold <- first_chests == 'Gold' +second_gold <- second_chests == 'Gold' +n_both_gold <- sum(first_gold & second_gold) +n_both_gold +``` + +We could also do the same calculation all in one line: + +```{r} +n_both_gold <- sum((first_chests == 'Gold') & (second_chests == 'Gold')) +n_both_gold +``` + +We can then count all the ships where the first chest was gold: + +```{r} +n_first_gold <- sum(first_chests == 'Gold') +n_first_gold +``` + +The final calculation is the proportion of second chests that are gold, +given the first chest was also gold: + +```{r} +p_g_given_g <- n_both_gold / n_first_gold +p_g_given_g +``` + +Of course we won’t get exactly the same results from the two +simulations, in the same way that we won’t get exactly the same results +from any two runs of the same simulation, because of the random values +we are using. But the logic for the two simulations are the same, and we +are doing many trials (10,000), so the results will be very similar. diff --git a/r-book/notebooks/gold_silver_ships.Rmd b/r-book/notebooks/gold_silver_ships.Rmd new file mode 100644 index 00000000..eb572206 --- /dev/null +++ b/r-book/notebooks/gold_silver_ships.Rmd @@ -0,0 +1,51 @@ +# Ships with gold and silver + + +In which we solve the problem of gold and silver chests in a discovered +ship. + +```{r} +# The 3 buckets. Each bucket represents a ship. Each has two chests. +bucket1 <- c('Gold', 'Gold') # Chests in first ship. +bucket2 <- c('Gold', 'Silver') # Chests in second ship. +bucket3 <- c('Silver', 'Silver') # Chests in third ship. +``` + +```{r} +# Mark trials as not valid to start with. +# Trials where we don't get a gold chest first will +# keep this 'No gold in chest 1, chest 2 never opened' marker. +second_chests <- rep('No gold in chest 1, chest 2 never opened', 10000) + +for (i in 1:10000) { + # Select a ship at random from the three ships. + ship_no <- sample(1:3, size=1) + # Get the chests from this ship (represented by a bucket). + if (ship_no == 1) { + bucket <- bucket1 + } + if (ship_no == 2) { + bucket <- bucket2 + } + if (ship_no == 3) { + bucket <- bucket3 + } + + # We shuffle the order of the chests in this ship, to simulate + # the fact that we don't know which of the two chests we have + # found first. + shuffled <- sample(bucket) + + if (shuffled[1] == 'Gold') { # We found a gold chest first. + # Store whether the Second chest was silver or gold. + second_chests[i] <- shuffled[2] + } +} # End loop, go back to beginning. + +# Number of times we found gold in the second chest. +n_golds <- sum(second_chests == 'Gold') +# Number of times we found silver in the second chest. +n_silvers <- sum(second_chests == 'Silver') +# As a ratio of golds to all second chests (where the first was gold). +message(n_golds / (n_golds + n_silvers)) +``` diff --git a/r-book/notebooks/liquor_prices.Rmd b/r-book/notebooks/liquor_prices.Rmd new file mode 100644 index 00000000..56b2a7e1 --- /dev/null +++ b/r-book/notebooks/liquor_prices.Rmd @@ -0,0 +1,50 @@ +# Public and private liquor prices + + +This notebook asks the question whether the difference in the means of +private and government-specified prices of a particular whiskey could +plausibly have come about as a result of random sampling. 
+ +```{r} +fake_diffs <- numeric(10000) + +priv <- c(4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54, + 4.75, 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75, + 5.20, 5.10, 4.80, 4.29) + +govt <- c(4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00, 4.19, 4.75, 4.74, + 4.50, 4.10, 4.00, 5.05, 4.20) + +actual_diff <- mean(priv) - mean(govt) + +# Join the two vectors of data +both <- c(priv, govt) + +# Repeat 10000 simulation trials +for (i in 1:10000) { + + # Sample 26 with replacement for private group + fake_priv <- sample(both, size=26, replace=TRUE) + + # Sample 16 with replacement for govt. group + fake_govt <- sample(both, size=16, replace=TRUE) + + # Find the mean of the "private" group. + p <- mean(fake_priv) + + # Mean of the "govt." group + g <- mean(fake_govt) + + # Difference in the means + diff <- p - g + + # Keep score of the trials + fake_diffs[i] <- diff +} + +# Graph of simulation results to compare with the observed result. +fig_title <- paste('Average price difference (Actual difference = ', + round(actual_diff * 100), + 'cents') +hist(fake_diffs, main=fig_title, xlab='Difference in average prices (cents)') +``` diff --git a/r-book/notebooks/monty_hall.Rmd b/r-book/notebooks/monty_hall.Rmd new file mode 100644 index 00000000..49dfe030 --- /dev/null +++ b/r-book/notebooks/monty_hall.Rmd @@ -0,0 +1,452 @@ +# The Monty Hall problem + + +Here we do a R simulation of the Monty Hall problem. + +The Monty Hall problem has a slightly complicated structure, so we will +start by looking at the procedure for one trial. When we have that +clear, we will put that procedure into a `for` loop for the simulation. + +Let’s start with some variables. Let’s call the door I choose `my_door`. + +We choose that door at random from a sequence of all possible doors. +Call the doors 1, 2 and 3 from left to right. + +```{r} +# Vector of doors to chose from. +doors = c(1, 2, 3) + +# We choose one door at random. +my_door <- sample(doors, size=1) + +# Show the result +my_door +``` + +We choose one of the doors to be the door with the car behind it: + +```{r} +# One door at random has the car behind it. +car_door <- sample(doors, size=1) + +# Show the result +car_door +``` + +Now we need to decide which door Monty will open. + +By our set up, Monty cannot open our door (`my_door`). By the set up, he +has not opened (and cannot open) the door with the car behind it +(`car_door`). + +`my_door` and `car_door` might be the same. + +So, to get Monty’s choices, we want to take all doors (`doors`) and +remove `my_door` and `car_door`. That leaves the door or doors Monty can +open. + +Here are the doors Monty cannot open. Remember, a third of the time +`my_door` and `car_door` will be the same, so we will include the same +door twice, as doors Monty can’t open. + +```{r} +cant_open = c(my_door, car_door) +cant_open +``` + +We want to find the remaining doors from `doors` after removing the +doors named in `cant_open`. + +R has a good function for this, called `setdiff`. It calculates the *set +difference* between two sequences, such as vectors. + +The set difference between two sequences is the members that *are* in +the first sequence, but are *not* in the second sequence. Here are a few +examples of this set difference function in R. + +```{r} +# Members in c(1, 2, 3) that are *not* in c(1) +# 1, 2, 3, removing 1, if present. +setdiff(c(1, 2, 3), c(1)) +``` + +```{r} +# Members in c(1, 2, 3) that are *not* in c(2, 3) +# 1, 2, 3, removing 2 and 3, if present. 
+setdiff(c(1, 2, 3), c(2, 3)) +``` + +```{r} +# Members in c(1, 2, 3) that are *not* in c(2, 2) +# 1, 2, 3, removing 2 and 2 again, if present. +setdiff(c(1, 2, 3), c(2, 2)) +``` + +This logic allows us to choose the doors Monty can open: + +```{r} +montys_choices <- setdiff(doors, c(my_door, car_door)) +montys_choices +``` + +Notice that `montys_choices` will only have one element left when +`my_door` and `car_door` were different, but it will have two elements +if `my_door` and `car_door` were the same. + +Let’s play out those two cases: + +```{r} +my_door <- 1 # For example. +car_door <- 2 # For example. +# Monty can only choose door 3 now. +montys_choices <- setdiff(doors, c(my_door, car_door)) +montys_choices +``` + +```{r} +my_door <- 1 # For example. +car_door <- 1 # For example. +# Monty can choose either door 2 or door 3. +montys_choices <- setdiff(doors, c(my_door, car_door)) +montys_choices +``` + +If Monty can only choose one door, we’ll take that. Otherwise we’ll +chose a door at random from the two doors available. + +```{r} +if (length(montys_choices) == 1) { # Only one door available. + montys_door <- montys_choices[1] # Take the first (of 1!). +} else { # Two doors to choose from: + # Choose at random. + montys_door <- sample(montys_choices, size=1) +} +montys_door +``` + +Now we know Monty’s door, we can identify the other door, by removing +our door, and Monty’s door, from the available options: + +```{r} +remaining_doors <- setdiff(doors, c(my_door, montys_door)) +# There is only one remaining door, take that. +other_door <- remaining_doors[1] +other_door +``` + +The logic above gives us the full procedure for one trial. + +```{r} +my_door <- sample(doors, size=1) +car_door <- sample(doors, size=1) +# Which door will Monty open? +montys_choices <- setdiff(doors, c(my_door, car_door)) +# Choose single door left to choose, or door at random if two. +if (length(montys_choices) == 1) { # Only one door available. + montys_door <- montys_choices[1] # Take the first (of 1!). +} else { # Two doors to choose from: + # Choose at random. + montys_door <- sample(montys_choices, size=1) +} +# Now find the door we'll open if we switch. +# There is only one door left. +remaining_doors <- setdiff(doors, c(my_door, montys_door)) +other_door <- remaining_doors[1] +# Calculate the result of this trial. +if (my_door == car_door) { + stay_wins <- TRUE +} +if (other_door == car_door) { + switch_wins <- TRUE +} +``` + +All that remains is to put that trial procedure into a loop, and collect +the results as we repeat the procedure many times. + +```{r} +# Vectors to store the results for each trial. +stay_wins <- rep(FALSE, 10000) +switch_wins <- rep(FALSE, 10000) + +# Doors to chose from. +doors <- c(1, 2, 3) + +for (i in 1:10000) { + # You will recognize the below as the single-trial procedure above. + my_door <- sample(doors, size=1) + car_door <- sample(doors, size=1) + # Which door will Monty open? + montys_choices <- setdiff(doors, c(my_door, car_door)) + # Choose single door left to choose, or door at random if two. + if (length(montys_choices) == 1) { # Only one door available. + montys_door <- montys_choices[1] # Take the first (of 1!). + } else { # Two doors to choose from: + # Choose at random. + montys_door <- sample(montys_choices, size=1) + } + # Now find the door we'll open if we switch. + # There is only one door left. + remaining_doors <- setdiff(doors, c(my_door, montys_door)) + other_door <- remaining_doors[1] + # Calculate the result of this trial. 
+ if (my_door == car_door) { + stay_wins[i] <- TRUE + } + if (other_door == car_door) { + switch_wins[i] <- TRUE + } +} + +p_for_stay <- sum(stay_wins) / 10000 +p_for_switch <- sum(switch_wins) / 10000 + +message('p for stay: ', p_for_stay) +``` + +```{r} +message('p for switch: ', p_for_switch) +``` + +We can also follow the same strategy as we used for the second +implementation of the two-ships problem +(sec-ships-booleans). + +Here, as in the second two-ships implementation, we do not calculate the +trial results (`stay_wins`, `switch_wins`) in each trial. Instead, we +store the *doors* for each trial, and then use Boolean vectors to +calculate the results for all trials, at the end. + +```{r} +# Instead of storing the trial results, we store the doors for each trial. +my_doors <- numeric(10000) +car_doors <- numeric(10000) +other_doors <- numeric(10000) + +# Doors to chose from. +doors <- c(1, 2, 3) + +for (i in 1:10000) { + my_door <- sample(doors, size=1) + car_door <- sample(doors, size=1) + # Which door will Monty open? + montys_choices <- setdiff(doors, c(my_door, car_door)) + # Choose single door left to choose, or door at random if two. + if (length(montys_choices) == 1) { # Only one door available. + montys_door <- montys_choices[1] # Take the first (of 1!). + } else { # Two doors to choose from: + # Choose at random. + montys_door <- sample(montys_choices, size=1) + } + # Now find the door we'll open if we switch. + # There is only one door left. + remaining_doors <- setdiff(doors, c(my_door, montys_door)) + other_door <- remaining_doors[1] + + # Store the doors we chose. + my_doors[i] <- my_door + car_doors[i] <- car_door + other_doors[i] <- other_door +} + +# Now - at the end of all the trials, we use Boolean vectors to calculate the +# results. +stay_wins <- my_doors == car_doors +switch_wins <- other_doors == car_doors + +p_for_stay <- sum(stay_wins) / 10000 +p_for_switch <- sum(switch_wins) / 10000 + +message('p for stay: ', p_for_stay) +``` + +```{r} +message('p for switch: ', p_for_switch) +``` + +### 10.7.1 Insight from the Monty Hall simulation + +The code simulation gives us an estimate of the right answer, but it +also forces us to set out the exact mechanics of the problem. For +example, by looking at the code, we see that we can calculate +“stay_wins” with this code alone: + +```{r} +# Just choose my door and the car door for each trial. +my_doors <- numeric(10000) +car_doors <- numeric(10000) +doors <- c(1, 2, 3) + +for (i in 1:10000) { + my_doors[i] <- sample(doors, size=1) + car_doors[i] <- sample(doors, size=1) +} + +# Calculate whether I won by staying. +stay_wins <- my_doors == car_doors +p_for_stay <- sum(stay_wins) / 10000 + +message('p for stay: ', p_for_stay) +``` + +This calculation, on its own, tells us the answer, but it also points to +another insight — whatever Monty does with the doors, it doesn’t change +the probability that our *initial guess* is right, and that must be 1 in +3 (0.333). If the probability of `stay_win` is 1 in 3, and we only have +one other door to switch to, the probability of winning after switching +must be 2 in 3 (0.666). + +### 10.7.2 Simulation and a variant of Monty Hall + +You have seen that you can avoid the silly mistakes that many of us make +with probability — by asking the computer to tell you the result +*before* you start to reason from first principles. + +As an example, consider the following variant of the Monty Hall problem. 
+
+The set up to the problem has us choosing a door (`my_door` above), and
+then Monty opens one of the other two doors.
+
+Sometimes (in fact, 2/3 of the time) there is a car behind one of
+Monty’s doors. We’ve obliged Monty to open the *other* door, and his
+choice is forced.
+
+When his choice was not forced, we had Monty choose the door at random.
+
+For example, let us say we chose door 1.
+
+Let us say that the car is also under door 1.
+
+Monty has the option of choosing door 2 or door 3, and he chooses
+randomly between them.
+
+```{r}
+my_door <- 1 # We chose door 1 at random.
+car_door <- 1 # This trial, by chance, the car door is 1.
+# Monty is left with doors 2 and 3 to choose from.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+# He chooses randomly.
+montys_door <- sample(montys_choices, size=1)
+# Show the result
+montys_door
+```
+
+Now — let us say we happen to know that Monty is rather lazy, and he
+will always choose the left-most (lower-numbered) door of the two
+options.
+
+In the previous example, Monty had the option of choosing doors 2 and 3.
+In this new scenario, we know that he will always choose door 2 (the
+left-most door).
+
+```{r}
+my_door <- 1 # We chose door 1 at random.
+car_door <- 1 # This trial, by chance, the car door is 1.
+# Monty is left with doors 2 and 3 to choose from.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+# He chooses the left-most door, always.
+montys_door <- montys_choices[1]
+# Show the result
+montys_door
+```
+
+It feels as if we have more information about where the car is, when we
+know this. Consider the situation where we have chosen door 1, and Monty
+opens door 3. We know that he would have preferred to open door 2, if he
+was allowed. We therefore know he wasn’t allowed to open door 2, and
+that means the car is definitely under door 2.
+
+```{r}
+my_door <- 1 # We chose door 1 at random.
+car_door <- 2 # This trial, by chance, the car door is 2.
+# Monty is left with door 3 only to choose from.
+montys_choices <- setdiff(doors, c(my_door, car_door))
+# He chooses the left-most door, always. But in this case, the left-most
+# available door is 3 (he can't choose 2, it is the car_door).
+# Notice the doors were in order, so the left-most door is the first door
+# in the vector.
+montys_door <- montys_choices[1]
+# Show the result
+montys_door
+```
+
+To take that into account, we might try a different strategy. We will
+stick to our own choice if Monty has chosen the left-most of the two
+doors he had available to him, because he might have chosen that door
+because there was a car underneath the other door, or because there was
+a car under neither, but he preferred the left door. But, if Monty
+chooses the right-most of the two doors available to him, we will switch
+from our own choice to the other (unopened) door, because we can be sure
+that the car is under the other (unopened) door.
+
+Call this the “switch if Monty chooses right door” strategy, or “switch
+if right” for short.
+
+Can you see quickly whether this will be better than the “always stay”
+strategy? Will it be better than the “always switch” strategy? Take a
+moment to think it through, and write down your answers.
+
+If you can quickly see the answer to both questions — well done — but,
+are you sure you are right?
+
+We can test by simulation.
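As a quick check of the logic before the full test, here is a small sketch (an addition of ours, not part of the original notebook) that fixes our choice at door 1, plays out the three equally likely positions of the car, and shows which door the lazy Monty opens in each case. Notice that he opens door 3 only when the car is behind door 2.

```{r}
# A sketch (our addition): with our choice fixed at door 1, play out each
# possible car position and see which door the lazy Monty opens.
doors <- c(1, 2, 3)
my_door <- 1
for (car_door in doors) {
  montys_choices <- setdiff(doors, c(my_door, car_door))
  montys_door <- montys_choices[1]  # Lazy Monty takes the left-most available.
  message('Car behind door ', car_door, ': Monty opens door ', montys_door)
}
```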
+ +For our test of the “switch is right” strategy, we can tell if one door +is to the right of another door by comparison; higher numbers mean +further to the right: 2 is right of 1, and 3 is right of 2. + +```{r} +# Door 3 is right of door 1. +3 > 1 +``` + +```{r} +# A test of the switch-if-right strategy. +# The car doors. +car_doors <- numeric(10000) +# The door we chose using the strategy. +strategy_doors <- numeric(10000) + +doors <- c(1, 2, 3) + +for (i in 1:10000) { + my_door <- sample(doors, size=1) + car_door <- sample(doors, size=1) + # Which door will Monty open? + montys_choices <- setdiff(doors, c(my_door, car_door)) + # Choose Monty's door from the remaining options. + # This time, he always prefers the left door. + montys_door <- montys_choices[1] + # Now find the door we'll open if we switch. + remaining_doors <- setdiff(doors, c(my_door, montys_door)) + # There is only one door remaining - but is Monty's door + # to the right of this one? Then Monty had to shift. + other_door <- remaining_doors[1] + if (montys_door > other_door) { + # Monty's door was the right-hand door, the car is under the other one. + strategy_doors[i] <- other_door + } else { # We stick with the door we first thought of. + strategy_doors[i] <- my_door + } + # Store the car door for this trial. + car_doors[i] <- car_door +} + +strategy_wins <- strategy_doors == car_doors + +p_for_strategy <- sum(strategy_wins) / 10000 + +message('p for strategy: ', p_for_strategy) +``` + +We find that the “switch-if-right” has around the same chance of success +as the “always-switch” strategy — of about 66.6%, or 2 in 3. Were your +initial answers right? Now you’ve seen the result, can you see why it +should be so? It may not be obvious — the Monty Hall problem is +deceptively difficult. But our case here is that the simulation first +gives you an estimate of the correct answer, and then, gives you a good +basis for thinking more about the problem. That is: + +- simulation is useful for estimation and +- simulation is useful for reflection. diff --git a/r-book/notebooks/one_pair.Rmd b/r-book/notebooks/one_pair.Rmd new file mode 100644 index 00000000..8bb5da0f --- /dev/null +++ b/r-book/notebooks/one_pair.Rmd @@ -0,0 +1,51 @@ +# One pair + + +This is a simulation to find the probability of exactly one pair in a +poker hand of five cards. + +```{r} +# Create a bucket (vector) called a with four "1's," four "2's," four "3's," +# etc., to represent a deck of cards +one_suit = 1:13 +one_suit +``` + +```{r} +# Repeat values for one suit four times to make a 52 card deck of values. +deck <- rep(one_suit, 4) +deck +``` + +```{r} +# Vector to store result of each trial. +z <- numeric(10000) + +# Repeat the following steps 10000 times +for (i in 1:10000) { + # Shuffle the deck + shuffled <- sample(deck) + + # Take the first five cards to make a hand. + hand = shuffled[1:5] + + # How many pairs? + # Counts for each card rank. + repeat_nos <- tabulate(hand) + n_pairs <- sum(repeat_nos == 2) + + # Keep score of # of pairs + z[i] <- n_pairs + + # End loop, go back and repeat +} + +# How often was there 1 pair? +k <- sum(z == 1) + +# Convert to proportion. +kk = k / 10000 + +# Show the result. 
+message(kk) +``` diff --git a/r-book/notebooks/pennies.Rmd b/r-book/notebooks/pennies.Rmd new file mode 100644 index 00000000..44315bd9 --- /dev/null +++ b/r-book/notebooks/pennies.Rmd @@ -0,0 +1,87 @@ +# Simulating the pennies game + + +This notebook calculates the probability that one player will run out of +pennies within 200 turns of the Pennies game. + +```{r} +someone_won <- numeric(10000) + +# Do 10000 trials +for (i in 1:10000) { + + # Record the number 10: a's stake + a_stake <- 10 + + # Same for b + b_stake <- 10 + + # An indicator flag that will be set to "1" when somebody wins. + flag <- 0 + + # Repeat the following steps 200 times. + # Notice we use "j" as the counter variable, to avoid overwriting + # "i", the counter variable for the 10000 trials. + for (j in 1:200) { + # Generate the equivalent of a coin flip, letting 1 <- heads, + # 2 <- tails + c <- sample(1:2, size=1) + + # If it's a heads + if (c == 1) { + + # Add 1 to b's stake + b_stake <- b_stake + 1 + + # Subtract 1 from a's stake + a_stake <- a_stake - 1 + + # End the "if" condition + } + + # If it's a tails + if (c == 2) { + + # Add one to a's stake + a_stake <- a_stake + 1 + + # Subtract 1 from b's stake + b_stake <- b_stake - 1 + + # End the "if" condition + } + + # If a has won + if (a_stake == 20) { + + # Set the indicator flag to 1 + flag <- 1 + } + + # If b has won + if (b_stake == 20) { + + # Set the indicator flag to 1 + flag <- 1 + + } + + # End the repeat loop for 200 plays (note that the indicator flag stays + # at 0 if neither a nor b has won) + } + + # Keep track of whether anybody won. + someone_won[i] <- flag + + # End the 10000 trials +} + +# Find out how often somebody won +n_wins <- sum(someone_won) + +# Convert to a proportion +prop_wins <- n_wins / 10000 + +# Print the results +message(prop_wins) +``` diff --git a/r-book/notebooks/pig_rations.Rmd b/r-book/notebooks/pig_rations.Rmd new file mode 100644 index 00000000..af2af827 --- /dev/null +++ b/r-book/notebooks/pig_rations.Rmd @@ -0,0 +1,84 @@ +# Weight gain on pig rations + + +We do a simulation of weight gain ranks for two different pig rations. + +The `ranks <- 1:24` statement creates a vector of numbers 1 through 24, +which will represent the rankings of weight gains for each of the 24 +pigs. We repeat the following procedure for 10000 trials. First we +shuffle the elements of vector `ranks` so that the rank numbers for +weight gains are randomized and placed in vector `shuffled`. We then +select the first 12 elements of `shuffled` and place them in `first_12`; +this represents the rankings of a randomly-selected group of 12 pigs. We +next count (`sum`) in `n_top` the number of pigs whose rankings for +weight gain were in the top half — that is, a rank of less than 13. We +record that number in `top_ranks`, and then continue the loop, until we +finish our `n` trials. + +Since we did not know beforehand the direction of the effect of ration A +on weight gain, we want to count the times that *either more than 8* of +the random selection of 12 pigs were in the top half of the rankings, +*or that fewer than 4* of these pigs were in the top half of the weight +gain rankings — (The latter is the same as counting the number of times +that more than 8 of the 12 *non-selected* random pigs were in the top +half in weight gain.) + +We do so with the final two `sum` statements. 
By adding the two results
`n_gte_9` and `n_lte_3` together, we have the number of times out of
10,000 that differences in weight gains in two groups as dramatic as
those obtained in the actual experiment would occur by chance.
+
+```{r}
+# Constitute the set of the weight gain rank orders. ranks is now a vector
+# consisting of the numbers 1 — 24, in that order.
+ranks <- 1:24
+
+n <- 10000
+
+top_ranks <- numeric(n)
+
+for (i in 1:n) {
+  # Shuffle the ranks of the weight gains.
+  shuffled <- sample(ranks)
+  # Take the first 12 ranks.
+  first_12 <- shuffled[1:12]
+  # Determine how many of these randomly selected 12 ranks are less than
+  # or equal to 12 (i.e. 1-12), put that result in n_top.
+  n_top <- sum(first_12 <= 12)
+  # Keep track of each trial result in top_ranks
+  top_ranks[i] <- n_top
+}
+
+hist(top_ranks, breaks=1:11,
+     main='Number of top 12 ranks in pig-ration trials')
+```
+
+We see from the histogram that, in about 3 percent of the trials, either
+more than 8 or fewer than 4 top half ranks (1-12) made it into the
+random group of twelve that we selected. R will calculate this for us as
+follows:
+
+```{r}
+# Determine how many of the trials yielded 9 or more top ranks.
+n_gte_9 <- sum(top_ranks >= 9)
+# Determine how many trials yielded 3 or fewer of the top ranks.
+# If there were 3 or fewer, then 9 or more of the top ranks must
+# have been in the other group (not selected).
+n_lte_3 <- sum(top_ranks <= 3)
+# Add the two together.
+n_both <- n_gte_9 + n_lte_3
+# Convert to a proportion.
+prop_both <- n_both / n
+
+message('Trial proportion >=9 top ranks in either group: ',
+        round(prop_both, 2))
+```
+
+The decisions that are warranted on the basis of the results depend upon
+one’s purpose. If writing a scientific paper on the merits of ration A
+is the ultimate purpose, it would be sensible to test another batch of
+pigs to get further evidence. (Or you could proceed to employ another
+sort of test for a slightly more precise evaluation.) But if the goal is
+a decision on which type of ration to buy for a small farm and they are
+the same price, just go ahead and buy ration A because, even if it is no
+better than ration B, you have strong evidence that it is *no worse*.
diff --git a/r-book/notebooks/pill_placebo.Rmd b/r-book/notebooks/pill_placebo.Rmd
new file mode 100644
index 00000000..b3dbe3b6
--- /dev/null
+++ b/r-book/notebooks/pill_placebo.Rmd
@@ -0,0 +1,53 @@
+# Cures for pill vs placebo
+
+
+Now for an R solution. Again, the benchmark hypothesis is that pill P has
+no effect, and we ask how often, on this assumption, the results that
+were obtained from the actual test of the pill would occur by chance.
+
+Given that in the test 7 of 12 patients overall got well, the benchmark
+hypothesis assumes 7/12 to be the chances of any random patient being
+cured. We generate two similar samples of 6 patients, both taken from
+the same universe composed of the combined samples — the bootstrap
+procedure. We count (`sum`) the number who “get well” in each
+sample. Then we subtract the number who got well in the “no-pill” sample
+from the number who got well in the “pill” sample. We record the
+resulting difference for each trial in the variable `pill_betters`.
+
+In the actual test, 3 more patients got well in the sample given the
+pill than in the sample given the placebo. We therefore count how many
+of the trials yield results where the difference between the sample
+given the pill and the sample not given the pill was greater than 2
+(equal to or greater than 3).
This result is the probability that the +results derived from the actual test would be obtained from random +samples drawn from a population which has a constant cure rate, pill or +no pill. + +```{r} +# The bucket with the pieces of paper. +options <- rep(c('get well', 'not well'), c(7, 5)) + +n <- 10000 + +pill_betters <- numeric(n) + +for (i in 1:n) { + pill <- sample(options, size=6, replace=TRUE) + pill_cures <- sum(pill == 'get well') + placebo <- sample(options, size=6, replace=TRUE) + placebo_cures <- sum(placebo == 'get well') + pill_betters[i] <- pill_cures - placebo_cures +} + +hist(pill_betters, breaks=-6:6, + main='Number of extra cures pill vs placebo in null universe') +``` + +Recall our actual observed results: In the medicine group, three more +patients were cured than in the placebo group. From the histogram, we +see that in only about 8 percent of the simulated trials did the +“medicine” group do as well or better. The results seem to suggest — but +by no means conclusively — that the medicine’s performance is not due to +chance. Further study would probably be warranted. The following +commands added to the above program will calculate this proportion +directly: diff --git a/r-book/notebooks/planet_densities.Rmd b/r-book/notebooks/planet_densities.Rmd new file mode 100644 index 00000000..b3fd4d34 --- /dev/null +++ b/r-book/notebooks/planet_densities.Rmd @@ -0,0 +1,43 @@ +# Planet densities and distance + + +We apply the logic of resampling to the problem of close and distant +planets and their densities. + +```{r} +# Steps 1 and 2. +actual_mean_diff <- 8 / 2 - 7 / 3 + +# Step 3 +ranks <- 1:5 + +n <- 10000 + +mean_differences <- numeric(n) + +for (i in 1:n) { + # Step 4 + shuffled <- sample(ranks) + # Step 5 + closer <- shuffled[1:2] # First 2 + further <- shuffled[3:5] # Last 3 + # Step 6 + mean_close <- mean(closer) + mean_far <- mean(further) + # Step 7 + mean_differences[i] <- mean_close - mean_far +} + +# Step 9 +k <- sum(mean_differences >= actual_mean_diff) +prob <- k / n + +message('Proportion of trials with mean difference >= 1.67: ', + round(prob, 2)) +``` + +Interpretation: 20 percent of the time, random shufflings produced a +difference in ranks as great as or greater than observed. Hence, on the +strength of this evidence, we should *not* conclude that there is a +statistically surprising difference in densities between the further +planets and the closer planets. diff --git a/r-book/notebooks/sampling_tools.Rmd b/r-book/notebooks/sampling_tools.Rmd new file mode 100644 index 00000000..4b60f945 --- /dev/null +++ b/r-book/notebooks/sampling_tools.Rmd @@ -0,0 +1,459 @@ +# Sampling tools + + +## 6.2 Samples and labels + +Thus far we have used numbers such as 1 and 0 and 10 to represent the +elements we are sampling from. For example, in +sec-resampling-two, we were +simulating the chance of a particular juror being black, given that 26% +of the eligible jurors in the county were black. We used *integers* for +that task, where we started with all the integers from 0 through 99, and +asked R to select values at random from those integers. When R selected +an integer from 0 through 25, we chose to label the resulting simulated +juror as black — there are 26 integers in the range 0 through 25, so +there is a 26% chance that any one integer will be in that range. If the +integer was from 26 through 99, the simulated juror was white (there are +74 integers in the range 26 through 99). 
+ +Here is the process of simulating a single juror, adapted from +sec-random-zero-through-ninety-nine: + +```{r} +# Get 1 random number from 0 through 99 +# replace=TRUE is redundant here (why?), but we leave it for consistency. +a <- sample(0:99, 1, replace=TRUE) + +# Show the result +a +``` + +After that, we have to unpack our labeling of 0 through 25 as being +“black” and 26 through 99 as being “white”. We might do that like this: + +```{r} +this_juror_is_black <- a < 26 +this_juror_is_black +``` + +This all works as we want it to, but it’s just a little bit difficult to +remember the coding (less than 26 means “black”, greater than 25 means +“white”). We had to use that coding because we committed ourselves to +using random numbers to simulate the outcomes. + +However, R can also store bits of text, called *strings*. Values that +are bits of text can be very useful because the text values can be +memorable labels for the entities we are sampling from, in our +simulations. + +## 6.3 String values + +So far, all the values you have seen in R vectors have been numbers. Now +we get on to values that are bits of text. These are called *strings*. + +Here is a single R string value: + +```{r} +s <- "Resampling" +s +``` + +We can see what type of value `v` holds by using the `class` function. + +For example, for a number value, you will usually find the `class` is +`numeric`: + +```{r} +v <- 10 +class(v) +``` + +What is the `class` of the new bit-of-text value `s`? + +```{r} +class(s) +``` + +The R `character` value is a bit of text, and therefore consists of a +sequence of characters. + +As vectors are containers for other things, such as numbers, strings are +containers for characters. + +To get the length of a string, use the `nchar` function (Number of +Characters): + +```{r} +# Number of characters in s +nchar(s) +``` + +R has a `substring` function that allows you to select individual +characters or sequences of characters from a string. The arguments to +`substring` are: first — the string; second — the index of the first +character you want to select; and third — the index of the last +character you want to select. For example to select the second character +in the string you would specify 2 as the starting index, and 2 as the +ending index, like this: + +```{r} +# Get the second character of the string +second_char <- substring(s, 2, 2) +second_char +``` + +## 6.4 Strings in vectors + +As we can store numbers as elements in vectors, we can also store +strings as vector elements. + +```{r} +vector_of_strings = c('Julian', 'Lincoln', 'Simon') +vector_of_strings +``` + +As for any vector, you can select elements with *indexing*. When you +select an element with a given position (index), you get the *string* at +at that position: + +```{r} +# Julian Lincoln Simon's second name +middle_name <- vector_of_strings[2] +middle_name +``` + +As for numbers, we can compare strings with, for example, the `==` +operator, that asks whether the two strings are equal: + +```{r} +middle_name == 'Lincoln' +``` + +## 6.5 Repeating elements + +Now let us go back to the problem of selecting black and white jurors. + +We started with the strategy of using numbers 0 through 25 to mean +“black” jurors, and 26 through 99 to mean “white” jurors. We selected +values at random from 0 through 99, and then worked out whether the +number meant a “black” juror (was less than 26) or a “white” juror (was +greater than 25). + +It would be good to use strings instead of numbers to identify the +potential jurors. 
Then we would not have to remember our coding of 0 +through 25 and 26 through 99. + +If only there was a way to make a vector of 100 strings, where 26 of the +strings were “black” and 74 were “white”. Then we could select randomly +from that array, and it would be immediately obvious that we had a +“black” or “white” juror. + +Luckily, of course, we can do that, by using the `rep` function to +construct the vector. + +Here is how that works: + +```{r} +# The values that we will repeat to fill up the larger array. +juror_types <- c('black', 'white') +# The number of times we want to repeat "black" and "white". +repeat_nos <- c(26, 74) +# Repeat "black" 26 times and "white" 74 times. +jury_pool <- rep(juror_types, repeat_nos) +# Show the result +jury_pool +``` + +We can use this vector of repeats of strings, to sample from. The result +is easier to grasp, because we are using the string labels, instead of +numbers: + +```{r} +# Select one juror at random from the black / white pool. +# replace=TRUE is redundant here, but we leave it for consistency. +one_juror <- sample(jury_pool, 1, replace=TRUE) +one_juror +``` + +We can select our full jury of 12 jurors, and see the results in a more +obvious form: + +```{r} +# Select one juror at random from the black / white pool. +one_jury <- sample(jury_pool, 12, replace=TRUE) +one_jury +``` + +
**Using the `size` argument to `sample`**
+ +In the code above, we have specified the *size* of the sample we want +(12) with the second argument to `sample`. As you saw in +sec-named-arguments, we can +also give names to the function arguments, in this case, to make it +clearer what we mean by “12” in the code above. In fact, from now on, +that is what we will do; we will specify the *size* of our sample by +using the *name* for the function argument to `sample` — `size` — like +this: + +```{r} +# Select one juror at random from the black / white pool. +# Specify the sample size using the "size" named argument. +one_jury <- sample(jury_pool, size=12, replace=TRUE) +one_jury +``` + +
+ +We can use `==` on the vector to get `TRUE` values where the juror was +“black” and `FALSE` values otherwise: + +```{r} +are_black <- one_jury == 'black' +are_black +``` + +Finally, we can `sum` to find the number of black jurors +(sec-count-with-sum): + +```{r} +# Number of black jurors in this simulated jury. +n_black <- sum(are_black) +n_black +``` + +Putting that all together, this is our new procedure to select one jury +and count the number of black jurors: + +```{r} +one_jury <- sample(jury_pool, size=12, replace=TRUE) +are_black <- one_jury == 'black' +n_black <- sum(are_black) +n_black +``` + +Or we can be even more compact by putting several statements together +into one line: + +```{r} +# The same as above, but on one line. +n_black = sum(sample(jury_pool, size=12, replace=TRUE) == 'black') +n_black +``` + +## 6.6 Resampling with and without replacement + +Now let us return to the details of Robert Swain’s case, that you first +saw in sec-resampling-two. + +We looked at the composition of Robert Swain’s 12-person jury — but in +fact, by law, that does not have to be representative of the eligible +jurors. The 12-person jury is drawn from a jury *panel*, of 100 people, +and this should, in turn, be drawn from the population of all eligible +jurors in the county, consisting, at the time, of “all male citizens in +the community over 21 who are reputed to be honest, intelligent men and +are esteemed for their integrity, good character and sound judgment.” +So, unless there was some bias against black jurors, we might expect the +100-person jury panel to be a plausibly random sample of the eligible +jurors, of whom 26% were black. See [the Supreme Court case +judgement](https://supreme.justia.com/cases/federal/us/380/202) for +details. + +In fact, in Robert Swain’s trial, there were 8 black members in the +100-person jury panel. We will leave it to you to adapt the simulation +from sec-resampling-two to ask the +question — is 8% surprising as a random sample from a population with +26% black people? + +But we have a different question: given that 8 out of 100 of the jury +panel were black, is it surprising that none of the 12-person jury were +black? As usual, we can answer that question with simulation. + +Let’s think about what a single simulated jury selection would look +like. + +First we compile a representation of the actual jury panel, using the +tools we have used above. + +```{r} +juror_types <- c('black', 'white') +# in fact there were 8 black jurors and 92 white jurors. +panel_nos <- c(8, 92) +jury_panel <- rep(juror_types, panel_nos) +# Show the result +jury_panel +``` + +Now consider taking a 12-person jury at random from this panel. We +select the first juror at random, so that juror has an 8 out of 100 +chance of being black. But when we select the second jury member, the +situation has changed slightly. We can’t select the first juror again, +so our panel is now 99 people. If our first juror was black, then the +chances of selecting another black juror next are not 8 out of 100, but +7 out of 99 — a smaller chance. The problem is, as we shall see in more +detail later, the chances of getting a black juror as the second, and +third and fourth members of the jury depend on whether we selected a +black juror as the first and second and third jury members. At its most +extreme, imagine we had already selected eight jurors, and by some +strange chance, all eight were black. 
Now our chances of selecting a
+black juror as the ninth juror are zero — there are no black jurors left
+to select from the panel.
+
+In this case we are selecting jurors from the panel *without
+replacement*, meaning that once we have selected a particular juror, we
+cannot select them again, and we do not put them back into the panel
+when we select our next juror.
+
+This is the probability equivalent of the situation when you are dealing
+a hand of cards. Let’s say someone is dealing you, and you only, a hand
+of five cards. You get an ace as your first card. Your chances of
+getting an ace as your first card were just the number of aces in the
+deck divided by the number of cards — four in 52 – $\frac{4}{52}$. But
+for your second card, the probability has changed, because there is one
+less ace remaining in the pack, and one less card, so your chances of
+getting an ace as your second card are now $\frac{3}{51}$. This is
+sampling without replacement — in a normal game, you can’t get the same
+card twice. Of course, you could imagine getting a hand where you
+sampled *with replacement*. In that case, you’d get a card, you’d write
+down what it was, and you’d give the card back to the dealer, who would
+*replace* the card in the deck, shuffle again, and give you another
+card.
+
+As you can see, the chances change if you are sampling with or without
+replacement, and the kind of sampling you do will dictate how you model
+your chances in your simulations.
+
+Because this distinction is so common, and so important, the machinery
+you have already seen in `sample` has simple ways for you to select your
+sampling type. You have already seen sampling *with replacement*, and it
+looks like this:
+
+```{r}
+# Take a sample of 12 jurors from the panel *with replacement*
+strange_jury <- sample(jury_panel, size=12, replace=TRUE)
+strange_jury
+```
+
+This is a strange jury, because it can select any member of the jury
+pool more than once. Perhaps that juror would have to fill two (or
+more!) seats, or run quickly between them. But of course, that is not
+how juries are selected. They are selected *without replacement*.
+
+Thus far, we have always done sampling *with replacement*, and, in order
+to do that with `sample`, we pass the argument `replace=TRUE`. We do
+that because the default for `sample` is `replace=FALSE`, that is, by
+default, `sample` does sampling without replacement. If you want to do
+sampling without replacement, you can just omit the `replace=TRUE`
+argument to `sample`, or you can specify `replace=FALSE` explicitly,
+perhaps to remind yourself that this is sampling without replacement.
+Whether you omit the `replace` argument, or specify `replace=FALSE`, the
+behavior is the same.
+
+```{r}
+# Take a sample of 12 jurors from the panel *without replacement*.
+# replace=FALSE is the default for sample.
+ok_jury <- sample(jury_panel, size=12)
+ok_jury
+```
**Comments at the end of lines**
+
+You have already seen comment lines. These are lines beginning with `#`,
+to signal to R that the rest of the line is text for humans to read, and
+for R to ignore.
+
+```{r}
+# This is a comment. R ignores this line.
+```
+
+You can also put comments at the *end of code lines*, by finishing the
+code part of the line, and then putting a `#`, followed by more text.
+Again, R will ignore everything after the `#`; it is text for humans,
+not code for R to run.
+
+```{r}
+message('Hello') # This is a comment at the end of the line.
+```
+ +To finish the procedure for simulating a single jury selection, we count +the number of black jurors: + +```{r} +n_black <- sum(ok_jury == 'black') # How many black jurors? +n_black +``` + +Now we have the procedure for one simulated trial, here is the procedure +for 10000 simulated trials. + +```{r} +counts <- numeric(10000) +for (i in 1:10000) { + # Single trial procedure + jury <- sample(jury_panel, size=12) # replace=FALSE is the default. + n_black <- sum(jury == 'black') # How many black jurors? + # Store the result + counts[i] <- n_black +} +# Number of juries with 0 black jurors. +zero_black <- sum(counts == 0) +# Proportion +p_zero_black <- zero_black / 10000 +message(p_zero_black) +``` + +We have found that, when there are only 8% black jurors in the jury +panel, having no black jurors in the final jury happens about 34% of the +time, even in this case, where the jury is selected completely at random +from the jury panel. + +We should look for the main source of bias in the initial selection of +the jury panel, not in the selection of the jury from the panel. + diff --git a/r-book/notebooks/sampling_variability.Rmd b/r-book/notebooks/sampling_variability.Rmd new file mode 100644 index 00000000..c2a29514 --- /dev/null +++ b/r-book/notebooks/sampling_variability.Rmd @@ -0,0 +1,27 @@ +# Experiment in sampling variability + + +Try generating some rookie “seasons” yourself with the following +commands, ranging the batter’s “true” performance by changing the value +of `p_hit` (the probability of a hit). + +```{r} +# Simulate a rookie season of 400 at-bats. + +# You might try changing the value below and rerunning. +# This is the true (long-run) probability of a hit for this batter. +p_hit <- 0.4 +message('True average is: ', p_hit) +``` + +```{r} +# We resample _with_ replacement here; the chances of a hit do not change +# From at-bat to at-bat. +at_bats <- sample(c('Hit', 'Out'), prob=c(p_hit, 1 - p_hit), size=400, replace=TRUE) +simulated_average <- sum(at_bats == 'Hit') / 400 +# Show the result +message('Simulated average is: ', simulated_average) +``` + +Simulate a set of 10 or 20 such rookie seasons, and look at the one who +did best. How did their rookie season compare to their “true” average? diff --git a/r-book/notebooks/santas_hats.Rmd b/r-book/notebooks/santas_hats.Rmd new file mode 100644 index 00000000..c0805c38 --- /dev/null +++ b/r-book/notebooks/santas_hats.Rmd @@ -0,0 +1,48 @@ +# Santas' hats + + +**The welcome staff at a restaurant mix up the hats of a party of six +Christmas Santas. What is the probability that at least one will get +their own hat?**. + +After a long Christmas day, six Santas meet in the pub to let off steam. +However, as luck would have it, their hosts have mixed up their hats. +When the hats are returned, what is the chance that at least one Santa +will get his own hat back? + +First, assign each of the six Santas a number, and place these numbers +in an array. Next, shuffle the array (this represents the mixed-up hats) +and compare to the original. The rest of the problem is the same as the +pairs one from before, except that we are now interested in any trial +where at least one ($\ge 1$) Santa received the right hat. 
+ +```{r} +N <- 10000 +trial_results <- numeric(N) + +# Assign numbers to each owner +owners <- 1:6 + +# Each hat gets the number of their owner +hats <- 1:6 + +for (i in 1:N) { + # Randomly shuffle the hats and compare to their owners + shuffled_hats <- sample(hats) + + # In how many cases did at least one person get their hat back? + trial_results[i] <- sum(shuffled_hats == owners) >= 1 +} + +# How many times, over all trials, did at least one person get their hat back? +k <- sum(trial_results) + +# Convert to a proportion. +kk <- k / N + +# Print the result. +print(kk) +``` + +We see that in roughly 63 percent of the trials at least one Santa +received their own hat back. diff --git a/r-book/notebooks/three_girls.Rmd b/r-book/notebooks/three_girls.Rmd new file mode 100644 index 00000000..f9de8651 --- /dev/null +++ b/r-book/notebooks/three_girls.Rmd @@ -0,0 +1,35 @@ +# Three Girls + + +This notebook estimates the probability that a family of four children +will have exactly three girls. + +```{r} +girl_counts <- numeric(10000) + +# Do 10000 trials +for (i in 1:10000) { + + # Select 'girl' or 'boy' at random, four times. + children <- sample(c('girl', 'boy'), size=4, replace=TRUE) + + # Count the number of girls and put the result in b. + b <- sum(children == 'girl') + + # Keep track of each trial result in z. + girl_counts[i] <- b + + # End this trial, repeat the experiment until 10000 trials are complete, + # then proceed. +} + +# Count the number of experiments where we got exactly 3 girls, and put this +# result in k. +n_three_girls <- sum(girl_counts == 3) + +# Convert to a proportion. +three_girls_prop <- n_three_girls / 10000 + +# Print the results. +message(three_girls_prop) +``` diff --git a/r-book/notebooks/three_of_a_kind.Rmd b/r-book/notebooks/three_of_a_kind.Rmd new file mode 100644 index 00000000..ab42da54 --- /dev/null +++ b/r-book/notebooks/three_of_a_kind.Rmd @@ -0,0 +1,38 @@ +# Three of a kind + + +We count the number of times we get three of a kind in a random hand of +five cards. + +```{r} +one_suit <- 1:13 +deck <- rep(one_suit, 4) +``` + +```{r} +triples_per_trial <- numeric(10000) + +# Repeat the following steps 10000 times +for (i in 1:10000) { + # Shuffle the deck + shuffled <- sample(deck) + + # Take the first five cards. + hand <- shuffled[1:5] + + # How many triples? + repeat_nos <- tabulate(hand) + n_triples <- sum(repeat_nos == 3) + + # Keep score of # of triples + triples_per_trial[i] <- n_triples + + # End loop, go back and repeat +} + +# How often was there 1 pair? +n_triples <- sum(triples_per_trial == 1) + +# Convert to proportion +message(n_triples / 10000) +``` diff --git a/r-book/notebooks/trump_clinton.Rmd b/r-book/notebooks/trump_clinton.Rmd new file mode 100644 index 00000000..c2cb871d --- /dev/null +++ b/r-book/notebooks/trump_clinton.Rmd @@ -0,0 +1,54 @@ +# Trump/Clinton poll simulation + + +What is the probability that a sample outcome such as actually observed +(840 Trump, 660 Clinton) would occur by chance if Clinton is “really” +ahead — that is, if Clinton has 50 percent (or more) of the support? To +restate in sharper statistical language: What is the probability that +the observed sample or one even more favorable to Trump would occur if +the universe has a mean of 50 percent or below? + +Here is a procedure that responds to that question: + +1. Create a benchmark universe with one ball marked “Trump” and another + marked “Clinton” +2. Draw a ball, record its marking, and replace. 
(We sample with + replacement to simulate the practically-infinite population of U. S. + voters.) +3. Repeat step 2 1500 times and count the number of “Trump”s. If 840 or + greater, record “Y”; otherwise, record “N.” +4. Repeat steps 3 and 4 perhaps 1000 or 10,000 times, and count the + number of “Y”s. The outcome estimates the probability that 840 or + more Trump choices would occur if the universe is “really” half or + more in favor of Clinton. + +This procedure may be done as follows with R. + +```{r} +# Number of repeats we will run. +n <- 10000 + +# Make an array to store the counts. +trumps <- numeric(n) + +for (i in 1:n) { + votes <- sample(c('Trump', 'Clinton'), size=1500, replace=TRUE) + trumps[i] <- sum(votes == 'Trump') +} + +# Integer bins from 675 through 825 in steps of 5. +hist(trumps, breaks=seq(675, 826, by=5), + main='Number of Trump voters of 1500 in null-world simulation') + +# How often >= 840 Trump votes in random draw? +k <- sum(trumps >= 840) +# As a proportion of simulated resamples. +kk <- k / n + +message('Proportion voting for Trump: ', kk) +``` + +The value for `kk` is our estimate of the probability that Trump’s +“victory” in the sample would occur by chance if he really were behind. +In this case, our probability estimate is less than 1 in 10,000 (\< +0.0001). diff --git a/r-book/notebooks/twenty_executives.Rmd b/r-book/notebooks/twenty_executives.Rmd new file mode 100644 index 00000000..64b6ec62 --- /dev/null +++ b/r-book/notebooks/twenty_executives.Rmd @@ -0,0 +1,37 @@ +# Twenty executives, two divisions + + +The top manager wants to spread the talent reasonably evenly, but she +does not want to label particular executives with a quality rating and +therefore considers distributing them with a random selection. She +therefore wonders: What are probabilities of the best ten among the +twenty being split among the divisions in the ratios 5 and 5, 4 and 6, 3 +and 7, etc., if their names are drawn from a hat? One might imagine much +the same sort of problem in choosing two teams for a football or +baseball contest. + +One may proceed as follows: + +1. Put 10 balls labeled “W” (for “worst”) and 10 balls labeled “B” + (best) in a bucket. +2. Draw 10 balls without replacement and count the W’s. +3. Repeat (say) 400 times. +4. Count the number of times each split — 5 W’s and 5 B’s, 4 and 6, + etc. — appears in the results. + +The problem can be done with R as follows: + +```{r} +N <- 10000 +trial_results <- numeric(N) + +managers <- rep(c('Worst', 'Best'), c(10, 10)) + +for (i in 1:N) { + chosen <- sample(managers, 10) # replace=FALSE is the default. + trial_results[i] <- sum(chosen == 'Best') +} + +hist(trial_results, breaks=0:max(trial_results), + main= 'Number of best managers chosen') +``` diff --git a/r-book/notebooks/two_pairs.Rmd b/r-book/notebooks/two_pairs.Rmd new file mode 100644 index 00000000..5e2bc666 --- /dev/null +++ b/r-book/notebooks/two_pairs.Rmd @@ -0,0 +1,38 @@ +# Two pairs + + +We count the number of times we get two pairs in a random hand of five +cards. + +```{r} +deck <- rep(1:13, 4) +``` + +```{r} +pairs_per_trial <- numeric(10000) + +# Repeat the following steps 10000 times +for (i in 1:10000) { + # Shuffle the deck + shuffled <- sample(deck) + + # Take the first five cards. + hand <- shuffled[1:5] + + # How many pairs? + # Counts for each card rank. 
+ repeat_nos <- tabulate(hand) + n_pairs <- sum(repeat_nos == 2) + + # Keep score of # of pairs + pairs_per_trial[i] <- n_pairs + + # End loop, go back and repeat +} + +# How often were there 2 pairs? +n_two_pairs <- sum(pairs_per_trial == 2) + +# Convert to proportion +print(n_two_pairs / 10000) +``` diff --git a/r-book/notebooks/university_icebreaker.Rmd b/r-book/notebooks/university_icebreaker.Rmd new file mode 100644 index 00000000..c0de9f3a --- /dev/null +++ b/r-book/notebooks/university_icebreaker.Rmd @@ -0,0 +1,70 @@ +# An icebreaker for two universities + + +**First put two groups of 10 people into 10 pairs. Then re-randomize the +pairings. What is the chance that four or more pairs are the same in the +second random pairing? This is a problem in the probability of matching +by chance**. + +Ten representatives each from two universities, Birmingham and Berkeley, +attend a meeting. As a social icebreaker, representatives are divided, +randomly, into pairs consisting of one person from each university. + +If they held a second round of the icebreaker, with a new random +pairing, what is the chance that four or more pairs will be the same? + +In approaching this problem, we start at the point where the first +icebreaker is complete. We now have to determine what happens after the +second round. + +- **Step 1.** Let “ace” through “10” of hearts represent the ten + representatives from Birmingham University. Let “ace” through “10” of + spades be their allocated partners (in round one) from Berkeley. +- **Step 2.** Shuffle the hearts and deal them out in a row; shuffle the + spades and deal in a row just below the hearts. +- **Step 3.** Count the pairs — a pair is one card from the heart row + and one card from the spade row — that contain the same denomination. + If 4 or more pairs match, record “yes,” otherwise “no.” +- **Step 4.** Repeat steps (2) and (3), say, 10,000 times. +- **Step 5.** Count the proportion “yes.” This estimates the probability + of 4 or more pairs. + +Exercise for the student: Write the steps to do this example with random +numbers. The R solution follows below. + +```{r} +N <- 10000 +trial_results <- numeric(N) + +# Assign numbers to each student, according to their pair, after the first +# icebreaker +birmingham <- 1:10 +berkeley <- 1:10 + +for (i in 1:N) { + # Randomly shuffle the students from Berkeley + shuffled_berkeley <- sample(berkeley) + + # Randomly shuffle the students from Birmingham + # (This step is not really necessary — shuffling one array is enough to make the matching random.) + shuffled_birmingham <- sample(birmingham) + + # Count in how many cases people landed with the same person as in the + # first round, and store in trial_results. + matches <- sum(shuffled_berkeley == shuffled_birmingham) + trial_results[i] <- matches +} + +# Count the number of times we got 4 or more people assigned to the same person +k <- sum(trial_results >= 4) + +# Convert to a proportion. +kk <- k / N + +# Print the result. +message(kk) +``` + +We see that in about 2 percent of the trials did 4 or more couples end +up being re-paired with their own partners. 
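The histogram referred to just below is not drawn by the code above; a minimal plotting command — our addition, using the `trial_results` vector from the loop — might be:

```{r}
# Our addition: a histogram of the number of matching pairs in each trial.
hist(trial_results, main='Number of pairs repeated in the second round',
     xlab='Number of matching pairs')
```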
This can also be seen from +the histogram: diff --git a/r-book/notebooks/viewer_numbers.Rmd b/r-book/notebooks/viewer_numbers.Rmd new file mode 100644 index 00000000..89f4a010 --- /dev/null +++ b/r-book/notebooks/viewer_numbers.Rmd @@ -0,0 +1,46 @@ +# Number of viewers + + +The notebook calculates the expected number of viewers in a sample of +400, given that there is a 30% chance of any one person being a viewer, +and then calculates how far that value is from 120. + +```{r} +# set the number of trials +n_trials <- 10000 + +# an empty array to store the scores +scores <- numeric(n_trials) + +# What are the options to choose from? +options <- c('viewer', 'not viewer') + +# do n_trials trials +for (i in 1:n_trials) { + + # Choose 'viewer' 30% of the time. + a <- sample(options, size=400, prob=c(0.3, 0.7), replace=TRUE) + + # count the viewers + b <- sum(a == 'viewer') + + # how different from expected? + c <- 120 - b + + # absolute value of the difference + d <- abs(c) + + # express as a proportion of sample + e <- d / 400 + + # keep score of the result + scores[i] <- e +} + +# find the mean divergence +k <- mean(scores) + +# Show the result +k +``` + diff --git a/r-book/point_estimation.html b/r-book/point_estimation.html new file mode 100644 index 00000000..de397f51 --- /dev/null +++ b/r-book/point_estimation.html @@ -0,0 +1,898 @@ + + + + + + + + + +Resampling statistics - 19  Point Estimation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

19  Point Estimation

+
+ + + +
+ + + + +
+ + +
+ +

One of the great questions in statistical inference is: How big is it? This can mean — How long? How deep? How much time? At what angle?

+

This question about size may pertain to a single object, of which there are many measurements; an example is the location of a star in the heavens. Or the question may pertain to a varied set of elements and their measurements; examples include the effect of treatment with a given drug, and the incomes of the people of the United States in 1994.

+

From where the observer stands, having only the evidence of a sample in hand, it often is impossible to determine whether the data represent multiple observations of a single object, or single (or multiple) observations of multiple objects. For example, from crude measurements of weight you could not know whether one person is being weighed repeatedly, or several people have been weighed once. Hence all the following discussion of point estimation is the same for both of these situations.

+

The word “big” in the first sentence above is purposely vague, because there are many possible kinds of estimates that one might wish to make concerning a given object or collection. For a single object like a star, one surely will wish to make a best guess about its location. But about the effects of a drug treatment, or the incomes of a nation, there are many questions that one may wish to answer. The average effect or income is a frequent and important object of our interest. But one may also wish to know about the amount of dispersion in the distribution of treatment effects, or of incomes, or the symmetry of the distribution. And there are still other questions one may wish to answer.

+

Even if we focus on the average, the issue often is less clear cut than we may think at first. If we are to choose a single number to characterize the population (universe) from which a given set of data has been drawn, what should that representative number be for the case at hand? The answer must depend on the purpose with which we ask the question, of course. There are several main possibilities such as the mean, the median, and the mode.

+

Even if we confine our attention to the mean as our measure of the central tendency of a distribution, there are various ways of estimating it, each of them having a different rationale. The various methods of estimation often lead to the same estimate, especially if the distribution is symmetric (such as the distribution of errors you make in throwing darts at a dart board). But in an asymmetric case such as a distribution of incomes, the results may differ among the contending modes of estimation. So the entire topic is more messy than appears at first look. Though we will not inquire into the complexities, it is important that you understand that the matter is not as simple as it may seem. (See Savage (1972), Chapter 15, for more discussion of this topic.)

+
+

19.1 Ways to estimate the mean

+
+

19.1.1 The Method of Moments

+

Since elementary school you have been taught to estimate the mean of a universe (or calculate the mean of a sample) by taking a simple arithmetic average. A fancy name for that process is “the method of moments.” It is the equivalent of estimating the center of gravity of a pole by finding the place where it will balance on your finger. If the pole has the same size and density all along its length, that balance point will be halfway between the endpoints, and the point may be thought of as the arithmetic average of the distances from the balance point of all the one-centimeter segments of the pole.

+

Consider this example:

+

Example: Twenty-nine Out of Fifty People Polled Say They Will Vote For The Democrat. Who Will Win The Election? The Relationship Between The Sample Proportion and The Population Proportion in a Two-Outcome Universe.

+

You take a random sample of 50 people in Maryland and ask which party’s candidate for governor they will vote for. Twenty-nine say they will vote for the Democrat. Let’s say it is reasonable to assume in this case that people will vote exactly as they say they will. The statistical question then facing you is: What proportion of the voters in Maryland will vote for the Democrat in the general election?

+

Your intuitive best guess is that the proportion of the “universe” — which is composed of voters in the general election, in this case — will be the same as the proportion of the sample. That is, 58 percent = 29/50 is likely to be your guess about the proportion that will vote Democratic. Of course, your estimate may be too high or too low in this particular case, but in the long run — that is, if you take many samples like this one — on the average the sample mean will equal the universe (population) proportion, for reasons to be discussed later.

+

The sample mean seems to be the “natural” estimator of the population mean in this and many other cases. That is, it seems quite natural to say that the best estimate is the sample mean, and indeed it probably is best. But why? This is the problem of inverse probability that has bedeviled statisticians for two centuries.

+

If the only information that you have (or that seems relevant) is the evidence of the sample, then there would seem to be no basis for judging that the shape and location of the population differs to the “left” or “right” from that of the sample. That is often a strong argument.

+

Another way of saying much the same thing: If a sample has been drawn randomly, each single observation is a representative estimator of the mean; if you only have one observation, that observation is your best guess about the center of the distribution (if you have no reason to believe that the distribution of the population is peculiar — such as not being symmetrical). And therefore the mean of 2, 3 … n such observations (their sum divided by their number) should have that same property, based on basic principles.

+

But if you are on a ship at sea and a leaf comes raining down from the sky, your best guess about the location of the tree from which it comes is not directly above you, and if two leaves fall, the midpoint of them is not the best location guess, either; you know that trees don’t grow at sea, and birds sometimes carry leaves out to sea.

+

We’ll return to this subject when we discuss criteria of methods.

+
+
+

19.1.2 Expected Value and the Method of Moments

+

Consider this gamble: You and another person roll a die. If it falls with the “6” upwards you get $4, and otherwise you pay $1. If you play 120 times, at the end of the day you would expect a net result of (20 * $4 - 100 * $1 =) -$20. We say that -$20 is your “expected value” for the 120 rolls, and your expected value per roll is (-$20 / 120 =) about -$0.17, a loss of 1/6 of a dollar. If you got $5 instead of $4 for each “6”, your expected value would be $0.

+
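As a check on this arithmetic, here is a small R sketch (ours, not part of the original text) that simulates the gamble; the 10,000 rolls are an arbitrary choice. The average winnings per roll should come out close to the expected value of about -$0.17.

```{r}
# Simulate the gamble: win $4 on a "6", pay $1 otherwise.
n_rolls <- 10000
rolls <- sample(1:6, size = n_rolls, replace = TRUE)
winnings <- ifelse(rolls == 6, 4, -1)

# Average winnings per roll; should be close to -1/6 of a dollar.
mean(winnings)
```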

This is exactly the same idea as the method of moments, and we even use the same term — “expected value,” or “expectation” — for the outcome of a calculation of the mean of a distribution. We say that the expected value for the success of rolling a “6” with a single cast of a die is 1/6, and that the expected value of rolling a “6” or a “5” is (1/6 + 1/6 = ) 2/6.

+
+
+

19.1.3 The Maximum Likelihood Principle

+

Another way of thinking about estimation of the population mean asks: Which population(s) would, among the possible populations, have the highest probability of producing the observed sample? This criterion frequently produces the same answer as the method of moments, but in some situations the estimates differ. Furthermore, the logic of the maximum-likelihood principle is important.

+

Consider that you draw without replacement six balls — 2 black and 4 white — from a bucket that contains twenty balls. What would you guess is the composition of the bucket from which they were drawn? Is it likely that those balls came from a bucket with 4 white and 16 black balls? Rather obviously not, because it would be most unusual to get all the 4 white balls in your draw. Indeed, we can estimate the probability of that happening with simulation or formula to be about .003.

+

How about a bucket with 2 black and 18 white balls? The probability is much higher than with the previous bucket, but it is still low — about .08.

+

Let us now estimate this probability for every possible bucket composition, from 0 up to 20 white balls. In Figure 19.1 we see that the bucket with the highest probability of producing the observed sample has the same proportions of black and white balls as does the sample. This is called the “maximum likelihood universe.” Nor should this be very surprising, because that universe obviously has an equal chance of producing samples with proportions below and above that observed proportion — as was discussed in connection with the method of moments.

+

We should note, however, that the probability that even such a maximum-likelihood universe would produce exactly the observed sample is very low (though it has an even lower probability of producing any other sample).

+
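The curve plotted in Figure 19.1 can be computed directly from the hypergeometric distribution. The R sketch below (ours, not part of the original chapter) finds, for each possible number of white balls in a 20-ball bucket, the probability of drawing exactly 4 white and 2 black balls in 6 draws without replacement.

```{r}
# Possible numbers of white balls in the 20-ball bucket.
n_white_in_bucket <- 0:20

# For each bucket, the probability of drawing exactly 4 white (and 2 black)
# balls in 6 draws without replacement.
p_observed <- dhyper(4,
                     m = n_white_in_bucket,       # white balls in the bucket
                     n = 20 - n_white_in_bucket,  # black balls in the bucket
                     k = 6)                       # number of balls drawn

# The bucket(s) giving the observed sample the highest probability
# (13 and 14 white balls, close to the sample proportion of 4/6).
n_white_in_bucket[p_observed == max(p_observed)]

# The probabilities quoted above: 4 white balls (about .003),
# 18 white balls (about .08).
round(p_observed[n_white_in_bucket == 4], 3)
round(p_observed[n_white_in_bucket == 18], 3)
```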
+
+
+
+

+
Figure 19.1: Number of White Balls in the Universe (N=20)
+
+
+
+
+
+
+
+

19.2 Choice of Estimation Method

+

When should you base your estimate on the method of moments, or of maximum likelihood, or still some other principle? There is no general answer. Sound estimation requires that you think long and hard about the purpose of your estimation, and fit the method to the purpose. I am well aware that this is a very vague statement. But though it may be an uncomfortable idea to live with, guidance to sound statistical method must be vague because it requires sound judgment and deep knowledge of the particular set of facts about the situation at hand.

+
+
+

19.3 Criteria of estimates

+

How should one judge the soundness of the process that produces an estimate? General criteria include representativeness and accuracy. But these are pretty vague; we’ll have to get more specific.

+
+

19.3.1 Unbiasedness

+

Concerning representativeness: We want a procedure that will not be systematically in error in one direction or another. In technical terms, we want an “unbiased estimate,” if possible. “Unbiased” in this case does not mean “friendly” or “unprejudiced,” but rather implies that on the average — that is, in the long run, after taking repeated samples — estimates that are too high will about balance (in percentage terms) those that are too low. The mean of the universe (or the proportion, if we are speaking of two-valued “binomial situations”) is a frequent object of our interest. And the sample mean is (in most cases) an unbiased estimate of the population mean.

+

Let’s now see an informal proof that the mean of a randomly drawn sample is an “unbiased” estimator of the population mean. That is, the errors of the sample means will cancel out after repeated samples because the mean of a large number of sample means approaches the population mean. A second “law” to be informally proven is that the size of the inaccuracy of a sample proportion is largest when the population proportion is near 50 percent, and smallest when it approaches zero percent or 100 percent.

+

The statement that the sample mean is an unbiased estimate of the population mean holds for many but not all kinds of samples — proportions of two-outcome (Democrat-Republican) events (as in this case) and also the means of many measured-data universes (heights, speeds, and so on) that we will come to later.

+

But, you object, I have only said that this is so; I haven’t proven it. Quite right. Now we will go beyond this simple assertion, though we won’t reach the level of formal proof. This discussion applies to conventional analytic statistical theory as well as to the resampling approach.

+

We want to know why the mean of a repeated sample — or the proportion, in the case of a binomial universe — tends to equal the mean of the universe (or the proportion of a binomial sample). Consider a population of one thousand voters. Split the population into random sub-populations of 500 voters each; let’s call these sub-populations by the name “samples.” Almost inevitably, the proportions voting Democratic in the samples will not exactly equal the “true” proportions in the population. (Why not? Well, why should they split evenly? There is no general reason why they should.) But if the sample proportions do not equal the population proportion, the two sample proportions must differ from the population proportion by the same amount, in opposite directions.

+

If the population proportion is 600/1000 = 60 percent, and one sample’s proportion is 340/500 = 68 percent, then the other sample’s proportion must be (600-340 = 260)/500 = 52 percent. So if in the very long run you would choose each of these two samples about half the time (as you would if you selected between the two samples randomly) the average of the sample proportions would be (68 percent + 52 percent)/2 = 60 percent. This shows that on the average the sample proportion is a fair and unbiased estimate of the population proportion — if the sample is half the size of the population.

+
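A tiny R sketch (ours) of this argument: split 1,000 voters, 600 of them Democrats, at random into two halves, and note that the two sample proportions always average to the population proportion.

```{r}
# 1000 voters, 600 Democrats and 400 Republicans.
voters <- rep(c("D", "R"), c(600, 400))

# Split the population at random into two "samples" of 500 each.
shuffled <- sample(voters)
prop_1 <- mean(shuffled[1:500] == "D")
prop_2 <- mean(shuffled[501:1000] == "D")

# The two proportions deviate from 0.6 by the same amount, in opposite
# directions, so their average is exactly the population proportion.
c(prop_1, prop_2, (prop_1 + prop_2) / 2)
```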

If we now sub-divide each of our two samples of 500 (each of which was half the population size) into equal-size subsamples of 250 each, the same argument will hold for the proportions of the samples of 250 with respect to the sample of 500: The proportion of a 250-voter sample is an unbiased estimate of the proportion of the 500-voter sample from which it is drawn. It seems inductively reasonable, then, that if the proportion of a 250-voter sample is an unbiased estimate of the 500-voter sample from which it is drawn, and the proportion of a 500-voter sample is an unbiased estimate of the 1000-voter population, then the proportion of a 250-voter sample should be an unbiased estimate of the population proportion. And if so, this argument should hold for samples of 1/2 x 250 = 125, and so on — in fact for any size sample.

+

The argument given above is not a rigorous formal proof. But I doubt that the non-mathematician needs, or will benefit from, a more formal proof of this proposition. You are more likely to be persuaded if you demonstrate this proposition to yourself experimentally in the following manner (an R sketch of these steps appears after the list):

+
    +
  • Step 1. Let “1-6” = Democrat, “7-10” = Republican
  • +
  • Step 2. Choose a sample of, say, ten random numbers, and record the proportion Democrat (the sample proportion).
  • +
  • Step 3. Repeat step 2 a thousand times.
  • +
  • Step 4. Compute the mean of the sample proportions, and compare it to the population proportion of 60 percent. This result should be close enough to reassure you that on the average the sample proportion is an “unbiased” estimate of the population proportion, though in any particular sample it may be substantially off in either direction.
  • +
+
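Here is one way to carry out those steps in R (our sketch of the experiment, not code from the original text):

```{r}
# The population is 60 percent Democrat; numbers 1-6 stand for Democrats,
# 7-10 for Republicans.
n_trials <- 1000
sample_proportions <- numeric(n_trials)

for (i in 1:n_trials) {
  # Step 2: a sample of ten random numbers from 1 through 10.
  votes <- sample(1:10, size = 10, replace = TRUE)
  # Record the proportion Democrat in this sample.
  sample_proportions[i] <- sum(votes <= 6) / 10
}

# Step 4: the mean of the sample proportions should be close to 0.6,
# even though individual sample proportions vary widely.
mean(sample_proportions)
```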
+
+

19.3.2 Efficiency

+

We want an estimate to be accurate, in the sense that it is as close to the “actual” value of the parameter as possible. Sometimes it is possible to get more accuracy at the cost of biasing the estimate. More than that does not need to be said here.

+
+
+

19.3.3 Maximum Likelihood

+

Knowing that a particular value is the most likely of all values may be of importance in itself. For example, a person betting on one horse in a horse race wants his/her estimate of the winner to have the highest possible probability of being right, and is not the slightest bit interested in almost picking the right horse. Maximum likelihood estimates are of particular interest in such situations.

+

See Savage (1972, chap. 15) for many other criteria of estimators.

+
+
+
+

19.4 Criteria of the Criteria

+

What should we look for in choosing criteria? Logically, this question should precede the above list of criteria.

+

Savage (1972, chap. 15) has urged that we should always think in terms of the consequences of choosing criteria, in light of our purposes in making the estimate. I believe that he is making an important point. But it often is very hard work to think the matter through all the way to the consequences of the criteria chosen. And in most cases, such fine inquiry is not needed, in the sense that the estimating procedure chosen will be the same no matter what consequences are considered.1

+
+
+

19.5 Estimation of accuracy of the point estimate

+

So far we have discussed how to make a point estimate, and criteria of good estimators. We also are interested in estimating the accuracy of that estimate. That subject — which is harder to grapple with — is discussed in Chapter 26 and Chapter 27 on confidence intervals.

+

Most important: One cannot sensibly talk about the accuracy of probabilities in the abstract, without reference to some set of facts. In the abstract, the notion of accuracy loses any meaning, and invites confusion and argument.

+
+
+

19.6 Uses of the mean

+

Let’s consider when the use of a device such as the mean is valuable, in the context of the data on marksmen in Table 19.1. If we wish to compare marksman A versus marksman B, we can immediately see that marksman A hit the bullseye (80 shots for 3 points each time) as many times as marksman B hit either the bullseye or simply got in the black (30 shots for 3 points and 50 shots for 2 points), and A hit the black (2 points) as many times as B just got in the white (1 point). From these two comparisons covering all the shots, in both of which comparisons A does better, it is immediately obvious that marksman A is better than marksman B. We can say that A’s score dominates B’s score.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 19.1: Score percentages by marksman
Score | # occurrences | Probability
Marksman A
1 | 0 | 0
2 | 20 | 0.2
3 | 80 | 0.8
Marksman B
1 | 20 | 0.2
2 | 50 | 0.5
3 | 30 | 0.3
Marksman C
1 | 40 | 0.4
2 | 10 | 0.1
3 | 50 | 0.5
Marksman D
1 | 10 | 0.1
2 | 60 | 0.6
3 | 30 | 0.3
+
+

When we turn to comparing marksman C to marksman D, however, we cannot say that one “dominates” the other as we could with the comparison of marksmen A and B. Therefore, we turn to a summarizing device. One such device that is useful here is the mean. For marksman C the total score over 100 shots is \((40 * 1) + (10 * 2) + (50 * 3) = 210\), a mean of 2.1 per shot, while for marksman D the total is \((10 * 1) + (60 * 2) + (30 * 3) = 220\), a mean of 2.2 per shot. Hence we can say that D is better than C even though D’s score does not dominate C’s score in the bullseye category.

+
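As a quick check of this arithmetic, here is a short R sketch (ours) computing the mean score per shot for marksmen C and D from the probabilities in Table 19.1:

```{r}
scores <- 1:3

# Probabilities of scoring 1, 2 and 3 points, from Table 19.1.
prob_c <- c(0.4, 0.1, 0.5)
prob_d <- c(0.1, 0.6, 0.3)

# Mean score per shot: 2.1 for C, 2.2 for D.
sum(scores * prob_c)
sum(scores * prob_d)
```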

Another use of the mean (Gnedenko and Khinchin 1962, 68) is shown in estimating the number of matches an operator needs to start fires for an operation carried out 20 times in a day (Table 19.2). Let’s say that the numbers of cases in which s/he needed 1, 2 … 5 matches to start a fire are as follows (along with their probabilities), based on the last 100 fires started:

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 19.2: Number of matches needed to start a fire
Number of Matches | Number of Cases | Probabilities
1 | 7 | 0.07
2 | 16 | 0.16
3 | 55 | 0.55
4 | 21 | 0.21
5 | 1 | 0.01
+
+

If you know that the operator will be lighting twenty fires, you can estimate the number of matches that s/he will need by multiplying the mean number of matches (which turns out to be \(1 * 0.07 + 2 * 0.16 + 3 * 0.55 + 4 * 0.21 + 5 * 0.01 = 2.93\)) in the observed experience by 20. Here you are using the mean as an indication of a representative case.

+
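The same calculation in R (our sketch), using the probabilities from Table 19.2:

```{r}
n_matches <- 1:5
probabilities <- c(0.07, 0.16, 0.55, 0.21, 0.01)

# Mean number of matches needed per fire.
mean_matches <- sum(n_matches * probabilities)
mean_matches

# Estimated number of matches needed for 20 fires (about 59).
20 * mean_matches
```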

It is common for writers to immediately produce the data in the forms of percentages or probabilities. But I think it is important to include in our discussion the absolute numbers, because this is what one must begin with in practice. And keeping the absolute numbers in mind is likely to avoid some confusions that arise if one immediately goes to percentages or to probabilities.

+

Still another use for the mean is when you have a set of observations with error in them. The mean of the observations probably is your best guess about which is the “right” one. Furthermore, the distance you are likely to be off the mark is less if you select the mean of the observations. An example might be a series of witnesses giving the police their guesses about the height of a man who overturned an outhouse. The mean probably is the best estimate to give to police officers as a description of the perpetrator (though it would be helpful to give the range of the observations as well).

+

We use the mean so often, in so many different circumstances, that we become used to it and never think about its nature. So let’s do so a bit now.

+

Different statistical ideas are appropriate for business and engineering decisions, biometrics, econometrics, scientific explanation (the philosophers’ case), and other fields. So nothing said here holds everywhere and always.

+

One might ask: What is the “meaning” of a mean? But that is not a helpful question. Rather, we should ask about the uses of a mean. Usually a mean is used to summarize a set of data. As we saw with marksmen C and D, it often is difficult to look at a table of data and obtain an overall idea of how big or how small the observations are; the mean (or other measurements) can help. Or if you wish to compare two sets of data where the distributions of observations overlap each other, comparing the means of the two distributions can often help you better understand the matter.

+

Another complication is the confusion between description and estimation, which makes it difficult to decide where to place the topic of descriptive statistics in a textbook. For example, consider the mean income of all men in the U.S., as measured by the decennial census. This mean of the universe can have a very different meaning from the mean of a sample of men with respect to the same characteristic. The sample mean is a point estimate, a statistical device, whereas the mean of the universe is a description. The use of the mean as an estimator is fraught with complications. Still, maybe it is no more complicated than deciding what describer to use for a population. This entire matter is much more complex than it appears at first glance.

+

When the sample size approaches that of the entire population — when the sample becomes closer and closer to being the same as the population — the two issues blend. What does that tell us? Anything? What is the relationship between a baseball player’s average for two weeks, and his/her lifetime average? This is subtle stuff — rivaling the subtleness of arguments about inference versus probability, and about the nature of confidence limits (see Chapter 26 and Chapter 27). Maybe the only solid answer is to try to stay super-clear on what you are doing for what purpose, and to ask continually what job you want the statistic (or describer) to do for you.

+

The issue of the relationship of sample size to population size arises here. If the sample size equals or approaches the population size, the very notion of estimation loses its meaning.

+

The notion of “best estimator” makes no sense in some situations, including the following: a) You draw one black ball from a bucket. You cannot put confidence intervals around your estimate of the proportion of black balls, except to say that the proportion is somewhere between 1 and 0. No one would proceed without bringing in more information. That is, when there is almost no information, you simply cannot make much of an estimate — and the resampling method breaks down, too. It does not help much to shift the discussion to the models of the buckets, because then the issue is the unknown population of the buckets, in which case we need to bring in our general knowledge. b) When the sample size equals or is close to the population size, as discussed in this section, the data are a description rather than an estimate, because the sample is getting to be much the same as the universe; that is, if there are twelve people in your family, and you randomly take a sample of the amount of sugar used by eight members of the family, the results of the sample cannot be very different than if you compute the amount for all twelve family members. In such a case, the interpretation of the mean becomes complex.

+

Underlying all estimation is the assumption of continuation, which follows from random sampling — that there is no reason to expect the next sample to be different from the present one in any particular fashion, mean or variation. But we do expect it to be different in some fashion because of sampling variability.

+
+
+

19.7 Conclusion

+

A Newsweek article says, “According to a recent reader’s survey in Bride’s magazine, the average blowout [wedding] will set you back about $16,000” (Feb 15, 1993, p. 67). That use of the mean (I assume) for the average, rather than the median, could cost the parents of some brides a pretty penny. It could be that the cost for the average person — that is, the median expenditure — might be a lot less than $16,000. (A few million dollar weddings could have a huge effect on a survey mean.) An inappropriate standard of comparison might enter into some family discussions as a result of this article, and cause higher outlays than otherwise. This chapter helps one understand the nature of such estimates.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/preface_second.html b/r-book/preface_second.html new file mode 100644 index 00000000..e8135653 --- /dev/null +++ b/r-book/preface_second.html @@ -0,0 +1,717 @@ + + + + + + + + + +Resampling statistics - Preface to the second edition + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Preface to the second edition

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+ +
+
+
+

This is a slightly edited version of the original preface to the second edition. We removed an introduction to the original custom software, and a look ahead at the original contents of the book.

+
+
+
+

Brief history of the resampling method

+

This book describes a revolutionary — but now fully accepted — approach to probability and statistics. Monte Carlo resampling simulation takes the mumbo-jumbo out of statistics and enables even beginning students to understand completely everything that is done.

+

Before we go further, let’s make the discussion more concrete with an example. Ask a class: What are the chances that three of a family’s first four children will be girls? After various entertaining class suggestions about procreating four babies, or surveying families with four children, someone in the group always suggests flipping a coin. This leads to valuable student discussion about whether the probability of a girl is exactly half (there are about 105 males born for each 100 females), whether .5 is a satisfactory approximation, whether four coins flipped once give the same answer as one coin flipped four times, and so on. Soon the class decides to take actual samples of coin flips. And students see that this method quickly arrives at estimates that are accurate enough for most purposes. Discussion of what is “accurate enough” also comes up, and that discussion is valuable, too.

+

The Monte Carlo method itself is not new. Near the end of World War II, a group of physicists at the Rand Corp. began to use random-number simulations to study processes too complex to handle with formulas. The name “Monte Carlo” came from the analogy to the gambling houses on the French Riviera. The application of Monte Carlo methods in teaching statistics also is not new. Simulations have often been used to illustrate basic concepts. What is new and radical is using Monte Carlo methods routinely as problem-solving tools for everyday problems in probability and statistics.

+

From here on, the related term resampling will be used throughout the book. Resampling refers to the use of the observed data or of a data generating mechanism (such as a die) to produce new hypothetical samples, the results of which can then be analyzed. The term computer-intensive methods also is frequently used to refer to techniques such as these.

+

The history of resampling is as follows: In the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly in their social science research. I sympathized with them. Even many experts are unable to understand intuitively the formal mathematical approach to the subject. Clearly, we need a method free of the formulas that bewilder almost everyone.

+

The solution is as follows: Beneath the logic of a statistical inference there necessarily lies a physical process. The resampling methods described in this book allow us to work directly with the underlying physical model by simulating it, rather than describing it with formulae. This general insight is also the heart of the specific technique Bradley Efron felicitously labeled ‘the bootstrap’ (1979), a device I introduced in 1969 that is now the most commonly used, and best known, resampling method.

+

The resampling approach was first tried with graduate students in 1966, and it worked exceedingly well. Next, under the auspices of the father of the “new math,” Max Beberman, I “taught” the method to a class of high school seniors in 1967. The word “taught” is in quotation marks because the pedagogical essence of the resampling approach is that the students discover the method for themselves with a minimum of explicit instruction from the teacher.

+

The first classes were a success and the results were published in 1969 (J. L. Simon and Holmes 1969). Three PhD experiments were then conducted under Kenneth Travers’ supervision, and they all showed overwhelming superiority for the resampling method (J. L. Simon, Atkinson, and Shevokas 1976). Subsequent research has confirmed this success.

+

The method was first presented at some length in the 1969 edition of my book Basic Research Methods in Social Science (J. L. Simon 1969), and in the third edition with Paul Burstein (Simon and Burstein 1985).

+

For some years, the resampling method failed to ignite interest among statisticians. While many factors (including the accumulated intellectual and emotional investment in existing methods) impede the adoption of any new technique, the lack of readily available computing power and tools was an obstacle. (The advent of the personal computer in the 1980s changed that, of course.)

+

Then in the late 1970s, Efron began to publish formal analyses of the bootstrap — an important resampling application (Efron 1979). Interest among statisticians has exploded since then, in conjunction with the availability of easy, fast, and inexpensive computer simulations. The bootstrap has been the most widely used, but across-the-board application of computer intensive methods now seems at hand. As Noreen (1989) noted, “there is a computer-intensive alternative to just about every conventional parametric and non-parametric test.” And the bootstrap method has now been hailed by an official American Statistical Association volume as the only “great breakthrough” in statistics since 1970 (Kotz and Johnson 1992).

+

It seems appropriate now to offer the resampling method as the technique of choice for beginning students as well as for the advanced practitioners who have been exploring and applying the method.

+

Though the term “computer-intensive methods” is nowadays used to describe the techniques elaborated here, this book can be read either with or without the accompanying use of the computer. However, as a practical matter, users of these methods are unlikely to be content with manual simulations if a quick and simple computer-program alternative is available.

+

The ultimate test of the resampling method is how well you, the reader, learn it and like it. But knowing about the experiences of others may help beginners as well as experienced statisticians approach the scary subject of statistics with a good attitude. Students as early as junior high school, taught by a variety of instructors and in other languages as well as English, have — in a matter of 6 or 12 short hours — learned how to handle problems that students taught conventionally do not learn until advanced university courses. And several controlled experimental studies show that, on average, students who learn this method are more likely to arrive at correct solutions than are students who are taught conventional methods.

+

Best of all, the experiments comparing the resampling method against conventional methods show that students enjoy learning statistics and probability this way, and they don’t suffer statistics panic. This experience contrasts sharply with the reactions of students learning by conventional methods. (This is true even when the same teachers teach both methods as part of an experiment.)

+

A public offer: The intellectual history of probability and statistics began with gambling games and betting. Therefore, perhaps a lighthearted but very serious offer would not seem inappropriate here: I hereby publicly offer to stake $5,000 in a contest against any teacher of conventional statistics, with the winner to be decided by whose students get the larger number of simple and complex numerical problems correct, when teaching similar groups of students for a limited number of class hours — say, six or ten. And if I should win, as I am confident that I will, I will contribute the winnings to the effort to promulgate this teaching method. (Here it should be noted that I am far from being the world’s most skillful or charming teacher. It is the subject matter that does the job, not the teacher’s excellence.) This offer has been in print for many years now, but no one has accepted it.

+

The early chapters of the book contain considerable discussion of the resampling method, and of ways to teach it. This material is intended mainly for the instructor; because the method is new and revolutionary, many instructors appreciate this guidance. But this didactic material is also intended to help the student get actively involved in the learning process rather than just sitting like a baby bird with its beak open waiting for the mother bird to drop morsels into its mouth. You may skip this didactic material, of course, and I hope that it does not get in your way. But all things considered, I decided it was better to include this material early on rather than to put it in the back or in a separate publication where it might be overlooked.

+
+
+

Brief history of statistics

+

In ancient times, mathematics developed from the needs of governments and rich men to number armies, flocks, and especially to count the taxpayers and their possessions. Up until the beginning of the 20th century, the term statistic meant the number of something — soldiers, births, taxes, or what-have-you. In many cases, the term statistic still means the number of something; the most important statistics for the United States are in the Statistical Abstract of the United States. These numbers are now known as descriptive statistics. This book will not deal at all with the making or interpretation of descriptive statistics, because the topic is handled very well in most conventional statistics texts.

+

Another stream of thought entered the field of probability and statistics in the 17th century by way of gambling in France. Throughout history people had learned about the odds in gambling games by repeated plays of the game. But in the year 1654, the French nobleman Chevalier de Mere asked the great mathematician and philosopher Pascal to help him develop correct odds for some gambling games. Pascal, the famous Fermat, and others went on to develop modern probability theory.

+

Later these two streams of thought came together. Researchers wanted to know how accurate their descriptive statistics were — not only the descriptive statistics originating from sample surveys, but also the numbers arising from experiments. Statisticians began to apply the theory of probability to the accuracy of the data arising from sample surveys and experiments, and that became the theory of inferential statistics.

+

Here we find a guidepost: probability theory and statistics are relevant whenever there is uncertainty about events occurring in the world, or in the numbers describing those events.

+

Later, probability theory was also applied to another context in which there is uncertainty — decision-making situations. Descriptive statistics like those gathered by insurance companies — for example, the number of people per thousand in each age bracket who die in a five-year period — have been used for a long time in making decisions such as how much to charge for insurance policies. But in the modern probabilistic theory of decision-making in business, politics and war, the emphasis is different; in such situations the emphasis is on methods of combining estimates of probabilities that depend upon each other in complicated ways in order to arrive at the best decision. This is a return to the gambling origins of probability and statistics. In contrast, in standard insurance situations (not including war insurance or insurance on a dancer’s legs) the probabilities can be estimated with good precision without complex calculation, on the basis of a great many observations, and the main statistical task is gathering the information. In business and political decision-making situations, however, one often works with probabilities based on very limited information — often little better than guesses. There the task is how best to combine these guesses about various probabilities into an overall probability estimate.

+

Estimating probabilities with conventional mathematical methods is often so complex that the process scares many people. And properly so, because its difficulty leads to errors. The statistics profession worries greatly about the widespread use of conventional tests whose foundations are poorly understood. The wide availability of statistical computer packages that can easily perform these tests with a single command, regardless of whether the user understands what is going on or whether the test is appropriate, has exacerbated this problem. This led John Tukey to turn the field toward descriptive statistics with his techniques of “exploratory data analysis” (Tukey 1977). These descriptive methods are well described in many texts.

+

Probabilistic analysis also is crucial, however. Judgments about whether the government should allow a new medicine on the market, or whether an operator should adjust a screw machine, require more than eyeball inspection of data to assess the chance variability. But until now the teaching of probabilistic statistics, with its abstruse structure of mathematical formulas, mysterious tables of calculations, and restrictive assumptions concerning data distributions — all of which separate the student from the actual data or physical process under consideration — has been an insurmountable obstacle to intuitive understanding.

+

Now, however, the resampling method enables researchers and decision-makers in all walks of life to obtain the benefits of statistics and predictability without the shortcomings of conventional methods, free of mathematical formulas and restrictive assumptions. Resampling’s repeated experimental trials on the computer enable the data (or a data-generating mechanism representing a hypothesis) to express their own properties, without difficult and misleading assumptions.

+

So — good luck. I hope that you enjoy the book and profit from it.

+

Julian Lincoln Simon

+

1997

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/preface_third.html b/r-book/preface_third.html new file mode 100644 index 00000000..07cfcd93 --- /dev/null +++ b/r-book/preface_third.html @@ -0,0 +1,726 @@ + + + + + + + + + +Resampling statistics - Preface to the third edition + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

Preface to the third edition

+
+ + + +
+ + + + +
+ + +
+ +

The book in your hands, or on your screen, is the third edition of a book originally called “Resampling: the new statistics”, by Julian Lincoln Simon (1992).

+

One of the pleasures of writing an edition of someone else’s book is that we have some freedom to praise a previous version of our own book. We will do that in the next section. Next we talk about the resampling methods in this book, and their place at the heart of “data science”. Finally, we discuss what we have changed, and why, and make some suggestions about where this book could fit into your learning and teaching.

+
+

What Simon saw

+

Simon gives the early history of this book in the original preface. He starts with the following observation:

+
+

In the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly…

+
+

Simon then applied his striking capacity for independent thought to the problem — and came to two essential conclusions.

+

The first was that introductory courses in statistics use far too much mathematics. Most students cannot follow along and quickly get lost, reducing the subject to — as Simon puts it — “mumbo-jumbo”.

+

On its own, this was not a new realization. Simon quotes a classic textbook by Wallis and Roberts (1956), in which they compare teaching statistics through mathematics to teaching in a foreign language. More recently, other teachers of statistics have come to the same conclusion. Cobb (2007) argues that it is practically impossible to teach students the level of mathematics they would need to understand standard introductory courses. As you will see below, Cobb also agrees with Simon about the solution.

+

Simon’s great contribution was to see how we can replace the mathematics, to better reveal the true heart of statistical thinking. His starting point appears in the original preface: “Beneath the logic of a statistical inference there necessarily lies a physical process”. Drawing conclusions from noisy data means building a model of the noisy world, and seeing how that model behaves. That model can be physical, where we generate the noisiness of the world using physical devices like dice and spinners and coin-tosses. In fact, Simon used exactly these kinds of devices in his first experiments in teaching (Simon 1969). He then saw that it was much more efficient to build these models with simple computer code, and the result was the first and second editions of this book, with their associated software, the Resampling Stats language.

+

Simon’s second conclusion follows from the first. Now that Simon had stripped away the unnecessary barrier of mathematics, he had got to the heart of what is interesting and difficult in statistics. Drawing conclusions from noisy data involves a lot of hard, clear thinking. We need to be honest with our students about that; statistics is hard, not because it is obscure (it need not be), but because it deals with difficult problems. It is exactly that hard logical thinking that can make statistics so interesting to our best students; “statistics” is just reasoning about the world when the world is noisy. Simon writes eloquently about this in a section in the introduction — “Why is statistics such a difficult subject” (Section 1.6).

+

We needed both of Simon’s conclusions to get anywhere. We cannot hope to teach two hard subjects at the same time; mathematics, and statistical reasoning. That is what Simon has done: he replaced the mathematics with something that is much easier to reason about. Then he can concentrate on the real, interesting problem — the hard thinking about data, and the world it comes from. To quote from a later section in this book (Section 2.4): “Once we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics.” Instead of asking “where would I look up the right recipe for this”, you find yourself asking “what kind of world do these data come from?” and “how can I reason about that world?”. Like Simon, we have found that this way of thinking and teaching is almost magically liberating and satisfying. We hope and believe that you will find the same.

+
+
+

Resampling and data science

+

The ideas in Simon’s book, first published in 1992, have found themselves at the center of the modern movement of data science.

+

In the section above, we described Simon’s path in discovering physical models as a way of teaching and explaining statistical tests. He saw that code was the right way to express these physical models, and therefore, to build and explain statistical tests.

+

Meanwhile, the wider world of data analysis has been coming to the same conclusion, but from the opposite direction. Simon saw the power of resampling for explanation, and then that code was the right way to express these explanations. The data science movement discovered first that code was essential for data analysis, and then that code was the right way to explain statistics.

+

The modern use of the phrase “data science” comes from the technology industry. From around 2007, companies such as LinkedIn and Facebook began to notice that there was a new type of data analyst that was much more effective than their predecessors. They came to call these analysts “data scientists”, because they had learned how to deal with large and difficult data while working in scientific fields such as ecology, biology, or astrophysics. They had done this by learning to use code:

+
+

Data scientists’ most basic, universal skill is the ability to write code. (Davenport and Patil 2012)

+
+

Further reflection (Donoho 2017) suggested that something deep was going on: that data science was the expression of a radical change in the way we analyze data, in academia, and in industry. At the center of this change — was code. Code is the language that allows us to tell the computer what it should do with data; it is the native language of data analysis.

+

This insight transforms the way we think of code. In the past, we have thought of code as a separate, specialized skill that some of us learn. We take coding courses — we “learn to code”. If code is the fundamental language for analyzing data, then we need code to express what data analysis does, and explain how it works. Here we “code to learn”. Code is not an aim in itself, but a language we can use to express the simple ideas behind data analysis and statistics.

+

Thus the data science movement started from code as the foundation for data analysis, to using code to explain statistics. It ends at the same place as this book, from the other side of the problem.

+

The growth of data science is the inevitable result of taking computing seriously in education and research. We have already cited Cobb (2007) on the impossibility of teaching the mathematics students would need in order to understand traditional statistics courses. He goes on to explain why there is so much mathematics, and why we should remove it. In the age before ubiquitous computing, we needed mathematics to simplify calculations that we could not practically do by hand. Now we have great computing power in our phones and laptops, we do not have this constraint, and we can use simpler resampling methods to solve the same problems. As Simon shows, these are much easier to describe and understand. Data science, and teaching with resampling, are the obvious consequences of ubiquitous computing.

+
+
+

What we changed

+

This diversion, through data science, leads us to the changes that we have made for the new edition. The previous edition of this book is still excellent, and you can read it free, online, at http://www.resample.com/intro-text-online. It continues to be ahead of its time, and ahead of our time. Its one major drawback is that Simon bases much of the book around code written in a special language that he developed with Dan Weidenfeld, called Resampling Stats. Resampling Stats is well designed for expressing the steps in simulating worlds that include elements of randomness, and it was a useful contribution at the time that it was written. Since then, and particularly in the last decade, there have been many improvements in more powerful and general languages, such as R and Python. These languages are particularly suitable for beginners in data analysis, and they come with a huge range of tools and libraries for many tasks in data analysis, including the kinds of models and simulations you will see in this book. We have updated the book to use R, instead of Resampling Stats. If you already know R or a similar language, such as Python, you will have a big head start in reading this book, but even if you do not, we have written the book so it will be possible to pick up the R code that you need to understand and build the kind of models that Simon uses. The advantage to us, your authors, is that we can use the very powerful tools associated with R to make it easier to run and explain the code. The advantage to you, our readers, is that you can also learn these tools, and the R language. They will serve you well for the rest of your career in data analysis.

+ +

Our second major change is that we have added some content that Simon specifically left out. Simon knew that his approach was radical for its time, and designed his book as a commentary, correction, and addition to traditional courses in statistics. He assumes some familiarity with the older world of normal distributions, t-tests, Chi-squared tests, analysis of variance, and correlation. In the time that has passed since he wrote the book, his approach to explanation has reached the mainstream. It is now perfectly possible to teach an introductory statistics course without referring to the older statistical methods. This means that the earlier editions of this book can now serve on their own as an introduction to statistics — but, used this way, at the time we write, this will leave our readers with some gaps to fill. Simon’s approach will give you a deep understanding of the ideas of statistics, and resampling methods to apply them, but you will likely come across other teachers and researchers using the traditional methods. To bridge this gap, we have added new sections that explain how resampling methods relate to their corresponding traditional methods. Luckily, we find these explanations add deeper understanding to the traditional methods. Teaching resampling is the best foundation for statistics, including the traditional methods.

+

Lastly, we have extended Simon’s explanation of Bayesian probability and inference. This is partly because Bayesian methods have become so important in statistical inference, and partly because Simon’s approach has such obvious application in explaining how Bayesian methods work.

+
+
+

Who should read this book, and when

+

As you have seen in the previous sections, this book uses a radical approach to explaining statistical inference — the science of drawing conclusions from noisy data. This approach is quickly becoming the standard in teaching of data science, partly because it is so much easier to explain, and partly because of the increasing role of code in data analysis.

+

Our book teaches the basics of using the R language, basic probability, statistical inference through simulation and resampling, confidence intervals, and basic Bayesian reasoning, all through the use of model building in simple code.

+

Statistical inference is an important part of research methods for many subjects; so much so, that research methods courses may even be called “statistics” courses, or include “statistics” components. This book covers the basic ideas behind statistical inference, and how you can apply these ideas to draw practical statistical conclusions. We recommend it to you as an introduction to statistics. If you are a teacher, we suggest you consider this book as a primary text for first statistics courses. We hope you will find, as we have, that this method of explaining through building is much more productive and satisfying than the traditional method of trying to convey some “intuitive” understanding of fairly complicated mathematics. We explain the relationship of these resampling techniques to traditional methods. Even if you do need to teach your students t-tests, and analysis of variance, we hope you will share our experience that this way of explaining is much more compelling than the traditional approach.

+

Simon wrote this book for students and teachers who were interested to discover a radical new method of explanation in statistics and probability. The book will still work well for that purpose. If you have done a statistics course, but you kept feeling that you did not really understand it, or there was something fundamental missing that you could not put your finger on — good for you! — then, please, read this book. There is a good chance that it will give you deeper understanding, and reveal the logic behind the often arcane formulations of traditional statistics.

+

Our book is only part of a data science course. There are several important aspects to data science. A data science course needs all the elements we list above, but it should also cover the process of reading, cleaning, and reorganizing data using R, or another language, such as Python.

+

It may also go into more detail about the experimental design, and cover prediction techniques, such as classification with machine learning, and data exploration with plots, tables, and summary measures. We do not cover those here. If you are teaching a full data science course, we suggest that you use this book as your first text, as an introduction to code, and statistical inference, and then add some of the many excellent resources on these other aspects of data science that assume some knowledge of statistics and programming.

+
+
+

Welcome to resampling

+

We hope you will agree that Simon’s insights for understanding and explaining are — really extraordinary. We are catching up slowly. If you are like us, your humble authors, you will find that Simon has succeeded in explaining what statistics is, and exactly how it works, to anyone with the patience to work through the examples, and think hard about the problems. If you have that patience, the rewards are great. Not only will you understand statistics down to its deepest foundations, but you will be able to think of your own tests, for your own problems, and have the tools to implement them yourself.

+

Matthew Brett

+

Stéfan van der Walt

+

Ian Nimmo-Smith

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/probability_theory_1a.html b/r-book/probability_theory_1a.html new file mode 100644 index 00000000..55f896df --- /dev/null +++ b/r-book/probability_theory_1a.html @@ -0,0 +1,1242 @@ + + + + + + + + + +Resampling statistics - 8  Probability Theory, Part 1 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

8  Probability Theory, Part 1

+
+ + + +
+ + + + +
+ + +
+ +
+

8.1 Introduction

+

Let’s assume we understand the nature of the system or mechanism that produces the uncertain events in which we are interested. That is, the probability of the relevant independent simple events is assumed to be known, the way we assume we know the probability of a single “6” with a given die. The task is to determine the probability of various sequences or combinations of the simple events — say, three “6’s” in a row with the die. These are the sorts of probability problems dealt with in this chapter.

+ +

The resampling method — or just call it simulation or Monte Carlo method, if you prefer — will be illustrated with classic examples. Typically, a single trial of the system is simulated with cards, dice, random numbers, or a computer program. Then trials are repeated again and again to estimate the frequency of occurrence of the event in which we are interested; this is the probability we seek. We can obtain as accurate an estimate of the probability as we wish by increasing the number of trials. The key task in each situation is designing an experiment that accurately simulates the system in which we are interested.

+

This chapter begins the Monte Carlo simulation work that culminates in the resampling method in statistics proper. The chapter deals with problems in probability theory — that is, situations where one wants to estimate the probability of one or more particular events when the basic structure and parameters of the system are known. In later chapters we move on to inferential statistics, where similar simulation work is known as resampling.

+
+
+

8.2 Definitions

+

A few definitions first:

+
    +
  • Simple Event : An event such as a single flip of a coin, or one draw of a single card. A simple event cannot be broken down into simpler events of a similar sort.
  • +
  • Simple Probability (also called “primitive probability”): The probability that a simple event will occur; for example, that my favorite football team, the Washington Commanders, will win on Sunday.
  • +
+

During a recent season, the “experts” said that the Commanders had a 60 percent chance of winning on Opening Day; that estimate is a simple probability. We can model that probability by putting into a bucket six green balls to stand for wins, and four red balls to stand for losses (or we could use 60 and 40 balls, or 600 and 400). For the outcome on any given day, we draw one ball from the bucket, and record a simulated win if the ball is green, a loss if the ball is red.

+

So far the bucket has served only as a physical representation of our thoughts. But as we shall see shortly, this representation can help us think clearly about the process of interest to us. It can also give us information that is not yet in our thoughts.

+

Estimating simple probabilities wisely depends largely upon gathering evidence well. It also helps to adjust one’s probability estimates skillfully to make them internally consistent. Estimating probabilities has much in common with estimating lengths, weights, skills, costs, and other subjects of measurement and judgment.

+

Some more definitions:

+
    +
  • Composite Event : A composite event is the combination of two or more simple events. Examples include all heads in three throws of a single coin; all heads in one throw of three coins at once; Sunday being a nice day and the Commanders winning; and the birth of nine females out of the next ten calves born if the chance of a female in a single birth is 0.48.
  • +
  • Compound Probability : The probability that a composite event will occur.
  • +
+

The difficulty in estimating simple probabilities such as the chance of the Commanders winning on Sunday arises from our lack of understanding of the world around us. The difficulty of estimating compound probabilities such as the probability of it being a nice day Sunday and the Commanders winning is the weakness in our mathematical intuition interacting with our lack of understanding of the world around us. Our task in the study of probability and statistics is to overcome the weakness of our mathematical intuition by using a systematic process of simulation (or the devices of formulaic deductive theory).

+

Consider now a question about a compound probability: What are the chances of the Commanders winning their first two games if we think that each of those games can be modeled by our bucket containing six green and four red balls? If one drawing from the bucket represents one game, a second drawing should represent the second game (assuming we replace the first ball drawn in order to keep the chances of winning the same for the two games). If so, two drawings from the bucket should represent two games. And we can then estimate the compound probability we seek with a series of two-ball trial experiments.

+

More specifically, our procedure in this case — the prototype of all procedures in the resampling simulation approach to probability and statistics — is as follows:

+
    +
  1. Put six green (“Win”) and four red (“Lose”) balls in a bucket.
  2. +
  3. Draw a ball, record its color, and replace it (so that the probability of winning the second simulated game is the same as the first).
  4. +
  5. Draw another ball and record its color.
  6. +
  7. If both balls drawn were green record “Yes”; otherwise record “No.”
  8. +
  9. Repeat steps 2-4 a thousand times.
  10. +
  11. Count the proportion of “Yes”s to the total number of “Yes”s and “No”s; the result is the probability we seek.
  12. +
+

Much the same procedure could be used to estimate the probability of the Commanders winning (say) 3 of their next 4 games. We will return to this illustration again and we will see how it enables us to estimate many other sorts of probabilities.

+
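If you would like a preview of how such a bucket procedure looks as code, here is one possible R sketch (not the book's own program, which comes later in the book); it uses the sample function, with replace=TRUE, to stand in for drawing a ball and putting it back, and the variable names are ours.

# A sketch of the two-game procedure above.
# Six green balls stand for wins, four red balls for losses.
bucket <- rep(c('green', 'red'), c(6, 4))

# A vector to hold the result of each of 10000 trials.
z <- rep('No result', 10000)

for (i in 1:10000) {
    # Two draws with replacement stand for the two games.
    games <- sample(bucket, size=2, replace=TRUE)
    # Record "Yes" if both simulated games were wins.
    if (all(games == 'green')) {
        z[i] <- 'Yes'
    } else {
        z[i] <- 'No'
    }
}

# The proportion of "Yes" trials estimates the probability of winning
# both games; it should come out near 0.36 (that is, 0.6 * 0.6).
sum(z == 'Yes') / 10000

The same pattern, with size=4 and a count of the green balls drawn, would handle a version of the 3-out-of-4 question mentioned above.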
    +
  • Experiment or Experimental Trial, or Trial, or Resampling Experiment : A simulation experiment or trial is a randomly-generated composite event which has the same characteristics as the actual composite event in which we are interested (except that in inferential statistics the resampling experiment is generated with the “benchmark” or “null” universe rather than with the “alternative” universe).
  • +
  • Parameter : A numerical property of a universe. For example, the “true” mean (don’t worry about the meaning of “true”), and the range between largest and smallest members, are two of its parameters.
  • +
+
+
+

8.3 Theoretical and historical methods of estimation

+

As introduced in Section 3.5, there are two general ways to tackle any probability problem: theoretical-deductive and empirical , each of which has two sub-types. These concepts have complicated links with the concept of “frequency series” discussed earlier.

+
    +
  • Empirical Methods . One empirical method is to look at actual cases in nature — for example, examine all (or a sample of) the families in Brazil that have four children and count the proportion that have three girls among them. (This is the most fundamental process in science and in information-getting generally. But in general we do not discuss it in this book and leave it to courses called “research methods.” I regard that as a mistake and a shame, but so be it.) In some cases, of course, we cannot get data in such fashion because it does not exist.

    +

    Another empirical method is to manipulate the simple elements in such fashion as to produce hypothetical experience with how the simple elements behave. This is the heart of the resampling method, as well as of physical simulations such as wind tunnels.

  • +
  • Theoretical Methods . The most fundamental theoretical approach is to resort to first principles, working with the elements in their full deductive simplicity, and examining all possibilities. This is what we do when we use a tree diagram to calculate the probability of three girls in families of four children.

  • +
+ +

The formulaic approach is a theoretical method that aims to avoid the inconvenience of resorting to first principles, and instead uses calculation shortcuts that have been worked out in the past.

+

What the Book Teaches . This book teaches you the empirical method using hypothetical cases. Formulas can be misleading for most people in most situations, and should be used as a shortcut only when a person understands exactly which first principles are embodied in the formulas. But most of the time, students and practitioners resort to the formulaic approach without understanding the first principles that lie behind them — indeed, their own teachers often do not understand these first principles — and therefore they have almost no way to verify that the formula is right. Instead they use canned checklists of qualifying conditions.

+
+
+

8.4 Samples and universes

+

The terms “sample” and “universe” (or “population”) 1 were used earlier without definition. But now these terms must be defined.

+
+

8.4.1 The concept of a sample

+

For our purposes, a “sample” is a collection of observations for which you obtain the data to be used in the problem. Almost any set of observations for which you have data constitutes a sample. (You might, or might not, choose to call a complete census a sample.)

+ +
+
+
+

8.5 The concept of a universe or population

+

For every sample there must also be a universe “behind” it. But “universe” is harder to define, partly because it is often an imaginary concept. A universe is the collection of things or people that you want to say that your sample was taken from . A universe can be finite and well defined — “all live holders of the Congressional Medal of Honor,” “all presidents of major universities,” “all billion-dollar corporations in the United States.” Of course, these finite universes may not be easy to pin down; for instance, what is a “major university”? And these universes may contain some elements that are difficult to find; for instance, some Congressional Medal winners may have left the country, and there may not be adequate public records on some billion-dollar corporations.

+

Universes that are called “infinite” are harder to understand, and it is often difficult to decide which universe is appropriate for a given purpose. For example, if you are studying a sample of patients suffering from schizophrenia, what is the universe from which the sample comes? Depending on your purposes, the appropriate universe might be all patients with schizophrenia now alive, or it might be all patients who might ever live. The latter concept of the universe of patients with schizophrenia is imaginary because some of the universe does not exist. And it is infinite because it goes on forever.

+

Not everyone likes this definition of “universe.” Others prefer to think of a universe, not as the collection of people or things that you want to say your sample was taken from, but as the collection that the sample was actually taken from. This latter view equates the universe to the “sampling frame” (the actual list or set of elements you sample from) which is always finite and existent. The definition of universe offered here is simply the most practical, in our opinion.

+ +
+
+

8.6 The conventions of probability

+

Let’s review the basic conventions and rules used in the study of probability:

+
    +
  1. Probabilities are expressed as decimals between 0 and 1, like percentages. The weather forecaster might say that the probability of rain tomorrow is 0.2, or 0.97.
  2. +
  3. The probabilities of all the possible alternative outcomes in a single “trial” must add to unity. If you are prepared to say that it must either rain or not rain, with no other outcome being possible — that is, if you consider the outcomes to be mutually exclusive (a term that we discuss below), then one of those probabilities implies the other. That is, if you estimate that the probability of rain is 0.2 — written \(P(\text{rain}) = 0.2\) — that implies that you estimate that \(P(\text{no rain}) = 0.8\).
  4. +
+
+
+
+ +
+
+Writing probabilities +
+
+
+

We will now be writing some simple formulae using probability. Above we write the probability of rain tomorrow as \(P(\text{rain})\). This probability might be 0.2, and we could write this as:

+

\[ P(\text{rain}) = 0.2 \]

+

We can term “rain tomorrow” an event — the event may occur: \(\text{rain}\), or it may not occur: \(\text{no rain}\).

+

We often shorten the name of our event — here \(\text{rain}\) — to a single letter, such as \(R\). So, in this case, we could write \(P(\text{rain}) = 0.2\) as \(P(R) = 0.2\) — meaning the same thing. We tend to prefer single letters — as in \(P(R)\) — to longer names — as in \(P(\text{rain})\). This is because the single letters can be easier to read in these compact formulae.

+

Above we have written the probability of the “rain tomorrow” event not occurring as \(P(\text{no rain})\). Another way of referring to an event not occurring is to prefix the event name with a caret (^) character like this: \(\ \hat{} R\). So read \(P(\ \hat{} R)\) as “the probability that it will not rain”, and it is just another way of writing \(P(\text{no rain})\). We sometimes call \(\ \hat{} R\) the complement of \(R\).

+

We use \(\text{and}\) between two events to mean both events occur.

+

For example, say we write the event “Commanders win the game” as \(W\). One example of a compound event (see above) would be the event \(W \text{ and } R\), meaning the event where the Commanders won the game and it rained.

+
+
+
+
+

8.7 Mutually exclusive events — the addition rule

+

Definition: If there are just two events \(A\) and \(B\) and they are “mutually exclusive” or “disjoint,” each implies the absence of the other. Green and red coats are mutually exclusive for you if (but only if) you never wear more than one coat at a time.

+

To state this idea formally, if \(A\) and \(B\) are mutually exclusive, then:

+

\[ P(A \text{ and } B) = 0 \]

+

If \(A\) is “wearing a green coat” and \(B\) is “wearing a red coat” (and you never wear two coats at the same time), then the probability that you are wearing a green coat and a red coat is 0: \(P(A \text{ and } B) = 0\).

+

In that case, outcomes \(A\) and \(B\), and hence outcome \(A\) and its own absence (written \(\ \hat{} A\)), are necessarily mutually exclusive, and hence the two probabilities add to unity:

+ +

\[ P(A) + P(\ \hat{} A) = 1 \]

+

The sales of your store in a given year cannot be both above and below $1 million. Therefore if \(P(\text{sales > \$1 million}) = 0.2\), \(P(\text{sales <= \$1 million}) = 0.8\).

+

This “complements” rule is useful as a consistency check on your estimates of probabilities. If you say that the probability of rain is 0.2, then you should check that you think that the probability of no rain is 0.8; if not, reconsider both the estimates. The same for the probabilities of your team winning and losing its next game.

+
+
+

8.8 Joint probabilities

+

Let’s return now to the Commanders. We said earlier that our best guess of the probability that the Commanders will win the first game is 0.6. Let’s complicate the matter a bit and say that the probability of the Commanders winning depends upon the weather; on a nice day we estimate a 0.65 chance of winning, on a nasty (rainy or snowy) day a chance of 0.55. It is obvious that we then want to know the chance of a nice day, and we estimate a probability of 0.7. Let’s now ask the probability that both will happen — it will be a nice day and the Commanders will win .

+

Before getting on with the process of estimation itself, let’s tarry a moment to discuss the probability estimates. Where do we get the notion that the probability of a nice day next Sunday is 0.7? We might have done so by checking the records of the past 50 years, and finding 35 nice days on that date. If we assume that the weather has not changed over that period (an assumption that some might not think reasonable, and the wisdom of which must be the outcome of some non-objective judgment), our probability estimate of a nice day would then be 35/50 = 0.7.

+

Two points to notice here: 1) The source of this estimate is an objective “frequency series.” And 2) the data come to us as the records of 50 days, of which 35 were nice. We would do best to stick with exactly those numbers rather than convert them into a single number — 70 percent. Percentages have a way of being confusing. (When his point score goes up from 2 to 3, my racquetball partner is fond of saying that he has made a “fifty percent increase”; that’s just one of the confusions with percentages.) And converting to a percent loses information: We no longer know how many observations the percent is based upon, whereas 35/50 keeps that information.

+

Now, what about the estimate that the Commanders have a 0.65 chance of winning on a nice day — where does that come from? Unlike the weather situation, there is no long series of stable data to provide that information about the probability of winning. Instead, we construct an estimate using whatever information or “hunch” we have. The information might include the Commanders’ record earlier in this season, injuries that have occurred, what the “experts” in the newspapers say, the gambling odds, and so on. The result certainly is not “objective,” or the result of a stable frequency series. But we treat the 0.65 probability in quite the same way as we treat the .7 estimate of a nice day. In the case of winning, however, we produce an estimate expressed directly as a percent.

+

If we are shaky about the estimate of winning — as indeed we ought to be, because so much judgment and guesswork inevitably goes into it — we might proceed as follows: Take hold of a bucket and two bags of balls, green and red. Put into the bucket some number of green balls — say 10. Now add enough red balls to express your judgment that the ratio is the ratio of expected wins to losses on a nice day, adding or subtracting green balls as necessary to get the ratio you want. If you end up with 13 green and 7 red balls, then you are “modeling” a probability of 0.65, as stated above. If you end up with a different ratio of balls, then you have learned from this experiment with your own mind processes that you think that the probability of a win on a nice day is something other than 0.65.

+

Don’t put away the bucket. We will be using it again shortly. And keep in mind how we have just been using it, because our use later will be somewhat different though directly related.

+

One good way to begin the process of producing a compound estimate is by portraying the available data in a “tree diagram” like Figure 8.1. The tree diagram shows the possible events in the order in which they might occur. A tree diagram is extremely valuable whether you will continue with either simulation or the formulaic method.

+
+
+
+
+

+
Figure 8.1: Tree diagram
+
+
+
+
+
+
+

8.9 The Monte Carlo simulation method (resampling)

+

The steps we follow to simulate an answer to the compound probability question are as follows:

+
    +
  1. Put seven blue balls (for “nice day”) and three yellow balls (“not nice”) into a bucket labeled A.
  2. +
  3. Put 65 green balls (for “win”) and 35 red balls (“lose”) into a bucket labeled B. This bucket represents the chance that the Commanders will win when it is a nice day.
  4. +
  5. Draw one ball from bucket A. If it is blue, carry on to the next step; otherwise record “no” and stop.
  6. +
  7. If you have drawn a blue ball from bucket A, now draw a ball from bucket B, and if it is green, record “yes” on a score sheet; otherwise write “no.”
  8. +
  9. Repeat steps 3-4 perhaps 10000 times.
  10. +
  11. Count the number of “yes” trials.
  12. +
  13. Compute the probability you seek as (number of “yeses” / 10000). (This is the same as (number of “yeses”) / (number of “yeses” + number of “noes”).)
  14. +
+

Actually doing the above series of steps by hand is useful to build your intuition about probability and simulation methods. But the procedure can also be simulated with a computer. We will use R to do this in a moment.

+
+
+

8.10 If statements in R

+

Before we get to the simulation, we need another feature of R, called a conditional or if statement.

+

Here we have rewritten step 4 above, but using indentation to emphasize the idea:

+
If you have drawn a blue ball from bucket A:
+    Draw a ball from bucket B
+    if the ball is green:
+        record "yes"
+    otherwise:
+        record "no".
+

Notice the structure. The first line is the header of the if statement. It has a condition — this is why if statements are often called conditional statements. The condition here is “you have drawn a blue ball from bucket A”. If this condition is met — it is True that you have drawn a blue ball from bucket A — then we go on to do the stuff that is indented. Otherwise we do not do any of the stuff that is indented.

+

The indented stuff above is the body of the if statement. It is the stuff we do if the conditional at the top is True.

+

Now let’s see how we would write that in R.

+

Let’s make bucket A. Remember, this is the weather bucket. It has seven blue balls (for 70% fine days) and 3 yellow balls (for 30% rainy days). See Section 6.5 for the rep way of repeating elements multiple times.

+
+

Start of fine_win notebook

+ + +
+
# blue means "nice day", yellow means "not nice".
+bucket_A <- rep(c('blue', 'yellow'), c(7, 3))
+bucket_A
+
+
 [1] "blue"   "blue"   "blue"   "blue"   "blue"   "blue"   "blue"   "yellow"
+ [9] "yellow" "yellow"
+
+
+

Now let us draw a ball at random from bucket_A:

+
+
a_ball <- sample(bucket_A, size=1)
+a_ball
+
+
[1] "blue"
+
+
+

Now we run our first if statement. Running this code will display “The ball was blue” if the ball was blue, otherwise it will not display anything:

+
+
if (a_ball == 'blue') {
+    message('The ball was blue')
+}
+
+
The ball was blue
+
+
+
+

Notice that the header line has if, followed by an open parenthesis ( introducing the conditional expression a_ball == 'blue'. There follows a close parenthesis ) to finish the conditional expression. Next there is an open curly brace { to signal the start of the body of the if statement. The body of the if statement is one or more lines of code, followed by the close curly brace }. Here there is only one line: message('The ball was blue'). R only runs the body of the if statement if the condition is TRUE.2

+
+

To confirm we see “The ball was blue” if a_ball is 'blue' and nothing otherwise, we can set a_ball and re-run the code:

+
+
# Set value of a_ball so we know what it is.
+a_ball <- 'blue'
+
+
+
if (a_ball == 'blue') {
+    # The conditional statement is True in this case, so the body does run.
+    message('The ball was blue')
+}
+
+
The ball was blue
+
+
+
+
a_ball <- 'yellow'
+
+
+
if (a_ball == 'blue') {
+    # The conditional statement is False, so the body does not run.
+    message('The ball was blue')
+}
+
+

We can add an else clause to the if statement. Remember the body of the if statement runs if the conditional expression (here a_ball == 'blue') is TRUE. The else clause runs if the conditional statement is FALSE. This may be clearer with an example:

+
+
a_ball <- 'blue'
+
+
+
if (a_ball == 'blue') {
+    # The conditional expression is True in this case, so the body runs.
+    message('The ball was blue')
+} else {
+    # The conditional expression was True, so the else clause does not run.
+    message('The ball was not blue')
+}
+
+
The ball was blue
+
+
+
+

Notice that the else clause of the if statement starts after the end of the if body, marked by the closing curly brace }. else follows, followed in turn by the opening curly brace { to start the body of the else clause. The body of the else clause only runs if the initial conditional expression is not TRUE.

+
+
+
a_ball <- 'yellow'
+
+
+
if (a_ball == 'blue') {
+    # The conditional expression was False, so the body does not run.
+    message('The ball was blue')
+} else {
+    # but the else clause does run.
+    message('The ball was not blue')
+}
+
+
The ball was not blue
+
+
+

With this machinery, we can now implement the full logic of step 4 above:

+
If you have drawn a blue ball from bucket A:
+    Draw a ball from bucket B
+    if the ball is green:
+        record "yes"
+    otherwise:
+        record "no".
+

Here is bucket B. Remember green means “win” (65% of the time) and red means “lose” (35% of the time). We could call this the “Commanders win when it is a nice day” bucket:

+
+
bucket_B <- rep(c('green', 'red'), c(65, 35))
+
+

The full logic for step 4, for a single trial, is shown in the code below.

+

After that, we have everything we need to run many trials with the same logic.

+
+
# By default, say we have no result.
+result <- 'No result'
+a_ball <- sample(bucket_A, size=1)
+# If you have drawn a blue ball from bucket A:
+if (a_ball == 'blue') {
+    # Draw a ball at random from bucket B
+    b_ball <- sample(bucket_B, size=1)
+    # if the ball is green:
+    if (b_ball == 'green') {
+        # record "yes"
+        result <- 'yes'
+    # otherwise:
+    } else {
+        # record "no".
+        result <- 'no'
+    }
+}
+# Show what we got in this case.
+result
+
+
[1] "yes"
+
+
+
+
# The result of each trial.
+# To start with, say we have no result for all the trials.
+z <- rep('No result', 10000)
+
+# Repeat trial procedure 10000 times
+for (i in 1:10000) {
+    # draw one "ball" for the weather, store in "a_ball"
+    # blue is "nice day", yellow is "not nice"
+    a_ball <- sample(bucket_A, size=1)
+    if (a_ball == 'blue') {  # nice day
+        # if no rain, check on game outcome
+    # green is "win" (given nice day), red is "lose" (given nice day).
+        b_ball <- sample(bucket_B, size=1)
+        if (b_ball == 'green') {  # Commanders win
+            # Record result.
+            z[i] <- 'yes'
+        } else {
+            z[i] <- 'no'
+        }
+    }
+    # End of trial, go back to the beginning until done.
+}
+
+# Count of the number of times we got "yes".
+k <- sum(z == 'yes')
+# Show the proportion of *both* fine day *and* wins
+kk <- k / 10000
+kk
+
+
[1] 0.461
+
+
+

The above procedure gives us the probability that it will be a nice day and the Commanders will win — about 46.1%.

+

End of fine_win notebook

+
+

Let’s say that we think that the Commanders have a 0.55 (55%) chance of winning on a not-nice day. With the aid of a bucket with a different composition — one made by substituting 55 green and 45 red balls in Step 4 — a similar procedure yields the chance that it will be a nasty day and the Commanders will win. With a similar substitution and procedure we could also estimate the probabilities that it will be a nasty day and the Commanders will lose, and a nice day and the Commanders will lose. The sum of these probabilities should come close to unity, because the sum includes all the possible outcomes. But it will not exactly equal unity because of what we call “sampling variation” or “sampling error.”

+

Please notice that each trial of the procedure begins with the same numbers of balls in the buckets as the previous trial. That is, you must replace the balls you draw after each trial in order that the probabilities remain the same from trial to trial. Later we will discuss the general concept of replacement versus non-replacement more fully.

+
+
+

8.11 The deductive formulaic method

+

It also is possible to get an answer with formulaic methods to the question about a nice day and the Commanders winning. The following discussion of nice-day-Commanders-win handled by formula is a prototype of the formulaic deductive method for handling other problems.

+

Return now to the tree diagram (Figure 8.1) above. We can read from the tree diagram that 70 percent of the time it will be nice, and of that 70 percent of the time, 65 percent of the games will be wins. That is, \(0.65 * 0.7 = 0.455\) = the probability of a nice day and a win. That is the answer we seek. The method seems easy, but it also is easy to get confused and obtain the wrong answer.

+
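If you want to check that arithmetic in R rather than by hand, a line or two will do it (this snippet is ours, not part of the original text); compare the result with the simulation estimate of about 0.46 above.

# The formulaic (deductive) answer for a nice day and a win.
p_nice <- 0.7              # probability of a nice day
p_win_given_nice <- 0.65   # probability of a win, given a nice day
p_nice * p_win_given_nice  # gives 0.455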
+
+

8.12 Multiplication rule

+

We can generalize what we have just done. The foregoing formula exemplifies what is known as the “multiplication rule”:

+

\[ P(\text{nice day and win}) = P(\text{nice day}) * P(\text{winning | nice day}) \]

+

where the vertical line in \(P(\text{winning | nice day})\) means “conditional upon” or “given that.” That is, the vertical line indicates a “conditional probability,” a concept we must consider in a minute.

+

The multiplication rule is a formula that produces the probability of the combination (juncture) of two or more events . More discussion of it will follow below.

+
+
+

8.13 Conditional and unconditional probabilities

+

Two kinds of probability statements — conditional and unconditional — must now be distinguished.

+

An unconditional probability is the probability that an event occurs, stated without reference to whether any other event occurs. It is the appropriate concept when many factors, all small relative to each other rather than one force having an overwhelming influence, affect the outcome.

+

A conditional probability is formally written \(P(\text{Commanders win | rain}) = 0.65\), and it is read “The probability that the Commanders will win if (given that) it rains is 0.65.” It is the appropriate concept when there is one (or more) major event of interest in decision contexts.

+

Let’s use another football example to explain conditional and unconditional probabilities. In the year this was being written, the University of Maryland had an unpromising football team. Someone may nevertheless ask what chance the team had of winning the post season game at the bowl to which only the best team in the University of Maryland’s league is sent. One may say that if by some miracle the University of Maryland does get to the bowl, its chance would be a bit less than 50-50 — say, 0.40. That is, the probability of its winning, conditional on getting to the bowl is 0.40. But the chance of its getting to the bowl at all is very low, perhaps 0.01. If so, the unconditional probability of winning at the bowl is the probability of its getting there multiplied by the probability of winning if it gets there; that is, 0.01 x 0.40 = 0.004. (It would be even better to say that .004 is the probability of winning conditional only on having a team, there being a league, and so on, all of which seem almost sure things.) Every probability is conditional on many things — that war does not break out, that the sun continues to rise, and so on. But if all those unspecified conditions are very sure, and can be taken for granted, we talk of the probability as unconditional.

+

A conditional probability is a statement that the probability of an event is such-and-such if something else is so-and-so; it is the “if” that makes a probability statement conditional. True, in some sense all probability statements are conditional; for example, the probability of an even-numbered spade is 6/52 if the deck is a poker deck and not necessarily if it is a pinochle deck or Tarot deck. But we ignore such conditions for most purposes.

+

Most of the use of the concept of probability in the social sciences is conditional probability. All hypothesis-testing statistics (discussed starting in Chapter 20) are conditional probabilities.

+

Here is the typical conditional-probability question used in social-science statistics: What is the probability of obtaining this sample S (by chance) if the sample were taken from universe A? For example, what is the probability of getting a sample of five children with I.Q.s over 100 by chance in a sample randomly chosen from the universe of children whose average I.Q. is 100?

+

One way to obtain such conditional-probability statements is by examination of the results generated by universes like the conditional universe. For example, assume that we are considering a universe of children where the average I.Q. is 100.

+

Write down “over 100” and “under 100” respectively on many slips of paper, put them into a hat, draw five slips several times, and see how often the first five slips drawn are all over 100. This is the resampling (Monte Carlo simulation) method of estimating probabilities.

+

Another way to obtain such conditional-probability statements is formulaic calculation. For example, if half the slips in the hat have numbers under 100 and half over 100, the probability of getting five in a row above 100 is 0.03125 — that is, \(0.5^5\), or 0.5 x 0.5 x 0.5 x 0.5 x 0.5, using the multiplication rule introduced above. But if you are not absolutely sure you know the proper mathematical formula, you are more likely to come up with a sound answer with the simulation method.

+
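As an illustration (our sketch, not the book's code), here is how that hat of slips might look in R; because the hat is imagined to hold very many slips, drawing with replacement is a reasonable stand-in.

# Simulate drawing five slips from a huge hat in which half the slips
# say "over 100" and half say "under 100".
z <- rep('No', 10000)
for (i in 1:10000) {
    slips <- sample(c('over 100', 'under 100'), size=5, replace=TRUE)
    # Record "Yes" if all five slips say "over 100".
    if (all(slips == 'over 100')) {
        z[i] <- 'Yes'
    }
}
# The proportion of "Yes" trials; it should be close to 0.5^5 = 0.03125.
sum(z == 'Yes') / 10000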

Let’s illustrate the concept of conditional probability with four cards — two aces and two 3’s (or two black and two red). What is the probability of an ace? Obviously, 0.5. If you first draw an ace, what is the probability of an ace now? That is, what is the probability of an ace conditional on having drawn one already? Obviously not 0.5.

+

This change in the conditional probabilities is the basis of mathematician Edward Thorp’s famous system of card-counting to beat the casinos at blackjack (Twenty One).

+

Casinos can defeat card counting by using many decks at once so that conditional probabilities change more slowly, and are not very different than unconditional probabilities. Looking ahead, we will see that sampling with replacement, and sampling without replacement from a huge universe, are much the same in practice, so we can substitute one for the other at our convenience.

+
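Here is a small sketch (ours, not the book's) of that last point, using the sample function discussed in the next section: when the deck is huge, drawing without replacement barely changes what remains, so the answer is close to the with-replacement one.

# A very large "deck": 5000 black and 5000 red cards.
big_deck <- rep(c('black', 'red'), c(5000, 5000))
z <- rep('No', 10000)
for (i in 1:10000) {
    # Two draws *without* replacement (the default for sample).
    two_cards <- sample(big_deck, size=2)
    if (all(two_cards == 'black')) {
        z[i] <- 'Yes'
    }
}
# Should be very close to 0.5 * 0.5 = 0.25, the with-replacement answer.
sum(z == 'Yes') / 10000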

Let’s further illustrate the concept of conditional probability with a puzzle (from Gardner 2001, 288). “… shuffle a packet of four cards — two red, two black — and deal them face down in a row. Two cards are picked at random, say by placing a penny on each. What is the probability that those two cards are the same color?”

+

1. Play the game with the cards 100 times, and estimate the probability sought.

+

OR

+
    +
  1. Put slips with the numbers “1,” “1,” “2,” and “2” in a hat, or in a vector named N on a computer.
  2. +
  3. Shuffle the slips of paper by shaking the hat or shuffling the vector (of which more below).
  4. +
  5. Take two slips of paper from the hat or from N, to get two numbers.
  6. +
  7. Call the first number you selected A and the second B.
  8. +
  9. Are A and B the same? If so, record “Yes” otherwise “No”.
  10. +
  11. Repeat (2-5) 10000 times, and count the proportion of “Yes” results. That proportion equals the probability we seek to estimate.
  12. +
+

Before we proceed to do this procedure in R, we need a command to shuffle a vector.

+
+
+

8.14 Shuffling with sample

+

In the recipe above, the vector N has four values:

+
+
N <- c(1, 1, 2, 2)
+
+

For the physical simulation, we specified that we would shuffle the slips of paper with these numbers, meaning that we would jumble them up into a random order. When we have done this, we will select two slips — say the first two — from the shuffled slips.

+

As we will be discussing more in various places, this shuffle-then-draw procedure is also called resampling without replacement. The without replacement idea refers to the fact that, after shuffling, we take a first virtual slip of paper from the shuffled vector, and then a second — but we do not replace the first slip of paper into the shuffled vector before drawing the second. For example, say I drew a “1” from N for the first value. If I am sampling without replacement then, when I draw the next value, the candidates I am choosing from are now “1”, “2” and “2”, because I have removed the “1” I got as the first value. If I had instead been sampling with replacement, then I would put back the “1” I had drawn, and would draw the second sample from the full set of “1”, “1”, “2”, “2”.

+
+

In fact we can use R’s sample function to shuffle any vector. The default behavior of sample is to sample without replacement. Up until now we have always told R to change that default behavior, using the replace=TRUE argument to sample. replace=TRUE tells sample to sample with replacement. Now we want to sample without replacement, so we leave out replace=TRUE to let sample do its default sampling, without replacement. That is, when we do not specify replace=, R assumes replace=FALSE — sampling without replacement.

+
+
# The vector N, shuffled into a random order.
+# Note that "sample" *by default*, samples without replacement.
+# When we ask for size=4, we are asking for a sample that is the same
+# size as the original vector, and so, this will be the original vector
+# with a random reordering.
+shuffled <- sample(N, size=4)
+# The "slips" are now in random order.
+shuffled
+
+
[1] 1 2 2 1
+
+
+

And in fact, if you omit the size= argument to sample, it will assume you mean the size to be the same size as the input array — in this case, it will assume size=length(N) and therefore size=4. So we can get the same effect of a reordered (shuffled) vector by omitting both size= and replace=:

+
+
# The vector N, shuffled into a random order (the same procedure as the chunk
+# above).
+shuffled <- sample(N)
+# The "slips" are now in random order.
+shuffled
+
+
[1] 2 1 1 2
+
+
+
+


+

See Section 11.4 for some more discussion of shuffling and sampling without replacement.

+
+
+

8.15 Code answers to the cards and pennies problem

+
+

Start of cards_pennies notebook

+ + +
+
# Numbers representing the slips in the hat.
+N <- c(1, 1, 2, 2)
+
+# An array in which we will store the result of each trial.
+z <- rep('No result yet', 10000)
+
+for (i in 1:10000) {
+    # sample, used in this way, has the effect of shuffling the vector
+    # into a random order.  See the section linked above for an explanation.
+    shuffled <- sample(N)
+
+    A <- shuffled[1]  # The first slip from the shuffled array.
+    B <- shuffled[2]  # The second slip from the shuffled array.
+
+    # Set the result of this trial.
+    if (A == B) {
+        z[i] <- 'Yes'
+    } else {
+        z[i] <- 'No'
+    }
+}  # End of the loop.
+
+# How many times did we see "Yes"?
+k <- sum(z == 'Yes')
+
+# The proportion.
+kk <- k / 10000
+
+message(kk)
+
+
0.3273
+
+
+

Now let’s play the game differently, first picking one card and putting it back and shuffling before picking a second card. What are the results now? You can try it with the cards, but here is another program, similar to the last, to run that variation.

+
+
# An array in which we will store the result of each trial.
+z <- rep('No result yet', 10000)
+
+for (i in 1:10000) {
+    # Shuffle the numbers in N into a random order.
+    first_shuffle <- sample(N)
+    # Draw a slip of paper.
+    A <- first_shuffle[1]  # The first slip.
+
+    # Shuffle again (with all the slips).
+    second_shuffle <- sample(N)
+    # Draw a slip of paper.
+    B <- second_shuffle[1]  # The second slip.
+
+    # Set the result of this trial.
+    if (A == B) {
+        z[i] <- 'Yes'
+    } else {
+        z[i] <- 'No'
+    }
+}  # End of the loop.
+
+# How many times did we see "Yes"?
+k <- sum(z == 'Yes')
+
+# The proportion.
+kk <- k / 10000
+
+message(kk)
+
+
0.5059
+
+
+

End of cards_pennies notebook

+
+

Why do you get different results in the two cases? Let’s ask the question differently: What is the probability of first picking a black card? Clearly, it is 50-50, or 0.5. Now, if you first pick a black card, what is the probability in the first game above of getting a second black card? There are two red and one black cards left, so now p = 1/3.

+

But in the second game, what is the probability of picking a second black card if the first one you pick is black? It is still 0.5 because we are sampling with replacement.

+

The probability of picking a second black card conditional on picking a first black card in the first game is 1/3, and it is different from the unconditional probability of picking a black card first. But in the second game the probability of the second black card conditional on first picking a black card is the same as the probability of the first black card.

+

So the reason you lose money if you play the first game at even odds against a carnival game operator is that the conditional probability is different from the original probability.

+

And an illustrative joke: The best way to avoid there being a live bomb aboard your plane flight is to take an inoperative bomb aboard with you; the probability of one bomb is very low, and by the multiplication rule, the probability of two bombs is very very low . Two hundred years ago the same joke was told about the midshipman who, during a battle, stuck his head through a hole in the ship’s side that had just been made by an enemy cannon ball because he had heard that the probability of two cannonballs striking in the same place was one in a million.

+

What’s wrong with the logic in the joke? The probability of there being a bomb aboard already, conditional on your bringing a bomb aboard, is the same as the conditional probability if you do not bring a bomb aboard. Hence you change nothing by bringing a bomb aboard, and do not reduce the probability of an explosion.

+
+
+

8.16 The Commanders again, plus leaving the game early

+

Let’s carry exactly the same process one tiny step further. Assume that if the Commanders win, there is a 0.3 chance you will leave the game early. Now let us ask the probability of a nice day, the Commanders winning, and you leaving early. You should be able to see that this probability can be estimated with three buckets instead of two. Or it can be computed with the multiplication rule as 0.65 * 0.7 * 0.3 = 0.1365 (about 0.14) — the probability of a nice day and a win and you leave early.

+
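A sketch of the three-bucket version in R might look like the following (this is our code, not the book's; bucket_C and its contents are our own labels for the leave-early step).

bucket_A <- rep(c('blue', 'yellow'), c(7, 3))    # nice day / not nice
bucket_B <- rep(c('green', 'red'), c(65, 35))    # win / lose, given a nice day
bucket_C <- rep(c('leave', 'stay'), c(3, 7))     # leave early / stay, given a win

z <- rep('No', 10000)
for (i in 1:10000) {
    # Only carry on to the next bucket if the previous draw "succeeded".
    if (sample(bucket_A, size=1) == 'blue') {
        if (sample(bucket_B, size=1) == 'green') {
            if (sample(bucket_C, size=1) == 'leave') {
                z[i] <- 'Yes'
            }
        }
    }
}
# Should be close to 0.7 * 0.65 * 0.3 = 0.1365.
sum(z == 'Yes') / 10000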

The book shows you the formal method — the multiplication rule, in this case — for several reasons: 1) Simulation is weak with very low probabilities, e.g. P(50 heads in 50 throws). But — a big but — statistics and probability is seldom concerned with very small probabilities. Even for games like poker, the orders of magnitude of 5 aces in a wild game with joker, or of a royal flush, matter little. 2) The multiplication rule is wonderfully handy and convenient for quick calculations in a variety of circumstances. A back-of-the-envelope calculation can be quicker than a simulation. And it can also be useful in situations where the probability you will calculate will be very small, in which case simulation can require considerable computer time to be accurate. (We will shortly see this point illustrated in the case of estimating the rate of transmission of AIDS by surgeons.) 3) It is useful to know the theory so that you are able to talk to others, or if you go on to other courses in the mathematics of probability and statistics.

+

The multiplication rule also has the drawback of sometimes being confusing, however. If you are in the slightest doubt about whether the circumstances are correct for applying it, you will be safer to perform a simulation as we did earlier with the Commanders, though in practice you are likely to simulate with the aid of a computer program, as we shall see shortly. So use the multiplication rule only when there is no possibility of confusion. Usually that means using it only when the events under consideration are independent.

+

Notice that the same multiplication rule gives us the probability of any particular sequence of hits and misses — say, a miss, then a hit, then a hit if the probability of a single miss is 2/3. Among the 2/3 of the trials with misses on the first shot, 1/3 will next have a hit, so 2/3 x 1/3 equals the probability of a miss then a hit. Of those 2/9 of the trials, 1/3 will then have a hit, or 2/3 x 1/3 x 1/3 = 2/27 equals the probability of the sequence miss-hit-hit.

+
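To see the rule at work on that sequence, here is a short simulation sketch (ours, not the book's), using sample's prob= argument to make a miss twice as likely as a hit.

z <- rep('No', 10000)
for (i in 1:10000) {
    # Three shots; each hits with probability 1/3 and misses with 2/3.
    shots <- sample(c('hit', 'miss'), size=3, replace=TRUE, prob=c(1/3, 2/3))
    # Record "Yes" only for the exact sequence miss, hit, hit.
    if (shots[1] == 'miss' && shots[2] == 'hit' && shots[3] == 'hit') {
        z[i] <- 'Yes'
    }
}
# Should be close to 2/3 * 1/3 * 1/3 = 2/27, or about 0.074.
sum(z == 'Yes') / 10000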

The multiplication rule is very useful in everyday life. It fits closely to a great many situations such as “What is the chance that it will rain (.3) and that (if it does rain) the plane will not fly (.8)?” Hence the probability of your not leaving the airport today is 0.3 x 0.8 = 0.24.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/probability_theory_1b.html b/r-book/probability_theory_1b.html new file mode 100644 index 00000000..352c255a --- /dev/null +++ b/r-book/probability_theory_1b.html @@ -0,0 +1,802 @@ + + + + + + + + + +Resampling statistics - 9  Probability Theory Part I (continued) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

9  Probability Theory Part I (continued)

+
+ + + +
+ + + + +
+ + +
+ +
+

9.1 The special case of independence

+

A key concept in probability and statistics is that of the independence of two events in which we are interested. Two events are said to be “independent” when one of them does not have any apparent relationship to the other. If I flip a coin that I know from other evidence is a fair coin, and I get a head, the chance of then getting another head is still 50-50 (one in two, or one to one.) And, if I flip a coin ten times and get heads the first nine times, the probability of getting a head on the tenth flip is still 50-50. Hence the concept of independence is characterized by the phrase “The coin has no memory.” (Actually the matter is a bit more complicated. If you had previously flipped the coin many times and knew it to be a fair coin, then the odds would still be 50-50, even after nine heads. But, if you had never seen the coin before, the run of nine heads might reasonably make you doubt that the coin was a fair one.)

+

In the Washington Commanders example above, we needed a different set of buckets to estimate the probability of a nice day plus a win, and of a nasty day plus a win. But what if the Commanders’ chances of winning are the same whether the day is nice or nasty? If so, we say that the chance of winning is independent of the kind of day. That is, in this special case,

+

\[ P(\text{win | nice day}) = P(\text{win | nasty day}) \text{ and } P(\text{nice day and win}) \]

+

\[ = P(\text{nice day}) * P(\text{winning | nice day}) \]

+

\[ = P(\text{nice day}) * P(\text{winning}) \]

+
+
+
+ +
+
+ +
+
+
+

See Section 8.13 for an explanation of this notation.

+
+
+

In this case we need only one set of two buckets to make all the estimates.

+

Independence means that the elements are drawn from 2 or more separate sets of possibilities . That is, \(P(A | B) = P(A | \ \hat{} B) = P(A)\) and vice versa.

+ +

In other words, if the occurrence of the first event does not change this probability that the second event will occur, then the events are independent.

+

Another way to put the matter: Events A and B are said to be independent of each other if knowing whether A occurs does not change the probability that B will occur, and vice versa. If knowing whether A does occur alters the probability of B occurring, then A and B are dependent.

+

If two events are independent, the multiplication rule simplifies to \(P(A \text{ and } B) = P(A) * P(B)\) . I’ll repeat once more: This rule is simply a mathematical shortcut, and one can make the desired estimate by simulation.

+
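For instance (our sketch, not the book's code), if winning really were independent of the weather, one "win" bucket would serve for every kind of day, and the simulated joint probability would settle near the product of the two separate probabilities.

bucket_weather <- rep(c('blue', 'yellow'), c(7, 3))  # nice / not nice
bucket_game <- rep(c('green', 'red'), c(6, 4))       # win / lose, using the earlier 0.6 chance

z <- rep('No', 10000)
for (i in 1:10000) {
    day <- sample(bucket_weather, size=1)
    game <- sample(bucket_game, size=1)   # same bucket whatever the weather
    if (day == 'blue' && game == 'green') {
        z[i] <- 'Yes'
    }
}
# Should be close to P(nice) * P(win) = 0.7 * 0.6 = 0.42.
sum(z == 'Yes') / 10000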

Also again, if two events are not independent — that is, if \(P(A | B)\) is not equal to \(P(A)\) because \(P(A)\) is dependent upon the occurrence of \(B\), then the formula to be used now is, \(P(A \text{ and } B) = P(A | B) * P(B)\) , which is sufficiently confusing that you are probably better off with a simulation.

+

What about if each of the probabilities is dependent on the other outcome? There is no easy formulaic method to deal with such a situation.

+

People commonly make the mistake of treating independent events as non-independent, perhaps from superstitious belief. After a long run of blacks, roulette gamblers say that the wheel is “due” to come up red. And sportswriters make a living out of interpreting various sequences of athletic events that occur by chance, and they talk of teams that are “due” to win because of the “Law of Averages.” For example, if Barry Bonds goes to bat four times without a hit, all of us (including trained statisticians who really know better) feel that he is “due” to get a hit and that the probability of his doing so is very high — higher that is, than his season’s average. The so-called “Law of Averages” implies no such thing, of course.

+

Events are often dependent in subtle ways. A boy may telephone one of several girls chosen at random. But, if he calls the same girl again (or if he does not call her again), the second event is not likely to be independent of the first. And the probability of his calling her is different after he has gone out with her once than before he went out with her.

+

As noted in the section above, events A and B are said to be independent of each other if the conditional probabilities of A and B remain the same . And the conditional probabilities remain the same if sampling is conducted with replacement .

+ +

Let’s now re-consider the multiplication rule with the special but important case of independence.

+
+

9.1.1 Example: Four Events in a Row — The Multiplication Rule

+

Assume that we want to know the probability of four successful archery shots in a row, where the probability of a success on a given shot is .25.

+

Instead of simulating the process with resampling trials we can, if we wish, arrive at the answer with the “multiplication rule.” This rule says that the probability that all of a given number of independent events (the successful shots) will occur (four out of four in this case) is the product of their individual probabilities — in this case, 1/4 x 1/4 x 1/4 x 1/4 = 1/256. If in doubt about whether the multiplication rule holds in any given case, however, you may check by resampling simulation. For the case of four daughters in a row, assuming that the probability of a girl is .5, the probability is 1/2 x 1/2 x 1/2 x 1/2 = 1/16.

+
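If you do want to check the multiplication rule by simulation here, a sketch like the following would do (our code, not the book's); it uses sample's prob= argument to make a hit a 1-in-4 event.

z <- rep('No', 10000)
for (i in 1:10000) {
    # Four shots, each hitting with probability 0.25.
    shots <- sample(c('hit', 'miss'), size=4, replace=TRUE, prob=c(0.25, 0.75))
    if (all(shots == 'hit')) {
        z[i] <- 'Yes'
    }
}
# Should be close to 1/256, or about 0.0039.
sum(z == 'Yes') / 10000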

Better yet, we’d use the more exact probability of getting a girl: \(100/206\), and multiply out the result as \((100/206)^4\). An important point here, however: we have estimated the probability of a particular family having four daughters as 1 in 16 — that is, odds of 15 to 1. But note well: This is a very different idea from stating that the odds are 15 to 1 against some family’s having four daughters in a row. In fact, as many families will have four girls in a row as will have boy-girl-boy-girl in that order or girl-boy-girl-boy or any other series of four children. The chances against any particular series is the same — 1 in 16 — and one-sixteenth of all four-children families will have each of these series, on average. This means that if your next-door neighbor has four daughters, you cannot say how much “out of the ordinary” the event is. It is easy to slip into unsound thinking about this matter.

+ +

Why do we multiply the probabilities of the independent simple events to learn the probability that they will occur jointly (the composite event)? Let us consider this in the context of three basketball shots each with 1/3 probability of hitting.

+
+
+
+
+

+
Figure 9.1: Tree Diagram for 3 Basketball Shots, Probability of a Hit is 1/3
+
+
+
+
+

Figure 9.1 is a tree diagram showing a set of sequential simple events where each event is conditional upon a prior simple event. Hence every probability after the first is a conditional probability.

+

In Figure 9.1, follow the top path first. On approximately one-third of the occasions, the first shot will hit. Among that third of the first shots, roughly a third will again hit on the second shot, that is, 1/3 of 1/3 or 1/3 x 1/3 = 1/9. The top path makes it clear that in 1/3 x 1/3 = 1/9 of the trials, two hits in a row will occur. Then, of the 1/9 of the total trials in which two hits in a row occur, about 1/3 will go on to a third hit, or 1/3 x 1/3 x 1/3 = 1/27. Remember that we are dealing here with independent events; regardless of whether the player made his first two shots, the probability is still 1 in 3 on the third shot.

+
+
+
+

9.2 The addition of probabilities

+

Back to the Washington Commanders again. You ponder more deeply the possibility of a nasty day, and you estimate with more discrimination that the probability of snow is .1 and of rain it is .2 (with .7 of a nice day). Now you wonder: What is the probability of a rainy day or a nice day?

+

To find this probability by simulation:

+
    +
  1. Put 7 blue balls (nice day), 1 black ball (snowy day) and 2 gray balls (rainy day) into a bucket. You want to know the probability of a blue or a gray ball. To find this probability:

  2. +
  3. Draw one ball and record “yes” if its color is blue or gray, “no” otherwise.

  4. +
  5. Repeat the previous step perhaps 200 times.

  6. +
  7. Find the proportion of “yes” trials.

  8. +
+

This procedure certainly will do the job. And simulation may be unavoidable when the situation gets more complex. But in this simple case, you are likely to see that you can compute the probability by adding the .7 probability of a nice day and the .2 probability of a rainy day to get the desired probability. This procedure of formulaic deductive probability theory is called the addition rule .

+
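Here is one way that bucket simulation might look in R (a sketch of ours, not the book's code), run 10000 times rather than 200 so the estimate is steadier.

# 7 blue balls (nice day), 1 black ball (snowy day), 2 gray balls (rainy day).
bucket <- rep(c('blue', 'black', 'gray'), c(7, 1, 2))
z <- rep('No', 10000)
for (i in 1:10000) {
    ball <- sample(bucket, size=1)
    # "Yes" for a nice day or a rainy day.
    if (ball == 'blue' || ball == 'gray') {
        z[i] <- 'Yes'
    }
}
# Should be close to 0.7 + 0.2 = 0.9, matching the addition rule.
sum(z == 'Yes') / 10000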
+
+

9.3 The addition rule

+

The addition rule applies to mutually exclusive outcomes — that is, the case where if one outcome occurs, the other(s) cannot occur; one event implies the absence of the other when events are mutually exclusive. Green and red coats are mutually exclusive if you never wear more than one coat at a time. If there are only two possible mutually-exclusive outcomes, the outcomes are complementary . It may be helpful to note that mutual exclusivity equals total dependence; if one outcome occurs, the other cannot. Hence we write formally that

+

\[ \text{If } P(A \text{ and } B) = 0 \text{ then} \]

+

\[ P(A \text{ or } B) = P(A) + P(B) \]

+

An outcome and its absence are mutually exclusive, and their probabilities add to unity.

+

\[ P(A) + P(\ \hat{} A) = 1 \]

+

Examples include a) rain and no rain, and b) if \(P(\text{sales > \$1 million}) = 0.2\), then \(P(\text{sales <= \$1 million}) = 0.8\).

+

As with the multiplication rule, the addition rule can be a useful shortcut. The answer can always be obtained by simulation, too.

+

We have so far implicitly assumed that a rainy day and a snowy day are mutually exclusive. But that need not be so; both rain and snow can occur on the same day; if we take this possibility into account, we cannot then use the addition rule.

+

Consider the case in which seven days in ten are nice, one day is rainy, one day is snowy, and one day is both rainy and snowy. What is the chance that it will be either nice or snowy? The procedure is just as before, except that some rainy days are included because they are also snowy.

+

When A and B are not mutually exclusive — when it is possible that the day might be both rainy and snowy, or you might wear both red and green coats on the same day, we write (in the latter case) P(red and green coats) > 0, and the appropriate formula is

+

\[ P(\text{red or green}) = P(\text{red}) + P(\text{green}) - P(\text{red and green}) \]

+ +

In this case as in much of probability theory, the simulation for the case in which the events are not mutually exclusive is no more complex than when they are mutually exclusive; indeed, if you simulate you never even need to know the concept of mutual exclusivity or inquire whether that is your situation. In contrast, the appropriate formula for non-exclusivity is more complex, and if one uses formulas one must inquire into the characteristics of the situation and decide which formula to apply depending upon the classification; if you classify wrongly and therefore apply the wrong formula, the result is a wrong answer.

+ +
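To make that concrete, here is a sketch of ours (not the book's code) for a closely related question, the chance of a rainy or a snowy day when one day in ten is both rainy and snowy; the simulation needs no special handling for the overlap, while the formula must subtract it.

# Ten kinds of day: 7 nice, 1 rainy only, 1 snowy only, 1 both rainy and snowy.
days <- rep(c('nice', 'rain only', 'snow only', 'rain and snow'), c(7, 1, 1, 1))
z <- rep('No', 10000)
for (i in 1:10000) {
    day <- sample(days, size=1)
    # "Rainy or snowy" includes the day that is both.
    if (day == 'rain only' || day == 'snow only' || day == 'rain and snow') {
        z[i] <- 'Yes'
    }
}
# Should be close to P(rainy) + P(snowy) - P(both) = 0.2 + 0.2 - 0.1 = 0.3.
sum(z == 'Yes') / 10000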

To repeat, the addition rule only works when the probabilities you are adding are mutually exclusive — that is, when the two cannot occur together.

+

The multiplication and addition rules are as different from each other as mortar and bricks; both, however, are needed to build walls. The multiplication rule pertains to a single outcome composed of two or more elements (e.g. weather, and win-or-lose), whereas the addition rule pertains to two or more possible outcomes for one element. Drawing from a card deck (with replacement) provides an analogy: the addition rule is like one draw with two or more possible cards of interest, whereas the multiplication rule is like two or more cards being drawn with one particular “hand” being of interest.

+
+
+

9.4 Theoretical devices for the study of probability

+

It may help you to understand the simulation approach to estimating composite probabilities demonstrated in this book if you also understand the deductive formulaic approach. So we’ll say a bit about it here.

+

The most fundamental concept in theoretical probability is the list of events that may occur, together with the probability of each one (often arranged so as to be equal probabilities). This is the concept that Galileo employed in his great fundamental work in theoretical probability about four hundred years ago when a gambler asked Galileo about the chances of getting a nine rather than a ten in a game of three dice (though others such as Cardano had tackled the subject earlier). 1

+

Galileo wrote down all the possibilities in a tree form, a refinement for mapping out the sample space.

+

Galileo simply displayed the events themselves — such as “2,” “4,” and “4,” making up a total of 10, a specific event arrived at in a specific way. Several different events can lead to a 10 with three dice. If we now consider each of these events, we arrive at the concept of the ways that a total of 10 can arise. We ask the number of ways that an outcome can and cannot occur. (See the paragraph above). This is equivalent both operationally and linguistically to the paths in (say) the quincunx device or Pascal’s Triangle which we shall discuss shortly.

+

A tree is the most basic display of the paths in a given situation. Each branch of the tree — a unique path from the start on the left-hand side to the endpoint on the right-hand side — contains the sequence of all the elements that make up that event, in the order in which they occur. The right-hand ends of the branches constitute a list of the outcomes. That list includes all possible permutations — that is, it distinguishes among outcomes by the orders in which the particular die outcomes occur.

+
+
+

9.5 The Concept of Sample Space

+

The formulaic approach begins with the idea of sample space , which is the set of all possible outcomes of the “experiment” or other situation that interests us. Here is a formal definition from Goldberg (1986, 46):

+
+

A sample space S associated with a real or conceptual experiment is a set such that (1) each element of S denotes an outcome of the experiment, and (2) any performance of the experiment results in an outcome that corresponds to one and only one element of S.

+
+

Because the sum of the probabilities for all the possible outcomes in a given experimental trial is unity, the sum of the probabilities of all the events in the sample space (S) is 1.

+

Early on, people came up with the idea of estimating probabilities by arraying the possibilities for, and those against, the event occurring. For example, the coin could fall in three ways — head, tail, or on its side. They then speedily added the qualification that the possibilities in the list must have an equal chance, to distinguish the coin falling on its side from the other possibilities (so ignore it). Or, if it is impossible to make the probabilities equal, make special allowance for inequality. Working directly with the sample space is the method of first principles . The idea of a list was refined to the idea of sample space, and “for” and “against” were refined to the “success” and “failure” elements among the total elements.

+

The concept of sample space raises again the issue of how to estimate the simple probabilities. While we usually can estimate the probabilities accurately in gambling games because we ourselves construct the games and therefore control the probabilities that they produce, we have much less knowledge of the structures that underlie the important problems in life — in science, business, the stock market, medicine, sports, and so on. We therefore must wrestle with the issue of what probabilities we should include in our theoretical sample space, or in our experiments. Often we proceed by choosing as an analogy a physical “model” whose properties we know and which we consider to be appropriate — such as a gambling game with coins, dice, cards. This model becomes our idealized setup. But this step makes crystal-clear that judgment is heavily involved in the process, because choosing the analogy requires judgment.

+

A Venn diagram is another device for displaying the elements that make up an event. But unlike a tree diagram, it does not show the sequence of those elements; rather, it shows the extent of overlap among various classes of elements .

+

A Venn diagram expresses by areas (especially rectangular Venn diagrams) the numbers at the end of the branches in a tree.

+

Pascal’s Triangle is still another device. It aggregates the last permutation branches in the tree into combinations — that is, without distinguishing by order. It shows analytically (by tracing them) the various paths that lead to various combinations.

+

The study of the mathematics of probability is the study of calculational shortcuts to do what tree diagrams do. If you don’t care about the shortcuts, then you don’t need the formal mathematics — though it may improve your mathematical insight (or it may not). The resampling method dispenses not only with the shortcuts but also with the entire counting of points in the sample space.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/probability_theory_2_compound.html b/r-book/probability_theory_2_compound.html new file mode 100644 index 00000000..a925b936 --- /dev/null +++ b/r-book/probability_theory_2_compound.html @@ -0,0 +1,1644 @@ + + + + + + + + + +Resampling statistics - 11  Probability Theory, Part 2: Compound Probability + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

11  Probability Theory, Part 2: Compound Probability

+
+ + + +
+ + + + +
+ + +
+ +
+

11.1 Introduction

+

In this chapter we will deal with what are usually called “probability problems” rather than the “statistical inference problems” discussed in later chapters. The difference is that for probability problems we begin with a knowledge of the properties of the universe with which we are working. (See Section 8.9 on the definition of resampling.)

+

We start with some basic problems in probability. To make sure we do know the properties of the universe we are working with, we start with poker, and a pack of cards. Working with some poker problems, we rediscover the fundamental distinction between sampling with and without replacement.

+
+
+

11.2 Introducing a poker problem: one pair (two of a kind)

+

What is the chance that the first five cards chosen from a deck of 52 (bridge/poker) cards will contain two (and only two) cards of the same denomination (two 3’s for example)? (Please forgive the rather sterile unrealistic problems in this and the other chapters on probability. They reflect the literature in the field for 300 years. We’ll get more realistic in the statistics chapters.)

+

We shall estimate the odds the way that gamblers have estimated gambling odds for thousands of years. First, check that the deck is a standard deck and is not missing any cards. (Overlooking such small but crucial matters often leads to errors in science.) Shuffle thoroughly until you are satisfied that the cards are randomly distributed. (It is surprisingly hard to shuffle well.) Then deal five cards, and mark down whether the hand does or does not contain a pair of the same denomination.

+

At this point, we must decide whether three of a kind, four of a kind or two pairs meet our criterion for a pair. Since our criterion is “two and only two,” we decide not to count them.

+

Then replace the five cards in the deck, shuffle, and deal again. Again mark down whether the hand contains one pair of the same denomination. Do this many times. Then count the number of hands with one pair, and figure the proportion (as a percentage) of all hands.

+

Table 11.1 has the results of 25 hands of this procedure.

+
+
Table 11.1: Results of 25 hands for the problem “one pair”
Hand | Card 1 | Card 2 | Card 3 | Card 4 | Card 5 | One pair?
1 | King ♢ | King ♠ | Queen ♠ | 10 ♢ | 6 ♠ | Yes
2 | 8 ♢ | Ace ♢ | 4 ♠ | 10 ♢ | 3 ♣ | No
3 | 4 ♢ | 5 ♣ | Ace ♢ | Queen ♡ | 10 ♠ | No
4 | 3 ♡ | Ace ♡ | 5 ♣ | 3 ♢ | Jack ♢ | Yes
5 | 6 ♠ | King ♣ | 6 ♢ | 3 ♣ | 3 ♡ | No
6 | Queen ♣ | 7 ♢ | Jack ♠ | 5 ♡ | 8 ♡ | No
7 | 9 ♣ | 4 ♣ | 9 ♠ | Jack ♣ | 5 ♠ | Yes
8 | 3 ♠ | 3 ♣ | 3 ♡ | 5 ♠ | 5 ♢ | Yes
9 | Queen ♢ | 4 ♠ | Queen ♣ | 6 ♡ | 4 ♢ | No
10 | Queen ♠ | 3 ♣ | 7 ♠ | 7 ♡ | 8 ♢ | Yes
11 | 8 ♡ | 9 ♠ | 7 ♢ | 8 ♠ | Ace ♡ | Yes
12 | Ace ♠ | 9 ♡ | 4 ♣ | 2 ♠ | Ace ♢ | Yes
13 | 4 ♡ | 3 ♣ | Ace ♢ | 9 ♡ | 5 ♡ | No
14 | 10 ♣ | 7 ♠ | 8 ♣ | King ♣ | 4 ♢ | No
15 | Queen ♣ | 8 ♠ | Queen ♠ | 8 ♣ | 5 ♣ | No
16 | King ♡ | 10 ♣ | Jack ♠ | 10 ♢ | 10 ♡ | No
17 | Queen ♠ | Queen ♡ | Ace ♡ | King ♢ | 7 ♡ | Yes
18 | 5 ♢ | 6 ♡ | Ace ♡ | 4 ♡ | 6 ♢ | Yes
19 | 3 ♠ | 5 ♡ | 2 ♢ | King ♣ | 9 ♡ | No
20 | 8 ♠ | Jack ♢ | 7 ♣ | 10 ♡ | 3 ♡ | No
21 | 5 ♢ | 4 ♠ | Jack ♡ | 2 ♠ | King ♠ | No
22 | 5 ♢ | 4 ♢ | Jack ♣ | King ♢ | 2 ♠ | No
23 | King ♡ | King ♠ | 6 ♡ | 2 ♠ | 5 ♣ | Yes
24 | 8 ♠ | 9 ♠ | 6 ♣ | Ace ♣ | 5 ♢ | No
25 | Ace ♢ | 7 ♠ | 4 ♡ | 9 ♢ | 9 ♠ | Yes
% Yes |  |  |  |  |  | 44%
+
+
+

In this series of 25 experiments, 44 percent of the hands contained one pair, and therefore 0.44 is our estimate (for the time being) of the probability that one pair will turn up in a poker hand. But we must notice that this estimate is based on only 25 hands, and therefore might well be fairly far off the mark (as we shall soon see).

+

This experimental “resampling” estimation does not require a deck of cards. For example, one might create a 52-sided die, one side for each card in the deck, and roll it five times to get a “hand.” But note one important part of the procedure: No single “card” is allowed to come up twice in the same set of five spins, just as no single card can turn up twice or more in the same hand. If the same “card” did turn up twice or more in a dice experiment, one could pretend that the roll had never taken place; this procedure is necessary to make the dice experiment analogous to the actual card-dealing situation under investigation. Otherwise, the results will be slightly in error. This type of sampling is “sampling without replacement,” because each card is not replaced in the deck prior to dealing the next card (that is, prior to the end of the hand).
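For the record, here is a minimal R sketch of that dice procedure: treat the numbers 1 through 52 as the faces of a hypothetical 52-sided die, roll it five times (sampling with replacement), and discard and redo any hand in which the same “card” comes up more than once.

# Roll a hypothetical 52-sided die five times; redo the hand if any "card" repeats.
repeat {
    rolls <- sample(1:52, size=5, replace=TRUE)
    if (length(unique(rolls)) == 5) {
        break  # No repeats, so this hand is as if dealt without replacement.
    }
}
rolls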

+
+
+

11.3 A first approach to the one-pair problem with code

+

We could also approach this problem using random numbers from the computer to simulate the values.

+

Let us first make some numbers from which to sample. We want to simulate a deck of playing cards analogous to the real cards we used previously. We don’t need to simulate all the features of a deck, but only the features that matter for the problem at hand. In our case, the feature that matters is the face value. We require a deck with four “1”s, four “2”s, etc., up to four “13”s, where 1 is an Ace, and 13 is a King. The suits don’t matter for our present purposes.

+

We first make a vector to represent the face values in one suit.

+
+
one_suit <- 1:13
+one_suit
+
+
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13
+
+
+

We have the face values for one suit, but we need the face values for the whole deck of cards — four suits. We do this by making a new vector that consists of four repeats of one_suit:

+
+
# Repeat the one_suit vector four times
+deck <- rep(one_suit, 4)
+deck
+
+
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13  1  2  3  4  5  6  7  8  9 10 11 12
+[26] 13  1  2  3  4  5  6  7  8  9 10 11 12 13  1  2  3  4  5  6  7  8  9 10 11
+[51] 12 13
+
+
+
+
+

11.4 Shuffling the deck with R

+

At this point we have a complete deck in the variable deck. But that “deck” is in the same order as a new deck of cards. If we do not shuffle the deck, the results will be predictable. Therefore, we would like to select five of these 52 “cards” (values) at random. There are two ways of doing this. The first is to use the sample function in the familiar way, to choose 5 values at random from this strictly ordered deck. We want to draw these cards without replacement (of which more later). Without replacement means that once we have drawn a particular value, we cannot draw that value a second time — just as you cannot get the same card twice in a hand when the dealer deals you a hand of five cards.

+
+

As you saw in Section 8.14, the default behavior of sample is to sample without replacement, so simply omit the replace=TRUE argument to sample to get sampling without replacement:

+
+
+
# One hand, sampling from the deck without replacement.
+hand <- sample(deck, size=5)
+hand
+
+
[1]  6 10 12 11 12
+
+
+

The above is one way to get a random hand of five cards from the deck. Another way is to use sample to shuffle the whole deck of 52 “cards” into a random order, just as a dealer would shuffle the deck before dealing. Then we could take — for example — the first five cards from the shuffled deck to give a random hand. See Section 8.14 for more on this use of sample.

+
+
# Shuffle the whole 52 card deck.
+shuffled <- sample(deck)
+# The "cards" are now in random order.
+shuffled
+
+
 [1]  8 13  5  4 12  9  5  7 11  2 13  2  6  8  8  6 10  9 12  9 11  7 13 11 12
+[26]  7 10  4  2  4  7  1  3  5  1  9  2  4  6  1  8 10  3 13  5 11 12  3  1 10
+[51]  6  3
+
+
+

Now we can get our hand by taking the first five cards from the deck:

+
+
# Select the first five "cards" from the shuffled deck.
+hand <- shuffled[1:5]
+hand
+
+
[1]  8 13  5  4 12
+
+
+

You have seen that we can use one of two procedures to get a random sample of five cards from deck, drawn without replacement:

+
    +
  1. Using sample with size=5 to take the random sample directly from deck, or
  2. Shuffling the entire deck and then taking the first five “cards” from the result of the shuffle.
+

Either is a valid way of getting five cards at random from the deck. It’s up to us which to choose — we slightly prefer to shuffle and take the first five, because it is more like the physical procedure of shuffling the deck and dealing, but which you prefer is up to you.

+
+

11.4.1 A first-pass computer solution to the one-pair problem

+

Choosing the shuffle-and-deal approach, the chunk to generate one hand is:

+
+
shuffled <- sample(deck)
+hand <- shuffled[1:5]
+hand
+
+
[1] 6 9 6 2 1
+
+
+

Without doing anything further, we could run this chunk many times, and each time, we could note down whether the particular hand had exactly one pair or not.

+

Table 11.2 has the result of running that procedure 25 times:

+
+
Table 11.2: Results of 25 hands using random numbers
Hand | Card 1 | Card 2 | Card 3 | Card 4 | Card 5 | One pair?
1 | 9 | 4 | 11 | 9 | 13 | Yes
2 | 8 | 7 | 6 | 11 | 1 | No
3 | 1 | 1 | 10 | 9 | 9 | No
4 | 4 | 2 | 2 | 1 | 1 | No
5 | 8 | 11 | 13 | 10 | 3 | No
6 | 13 | 7 | 11 | 10 | 6 | No
7 | 8 | 1 | 10 | 11 | 12 | No
8 | 12 | 6 | 1 | 1 | 9 | Yes
9 | 4 | 12 | 13 | 12 | 10 | Yes
10 | 9 | 12 | 12 | 8 | 7 | Yes
11 | 5 | 2 | 4 | 11 | 13 | No
12 | 3 | 4 | 11 | 8 | 5 | No
13 | 2 | 4 | 2 | 13 | 1 | Yes
14 | 1 | 1 | 3 | 5 | 12 | Yes
15 | 4 | 6 | 11 | 13 | 11 | Yes
16 | 10 | 4 | 8 | 9 | 12 | No
17 | 7 | 11 | 4 | 3 | 4 | Yes
18 | 12 | 6 | 11 | 12 | 13 | Yes
19 | 5 | 3 | 8 | 6 | 9 | No
20 | 11 | 6 | 8 | 9 | 6 | Yes
21 | 13 | 11 | 5 | 8 | 2 | No
22 | 11 | 8 | 10 | 1 | 13 | No
23 | 10 | 5 | 8 | 1 | 3 | No
24 | 1 | 8 | 13 | 9 | 9 | Yes
25 | 5 | 13 | 2 | 4 | 11 | No
% Yes |  |  |  |  |  | 44%
+
+
+
+
+
+

11.5 Finding exactly one pair using code

+

Thus far we have had to look ourselves at the set of cards, or at the numbers, and decide if there was exactly one pair. We would like the computer to do this for us. Let us stay with the numbers we generated above by dealing the random hand from the deck of numbers. To find pairs, we will go through the following procedure:

+
    +
  • For each possible value (1 through 13), count the number of times each value has occurred in hand. Call the result of this calculation — repeat_nos.
  • Select the repeat_nos values equal to 2;
  • Count the number of “2” values in repeat_nos. This is the number of pairs, and it excludes three of a kind or four of a kind.
  • If the number of pairs is exactly one, label the hand as “Yes”, otherwise label it as “No”.
+
+
+

11.6 Finding number of repeats using tabulate

+

Consider the following 5-card “hand” of values:

+
+
hand <- c(5, 7, 5, 4, 7)
+
+

This hand represents a pair of 5s and a pair of 7s.

+

We want to detect the number of repeats for each possible card value, 1 through 13. Let’s say we are looking for 5s. We can detect which of the values are equal to 5 by making a Boolean vector, where there is TRUE for a value equal to 5, and FALSE otherwise:

+
+
is_5 <- (hand == 5)
+
+

We can then count the number of 5s with:

+
+
sum(is_5)
+
+
[1] 2
+
+
+

In one chunk:

+
+
number_of_5s <- sum(hand == 5)
+number_of_5s
+
+
[1] 2
+
+
+

We could do this laborious task for every possible card value (1 through 13):

+
+
number_of_1s <- sum(hand == 1)  # Number of aces in hand
+number_of_2s <- sum(hand == 2)  # Number of 2s in hand
+number_of_3s <- sum(hand == 3)
+number_of_4s <- sum(hand == 4)
+number_of_5s <- sum(hand == 5)
+number_of_6s <- sum(hand == 6)
+number_of_7s <- sum(hand == 7)
+number_of_8s <- sum(hand == 8)
+number_of_9s <- sum(hand == 9)
+number_of_10s <- sum(hand == 10)
+number_of_11s <- sum(hand == 11)
+number_of_12s <- sum(hand == 12)
+number_of_13s <- sum(hand == 13)  # Number of Kings in hand.
+
+

Above, we store the result for each card in a separate variable; this is inconvenient, because we would have to go through each variable checking for a pair (a value of 2). It would be more convenient to store these results in a vector. One way to do that would be to store the result for card value 1 at position (index) 1, the result for value 2 at position 2, and so on, like this:

+
+
# Make vector length 13, with one element for each card value.
+repeat_nos <- numeric(13)
+repeat_nos[1] <- sum(hand == 1)  # Number of aces in hand
+repeat_nos[2] <- sum(hand == 2)  # Number of 2s in hand
+repeat_nos[3] <- sum(hand == 3)
+repeat_nos[4] <- sum(hand == 4)
+repeat_nos[5] <- sum(hand == 5)
+repeat_nos[6] <- sum(hand == 6)
+repeat_nos[7] <- sum(hand == 7)
+repeat_nos[8] <- sum(hand == 8)
+repeat_nos[9] <- sum(hand == 9)
+repeat_nos[10] <- sum(hand == 10)
+repeat_nos[11] <- sum(hand == 11)
+repeat_nos[12] <- sum(hand == 12)
+repeat_nos[13] <- sum(hand == 13)  # Number of Kings in hand.
+# Show the result
+repeat_nos
+
+
 [1] 0 0 0 1 2 0 2 0 0 0 0 0 0
+
+
+

You may recognize all this repetitive typing as a good sign we could use a for loop to do the work — er — for us.

+
+
repeat_nos <- numeric(13)
+for (i in 1:13) {  # Set i to be first 1, then 2, ... through 13.
+    repeat_nos[i] <- sum(hand == i)
+}
+# Show the result
+repeat_nos
+
+
 [1] 0 0 0 1 2 0 2 0 0 0 0 0 0
+
+
+

In our particular hand, after we have done the count for 7s, we will always get 0 for card values 8, 9 … 13, because 7 was the highest card (maximum value) for our particular hand. As you might expect, there is an R function max that will quickly tell us the maximum value in the hand:

+
+
max(hand)
+
+
[1] 7
+
+
+

We can use max to make our loop more efficient, by stopping our checks when we’ve reached the maximum value, like this:

+
+
max_value <- max(hand)
+# Only make a vector large enough to house counts for the max value.
+repeat_nos <- numeric(max_value)
+for (i in 1:max_value) {  # Set i to be first 1, then 2, ... through max_value.
+    repeat_nos[i] <- sum(hand == i)
+}
+# Show the result
+repeat_nos
+
+
[1] 0 0 0 1 2 0 2
+
+
+

In fact, this is exactly what the function tabulate does, so we can use that function instead of our loop, to do the same job:

+
+
repeat_nos <- tabulate(hand)
+repeat_nos
+
+
[1] 0 0 0 1 2 0 2
+
+
+
+
+

11.7 Looking for hands with exactly one pair

+

Now we have repeat_nos, we can proceed with the rest of the steps above.

+

We can count the number of cards that have exactly two repeats:

+
+
(repeat_nos == 2)
+
+
[1] FALSE FALSE FALSE FALSE  TRUE FALSE  TRUE
+
+
+
+
n_pairs <- sum(repeat_nos == 2)
+# Show the result
+n_pairs
+
+
[1] 2
+
+
+

The hand is of interest to us only if the number of pairs is exactly 1:

+
+
# Check whether there is exactly one pair in this hand.
+n_pairs == 1
+
+
[1] FALSE
+
+
+

We now have the machinery to use R for all the logic in simulating multiple hands, and checking for exactly one pair.

+

Let’s do that, and use R to do the full job of dealing many hands and finding pairs in each one. We repeat the procedure above using a for loop. The for loop commands the program to do ten thousand repeats of the statements in the “loop” between the start { and end } curly braces.

+

In the body of the loop (the part that gets repeated for each trial) we:

+
    +
  • Shuffle the deck.
  • +
  • Deal ourselves a new hand.
  • +
  • Calculate the repeat_nos for this new hand.
  • +
  • Calculate the number of pairs from repeat_nos; store this as n_pairs.
  • +
  • Put n_pairs for this repetition into the correct place in the scoring vector z.
  • +
+

With that we end a single trial, and go back to the beginning, until we have done this 10000 times.

+

When those 10000 repetitions are over, the computer moves on to count (sum) the number of “1’s” in the score-keeping vector z, each “1” indicating a hand with exactly one pair. We store this count at location k. We divide k by 10000 to get the proportion of hands that had one pair, and we message the resulting proportion (kk) to the screen.

+
+

Start of one_pair notebook

+ + +
+
# Create a bucket (vector) called deck with four "1's," four "2's," four "3's,"
+# etc., to represent a deck of cards. Start with the face values for one suit.
+one_suit <- 1:13
+one_suit
+
+
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13
+
+
+
+
# Repeat values for one suit four times to make a 52 card deck of values.
+deck <- rep(one_suit, 4)
+deck
+
+
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13  1  2  3  4  5  6  7  8  9 10 11 12
+[26] 13  1  2  3  4  5  6  7  8  9 10 11 12 13  1  2  3  4  5  6  7  8  9 10 11
+[51] 12 13
+
+
+
+
# Vector to store result of each trial.
+z <- numeric(10000)
+
+# Repeat the following steps 10000 times
+for (i in 1:10000) {
+    # Shuffle the deck
+    shuffled <- sample(deck)
+
+    # Take the first five cards to make a hand.
+    hand = shuffled[1:5]
+
+    # How many pairs?
+    # Counts for each card rank.
+    repeat_nos <- tabulate(hand)
+    n_pairs <- sum(repeat_nos == 2)
+
+    # Keep score of # of pairs
+    z[i] <- n_pairs
+
+    # End loop, go back and repeat
+}
+
+# How often was there 1 pair?
+k <- sum(z == 1)
+
+# Convert to proportion.
+kk = k / 10000
+
+# Show the result.
+message(kk)
+
+
0.4285
+
+
+

End of one_pair notebook

+
+

In one run of the program, the result in kk was 0.428, so our estimate would be that the probability of a single pair is 0.428.

+

How accurate are these resampling estimates? The accuracy depends on the number of hands we deal — the more hands, the greater the accuracy. If we were to examine millions of hands, 42 percent would contain a pair each; that is, the chance of getting a pair in the long run is 42 percent. It turns out the estimate of 44 percent based on 25 hands in Table 11.1 is fairly close to the long-run estimate, though whether or not it is close enough depends on one’s needs of course. If you need great accuracy, deal many more hands.

+

A note on deck, hand, repeat_nos and the other variables in the program above: these “variables” are called “vectors” in R. A vector is an array (sequence) of elements that gets filled with numbers as R conducts its operations.

+

To help keep things straight (though the program does not require it), we often use z to name the vector that collects all the trial results, and k to denote our overall summary results. Or you could call it something like scoreboard — it’s up to you.

+

How many trials (hands) should be made for the estimate? There is no easy answer.1 One useful device is to run several (perhaps ten) equal sized sets of trials, and then examine whether the proportion of pairs found in the entire group of trials is very different from the proportions found in the various subgroup sets. If the proportions of pairs in the various subgroups differ greatly from one another or from the overall proportion, then keep running additional larger subgroups of trials until the variation from one subgroup to another is sufficiently small for your purposes. While such a procedure would be impractical using a deck of cards or any other physical means, it requires little effort with the computer and R.
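Here is one minimal sketch of that check (not the only way to do it): deal ten subgroups of 1,000 hands each and compare the proportion of one-pair hands across the subgroups.

deck <- rep(1:13, 4)
prop_in_subgroup <- numeric(10)
for (subgroup in 1:10) {
    z <- numeric(1000)
    for (i in 1:1000) {
        hand <- sample(deck)[1:5]         # Shuffle and take the first five cards.
        z[i] <- sum(tabulate(hand) == 2)  # Number of pairs in this hand.
    }
    prop_in_subgroup[subgroup] <- sum(z == 1) / 1000
}
# If these ten proportions vary more than you can tolerate, run more trials.
prop_in_subgroup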

+
+
+

11.8 Two more introductory poker problems

+

Which is more likely, a poker hand with two pairs, or a hand with three of a kind? This is a comparison problem, rather than a problem in absolute estimation as was the previous example.

+

In a series of 100 “hands” that were “dealt” using random numbers, four hands contained two pairs, and two hands contained three of a kind. Is it safe to say, on the basis of these 100 hands, that hands with two pairs are more frequent than hands with three of a kind? To check, we deal another 300 hands. Among them we see fifteen hands with two pairs (3.75 percent) and eight hands with three of a kind (2 percent), for a total of nineteen to ten. Although the difference is not enormous, it is reasonably clear-cut. Another 400 hands might be advisable, but we shall not bother.

+

Earlier, 44 percent of the hands dealt contained one pair, which makes it quite plain that one pair is more frequent than either two pairs or three-of-a-kind. Obviously, we need more hands to compare the odds in favor of two pairs with the odds in favor of three-of-a-kind than to compare those for one pair with those for either two pairs or three-of-a-kind. Why? Because the difference in odds between one pair, and either two pairs or three-of-a-kind, is much greater than the difference in odds between two pairs and three-of-a-kind. This observation leads to a general rule: The closer the odds between two events, the more trials are needed to determine which has the higher odds.

+

Again it is interesting to compare the odds with the formulaic mathematical computations, which are 1 in 21 (4.75 percent) for a hand containing two pairs and 1 in 47 (2.1 percent) for a hand containing three-of-a-kind — not too far from the estimates of .0375 and .02 derived from simulation.

+

To handle the problem with the aid of the computer, we simply need to estimate the proportion of hands having triplicates and the proportion of hands with two pairs, and compare those estimates.

+

To estimate the hands with three-of-a-kind, we can use a notebook just like “One Pair” earlier, except using repeat_nos == 3 to search for triplicates instead of duplicates. The program, then, is:

+
+

Start of three_of_a_kind notebook

+ + +
+
one_suit <- 1:13
+deck <- rep(one_suit, 4)
+
+
+
triples_per_trial <- numeric(10000)
+
+# Repeat the following steps 10000 times
+for (i in 1:10000) {
+    # Shuffle the deck
+    shuffled <- sample(deck)
+
+    # Take the first five cards.
+    hand <- shuffled[1:5]
+
+    # How many triples?
+    repeat_nos <- tabulate(hand)
+    n_triples <- sum(repeat_nos == 3)
+
+    # Keep score of # of triples
+    triples_per_trial[i] <- n_triples
+
+    # End loop, go back and repeat
+}
+
+# How often was there exactly one three-of-a-kind?
+n_triples <- sum(triples_per_trial == 1)
+
+# Convert to proportion
+message(n_triples / 10000)
+
+
0.0251
+
+
+

End of three_of_a_kind notebook

+
+

To estimate the probability of getting a two-pair hand, we revert to the original program (counting pairs), except that we examine the results in the score-keeping vector for hands in which we had two pairs, instead of one.

+
+

Start of two_pairs notebook

+ + +
+
deck <- rep(1:13, 4)
+
+
+
pairs_per_trial <- numeric(10000)
+
+# Repeat the following steps 10000 times
+for (i in 1:10000) {
+    # Shuffle the deck
+    shuffled <- sample(deck)
+
+    # Take the first five cards.
+    hand <- shuffled[1:5]
+
+    # How many pairs?
+    # Counts for each card rank.
+    repeat_nos <- tabulate(hand)
+    n_pairs <- sum(repeat_nos == 2)
+
+    # Keep score of # of pairs
+    pairs_per_trial[i] <- n_pairs
+
+    # End loop, go back and repeat
+}
+
+# How often were there 2 pairs?
+n_two_pairs <- sum(pairs_per_trial == 2)
+
+# Convert to proportion
+print(n_two_pairs / 10000)
+
+
[1] 0.0465
+
+
+

End of two_pairs notebook

+
+

For efficiency (though efficiency really is not important here because the computer performs its operations so cheaply) we could develop both estimates in a single program by simply generating 10000 hands, and counting the number with three-of-a-kind and the number with two pairs.
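A sketch of that combined program might look like the following, with one loop that scores each hand twice — once for exactly two pairs and once for a three-of-a-kind.

deck <- rep(1:13, 4)
pairs_per_trial <- numeric(10000)
triples_per_trial <- numeric(10000)

for (i in 1:10000) {
    hand <- sample(deck)[1:5]
    repeat_nos <- tabulate(hand)
    pairs_per_trial[i] <- sum(repeat_nos == 2)    # Number of pairs in this hand.
    triples_per_trial[i] <- sum(repeat_nos == 3)  # Number of triples in this hand.
}

# Proportion of hands with exactly two pairs, and with a three-of-a-kind.
message(sum(pairs_per_trial == 2) / 10000)
message(sum(triples_per_trial == 1) / 10000)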

+

Before we leave the poker problems, we note a difficulty with Monte Carlo simulation. The probability of a royal flush is so low (about one in half a million) that it would take much computer time to compute. On the other hand, considerable inaccuracy is of little matter. Should one care whether the probability of a royal flush is 1/100,000 or 1/500,000?

+
+
+

11.9 The concepts of replacement and non-replacement

+

In the poker example above, we did not replace the first card we drew. If we were to replace the card, it would leave the probability the same before the second pick as before the first pick. That is, the conditional probability remains the same. If we replace, conditions do not change. But if we do not replace the item drawn, the probability changes from one moment to the next. (Perhaps refresh your mind with the examples in the discussion of conditional probability including Section 9.1.1)

+

If we sample with replacement, the sample drawings remain independent of each other — a topic addressed in Section 9.1.
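In R the distinction shows up simply as the replace argument to sample; a quick sketch:

deck <- rep(1:13, 4)
# With replacement: each draw is independent, so the same value can repeat freely.
sample(deck, size=5, replace=TRUE)
# Without replacement: like dealing a hand, so a value can appear at most four times.
sample(deck, size=5)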

+

In many cases, a key decision in modeling the situation in which we are interested is whether to sample with or without replacement. The choice must depend on the characteristics of the situation.

+

There is a close connection between the lack of finiteness of the concept of universe in a given situation, and sampling with replacement. That is, when the universe (population) we have in mind is not small, or has no conceptual bounds at all, then the probability of each successive observation remains the same, and this is modeled by sampling with replacement. (“Not finite” is a less expansive term than “infinite,” though one might regard them as synonymous.)

+

Chapter 12 discusses problems whose appropriate concept of a universe is not finite, whereas Chapter 13 discusses problems whose appropriate concept of a universe is finite. This general procedure will be discussed several times, with examples included.

+ + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/probability_theory_3.html b/r-book/probability_theory_3.html new file mode 100644 index 00000000..8a505865 --- /dev/null +++ b/r-book/probability_theory_3.html @@ -0,0 +1,1728 @@ + + + + + + + + + +Resampling statistics - 12  Probability Theory, Part 3 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

12  Probability Theory, Part 3

+
+ + + +
+ + + + +
+ + +
+ +

This chapter discusses problems whose appropriate concept of a universe is not finite, whereas Chapter 13 discusses problems whose appropriate concept of a universe is finite.

+

How can a universe be infinite yet known? Consider, for example, the possible flips with a given coin; the number is not limited in any meaningful sense, yet we understand the properties of the coin and the probabilities of a head and a tail.

+
+

12.1 Example: The Birthday Problem

+

This example illustrates the probability of duplication in a multi-outcome sample from an infinite universe.

+

As an indication of the power and simplicity of resampling methods, consider this famous examination question used in probability courses: What is the probability that two or more people among a roomful of (say) twenty-five people will have the same birthday? To obtain an answer we need simply examine the first twenty-five numbers from the random-number table that fall between “001” and “365” (the number of days in the year), record whether or not there is a duplication among the twenty-five, and repeat the process often enough to obtain a reasonably stable probability estimate.

+

Pose the question to a mathematical friend of yours, then watch her or him sweat for a while, and afterwards compare your answer to hers/his. I think you will find the correct answer very surprising. It is not unheard of for people who know how this problem works to take advantage of their knowledge by making and winning big bets on it. (See how a bit of knowledge of probability can immediately be profitable to you by avoiding such unfortunate occurrences?)

+

More specifically, these steps answer the question for the case of twenty-five people in the room:

+
    +
  • Step 1. Let three-digit random numbers 1-365 stand for the 365 days in the year. (Ignore leap year for simplicity.)
  • +
  • Step 2. Examine for duplication among the first twenty-five random numbers chosen “001-365.” (Triplicates or higher-order repeats are counted as duplicates here.) If there is one or more duplicate, record “yes.” Otherwise record “no.”
  • +
  • Step 3. Repeat perhaps a thousand times, and calculate the proportion of a duplicate birthday among twenty-five people.
  • +
+

You would probably use the computer to generate the initial random numbers.

+

Now try the program written as follows.

+
+

Start of birthday_problem notebook

+ + +
+
n_with_same_birthday <- numeric(10000)
+
+# All the days of the year from "1" through "365"
+all_days <- 1:365
+
+# Do 10000 trials (experiments)
+for (i in 1:10000) {
+    # Generate 25 numbers randomly between "1" and "365," put them in a.
+    a <- sample(all_days, size=25, replace=TRUE)
+
+    # Looking in a, count the number of multiples and put the result in
+    # "counts".
+    counts <- tabulate(a)
+
+    # We request multiples > 1 because we are interested in any multiple,
+    # whether it is a duplicate, triplicate, etc. Had we been interested only
+    # in duplicates, we would have put in sum(counts == 2).
+    n_duplicates <- sum(counts > 1)
+
+    # Score the result of each trial to our store
+    n_with_same_birthday[i] <- n_duplicates
+
+    # End the loop for the trial, go back and repeat the trial until all 10000
+    # are complete, then proceed.
+}
+
+# Determine how many trials had at least one multiple.
+k <- sum(n_with_same_birthday > 0)
+
+# Convert to a proportion.
+kk <- k / 10000
+
+# Print the result.
+message(kk)
+
+
0.7823
+
+
+

End of birthday_problem notebook

+
+

We have dealt with this example in a rather intuitive and unsystematic fashion. From here on, we will work in a more systematic, step-by-step manner. And from here on the problems form an orderly sequence of the classical types of problems in probability theory (Chapter 12 and Chapter 13), and inferential statistics (Chapter 20 to Chapter 28).

+
+
+

12.2 Example: Three Daughters Among Four Children

+

This problem illustrates a problem with two outcomes (binomial 1) and sampling with replacement among equally likely outcomes.

+

What is the probability that exactly three of the four children in a four-child family will be daughters?2

+

The first step is to state that the approximate probability that a single birth will produce a daughter is 50-50 (1 in 2). This estimate is not strictly correct, because there are roughly 106 male children born to each 100 female children. But the approximation is close enough for most purposes, and the 50-50 split simplifies the job considerably. (Such “false” approximations are part of the everyday work of the scientist. The appropriate question is not whether or not a statement is “only” an approximation, but whether or not it is a good enough approximation for your purposes.)

+

The probability that a fair coin will turn up heads is .50 or 50-50, close to the probability of having a daughter. Therefore, flip a coin in groups of four flips, and count how often three of the flips produce heads . (You must decide in advance whether three heads means three girls or three boys.) It is as simple as that.

+

In resampling estimation it is of the highest importance to work in a careful, step-by-step fashion — to write down the steps in the estimation, and then to do the experiments just as described in the steps. Here are a set of steps that will lead to a correct answer about the probability of getting three daughters among four children:

+
    +
  • Step 1. Using coins, let “heads” equal “girl” and “tails” equal “boy.”
  • +
  • Step 2. Throw four coins.
  • +
  • Step 3. Examine whether the four coins fall with exactly three heads up. If so, write “yes” on a record sheet; otherwise write “no.”
  • +
  • Step 4. Repeat step 2 perhaps two hundred times.
  • +
  • Step 5. Count the proportion “yes.” This proportion is an estimate of the probability of obtaining exactly 3 daughters in 4 children.
  • +
+

The first few experimental trials might appear in the record sheet as follows (Table 12.1):

+
Table 12.1: Example trials from the three-girls problem
Number of Heads | Yes or No
1 | No
0 | No
3 | Yes
2 | No
1 | No
2 | No
+
+

The probability of getting three daughters in four births could also be found with a deck of cards, a random number table, a die, or with R. For example, half the cards in a deck are black, so the probability of getting a black card (“daughter”) from a full deck is 1 in 2. Therefore, deal a card, record “daughter” or “son,” replace the card, shuffle, deal again, and so forth for 200 sets of four cards. Then count the proportion of groups of four cards in which you got exactly three daughters.

+
+

Start of three_girls notebook

+ + +
+
girl_counts <- numeric(10000)
+
+# Do 10000 trials
+for (i in 1:10000) {
+
+    # Select 'girl' or 'boy' at random, four times.
+    children <- sample(c('girl', 'boy'), size=4, replace=TRUE)
+
+    # Count the number of girls and put the result in b.
+    b <- sum(children == 'girl')
+
+    # Keep track of each trial result in z.
+    girl_counts[i] <- b
+
+    # End this trial, repeat the experiment until 10000 trials are complete,
+    # then proceed.
+}
+
+# Count the number of experiments where we got exactly 3 girls, and put this
+# result in k.
+n_three_girls <- sum(girl_counts == 3)
+
+# Convert to a proportion.
+three_girls_prop <- n_three_girls / 10000
+
+# Print the results.
+message(three_girls_prop)
+
+
0.2392
+
+
+

End of three_girls notebook

+
+

Notice that the procedure outlined in the steps above would have been different (though almost identical) if we asked about the probability of three or more daughters rather than exactly three daughters among four children. For three or more daughters we would have scored “yes” on our score-keeping pad for either three or four heads, rather than for just three heads. Likewise, in the computer solution we would have used the statement n_three_girls <- sum(girl_counts >= 3).

+

It is important that, in this case, in contrast to what we did in the example from Section 11.2 (the introductory poker example), the card is replaced each time so that each card is dealt from a full deck. This method is known as sampling with replacement. One samples with replacement whenever the successive events are independent; in this case we assume that the chance of having a daughter remains the same (1 girl in 2 births) no matter what sex the previous births were 3. But, if the first card dealt were black and not replaced, the chance of the second card being black would no longer be 26 in 52 (.50), but rather 25 in 51 (.49); and if the first three cards were black and not replaced, the chance of the fourth card being black would sink to 23 in 49 (.47).

+

To push the illustration further, consider what would happen if we used a deck of only six cards, half (3 of 6) black and half (3 of 6) red, instead of a deck of 52 cards. If the chosen card is replaced each time, the 6-card deck produces the same results as a 52-card deck; in fact, a two-card deck would do as well. But, if the sampling is done without replacement, it is impossible to obtain 4 “daughters” with the 6-card deck because there are only 3 “daughters” in the deck. To repeat, then, whenever you want to estimate the probability of some series of events where each event is independent of the other, you must sample with replacement .

+
+
+

12.3 Variations of the daughters problem

+

In later chapters we will frequently refer to a problem which is identical in basic structure to the problem of three girls in four children — the probability of getting 9 females in ten calf births if the probability of a female birth is (say) .5 — when we set this problem in the context of the possibility that a genetic engineering practice is effective in increasing the proportion of females (desirable for the production of milk).

+

So far we have assumed the simple case where we have a vector of values that we are sampling from, and we are selecting each of these values into the sample with equal probability.

+

For example, we started with the simple assumption that a child is just as likely to be born a boy as a girl. Our input is:

+
+
input_values = c('girl', 'boy')
+
+

By default, sample will draw the input values with equal probability. Here, we draw a sample (children) of four values from the input, where each value in children has an equal chance of being “girl” or “boy”.

+
+
children <- sample(input_values, size=4, replace=TRUE)
+children
+
+
[1] "girl" "girl" "boy"  "boy" 
+
+
+

That is, sample gives each element in input_values an equal chance of being selected as the next element in children.

+

That is fine if we have some simple probability to simulate, like 0.5. But now let us imagine we want to get more precise. We happen to know that any given birth is just slightly more likely to be a boy than a girl.4 For example, the proportion of boys born in the UK is 0.513. Hence the proportion of girls is 1 - 0.513 = 0.487.

+
+
+

12.4 sample and the prob argument

+

We could replicate this probability of 0.487 for ‘girl’ in the output sample by making an input vector of 1000 strings that contains 487 ‘girls’ and 513 ‘boys’:

+
+
big_girls <- rep(c('girl', 'boy'), c(487, 513))
+
+

Now if we sample using the default in sample, each element in the input big_girls vector has the same chance of appearing in the sample. But because there are 487 ‘girls’ and 513 ‘boys’, each with an equal chance of appearing, we will get a ‘girl’ in roughly 487 out of every 1000 elements we draw, and a ‘boy’ roughly 513 times out of 1000. That is, the chance of any one drawn element being a ‘girl’ is, as we want, 0.487.

+
+
# Now each element has probability 0.487 of 'girl', 0.513 of 'boy'.
+realistic_children <- sample(big_girls, size=4, replace=TRUE)
+realistic_children
+
+
[1] "girl" "boy"  "girl" "boy" 
+
+
+

But, there is an easier way than compiling a big 1000 element array, and that is to use the prob= argument to sample. This allows us to specify the probability with which we will draw each of the input elements into the output sample. For example, to draw ‘girl’ with probability 0.487 and ‘boy’ with probability 0.513, we would do:

+
+
# Draw 'girl' with probability (p) 0.487 and 'boy' 0.513.
+children_again <- sample(c('girl', 'boy'), size=4, prob=c(0.487, 0.513),
+                         replace=TRUE)
+children_again
+
+
[1] "boy"  "girl" "girl" "boy" 
+
+
+

The prob argument allows us to specify the probability of each element in the input vector — so if we had three elements in the input array, we would need three probabilities in prob. For example, let’s say we were looking at some poorly-entered hospital records, we might have ‘girl’ or ‘boy’ recorded as the child’s gender, but the record might be missing — ‘not-recorded’ — with a 19% chance:

+
+
# Draw 'girl' with probability (p) 0.4, 'boy' with p=0.41, 'not-recorded' with
+# p=0.19.
+sample(c('girl', 'boy', 'not-recorded'), size=30, prob=c(0.4, 0.41, 0.19),
+       replace=TRUE)
+
+
 [1] "boy"          "boy"          "boy"          "boy"          "girl"        
+ [6] "girl"         "boy"          "boy"          "boy"          "girl"        
+[11] "girl"         "boy"          "boy"          "girl"         "girl"        
+[16] "boy"          "girl"         "girl"         "girl"         "girl"        
+[21] "boy"          "girl"         "not-recorded" "not-recorded" "not-recorded"
+[26] "not-recorded" "boy"          "not-recorded" "girl"         "girl"        
+
+
+
+
+
+ +
+
+How does the prob argument to sample work? +
+
+
+

You might wonder how R does this trick of choosing the elements with different probabilities.

+

One way of doing this is to use uniform random numbers from 0 through 1. These are floating point numbers that can take any value, at random, from 0 through 1.

+
+
# Run this chunk a few times to see random numbers anywhere from 0 through 1.
+# `runif` means "Random UNIForm".
+runif(1)
+
+
[1] 0.684
+
+
+

Because this random uniform number has an equal chance of being anywhere in the range 0 through 1, there is a 50% chance that any given number will be less than 0.5 and a 50% chance it is greater than 0.5. (Of course it could be exactly equal to 0.5, but this is vanishingly unlikely, so we will ignore that for now).

+

So, if we thought girls were exactly as likely as boys, we could select from ‘girl’ and ‘boy’ using this simple logic:

+
+
if (runif(1) < 0.5) {
+    result = 'girl'
+} else {
+    result = 'boy'
+}
+
+

But, by the same logic, there is a 0.487 chance that the random uniform number will be less than 0.487 and a 0.513 chance it will be greater. So, if we wanted to give ourselves a 0.487 chance of ‘girl’, we could do:

+
+
if (runif(1) < 0.487) {
+    result = 'girl'
+} else {
+    result = 'boy'
+}
+
+

We can extend the same kind of logic to three options. For example, there is a 0.4 chance the random uniform number will be less than 0.4, a 0.41 chance it will be somewhere between 0.4 and 0.81, and a 0.19 chance it will be greater than 0.81.
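Continuing the sketch above for the three-option case (0.4 ‘girl’, 0.41 ‘boy’, 0.19 ‘not-recorded’):

r <- runif(1)
if (r < 0.4) {
    result <- 'girl'
} else if (r < 0.81) {   # Between 0.4 and 0.81, a range of width 0.41.
    result <- 'boy'
} else {                 # The remaining 0.19 of the range.
    result <- 'not-recorded'
}
result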

+
+
+
+
+

12.5 The daughters problem with more accurate probabilities

+

We can use the probability argument to sample to do a more realistic simulation of the chance of a family with exactly three girls. In this case it is easy to make the chance for the R simulation, but much more difficult using physical devices like coins to simulate the randomness.

+

Remember, the original code for the 50-50 case has the following:

+
+
# Select 'girl' or 'boy' at random, four times.
+children <- sample(c('girl', 'boy'), size=4, replace=TRUE)
+
+# Count the number of girls and put the result in b.
+b <- sum(children == 'girl')
+
+

The only change we need to the above, for the 0.487 - 0.513 case, is the call to sample you see below:

+
+
# Give 'girl' 48.7% of the time, 'boy' 51.3% of the time.
+children <- sample(c('girl', 'boy'), size=4, prob=c(0.487, 0.513),
+                   replace=TRUE)
+
+# Count the number of girls and put the result in b.
+b <- sum(children == 'girl')
+
+

The rest of the program remains unchanged.

+
+
+

12.6 A note on clarifying and labeling problems

+

In conventional analytic texts and courses on inferential statistics, students are taught to distinguish between various classes of problems in order to decide which formula to apply. I doubt the wisdom of categorizing and labeling problems in that fashion, and the practice is unnecessary here. I consider it better that the student think through every new problem in the most fundamental terms. The exercise of this basic thinking avoids the mistakes that come from too-hasty and superficial pigeon-holing of problems into categories. Nevertheless, in order to help readers connect up the resampling material with the conventional curriculum of analytic methods, the examples presented here are given their conventional labels. And the examples given here cover the range of problems encountered in courses in probability and inferential statistics.

+

To repeat, one does not need to classify a problem when one proceeds with the Monte Carlo resampling method; you simply model the features of the situation you wish to analyze. In contrast, with conventional methods you must classify the situation and then apply procedures according to rules that depend upon the classification; often the decision about which rules to follow must be messy because classification is difficult in many cases, which contributes to the difficulty of choosing correct conventional formulaic methods.

+
+
+

12.7 Binomial trials

+

The problem of the three daughters in four births is known in the conventional literature as a “binomial sampling experiment with equally-likely outcomes.” “Binomial” means that the individual simple event (a birth or a coin flip) can have only two outcomes (boy or girl, heads or tails), “binomial” meaning “two names” in Latin.5

+

A fundamental property of binomial processes is that the individual trials are independent , a concept discussed earlier. A binomial sampling process is a series of binomial (one-of-two-outcome) events about which one may ask many sorts of questions — the probability of exactly X heads (“successes”) in N trials, or the probability of X or more “successes” in N trials, and so on.

+

“Equally likely outcomes” means we assume that the probability of a girl or boy in any one birth is the same (though this assumption is slightly contrary to fact); we represent this assumption with the equal-probability heads and tails of a coin. Shortly we will come to binomial sampling experiments where the probabilities of the individual outcomes are not equal.

+

The term “with replacement” was explained earlier; if we were to use a deck of red and black cards (instead of a coin) for this resampling experiment, we would replace the card each time a card is drawn.

+

The introductory poker example from Section 11.2 illustrated sampling without replacement, as will other examples to follow.

+

This problem would be done conventionally with the binomial theorem using probabilities of .5, or of .487 and .513, asking about 3 successes in 4 trials.
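If you want to check the simulation against that conventional calculation, R’s dbinom function gives the binomial probability directly; for example:

# Probability of exactly 3 girls in 4 births.
dbinom(3, size=4, prob=0.5)    # 0.25 with the 50-50 assumption.
dbinom(3, size=4, prob=0.487)  # Slightly less with the more realistic probability.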

+
+
+

12.8 Example: Three or More Successful Basketball Shots in Five Attempts

+

This is an example of two-outcome sampling with unequally-likely outcomes, with replacement — a binomial experiment.

+

What is the probability that a basketball player will score three or more baskets in five shots from a spot 30 feet from the basket, if on the average she succeeds with 25 percent of her shots from that spot?

+

In this problem the probabilities of “success” or “failure” are not equal, in contrast to the previous problem of the daughters. Instead of a 50-50 coin, then, an appropriate “model” would be a thumbtack that has a 25 percent chance of landing “up” when it falls, and a 75 percent chance of landing down.

+

If we lack a thumbtack known to have a 25 percent chance of landing “up,” we could use a card deck and let spades equal “success” and the other three suits represent “failure.” Our resampling experiment could then be done as follows:

+
    +
  1. Let “spade” stand for “successful shot,” and the other suits stand for unsuccessful shot.
  2. Draw a card, record its suit (“spade” or “other”) and replace. Do so five times (for five shots).
  3. Record whether the outcome of step 2 was three or more spades. If so indicate “yes,” and otherwise “no.”
  4. Repeat steps 2 and 3 perhaps four hundred times.
  5. Count the proportion “yes” out of the four hundred throws. That proportion estimates the probability of getting three or more baskets out of five shots if the probability of a single basket is .25.
+

The first four repetitions on your score sheet might look like this (Table 12.2):

+
Table 12.2: First four repetitions of 3 or more shots simulation
Card 1 | Card 2 | Card 3 | Card 4 | Card 5 | Result
Spade | Other | Other | Other | Other | No
Other | Other | Other | Other | Other | No
Spade | Spade | Other | Spade | Spade | Yes
Other | Spade | Other | Other | Spade | No
+
+

Instead of cards, we could have used two-digit random numbers, with (say) “1-25” standing for “success,” and “26-00” (“00” in place of “100”) standing for failure. Then the steps would simply be:

+
    +
  1. Let the random numbers “1-25” stand for “successful shot,” “26-00” for unsuccessful shot.
  2. Draw five random numbers;
  3. Count how many of the numbers are between “01” and “25.” If three or more, score “yes.”
  4. Repeat step 2 four hundred times.
+

If you understand the earlier “three_girls” program, then the program below should be easy: To create 10000 samples, we start with a for statement. We then sample 5 numbers between “1” and “4” into our variable a to simulate the 5 shots, each with a 25 percent — or 1 in 4 — chance of scoring. We decide that 1 will stand for a successful shot, and 2 through 4 will stand for a missed shot, and therefore we count (sum) the number of 1’s in a to determine the number of shots resulting in baskets in the current sample. The next step is to transfer the results of each trial to vector n_baskets. We then finish the loop with the } close brace. The final step is to search the vector n_baskets, after the 10000 samples have been generated, and sum the times that 3 or more baskets were made. We place the results in n_more_than_2, calculate the proportion in prop_more_than_2, and then display the result.

+
+

Start of basketball_shots notebook

+ + +
+
n_baskets <- numeric(10000)
+
+# Do 10000 experimental trials.
+for (i in 1:10000) {
+
+    # Generate 5 random numbers, each between 1 and 4, put them in "a".
+    # Let "1" represent a basket, "2" through "4" be a miss.
+    a <- sample(1:4, size=5, replace=TRUE)
+
+    # Count the number of baskets, put that result in b.
+    b <- sum(a == 1)
+
+    # Keep track of each experiment's results in z.
+    n_baskets[i] <- b
+
+    # End the experiment, go back and repeat until all 10000 are completed, then
+    # proceed.
+}
+
+# Determine how many experiments produced more than two baskets, put that
+# result in k.
+n_more_than_2 <- sum(n_baskets > 2)
+
+# Convert to a proportion.
+prop_more_than_2 <- n_more_than_2 / 10000
+
+# Print the result.
+message(prop_more_than_2)
+
+
0.1055
+
+
+

End of basketball_shots notebook

+
+
+
+

12.9 Note to the student of analytic probability theory

+

This problem would be done conventionally with the binomial theorem, asking about the chance of getting 3 successes in 5 trials, with the probability of a success = .25.
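For comparison with the simulation result above, that conventional answer can be computed in R as:

# Probability of 3 or more baskets in 5 shots, each with probability 0.25.
sum(dbinom(3:5, size=5, prob=0.25))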

+
+
+

12.10 Example: One in Black, Two in White, No Misses in Three Archery Shots

+

This is an example of a multiple outcome (multinomial) sampling with unequally likely outcomes; with replacement.

+

Assume from past experience that a given archer puts 10 percent of his shots in the black (“bullseye”) and 60 percent of his shots in the white ring around the bullseye, but misses with 30 percent of his shots. How likely is it that in three shots the shooter will get exactly one bullseye, two in the white, and no misses? Notice that unlike the previous cases, in this example there are more than two outcomes for each trial.

+

This problem may be handled with a deck of three colors (or suits) of cards in proportions varying according to the probabilities of the various outcomes, and sampling with replacement. Using random numbers is simpler, however:

+
    +
  • Step 1. Let “1” = “bullseye,” “2-7” = “in the white,” and “8-0” = “miss.”
  • +
  • Step 2. Choose three random numbers, and examine whether there are one “1” and two numbers “2-7.” If so, record “yes,” otherwise “no.”
  • +
  • Step 3. Repeat step 2 perhaps 400 times, and count the proportion of “yeses.” This estimates the probability sought.
  • +
+

This problem would be handled in conventional probability theory with what is known as the Multinomial Distribution.
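For reference, R can also compute that conventional multinomial probability directly:

# Probability of exactly 1 bullseye, 2 in the white, and 0 misses in 3 shots,
# with probabilities 0.1, 0.6 and 0.3 for the three outcomes.
dmultinom(c(1, 2, 0), prob=c(0.1, 0.6, 0.3))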

+

This problem may be quickly solved on the computer using R with the notebook labeled “bullseye” below. Bullseye has a complication not found in previous problems: It tests whether two different sorts of events both happen — a bullseye plus two shots in the white.

+

After generating three randomly-drawn numbers between 1 and 10, we check with the sum function to see if there is a bullseye. If there is, the if statement tells the computer to continue with the operations, checking if there are two shots in the white; if there is no bullseye, the if statement tells the computer to end the trial and start another trial. Ten thousand repetitions are called for, the number of trials meeting the criteria are counted, and the results are then printed.

+

In addition to showing how this particular problem may be handled with R, the “bullseye” program teaches you some more fundamentals of computer programming. The if statement nested inside the for loop is a basic tool of programming.

+
+

Start of bullseye notebook

+ + +
+
# Make a vector to store the results of each trial.
+white_counts <- numeric(10000)
+
+# Do 10000 experimental trials
+for (i in 1:10000) {
+
+    # To represent 3 shots, generate 3 numbers at random between "1" and "10"
+    # and put them in a. We will let a "1" denote a bullseye, "2"-"7" a shot in
+    # the white, and "8"-"10" a miss.
+    a <- sample(1:10, size=3, replace=TRUE)
+
+    # Count the number of bullseyes, put that result in b.
+    b <- sum(a == 1)
+
+    # If there is exactly one bullseye, we will continue with counting the
+    # other shots. (If there are no bullseyes, we need not bother — the
+    # outcome we are interested in has not occurred.)
+    if (b == 1) {
+
+        # Count the number of shots in the white, put them in c. (Recall we are
+        # doing this only if we got one bullseye.)
+        c <- sum((a >= 2) & (a <=7))
+
+        # Keep track of the results of this second count.
+        white_counts[i] <- c
+
+        # End the "if" sequence — we will do the following steps without regard
+        # to the "if" condition.
+    }
+
+    # End the above experiment and repeat it until 10000 repetitions are
+    # complete, then continue.
+}
+
+# Count the number of occasions on which there are two in the white and a
+# bullseye.
+n_desired <- sum(white_counts == 2)
+
+# Convert to a proportion.
+prop_desired <- n_desired / 10000
+
+# Print the results.
+message(prop_desired)
+
+
0.1047
+
+
+

End of bullseye notebook

+
+

This example illustrates the addition rule that was introduced and discussed in Chapter 9. In Section 12.10, a bullseye, an in-the-white shot, and a missed shot are “mutually exclusive” events because a single shot cannot result in more than one of the three possible outcomes. One can calculate the probability of either of two mutually-exclusive outcomes by adding their probabilities. The probability of either a bullseye or a shot in the white is .1 + .6 = .7. The probability of an arrow either in the white or a miss is .6 + .3 = .9. The logic of the addition rule is obvious when we examine the random numbers given to the outcomes. Seven of 10 random numbers belong to “bullseye” or “in the white,” and nine of 10 belong to “in the white” or “miss.”

+
+
+

12.11 Example: Two Groups of Heart Patients

+

We want to learn how likely it is that, by chance, Group A would have as few as two more deaths than Group B — Table 12.3:

+
+ + + + + + + + + + + + + + + + + + + + + +
Table 12.3: Two Groups of Heart Patients
          Live    Die
Group A    79     11
Group B    21      9
+
+

This problem, phrased here as a question in probability, is the prototype of a problem in statistics that we will consider later (which the conventional theory would handle with a “chi square distribution”). We can handle it in either of two ways, as follows:

+

Approach A

+
    +
  1. Put 120 balls into a bucket, 100 white (for live) and 20 black (for die).
  2. Draw 30 balls randomly and assign them to Group B; the others are assigned to group A.
  3. Count the numbers of black balls in the two groups and determine whether Group A’s excess “deaths” (= black balls), compared to Group B, is two or fewer (or what is equivalent in this case, whether there are 11 or fewer black balls in Group A); if so, write “Yes,” otherwise “No.”
  4. Repeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”
+

A second way to think about this sort of problem may be handled as follows:

+

Approach B

+
    +
  1. Put 120 balls into a bucket, 100 white (for live) and 20 black (for die) (as before).
  2. Draw balls one by one, replacing the drawn ball each time, until you have accumulated 90 balls for Group A and 30 balls for Group B. (You could, of course, just as well use a bucket with 5 white and 1 black balls, or 10 white and 2 black, in this approach, since the drawing is with replacement.)
  3. As in approach “A” above, count the numbers of black balls in the two groups and determine whether Group A’s excess deaths is two or fewer; if so, write “Yes,” otherwise “No.”
  4. As above, repeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”
+

We must also take into account the possibility of a similar eye-catching “unbalanced” result of a much larger proportion of deaths in Group B. It will be a tough decision how to do so, but a reasonable option is to simply double the probability computed in step 4 of approach A or B.

+

Deciding which of these two approaches — the “permutation” (without replacement) and “bootstrap” (with replacement) methods — is the more appropriate is often a thorny matter; it will be discussed later in Chapter 24. In many cases, however, the two approaches will lead to similar results.

+

Later, we will actually carry out these procedures with the aid of R, and estimate the probabilities we seek.
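For readers who want a preview, a minimal sketch of Approach A in R might look like the following (this is our own illustration, not one of the book's notebooks; the full treatment comes later):

# Bucket of 120 balls: 100 'live' (white) and 20 'die' (black).
bucket <- rep(c('live', 'die'), c(100, 20))

n_yes <- 0
for (i in 1:10000) {
    # Shuffle the bucket; deal 30 balls to Group B, the remaining 90 to Group A.
    shuffled <- sample(bucket)
    group_a <- shuffled[31:120]
    # "Yes" if Group A has 11 or fewer deaths (an excess of two or fewer).
    if (sum(group_a == 'die') <= 11) {
        n_yes <- n_yes + 1
    }
}
# The proportion of "yes" trials estimates the probability sought.
message(n_yes / 10000)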

+
+
+

12.12 Example: Dispersion of a Sum of Random Variables — Hammer Lengths — Heads and Handles

+

The distribution of lengths for hammer handles is as follows: 20 percent are 10 inches long, 30 percent are 10.1 inches, 30 percent are 10.2 inches, and 20 percent are 10.3 inches long. The distribution of lengths for hammer heads is as follows: 2.0 inches, 20 percent; 2.1 inches, 20 percent; 2.2 inches, 30 percent; 2.3 inches, 20 percent; 2.4 inches, 10 percent.

+

If you draw a handle and a head at random, what will be the mean total length? In Chapter 9 we saw that the conventional formulaic method gives the answer with a formula that says the sum of the means is the mean of the sums, but it is easy to get the answer with simulation. But now we ask about the dispersion of the sum. There are formulaic rules for such measures as the variance. But consider this other example: What proportion of the hammers made with handles and heads drawn at random will have lengths equal to or greater than 12.4 inches? No simple formula will provide an answer. And if the number of categories is increased considerably, any formulaic approach will become burdensome if not undoable. But Monte Carlo simulation produces an answer quickly and easily, as follows:

+
    +
  1. Fill a bucket with:

     • 2 balls marked “10” (inches),
     • 3 balls marked “10.1”,
     • 3 marked “10.2”, and
     • 2 marked “10.3”.

     This bucket represents the handles.

     Fill another bucket with:

     • 2 balls marked “2.0”,
     • 2 balls marked “2.1”,
     • 3 balls marked “2.2”,
     • 2 balls marked “2.3” and
     • 1 ball marked “2.4”.

     This bucket represents the heads.

  2. Pick a ball from each of the “handles” and “heads” bucket, calculate the sum, and replace the balls.

  3. Repeat perhaps 200 times (more when you write a computer program), and calculate the proportion of the sums that are greater than 12.4 inches.
+

You may also want to forgo learning the standard “rule,” and simply estimate the mean this way as well. As an exercise, compute the interquartile range — the difference between the 25th and 75th percentiles.
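Here is a minimal sketch of that simulation in R (our own code, using vectorized draws rather than an explicit loop; it is not one of the book's notebooks):

# Buckets of handle and head lengths, in the stated proportions.
handles <- rep(c(10, 10.1, 10.2, 10.3), c(2, 3, 3, 2))
heads <- rep(c(2.0, 2.1, 2.2, 2.3, 2.4), c(2, 2, 3, 2, 1))

# Draw a handle and a head (with replacement) in each of 10000 trials,
# and add the two lengths.
totals <- sample(handles, 10000, replace=TRUE) + sample(heads, 10000, replace=TRUE)

# Estimated mean total length.
message(mean(totals))

# Estimated proportion of hammers 12.4 inches or longer.
message(sum(totals >= 12.4) / 10000)

# The interquartile range, as suggested in the exercise.
message(IQR(totals))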

+
+
+

12.13 Example: The Product of Random Variables — Theft by Employees

+

The distribution of the number of thefts per month you can expect in your business is as follows:

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Number    Probability
0         0.5
1         0.2
2         0.1
3         0.1
4         0.1
+

The amounts that may be stolen on any theft are as follows:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Amount    Probability
$50       0.4
$75       0.4
$100      0.1
$125      0.1
+

The same procedure as used above to estimate the mean length of hammers — add the lengths of handles and heads — can be used for this problem except that the results of the drawings from each bucket are multiplied rather than added.

+

In this case there is again a simple rule: The mean of the products equals the product of the means. But this rule holds only when the drawings from the two buckets are indeed independent of each other, as they are in this case.
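A minimal sketch of this simulation in R (our own code, following the book's simplification of one drawing from each bucket per month) might be:

# Number of thefts per month and amount per theft, drawn with the stated
# probabilities.
n_thefts <- sample(c(0, 1, 2, 3, 4), 10000, replace=TRUE,
                   prob=c(0.5, 0.2, 0.1, 0.1, 0.1))
amounts <- sample(c(50, 75, 100, 125), 10000, replace=TRUE,
                  prob=c(0.4, 0.4, 0.1, 0.1))

# Multiply the two drawings to get a simulated monthly loss for each trial.
losses <- n_thefts * amounts

# The mean of the products should come out close to the product of the means,
# because the two drawings are independent.
message(mean(losses))
message(mean(n_thefts) * mean(amounts))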

+

The next two problems are a bit harder than the previous ones; you might skip them for now and come back to them a bit later. However, with the Monte Carlo simulation method they are within the grasp of any introductory student who has had just a bit of experience with the method. In contrast, a standard book whose lead author is Frederick Mosteller, as respected a statistician as there is, says of this type of problem: “Naturally, in this book we cannot expect to study such difficult problems in their full generality [that is, show how to solve them, rather than merely state them], but we can lay a foundation for their study.” (Mosteller, Rourke, and Thomas 1961, 5)

+
+
+

12.14 Example: Flipping Pennies to the End

+

Two players, each with a stake of ten pennies, engage in the following game: A coin is tossed, and if it is (say) heads, player A gives player B a penny; if it is tails, player B gives player A a penny. What is the probability that one player will lose his or her entire stake of 10 pennies if they play for 200 tosses?

+

This is a classic problem in probability theory; it has many everyday applications in situations such as inventory management. For example, what is the probability of going out of stock of a given item in a given week if customers and deliveries arrive randomly? It also is a model for many processes in modern particle physics.

+

Solution of the penny-matching problem with coins is straightforward. Repeatedly flip a coin and check if one player or the other reaches a zero balance before you reach 200 flips. Or with random numbers:

+
    +
  1. Numbers “1-5” = head = “+1”; Numbers “6-0” = tail = “-1.”
  2. Proceed down a series of 200 numbers, keeping a running tally of the “+1”’s and the “-1”’s. If the tally reaches “+10” or “-10” on or before the two-hundredth digit, record “yes”; otherwise record “no.”
  3. Repeat step 2 perhaps 400 or 10000 times, and calculate the proportion of “yeses.” This estimates the probability sought.
+

The following R program also solves the problem. The heart of the program is the line where the program models a coin flip with the statement c <- sample(1:2, size=1). After you study that, go back and notice the inner for loop starting with for (j in 1:200) { that describes the procedure for flipping a coin 200 times. Finally, note how the outer for (i in 1:10000) { loop simulates 10000 games, each game consisting of the 200 coin flips we generate with the inner for loop.

+
+

Start of pennies notebook

+ + +
+
someone_won <- numeric(10000)
+
+# Do 10000 trials
+for (i in 1:10000) {
+
+    # Record the number 10: a's stake
+    a_stake <- 10
+
+    # Same for b
+    b_stake <- 10
+
+    # An indicator flag that will be set to "1" when somebody wins.
+    flag <- 0
+
+    # Repeat the following steps 200 times.
+    # Notice we use "j" as the counter variable, to avoid overwriting
+    # "i", the counter variable for the 10000 trials.
+    for (j in 1:200) {
+        # Generate the equivalent of a coin flip, letting 1 <- heads,
+        # 2 <- tails
+        c <- sample(1:2, size=1)
+
+        # If it's a heads
+        if (c == 1) {
+
+            # Add 1 to b's stake
+            b_stake <- b_stake + 1
+
+            # Subtract 1 from a's stake
+            a_stake <- a_stake - 1
+
+            # End the "if" condition
+        }
+
+        # If it's a tails
+        if (c == 2) {
+
+            # Add one to a's stake
+            a_stake <- a_stake + 1
+
+            # Subtract 1 from b's stake
+            b_stake <- b_stake - 1
+
+            # End the "if" condition
+        }
+
+        # If a has won
+        if (a_stake == 20) {
+
+            # Set the indicator flag to 1
+            flag <- 1
+        }
+
+        # If b has won
+        if (b_stake == 20) {
+
+            # Set the indicator flag to 1
+            flag <- 1
+
+        }
+
+        # End the repeat loop for 200 plays (note that the indicator flag stays
+        # at 0 if neither a nor b has won)
+    }
+
+    # Keep track of whether anybody won.
+    someone_won[i] <- flag
+
+    # End the 10000 trials
+}
+
+# Find out how often somebody won
+n_wins <- sum(someone_won)
+
+# Convert to a proportion
+prop_wins <- n_wins / 10000
+
+# Print the results
+message(prop_wins)
+
+
0.8919
+
+
+

End of pennies notebook

+
+

A similar example: Your warehouse starts out with a supply of twelve capacirators. Every three days a new shipment of two capacirators is received. There is a .6 probability that a capacirator will be used each morning, and the same each afternoon. (It is as if a random drawing is made each half-day to see if a capacirator is used; two capacirators may be used in a single day, or one or none.) How long will it be, on average, before the warehouse runs out of stock?
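A rough sketch of this warehouse simulation in R follows. It is our own code, and it has to make assumptions the problem statement leaves open — here, that a shipment of two arrives at the start of every third day, and that the warehouse "runs out" the first time a half-day's demand cannot be filled:

days_to_stockout <- numeric(10000)

for (i in 1:10000) {
    stock <- 12
    day <- 0
    repeat {
        day <- day + 1
        # Assumption: a shipment of 2 arrives at the start of every third day.
        if (day %% 3 == 0) {
            stock <- stock + 2
        }
        # Morning and afternoon: each half-day has a 0.6 chance of needing one.
        for (half_day in 1:2) {
            if (runif(1) < 0.6) {
                if (stock == 0) {
                    # Demand arrives but nothing is left: record the day.
                    days_to_stockout[i] <- day
                    break
                }
                stock <- stock - 1
            }
        }
        # Stop this trial once a stock-out day has been recorded.
        if (days_to_stockout[i] > 0) {
            break
        }
    }
}

# Average number of days until the warehouse first runs out.
message(mean(days_to_stockout))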

+
+
+

12.15 Example: A Drunk’s Random Walk

+

If a drunk chooses the direction of each step randomly, will he ever get home? If he can only walk on the road on which he lives, the problem is almost the same as the gambler’s-ruin problem above (“pennies”). But if the drunk can go north-south as well as east-west, the problem becomes a bit different and interesting.

+

Looking now at Figure 12.1 — what is the probability of the drunk reaching either his house (at 3 steps east, 2 steps north) or my house (1 west, 4 south) before he finishes taking twelve steps?

+

One way to handle the problem would be to use a four-directional spinner such as is used with a child’s board game, and then keep track of each step on a piece of graph paper. The reader may construct an R program as an exercise.

+
+
+
+
+

+
Figure 12.1: Drunk random walk
+
+
+
+
+
+
+

12.16 Example: public and private liquor pricing

+

Let’s end this chapter with an actual example that will be used again in Chapter 13 when discussing probability in finite universes, and then at great length in the context of statistics in Chapter 24. This example also illustrates the close connection between problems in pure probability and those in statistical inference.

+

As of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned, and 16 “monopoly” states where the state government owns the retail liquor stores. (Some states were omitted for technical reasons.) These were the representative 1961 prices of a fifth of Seagram 7 Crown whiskey in the two sets of states (Table 12.4):

+
+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 12.4: Whiskey prices by state category
         Private    Government
          4.82        4.65
          5.29        4.55
          4.89        4.11
          4.95        4.15
          4.55        4.2
          4.9         4.55
          5.25        3.8
          5.3         4.0
          4.29        4.19
          4.85        4.75
          4.54        4.74
          4.75        4.5
          4.85        4.1
          4.85        4.0
          4.5         5.05
          4.75        4.2
          4.79
          4.85
          4.79
          4.95
          4.95
          4.75
          5.2
          5.1
          4.8
          4.29
Count     26          16
Mean      4.84        4.35
+
+
+
+
+
+
+

+
Figure 12.2: Whiskey prices by state category
+
+
+
+
+

Let us consider that all these states’ prices constitute one single universe (an assumption whose justification will be discussed later). If so, one can ask: If these 42 states constitute a single universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference between the means that was actually observed)?

+

This can be thought of as a problem in pure probability because we begin with a known universe and ask how it would behave with random drawings from it. We sample with replacement; the decision to do so, rather than to sample without replacement (which is the way I had first done it, and for which there may be better justification) will be discussed later. We do so to introduce a “bootstrap”-type procedure (defined later) as follows: Write each of the forty-two observed state prices on a separate card. The shuffled deck simulates a situation in which each state has an equal chance for each price. Repeatedly deal groups of 16 and 26 cards, replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. For each trial, calculate the difference in mean prices.

+

These are the steps systematically:

+
    +
  • Step A: Write each of the 42 prices on a card and shuffle.
  • +
  • Steps B and C (combined in this case): i) Draw cards randomly with replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $.49; if it is as great or greater than $.49, write “yes,” otherwise “no.”
  • +
  • Step D: Repeat step B-C a hundred or a thousand times. Calculate the proportion “yes,” which estimates the probability we seek.
  • +
+

The probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state-ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices. The following notebook performs the operations described above.

+
+

Start of liquor_prices notebook

+ + +
+
fake_diffs <- numeric(10000)
+
+priv <- c(4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54,
+          4.75, 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75,
+          5.20, 5.10, 4.80, 4.29)
+
+govt <- c(4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00, 4.19, 4.75, 4.74,
+          4.50, 4.10, 4.00, 5.05, 4.20)
+
+actual_diff <- mean(priv) - mean(govt)
+
+# Join the two vectors of data
+both <- c(priv, govt)
+
+# Repeat 10000 simulation trials
+for (i in 1:10000) {
+
+    # Sample 26 with replacement for private group
+    fake_priv <- sample(both, size=26, replace=TRUE)
+
+    # Sample 16 with replacement for govt. group
+    fake_govt <- sample(both, size=16, replace=TRUE)
+
+    # Find the mean of the "private" group.
+    p <- mean(fake_priv)
+
+    # Mean of the "govt." group
+    g <- mean(fake_govt)
+
+    # Difference in the means
+    diff <- p - g
+
+    # Keep score of the trials
+    fake_diffs[i] <- diff
+}
+
+# Graph of simulation results to compare with the observed result.
+fig_title <- paste('Average price difference (Actual difference = ',
+                   round(actual_diff * 100),
+                   'cents')
+hist(fake_diffs, main=fig_title, xlab='Difference in average prices (cents)')
+
+
+
+

+
+
+
+
+

End of liquor_prices notebook

+
+

The results shown above — not even one “success” in 10,000 trials — imply that there is only a very small probability that two groups with mean prices as different as were observed would happen by chance if drawn with replacement from the universe of 42 observed prices.
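The notebook above only draws the histogram. To check the count behind the statement just made, one could add two lines at the end of that notebook (our addition; it assumes the fake_diffs vector and the actual_diff value are still in the workspace):

# How many of the 10000 simulated differences were as large as (or larger
# than) the observed difference of about 49 cents?
n_as_large <- sum(fake_diffs >= actual_diff)
message(n_as_large)
message(n_as_large / 10000)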

+

Here we think of these states as if they came from a non-finite universe, which is one possible interpretation for one particular context. However, in Chapter 13 we will postulate a finite universe, which is appropriate if it is reasonable to consider that these observations constitute the entire universe (aside from those states excluded from the analysis because of data complexities).

+
+
+

12.17 The general procedure

+

Chapter 25 generalizes what we have done in the probability problems above into a general procedure, which will in turn be a subpart of a general procedure for all of resampling.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/probability_theory_3_files/figure-html/fig-whiskey-hist-1.png b/r-book/probability_theory_3_files/figure-html/fig-whiskey-hist-1.png new file mode 100644 index 00000000..5f4aa17b Binary files /dev/null and b/r-book/probability_theory_3_files/figure-html/fig-whiskey-hist-1.png differ diff --git a/r-book/probability_theory_3_files/figure-html/unnamed-chunk-41-3.png b/r-book/probability_theory_3_files/figure-html/unnamed-chunk-41-3.png new file mode 100644 index 00000000..2ccb3c90 Binary files /dev/null and b/r-book/probability_theory_3_files/figure-html/unnamed-chunk-41-3.png differ diff --git a/r-book/probability_theory_4_finite.html b/r-book/probability_theory_4_finite.html new file mode 100644 index 00000000..64c80463 --- /dev/null +++ b/r-book/probability_theory_4_finite.html @@ -0,0 +1,1421 @@ + + + + + + + + + +Resampling statistics - 13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes

+
+ + + +
+ + + + +
+ + +
+ +
+

13.1 Introduction

+

The examples in Chapter 12 dealt with infinite universes , in which the probability of a given simple event is unaffected by the outcome of the previous simple event. But now we move on to finite universes, situations in which you begin with a given set of objects whose number is not enormous — say, a total of two, or two hundred, or two thousand. If we liken such a situation to a bucket containing balls of different colors each with a number on it, we are interested in the probability of drawing various sets of numbered and colored balls from the bucket on the condition that we do not replace balls after they are drawn.

+

In the cases addressed in this chapter, it is important to remember that the single events no longer are independent of each other. A typical situation in which sampling without replacement occurs is when items are chosen from a finite universe — for example, when children are selected randomly from a classroom. If the class has five boys and five girls, and if you were to choose three girls in a row, then the chance of selecting a fourth girl on the next choice obviously is lower than the chance that you would pick a girl on the first selection.

+

The key to dealing with this type of problem is the same as with earlier problems: You must choose a simulation procedure that produces simple events having the same probabilities as the simple events in the actual problem involving sampling without replacement. That is, you must make sure that your simulation does not allow duplication of events that have already occurred. The easiest way to sample without replacement with resampling techniques is by simply ignoring an outcome if it has already occurred.

+

Examples Section 13.3.1 through Section 13.3.10 deal with some of the more important sorts of questions one may ask about drawings without replacement from such an urn. To get an overview, I suggest that you read over the summaries (in bold) introducing examples Section 13.3.1 to Section 13.3.10 before beginning to work through the examples themselves.

+

This chapter also revisits the general procedure used in solving problems in probability and statistics with simulation, here in connection with problems involving a finite universe. The steps that one follows in simulating the behavior of a universe of interest are set down in such fashion that one may, by random drawings, deduce the probability of various events. Having had by now the experience of working through the problems in Chapter 9 and Chapter 12, the reader should have a solid basis to follow the description of the general procedure which then helps in dealing with specific problems.

+

Let us begin by describing some of the major sorts of problems with the aid of a bucket with six balls.

+
+
+

13.2 Some building-block programs

+

Case 1. Each of six balls is labeled with a number between “1” and “6.” We ask: What is the probability of choosing balls 1, 2, and 3 in that order if we choose three balls without replacement? Figure 13.1 diagrams the events we consider “success.”

+
+
+
+
+

+
Figure 13.1: The Event Classified as “Success” for Case 1
+
+
+
+
+
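Before the full examples, here is a minimal sketch (ours, not one of the book's notebooks) of how Case 1 might be simulated; the later cases change only the test applied to the drawn balls:

# Case 1: chance of drawing balls 1, 2 and 3, in that order, when drawing
# three balls without replacement from six numbered balls.
n_success <- 0
for (i in 1:10000) {
    # sample() without replace=TRUE draws without replacement.
    drawn <- sample(1:6, size=3)
    # Success only if the draw is exactly 1, 2, 3 in that order.
    if (all(drawn == c(1, 2, 3))) {
        n_success <- n_success + 1
    }
}
message(n_success / 10000)
# The exact answer is 1 / (6 * 5 * 4), or about 0.008.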

Case 2. We begin with the same bucket as in Case 1, but now ask the probability of choosing balls 1, 2, and 3 in any order if we choose three balls without replacement. Figure 13.2 diagrams two of the events we consider success. These possibilities include that which is shown in Figure 13.1 above, plus other possibilities.

+
+
+
+
+

+
Figure 13.2: An Incomplete List of the Events Classified as “Success” for Case 2
+
+
+
+
+

Case 3. The odd-numbered balls “1,” “3,” and “5,” are painted red and the even-numbered balls “2,” “4,” and “6” are painted black. What is the probability of getting a red ball and then a black ball in that order? Some possibilities are illustrated in Figure 13.3, which includes the possibility shown in Figure 13.1. It also includes some but not all possibilities found in Figure 13.2; for example, Figure 13.2 includes choosing balls 2, 3 and 1 in that order, but Figure 13.3 does not.

+
+
+
+
+

+
Figure 13.3: An Incomplete List of the Events Classified as “Success” for Case 3
+
+
+
+
+

Case 4. What is the probability of getting two red balls and one black ball in any order?

+
+
+
+
+

+
Figure 13.4: An Incomplete List of the Events Classified as “Success” for Case 4
+
+
+
+
+

Case 5. Various questions about matching may be asked with respect to the six balls. For example, what is the probability of getting ball 1 on the first draw or ball 2 on the second draw or ball 3 on the third draw? (Figure 13.5) Or, what is the probability of getting all balls on the draws corresponding to their numbers?

+
+
+
+
+

+
Figure 13.5: An Incomplete List of the Events Classified as “Success” for Case 5
+
+
+
+
+
+
+

13.3 Problems in finite universes

+
+

13.3.1 Example: four girls and one boy

+

What is the probability of selecting four girls and one boy when selecting five students from any group of twenty-five girls and twenty-five boys? This is an example of sampling without replacement when there are two outcomes and the order does not matter.

+

The important difference between this example and the infinite-universe examples in the prior chapter is that the probability of obtaining a boy or a girl in a single simple event differs from one event to the next in this example, whereas it stays the same when the sampling is with replacement. To illustrate, the probability of a girl is .5 (25 out of 50) when the first student is chosen, but the probability of a girl is either 25/49 or 24/49 when the second student is chosen, depending on whether a boy or a girl was chosen on the first pick. Or after, say, three girls and one boy are picked, the probability of getting a girl on the next choice is (25-3)/(50-4) = 22/46, which is clearly not equal to .5.

+

As always, we must create a satisfactory analog to the process whose probability we want to learn. In this case, we can use a deck of 50 cards, half red and half black, and deal out five cards without replacing them after each card is dealt; this simulates the choice of five students from among the fifty.

+

We can no longer use our procedure from before. If we designated “1-25” as being girls and “26-50” as being boys and then proceeded to draw random numbers, the probability of a girl would stay the same on each pick — but in sampling without replacement that probability must change after each student is chosen.

+

At this point, it is important to note that — for this particular problem — we do not need to distinguish between particular girls (or boys). That is, it does not matter which girl (or boy) is selected in a given trial. Nor did we pay attention to the order in which we selected girls or boys. This is an instance of Case 4 discussed above. Subsequent problems will deal with situations where the order of selection, and the particular individuals, do matter.

+

Our approach then is to mimic having the class in front of us: an array of 50 strings, half of the entries ‘boy’ and the other half ‘girl’. We then shuffle the class (the array), and choose the first five students (strings).

+
    +
  • Step 1. Create a list with 50 labels, half ‘boy’ and half ‘girl’.
  • +
  • Step 2. Shuffle the class and select five students. Count whether there are four labels equal ‘girl’. If so, write “yes,” otherwise “no”.
  • +
  • Step 3. Repeat step 2, say, 10,000 times, and count the proportion “yes”, which estimates the probability sought.
  • +
+

The results of a few experimental trials are shown in Table 13.1.

+
+ + +++++ + + + + + + + + + + + + + + + + + + + + + + +
Table 13.1: A few experimental trials of four girls and one boy
Experiment    Strings Chosen                                Success?
1             ‘girl’, ‘boy’, ‘boy’, ‘girl’, ‘boy’           No
2             ‘boy’, ‘girl’, ‘girl’, ‘girl’, ‘girl’         Yes
3             ‘girl’, ‘girl’, ‘girl’, ‘boy’, ‘girl’         Yes
+
+

A solution to this problem with R is presented below.

+
+

Start of four_girls_one_boy notebook

+ + +
+
N <- 10000
+trial_results <- numeric(N)
+
+# Constitute the set of 25 girls and 25 boys.
+whole_class <- rep(c('girl', 'boy'), c(25, 25))
+
+# Repeat the following steps N times.
+for (i in 1:N) {
+
+    # Shuffle the class members into a random order.
+    shuffled <- sample(whole_class)
+
+    # Take the first 5 class members, call them c.
+    c <- shuffled[1:5]
+
+    # Count how many girls there are, put the result in d.
+    d <- sum(c == 'girl')
+
+    # Keep track of each trial result in trial_results.
+    trial_results[i] <- d
+
+    # End the experiment, go back and repeat until all N trials are
+    # complete.
+}
+
+# Count the number of times we got four girls, put the result in k.
+k <- sum(trial_results == 4)
+
+# Convert to a proportion.
+kk <- k / N
+
+# Print the result.
+message(kk)
+
+
0.1481
+
+
+

We can also find the probabilities of other outcomes from a histogram of trial results obtained with the following command:

+
+
# Do histogram, with one bin for each possible number.
+hist(trial_results, breaks=0:max(trial_results), main='# of girls')
+
+
+
+

+
+
+
+
+

In the resulting histogram we can see that in 15 percent of the trials, 4 of the 5 selected were girls.

+

It should be noted that for this problem — as for most other problems — there are several other resampling procedures that will also do the job correctly.

+

In analytic probability theory this problem is worked with a formula for “combinations.”
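As a check — this is our addition, not part of the notebook — the combinations formula can be evaluated directly in R:

# Exact probability of exactly 4 girls and 1 boy when choosing 5 students
# from 25 girls and 25 boys.
choose(25, 4) * choose(25, 1) / choose(50, 5)
# About 0.149, close to the simulation estimate above.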

+

End of four_girls_one_boy notebook

+
+
+
+

13.3.2 Example: Five spades and four clubs in a bridge hand

+
+

Start of five_spades_four_clubs notebook

+ + +

This is an example of multiple-outcome sampling without replacement, where order does not matter.

+

The problem is similar to the example in Section 13.3.1, except that now there are four equally-likely outcomes instead of only two. An R solution is:

+
+
# Constitute the deck of 52 cards.
+# Repeat the suit names 13 times each, to make a 52 card deck.
+deck <- rep(c('spade', 'club', 'diamond', 'heart'), c(13, 13, 13, 13))
+# Show the deck
+deck
+
+
 [1] "spade"   "spade"   "spade"   "spade"   "spade"   "spade"   "spade"  
+ [8] "spade"   "spade"   "spade"   "spade"   "spade"   "spade"   "club"   
+[15] "club"    "club"    "club"    "club"    "club"    "club"    "club"   
+[22] "club"    "club"    "club"    "club"    "club"    "diamond" "diamond"
+[29] "diamond" "diamond" "diamond" "diamond" "diamond" "diamond" "diamond"
+[36] "diamond" "diamond" "diamond" "diamond" "heart"   "heart"   "heart"  
+[43] "heart"   "heart"   "heart"   "heart"   "heart"   "heart"   "heart"  
+[50] "heart"   "heart"   "heart"  
+
+
+
+
N <- 10000
+trial_results <- numeric(N)
+
+# Repeat the trial N times.
+for (i in 1:N) {
+
+    # Shuffle the deck and draw 13 cards.
+    hand <- sample(deck, 13)  # replace=FALSE is the default.
+
+    # Count the number of spades in "hand", put the result in "n_spades".
+    n_spades <- sum(hand == 'spade')
+
+    # If we have five spades, we'll continue on to count the clubs. If we don't
+    # have five spades, the number of clubs is irrelevant — we have not gotten
+    # the hand we are interested in.
+    if (n_spades == 5) {
+        # Count the clubs, put the result in "n_clubs"
+        n_clubs <- sum(hand == 'club')
+        # Keep track of the number of clubs in each trial
+        trial_results[i] <- n_clubs
+    }
+
+    # End one experiment, go back and repeat until all N trials are done.
+}
+
+# Count the number of trials where we got 4 clubs. This is the answer we want -
+# the number of hands out of N (10,000) with 5 spades and 4 clubs. (Recall that we
+# only counted the clubs if the hand already had 5 spades.)
+n_5_and_4 <- sum(trial_results == 4)
+
+# Convert to a proportion.
+prop_5_and_4 <- n_5_and_4 / N
+
+# Print the result
+message(prop_5_and_4)
+
+
0.022
+
+
+

End of five_spades_four_clubs notebook

+
+
+
+

13.3.3 Example: a total of fifteen points in a bridge hand

+
+

Start of fifteen_points_in_bridge notebook

+ + +

Let us assume that ace counts as 4, king = 3, queen = 2, and jack = 1.

+
+
# Constitute a deck with 4 jacks (point value 1), 4 queens (value 2), 4
+# kings (value 3), 4 aces (value 4), and 36 other cards with no point
+# value
+whole_deck <- rep(c(1, 2, 3, 4, 0), c(4, 4, 4, 4, 36))
+whole_deck
+
+
 [1] 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+[39] 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+
+
+
+
N <- 10000
+trial_results <- numeric(N)
+
+# Do N trials.
+for (i in 1:N) {
+    # Shuffle the deck of cards and draw 13
+    hand <- sample(whole_deck, size=13)  # replace=FALSE is default.
+
+    # Total the points.
+    points <- sum(hand)
+
+    # Keep score of the result.
+    trial_results[i] <- points
+
+    # End one experiment, go back and repeat until all N trials are done.
+}
+
+
+
# Produce a histogram of trial results.
+hist(trial_results, breaks=0:max(trial_results), main='Points in bridge hands')
+
+
+
+

+
+
+
+
+

From this histogram, we see that in about 4 percent of our trials we obtained a total of exactly 15 points. We can also compute this directly:

+
+
# How many times did we have a hand with fifteen points?
+k <- sum(trial_results == 15)
+
+# Convert to a proportion.
+kk <- k / N
+
+# Show the result.
+kk
+
+
[1] 0.0426
+
+
+

End of fifteen_points_in_bridge notebook

+
+
+
+

13.3.4 Example: Four girls then one boy from 25 girls and 25 boys

+
+

Start of four_girls_then_one_boy_25 notebook

+ + +

In this problem, order matters; we are sampling without replacement, with two outcomes, several of each item.

+

What is the probability of getting an ordered series of four girls and then one boy, from a universe of 25 girls and 25 boys? This illustrates Case 3 above. Clearly we can use the same sampling mechanism as in the example in Section 13.3.1, but now we record “yes” for a smaller number of composite events.

+

We record “no” even when exactly one boy is chosen, if he is chosen 1st, 2nd, 3rd, or 4th rather than 5th, whereas in Section 13.3.1 such outcomes are recorded as “yes”-es.

+
    +
  • Step 1. Generate a class (vector) of length 50, consisting of 25 strings valued “boy” and 25 strings valued “girl”.
  • +
  • Step 2. Shuffle the class array, and select the first five elements.
  • +
  • Step 3. If the first five elements are exactly 'girl', 'girl', 'girl', 'girl', 'boy', write “yes,” otherwise “no.”
  • +
  • Step 4. Repeat steps 2 and 3, say, 10,000 times, and count the proportion of “yes” results, which estimates the probability sought.
  • +
+

Let us start the single trial procedure like so:

+
+
# Constitute the set of 25 girls and 25 boys.
+whole_class <- rep(c('girl', 'boy'), c(25, 25))
+
+# Shuffle the class into a random order.
+shuffled <- sample(whole_class)
+# Take the first 5 class members, call them c.
+c <- shuffled[1:5]
+# Show the result.
+c
+
+
[1] "boy"  "boy"  "boy"  "boy"  "girl"
+
+
+

Our next step (step 3) is to check whether c is exactly equal to the result of interest. The result of interest is:

+
+
# The result we are looking for - four girls and then a boy.
+result_of_interest <- rep(c('girl', 'boy'), c(4, 1 ))
+result_of_interest
+
+
[1] "girl" "girl" "girl" "girl" "boy" 
+
+
+

We can then use a vector comparison with == to do an element by element (elementwise) check, asking whether the corresponding elements are equal:

+
+
# A Boolean array, with True where corresponding elements are equal, False
+# otherwise.
+are_equal <- c == result_of_interest
+are_equal
+
+
[1] FALSE FALSE FALSE FALSE FALSE
+
+
+

We are nearly finished with step 3 — it only remains to check whether all of the elements were equal, by checking whether all of the values in are_equal are TRUE.

+

We know that there are 5 elements, so we could check whether there are 5 TRUE values with sum:

+
+
# Are there exactly 5 TRUE values in `are_equal`?
+sum(are_equal) == 5
+
+
[1] FALSE
+
+
+

Another way to ask the same question is by using the all function on are_equal. This returns TRUE if all the elements in are_equal are TRUE, and FALSE otherwise.

+
+
+
+ +
+
+Testing whether all elements of a vector are the same +
+
+
+

The all, applied to a Boolean vector (as here), checks whether all of the elements in the Boolean vector are TRUE. If so, it returns TRUE, otherwise, it returns FALSE.

+

For example:

+
+
# All elements are TRUE, `all` returns TRUE
+all(c(TRUE, TRUE, TRUE, TRUE))
+
+
[1] TRUE
+
+
+
+
# At least one element is FALSE, `all` returns FALSE
+all(c(TRUE, TRUE, FALSE, TRUE))
+
+
[1] FALSE
+
+
+
+
+

Here is the full procedure for steps 2 and 3 (a single trial):

+
+
# Shuffle the class into a random order.
+shuffled <- sample(whole_class)
+# Take the first 5 class members, call them c.
+c <- shuffled[1:5]
+# For each element, test whether the result is the result of interest.
+are_equal <- c == result_of_interest
+# Check whether we have the result we are looking for.
+is_four_girls_then_one_boy <- all(are_equal)
+
+

All that remains is to put the single trial procedure into a loop.

+
+
N <- 10000
+trial_results <- numeric(N)
+
+# Repeat the following steps N times.
+for (i in 1:N) {
+
+    # Shuffle the class into a random order.
+    shuffled <- sample(whole_class)
+    # Take the first 5 class members, call them c.
+    c <- shuffled[1:5]
+    # For each element, test whether the result is the result of interest.
+    are_equal <- c == result_of_interest
+    # Check whether we have the result we are looking for.
+    is_four_girls_then_one_boy <- all(are_equal)
+
+    # Store the result of this trial.
+    trial_results[i] <- is_four_girls_then_one_boy
+
+    # End the experiment, go back and repeat until all N trials are
+    # complete.
+}
+
+# Count the number of times we got four girls then a boy
+k <- sum(trial_results)
+
+# Convert to a proportion.
+kk <- k / N
+
+# Print the result.
+message(kk)
+
+
0.0311
+
+
+

This type of problem is conventionally done with a permutation formula.
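As a check — again our addition, not part of the notebook — the exact probability of the ordered outcome can be written out directly:

# Probability of girl, girl, girl, girl and then boy, in that order,
# drawing without replacement from 25 girls and 25 boys.
(25 / 50) * (24 / 49) * (23 / 48) * (22 / 47) * (25 / 46)
# About 0.030, close to the simulation estimate above.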

+

End of four_girls_then_one_boy_25 notebook

+
+
+
+

13.3.5 Example: repeat pairings from random pairing

+
+

Start of university_icebreaker notebook

+ + +

First put two groups of 10 people into 10 pairs. Then re-randomize the pairings. What is the chance that four or more pairs are the same in the second random pairing? This is a problem in the probability of matching by chance.

+

Ten representatives each from two universities, Birmingham and Berkeley, attend a meeting. As a social icebreaker, representatives are divided, randomly, into pairs consisting of one person from each university.

+

If they held a second round of the icebreaker, with a new random pairing, what is the chance that four or more pairs will be the same?

+

In approaching this problem, we start at the point where the first icebreaker is complete. We now have to determine what happens after the second round.

+
    +
  • Step 1. Let “ace” through “10” of hearts represent the ten representatives from Birmingham University. Let “ace” through “10” of spades be their allocated partners (in round one) from Berkeley.
  • +
  • Step 2. Shuffle the hearts and deal them out in a row; shuffle the spades and deal in a row just below the hearts.
  • +
  • Step 3. Count the pairs — a pair is one card from the heart row and one card from the spade row — that contain the same denomination. If 4 or more pairs match, record “yes,” otherwise “no.”
  • +
  • Step 4. Repeat steps (2) and (3), say, 10,000 times.
  • +
  • Step 5. Count the proportion “yes.” This estimates the probability of 4 or more pairs.
  • +
+

Exercise for the student: Write the steps to do this example with random numbers. The R solution follows below.

+
+
N <- 10000
+trial_results <- numeric(N)
+
+# Assign numbers to each student, according to their pair, after the first
+# icebreaker
+birmingham <- 1:10
+berkeley <- 1:10
+
+for (i in 1:N) {
+    # Randomly shuffle the students from Berkeley
+    shuffled_berkeley <- sample(berkeley)
+
+    # Randomly shuffle the students from Birmingham
+    # (This step is not really necessary — shuffling one array is enough to make the matching random.)
+    shuffled_birmingham <- sample(birmingham)
+
+    # Count in how many cases people landed with the same person as in the
+    # first round, and store in trial_results.
+    matches <- sum(shuffled_berkeley == shuffled_birmingham)
+    trial_results[i] <- matches
+}
+
+# Count the number of times we got 4 or more people assigned to the same person
+k <- sum(trial_results >= 4)
+
+# Convert to a proportion.
+kk <- k / N
+
+# Print the result.
+message(kk)
+
+
0.0203
+
+
+

We see that in about 2 percent of the trials, 4 or more couples ended up being re-paired with their own partners. This can also be seen from a histogram of the trial results:
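The histogram command does not appear in this rendering of the notebook; a minimal version (assuming the trial_results vector from the code above) would be:

# Histogram of the number of pairs repeated in the second round.
hist(trial_results, breaks=0:max(trial_results),
     main='Number of pairs repeated in the second round')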

+

End of university_icebreaker notebook

+
+
+
+

13.3.6 Example: Matching Santa Hats

+
+

Start of santas_hats notebook

+ + +

The welcome staff at a restaurant mix up the hats of a party of six Christmas Santas. What is the probability that at least one will get their own hat?

+

After a long Christmas day, six Santas meet in the pub to let off steam. However, as luck would have it, their hosts have mixed up their hats. When the hats are returned, what is the chance that at least one Santa will get his own hat back?

+

First, assign each of the six Santas a number, and place these numbers in an array. Next, shuffle the array (this represents the mixed-up hats) and compare to the original. The rest of the problem is the same as the pairs one from before, except that we are now interested in any trial where at least one (\(\ge 1\)) Santa received the right hat.

+
+
N <- 10000
+trial_results <- numeric(N)
+
+# Assign numbers to each owner
+owners <- 1:6
+
+# Each hat gets the number of their owner
+hats <- 1:6
+
+for (i in 1:N) {
+    # Randomly shuffle the hats and compare to their owners
+    shuffled_hats <- sample(hats)
+
+    # In how many cases did at least one person get their hat back?
+    trial_results[i] <- sum(shuffled_hats == owners) >= 1
+}
+
+# How many times, over all trials, did at least one person get their hat back?
+k <- sum(trial_results)
+
+# Convert to a proportion.
+kk <- k / N
+
+# Print the result.
+print(kk)
+
+
[1] 0.629
+
+
+

We see that in roughly 63 percent of the trials at least one Santa received their own hat back.

+

End of santas_hats notebook

+
+
+
+

13.3.7 Example: Twenty executives assigned to two divisions of a firm

+
+

Start of twenty_executives notebook

+ + +

The top manager wants to spread the talent reasonably evenly, but she does not want to label particular executives with a quality rating and therefore considers distributing them with a random selection. She therefore wonders: What are the probabilities of the best ten among the twenty being split among the divisions in the ratios 5 and 5, 4 and 6, 3 and 7, etc., if their names are drawn from a hat? One might imagine much the same sort of problem in choosing two teams for a football or baseball contest.

+

One may proceed as follows:

+
    +
  1. Put 10 balls labeled “W” (for “worst”) and 10 balls labeled “B” (best) in a bucket.
  2. Draw 10 balls without replacement and count the W’s.
  3. Repeat (say) 400 times.
  4. Count the number of times each split — 5 W’s and 5 B’s, 4 and 6, etc. — appears in the results.
+

The problem can be done with R as follows:

+
+
N <- 10000
+trial_results <- numeric(N)
+
+managers <- rep(c('Worst', 'Best'), c(10, 10))
+
+for (i in 1:N) {
+    chosen <- sample(managers, 10)  # replace=FALSE is the default.
+    trial_results[i] <- sum(chosen == 'Best')
+}
+
+hist(trial_results, breaks=0:max(trial_results),
+     main= 'Number of best managers chosen')
+
+
+
+

+
+
+
+
+

End of twenty_executives notebook

+
+
+
+

13.3.8 Example: Executives Moving

+ +

A major retail chain moves its store managers from city to city every three years in order to broaden individuals’ knowledge and experience. To make the procedure seem fair, the new locations are drawn at random. Nevertheless, the movement is not popular with managers’ families. Therefore, to make the system a bit sporting and to give people some hope of remaining in the same location, the chain allows managers to draw in the lottery the same posts they are now in. What are the probabilities that 1, 2, 3 … will get their present posts again if the number of managers is 30?

+

The problem can be solved with the following steps:

+
    +
  1. Number a set of green balls from “1” to “30” and put them into Bucket A. Number a set of red balls from “1” to “30” and then put into Bucket B. For greater concreteness one could use 30 little numbered dolls in Bucket A and 30 little toy houses in Bucket B.
  2. Shuffle Bucket A, and array all its green balls into a row (vector A). Array all the red balls from Bucket B into a second row B just below row A.
  3. Count how many green balls in row A have the same numbers as the red balls just below them, and record that number on a scoreboard.
  4. Repeat steps 2 and 3 perhaps 1000 times. Then count in the scoreboard the numbers of “0,” “1,” “2,” “3.”
+
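A minimal sketch of these steps in R (our own code; the book does not give a notebook for this example) might be:

N <- 10000
n_matches <- numeric(N)

# Number the 30 managers and the 30 posts.
managers <- 1:30
posts <- 1:30

for (i in 1:N) {
    # Shuffle the posts; the managers stay in their original order.
    shuffled_posts <- sample(posts)
    # Count how many managers drew their present post.
    n_matches[i] <- sum(shuffled_posts == managers)
}

# Proportion of trials with 0, 1, 2, 3, ... managers keeping their posts.
table(n_matches) / N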
+
+

13.3.9 Example: State Liquor Systems Again

+

Let’s end this chapter with the example of state liquor systems that we first examined in Chapter 12 and which will be discussed again later in the context of problems in statistics.

+

Remember that as of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned (“Private”), and 16 monopoly states where the state government owns the retail liquor stores (“Government”). See Table 12.4 for the prices in the Private and Government states.

+

We found the average prices were:

+
    +
  • Private: $4.84;
  • Government: $4.35;
  • Difference (Private - Government): $0.49.
  • +
+

Let us now consider that all these states’ prices constitute one single finite universe. We ask: If these 42 states constitute a universe, and if they are all shuffled together, how likely is it that if one divides them into two samples at random (sampling without replacement), containing 16 and 26 observations respectively, the difference in mean prices turns out to be as great as $0.49 (the difference that was actually observed)?

+

Again we write each of the forty-two observed state prices on a separate card. The shuffled deck simulates a situation in which each state has an equal chance for each price. Repeatedly deal groups of 16 and 26 cards, without replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. In each trial calculate the difference in mean prices.

+

The steps more systematically:

+
    +
  • Step A. Write each of the 42 prices on a card and shuffle.
  • +
  • Steps B and C (combined in this case). i) Draw cards randomly without replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $0.49; if it is as great or greater than $0.49, write “yes,” otherwise “no.”
  • +
  • Step D. Repeat step B-C a hundred or a thousand times. Calculate the proportion “yes,” which estimates the probability we seek.
  • +
+

The probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices.

+

Please notice how the only difference between this treatment of the problem and the treatment in Chapter 12 is that the drawing in this case is without replacement whereas in Chapter 12 the drawing is with replacement.

+

In Chapter 12 we thought of these states as if they came from a non-finite universe, which is one possible interpretation in one context. But one can also reasonably think about them in another context — as if they constitute the entire universe (aside from those states excluded from the analysis because of data complexities). If so, one can ask: If these 42 states constitute a universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference that was actually observed)?
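A sketch of this without-replacement version in R needs only a small change from the liquor_prices notebook of Chapter 12 — shuffle the 42 prices and deal them out, instead of sampling with replacement (this is our adaptation, not one of the book's notebooks; the full statistical treatment comes in Chapter 24):

fake_diffs <- numeric(10000)

priv <- c(4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54,
          4.75, 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75,
          5.20, 5.10, 4.80, 4.29)
govt <- c(4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00, 4.19, 4.75, 4.74,
          4.50, 4.10, 4.00, 5.05, 4.20)

actual_diff <- mean(priv) - mean(govt)

# Pool all 42 prices.
both <- c(priv, govt)

for (i in 1:10000) {
    # Shuffle the pooled prices and deal them, without replacement, into a
    # "private" group of 26 and a "government" group of 16.
    shuffled <- sample(both)
    fake_priv <- shuffled[1:26]
    fake_govt <- shuffled[27:42]
    fake_diffs[i] <- mean(fake_priv) - mean(fake_govt)
}

# How often was the shuffled difference as large as the observed difference?
message(sum(fake_diffs >= actual_diff) / 10000)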

+
+
+

13.3.10 Example: Five or More Spades in One Bridge Hand; Four Girls and a Boy

+
+

Start of five_spades_four_girls notebook

+ + +

This is a compound problem: what are the chances of both five or more spades in one bridge hand, and four girls and a boy in a five-child family?

+

“Compound” does not necessarily mean “complicated”. It means that the problem is a compound of two or more simpler problems.

+

A natural way to handle such a compound problem is in stages, as we saw in the archery problem of Section 12.10. If a “success” is achieved in the first stage, go on to the second stage; if not, don’t go on. More specifically in this example:

+
    +
  • Step 1. Use a bridge card deck, and five coins with heads = “girl”.
  • +
  • Step 2. Deal a 13-card bridge hand and count the spades. If there are fewer than 5 spades, record “no” and end the experimental trial. Otherwise, continue to step 3.
  • +
  • Step 3. Throw five coins, and count “heads.” If four heads, record “yes,” otherwise record “no.”
  • +
  • Step 4. Repeat steps 2 and 3 a thousand times.
  • +
  • Step 5. Compute the proportion of “yes” in step 3. This estimates the probability sought.
  • +
+

The R solution to this compound problem is neither long nor difficult. We tackle it almost as if the two parts of the problem were to be dealt with separately. We first determine, in a random bridge hand, whether 5 spades or more are dealt, as was done in the problem of Section 13.3.2. Then, if 5 or more spades are found, we use sample to generate a random family of 5 children. This means that we need not generate families if 5 or more spades were not dealt to the bridge hand, because a “success” is only recorded if both conditions are met. After we record the number of girls in each sample of 5 children, we need only finish the loop (by }) and then use sum to count the number of samples that had 4 girls, storing the result in k. Since we only drew samples of children for those trials in which a bridge hand with 5 or more spades had already been dealt, k will have the number of trials out of 10000 in which both conditions were met.

+
+
N <- 10000
+trial_results <- numeric(N)
+
+# Deck with 13 spades and 39 other cards
+deck <- rep(c('spade', 'others'), c(13, 52 - 13))
+
+for (i in 1:N) {
+    # Shuffle deck and draw 13 cards
+    hand <- sample(deck, 13)  # replace=FALSE is default
+
+    n_spades <- sum(hand == 'spade')
+
+    if (n_spades >= 5) {
+        # Generate a family of 5 children, 'girl' or 'boy' with equal chance
+        children <- sample(c('girl', 'boy'), 5, replace=TRUE)
+        n_girls <- sum(children == 'girl')
+        trial_results[i] <- n_girls
+    }
+}
+
+k <- sum(trial_results == 4)
+
+kk <- k / N
+
+print(kk)
+
+
[1] 0.0262
+
+
+

Here is an alternative approach to the same problem, but getting the result at the end of the loop, by combining Boolean vectors (see Section 10.5).

+
+
N <- 10000
+trial_spades <- numeric(N)
+trial_girls <- numeric(N)
+
+# Deck with 13 spades and 39 other cards
+deck <- rep(c('spade', 'other'), c(13, 39))
+
+for (i in 1:N) {
+    # Shuffle deck and draw 13 cards
+    hand <- sample(deck, 13)  # replace=FALSE is default
+    # Count and store the number of spades.
+    n_spades <- sum(hand == 'spade')
+    trial_spades[i] <- n_spades
+
+    # Generate a family of 5 children, 'girl' or 'boy' with equal chance
+    children <- sample(c('girl', 'boy'), 5, replace=TRUE)
+    # Count and store the number of girls.
+    n_girls <- sum(children == 'girl')
+    trial_girls[i] <- n_girls
+}
+
+k <- sum((trial_spades >= 5) & (trial_girls == 4))
+
+kk <- k / N
+
+# Show the result
+message(kk)
+
+
0.0271
+
+
+

End of five_spades_four_girls notebook

+
+
+
+
+ +
+
+Speed and readability +
+
+
+

The last version is a fraction more expensive, because it generates a family of children on every trial whether or not the hand had five or more spades, but it has the advantage that the condition we are testing for is summarized on one line. However, this would not be a good approach to take if the two experiments were not completely unrelated — if the second stage depended on the outcome of the first, we would need the staged version.

+
+
+
+
+
+

13.4 Summary

+

This completes the discussion of problems in probability — that is, problems where we assume that the structure is known. Whereas Chapter 12 dealt with samples drawn from universes considered not finite, this chapter deals with problems drawn from finite universes, where you therefore sample without replacement.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-18-1.png b/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-18-1.png new file mode 100644 index 00000000..61b6b5fd Binary files /dev/null and b/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-18-1.png differ diff --git a/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-48-1.png b/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-48-1.png new file mode 100644 index 00000000..d7268504 Binary files /dev/null and b/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-48-1.png differ diff --git a/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-5-1.png b/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-5-1.png new file mode 100644 index 00000000..ca2a4f6e Binary files /dev/null and b/r-book/probability_theory_4_finite_files/figure-html/unnamed-chunk-5-1.png differ diff --git a/r-book/references.html b/r-book/references.html new file mode 100644 index 00000000..2e08b6f9 --- /dev/null +++ b/r-book/references.html @@ -0,0 +1,1034 @@ + + + + + + + + + +Resampling statistics - References + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

References

+
+ + + +
+ + + + +
+ + +
+ +
+
+Ani Adhikari, John DeNero, and David Wagner. 2021. Computational and +Inferential Thinking: The Foundations of Data Science. https://inferentialthinking.com. https://inferentialthinking.com. +
+
+Arbuthnot, John. 1710. “An Argument for Divine Providence, Taken +from the Constant Regularity Observ’d in the Births of Both Sexes. By +Dr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of +the College of Physitians and the Royal Society.” +Philosophical Transactions of the Royal Society of London 27 +(328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011. +
+
+Barnett, Vic. 1982. Comparative Statistical Inference. 2nd ed. +Wiley Series in Probability and Mathematical Statistics. Chichester: +John Wiley & Sons. https://archive.org/details/comparativestati0000barn. +
+
+Box, George E. P., and George C. Tiao. 1992. Bayesian Inference in +Statistical Analysis. New York: Wiley & Sons, Inc. +https://www.google.co.uk/books/edition/Bayesian_Inference_in_Statistical_Analys/T8Askeyk1k4C. +
+
+Brooks, Charles Ernest Pelham. 1928. “Periodicities in the Nile +Floods.” Memoirs of the Royal Meteorological Society 2 +(12): 9--26. https://www.rmets.org/sites/default/files/papers/brooksmem2-12.pdf. +
+
+Bulmer, M. G. 1979. Principles of Statistics. New York, NY: +Dover Publications, inc. https://archive.org/details/principlesofstat0000bulm. +
+
+Burnett, Ed. 1988. The Complete Direct Mail List Handbook: +Everything You Need to Know about Lists and How to Use Them for Greater +Profit. Englewood Cliffs, New Jersey: Prentice Hall. https://archive.org/details/completedirectma00burn. +
+
+Cascells, Ward, Arno Schoenberger, and Thomas B. Grayboys. 1978. +“Interpretation by Physicians of Clinical Laboratory +Results.” New England Journal of Medicine 299: 999–1001. +https://www.nejm.org/doi/full/10.1056/NEJM197811022991808. +
+
+Catling, HW, and RE Jones. 1977. “A Reinvestigation of the +Provenance of the Inscribed Stirrup Jars Found at Thebes.” +Archaeometry 19 (2): 137–46. +
+
+Chung, James H, and Donald AS Fraser. 1958. “Randomization Tests +for a Multivariate Two-Sample Problem.” Journal of the +American Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf. +
+
+Cipolla, C. M. 1981. Fighting the Plague in Seventeenth-Century +Italy. Merle Curti Lectures. Madison, Wisconsin: University of +Wisconsin Press. https://books.google.co.uk/books?id=Ct\_OJYgnKCsC. +
+
+Cobb, George W. 2007. “The Introductory Statistics Course: A +Ptolemaic Curriculum?” Technology Innovations in Statistics +Education 1 (1). https://escholarship.org/uc/item/6hb3k0nz. +
+
+Coleman, William. 1987. “Experimental Physiology and Statistical +Inference: The Therapeutic Trial in Nineteenth Century +Germany.” In The Probabilistic Revolution: +Volume 2: Ideas in the Sciences, edited by Lorenz Krüger, Gerd +Gigerenzer, and Mary S. Morgan. An MIT Press Classic. MIT Press. https://books.google.co.uk/books?id=SLftmgEACAAJ. +
+
+Cook, Earl. 1976. “Limits to Exploitation of Nonrenewable +Resources.” Science 191 (4228): 677–82. https://www.jstor.org/stable/pdf/1741483.pdf. +
+
+Davenport, Thomas H, and DJ Patil. 2012. “Data Scientist: The +Sexiest Job of the 21st Century.” Harvard Business +Review 90 (10): 70–76. https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century. +
+
+Deshpande, Jayant V, AP Gore, and A Shanubhogue. 1995. Statistical +Analysis of Nonnormal Data. Taylor & Francis. https://www.google.co.uk/books/edition/Statistical_Analysis_of_Nonnormal_Data/sS0on2XqwwoC. +
+
+Dixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to +Statistical Analysis.” +
+
+Donoho, David. 2017. “50 Years of Data Science.” +Journal of Computational and Graphical Statistics 26 (4): +745–66. http://courses.csail.mit.edu/18.337/2015/docs/50YearsDataScience.pdf. +
+
+Dunleavy, Kieron, Stefania Pittaluga, John Janik, Nicole Grant, Margaret +Shovlin, Richard Little, Robert Yarchoan, Seth Steinberg, Elaine S. +Jaffe, and Wyndham H. Wilson. 2006. Novel +Treatment of Burkitt Lymphoma with Dose-Adjusted EPOCH-Rituximab: +Preliminary Results Showing Excellent Outcome. +Blood 108 (11): 2736–36. https://doi.org/10.1182/blood.V108.11.2736.2736. +
+
+Dwass, Meyer. 1957. “Modified Randomization Tests for +Nonparametric Hypotheses.” The Annals of Mathematical +Statistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf. +
+
+Efron, Bradley. 1979. “Bootstrap Methods; Another Look at the +Jackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf. +
+
+Efron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to +the Bootstrap.” In Monographs on Statistics and Applied +Probability, edited by David R Cox, David V Hinkley, Nancy Reid, +Donald B Rubin, and Bernard W Silverman. Vol. 57. New York: +Chapman & Hall. +
+
+Feller, William. 1968. An Introduction to Probability Theory and Its +Applications: Volume i. 3rd ed. Vol. 1. New York: John Wiley & +Sons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ. +
+
+Feynman, Richard P., and Ralph Leighton. 1988. What Do You +Care What Other People Think? Further Adventures of a Curious +Character. New York, NY: W. W. Norton; Company, Inc. https://archive.org/details/whatdoyoucarewha0000feyn_x5w7. +
+
+Fisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. +Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684. +
+
+———. 1959. “Statistical Methods and Scientific Inference.” +https://archive.org/details/statisticalmetho0000fish. +
+
+———. 1960. The Design of Experiments. 7th ed. Edinburgh: +Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5. +
+
+Fussler, Herman Howe, and Julian Lincoln Simon. 1961. Patterns in +the Use of Books in Large Research Libraries. Chicago: University +of Chicago Library. +
+
+Gardner, Martin. 1985. Mathematical Magic Show. Penguin Books +Ltd, Harmondsworth. +
+
+———. 2001. The Colossal Book of Mathematics. W.W. Norton & +Company Inc., New York. https://archive.org/details/B-001-001-265. +
+
+Gilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot +Hand in Basketball: On the Misperception of Random Sequences.” +Cognitive Psychology 17 (3): 295–314. https://www.joelvelasco.net/teaching/122/Gilo.Vallone.Tversky.pdf. +
+
+Gnedenko, Boris Vladimirovich, and Aleksandr Yakovlevich Khinchin. 1962. An Elementary Introduction to the Theory of Probability. New York, NY, USA: Dover Publications, Inc. https://archive.org/details/gnedenko-khinchin-an-elementary-introduction-to-the-theory-of-probability. +
+
+Goldberg, Samuel. 1986. Probability: An Introduction. Courier +Corporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC. +
+
+Graunt, John. 1759. “Natural and Political Observations Mentioned +in a Following Index and Made Upon the Bills of Mortality.” In +Collection of Yearly Bills of Mortality, from 1657 to 1758 +Inclusive, edited by Thomas Birch. London: A. Miller. https://archive.org/details/collectionyearl00hebegoog. +
+
+Hald, Anders. 1990. A History of Probability and Statistics and +Their Applications Before 1750. New York: John Wiley & Sons. https://archive.org/details/historyofprobabi0000hald. +
+
+Hansen, Morris H, William N Hurwitz, and William G Madow. 1953. +“Sample Survey Methods and Theory. Vol. I. Methods and +Applications.” https://archive.org/details/SampleSurveyMethodsAndTheoryVol1. +
+
+Hodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic +Concepts of Probability and Statistics. 2nd ed. San Francisco, +California: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9. +
+
+Hollander, Myles, and Douglas A Wolfe. 1999. Nonparametric +Statistical Methods. 2nd ed. Wiley Series in Probability and +Statistics: Applied Probability and Statistics. New York: John Wiley +& Sons, Inc. https://archive.org/details/nonparametricsta0000ed2holl. +
+
+Hyndman, Rob J, and Yanan Fan. 1996. “Sample Quantiles in +Statistical Packages.” The American Statistician 50 (4): +361–65. https://www.jstor.org/stable/pdf/2684934.pdf. +
+
+Kahn, Harold A, and Christopher T Sempos. 1989. Statistical Methods +in Epidemiology. Vol. 12. Monographs in Epidemiology and +Biostatistics. New York: Oxford University Press. https://www.google.co.uk/books/edition/Statistical_Methods_in_Epidemiology/YERYAgAAQBAJ. +
+
+Kinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948. +“Sexual Behavior in the Human Male.” W. B. Saunders +Company. https://books.google.co.uk/books?id=pfMKrY3VvigC. +
+
+Kornberg, Arthur. 1991. For the Love of Enzymes: The Odyssey of a +Biochemist. Cambridge, Massachusetts: Harvard University Press. https://archive.org/details/forloveofenzymes00arth. +
+
+Kotz, Samuel, and Norman Lloyd Johnson. 1992. Breakthroughs in +Statistics. New York: Springer-Verlag. +
+
+Lee, Peter M. 2012. Bayesian Statistics: An Introduction. 4th +ed. Wiley Online Library. https://www.york.ac.uk/depts/maths/histstat/pml1/bayes/book.htm. +
+
+Lorie, James Hirsch, and Harry V Roberts. 1951. Basic Methods of +Marketing Research. McGraw-Hill. +
+
+Lyon, Herbert L, and Julian Lincoln Simon. 1968. “Price Elasticity +of the Demand for Cigarettes in the United States.” American +Journal of Agricultural Economics 50 (4): 888–95. +
+
+Martineau, Adrian R, David A Jolliffe, Richard L Hooper, Lauren +Greenberg, John F Aloia, Peter Bergman, Gal Dubnov-Raz, et al. 2017. +“Vitamin D Supplementation to Prevent Acute +Respiratory Tract Infections: Systematic Review and Meta-Analysis of +Individual Participant Data.” Bmj 356. +
+
+McCabe, George P, and Linda Doyle McCabe. 1989. Instructor’s Guide +with Solutions for Introduction to the Practice of Statistics. New +York: W. H. Freeman. +
+
+Mosteller, Frederick. 1987. Fifty Challenging Problems in +Probability with Solutions. Courier Corporation. +
+
+Mosteller, Frederick, and Robert E. K. Rourke. 1973. Sturdy +Statistics: Nonparametrics and Order Statistics. Addison-Wesley +Publishing Company. +
+
+Mosteller, Frederick, Robert E. K. Rourke, and George Brinton Thomas Jr. +1961. Probability with Statistical Applications. 2nd ed. https://archive.org/details/probabilitywiths0000most. +
+
+Noreen, Eric W. 1989. Computer-Intensive Methods for Testing +Hypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore. +
+
+Peirce, Charles Sanders. 1923. Chance, Love, and Logic: +Philosophical Essays. New York: Harcourt Brace & Company, Inc. +https://www.gutenberg.org/files/65274/65274-h/65274-h.htm. +
+
+Piketty, Thomas. 2018. “Brahmin Left Vs Merchant Right: Rising +Inequality & the Changing Structure of Political Conflict.” +2018. https://www.prsinstitute.org/downloads/related/economics/RisingInequalityandtheChangingStructureofPoliticalConflict1.pdf. +
+
+Pitman, Edwin JG. 1937. “Significance Tests Which May Be Applied +to Samples from Any Populations.” Supplement to the Journal +of the Royal Statistical Society 4 (1): 119–30. https://www.jstor.org/stable/pdf/2984124.pdf. +
+
+Raiffa, Howard. 1968. “Decision Analysis: Introductory Lectures on +Choices Under Uncertainty.” https://archive.org/details/decisionanalysis0000raif. +
+
+Ruark, Arthur Edward, and Harold Clayton Urey. 1930. Atoms, Molecules and Quanta. New York, NY: McGraw-Hill Book Company, Inc. https://archive.org/details/atomsmoleculesqu00ruar. +
+
+Russell, Bertrand. 1945. A History of Western +Philosophy. New York: Simon; Schuster. +
+
+Savage, Leonard J. 1972. The Foundations of Statistics. New +York: Dover Publications, Inc. +
+
+Savant, Marilyn vos. 1990. “Ask Marilyn.” 1990. https://web.archive.org/web/20160318182523/http://marilynvossavant.com/game-show-problem. +
+
+Schlaifer, Robert. 1961. Introduction to Statistics for Business +Decisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl. +
+
+Selvin, Steve. 1975. “Letters to the Editor.” The +American Statistician 29 (1): 67. http://www.jstor.org/stable/2683689. +
+
+Semmelweis, Ignác Fülöp. 1983. The Etiology, Concept, and +Prophylaxis of Childbed Fever. Translated by K. Codell Carter. +Madison, Wisconsin: University of Wisconsin Press. https://archive.org/details/etiologyconcepta0000unse. +
+
+Shurtleff, Dewey. 1970. “Some Characteristics Related to the +Incidence of Cardiovascular Disease and Death: Framingham Study, 16-Year +Follow-up.” Section 26. Edited by William B. Kannel and Tavia +Gordon. The Framingham Study: An Epidemiological Investigation of +Cardiovascular Disease. Washington, D.C.: U.S. Government Printing +Office. https://upload.wikimedia.org/wikipedia/commons/6/6d/The_Framingham_study_-_an_epidemiological_investigation_of_cardiovascular_disease_sec.26_1970_%28IA_framinghamstudye00kann_25%29.pdf. +
+
+Simon, Julian Lincoln. 1967. “Doctors, Smoking, and Reference +Groups.” Public Opinion Quarterly 31 (4): 646–47. +
+
+———. 1969. Basic Research Methods in Social Science. 1st ed. +New York: Random House. +
+
+———. 1992. Resampling: The New Statistics. 1st ed. +Arlington, VA: Resampling Stats Inc. +
+
+———. 1998. “The Philosophy and Practice of Resampling +Statistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy. +
+
+Simon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976. +“Probability and Statistics: Experimental Results of a Radically +Different Teaching Method.” The American Mathematical +Monthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf. +
+
+Simon, Julian Lincoln, and Paul Burstein. 1985. Basic Research +Methods in Social Science. 3rd ed. New York: Random House. +
+
+Simon, Julian Lincoln, and Allen Holmes. 1969. “A New Way to Teach +Probability Statistics.” The Mathematics Teacher 62 (4): +283–88. +
+
+Simon, Julian Lincoln, Manouchehr Mokhtari, and Daniel H Simon. 1996. +“Are Mergers Beneficial or Detrimental? Evidence from Advertising +Agencies.” International Journal of the Economics of +Business 3 (1): 69–82. +
+
+Simon, Julian Lincoln, and David M Simon. 1996. “The Effects of +Regulations on State Liquor Prices.” Empirica 23: +303–16. +
+
+Støvring, H. 1999. “On Radicke and His Method for Testing Mean +Differences.” Journal of the Royal Statistical Society: +Series D (The Statistician) 48 (2): 189–201. https://www.jstor.org/stable/pdf/2681185.pdf. +
+
+Sudman, Seymour. 1976. Applied Sampling. New York: +Academic Press. https://archive.org/details/appliedsampling0000unse. +
+
+Tukey, John W. 1977. Exploratory Data Analysis. Reading, MA, +USA: Addison-Wesley. +
+
+Tversky, Amos, and Daniel Kahneman. 1982. “Evidential Impact of +Base Rates.” In Judgement Under Uncertainty: Heuristics and +Biases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky. +Cambridge: Cambridge University Press. https://www.google.co.uk/books/edition/Judgment_Under_Uncertainty/_0H8gwj4a1MC. +
+
+Vazsonyi, Andrew. 1999. “Which Door Has the Cadillac.” +Decision Line 30 (1): 17–19. https://web.archive.org/web/20140413131827/http://www.decisionsciences.org/DecisionLine/Vol30/30_1/vazs30_1.pdf. +
+
+Wallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New +Approach. New York: The Free Press. +
+
+Whitworth, William Allen. 1897. DCC Exercises in Choice +and Chance. Cambridge, UK: Deighton Bell; Co. https://archive.org/details/dccexerciseschoi00whit. +
+
+Winslow, Charles-Edward Amory. 1980. The Conquest of Epidemic +Disease: A Chapter in the History of Ideas. Madison, Wisconsin: +University of Wisconsin Press. https://archive.org/details/conquestofepidem0000wins_p3k0. +
+
+Wonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory +Statistics. 5th ed. New York: John Wiley & Sons. +
+
+Zhou, Qixing, Christopher E Gibson, and Robert H Foy. 2000. +“Long-Term Changes of Nitrogen and Phosphorus Loadings to a Large +Lake in North-West Ireland.” Water Research 34 (3): +922–26. https://doi.org/10.1016/S0043-1354(99)00199-2. +
+
+ + + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/reliability_average.html b/r-book/reliability_average.html new file mode 100644 index 00000000..a9852143 --- /dev/null +++ b/r-book/reliability_average.html @@ -0,0 +1,698 @@ + + + + + + + + + +Resampling statistics - 28  Some Last Words About the Reliability of Sample Averages + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

28  Some Last Words About the Reliability of Sample Averages

+
+ + + +
+ + + + +
+ + +
+ +
+

28.1 The problem of uncertainty about the dispersion

+

The inescapable difficulty of estimating the amount of dispersion in the population has greatly exercised statisticians over the years. Hence I must try to clarify the matter. Yet in practice this issue turns out not to be the likely source of much error even if one is somewhat wrong about the extent of dispersion, and therefore we should not let it be a stumbling block in the way of our producing estimates of the accuracy of samples in estimating population parameters.

+

Student’s t test was designed to get around the problem of the lack of knowledge of the population dispersion. But Wallis and Roberts wrote about the t test: “[F]ar-reaching as have been the consequences of the t distribution for technical statistics, in elementary applications it does not differ enough from the normal distribution…to justify giving beginners this added complexity.” (Wallis and Roberts 1956, p. x) “Although Student’s t and the F ratio are explained…the student…is advised not ordinarily to use them himself but to use the shortcut methods… These, being non-parametric and involving simpler computations, are more nearly foolproof in the hands of the beginner — and, ordinarily, only a little less powerful.” (p. xi)1

+

If we knew the population parameter — the proportion, in the case we will discuss — we could easily determine how inaccurate the sample proportion is likely to be. If, for example, we wanted to know about the likely inaccuracy of the proportion of a sample of 100 voters drawn from a population of a million that is 60% Democratic, we could simply simulate drawing (say) 200 samples of 100 voters from such a universe, and examine the average inaccuracy of the 200 sample proportions.

+
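For readers who would like to see this concretely, here is a minimal R sketch of that simulation, in the style of the code from earlier chapters. The numbers (200 samples, each of 100 voters, from a 60% Democratic universe) follow the text; the variable names are ours, for illustration only.

# Simulate 200 samples of 100 voters from a universe that is 60% Democratic.
# Treat the digits 0 through 9 as a 10-sided die: 0-5 (six faces of ten)
# stand for a Democratic voter, 6-9 for a Republican voter.
inaccuracies <- numeric(200)
for (i in 1:200) {
    votes <- sample(0:9, 100, replace=TRUE)
    sample_prop <- sum(votes < 6) / 100
    # How far is this sample proportion from the true 60%?
    inaccuracies[i] <- abs(sample_prop - 0.6)
}
# The average inaccuracy of the 200 sample proportions.
mean(inaccuracies)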

But in fact we do not know the characteristics of the actual universe. Rather, the nature of the actual universe is what we seek to learn about. Of course, if the amount of variation among samples were the same no matter what the Republican-Democrat proportions in the universe, the issue would still be simple, because we could then estimate the average inaccuracy of the sample proportion for any universe and then assume that it would hold for our universe. But it is reasonable to suppose that the amount of variation among samples will be different for different Democrat-Republican proportions in the universe.

+

Let us first see why the amount of variation among samples drawn from a given universe is different with different relative proportions of the events in the universe. Consider a universe of 999,999 Democrats and one Republican. Most samples of 100 taken from this universe will contain 100 Democrats. A few (and only a very, very few) samples will contain 99 Democrats and one Republican. So the biggest possible difference between the sample proportion and the population proportion (99.9999%) is less than one percent (for the very few samples of 99% Democrats). And most of the time the difference will only be the tiny difference between a sample of 100 Democrats (sample proportion = 100%), and the population proportion of 99.9999%.

+

Compare the above to the possible difference between a sample of 100 from a universe of half a million Republicans and half a million Democrats. At worst a sample could be off by as much as 50% (if it got zero Republicans or zero Democrats), and at best it is unlikely to get exactly 50 of each. So it will almost always be off by 1% or more.

+

It seems, therefore, intuitively reasonable (and in fact it is true) that the likely difference between a sample proportion and the population proportion is greatest with a 50%-50% universe, least with a 0%-100% universe, and somewhere in between for intermediate proportions, in the fashion of Figure 28.1.

+
+
+
+
+

+
Figure 28.1: Relationship Between the Population Proportion and the Likely Error In a Sample
+
+
+
+
+
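The same point can be checked by simulation. Below is a minimal R sketch (ours, not part of the original argument) that draws samples of 100 from the two universes just described — one with 999,999 Democrats and one Republican, and one that is half-and-half — and compares the typical error of the sample proportion in each case. It uses the prob argument of sample to stand in for the two universes.

sample_size <- 100
n_trials <- 1000

errors_extreme <- numeric(n_trials)
errors_even <- numeric(n_trials)
for (i in 1:n_trials) {
    # Universe of 999,999 Democrats (1) and one Republican (0).
    extreme <- sample(c(0, 1), sample_size, replace=TRUE,
                      prob=c(1 / 1000000, 999999 / 1000000))
    errors_extreme[i] <- abs(sum(extreme) / sample_size - 0.999999)
    # Universe of half Democrats (1) and half Republicans (0).
    even <- sample(c(0, 1), sample_size, replace=TRUE, prob=c(0.5, 0.5))
    errors_even[i] <- abs(sum(even) / sample_size - 0.5)
}
# Average error of the sample proportion for each universe.
mean(errors_extreme)
mean(errors_even)

The first average error is tiny; the second is much larger — the pattern of Figure 28.1.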

Perhaps it will help to clarify the issue of estimating dispersion if we consider this: If we compare estimates for a second sample based on a) the population, versus b) the first sample, the former will be more accurate than the latter, because of the sampling variation in the first sample that affects the latter estimate. But we cannot estimate that sampling variation without knowing more about the population.

+
+
+

28.2 Notes on the use of confidence intervals

+
  1. Confidence intervals are used more frequently in the physical sciences — indeed, the concept was developed for use in astronomy — than in bio-statistics and in the social sciences; in these latter fields, measurement is less often the main problem and the distinction between hypotheses often is difficult.
  2. Some statisticians suggest that one can do hypothesis tests with the confidence-interval concept. But that seems to me equivalent to suggesting that one can get from New York to Chicago by flying first to Los Angeles. Additionally, the logic of hypothesis tests is much clearer than the logic of confidence intervals, and it corresponds to our intuitions so much more easily.
  3. Discussions of confidence intervals sometimes assert that one cannot make a probability statement about where the population mean may be, yet can make statements about the probability that a particular set of samples may bound that mean.

If we agree that our interest is in upcoming events and, probably, decision-making, then we obviously are interested in putting betting odds on the location of the population mean (and subsequent samples). A statement about process will not help us with that; only a probability statement will.

+

Moving progressively farther away from the sample mean, we can find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can say that this point represents a “limit” or “boundary”; the interval between it and the sample mean may, I suppose, be called a confidence interval.

+
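To illustrate that idea with resampling, here is a minimal R sketch using made-up numbers (an observed 84 successes in a sample of 100; the candidate proportions and the variable names are ours). We try universes with progressively smaller proportions and watch for the point at which a result as large as the observed one becomes rare — say, occurring in fewer than 5% of trials.

observed <- 84
n <- 100
n_trials <- 10000

for (p in seq(0.84, 0.72, by=-0.02)) {
    results <- numeric(n_trials)
    for (i in 1:n_trials) {
        # 1 = success, 0 = failure, drawn from a universe with proportion p.
        trial <- sample(c(1, 0), n, replace=TRUE, prob=c(p, 1 - p))
        results[i] <- sum(trial)
    }
    # How often does this universe produce 84 or more successes?
    message("p = ", p, ": ", sum(results >= observed) / n_trials)
}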

This issue is discussed in more detail in Simon (1998, published online).

+
+
+

28.3 Overall summary and conclusions about confidence intervals

+

The first task in statistics is to measure how much — to make a quantitative estimate of the universe from which a given sample has been drawn, including especially the average and the dispersion; the theory of point estimation is discussed in Chapter 19.

+

The next task is to make inferences about the meaning of the estimates. A hypothesis test helps us decide whether two or more universes are the same or different from each other. In contrast, the confidence interval concept helps us decide on the reliability of an estimate.

+

Confidence intervals and hypothesis tests are not entirely disjoint. In fact, hypothesis testing of a single sample against a benchmark value is, under all interpretations, I think, operationally identical with constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different because the questions which they are designed to answer are different.

+

Now that we have worked through the entire procedure of producing a confidence interval, it should be glaringly obvious why statistics is such a difficult subject. The procedure is very long, and involves a very large number of logical steps. Such a long logical train is very hard to control intellectually, and very hard to follow with one’s intuition. The actual computation of the probabilities is the very least of it, almost a trivial exercise.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/resampling_method.html b/r-book/resampling_method.html new file mode 100644 index 00000000..c2922791 --- /dev/null +++ b/r-book/resampling_method.html @@ -0,0 +1,2225 @@ + + + + + + + + + +Resampling statistics - 2  The resampling method + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

2  The resampling method

+
+ + + +
+ + + + +
+ + +
+ +

This chapter is a brief introduction to the resampling method of solving problems in probability and statistics. We’re going to dive right in and solve a problem hands-on.

+

You will see that the resampling method is easy to understand and apply: all it requires is to understand the physical problem. You then simulate a statistical model of the physical problem with techniques that are intuitively obvious, and estimate the probability sought with repeated random sampling.

+

After finding a solution, we will look at the more conventional formulaic approach, and how that compares. Here’s the spoiler: it requires you to understand complex formulas, and to choose the correct one from many.

+

After reading this chapter, you will understand why we are excited about the resampling method, and why it will allow you to approach even hard problems without knowing sophisticated statistical techniques.

+
+

2.1 The resampling approach in action

+

Recall the problem from Section 1.2 in which the contractor owns 20 ambulances:

+
+

You are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action.

+
+

The resampling approach produces the estimate as follows.

+
+

2.1.1 Randomness from physical methods

+

We collect 10 coins, and mark one of them with a pen or pencil or tape as being the coin that represents “out-of-order;” the other nine coins stand for “in operation”. For any one ambulance, this set of 10 coins provides a “model” for the one-in-ten chance — a probability of .10 (10 percent) — of it being out of order on a given day. We put the coins into a little jar or bucket.

+

For ambulance #1, we draw a single coin from the bucket. This coin represents whether that ambulance is going to be broken tomorrow. After replacing the coin and shaking the bucket, we repeat the same procedure for ambulance #2, ambulance #3 and so forth. Having repeated the procedure 20 times, we now have a representation of all ambulances for a single day.

+

We can now repeat this whole process as many times as we like: each time, we come up with a representation for a different day, telling us how many ambulances will be out-of-service on that day.

+

After collecting evidence for, say, 50 experimental days we determine the proportion of the experimental days on which three or more ambulances are out of order. That proportion is an estimate of the probability that three or more ambulances will be out of order on a given day — the answer we seek. This procedure is an example of Monte Carlo simulation, which is the heart of the resampling method of statistical estimation.

+

A more direct way to answer this question would be to examine the firm’s actual records for the past 100 days or, better, 500 days (if that’s available) to determine how many days had three or more ambulances out of order. But the resampling procedure described above gives us an estimate even if we do not have such long-term information. This is realistic; it is frequently the case in the real world that we must make estimates on the basis of insufficient history about an event.

+

A quicker resampling method than the coins could be obtained with 20 ten-sided dice or spinners (like those found in the popular Dungeons & Dragons games). For each die, we identify one of its ten sides as “out-of-order”.

+

Funnily enough, standard 10-sided dice have the numbers 0 through 9 on their faces, rather than 1 through 10. Figure 2.1 shows a standard 10-sided die:

+
+
+

+
Figure 2.1: 10-sided die
+
+
+

We decide, arbitrarily, that the 9 side means “out-of-order”. We could even put a little bit of paint on the 9 side to remind us. The die represents an ambulance. If we roll the die, and get this face, this indicates that the ambulance was out of order. If we get any of the other faces — 0 through 8 — this ambulance was in working order. A single throw of all 20 dice will be our experimental trial that represents a single day; we just have to count whether three or more ambulances turn up “out of order”. Figure 2.2 shows the result of one trial — throwing 20 dice:

+
+
+

+
Figure 2.2: 20 10-sided dice
+
+
+

As you can see, the trial in Figure 2.2 gave us a single 9, so there was only one ambulance out of order.

+

In a hundred quick throws of the 20 dice — which probably takes less than 5 minutes — we can get a fast and reasonably accurate answer to our question.

+
+
+
+

2.2 Randomness from your computer

+

Computers make it easy to generate random numbers for resampling.

+
+
+
+ +
+
+What do we mean by random? +
+
+
+

Random numbers are numbers where it is impossible to predict which number is coming next. If we ask the computer for a number between 0 and 9, we will get one of the numbers 0 through 9, but we cannot do any better than that in predicting which number it will give us. There is an equal (10%) chance we will get any of the numbers 0 through 9 — just as there is when we roll a fair 10-sided die. We will go into more detail about what exactly we mean by random and chance later in the book (Section 3.8).

+
+
+ +

We can use random numbers from computers to simulate our problem. For example, we can ask the computer to choose a random number between 0 and 9 to represent one ambulance. Let’s say the number 9 represents “out-of-order” and 0 through 8 “in operation”, then any one random number gives us a trial observation for a single ambulance. To get an experimental trial for a single day we look at 20 numbers and count how many of them are 9. We then look at, say, one hundred sets of 20 numbers and count the proportion of sets whose 20 numbers show three or more ambulances being “out-of-order”. Once again, that proportion estimates the probability that three or more ambulances will be out-of-order on any given day.

+

Soon we will do all these steps with some R code, but for now, consider Table 2.1. In each row, we placed 20 numbers, each one representing an ambulance. We added 25 such rows, each representing a simulation of one day.

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2.1: 25 simulations of 20 ambulances
Day     A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 A11 A12 A13 A14 A15 A16 A17 A18 A19 A20
Day 1    5  4  4  5  9  8  2  9  1  5   8   2   1   8   2   6   6   5   0   5
Day 2    2  7  4  4  6  3  9  5  2  5   8   1   2   5   4   9   0   5   8   4
Day 3    5  9  1  2  8  7  5  3  8  9   2   6   9   0   7   2   5   2   2   2
Day 4    2  4  7  6  0  4  5  1  3  7   6   3   2   9   5   8   0   6   0   4
Day 5    7  4  8  9  1  5  1  2  3  6   4   8   5   1   7   5   0   9   8   7
Day 6    7  3  9  1  7  7  9  9  6  8   4   7   7   2   0   2   4   6   9   2
Day 7    3  9  5  3  7  1  3  0  8  0   0   3   3   0   0   3   8   6   4   6
Day 8    0  4  6  7  9  7  1  9  8  1   8   7   0   4   4   7   0   5   6   1
Day 9    0  9  0  7  0  1  6  0  8  6   0   3   1   9   8   3   1   2   7   8
Day 10   8  6  1  0  8  3  4  5  8  8   4   9   1   0   8   6   9   2   0   7
Day 11   7  0  0  7  9  2  3  0  0  0   5   5   4   0   1   7   8   2   0   8
Day 12   3  2  2  4  6  3  9  6  8  8   7   6   6   4   3   8   7   0   4   3
Day 13   4  2  6  9  0  0  8  5  3  1   5   1   8   7   6   8   3   6   3   5
Day 14   3  1  2  4  3  1  6  2  9  5   2   4   0   6   1   9   0   7   9   4
Day 15   2  0  1  5  8  5  8  1  3  2   2   7   8   2   2   1   2   9   2   5
Day 16   9  9  6  0  6  3  3  2  6  8   3   9   0   5   7   8   8   3   8   6
Day 17   8  3  0  0  1  5  3  7  0  9   6   4   1   2   5   0   1   8   7   1
Day 18   7  1  2  6  4  3  0  0  7  5   6   2   9   2   8   0   3   1   9   1
Day 19   5  6  5  9  8  4  3  0  6  7   4   9   4   2   0   6   1   0   4   1
Day 20   0  5  5  9  9  4  3  4  1  6   9   2   4   3   1   8   6   8   0   2
Day 21   4  1  0  1  5  1  6  4  8  5   2   1   5   8   6   2   0   5   2   6
Day 22   8  5  2  0  3  5  0  9  0  4   2   8   1   1   5   7   1   4   7   5
Day 23   1  0  8  5  4  7  5  2  8  7   2   6   4   4   3   5   6   5   5   7
Day 24   9  5  7  9  6  3  4  7  7  2   5   2   0   0   9   1   9   5   2   8
Day 25   6  0  9  4  8  3  4  8  0  8   8   7   1   0   7   3   4   7   5   1
+
+ + +
+
+

To know how many ambulances were “out of order” on any given day, we count the number of nines in that row. We place the counts in the final column called “#9” (for “number of nines”):

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 2.2: 25 simulations of 20 ambulances, with counts
Day     A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 A11 A12 A13 A14 A15 A16 A17 A18 A19 A20  #9
Day 1    5  4  4  5  9  8  2  9  1  5   8   2   1   8   2   6   6   5   0   5    2
Day 2    2  7  4  4  6  3  9  5  2  5   8   1   2   5   4   9   0   5   8   4    2
Day 3    5  9  1  2  8  7  5  3  8  9   2   6   9   0   7   2   5   2   2   2    3
Day 4    2  4  7  6  0  4  5  1  3  7   6   3   2   9   5   8   0   6   0   4    1
Day 5    7  4  8  9  1  5  1  2  3  6   4   8   5   1   7   5   0   9   8   7    2
Day 6    7  3  9  1  7  7  9  9  6  8   4   7   7   2   0   2   4   6   9   2    4
Day 7    3  9  5  3  7  1  3  0  8  0   0   3   3   0   0   3   8   6   4   6    1
Day 8    0  4  6  7  9  7  1  9  8  1   8   7   0   4   4   7   0   5   6   1    2
Day 9    0  9  0  7  0  1  6  0  8  6   0   3   1   9   8   3   1   2   7   8    2
Day 10   8  6  1  0  8  3  4  5  8  8   4   9   1   0   8   6   9   2   0   7    2
Day 11   7  0  0  7  9  2  3  0  0  0   5   5   4   0   1   7   8   2   0   8    1
Day 12   3  2  2  4  6  3  9  6  8  8   7   6   6   4   3   8   7   0   4   3    1
Day 13   4  2  6  9  0  0  8  5  3  1   5   1   8   7   6   8   3   6   3   5    1
Day 14   3  1  2  4  3  1  6  2  9  5   2   4   0   6   1   9   0   7   9   4    3
Day 15   2  0  1  5  8  5  8  1  3  2   2   7   8   2   2   1   2   9   2   5    1
Day 16   9  9  6  0  6  3  3  2  6  8   3   9   0   5   7   8   8   3   8   6    3
Day 17   8  3  0  0  1  5  3  7  0  9   6   4   1   2   5   0   1   8   7   1    1
Day 18   7  1  2  6  4  3  0  0  7  5   6   2   9   2   8   0   3   1   9   1    2
Day 19   5  6  5  9  8  4  3  0  6  7   4   9   4   2   0   6   1   0   4   1    2
Day 20   0  5  5  9  9  4  3  4  1  6   9   2   4   3   1   8   6   8   0   2    3
Day 21   4  1  0  1  5  1  6  4  8  5   2   1   5   8   6   2   0   5   2   6    0
Day 22   8  5  2  0  3  5  0  9  0  4   2   8   1   1   5   7   1   4   7   5    1
Day 23   1  0  8  5  4  7  5  2  8  7   2   6   4   4   3   5   6   5   5   7    0
Day 24   9  5  7  9  6  3  4  7  7  2   5   2   0   0   9   1   9   5   2   8    4
Day 25   6  0  9  4  8  3  4  8  0  8   8   7   1   0   7   3   4   7   5   1    1
+
+ + +
+
+

Each value in the last column of Table 2.2 is the count of 9s in that row and, therefore, the result from our simulation of one day.

+

We can estimate how often three or more ambulances would break down by looking for values of three or greater in the last column. We find there are 6 rows with three or more in the last column. Finally we divide this number of rows by the number of trials (25) to get an estimate of the proportion of days with three or more breakdowns. The result is 0.24.

+
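You can check this arithmetic with a couple of lines of R, typing in the counts from the final column of Table 2.2 by hand (the full simulation, where R also generates the random numbers, follows in the next section):

# The counts of nines (out-of-order ambulances) for the 25 simulated days.
counts <- c(2, 2, 3, 1, 2, 4, 1, 2, 2, 2, 1, 1, 1, 3, 1, 3, 1, 2, 2, 3,
            0, 1, 0, 4, 1)
# How many days had three or more breakdowns?
sum(counts >= 3)
# Divide by the number of trials (days) to get the proportion.
sum(counts >= 3) / 25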
+
+

2.3 Solving the problem using R

+

Here we rush ahead to show you how to do this simulation in R.

+

We go through the R code for the simulation, but we don’t expect you to understand all of it right now. The rest of this book goes into more detail on reading and writing R code, and how you can use R to build your own simulations. Here we just want to show you what this code looks like, to give you an idea of where we are headed.

+

While you can run the code below on your own computer, for now we only need you to read it and follow along; the text explains what each line of code does.

+
+
+
+ +
+
+Coming back to the example +
+
+
+

If you are interested, you can come back to this example later, and run it for yourself. To do this, we recommend you read Chapter 4 that explains how to execute notebooks online or on your own computer.

+
+
+
+

Start of ambulances notebook

+ + +

The first thing to say about the code you will see below is there are some lines that do not do anything; these are the lines beginning with a # character (read # as “hash”). Lines beginning with # are called comments. When R sees a # at the start of a line, it ignores everything else on that line, and skips to the next. Here’s an example of a comment:

+
+
# R will completely ignore this text.
+
+

Because R ignores lines beginning with #, the text after the # is just for us, the humans reading the code. The person writing the code will often use comments to explain what the code is doing.

+

Our next task is to use R to simulate a single day of ambulances. We will again represent each ambulance by a random number from 0 through 9. Twenty of these numbers represent a simulation of all 20 ambulances available to the contractor. We call a simulation of all ambulances for a specific day one trial.

+

Recall that we want twenty 10-sided dice — one per ambulance. Our dice should be 10-sided, because each ambulance has a 1-in-10 chance of being out of order.

+

The program to simulate one trial of the ambulances problem therefore begins with these commands:

+
+
# Ask R to generate 20 numbers from 0 through 9.
+
+# These are the numbers we will ask R to select from.
+numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
+
+# Get 20 values from the *numbers* sequence.
+# Store the 20 numbers with the name "a"
+# We will explain the replace=TRUE later.
+a <- sample(numbers, 20, replace=TRUE)
+
+# The result is a sequence of 20 numbers.
+a
+
+
 [1] 6 4 5 3 5 8 4 4 7 1 6 4 4 1 5 3 1 2 8 5
+
+
+

The commands above ask the computer to store the results of the random drawing in a location in the computer’s memory to which we give a name such as “a” or “ambulances” or “aardvark” — the name is up to us.

+

Next, we need to count the number of defective ambulances:

+
+
# Count the number of nines in the random numbers.
+# The "a == 9" part identifies all the numbers equal to 9.
+# The "sum" part counts how many numbers "a == 9" found.
+b <- sum(a == 9)
+# Show the result
+b
+
+
[1] 0
+
+
+
+
+
+ +
+
+Counting sequence elements +
+
+
+

We see that the code uses:

+
+
sum(a == 9)
+
+
[1] 0
+
+
+

What exactly happens here under the hood? First a == 9 creates a sequence of values that only contains

+

TRUE or FALSE

+

values, depending on whether each element is equal to 9 or not.

+

Then, we ask R to add up (sum). R counts TRUE as 1, and FALSE as 0; thus we can use sum to count the number of TRUE values.

+

This comes down to asking “how many elements in a are equal to 9”.

+
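To see this at work, here is a tiny made-up example (the short vector small is ours, just for illustration, and is not part of the ambulances problem):

# A short vector to experiment with.
small <- c(3, 9, 5, 9)
# The comparison gives TRUE where the element equals 9, FALSE elsewhere.
small == 9
# TRUE counts as 1 and FALSE as 0, so the sum is the number of 9s — here, 2.
sum(small == 9)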

Don’t worry, we will go over this again in the next chapter.

+
+
+

The sum command is a counting operation. It asks the computer to count the number of 9s among the twenty numbers that are in location a following the random draw carried out by the sample operation. The result of the sum operation will be somewhere between 0 and 20, the number of simulated ambulances that were out-of-order on a given simulated day. The result is then placed in another location in the computer’s memory that we label b.

+

Above you see that we have worked out how to tell the computer to do a single trial — one simulated day.

+
+

2.3.1 Repeating trials

+

We could run the code above for one trial over and over, and write down the result on a piece of paper. If we did this 100 times we would have 100 counts of the number of simulated ambulances that had broken down for each simulated day. To answer our question, we will then count the number of times the count was more than three, and divide by 100, to get an estimate of the proportion of days with more than three out-of-order ambulances.

+

One of the great things about the computer is that it is very good at repeating tasks many times, so we do not have to. Our next task is to ask the computer to repeat the single trial many times — say 1000 times — and count up the results for us.

+

Of course R is very good at repeating things, but the instructions to tell R to repeat things will take a little while to get used to. Soon, we will spend some time going over it in more detail. For now though, we show you what it looks like, and ask you to take our word for it.

+

The standard way to repeat steps in R is a for loop. For example, let us say we wanted to display “Hello” five times. Here is how we would do that with a for loop:

+
+
# Read the next line as "repeat the following steps five times".
+for (i in 1:5) {
+    # The stuff between the curly brackets is the code we
+    # repeat five times.
+    # Print "Hello" to the screen.
+    message("Hello")
+}
+
+
Hello
+Hello
+Hello
+Hello
+Hello
+
+
+

You can probably see where we are going here. We are going to put the code for one trial inside a for loop, to repeat that trial code many times.

+

Our next job is to store the results of each trial. If we are going to run 1000 trials, we need to store 1000 results.

+

To do this, we start with a sequence of 1000 zeros, that we will fill in later, like this:

+
+
# Ask R to make a sequence of 1000 zeros that we will use
+# to store the results of our 1000 trials.
+# Call this sequence "z"
+z <- numeric(1000)
+
+

For now, z contains 1000 zeros, but we will soon use a for loop to execute 1000 trials. For each trial we will calculate our result (the number of broken-down ambulances), and we will store the result in the z store. We end up with 1000 trial results stored in z.

+

With these parts, we are now ready to solve the ambulance problem, using R.

+
+
+

2.3.2 The solution

+

This is our big moment! Here we will combine the elements shown above to perform our ambulance simulation over, say, 1000 days. Just a quick reminder: we do not expect you to understand all the detail of the code below; we will cover that later. For now, see if you can follow along with the gist of it.

+

To solve resampling problems, we typically proceed as we have done above. We figure out the structure of a single trial and then place that trial in a for loop that executes it multiple times (once for each day, in our case).

+

Now, let us apply this procedure to our ambulance problem. We simulate 1000 days. You will see that we have just taken the parts above, and put them together. The only new part here, is the step at the end, where we store the result of the trial. Bear with us for that; we will come to it soon.

+
+
# Ask R to make a sequence of 1000 zeros that we will use
+# to store the results of our 1000 trials.
+# Call this sequence "z"
+z <- numeric(1000)
+
+# These are the numbers we will ask R to select from.
+numbers <- 0:9
+
+# Read the next line as "repeat the following steps 1000 times".
+for (i in 1:1000) {
+    # The stuff between the curly brackets is the code we
+    # repeat 1000 times.
+
+    # Get 20 values from the *numbers* sequence.
+    # Store the 20 numbers with the name "a"
+    a <- sample(numbers, 20, replace=TRUE)
+
+    # Count the number of nines in the random numbers.
+    # The "a == 9" part identifies all the numbers equal to 9.
+    # The "sum" part counts how many numbers "a == 9" found.
+    b <- sum(a == 9)
+
+    # Store the result from this trial in the sequence "z"
+    z[i] <- b
+
+    # Now go back and repeat the trial, until done.
+}
+
+

The z[i] <- b statement that follows the sum counting operation simply keeps track of the results of each trial, placing the number of defective ambulances for each trial inside the sequence called z. The sequence has 1000 positions: one for each trial.

+

When we have run the code above, we have stored 1000 trial results in the sequence z. These are 1000 counts of out-of-order ambulances, one for each of our simulated days. Our last task is to calculate the proportion of these days for which we had more than three broken-down ambulances.

+

Since our aim is to count the number of days in which more than 3 (4 or more) defective ambulances occur, we use another counting sum command at the end of the 1000 trials. This command counts how many times more than 3 defects occurred in the 1000 days recorded in our z sequence, and we place the result in another location, k. This gives us the total number of days where 4 or more defective ambulances are seen to occur. Then we divide the number in k by 1000, the number of trials. Thus we obtain an estimate of the chance, expressed as a probability between 0 and 1, that 4 or more ambulances will be defective on a given day. And we store that result in a location that we call kk, which R subsequently prints to the screen.

+
+
# How many trials resulted in more than 3 ambulances out of order?
+k <- sum(z > 3)
+
+# Convert to a proportion.
+kk <- k / 1000
+
+# Show the result.
+message(kk)
+
+
0.14
+
+
+

This is the estimate we wanted; the proportion of days where more than three ambulances were out of action.

+

We have crept up on the solution, so it might not be clear to you how few steps you needed to do this task. Here is the whole solution to the problem, without the comments:

+
+
z <- numeric(1000)
+numbers <- 0:9
+
+for (i in 1:1000) {
+    a <- sample(numbers, 20, replace=TRUE)
+    b <- sum(a == 9)
+    z[i] <- b
+}
+
+k <- sum(z > 3)
+kk <- k / 1000
+message(kk)
+
+
0.141
+
+
+

End of ambulances notebook

+
+
+

Notice that the code above is exactly the same as the code we built up in steps. But notice too, that the answer we got from this code was slightly different from the answer we got first.

+

Why did we get a different answer from the same code?

+
+
+
+ +
+
+Randomness in estimates +
+
+
+

This is an essential point — our code uses random numbers to get an estimate of the quantity we want — in this case, the probability of three or more ambulances being out of order. Every run of our code will use a different set of random numbers. Therefore, every run of our code will give us a very slightly different number. As you will soon see, we can make our estimate more and more accurate, and less and less different between each run, by doing many trials in each run. Here we did 1000 trials, but we will usually do 10000 trials, to give us a good estimate, that does not vary much from run to run.

+
+
+
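For example, here is the same program again, changed only to use 10,000 trials instead of 1000 (this is just a variation on the code above, not a new technique). If you run this version a few times, the estimates should differ less from run to run than they did with 1000 trials.

z <- numeric(10000)
numbers <- 0:9

for (i in 1:10000) {
    a <- sample(numbers, 20, replace=TRUE)
    z[i] <- sum(a == 9)
}

# The estimated proportion of days with more than 3 ambulances out of order.
sum(z > 3) / 10000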

Don’t worry about the detail of how each of these commands works — we will cover those details gradually, over the next few chapters. But, we hope that you can see, in principle, how each of the operations that the computer carries out is analogous to the operations that you yourself executed when you solved this problem using the equivalent of a ten-sided die. This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with.

+

While writing programs like these takes a bit of getting used to, it is vastly simpler than the older, more conventional approaches to such problems routinely taught to students.

+
+
+

2.4 How resampling differs from the conventional approach

+

In the standard approach the student learns to choose and solve a formula. Doing the algebra and arithmetic is quick and easy. The difficulty is in choosing the correct formula. Unless you are a professional mathematician, it may take you quite a while to arrive at the correct formula — considerable hard thinking, and perhaps some digging in textbooks. More important than the labor, however, is that you may come up with the wrong formula, and hence obtain the wrong answer. And how would you know if you were wrong?

+

Most students who have had a standard course in probability and statistics are quick to tell you that it is not easy to find the correct formula, even immediately after finishing a course (or several courses) on the subject. After leaving school or university, it is harder still to choose the right formula. Even many people who have taught statistics at the university level (including this writer) must look at a book to get the correct formula for a problem as simple as the ambulances, and then we are often still not sure we have the right answer. This is the grave disadvantage of the standard approach.

+

In the past few decades, resampling and other Monte Carlo simulation methods have come to be used extensively in scientific research. But in contrast to the material in this book, simulation has mostly been used in situations so complex that mathematical methods have not yet been developed to handle them. Here are examples of such situations:

+ +
  1. For a flight to Mars, calculating the correct route involves a great many variables, too many to solve with formulas. Hence, the Monte Carlo simulation method is used.
  2. The Navy might want to know how long the average ship will have to wait for dock facilities. The time of completion varies from ship to ship, and the number of ships waiting in line for dock work varies over time. This problem can be handled quite easily with the experimental simulation method, but formal mathematical analysis would be difficult or impossible.
  3. What are the best tactics in baseball? Should one bunt? Should one put the best hitter up first, or later? By trying out various tactics with dice or random numbers, Earnshaw Cook (in his book Percentage Baseball) found that it is best never to bunt, and the highest-average hitter should be put up first, in contrast to usual practice. Finding this answer would have been much more difficult with the analytic method.
  4. Which search pattern will yield the best results for a ship searching for a school of fish? Trying out “models” of various search patterns with simulation can provide a fast answer.
  5. What strategy in the game of Monopoly will be most likely to win? The simulation method systematically plays many games (with a computer) testing various strategies to find the best one.

But those five examples are all complex problems. This book and its earlier editions break new ground by using this method for simple rather than complex problems, especially in statistics rather than pure probability, and in teaching beginning rather than advanced students to solve problems this way. (Here it is necessary to emphasize that the resampling method is used to solve the problems themselves rather than as a demonstration device to teach the notions found in the standard conventional approach. Simulation has been used in elementary courses in the past, but only to demonstrate the operation of the analytical mathematical ideas. That is very different than using the resampling approach to solve statistics problems themselves, as is done here.)

+

Once we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics. Then we can get down to the business of learning how to do that clear statistical thinking, and putting it to work for you. The study of probability is purely mathematics (though not necessarily formulas) and technique. But statistics has to do with meaning. For example, what is the meaning of data showing an association just discovered between a type of behavior and a disease? Of differences in the pay of men and women in your firm? Issues of causation, acceptability of control, and design of experiments cannot be reduced to technique. This is “philosophy” in the fullest sense. Probability and statistics calculations are just one input. Resampling simulation enables us to get past issues of mathematical technique and focus on the crucial statistical elements of statistical problems.

+

We hope you will find, as you read through the chapters, that the resampling way of thinking is a good way to think about the more traditional statistical methods that some of you may already know. Our approach will be to use resampling to understand the ideas, and then apply this understanding to reason about traditional methods. You may also find that the resampling methods are not only easier to understand — they are often more useful, because they are so general in their application.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/resampling_with_code.html b/r-book/resampling_with_code.html new file mode 100644 index 00000000..2072de27 --- /dev/null +++ b/r-book/resampling_with_code.html @@ -0,0 +1,1355 @@ + + + + + + + + + +Resampling statistics - 5  Resampling with code + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

5  Resampling with code

+
+ + + +
+ + + + +
+ + +
+ +

Chapter 2 used simulation and resampling from tables of random numbers, dice, and coins. Making random choices in this way can make it easier to understand the process, but of course, physical methods of making random outcomes can be slow and boring.

+

We saw that short computer programs can do a huge number of resampling trials in less than a second. The flexibility of a programming language makes it possible to simulate many different outcomes and tests.

+

Programs can build up tables of random numbers, and do basic tasks like counting the number of values in a row or taking proportions. With these simple tools, we can simulate many problems in probability and statistics.

+

In this chapter, we will model another problem using R, but this chapter will add three new things.

+
  • The problem we will work on is a little different from the ambulances problem from Chapter 2. It is a real problem about deciding whether a new cancer treatment is better than the alternatives, and it introduces the idea of making a model of the world, to ask questions about chances and probabilities.
  • We will slow down a little to emphasize the steps in solving this kind of problem. First we work out how to simulate a single trial. Then we work out how to run many simulated trials.
  • We sprinted through the code in Chapter 2, with the promise we would come back to the details. Here we go into more detail about some ideas from the code in the last chapter. These are:
    • Storing several values together in one place, with vectors.
    • Using functions (code recipes) to apply procedures.
    • Comparing numbers to other numbers.
    • Counting numbers that match a condition.

In the next chapter, we will talk more about using vectors to store results, and for loops to repeat a procedure many times.

+
+

5.1 Statistics and probability

+

We have already emphasized that statistics is a way of drawing conclusions about data from the real world, in the presence of random variation; probability is the way of reasoning about random variation. This chapter introduces our first statistical problem, where we use probability to draw conclusions about some important data — about a potential cure for a type of cancer. We will not make much of the distinction between probability and statistics here, but we will come back to it several times in later chapters.

+
+
+

5.2 A new treatment for Burkitt lymphoma

+

Burkitt lymphoma is an unusual cancer of the lymphatic system. The lymphatic system is a vein-like network throughout the body that is involved in the immune reaction to disease. In developed countries, with standard treatment, the cure rate for Burkitt lymphoma is about 90%.

+

In 2006, researchers at the US National Cancer Institute (NCI), tested a new treatment for Burkitt lymphoma (Dunleavy et al. 2006). They gave the new treatment to 17 patients, and found that all 17 patients were doing well after two years or more of follow up. By “doing well”, we mean that their lymphoma had not progressed; as a short-hand, we will say that these patients were “cured”, but of course, we do not know what happened to them after this follow up.

+

Here is where we put on our statistical hat and ask ourselves the following question — how surprised are we that the NCI researchers saw their result of 17 out of 17 patients cured?

+

At this stage you might and should ask, what could we possibly mean by “surprised”? That is a good and important question, and we will discuss that much more in the chapters to come. For now, please bear with us as we do a thought experiment.

+

Let us forget the 17 out of 17 result of the NCI study for a moment. Imagine that there is another hospital, called Saint Hypothetical General, just down the road from the NCI, that was also treating 17 patients with Burkitt lymphoma. Saint Hypothetical were not using the NCI treatment; they were using the standard treatment.

+

We already know that each patient given the standard treatment has a 90% chance of cure. Given that 90% cure rate, what is the chance that 17 out of 17 of the Hypothetical group will be cured?

+

You may notice that this question about the Hypothetical group is similar to the problem of the 20 ambulances in Chapter 2. In that problem, we were interested to know how likely it was that 3 or more of 20 ambulances would be out of action on any one day, given that each ambulance had a 10% chance of being out of action. Here we would like to know the chances that all 17 patients would be cured, given that each patient has a 90% chance of being cured.

+
+
+

5.3 A physical model of the hypothetical hospital

+

As in the ambulance example, we could make a physical model of chance in this world. For example, to simulate whether a given patient is cured or not by a 90% effective treatment, we could throw a ten sided die and record the result. We could say, arbitrarily, that a result of 0 means “not cured”, and all the numbers 1 through 9 mean “cured” (typical 10-sided dice have sides numbered 0 through 9).

+

We could roll 17 dice to simulate one “trial” in this random world. For each trial, we record the number of dice that show numbers 1 through 9 (and not 0). This will be a number between 0 and 17, and it is the number of patients “cured” in our simulated trial.

+

Figure 5.1 is the result of one such trial we did with a set of 17 10-sided dice we happened to have to hand:

+
+
+

+
Figure 5.1: One roll of 17 10-sided dice
+
+
+

The trial in Figure 5.1 shows four dice with the 0 face uppermost, and the rest with numbers from 1 through 9. Therefore, there were 13 out of 17 not-zero numbers, meaning that 13 out of 17 simulated “patients” were “cured” in this simulated trial.

+ +

We could repeat this simulated trial procedure 100 times, and we would then have 100 counts of the not-zero numbers. Each of the 100 counts would be the number of patients cured in that trial. We can ask how many of these 100 counts were equal to 17. This will give us an estimate of the probability we would see 17 out of 17 patients cured, given that any one patient has a 90% chance of cure. For example, say we saw 15 out of 100 counts were equal to 17. That would give us an estimate of 15 / 100 or 0.15 or 15%, for the probability we would see 17 out of 17 patients cured.

+

So, if Saint Hypothetical General did see 17 out of 17 patients cured with the standard treatment, they would be a little surprised, because they would only expect to see that happen 15% of the time. But they would not be very surprised — 15% of the time is uncommon, but not very uncommon.

+
+
+

5.4 A trial, a run, a count and a proportion

+

Here we stop to emphasize the steps in the process of a random simulation.

+
  1. We decide what we mean by one trial. Here one trial has the same meaning in medicine as in resampling — we mean the result of treating 17 patients. One simulated trial is then the simulation of one set of outcomes from 17 patients.
  2. Work out the outcome of interest from the trial. The outcome here is the number of patients cured.
  3. We work out a way to simulate one trial. Here we chose to throw 17 10-sided dice, and count the number of not-zero values. This is the outcome from one simulation trial.
  4. We repeat the simulated trial procedure many times, and collect the results from each trial. Say we repeat the trial procedure 100 times; we will call this a run of 100 trials.
  5. We count the number of trials with an outcome that matches the outcome we are interested in. In this case we are interested in the outcome 17 out of 17 cured, so we count the number of trials with a score of 17. Say 15 out of the run of 100 trials had an outcome of 17 cured. That is our count.
  6. Finally we divide the count by the number of trials to get the proportion. From the example above, we divide 15 by 100 to get 0.15 (15%). This is our estimate of the chance of seeing 17 out of 17 patients cured in any one trial. We can also call this an estimate of the probability that 17 out of 17 patients will be cured on any one trial.

Our next step is to work out the code for step 3: simulate one trial.

+
+
+

5.5 Simulate one trial with code

+

We can use the computer to do something very similar to rolling 17 10-sided dice, by asking the computer for 17 random whole numbers from 0 through 9.

+
+
+
+ +
+
+Whole numbers +
+
+
+

A whole number is a number that is not negative, and does not have a fractional part (does not have anything after a decimal point). 0 and 1 and 2 and 3 are whole numbers, but -1 and \(\frac{3}{5}\) and 11.3 are not. The whole numbers from 0 through 9 are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.

+
+
+

We have already discussed what we mean by random in Section 2.2.

+
+
+

5.6 From numbers to vectors

+

We need to prepare the sequence of numbers that we want R to select from.

+

We have already seen the idea that R has values that are individual numbers. Remember, a variable is a named value. Here we attach the name a to the value 1.

+
+
a <- 1
+# Show the value of "a"
+a
+
+
[1] 1
+
+
+

R also allows values that are sequences of numbers. R calls these sequences vectors.

+
+

The name vector sounds rather technical and mathematical, but the only important idea for us is that a vector stores a sequence of numbers.

+
+

Here we make a vector that contains the 10 numbers we will select from:

+
+
# Make a vector of numbers, store with the name "some_numbers".
+some_numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
+# Show the value of "some_numbers"
+some_numbers
+
+
 [1] 0 1 2 3 4 5 6 7 8 9
+
+
+

Notice that the value for some_numbers is a vector, and that this value contains 10 numbers.

+

Put another way, some_numbers is now the name we can use for this collection of 10 values.

+

Vectors are very useful for simulations and data analysis, and we will be using these for nearly every example in this book.

+
+
+

5.7 Functions

+

Functions are another tool that we will be using everywhere, and that you have seen already, although we have not introduced them until now.

+

You can think of functions as named production lines.

+

For example, consider the R function round

+

round is the name for a simple production line, that takes in a number, and (by default) sends back the number rounded to the nearest integer.

+
+
+
+ +
+
+What is an integer? +
+
+
+

An integer is a positive or negative whole number.

+

In other words, a number is an integer if the number is either a whole number (0, 1, 2 …), or a negative whole number (-1, -2, -3 …). All of -208, -2, 0, 10, 105 are integers, but \(\frac{3}{5}\), -10.3 and 0.2 are not.

+

We will use the term integer fairly often, because it is a convenient way to name all the positive and negative whole numbers.

+
+
+

Think of a function as a named production line. We sent the function (production line) raw material (components) to work on. The production line does some work on the components. A finished result comes off the other end.

+

Therefore, think of round as the name of a production line, that takes in a component (in this case, any number), does some work, and sends back the finished result (in this case, the number rounded to the nearest integer).

+

The components we send to a function are called arguments. The finished result the function sends back is the return value.

+
    +
  • Arguments : the value or values we send to a function.
  • +
  • Return value : the values the function sends back.
  • +
+

See Figure 5.2 for an illustration of round as a production line.

+
+
+
+
+

+
Figure 5.2: The round function as a production line
+
+
+
+
+

In the next few code chunks, you see examples where round takes in a not-integer number, as an argument, and sends back the nearest integer as the return value:

+
+
# Put in 3.2, round sends back 3.
+round(3.2)
+
+
[1] 3
+
+
+
+
# Put in -2.7, round sends back -3.
+round(-2.7)
+
+
[1] -3
+
+
+

Like many functions, round can take more than one argument (component). You can send round the number of digits you want to round to, after the number you want it to work on, like this (see Figure 5.3):

+
+
# Put in 3.1415, and the number of digits to round to (2).
+# round sends back 3.14
+round(3.1415, 2)
+
+
[1] 3.14
+
+
+
+
+
+
+

+
Figure 5.3: round with optional arguments specifying number of digits
+
+
+
+
+

Notice that the second argument — here 2 — is optional. We only have to send round one argument: the number we want it to round. But we can optionally send it a second argument — the number of decimal places we want it to round to. If we don’t specify the second argument, then round assumes we want to round to 0 decimal places, and therefore, to the nearest integer.
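For example (an extra illustration of the default, not from the original text), these two calls give the same answer:

# Rounding with the default of 0 decimal places; this gives 4.
+round(3.7)
+# Exactly the same, with the default spelled out; this also gives 4.
+round(3.7, digits=0)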

+
+
+

5.8 Functions and named arguments

+

In the example above, we sent round two arguments. round knows that we mean the first argument to be the number we want to round, and the second argument is the number of decimal places we want to round to. It knows which is which by the position of the arguments — the first argument is the number it should round, and second is the number of digits.

+

In fact, internally, the round function also gives these arguments names. It calls the number it should round — x — and the number of digits it should round to — digits. This is useful, because it is often clearer and simpler to identify the argument we are specifying with its name, instead of just relying on its position.

+

If we aren’t using the argument names, we call the round function as we did above:

+
+
# Put in 3.1415, and the number of digits to round to (2).
+# round sends back 3.14
+round(3.1415, 2)
+
+
[1] 3.14
+
+
+

In this call, we relied on the fact that we, the people writing the code, and you, the person reading the code, remember that the second argument (2) means the number of decimal places it should round to. But, we can also specify the argument using its name, like this (see Figure 5.4):

+
+
# Put in 3.1415, and the number of digits to round to (2).
+# Use the name of the number-of-decimals argument for clarity:
+round(3.1415, digits=2)
+
+
[1] 3.14
+
+
+
+
+
+
+

+
Figure 5.4: The round function with argument names
+
+
+
+
+
+
+
+
+

+
+Figure 5.5: The round function with argument names
+
+
+
+
+

Here R sees the first argument, as before, and assumes that it is the number we want to round. Then it sees the second, named argument — digits=2 — and knows, from the name, that we mean this to be the number of decimals to round to.

+

In fact, we could even specify both arguments by name, like this:

+
+
# Put in 3.1415, and the number of digits to round to (2).
+# Name both arguments.
+round(x=3.1415, digits=2)
+
+
[1] 3.14
+
+
+

We don’t usually name both arguments for round, as we have above, because it is so obvious that the first argument is the thing we want to round, and so naming the argument does not make it any more clear what the code is doing. But — as so often in programming — whether to use the names, or let R work out which argument is which by position, is a judgment call. The judgment you are making is about the way to write the code to be most clear for your reader, where your most important reader may be you, coming back to the code in a week or a year.

+
+
+
+ +
+
+How do you know what names to use for the function arguments? +
+
+
+

You can find the names of the function arguments in the help for the function, either online, or in the notebook interface. For example, to get the help for round, including the argument names, you could make a new chunk, and type ?round, then execute the cell by running the chunk. This will show the help for the function in the notebook interface.
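For example, either of these does the same thing (help() is the standard R function behind the ? shortcut):

# Open the help page for the "round" function.
+?round
+# The help function is another way to ask for the same page.
+help(round)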

+
+
+
+
+

5.9 Ranges

+

Now let us return to the variable some_numbers that we created above:

+
+
# Make a vector of numbers, store with the name "some_numbers".
+some_numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
+# Show the value of "some_numbers"
+some_numbers
+
+
 [1] 0 1 2 3 4 5 6 7 8 9
+
+
+

In fact, we often need to do this: generate a sequence or range of integers, such as 0 through 9.

+
+
+
+ +
+
+Pick a number from 1 through 5 +
+
+
+

Ranges can be confusing in normal speech because it is not always clear whether they include their beginning and end. For example, if someone says “pick a number between 1 and 5”, do they mean all the numbers, including the first and last (any of 1 or 2 or 3 or 4 or 5)? Or do they mean only the numbers that are between 1 and 5 (so 2 or 3 or 4)? Or do they mean all the numbers up to, but not including 5 (so 1 or 2 or 3 or 4)?

+

To avoid this confusion, we will nearly always use “from” and “through” in ranges, meaning that we do include both the start and the end number. For example, if we say “pick a number from 1 through 5” we mean one of 1 or 2 or 3 or 4 or 5.

+
+
+

Creating ranges of numbers is so common that R has a special syntax to do that.

+
+

R allows you to write a colon (:) between two values, to mean that you want a vector (sequence) that is all the integers from the first value (before the colon) through the second value (after the colon):

+
+
+
# A vector containing all the integers from 0 through 9.
+some_integers = 0:9
+some_integers
+
+
 [1] 0 1 2 3 4 5 6 7 8 9
+
+
+

Here are some more examples of the colon syntax:

+
+
# All the integers from 10 through 14
+10:14
+
+
[1] 10 11 12 13 14
+
+
+
+
# All the integers from -1 through 5
+-1:5
+
+
[1] -1  0  1  2  3  4  5
+
+
+
+
+

5.10 Choosing values at random

+

We can use the sample function to select a single value at random from the sequence of numbers in some_integers.

+
+
+
+ +
+
+More on sample +
+
+
+

The sample function will be a fundamental tool for taking many kinds of samples, and we cover it in more detail in Chapter 6.

+
+
+
+
# Select 1 integer (the second argument) from the choices in some_integers
+# (the first argument).
+my_integer <- sample(some_integers, 1)
+# Show the value that results.
+my_integer
+
+
[1] 6
+
+
+

Like round (above), sample is a function.

+

As you remember, a function is a named production line. In our case, the production line has the name sample.

+

We sent the sample function a value to work on — an argument. In this case, the argument was the value of some_integers.

+
+

sample also needs the number of random values we should select from the first argument. We can send the number of values we want with the second argument.

+
+

Figure 5.6 is a diagram illustrating an example run of the sample function (production line).

+
+
+
+
+
+

+
Figure 5.6: Example run of the sample function
+
+
+
+
+
+

Here is the same code again, with new comments.

+
+
# Send the value of "some_integers" to sample.
+# some_integers is the *argument*.  Ask sample to return 1 of the values.
+# Put the *return* value from the function into "my_number".
+my_number <- sample(some_integers, 1)
+# Show the value that results.
+my_number
+
+
[1] 4
+
+
+
+
+

5.11 Sampling into vectors

+
+

In the code above, we asked R to select a single number at random — by sending 1 as the second argument to the function.

+

As you can imagine, we can tell sample to select any number of values at random, by changing the second argument to the function.

+

In our case, we would like R to select 17 numbers at random from the sequence of some_integers.

+

But — there is a complication here. By default, sample selects numbers from the first argument without replacement, meaning that, by default, sample cannot select the same number twice, and in our case, where we want 17 numbers, that is bad, because sample is going to run out of numbers. To get the result we want, we must also add an extra argument: replace=TRUE. replace=TRUE tells R to sample some_integers with replacement, where sample can select the same number more than once in the same sample. Sampling with and without replacement is a fundamental distinction in probability and statistics. Chapter 6 goes into much more detail about this, but for now, please take our word for it that using replace=TRUE for sample gives us the same effect as rolling several 10-sided dice.
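To see the difference, here is a small sketch of ours (not from the original text). Without replace=TRUE, sample can use each value only once, so the most it can give us from some_integers is all 10 values, in a shuffled order; asking for 17 values would give an error. With replace=TRUE it can repeat values, so it can happily give us 17.

# Without replacement (the default): each value can appear only once,
+# so asking for all 10 values just gives the 10 values in a random order.
+sample(some_integers, 10)
+# With replacement: values can repeat, so we can ask for 17 values
+# even though there are only 10 to choose from.
+sample(some_integers, 17, replace=TRUE)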

+
+
+
# Get 17 values from the *some_integers* vector.
+# Sample *with replacement*, so sample can select numbers more than once.
+# Store the 17 numbers with the name "a"
+a <- sample(some_integers, 17, replace=TRUE)
+# Show the result.
+a
+
+
 [1] 5 3 5 8 4 4 7 1 6 4 4 1 5 3 1 2 8
+
+
+

As you can see, the function sent back (returned) 17 numbers. Because it is sending back more than one number, the thing it sends back is a vector, where the vector has 17 elements.

+
+
+

5.12 Counting results

+

We now have the code to do the equivalent of throwing 17 10-sided dice. This is the basis for one simulated trial in the world of Saint Hypothetical General.

+

Our next job is to get the code to count the number of numbers that are not zero in the vector a. That will give us the number of patients who were cured in the simulated trial.

+

Another way of asking this question, is to ask how many elements in a are greater than zero.

+
+

5.12.1 Comparison

+

To ask whether a number is greater than zero, we use comparison. Here is a greater than zero comparison on a single number:

+
+
n <- 5
+# Is the value of n greater than 0?
+# Show the result of the comparison.
+n > 0
+
+
[1] TRUE
+
+
+

> is a comparison — it asks a question about the numbers either side of it. In this case > is asking the question “is the value of n (on the left hand side) greater than 0 (on the right hand side)?” The value of n is 5, so the question becomes, “is 5 greater than 0?” The answer is Yes, and R represents this Yes answer as the value TRUE.

+

In contrast, the comparison below boils down to “is 0 greater than 0?”, to which the answer is No, and R represents this as FALSE.

+
+
p <- 0
+# Is the value of p greater than 0?
+# Show the result of the comparison.
+p > 0
+
+
[1] FALSE
+
+
+

So far you have seen the results of comparison on a single number. Now say we do the same comparison on a vector. For example, say we ask the question “is the value of a greater than 0”? Remember, a is a vector containing 17 values. We are comparing 17 values to one value (0). What answer do you think R will give? You may want to think a little about this before you read on.

+

As a reminder, here is the current value for a:

+
+
# Show the current value for "a"
+a
+
+
 [1] 5 3 5 8 4 4 7 1 6 4 4 1 5 3 1 2 8
+
+
+

Now you have had some time to think, here is what happens:

+
+
# Is the value of "a" greater than 0?
+# Show the result of the comparison.
+a > 0
+
+
 [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
+[16] TRUE TRUE
+
+
+

There are 17 values in a, so the comparison to 0 means there are 17 comparisons, and 17 answers. R therefore returns a vector of 17 elements, containing these 17 answers. The first answer is the answer to the question “is the value of the first element of a greater than 0”, and the second is the answer to “is the value of the second element of a greater than 0”.
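To make the TRUE / FALSE pattern easier to see, here is the same kind of comparison on a short vector that does contain some zeros (an extra example, not in the original text):

# Compare a small vector, containing some zeros, to 0.
+c(0, 3, 0, 9) > 0
+# This gives FALSE TRUE FALSE TRUE, one answer for each element.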

+

Let us store the result of this comparison to work on:

+
+
# Is the value of "a" greater than 0?
+# Store as another vector "q".
+q <- a > 0
+# Show the value of q
+q
+
+
 [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
+[16] TRUE TRUE
+
+
+
+
+
+

5.13 Counting TRUE values with sum

+

Notice above that there is one TRUE element in q for every element in a that was greater than 0. It only remains to count the number of TRUE values in q, to get the count of patients in our simulated trial who were cured.

+

We can use the R function sum to count the number of TRUE elements in a vector. As you can imagine, sum adds up all the elements in a vector, to give a single number. This will work as we want for the q vector, because R counts FALSE as equal to 0 and TRUE as equal to 1:

+
+
# Question: is FALSE equal to 0?
+# Answer - Yes! (TRUE)
+FALSE == 0
+
+
[1] TRUE
+
+
+
+
# Question: is TRUE equal to 1?
+# Answer - Yes! (TRUE)
+TRUE == 1
+
+
[1] TRUE
+
+
+

Therefore, the function sum, when applied to a vector of TRUE and FALSE values, will count the number of TRUE values in the vector.

+

To see this in action we can make a new vector of TRUE and FALSE values, and try using sum on the new vector.

+
+
# A vector containing three TRUE values and two FALSE values.
+trues_and_falses <- c(TRUE, FALSE, TRUE, TRUE, FALSE)
+# Show the new vector.
+trues_and_falses
+
+
[1]  TRUE FALSE  TRUE  TRUE FALSE
+
+
+

The sum operation adds all the elements in the vector. Because TRUE counts as 1, and FALSE counts as 0, adding all the elements in trues_and_falses is the same as adding up the values 1 + 0 + 1 + 1 + 0, to give 3.
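We can check that directly (an extra check, not in the original text):

# Add up the TRUE and FALSE values; TRUE counts as 1, FALSE as 0.
+sum(trues_and_falses)
+# This gives 3, the number of TRUE values.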

+

We can apply the same operation on q to count the number of TRUE values.

+
+
# Count the number of TRUE values in "q"
+# This is the same as the number of values in "a" that are greater than 0.
+b <- sum(q)
+# Show the result
+b
+
+
[1] 17
+
+
+
+
+

5.14 The procedure for one simulated trial

+

We now have the whole procedure for one simulated trial. We can put the whole procedure in one chunk:

+
+
# Procedure for one simulated trial
+
+# Get 17 values from the *some_integers* vector.
+# Store the 17 numbers with the name "a"
+a <- sample(some_integers, 17, replace=TRUE)
+# Is the value of "a" greater than 0?
+q <- a > 0
+# Count the number of TRUE values in "q"
+b <- sum(q)
+# Show the result of this simulated trial.
+b
+
+
[1] 15
+
+
+
+
+

5.15 Repeating the trial

+

Now we know how to do one simulated trial, we could just keep running the chunk above, and writing down the result each time. Once we had run the chunk 100 times, we would have 100 counts. Then we could look at the 100 counts to see how many were equal to 17 (all 17 simulated patients cured on that trial). At least that would be much faster than rolling 17 dice 100 times, but we would also like the computer to automate the process of repeating the trial, and keeping track of the counts.

+

Please forgive us as we race ahead again, as we did in the last chapter. As in the last chapter, we will use a results vector called z to store the count for each trial. As in the last chapter, we will use a for loop to repeat the trial procedure many times. As in the last chapter, we will not explain the counts vector or the for loop in any detail, because we are going to cover those in the next chapter.

+

Let us now imagine that we want to do 100 simulated trials at Saint Hypothetical General. This will give us 100 counts. We will want to store the count for each trial.

+

To do this, we make a vector called z to hold the 100 counts. We have called the vector z, but we could have called it anything we liked, such as counts or results or cecilia.

+
+
# A vector to hold the 100 count values.
+# Later, we will fill this in with real count values from simulated trials.
+z <- numeric(100)
+
+

Next we use a for loop to repeat the single trial procedure.

+

Notice that the single trial procedure, inside this for loop, is the same as the single trial procedure above — the only two differences are:

+
    +
  • The trial procedure is inside the loop, and
  • +
  • We are storing the count for each trial as we go.
  • +
+

We will go into more detail on how this works in the next chapter.

+
+
# Procedure for 100 simulated trials.
+
+# A vector to store the counts for each trial.
+z <- numeric(100)
+
+# Repeat the trial procedure 100 times.
+for (i in 1:100) {
+    # Get 17 values from the *some_integers* vector.
+    # Store the 17 numbers with the name "a"
+    a <- sample(some_integers, 17, replace=TRUE)
+    # Is the value of "a" greater than 0?
+    q <- a > 0
+    # Count the number of TRUE values in "q".
+    b <- sum(q)
+    # Store the result at the next position in the "z" vector.
+    z[i] = b
+    # Now go back and do the next trial until finished.
+}
+# Show the result of all 100 trials.
+z
+
+
  [1] 14 15 12 17 13 17 16 16 14 16 16 15 17 14 17 13 16 15 16 15 13 14 17 17 15
+ [26] 14 13 15 13 16 17 15 15 15 15 15 13 16 15 13 17 15 16 17 15 17 16 17 17 16
+ [51] 12 17 16 12 16 15 15 13 16 16 16 13 16 14 15 15 15 15 14 15 14 11 15 13 14
+ [76] 15 15 14 13 15 15 14 17 16 14 17 16 17 15 16 16 16 14 13 15 16 17 17 15 13
+
+
+

Finally, we need to count how many of the trial results we stored in z gave a “cured” count of 17.

+

We can ask the question whether a single number is equal to 17 using the double equals comparison: ==.

+
+
s <- 17
+# Is the value of s equal to 17?
+# Show the result of the comparison.
+s == 17
+
+
[1] TRUE
+
+
+
+
+
+ +
+
+ +
+
+
+
+

5.16 Single and double equals

+

Notice that the double equals == means something entirely different in R from the single equals =. The single equals, like the arrow <- in the code above, is for assignment: R reads s = 17 or s <- 17 to mean “Set the variable s to have the value 17”. In technical terms these are called assignment operators, because they assign the value 17 to the variable s.

+

The code s == 17 has a completely different meaning.

+
+
+

5.17 Double equals

+

The double equals == above is a comparison in R.

+
+

It means “give TRUE if the value in s is equal to 17, and FALSE otherwise”. The == is a comparison operator — it is for comparing two values — here the value in s and the value 17. This comparison, like all comparisons, returns an answer that is either TRUE or FALSE. In our case s has the value 17, so the comparison becomes 17 == 17, meaning “is 17 equal to 17?”, to which the answer is “Yes”, and R sends back TRUE.
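For example (a small extra illustration), comparing s to a different number gives FALSE:

# s holds 17, so this asks "is 17 equal to 18?", and the answer is FALSE.
+s == 18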

+
+
+

We can ask this question of all 100 counts by asking the question: is the vector z equal to 17, like this:

+
+
# Is the value of z equal to 17?
+were_cured <- z == 17
+# Show the result of the comparison.
+were_cured
+
+
  [1] FALSE FALSE FALSE  TRUE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
+ [13]  TRUE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE
+ [25] FALSE FALSE FALSE FALSE FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE
+ [37] FALSE FALSE FALSE FALSE  TRUE FALSE FALSE  TRUE FALSE  TRUE FALSE  TRUE
+ [49]  TRUE FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
+ [61] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
+ [73] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE FALSE
+ [85] FALSE  TRUE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
+ [97]  TRUE  TRUE FALSE FALSE
+
+
+

Finally we use sum to count the number of TRUE values in the were_cured vector, to give the number of trials where all 17 patients were cured.

+
+
# Count the number of TRUE values in "were_cured"
+# This is the same as the number of values in "z" that are equal to 17.
+n_all_cured <- sum(were_cured)
+# Show the result of the comparison.
+n_all_cured
+
+
[1] 18
+
+
+

n_all_cured is the number of simulated trials for which all patients were cured. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.

+
+
# Proportion of trials where all patients were cured.
+p <- n_all_cured / 100
+# Show the result
+p
+
+
[1] 0.18
+
+
+

From this experiment, we see that there is roughly a one-in-six chance that all 17 patients are cured when using a 90% effective treatment.
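As a rough check on this estimate (our addition, not part of the original text): if each of the 17 patients independently has a 0.9 chance of cure, the chance that all 17 are cured is 0.9 multiplied by itself 17 times. We can calculate that directly:

# 0.9 to the power 17 is about 0.167, roughly one in six.
+0.9 ^ 17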

+
+
+

5.18 What have we learned from Saint Hypothetical?

+

We started with a question about the results of the NCI trial on the new drug. The question was — was the result of their trial — 17 out of 17 patients cured — surprising.

+

Then, for reasons we did not explain in detail, we changed tack, and asked the same question about a hypothetical set of 17 patients getting the standard treatment in Saint Hypothetical General.

+

That Hypothetical question turns out to be fairly easy to answer, because we can use simulation to estimate the chances that 17 out of 17 patients would be cured in such a hypothetical trial, on the assumption that each patient has a 90% chance of being cured with the standard treatment.

+

The answer for Saint Hypothetical General was — we would be somewhat surprised, but not astonished. We only get 17 out of 17 patients cured about one time in six.

+

Now let us return to the NCI trial. Should the trial authors be surprised by their results? If they assumed that their new treatment was exactly as effective as the standard treatment, the result of the trial is a bit unusual, just by chance. It is up to us to decide whether the result is unusual enough to make us think that the actual NCI treatment might in fact have been more effective than the standard treatment.

+

You will see this move again and again as we go through the book.

+
    +
  • We take something that really happened — in this case the 17 out of 17 patients cured.
  • +
  • Then we imagine a hypothetical world in which the results only depend on chance.
  • +
  • We do simulations in that hypothetical world to see how often we get a result like the one that happened in the real world.
  • +
  • If the real world result (17 out of 17) is an unusual, surprising result in the simulations from the hypothetical world, we take that as evidence that the real world result might not be due to chance alone.
  • +
+

We have just described the main idea in statistical inference. If that all seems strange and backwards to you, do not worry, we will go over that idea many times in this book. It is not a simple idea to grasp in one go. We hope you will find that, as you do more simulations, and think of more hypothetical worlds, the idea will start to make more sense. Later, we will start to think about asking other questions about probability and chance in the real world.

+
+
+

5.19 Conclusions

+

Can you see how each of the operations that the computer carries out is analogous to the operations that you yourself executed when you solved this problem using 10-sided dice? This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with. Either we will use a device such as coins or dice, or a random number table, as an analogy for the physical process we are interested in (patients being cured, in this case), or we will simulate the analogy on the computer using the R program above.

+

The program above may not seem simple at first glance, but we think you will find, over the course of this book, that these programs become much simpler to understand than the older conventional approach to such problems that has routinely been taught to students for decades.

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/resampling_with_code2.html b/r-book/resampling_with_code2.html new file mode 100644 index 00000000..ebcaa436 --- /dev/null +++ b/r-book/resampling_with_code2.html @@ -0,0 +1,1265 @@ + + + + + + + + + +Resampling statistics - 7  More resampling with code + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

7  More resampling with code

+
+ + + +
+ + + + +
+ + +
+ +

Chapter 5 introduced a problem in probability, that was also a problem in statistics. We asked how surprised we should be at the results of a trial of a new cancer treatment regime.

+

Here we study another urgent problem in the real world - racial bias and the death penalty.

+
+

7.1 A question of life and death

+

This example comes from the excellent Berkeley introduction to data science (Ani Adhikari and Wagner 2021).

+

Robert Swain was a young black man who was sentenced to death in the early 60s. Swain’s trial was held in Talladega County, Alabama. At the time, 26% of the eligible jurors in that county were black, but every member of Swain’s jury was white. Swain and his legal team appealed to the Alabama Supreme Court, and then to the US Supreme Court, arguing that there was racial bias in the jury selection. They noted that there had been no black jurors in Talladega county since 1950, even though they made up about a quarter of the eligible pool of jurors. The US Supreme Court rejected this argument, in a 6 to 3 opinion, writing that “The overall percentage disparity has been small and reflects no studied attempt to include or exclude a specified number of Negroes.”.

+

Swain’s team presented a variety of evidence on bias in jury selection, but here we will look at the obvious and apparently surprising fact that Swain’s jury was entirely white. The Supreme Court decided that the “disparity” between selection of white and black jurors “has been small” — but how would they, and how would we, make a rational decision about whether this disparity really was “small”?

+

You might reasonably be worried about the result of this decision for Robert Swain. In fact his death sentence was invalidated by a later, unrelated decision and he served a long prison sentence instead. In 1986, the Supreme Court overturned the precedent set by Swain’s case, in Batson v. Kentucky, 476 U.S. 79.

+
+
+

7.2 A small disparity and a hypothetical world

+

To answer the question that the Supreme Court asked, we return to the method we used in the last chapter.

+

Let us imagine a hypothetical world, in which each individual black or white person had an equal chance of being selected for the jury. Call this world Hypothetical County, Alabama.

+

Just as in 1960s Talladega County, 26% of eligible jurors in Hypothetical County are black. Hypothetical County jury selection has no bias against black people, so we expect around 26% of the jury to be black. 0.26 * 12 = 3.12, so we expect that, on average, just over 3 out of 12 jurors in a Hypothetical County jury will be black. But, if we select each juror at random from the population, that means that, sometimes, by chance, we will have fewer than 3 black jurors, and sometimes we will have more than 3 black jurors. And, by chance, sometimes we will have no black jurors. But, if the jurors really are selected at random, how often would we expect this to happen — that there are no black jurors? We would like to estimate the probability that we will get no black jurors. If that probability is small, then we have some evidence that the disparity in selection between black and white jurors was not “small”.

+
+

What is the probability of an all white jury being randomly selected out of a population having 26% black people?

+
+
+
+

7.3 Designing the experiment

+

Before we start, we need to figure out three things:

+
    +
  1. What do we mean by one trial?
  2. What is the outcome of interest from the trial?
  3. How do we simulate one trial?
+

We then take three steps to calculate the desired probability:

+
    +
  1. Repeat the simulated trial procedure N times.
  2. Count M, the number of trials with an outcome that matches the outcome we are interested in.
  3. Calculate the proportion, M/N. This is an estimate of the probability in question.
+

For this problem, our task is made a little easier by the fact that our trial (in the resampling sense) is a simulated trial (in the legal sense). One trial requires 12 simulated jurors, each labeled by race (white or black).

+

The outcome we are interested in is the number of black jurors.

+

Now comes the harder part. How do we simulate one trial?

+
+

7.3.1 One trial

+

One trial requires 12 jurors, and we are interested only in the race of each juror. In Hypothetical County, where selection by race is entirely random, each juror has a 26% chance of being black.

+

We need a way of simulating a 26% chance.

+

One way of doing this is by getting a random number from 0 through 99 (inclusive). There are 100 numbers in the range 0 through 99 (inclusive).

+

We will arbitrarily say that the juror is white if the random number is in the range from 0 through 73. 74 of the 100 numbers are in this range, so the juror has a 74/100 = 74% chance of getting the label “white”. We will say the juror is black if the random number is in the range 74 through 99. There are 26 such numbers, so the juror has a 26% chance of getting the label “black”.

+

Next we need a way of getting a random number in the range 0 through 99. This is an easy job for the computer, but if we had to do this with a physical device, we could get a single number by throwing two 10-sided dice, say a blue die and a green die. The face of the blue die will be the 10s digit, and the green face will be the ones digit. So, if the blue die comes up with 8 and the green die has 4, then the random number is 84.
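If we wanted to write that two-dice procedure as code, one possible sketch (ours, not from the original text) is:

# Simulate the blue (tens) die and the green (ones) die.
+blue <- sample(0:9, 1)
+green <- sample(0:9, 1)
+# Combine the two digits into a single random number from 0 through 99.
+blue * 10 + green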

+

We could then simulate 12 jurors by repeating this process 12 times, each time writing down “white” if the number is from 0 through 74, and “black” otherwise. The trial outcome is the number of times we wrote “black” for these 12 simulated jurors.

+
+
+

7.3.2 Using code to simulate a trial

+

We use the same logic to simulate a trial with the computer. A little code makes the job easier, because we can ask R to give us 12 random numbers from 0 through 99, and to count how many of these numbers are in the range from 75 through 99. Numbers in the range from 75 through 99 correspond to black jurors.

+
+
+

7.3.3 Random numbers from 0 through 99

+

We can now use R and sample from the last chapter to get 12 random numbers from 0 through 99.

+
+
# Get 12 random numbers from 0 through 99
+a <- sample(0:99, size=12, replace=TRUE)
+
+# Show the result
+a
+
+
 [1] 44 22 75 62 46 30 67 72 68  4 23 78
+
+
+
+

7.3.3.1 Counting the jurors

+

We use comparison and sum to count how many numbers are greater than 74, and therefore, in the range from 75 through 99:

+
+
# How many numbers are greater than 74?
+b <- sum(a > 74)
+# Show the result
+b
+
+
[1] 2
+
+
+
+
+

7.3.3.2 A single simulated trial

+

We assemble the pieces from the last few sections to make a chunk that simulates a single trial:

+
+
# Get 12 random numbers from 0 through 99
+a <- sample(0:99, size=12, replace=TRUE)
+# How many are greater than 74?
+b <- sum(a > 74)
+# Show the result
+b
+
+
[1] 2
+
+
+
+
+
+
+

7.4 Three simulation steps

+

Now we come back to the details of how we:

+
    +
  1. Repeat the simulated trial many times;
  2. record the results for each trial;
  3. calculate the required proportion as an estimate of the probability we seek.
+

Repeating the trial many times is the job of the for loop, and we will come to that soon.

+

In order to record the results, we will store each trial result in a vector.

+
+
+
+ +
+
+More on vectors +
+
+
+

Since we will be working with vectors a lot, it is worth knowing more about them.

+

A vector is a container that stores many elements of the same type. You have already seen, in Chapter 2, how we can create a vector from a sequence of numbers using the c() function.

+
+
# Make a vector of numbers, store with the name "some_numbers".
+some_numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
+# Show the value of "some_numbers"
+some_numbers
+
+
 [1] 0 1 2 3 4 5 6 7 8 9
+
+
+

Another way that we can create vectors is to use the numeric function to make a new vector where all the elements are 0.

+
+
# Make a new vector containing 5 zeros.
+z <- numeric(5)
+# Show the value of "z"
+z
+
+
[1] 0 0 0 0 0
+
+
+

Notice the argument 5 to the numeric function. This tells the function how many zeros we want in the vector that the function will return.

+
+

7.5 vector length

+

There are various useful things we can do with this vector container. One is to ask how many elements there are in the vector container. We can use the length function to calculate the number of elements in a vector:

+
+
# Show the number of elements in "z"
+length(z)
+
+
[1] 5
+
+
+
+
+

7.6 Indexing into vectors

+

Another thing we can do is set the value for a particular element in the vector. To do this, we use square brackets following the vector value, on the left hand side of the equals sign, like this:

+
+
# Set the value of the first element in the vector.
+z[1] = 99
+# Show the new contents of the vector.
+z
+
+
[1] 99  0  0  0  0
+
+
+

Read the first line of code as “the element at position 1 gets a value of 99”.

+

For practice, let us also set the value of the third element in the vector:

+
+
# Set the value of the third element in the vector.
+z[3] = 99
+# Show the new contents of the vector.
+z
+
+
[1] 99  0 99  0  0
+
+
+

Read the first code line above as “set the value at position 3 in the vector to have the value 99”.

+

We can also get the value of the element at a given position, using the same square-bracket notation:

+
+
# Get the value of the *first* element in the array.
+# Store the value with name "v"
+v = z[1]
+# Show the value we got
+v
+
+
[1] 99
+
+
+

Read the first code line here as “v gets the value at position 1 in the vector”.

+

Using square brackets to get and set element values is called indexing into the vector.

+
+
+
+
+

7.6.1 Repeating trials

+

As a preview, let us now imagine that we want to do 50 simulated trials of Robert Swain’s jury in Hypothetical County. We will want to store the count for each trial, to give 50 counts.

+

In order to do this, we make a vector to hold the 50 counts. Call this vector z.

+
+
# A vector to hold the 50 count values.
+z <- numeric(50)
+
+

We could run a single trial to get a single simulated count. Here we just repeat the code chunk you saw above. Notice that we can get a different result each time we run this code, because the numbers in a are random choices from the range 0 through 99, and different random numbers will give different counts.

+
+
# Get 12 random numbers from 0 through 99
+a <- sample(0:99, size=12, replace=TRUE)
+# How many are greater than 74?
+b <- sum(a > 74)
+# Show the result
+b
+
+
[1] 0
+
+
+

Now we have the result of a single trial, we can store it as the first number in the z vector:

+
+
# Store the single trial count as the first value in the "z" vector.
+z[1] <- b
+# Show all the values in the "z" vector.
+z
+
+
 [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+[39] 0 0 0 0 0 0 0 0 0 0 0 0
+
+
+

Of course we could just keep doing this: run the chunk corresponding to a trial, above, to get a new count, and then store it at the next position in the z vector. For example, we could store the counts for the first three trials with:

+
+
# First trial
+a <- sample(0:99, size=12, replace=TRUE)
+b <- sum(a > 74)
+# Store the result at the first position in z
+z[1] <- b
+
+# Second trial
+a <- sample(0:99, size=12, replace=TRUE)
+b <- sum(a > 74)
+# Store the result at the second position in z
+z[2] <- b
+
+# Third trial
+a <- sample(0:99, size=12, replace=TRUE)
+b <- sum(a > 74)
+# Store the result at the third position in z
+z[3] <- b
+
+# And so on ...
+
+

This would get terribly long and boring to type for 50 trials. Luckily computer code is very good at repeating the same procedure many times. For example, R can do this using a for loop. You have already seen a preview of the for loop in Chapter 2. Here we dive into for loops in more depth.

+
+
+

7.6.2 For-loops in R

+

A for-loop is a way of asking R to:

+
    +
  • Take a sequence of things, one by one, and
  • +
  • Do the same task on each one.
  • +
+

We often use this idea when we are trying to explain a repeating procedure. For example, imagine we wanted to explain what the supermarket checkout person does for the items in your shopping basket. You might say that they do this:

+
+

For each item of shopping in your basket, they take the item off the conveyor belt, scan it, and put it on the other side of the till.

+
+

You could also break this description up into bullet points with indentation, to say the same thing:

+
    +
  • For each item from your shopping basket, they: +
      +
    • Take the item off the conveyor belt.
    • +
    • Scan the item.
    • +
    • Put it on the other side of the till.
    • +
  • +
+

Notice the logic; the checkout person is repeating the same procedure for each of a series of items.

+

This is the logic of the for loop in R. The procedure that R repeats is called the body of the for loop. In the example of the checkout person above, the repeating procedure is:

+
    +
  • Take the item off the conveyor belt.
  • +
  • Scan the item.
  • +
  • Put it on the other side of the till.
  • +
+

Now imagine we wanted to use R to print out the year of birth for each of the authors for the third edition of this book:

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Author                  Year of birth
Julian Lincoln Simon    1932
Matthew Brett           1964
Stéfan van der Walt     1980
Ian Nimmo-Smith         1944
+

We want to see this output:

+
Author birth year is 1932
+Author birth year is 1964
+Author birth year is 1980
+Author birth year is 1944
+

Of course, we could just ask R to print out these exact lines, like this:

+
+
message('Author birth year is 1932')
+
+
Author birth year is 1932
+
+
message('Author birth year is 1964')
+
+
Author birth year is 1964
+
+
message('Author birth year is 1980')
+
+
Author birth year is 1980
+
+
message('Author birth year is 1944')
+
+
Author birth year is 1944
+
+
+

We might instead notice that we are repeating the same procedure for each of the four birth years, and decide to do the same thing using a for loop:

+
+
author_birth_years <- c(1932, 1964, 1980, 1944)
+
+# For each birth year
+for (birth_year in author_birth_years) {
+    # Repeat this procedure ...
+    message('Author birth year is ', birth_year)
+}
+
+
Author birth year is 1932
+
+
+
Author birth year is 1964
+
+
+
Author birth year is 1980
+
+
+
Author birth year is 1944
+
+
+

The for loop starts with a line where we tell it what items we want to repeat the procedure for:

+
+
for (birth_year in author_birth_years) {
+

This initial line of the for loop ends with an opening curly brace {. The opening curly brace tells R that what follows, up until the matching closing curly brace }, is the procedure R should follow for each item. The lines between the opening { and closing } curly braces are the body of the for loop.

+
+

The initial line of the for loop above tells R that it should take each item in author_birth_years, one by one — first 1932, then 1964, then 1980, then 1944. For each of these numbers it will:

+
    +
  • Put the number into the variable birth_year, then
  • +
  • Run the code between the curly braces.
  • +
+

Just as the person at the supermarket checkout takes each item in turn, for each iteration (repeat) of the for loop, birth_year gets a new value from the sequence in author_birth_years. birth_year is called the loop variable, because it is the variable that gets a new value each time we begin a new iteration of the for loop procedure. As for any variable in R, we can call our loop variable anything we like. We used birth_year here, but we could have used y or year or some other name.
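For example, this loop (an extra illustration) does exactly the same job, using y as the loop variable instead of birth_year:

# The same loop, with a different name for the loop variable.
+for (y in author_birth_years) {
+    message('Author birth year is ', y)
+}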

+
+

Notice that R insists we put parentheses (round brackets) around: the loop variable; in; and the sequence that will fill the loop variable — like this:

+
for (birth_year in author_birth_years) {
+

Do not forget these round brackets — R insists on them.

+
+

Now you know what the for loop is doing, you can see that the for loop above is equivalent to the following code:

+
+
birth_year <- 1932  # Set the loop variable to contain the first value.
+message('Author birth year is ', birth_year)  # Use the first value.
+
+
Author birth year is 1932
+
+
birth_year <- 1964  # Set the loop variable to contain the next value.
+message('Author birth year is ', birth_year)  # Use the second value.
+
+
Author birth year is 1964
+
+
birth_year <- 1980
+message('Author birth year is ', birth_year)
+
+
Author birth year is 1980
+
+
birth_year <- 1944
+message('Author birth year is ', birth_year)
+
+
Author birth year is 1944
+
+
+

Writing the steps in the for loop out like this is called unrolling the loop. It can be a useful exercise to do this when you come across a for loop, in order to work through the logic of the loop. For example, you may want to write out the unrolled equivalent of the first couple of iterations, to see what the loop variable will be, and what will happen in the body of the loop.

+

We often use for loops with ranges (see Section 5.9). Here we use a loop to print out the numbers 1 through 4:

+
+
for (n in 1:4) {
+    message('The loop variable n is ', n)
+}
+
+
The loop variable n is 1
+
+
+
The loop variable n is 2
+
+
+
The loop variable n is 3
+
+
+
The loop variable n is 4
+
+
+

Notice that the range ended at 4, and that means we repeat the loop body 4 times. We can also use the loop variable value from the range as an index, to get or set the first, second, etc values from a vector.

+

For example, maybe we would like to show the author position and the author year of birth.

+

Remember our author birth years:

+
+
author_birth_years
+
+
[1] 1932 1964 1980 1944
+
+
+

We can get (for example) the second author birth year with:

+
+
author_birth_years[2]
+
+
[1] 1964
+
+
+

Using the combination of looping over a range, and vector indexing, we can print out the author position and the author birth year:

+
+
for (n in 1:4) {
+    year <- author_birth_years[n]
+    message('Birth year of author position ', n, ' is ', year)
+}
+
+
Birth year of author position 1 is 1932
+
+
+
Birth year of author position 2 is 1964
+
+
+
Birth year of author position 3 is 1980
+
+
+
Birth year of author position 4 is 1944
+
+
+

Just for practice, let us unroll the first two iterations through this for loop, to remind ourselves what the code is doing:

+
+
# Unrolling the for loop.
+n <- 1
+year <- author_birth_years[n]  # Will be 1932
+message('Birth year of author position ', n, ' is ', year)
+
+
Birth year of author position 1 is 1932
+
+
n <- 2
+year <- author_birth_years[n]  # Will be 1964
+message('Birth year of author position ', n, ' is ', year)
+
+
Birth year of author position 2 is 1964
+
+
# And so on.
+
+
+
+

7.6.3 Putting it all together

+

Here is the code we worked out above, to implement a single trial:

+
+
# Get 12 random numbers from 0 through 99
+a <- sample(0:99, size=12, replace=TRUE)
+# How many are greater than 74?
+b <- sum(a > 74)
+# Show the result
+b
+
+
[1] 0
+
+
+

We found that we could use vectors to store the results of these trials, and that we could use for loops to repeat the same procedure many times.

+

Now we can put these parts together to do 50 simulated trials:

+
+
# Procedure for 50 simulated trials.
+
+# A vector to store the counts for each trial.
+z <- numeric(50)
+
+# Repeat the trial procedure 50 times.
+for (i in 1:50) {
+    # Get 12 random numbers from 0 through 99
+    a <- sample(0:99, size=12, replace=TRUE)
+    # How many are greater than 74?
+    b <- sum(a > 74)
+    # Store the result at the next position in the "z" vector.
+    z[i] = b
+    # Now go back and do the next trial until finished.
+}
+# Show the result of all 50 trials.
+z
+
+
 [1] 4 1 1 4 2 3 4 3 1 2 3 2 5 3 2 3 4 3 1 5 5 2 1 1 2 2 2 3 0 2 6 2 2 3 4 0 3 4
+[39] 2 5 3 2 3 3 3 4 2 2 4 4
+
+
+

Finally, we need to count how many of the trials in z ended up with all-white juries. These are the trials with a z (count) value of 0.

+

To do this, we can ask a vector which elements match a certain condition. E.g.:

+
+
x <- c(2, 1, 3, 0)
+y = x < 2
+# Show the result
+y
+
+
[1] FALSE  TRUE FALSE  TRUE
+
+
+

We now use that same technique to ask, of each of the 50 counts, whether the vector z is equal to 0, like this:

+
+
# Is the value of z equal to 0?
+all_white <- z == 0
+# Show the result of the comparison.
+all_white
+
+
 [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
+[13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
+[25] FALSE FALSE FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE
+[37] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
+[49] FALSE FALSE
+
+
+

We need to get the number of TRUE values in all_white, to find how many simulated trials gave all-white juries.

+
+
# Count the number of TRUE values in "all_white"
+# This is the same as the number of values in "z" that are equal to 0.
+n_all_white = sum(all_white)
+# Show the result of the comparison.
+n_all_white
+
+
[1] 2
+
+
+

n_all_white is the number of simulated trials for which all the jury members were white. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.

+
+
# Proportion of trials where all jury members were white.
+p <- n_all_white / 50
+# Show the result
+p
+
+
[1] 0.04
+
+
+

From this initial simulation, it seems there is around a 4% chance that a jury selected randomly from the population, which was 26% black, would have no black jurors.

+
+
+
+

7.7 Many many trials

+

Our experiment above is only 50 simulated trials. The higher the number of trials, the more confident we can be of our estimate for p — the proportion of trials where we get an all-white jury.

+

It is no extra trouble for us to tell the computer to do a very large number of trials. For example, we might want to run 10,000 trials instead of 50. All we have to do is to run the loop 10,000 times instead of 50 times. The computer has to do more work, but it is more than up to the job.

+

Here is exactly the same code we ran above, but collected into one chunk, and using 10,000 trials instead of 50. We have left out the comments, to make the code more compact.

+
+
# Full simulation procedure, with 10,000 trials.
+z <- numeric(10000)
+for (i in 1:10000) {
+    a <- sample(0:99, size=12, replace=TRUE)
+    b <- sum(a > 74)
+    z[i] = b
+}
+all_white <- z == 0
+n_all_white <- sum(all_white)
+p <- n_all_white / 10000
+p
+
+
[1] 0.0317
+
+
+

We now have a new, more accurate estimate of the proportion of Hypothetical County juries with all-white juries. The proportion is 0.032, and so 3.2%.

+

This proportion means that, for any one jury from Hypothetical County, there is a less than one in 20 chance that the jury would be all white.

+

As we will see in more detail later, we might consider using the results from this experiment in Hypothetical County, to reflect on the result we saw in the real Talladega County. We might conclude, for example, that there was likely some systematic difference between Hypothetical County and Talladega County. Maybe the difference was that there was, in fact, some bias in the jury selection in Talladega County, and that the Supreme Court was wrong to reject this. You will hear more of this line of reasoning later in the book.

+
+
+

7.8 Conclusion

+

In this chapter we studied a real life-and-death question, on racial bias and the death penalty. We continued our exploration of the ways we can use probability, and resampling, to draw conclusions about real events. Along the way, we went into more detail on vectors in R, and for loops; two basic tools in resampling.

+

In the next chapter, we will work through some more problems in probability, to show how we can use resampling, to answer questions about chance. We will add some more tools for writing code in R, to make your programs easier to write, read, and understand.

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/sampling_tools.html b/r-book/sampling_tools.html new file mode 100644 index 00000000..90c44f53 --- /dev/null +++ b/r-book/sampling_tools.html @@ -0,0 +1,1025 @@ + + + + + + + + + +Resampling statistics - 6  Tools for samples and sampling + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

6  Tools for samples and sampling

+
+ + + +
+ + + + +
+ + +
+ +
+

6.1 Introduction

+

Now you have some experience with R, probabilities and resampling, it is time to introduce some useful tools for our experiments and programs.

+
+

Start of sampling_tools notebook

+ + +
+

6.2 Samples and labels

+

Thus far we have used numbers such as 1 and 0 and 10 to represent the elements we are sampling from. For example, in Chapter 7, we were simulating the chance of a particular juror being black, given that 26% of the eligible jurors in the county were black. We used integers for that task, where we started with all the integers from 0 through 99, and asked R to select values at random from those integers. When R selected an integer from 0 through 25, we chose to label the resulting simulated juror as black — there are 26 integers in the range 0 through 25, so there is a 26% chance that any one integer will be in that range. If the integer was from 26 through 99, the simulated juror was white (there are 74 integers in the range 26 through 99).

+

Here is the process of simulating a single juror, adapted from Section 7.3.3:

+
+
# Get 1 random number from 0 through 99
+# replace=TRUE is redundant here (why?), but we leave it for consistency.
+a <- sample(0:99, 1, replace=TRUE)
+
+# Show the result
+a
+
+
[1] 44
+
+
+

After that, we have to unpack our labeling of 0 through 25 as being “black” and 26 through 99 as being “white”. We might do that like this:

+
+
this_juror_is_black <- a < 26
+this_juror_is_black
+
+
[1] FALSE
+
+
+

This all works as we want it to, but it’s just a little bit difficult to remember the coding (less than 26 means “black”, greater than 25 means “white”). We had to use that coding because we committed ourselves to using random numbers to simulate the outcomes.

+

However, R can also store bits of text, called strings. Values that are bits of text can be very useful because the text values can be memorable labels for the entities we are sampling from, in our simulations.

+
+
+

6.3 String values

+

So far, all the values you have seen in R vectors have been numbers. Now we get on to values that are bits of text. These are called strings.

+

Here is a single R string value:

+
+
s <- "Resampling"
+s
+
+
[1] "Resampling"
+
+
+
+

We can see what type of value v holds by using the class function.

+

For example, for a number value, you will usually find the class is numeric:

+
+
v <- 10
+class(v)
+
+
[1] "numeric"
+
+
+
+

What is the class of the new bit-of-text value s?

+
+
class(s)
+
+
[1] "character"
+
+
+

The R character value is a bit of text, and therefore consists of a sequence of characters.

+

As vectors are containers for other things, such as numbers, strings are containers for characters.

+
+

To get the length of a string, use the nchar function (Number of Characters):

+
+
# Number of characters in s
+nchar(s)
+
+
[1] 10
+
+
+
+
+

R has a substring function that allows you to select individual characters or sequences of characters from a string. The arguments to substring are: first — the string; second — the index of the first character you want to select; and third — the index of the last character you want to select. For example to select the second character in the string you would specify 2 as the starting index, and 2 as the ending index, like this:

+
+
# Get the second character of the string
+second_char <- substring(s, 2, 2)
+second_char
+
+
[1] "e"
+
+
+
+
+
+

6.4 Strings in vectors

+

As we can store numbers as elements in vectors, we can also store strings as vector elements.

+
+
vector_of_strings = c('Julian', 'Lincoln', 'Simon')
+vector_of_strings
+
+
[1] "Julian"  "Lincoln" "Simon"  
+
+
+

As for any vector, you can select elements with indexing. When you select an element with a given position (index), you get the string at that position:

+
+
# Julian Lincoln Simon's second name
+middle_name <- vector_of_strings[2]
+middle_name
+
+
[1] "Lincoln"
+
+
+

As for numbers, we can compare strings with, for example, the == operator, that asks whether the two strings are equal:

+
+
middle_name == 'Lincoln'
+
+
[1] TRUE
+
+
+
+
+

6.5 Repeating elements

+

Now let us go back to the problem of selecting black and white jurors.

+

We started with the strategy of using numbers 0 through 25 to mean “black” jurors, and 26 through 99 to mean “white” jurors. We selected values at random from 0 through 99, and then worked out whether the number meant a “black” juror (was less than 26) or a “white” juror (was greater than 25).

+

It would be good to use strings instead of numbers to identify the potential jurors. Then we would not have to remember our coding of 0 through 25 and 26 through 99.

+

If only there was a way to make a vector of 100 strings, where 26 of the strings were “black” and 74 were “white”. Then we could select randomly from that vector, and it would be immediately obvious that we had a “black” or “white” juror.

+

Luckily, of course, we can do that, by using the rep function to construct the vector.

+

Here is how that works:

+
+
# The values that we will repeat to fill up the larger vector.
+juror_types <- c('black', 'white')
+# The number of times we want to repeat "black" and "white".
+repeat_nos <- c(26, 74)
+# Repeat "black" 26 times and "white" 74 times.
+jury_pool <- rep(juror_types, repeat_nos)
+# Show the result
+jury_pool
+
+
  [1] "black" "black" "black" "black" "black" "black" "black" "black" "black"
+ [10] "black" "black" "black" "black" "black" "black" "black" "black" "black"
+ [19] "black" "black" "black" "black" "black" "black" "black" "black" "white"
+ [28] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [37] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [46] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [55] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [64] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [73] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [82] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [91] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+[100] "white"
+
+
+

We can sample from this vector of repeated strings. The result is easier to grasp, because we are using the string labels instead of numbers:

+
+
# Select one juror at random from the black / white pool.
+# replace=TRUE is redundant here, but we leave it for consistency.
+one_juror <- sample(jury_pool, 1, replace=TRUE)
+one_juror
+
+
[1] "black"
+
+
+

We can select our full jury of 12 jurors, and see the results in a more obvious form:

+
+
# Select 12 jurors at random from the black / white pool.
+one_jury <- sample(jury_pool, 12, replace=TRUE)
+one_jury
+
+
 [1] "white" "white" "white" "white" "white" "white" "white" "black" "black"
+[10] "white" "white" "black"
+
+
+
+
+
+ +
+
+Using the size argument to sample +
+
+
+

In the code above, we have specified the size of the sample we want (12) with the second argument to sample. As you saw in Section 5.8, we can also give names to the function arguments, in this case, to make it clearer what we mean by “12” in the code above. In fact, from now on, that is what we will do; we will specify the size of our sample by using the name of the corresponding argument to sample, which is size, like this:

+
+
# Select 12 jurors at random from the black / white pool.
+# Specify the sample size using the "size" named argument.
+one_jury <- sample(jury_pool, size=12, replace=TRUE)
+one_jury
+
+
 [1] "white" "white" "white" "white" "white" "black" "white" "white" "white"
+[10] "white" "white" "white"
+
+
+
+
+

We can use == on the vector to get TRUE values where the juror was “black” and FALSE values otherwise:

+
+
are_black <- one_jury == 'black'
+are_black
+
+
 [1] FALSE FALSE FALSE FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
+
+
+

Finally, we can sum to find the number of black jurors (Section 5.13):

+
+
# Number of black jurors in this simulated jury.
+n_black <- sum(are_black)
+n_black
+
+
[1] 1
+
+
+

Putting that all together, this is our new procedure to select one jury and count the number of black jurors:

+
+
one_jury <- sample(jury_pool, size=12, replace=TRUE)
+are_black <- one_jury == 'black'
+n_black <- sum(are_black)
+n_black
+
+
[1] 4
+
+
+

Or we can be even more compact by putting several statements together into one line:

+
+
# The same as above, but on one line.
+n_black <- sum(sample(jury_pool, size=12, replace=TRUE) == 'black')
+n_black
+
+
[1] 4
+
+
+
+
+

6.6 Resampling with and without replacement

+

Now let us return to the details of Robert Swain’s case, that you first saw in Chapter 7.

+

We looked at the composition of Robert Swain’s 12-person jury — but in fact, by law, that does not have to be representative of the eligible jurors. The 12-person jury is drawn from a jury panel of 100 people, and this should, in turn, be drawn from the population of all eligible jurors in the county, consisting, at the time, of “all male citizens in the community over 21 who are reputed to be honest, intelligent men and are esteemed for their integrity, good character and sound judgment.” So, unless there was some bias against black jurors, we might expect the 100-person jury panel to be a plausibly random sample of the eligible jurors, of whom 26% were black. See the Supreme Court case judgement for details.

+

In fact, in Robert Swain’s trial, there were 8 black members in the 100-person jury panel. We will leave it to you to adapt the simulation from Chapter 7 to ask the question — is 8% surprising as a random sample from a population with 26% black people?

+

But we have a different question: given that 8 out of 100 of the jury panel were black, is it surprising that none of the 12-person jury were black? As usual, we can answer that question with simulation.

+

Let’s think about what a single simulated jury selection would look like.

+

First we compile a representation of the actual jury panel, using the tools we have used above.

+
+
juror_types <- c('black', 'white')
+# in fact there were 8 black jurors and 92 white jurors.
+panel_nos <- c(8, 92)
+jury_panel <- rep(juror_types, panel_nos)
+# Show the result
+jury_panel
+
+
  [1] "black" "black" "black" "black" "black" "black" "black" "black" "white"
+ [10] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [19] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [28] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [37] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [46] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [55] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [64] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [73] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [82] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+ [91] "white" "white" "white" "white" "white" "white" "white" "white" "white"
+[100] "white"
+
+
+

Now consider taking a 12-person jury at random from this panel. We select the first juror at random, so that juror has an 8 out of 100 chance of being black. But when we select the second jury member, the situation has changed slightly. We can’t select the first juror again, so our panel is now 99 people. If our first juror was black, then the chances of selecting another black juror next are not 8 out of 100, but 7 out of 99 — a smaller chance. The problem is, as we shall see in more detail later, the chances of getting a black juror as the second, and third and fourth members of the jury depend on whether we selected a black juror as the first and second and third jury members. At its most extreme, imagine we had already selected eight jurors, and by some strange chance, all eight were black. Now our chances of selecting a black juror as the ninth juror are zero — there are no black jurors left to select from the panel.
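To make those changing chances concrete, here is a small sketch of ours (not code from the original text) that works out the chance of a black juror at each pick, assuming the extreme case where every juror picked so far happened to be black:

# Sketch: how the chance of a black juror changes without replacement,
# if (in the extreme case) picks 1 .. k-1 were all black.
picks <- 1:9
chances <- (8 - (picks - 1)) / (100 - (picks - 1))
# The first chance is 8 / 100; the ninth chance is zero, because no
# black jurors are left in the panel by then.
round(chances, 3)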

+

In this case we are selecting jurors from the panel without replacement, meaning that once we have selected a particular juror, we cannot select them again, and we do not put them back into the panel when we select our next juror.

+

This is the probability equivalent of the situation when you are dealing a hand of cards. Let’s say someone is dealing you, and you only, a hand of five cards. You get an ace as your first card. Your chances of getting an ace as your first card were just the number of aces in the deck divided by the number of cards — four in 52 – \(\frac{4}{52}\). But for your second card, the probability has changed, because there is one less ace remaining in the pack, and one less card, so your chances of getting an ace as your second card are now \(\frac{3}{51}\). This is sampling without replacement — in a normal game, you can’t get the same card twice. Of course, you could imagine getting a hand where you sampled with replacement. In that case, you’d get a card, you’d write down what it was, and you’d give the card back to the dealer, who would replace the card in the deck, shuffle again, and give you another card.

+

As you can see, the chances change if you are sampling with or without replacement, and the kind of sampling you do will dictate how you model your chances in your simulations.

+

Because this distinction is so common, and so important, the machinery you have already seen in sample has simple ways for you to select your sampling type. You have already seen sampling with replacement, and it looks like this:

+
+
# Take a sample of 12 jurors from the panel *with replacement*
+strange_jury <- sample(jury_panel, size=12, replace=TRUE)
+strange_jury
+
+
 [1] "white" "white" "white" "white" "black" "white" "white" "white" "white"
+[10] "white" "white" "white"
+
+
+

This is a strange jury, because the selection can pick the same member of the jury panel more than once. Perhaps that juror would have to fill two (or more!) seats, or run quickly between them. But of course, that is not how juries are selected. They are selected without replacement:

+
+

Thus far, we have always done sampling with replacement, and, in order to do that with sample, we pass the argument replace=TRUE. We do that because the default for sample is replace=FALSE, that is, by default, sample does sampling without replacement. If you want to do sampling without replacement, you can just omit the replace=TRUE argument to sample, or you can specify replace=FALSE explicitly, perhaps to remind yourself that this is sampling without replacement. Whether you omit the replace argument, or specify replace=FALSE, the behavior is the same.

+
+
+
# Take a sample of 12 jurors from the panel *without replacement*
+# replace=FALSE is the default for sample.
+ok_jury <- sample(jury_panel, size=12)
+ok_jury
+
+
 [1] "white" "white" "black" "white" "black" "white" "white" "white" "black"
+[10] "white" "white" "white"
+
+
+
+
+
+ +
+
+Comments at the end of lines +
+
+
+

You have already seen comment lines. These are lines beginning with #, to signal to R that the rest of the line is text for humans to read, and for R to ignore.

+
+
# This is a comment.  R ignores this line.
+
+

You can also put comments at the end of code lines, by finishing the code part of the line, and then putting a #, followed by more text. Again, R will ignore everything after the #; it is text for humans, not for R.

+
+
message('Hello')  # This is a comment at the end of the line.
+
+
Hello
+
+
+
+
+

To finish the procedure for simulating a single jury selection, we count the number of black jurors:

+
+
n_black <- sum(ok_jury == 'black')  # How many black jurors?
+n_black
+
+
[1] 3
+
+
+

Now that we have the procedure for one simulated trial, here is the procedure for 10,000 simulated trials.

+
+
counts <- numeric(10000)
+for (i in 1:10000) {
+    # Single trial procedure
+    jury <- sample(jury_panel, size=12)  # replace=FALSE is the default.
+    n_black <- sum(jury == 'black')  # How many black jurors?
+    # Store the result
+    counts[i] <- n_black
+}
+# Number of juries with 0 black jurors.
+zero_black <- sum(counts == 0)
+# Proportion
+p_zero_black <- zero_black / 10000
+message(p_zero_black)
+
+
0.3375
+
+
+

We have found that, when there are only 8% black jurors in the jury panel, having no black jurors in the final jury happens about 34% of the time, even in this case, where the jury is selected completely at random from the jury panel.

+

We should look for the main source of bias in the initial selection of the jury panel, not in the selection of the jury from the panel.

+ +

End of sampling_tools notebook

+
+
+
+
+
+ +
+
+With or without replacement for the original jury selection +
+
+
+

You may have noticed in Chapter 7 that we were sampling Robert Swain’s jury from the eligible pool of jurors, with replacement. You might reasonably ask whether we should have selected from the eligible jurors without replacement, given that the same juror cannot serve more than once in the same jury, and therefore, the same argument applies there as here.

+

The trick there was that we were selecting from a very large pool of many thousand eligible jurors, of whom 26% were black. Let’s say there were 10,000 eligible jurors, of whom 2,600 were black. When selecting the first juror, there is exactly a 2,600 in 10,000 chance of getting a black juror — 26%. If we do get a black juror first, then the chance that the second juror will be black has changed slightly, 2,599 in 9,999. But these changes are very small; even if we select eleven black jurors out of eleven, when we come to the twelfth juror, we still have a 2,589 out of 9,989 chance of getting another black juror, and that works out at a 25.92% chance — hardly changed from the original 26%. So yes, you’d be right, we really should have compiled our population of 2,600 black jurors and 7,400 white jurors, and then sampled without replacement from that population, but as the resulting sample probabilities will be very similar to the simpler sampling with replacement, we chose to try and slide that one quietly past you, in the hope you would forgive us when you realized.
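If you would like to check this argument with simulation, here is a rough sketch of ours (not from the original text), comparing the two sampling schemes on a hypothetical pool of 10,000 eligible jurors, of whom 2,600 are black. The proportions of juries with no black members should come out very close to each other.

# Compare sampling with and without replacement from a large pool.
# Hypothetical pool: 2,600 black and 7,400 white eligible jurors.
big_pool <- rep(c('black', 'white'), c(2600, 7400))
n_trials <- 10000
zero_with <- 0      # Count of all-white juries, sampling with replacement.
zero_without <- 0   # Count of all-white juries, sampling without replacement.
for (i in 1:n_trials) {
    with_rep <- sample(big_pool, size=12, replace=TRUE)
    without_rep <- sample(big_pool, size=12)  # replace=FALSE is the default.
    zero_with <- zero_with + (sum(with_rep == 'black') == 0)
    zero_without <- zero_without + (sum(without_rep == 'black') == 0)
}
message('Proportion with replacement: ', zero_with / n_trials)
message('Proportion without replacement: ', zero_without / n_trials)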

+
+
+
+
+

6.7 Conclusion

+

This chapter introduced you to the idea of strings — values in R that store bits of text. Strings are very useful as labels for the entities we are sampling from, when we do our simulations. Strings are particularly useful when we use them with vectors, and one way we often do that is to build up vectors of strings to sample from, using the rep function.

+

There is a fundamental distinction between two different types of sampling — sampling with replacement, where we draw an element from a larger pool, then put that element back before drawing again, and sampling without replacement, where we remove the element from the remaining pool when we draw it into the sample. As we will see later, it is often a judgment call which of these two types of sampling is a more reasonable model of the world you are trying to simulate.

+ + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/sampling_variability.html b/r-book/sampling_variability.html new file mode 100644 index 00000000..514bad2a --- /dev/null +++ b/r-book/sampling_variability.html @@ -0,0 +1,1407 @@ + + + + + + + + + +Resampling statistics - 14  On Variability in Sampling + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

14  On Variability in Sampling

+
+ + + +
+ + + + +
+ + +
+ +
+

[Debra said]: “I’ve had such good luck with Japanese cars and poor luck with American...”

+
+
+

The ’65 Ford Mustang: “It was fun, but I had to put two new transmissions in it.”

+
+
+

The Ford Torino: “That got two transmissions too. That finished me with Ford.”

+
+
+

The Plymouth Horizon: “The disaster of all disasters. That should’ve been painted bright yellow. What a lemon.”

+
+

(From Washington Post Magazine, May 17, 1992, p. 19)

+

Do the quotes above convince you that Japanese cars are better than American? Has Debra got enough evidence to reach the conclusion she now holds? That sort of question, and the reasoning we use to address it, is the subject of this chapter.

+

More generally, how should one go about using the available data to test the hypothesis that Japanese cars are better? That is an example of the questions that are the subject of statistics.

+
+

14.1 Variability and small samples

+

Perhaps the most important idea for sound statistical inference — the section of the book we are now beginning, in contrast to problems in probability, which we have studied in the previous chapters — is recognition of the presence of variability in the results of small samples. The fatal error of relying on too-small samples is all too common among economic forecasters, journalists, and others who deal with trends and public opinion. Athletes, sports coaches, sportswriters, and fans too frequently disregard this principle both in their decisions and in their discussion.

+

Our intuitions often carry us far astray when the results vary from situation to situation — that is, when there is variability in outcomes — and when we have only a small sample of outcomes to look at.

+

To motivate the discussion, I’ll tell you something that almost no American sports fan will believe: There is no such thing as a slump in baseball batting. That is, a batter often goes an alarming number of at-bats without getting a hit, and everyone — the manager, the sportswriters, and the batter himself — assumes that something has changed, and the probability of the batter getting a hit is now lower than it was before the slump. It is common for the manager to replace the player for a while, and for the player and coaches to change the player’s hitting style so as to remedy the defect. But the chance of a given batter getting a hit is just the same after he has gone many at-bats without a hit as when he has been hitting well. A belief in slumps causes managers to play line-ups which may not be their best.

+

By “slump” I mean that a player’s probability of getting a hit in a given at-bat is lower during a period than during average periods. And when I say there is no such thing as a slump, I mean that the chances of getting a hit after any sequence of at-bats without a hit is not different than the long-run average.

+

The “hot hand” in basketball is another illusion. In practical terms, the hot hand does not exist — or rather — if it does, the effect is weak.1 The chance of a shooter scoring is more or less the same after they have just missed a flock of shots as when they have just sunk a long string. That is, the chance of scoring a basket is not appreciably higher after a run of successes than after a run of failures. But even professional teams choose plays on the basis of who supposedly has a hot hand.

+

Managers who substitute for the “slumping” or “cold-handed” players with other players who, in the long run, have lower batting averages, or set up plays for the shooter who supposedly has a hot hand, make a mistake. The supposed hot hand in basketball, and the slump in baseball, are illusions because the observed long runs of outs, or of baskets, are statistical artifacts, due to ordinary random variability. The identification of slumps and hot hands is superstitious behavior, classic cases of the assignment of pattern to a series of events when there really is no pattern.

+

How do statisticians ascertain that slumps and hot hands are very weak effects, or do not exist? In brief, in baseball we simulate a hitter with a given average — say .250 — and compare the results with actual hitters of that average, to see whether they have “slumps” longer than the computer. The method of investigation is roughly as follows. You program a computer or other machine to behave the way a player would, given the player’s long-run average, on the assumption that each trial is a random drawing. For example, if a player has a .250 season-long batting average, the machine is programmed like a bucket containing three black balls and one white ball. Then for each simulated at bat, the machine shuffles the “balls” and draws one; it then records whether the result is black or white, after which the ball is replaced in the bucket. To study a season with four hundred at-bats, a simulated ball is drawn four hundred times.

+

The records of the player’s real season and the simulated season are then compared. If there really is such a thing as a non-random slump or streak, there will be fewer but longer “runs” of hits or outs in the real record than in the simulated record. On the other hand, if performance is independent from at-bat trial to at-bat trial, the actual record will change from hit to out and from out to hit as often as does the random simulated record. I suggested this sort of test for the existence of slumps in my 1969 book that first set forth the resampling method, a predecessor of this book.
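As a rough illustration of this kind of simulation (our sketch, not the code used in the studies described here), the following R fragment simulates a 400 at-bat season for a .250 hitter and finds the longest run of consecutive outs, a “slump” produced by chance alone:

# Simulate 400 at-bats for a .250 hitter, with each at-bat independent.
at_bats <- sample(c('Hit', 'Out'), size=400, replace=TRUE, prob=c(0.25, 0.75))
# rle() finds runs of repeated values; keep the lengths of the runs of outs.
runs <- rle(at_bats)
out_run_lengths <- runs$lengths[runs$values == 'Out']
message('Longest run of outs: ', max(out_run_lengths))

Repeating this many times, and comparing the lengths of the simulated runs to the runs in a real player’s record, is the essence of the test described above.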

+

For example, Table 14.1 shows the results of one 400 at-bat season for a simulated .250 hitter. (H = hit, O = out, sequential at-bats ordered vertically) Note the “slump” — 1 for 24 — in columns 7 & 8 (in bold).

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 14.1: 400 simulated at-bats (ordered vertically)
OOOOOOHOOOOHOHOO
OOOOOHOOHHHOHHOO
OOOHOOOOHOOOHHOO
OOOOOHHOOOOHOOOH
HOHOOHOOOHOOOOHO
HOOHOOHHOHOOHOHO
OOHOOOOHOOOOOOHO
OOHOOOOHHOOOOOOO
OHOOOOOOHHOOOHOO
OHHOOOOHOHOOHOHO
OOHHOHOHOHHHOOOO
HOOOOOOOOHOHHOOO
OHOOOHOOOOOOOOHH
HOHOOOHOOOOHHOOH
OOOOHHOOOOOHHHHO
OOOOHHOOOOOHOOOO
HOOOOOOOOOOOOOOO
OHHHOOOHOHOOOOOO
OHOHOOOOHOOOOHOO
OOOHHOOOOOHOHOOH
OHOOHOOOOOHOOOOO
HHHOOOOHOOOOHOOH
OOOHHOOOOOOOOOHO
OHOOOOOHHOOOOOOH
OOOOOHOOOHOHOHOO
+
+

Harry Roberts investigated the batting records of a sample of major leaguers.2 He compared players’ season-long records against the behavior of random-number drawings. If slumps existed rather than being a fiction of the imagination, the real players’ records would shift from a string of hits to a string of outs less frequently than would the random-number sequences. But in fact the number of shifts, and the average lengths of strings of hits and outs, are on average the same for players as for player-simulating random-number devices.

+

Over long periods, averages may vary systematically, as Ty Cobb’s annual batting averages varied non-randomly from season to season, Roberts found. But in the short run, most individual and team performances have shown results similar to the outcomes that a lottery-type random number machine would produce.

+

Thomas Gilovich, Robert Vallone and Amos Tversky (1985) performed a similar study of basketball shooting. They examined the records of shots from the floor by the Philadelphia 76’ers, foul shots by the Boston Celtics, and a shooting experiment of Cornell University teams. They found that “basketball players and fans alike tend to believe that a player’s chances of hitting a shot are greater following a hit than following a miss on the previous shot. However, detailed analyses…provided no evidence for a positive correlation between the outcomes of successive shots.”

+

To put their conclusion differently, knowing whether a shooter has scored or not scored on the previous shot — or in any previous sequence of shots — is of absolutely no use in predicting whether the shooter will or will not score on the next shot. Similarly, knowledge of the past series of at-bats in baseball does not improve a prediction of whether a batter will get a hit this time.

+

Of course a batter feels — and intensely — as if she or he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, both sandlot players and professionals feel confident that this time will be a hit, too. And after you have hit a bunch of baskets from all over the court, you feel as if you can’t miss.

+

But notice that card players get the same poignant feeling of being “hot” or “cold,” too. After a poker player “fills” several straights and flushes in a row, s/he feels s/he will hit the next one too. (Of course there are some players who feel just the opposite, that the “law of averages” is about to catch up with them.)

+

You will agree, I’m sure, that the cards don’t have any memory, and a player’s chance of filling a straight or flush remains the same no matter how he or she has done in the last series of hands. Clearly, then, a person can have a strong feeling that something is about to happen even when that feeling has no foundation. This supports the idea that even though a player in sports “feels” that s/he is in a slump or has a hot hand, this does not imply that the feeling has any basis in reality.

+

Why, when a batter is low in his/her mind because s/he has been making a lot of outs or for personal reasons, does her/his batting not suffer? And why the opposite? Apparently at any given moment there are many influences operating upon a player’s performance in a variety of directions, with none of them clearly dominant. Hence there is no simple convincing explanation why a player gets a hit or an out, a basket or a miss, on any given attempt.

+

But though science cannot provide an explanation, the sports commentators always are ready to offer their analyses. Listen, for example, to how they tell you that Joe Zilch must have been trying extra hard just because of his slump. There is a sportswriter’s explanation for anything that happens.

+

Why do we believe the nonsense we hear about “momentum,” “comeback,” “she’s due this time,” and so on? The adult of the human species has a powerful propensity to believe that he or she can find a pattern even when there is no pattern to be found. Two decades ago I cooked up series of numbers with a random-number machine that looked as if they were prices on the stock market. Subjects in the experiment were told to buy and sell whichever stocks they chose. Then I gave them “another day’s prices,” and asked them to buy and sell again. The subjects did all kinds of fancy figuring, using an incredible variety of assumptions — even though there was no way for the figuring to help them. That is, people sought patterns even though there was no reason to believe that there were any patterns to be found.

+

When I stopped the game before the ten buy-and-sell sessions the participants expected, people asked that the game continue. Then I would tell them that there was no basis for any patterns in the data. “Winning” or “losing” had no meaning. But the subjects demanded to continue anyway. They continued believing that they could find patterns even after I told them that the numbers were randomly looked up and not real stock prices.

+

The illusions in our thinking about sports have important counterparts in our thinking about such real-world phenomena as the climate, the stock market, and trends in the prices of raw materials such as mercury, copper and wheat. And private and public decisions made on the basis of faulty understanding of these real situations, caused by illusory thinking on the order of belief in slumps and hot hands, are often costly and sometimes disastrous.

+

An example of the belief that there are patterns when there are none: Systems for finding patterns in the stock market are peddled that have about the same reliability as advice from a racetrack tout — and millions buy them.

+

One of the scientific strands leading into research on variability was the body of studies that considers the behavior of stock prices as a “random walk.” That body of work asserts that a stock broker or chartist who claims to be able to find patterns in past price movements of stocks that will predict future movements should be listened to with about the same credulity as a racetrack tout or an astrologer. A second strand was the work in psychology in the last decade or two which has recognized that people’s estimates of uncertain events are systematically biased in a variety of interesting and knowable ways.

+

The U.S. government has made — and continues to make — blunders costing the public scores of billions of dollars, using slump-type fallacious reasoning about resources and energy. Forecasts are issued and policies are adopted based on the belief that a short-term increase in price constitutes a long-term trend. But the “experts” employed by the government to make such forecasts do no better on average than do private forecasters, and often the system of forecasting that they use is much more misleading than would be a random-number generating machine of the sort used in the baseball slump experiments.

+

Please look at the data in Figure 14.1 for the height of the Nile River over about half a century. Is it not natural to think that those data show a decline in the height of the river? One can imagine that if our modern communication technology existed then, the Cairo newspapers would have been calling for research to be done on the fall of the Nile, and the television anchors would have been warning the people to change their ways and use less water.

+
+
+
+
+

+
Figure 14.1: Height of the Nile River Over Half of a Century
+
+
+
+
+

Let’s look at Figure 14.2 which represents the data over an even longer period. What now would you say about the height of the Nile? Clearly the “threat” was non-existent, and only appeared threatening because the time span represented by the data was too short. The point of this display is that looking at too-short a segment of experience frequently leads us into error. And “too short” may be as long as a century.

+
+
+

+
Figure 14.2: Variations in the height of Nile Flood in centimeters. The sloping line indicates the secular raising of the bed of the Nile by deposition of silt. From Brooks (1928)
+
+
+

Another example is the price of mercury, which is representative of all metals. Figure 14.3 shows a forecast made in 1976 by natural-scientist Earl Cook (1976). He combined a then-recent upturn in prices with the notion that there is a finite amount of mercury on the earth’s surface, plus the mathematical charm of plotting a second-degree polynomial with the computer. Figure 14.4 and Figure 14.5 show how the forecast was almost immediately falsified, and the price continued its long-run decline.

+
+
+

+
Figure 14.3: The Price of Mercury from Cook (1976)
+
+
+
+
+
+
+

+
Figure 14.4: Mercury Reserves, 1950-1990
+
+
+
+
+
+
+
+
+

+
Figure 14.5: Mercury Price Indexes, 1950-1990
+
+
+
+
+

Lack of sound statistical intuition about variability can lead to manipulation of the public by unscrupulous persons. Commodity fund sellers use a device of this sort to make their results look good (The Washington Post, Sep 28, 1987, p. 71). Some individual commodity traders inevitably do well in their private trading, just by chance. A firm then hires one of them, builds a public fund around him, and claims the private record for the fund’s own history. But of course the private record has no predictive power, any more than does the record of someone who happened to get ten heads in a row flipping coins.

+

How can we avoid falling into such traps? It is best to look at the longest possible sweep of history. That is, use the largest possible sample of observations to avoid sampling error. For copper we have data going back to the 18th century B.C. In Babylonia, over a period of 1000 years, the price of iron fell to one fifth of what it was under Hammurabi (almost 4000 years ago), and the price of copper then was about a thousand times its current price in the U.S., relative to wages. So the inevitable short-run increases in price should be considered in this long-run context to avoid drawing unsound conclusions due to small-sample variability.

+

Proof that it is sound judgment to rely on the longest possible series is given by the accuracy of predictions one would have made in the past. In the context of copper, mercury, and other raw materials, we can refer to a sample of years in the past, and from those years imagine ourselves forecasting the following year. If you had bet every time that prices would go down in consonance with the long-run trend, you would have been a big winner on average.

+
+
+

14.2 Regression to the mean

+
+

UP, DOWN “The Dodgers demoted last year’s NL rookie of the year, OF Todd Hollandsworth (.237, 1 HR, 18 RBI) to AAA Albuquerque...” (Item in Washington Post, 6/14/97)

+
+

It is a well-known fact that the Rookie of the Year in a sport such as baseball seldom has as outstanding a season in their sophomore year. Why is this so? Let’s use the knowledge we have acquired of probability and simulation to explain this phenomenon.

+

The matter at hand might be thought of as a problem in pure probability — if one simply asks about the chance that a given player (the Rookie of the Year) will repeat. Or it could be considered a problem in statistics, as discussed in coming chapters. Let’s consider the matter in the context of baseball.

+

Imagine 10 mechanical “ball players,” each a machine that has three white balls (hits) and seven black balls (outs). Every time the machine goes to bat, you take a ball out of the machine, look to see if it is a hit or an out, and put it back. For each “ball player” you do this 100 times. One of them is going to do better than the others, and that one becomes the Rookie of the Year. See Table 14.2.

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 14.2: Rookie Seasons (100 at bats)
# of Hits    Batting Average
32           .320
34           .340
33           .330
30           .300
35           .350
33           .330
30           .300
31           .310
28           .280
25           .250
+
+

Would you now expect the player who happened to be the best among the top ten in the first year to again be the best among the top ten in the next year? The sports writers do. But of course this seldom happens. The Rookie of the Year in major-league baseball seldom has as outstanding a season in their sophomore year as in their rookie year. You can expect them to do better than the average of all sophomores, but not necessarily better than all of the rest of the group of talented players who are now sophomores. (Please notice that we are not saying that there is no long-run difference among the top ten rookies. But suppose there is. Table 14.3 shows the season’s performance for ten batters of differing performances).

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 14.3: Simulated season’s performance for 10 batters of differing “true” averages
“True”    Rookie
.270      .340
.270      .240
.280      .330
.280      .300
.300      .280
.300      .420
.320      .340
.320      .350
.330      .260
.330      .330
+
+

We see from Table 14.3 that we have ten batters whose “true” batting averages range from .270 to .330. Their rookie year performance (400 at bats), simulated on the basis of their “true” average, is on the right. Which one is the rookie of the year? It’s #6, who hit .420 during the rookie season. Will they do as well next year? Not likely — their “true” average is only .300.

+
+

Start of sampling_variability notebook

+ + +

Try generating some rookie “seasons” yourself with the following commands, varying the batter’s “true” performance by changing the value of p_hit (the probability of a hit).

+
+
# Simulate a rookie season of 400 at-bats.
+
+# You might try changing the value below and rerunning.
+# This is the true (long-run) probability of a hit for this batter.
+p_hit <- 0.4
+message('True average is: ', p_hit)
+
+
True average is: 0.4
+
+
# We resample _with_ replacement here; the chances of a hit do not change
+# from at-bat to at-bat.
+at_bats <- sample(c('Hit', 'Out'), prob=c(p_hit, 1 - p_hit), size=400, replace=TRUE)
+simulated_average <- sum(at_bats == 'Hit') / 400
+# Show the result
+message('Simulated average is: ', simulated_average)
+
+
Simulated average is: 0.445
+
+
+

Simulate a set of 10 or 20 such rookie seasons, and look at the one who did best. How did their rookie season compare to their “true” average?
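If you want a starting point for that exercise, here is one possible sketch (ours, reusing the approach from the cell above) that simulates 20 rookie seasons for a batter with the same “true” average, and shows the best of them:

# Simulate 20 rookie seasons of 400 at-bats each, for one "true" average.
p_hit <- 0.25
averages <- numeric(20)
for (i in 1:20) {
    at_bats <- sample(c('Hit', 'Out'), prob=c(p_hit, 1 - p_hit),
                      size=400, replace=TRUE)
    averages[i] <- sum(at_bats == 'Hit') / 400
}
message('True average: ', p_hit)
message('Best of 20 simulated seasons: ', max(averages))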

+

End of sampling_variability notebook

+
+

The explanation is the presence of variability. And lack of recognition of the role of variability is at the heart of much fallacious reasoning. Being alert to the role of variability is crucial.

+

Or consider the example of having a superb meal at a restaurant — the best meal you have ever eaten. That fantastic meal is almost surely the combination of the restaurant being better than average, plus a lucky night for the chef and the dish you ordered. The next time you return you can expect a meal better than average, because the restaurant is better than average in the long run. But the meal probably will be less good than the superb one you had the first time, because there is no reason to believe that the chef will get so lucky again and that the same sort of variability will happen this time.

+

These examples illustrate the concept of “regression to the mean” — a confusingly-titled and very subtle effect caused by variability in results among successive samples drawn from the same population. This phenomenon was given its title more than a century ago by Francis Galton, one of the great founders of modern statistics, when at first he thought that the height of the human species was becoming more uniform, after he noticed that the children of the tallest and shortest parents usually are closer to the average of all people than their parents are. But later he discovered his fallacy — that the variability in heights of children of quite short and quite tall parents also causes some people to be even more exceptionally tall or short than their parents. So the spread in heights among humans remains much the same from generation to generation; there is no “regression to the mean.” The heart of the matter is that any exceptional observed case in a group is likely to be the result of two forces — a) an underlying propensity to differ from the average in one direction or the other, plus b) some chance sampling variability that happens (in the observed case) to push even further in the exceptional direction.

+

A similar phenomenon arises in direct-mail marketing. When a firm tests many small samples of many lists of names and then focuses its mass mailings on the lists that performed best in the tests, the full list “rollouts” usually do not perform as well as the samples did in the initial tests. It took many years before mail-order experts (see especially (Burnett 1988)) finally understood that regression to the mean inevitably causes an important part of the dropoff from sample to rollout observed in the set of lists that give the very best results in a multi-list test.

+

The larger the test samples, the less the dropoff, of course, because larger samples reduce variability in results. But larger samples risk more money. So the test-sample-size decision for the marketer inevitably is a trade-off between accuracy and cost.

+

And one last amusing example: After I (JLS) lectured to the class on this material, the student who had gotten the best grade on the first mid-term exam came up after class and said: “Does that mean that on the second mid-term I should expect to do well but not the best in the class?” And that’s exactly what happened: He had the second-best score in the class on the next midterm.

+

A related problem arises when one conducts multiple tests, as when testing thousands of drugs for therapeutic value. Some of the drugs may appear to have a therapeutic effect just by chance. We will discuss this problem later when discussing hypothesis testing.

+
+
+

14.3 Summary and conclusion

+

The heart of statistics is clear thinking. One of the key elements in being a clear thinker is to have a sound gut understanding of statistical processes and variability. This chapter amplifies this point.

+

A great benefit to using simulations rather than formulas to deal with problems in probability and statistics is that the presence and importance of variability becomes manifest in the course of the simulation work.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/search.json b/r-book/search.json new file mode 100644 index 00000000..22045ef8 --- /dev/null +++ b/r-book/search.json @@ -0,0 +1,1619 @@ +[ + { + "objectID": "index.html#r-edition", + "href": "index.html#r-edition", + "title": "Resampling statistics", + "section": "R edition", + "text": "R edition" + }, + { + "objectID": "preface_third.html#what-simon-saw", + "href": "preface_third.html#what-simon-saw", + "title": "Preface to the third edition", + "section": "What Simon saw", + "text": "What Simon saw\nSimon gives the early history of this book in the original preface. He starts with the following observation:\n\nIn the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly…\n\nSimon then applied his striking capacity for independent thought to the problem — and came to two essential conclusions.\nThe first was that introductory courses in statistics use far too much mathematics. Most students cannot follow along and quickly get lost, reducing the subject to — as Simon puts it — “mumbo-jumbo”.\nOn its own, this was not a new realization. Simon quotes a classic textbook by Wallis and Roberts (1956), in which they compare teaching statistics through mathematics to teaching in a foreign language. More recently, other teachers of statistics have come to the same conclusion. Cobb (2007) argues that it is practically impossible to teach students the level of mathematics they would need to understand standard introductory courses. As you will see below, Cobb also agrees with Simon about the solution.\nSimon’s great contribution was to see how we can replace the mathematics, to better reveal the true heart of statistical thinking. His starting point appears in the original preface: “Beneath the logic of a statistical inference there necessarily lies a physical process”. Drawing conclusions from noisy data means building a model of the noisy world, and seeing how that model behaves. That model can be physical, where we generate the noisiness of the world using physical devices like dice and spinners and coin-tosses. In fact, Simon used exactly these kinds of devices in his first experiments in teaching (Simon 1969). He then saw that it was much more efficient to build these models with simple computer code, and the result was the first and second editions of this book, with their associated software, the Resampling Stats language.\nSimon’s second conclusion follows from the first. Now that Simon had stripped away the unnecessary barrier of mathematics, he had got to the heart of what is interesting and difficult in statistics. Drawing conclusions from noisy data involves a lot of hard, clear thinking. We need to be honest with our students about that; statistics is hard, not because it is obscure (it need not be), but because it deals with difficult problems. It is exactly that hard logical thinking that can make statistics so interesting to our best students; “statistics” is just reasoning about the world when the world is noisy. Simon writes eloquently about this in a section in the introduction — “Why is statistics such a difficult subject” (Section 1.6).\nWe needed both of Simon’s conclusions to get anywhere. We cannot hope to teach two hard subjects at the same time; mathematics, and statistical reasoning. That is what Simon has done: he replaced the mathematics with something that is much easier to reason about. 
Then he can concentrate on the real, interesting problem — the hard thinking about data, and the world it comes from. To quote from a later section in this book (Section 2.4): “Once we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics.” Instead of asking “where would I look up the right recipe for this”, you find yourself asking “what kind of world do these data come from?” and “how can I reason about that world?”. Like Simon, we have found that this way of thinking and teaching is almost magically liberating and satisfying. We hope and believe that you will find the same." + }, + { + "objectID": "preface_third.html#sec-resampling-data-science", + "href": "preface_third.html#sec-resampling-data-science", + "title": "Preface to the third edition", + "section": "Resampling and data science", + "text": "Resampling and data science\nThe ideas in Simon’s book, first published in 1992, have found themselves at the center of the modern movement of data science.\nIn the section above, we described Simon’s path in discovering physical models as a way of teaching and explaining statistical tests. He saw that code was the right way to express these physical models, and therefore, to build and explain statistical tests.\nMeanwhile, the wider world of data analysis has been coming to the same conclusion, but from the opposite direction. Simon saw the power of resampling for explanation, and then that code was the right way to express these explanations. The data science movement discovered first that code was essential for data analysis, and then that code was the right way to explain statistics.\nThe modern use of the phrase “data science” comes from the technology industry. From around 2007, companies such as LinkedIn and Facebook began to notice that there was a new type of data analyst that was much more effective than their predecessors. They came to call these analysts “data scientists”, because they had learned how to deal with large and difficult data while working in scientific fields such as ecology, biology, or astrophysics. They had done this by learning to use code:\n\nData scientists’ most basic, universal skill is the ability to write code. (Davenport and Patil 2012)\n\nFurther reflection (Donoho 2017) suggested that something deep was going on: that data science was the expression of a radical change in the way we analyze data, in academia, and in industry. At the center of this change — was code. Code is the language that allows us to tell the computer what it should do with data; it is the native language of data analysis.\nThis insight transforms the way with think of code. In the past, we have thought of code as a separate, specialized skill, that some of us learn. We take coding courses — we “learn to code”. If code is the fundamental language for analyzing data, then we need code to express what data analysis does, and explain how it works. Here we “code to learn”. Code is not an aim in itself, but a language we can use to express the simple ideas behind data analysis and statistics.\nThus the data science movement started from code as the foundation for data analysis, to using code to explain statistics. It ends at the same place as this book, from the other side of the problem.\nThe growth of data science is the inevitable result of taking computing seriously in education and research. 
We have already cited Cobb (2007) on the impossibility of teaching the mathematics students would need in order to understand traditional statistics courses. He goes on to explain why there is so much mathematics, and why we should remove it. In the age before ubiquitous computing, we needed mathematics to simplify calculations that we could not practically do by hand. Now we have great computing power in our phones and laptops, we do not have this constraint, and we can use simpler resampling methods to solve the same problems. As Simon shows, these are much easier to describe and understand. Data science, and teaching with resampling, are the obvious consequences of ubiquitous computing." + }, + { + "objectID": "preface_third.html#what-we-changed", + "href": "preface_third.html#what-we-changed", + "title": "Preface to the third edition", + "section": "What we changed", + "text": "What we changed\nThis diversion, through data science, leads us to the changes that we have made for the new edition. The previous edition of this book is still excellent, and you can read it free, online, at http://www.resample.com/intro-text-online. It continues to be ahead of its time, and ahead of our time. Its one major drawback is that Simon bases much of the book around code written in a special language that he developed with Dan Weidenfeld, called Resampling Stats. Resampling Stats is well designed for expressing the steps in simulating worlds that include elements of randomness, and it was a useful contribution at the time that it was written. Since then, and particularly in the last decade, there have been many improvements in more powerful and general languages, such as R and Python. These languages are particularly suitable for beginners in data analysis, and they come with a huge range of tools and libraries for a many tasks in data analysis, including the kinds of models and simulations you will see in this book. We have updated the book to use R, instead of Resampling Stats. If you already know R or a similar language, such as Python, you will have a big head start in reading this book, but even if you do not, we have written the book so it will be possible to pick up the R code that you need to understand and build the kind of models that Simon uses. The advantage to us, your authors, is that we can use the very powerful tools associated with R to make it easier to run and explain the code. The advantage to you, our readers, is that you can also learn these tools, and the R language. They will serve you well for the rest of your career in data analysis.\n\nOur second major change is that we have added some content that Simon specifically left out. Simon knew that his approach was radical for its time, and designed his book as a commentary, correction, and addition to traditional courses in statistics. He assumes some familiarity with the older world of normal distributions, t-tests, Chi-squared tests, analysis of variance, and correlation. In the time that has passed since he wrote the book, his approach to explanation has reached the mainstream. It is now perfectly possible to teach an introductory statistics course without referring to the older statistical methods. This means that the earlier editions of this book can now serve on their own as an introduction to statistics — but, used this way, at the time we write, this will leave our readers with some gaps to fill. 
Simon’s approach will give you a deep understanding of the ideas of statistics, and resampling methods to apply them, but you will likely come across other teachers and researchers using the traditional methods. To bridge this gap, we have added new sections that explain how resampling methods relate to their corresponding traditional methods. Luckily, we find these explanations add deeper understanding to the traditional methods. Teaching resampling is the best foundation for statistics, including the traditional methods.\nLastly, we have extended Simon’s explanation of Bayesian probability and inference. This is partly because Bayesian methods have become so important in statistical inference, and partly because Simon’s approach has such obvious application in explaining how Bayesian methods work." + }, + { + "objectID": "preface_third.html#who-should-read-this-book-and-when", + "href": "preface_third.html#who-should-read-this-book-and-when", + "title": "Preface to the third edition", + "section": "Who should read this book, and when", + "text": "Who should read this book, and when\nAs you have seen in the previous sections, this book uses a radical approach to explaining statistical inference — the science of drawing conclusions from noisy data. This approach is quickly becoming the standard in teaching of data science, partly because it is so much easier to explain, and partly because of the increasing role of code in data analysis.\nOur book teaches the basics of using the R language, basic probability, statistical inference through simulation and resampling, confidence intervals, and basic Bayesian reasoning, all through the use of model building in simple code.\nStatistical inference is an important part of research methods for many subjects; so much so, that research methods courses may even be called “statistics” courses, or include “statistics” components. This book covers the basic ideas behind statistical inference, and how you can apply these ideas to draw practical statistical conclusions. We recommend it to you as an introduction to statistics. If you are a teacher, we suggest you consider this book as a primary text for first statistics courses. We hope you will find, as we have, that this method of explaining through building is much more productive and satisfying than the traditional method of trying to convey some “intuitive” understanding of fairly complicated mathematics. We explain the relationship of these resampling techniques to traditional methods. Even if you do need to teach your students t-tests, and analysis of variance, we hope you will share our experience that this way of explaining is much more compelling than the traditional approach.\nSimon wrote this book for students and teachers who were interested to discover a radical new method of explanation in statistics and probability. The book will still work well for that purpose. If you have done a statistics course, but you kept feeling that you did not really understand it, or there was something fundamental missing that you could not put your finger on — good for you! — then, please, read this book. There is a good chance that it will give you deeper understanding, and reveal the logic behind the often arcane formulations of traditional statistics.\nOur book is only part of a data science course. There are several important aspects to data science. 
A data science course needs all the elements we list above, but it should also cover the process of reading, cleaning, and reorganizing data using R, or another language, such as\nPython\nIt may also go into more detail about the experimental design, and cover prediction techniques, such as classification with machine learning, and data exploration with plots, tables, and summary measures. We do not cover those here. If you are teaching a full data science course, we suggest that you use this book as your first text, as an introduction to code, and statistical inference, and then add some of the many excellent resources on these other aspects of data science that assume some knowledge of statistics and programming." + }, + { + "objectID": "preface_third.html#welcome-to-resampling", + "href": "preface_third.html#welcome-to-resampling", + "title": "Preface to the third edition", + "section": "Welcome to resampling", + "text": "Welcome to resampling\nWe hope you will agree that Simon’s insights for understanding and explaining are — really extraordinary. We are catching up slowly. If you are like us, your humble authors, you will find that Simon has succeeded in explaining what statistics is, and exactly how it works, to anyone with the patience to work through the examples, and think hard about the problems. If you have that patience, the rewards are great. Not only will you understand statistics down to its deepest foundations, but you will be able to think of your own tests, for your own problems, and have the tools to implement them yourself.\nMatthew Brett\nStéfan van der Walt\nIan Nimmo-Smith\n\n\n\n\nCobb, George W. 2007. “The Introductory Statistics Course: A Ptolemaic Curriculum?” Technology Innovations in Statistics Education 1 (1). https://escholarship.org/uc/item/6hb3k0nz.\n\n\nDavenport, Thomas H, and DJ Patil. 2012. “Data Scientist: The Sexiest Job of the 21st Century.” Harvard Business Review 90 (10): 70–76. https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century.\n\n\nDonoho, David. 2017. “50 Years of Data Science.” Journal of Computational and Graphical Statistics 26 (4): 745–66. http://courses.csail.mit.edu/18.337/2015/docs/50YearsDataScience.pdf.\n\n\nSimon, Julian Lincoln. 1969. Basic Research Methods in Social Science. 1st ed. New York: Random House.\n\n\n———. 1992. Resampling: The New Statistics. 1st ed. Arlington, VA: Resampling Stats Inc.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New Approach. New York: The Free Press." + }, + { + "objectID": "preface_second.html#sec-brief-history", + "href": "preface_second.html#sec-brief-history", + "title": "Preface to the second edition", + "section": "Brief history of the resampling method", + "text": "Brief history of the resampling method\nThis book describes a revolutionary — but now fully accepted — approach to probability and statistics. Monte Carlo resampling simulation takes the mumbo-jumbo out of statistics and enables even beginning students to understand completely everything that is done.\nBefore we go further, let’s make the discussion more concrete with an example. Ask a class: What are the chances that three of a family’s first four children will be girls? After various entertaining class suggestions about procreating four babies, or surveying families with four children, someone in the group always suggests flipping a coin. 
This leads to valuable student discussion about whether the probability of a girl is exactly half (there are about 105 males born for each 100 females), whether .5 is a satisfactory approximation, whether four coins flipped once give the same answer as one coin flipped four times, and so on. Soon the class decides to take actual samples of coin flips. And students see that this method quickly arrives at estimates that are accurate enough for most purposes. Discussion of what is “accurate enough” also comes up, and that discussion is valuable, too.\nThe Monte Carlo method itself is not new. Near the end of World War II, a group of physicists at the Rand Corp. began to use random-number simulations to study processes too complex to handle with formulas. The name “Monte Carlo” came from the analogy to the gambling houses on the French Riviera. The application of Monte Carlo methods in teaching statistics also is not new. Simulations have often been used to illustrate basic concepts. What is new and radical is using Monte Carlo methods routinely as problem-solving tools for everyday problems in probability and statistics.\nFrom here on, the related term resampling will be used throughout the book. Resampling refers to the use of the observed data or of a data generating mechanism (such as a die) to produce new hypothetical samples, the results of which can then be analyzed. The term computer-intensive methods also is frequently used to refer to techniques such as these.\nThe history of resampling is as follows: In the mid-1960’s, I noticed that most graduate students — among them many who had had several advanced courses in statistics — were unable to apply statistical methods correctly in their social science research. I sympathized with them. Even many experts are unable to understand intuitively the formal mathematical approach to the subject. Clearly, we need a method free of the formulas that bewilder almost everyone.\nThe solution is as follows: Beneath the logic of a statistical inference there necessarily lies a physical process. The resampling methods described in this book allow us to work directly with the underlying physical model by simulating it, rather than describing it with formulae. This general insight is also the heart of the specific technique Bradley Efron felicitously labeled ‘the bootstrap’ (1979), a device I introduced in 1969 that is now the most commonly used, and best known, resampling method.\nThe resampling approach was first tried with graduate students in 1966, and it worked exceedingly well. Next, under the auspices of the father of the “new math,” Max Beberman, I “taught” the method to a class of high school seniors in 1967. The word “taught” is in quotation marks because the pedagogical essence of the resampling approach is that the students discover the method for themselves with a minimum of explicit instruction from the teacher.\nThe first classes were a success and the results were published in 1969 (J. L. Simon and Holmes 1969). Three PhD experiments were then conducted under Kenneth Travers’ supervision, and they all showed overwhelming superiority for the resampling method (J. L. Simon, Atkinson, and Shevokas 1976). Subsequent research has confirmed this success.\nThe method was first presented at some length in the 1969 edition of my book Basic Research Methods in Social Science (J. L. 
Simon 1969) (third edition with Paul Burstein -Simon Julian Lincoln and Burstein (1985)).\nFor some years, the resampling method failed to ignite interest among statisticians. While many factors (including the accumulated intellectual and emotional investment in existing methods) impede the adoption of any new technique, the lack of readily available computing power and tools was an obstacle. (The advent of the personal computer in the 1980s changed that, of course.)\nThen in the late 1970s, Efron began to publish formal analyses of the bootstrap — an important resampling application (Efron 1979). Interest among statisticians has exploded since then, in conjunction with the availability of easy, fast, and inexpensive computer simulations. The bootstrap has been the most widely used, but across-the-board application of computer intensive methods now seems at hand. As Noreen (1989) noted, “there is a computer-intensive alternative to just about every conventional parametric and non-parametric test.” And the bootstrap method has now been hailed by an official American Statistical Association volume as the only “great breakthrough” in statistics since 1970 (Kotz and Johnson 1992).\nIt seems appropriate now to offer the resampling method as the technique of choice for beginning students as well as for the advanced practitioners who have been exploring and applying the method.\nThough the term “computer-intensive methods” is nowadays used to describe the techniques elaborated here, this book can be read either with or without the accompanying use of the computer. However, as a practical matter, users of these methods are unlikely to be content with manual simulations if a quick and simple computer-program alternative is available.\nThe ultimate test of the resampling method is how well you, the reader, learn it and like it. But knowing about the experiences of others may help beginners as well as experienced statisticians approach the scary subject of statistics with a good attitude. Students as early as junior high school, taught by a variety of instructors and in other languages as well as English, have — in a matter of 6 or 12 short hours — learned how to handle problems that students taught conventionally do not learn until advanced university courses. And several controlled experimental studies show that, on average, students who learn this method are more likely to arrive at correct solutions than are students who are taught conventional methods.\nBest of all, the experiments comparing the resampling method against conventional methods show that students enjoy learning statistics and probability this way, and they don’t suffer statistics panic. This experience contrasts sharply with the reactions of students learning by conventional methods. (This is true even when the same teachers teach both methods as part of an experiment.)\nA public offer: The intellectual history of probability and statistics began with gambling games and betting. Therefore, perhaps a lighthearted but very serious offer would not seem inappropriate here: I hereby publicly offer to stake $5,000 in a contest against any teacher of conventional statistics, with the winner to be decided by whose students get the larger number of simple and complex numerical problems correct, when teaching similar groups of students for a limited number of class hours — say, six or ten. And if I should win, as I am confident that I will, I will contribute the winnings to the effort to promulgate this teaching method. 
(Here it should be noted that I am far from being the world’s most skillful or charming teacher. It is the subject matter that does the job, not the teacher’s excellence.) This offer has been in print for many years now, but no one has accepted it.\nThe early chapters of the book contain considerable discussion of the resampling method, and of ways to teach it. This material is intended mainly for the instructor; because the method is new and revolutionary, many instructors appreciate this guidance. But this didactic material is also intended to help the student get actively involved in the learning process rather than just sitting like a baby bird with its beak open waiting for the mother bird to drop morsels into its mouth. You may skip this didactic material, of course, and I hope that it does not get in your way. But all things considered, I decided it was better to include this material early on rather than to put it in the back or in a separate publication where it might be overlooked." + }, + { + "objectID": "preface_second.html#brief-history-of-statistics", + "href": "preface_second.html#brief-history-of-statistics", + "title": "Preface to the second edition", + "section": "Brief history of statistics", + "text": "Brief history of statistics\nIn ancient times, mathematics developed from the needs of governments and rich men to number armies, flocks, and especially to count the taxpayers and their possessions. Up until the beginning of the 20th century, the term statistic meant the number of something — soldiers, births, taxes, or what-have-you. In many cases, the term statistic still means the number of something; the most important statistics for the United States are in the Statistical Abstract of the United States . These numbers are now known as descriptive statistics. This book will not deal at all with the making or interpretation of descriptive statistics, because the topic is handled very well in most conventional statistics texts.\nAnother stream of thought entered the field of probability and statistics in the 17th century by way of gambling in France. Throughout history people had learned about the odds in gambling games by repeated plays of the game. But in the year 1654, the French nobleman Chevalier de Mere asked the great mathematician and philosopher Pascal to help him develop correct odds for some gambling games. Pascal, the famous Fermat, and others went on to develop modern probability theory.\nLater these two streams of thought came together. Researchers wanted to know how accurate their descriptive statistics were — not only the descriptive statistics originating from sample surveys, but also the numbers arising from experiments. Statisticians began to apply the theory of probability to the accuracy of the data arising from sample surveys and experiments, and that became the theory of inferential statistics .\nHere we find a guidepost: probability theory and statistics are relevant whenever there is uncertainty about events occurring in the world, or in the numbers describing those events.\nLater, probability theory was also applied to another context in which there is uncertainty — decision-making situations. Descriptive statistics like those gathered by insurance companies — for example, the number of people per thousand in each age bracket who die in a five-year period — have been used for a long time in making decisions such as how much to charge for insurance policies. 
But in the modern probabilistic theory of decision-making in business, politics and war, the emphasis is different; in such situations the emphasis is on methods of combining estimates of probabilities that depend upon each other in complicated ways in order to arrive at the best decision. This is a return to the gambling origins of probability and statistics. In contrast, in standard insurance situations (not including war insurance or insurance on a dancer’s legs) the probabilities can be estimated with good precision without complex calculation, on the basis of a great many observations, and the main statistical task is gathering the information. In business and political decision-making situations, however, one often works with probabilities based on very limited information — often little better than guesses. There the task is how best to combine these guesses about various probabilities into an overall probability estimate.\nEstimating probabilities with conventional mathematical methods is often so complex that the process scares many people. And properly so, because its difficulty leads to errors. The statistics profession worries greatly about the widespread use of conventional tests whose foundations are poorly understood. The wide availability of statistical computer packages that can easily perform these tests with a single command, regardless of whether the user understands what is going on or whether the test is appropriate, has exacerbated this problem. This led John Tukey to turn the field toward descriptive statistics with his techniques of “exploratory data analysis” (Tukey 1977). These descriptive methods are well described in many texts.\nProbabilistic analysis also is crucial, however. Judgments about whether the government should allow a new medicine on the market, or whether an operator should adjust a screw machine, require more than eyeball inspection of data to assess the chance variability. But until now the teaching of probabilistic statistics, with its abstruse structure of mathematical formulas, mysterious tables of calculations, and restrictive assumptions concerning data distributions — all of which separate the student from the actual data or physical process under consideration — have been an insurmountable obstacle to intuitive understanding.\nNow, however, the resampling method enables researchers and decision-makers in all walks of life to obtain the benefits of statistics and predictability without the shortcomings of conventional methods, free of mathematical formulas and restrictive assumptions. Resampling’s repeated experimental trials on the computer enable the data (or a data-generating mechanism representing a hypothesis) to express their own properties, without difficult and misleading assumptions.\nSo — good luck. I hope that you enjoy the book and profit from it.\nJulian Lincoln Simon\n1997\n\n\n\n\nEfron, Bradley. 1979. “Bootstrap Methods; Another Look at the Jackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf.\n\n\nKotz, Samuel, and Norman Lloyd Johnson. 1992. Breakthroughs in Statistics. New York: Springer-Verlag.\n\n\nNoreen, Eric W. 1989. Computer-Intensive Methods for Testing Hypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore.\n\n\nSimon, Julian Lincoln. 1969. Basic Research Methods in Social Science. 1st ed. New York: Random House.\n\n\nSimon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976. 
“Probability and Statistics: Experimental Results of a Radically Different Teaching Method.” The American Mathematical Monthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research Methods in Social Science. 3rd ed. New York: Random House.\n\n\nSimon, Julian Lincoln, and Allen Holmes. 1969. “A New Way to Teach Probability Statistics.” The Mathematics Teacher 62 (4): 283–88.\n\n\nTukey, John W. 1977. Exploratory Data Analysis. Reading, MA, USA: Addison-Wesley." + }, + { + "objectID": "intro.html#uses-of-probability-and-statistics", + "href": "intro.html#uses-of-probability-and-statistics", + "title": "1  Introduction", + "section": "1.1 Uses of Probability and Statistics", + "text": "1.1 Uses of Probability and Statistics\nThis chapter introduces you to probability and statistics. First come examples of the kinds of practical problems that this knowledge can solve for us. One reason that the term “statistic” often scares and confuses people is that the term has several sorts of meanings. We discuss the meanings of “statistics” in the section “Types of statistics”. Then comes a discussion on the relationship of probabilities to decisions. Following this we talk about the limitations of probability and statistics. And last is a discussion of why statistics can be such a difficult subject. Most important, this chapter describes the types of problems the book will tackle.\nAt the foundation of sound decision-making lies the ability to make accurate estimates of the probabilities of future events. Probabilistic problems confront everyone — a company owner considering whether to expand their business, to the scientist testing a vaccine, to the individual deciding whether to buy insurance." + }, + { + "objectID": "intro.html#sec-what-problems", + "href": "intro.html#sec-what-problems", + "title": "1  Introduction", + "section": "1.2 What kinds of problems shall we solve?", + "text": "1.2 What kinds of problems shall we solve?\nThese are some examples of the kinds of problems that we can handle with the methods described in this book:\n\nYou are a doctor trying to develop a treatment for COVID19. Currently you are working on a medicine labeled AntiAnyVir. You have data from patients to whom medicine AntiAnyVir was given. You want to judge on the basis of those results whether AntiAnyVir really improves survival or whether it is no better than a sugar pill.\nYou are the campaign manager for the Republicrat candidate for President of the United States. You have the results from a recent poll taken in New Hampshire. You want to know the chance that your candidate would win in New Hampshire if the election were held today.\nYou are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action.\nYou are an environmental scientist monitoring levels of phosphorus pollution in a lake. The phosphorus levels have been fluctuated around a relatively low level until recently, but they have been higher in the last few years. 
Do these recent higher levels indicate some important change or can we put them down to some chance and ordinary variation from year to year?\n\nThe core of all these problems, and of the others that we will deal with in this book, is that you want to know the “chance” or “probability” — different words for the same idea — that some event will or will not happen, or that something is true or false. To put it another way, we want to answer questions about “What is the probability that…?”, given the body of information that you have in hand.\nThe question “What is the probability that…?” is usually not the ultimate question that interests us at a given moment.\nEventually, a person wants to use the estimated probability to help make a decision concerning some action one might take. These are the kinds of decisions, related to the questions about probability stated above, that ultimately we would like to make:\n\nShould you (the researcher) advise doctors to prescribe medicine AntiAnyVir for COVID19 patients, or, should you (the researcher) continue to study AntiAnyVir before releasing it for use? A related matter: should you and other research workers feel sufficiently encouraged by the results of medicine AntiAnyVir so that you should continue research in this general direction rather than turning to some other promising line of research? These are just two of the possible decisions that might be influenced by the answer to the question about the probability that medicine AntiAnyVir is effective in treating COVID19.\nShould you advise the Republicrat presidential candidate to go to New Hampshire to campaign? If the poll tells you conclusively that she or he will not win in New Hampshire, you might decide that it is not worthwhile investing effort to campaign there. Similarly, if the poll tells you conclusively that they surely will win in New Hampshire, you probably would not want to campaign further there. But if the poll is not conclusive in one direction or the other, you might choose to invest the effort to campaign in New Hampshire. Analysis of the chances of winning in New Hampshire based on the poll data can help you make this decision sensibly.\nShould your company buy more ambulances? Clearly the answer to this question is affected by the probability that a given number of your ambulances will be out of action on a given day. But of course this estimated probability will be only one part of the decision.\nShould we search for new causes of phosphorus pollution as a result of the recent measurements from the lake? If the causes have not changed, and the recent higher values were just the result of ordinary variation, our search will end up wasting time and money that could have been better spent elsewhere.\n\nThe kinds of questions to which we wish to find probabilistic and statistical answers may be found throughout the social, biological and physical sciences; in business; in politics; in engineering; and in most other forms of human endeavor."
  },
  {
    "objectID": "intro.html#sec-types-of-statistics",
    "href": "intro.html#sec-types-of-statistics",
    "title": "1  Introduction",
    "section": "1.3 Types of statistics",
    "text": "1.3 Types of statistics\nThe term statistics sometimes causes confusion and therefore needs explanation.\nStatistics can mean two related things. It can refer to a certain sort of number — of which more below. 
Or it can refer to the field of inquiry that studies these numbers.\nA statistic is a number that we can calculate from a larger collection of numbers we are interested in. For example, table Table 1.1 has some yearly measures of “soluble reactive phosphorus” (SRP) from Lough Erne — a lake in Ireland (Zhou, Gibson, and Foy 2000).\n\n\n\n\nTable 1.1: Soluble Reactive Phosphorus in Lough Erne\n\n\nYear\nSRP\n\n\n\n\n1974\n26.2\n\n\n1975\n22.8\n\n\n1976\n37.2\n\n\n1983\n54.7\n\n\n1984\n37.7\n\n\n1987\n54.3\n\n\n1989\n35.7\n\n\n1991\n72.0\n\n\n1992\n85.1\n\n\n1993\n86.7\n\n\n1994\n93.3\n\n\n1995\n107.2\n\n\n1996\n80.3\n\n\n1997\n70.7\n\n\n\n\n\n\n\n\nWe may want to summarize this set of SRP measurements. For example, we could add up all the SRP values to give the total. We could also divide the total by the number of measurements, to give the average. Or we could measure the spread of the values by finding the minimum and the maximum — see table Table 1.2). All these numbers are descriptive statistics, because they are summaries that describe the collection of SRP measurements.\n\n\n\n\nTable 1.2: Statistics for SRP levels\n\n\n\nDescriptive statistics for SRP\n\n\n\n\nTotal\n863.9\n\n\nMean\n61.7\n\n\nMinimum\n22.8\n\n\nMaximum\n107.2\n\n\n\n\n\n\n\n\nDescriptive statistics are nothing new to you; you have been using many of them all your life.\nWe can calculate other numbers that can be useful for drawing conclusions or inferences from a collection of numbers; these are inferential statistics. Inferential statistics are often probability values that give the answer to questions like “What are the chances that …”.\nFor example, imagine we suspect there was some environmental change in 1990. We see that the average SRP value before 1990 was 38.4 and the average SRP value after 1990 was 85. That gives us a difference in the average of 46.6. But, could this difference be due to chance fluctuations from year to year? Were we just unlucky in getting a few larger measurements in later years? We could use methods that you will see in this book to calculate a probability to answer that question. The probability value is an inferential statistic, because we can use it to draw an inference about the measures.\nInferential statistics use descriptive statistics as their input. Inferential statistics can be used for two purposes: to aid scientific understanding by estimating the probability that a statement is true or not, and to aid in making sound decisions by estimating which alternative among a range of possibilities is most desirable." + }, + { + "objectID": "intro.html#probabilities-and-decisions", + "href": "intro.html#probabilities-and-decisions", + "title": "1  Introduction", + "section": "1.4 Probabilities and decisions", + "text": "1.4 Probabilities and decisions\nThere are two differences between questions about probabilities and the ultimate decision problems:\n\nDecision problems always involve evaluation of the consequences — that is, taking into account the benefits and the costs of the consequences — whereas pure questions about probabilities are estimated without evaluations of the consequences.\nDecision problems often involve a complex combination of sets of probabilities and consequences, together with their evaluations. For example: In the case of the contractor’s ambulances, it is clear that there will be a monetary loss to the contractor if she makes a commitment to have 17 ambulances available for tomorrow and then cannot produce that many. 
Furthermore, the contractor must take into account the further consequence that there may be a loss of goodwill for the future if she fails to meet her obligations tomorrow — and then again there may not be any such loss; and if there is such loss of goodwill it might be a loss worth $10,000 or $20,000 or $30,000. Here the decision problem involves not only the probability that there will be fewer than 17 ambulances tomorrow but also the immediate monetary loss and the subsequent possible losses of goodwill, and the valuation of all these consequences.\n\nContinuing with the decision concerning whether to do more research on medicine AntiAnyVir: If you do decide to continue research on AntiAnyVir, (a) you may, or (b) you may not, come up with an important general treatment for viral infections within, say, the next 3 years. If you do come up with such a general treatment, of course it will have very great social benefits. Furthermore, (c) if you decide not to do further research on AntiAnyVir now, you can direct your time and that of other people to research in other directions, with some chance that the other research will produce a less-general but nevertheless useful treatment for some relatively infrequent viral infections. Those three possibilities have different social benefits. The probability that medicine AntiAnyVir really has some benefit in treating COVID19, as judged by your prior research, obviously will influence your decision on whether or not to do more research on medicine AntiAnyVir. But that judgment about the probability is only one part of the overall web of consequences and evaluations that must be taken into account when making your decision whether or not to do further research on medicine AntiAnyVir.\nWhy does this book limit itself to the specific probability questions when ultimately we are interested in decisions? A first reason is division of labor. The more general aspects of the decision-making process in the face of uncertainty are treated well in other books. This book’s special contribution is its new approach to the crucial process of estimating the chances that an event will occur.\nSecond, the specific elements of the overall decision-making process taught in this book belong to the interrelated subjects of probability theory and statistics . Though probabilistic and statistical theory ultimately is intended to be part of the general decision-making process, often only the estimation of probabilities is done systematically, and the rest of the decision-making process — for example, the decision whether or not to proceed with further research on medicine AntiAnyVir — is done in informal and unsystematic fashion. This is regrettable, but the fact that this is standard practice is an additional reason why the treatment of statistics and probability in this book is sufficiently complete.\nA third reason that this book covers only statistics and not numerical reasoning about decisions is because most college and university statistics courses and books are limited to statistics." + }, + { + "objectID": "intro.html#limitations-of-probability-and-statistics", + "href": "intro.html#limitations-of-probability-and-statistics", + "title": "1  Introduction", + "section": "1.5 Limitations of probability and statistics", + "text": "1.5 Limitations of probability and statistics\nStatistical testing is not equivalent to research, and research is not the same as statistical testing. 
Rather, statistical inference is a handmaiden of research, often but not always necessary in the research process.\nA working knowledge of the basic ideas of statistics, especially the elements of probability, is unsurpassed in its general value to everyone in a modern society. Statistics and probability help clarify one’s thinking and improve one’s capacity to deal with practical problems and to understand the world. To be efficient, a social scientist or decision-maker is almost certain to need statistics and probability.\nOn the other hand, important research and top-notch decision-making have been done by people with absolutely no formal knowledge of statistics. And a limited study of statistics sometimes befuddles students into thinking that statistical principles are guides to research design and analysis. This mistaken belief only inhibits the exercise of sound research thinking. Alfred Kinsey long ago put it this way:\n\n… no statistical treatment can put validity into generalizations which are based on data that were not reasonably accurate and complete to begin with. It is unfortunate that academic departments so often offer courses on the statistical manipulation of human material to students who have little understanding of the problems involved in securing the original data. … When training in these things replaces or at least precedes some of the college courses on the mathematical treatment of data, we shall come nearer to having a science of human behavior. (Kinsey, Pomeroy, and Martin 1948, p 35).\n\nIn much — even most — research in social and physical sciences, statistical testing is not necessary. Where there are large differences between different sorts of circumstances for example, if a new medicine cures 90 patients out of 100 and the old medicine cures only 10 patients out of 100 — we do not need refined statistical tests to tell us whether or not the new medicine really has an effect. And the best research is that which shows large differences, because it is the large effects that matter. If the researcher finds that s/he must use refined statistical tests to reveal whether there are differences, this sometimes means that the differences do not matter much.\nTo repeat, then, some or even much research — especially in the physical and biological sciences — does not need the kind of statistical manipulation that will be described in this book. But most decision problems do need the kind of probabilistic and statistical input that is described in this book.\nAnother matter: If the raw data are of poor quality, probabilistic and statistical manipulation cannot be very useful. In the example of the contractor and her ambulances, if the contractor’s estimate that a given ambulance has a one-in-ten chance of being unfit for service out-of-order on a given day is very inaccurate, then our calculation of the probability that three or more ambulances will be out of order on a given day will not be helpful, and may be misleading. To put it another way, one cannot make bread without flour, yeast, and water. And good raw data are the flour, yeast and water necessary to get an accurate estimate of a probability. The most refined statistical and probabilistic manipulations are useless if the input data are poor — the result of unrepresentative samples, uncontrolled experiments, inaccurate measurement, and the host of other ways that information gathering can go wrong. (See Simon and Burstein (1985) for a catalog of the obstacles to obtaining good data.) 
Therefore, we should constantly direct our attention to ensuring that the data upon which we base our calculations are the best it is possible to obtain." + }, + { + "objectID": "intro.html#sec-stats-difficult", + "href": "intro.html#sec-stats-difficult", + "title": "1  Introduction", + "section": "1.6 Why is Statistics Such a Difficult Subject?", + "text": "1.6 Why is Statistics Such a Difficult Subject?\nWhy is statistics such a tough subject for so many people?\n“Among mathematicians and statisticians who teach introductory statistics, there is a tendency to view students who are not skillful in mathematics as unintelligent,” say two of the authors of a popular introductory text (McCabe and McCabe 1989, p 2). As these authors imply, this view is out-and-out wrong; lack of general intelligence on the part of students is not the root of the problem.\nScan this book and you will find almost no formal mathematics. Yet nearly every student finds the subject very difficult — as difficult as anything taught at universities. The root of the difficulty is that the subject matter is extremely difficult. Let’s find out why .\nIt is easy to find out with high precision which movie is playing tonight at the local cinema; you can look it up on the web or call the cinema and ask. But consider by contrast how difficult it is to determine with accuracy:\n\nWhether we will save lives by recommending vitamin D supplements for the whole population as protection against viral infections. Some evidence suggests that low vitamin D levels predispose to more severe lung infections, and that taking supplements can help (Martineau et al. 2017). But, how certain can we be of the evidence? How safe are the supplements? Does the benefit, and the risk, differ by ethnicity?\nWhat will be the result of more than a hundred million Americans voting for president a month from now; the best attempt usually is a sample of 2000 people, selected in some fashion or another that is far from random, weeks before the election, asked questions that are by no means the same as the actual voting act, and so on;\nHow men feel about women and vice versa.\n\nThe cleverest and wisest people have pondered for thousands of years how to obtain answers to questions like these, and made little progress. Dealing with uncertainty was completely outside the scope of the ancient philosophers. It was not until two or three hundred years ago that people began to make any progress at all on these sorts of questions, and it was only about one century ago that we began to have reasonably competent procedures — simply because the problems are inherently difficult. So it is no wonder that the body of these methods is difficult.\nSo: The bad news is that the subject is extremely difficult. The good news is that you — and that means you — can understand it with hard thinking, even if you have no mathematical background beyond arithmetic and you think that you have no mathematical capability. That’s because the difficulty lies in such matters as pin-pointing the right question, but not in any difficulties of mathematical manipulation.\n\n\n\n\nKinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948. “Sexual Behavior in the Human Male.” W. B. Saunders Company. https://books.google.co.uk/books?id=pfMKrY3VvigC.\n\n\nMartineau, Adrian R, David A Jolliffe, Richard L Hooper, Lauren Greenberg, John F Aloia, Peter Bergman, Gal Dubnov-Raz, et al. 2017. 
“Vitamin D Supplementation to Prevent Acute Respiratory Tract Infections: Systematic Review and Meta-Analysis of Individual Participant Data.” Bmj 356.\n\n\nMcCabe, George P, and Linda Doyle McCabe. 1989. Instructor’s Guide with Solutions for Introduction to the Practice of Statistics. New York: W. H. Freeman.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research Methods in Social Science. 3rd ed. New York: Random House.\n\n\nZhou, Qixing, Christopher E Gibson, and Robert H Foy. 2000. “Long-Term Changes of Nitrogen and Phosphorus Loadings to a Large Lake in North-West Ireland.” Water Research 34 (3): 922–26. https://doi.org/10.1016/S0043-1354(99)00199-2." + }, + { + "objectID": "resampling_method.html#the-resampling-approach-in-action", + "href": "resampling_method.html#the-resampling-approach-in-action", + "title": "2  The resampling method", + "section": "2.1 The resampling approach in action", + "text": "2.1 The resampling approach in action\nRecall the problem from section Section 1.2 in which the contractor owns 20 ambulances:\n\nYou are the manager and part owner of one of several contractors providing ambulances to a hospital. You own 20 ambulances. Based on past experience, the chance that any one ambulance will be unfit for service on any given day is about one in ten. You want to know the chance on a particular day — tomorrow — that three or more of them will be out of action.\n\nThe resampling approach produces the estimate as follows.\n\n2.1.1 Randomness from physical methods\nWe collect 10 coins, and mark one of them with a pen or pencil or tape as being the coin that represents “out-of-order;” the other nine coins stand for “in operation”. For any one ambulance, this set of 10 coins provides a “model” for the one-in-ten chance — a probability of .10 (10 percent) — of it being out of order on a given day. We put the coins into a little jar or bucket.\nFor ambulance #1, we draw a single coin from the bucket. This coin represents whether that ambulance is going to be broken tomorrow. After replacing the coin and shaking the bucket, we repeat the same procedure for ambulance #2, ambulance #3 and so forth. Having repeated the procedure 20 times, we now have a representation of all ambulances for a single day.\nWe can now repeat this whole process as many times as we like: each time, we come up with a representation for a different day, telling us how many ambulances will be out-of-service on that day.\nAfter collecting evidence for, say, 50 experimental days we determine the proportion of the experimental days on which three or more ambulances are out of order. That proportion is an estimate of the probability that three or more ambulances will be out of order on a given day — the answer we seek. This procedure is an example of Monte Carlo simulation, which is the heart of the resampling method of statistical estimation.\nA more direct way to answer this question would be to examine the firm’s actual records for the past 100 days or, better, 500 days (if that’s available) to determine how many days had three or more ambulances out of order. But the resampling procedure described above gives us an estimate even if we do not have such long-term information. This is realistic; it is frequently the case in the real world that we must make estimates on the basis of insufficient history about an event.\nA quicker resampling method than the coins could be obtained with 20 ten-sided dice or spinners (like those found in the popular Dungeons & Dragons games). 
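The coin-and-bucket procedure also has a direct code analogue. The lines below are only a sketch of one simulated day, using made-up labels for the two kinds of coin; the full simulation is built up step by step in Section 2.3:

# The bucket: nine "in operation" coins and one "out-of-order" coin.
bucket <- c(rep("working", 9), "out-of-order")
# Draw one coin per ambulance, 20 times, replacing the coin after
# each draw so every ambulance faces the same one-in-ten chance.
day <- sample(bucket, 20, replace=TRUE)
# How many ambulances came up "out-of-order" on this simulated day?
sum(day == "out-of-order")

Repeating this block for many simulated days, and counting the proportion of days with three or more breakdowns, gives the same kind of estimate as the physical procedure.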
For each die, we identify one of its ten sides as “out-of-order”.\nFunnily enough, standard 10-sided dice have the numbers 0 through 9 on their faces, rather than 1 through 10. Figure 2.1 shows a standard 10-sided die:\n\n\n\nFigure 2.1: 10-sided die\n\n\nWe decide, arbitrarily, that the 9 side means “out-of-order”. We could even put a little bit of paint on the 9 side to remind us. The die represents an ambulance. If we roll the die, and get this face, this indicates that the ambulance was out of order. If we get any of the other faces — 0 through 8 — this ambulance was in working order. A single throw of all 20 dice will be our experimental trial that represents a single day; we just have to count whether three or more ambulances turn up “out of order”. Figure 2.2 show the result of one trial — throwing 20 dice:\n\n\n\nFigure 2.2: 20 10-sided dice\n\n\nAs you can see, the trial in Figure 2.2 gave us a single 9, so there was only one ambulance out of order.\nIn a hundred quick throws of the 20 dice — which probably takes less than 5 minutes — we can get a fast and reasonably accurate answer to our question." + }, + { + "objectID": "resampling_method.html#sec-randomness-computer", + "href": "resampling_method.html#sec-randomness-computer", + "title": "2  The resampling method", + "section": "2.2 Randomness from your computer", + "text": "2.2 Randomness from your computer\nComputers make it easy to generate random numbers for resampling.\n\n\n\n\n\n\nWhat do we mean by random?\n\n\n\nRandom numbers are numbers where it is impossible to predict which number is coming next. If we ask the computer for a number between 0 and 9, we will get one of the numbers 0 though 9, but we cannot do any better than that in predicting which number it will give us. There is an equal (10%) chance we will get any of the numbers 0 through 9 — just as there is when we roll a fair 10-sided die. We will go into more detail about what exactly we mean by random and chance later in the book (Section 3.8).\n\n\n\nWe can use random numbers from computers to simulate our problem. For example, we can ask the computer to choose a random number between 0 and 9 to represent one ambulance. Let’s say the number 9 represents “out-of-order” and 0 through 8 “in operation”, then any one random number gives us a trial observation for a single ambulance. To get an experimental trial for a single day we look at 20 numbers and count how many of them are 9. We then look at, say, one hundred sets of 20 numbers and count the proportion of sets whose 20 numbers show three or more ambulances being “out-of-order”. Once again, that proportion estimates the probability that three or more ambulances will be out-of-order on any given day.\nSoon we will do all these steps with some R code, but for now, consider Table Table 2.1. In each row, we placed 20 numbers, each one representing an ambulance. 
We added 25 such rows, each representing a simulation of one day.\n\n\n\n\nTable 2.1: 25 simulations of 20 ambulances, with counts \n\n\n\nA1\nA2\nA3\nA4\nA5\nA6\nA7\nA8\nA9\nA10\nA11\nA12\nA13\nA14\nA15\nA16\nA17\nA18\nA19\nA20\n\n\n\n\nDay 1\n5\n4\n4\n5\n9\n8\n2\n9\n1\n5\n8\n2\n1\n8\n2\n6\n6\n5\n0\n5\n\n\nDay 2\n2\n7\n4\n4\n6\n3\n9\n5\n2\n5\n8\n1\n2\n5\n4\n9\n0\n5\n8\n4\n\n\nDay 3\n5\n9\n1\n2\n8\n7\n5\n3\n8\n9\n2\n6\n9\n0\n7\n2\n5\n2\n2\n2\n\n\nDay 4\n2\n4\n7\n6\n0\n4\n5\n1\n3\n7\n6\n3\n2\n9\n5\n8\n0\n6\n0\n4\n\n\nDay 5\n7\n4\n8\n9\n1\n5\n1\n2\n3\n6\n4\n8\n5\n1\n7\n5\n0\n9\n8\n7\n\n\nDay 6\n7\n3\n9\n1\n7\n7\n9\n9\n6\n8\n4\n7\n7\n2\n0\n2\n4\n6\n9\n2\n\n\nDay 7\n3\n9\n5\n3\n7\n1\n3\n0\n8\n0\n0\n3\n3\n0\n0\n3\n8\n6\n4\n6\n\n\nDay 8\n0\n4\n6\n7\n9\n7\n1\n9\n8\n1\n8\n7\n0\n4\n4\n7\n0\n5\n6\n1\n\n\nDay 9\n0\n9\n0\n7\n0\n1\n6\n0\n8\n6\n0\n3\n1\n9\n8\n3\n1\n2\n7\n8\n\n\nDay 10\n8\n6\n1\n0\n8\n3\n4\n5\n8\n8\n4\n9\n1\n0\n8\n6\n9\n2\n0\n7\n\n\nDay 11\n7\n0\n0\n7\n9\n2\n3\n0\n0\n0\n5\n5\n4\n0\n1\n7\n8\n2\n0\n8\n\n\nDay 12\n3\n2\n2\n4\n6\n3\n9\n6\n8\n8\n7\n6\n6\n4\n3\n8\n7\n0\n4\n3\n\n\nDay 13\n4\n2\n6\n9\n0\n0\n8\n5\n3\n1\n5\n1\n8\n7\n6\n8\n3\n6\n3\n5\n\n\nDay 14\n3\n1\n2\n4\n3\n1\n6\n2\n9\n5\n2\n4\n0\n6\n1\n9\n0\n7\n9\n4\n\n\nDay 15\n2\n0\n1\n5\n8\n5\n8\n1\n3\n2\n2\n7\n8\n2\n2\n1\n2\n9\n2\n5\n\n\nDay 16\n9\n9\n6\n0\n6\n3\n3\n2\n6\n8\n3\n9\n0\n5\n7\n8\n8\n3\n8\n6\n\n\nDay 17\n8\n3\n0\n0\n1\n5\n3\n7\n0\n9\n6\n4\n1\n2\n5\n0\n1\n8\n7\n1\n\n\nDay 18\n7\n1\n2\n6\n4\n3\n0\n0\n7\n5\n6\n2\n9\n2\n8\n0\n3\n1\n9\n1\n\n\nDay 19\n5\n6\n5\n9\n8\n4\n3\n0\n6\n7\n4\n9\n4\n2\n0\n6\n1\n0\n4\n1\n\n\nDay 20\n0\n5\n5\n9\n9\n4\n3\n4\n1\n6\n9\n2\n4\n3\n1\n8\n6\n8\n0\n2\n\n\nDay 21\n4\n1\n0\n1\n5\n1\n6\n4\n8\n5\n2\n1\n5\n8\n6\n2\n0\n5\n2\n6\n\n\nDay 22\n8\n5\n2\n0\n3\n5\n0\n9\n0\n4\n2\n8\n1\n1\n5\n7\n1\n4\n7\n5\n\n\nDay 23\n1\n0\n8\n5\n4\n7\n5\n2\n8\n7\n2\n6\n4\n4\n3\n5\n6\n5\n5\n7\n\n\nDay 24\n9\n5\n7\n9\n6\n3\n4\n7\n7\n2\n5\n2\n0\n0\n9\n1\n9\n5\n2\n8\n\n\nDay 25\n6\n0\n9\n4\n8\n3\n4\n8\n0\n8\n8\n7\n1\n0\n7\n3\n4\n7\n5\n1\n\n\n\n\n\n\n\n\nTo know how many ambulances were “out of order” on any given day, we count number of ones in that row. 
We place the counts in the final column called “#9” (for “number of nines”):\n\n\n\n\nTable 2.2: 25 simulations of 20 ambulances, with counts \n\n\n\nA1\nA2\nA3\nA4\nA5\nA6\nA7\nA8\nA9\nA10\nA11\nA12\nA13\nA14\nA15\nA16\nA17\nA18\nA19\nA20\n#9\n\n\n\n\nDay 1\n5\n4\n4\n5\n9\n8\n2\n9\n1\n5\n8\n2\n1\n8\n2\n6\n6\n5\n0\n5\n2\n\n\nDay 2\n2\n7\n4\n4\n6\n3\n9\n5\n2\n5\n8\n1\n2\n5\n4\n9\n0\n5\n8\n4\n2\n\n\nDay 3\n5\n9\n1\n2\n8\n7\n5\n3\n8\n9\n2\n6\n9\n0\n7\n2\n5\n2\n2\n2\n3\n\n\nDay 4\n2\n4\n7\n6\n0\n4\n5\n1\n3\n7\n6\n3\n2\n9\n5\n8\n0\n6\n0\n4\n1\n\n\nDay 5\n7\n4\n8\n9\n1\n5\n1\n2\n3\n6\n4\n8\n5\n1\n7\n5\n0\n9\n8\n7\n2\n\n\nDay 6\n7\n3\n9\n1\n7\n7\n9\n9\n6\n8\n4\n7\n7\n2\n0\n2\n4\n6\n9\n2\n4\n\n\nDay 7\n3\n9\n5\n3\n7\n1\n3\n0\n8\n0\n0\n3\n3\n0\n0\n3\n8\n6\n4\n6\n1\n\n\nDay 8\n0\n4\n6\n7\n9\n7\n1\n9\n8\n1\n8\n7\n0\n4\n4\n7\n0\n5\n6\n1\n2\n\n\nDay 9\n0\n9\n0\n7\n0\n1\n6\n0\n8\n6\n0\n3\n1\n9\n8\n3\n1\n2\n7\n8\n2\n\n\nDay 10\n8\n6\n1\n0\n8\n3\n4\n5\n8\n8\n4\n9\n1\n0\n8\n6\n9\n2\n0\n7\n2\n\n\nDay 11\n7\n0\n0\n7\n9\n2\n3\n0\n0\n0\n5\n5\n4\n0\n1\n7\n8\n2\n0\n8\n1\n\n\nDay 12\n3\n2\n2\n4\n6\n3\n9\n6\n8\n8\n7\n6\n6\n4\n3\n8\n7\n0\n4\n3\n1\n\n\nDay 13\n4\n2\n6\n9\n0\n0\n8\n5\n3\n1\n5\n1\n8\n7\n6\n8\n3\n6\n3\n5\n1\n\n\nDay 14\n3\n1\n2\n4\n3\n1\n6\n2\n9\n5\n2\n4\n0\n6\n1\n9\n0\n7\n9\n4\n3\n\n\nDay 15\n2\n0\n1\n5\n8\n5\n8\n1\n3\n2\n2\n7\n8\n2\n2\n1\n2\n9\n2\n5\n1\n\n\nDay 16\n9\n9\n6\n0\n6\n3\n3\n2\n6\n8\n3\n9\n0\n5\n7\n8\n8\n3\n8\n6\n3\n\n\nDay 17\n8\n3\n0\n0\n1\n5\n3\n7\n0\n9\n6\n4\n1\n2\n5\n0\n1\n8\n7\n1\n1\n\n\nDay 18\n7\n1\n2\n6\n4\n3\n0\n0\n7\n5\n6\n2\n9\n2\n8\n0\n3\n1\n9\n1\n2\n\n\nDay 19\n5\n6\n5\n9\n8\n4\n3\n0\n6\n7\n4\n9\n4\n2\n0\n6\n1\n0\n4\n1\n2\n\n\nDay 20\n0\n5\n5\n9\n9\n4\n3\n4\n1\n6\n9\n2\n4\n3\n1\n8\n6\n8\n0\n2\n3\n\n\nDay 21\n4\n1\n0\n1\n5\n1\n6\n4\n8\n5\n2\n1\n5\n8\n6\n2\n0\n5\n2\n6\n0\n\n\nDay 22\n8\n5\n2\n0\n3\n5\n0\n9\n0\n4\n2\n8\n1\n1\n5\n7\n1\n4\n7\n5\n1\n\n\nDay 23\n1\n0\n8\n5\n4\n7\n5\n2\n8\n7\n2\n6\n4\n4\n3\n5\n6\n5\n5\n7\n0\n\n\nDay 24\n9\n5\n7\n9\n6\n3\n4\n7\n7\n2\n5\n2\n0\n0\n9\n1\n9\n5\n2\n8\n4\n\n\nDay 25\n6\n0\n9\n4\n8\n3\n4\n8\n0\n8\n8\n7\n1\n0\n7\n3\n4\n7\n5\n1\n1\n\n\n\n\n\n\n\n\nEach value in the last column of Table Table 2.2 is the count of 9s in that row and, therefore, the result from our simulation of one day.\nWe can estimate how often three or more ambulances would break down by looking for values of three or greater in the last column. We find there are 6 rows with three or more in the last column. Finally we divide this number of rows by the number of trials (25) to get an estimate of the proportion of days with three or more breakdowns. The result is 0.24." + }, + { + "objectID": "resampling_method.html#solving-the-problem-using", + "href": "resampling_method.html#solving-the-problem-using", + "title": "2  The resampling method", + "section": "2.3 Solving the problem using R", + "text": "2.3 Solving the problem using R\nHere we rush ahead to show you how to do this simulation in R.\nWe go through the R code for the simulation, but we don’t expect you to understand all of it right now. The rest of this book goes into more detail on reading and writing R code, and how you can use R to build your own simulations. 
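As a tiny preview, the manual count we just made on Table 2.2 (6 days out of 25) can itself be written as a couple of lines of R; the counts below are typed in by hand from the "#9" column of the table:

# The "#9" column of Table 2.2, typed in by hand.
counts <- c(2, 2, 3, 1, 2, 4, 1, 2, 2, 2, 1, 1, 1, 3, 1,
            3, 1, 2, 2, 3, 0, 1, 0, 4, 1)
# How many of the 25 simulated days had three or more breakdowns?
sum(counts >= 3)
# The proportion of such days, matching the 0.24 estimate above.
sum(counts >= 3) / 25

Now, back to the full simulation.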
Here we just want to show you what this code looks like, to give you an idea of where we are headed.\nWhile you can run the code below on your own computer, for now we only need you to read it and follow along; the text explains what each line of code does.\n\n\n\n\n\n\nComing back to the example\n\n\n\nIf you are interested, you can come back to this example later, and run it for yourself. To do this, we recommend you read Chapter 4 that explains how to execute notebooks online or on your own computer.\n\n\n\nStart of ambulances notebook\n\nDownload notebook\nInteract\n\n\nThe first thing to say about the code you will see below is there are some lines that do not do anything; these are the lines beginning with a # character (read # as “hash”). Lines beginning with # are called comments. When R sees a # at the start of a line, it ignores everything else on that line, and skips to the next. Here’s an example of a comment:\n\n# R will completely ignore this text.\n\nBecause R ignores lines beginning with #, the text after the # is just for us, the humans reading the code. The person writing the code will often use comments to explain what the code is doing.\nOur next task is to use R to simulate a single day of ambulances. We will again represent each ambulance by a random number from 0 through 9. 20 of these numbers represents a simulation of all 20 ambulances available to the contractor. We call a simulation of all ambulances for a specific day one trial.\nRecall that we want twenty 10-sided dice — one per ambulance. Our dice should be 10-sided, because each ambulance has a 1-in-10 chance of being out of order.\nThe program to simulate one trial of the ambulances problem therefore begins with these commands:\n\n# Ask R to generate 20 numbers from 0 through 9.\n\n# These are the numbers we will ask R to select from.\nnumbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)\n\n# Get 20 values from the *numbers* sequence.\n# Store the 20 numbers with the name \"a\"\n# We will explain the replace=TRUE later.\na <- sample(numbers, 20, replace=TRUE)\n\n# The result is a sequence of 20 numbers.\na\n\n [1] 6 4 5 3 5 8 4 4 7 1 6 4 4 1 5 3 1 2 8 5\n\n\nThe commands above ask the computer to store the results of the random drawing in a location in the computer’s memory to which we give a name such as “a” or “ambulances” or “aardvark” — the name is up to us.\nNext, we need to count the number of defective ambulances:\n\n# Count the number of nines in the random numbers.\n# The \"a == 9\" part identifies all the numbers equal to 9.\n# The \"sum\" part counts how many numbers \"a == 9\" found.\nb <- sum(a == 9)\n# Show the result\nb\n\n[1] 0\n\n\n\n\n\n\n\n\nCounting sequence elements\n\n\n\nWe see that the code uses:\n\nsum(a == 9)\n\n[1] 0\n\n\nWhat exactly happens here under the hood? First a == 9 creates an sequence of values that only contains\nTRUE or FALSE\nvalues, depending on whether each element is equal to 9 or not.\nThen, we ask R to add up (sum). R counts TRUE as 1, and FALSE as 0; thus we can use sum to count the number of TRUE values.\nThis comes down to asking “how many elements in a are equal to 9”.\nDon’t worry, we will go over this again in the next chapter.\n\n\nThe sum command is a counting operation. It asks the computer to count the number of 9s among the twenty numbers that are in location a following the random draw carried out by the sample operation. 
The result of the sum operation will be somewhere between 0 and 20, the number of simulated ambulances that were out-of-order on a given simulated day. The result is then placed in another location in the computer’s memory that we label b.\nAbove you see that we have worked out how to tell the computer to do a single trial — one simulated day.\n\n2.3.1 Repeating trials\nWe could run the code above for one trial over and over, and write down the result on a piece of paper. If we did this 100 times we would have 100 counts of the number of simulated ambulances that had broken down for each simulated day. To answer our question, we will then count the number of times the count was more than three, and divide by 100, to get an estimate of the proportion of days with more than three out-of-order ambulances.\nOne of the great things about the computer is that it is very good at repeating tasks many times, so we do not have to. Our next task is to ask the computer to repeat the single trial many times — say 1000 times — and count up the results for us.\nOf course R is very good at repeating things, but the instructions to tell R to repeat things will take a little while to get used to. Soon, we will spend some time going over it in more detail. For now though, we show you how what it looks like, and ask you to take our word for it.\nThe standard way to repeat steps in R is a for loop. For example, let us say we wanted to display “Hello” five times. Here is how we would do that with a for loop:\n\n# Read the next line as \"repeat the following steps five times\".\nfor (i in 1:5) {\n # The stuff between the curly brackets is the code we\n # repeat five times.\n # Print \"Hello\" to the screen.\n message(\"Hello\")\n}\n\nHello\nHello\nHello\nHello\nHello\n\n\nYou can probably see where we are going here. We are going to put the code for one trial inside a for loop, to repeat that trial code many times.\nOur next job is to store the results of each trial. If we are going to run 1000 trials, we need to store 1000 results.\nTo do this, we start with a sequence of 1000 zeros, that we will fill in later, like this:\n\n# Ask R to make a sequence of 1000 zeros that we will use\n# to store the results of our 1000 trials.\n# Call this sequence \"z\"\nz <- numeric(1000)\n\nFor now, z contains 1000 zeros, but we will soon use a for loop to execute 1000 trials. For each trial we will calculate our result (the number of broken-down ambulances), and we will store the result in the z store. We end up with 1000 trial results stored in z.\nWith these parts, we are now ready to solve the ambulance problem, using R.\n\n\n2.3.2 The solution\nThis is our big moment! Here we will combine the elements shown above to perform our ambulance simulation over, say, 1000 days. Just a quick reminder: we do not expect you to understand all the detail of the code below; we will cover that later. For now, see if you can follow along with the gist of it.\nTo solve resampling problems, we typically proceed as we have done above. We figure out the structure of a single trial and then place that trial in a for loop that executes it multiple times (once for each day, in our case).\nNow, let us apply this procedure to our ambulance problem. We simulate 1000 days. You will see that we have just taken the parts above, and put them together. The only new part here, is the step at the end, where we store the result of the trial. 
Bear with us for that; we will come to it soon.\n\n# Ask R to make a sequence of 1000 zeros that we will use\n# to store the results of our 1000 trials.\n# Call this sequence \"z\"\nz <- numeric(1000)\n\n# These are the numbers we will ask R to select from.\nnumbers <- 0:9\n\n# Read the next line as \"repeat the following steps 1000 times\".\nfor (i in 1:1000) {\n # The stuff between the curly brackets is the code we\n # repeat 1000 times.\n\n # Get 20 values from the *numbers* sequence.\n # Store the 20 numbers with the name \"a\"\n a <- sample(numbers, 20, replace=TRUE)\n\n # Count the number of nines in the random numbers.\n # The \"a == 9\" part identifies all the numbers equal to 9.\n # The \"sum\" part counts how many numbers \"a == 9\" found.\n b <- sum(a == 9)\n\n # Store the result from this trial in the sequence \"z\"\n z[i] <- b\n\n # Now go back and repeat the trial, until done.\n}\n\nThe z[i] <- b statement that follows the sum counting operation simply keeps track of the results of each trial, placing the number of defective ambulances for each trial inside the sequence called z. The sequence has 1000 positions: one for each trial.\nWhen we have run the code above, we have stored 1000 trial results in the sequence z. These are 1000 counts of out-of-order ambulances, one for each of our simulated days. Our last task is to calculate the proportion of these days for which we had more than three broken-down ambulances.\nSince our aim is to count the number of days in which more than 3 (4 or more) defective ambulances occur, we use another counting sum command at the end of the 1000 trials. This command counts how many times more than 3 defects occurred in the 1000 days recorded in our z sequence, and we place the result in another location, k. This gives us the total number of days where 4 or more defective ambulances are seen to occur. Then we divide the number in k by 1000, the number of trials. Thus we obtain an estimate of the chance, expressed as a probability between 0 and 1, that 4 or more ambulances will be defective on a given day. And we store that result in a location that we call kk, which R subsequently prints to the screen.\n\n# How many trials resulted in more than 3 ambulances out of order?\nk <- sum(z > 3)\n\n# Convert to a proportion.\nkk <- k / 1000\n\n# Show the result.\nmessage(kk)\n\n0.14\n\n\nThis is the estimate we wanted; the proportion of days where more than three ambulances were out of action.\nWe have crept up on the solution, so it might not be clear to you how few steps you needed to do this task. Here is the whole solution to the problem, without the comments:\n\nz <- numeric(1000)\nnumbers <- 0:9\n\nfor (i in 1:1000) {\n a <- sample(numbers, 20, replace=TRUE)\n b <- sum(a == 9)\n z[i] <- b\n}\n\nk <- sum(z > 3)\nkk <- k / 1000\nmessage(kk)\n\n0.141\n\n\nEnd of ambulances notebook\n\n\nNotice that the code above is exactly the same as the code we built up in steps. But notice too, that the answer we got from this code was slightly different from the answer we got first.\nWhy did we get a different answer from the same code?\n\n\n\n\n\n\nRandomness in estimates\n\n\n\nThis is an essential point — our code uses random numbers to get an estimate of the quantity we want — in this case, the probability of three or more ambulances being out of order. Every run of our code will use a different set of random numbers. Therefore, every run of our code will give us a very slightly different number. 
As you will soon see, we can make our estimate more and more accurate, and less and less different between each run, by doing many trials in each run. Here we did 1000 trials, but we will usually do 10000 trials, to give us a good estimate, that does not vary much from run to run.\n\n\nDon’t worry about the detail of how each of these commands works — we will cover those details gradually, over the next few chapters. But, we hope that you can see, in principle, how each of the operations that the computer carries out are analogous to the operations that you yourself executed when you solved this problem using the equivalent of a ten-sided die. This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with.\nWhile writing programs like these take a bit of getting used to, it is vastly simpler than the older, more conventional approaches to such problems routinely taught to students." + }, + { + "objectID": "resampling_method.html#sec-resamp-differs", + "href": "resampling_method.html#sec-resamp-differs", + "title": "2  The resampling method", + "section": "2.4 How resampling differs from the conventional approach", + "text": "2.4 How resampling differs from the conventional approach\nIn the standard approach the student learns to choose and solve a formula. Doing the algebra and arithmetic is quick and easy. The difficulty is in choosing the correct formula. Unless you are a professional mathematician, it may take you quite a while to arrive at the correct formula — considerable hard thinking, and perhaps some digging in textbooks. More important than the labor, however, is that you may come up with the wrong formula, and hence obtain the wrong answer. And how would you know if you were wrong?\nMost students who have had a standard course in probability and statistics are quick to tell you that it is not easy to find the correct formula, even immediately after finishing a course (or several courses) on the subject. After leaving school or university, it is harder still to choose the right formula. Even many people who have taught statistics at the university level (including this writer) must look at a book to get the correct formula for a problem as simple as the ambulances, and then we are often still not sure we have the right answer. This is the grave disadvantage of the standard approach.\nIn the past few decades, resampling and other Monte Carlo simulation methods have come to be used extensively in scientific research. But in contrast to the material in this book, simulation has mostly been used in situations so complex that mathematical methods have not yet been developed to handle them. Here are examples of such situations:\n\n\nFor a flight to Mars, calculating the correct route involves a great many variables, too many to solve with formulas. Hence, the Monte Carlo simulation method is used.\nThe Navy might want to know how long the average ship will have to wait for dock facilities. The time of completion varies from ship to ship, and the number of ships waiting in line for dock work varies over time. This problem can be handled quite easily with the experimental simulation method, but formal mathematical analysis would be difficult or impossible.\nWhat are the best tactics in baseball? Should one bunt? Should one put the best hitter up first, or later? 
By trying out various tactics with dice or random numbers, Earnshaw Cook (in his book Percentage Baseball), found that it is best never to bunt, and the highest-average hitter should be put up first, in contrast to usual practice. Finding this answer would have been much more difficult with the analytic method.\n\nWhich search pattern will yield the best results for a ship searching for a school of fish? Trying out “models” of various search patterns with simulation can provide a fast answer.\nWhat strategy in the game of Monopoly will be most likely to win? The simulation method systematically plays many games (with a computer) testing various strategies to find the best one.\n\nBut those five examples are all complex problems. This book and its earlier editions break new ground by using this method for simple rather than complex problems , especially in statistics rather than pure probability, and in teaching beginning rather than advanced students to solve problems this way. (Here it is necessary to emphasize that the resampling method is used to solve the problems themselves rather than as a demonstration device to teach the notions found in the standard conventional approach . Simulation has been used in elementary courses in the past, but only to demonstrate the operation of the analytical mathematical ideas. That is very different than using the resampling approach to solve statistics problems themselves, as is done here.)\nOnce we get rid of the formulas and tables, we can see that statistics is a matter of clear thinking, not fancy mathematics . Then we can get down to the business of learning how to do that clear statistical thinking, and putting it to work for you. The study of probability is purely mathematics (though not necessarily formulas) and technique. But statistics has to do with meaning . For example, what is the meaning of data showing an association just discovered between a type of behavior and a disease? Of differences in the pay of men and women in your firm? Issues of causation, acceptability of control, and design of experiments cannot be reduced to technique. This is “philosophy” in the fullest sense. Probability and statistics calculations are just one input. Resampling simulation enables us to get past issues of mathematical technique and focus on the crucial statistical elements of statistical problems.\nWe hope you will find, as you read through the chapters, that the resampling way of thinking is a good way to think about the more traditional statistical methods that some of you may already know. Our approach will be to use resampling to understand the ideas, and then apply this understanding to reason about traditional methods. You may also find that the resampling methods are not only easier to understand — they are often more useful, because they are so general in their application." + }, + { + "objectID": "what_is_probability.html#introduction", + "href": "what_is_probability.html#introduction", + "title": "3  What is probability?", + "section": "3.1 Introduction", + "text": "3.1 Introduction\nThe central concept for dealing with uncertainty is probability. Hence we must inquire into the “meaning” of the term probability. (The term “meaning” is in quotes because it can be a confusing word.)\nYou have been using the notion of probability all your life when drawing conclusions about what you expect to happen, and in reaching decisions in your public and personal lives.\nYou wonder: Will the kick from the 45 yard line go through the uprights? 
How much oil can you expect from the next well you drill, and what value should you assign to that prospect? Will you make money if you invest in tech stocks for the medium term, or should you spread your investments across the stock market? Will the next Space-X launch end in disaster? Your answers to these questions rest on the probabilities you estimate.\nAnd you act on the basis of probabilities: You pay extra for a low-interest loan, if you think that interest rates are going to go up. You bet heavily on a poker hand if there is a high probability that you have the best hand. A hospital decides not to buy another ambulance when the administrator judges that there is a low probability that all the other ambulances will ever be in use at once. NASA decides whether or not to send off the space shuttle this morning as scheduled.\nThe idea of probability is essential when we reason about uncertainty, and so this chapter discusses what is meant by such key terms as “probability,” “chance,” “sample,” and “universe.” It discusses the nature and the usefulness of the concept of probability as used in this book, and it touches on the source of basic estimates of probability that are the raw material of statistical inferences." }, { "objectID": "what_is_probability.html#the-meaning-of-probability", "href": "what_is_probability.html#the-meaning-of-probability", "title": "3  What is probability?", "section": "3.2 The “Meaning” of “Probability”", "text": "3.2 The “Meaning” of “Probability”\nProbability is difficult to define (Feller 1968), but here is a useful informal starting point:\n\nA probability is a number from 0 through 1 that reflects how likely it is that a particular event will happen.\n\nAny particular stated probability is an assertion that indicates how likely you believe it is that an event will occur.\nIf you give an event a probability of 0 you mean that you are certain it will not happen. If you give probability 1 to an event, you mean you are certain that it will happen. For example, if I give you one card from a deck that you know contains only the standard 52 cards — before you look at the card, you can give probability 0 to the card being a joker, because you are certain the pack does not contain any joker cards. If I then select only the 13 spades from that deck, and give you a card from that selection, you will say there is probability 1 that the card is a black card, because all the spades are black cards.\nA probability estimate of .2 indicates that you think there is twice as great a chance of the event happening as if you had estimated a probability of .1. This is the rock-bottom interpretation of the term “probability,” and the heart of the concept.\nThe idea of probability arises when you are not sure about what will happen in an uncertain situation. For example, you may lack information and therefore can only make an estimate. If someone asks you your name, you do not use the concept of probability to answer; you know the answer to a very high degree of surety. To be sure, there is some chance that you do not know your own name, but for all practical purposes you can be quite sure of the answer. If someone asks you who will win tomorrow’s baseball game, however, there is a considerable chance that you will be wrong no matter what you say. 
Whenever there is a reasonable chance that your prediction will be wrong, the concept of probability can help you.\nThe concept of probability helps you to answer the question, “How likely is it that…?” The purpose of the study of probability and statistics is to help you make sound appraisals of statements about the future, and good decisions based upon those appraisals. The concept of probability is especially useful when you have a sample from a larger set of data — a “universe” — and you want to know the probability of various degrees of likeness between the sample and the universe. (The universe of events you are sampling from is also called the “population,” a concept to be discussed below.) Perhaps the universe of your study is all high school graduates in 2018. You might then want to know, for example, the probability that the universe’s average SAT (university entrance) score will not differ from your sample’s average SAT by more than some arbitrary number of SAT points — say, ten points.\nWe have said that a probability statement is about the future. Well, usually. Occasionally you might state a probability about your future knowledge of past events — that is, “I think I’ll find out that…” — or even about the unknown past. (Historians use probabilities to measure their uncertainty about whether events occurred in the past, and the courts do, too, though the courts hesitate to say so explicitly.)\nSometimes one knows a probability, such as in the case of a gambler playing black on an honest roulette wheel, or an insurance company issuing a policy on an event with which it has had a lot of experience, such as a life insurance policy. But often one does not know the probability of a future event. Therefore, our concept of probability must include situations where extensive data are not available.\nAll of the many techniques used to estimate probabilities should be thought of as proxies for the actual probability. For example, if Mission Control at Space Central simulates what should and probably will happen in space if a valve is turned aboard a space craft just now being built, the test result on the ground is a proxy for the real probability of what will happen when the crew turn the valve in the planned mission.\nIn some cases, it is difficult to conceive of any data that can serve as a proxy. For example, the director of the CIA, Robert Gates, said in 1993 “that in May 1989, the CIA reported that the problems in the Soviet Union were so serious and the situation so volatile that Gorbachev had only a 50-50 chance of surviving the next three to four years unless he retreated from his reform policies” (The Washington Post , January 17, 1993, p. A42). Can such a statement be based on solid enough data to be more than a crude guess?\nThe conceptual probability in any specific situation is an interpretation of all the evidence that is then available . For example, a wise biomedical worker’s estimate of the chance that a given therapy will have a positive effect on a sick patient should be an interpretation of the results of not just one study in isolation, but of the results of that study plus everything else that is known about the disease and the therapy. A wise policymaker in business, government, or the military will base a probability estimate on a wide variety of information and knowledge. 
The same is even true of an insurance underwriter who bases a life-insurance or shipping-insurance rate not only on extensive tables of long-time experience but also on recent knowledge of other kinds. Each situation asks us to make a choice of the best method of estimating a probability — whether that estimate is objective — from a frequency series — or subjective, from the distillation of other experience." + }, + { + "objectID": "what_is_probability.html#the-nature-and-meaning-of-the-concept-of-probability", + "href": "what_is_probability.html#the-nature-and-meaning-of-the-concept-of-probability", + "title": "3  What is probability?", + "section": "3.3 The nature and meaning of the concept of probability", + "text": "3.3 The nature and meaning of the concept of probability\nIt is confusing and unnecessary to inquire what probability “really” is. (Indeed, the terms “really” and “is,” alone or in combination, are major sources of confusion in statistics and in other logical and scientific discussions, and it is often wise to avoid their use.) Various concepts of probability — which correspond to various common definitions of the term — are useful in particular contexts. This book contains many examples of the use of probability. Work with them will gradually develop a sound understanding of the concept.\nThere are two major concepts and points of view about probability — frequency and degrees of belief. Each is useful in some situations but not in others. Though they may seem incompatible in principle, there almost never is confusion about which is appropriate in a given situation.\n\nFrequency . The probability of an event can be said to be the proportion of times that the event has taken place in the past, usually based on a long series of trials. Insurance companies use this when they estimate the probability that a thirty-five-year-old teacher will die during a period for which he wants to buy an insurance policy. (Notice this shortcoming: Sometimes you must bet upon events that have never or only infrequently taken place before, and so you cannot reasonably reckon the proportion of times they occurred one way or the other in the past.)\nDegree of belief . The probability that an event will take place or that a statement is true can be said to correspond to the odds at which you would bet that the event will take place. (Notice a shortcoming of this concept: You might be willing to accept a five-dollar bet at 2-1 odds that your team will win the game, but you might be unwilling to bet a hundred dollars at the same odds.)\n\nSee (Barnett 1982, chap. 3) for an in-depth discussion of different approaches to probability.\nThe connection between gambling and immorality or vice troubles some people about gambling examples. On the other hand, the immediacy and consequences of the decisions that the gambler has to make give the subject a special tang. There are several reasons why statistics use so many gambling examples — and especially tossing coins, throwing dice, and playing cards:\n\nHistorical . The theory of probability began with gambling examples of dice analyzed by Cardano, Galileo, and then by Pascal and Fermat.\nGenerality . These examples are not related to any particular walk of life, and therefore they can be generalized to applications in any walk of life. Students in any field — business, medicine, science — can feel equally at home with gambling examples.\nSharpness . 
These examples are particularly stark, and unencumbered by the baggage of particular walks of life or special uses.\nUniversality . Many other texts use these same examples, and therefore the use of them connects up this book with the main body of writing about probability and statistics.\n\nOften we’ll begin with a gambling example and then consider an example in one of the professional fields — such as business and other decision-making activities, biostatistics and medicine, social science and natural science — and everyday living. People in one field often can benefit from examples in others; for example, medical students should understand the need for business decision-making in terms of medical practice, as well as the biostatistical examples. And social scientists should understand the decision-making aspects of statistics if they have any interest in the use of their work in public policy." + }, + { + "objectID": "what_is_probability.html#back-to-proxies", + "href": "what_is_probability.html#back-to-proxies", + "title": "3  What is probability?", + "section": "3.4 Back to Proxies", + "text": "3.4 Back to Proxies\nExample of a proxy: The “probability risk assessments” (PRAs) that are made for the chances of failures of nuclear power plants are based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. A PRA can cost a nuclear facility $5 million.\nAnother example: If a manager of a high-street store looks at the sales of a particular brand of smart watches in the last two Decembers, and on that basis guesses how likely it is that she will run out of stock if she orders 200 smart watches, then the last two years’ experience is serving as a proxy for future experience. If a sales manager just “intuits” that the odds are 3 to 1 (a probability of .75) that the main local competitor will not meet a price cut, then all her past experience summed into her intuition is a proxy for the probability that it will really happen. Whether any proxy is a good or bad one depends on the wisdom of the person choosing the proxy and making the probability estimates.\nHow does one estimate a probability in practice? This involves practical skills not very different from the practical skills required to estimate with accuracy the length of a golf shot, the number of carpenters you will need to build a house, or the time it will take you to walk to a friend’s house; we will consider elsewhere some ways to improve your practical skills in estimating probabilities. For now, let us simply categorize and consider in the next section various ways of estimating an ordinary garden variety of probability, which is called an “unconditional” probability." + }, + { + "objectID": "what_is_probability.html#sec-probability-ways", + "href": "what_is_probability.html#sec-probability-ways", + "title": "3  What is probability?", + "section": "3.5 The various ways of estimating probabilities", + "text": "3.5 The various ways of estimating probabilities\nConsider the probability of drawing an even-numbered spade from a deck of poker cards (consider the queen as even and the jack and king as odd). Here are several general methods of estimation, where we define each method in terms of the operations we use to make the estimate:\n\nExperience.\nThe first possible source for an estimate of the probability of drawing an even-numbered spade is the purely empirical method of experience . 
If you have watched card games casually from time to time, you might simply guess at the proportion of times you have seen even-numbered spades appear — say, “about 1 in 15” or “about 1 in 9” (which is almost correct) or something like that. (If you watch long enough you might come to estimate something like 6 in 52.)\nGeneral information and experience are also the source for estimating the probability that the sales of a particular brand of smart watch this December will be between 200 and 250, based on sales the last two Decembers; that your team will win the football game tomorrow; that war will break out next year; or that a United States astronaut will reach Mars before a Russian astronaut. You simply put together all your relevant prior experience and knowledge, and then make an educated guess.\nObservation of repeated events can help you estimate the probability that a machine will turn out a defective part or that a child can memorize four nonsense syllables correctly in one attempt. You watch repeated trials of similar events and record the results.\nData on the mortality rates for people of various ages in a particular country in a given decade are the basis for estimating the probabilities of death, which are then used by the actuaries of an insurance company to set life insurance rates. This is systematized experience — called a frequency series .\nNo frequency series can speak for itself in a perfectly objective manner. Many judgments inevitably enter into compiling every frequency series — deciding which frequency series to use for an estimate, choosing which part of the frequency series to use, and so on. For example, should the insurance company use only its records from last year, which will be too few to provide as much data as is preferable, or should it also use death records from years further back, when conditions were slightly different, together with data from other sources? (Of course, no two deaths — indeed, no events of any kind — are exactly the same. But under many circumstances they are practically the same, and science is only interested in such “practical” considerations.)\nGiven that we have to use judgment in probability estimates, the reader may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities; the two terms are working synonyms.\nThere is no logical difference between the sort of probability that the life insurance company estimates on the basis of its “frequency series” of past death rates, and the manager’s estimates of the sales of smart watches in December, based on sales in that month in the past two years. 2\nThe concept of a probability based on a frequency series can be rendered almost useless when all the observations are repetitions of a single magnitude — for example, the case of all successes and zero failures of space-shuttle launches prior to the Challenger shuttle tragedy in the 1980s; in those data alone there was almost no basis to estimate the probability of a shuttle failure. (Probabilists have made some rather peculiar attempts over the centuries to estimate probabilities from the length of a zero-defect time series — such as the fact that the sun has never failed to rise (foggy days aside! — based on the undeniable fact that the longer such a series is, the smaller the probability of a failure; see e.g., (Whitworth 1897, xix–xli). 
However, one surely has more information on which to act when one has a long series of observations of the same magnitude rather than a short series).\nSimulated experience.\nA second possible source of probability estimates is empirical scientific investigation with repeated trials of the phenomenon. This is an empirical method even when the empirical trials are simulations. In the case of the even-numbered spades, the empirical scientific procedure is to shuffle the cards, deal one card, record whether or not the card is an even-number spade, replace the card, and repeat the steps a good many times. The proportions of times you observe an even-numbered spade come up is a probability estimate based on a frequency series.\nYou might reasonably ask why we do not just count the number of even-numbered spades in the deck of fifty-two cards — using the sample space analysis you see below. No reason at all. But that procedure would not work if you wanted to estimate the probability of a baseball batter getting a hit or a cigarette lighter producing flame.\nSome varieties of poker are so complex that experiment is the only feasible way to estimate the probabilities a player needs to know.\nThe resampling approach to statistics produces estimates of most probabilities with this sort of experimental “Monte Carlo” method. More about this later.\nSample space analysis and first principles.\nA third source of probability estimates is counting the possibilities — the quintessential theoretical method. For example, by examination of an ordinary die one can determine that there are six different numbers that can come up. One can then determine that the probability of getting (say) either a “1” or a “2,” on a single throw, is 2/6 = 1/3, because two among the six possibilities are “1” or “2.” One can similarly determine that there are two possibilities of getting a “1” plus a “6” out of thirty-six possibilities when rolling two dice, yielding a probability estimate of 2/36 = 1/18.\nEstimating probabilities by counting the possibilities has two requirements: 1) that the possibilities all be known (and therefore limited), and few enough to be studied easily; and 2) that the probability of each particular possibility be known, for example, that the probabilities of all sides of the dice coming up are equal, that is, equal to 1/6.\nMathematical shortcuts to sample-space analysis.\nA fourth source of probability estimates is mathematical calculations . If one knows by other means that the probability of a spade is 1/4 and the probability of an even-numbered card is 6/13, one can use probability calculation rules to calculate that the probability of turning up an even-numbered spade is 6/52 (that is, 1/4 x 6/13). If one knows that the probability of a spade is 1/4 and the probability of a heart is 1/4, one can then calculate that the probability of getting a heart or a spade is 1/2 (that is 1/4 + 1/4). The point here is not the particular calculation procedures, which we will touch on later, but rather that one can often calculate the desired probability on the basis of already-known probabilities.\nIt is possible to estimate probabilities with mathematical calculation only if one knows by other means the probabilities of some related events. 
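To make the last two methods concrete, here is a small sketch of our own (not from the original text) that estimates the chance of an even-numbered spade both ways, by the multiplication rule and by simulated drawing from a deck we build ourselves; the card labels are ours, with 12 standing for the queen:

# Our sketch: the probability of an even-numbered spade, two ways.
# As in the text, count the queen as even, and the jack and king as odd.

# Mathematical shortcut: P(spade) times P(even-numbered card).
1/4 * 6/13  # equals 6/52, about 0.115

# Simulated experience: draw one card at random, many times over.
cards <- paste(rep(c("spades", "hearts", "diamonds", "clubs"), each=13),
               rep(1:13, times=4))
even_spades <- paste("spades", c(2, 4, 6, 8, 10, 12))
draws <- sample(cards, 10000, replace=TRUE)
sum(draws %in% even_spades) / 10000  # close to 6/52

When no related probabilities are known in advance, only the empirical routes remain open.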
For example, there is no possible way of mathematically calculating that a child will memorize four nonsense syllables correctly in one attempt; empirical knowledge is necessary.\nKitchen-sink methods.\nIn addition to the above four categories of estimation procedures, the statistical imagination may produce estimates in still other ways such as a) the salesman’s seat-of-the-pants estimate of what the competition’s price will be next quarter, based on who-knows-what gossip, long-time acquaintance with the competitors, and so on, and b) the probability risk assessments (PRAs) that are made for the chances of failures of nuclear power plants based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. Any of these methods may be a combination of theoretical and empirical methods.\n\nAs an example of an organization struggling with kitchen-sink methods, consider the estimation of the probability of failure for the tragic flight of the Challenger shuttle, as described by the famous physicist Nobelist Richard Feynman. This is a very real case that includes just about every sort of complication that enters into estimating probabilities.\n\n…Mr. Ullian told us that 5 out of 127 rockets that he had looked at had failed — a rate of about 4 percent. He took that 4 percent and divided it by 4, because he assumed a manned flight would be safer than an unmanned one. He came out with about a 1 percent chance of failure, and that was enough to warrant the destruct charges.\nBut NASA [the space agency in charge] told Mr. Ullian that the probability of failure was more like 1 in \\(10^5\\).\nI tried to make sense out of that number. “Did you say 1 in \\(10^5\\)?”\n“That’s right; 1 in 100,000.”\n“That means you could fly the shuttle every day for an average of 300 years between accidents — every day, one flight, for 300 years — which is obviously crazy!”\n“Yes, I know,” said Mr. Ullian. “I moved my number up to 1 in 1000 to answer all of NASA’s claims — that they were much more careful with manned flights, that the typical rocket isn’t a valid comparison, etcetera.”\nBut then a new problem came up: the Jupiter probe, Galileo , was going to use a power supply that runs on heat generated by radioactivity. If the shuttle carrying Galileo failed, radioactivity could be spread over a large area. So the argument continued: NASA kept saying 1 in 100,000 and Mr. Ullian kept saying 1 in 1000, at best.\nMr. Ullian also told us about the problems he had in trying to talk to the man in charge, Mr. Kingsbury: he could get appointments with underlings, but he never could get through to Kingsbury and find out how NASA got its figure of 1 in 100,000 (Feynman and Leighton 1988, 179–80).\n\nFeynman tried to ascertain more about the origins of the figure of 1 in 100,000 that entered into NASA’s calculations. He performed an experiment with the engineers:\n\n…“Here’s a piece of paper each. Please write on your paper the answer to this question: what do you think is the probability that a flight would be uncompleted due to a failure in this engine?”\nThey write down their answers and hand in their papers. One guy wrote “99-44/100% pure” (copying the Ivory soap slogan), meaning about 1 in 200. Another guy wrote something very technical and highly quantitative in the standard statistical way, carefully defining everything, that I had to translate — which also meant about 1 in 200. 
The third guy wrote, simply, “1 in 300.”\nMr. Lovingood’s paper, however, said:\n“Cannot quantify. Reliability is judged from:\n\npast experience\nquality control in manufacturing\nengineering judgment”\n\n“Well,” I said, “I’ve got four answers, and one of them weaseled.” I turned to Mr. Lovingood: “I think you weaseled.”\n“I don’t think I weaseled.”\n“You didn’t tell me what your confidence was, sir; you told me how you determined it. What I want to know is: after you determined it, what was it?”\nHe says, “100 percent” — the engineers’ jaws drop, my jaw drops; I look at him, everybody looks at him — “uh, uh, minus epsilon!”\nSo I say, “Well, yes; that’s fine. Now, the only problem is, WHAT IS EPSILON?”\nHe says, “\\(10^-5\\).” It was the same number that Mr. Ullian had told us about: 1 in 100,000.\nI showed Mr. Lovingood the other answers and said, “You’ll be interested to know that there is a difference between engineers and management here — a factor of more than 300.”\nHe says, “Sir, I’ll be glad to send you the document that contains this estimate, so you can understand it.”\nLater, Mr. Lovingood sent me that report. It said things like “The probability of mission success is necessarily very close to 1.0” — does that mean it is close to 1.0, or it ought to be close to 1.0? — and “Historically, this high degree of mission success has given rise to a difference in philosophy between unmanned and manned space flight programs; i.e., numerical probability versus engineering judgment.” As far as I can tell, “engineering judgment” means they’re just going to make up numbers! The probability of an engine-blade failure was given as a universal constant, as if all the blades were exactly the same, under the same conditions. The whole paper was quantifying everything. Just about every nut and bolt was in there: “The chance that a HPHTP pipe will burst is \\(10^-7\\).” You can’t estimate things like that; a probability of 1 in 10,000,000 is almost impossible to estimate. It was clear that the numbers for each part of the engine were chosen so that when you add everything together you get 1 in 100,000. (Feynman and Leighton 1988, 182–83).\n\nWe see in the Challenger shuttle case very mixed kinds of inputs to actual estimates of probabilities. They include frequency series of past flights of other rockets, judgments about the relevance of experience with that different sort of rocket, adjustments for special temperature conditions (cold), and much much more. There also were complex computational processes in arriving at the probabilities that were made the basis for the launch decision. And most impressive of all, of course, are the extraordinary differences in estimates made by various persons (or perhaps we should talk of various statuses and roles) which make a mockery of the notion of objective estimation in this case.\nWorking with different sorts of estimation methods in different sorts of situations is not new; practical statisticians do so all the time. We argue that we should make no apology for doing so.\nThe concept of probability varies from one field of endeavor to another; it is different in the law, in science, and in business. The concept is most straightforward in decision-making situations such as business and gambling; there it is crystal-clear that one’s interest is entirely in making accurate predictions so as to advance the interests of oneself and one’s group. 
The concept is most difficult in social science, where there is considerable doubt about the aims and values of an investigation. In sum, one should not think of what a probability “is” but rather how best to estimate it. In practice, neither in actual decision-making situations nor in scientific work — nor in classes — do people experience difficulties estimating probabilities because of philosophical confusions. Only philosophers and mathematicians worry — and even they really do not need to worry — about the “meaning” of probability3." + }, + { + "objectID": "what_is_probability.html#the-relationship-of-probability-to-other-magnitudes", + "href": "what_is_probability.html#the-relationship-of-probability-to-other-magnitudes", + "title": "3  What is probability?", + "section": "3.6 The relationship of probability to other magnitudes", + "text": "3.6 The relationship of probability to other magnitudes\nAn important argument in favor of approaching the concept of probability as an estimate is that an estimate of a probability often (though not always) is the opposite side of the coin from an estimate of a physical quantity such as time or space.\nFor example, uncertainty about the probability that one will finish a task within 9 minutes is another way of labeling the uncertainty that the time required to finish the task will be less than 9 minutes. Hence, if estimation is appropriate for time in this case, it should be equally appropriate for probability. The same is true for the probability that the quantity of smart watches sold will be between 200 and 250 units.\nHence the concept of probability, and its estimation in any particular case, should be no more puzzling than is the “dual” concept of time or distance or quantities of smart watches. That is, lack of certainty about the probability that an event will occur is not different in nature from lack of certainty about the amount of time or distance in the event. There is no essential difference between whether a part 2 inches in length will be the next to emerge from the machine, or what the length of the next part will be, or the length of the part that just emerged (if it has not yet been measured).\nThe information available for the measurement of (say) the length of a car or the location of a star is exactly the same information that is available with respect to the concept of probability in those situations. That is, one may have ten disparate observations of a car’s length which then constitute a probability distribution, and the same for the altitude of a star in the heavens.\nIn a book of puzzles about probability (Mosteller 1987, problem 42), this problem appears: “If a stick is broken in two at random, what is the average length of the smaller piece?” This particular puzzle does not even mention probability explicitly, and no one would feel the need to write a scholarly treatise on the meaning of the word “length” here, any more than one would one do so if the question were about an astronomer’s average observation of the angle of a star at a given time or place, or the average height of boards cut by a carpenter, or the average size of a basketball team. Nor would one write a treatise about the “meaning” of “time” if a similar puzzle involved the average time between two bird calls. Yet a rephrasing of the problem reveals its tie to the concept of probability, to wit: What is the probability that the smaller piece will be (say) more than half the length of the larger piece? 
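(As a preview of the simulation approach, here is a short sketch of our own, not Mosteller's, that tackles this form of the question by breaking a stick of length 1 at a random point many times over.)

# Our sketch: break a stick of length 1 at a random point, many times,
# and look at the smaller of the two pieces.
break_at <- runif(10000)                 # 10,000 random break points
smaller <- pmin(break_at, 1 - break_at)  # length of the smaller piece
larger <- 1 - smaller                    # length of the larger piece

mean(smaller)               # average length of the smaller piece, about 0.25
mean(smaller > larger / 2)  # chance the smaller piece is more than half the
                            # length of the larger piece, about 1/3
hist(smaller)               # the distribution of the shorter piece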
Or, what is the probability distribution of the sizes of the shorter piece?\nThe duality of the concepts of probability and physical entities also emerges in Whitworth’s discussion (1897) of fair betting odds:\n\n…What sum ought you fairly give or take now, while the event is undetermined, in exchange for the assurance that you shall receive a stated sum (say $1,000) if the favourable event occur? The chance of receiving $1,000 is worth something. It is not as good as the certainty of receiving $1,000, and therefore it is worth less than $1,000. But the prospect or expectation or chance, however slight, is a commodity which may be bought and sold. It must have its price somewhere between zero and $1,000. (p. xix.)\n\n\n…And the ratio of the expectation to the full sum to be received is what is called the chance of the favourable event. For instance, if we say that the chance is 1/5, it is equivalent to saying that $200 is the fair price of the contingent $1,000. (p. xx.)…\n\n\nThe fair price can sometimes be calculated mathematically from a priori considerations: sometimes it can be deduced from statistics, that is, from the recorded results of observation and experiment. Sometimes it can only be estimated generally, the estimate being founded on a limited knowledge or experience. If your expectation depends on the drawing of a ticket in a raffle, the fair price can be calculated from abstract considerations: if it depend upon your outliving another person, the fair price can be inferred from recorded statistics: if it depend upon a benefactor not revoking his will, the fair price depends upon the character of your benefactor, his habit of changing his mind, and other circumstances upon the knowledge of which you base your estimate. But if in any of these cases you determine that $300 is the sum which you ought fairly to accept for your prospect, this is equivalent to saying that your chance, whether calculated or estimated, is 3/10... (p. xx.)\n\nIt is indubitable that along with frequency data, a wide variety of other information will affect the odds at which a reasonable person will bet. If the two concepts of probability stand on a similar footing here, why should they not be on a similar footing in all discussion of probability? I can think of no reason that they should not be so treated.\nScholars write about the “discovery” of the concept of probability in one century or another. But is it not likely that even in pre-history, when a fisherperson was asked how long the big fish was, s/he sometimes extended her/his arms and said, “About this long, but I’m not exactly sure,” and when a scout was asked how many of the enemy there were, s/he answered, “I don’t know for sure...probably about fifty.” The uncertainty implicit in these statements is the functional equivalent of probability statements. There simply is no need to make such heavy work of the probability concept as the philosophers and mathematicians and historians have done." + }, + { + "objectID": "what_is_probability.html#what-is-chance", + "href": "what_is_probability.html#what-is-chance", + "title": "3  What is probability?", + "section": "3.7 What is “chance”?", + "text": "3.7 What is “chance”?\nThe study of probability focuses on events with randomness — that is, events about which there is uncertainty whether or not they will occur. And the uncertainty refers to your knowledge rather than to the event itself. For example, consider this physical illustration with a remote control. 
The remote control has a front end that should point at the TV that it controls, and a back end that will usually be pointing at me, the user of the remote control. Call the front — the TV end, and the back — the sofa end of the remote control.\nI spin the remote control like a baton twirler. If I hold it at the sofa end and attempt to flip it so that it turns only half a revolution, I can be almost sure that I will correctly get the TV end and not the sofa end. And if I attempt to flip it a full revolution, again I can almost surely get the sofa end successfully. It is not a random event whether I catch the sofa end or the TV end (here ignoring those throws when I catch neither end) when doing only half a revolution or one revolution. The result is quite predictable in both these simple maneuvers so far.\nWhen I say the result is “predictable,” I mean that you would not bet with me about whether this time I’ll get the TV or the sofa end. So we say that the outcome of my flip aiming at half a revolution is not “random.”\nWhen I twirl the remote control so little, I control (almost completely) whether the sofa end or the TV end comes down to my hand; this is the same as saying that the outcome does not occur by chance.\nThe terms “random” and “chance” implicitly mean that you believe that I cannot control or cannot know in advance what will happen.\nWhether this twirl will be the rare time I miss, however, should be considered chance. Though you would not bet at even odds on my catching the sofa end versus the TV end if there is to be only a half or one full revolution, you might bet — at (say) odds of 50 to 1 — that I will make a mistake and get it wrong, or drop it. So the very same flip can be seen as random or determined depending on what aspect of it we are looking at.\nOf course you would not bet against me about my not making a mistake, because the bet might cause me to make a mistake purposely. This “moral hazard” is a problem that emerges when a person buys life insurance and may commit suicide, or when a boxer may lose a fight purposely. The people who stake money on those events say that such an outcome is “fixed” (a very appropriate word) and not random.\nNow I attempt more difficult maneuvers with the remote control. I can do \\(1\\frac{1}{2}\\) flips pretty well, and two full revolutions with some success — maybe even \\(2\\frac{1}{2}\\) flips on a good day. But when I get much beyond that, I cannot determine very well whether I’ll get the sofa or the TV end. The outcome gradually becomes less and less predictable — that is, more and more random.\nIf I flip the remote control so that it revolves three or more times, I can hardly control the process at all, and hence I cannot predict well whether I’ll get the sofa end or the TV end. With 5 revolutions I have absolutely no control over the outcome; I cannot predict the outcome better than 50-50. At that point, getting the sofa end or the TV end has become a completely random event for our purposes, just like flipping a coin high in the air. So at that point we say that “chance” controls the outcome, though that word is just a synonym for my lack of ability to control and predict the outcome. “Chance” can be thought to stand for the myriad small factors that influence the outcome.\nWe see the same gradual increase in randomness with increasing numbers of shuffles of cards. After one shuffle, a skilled magician can know where every card is, and after two shuffles there is still much order that s/he can work with. 
But after (say) five shuffles, the magician no longer has any power to predict and control, and the outcome of any draw can then be thought of as random chance.\nAt what point do we say that the outcome is “random” or “pure chance” as to whether my hand will grasp the TV end, the sofa end, or at some other spot? There is no sharp boundary to this transition. Rather, the transition is gradual; this is the crucial idea, and one that I have not seen stated before.\nWhether or not we refer to the outcome as random depends upon the twirler’s skill, which influences how predictable the event is. A baton twirler or juggler might be able to do ten flips with a non-random outcome; if the twirler is an expert and the outcome is highly predictable, we say it is not random but rather is determined.\nAgain, this shows that the randomness is not a property of the physical event, but rather of a person’s knowledge and skill." }, { "objectID": "what_is_probability.html#sec-what-is-chance", "href": "what_is_probability.html#sec-what-is-chance", "title": "3  What is probability?", "section": "3.8 What Do We Mean by “Random”?", "text": "3.8 What Do We Mean by “Random”?\nWe have defined “chance” and “random” as the absence of predictive power and/or explanation and/or control. Here we should not confuse the concepts of determinacy-indeterminacy and predictable-unpredictable. What matters for decision purposes is whether you can predict. Whether the process is “really” determinate is largely a matter of definition and labeling, an unnecessary philosophical controversy for our purposes (and perhaps for any other purpose).\nThe remote control in the previous demonstration becomes unpredictable — that is, random — even though it still is subject to similar physical processes as when it is predictable. I do not deny in principle that these processes can be “understood,” or that one could produce a machine that would — like a baton twirler — make the course of the remote control predictable for many turns. But in practice we cannot make the predictions — and it is the practical reality, rather than the principle, that matters here.\nWhen I flip the remote control half a turn or one turn, I control (almost completely) whether it comes down at the sofa end or the TV end, so we do not say that the outcome is chance. Much the same can be said about what happens to the predictability of drawing a given card as one increases the number of times one shuffles a deck of cards.\nConsider, too, a set of fake dice that I roll. Before you know they are fake, you assume that the probabilities of various outcomes are a matter of chance. But after you know that the dice are loaded, you no longer assume that the outcome is chance. This illustrates how the probabilities you work with are influenced by your knowledge of the facts of the situation.\nAdmittedly, this way of thinking about probability takes some getting used to. Events may appear to be random, but in fact, we can predict them — and vice versa. For example, suppose a magician does a simple trick with dice such as this one:\n\nThe magician turns her back while a spectator throws three dice on the table. He is instructed to add the faces. He then picks up any one die, adding the number on the bottom to the previous total. This same die is rolled again. The number it now shows is also added to the total. The magician turns around. She calls attention to the fact that she has no way of knowing which of the three dice was used for the second roll. 
She picks up the dice, shakes them in her hand a moment, then correctly announces the final sum.\n\nMethod: When the spectator rolls the dice, they get three numbers, one from each of the three dice. Call these numbers \\(a\\), \\(b\\) and \\(c\\). Then he chooses one die — it doesn’t matter which, but let’s say he chooses the third die, with value \\(c\\). He adds the bottom of the third die to the total. Here’s the trick — the total of opposite faces on a standard die always adds up to 7 — 1 is opposite 6, 2 is opposite 5, and 3 is opposite 4. So the total is now \\(a + b + 7\\). Then the spectator rolls the third die again, to get a new number \\(d\\). The total is now \\(a + b + 7 + d\\). When the magician turns round she can see what \\(a\\) and \\(b\\) and \\(d\\) are, so to get the right final total, she just needs to add 7 (Gardner 1985, p. 259). Ben Sparks does a nice demonstration of the trick on the Numberphile YouTube channel.\nThe point here is that, until you know the trick, you (the magician) cannot predict the final sum, so the magician and the spectator consider the result as random. If you do know the trick, you can predict the result, and it is not random. Whether something is “random” or not depends on what you know.\nConsider the distributions of heights of various groups of living things (including people). When we consider all living things taken together, the shape of the overall distribution — many individuals at the tiny end where the viruses are found, and very few individuals at the tall end where the giraffes are — is determined mostly by the distribution of species that have different mean heights. Hence we can explain the shape of that distribution, and we do not say that it is determined by “chance.” But with a homogeneous cohort of a single species — say, all 25-year-old human females in the U.S. — our best description of the shape of the distribution is “chance.” With situations in between, the shape is partly due to identifiable factors — e.g. age — and partly due to “chance.”\nOr consider the case of a basketball shooter: What causes her or him to make (or not make) a basket this shot, after a string of successes? Much must be ascribed to chance variation. But what causes a given shooter to be very good or very poor relative to other players? For that explanation we can point to such factors as the amount of practice or natural talent.\nAgain, all this has nothing to do with whether the mechanism is “really” chance, unlike the arguments that have been raging in physics for a century. That is the point of the remote control demonstration. Our knowledge and our power to predict the outcome transits gradually from non-chance (that is, “determined”) to chance (“not determined”), even though the same sort of physical mechanism produces each throw of the remote control.\nEarlier I mentioned that when we say that chance controls the outcome of the remote control flip after (say) five revolutions, we mean that there are many small forces that affect the outcome. The effect of each force is not known, and each is independent of the other. None of these forces is large enough for me (as the remote control twirler) to deal with, or else I would deal with it and be able to improve my control and my ability to predict the outcome. 
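To give a feel for how such a pile-up of small, independent influences behaves, here is a tiny sketch of our own (the particular numbers are arbitrary): each simulated outcome is the sum of 100 small nudges, each equally likely to be slightly positive or slightly negative.

# Our sketch: each outcome is the sum of 100 small, independent nudges.
n_outcomes <- 10000
outcomes <- numeric(n_outcomes)
for (i in 1:n_outcomes) {
  nudges <- sample(c(-0.01, 0.01), 100, replace=TRUE)
  outcomes[i] <- sum(nudges)
}
hist(outcomes)  # the results pile up in a bell shape around zero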
This concept of many small influences — “small” meaning in practice those influences whose effects cannot be identified and allowed for — which affect the outcome and whose effects are not knowable and which are independent of each other is important in statistical inference. For example, as we will see later, when we add many unpredictable deviations together, and plot the distribution of the result, we end up with the famous and very common bell-shaped normal distribution — this striking result comes about because of a mathematical phenomenon called the Central Limit Theorem. We will show this at work, later in the book." + }, + { + "objectID": "what_is_probability.html#randomness-from-the-computer", + "href": "what_is_probability.html#randomness-from-the-computer", + "title": "3  What is probability?", + "section": "3.9 Randomness from the computer", + "text": "3.9 Randomness from the computer\nWe now have the idea of random variation as being variation we cannot predict. For example, when we flip the remote control through many rotations, we can no longer easily predict which end will land in our hand. We can call the result of any particular flip — random — because we cannot predict whether the result will be TV end or sofa end.\nWe still know some things about the result — it will be one of two options — TV or sofa (unless we drop it). But we cannot predict which. We say the result of each flip is random if we cannot do anything to improve our prediction of 50% for TV (or sofa) end on the next flip.\nWe are not saying the result is random in any deep, non-deterministic sense — we are only saying we can treat the result as random, because we cannot predict it.\nNow consider getting random numbers from the computer, where the numbers can either be 0 or 1. This is rather like tossing a fair coin, where the results are 0 and 1 rather than “heads” and “tails”.\nWhen we ask the computer for a random choice between 0 and 1, we accept it is random-enough, or random-like, if we can’t do anything to predict which of 0 or 1 we will get on any one trial. We can’t do better than guessing that the next value will be — say — 0 — and whichever number we guess, we will only ever have a 50% chance of being correct. We are not saying the computer is giving truly random numbers in some deep sense, only numbers we cannot distinguish from truly random numbers, because we cannot do anything to predict them. The technical term for random numbers from the computer is therefore pseudo-random — meaning, like random numbers, in the sense they are effectively unpredictable. Effectively unpredictable means there is no practical way for you, or even a very powerful computer, to do anything to improve your prediction of the next number in the series." + }, + { + "objectID": "what_is_probability.html#the-philosophers-dispute-about-the-concept-of-probability", + "href": "what_is_probability.html#the-philosophers-dispute-about-the-concept-of-probability", + "title": "3  What is probability?", + "section": "3.10 The philosophers’ dispute about the concept of probability", + "text": "3.10 The philosophers’ dispute about the concept of probability\nThose who call themselves “objectivists” or “frequentists” and those who call themselves “personalists” or “Bayesians” have been arguing for hundreds or even thousands of years about the “nature” of probability. 
The objectivists insist (correctly) that any estimation not based on a series of observations is subject to potential bias, from which they conclude (incorrectly) that we should never think of probability that way. They are worried about the perversion of science, the substitution of arbitrary assessments for value-free data-gathering. The personalists argue (correctly) that in many situations it is not possible to obtain sufficient data to avoid considerable judgment. Indeed, if a probability is about the future, some judgment is always required — about which observations will be relevant, and so on. They sometimes conclude (incorrectly) that the objectivists’ worries are unimportant.\nAs is so often the case, the various sides in the argument have different sorts of situations in mind. As we have seen, the arguments disappear if one thinks operationally with respect to the purpose of the work, rather than in terms of properties, as mentioned earlier.\nHere is an example of the difficulty of focusing on the supposed properties of the mechanism or situation: The mathematical theorist asserts that the probability of a die falling with the “5” side up is 1/6, on the basis of the physics of equally-weighted sides. But if one rolls a particular die a million times, and it turns up “5” less than 1/6 of the time, one surely would use the observed proportion as the practical estimate. The probabilities of various outcomes with cheap dice may depend upon the number of pips drilled out on a side. In 20,000 throws of a red die and 20,000 throws of a white die, the proportions of 3’s and 4’s were, respectively, .159 and .146, .145 and .142 — all far below the expected proportions of .167. That is, 3’s and 4’s occurred about 11 percent less often that if the dice had been perfectly formed, a difference that could make a big difference in a gambling game (Bulmer 1979, 18).\nIt is reasonable to think of both the engineering method (the theoretical approach) and the empirical method (experimentation and data collection) as two alternative ways to estimate a probability. The two methods use different processes and different proxies for the probability you wish to estimate. One must adduce additional knowledge to decide which method to use in any given situation. It is sensible to use the empirical method when data are available. (But use both together whenever possible.)\nIn view of the inevitably subjective nature of probability estimates, you may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities. The two terms are working synonyms.\nMost important: One cannot sensibly talk about probabilities in the abstract, without reference to some set of facts. The topic then loses its meaning, and invites confusion and argument. This also is a reason why a general formalization of the probability concept does not make sense." + }, + { + "objectID": "what_is_probability.html#the-relationship-of-probability-to-the-concept-of-resampling", + "href": "what_is_probability.html#the-relationship-of-probability-to-the-concept-of-resampling", + "title": "3  What is probability?", + "section": "3.11 The relationship of probability to the concept of resampling", + "text": "3.11 The relationship of probability to the concept of resampling\nThere is no all-agreed definition of the concept of the resampling method in statistics. 
Unlike some other writers, I prefer to apply the term to problems in both pure probability and statistics. This set of examples may illustrate:\n\nConsider asking about the number of hits one would expect from a 0.250 (25 percent) batter in a 400 at-bat season. One would call this a problem in “probability.” The sampling distribution of the batter’s results can be calculated by formula or produced by Monte Carlo simulation.\nNow consider examining the number of hits in a given batter’s season, and asking how likely that number (or fewer) is to occur by chance if the batter’s long-run batting average is 0.250. One would call this a problem in “statistics.” But just as in example (1) above, the answer can be calculated by formula or produced by Monte Carlo simulation. And the calculation or simulation is exactly the same as used in (1).\nHere the term “resampling” might be applied to the simulation with considerable agreement among people familiar with the term, but perhaps not by all such persons.\nNext consider an observed distribution of distances that a batter’s hits travel in a season with 100 hits, with an observed mean of 150 feet per hit. One might ask how likely it is that a sample of 10 hits drawn with replacement from the observed distribution of hit lengths (with a mean of 150 feet) would have a mean greater than 160 feet, and one could easily produce an answer with repeated Monte Carlo samples. Traditionally this would be called a problem in probability.\nNext consider that a batter gets 10 hits with a mean of 160 feet, and one wishes to estimate the probability that the sample would be produced by a distribution as specified in (3). This is a problem in statistics, and by 1996, it is common statistical practice to treat it with a resampling method. The actual simulation would, however, be identical to the work described in (3).\n\nBecause the work in (4) and (2) differ only in question (4) involving measured data and question (2) involving counted data, there seems no reason to discriminate between the two cases with respect to the term “resampling.” With respect to the pairs of cases (1) and (2), and (3) and (4), there is no difference in the actual work performed, though there is a difference in the way the question is framed. I would therefore urge that the label “resampling” be applied to (1) and (3) as well as to (2) and (4), to bring out the important fact that the procedure is the same as in resampling questions in statistics.\nOne could easily produce examples like (1) and (2) for cases that are similar except that the drawing is without replacement, as in the sampling version of Fisher’s permutation test — for example, a tea taster (Fisher 1935; Fisher 1960, chap. II, section 5). And one could adduce the example of prices in different state liquor control systems (see Section 12.16) which is similar to cases (3) and (4) except that sampling without replacement seems appropriate. Again, the analogs to cases (2) and (4) would generally be called “resampling.”\nThe concept of resampling is defined in a more precise way in Section 8.9." + }, + { + "objectID": "what_is_probability.html#conclusion", + "href": "what_is_probability.html#conclusion", + "title": "3  What is probability?", + "section": "3.12 Conclusion", + "text": "3.12 Conclusion\nWe define “chance” as the absence of predictive power and/ or explanation and/or control.\nWhen the remote control rotates more than three or four turns I cannot control the outcome — whether TV or sofa end — with any accuracy. 
That is to say, I cannot predict much better than 50-50 with more than four rotations. So we then say that the outcome is determined by “chance.”\nAs to those persons who wish to inquire into what the situation “really” is: I hope they agree that we do not need to do so to proceed with our work. I hope all will agree that the outcome of flipping the TV gradually becomes unpredictable (random) though still subject to similar physical processes as when predictable. I do not deny in principle that these processes can be “understood,” certainly one can develop a machine (or a baton twirler) that will make the outcome predictable for many turns. But this has nothing to do with whether the mechanism is “really” something one wants to say is influenced by “chance.” This is the point of the cooking-TV demonstration. The outcome traverses from non-chance (determined) to chance (not determined) in a smooth way even though the physical mechanism that produces the revolutions remains much the same over the traverse.\n\n\n\n\nBarnett, Vic. 1982. Comparative Statistical Inference. 2nd ed. Wiley Series in Probability and Mathematical Statistics. Chichester: John Wiley & Sons. https://archive.org/details/comparativestati0000barn.\n\n\nBulmer, M. G. 1979. Principles of Statistics. New York, NY: Dover Publications, inc. https://archive.org/details/principlesofstat0000bulm.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its Applications: Volume i. 3rd ed. Vol. 1. New York: John Wiley & Sons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nFeynman, Richard P., and Ralph Leighton. 1988. What Do You Care What Other People Think? Further Adventures of a Curious Character. New York, NY: W. W. Norton; Company, Inc. https://archive.org/details/whatdoyoucarewha0000feyn_x5w7.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nGardner, Martin. 1985. Mathematical Magic Show. Penguin Books Ltd, Harmondsworth.\n\n\nMosteller, Frederick. 1987. Fifty Challenging Problems in Probability with Solutions. Courier Corporation.\n\n\nRaiffa, Howard. 1968. “Decision Analysis: Introductory Lectures on Choices Under Uncertainty.” https://archive.org/details/decisionanalysis0000raif.\n\n\nRuark, Arthur Edward, and Harold Clayton Urey. 1930. Atoms, Moleculues and Quanta. New York, NY: McGraw-Hill book company, inc. https://archive.org/details/atomsmoleculesqu00ruar.\n\n\nRussell, Bertrand. 1945. A History of Western Philosophy. New York: Simon; Schuster.\n\n\nWhitworth, William Allen. 1897. DCC Exercises in Choice and Chance. Cambridge, UK: Deighton Bell; Co. https://archive.org/details/dccexerciseschoi00whit." + }, + { + "objectID": "about_technology.html#the-environment", + "href": "about_technology.html#the-environment", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.1 The environment", + "text": "4.1 The environment\nMany of the chapters have sections with code for you to run, and experiment with. These sections contain Jupyter notebooks 1]. Jupyter notebooks are interactive web pages that allow you to read, write and run R code. We mark the start of each notebook in the text with a note and link heading like the one you see below. 
In the web edition of this book, you can click on the Download link in this header to download the section as a notebook. You can also click on the Interact link in this header to open the notebook on a cloud computer. This allows you to interact with the notebook on the cloud computer. You can run the code, and experiment by making changes.\nIn the print version of the book, we point you to the web version, to get the links.\nAt the end of this chapter, we explain how to run these notebooks on your own computer. In the next section you will see an example notebook; you might want to run this in the cloud to get started." + }, + { + "objectID": "about_technology.html#getting-started-with-the-notebook", + "href": "about_technology.html#getting-started-with-the-notebook", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.2 Getting started with the notebook", + "text": "4.2 Getting started with the notebook\nThe next section contains a notebook called “Billie’s Bill”. If you are looking at the web edition, you will see links to interact with this notebook in the cloud, or download it to your computer.\n\nStart of billies_bill notebook\n\nDownload notebook\nInteract\n\n\nThe text in this notebook section assumes you have opened the page as an interactive notebook, on your own computer, or one of the RStudio web interfaces.\nA notebook can contain blocks of text — like this one — as well as code, and the results from running the code.\nIf you are in the notebook interface (rather than reading this in the textbook), you will see the RStudio menu near the top of the page, with headings “File”, “Edit” and so on.\n\nUnderneath that, by default, you may see a row of icons - the “Toolbar”.\nIn the toolbar, you may see a list box that will allow you to run the code in the notebook, among other icons.\nWhen we get to code chunks, you will also see a green play icon at the right edge of the interface, in the chunk. This will allow you to run the code chunk.\nAlthough you can use this “run” button, we suggest you get used to using the keyboard shortcut. The default shortcut on Windows or Linux is to hold down the Control key and the Shift key and the Enter (Return) key at the same time. We will call this Control-Shift-Enter. On Mac the default combination is Command-Shift-Enter, where Command is the key with the four-leaf-clover-like icon to the left of the space-bar. To save us having to say this each time, we will call this combination Ctl/Cmd-Shift-Enter.\n\nIn this, our first notebook, we will be using R to solve one of those difficult and troubling problems in life — working out the bill in a restaurant.\n\n4.3 The meal in question\nAlex and Billie are at a restaurant, getting ready to order. They do not have much money, so they are calculating the expected bill before they order.\nAlex is thinking of having the fish for £10.50, and Billie is leaning towards the chicken, at £9.25. First they calculate their combined bill.\nBelow this text you see a code chunk. It contains the R code to calculate the total bill. Press Control-Shift-Enter or Cmd-Shift-Enter (on Mac) in the chunk below, to see the total.\n\n10.50 + 9.25\n\n[1] 19.8\n\n\nThe contents of the chunk above is R code. As you would predict, R understands numbers like 10.50, and it understands + between the numbers as an instruction to add the numbers.\nWhen you press Ctl/Cmd-Shift-Enter, R finds 10.50, realizes it is a number, and stores that number somewhere in memory. 
It does the same thing for 9.25, and then it runs the addition operation on these two numbers in memory, which gives the number 19.75.\nFinally, R sends the resulting number (19.75) back to the notebook for display. The notebook detects that R sent back a value, and shows it to us.\nThis is exactly what a calculator would do.\n\n\n4.4 Comments\nUnlike a calculator, we can also put notes next to our calculations, to remind us what they are for. One way of doing this is to use a “comment”. You have already seen comments in the previous chapter.\nA comment is some text that the computer will ignore. In R, you can make a comment by starting a line with the # (hash) character. For example, the next cell is a code cell, but when you run it, it does not show any result. In this case, that is because the computer sees the # at the beginning of the line, and then ignores the rest.\nMany of the code cells you see will have comments in them, to explain what the code is doing.\nPractice writing comments for your own code. It is a very good habit to get into. You will find that experienced programmers write many comments on their code. They do not do this to show off, but because they have a lot of experience in reading code, and they know that comments make it much easier to read and understand code.\n\n\n4.5 More calculations\nLet us continue with the struggle that Alex and Billie are having with their bill.\nThey realize that they will also need to pay a tip.\nThey think it would be reasonable to leave a 15% tip. Now they need to multiply their total bill by 0.15, to get the tip. The bill is about £20, so they know that the tip will be about £3.\nIn R * means multiplication. This is the equivalent of the “×” key on a calculator.\nWhat about this, for the correct calculation?\n\n# The tip - with a nasty mistake.\n10.50 + 9.25 * 0.15\n\n[1] 11.9\n\n\nOh dear, no, that isn’t doing the right calculation.\nR follows the normal rules of precedence with calculations. These rules tell us to do multiplication before addition.\nSee https://en.wikipedia.org/wiki/Order_of_operations for more detail on the standard rules.\nIn the case above, the rules tell R to first calculate 9.25 * 0.15 (to get 1.3875) and then to add the result to 10.50, giving 11.8875.\nWe need to tell R we want it to do the addition and then the multiplication. We do this with round brackets (parentheses):\n\n\n\n\n\n\n\n\n\n\nThere are three types of brackets in R.\nThese are:\n\nround brackets or parentheses: ();\nsquare brackets: [];\ncurly brackets: {}.\n\nEach type of bracket has a different meaning in R. In the examples, pay close attention to the type of brackets we are using.\n\n\n\n# The tip - mistake fixed.\n(10.50 + 9.25) * 0.15\n\n[1] 2.96\n\n\nThe obvious next step is to calculate the bill including the tip.\n\n# The bill, including the tip\n10.50 + 9.25 + (10.50 + 9.25) * 0.15\n\n[1] 22.7\n\n\nAt this stage we start to feel that we are doing too much typing. Notice that we had to type out 10.50 + 9.25 twice there. That is a little boring, but it also makes it easier to make mistakes. The more we have to type, the greater the chance we have to make a mistake.\nTo make things simpler, we would like to be able to store the result of the calculation 10.50 + 9.25, and then re-use this value, to calculate the tip.\nThis is the role of variables. A variable is a value with a name.\nHere is a variable:\n\n# The cost of Alex's meal.\na <- 10.50\n\na is a name we give to the value 10.50. 
You can read the line above as “The variable a gets the value 10.50”. We can also talk of setting the variable. Here we are setting a to equal 10.50.\nNow, when we use a in code, it refers to the value we gave it. For example, we can put a on a line on its own, and R will show us the value of a:\n\n# The value of a\na\n\n[1] 10.5\n\n\nWe did not have to use the name a — we can choose almost any name we like. For example, we could have chosen alex_meal instead:\n\n# The cost of Alex's meal.\n# alex_meal gets the value 10.50\nalex_meal <- 10.50\n\nWe often set variables like this, and then display the result, all in the same chunk. We do this by first setting the variable, as above, and then, on the final line of the chunk, we put the variable name on a line on its own, to ask R to show us the value of the variable. Here we set billie_meal to have the value 9.25, and then show the value of billie_meal, all in the same chunk.\n\n# The cost of Billie's meal.\n# billie_meal gets the value 9.25\nbillie_meal <- 9.25\n# Show the value of billie_meal\nbillie_meal\n\n[1] 9.25\n\n\nOf course, here, we did not learn much, but we often set variable values with the results of a calculation. For example:\n\n# The cost of both meals, before tip.\nbill_before_tip <- 10.50 + 9.25\n# Show the value of both meals.\nbill_before_tip\n\n[1] 19.8\n\n\nBut wait — we can do better than typing in the calculation like this. We can use the values of our variables, instead of typing in the values again.\n\n# The cost of both meals, before tip, using variables.\nbill_before_tip <- alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n[1] 19.8\n\n\nWe make the calculation clearer by writing the calculation this way — we are calculating the bill before the tip by adding the cost of Alex’s and Billie’s meal — and that’s what the code looks like. But this also allows us to change the variable value, and recalculate. For example, say Alex decided to go for the hummus plate, at £7.75. Now we can tell R that we want alex_meal to have the value 7.75 instead of 10.50:\n\n# The new cost of Alex's meal.\n# alex_meal gets the value 7.75\nalex_meal = 7.75\n# Show the value of alex_meal\nalex_meal\n\n[1] 7.75\n\n\nNotice that alex_meal now has a new value. It was 10.50, but now it is 7.75. We have reset the value of alex_meal. In order to use the new value for alex_meal, we must recalculate the bill before tip with exactly the same code as before:\n\n# The new cost of both meals, before tip.\nbill_before_tip <- alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n[1] 17\n\n\nNotice that, now we have rerun this calculation, we have reset the value for bill_before_tip to the correct value corresponding to the new value for alex_meal.\nAll that remains is to recalculate the bill plus tip, using the new value for the variable:\n\n# The cost of both meals, after tip.\nbill_after_tip = bill_before_tip + bill_before_tip * 0.15\n# Show the value of both meals, after tip.\nbill_after_tip\n\n[1] 19.6\n\n\nNow we are using variables with relevant names, the calculation looks right to our eye. 
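The same calculation can be collected into a single chunk — a minimal sketch, with a made-up name tip_rate for the tip proportion — so that changing any one price and re-running the chunk updates the whole bill:

# The whole bill calculation in one chunk (sketch).
alex_meal <- 7.75
billie_meal <- 9.25
tip_rate <- 0.15
bill_before_tip <- alex_meal + billie_meal
bill_after_tip <- bill_before_tip + bill_before_tip * tip_rate
# Show the final bill (17 plus a 2.55 tip, so 19.55).
bill_after_tip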
The code expresses the calculation as we mean it: the bill after tip is equal to the bill before the tip, plus the bill before the tip times 0.15.\n\n\n4.6 And so, on\nNow you have done some practice with the notebook, and with variables, you are ready for a new problem in probability and statistics, in the next chapter.\nEnd of billies_bill notebook" + }, + { + "objectID": "about_technology.html#the-meal-in-question", + "href": "about_technology.html#the-meal-in-question", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.3 The meal in question", + "text": "4.3 The meal in question\nAlex and Billie are at a restaurant, getting ready to order. They do not have much money, so they are calculating the expected bill before they order.\nAlex is thinking of having the fish for £10.50, and Billie is leaning towards the chicken, at £9.25. First they calculate their combined bill.\nBelow this text you see a code chunk. It contains the R code to calculate the total bill. Press Control-Shift-Enter or Cmd-Shift-Enter (on Mac) in the chunk below, to see the total.\n\n10.50 + 9.25\n\n[1] 19.8\n\n\nThe contents of the chunk above is R code. As you would predict, R understands numbers like 10.50, and it understands + between the numbers as an instruction to add the numbers.\nWhen you press Ctl/Cmd-Shift-Enter, R finds 10.50, realizes it is a number, and stores that number somewhere in memory. It does the same thing for 9.25, and then it runs the addition operation on these two numbers in memory, which gives the number 19.75.\nFinally, R sends the resulting number (19.75) back to the notebook for display. The notebook detects that R sent back a value, and shows it to us.\nThis is exactly what a calculator would do." + }, + { + "objectID": "about_technology.html#comments", + "href": "about_technology.html#comments", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.4 Comments", + "text": "4.4 Comments\nUnlike a calculator, we can also put notes next to our calculations, to remind us what they are for. One way of doing this is to use a “comment”. You have already seen comments in the previous chapter.\nA comment is some text that the computer will ignore. In R, you can make a comment by starting a line with the # (hash) character. For example, the next cell is a code cell, but when you run it, it does not show any result. In this case, that is because the computer sees the # at the beginning of the line, and then ignores the rest.\nMany of the code cells you see will have comments in them, to explain what the code is doing.\nPractice writing comments for your own code. It is a very good habit to get into. You will find that experienced programmers write many comments on their code. They do not do this to show off, but because they have a lot of experience in reading code, and they know that comments make it much easier to read and understand code." + }, + { + "objectID": "about_technology.html#more-calculations", + "href": "about_technology.html#more-calculations", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.5 More calculations", + "text": "4.5 More calculations\nLet us continue with the struggle that Alex and Billie are having with their bill.\nThey realize that they will also need to pay a tip.\nThey think it would be reasonable to leave a 15% tip. Now they need to multiply their total bill by 0.15, to get the tip. 
The bill is about £20, so they know that the tip will be about £3.\nIn R * means multiplication. This is the equivalent of the “×” key on a calculator.\nWhat about this, for the correct calculation?\n\n# The tip - with a nasty mistake.\n10.50 + 9.25 * 0.15\n\n[1] 11.9\n\n\nOh dear, no, that isn’t doing the right calculation.\nR follows the normal rules of precedence with calculations. These rules tell us to do multiplication before addition.\nSee https://en.wikipedia.org/wiki/Order_of_operations for more detail on the standard rules.\nIn the case above, the rules tell R to first calculate 9.25 * 0.15 (to get 1.3875) and then to add the result to 10.50, giving 11.8875.\nWe need to tell R we want it to do the addition and then the multiplication. We do this with round brackets (parentheses):\n\n\n\n\n\n\n\n\n\n\nThere are three types of brackets in R.\nThese are:\n\nround brackets or parentheses: ();\nsquare brackets: [];\ncurly brackets: {}.\n\nEach type of bracket has a different meaning in R. In the examples, pay close attention to the type of brackets we are using.\n\n\n\n# The tip - mistake fixed.\n(10.50 + 9.25) * 0.15\n\n[1] 2.96\n\n\nThe obvious next step is to calculate the bill including the tip.\n\n# The bill, including the tip\n10.50 + 9.25 + (10.50 + 9.25) * 0.15\n\n[1] 22.7\n\n\nAt this stage we start to feel that we are doing too much typing. Notice that we had to type out 10.50 + 9.25 twice there. That is a little boring, but it also makes it easier to make mistakes. The more we have to type, the greater the chance we have to make a mistake.\nTo make things simpler, we would like to be able to store the result of the calculation 10.50 + 9.25, and then re-use this value, to calculate the tip.\nThis is the role of variables. A variable is a value with a name.\nHere is a variable:\n\n# The cost of Alex's meal.\na <- 10.50\n\na is a name we give to the value 10.50. You can read the line above as “The variable a gets the value 10.50”. We can also talk of setting the variable. Here we are setting a to equal 10.50.\nNow, when we use a in code, it refers to the value we gave it. For example, we can put a on a line on its own, and R will show us the value of a:\n\n# The value of a\na\n\n[1] 10.5\n\n\nWe did not have to use the name a — we can choose almost any name we like. For example, we could have chosen alex_meal instead:\n\n# The cost of Alex's meal.\n# alex_meal gets the value 10.50\nalex_meal <- 10.50\n\nWe often set variables like this, and then display the result, all in the same chunk. We do this by first setting the variable, as above, and then, on the final line of the chunk, we put the variable name on a line on its own, to ask R to show us the value of the variable. Here we set billie_meal to have the value 9.25, and then show the value of billie_meal, all in the same chunk.\n\n# The cost of Billie's meal.\n# billie_meal gets the value 9.25\nbillie_meal <- 9.25\n# Show the value of billie_meal\nbillie_meal\n\n[1] 9.25\n\n\nOf course, here, we did not learn much, but we often set variable values with the results of a calculation. For example:\n\n# The cost of both meals, before tip.\nbill_before_tip <- 10.50 + 9.25\n# Show the value of both meals.\nbill_before_tip\n\n[1] 19.8\n\n\nBut wait — we can do better than typing in the calculation like this. 
We can use the values of our variables, instead of typing in the values again.\n\n# The cost of both meals, before tip, using variables.\nbill_before_tip <- alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n[1] 19.8\n\n\nWe make the calculation clearer by writing the calculation this way — we are calculating the bill before the tip by adding the cost of Alex’s and Billie’s meal — and that’s what the code looks like. But this also allows us to change the variable value, and recalculate. For example, say Alex decided to go for the hummus plate, at £7.75. Now we can tell R that we want alex_meal to have the value 7.75 instead of 10.50:\n\n# The new cost of Alex's meal.\n# alex_meal gets the value 7.75\nalex_meal = 7.75\n# Show the value of alex_meal\nalex_meal\n\n[1] 7.75\n\n\nNotice that alex_meal now has a new value. It was 10.50, but now it is 7.75. We have reset the value of alex_meal. In order to use the new value for alex_meal, we must recalculate the bill before tip with exactly the same code as before:\n\n# The new cost of both meals, before tip.\nbill_before_tip <- alex_meal + billie_meal\n# Show the value of both meals.\nbill_before_tip\n\n[1] 17\n\n\nNotice that, now we have rerun this calculation, we have reset the value for bill_before_tip to the correct value corresponding to the new value for alex_meal.\nAll that remains is to recalculate the bill plus tip, using the new value for the variable:\n\n# The cost of both meals, after tip.\nbill_after_tip = bill_before_tip + bill_before_tip * 0.15\n# Show the value of both meals, after tip.\nbill_after_tip\n\n[1] 19.6\n\n\nNow we are using variables with relevant names, the calculation looks right to our eye. The code expresses the calculation as we mean it: the bill after tip is equal to the bill before the tip, plus the bill before the tip times 0.15." + }, + { + "objectID": "about_technology.html#and-so-on", + "href": "about_technology.html#and-so-on", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.6 And so, on", + "text": "4.6 And so, on\nNow you have done some practice with the notebook, and with variables, you are ready for a new problem in probability and statistics, in the next chapter.\nEnd of billies_bill notebook" + }, + { + "objectID": "about_technology.html#running-the-code-on-your-own-computer", + "href": "about_technology.html#running-the-code-on-your-own-computer", + "title": "4  Introducing {{< var lang >}} and the {{< var nb_app >}} notebook", + "section": "4.7 Running the code on your own computer", + "text": "4.7 Running the code on your own computer\nMany people, including your humble authors, like to be able to run code examples on their own computers. This section explains how you can set up to run the notebooks on your own computer.\nOnce you have done this setup, you can use the “Download” link at the start of each notebook section to download the notebook to your own computer, and run it there.\n\nTo run the R notebook, you will need two software packages on your computer. These are:\n\nThe base R language\nThe RStudio graphical interface to R.\n\nThe base R language gives you the software to run R code and show results. You can use the base R language on its own, but, in order to interact with the R notebook on your computer, you will need the RStudio interface. RStudio gives you a richer interface to interact with the R language, including the ability to open, edit and run R notebooks, like the notebook in this chapter. 
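Once both installs described below are done, one quick way to check that base R itself is working — a suggestion, not a required step — is to type a line or two into the R console and confirm that you see output:

# Check which version of R is installed.
R.version.string
# A tiny calculation to confirm R is running.
1 + 1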
RStudio uses the base R language to run R code from the notebook, and show the results.\nInstall the base R language by going to the main R website at https://www.r-project.org, following the links to the package for your system (Windows, Mac, or Linux), and install according to the instructions on the website.\nThen install the RStudio interface by visiting the RStudio website at https://www.rstudio.com, and navigating to the download links for the free edition of the “RStudio IDE”. IDE stands for Integrated Development Environment; it refers to the RStudio application’s ability make it easier to interact with, and develop, R code. You only need the free version; it has all the features you will need. The free version is the only version that we, your humble authors, have used for this book, and for all our own work and teaching." + }, + { + "objectID": "resampling_with_code.html#statistics-and-probability", + "href": "resampling_with_code.html#statistics-and-probability", + "title": "5  Resampling with code", + "section": "5.1 Statistics and probability", + "text": "5.1 Statistics and probability\nWe have already emphasized that statistics is a way of drawing conclusions about data from the real world, in the presence of random variation; probability is the way of reasoning about random variation. This chapter introduces our first statistical problem, where we use probability to draw conclusions about some important data — about a potential cure for a type of cancer. We will not make much of the distinction between probability and statistics here, but we will come back to it several times in later chapters." + }, + { + "objectID": "resampling_with_code.html#a-new-treatment-for-burkitt-lymphoma", + "href": "resampling_with_code.html#a-new-treatment-for-burkitt-lymphoma", + "title": "5  Resampling with code", + "section": "5.2 A new treatment for Burkitt lymphoma", + "text": "5.2 A new treatment for Burkitt lymphoma\nBurkitt lymphoma is an unusual cancer of the lymphatic system. The lymphatic system is a vein-like network throughout the body that is involved in the immune reaction to disease. In developed countries, with standard treatment, the cure rate for Burkitt lymphoma is about 90%.\nIn 2006, researchers at the US National Cancer Institute (NCI), tested a new treatment for Burkitt lymphoma (Dunleavy et al. 2006). They gave the new treatment to 17 patients, and found that all 17 patients were doing well after two years or more of follow up. By “doing well”, we mean that their lymphoma had not progressed; as a short-hand, we will say that these patients were “cured”, but of course, we do not know what happened to them after this follow up.\nHere is where we put on our statistical hat and ask ourselves the following question — how surprised are we that the NCI researchers saw their result of 17 out of 17 patients cured?\nAt this stage you might and should ask, what could we possibly mean by “surprised”? That is a good and important question, and we will discuss that much more in the chapters to come. For now, please bear with us as we do a thought experiment.\nLet us forget the 17 out of 17 result of the NCI study for a moment. Imagine that there is another hospital, called Saint Hypothetical General, just down the road from the NCI, that was also treating 17 patients with Burkitt lymphoma. Saint Hypothetical were not using the NCI treatment, they were using the standard treatment.\nWe already know that each patient given the standard treatment has a 90% chance of cure. 
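Anticipating the simulation developed below, a single Hypothetical patient with a 90% chance of cure could be sketched in R like this, using the same 0-through-9 scheme the chapter adopts (0 for “not cured”, 1 through 9 for “cured”):

# Sketch: one simulated patient with a 90% chance of cure.
patient <- sample(0:9, 1)
# TRUE means "cured", FALSE means "not cured".
patient > 0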
Given that 90% cure rate, what is the chance that 17 out of 17 of the Hypothetical group will be cured?\nYou may notice that this question about the Hypothetical group is similar to the problem of the 20 ambulances in Chapter Chapter 2. In that problem, we were interested to know how likely it was that 3 or more of 20 ambulances would be out of action on any one day, given that each ambulance had a 10% chance of being out of action. Here we would like to know the chances that all 17 patients would be cured, given that each patient has a 90% chance of being cured." + }, + { + "objectID": "resampling_with_code.html#a-physical-model-of-the-hypothetical-hospital", + "href": "resampling_with_code.html#a-physical-model-of-the-hypothetical-hospital", + "title": "5  Resampling with code", + "section": "5.3 A physical model of the hypothetical hospital", + "text": "5.3 A physical model of the hypothetical hospital\nAs in the ambulance example, we could make a physical model of chance in this world. For example, to simulate whether a given patient is cured or not by a 90% effective treatment, we could throw a ten sided die and record the result. We could say, arbitrarily, that a result of 0 means “not cured”, and all the numbers 1 through 9 mean “cured” (typical 10-sided dice have sides numbered 0 through 9).\nWe could roll 17 dice to simulate one “trial” in this random world. For each trial, we record the number of dice that show numbers 1 through 9 (and not 0). This will be a number between 0 and 17, and it is the number of patients “cured” in our simulated trial.\nFigure 5.1 is the result of one such trial we did with a set of 17 10-sided dice we happened to have to hand:\n\n\n\nFigure 5.1: One roll of 17 10-sided dice\n\n\nThe trial in Figure 5.1 shows are four dice with the 0 face uppermost, and the rest with numbers from 1 through 9. Therefore, there were 13 out of 17 not-zero numbers, meaning that 13 out of 17 simulated “patients” were “cured” in this simulated trial.\n\nWe could repeat this simulated trial procedure 100 times, and we would then have 100 counts of the not-zero numbers. Each of the 100 counts would be the number of patients cured in that trial. We can ask how many of these 100 counts were equal to 17. This will give us an estimate of the probability we would see 17 out of 17 patients cured, given that any one patient has a 90% chance of cure. For example, say we saw 15 out of 100 counts were equal to 17. That would give us an estimate of 15 / 100 or 0.15 or 15%, for the probability we would see 17 out of 17 patients cured.\nSo, if Saint Hypothetical General did see 17 out of 17 patients cured with the standard treatment, they would be a little surprised, because they would only expect to see that happen 15% of the time. But they would not be very surprised — 15% of the time is uncommon, but not very uncommon." + }, + { + "objectID": "resampling_with_code.html#a-trial-a-run-a-count-and-a-proportion", + "href": "resampling_with_code.html#a-trial-a-run-a-count-and-a-proportion", + "title": "5  Resampling with code", + "section": "5.4 A trial, a run, a count and a proportion", + "text": "5.4 A trial, a run, a count and a proportion\nHere we stop to emphasize the steps in the process of a random simulation.\n\nWe decide what we mean by one trial. Here one trial has the same meaning in medicine as resampling — we mean the result of treating 17 patients. 
One simulated trial is then the simulation of one set of outcomes from 17 patients.\nWork out the outcome of interest from the trial. The outcome here is the number of patients cured.\nWe work out a way to simulate one trial. Here we chose to throw 17 10-sided dice, and count the number of not zero values. This is the outcome from one simulation trial.\nWe repeat the simulated trial procedure many times, and collect the results from each trial. Say we repeat the trial procedure 100 times; we will call this a run of 100 trials.\nWe count the number of trials with an outcome that matches the outcome we are interested in. In this case we are interested in the outcome 17 out of 17 cured, so we count the number of trials with a score of 17. Say 15 out of the run of 100 trials had an outcome of 17 cured. That is our count.\nFinally we divide the count by the number of trials to get the proportion. From the example above, we divide 15 by 100 to 0.15 (15%). This is our estimate of the chance of seeing 17 out of 17 patients cured in any one trial. We can also call this an estimate of the probability that 17 out of 17 patients will be cured on any on trial.\n\nOur next step is to work out the code for step 2: simulate one trial." + }, + { + "objectID": "resampling_with_code.html#simulate-one-trial-with-code", + "href": "resampling_with_code.html#simulate-one-trial-with-code", + "title": "5  Resampling with code", + "section": "5.5 Simulate one trial with code", + "text": "5.5 Simulate one trial with code\nWe can use the computer to do something very similar to rolling 17 10-sided dice, by asking the computer for 17 random whole numbers from 0 through 9.\n\n\n\n\n\n\nWhole numbers\n\n\n\nA whole number is a number that is not negative, and does not have fractional part (does not have anything after a decimal point). 0 and 1 and 2 and 3 are whole numbers, but -1 and \\(\\frac{3}{5}\\) and 11.3 are not. The whole numbers from 0 through 9 are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.\n\n\nWe have already discussed what we mean by random in Section 2.2." + }, + { + "objectID": "resampling_with_code.html#from-numbers-to-s", + "href": "resampling_with_code.html#from-numbers-to-s", + "title": "5  Resampling with code", + "section": "5.6 From numbers to vectors", + "text": "5.6 From numbers to vectors\nWe need to prepare the sequence of numbers that we want R to select from.\nWe have already seen the idea that R has values that are individual numbers. Remember, a variable is a named value. Here we attach the name a to the value 1.\n\na <- 1\n# Show the value of \"a\"\na\n\n[1] 1\n\n\nR also allows values that are sequences of numbers. R calls these sequences vectors.\n\nThe name vector sounds rather technical and mathematical, but the only important idea for us is that a vector stores a sequence of numbers.\n\nHere we make a vector that contains the 10 numbers we will select from:\n\n# Make a vector of numbers, store with the name \"some_numbers\".\nsome_numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)\n# Show the value of \"some_numbers\"\nsome_numbers\n\n [1] 0 1 2 3 4 5 6 7 8 9\n\n\nNotice that the value for some_numbers is a vector, and that this value contains 10 numbers.\nPut another way, some_numbers is now the name we can use for this collection of 10 values.\nVectors are very useful for simulations and data analysis, and we will be using these for nearly every example in this book." 
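As a couple of further small illustrations (a sketch, beyond the examples in this section), a vector knows how many values it holds, and arithmetic applies to every element at once:

# How many values are in the vector?
length(some_numbers)
# Add 10 to every element of the vector.
some_numbers + 10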
+ }, + { + "objectID": "resampling_with_code.html#sec-introducing-functions", + "href": "resampling_with_code.html#sec-introducing-functions", + "title": "5  Resampling with code", + "section": "5.7 Functions", + "text": "5.7 Functions\nFunctions are another tool that we will be using everywhere, and that you have seen already, although we have not introduced them until now.\nYou can think of functions as named production lines.\nFor example, consider the R function round.\nround is the name for a simple production line, that takes in a number, and (by default) sends back the number rounded to the nearest integer.\n\n\n\n\n\n\nWhat is an integer?\n\n\n\nAn integer is a positive or negative whole number.\nIn other words, a number is an integer if the number is either a whole number (0, 1, 2 …), or a negative whole number (-1, -2, -3 …). All of -208, -2, 0, 10, 105 are integers, but \\(\\frac{3}{5}\\), -10.3 and 0.2 are not.\nWe will use the term integer fairly often, because it is a convenient way to name all the positive and negative whole numbers.\n\n\nThink of a function as a named production line. We send the function (production line) raw material (components) to work on. The production line does some work on the components. A finished result comes off the other end.\nTherefore, think of round as the name of a production line, that takes in a component (in this case, any number), and does some work, and sends back the finished result (in this case, the number rounded to the nearest integer).\nThe components we send to a function are called arguments. The finished result the function sends back is the return value.\n\nArguments : the value or values we send to a function.\nReturn value : the values the function sends back.\n\nSee Figure 5.2 for an illustration of round as a production line.\n\n\n\n\n\nFigure 5.2: The round function as a production line\n\n\n\n\nIn the next few code chunks, you see examples where round takes in a not-integer number, as an argument, and sends back the nearest integer as the return value:\n\n# Put in 3.2, round sends back 3.\nround(3.2)\n\n[1] 3\n\n\n\n# Put in -2.7, round sends back -3.\nround(-2.7)\n\n[1] -3\n\n\nLike many functions, round can take more than one argument (component). You can send round the number of digits you want to round to, after the number you want it to work on, like this (see Figure 5.3):\n\n# Put in 3.1415, and the number of digits to round to (2).\n# round sends back 3.14\nround(3.1415, 2)\n\n[1] 3.14\n\n\n\n\n\n\n\nFigure 5.3: round with optional arguments specifying number of digits\n\n\n\n\nNotice that the second argument — here 2 — is optional. We only have to send round one argument: the number we want it to round. But we can optionally send it a second argument — the number of decimal places we want it to round to. If we don’t specify the second argument, then round assumes we want to round to 0 decimal places, and therefore, to the nearest integer." + }, + { + "objectID": "resampling_with_code.html#sec-named-arguments", + "href": "resampling_with_code.html#sec-named-arguments", + "title": "5  Resampling with code", + "section": "5.8 Functions and named arguments", + "text": "5.8 Functions and named arguments\nIn the example above, we sent round two arguments. round knows that we mean the first argument to be the number we want to round, and the second argument is the number of decimal places we want to round to. 
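A small sketch (beyond the examples above) shows why the order of the arguments matters — swapping them changes which number round treats as the value to round and which as the number of digits:

# First argument: the number to round. Second argument: the number of digits.
round(3.1415, 2)
# Swapped: now 2 is the number to round, so we just get 2 back.
round(2, 3.1415)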
It knows which is which by the position of the arguments — the first argument is the number it should round, and second is the number of digits.\nIn fact, internally, the round function also gives these arguments names. It calls the number it should round — x — and the number of digits it should round to — digits. This is useful, because it is often clearer and simpler to identify the argument we are specifying with its name, instead of just relying on its position.\nIf we aren’t using the argument names, we call the round function as we did above:\n\n# Put in 3.1415, and the number of digits to round to (2).\n# round sends back 3.14\nround(3.1415, 2)\n\n[1] 3.14\n\n\nIn this call, we relied on the fact that we, the people writing the code, and you, the person reading the code, remembers that the second argument (2) means the number of decimal places it should round to. But, we can also specify the argument using its name, like this (see Figure 5.4):\n\n# Put in 3.1415, and the number of digits to round to (2).\n# Use the name of the number-of-decimals argument for clarity:\nround(3.1415, digits=2)\n\n[1] 3.14\n\n\n\n\n\n\n\nFigure 5.4: The round function with argument names\n\n\n\n\n\n\n\n\n\nFigure 5.5: The np.round function with argument names\n\n\n\n\nHere R sees the first argument, as before, and assumes that it is the number we want to round. Then it sees the second, named argument — digits=2 — and knows, from the name, that we mean this to be the number of decimals to round to.\nIn fact, we could even specify both arguments by name, like this:\n\n# Put in 3.1415, and the number of digits to round to (2).\n# Name both arguments.\nround(x=3.1415, digits=2)\n\n[1] 3.14\n\n\nWe don’t usually name both arguments for round, as we have above, because it is so obvious that the first argument is the thing we want to round, and so naming the argument does not make it any more clear what the code is doing. But — as so often in programming — whether to use the names, or let R work out which argument is which by position, is a judgment call. The judgment you are making is about the way to write the code to be most clear for your reader, where your most important reader may be you, coming back to the code in a week or a year.\n\n\n\n\n\n\nHow do you know what names to use for the function arguments?\n\n\n\nYou can find the names of the function arguments in the help for the function, either online, or in the notebook interface. For example, to get the help for round, including the argument names, you could make a new chunk, and type ?round, then execute the cell by running the chunk. This will show the help for the function in the notebook interface." + }, + { + "objectID": "resampling_with_code.html#sec-ranges", + "href": "resampling_with_code.html#sec-ranges", + "title": "5  Resampling with code", + "section": "5.9 Ranges", + "text": "5.9 Ranges\nNow let us return to the variable some_numbers that we created above:\n\n# Make a vector of numbers, store with the name \"some_numbers\".\nsome_numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)\n# Show the value of \"some_numbers\"\nsome_numbers\n\n [1] 0 1 2 3 4 5 6 7 8 9\n\n\nIn fact, we often need to do this: generate a sequence or range of integers, such as 0 through 9.\n\n\n\n\n\n\nPick a number from 1 through 5\n\n\n\nRanges can be confusing in normal speech because it is not always clear whether they include their beginning and end. 
For example, if someone says “pick a number between 1 and 5”, do they mean all the numbers, including the first and last (any of 1 or 2 or 3 or 4 or 5)? Or do they mean only the numbers that are between 1 and 5 (so 2 or 3 or 4)? Or do they mean all the numbers up to, but not including 5 (so 1 or 2 or 3 or 4)?\nTo avoid this confusion, we will nearly always use “from” and “through” in ranges, meaning that we do include both the start and the end number. For example, if we say “pick a number from 1 through 5” we mean one of 1 or 2 or 3 or 4 or 5.\n\n\nCreating ranges of numbers is so common that R has a special syntax to do that.\n\nR allows you to write a colon (:) between two values, to mean that you want a vector (sequence) that is all the integers from the first value (before the colon) through the second value (after the colon):\n\n\n# A vector containing all the integers from 0 through 9.\nsome_integers = 0:9\nsome_integers\n\n [1] 0 1 2 3 4 5 6 7 8 9\n\n\nHere are some more examples of the colon syntax:\n\n# All the integers from 10 through 14\n10:14\n\n[1] 10 11 12 13 14\n\n\n\n# All the integers from -1 through 5\n-1:5\n\n[1] -1 0 1 2 3 4 5" + }, + { + "objectID": "resampling_with_code.html#sec-random-choice", + "href": "resampling_with_code.html#sec-random-choice", + "title": "5  Resampling with code", + "section": "5.10 Choosing values at random", + "text": "5.10 Choosing values at random\nWe can use the sample function to select a single value at random from the sequence of numbers in some_integers.\n\n\n\n\n\n\nMore on sample\n\n\n\nThe sample function will be a fundamental tool for taking many kinds of samples, and we cover it in more detail in Chapter 6.\n\n\n\n# Select 1 integer (the second argument) from the choices in some_integers\n# (the first argument).\nmy_integer <- sample(some_integers, 1)\n# Show the value that results.\nmy_integer\n\n[1] 6\n\n\nLike round (above), sample is a function.\nAs you remember, a function is a named production line. In our case, the production line has the name the sample function.\nWe sent the sample function. a value to work on — an argument. In this case, the argument was the value of some_integers.\n\nsample also needs the number of random values we should select from the first argument. We can send the number of values we want with the second argument.\n\nFigure 5.6 is a diagram illustrating an example run of the sample function (production line).\n\n\n\n\n\n\nFigure 5.6: Example run of the sample function\n\n\n\n\n\nHere is the same code again, with new comments.\n\n# Send the value of \"some_integers\" to sample.\n# some_integers is the *argument*. Ask sample to return 1 of the values.\n# Put the *return* value from the function into \"my_number\".\nmy_number <- sample(some_integers, 1)\n# Show the value that results.\nmy_number\n\n[1] 4" + }, + { + "objectID": "resampling_with_code.html#sec-sampling-arrays", + "href": "resampling_with_code.html#sec-sampling-arrays", + "title": "5  Resampling with code", + "section": "5.11 Sampling into vectors", + "text": "5.11 Sampling into vectors\n\nIn the code above, we asked R to select a single number at random — by sending 1 as the second argument to the function.\nAs you can imagine, we can tell sample to select any number of values at random, by changing the second argument to the function.\nIn our case, we would like R to select 17 numbers at random from the sequence of some_integers.\nBut — there is a complication here. 
By default, sample selects numbers from the first argument without replacement, meaning that, by default, sample cannot select the same number twice, and in our case, where we want 17 numbers, that is bad, because sample is going to run out of numbers. To get the result we want, we must also add an extra argument: replace=TRUE. replace=TRUE tells R to sample some_integers with replacement, where sample can select the same number more than once in the same sample. Sampling with and without replacement is a fundamental distinction in probability and statistics. Chapter 6 goes into much more detail about this, but for now, please take our word for it that using replace=TRUE for sample gives us the same effect as rolling several 10-sided dice.\n\n\n# Get 17 values from the *some_integers* vector.\n# Sample *with replacement*, so sample can select numbers more than once.\n# Store the 17 numbers with the name \"a\"\na <- sample(some_integers, 17, replace=TRUE)\n# Show the result.\na\n\n [1] 5 3 5 8 4 4 7 1 6 4 4 1 5 3 1 2 8\n\n\nAs you can see, the function sent back (returned) 17 numbers. Because it is sending back more than one number, the thing it sends back is a vector, where the vector has 17 elements." + }, + { + "objectID": "resampling_with_code.html#counting-results", + "href": "resampling_with_code.html#counting-results", + "title": "5  Resampling with code", + "section": "5.12 Counting results", + "text": "5.12 Counting results\nWe now have the code to do the equivalent of throwing 17 10-sided dice. This is the basis for one simulated trial in the world of Saint Hypothetical General.\nOur next job is to get the code to count the number of numbers that are not zero in the vector a. That will give us the number of patients who were cured in simulated trial.\nAnother way of asking this question, is to ask how many elements in a are greater than zero.\n\n5.12.1 Comparison\nTo ask whether a number is greater than zero, we use comparison. Here is a greater than zero comparison on a single number:\n\nn <- 5\n# Is the value of n greater than 0?\n# Show the result of the comparison.\nn > 0\n\n[1] TRUE\n\n\n> is a comparison — it asks a question about the numbers either side of it. In this case > is asking the question “is the value of n (on the left hand side) greater than 0 (on the right hand side)?” The value of n is 5, so the question becomes, “is 5 greater than 0?” The answer is Yes, and R represents this Yes answer as the value TRUE.\nIn contrast, the comparison below boils down to “is 0 greater than 0?”, to which the answer is No, and R represents this as FALSE.\n\np <- 0\n# Is the value of p greater than 0?\n# Show the result of the comparison.\np > 0\n\n[1] FALSE\n\n\nSo far you have seen the results of comparison on a single number. Now say we do the same comparison on a vector. For example, say we ask the question “is the value of a greater than 0”? Remember, a is a vector containing 17 values. We are comparing 17 values to one value (0). What answer do you think R will give? 
You may want to think a little about this before you read on.\nAs a reminder, here is the current value for a:\n\n# Show the current value for \"a\"\na\n\n [1] 5 3 5 8 4 4 7 1 6 4 4 1 5 3 1 2 8\n\n\nNow you have had some time to think, here is what happens:\n\n# Is the value of \"a\" greater than 0?\n# Show the result of the comparison.\na > 0\n\n [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE\n[16] TRUE TRUE\n\n\nThere are 17 values in a, so the comparison to 0 means there are 17 comparisons, and 17 answers. R therefore returns a vector of 17 elements, containing these 17 answers. The first answer is the answer to the question “is the value of the first element of a greater than 0”, and the second is the answer to “is the value of the second element of a greater than 0”.\nLet us store the result of this comparison to work on:\n\n# Is the value of \"a\" greater than 0?\n# Store as another vector \"q\".\nq <- a > 0\n# Show the value of q\nq\n\n [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE\n[16] TRUE TRUE" + }, + { + "objectID": "resampling_with_code.html#sec-count-with-sum", + "href": "resampling_with_code.html#sec-count-with-sum", + "title": "5  Resampling with code", + "section": "5.13 Counting TRUE values with sum", + "text": "5.13 Counting TRUE values with sum\nNotice above that there is one TRUE element in q for every element in a that was greater than 0. It only remains to count the number of TRUE values in q, to get the count of patients in our simulated trial who were cured.\nWe can use the R function sum to count the number of TRUE elements in a vector. As you can imagine, sum adds up all the elements in a vector, to give a single number. This will work as we want for the q vector, because R counts FALSE as equal to 0 and TRUE as equal to 1:\n\n# Question: is FALSE equal to 0?\n# Answer - Yes! (TRUE)\nFALSE == 0\n\n[1] TRUE\n\n\n\n# Question: is TRUE equal to 1?\n# Answer - Yes! (TRUE)\nTRUE == 1\n\n[1] TRUE\n\n\nTherefore, the function sum, when applied to a vector of TRUE and FALSE values, will count the number of TRUE values in the vector.\nTo see this in action we can make a new vector of TRUE and FALSE values, and try using sum on the new array.\n\n# A vector containing three TRUE values and two FALSE values.\ntrues_and_falses <- c(TRUE, FALSE, TRUE, TRUE, FALSE)\n# Show the new vector.\ntrues_and_falses\n\n[1] TRUE FALSE TRUE TRUE FALSE\n\n\nThe sum operation adds all the elements in the vector. Because TRUE counts as 1, and FALSE counts as 0, adding all the elements in trues_and_falses is the same as adding up the values 1 + 0 + 1 + 1 + 0, to give 3.\nWe can apply the same operation on q to count the number of TRUE values.\n\n# Count the number of TRUE values in \"q\"\n# This is the same as the number of values in \"a\" that are greater than 0.\nb <- sum(q)\n# Show the result\nb\n\n[1] 17" + }, + { + "objectID": "resampling_with_code.html#the-procedure-for-one-simulated-trial", + "href": "resampling_with_code.html#the-procedure-for-one-simulated-trial", + "title": "5  Resampling with code", + "section": "5.14 The procedure for one simulated trial", + "text": "5.14 The procedure for one simulated trial\nWe now have the whole procedure for one simulated trial. 
We can put the whole procedure in one chunk:\n\n# Procedure for one simulated trial\n\n# Get 17 values from the *some_integers* vector.\n# Store the 17 numbers with the name \"a\"\na <- sample(some_integers, 17, replace=TRUE)\n# Is the value of \"a\" greater than 0?\nq <- a > 0\n# Count the number of TRUE values in \"q\"\nb <- sum(q)\n# Show the result of this simulated trial.\nb\n\n[1] 15" + }, + { + "objectID": "resampling_with_code.html#repeating-the-trial", + "href": "resampling_with_code.html#repeating-the-trial", + "title": "5  Resampling with code", + "section": "5.15 Repeating the trial", + "text": "5.15 Repeating the trial\nNow we know how to do one simulated trial, we could just keep running the chunk above, and writing down the result each time. Once we had run the chunk 100 times, we would have 100 counts. Then we could look at the 100 counts to see how many were equal to 17 (all 17 simulated patients cured on that trial). At least that would be much faster than rolling 17 dice 100 times, but we would also like the computer to automate the process of repeating the trial, and keeping track of the counts.\nPlease forgive us as we race ahead again, as we did in the last chapter. As in the last chapter, we will use a results vector called z to store the count for each trial. As in the last chapter, we will use a for loop to repeat the trial procedure many times. As in the last chapter, we will not explain the counts vector of the for loop in any detail, because we are going to cover those in the next chapter.\nLet us now imagine that we want to do 100 simulated trials at Saint Hypothetical General. This will give us 100 counts. We will want to store the count for each trial.\nTo do this, we make a vector called z to hold the 100 counts. We have called the vector z, but we could have called it anything we liked, such as counts or results or cecilia.\n\n# A vector to hold the 100 count values.\n# Later, we will fill this in with real count values from simulated trials.\nz <- numeric(100)\n\nNext we use a for loop to repeat the single trial procedure.\nNotice that the single trial procedure, inside this for loop, is the same as the single trial procedure above — the only two differences are:\n\nThe trial procedure is inside the loop, and\nWe are storing the count for each trial as we go.\n\nWe will go into more detail on how this works in the next chapter.\n\n# Procedure for 100 simulated trials.\n\n# A vector to store the counts for each trial.\nz <- numeric(100)\n\n# Repeat the trial procedure 100 times.\nfor (i in 1:100) {\n # Get 17 values from the *some_integers* vector.\n # Store the 17 numbers with the name \"a\"\n a <- sample(some_integers, 17, replace=TRUE)\n # Is the value of \"a\" greater than 0?\n q <- a > 0\n # Count the number of TRUE values in \"q\".\n b <- sum(q)\n # Store the result at the next position in the \"z\" vector.\n z[i] = b\n # Now go back and do the next trial until finished.\n}\n# Show the result of all 100 trials.\nz\n\n [1] 14 15 12 17 13 17 16 16 14 16 16 15 17 14 17 13 16 15 16 15 13 14 17 17 15\n [26] 14 13 15 13 16 17 15 15 15 15 15 13 16 15 13 17 15 16 17 15 17 16 17 17 16\n [51] 12 17 16 12 16 15 15 13 16 16 16 13 16 14 15 15 15 15 14 15 14 11 15 13 14\n [76] 15 15 14 13 15 15 14 17 16 14 17 16 17 15 16 16 16 14 13 15 16 17 17 15 13\n\n\nFinally, we need to count how many of the trials results we stored in z gave a “cured” count of 17.\nWe can ask the question whether a single number is equal to 17 using the double equals comparison: ==.\n\ns <- 
17\n# Is the value of s equal to 17?\n# Show the result of the comparison.\ns == 17\n\n[1] TRUE\n\n\n\n\n\n\n\n\n\n\n\n\n\n5.16 Single and double equals\nNotice that the double equals == means something entirely different to R than the single equals = or the assignment arrow <-. In the code above, R reads s <- 17 to mean “Set the variable s to have the value 17”. In technical terms <- (like the single equals) is called an assignment operator, because it means assign the value 17 to the variable s.\nThe code s == 17 has a completely different meaning.\n\n\n5.17 Double equals\nThe double equals == above is a comparison in R.\n\nIt means “give TRUE if the value in s is equal to 17, and FALSE otherwise”. The == is a comparison operator — it is for comparing two values — here the value in s and the value 17. This comparison, like all comparisons, returns an answer that is either TRUE or FALSE. In our case s has the value 17, so the comparison becomes 17 == 17, meaning “is 17 equal to 17?”, to which the answer is “Yes”, and R sends back TRUE.\n\n\nWe can ask this question of all 100 counts by asking the question: is the vector z equal to 17, like this:\n\n# Is the value of z equal to 17?\nwere_cured <- z == 17\n# Show the result of the comparison.\nwere_cured\n\n [1] FALSE FALSE FALSE TRUE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE\n [13] TRUE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE TRUE\n [25] FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE\n [37] FALSE FALSE FALSE FALSE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE\n [49] TRUE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n [61] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n [73] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE TRUE FALSE\n [85] FALSE TRUE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n [97] TRUE TRUE FALSE FALSE\n\n\nFinally we use sum to count the number of TRUE values in the were_cured vector, to give the number of trials where all 17 patients were cured.\n\n# Count the number of TRUE values in \"were_cured\"\n# This is the same as the number of values in \"z\" that are equal to 17.\nn_all_cured <- sum(were_cured)\n# Show the result of the count.\nn_all_cured\n\n[1] 18\n\n\nn_all_cured is the number of simulated trials for which all patients were cured. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.\n\n# Proportion of trials where all patients were cured.\np <- n_all_cured / 100\n# Show the result\np\n\n[1] 0.18\n\n\nFrom this experiment, we see that there is roughly a one-in-six chance that all 17 patients are cured when using a 90% effective treatment." + }, + { + "objectID": "resampling_with_code.html#single-and-double-equals", + "href": "resampling_with_code.html#single-and-double-equals", + "title": "5  Resampling with code", + "section": "5.16 Single and double equals", + "text": "5.16 Single and double equals\nNotice that the double equals == means something entirely different to R than the single equals = or the assignment arrow <-. In the code above, R reads s <- 17 to mean “Set the variable s to have the value 17”. In technical terms <- (like the single equals) is called an assignment operator, because it means assign the value 17 to the variable s.\nThe code s == 17 has a completely different meaning."
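As a cross-check on the simulation estimate above, the exact probability that all 17 patients are cured, when each has a 0.9 chance of cure, can be computed directly; it comes out at about 0.167, close to the one-in-six figure from the 100 simulated trials:

# Exact probability that 17 out of 17 patients are cured.
0.9 ^ 17
# The same answer from R's binomial probability function.
dbinom(17, size=17, prob=0.9)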
+ }, + { + "objectID": "resampling_with_code.html#double-equals", + "href": "resampling_with_code.html#double-equals", + "title": "5  Resampling with code", + "section": "5.17 Double equals", + "text": "5.17 Double equals\nThe double equals == above is a comparison in R." + }, + { + "objectID": "resampling_with_code.html#what-have-we-learned-from-saint-hypothetical", + "href": "resampling_with_code.html#what-have-we-learned-from-saint-hypothetical", + "title": "5  Resampling with code", + "section": "5.18 What have we learned from Saint Hypothetical?", + "text": "5.18 What have we learned from Saint Hypothetical?\nWe started with a question about the results of the NCI trial on the new drug. The question was — was the result of their trial — 17 out of 17 patients cured — surprising.\nThen, for reasons we did not explain in detail, we changed tack, and asked the same question about a hypothetical set of 17 patients getting the standard treatment in Saint Hypothetical General.\nThat Hypothetical question turns out to be fairly easy to answer, because we can use simulation to estimate the chances that 17 out of 17 patients would be cured in such a hypothetical trial, on the assumption that each patient has a 90% chance of being cured with the standard treatment.\nThe answer for Saint Hypothetical General was — we would be somewhat surprised, but not astonished. We only get 17 out of 17 patients cured about one time in six.\nNow let us return to the NCI trial. Should the trial authors be surprised by their results? If they assumed that their new treatment was exactly as effective as the standard treatment, the result of the trial is a bit unusual, just by chance. It is up us to decide whether the result is unusual enough to make us think that the actual NCI treatment might in fact have been more effective than the standard treatment.\nYou will see this move again and again as we go through the book.\n\nWe take something that really happened — in this case the 17 out of 17 patients cured.\nThen we imagine a hypothetical world in which the results only depend on chance.\nWe do simulations in that hypothetical world to see how often we get a result like the one that happened in the real world.\nIf the real world result (17 out of 17) is an unusual, surprising result in the simulations from the hypothetical world, we take that as evidence that the real world result might not be due to chance alone.\n\nWe have just described the main idea in statistical inference. If that all seems strange and backwards to you, do not worry, we will go over that idea many times in this book. It is not a simple idea to grasp in one go. We hope you will find that, as you do more simulations, and think of more hypothetical worlds, the idea will start to make more sense. Later, we will start to think about asking other questions about probability and chance in the real world." + }, + { + "objectID": "resampling_with_code.html#conclusions", + "href": "resampling_with_code.html#conclusions", + "title": "5  Resampling with code", + "section": "5.19 Conclusions", + "text": "5.19 Conclusions\nCan you see how each of the operations that the computer carries out are analogous to the operations that you yourself executed when you solved this problem using 10-sided dice? This is exactly the procedure that we will use to solve every problem in probability and statistics that we must deal with. 
Either we will use a device such as coins or dice, or a random number table as an analogy for the physical process we are interested in (patients being cured, in this case), or we will simulate the analogy on the computer using the R program above.\nThe program above may not seem simple at first glance, but we think you will find, over the course of this book, that these programs become much simpler to understand than the older conventional approach to such problems that has routinely been taught to students for decades.\n\n\n\n\nDunleavy, Kieron, Stefania Pittaluga, John Janik, Nicole Grant, Margaret Shovlin, Richard Little, Robert Yarchoan, Seth Steinberg, Elaine S. Jaffe, and Wyndham H. Wilson. 2006. “Novel Treatment of Burkitt Lymphoma with Dose-Adjusted EPOCH-Rituximab: Preliminary Results Showing Excellent Outcome.” Blood 108 (11): 2736–36. https://doi.org/10.1182/blood.V108.11.2736.2736." + }, + { + "objectID": "sampling_tools.html#introduction", + "href": "sampling_tools.html#introduction", + "title": "6  Tools for samples and sampling", + "section": "6.1 Introduction", + "text": "6.1 Introduction\nNow you have some experience with R, probabilities and resampling, it is time to introduce some useful tools for our experiments and programs.\n\nStart of sampling_tools notebook\n\nDownload notebook\nInteract\n\n\n\n6.2 Samples and labels\nThus far we have used numbers such as 1 and 0 and 10 to represent the elements we are sampling from. For example, in Chapter 7, we were simulating the chance of a particular juror being black, given that 26% of the eligible jurors in the county were black. We used integers for that task, where we started with all the integers from 0 through 99, and asked R to select values at random from those integers. When R selected an integer from 0 through 25, we chose to label the resulting simulated juror as black — there are 26 integers in the range 0 through 25, so there is a 26% chance that any one integer will be in that range. If the integer was from 26 through 99, the simulated juror was white (there are 74 integers in the range 26 through 99).\nHere is the process of simulating a single juror, adapted from Section 7.3.3:\n\n# Get 1 random number from 0 through 99\n# replace=TRUE is redundant here (why?), but we leave it for consistency.\na <- sample(0:99, 1, replace=TRUE)\n\n# Show the result\na\n\n[1] 44\n\n\nAfter that, we have to unpack our labeling of 0 through 25 as being “black” and 26 through 99 as being “white”. We might do that like this:\n\nthis_juror_is_black <- a < 26\nthis_juror_is_black\n\n[1] FALSE\n\n\nThis all works as we want it to, but it’s just a little bit difficult to remember the coding (less than 26 means “black”, greater than 25 means “white”). We had to use that coding because we committed ourselves to using random numbers to simulate the outcomes.\nHowever, R can also store bits of text, called strings. Values that are bits of text can be very useful because the text values can be memorable labels for the entities we are sampling from, in our simulations.\n\n\n6.3 String values\nSo far, all the values you have seen in R vectors have been numbers. Now we get on to values that are bits of text. 
These are called strings.\nHere is a single R string value:\n\ns <- \"Resampling\"\ns\n\n[1] \"Resampling\"\n\n\n\nWe can see what type of value v holds by using the class function.\nFor example, for a number value, you will usually find the class is numeric:\n\nv <- 10\nclass(v)\n\n[1] \"numeric\"\n\n\n\nWhat is the class of the new bit-of-text value s?\n\nclass(s)\n\n[1] \"character\"\n\n\nThe R character value is a bit of text, and therefore consists of a sequence of characters.\nAs vectors are containers for other things, such as numbers, strings are containers for characters.\n\nTo get the length of a string, use the nchar function (Number of Characters):\n\n# Number of characters in s\nnchar(s)\n\n[1] 10\n\n\n\n\nR has a substring function that allows you to select individual characters or sequences of characters from a string. The arguments to substring are: first — the string; second — the index of the first character you want to select; and third — the index of the last character you want to select. For example to select the second character in the string you would specify 2 as the starting index, and 2 as the ending index, like this:\n\n# Get the second character of the string\nsecond_char <- substring(s, 2, 2)\nsecond_char\n\n[1] \"e\"\n\n\n\n\n\n6.4 Strings in vectors\nAs we can store numbers as elements in vectors, we can also store strings as vector elements.\n\nvector_of_strings = c('Julian', 'Lincoln', 'Simon')\nvector_of_strings\n\n[1] \"Julian\" \"Lincoln\" \"Simon\" \n\n\nAs for any vector, you can select elements with indexing. When you select an element with a given position (index), you get the string at at that position:\n\n# Julian Lincoln Simon's second name\nmiddle_name <- vector_of_strings[2]\nmiddle_name\n\n[1] \"Lincoln\"\n\n\nAs for numbers, we can compare strings with, for example, the == operator, that asks whether the two strings are equal:\n\nmiddle_name == 'Lincoln'\n\n[1] TRUE\n\n\n\n\n6.5 Repeating elements\nNow let us go back to the problem of selecting black and white jurors.\nWe started with the strategy of using numbers 0 through 25 to mean “black” jurors, and 26 through 99 to mean “white” jurors. We selected values at random from 0 through 99, and then worked out whether the number meant a “black” juror (was less than 26) or a “white” juror (was greater than 25).\nIt would be good to use strings instead of numbers to identify the potential jurors. Then we would not have to remember our coding of 0 through 25 and 26 through 99.\nIf only there was a way to make a vector of 100 strings, where 26 of the strings were “black” and 74 were “white”. 
Then we could select randomly from that array, and it would be immediately obvious that we had a “black” or “white” juror.\nLuckily, of course, we can do that, by using the rep function to construct the vector.\nHere is how that works:\n\n# The values that we will repeat to fill up the larger array.\njuror_types <- c('black', 'white')\n# The number of times we want to repeat \"black\" and \"white\".\nrepeat_nos <- c(26, 74)\n# Repeat \"black\" 26 times and \"white\" 74 times.\njury_pool <- rep(juror_types, repeat_nos)\n# Show the result\njury_pool\n\n [1] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\"\n [10] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\"\n [19] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"white\"\n [28] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [37] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [46] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [55] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [64] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [73] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [82] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [91] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n[100] \"white\"\n\n\nWe can use this vector of repeats of strings, to sample from. The result is easier to grasp, because we are using the string labels, instead of numbers:\n\n# Select one juror at random from the black / white pool.\n# replace=TRUE is redundant here, but we leave it for consistency.\none_juror <- sample(jury_pool, 1, replace=TRUE)\none_juror\n\n[1] \"black\"\n\n\nWe can select our full jury of 12 jurors, and see the results in a more obvious form:\n\n# Select one juror at random from the black / white pool.\none_jury <- sample(jury_pool, 12, replace=TRUE)\none_jury\n\n [1] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"black\" \"black\"\n[10] \"white\" \"white\" \"black\"\n\n\n\n\n\n\n\n\nUsing the size argument to sample\n\n\n\nIn the code above, we have specified the size of the sample we want (12) with the second argument to sample. As you saw in Section 5.8, we can also give names to the function arguments, in this case, to make it clearer what we mean by “12” in the code above. 
In fact, from now on, that is what we will do; we will specify the size of our sample by using the name for the function argument to sample — size — like this:\n\n# Select one juror at random from the black / white pool.\n# Specify the sample size using the \"size\" named argument.\none_jury <- sample(jury_pool, size=12, replace=TRUE)\none_jury\n\n [1] \"white\" \"white\" \"white\" \"white\" \"white\" \"black\" \"white\" \"white\" \"white\"\n[10] \"white\" \"white\" \"white\"\n\n\n\n\nWe can use == on the vector to get TRUE values where the juror was “black” and FALSE values otherwise:\n\nare_black <- one_jury == 'black'\nare_black\n\n [1] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE\n\n\nFinally, we can sum to find the number of black jurors (Section 5.13):\n\n# Number of black jurors in this simulated jury.\nn_black <- sum(are_black)\nn_black\n\n[1] 1\n\n\nPutting that all together, this is our new procedure to select one jury and count the number of black jurors:\n\none_jury <- sample(jury_pool, size=12, replace=TRUE)\nare_black <- one_jury == 'black'\nn_black <- sum(are_black)\nn_black\n\n[1] 4\n\n\nOr we can be even more compact by putting several statements together into one line:\n\n# The same as above, but on one line.\nn_black = sum(sample(jury_pool, size=12, replace=TRUE) == 'black')\nn_black\n\n[1] 4\n\n\n\n\n6.6 Resampling with and without replacement\nNow let us return to the details of Robert Swain’s case, that you first saw in Chapter 7.\nWe looked at the composition of Robert Swain’s 12-person jury — but in fact, by law, that does not have to be representative of the eligible jurors. The 12-person jury is drawn from a jury panel, of 100 people, and this should, in turn, be drawn from the population of all eligible jurors in the county, consisting, at the time, of “all male citizens in the community over 21 who are reputed to be honest, intelligent men and are esteemed for their integrity, good character and sound judgment.” So, unless there was some bias against black jurors, we might expect the 100-person jury panel to be a plausibly random sample of the eligible jurors, of whom 26% were black. See the Supreme Court case judgement for details.\nIn fact, in Robert Swain’s trial, there were 8 black members in the 100-person jury panel. We will leave it to you to adapt the simulation from Chapter 7 to ask the question — is 8% surprising as a random sample from a population with 26% black people?\nBut we have a different question: given that 8 out of 100 of the jury panel were black, is it surprising that none of the 12-person jury were black? 
As usual, we can answer that question with simulation.\nLet’s think about what a single simulated jury selection would look like.\nFirst we compile a representation of the actual jury panel, using the tools we have used above.\n\njuror_types <- c('black', 'white')\n# in fact there were 8 black jurors and 92 white jurors.\npanel_nos <- c(8, 92)\njury_panel <- rep(juror_types, panel_nos)\n# Show the result\njury_panel\n\n [1] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"white\"\n [10] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [19] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [28] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [37] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [46] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [55] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [64] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [73] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [82] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [91] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n[100] \"white\"\n\n\nNow consider taking a 12-person jury at random from this panel. We select the first juror at random, so that juror has an 8 out of 100 chance of being black. But when we select the second jury member, the situation has changed slightly. We can’t select the first juror again, so our panel is now 99 people. If our first juror was black, then the chances of selecting another black juror next are not 8 out of 100, but 7 out of 99 — a smaller chance. The problem is, as we shall see in more detail later, the chances of getting a black juror as the second, and third and fourth members of the jury depend on whether we selected a black juror as the first and second and third jury members. At its most extreme, imagine we had already selected eight jurors, and by some strange chance, all eight were black. Now our chances of selecting a black juror as the ninth juror are zero — there are no black jurors left to select from the panel.\nIn this case we are selecting jurors from the panel without replacement, meaning, that once we have selected a particular juror, we cannot select them again, and we do not put them back into the panel when we select our next juror.\nThis is the probability equivalent of the situation when you are dealing a hand of cards. Let’s say someone is dealing you, and you only, a hand of five cards. You get an ace as your first card. Your chances of getting an ace as your first card were just the number of aces in the deck divided by the number of cards — four in 52 – \\(\\frac{4}{52}\\). But for your second card, the probability has changed, because there is one less ace remaining in the pack, and one less card, so your chances of getting an ace as your second card are now \\(\\frac{3}{51}\\). This is sampling without replacement — in a normal game, you can’t get the same card twice. Of course, you could imagine getting a hand where you sampled with replacement. 
In that case, you’d get a card, you’d write down what it was, and you’d give the card back to the dealer, who would replace the card in the deck, shuffle again, and give you another card.\nAs you can see, the chances change if you are sampling with or without replacement, and the kind of sampling you do will dictate how you model your chances in your simulations.\nBecause this distinction is so common, and so important, the machinery you have already seen in sample has simple ways for you to select your sampling type. You have already seen sampling with replacement, and it looks like this:\n\n# Take a sample of 12 jurors from the panel *with replacement*\nstrange_jury <- sample(jury_panel, size=12, replace=TRUE)\nstrange_jury\n\n [1] \"white\" \"white\" \"white\" \"white\" \"black\" \"white\" \"white\" \"white\" \"white\"\n[10] \"white\" \"white\" \"white\"\n\n\nThis is a strange jury, because it can select any member of the jury pool more than once. Perhaps that juror would have to fill two (or more!) seats, or run quickly between them. But of course, that is not how juries are selected. They are selected without replacement:\n\nThus far, we have always done sampling with replacement, and, in order to do that with sample, we pass the argument replace=TRUE. We do that because the default for sample is replace=FALSE, that is, by default, sample does sampling without replacement. If you want to do sampling without replacement, you can just omit the replace=TRUE argument to sample, or you can specify replace=FALSE explicitly, perhaps to remind yourself that this is sampling without replacement. Whether you omit the replace argument, or specify replace=FALSE, the behavior is the same.\n\n\n# Take a sample of 12 jurors from the panel *without replacement*\n# replace=FALSE is the default for sample.\nok_jury <- sample(jury_panel, size=12)\nok_jury\n\n [1] \"white\" \"white\" \"black\" \"white\" \"black\" \"white\" \"white\" \"white\" \"black\"\n[10] \"white\" \"white\" \"white\"\n\n\n\n\n\n\n\n\nComments at the end of lines\n\n\n\nYou have already seen comment lines. These are lines beginning with #, to signal to R that the rest of the line is text for humans to read, but for R to ignore.\n\n# This is a comment. R ignores this line.\n\nYou can also put comments at the end of code lines, by finishing the code part of the line, and then putting a #, followed by more text. 
Again, R will ignore everything after the # as a text for humans, but not for R.\n\nmessage('Hello') # This is a comment at the end of the line.\n\nHello\n\n\n\n\nTo finish the procedure for simulating a single jury selection, we count the number of black jurors:\n\nn_black <- sum(ok_jury == 'black') # How many black jurors?\nn_black\n\n[1] 3\n\n\nNow we have the procedure for one simulated trial, here is the procedure for 10000 simulated trials.\n\ncounts <- numeric(10000)\nfor (i in 1:10000) {\n # Single trial procedure\n jury <- sample(jury_panel, size=12) # replace=FALSE is the default.\n n_black <- sum(jury == 'black') # How many black jurors?\n # Store the result\n counts[i] <- n_black\n}\n# Number of juries with 0 black jurors.\nzero_black <- sum(counts == 0)\n# Proportion\np_zero_black <- zero_black / 10000\nmessage(p_zero_black)\n\n0.3375\n\n\nWe have found that, when there are only 8% black jurors in the jury panel, having no black jurors in the final jury happens about 34% of the time, even in this case, where the jury is selected completely at random from the jury panel.\nWe should look for the main source of bias in the initial selection of the jury panel, not in the selection of the jury from the panel.\n\nEnd of sampling_tools notebook\n\n\n\n\n\n\n\n\nWith or without replacement for the original jury selection\n\n\n\nYou may have noticed in Chapter 7 that we were sampling Robert Swain’s jury from the eligible pool of jurors, with replacement. You might reasonably ask whether we should have selected from the eligible jurors without replacement, given that the same juror cannot serve more than once in the same jury, and therefore, the same argument applies there as here.\nThe trick there was that we were selecting from a very large pool of many thousand eligible jurors, of whom 26% were black. Let’s say there were 10,000 eligible jurors, of whom 2,600 were black. When selecting the first juror, there is exactly a 2,600 in 10,000 chance of getting a black juror — 26%. If we do get a black juror first, then the chance that the second juror will be black has changed slightly, 2,599 in 9,999. But these changes are very small; even if we select eleven black jurors out of eleven, when we come to the twelfth juror, we still have a 2,589 out of 9,989 chance of getting another black juror, and that works out at a 25.92% chance — hardly changed from the original 26%. So yes, you’d be right, we really should have compiled our population of 2,600 black jurors and 7,400 white jurors, and then sampled without replacement from that population, but as the resulting sample probabilities will be very similar to the simpler sampling with replacement, we chose to try and slide that one quietly past you, in the hope you would forgive us when you realized." + }, + { + "objectID": "sampling_tools.html#samples-and-labels", + "href": "sampling_tools.html#samples-and-labels", + "title": "6  Tools for samples and sampling", + "section": "6.2 Samples and labels", + "text": "6.2 Samples and labels\nThus far we have used numbers such as 1 and 0 and 10 to represent the elements we are sampling from. For example, in Chapter 7, we were simulating the chance of a particular juror being black, given that 26% of the eligible jurors in the county were black. We used integers for that task, where we started with all the integers from 0 through 99, and asked R to select values at random from those integers. 
When R selected an integer from 0 through 25, we chose to label the resulting simulated juror as black — there are 26 integers in the range 0 through 25, so there is a 26% chance that any one integer will be in that range. If the integer was from 26 through 99, the simulated juror was white (there are 74 integers in the range 26 through 99).\nHere is the process of simulating a single juror, adapted from Section 7.3.3:\n\n# Get 1 random number from 0 through 99\n# replace=TRUE is redundant here (why?), but we leave it for consistency.\na <- sample(0:99, 1, replace=TRUE)\n\n# Show the result\na\n\n[1] 44\n\n\nAfter that, we have to unpack our labeling of 0 through 25 as being “black” and 26 through 99 as being “white”. We might do that like this:\n\nthis_juror_is_black <- a < 26\nthis_juror_is_black\n\n[1] FALSE\n\n\nThis all works as we want it to, but it’s just a little bit difficult to remember the coding (less than 26 means “black”, greater than 25 means “white”). We had to use that coding because we committed ourselves to using random numbers to simulate the outcomes.\nHowever, R can also store bits of text, called strings. Values that are bits of text can be very useful because the text values can be memorable labels for the entities we are sampling from, in our simulations." + }, + { + "objectID": "sampling_tools.html#sec-intro-to-strings", + "href": "sampling_tools.html#sec-intro-to-strings", + "title": "6  Tools for samples and sampling", + "section": "6.3 String values", + "text": "6.3 String values\nSo far, all the values you have seen in R vectors have been numbers. Now we get on to values that are bits of text. These are called strings.\nHere is a single R string value:\n\ns <- \"Resampling\"\ns\n\n[1] \"Resampling\"\n\n\n\nWe can see what type of value v holds by using the class function.\nFor example, for a number value, you will usually find the class is numeric:\n\nv <- 10\nclass(v)\n\n[1] \"numeric\"\n\n\n\nWhat is the class of the new bit-of-text value s?\n\nclass(s)\n\n[1] \"character\"\n\n\nThe R character value is a bit of text, and therefore consists of a sequence of characters.\nAs vectors are containers for other things, such as numbers, strings are containers for characters.\n\nTo get the length of a string, use the nchar function (Number of Characters):\n\n# Number of characters in s\nnchar(s)\n\n[1] 10\n\n\n\n\nR has a substring function that allows you to select individual characters or sequences of characters from a string. The arguments to substring are: first — the string; second — the index of the first character you want to select; and third — the index of the last character you want to select. For example to select the second character in the string you would specify 2 as the starting index, and 2 as the ending index, like this:\n\n# Get the second character of the string\nsecond_char <- substring(s, 2, 2)\nsecond_char\n\n[1] \"e\"" + }, + { + "objectID": "sampling_tools.html#strings-in-s", + "href": "sampling_tools.html#strings-in-s", + "title": "6  Tools for samples and sampling", + "section": "6.4 Strings in vectors", + "text": "6.4 Strings in vectors\nAs we can store numbers as elements in vectors, we can also store strings as vector elements.\n\nvector_of_strings = c('Julian', 'Lincoln', 'Simon')\nvector_of_strings\n\n[1] \"Julian\" \"Lincoln\" \"Simon\" \n\n\nAs for any vector, you can select elements with indexing. 
When you select an element with a given position (index), you get the string at at that position:\n\n# Julian Lincoln Simon's second name\nmiddle_name <- vector_of_strings[2]\nmiddle_name\n\n[1] \"Lincoln\"\n\n\nAs for numbers, we can compare strings with, for example, the == operator, that asks whether the two strings are equal:\n\nmiddle_name == 'Lincoln'\n\n[1] TRUE" + }, + { + "objectID": "sampling_tools.html#sec-repeating", + "href": "sampling_tools.html#sec-repeating", + "title": "6  Tools for samples and sampling", + "section": "6.5 Repeating elements", + "text": "6.5 Repeating elements\nNow let us go back to the problem of selecting black and white jurors.\nWe started with the strategy of using numbers 0 through 25 to mean “black” jurors, and 26 through 99 to mean “white” jurors. We selected values at random from 0 through 99, and then worked out whether the number meant a “black” juror (was less than 26) or a “white” juror (was greater than 25).\nIt would be good to use strings instead of numbers to identify the potential jurors. Then we would not have to remember our coding of 0 through 25 and 26 through 99.\nIf only there was a way to make a vector of 100 strings, where 26 of the strings were “black” and 74 were “white”. Then we could select randomly from that array, and it would be immediately obvious that we had a “black” or “white” juror.\nLuckily, of course, we can do that, by using the rep function to construct the vector.\nHere is how that works:\n\n# The values that we will repeat to fill up the larger array.\njuror_types <- c('black', 'white')\n# The number of times we want to repeat \"black\" and \"white\".\nrepeat_nos <- c(26, 74)\n# Repeat \"black\" 26 times and \"white\" 74 times.\njury_pool <- rep(juror_types, repeat_nos)\n# Show the result\njury_pool\n\n [1] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\"\n [10] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\"\n [19] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"white\"\n [28] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [37] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [46] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [55] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [64] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [73] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [82] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [91] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n[100] \"white\"\n\n\nWe can use this vector of repeats of strings, to sample from. 
The result is easier to grasp, because we are using the string labels, instead of numbers:\n\n# Select one juror at random from the black / white pool.\n# replace=TRUE is redundant here, but we leave it for consistency.\none_juror <- sample(jury_pool, 1, replace=TRUE)\none_juror\n\n[1] \"black\"\n\n\nWe can select our full jury of 12 jurors, and see the results in a more obvious form:\n\n# Select one juror at random from the black / white pool.\none_jury <- sample(jury_pool, 12, replace=TRUE)\none_jury\n\n [1] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"black\" \"black\"\n[10] \"white\" \"white\" \"black\"\n\n\n\n\n\n\n\n\nUsing the size argument to sample\n\n\n\nIn the code above, we have specified the size of the sample we want (12) with the second argument to sample. As you saw in Section 5.8, we can also give names to the function arguments, in this case, to make it clearer what we mean by “12” in the code above. In fact, from now on, that is what we will do; we will specify the size of our sample by using the name for the function argument to sample — size — like this:\n\n# Select one juror at random from the black / white pool.\n# Specify the sample size using the \"size\" named argument.\none_jury <- sample(jury_pool, size=12, replace=TRUE)\none_jury\n\n [1] \"white\" \"white\" \"white\" \"white\" \"white\" \"black\" \"white\" \"white\" \"white\"\n[10] \"white\" \"white\" \"white\"\n\n\n\n\nWe can use == on the vector to get TRUE values where the juror was “black” and FALSE values otherwise:\n\nare_black <- one_jury == 'black'\nare_black\n\n [1] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE\n\n\nFinally, we can sum to find the number of black jurors (Section 5.13):\n\n# Number of black jurors in this simulated jury.\nn_black <- sum(are_black)\nn_black\n\n[1] 1\n\n\nPutting that all together, this is our new procedure to select one jury and count the number of black jurors:\n\none_jury <- sample(jury_pool, size=12, replace=TRUE)\nare_black <- one_jury == 'black'\nn_black <- sum(are_black)\nn_black\n\n[1] 4\n\n\nOr we can be even more compact by putting several statements together into one line:\n\n# The same as above, but on one line.\nn_black = sum(sample(jury_pool, size=12, replace=TRUE) == 'black')\nn_black\n\n[1] 4" + }, + { + "objectID": "sampling_tools.html#resampling-with-and-without-replacement", + "href": "sampling_tools.html#resampling-with-and-without-replacement", + "title": "6  Tools for samples and sampling", + "section": "6.6 Resampling with and without replacement", + "text": "6.6 Resampling with and without replacement\nNow let us return to the details of Robert Swain’s case, that you first saw in Chapter 7.\nWe looked at the composition of Robert Swain’s 12-person jury — but in fact, by law, that does not have to be representative of the eligible jurors. The 12-person jury is drawn from a jury panel, of 100 people, and this should, in turn, be drawn from the population of all eligible jurors in the county, consisting, at the time, of “all male citizens in the community over 21 who are reputed to be honest, intelligent men and are esteemed for their integrity, good character and sound judgment.” So, unless there was some bias against black jurors, we might expect the 100-person jury panel to be a plausibly random sample of the eligible jurors, of whom 26% were black. See the Supreme Court case judgement for details.\nIn fact, in Robert Swain’s trial, there were 8 black members in the 100-person jury panel. 
We will leave it to you to adapt the simulation from Chapter 7 to ask the question — is 8% surprising as a random sample from a population with 26% black people?\nBut we have a different question: given that 8 out of 100 of the jury panel were black, is it surprising that none of the 12-person jury were black? As usual, we can answer that question with simulation.\nLet’s think about what a single simulated jury selection would look like.\nFirst we compile a representation of the actual jury panel, using the tools we have used above.\n\njuror_types <- c('black', 'white')\n# in fact there were 8 black jurors and 92 white jurors.\npanel_nos <- c(8, 92)\njury_panel <- rep(juror_types, panel_nos)\n# Show the result\njury_panel\n\n [1] \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"black\" \"white\"\n [10] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [19] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [28] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [37] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [46] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [55] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [64] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [73] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [82] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n [91] \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\" \"white\"\n[100] \"white\"\n\n\nNow consider taking a 12-person jury at random from this panel. We select the first juror at random, so that juror has an 8 out of 100 chance of being black. But when we select the second jury member, the situation has changed slightly. We can’t select the first juror again, so our panel is now 99 people. If our first juror was black, then the chances of selecting another black juror next are not 8 out of 100, but 7 out of 99 — a smaller chance. The problem is, as we shall see in more detail later, the chances of getting a black juror as the second, and third and fourth members of the jury depend on whether we selected a black juror as the first and second and third jury members. At its most extreme, imagine we had already selected eight jurors, and by some strange chance, all eight were black. Now our chances of selecting a black juror as the ninth juror are zero — there are no black jurors left to select from the panel.\nIn this case we are selecting jurors from the panel without replacement, meaning, that once we have selected a particular juror, we cannot select them again, and we do not put them back into the panel when we select our next juror.\nThis is the probability equivalent of the situation when you are dealing a hand of cards. Let’s say someone is dealing you, and you only, a hand of five cards. You get an ace as your first card. Your chances of getting an ace as your first card were just the number of aces in the deck divided by the number of cards — four in 52 – \\(\\frac{4}{52}\\). But for your second card, the probability has changed, because there is one less ace remaining in the pack, and one less card, so your chances of getting an ace as your second card are now \\(\\frac{3}{51}\\). 
This is sampling without replacement — in a normal game, you can’t get the same card twice. Of course, you could imagine getting a hand where you sampled with replacement. In that case, you’d get a card, you’d write down what it was, and you’d give the card back to the dealer, who would replace the card in the deck, shuffle again, and give you another card.\nAs you can see, the chances change if you are sampling with or without replacement, and the kind of sampling you do will dictate how you model your chances in your simulations.\nBecause this distinction is so common, and so important, the machinery you have already seen in sample has simple ways for you to select your sampling type. You have already seen sampling with replacement, and it looks like this:\n\n# Take a sample of 12 jurors from the panel *with replacement*\nstrange_jury <- sample(jury_panel, size=12, replace=TRUE)\nstrange_jury\n\n [1] \"white\" \"white\" \"white\" \"white\" \"black\" \"white\" \"white\" \"white\" \"white\"\n[10] \"white\" \"white\" \"white\"\n\n\nThis is a strange jury, because it can select any member of the jury pool more than once. Perhaps that juror would have to fill two (or more!) seats, or run quickly between them. But of course, that is not how juries are selected. They are selected without replacement:\n\nThus far, we have always done sampling with replacement, and, in order to do that with sample, we pass the argument replace=TRUE. We do that because the default for sample is replace=FALSE, that is, by default, sample does sampling without replacement. If you want to do sampling without replacement, you can just omit the replace=TRUE argument to sample, or you can specify replace=FALSE explicitly, perhaps to remind yourself that this is sampling without replacement. Whether you omit the replace argument, or specify replace=FALSE, the behavior is the same.\n\n\n# Take a sample of 12 jurors from the panel *without replacement*\n# replace=FALSE is the default for sample.\nok_jury <- sample(jury_panel, size=12)\nok_jury\n\n [1] \"white\" \"white\" \"black\" \"white\" \"black\" \"white\" \"white\" \"white\" \"black\"\n[10] \"white\" \"white\" \"white\"\n\n\n\n\n\n\n\n\nComments at the end of lines\n\n\n\nYou have already seen comment lines. These are lines beginning with #, to signal to R that the rest of the line is text for humans to read, but for R to ignore.\n\n# This is a comment. R ignores this line.\n\nYou can also put comments at the end of code lines, by finishing the code part of the line, and then putting a #, followed by more text. 
Again, R will ignore everything after the # as a text for humans, but not for R.\n\nmessage('Hello') # This is a comment at the end of the line.\n\nHello\n\n\n\n\nTo finish the procedure for simulating a single jury selection, we count the number of black jurors:\n\nn_black <- sum(ok_jury == 'black') # How many black jurors?\nn_black\n\n[1] 3\n\n\nNow we have the procedure for one simulated trial, here is the procedure for 10000 simulated trials.\n\ncounts <- numeric(10000)\nfor (i in 1:10000) {\n # Single trial procedure\n jury <- sample(jury_panel, size=12) # replace=FALSE is the default.\n n_black <- sum(jury == 'black') # How many black jurors?\n # Store the result\n counts[i] <- n_black\n}\n# Number of juries with 0 black jurors.\nzero_black <- sum(counts == 0)\n# Proportion\np_zero_black <- zero_black / 10000\nmessage(p_zero_black)\n\n0.3375\n\n\nWe have found that, when there are only 8% black jurors in the jury panel, having no black jurors in the final jury happens about 34% of the time, even in this case, where the jury is selected completely at random from the jury panel.\nWe should look for the main source of bias in the initial selection of the jury panel, not in the selection of the jury from the panel.\n\nEnd of sampling_tools notebook" + }, + { + "objectID": "sampling_tools.html#conclusion", + "href": "sampling_tools.html#conclusion", + "title": "6  Tools for samples and sampling", + "section": "6.7 Conclusion", + "text": "6.7 Conclusion\nThis chapter introduced you to the idea of strings — values in R that store bits of text. Strings are very useful as labels for the entities we are sampling from, when we do our simulations. Strings are particularly useful when we use them with vectors, and one way we often do that is to build up vectors of strings to sample from, using the rep function.\nThere is a fundamental distinction between two different types of sampling — sampling with replacement, where we draw an element from a larger pool, then put that element back before drawing again, and sampling without replacement, where we remove the element from the remaining pool when we draw it into the sample. As we will see later, it is often a judgment call which of these two types of sampling is a more reasonable model of the world you are trying to simulate." + }, + { + "objectID": "resampling_with_code2.html#a-question-of-life-and-death", + "href": "resampling_with_code2.html#a-question-of-life-and-death", + "title": "7  More resampling with code", + "section": "7.1 A question of life and death", + "text": "7.1 A question of life and death\nThis example comes from the excellent Berkeley introduction to data science (Ani Adhikari and Wagner 2021).\nRobert Swain was a young black man who was sentenced to death in the early 60s. Swain’s trial was held in Talladega County, Alabama. At the time, 26% of the eligible jurors in that county were black, but every member of Swain’s jury was white. Swain and his legal team appealed to the Alabama Supreme Court, and then to the US Supreme Court, arguing that there was racial bias in the jury selection. They noted that there had been no black jurors in Talladega county since 1950, even though they made up about a quarter of the eligible pool of jurors. 
The US Supreme Court rejected this argument, in a 6 to 3 opinion, writing that “The overall percentage disparity has been small and reflects no studied attempt to include or exclude a specified number of Negros.”.\nSwain’s team presented a variety of evidence on bias in jury selection, but here we will look at the obvious and apparently surprising fact that Swain’s jury was entirely white. The Supreme Court decided that the “disparity” between selection of white and black jurors “has been small” — but how would they, and how would we, make a rational decision about whether this disparity really was “small”?\nYou might reasonably be worried about the result of this decision for Robert Swain. In fact his death sentence was invalidated by a later, unrelated decision and he served a long prison sentence instead. In 1986, the Supreme Court overturned the precedent set by Swain’s case, in Batson v. Kentucky, 476 U.S. 79." + }, + { + "objectID": "resampling_with_code2.html#a-small-disparity-and-a-hypothetical-world", + "href": "resampling_with_code2.html#a-small-disparity-and-a-hypothetical-world", + "title": "7  More resampling with code", + "section": "7.2 A small disparity and a hypothetical world", + "text": "7.2 A small disparity and a hypothetical world\nTo answer the question that the Supreme Court asked, we return to the method we used in the last chapter.\nLet us imagine a hypothetical world, in which each individual black or white person had an equal chance of being selected for the jury. Call this world Hypothetical County, Alabama.\nJust as in 1960’s Talladega County, 26% of eligible jurors in Hypothetical County are black. Hypothetical County jury selection has no bias against black people, so we expect around 26% of the jury to be black. 0.26 * 12 = 3.12, so we expect that, on average, just over 3 out of 12 jurors in a Hypothetical County jury will be black. But, if we select each juror at random from the population, that means that, sometimes, by chance, we will have fewer than 3 black jurors, and sometimes will have more than 3 black jurors. And, by chance, sometimes we will have no black jurors. But, if the jurors really are selected at random, how often would we expect this to happen — that there are no black jurors? We would like to estimate the probability that we will get no black jurors. If that probability is small, then we have some evidence that the disparity in selection between black and white jurors, was not “small”.\n\nWhat is the probability of an all white jury being randomly selected out of a population having 26% black people?" + }, + { + "objectID": "resampling_with_code2.html#designing-the-experiment", + "href": "resampling_with_code2.html#designing-the-experiment", + "title": "7  More resampling with code", + "section": "7.3 Designing the experiment", + "text": "7.3 Designing the experiment\nBefore we start, we need to figure out three things:\n\nWhat do we mean by one trial?\nWhat is the outcome of interest from the trial?\nHow do we simulate one trial?\n\nWe then take three steps to calculate the desired probability:\n\nRepeat the simulated trial procedure N times.\nCount M, the number of trials with an outcome that matches the outcome we are interested in.\nCalculate the proportion, M/N. This is an estimate of the probability in question.\n\nFor this problem, our task is made a little easier by the fact that our trial (in the resampling sense) is a simulated trial (in the legal sense). 
One trial requires 12 simulated jurors, each labeled by race (white or black).\nThe outcome we are interested in is the number of black jurors.\nNow comes the harder part. How do we simulate one trial?\n\n7.3.1 One trial\nOne trial requires 12 jurors, and we are interested only in the race of each juror. In Hypothetical County, where selection by race is entirely random, each juror has a 26% chance of being black.\nWe need a way of simulating a 26% chance.\nOne way of doing this is by getting a random number from 0 through 99 (inclusive). There are 100 numbers in the range 0 through 99 (inclusive).\nWe will arbitrarily say that the juror is white if the random number is in the range from 0 through 73. 74 of the 100 numbers are in this range, so the juror has a 74/100 = 74% chance of getting the label “white”. We will say the juror is black if the random number is in the range 74 through 99. There are 26 such numbers, so the juror has a 26% chance of getting the label “black”.\nNext we need a way of getting a random number in the range 0 through 99. This is an easy job for the computer, but if we had to do this with a physical device, we could get a single number by throwing two 10-sided dice, say a blue die and a green die. The face of the blue die will be the 10s digit, and the green face will be the ones digit. So, if the blue die comes up with 8 and the green die has 4, then the random number is 84.\nWe could then simulate 12 jurors by repeating this process 12 times, each time writing down “white” if the number is from 0 through 73, and “black” otherwise. The trial outcome is the number of times we wrote “black” for these 12 simulated jurors.\n\n\n7.3.2 Using code to simulate a trial\nWe use the same logic to simulate a trial with the computer. A little code makes the job easier, because we can ask R to give us 12 random numbers from 0 through 99, and to count how many of these numbers are in the range from 74 through 99. 
Numbers in the range from 74 through 99 correspond to black jurors.\n\n\n7.3.3 Random numbers from 0 through 99\nWe can now use R and sample from the last chapter to get 12 random numbers from 0 through 99.\n\n# Get 12 random numbers from 0 through 99\na <- sample(0:99, size=12, replace=TRUE)\n\n# Show the result\na\n\n [1] 44 22 75 62 46 30 67 72 68 4 23 78\n\n\n\n7.3.3.1 Counting the jurors\nWe use comparison and sum to count how many numbers are greater than 73, and therefore, in the range from 74 through 99:\n\n# How many numbers are greater than 73?\nb <- sum(a > 73)\n# Show the result\nb\n\n[1] 2\n\n\n\n\n7.3.3.2 A single simulated trial\nWe assemble the pieces from the last few sections to make a chunk that simulates a single trial:\n\n# Get 12 random numbers from 0 through 99\na <- sample(0:99, size=12, replace=TRUE)\n# How many are greater than 73?\nb <- sum(a > 73)\n# Show the result\nb\n\n[1] 2" + }, + { + "objectID": "resampling_with_code2.html#three-simulation-steps", + "href": "resampling_with_code2.html#three-simulation-steps", + "title": "7  More resampling with code", + "section": "7.4 Three simulation steps", + "text": "7.4 Three simulation steps\nNow we come back to the details of how we:\n\nRepeat the simulated trial many times;\nrecord the results for each trial;\ncalculate the required proportion as an estimate of the probability we seek.\n\nRepeating the trial many times is the job of the for loop, and we will come to that soon.\nIn order to record the results, we will store each trial result in a vector.\n\n\n\n\n\n\nMore on vectors\n\n\n\nSince we will be working with vectors a lot, it is worth knowing more about them.\nA vector is a container that stores many elements of the same type. You have already seen, in Chapter 2, how we can create a vector from a sequence of numbers using the c() function.\n\n# Make a vector of numbers, store with the name \"some_numbers\".\nsome_numbers <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)\n# Show the value of \"some_numbers\"\nsome_numbers\n\n [1] 0 1 2 3 4 5 6 7 8 9\n\n\nAnother way that we can create vectors is to use the numeric function to make a new vector where all the elements are 0.\n\n# Make a new vector containing 5 zeros.\nz <- numeric(5)\n# Show the value of \"z\"\nz\n\n[1] 0 0 0 0 0\n\n\nNotice the argument 5 to the numeric function. This tells the function how many zeros we want in the vector that the function will return.\n\n7.5 vector length\nThere are various useful things we can do with this vector container. One is to ask how many elements there are in the vector container. We can use the length function to calculate the number of elements in a vector:\n\n# Show the number of elements in \"z\"\nlength(z)\n\n[1] 5\n\n\n\n\n7.6 Indexing into vectors\nAnother thing we can do is set the value for a particular element in the vector. 
To do this, we use square brackets following the vector value, on the left hand side of the equals sign, like this:\n\n# Set the value of the first element in the vector.\nz[1] = 99\n# Show the new contents of the vector.\nz\n\n[1] 99 0 0 0 0\n\n\nRead the first line of code as “the element at position 1 gets a value of 99”.\nFor practice, let us also set the value of the third element in the vector:\n\n# Set the value of the third element in the vector.\nz[3] = 99\n# Show the new contents of the vector.\nz\n\n[1] 99 0 99 0 0\n\n\nRead the first code line above as “set the value at position 3 in the vector to have the value 99”.\nWe can also get the value of the element at a given position, using the same square-bracket notation:\n\n# Get the value of the *first* element in the vector.\n# Store the value with name \"v\"\nv = z[1]\n# Show the value we got\nv\n\n[1] 99\n\n\nRead the first code line here as “v gets the value at position 1 in the vector”.\nUsing square brackets to get and set element values is called indexing into the vector.\n\n\n\n\n7.6.1 Repeating trials\nAs a preview, let us now imagine that we want to do 50 simulated trials of Robert Swain’s jury in Hypothetical County. We will want to store the count for each trial, to give 50 counts.\nIn order to do this, we make a vector to hold the 50 counts. Call this vector z.\n\n# A vector to hold the 50 count values.\nz <- numeric(50)\n\nWe could run a single trial to get a single simulated count. Here we just repeat the code chunk you saw above. Notice that we can get a different result each time we run this code, because the numbers in a are random choices from the range 0 through 99, and different random numbers will give different counts.\n\n# Get 12 random numbers from 0 through 99\na <- sample(0:99, size=12, replace=TRUE)\n# How many are greater than 73?\nb <- sum(a > 73)\n# Show the result\nb\n\n[1] 0\n\n\nNow we have the result of a single trial, we can store it as the first number in the z vector:\n\n# Store the single trial count as the first value in the \"z\" vector.\nz[1] <- b\n# Show all the values in the \"z\" vector.\nz\n\n [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39] 0 0 0 0 0 0 0 0 0 0 0 0\n\n\nOf course we could just keep doing this: run the chunk corresponding to a trial, above, to get a new count, and then store it at the next position in the z vector. For example, we could store the counts for the first three trials with:\n\n# First trial\na <- sample(0:99, size=12, replace=TRUE)\nb <- sum(a > 73)\n# Store the result at the first position in z\nz[1] <- b\n\n# Second trial\na <- sample(0:99, size=12, replace=TRUE)\nb <- sum(a > 73)\n# Store the result at the second position in z\nz[2] <- b\n\n# Third trial\na <- sample(0:99, size=12, replace=TRUE)\nb <- sum(a > 73)\n# Store the result at the third position in z\nz[3] <- b\n\n# And so on ...\n\nThis would get terribly long and boring to type for 50 trials. Luckily computer code is very good at repeating the same procedure many times. For example, R can do this using a for loop. You have already seen a preview of the for loop in Chapter 2. Here we dive into for loops in more depth.\n\n\n7.6.2 For-loops in R\nA for-loop is a way of asking R to:\n\nTake a sequence of things, one by one, and\nDo the same task on each one.\n\nWe often use this idea when we are trying to explain a repeating procedure. For example, imagine we wanted to explain what the supermarket checkout person does for the items in your shopping basket. 
You might say that they do this:\n\nFor each item of shopping in your basket, they take the item off the conveyor belt, scan it, and put it on the other side of the till.\n\nYou could also break this description up into bullet points with indentation, to say the same thing:\n\nFor each item from your shopping basket, they:\n\nTake the item off the conveyor belt.\nScan the item.\nPut it on the other side of the till.\n\n\nNotice the logic; the checkout person is repeating the same procedure for each of a series of items.\nThis is the logic of the for loop in R. The procedure that R repeats is called the body of the for loop. In the example of the checkout person above, the repeating procedure is:\n\nTake the item off the conveyor belt.\nScan the item.\nPut it on the other side of the till.\n\nNow imagine we wanted to use R to print out the year of birth for each of the authors for the third edition of this book:\n\n\n\nAuthor\nYear of birth\n\n\n\n\nJulian Lincoln Simon\n1932\n\n\nMatthew Brett\n1964\n\n\nStéfan van der Walt\n1980\n\n\nIan Nimmo-Smith\n1944\n\n\n\nWe want to see this output:\nAuthor birth year is 1932\nAuthor birth year is 1964\nAuthor birth year is 1980\nAuthor birth year is 1944\nOf course, we could just ask R to print out these exact lines, like this:\n\nmessage('Author birth year is 1932')\n\nAuthor birth year is 1932\n\nmessage('Author birth year is 1964')\n\nAuthor birth year is 1964\n\nmessage('Author birth year is 1980')\n\nAuthor birth year is 1980\n\nmessage('Author birth year is 1944')\n\nAuthor birth year is 1944\n\n\nWe might instead notice that we are repeating the same procedure for each of the four birth years, and decide to do the same thing using a for loop:\n\nauthor_birth_years <- c(1932, 1964, 1980, 1944)\n\n# For each birth year\nfor (birth_year in author_birth_years) {\n # Repeat this procedure ...\n message('Author birth year is ', birth_year)\n}\n\nAuthor birth year is 1932\n\n\nAuthor birth year is 1964\n\n\nAuthor birth year is 1980\n\n\nAuthor birth year is 1944\n\n\nThe for loop starts with a line where we tell it what items we want to repeat the procedure for:\n\nfor (birth_year in author_birth_years) {\nThis initial line of the for loop ends with an opening curly brace {. The opening curly brace tells R that what follows, up until the matching closing curly brace }, is the procedure R should follow for each item. The lines between the opening { and closing } curly braces* are the body of the for loop.\n\nThe initial line of the for loop above tells R that it should take each item in author_birth_years, one by one — first 1932, then 1964, then 1980, then 1944. For each of these numbers it will:\n\nPut the number into the variable birth_year, then\nRun the code between the curly braces.\n\nJust as the person at the supermarket checkout takes each item in turn, for each iteration (repeat) of the for loop, birth_year gets a new value from the sequence in author_birth_years. birth_year is called the loop variable, because it is the variable that gets a new value each time we begin a new iteration of the for loop procedure. As for any variable in R, we can call our loop variable anything we like. 
We used birth_year here, but we could have used y or year or some other name.\n\nNotice that R insists we put parentheses (round brackets) around: the loop variable; in; and the sequence that will fill the loop variable — like this:\nfor (birth_year in author_birth_years) {\nDo not forget these round brackets — R insists on them.\n\nNow you know what the for loop is doing, you can see that the for loop above is equivalent to the following code:\n\nbirth_year <- 1932 # Set the loop variable to contain the first value.\nmessage('Author birth year is ', birth_year) # Use the first value.\n\nAuthor birth year is 1932\n\nbirth_year <- 1964 # Set the loop variable to contain the next value.\nmessage('Author birth year is ', birth_year) # Use the second value.\n\nAuthor birth year is 1964\n\nbirth_year <- 1980\nmessage('Author birth year is ', birth_year)\n\nAuthor birth year is 1980\n\nbirth_year <- 1944\nmessage('Author birth year is ', birth_year)\n\nAuthor birth year is 1944\n\n\nWriting the steps in the for loop out like this is called unrolling the loop. It can be a useful exercise to do this when you come across a for loop, in order to work through the logic of the loop. For example, you may want to write out the unrolled equivalent of the first couple of iterations, to see what the loop variable will be, and what will happen in the body of the loop.\nWe often use for loops with ranges (see Section 5.9). Here we use a loop to print out the numbers 1 through 4:\n\nfor (n in 1:4) {\n message('The loop variable n is ', n)\n}\n\nThe loop variable n is 1\n\n\nThe loop variable n is 2\n\n\nThe loop variable n is 3\n\n\nThe loop variable n is 4\n\n\nNotice that the range ended at 4, and that means we repeat the loop body 4 times. We can also use the loop variable value from the range as an index, to get or set the first, second, etc values from a vector.\nFor example, maybe we would like to show the author position and the author year of birth.\nRemember our author birth years:\n\nauthor_birth_years\n\n[1] 1932 1964 1980 1944\n\n\nWe can get (for example) the second author birth year with:\n\nauthor_birth_years[2]\n\n[1] 1964\n\n\nUsing the combination of looping over a range, and vector indexing, we can print out the author position and the author birth year:\n\nfor (n in 1:4) {\n year <- author_birth_years[n]\n message('Birth year of author position ', n, ' is ', year)\n}\n\nBirth year of author position 1 is 1932\n\n\nBirth year of author position 2 is 1964\n\n\nBirth year of author position 3 is 1980\n\n\nBirth year of author position 4 is 1944\n\n\nJust for practice, let us unroll the first two iterations through this for loop, to remind ourselves what the code is doing:\n\n# Unrolling the for loop.\nn <- 1\nyear <- author_birth_years[n] # Will be 1932\nmessage('Birth year of author position ', n, ' is ', year)\n\nBirth year of author position 1 is 1932\n\nn <- 2\nyear <- author_birth_years[n] # Will be 1964\nmessage('Birth year of author position ', n, ' is ', year)\n\nBirth year of author position 2 is 1964\n\n# And so on.\n\n\n\n7.6.3 Putting it all together\nHere is the code we worked out above, to implement a single trial:\n\n# Get 12 random numbers from 0 through 99\na <- sample(0:99, size=12, replace=TRUE)\n# How many are greater than 74?\nb <- sum(a > 74)\n# Show the result\nb\n\n[1] 0\n\n\nWe found that we could use vectors to store the results of these trials, and that we could use for loops to repeat the same procedure many times.\nNow we can put these parts together to do 50 
simulated trials:\n\n# Procedure for 50 simulated trials.\n\n# A vector to store the counts for each trial.\nz <- numeric(50)\n\n# Repeat the trial procedure 50 times.\nfor (i in 1:50) {\n # Get 12 random numbers from 0 through 99\n a <- sample(0:99, size=12, replace=TRUE)\n # How many are greater than 74?\n b <- sum(a > 74)\n # Store the result at the next position in the \"z\" vector.\n z[i] = b\n # Now go back and do the next trial until finished.\n}\n# Show the result of all 50 trials.\nz\n\n [1] 4 1 1 4 2 3 4 3 1 2 3 2 5 3 2 3 4 3 1 5 5 2 1 1 2 2 2 3 0 2 6 2 2 3 4 0 3 4\n[39] 2 5 3 2 3 3 3 4 2 2 4 4\n\n\nFinally, we need to count how many of the trials in z ended up with all-white juries. These are the trials with a z (count) value of 0.\nTo do this, we can ask a vector which elements match a certain condition. E.g.:\n\nx <- c(2, 1, 3, 0)\ny = x < 2\n# Show the result\ny\n\n[1] FALSE TRUE FALSE TRUE\n\n\nWe now use that same technique to ask, of each of the 50 counts, whether the vector z is equal to 0, like this:\n\n# Is the value of z equal to 0?\nall_white <- z == 0\n# Show the result of the comparison.\nall_white\n\n [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n[13] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n[25] FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE\n[37] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE\n[49] FALSE FALSE\n\n\nWe need to get the number of TRUE values in all_white, to find how many simulated trials gave all-white juries.\n\n# Count the number of True values in \"all_white\"\n# This is the same as the number of values in \"z\" that are equal to 0.\nn_all_white = sum(all_white)\n# Show the result of the comparison.\nn_all_white\n\n[1] 2\n\n\nn_all_white is the number of simulated trials for which all the jury members were white. It only remains to get the proportion of trials for which this was true, and to do this, we divide by the number of trials.\n\n# Proportion of trials where all jury members were white.\np <- n_all_white / 50\n# Show the result\np\n\n[1] 0.04\n\n\nFrom this initial simulation, it seems there is around a 4% chance that a jury selected randomly from the population, which was 26% black, would have no black jurors." + }, + { + "objectID": "resampling_with_code2.html#sec-array-length", + "href": "resampling_with_code2.html#sec-array-length", + "title": "7  More resampling with code", + "section": "7.5 vector length", + "text": "7.5 vector length\nThe are various useful things we can do with this vector container. One is to ask how many elements there are in the vector container. We can use the length function to calculate the number of elements in a vector:\n\n# Show the number of elements in \"z\"\nlength(z)\n\n[1] 5" + }, + { + "objectID": "resampling_with_code2.html#sec-array-indexing", + "href": "resampling_with_code2.html#sec-array-indexing", + "title": "7  More resampling with code", + "section": "7.6 Indexing into vectors", + "text": "7.6 Indexing into vectors\nAnother thing we can do is set the value for a particular element in the vector. 
To do this, we use square brackets following the vector value, on the left hand side of the equals sign, like this:\n\n# Set the value of the first element in the vector.\nz[1] = 99\n# Show the new contents of the vector.\nz\n\n[1] 99 0 0 0 0\n\n\nRead the first line of code as “the element at position 1 gets a value of 99”.\nFor practice, let us also set the value of the third element in the vector:\n\n# Set the value of the third element in the vector.\nz[3] = 99\n# Show the new contents of the vector.\nz\n\n[1] 99 0 99 0 0\n\n\nRead the first code line above as “set the value at position 3 in the vector to have the value 99”.\nWe can also get the value of the element at a given position, using the same square-bracket notation:\n\n# Get the value of the *first* element in the vector.\n# Store the value with name \"v\"\nv = z[1]\n# Show the value we got\nv\n\n[1] 99\n\n\nRead the first code line here as “v gets the value at position 1 in the vector”.\nUsing square brackets to get and set element values is called indexing into the vector." + }, + { + "objectID": "resampling_with_code2.html#many-many-trials", + "href": "resampling_with_code2.html#many-many-trials", + "title": "7  More resampling with code", + "section": "7.7 Many many trials", + "text": "7.7 Many many trials\nOur experiment above is only 50 simulated trials. The higher the number of trials, the more confident we can be of our estimate for p — the proportion of trials where we get an all-white jury.\nIt is no extra trouble for us to tell the computer to do a very large number of trials. For example, we might want to run 10,000 trials instead of 50. All we have to do is to run the loop 10,000 times instead of 50 times. The computer has to do more work, but it is more than up to the job.\nHere is exactly the same code we ran above, but collected into one chunk, and using 10,000 trials instead of 50. We have left out the comments, to make the code more compact.\n\n# Full simulation procedure, with 10,000 trials.\nz <- numeric(10000)\nfor (i in 1:10000) {\n a <- sample(0:99, size=12, replace=TRUE)\n b <- sum(a > 74)\n z[i] = b\n}\nall_white <- z == 0\nn_all_white <- sum(all_white)\np <- n_all_white / 10000\np\n\n[1] 0.0317\n\n\nWe now have a new, more accurate estimate of the proportion of Hypothetical County juries that are all white. The proportion is 0.032, and so 3.2%.\nThis proportion means that, for any one jury from Hypothetical County, there is a less than one in 20 chance that the jury would be all white.\nAs we will see in more detail later, we might consider using the results from this experiment in Hypothetical County, to reflect on the result we saw in the real Talladega County. We might conclude, for example, that there was likely some systematic difference between Hypothetical County and Talladega County. Maybe the difference was that there was, in fact, some bias in the jury selection in Talladega County, and that the Supreme Court was wrong to reject this. You will hear more of this line of reasoning later in the book." + }, + { + "objectID": "resampling_with_code2.html#conclusion", + "href": "resampling_with_code2.html#conclusion", + "title": "7  More resampling with code", + "section": "7.8 Conclusion", + "text": "7.8 Conclusion\nIn this chapter we studied a real life-and-death question, on racial bias and the death penalty. We continued our exploration of the ways we can use probability, and resampling, to draw conclusions about real events. 
Along the way, we went into more detail on vectors in R, and for loops; two basic tools in resampling.\nIn the next chapter, we will work through some more problems in probability, to show how we can use resampling, to answer questions about chance. We will add some more tools for writing code in R, to make your programs easier to write, read, and understand.\n\n\n\n\nAni Adhikari, John DeNero, and David Wagner. 2021. Computational and Inferential Thinking: The Foundations of Data Science. https://inferentialthinking.com. https://inferentialthinking.com." + }, + { + "objectID": "probability_theory_1a.html#introduction", + "href": "probability_theory_1a.html#introduction", + "title": "8  Probability Theory, Part 1", + "section": "8.1 Introduction", + "text": "8.1 Introduction\nLet’s assume we understand the nature of the system or mechanism that produces the uncertain events in which we are interested. That is, the probability of the relevant independent simple events is assumed to be known, the way we assume we know the probability of a single “6” with a given die. The task is to determine the probability of various sequences or combinations of the simple events — say, three “6’s” in a row with the die. These are the sorts of probability problems dealt with in this chapter.\n\nThe resampling method — or just call it simulation or Monte Carlo method, if you prefer — will be illustrated with classic examples. Typically, a single trial of the system is simulated with cards, dice, random numbers, or a computer program. Then trials are repeated again and again to estimate the frequency of occurrence of the event in which we are interested; this is the probability we seek. We can obtain as accurate an estimate of the probability as we wish by increasing the number of trials. The key task in each situation is designing an experiment that accurately simulates the system in which we are interested.\nThis chapter begins the Monte Carlo simulation work that culminates in the resampling method in statistics proper. The chapter deals with problems in probability theory — that is, situations where one wants to estimate the probability of one or more particular events when the basic structure and parameters of the system are known. In later chapters we move on to inferential statistics, where similar simulation work is known as resampling." + }, + { + "objectID": "probability_theory_1a.html#definitions", + "href": "probability_theory_1a.html#definitions", + "title": "8  Probability Theory, Part 1", + "section": "8.2 Definitions", + "text": "8.2 Definitions\nA few definitions first:\n\nSimple Event : An event such as a single flip of a coin, or one draw of a single card. A simple event cannot be broken down into simpler events of a similar sort.\nSimple Probability (also called “primitive probability”): The probability that a simple event will occur; for example, that my favorite football team, the Washington Commanders, will win on Sunday.\n\nDuring a recent season, the “experts” said that the Commanders had a 60 percent chance of winning on Opening Day; that estimate is a simple probability. We can model that probability by putting into a bucket six green balls to stand for wins, and four red balls to stand for losses (or we could use 60 and 40 balls, or 600 and 400). For the outcome on any given day, we draw one ball from the bucket, and record a simulated win if the ball is green, a loss if the ball is red.\nSo far the bucket has served only as a physical representation of our thoughts. 
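(If you would like to see the same bucket in code, here is a minimal R sketch — the name commanders_bucket is just our label for this example:

# Six green balls for "win", four red balls for "lose".
commanders_bucket <- rep(c('green', 'red'), c(6, 4))
# One simulated game: draw one ball at random from the bucket.
sample(commanders_bucket, size=1)

We will build buckets exactly like this, with rep and sample, in the simulations later in the chapter.)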
But as we shall see shortly, this representation can help us think clearly about the process of interest to us. It can also give us information that is not yet in our thoughts.\nEstimating simple probabilities wisely depends largely upon gathering evidence well. It also helps to adjust one’s probability estimates skillfully to make them internally consistent. Estimating probabilities has much in common with estimating lengths, weights, skills, costs, and other subjects of measurement and judgment.\nSome more definitions:\n\nComposite Event : A composite event is the combination of two or more simple events. Examples include all heads in three throws of a single coin; all heads in one throw of three coins at once; Sunday being a nice day and the Commanders winning; and the birth of nine females out of the next ten calves born if the chance of a female in a single birth is 0.48.\nCompound Probability : The probability that a composite event will occur.\n\nThe difficulty in estimating simple probabilities such as the chance of the Commanders winning on Sunday arises from our lack of understanding of the world around us. The difficulty of estimating compound probabilities such as the probability of it being a nice day Sunday and the Commanders winning is the weakness in our mathematical intuition interacting with our lack of understanding of the world around us. Our task in the study of probability and statistics is to overcome the weakness of our mathematical intuition by using a systematic process of simulation (or the devices of formulaic deductive theory).\nConsider now a question about a compound probability: What are the chances of the Commanders winning their first two games if we think that each of those games can be modeled by our bucket containing six green and four red balls? If one drawing from the bucket represents one game, a second drawing should represent the second game (assuming we replace the first ball drawn in order to keep the chances of winning the same for the two games). If so, two drawings from the bucket should represent two games. And we can then estimate the compound probability we seek with a series of two-ball trial experiments.\nMore specifically, our procedure in this case — the prototype of all procedures in the resampling simulation approach to probability and statistics — is as follows:\n\nPut six green (“Win”) and four red (“Lose”) balls in a bucket.\nDraw a ball, record its color, and replace it (so that the probability of winning the second simulated game is the same as the first).\nDraw another ball and record its color.\nIf both balls drawn were green, record “Yes”; otherwise record “No.”\nRepeat steps 2-4 a thousand times.\nCount the proportion of “Yes”s to the total number of “Yes”s and “No”s; the result is the probability we seek.\n\nMuch the same procedure could be used to estimate the probability of the Commanders winning (say) 3 of their next 4 games. We will return to this illustration again and we will see how it enables us to estimate many other sorts of probabilities.\n\nExperiment or Experimental Trial, or Trial, or Resampling Experiment : A simulation experiment or trial is a randomly-generated composite event which has the same characteristics as the actual composite event in which we are interested (except that in inferential statistics the resampling experiment is generated with the “benchmark” or “null” universe rather than with the “alternative” universe). \nParameter : A numerical property of a universe. 
For example, the “true” mean (don’t worry about the meaning of “true”), and the range between largest and smallest members, are two of its parameters." + }, + { + "objectID": "probability_theory_1a.html#theoretical-and-historical-methods-of-estimation", + "href": "probability_theory_1a.html#theoretical-and-historical-methods-of-estimation", + "title": "8  Probability Theory, Part 1", + "section": "8.3 Theoretical and historical methods of estimation", + "text": "8.3 Theoretical and historical methods of estimation\nAs introduced in Section 3.5, there are two general ways to tackle any probability problem: theoretical-deductive and empirical , each of which has two sub-types. These concepts have complicated links with the concept of “frequency series” discussed earlier.\n\nEmpirical Methods . One empirical method is to look at actual cases in nature — for example, examine all (or a sample of) the families in Brazil that have four children and count the proportion that have three girls among them. (This is the most fundamental process in science and in information-getting generally. But in general we do not discuss it in this book and leave it to courses called “research methods.” I regard that as a mistake and a shame, but so be it.) In some cases, of course, we cannot get data in such fashion because it does not exist.\nAnother empirical method is to manipulate the simple elements in such fashion as to produce hypothetical experience with how the simple elements behave. This is the heart of the resampling method, as well as of physical simulations such as wind tunnels.\nTheoretical Methods . The most fundamental theoretical approach is to resort to first principles, working with the elements in their full deductive simplicity, and examining all possibilities. This is what we do when we use a tree diagram to calculate the probability of three girls in families of four children.\n\n\nThe formulaic approach is a theoretical method that aims to avoid the inconvenience of resorting to first principles, and instead uses calculation shortcuts that have been worked out in the past.\nWhat the Book Teaches . This book teaches you the empirical method using hypothetical cases. Formulas can be misleading for most people in most situations, and should be used as a shortcut only when a person understands exactly which first principles are embodied in the formulas. But most of the time, students and practitioners resort to the formulaic approach without understanding the first principles that lie behind them — indeed, their own teachers often do not understand these first principles — and therefore they have almost no way to verify that the formula is right. Instead they use canned checklists of qualifying conditions." + }, + { + "objectID": "probability_theory_1a.html#samples-and-universes", + "href": "probability_theory_1a.html#samples-and-universes", + "title": "8  Probability Theory, Part 1", + "section": "8.4 Samples and universes", + "text": "8.4 Samples and universes\nThe terms “sample” and “universe” (or “population”) 1 were used earlier without definition. But now these terms must be defined.\n\n8.4.1 The concept of a sample\nFor our purposes, a “sample” is a collection of observations for which you obtain the data to be used in the problem. Almost any set of observations for which you have data constitutes a sample. 
(You might, or might not, choose to call a complete census a sample.)" + }, + { + "objectID": "probability_theory_1a.html#the-concept-of-a-universe-or-population", + "href": "probability_theory_1a.html#the-concept-of-a-universe-or-population", + "title": "8  Probability Theory, Part 1", + "section": "8.5 The concept of a universe or population", + "text": "8.5 The concept of a universe or population\nFor every sample there must also be a universe “behind” it. But “universe” is harder to define, partly because it is often an imaginary concept. A universe is the collection of things or people that you want to say that your sample was taken from . A universe can be finite and well defined — “all live holders of the Congressional Medal of Honor,” “all presidents of major universities,” “all billion-dollar corporations in the United States.” Of course, these finite universes may not be easy to pin down; for instance, what is a “major university”? And these universes may contain some elements that are difficult to find; for instance, some Congressional Medal winners may have left the country, and there may not be adequate public records on some billion-dollar corporations.\nUniverses that are called “infinite” are harder to understand, and it is often difficult to decide which universe is appropriate for a given purpose. For example, if you are studying a sample of patients suffering from schizophrenia, what is the universe from which the sample comes? Depending on your purposes, the appropriate universe might be all patients with schizophrenia now alive, or it might be all patients who might ever live. The latter concept of the universe of patients with schizophrenia is imaginary because some of the universe does not exist. And it is infinite because it goes on forever.\nNot everyone likes this definition of “universe.” Others prefer to think of a universe, not as the collection of people or things that you want to say your sample was taken from, but as the collection that the sample was actually taken from. This latter view equates the universe to the “sampling frame” (the actual list or set of elements you sample from) which is always finite and existent. The definition of universe offered here is simply the most practical, in our opinion." + }, + { + "objectID": "probability_theory_1a.html#the-conventions-of-probability", + "href": "probability_theory_1a.html#the-conventions-of-probability", + "title": "8  Probability Theory, Part 1", + "section": "8.6 The conventions of probability", + "text": "8.6 The conventions of probability\nLet’s review the basic conventions and rules used in the study of probability:\n\nProbabilities are expressed as decimals between 0 and 1, like percentages. The weather forecaster might say that the probability of rain tomorrow is 0.2, or 0.97.\nThe probabilities of all the possible alternative outcomes in a single “trial” must add to unity. If you are prepared to say that it must either rain or not rain, with no other outcome being possible — that is, if you consider the outcomes to be mutually exclusive (a term that we discuss below), then one of those probabilities implies the other. That is, if you estimate that the probability of rain is 0.2 — written \\(P(\\text{rain}) = 0.2\\) — that implies that you estimate that \\(P(\\text{no rain}) = 0.8\\).\n\n\n\n\n\n\n\nWriting probabilities\n\n\n\nWe will now be writing some simple formulae using probability. Above we write the probability of rain tomorrow as \\(P(\\text{rain})\\). 
This probability might be 0.2, and we could write this as:\n\\[\nP(\\text{rain}) = 0.2\n\\]\nWe can term “rain tomorrow” an event — the event may occur: \\(\\text{rain}\\), or it may not occur: \\(\\text{no rain}\\).\nWe often shorten the name of our event — here \\(\\text{rain}\\) — to a single letter, such as \\(R\\). So, in this case, we could write \\(P(\\text{rain}) = 0.2\\) as \\(P(R) = 0.2\\) — meaning the same thing. We tend to prefer single letters — as in \\(P(R)\\) — to longer names — as in \\(P(\\text{rain})\\). This is because the single letters can be easier to read in these compact formulae.\nAbove we have written the probability of “rain tomorrow” event not occurring as \\(P(\\text{no rain})\\). Another way of referring to an event not occurring is to suffix the event name with a caret (^) character like this: \\(\\ \\hat{} R\\). So read \\(P(\\ \\hat{} R)\\) as “the probability that it will not rain”, and it is just another way of writing \\(P(\\text{no rain})\\). We sometimes call \\(\\ \\hat{} R\\) the complement of \\(R\\).\nWe use \\(\\text{and}\\) between two events to mean both events occur.\nFor example, say we call the event “Commanders win the game” as \\(W\\). One example of a compound event (see above) would be the event \\(W \\text{and} R\\), meaning, the event where the Commanders won the game and it rained." + }, + { + "objectID": "probability_theory_1a.html#mutually-exclusive-events-the-addition-rule", + "href": "probability_theory_1a.html#mutually-exclusive-events-the-addition-rule", + "title": "8  Probability Theory, Part 1", + "section": "8.7 Mutually exclusive events — the addition rule", + "text": "8.7 Mutually exclusive events — the addition rule\nDefinition: If there are just two events \\(A\\) and \\(B\\) and they are “mutually exclusive” or “disjoint,” each implies the absence of the other. Green and red coats are mutually exclusive for you if (but only if) you never wear more than one coat at a time.\nTo state this idea formally, if \\(A\\) and \\(B\\) are mutually exclusive, then:\n\\[\nP(A \\text{ and } B) = 0\n\\]\nIf \\(A\\) is “wearing a green coat” and \\(B\\) is “wearing a red coat” (and you never wear two coats at the same time), then the probability that you are wearing a green coat and a red coat is 0: \\(P(A \\text{ and } B) = 0\\).\nIn that case, outcomes \\(A\\) and \\(B\\), and hence outcome \\(A\\) and its own absence (written \\(P(\\ \\hat{} A)\\)), are necessarily mutually exclusive, and hence the two probabilities add to unity:\n\n\\[\nP(A) + P(\\ \\hat{} A) = 1\n\\]\nThe sales of your store in a given year cannot be both above and below $1 million. Therefore if \\(P(\\text{sales > \\$1 million}) = 0.2\\), \\(P(\\text{sales <=\n\\$1 million}) = 0.8\\).\nThis “complements” rule is useful as a consistency check on your estimates of probabilities. If you say that the probability of rain is 0.2, then you should check that you think that the probability of no rain is 0.8; if not, reconsider both the estimates. The same for the probabilities of your team winning and losing its next game." + }, + { + "objectID": "probability_theory_1a.html#joint-probabilities", + "href": "probability_theory_1a.html#joint-probabilities", + "title": "8  Probability Theory, Part 1", + "section": "8.8 Joint probabilities", + "text": "8.8 Joint probabilities\nLet’s return now to the Commanders. We said earlier that our best guess of the probability that the Commanders will win the first game is 0.6. 
Let’s complicate the matter a bit and say that the probability of the Commanders winning depends upon the weather; on a nice day we estimate a 0.65 chance of winning, on a nasty (rainy or snowy) day a chance of 0.55. It is obvious that we then want to know the chance of a nice day, and we estimate a probability of 0.7. Let’s now ask the probability that both will happen — it will be a nice day and the Commanders will win .\nBefore getting on with the process of estimation itself, let’s tarry a moment to discuss the probability estimates. Where do we get the notion that the probability of a nice day next Sunday is 0.7? We might have done so by checking the records of the past 50 years, and finding 35 nice days on that date. If we assume that the weather has not changed over that period (an assumption that some might not think reasonable, and the wisdom of which must be the outcome of some non-objective judgment), our probability estimate of a nice day would then be 35/50 = 0.7.\nTwo points to notice here: 1) The source of this estimate is an objective “frequency series.” And 2) the data come to us as the records of 50 days, of which 35 were nice. We would do best to stick with exactly those numbers rather than convert them into a single number — 70 percent. Percentages have a way of being confusing. (When his point score goes up from 2 to 3, my racquetball partner is fond of saying that he has made a “fifty percent increase”; that’s just one of the confusions with percentages.) And converting to a percent loses information: We no longer know how many observations the percent is based upon, whereas 35/50 keeps that information.\nNow, what about the estimate that the Commanders have a 0.65 chance of winning on a nice day — where does that come from? Unlike the weather situation, there is no long series of stable data to provide that information about the probability of winning. Instead, we construct an estimate using whatever information or “hunch” we have. The information might include the Commanders’ record earlier in this season, injuries that have occurred, what the “experts” in the newspapers say, the gambling odds, and so on. The result certainly is not “objective,” or the result of a stable frequency series. But we treat the 0.65 probability in quite the same way as we treat the .7 estimate of a nice day. In the case of winning, however, we produce an estimate expressed directly as a percent.\nIf we are shaky about the estimate of winning — as indeed we ought to be, because so much judgment and guesswork inevitably goes into it — we might proceed as follows: Take hold of a bucket and two bags of balls, green and red. Put into the bucket some number of green balls — say 10. Now add enough red balls to express your judgment that the ratio is the ratio of expected wins to losses on a nice day, adding or subtracting green balls as necessary to get the ratio you want. If you end up with 13 green and 7 red balls, then you are “modeling” a probability of 0.65, as stated above. If you end up with a different ratio of balls, then you have learned from this experiment with your own mind processes that you think that the probability of a win on a nice day is something other than 0.65.\nDon’t put away the bucket. We will be using it again shortly. 
And keep in mind how we have just been using it, because our use later will be somewhat different though directly related.\nOne good way to begin the process of producing a compound estimate is by portraying the available data in a “tree diagram” like Figure 8.1. The tree diagram shows the possible events in the order in which they might occur. A tree diagram is extremely valuable whether you will continue with either simulation or the formulaic method.\n\nFigure 8.1: Tree diagram" + }, + { + "objectID": "probability_theory_1a.html#sec-what-is-resampling", + "href": "probability_theory_1a.html#sec-what-is-resampling", + "title": "8  Probability Theory, Part 1", + "section": "8.9 The Monte Carlo simulation method (resampling)", + "text": "8.9 The Monte Carlo simulation method (resampling)\nThe steps we follow to simulate an answer to the compound probability question are as follows:\n\nPut seven blue balls (for “nice day”) and three yellow balls (“not nice”) into a bucket labeled A.\nPut 65 green balls (for “win”) and 35 red balls (“lose”) into a bucket labeled B. This bucket represents the chance that the Commanders will win when it is a nice day.\nDraw one ball from bucket A. If it is blue, carry on to the next step; otherwise record “no” and stop.\nIf you have drawn a blue ball from bucket A, now draw a ball from bucket B, and if it is green, record “yes” on a score sheet; otherwise write “no.”\nRepeat steps 3-4 perhaps 10000 times.\nCount the number of “yes” trials.\nCompute the probability you seek as (number of “yeses” / 10000). (This is the same as (number of “yeses”) / (number of “yeses” + number of “noes”).)\n\nActually doing the above series of steps by hand is useful to build your intuition about probability and simulation methods. But the procedure can also be simulated with a computer. We will use R to do this in a moment." + }, + { + "objectID": "probability_theory_1a.html#if-statements-in", + "href": "probability_theory_1a.html#if-statements-in", + "title": "8  Probability Theory, Part 1", + "section": "8.10 If statements in R", + "text": "8.10 If statements in R\nBefore we get to the simulation, we need another feature of R, called a conditional or if statement.\nHere we have rewritten step 4 above, but using indentation to emphasize the idea:\nIf you have drawn a blue ball from bucket A:\n Draw a ball from bucket B\n if the ball is green:\n record \"yes\"\n otherwise:\n record \"no\".\nNotice the structure. The first line is the header of the if statement. It has a condition — this is why if statements are often called conditional statements. The condition here is “you have drawn a blue ball from bucket A”. If this condition is met — it is True that you have drawn a blue ball from bucket A — then we go on to do the stuff that is indented. Otherwise we do not do any of the stuff that is indented.\nThe indented stuff above is the body of the if statement. It is the stuff we do if the conditional at the top is True.\nNow let’s see how we would write that in R.\nLet’s make bucket A. Remember, this is the weather bucket. It has seven blue balls (for 70% fine days) and 3 yellow balls (for 30% rainy days). 
See Section 6.5 for the rep way of repeating elements multiple times.\n\nStart of fine_win notebook\n\nDownload notebook\nInteract\n\n\n\n# blue means \"nice day\", yellow means \"not nice\".\nbucket_A <- rep(c('blue', 'yellow'), c(7, 3))\nbucket_A\n\n [1] \"blue\" \"blue\" \"blue\" \"blue\" \"blue\" \"blue\" \"blue\" \"yellow\"\n [9] \"yellow\" \"yellow\"\n\n\nNow let us draw a ball at random from bucket_A:\n\na_ball <- sample(bucket_A, size=1)\na_ball\n\n[1] \"blue\"\n\n\nNow we run our first if statement. Running this code will display “The ball was blue” if the ball was blue, otherwise it will not display anything:\n\nif (a_ball == 'blue') {\n message('The ball was blue')\n}\n\nThe ball was blue\n\n\n\nNotice that the header line has if, followed by an open parenthesis ( introducing the conditional expression a_ball == 'blue'. There follows a close parenthesis ) to finish the conditional expression. Next there is an open curly brace { to signal the start of the body of the if statement. The body of the if statement is one or more lines of code, followed by the close curly brace }. Here there is only one line: message('The ball was blue'). R only runs the body of the if statement if the condition is TRUE.2\n\nTo confirm we see “The ball was blue” if a_ball is 'blue' and nothing otherwise, we can set a_ball and re-run the code:\n\n# Set value of a_ball so we know what it is.\na_ball <- 'blue'\n\n\nif (a_ball == 'blue') {\n # The conditional statement is True in this case, so the body does run.\n message('The ball was blue')\n}\n\nThe ball was blue\n\n\n\na_ball <- 'yellow'\n\n\nif (a_ball == 'blue') {\n # The conditional statement is False, so the body does not run.\n message('The ball was blue')\n}\n\nWe can add an else clause to the if statement. Remember the body of the if statement runs if the conditional expression (here a_ball == 'blue') is TRUE. The else clause runs if the conditional statement is FALSE. This may be clearer with an example:\n\na_ball <- 'blue'\n\n\nif (a_ball == 'blue') {\n # The conditional expression is True in this case, so the body runs.\n message('The ball was blue')\n} else {\n # The conditional expression was True, so the else clause does not run.\n message('The ball was not blue')\n}\n\nThe ball was blue\n\n\n\nNotice that the else clause of the if statement starts with the end of the if body with the closing curly brace }. else follows, followed in turn by the opening curly brace { to start the body of the else clause. The body of the else clause only runs if the initial conditional expression is not TRUE.\n\n\na_ball <- 'yellow'\n\n\nif (a_ball == 'blue') {\n # The conditional expression was False, so the body does not run.\n message('The ball was blue')\n} else {\n # but the else clause does run.\n message('The ball was not blue')\n}\n\nThe ball was not blue\n\n\nWith this machinery, we can now implement the full logic of step 4 above:\nIf you have drawn a blue ball from bucket A:\n Draw a ball from bucket B\n if the ball is green:\n record \"yes\"\n otherwise:\n record \"no\".\nHere is bucket B. Remember green means “win” (65% of the time) and red means “lose” (35% of the time). 
We could call this the “Commanders win when it is a nice day” bucket:\n\nbucket_B <- rep(c('green', 'red'), c(65, 35))\n\nThe full logic for step 4 is:\n\n# By default, say we have no result.\nresult = 'No result'\na_ball <- sample(bucket_A, size=1)\n# If you have drawn a blue ball from bucket A:\nif (a_ball == 'blue') {\n # Draw a ball at random from bucket B\n b_ball <- sample(bucket_B, size=1)\n # if the ball is green:\n if (b_ball == 'green') {\n # record \"yes\"\n result <- 'yes'\n # otherwise:\n } else {\n # record \"no\".\n result <- 'no'\n }\n}\n# Show what we got in this case.\nresult\n\n[1] \"yes\"\n\n\nNow we have everything we need to run many trials with the same logic.\n\n# The result of each trial.\n# To start with, say we have no result for all the trials.\nz <- rep('No result', 10000)\n\n# Repeat trial procedure 10000 times\nfor (i in 1:10000) {\n # draw one \"ball\" for the weather, store in \"a_ball\"\n # blue is \"nice day\", yellow is \"not nice\"\n a_ball <- sample(bucket_A, size=1)\n if (a_ball == 'blue') { # nice day\n # if no rain, check on game outcome\n # green is \"win\" (given nice day), red is \"lose\" (given nice day).\n b_ball <- sample(bucket_B, size=1)\n if (b_ball == 'green') { # Commanders win\n # Record result.\n z[i] <- 'yes'\n } else {\n z[i] <- 'no'\n }\n }\n # End of trial, go back to the beginning until done.\n}\n\n# Count of the number of times we got \"yes\".\nk <- sum(z == 'yes')\n# Show the proportion of *both* fine day *and* wins\nkk <- k / 10000\nkk\n\n[1] 0.461\n\n\nThe above procedure gives us the probability that it will be a nice day and the Commanders will win — about 46.1%.\nEnd of fine_win notebook\n\nLet’s say that we think that the Commanders have a 0.55 (55%) chance of winning on a not-nice day. With the aid of a bucket with a different composition — one made by substituting 55 green and 45 red balls in Step 4 — a similar procedure yields the chance that it will be a nasty day and the Commanders will win. With a similar substitution and procedure we could also estimate the probabilities that it will be a nasty day and the Commanders will lose, and a nice day and the Commanders will lose. The sum of these probabilities should come close to unity, because the sum includes all the possible outcomes. But it will not exactly equal unity because of what we call “sampling variation” or “sampling error.”\nPlease notice that each trial of the procedure begins with the same numbers of balls in the buckets as the previous trial. That is, you must replace the balls you draw after each trial in order that the probabilities remain the same from trial to trial. Later we will discuss the general concept of replacement versus non-replacement more fully." + }, + { + "objectID": "probability_theory_1a.html#the-deductive-formulaic-method", + "href": "probability_theory_1a.html#the-deductive-formulaic-method", + "title": "8  Probability Theory, Part 1", + "section": "8.11 The deductive formulaic method", + "text": "8.11 The deductive formulaic method\nIt also is possible to get an answer with formulaic methods to the question about a nice day and the Commanders winning. The following discussion of nice-day-Commanders-win handled by formula is a prototype of the formulaic deductive method for handling other problems.\nReturn now to the tree diagram (Figure 8.1) above. We can read from the tree diagram that 70 percent of the time it will be nice, and of that 70 percent of the time, 65 percent of the games will be wins. 
That is, \\(0.65 * 0.7 = 0.455\\) = the probability of a nice day and a win. That is the answer we seek. The method seems easy, but it also is easy to get confused and obtain the wrong answer." + }, + { + "objectID": "probability_theory_1a.html#multiplication-rule", + "href": "probability_theory_1a.html#multiplication-rule", + "title": "8  Probability Theory, Part 1", + "section": "8.12 Multiplication rule", + "text": "8.12 Multiplication rule\nWe can generalize what we have just done. The foregoing formula exemplifies what is known as the “multiplication rule”:\n\\[\nP(\\text{nice day and win}) = P(\\text{nice day}) * P(\\text{winning | nice day})\n\\]\nwhere the vertical line in \\(P(\\text{winning | nice day})\\) means “conditional upon” or “given that.” That is, the vertical line indicates a “conditional probability,” a concept we must consider in a minute.\nThe multiplication rule is a formula that produces the probability of the combination (juncture) of two or more events . More discussion of it will follow below." + }, + { + "objectID": "probability_theory_1a.html#sec-cond-uncond", + "href": "probability_theory_1a.html#sec-cond-uncond", + "title": "8  Probability Theory, Part 1", + "section": "8.13 Conditional and unconditional probabilities", + "text": "8.13 Conditional and unconditional probabilities\nTwo kinds of probability statements — conditional and unconditional — must now be distinguished.\nIt is the appropriate concept when many factors, all small relative to each other rather than one force having an overwhelming influence, affect the outcome.\nA conditional probability is formally written \\(P(\\text{Commanders win\n| rain}) = 0.65\\), and it is read “The probability that the Commanders will win if (given that) it rains is 0.65.” It is the appropriate concept when there is one (or more) major event of interest in decision contexts.\nLet’s use another football example to explain conditional and unconditional probabilities. In the year this was being written, the University of Maryland had an unpromising football team. Someone may nevertheless ask what chance the team had of winning the post season game at the bowl to which only the best team in the University of Maryland’s league is sent. One may say that if by some miracle the University of Maryland does get to the bowl, its chance would be a bit less than 50- 50 — say, 0.40. That is, the probability of its winning, conditional on getting to the bowl is 0.40. But the chance of its getting to the bowl at all is very low, perhaps 0.01. If so, the unconditional probability of winning at the bowl is the probability of its getting there multiplied by the probability of winning if it gets there; that is, 0.01 x 0.40 = 0.004. (It would be even better to say that .004 is the probability of winning conditional only on having a team, there being a league, and so on, all of which seem almost sure things.) Every probability is conditional on many things — that war does not break out, that the sun continues to rise, and so on. But if all those unspecified conditions are very sure, and can be taken for granted, we talk of the probability as unconditional.\nA conditional probability is a statement that the probability of an event is such-and-such if something else is so-and-so; it is the “if” that makes a probability statement conditional. 
True, in some sense all probability statements are conditional; for example, the probability of an even-numbered spade is 6/52 if the deck is a poker deck and not necessarily if it is a pinochle deck or Tarot deck. But we ignore such conditions for most purposes.\nMost of the use of the concept of probability in the social sciences is conditional probability. All hypothesis-testing statistics (discussed starting in Chapter 20) are conditional probabilities.\nHere is the typical conditional-probability question used in social-science statistics: What is the probability of obtaining this sample S (by chance) if the sample were taken from universe A? For example, what is the probability of getting a sample of five children with I.Q.s over 100 by chance in a sample randomly chosen from the universe of children whose average I.Q. is 100?\nOne way to obtain such conditional-probability statements is by examination of the results generated by universes like the conditional universe. For example, assume that we are considering a universe of children where the average I.Q. is 100.\nWrite down “over 100” and “under 100” respectively on many slips of paper, put them into a hat, draw five slips several times, and see how often the first five slips drawn are all over 100. This is the resampling (Monte Carlo simulation) method of estimating probabilities.\nAnother way to obtain such conditional-probability statements is formulaic calculation. For example, if half the slips in the hat have numbers under 100 and half over 100, the probability of getting five in a row above 100 is 0.03125 — that is, \\(0.5^5\\), or 0.5 x 0.5 x 0.5 x 0.5 x 0.5, using the multiplication rule introduced above. But if you are not absolutely sure you know the proper mathematical formula, you are more likely to come up with a sound answer with the simulation method.\nLet’s illustrate the concept of conditional probability with four cards — two aces and two 3’s (or two black and two red). What is the probability of an ace? Obviously, 0.5. If you first draw an ace, what is the probability of an ace now? That is, what is the probability of an ace conditional on having drawn one already? Obviously not 0.5.\nThis change in the conditional probabilities is the basis of mathematician Edward Thorp’s famous system of card-counting to beat the casinos at blackjack (Twenty One).\nCasinos can defeat card counting by using many decks at once so that conditional probabilities change more slowly, and are not very different than unconditional probabilities. Looking ahead, we will see that sampling with replacement, and sampling without replacement from a huge universe, are much the same in practice, so we can substitute one for the other at our convenience.\nLet’s further illustrate the concept of conditional probability with a puzzle (from Gardner 2001, 288). “… shuffle a packet of four cards — two red, two black — and deal them face down in a row. Two cards are picked at random, say by placing a penny on each. What is the probability that those two cards are the same color?”\n1. Play the game with the cards 100 times, and estimate the probability sought.\nOR\n\nPut slips with the numbers “1,” “1,” “2,” and “2” in a hat, or in a vector named N on a computer.\nShuffle the slips of paper by shaking the hat or shuffling the vector (of which more below).\nTake two slips of paper from the hat or from N, to get two numbers.\nCall the first number you selected A and the second B.\nAre A and B the same? 
If so, record “Yes” otherwise “No”.\nRepeat (2-5) 10000 times, and count the proportion of “Yes” results. That proportion equals the probability we seek to estimate.\n\nBefore we proceed to do this procedure in R, we need a command to shuffle a vector." + }, + { + "objectID": "probability_theory_1a.html#sec-shuffling", + "href": "probability_theory_1a.html#sec-shuffling", + "title": "8  Probability Theory, Part 1", + "section": "8.14 Shuffling with sample", + "text": "8.14 Shuffling with sample\nIn the recipe above, the vector N has four values:\n\nN = c(1, 1, 2, 2)\n\nFor the physical simulation, we specified that we would shuffle the slips of paper with these numbers, meaning that we would jumble them up into a random order. When we have done this, we will select two slips — say the first two — from the shuffled slips.\nAs we will be discussing more in various places, this shuffle-then-draw procedure is also called resampling without replacement. The without replacement idea refers to the fact that, after shuffling, we take a first virtual slip of paper from the shuffled vector, and then a second — but we do not replace the first slip of paper into the shuffled vector before drawing the second. For example, say I drew a “1” from N for the first value. If I am sampling without replacement then, when I draw the next value, the candidates I am choosing from are now “1”, “2” and “2”, because I have removed the “1” I got as the first value. If I had instead been sampling with replacement, then I would put back the “1” I had drawn, and would draw the second sample from the full set of “1”, “1”, “2”, “2”.\n\nIn fact we can use R’s sample function to shuffle any vector. The default behavior of sample is to sample without replacement. Up until now we have always told R to change that default behavior, using the replace=TRUE argument to sample. replace=TRUE tells sample to sample with replacement. Now we want to sample without replacement, so we leave out replace=TRUE to let sample do its default sampling, without replacement. That is, when we do not specify replace=, R assumes replace=FALSE — sampling without replacement.\n\n# The vector N, shuffled into a random order.\n# Note that \"sample\" *by default*, samples without replacement.\n# When we ask for size=4, we are asking for a sample that is the same\n# size as the original vector, and so, this will be the original vector\n# with a random reordering.\nshuffled <- sample(N, size=4)\n# The \"slips\" are now in random order.\nshuffled\n\n[1] 1 2 2 1\n\n\nAnd in fact, if you omit the size= argument to sample, it will assume you mean the size to be the same size as the input vector — in this case, it will assume size=length(N) and therefore size=4. So we can get the same effect of a reordered (shuffled) vector by omitting both size= and replace=:\n\n# The vector N, shuffled into a random order (the same procedure as the chunk\n# above).\nshuffled <- sample(N)\n# The \"slips\" are now in random order.\nshuffled\n\n[1] 2 1 1 2\n\n\nSee Section 11.4 for some more discussion of shuffling and sampling without replacement."
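Because sample works without replacement by default, a minimal sketch of drawing the two slips is to ask for just two values — this comes to the same thing as shuffling the whole vector and then taking the first two slips, which is what the notebook below does:

# Draw two slips from N, without replacement (the default for "sample").
two_slips <- sample(N, size=2)
two_slips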
+ }, + { + "objectID": "probability_theory_1a.html#code-answers-to-the-cards-and-pennies-problem", + "href": "probability_theory_1a.html#code-answers-to-the-cards-and-pennies-problem", + "title": "8  Probability Theory, Part 1", + "section": "8.15 Code answers to the cards and pennies problem", + "text": "8.15 Code answers to the cards and pennies problem\n\nStart of cards_pennies notebook\n\nDownload notebook\nInteract\n\n\n\n# Numbers representing the slips in the hat.\nN <- c(1, 1, 2, 2)\n\n# An array in which we will store the result of each trial.\nz <- rep('No result yet', 10000)\n\nfor (i in 1:10000) {\n # sample, used in this way, has the effect of shuffling the vector\n # into a random order. See the section linked above for an explanation.\n shuffled <- sample(N)\n\n A <- shuffled[1] # The first slip from the shuffled array.\n B <- shuffled[2] # The second slip from the shuffled array.\n\n # Set the result of this trial.\n if (A == B) {\n z[i] <- 'Yes'\n } else {\n z[i] <- 'No'\n }\n} # End of the loop.\n\n# How many times did we see \"Yes\"?\nk <- sum(z == 'Yes')\n\n# The proportion.\nkk <- k / 10000\n\nmessage(kk)\n\n0.3273\n\n\nNow let’s play the game differently, first picking one card and putting it back and shuffling before picking a second card. What are the results now? You can try it with the cards, but here is another program, similar to the last, to run that variation.\n\n# An array in which we will store the result of each trial.\nz <- rep('No result yet', 10000)\n\nfor (i in 1:10000) {\n # Shuffle the numbers in N into a random order.\n first_shuffle <- sample(N)\n # Draw a slip of paper.\n A <- first_shuffle[1] # The first slip.\n\n # Shuffle again (with all the slips).\n second_shuffle <- sample(N)\n # Draw a slip of paper.\n B <- second_shuffle[1] # The second slip.\n\n # Set the result of this trial.\n if (A == B) {\n z[i] <- 'Yes'\n } else {\n z[i] <- 'No'\n }\n} # End of the loop.\n\n# How many times did we see \"Yes\"?\nk <- sum(z == 'Yes')\n\n# The proportion.\nkk <- k / 10000\n\nmessage(kk)\n\n0.5059\n\n\nEnd of cards_pennies notebook\n\nWhy do you get different results in the two cases? Let’s ask the question differently: What is the probability of first picking a black card? Clearly, it is 50-50, or 0.5. Now, if you first pick a black card, what is the probability in the first game above of getting a second black card? There are two red and one black cards left, so now p = 1/3.\nBut in the second game, what is the probability of picking a second black card if the first one you pick is black? It is still 0.5 because we are sampling with replacement.\nThe probability of picking a second black card conditional on picking a first black card in the first game is 1/3, and it is different from the unconditional probability of picking a black card first. But in the second game the probability of the second black card conditional on first picking a black card is the same as the probability of the first black card.\nSo the reason you lose money if you play the first game at even odds against a carnival game operator is because the conditional probability is different than the original probability.\nAnd an illustrative joke: The best way to avoid there being a live bomb aboard your plane flight is to take an inoperative bomb aboard with you; the probability of one bomb is very low, and by the multiplication rule, the probability of two bombs is very very low . 
Two hundred years ago the same joke was told about the midshipman who, during a battle, stuck his head through a hole in the ship’s side that had just been made by an enemy cannon ball because he had heard that the probability of two cannonballs striking in the same place was one in a million.\nWhat’s wrong with the logic in the joke? The probability of there being a bomb aboard already, conditional on your bringing a bomb aboard, is the same as the conditional probability if you do not bring a bomb aboard. Hence you change nothing by bringing a bomb aboard, and do not reduce the probability of an explosion." + }, + { + "objectID": "probability_theory_1a.html#the-commanders-again-plus-leaving-the-game-early", + "href": "probability_theory_1a.html#the-commanders-again-plus-leaving-the-game-early", + "title": "8  Probability Theory, Part 1", + "section": "8.16 The Commanders again, plus leaving the game early", + "text": "8.16 The Commanders again, plus leaving the game early\nLet’s carry exactly the same process one tiny step further. Assume that if the Commanders win, there is a 0.3 chance you will leave the game early. Now let us ask the probability of a nice day, the Commanders winning, and you leaving early. You should be able to see that this probability can be estimated with three buckets instead of two. Or it can be computed with the multiplication rule as 0.65 * 0.7 * 0.3 = 0.1365 (about 0.14) — the probability of a nice day and a win and you leave early.\nThe book shows you the formal method — the multiplication rule, in this case — for several reasons: 1) Simulation is weak with very low probabilities, e.g. P(50 heads in 50 throws). But — a big but — statistics and probability is seldom concerned with very small probabilities. Even for games like poker, the orders of magnitude of 5 aces in a wild game with joker, or of a royal flush, matter little. 2) The multiplication rule is wonderfully handy and convenient for quick calculations in a variety of circumstances. A back-of-the-envelope calculation can be quicker than a simulation. And it can also be useful in situations where the probability you will calculate will be very small, in which case simulation can require considerable computer time to be accurate. (We will shortly see this point illustrated in the case of estimating the rate of transmission of AIDS by surgeons.) 3) It is useful to know the theory so that you are able to talk to others, or if you go on to other courses in the mathematics of probability and statistics.\nThe multiplication rule also has the drawback of sometimes being confusing, however. If you are in the slightest doubt about whether the circumstances are correct for applying it, you will be safer to perform a simulation as we did earlier with the Commanders, though in practice you are likely to simulate with the aid of a computer program, as we shall see shortly. So use the multiplication rule only when there is no possibility of confusion. Usually that means using it only when the events under consideration are independent.\nNotice that the same multiplication rule gives us the probability of any particular sequence of hits and misses — say, a miss, then a hit, then a hit if the probability of a single miss is 2/3. Among the 2/3 of the trials with misses on the first shot, 1/3 will next have a hit, so 2/3 x 1/3 equals the probability of a miss then a hit. 
Of those 2/9 of the trials, 1/3 will then have a hit, or 2/3 x 1/3 x 1/3 = 2/27 equals the probability of the sequence miss-hit-hit.\nThe multiplication rule is very useful in everyday life. It fits closely to a great many situations such as “What is the chance that it will rain (.3) and that (if it does rain) the plane will not fly (.8)?” Hence the probability of your not leaving the airport today is 0.3 x 0.8 = 0.24.\n\n\n\n\nGardner, Martin. 2001. The Colossal Book of Mathematics. W.W. Norton & Company Inc., New York. https://archive.org/details/B-001-001-265." + }, + { + "objectID": "probability_theory_1b.html#sec-independence", + "href": "probability_theory_1b.html#sec-independence", + "title": "9  Probability Theory Part I (continued)", + "section": "9.1 The special case of independence", + "text": "9.1 The special case of independence\nA key concept in probability and statistics is that of the independence of two events in which we are interested. Two events are said to be “independent” when one of them does not have any apparent relationship to the other. If I flip a coin that I know from other evidence is a fair coin, and I get a head, the chance of then getting another head is still 50-50 (one in two, or one to one.) And, if I flip a coin ten times and get heads the first nine times, the probability of getting a head on the tenth flip is still 50-50. Hence the concept of independence is characterized by the phrase “The coin has no memory.” (Actually the matter is a bit more complicated. If you had previously flipped the coin many times and knew it to be a fair coin, then the odds would still be 50-50, even after nine heads. But, if you had never seen the coin before, the run of nine heads might reasonably make you doubt that the coin was a fair one.)\nIn the Washington Commanders example above, we needed a different set of buckets to estimate the probability of a nice day plus a win, and of a nasty day plus a win. But what if the Commanders’ chances of winning are the same whether the day is nice or nasty? If so, we say that the chance of winning is independent of the kind of day. That is, in this special case,\n\\[\nP(\\text{win | nice day}) = P(\\text{win | nasty day}) \\text{ and } P(\\text{nice\nday and win})\n\\]\n\\[\n= P(\\text{nice day}) * P(\\text{winning | nice day})\n\\]\n\\[\n= P(\\text{nice day}) * P(\\text{winning})\n\\]\n\n\n\n\n\n\n\n\n\n\nSee section Section 8.13 for an explanation of this notation.\n\n\nIn this case we need only one set of two buckets to make all the estimates.\nIndependence means that the elements are drawn from 2 or more separate sets of possibilities . That is, \\(P(A | B) = P(A | \\ \\hat{} B) = P(A)\\) and vice versa.\n\nIn other words, if the occurrence of the first event does not change this probability that the second event will occur, then the events are independent.\nAnother way to put the matter: Events A and B are said to be independent of each other if knowing whether A occurs does not change the probability that B will occur, and vice versa. If knowing whether A does occur alters the probability of B occurring, then A and B are dependent.\nIf two events are independent, the multiplication rule simplifies to \\(P(A \\text{ and } B) = P(A) * P(B)\\) . 
I’ll repeat once more: This rule is simply a mathematical shortcut, and one can make the desired estimate by simulation.\nAlso again, if two events are not independent — that is, if \\(P(A | B)\\) is not equal to \\(P(A)\\) because \\(P(A)\\) is dependent upon the occurrence of \\(B\\), then the formula to be used now is, \\(P(A \\text{ and } B) = P(A | B) * P(B)\\) , which is sufficiently confusing that you are probably better off with a simulation.\nWhat about if each of the probabilities is dependent on the other outcome? There is no easy formulaic method to deal with such a situation.\nPeople commonly make the mistake of treating independent events as non-independent, perhaps from superstitious belief. After a long run of blacks, roulette gamblers say that the wheel is “due” to come up red. And sportswriters make a living out of interpreting various sequences of athletic events that occur by chance, and they talk of teams that are “due” to win because of the “Law of Averages.” For example, if Barry Bonds goes to bat four times without a hit, all of us (including trained statisticians who really know better) feel that he is “due” to get a hit and that the probability of his doing so is very high — higher that is, than his season’s average. The so-called “Law of Averages” implies no such thing, of course.\nEvents are often dependent in subtle ways. A boy may telephone one of several girls chosen at random. But, if he calls the same girl again (or if he does not call her again), the second event is not likely to be independent of the first. And the probability of his calling her is different after he has gone out with her once than before he went out with her.\nAs noted in the section above, events A and B are said to be independent of each other if the conditional probabilities of A and B remain the same . And the conditional probabilities remain the same if sampling is conducted with replacement .\n\nLet’s now re-consider the multiplication rule with the special but important case of independence.\n\n9.1.1 Example: Four Events in a Row — The Multiplication Rule\nAssume that we want to know the probability of four successful archery shots in a row, where the probability of a success on a given shot is .25.\nInstead of simulating the process with resampling trials we can, if we wish, arrive at the answer with the “multiplication rule.” This rule says that the probability that all of a given number of independent events (the successful shots) will occur (four out of four in this case) is the product of their individual probabilities — in this case, 1/4 x 1/4 x 1/4 x 1/4 = 1/256. If in doubt about whether the multiplication rule holds in any given case, however, you may check by resampling simulation. For the case of four daughters in a row, assuming that the probability of a girl is .5, the probability is 1/2 x 1/2 x 1/2 x 1/2 = 1/16.\nBetter yet, we’d use the more exact probability of getting a girl: \\(100/206\\), and multiply out the result as \\((100/206)^4\\). An important point here, however: we have estimated the probability of a particular family having four daughters as 1 in 16 — that is, odds of 15 to 1. But note well: This is a very different idea from stating that the odds are 15 to 1 against some family’s having four daughters in a row. In fact, as many families will have four girls in a row as will have boy-girl-boy-girl in that order or girl-boy-girl-boy or any other series of four children. 
The chances against any particular series are the same — 1 in 16 — and one-sixteenth of all four-children families will have each of these series, on average. This means that if your next-door neighbor has four daughters, you cannot say how much “out of the ordinary” the event is. It is easy to slip into unsound thinking about this matter.\n\nWhy do we multiply the probabilities of the independent simple events to learn the probability that they will occur jointly (the composite event)? Let us consider this in the context of three basketball shots each with 1/3 probability of hitting.\n\n\n\n\n\nFigure 9.1: Tree Diagram for 3 Basketball Shots, Probability of a Hit is 1/3\n\n\n\n\nFigure 9.1 is a tree diagram showing a set of sequential simple events where each event is conditional upon a prior simple event. Hence every probability after the first is a conditional probability.\nIn Figure 9.1, follow the top path first. On approximately one-third of the occasions, the first shot will hit. Among that third of the first shots, roughly a third will again hit on the second shot, that is, 1/3 of 1/3 or 1/3 x 1/3 = 1/9. The top path makes it clear that in 1/3 x 1/3 = 1/9 of the trials, two hits in a row will occur. Then, of the 1/9 of the total trials in which two hits in a row occur, about 1/3 will go on to a third hit, or 1/3 x 1/3 x 1/3 = 1/27. Remember that we are dealing here with independent events; regardless of whether the player made his first two shots, the probability is still 1 in 3 on the third shot." + }, + { + "objectID": "probability_theory_1b.html#the-addition-of-probabilities", + "href": "probability_theory_1b.html#the-addition-of-probabilities", + "title": "9  Probability Theory Part I (continued)", + "section": "9.2 The addition of probabilities", + "text": "9.2 The addition of probabilities\nBack to the Washington Commanders again. You ponder more deeply the possibility of a nasty day, and you estimate with more discrimination that the probability of snow is .1 and of rain is .2 (with .7 for a nice day). Now you wonder: What is the probability of a rainy day or a nice day?\nTo find this probability by simulation:\n\nPut 7 blue balls (nice day), 1 black ball (snowy day) and 2 gray balls (rainy day) into a bucket. You want to know the probability of a blue or a gray ball. To find this probability:\nDraw one ball and record “yes” if its color is blue or gray, “no” otherwise.\nRepeat step 1 perhaps 200 times.\nFind the proportion of “yes” trials.\n\nThis procedure certainly will do the job. And simulation may be unavoidable when the situation gets more complex. But in this simple case, you are likely to see that you can compute the probability by adding the .7 probability of a nice day and the .2 probability of a rainy day to get the desired probability. This procedure of formulaic deductive probability theory is called the addition rule." + }, + { + "objectID": "probability_theory_1b.html#the-addition-rule", + "href": "probability_theory_1b.html#the-addition-rule", + "title": "9  Probability Theory Part I (continued)", + "section": "9.3 The addition rule", + "text": "9.3 The addition rule\nThe addition rule applies to mutually exclusive outcomes — that is, the case where if one outcome occurs, the other(s) cannot occur; one event implies the absence of the other when events are mutually exclusive. Green and red coats are mutually exclusive if you never wear more than one coat at a time. If there are only two possible mutually-exclusive outcomes, the outcomes are complementary. 
It may be helpful to note that mutual exclusivity equals total dependence; if one outcome occurs, the other cannot. Hence we write formally that\n\\[\n\\text{If} P(A \\text{ and } B) = 0 \\text{ then }\n\\]\n\\[\nP(A \\text{ or } B) = P(A) + P(B)\n\\]\nAn outcome and its absence are mutually exclusive, and their probabilities add to unity.\n\\[\nP(A) + P(\\ \\hat{} A) = 1\n\\]\nExamples include a) rain and no rain, and b) if \\(P(\\text{sales > \\$1 million}) = 0.2\\), then \\(P(\\text{sales <= \\$1 million}) = 0.8\\).\nAs with the multiplication rule, the addition rule can be a useful shortcut. The answer can always be obtained by simulation, too.\nWe have so far implicitly assumed that a rainy day and a snowy day are mutually exclusive. But that need not be so; both rain and snow can occur on the same day; if we take this possibility into account, we cannot then use the addition rule.\nConsider the case in which seven days in ten are nice, one day is rainy, one day is snowy, and one day is both rainy and snowy. What is the chance that it will be either nice or snowy? The procedure is just as before, except that some rainy days are included because they are also snowy.\nWhen A and B are not mutually exclusive — when it is possible that the day might be both rainy and snowy, or you might wear both red and green coats on the same day, we write (in the latter case) P(red and green coats) > 0, and the appropriate formula is\n\\[\nP(\\text{red or green}) = P(\\text{red}) + P(\\text{green}) - P(\\text{red and green}) `\n\\]\n\nIn this case as in much of probability theory, the simulation for the case in which the events are not mutually exclusive is no more complex than when they are mutually exclusive; indeed, if you simulate you never even need to know the concept of mutual exclusivity or inquire whether that is your situation. In contrast, the appropriate formula for non-exclusivity is more complex, and if one uses formulas one must inquire into the characteristics of the situation and decide which formula to apply depending upon the classification; if you classify wrongly and therefore apply the wrong formula, the result is a wrong answer.\n\nTo repeat, the addition rule only works when the probabilities you are adding are mutually exclusive — that is, when the two cannot occur together.\nThe multiplication and addition rules are as different from each other as mortar and bricks; both, however, are needed to build walls. The multiplication rule pertains to a single outcome composed of two or more elements (e.g. weather, and win-or-lose), whereas the addition rule pertains to two or more possible outcomes for one element. Drawing from a card deck (with replacement) provides an analogy: the addition rule is like one draw with two or more possible cards of interest, whereas the multiplication rule is like two or more cards being drawn with one particular “hand” being of interest." + }, + { + "objectID": "probability_theory_1b.html#theoretical-devices-for-the-study-of-probability", + "href": "probability_theory_1b.html#theoretical-devices-for-the-study-of-probability", + "title": "9  Probability Theory Part I (continued)", + "section": "9.4 Theoretical devices for the study of probability", + "text": "9.4 Theoretical devices for the study of probability\nIt may help you to understand the simulation approach to estimating composite probabilities demonstrated in this book if you also understand the deductive formulaic approach. 
So we’ll say a bit about it here.\nThe most fundamental concept in theoretical probability is the list of events that may occur, together with the probability of each one (often arranged so as to be equal probabilities). This is the concept that Galileo employed in his great fundamental work in theoretical probability about four hundred years ago when a gambler asked Galileo about the chances of getting a nine rather than a ten in a game of three dice (though others such as Cardano had tackled the subject earlier). 1\nGalileo wrote down all the possibilities in a tree form, a refinement for mapping out the sample space.\nGalileo simply displayed the events themselves — such as “2,” “4,” and “4,” making up a total of 10, a specific event arrived at in a specific way. Several different events can lead to a 10 with three dice. If we now consider each of these events, we arrive at the concept of the ways that a total of 10 can arise. We ask the number of ways that an outcome can and cannot occur. (See the paragraph above). This is equivalent both operationally and linguistically to the paths in (say) the quincunx device or Pascal’s Triangle which we shall discuss shortly.\nA tree is the most basic display of the paths in a given situation. Each branch of the tree — a unique path from the start on the left-hand side to the endpoint on the right-hand side — contains the sequence of all the elements that make up that event, in the order in which they occur. The right-hand ends of the branches constitute a list of the outcomes. That list includes all possible permutations — that is, it distinguishes among outcomes by the orders in which the particular die outcomes occur." + }, + { + "objectID": "probability_theory_1b.html#the-concept-of-sample-space", + "href": "probability_theory_1b.html#the-concept-of-sample-space", + "title": "9  Probability Theory Part I (continued)", + "section": "9.5 The Concept of Sample Space", + "text": "9.5 The Concept of Sample Space\nThe formulaic approach begins with the idea of sample space , which is the set of all possible outcomes of the “experiment” or other situation that interests us. Here is a formal definition from Goldberg (1986, 46):\n\nA sample space S associated with a real or conceptual experiment is a set such that (1) each element of S denotes an outcome of the experiment, and (2) any performance of the experiment results in an outcome that corresponds to one and only one element of S.\n\nBecause the sum of the probabilities for all the possible outcomes in a given experimental trial is unity, the sum of all the events in the sample space (S) = 1.\nEarly on, people came up with the idea of estimating probabilities by arraying the possibilities for, and those against, the event occurring. For example, the coin could fall in three ways — head, tail, or on its side. They then speedily added the qualification that the possibilities in the list must have an equal chance, to distinguish the coin falling on its side from the other possibilities (so ignore it). Or, if it is impossible to make the probabilities equal, make special allowance for inequality. Working directly with the sample space is the method of first principles . The idea of a list was refined to the idea of sample space, and “for” and “against” were refined to the “success” and “failure” elements among the total elements.\nThe concept of sample space raises again the issue of how to estimate the simple probabilities. 
While we usually can estimate the probabilities accurately in gambling games because we ourselves construct the games and therefore control the probabilities that they produce, we have much less knowledge of the structures that underlie the important problems in life — in science, business, the stock market, medicine, sports, and so on. We therefore must wrestle with the issue of what probabilities we should include in our theoretical sample space, or in our experiments. Often we proceed by choosing as an analogy a physical “model” whose properties we know and which we consider to be appropriate — such as a gambling game with coins, dice, cards. This model becomes our idealized setup. But this step makes crystal-clear that judgment is heavily involved in the process, because choosing the analogy requires judgment.\nA Venn diagram is another device for displaying the elements that make up an event. But unlike a tree diagram, it does not show the sequence of those elements; rather, it shows the extent of overlap among various classes of elements .\nA Venn diagram expresses by areas (especially rectangular Venn diagrams) the numbers at the end of the branches in a tree.\nPascal’s Triangle is still another device. It aggregates the last permutation branches in the tree into combinations — that is, without distinguishing by order. It shows analytically (by tracing them) the various paths that lead to various combinations.\nThe study of the mathematics of probability is the study of calculational shortcuts to do what tree diagrams do. If you don’t care about the shortcuts, then you don’t need the formal mathematics--though it may improve your mathematical insight (or it may not). The resampling method dispenses not only with the shortcuts but also with the entire counting of points in the sample space.\n\n\n\n\nBulmer, M. G. 1979. Principles of Statistics. New York, NY: Dover Publications, inc. https://archive.org/details/principlesofstat0000bulm.\n\n\nGoldberg, Samuel. 1986. Probability: An Introduction. Courier Corporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC." + }, + { + "objectID": "more_sampling_tools.html#introduction", + "href": "more_sampling_tools.html#introduction", + "title": "10  Two puzzles and more tools", + "section": "10.1 Introduction", + "text": "10.1 Introduction\nIn the next chapter we will deal with some more involved problems in probability, as a preparation for statistics, where we use reasoning from probability to draw conclusions about a world like our own, where variation often appears to be more or less random.\nBefore we get down to the business of complex probabilistic problems in the next few chapters, let’s consider a couple of peculiar puzzles. These puzzles allow us to introduce some more of the key tools in R for Monte Carlo resampling, and show the power of such simulation to help solve, and then reason about, problems in probability." + }, + { + "objectID": "more_sampling_tools.html#the-treasure-fleet-recovered", + "href": "more_sampling_tools.html#the-treasure-fleet-recovered", + "title": "10  Two puzzles and more tools", + "section": "10.2 The treasure fleet recovered", + "text": "10.2 The treasure fleet recovered\nThis is a classic problem in probability:1\n\nA Spanish treasure fleet of three ships was sunk at sea off Mexico. One ship had a chest of gold forward and another aft, another ship had a chest of gold forward and a chest of silver aft, while a third ship had a chest of silver forward and another chest of silver aft. 
Divers just found one of the ships and a chest of gold in it, but they don’t know whether it was from forward or aft. They are now taking bets about whether the other chest found on the same ship will contain silver or gold. What are fair odds?\n\nThese are the logical steps one may distinguish in arriving at a correct answer with deductive logic (portrayed in Figure 10.1).\n\nPostulate three ships — Ship I with two gold chests (G-G), ship II with one gold and one silver chest (G-S), and ship III with S-S. (Choosing notation might well be considered one or more additional steps.)\nAssert equal probabilities of each ship being found.\nStep 2 implies equal probabilities of being found for each of the six chests.\nFact: Diver finds a chest of gold.\nStep 4 implies that S-S ship III was not found; hence remove it from subsequent analysis.\nThree possibilities: 6a) Diver found chest I-Ga, 6b) diver found I-Gb, 6c) diver found II-Gc.\nFrom step 2, the cases a, b, and c in step 6 have equal probabilities.\nIf possibility 6a is the case, then the other chest is I-Gb; the comparable statements for cases 6b and 6c are I-Ga and II-S.\nFrom steps 6 and 7: From equal probabilities of the three cases, and no other possible outcome, \\(P(6a) = 1/3\\), \\(P(6b) = 1/3\\), \\(P(6c) = 1/3\\).\nSo \\(P(G) = P(6a) + P(6b)\\) = 1/3 + 1/3 = 2/3.\n\nSee Figure 10.1.\n\n\n\n\n\nFigure 10.1: Ships with Gold and Silver\n\n\n\n\nThe following simulation arrives at the correct answer.\n\nWrite “Gold” on three pieces of paper and “Silver” on three pieces of paper. These represent the chests.\nGet three buckets each with two pieces of paper. Each bucket represents a ship, each piece of paper represents a chest in that ship. One bucket has two pieces of paper with “Gold” written on them; one has pieces of paper with “Gold” and “Silver”, and one has “Silver” and “Silver”.\nChoose a bucket at random, to represent choosing a ship at random.\nShuffle the pieces of paper in the bucket and pick one, to represent choosing the first chest from that ship at random.\nIf the piece of paper says “Silver”, the first chest we found in this ship was silver, and we stop the trial and make no further record. If “Gold”, continue.\nGet the second piece of paper from the bucket, representing the second chest on the chosen ship. Record whether this was “Silver” or “Gold” on the scoreboard.\nRepeat steps (3 - 6) many times, and calculate the proportion of “Gold”s on the scoreboard. (The answer should be about \\(\\frac{2}{3}\\).)\n\n\nHere is a notebook simulation with R:\n\nStart of gold_silver_ships notebook\n\nDownload notebook\nInteract\n\n\n\n# The 3 buckets. Each bucket represents a ship. 
Each has two chests.\nbucket1 <- c('Gold', 'Gold') # Chests in first ship.\nbucket2 <- c('Gold', 'Silver') # Chests in second ship.\nbucket3 <- c('Silver', 'Silver') # Chests in third ship.\n\n\n# Mark trials as not valid to start with.\n# Trials where we don't get a gold chest first will\n# keep this 'No gold in chest 1, chest 2 never opened' marker.\nsecond_chests <- rep('No gold in chest 1, chest 2 never opened', 10000)\n\nfor (i in 1:10000) {\n # Select a ship at random from the three ships.\n ship_no <- sample(1:3, size=1)\n # Get the chests from this ship (represented by a bucket).\n if (ship_no == 1) {\n bucket <- bucket1\n }\n if (ship_no == 2) {\n bucket <- bucket2\n }\n if (ship_no == 3) {\n bucket <- bucket3\n }\n\n # We shuffle the order of the chests in this ship, to simulate\n # the fact that we don't know which of the two chests we have\n # found first.\n shuffled <- sample(bucket)\n\n if (shuffled[1] == 'Gold') { # We found a gold chest first.\n # Store whether the Second chest was silver or gold.\n second_chests[i] <- shuffled[2]\n }\n} # End loop, go back to beginning.\n\n# Number of times we found gold in the second chest.\nn_golds <- sum(second_chests == 'Gold')\n# Number of times we found silver in the second chest.\nn_silvers <- sum(second_chests == 'Silver')\n# As a ratio of golds to all second chests (where the first was gold).\nmessage(n_golds / (n_golds + n_silvers))\n\n0.655882352941176\n\n\nEnd of gold_silver_ships notebook\n\nIn the code above, we have first chosen the ship number at random, and then used a set of if ... statements to get the pair of chests corresponding to the given ship. There are simpler and more elegant ways of writing this code, but they would need some R features that we haven’t covered yet.2" + }, + { + "objectID": "more_sampling_tools.html#back-to-boolean-s", + "href": "more_sampling_tools.html#back-to-boolean-s", + "title": "10  Two puzzles and more tools", + "section": "10.3 Back to Boolean vectors", + "text": "10.3 Back to Boolean vectors\nThe code above implements the procedure we might well use if we were simulating the problem physically. We do a trial, and we record the result. We do this on a piece of paper if we are doing a physical simulation, and in the second_chests vector in code.\nFinally we tally up the results. If we are doing a physical simulation, we go back over all the trial results and count up the “Gold” and “Silver” outcomes. In code we use the comparisons == 'Gold' and == 'Silver' to find the trials of interest, and then count them up with sum.\nBoolean vectors are a fundamental tool in R, and we will use them in nearly all our simulations.\nHere is a reminder of how those vectors work.\nFirst, let’s slice out the first 10 values of the second_chests trial-by-trial results tally from the simulation above:\n\n# Get values at positions 1 through 10\nfirst_10_chests <- second_chests[1:10]\nfirst_10_chests\n\n [1] \"Gold\"                                    \n [2] \"No gold in chest 1, chest 2 never opened\"\n [3] \"No gold in chest 1, chest 2 never opened\"\n [4] \"Silver\"                                  \n [5] \"Gold\"                                    \n [6] \"No gold in chest 1, chest 2 never opened\"\n [7] \"Silver\"                                  \n [8] \"Silver\"                                  \n [9] \"Gold\"                                    \n[10] \"No gold in chest 1, chest 2 never opened\"\n\n\nBefore we started the simulation, we set second_chests to contain 10,000 strings, where each string was “No gold in chest 1, chest 2 never opened”. 
In the simulation, we check whether there was gold in the first chest, and, if not, we don’t change the value in second_chests, and the value remains as “No gold in chest 1, chest 2 never opened”.\nOnly if there was gold in the first chest, do we go on to check whether the second chest contains silver or gold. Therefore, we only set a new value in second_chests where there was gold in the first chest.\nNow let’s show the effect of running a comparison on first_10_chests:\n\nwere_gold <- (first_10_chests == 'Gold')\nwere_gold\n\n [1] TRUE FALSE FALSE FALSE TRUE FALSE FALSE FALSE TRUE FALSE\n\n\n\n\n\n\n\nParentheses and Boolean comparisons\n\n\n\nNotice the round brackets (parentheses) around (first_10_chests == 'Gold'). In this particular case, we would get the same result without the parentheses, so the parentheses are optional. In general, you will see we put parentheses around all expressions that generate Boolean vectors, and we recommend you do too. It is a good habit to get into, to make it clear that this is an expression that generates a value.\n\n\nThe == 'Gold' comparison is asking a question. It is asking that question of a vector, and the vector contains multiple values. R treats this comparison as asking the question of each element in the vector. We get an answer for the question for each element. The answer for position 1 is TRUE if the element at position 1 is equal to 'Gold' and FALSE otherwise, and so on, for positions 2, 3 and so on. We started with 10 strings. After the comparison == 'Gold' we have 10 Boolean values, where a Boolean value can either be TRUE or FALSE.\n\n\nNow that we have a vector with TRUE for the “Gold” results and FALSE otherwise, we can count the number of “Gold” results by using sum on the vector. As you remember (Section 5.13), sum counts TRUE as 1 and FALSE as 0, so the sum of the Boolean vector is just the number of TRUE values in the vector — the count that we need.\n\n# The number of True values — so the number of \"Gold\" chests.\nsum(were_gold)\n\n[1] 3" + }, + { + "objectID": "more_sampling_tools.html#sec-ships-booleans", + "href": "more_sampling_tools.html#sec-ships-booleans", + "title": "10  Two puzzles and more tools", + "section": "10.4 Boolean vectors and another take on the ships problem", + "text": "10.4 Boolean vectors and another take on the ships problem\nIf we are doing a physical simulation, we usually want to finish up all the work for the trial during the trial, so we have one outcome from the trial. This makes it easier to tally up the results in the end.\nWe have no such constraint when we are using code, so it is sometimes easier to record several results from the trial, and do the final combinations and tallies at the end. We will show you what we mean with a slight variation on the two-ships code you saw above.\n\nStart of gold_silver_booleans notebook\n\nDownload notebook\nInteract\n\n\nNotice that the first part of the code is identical to the first approach to this problem. There are two key differences — see the comments for an explanation.\n\n# The 3 buckets, each representing two chests on a ship.\n# As before.\nbucket1 <- c('Gold', 'Gold') # Chests in first ship.\nbucket2 <- c('Gold', 'Silver') # Chests in second ship.\nbucket3 <- c('Silver', 'Silver') # Chests in third ship.\n\n\n# Here is where the difference starts. 
We are now going to fill in\n# the result for the first chest _and_ the result for the second chest.\n#\n# Later we will fill in all these values, so the string we put here\n# does not matter.\n\n# Whether the first chest was Gold or Silver.\nfirst_chests <- rep('To be announced', 10000)\nsecond_chests <- rep('To be announced', 10000)\n\nfor (i in 1:10000) {\n # Select a ship at random from the three ships.\n # As before.\n ship_no <- sample(1:3, size=1)\n # Get the chests from this ship.\n # As before.\n if (ship_no == 1) {\n bucket <- bucket1\n }\n if (ship_no == 2) {\n bucket <- bucket2\n }\n if (ship_no == 3) {\n bucket <- bucket3\n }\n\n # As before.\n shuffled <- sample(bucket)\n\n # Here is the big difference - we store the result for the first and second\n # chests.\n first_chests[i] <- shuffled[1]\n second_chests[i] <- shuffled[2]\n} # End loop, go back to beginning.\n\n# We will do the calculation we need in the next cell. For now\n# just display the first 10 values.\nten_first_chests <- first_chests[1:10]\nmessage('The first 10 values of \"first_chests:')\n\nThe first 10 values of \"first_chests:\n\nprint(ten_first_chests)\n\n [1] \"Gold\" \"Silver\" \"Silver\" \"Silver\" \"Gold\" \"Gold\" \"Gold\" \"Gold\" \n [9] \"Gold\" \"Gold\" \n\nten_second_chests <- second_chests[1:10]\nmessage('The first 10 values of \"second_chests:')\n\nThe first 10 values of \"second_chests:\n\nprint(ten_second_chests)\n\n [1] \"Gold\" \"Gold\" \"Silver\" \"Silver\" \"Gold\" \"Silver\" \"Gold\" \"Silver\"\n [9] \"Gold\" \"Silver\"\n\n\nIn this variant, we recorded the type of the first chest for each trial (“Gold” or “Silver”), and the type of the second chest (“Gold” or “Silver”).\nWe would like to count the number of times there was “Gold” in the first chest and “Gold” in the second.\n\n10.5 Combining Boolean arrays\nWe can do the count we need by combining the Boolean vectors with the & operator. & combines Boolean vectors with a logical and. Logical and is a rule for combining two Boolean values, where the rule is: the result is TRUE if the first value is TRUE and the second value is TRUE.\nHere we use the & operator to combine some Boolean values on the left and right of the operator:\nAbove you saw that the == operator (as in == 'Gold'), when applied to vectors, asks the question of every element in the vector.\nFirst make the Boolean vectors.\n\nten_first_gold <- ten_first_chests == 'Gold'\nmessage(\"Ten first == 'Gold'\")\n\nTen first == 'Gold'\n\nprint(ten_first_gold)\n\n [1] TRUE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE TRUE\n\nten_second_gold <- ten_second_chests == 'Gold'\nmessage(\"Ten second == 'Gold'\")\n\nTen second == 'Gold'\n\nprint(ten_second_gold)\n\n [1] TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE\n\n\nNow let us use & to combine Boolean vectors:\n\nten_both <- (ten_first_gold & ten_second_gold)\nten_both\n\n [1] TRUE FALSE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE\n\n\nNotice that R does the comparison elementwise — element by element.\nYou saw that when we did second_chests == 'Gold' this had the effect of asking the == 'Gold' question of each element, so there will be one answer per element in second_chests. In that case there was a vector to the left of == and a single value to the right. We were comparing a vector to a value.\nHere we are asking the & question of ten_first_gold and ten_second_gold. Here there is a vector to the left and a vector to the right. 
We are asking the & question 10 times, but the first question we are asking is:\n\n# First question, giving first element of result.\n(ten_first_gold[1] & ten_second_gold[1])\n\n[1] TRUE\n\n\nThe second question is:\n\n# Second question, giving second element of result.\n(ten_first_gold[2] & ten_second_gold[2])\n\n[1] FALSE\n\n\nand so on. We have ten elements on each side, and 10 answers, giving a vector (ten_both) of 10 elements. Each element in ten_both is the answer to the & question for the elements at the corresponding positions in ten_first_gold and ten_second_gold.\nWe could also create the Boolean vectors and do the & operation all in one step, like this:\n\nRemember, we wanted the answer to the question: how many trials had “Gold” in the first chest and “Gold” in the second. We can answer that question for the first 10 trials with sum:\n\nn_ten_both <- sum(ten_both)\nn_ten_both\n\n[1] 4\n\n\nWe can answer the same question for all the trials, in the same way:\n\nfirst_gold <- first_chests == 'Gold'\nsecond_gold <- second_chests == 'Gold'\nn_both_gold <- sum(first_gold & second_gold)\nn_both_gold\n\n[1] 3328\n\n\nWe could also do the same calculation all in one line:\n\nn_both_gold <- sum((first_chests == 'Gold') & (second_chests == 'Gold'))\nn_both_gold\n\n[1] 3328\n\n\nWe can then count all the ships where the first chest was gold:\n\nn_first_gold <- sum(first_chests == 'Gold')\nn_first_gold\n\n[1] 5021\n\n\nThe final calculation is the proportion of second chests that are gold, given the first chest was also gold:\n\np_g_given_g <- n_both_gold / n_first_gold\np_g_given_g\n\n[1] 0.663\n\n\nOf course we won’t get exactly the same results from the two simulations, in the same way that we won’t get exactly the same results from any two runs of the same simulation, because of the random values we are using. But the logic for the two simulations are the same, and we are doing many trials (10,000), so the results will be very similar.\nEnd of gold_silver_booleans notebook" + }, + { + "objectID": "more_sampling_tools.html#sec-combine-booleans", + "href": "more_sampling_tools.html#sec-combine-booleans", + "title": "10  Two puzzles and more tools", + "section": "10.5 Combining Boolean arrays", + "text": "10.5 Combining Boolean arrays\nWe can do the count we need by combining the Boolean vectors with the & operator. & combines Boolean vectors with a logical and. 
Logical and is a rule for combining two Boolean values, where the rule is: the result is TRUE if the first value is TRUE and the second value if TRUE.\nHere we use the & operator to combine some Boolean values on the left and right of the operator:\nAbove you saw that the == operator (as in == 'Gold'), when applied to vectors, asks the question of every element in the vector.\nFirst make the Boolean vectors.\n\nten_first_gold <- ten_first_chests == 'Gold'\nmessage(\"Ten first == 'Gold'\")\n\nTen first == 'Gold'\n\nprint(ten_first_gold)\n\n [1] TRUE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE TRUE\n\nten_second_gold <- ten_second_chests == 'Gold'\nmessage(\"Ten second == 'Gold'\")\n\nTen second == 'Gold'\n\nprint(ten_second_gold)\n\n [1] TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE\n\n\nNow let us use & to combine Boolean vectors:\n\nten_both <- (ten_first_gold & ten_second_gold)\nten_both\n\n [1] TRUE FALSE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE\n\n\nNotice that R does the comparison elementwise — element by element.\nYou saw that when we did second_chests == 'Gold' this had the effect of asking the == 'Gold' question of each element, so there will be one answer per element in second_chests. In that case there was a vector to the left of == and a single value to the right. We were comparing a vector to a value.\nHere we are asking the & question of ten_first_gold and ten_second_gold. Here there is a vector to the left and a vector to the right. We are asking the & question 10 times, but the first question we are asking is:\n\n# First question, giving first element of result.\n(ten_first_gold[1] & ten_second_gold[1])\n\n[1] TRUE\n\n\nThe second question is:\n\n# Second question, giving second element of result.\n(ten_first_gold[2] & ten_second_gold[2])\n\n[1] FALSE\n\n\nand so on. We have ten elements on each side, and 10 answers, giving a vector (ten_both) of 10 elements. Each element in ten_both is the answer to the & question for the elements at the corresponding positions in ten_first_gold and ten_second_gold.\nWe could also create the Boolean vectors and do the & operation all in one step, like this:\n\nRemember, we wanted the answer to the question: how many trials had “Gold” in the first chest and “Gold” in the second. We can answer that question for the first 10 trials with sum:\n\nn_ten_both <- sum(ten_both)\nn_ten_both\n\n[1] 4\n\n\nWe can answer the same question for all the trials, in the same way:\n\nfirst_gold <- first_chests == 'Gold'\nsecond_gold <- second_chests == 'Gold'\nn_both_gold <- sum(first_gold & second_gold)\nn_both_gold\n\n[1] 3328\n\n\nWe could also do the same calculation all in one line:\n\nn_both_gold <- sum((first_chests == 'Gold') & (second_chests == 'Gold'))\nn_both_gold\n\n[1] 3328\n\n\nWe can then count all the ships where the first chest was gold:\n\nn_first_gold <- sum(first_chests == 'Gold')\nn_first_gold\n\n[1] 5021\n\n\nThe final calculation is the proportion of second chests that are gold, given the first chest was also gold:\n\np_g_given_g <- n_both_gold / n_first_gold\np_g_given_g\n\n[1] 0.663\n\n\nOf course we won’t get exactly the same results from the two simulations, in the same way that we won’t get exactly the same results from any two runs of the same simulation, because of the random values we are using. 
But the logic for the two simulations is the same, and we are doing many trials (10,000), so the results will be very similar.\nEnd of gold_silver_booleans notebook" + }, + { + "objectID": "more_sampling_tools.html#the-monty-hall-problem", + "href": "more_sampling_tools.html#the-monty-hall-problem", + "title": "10  Two puzzles and more tools", + "section": "10.6 The Monty Hall problem", + "text": "10.6 The Monty Hall problem\nThe Monty Hall Problem is a puzzle in probability that is famous for its deceptive simplicity. It has its own long Wikipedia page: https://en.wikipedia.org/wiki/Monty_Hall_problem.\nHere is the problem in the form it is best known; a letter to the columnist Marilyn vos Savant, published in Parade Magazine (1990):\n\nSuppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, “Do you want to pick door #2?” Is it to your advantage to switch your choice of doors?\n\nIn fact the first person to propose (and solve) this problem was Steve Selvin, a professor of public health at the University of California, Berkeley (Selvin 1975).\nMost people, including at least one of us, your humble authors, quickly come to the wrong conclusion. The most common but incorrect answer is that it will make no difference if you switch doors or stay with your original choice. The obvious intuition is that, after Monty opens his door, there are two doors that might have the car behind them, and therefore, there is a 50% chance it will be behind any one of the two. It turns out that answer is wrong; you will double your chances of winning by switching doors. Did you get the answer right?\nIf you got the answer wrong, you are in excellent company. As you can see from the commentary in Savant (1990), many mathematicians wrote to Parade magazine to assert that the (correct) solution was wrong. Paul Erdős was one of the most famous mathematicians of the 20th century; he could not be convinced of the correct solution until he had seen a computer simulation (Vazsonyi 1999), of the type we will do below.\nTo simulate a trial of this problem, we need to select a door at random to house the car, and another door at random, to be the door the contestant chooses. We number the doors 1, 2 and 3. Now we need two random choices from the options 1, 2 or 3, one for the door with the car, the other for the contestant door. To choose a door for the car, we could throw a die, and choose door 1 if the die shows 1 or 4, door 2 if the die shows 2 or 5, and door 3 for 3 or 6. Then we throw the die again to choose the contestant door.\nBut throwing dice is a little boring; we have to find the die, then throw it many times, and record the results. Instead we can ask the computer to choose the doors at random.\nFor this simulation, let us do 25 trials. We ask the computer to create two sets of 25 random numbers from 1 through 3. The first set is the door with the car behind it (“Car door”). The second set has the door that the contestant chose at random (“Our door”). We put these in a table, and make some new, empty columns to fill in later. The first new column is “Monty opens”. In due course, we will use this column to record the door that Monty Hall will open on this trial. The last two columns express the outcome. The first is “Stay wins”. 
This has “Yes” if we win on this trial by sticking to our original choice of door, and “No” otherwise. The last column is “Switch wins”. This has “Yes” if we win by switching doors, and “No” otherwise. See table Table 10.1).\n\n\n\n\nTable 10.1: 25 simulations of the Monty Hall problem \n\n\n\nCar door\nOur door\nMonty opens\nStay wins\nSwitch wins\n\n\n\n\n1\n3\n3\n\n\n\n\n\n2\n3\n1\n\n\n\n\n\n3\n1\n3\n\n\n\n\n\n4\n1\n1\n\n\n\n\n\n5\n2\n3\n\n\n\n\n\n6\n2\n1\n\n\n\n\n\n7\n2\n2\n\n\n\n\n\n8\n1\n3\n\n\n\n\n\n9\n1\n2\n\n\n\n\n\n10\n3\n1\n\n\n\n\n\n11\n2\n2\n\n\n\n\n\n12\n3\n2\n\n\n\n\n\n13\n2\n2\n\n\n\n\n\n14\n3\n1\n\n\n\n\n\n15\n1\n2\n\n\n\n\n\n16\n2\n1\n\n\n\n\n\n17\n3\n3\n\n\n\n\n\n18\n3\n2\n\n\n\n\n\n19\n1\n1\n\n\n\n\n\n20\n3\n2\n\n\n\n\n\n21\n2\n2\n\n\n\n\n\n22\n3\n1\n\n\n\n\n\n23\n3\n1\n\n\n\n\n\n24\n1\n1\n\n\n\n\n\n25\n2\n3\n\n\n\n\n\n\n\n\n\nIn the first trial in Table 10.1), the computer selected door 3 for car, and door 3 for the contestant. Now Monty must open a door, and he cannot open our door (door 3) so he has the choice of opening door 1 or door 2; he chooses randomly, and opens door 2. On this trial, we win if we stay with our original choice, and we lose if we change to the remaining door, door 1.\nNow we go the second trial. The computer chose door 3 for the car, and door 1 for our choice. Monty cannot choose our door (door 1) or the door with the car behind it (door 3), so he must open door 2. Now if we stay with our original choice, we lose, but if we switch, we win.\nYou may want to print out table Table 10.1, and fill out the blank columns, to work through the logic.\nAfter doing a few more trials, and some reflection, you may see that there are two different situations here: the situation when our initial guess was right, and the situation where our initial guess was wrong. When our initial guess was right, we win by staying with our original choice, but when it was wrong, we always win by switching. The chance of our initial guess being correct is 1/3 (one door out of three). So the chances of winning by staying are 1/3, and the chances of winning by switching are 2/3. But remember, you don’t need to follow this logic to get the right answer. As you will see below, the resampling simulation shows us that the Switch strategy wins.\nTable Table 10.2 is a version of table Table 10.1 for which we have filled in the blank columns using the logic above.\n\n\n\n\nTable 10.2: 25 simulations of the Monty Hall problem, filled out \n\n\n\nCar door\nOur door\nMonty opens\nStay wins\nSwitch wins\n\n\n\n\n1\n3\n3\n2\nYes\nNo\n\n\n2\n3\n1\n2\nNo\nYes\n\n\n3\n1\n3\n2\nNo\nYes\n\n\n4\n1\n1\n3\nYes\nNo\n\n\n5\n2\n3\n1\nNo\nYes\n\n\n6\n2\n1\n3\nNo\nYes\n\n\n7\n2\n2\n3\nYes\nNo\n\n\n8\n1\n3\n2\nNo\nYes\n\n\n9\n1\n2\n3\nNo\nYes\n\n\n10\n3\n1\n2\nNo\nYes\n\n\n11\n2\n2\n3\nYes\nNo\n\n\n12\n3\n2\n1\nNo\nYes\n\n\n13\n2\n2\n1\nYes\nNo\n\n\n14\n3\n1\n2\nNo\nYes\n\n\n15\n1\n2\n3\nNo\nYes\n\n\n16\n2\n1\n3\nNo\nYes\n\n\n17\n3\n3\n1\nYes\nNo\n\n\n18\n3\n2\n1\nNo\nYes\n\n\n19\n1\n1\n2\nYes\nNo\n\n\n20\n3\n2\n1\nNo\nYes\n\n\n21\n2\n2\n1\nYes\nNo\n\n\n22\n3\n1\n2\nNo\nYes\n\n\n23\n3\n1\n2\nNo\nYes\n\n\n24\n1\n1\n2\nYes\nNo\n\n\n25\n2\n3\n1\nNo\nYes\n\n\n\n\n\n\nThe proportion of times “Stay” wins in these 25 trials is 0.36. The proportion of times “Switch” wins is 0.64; the Switch strategy wins about twice as often as the Stay strategy." 
+ }, + { + "objectID": "more_sampling_tools.html#monty-hall-with", + "href": "more_sampling_tools.html#monty-hall-with", + "title": "10  Two puzzles and more tools", + "section": "10.7 Monty Hall with R", + "text": "10.7 Monty Hall with R\nNow you have seen what the results might look like for a physical simulation, you can exercise some of your newly-strengthened R muscles to do the simulation with code.\n\nStart of monty_hall notebook\n\nDownload notebook\nInteract\n\n\nThe Monty Hall problem has a slightly complicated structure, so we will start by looking at the procedure for one trial. When we have that clear, we will put that procedure into a for loop for the simulation.\nLet’s start with some variables. Let’s call the door I choose my_door.\nWe choose that door at random from a sequence of all possible doors. Call the doors 1, 2 and 3 from left to right.\n\n# Vector of doors to chose from.\ndoors = c(1, 2, 3)\n\n# We choose one door at random.\nmy_door <- sample(doors, size=1)\n\n# Show the result\nmy_door\n\n[1] 3\n\n\nWe choose one of the doors to be the door with the car behind it:\n\n# One door at random has the car behind it.\ncar_door <- sample(doors, size=1)\n\n# Show the result\ncar_door\n\n[1] 1\n\n\nNow we need to decide which door Monty will open.\nBy our set up, Monty cannot open our door (my_door). By the set up, he has not opened (and cannot open) the door with the car behind it (car_door).\nmy_door and car_door might be the same.\nSo, to get Monty’s choices, we want to take all doors (doors) and remove my_door and car_door. That leaves the door or doors Monty can open.\nHere are the doors Monty cannot open. Remember, a third of the time my_door and car_door will be the same, so we will include the same door twice, as doors Monty can’t open.\n\ncant_open = c(my_door, car_door)\ncant_open\n\n[1] 3 1\n\n\nWe want to find the remaining doors from doors after removing the doors named in cant_open.\nR has a good function for this, called setdiff. It calculates the set difference between two sequences, such as vectors.\nThe set difference between two sequences is the members that are in the first sequence, but are not in the second sequence. Here are a few examples of this set difference function in R.\n\n# Members in c(1, 2, 3) that are *not* in c(1)\n# 1, 2, 3, removing 1, if present.\nsetdiff(c(1, 2, 3), c(1))\n\n[1] 2 3\n\n\n\n# Members in c(1, 2, 3) that are *not* in c(2, 3)\n# 1, 2, 3, removing 2 and 3, if present.\nsetdiff(c(1, 2, 3), c(2, 3))\n\n[1] 1\n\n\n\n# Members in c(1, 2, 3) that are *not* in c(2, 2)\n# 1, 2, 3, removing 2 and 2 again, if present.\nsetdiff(c(1, 2, 3), c(2, 2))\n\n[1] 1 3\n\n\nThis logic allows us to choose the doors Monty can open:\n\nmontys_choices <- setdiff(doors, c(my_door, car_door))\nmontys_choices\n\n[1] 2\n\n\nNotice that montys_choices will only have one element left when my_door and car_door were different, but it will have two elements if my_door and car_door were the same.\nLet’s play out those two cases:\n\nmy_door <- 1 # For example.\ncar_door <- 2 # For example.\n# Monty can only choose door 3 now.\nmontys_choices <- setdiff(doors, c(my_door, car_door))\nmontys_choices\n\n[1] 3\n\n\n\nmy_door <- 1 # For example.\ncar_door <- 1 # For example.\n# Monty can choose either door 2 or door 3.\nmontys_choices <- setdiff(doors, c(my_door, car_door))\nmontys_choices\n\n[1] 2 3\n\n\nIf Monty can only choose one door, we’ll take that. 
Otherwise we’ll chose a door at random from the two doors available.\n\nif (length(montys_choices) == 1) { # Only one door available.\n montys_door <- montys_choices[1] # Take the first (of 1!).\n} else { # Two doors to choose from:\n # Choose at random.\n montys_door <- sample(montys_choices, size=1)\n}\nmontys_door\n\n[1] 2\n\n\nNow we know Monty’s door, we can identify the other door, by removing our door, and Monty’s door, from the available options:\n\nremaining_doors <- setdiff(doors, c(my_door, montys_door))\n# There is only one remaining door, take that.\nother_door <- remaining_doors[1]\nother_door\n\n[1] 3\n\n\nThe logic above gives us the full procedure for one trial.\n\nmy_door <- sample(doors, size=1)\ncar_door <- sample(doors, size=1)\n# Which door will Monty open?\nmontys_choices <- setdiff(doors, c(my_door, car_door))\n# Choose single door left to choose, or door at random if two.\nif (length(montys_choices) == 1) { # Only one door available.\n montys_door <- montys_choices[1] # Take the first (of 1!).\n} else { # Two doors to choose from:\n # Choose at random.\n montys_door <- sample(montys_choices, size=1)\n}\n# Now find the door we'll open if we switch.\n# There is only one door left.\nremaining_doors <- setdiff(doors, c(my_door, montys_door))\nother_door <- remaining_doors[1]\n# Calculate the result of this trial.\nif (my_door == car_door) {\n stay_wins <- TRUE\n}\nif (other_door == car_door) {\n switch_wins <- TRUE\n}\n\nAll that remains is to put that trial procedure into a loop, and collect the results as we repeat the procedure many times.\n\n# Vectors to store the results for each trial.\nstay_wins <- rep(FALSE, 10000)\nswitch_wins <- rep(FALSE, 10000)\n\n# Doors to chose from.\ndoors <- c(1, 2, 3)\n\nfor (i in 1:10000) {\n # You will recognize the below as the single-trial procedure above.\n my_door <- sample(doors, size=1)\n car_door <- sample(doors, size=1)\n # Which door will Monty open?\n montys_choices <- setdiff(doors, c(my_door, car_door))\n # Choose single door left to choose, or door at random if two.\n if (length(montys_choices) == 1) { # Only one door available.\n montys_door <- montys_choices[1] # Take the first (of 1!).\n } else { # Two doors to choose from:\n # Choose at random.\n montys_door <- sample(montys_choices, size=1)\n }\n # Now find the door we'll open if we switch.\n # There is only one door left.\n remaining_doors <- setdiff(doors, c(my_door, montys_door))\n other_door <- remaining_doors[1]\n # Calculate the result of this trial.\n if (my_door == car_door) {\n stay_wins[i] <- TRUE\n }\n if (other_door == car_door) {\n switch_wins[i] <- TRUE\n }\n}\n\np_for_stay <- sum(stay_wins) / 10000\np_for_switch <- sum(switch_wins) / 10000\n\nmessage('p for stay: ', p_for_stay)\n\np for stay: 0.3293\n\nmessage('p for switch: ', p_for_switch)\n\np for switch: 0.6707\n\n\nWe can also follow the same strategy as we used for the second implementation of the two-ships problem (Section 10.4).\nHere, as in the second two-ships implementation, we do not calculate the trial results (stay_wins, switch_wins) in each trial. 
Instead, we store the doors for each trial, and then use Boolean vectors to calculate the results for all trials, at the end.\n\n# Instead of storing the trial results, we store the doors for each trial.\nmy_doors <- numeric(10000)\ncar_doors <- numeric(10000)\nother_doors <- numeric(10000)\n\n# Doors to chose from.\ndoors <- c(1, 2, 3)\n\nfor (i in 1:10000) {\n my_door <- sample(doors, size=1)\n car_door <- sample(doors, size=1)\n # Which door will Monty open?\n montys_choices <- setdiff(doors, c(my_door, car_door))\n # Choose single door left to choose, or door at random if two.\n if (length(montys_choices) == 1) { # Only one door available.\n montys_door <- montys_choices[1] # Take the first (of 1!).\n } else { # Two doors to choose from:\n # Choose at random.\n montys_door <- sample(montys_choices, size=1)\n }\n # Now find the door we'll open if we switch.\n # There is only one door left.\n remaining_doors <- setdiff(doors, c(my_door, montys_door))\n other_door <- remaining_doors[1]\n\n # Store the doors we chose.\n my_doors[i] <- my_door\n car_doors[i] <- car_door\n other_doors[i] <- other_door\n}\n\n# Now - at the end of all the trials, we use Boolean vectors to calculate the\n# results.\nstay_wins <- my_doors == car_doors\nswitch_wins <- other_doors == car_doors\n\np_for_stay <- sum(stay_wins) / 10000\np_for_switch <- sum(switch_wins) / 10000\n\nmessage('p for stay: ', p_for_stay)\n\np for stay: 0.3336\n\nmessage('p for switch: ', p_for_switch)\n\np for switch: 0.6664\n\n\n\n10.7.1 Insight from the Monty Hall simulation\nThe code simulation gives us an estimate of the right answer, but it also forces us to set out the exact mechanics of the problem. For example, by looking at the code, we see that we can calculate “stay_wins” with this code alone:\n\n# Just choose my door and the car door for each trial.\nmy_doors <- numeric(10000)\ncar_doors <- numeric(10000)\ndoors <- c(1, 2, 3)\n\nfor (i in 1:10000) {\n my_doors[i] <- sample(doors, size=1)\n car_doors[i] <- sample(doors, size=1)\n}\n\n# Calculate whether I won by staying.\nstay_wins <- my_doors == car_doors\np_for_stay <- sum(stay_wins) / 10000\n\nmessage('p for stay: ', p_for_stay)\n\np for stay: 0.3363\n\n\nThis calculation, on its own, tells us the answer, but it also points to another insight — whatever Monty does with the doors, it doesn’t change the probability that our initial guess is right, and that must be 1 in 3 (0.333). If the probability of stay_win is 1 in 3, and we only have one other door to switch to, the probability of winning after switching must be 2 in 3 (0.666).\n\n\n10.7.2 Simulation and a variant of Monty Hall\nYou have seen that you can avoid the silly mistakes that many of us make with probability — by asking the computer to tell you the result before you start to reason from first principles.\nAs an example, consider the following variant of the Monty Hall problem.\nThe set up to the problem has us choosing a door (my_door above), and then Monty opens one of the other two doors.\nSometimes (in fact, 2/3 of the time) there is a car behind one of Monty’s doors. 
We’ve obliged Monty to open the other door, and his choice is forced.\nWhen his choice was not forced, we had Monty choose the door at random.\nFor example, let us say we chose door 1.\nLet us say that the car is also under door 1.\nMonty has the option of choosing door 2 or door 3, and he chooses randomly between them.\n\nmy_door <- 1 # We chose door 1 at random.\ncar_door <- 1 # This trial, by chance, the car door is 1.\n# Monty is left with doors 2 and 3 to choose from.\nmontys_choices <- setdiff(doors, c(my_door, car_door))\n# He chooses randomly.\nmontys_door <- sample(montys_choices, size=1)\n# Show the result\nmontys_door\n\n[1] 2\n\n\nNow — let us say we happen to know that Monty is rather lazy, and he will always choose the left-most (lower-numbered) door of the two options.\nIn the previous example, Monty had the option of choosing door 2 or 3. In this new scenario, we know that he will always choose door 2 (the left-most door).\n\nmy_door <- 1 # We chose door 1 at random.\ncar_door <- 1 # This trial, by chance, the car door is 1.\n# Monty is left with doors 2 and 3 to choose from.\nmontys_choices <- setdiff(doors, c(my_door, car_door))\n# He chooses the left-most door, always.\nmontys_door <- montys_choices[1]\n# Show the result\nmontys_door\n\n[1] 2\n\n\nIt feels as if we have more information about where the car is, when we know this. Consider the situation where we have chosen door 1, and Monty opens door 3. We know that he would have preferred to open door 2, if he was allowed. We therefore know he wasn’t allowed to open door 2, and that means the car is definitely under door 2.\n\nmy_door <- 1 # We chose door 1 at random.\ncar_door <- 2 # This trial, by chance, the car door is 2.\n# Monty is left with door 3 only to choose from.\nmontys_choices <- setdiff(doors, c(my_door, car_door))\n# He chooses the left-most door, always. But in this case, the left-most\n# available door is 3 (he can't choose 2, it is the car_door).\n# Notice the doors were in order, so the left-most door is the first door\n# in the vector.\nmontys_door <- montys_choices[1]\n# Show the result\nmontys_door\n\n[1] 3\n\n\nTo take that into account, we might try a different strategy. We will stick to our own choice if Monty has chosen the left-most of the two doors he had available to him, because he might have chosen that door because there was a car underneath the other door, or because there was a car under neither, but he preferred the left door. But, if Monty chooses the right-most of the two doors available to him, we will switch from our own choice to the other (unopened) door, because we can be sure that the car is under the other (unopened) door.\nCall this the “switch if Monty chooses right door” strategy, or “switch if right” for short.\nCan you see quickly whether this will be better than the “always stay” strategy? Will it be better than the “always switch” strategy? 
Take a moment to think it through, and write down your answers.\nIf you can quickly see the answer to both questions — well done — but, are you sure you are right?\nWe can test by simulation.\nFor our test of the “switch is right” strategy, we can tell if one door is to the right of another door by comparison; higher numbers mean further to the right: 2 is right of 1, and 3 is right of 2.\n\n# Door 3 is right of door 1.\n3 > 1\n\n[1] TRUE\n\n\n\n# A test of the switch-if-right strategy.\n# The car doors.\ncar_doors <- numeric(10000)\n# The door we chose using the strategy.\nstrategy_doors <- numeric(10000)\n\ndoors <- c(1, 2, 3)\n\nfor (i in 1:10000) {\n my_door <- sample(doors, size=1)\n car_door <- sample(doors, size=1)\n # Which door will Monty open?\n montys_choices <- setdiff(doors, c(my_door, car_door))\n # Choose Monty's door from the remaining options.\n # This time, he always prefers the left door.\n montys_door <- montys_choices[1]\n # Now find the door we'll open if we switch.\n remaining_doors <- setdiff(doors, c(my_door, montys_door))\n # There is only one door remaining - but is Monty's door\n # to the right of this one? Then Monty had to shift.\n other_door <- remaining_doors[1]\n if (montys_door > other_door) {\n # Monty's door was the right-hand door, the car is under the other one.\n strategy_doors[i] <- other_door\n } else { # We stick with the door we first thought of.\n strategy_doors[i] <- my_door\n }\n # Store the car door for this trial.\n car_doors[i] <- car_door\n}\n\nstrategy_wins <- strategy_doors == car_doors\n\np_for_strategy <- sum(strategy_wins) / 10000\n\nmessage('p for strategy: ', p_for_strategy)\n\np for strategy: 0.6668\n\n\nWe find that the “switch-if-right” has around the same chance of success as the “always-switch” strategy — of about 66.6%, or 2 in 3. Were your initial answers right? Now you’ve seen the result, can you see why it should be so? It may not be obvious — the Monty Hall problem is deceptively difficult. But our case here is that the simulation first gives you an estimate of the correct answer, and then, gives you a good basis for thinking more about the problem. That is:\n\nsimulation is useful for estimation and\nsimulation is useful for reflection.\n\nEnd of monty_hall notebook" + }, + { + "objectID": "more_sampling_tools.html#why-use-simulation", + "href": "more_sampling_tools.html#why-use-simulation", + "title": "10  Two puzzles and more tools", + "section": "10.8 Why use simulation?", + "text": "10.8 Why use simulation?\nDoing these simulations has two large benefits. First, it gives us the right answer, saving us from making a mistake. Second, the process of simulation forces us to think about how the problem works. This can give us better understanding, and make it easier to reason about the solution.\nWe will soon see that these same advantages also apply to reasoning about statistics.\n\n\n\n\nGoldberg, Samuel. 1986. Probability: An Introduction. Courier Corporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC.\n\n\nSavant, Marilyn vos. 1990. “Ask Marilyn.” 1990. https://web.archive.org/web/20160318182523/http://marilynvossavant.com/game-show-problem.\n\n\nSelvin, Steve. 1975. “Letters to the Editor.” The American Statistician 29 (1): 67. http://www.jstor.org/stable/2683689.\n\n\nVazsonyi, Andrew. 1999. “Which Door Has the Cadillac.” Decision Line 30 (1): 17–19. https://web.archive.org/web/20140413131827/http://www.decisionsciences.org/DecisionLine/Vol30/30_1/vazs30_1.pdf." 
+ }, + { + "objectID": "probability_theory_2_compound.html#introduction", + "href": "probability_theory_2_compound.html#introduction", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.1 Introduction", + "text": "11.1 Introduction\nIn this chapter we will deal with what are usually called “probability problems” rather than the “statistical inference problems” discussed in later chapters. The difference is that for probability problems we begin with a knowledge of the properties of the universe with which we are working. (See Section 8.9 on the definition of resampling.)\nWe start with some basic problems in probability. To make sure we do know the properties of the universe we are working with, we start with poker, and a pack of cards. Working with some poker problems, we rediscover the fundamental distinction between sampling with and without replacement." + }, + { + "objectID": "probability_theory_2_compound.html#sec-one-pair", + "href": "probability_theory_2_compound.html#sec-one-pair", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.2 Introducing a poker problem: one pair (two of a kind)", + "text": "11.2 Introducing a poker problem: one pair (two of a kind)\nWhat is the chance that the first five cards chosen from a deck of 52 (bridge/poker) cards will contain two (and only two) cards of the same denomination (two 3’s for example)? (Please forgive the rather sterile unrealistic problems in this and the other chapters on probability. They reflect the literature in the field for 300 years. We’ll get more realistic in the statistics chapters.)\nWe shall estimate the odds the way that gamblers have estimated gambling odds for thousands of years. First, check that the deck is a standard deck and is not missing any cards. (Overlooking such small but crucial matters often leads to errors in science.) Shuffle thoroughly until you are satisfied that the cards are randomly distributed. (It is surprisingly hard to shuffle well.) Then deal five cards, and mark down whether the hand does or does not contain a pair of the same denomination.\nAt this point, we must decide whether three of a kind, four of a kind or two pairs meet our criterion for a pair. Since our criterion is “two and only two,” we decide not to count them.\nThen replace the five cards in the deck, shuffle, and deal again. Again mark down whether the hand contains one pair of the same denomination. Do this many times. 
Then count the number of hands with one pair, and figure the proportion (as a percentage) of all hands.\nTable 11.1 has the results of 25 hands of this procedure.\n\n\n\nTable 11.1: Results of 25 hands for the problem “one pair”\n\n\n\n\n\n\n\n\n\n\n\nHand\nCard 1\nCard 2\nCard 3\nCard 4\nCard 5\nOne pair?\n\n\n\n\n1\nKing ♢\nKing ♠\nQueen ♠\n10 ♢\n6 ♠\nYes\n\n\n2\n8 ♢\nAce ♢\n4 ♠\n10 ♢\n3 ♣\nNo\n\n\n3\n4 ♢\n5 ♣\nAce ♢\nQueen ♡\n10 ♠\nNo\n\n\n4\n3 ♡\nAce ♡\n5 ♣\n3 ♢\nJack ♢\nYes\n\n\n5\n6 ♠\nKing ♣\n6 ♢\n3 ♣\n3 ♡\nNo\n\n\n6\nQueen ♣\n7 ♢\nJack ♠\n5 ♡\n8 ♡\nNo\n\n\n7\n9 ♣\n4 ♣\n9 ♠\nJack ♣\n5 ♠\nYes\n\n\n8\n3 ♠\n3 ♣\n3 ♡\n5 ♠\n5 ♢\nYes\n\n\n9\nQueen ♢\n4 ♠\nQueen ♣\n6 ♡\n4 ♢\nNo\n\n\n10\nQueen ♠\n3 ♣\n7 ♠\n7 ♡\n8 ♢\nYes\n\n\n11\n8 ♡\n9 ♠\n7 ♢\n8 ♠\nAce ♡\nYes\n\n\n12\nAce ♠\n9 ♡\n4 ♣\n2 ♠\nAce ♢\nYes\n\n\n13\n4 ♡\n3 ♣\nAce ♢\n9 ♡\n5 ♡\nNo\n\n\n14\n10 ♣\n7 ♠\n8 ♣\nKing ♣\n4 ♢\nNo\n\n\n15\nQueen ♣\n8 ♠\nQueen ♠\n8 ♣\n5 ♣\nNo\n\n\n16\nKing ♡\n10 ♣\nJack ♠\n10 ♢\n10 ♡\nNo\n\n\n17\nQueen ♠\nQueen ♡\nAce ♡\nKing ♢\n7 ♡\nYes\n\n\n18\n5 ♢\n6 ♡\nAce ♡\n4 ♡\n6 ♢\nYes\n\n\n19\n3 ♠\n5 ♡\n2 ♢\nKing ♣\n9 ♡\nNo\n\n\n20\n8 ♠\nJack ♢\n7 ♣\n10 ♡\n3 ♡\nNo\n\n\n21\n5 ♢\n4 ♠\nJack ♡\n2 ♠\nKing ♠\nNo\n\n\n22\n5 ♢\n4 ♢\nJack ♣\nKing ♢\n2 ♠\nNo\n\n\n23\nKing ♡\nKing ♠\n6 ♡\n2 ♠\n5 ♣\nYes\n\n\n24\n8 ♠\n9 ♠\n6 ♣\nAce ♣\n5 ♢\nNo\n\n\n25\nAce ♢\n7 ♠\n4 ♡\n9 ♢\n9 ♠\nYes\n\n\n\n\n\n\n\n\n\n\n\n% Yes\n\n\n\n\n\n44%\n\n\n\n\n\nIn this series of 25 experiments, 44 percent of the hands contained one pair, and therefore 0.44 is our estimate (for the time being) of the probability that one pair will turn up in a poker hand. But we must notice that this estimate is based on only 25 hands, and therefore might well be fairly far off the mark (as we shall soon see).\nThis experimental “resampling” estimation does not require a deck of cards. For example, one might create a 52-sided die, one side for each card in the deck, and roll it five times to get a “hand.” But note one important part of the procedure: No single “card” is allowed to come up twice in the same set of five spins, just as no single card can turn up twice or more in the same hand. If the same “card” did turn up twice or more in a dice experiment, one could pretend that the roll had never taken place; this procedure is necessary to make the dice experiment analogous to the actual card-dealing situation under investigation. Otherwise, the results will be slightly in error. This type of sampling is “sampling without replacement,” because each card is not replaced in the deck prior to dealing the next card (that is, prior to the end of the hand)." + }, + { + "objectID": "probability_theory_2_compound.html#a-first-approach-to-the-one-pair-problem-with-code", + "href": "probability_theory_2_compound.html#a-first-approach-to-the-one-pair-problem-with-code", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.3 A first approach to the one-pair problem with code", + "text": "11.3 A first approach to the one-pair problem with code\nWe could also approach this problem using random numbers from the computer to simulate the values.\nLet us first make some numbers from which to sample. We want to simulate a deck of playing cards analogous to the real cards we used previously. We don’t need to simulate all the features of a deck, but only the features that matter for the problem at hand. In our case, the feature that matters is the face value. We require a deck with four “1”s, four “2”s, etc., up to four “13”s, where 1 is an Ace, and 13 is a King. 
The suits don’t matter for our present purposes.\nWe first make a vector to represent the face values in one suit.\n\none_suit <- 1:13\none_suit\n\n [1] 1 2 3 4 5 6 7 8 9 10 11 12 13\n\n\nWe have the face values for one suit, but we need the face values for the whole deck of cards — four suits. We do this by making a new vector that consists of four repeats of one_suit:\n\n# Repeat the one_suit vector four times\ndeck <- rep(one_suit, 4)\ndeck\n\n [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11 12\n[26] 13 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11\n[51] 12 13" + }, + { + "objectID": "probability_theory_2_compound.html#sec-shuffling-deck", + "href": "probability_theory_2_compound.html#sec-shuffling-deck", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.4 Shuffling the deck with R", + "text": "11.4 Shuffling the deck with R\nAt this point we have a complete deck in the variable deck. But that “deck” is in the same order as a new deck of cards. If we do not shuffle the deck, the results will be predictable. Therefore, we would like to select five of these “cards” (52 values) at random. There are two ways of doing this. The first is to use the sample tool in the familiar way, to choose 5 values at random from this strictly ordered deck. We want to draw these cards without replacement (of which more later). Without replacement means that once we have drawn a particular value, we cannot draw that value a second time — just as you cannot get the same card twice in a hand when the dealer deals you a hand of five cards.\n\nAs you saw in Section 8.14, the default behavior of sample is to sample without replacement, so simply omit the replace=TRUE argument to sample to get sampling without replacement:\n\n\n# One hand, sampling from the deck without replacement.\nhand <- sample(deck, size=5)\nhand\n\n[1] 6 10 12 11 12\n\n\nThe above is one way to get a random hand of five cards from the deck. Another way is to use sample to shuffle the whole deck of 52 “cards” into a random order, just as a dealer would shuffle the deck before dealing. Then we could take — for example — the first five cards from the shuffled deck to give a random hand. See Section 8.14 for more on this use of sample.\n\n# Shuffle the whole 52 card deck.\nshuffled <- sample(deck)\n# The \"cards\" are now in random order.\nshuffled\n\n [1] 8 13 5 4 12 9 5 7 11 2 13 2 6 8 8 6 10 9 12 9 11 7 13 11 12\n[26] 7 10 4 2 4 7 1 3 5 1 9 2 4 6 1 8 10 3 13 5 11 12 3 1 10\n[51] 6 3\n\n\nNow we can get our hand by taking the first five cards from the deck:\n\n# Select the first five \"cards\" from the shuffled deck.\nhand <- shuffled[1:5]\nhand\n\n[1] 8 13 5 4 12\n\n\nYou have seen that we can use one of two procedures to get a random sample of five cards from deck, drawn without replacement:\n\nUsing sample with size=5 to take the random sample directly from deck, or\nshuffling the entire deck and then taking the first five “cards” from the result of the shuffle.\n\nEither is a valid way of getting five cards at random from the deck. 
It’s up to us which to choose — we slightly prefer to shuffle and take the first five, because it is more like the physical procedure of shuffling the deck and dealing, but which you prefer, is up to you.\n\n11.4.1 A first-pass computer solution to the one-pair problem\nChoosing the shuffle deal way, the chunk to generate one hand is:\n\nshuffled <- sample(deck)\nhand <- shuffled[1:5]\nhand\n\n[1] 6 9 6 2 1\n\n\nWithout doing anything further, we could run this chunk many times, and each time, we could note down whether the particular hand had exactly one pair or not.\nTable 11.2 has the result of running that procedure 25 times:\n\n\n\nTable 11.2: Results of 25 hands using random numbers\n\n\n\n\n\n\n\n\n\n\n\nHand\nCard 1\nCard 2\nCard 3\nCard 4\nCard 5\nOne pair?\n\n\n\n\n1\n9\n4\n11\n9\n13\nYes\n\n\n2\n8\n7\n6\n11\n1\nNo\n\n\n3\n1\n1\n10\n9\n9\nNo\n\n\n4\n4\n2\n2\n1\n1\nNo\n\n\n5\n8\n11\n13\n10\n3\nNo\n\n\n6\n13\n7\n11\n10\n6\nNo\n\n\n7\n8\n1\n10\n11\n12\nNo\n\n\n8\n12\n6\n1\n1\n9\nYes\n\n\n9\n4\n12\n13\n12\n10\nYes\n\n\n10\n9\n12\n12\n8\n7\nYes\n\n\n11\n5\n2\n4\n11\n13\nNo\n\n\n12\n3\n4\n11\n8\n5\nNo\n\n\n13\n2\n4\n2\n13\n1\nYes\n\n\n14\n1\n1\n3\n5\n12\nYes\n\n\n15\n4\n6\n11\n13\n11\nYes\n\n\n16\n10\n4\n8\n9\n12\nNo\n\n\n17\n7\n11\n4\n3\n4\nYes\n\n\n18\n12\n6\n11\n12\n13\nYes\n\n\n19\n5\n3\n8\n6\n9\nNo\n\n\n20\n11\n6\n8\n9\n6\nYes\n\n\n21\n13\n11\n5\n8\n2\nNo\n\n\n22\n11\n8\n10\n1\n13\nNo\n\n\n23\n10\n5\n8\n1\n3\nNo\n\n\n24\n1\n8\n13\n9\n9\nYes\n\n\n25\n5\n13\n2\n4\n11\nNo\n\n\n\n\n\n\n\n\n\n\n\n% Yes\n\n\n\n\n\n44%" + }, + { + "objectID": "probability_theory_2_compound.html#finding-exactly-one-pair-using-code", + "href": "probability_theory_2_compound.html#finding-exactly-one-pair-using-code", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.5 Finding exactly one pair using code", + "text": "11.5 Finding exactly one pair using code\nThus far we have had to look ourselves at the set of cards, or at the numbers, and decide if there was exactly one pair. We would like the computer to do this for us. Let us stay with the numbers we generated above by dealing the random hand from the deck of numbers. To find pairs, we will go through the following procedure:\n\nFor each possible value (1 through 13), count the number of times each value has occurred in hand. Call the result of this calculation — repeat_nos.\nSelect repeat_nos values equal to 2;\nCount the number of “2” values in repeat_nos. This the number of pairs, and excludes three of a kind or four a kind.\nIf the number of pairs is exactly one, label the hand as “Yes”, otherwise label it as “No”." + }, + { + "objectID": "probability_theory_2_compound.html#finding-number-of-repeats-using", + "href": "probability_theory_2_compound.html#finding-number-of-repeats-using", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.6 Finding number of repeats using tabulate", + "text": "11.6 Finding number of repeats using tabulate\nConsider the following 5-card “hand” of values:\n\nhand <- c(5, 7, 5, 4, 7)\n\nThis hand represents a pair of 5s and a pair of 7s.\nWe want to detect the number of repeats for each possible card value, 1 through 13. Let’s say we are looking for 5s. 
We can detect which of the values are equal to 5 by making a Boolean vector, where there is TRUE for a value equal to 5, and FALSE otherwise:\n\nis_5 <- (hand == 5)\n\nWe can then count the number of 5s with:\n\nsum(is_5)\n\n[1] 2\n\n\nIn one chunk:\n\nnumber_of_5s <- sum(hand == 5)\nnumber_of_5s\n\n[1] 2\n\n\nWe could do this laborious task for every possible card value (1 through 13):\n\nnumber_of_1s <- sum(hand == 1) # Number of aces in hand\nnumber_of_2s <- sum(hand == 2) # Number of 2s in hand\nnumber_of_3s <- sum(hand == 3)\nnumber_of_4s <- sum(hand == 4)\nnumber_of_5s <- sum(hand == 5)\nnumber_of_6s <- sum(hand == 6)\nnumber_of_7s <- sum(hand == 7)\nnumber_of_8s <- sum(hand == 8)\nnumber_of_9s <- sum(hand == 9)\nnumber_of_10s <- sum(hand == 10)\nnumber_of_11s <- sum(hand == 11)\nnumber_of_12s <- sum(hand == 12)\nnumber_of_13s <- sum(hand == 13) # Number of Kings in hand.\n\nAbove, we store the result for each card in a separate variable; this is inconvenient, because we would have to go through each variable checking for a pair (a value of 2). It would be more convenient to store these results in a vector. One way to do that would be to store the result for card value 1 at position (index) 1, the result for value 2 at position 2, and so on, like this:\n\n# Make vector length 13, with one element for each card value.\nrepeat_nos <- numeric(13)\nrepeat_nos[1] <- sum(hand == 1) # Number of aces in hand\nrepeat_nos[2] <- sum(hand == 2) # Number of 2s in hand\nrepeat_nos[3] <- sum(hand == 3)\nrepeat_nos[4] <- sum(hand == 4)\nrepeat_nos[5] <- sum(hand == 5)\nrepeat_nos[6] <- sum(hand == 6)\nrepeat_nos[7] <- sum(hand == 7)\nrepeat_nos[8] <- sum(hand == 8)\nrepeat_nos[9] <- sum(hand == 9)\nrepeat_nos[10] <- sum(hand == 10)\nrepeat_nos[11] <- sum(hand == 11)\nrepeat_nos[12] <- sum(hand == 12)\nrepeat_nos[13] <- sum(hand == 13) # Number of Kings in hand.\n# Show the result\nrepeat_nos\n\n [1] 0 0 0 1 2 0 2 0 0 0 0 0 0\n\n\nYou may recognize all this repetitive typing as a good sign we could use a for loop to do the work — er — for us.\n\nrepeat_nos <- numeric(13)\nfor (i in 1:13) { # Set i to be first 1, then 2, ... through 13.\n repeat_nos[i] <- sum(hand == i)\n}\n# Show the result\nrepeat_nos\n\n [1] 0 0 0 1 2 0 2 0 0 0 0 0 0\n\n\nIn our particular hand, after we have done the count for 7s, we will always get 0 for card values 8, 9 … 13, because 7 was the highest card (maximum value) for our particular hand. As you might expect, there is a an R function max that will quickly tell us the maximum value in the hand:\n\nmax(hand)\n\n[1] 7\n\n\nWe can use max to make our loop more efficient, by stopping our checks when we’ve reached the maximum value, like this:\n\nmax_value <- max(hand)\n# Only make a vector large enough to house counts for the max value.\nrepeat_nos <- numeric(max_value)\nfor (i in 1:max_value) { # Set i to 0, then 1 ... 
through max_value\n repeat_nos[i] <- sum(hand == i)\n}\n# Show the result\nrepeat_nos\n\n[1] 0 0 0 1 2 0 2\n\n\nIn fact, this is exactly what the function tabulate does, so we can use that function instead of our loop, to do the same job:\n\nrepeat_nos <- tabulate(hand)\nrepeat_nos\n\n[1] 0 0 0 1 2 0 2" + }, + { + "objectID": "probability_theory_2_compound.html#looking-for-hands-with-exactly-one-pair", + "href": "probability_theory_2_compound.html#looking-for-hands-with-exactly-one-pair", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.7 Looking for hands with exactly one pair", + "text": "11.7 Looking for hands with exactly one pair\nNow we have repeat_nos, we can proceed with the rest of the steps above.\nWe can count the number of cards that have exactly two repeats:\n\n(repeat_nos == 2)\n\n[1] FALSE FALSE FALSE FALSE TRUE FALSE TRUE\n\n\n\nn_pairs <- sum(repeat_nos == 2)\n# Show the result\nn_pairs\n\n[1] 2\n\n\nThe hand is of interest to us only if the number of pairs is exactly 1:\n\n# Check whether there is exactly one pair in this hand.\nn_pairs == 1\n\n[1] FALSE\n\n\nWe now have the machinery to use R for all the logic in simulating multiple hands, and checking for exactly one pair.\nLet’s do that, and use R to do the full job of dealing many hands and finding pairs in each one. We repeat the procedure above using a for loop. The for loop commands the program to do ten thousand repeats of the statements in the “loop” between the start { and end } curly braces.\nIn the body of the loop (the part that gets repeated for each trial) we:\n\nShuffle the deck.\nDeal ourselves a new hand.\nCalculate the repeat_nos for this new hand.\nCalculate the number of pairs from repeat_nos; store this as n_pairs.\nPut n_pairs for this repetition into the correct place in the scoring vector z.\n\nWith that we end a single trial, and go back to the beginning, until we have done this 10000 times.\nWhen those 10000 repetitions are over, the computer moves on to count (sum) the number of “1’s” in the score-keeping vector z, each “1” indicating a hand with exactly one pair. We store this count at location k. 
We divide k by 10000 to get the proportion of hands that had one pair, and we message the result of k to the screen.\n\nStart of one_pair notebook\n\nDownload notebook\nInteract\n\n\n\n# Create a bucket (vector) called a with four \"1's,\" four \"2's,\" four \"3's,\"\n# etc., to represent a deck of cards\none_suit = 1:13\none_suit\n\n [1] 1 2 3 4 5 6 7 8 9 10 11 12 13\n\n\n\n# Repeat values for one suit four times to make a 52 card deck of values.\ndeck <- rep(one_suit, 4)\ndeck\n\n [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11 12\n[26] 13 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11\n[51] 12 13\n\n\n\n# Vector to store result of each trial.\nz <- numeric(10000)\n\n# Repeat the following steps 10000 times\nfor (i in 1:10000) {\n # Shuffle the deck\n shuffled <- sample(deck)\n\n # Take the first five cards to make a hand.\n hand = shuffled[1:5]\n\n # How many pairs?\n # Counts for each card rank.\n repeat_nos <- tabulate(hand)\n n_pairs <- sum(repeat_nos == 2)\n\n # Keep score of # of pairs\n z[i] <- n_pairs\n\n # End loop, go back and repeat\n}\n\n# How often was there 1 pair?\nk <- sum(z == 1)\n\n# Convert to proportion.\nkk = k / 10000\n\n# Show the result.\nmessage(kk)\n\n0.4285\n\n\nEnd of one_pair notebook\n\nIn one run of the program, the result in kk was 0.428, so our estimate would be that the probability of a single pair is 0.428.\nHow accurate are these resampling estimates? The accuracy depends on the number of hands we deal — the more hands, the greater the accuracy. If we were to examine millions of hands, 42 percent would contain a pair each; that is, the chance of getting a pair in the long run is 42 percent. It turns out the estimate of 44 percent based on 25 hands in Table 11.1 is fairly close to the long-run estimate, though whether or not it is close enough depends on one’s needs of course. If you need great accuracy, deal many more hands.\nA note on the decks, hands, repeat_noss in the above program, etc.: These “variables” are called “vector”s in R. A vector is an array (sequence) of elements that gets filled with numbers as R conducts its operations.\nTo help keep things straight (though the program does not require it), we often use z to name the vector that collects all the trial results, and k to denote our overall summary results. Or you could call it something like scoreboard — it’s up to you.\nHow many trials (hands) should be made for the estimate? There is no easy answer.1 One useful device is to run several (perhaps ten) equal sized sets of trials, and then examine whether the proportion of pairs found in the entire group of trials is very different from the proportions found in the various subgroup sets. If the proportions of pairs in the various subgroups differ greatly from one another or from the overall proportion, then keep running additional larger subgroups of trials until the variation from one subgroup to another is sufficiently small for your purposes. While such a procedure would be impractical using a deck of cards or any other physical means, it requires little effort with the computer and R." 
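To make the subgroup check described above concrete, here is a minimal sketch of one way to do it in R. It assumes the z vector of per-trial pair counts from the one_pair notebook is still available; the names n_groups, group_labels and sub_props are ours, chosen for illustration, and ten subgroups of 1,000 trials each is an arbitrary split.

# A sketch of the subgroup check: split the 10000 trials into 10 equal
# subgroups and compare the proportion of one-pair hands in each subgroup
# with the overall proportion.
n_groups <- 10
group_size <- 10000 / n_groups
# Label each trial with its subgroup number (1 through 10).
group_labels <- rep(1:n_groups, each=group_size)
sub_props <- numeric(n_groups)
for (g in 1:n_groups) {
  # Pair counts for the trials in this subgroup only.
  in_group <- z[group_labels == g]
  sub_props[g] <- sum(in_group == 1) / group_size
}
# Show the subgroup proportions next to the overall proportion.
sub_props
sum(z == 1) / 10000

If the subgroup proportions are all close to the overall proportion, the number of trials is probably large enough for the purpose at hand.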
+ }, + { + "objectID": "probability_theory_2_compound.html#two-more-tntroductory-poker-problems", + "href": "probability_theory_2_compound.html#two-more-tntroductory-poker-problems", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.8 Two more introductory poker problems", + "text": "11.8 Two more introductory poker problems\nWhich is more likely, a poker hand with two pairs, or a hand with three of a kind? This is a comparison problem, rather than a problem in absolute estimation as was the previous example.\nIn a series of 100 “hands” that were “dealt” using random numbers, four hands contained two pairs, and two hands contained three of a kind. Is it safe to say, on the basis of these 100 hands, that hands with two pairs are more frequent than hands with three of a kind? To check, we deal another 300 hands. Among them we see fifteen hands with two pairs (3.75 percent) and eight hands with three of a kind (2 percent), for a total of nineteen to ten. Although the difference is not enormous, it is reasonably clear-cut. Another 400 hands might be advisable, but we shall not bother.\nEarlier I obtained forty-four hands with one pair each out of 100 hands, which makes it quite plain that one pair is more frequent than either two pairs or three-of-a-kind. Obviously, we need more hands to compare the odds in favor of two pairs with the odds in favor of three-of-a-kind than to compare those for one pair with those for either two pairs or three-of-a-kind. Why? Because the difference in odds between one pair, and either two pairs or three-of-a-kind, is much greater than the difference in odds between two pairs and three-of-a-kind. This observation leads to a general rule: The closer the odds between two events, the more trials are needed to determine which has the higher odds.\nAgain it is interesting to compare the odds with the formulaic mathematical computations, which are 1 in 21 (4.75 percent) for a hand containing two pairs and 1 in 47 (2.1 percent) for a hand containing three-of-a-kind — not too far from the estimates of .0375 and .02 derived from simulation.\nTo handle the problem with the aid of the computer, we simply need to estimate the proportion of hands having triplicates and the proportion of hands with two pairs, and compare those estimates.\nTo estimate the hands with three-of-a-kind, we can use a notebook just like “One Pair” earlier, except using repeat_nos == 3 to search for triplicates instead of duplicates. 
The program, then, is:\n\nStart of three_of_a_kind notebook\n\nDownload notebook\nInteract\n\n\n\none_suit <- 1:13\ndeck <- rep(one_suit, 4)\n\n\ntriples_per_trial <- numeric(10000)\n\n# Repeat the following steps 10000 times\nfor (i in 1:10000) {\n # Shuffle the deck\n shuffled <- sample(deck)\n\n # Take the first five cards.\n hand <- shuffled[1:5]\n\n # How many triples?\n repeat_nos <- tabulate(hand)\n n_triples <- sum(repeat_nos == 3)\n\n # Keep score of # of triples\n triples_per_trial[i] <- n_triples\n\n # End loop, go back and repeat\n}\n\n# How often was there exactly one three-of-a-kind?\nn_triples <- sum(triples_per_trial == 1)\n\n# Convert to proportion\nmessage(n_triples / 10000)\n\n0.0251\n\n\nEnd of three_of_a_kind notebook\n\nTo estimate the probability of getting a two-pair hand, we revert to the original program (counting pairs), except that we examine all the results in the score-keeping vector z for hands in which we had two pairs, instead of one.\n\nStart of two_pairs notebook\n\nDownload notebook\nInteract\n\n\n\ndeck <- rep(1:13, 4)\n\n\npairs_per_trial <- numeric(10000)\n\n# Repeat the following steps 10000 times\nfor (i in 1:10000) {\n # Shuffle the deck\n shuffled <- sample(deck)\n\n # Take the first five cards.\n hand <- shuffled[1:5]\n\n # How many pairs?\n # Counts for each card rank.\n repeat_nos <- tabulate(hand)\n n_pairs <- sum(repeat_nos == 2)\n\n # Keep score of # of pairs\n pairs_per_trial[i] <- n_pairs\n\n # End loop, go back and repeat\n}\n\n# How often were there 2 pairs?\nn_two_pairs <- sum(pairs_per_trial == 2)\n\n# Convert to proportion\nprint(n_two_pairs / 10000)\n\n[1] 0.0465\n\n\nEnd of two_pairs notebook\n\nFor efficiency (though efficiency really is not important here because the computer performs its operations so cheaply) we could develop both estimates in a single program by simply generating 10000 hands, and counting the number with three-of-a-kind and the number with two pairs.\nBefore we leave the poker problems, we note a difficulty with Monte Carlo simulation. The probability of a royal flush is so low (about one in half a million) that it would take much computer time to compute. On the other hand, considerable inaccuracy is of little matter. Should one care whether the probability of a royal flush is 1/100,000 or 1/500,000?" + }, + { + "objectID": "probability_theory_2_compound.html#the-concepts-of-replacement-and-non-replacement", + "href": "probability_theory_2_compound.html#the-concepts-of-replacement-and-non-replacement", + "title": "11  Probability Theory, Part 2: Compound Probability", + "section": "11.9 The concepts of replacement and non-replacement", + "text": "11.9 The concepts of replacement and non-replacement\nIn the poker example above, we did not replace the first card we drew. If we were to replace the card, it would leave the probability the same before the second pick as before the first pick. That is, the conditional probability remains the same. If we replace, conditions do not change. But if we do not replace the item drawn, the probability changes from one moment to the next. (Perhaps refresh your mind with the examples in the discussion of conditional probability, including Section 9.1.1.)\nIf we sample with replacement, the sample drawings remain independent of each other — a topic addressed in Section 9.1.\nIn many cases, a key decision in modeling the situation in which we are interested is whether to sample with or without replacement. 
The choice must depend on the characteristics of the situation.\nThere is a close connection between the lack of finiteness of the concept of universe in a given situation, and sampling with replacement. That is, when the universe (population) we have in mind is not small, or has no conceptual bounds at all, then the probability of each successive observation remains the same, and this is modeled by sampling with replacement. (“Not finite” is a less expansive term than “infinite,” though one might regard them as synonymous.)\nChapter 12 discusses problems whose appropriate concept of a universe is finite, whereas Chapter 13 discusses problems whose appropriate concept of a universe is not finite. This general procedure will be discussed several times, with examples included." + }, + { + "objectID": "probability_theory_3.html#sec-birthday-problem", + "href": "probability_theory_3.html#sec-birthday-problem", + "title": "12  Probability Theory, Part 3", + "section": "12.1 Example: The Birthday Problem", + "text": "12.1 Example: The Birthday Problem\nThis examples illustrates the probability of duplication in a multi-outcome sample from an infinite universe.\nAs an indication of the power and simplicity of resampling methods, consider this famous examination question used in probability courses: What is the probability that two or more people among a roomful of (say) twenty-five people will have the same birthday? To obtain an answer we need simply examine the first twenty-five numbers from the random-number table that fall between “001” and “365” (the number of days in the year), record whether or not there is a duplication among the twenty-five, and repeat the process often enough to obtain a reasonably stable probability estimate.\nPose the question to a mathematical friend of yours, then watch her or him sweat for a while, and afterwards compare your answer to hers/his. I think you will find the correct answer very surprising. It is not unheard of for people who know how this problem works to take advantage of their knowledge by making and winning big bets on it. (See how a bit of knowledge of probability can immediately be profitable to you by avoiding such unfortunate occurrences?)\nMore specifically, these steps answer the question for the case of twenty-five people in the room:\n\nStep 1. Let three-digit random numbers 1-365 stand for the 365 days in the year. (Ignore leap year for simplicity.)\nStep 2. Examine for duplication among the first twenty-five random numbers chosen “001-365.” (Triplicates or higher-order repeats are counted as duplicates here.) If there is one or more duplicate, record “yes.” Otherwise record “no.”\nStep 3. Repeat perhaps a thousand times, and calculate the proportion of a duplicate birthday among twenty-five people.\n\nYou would probably use the computer to generate the initial random numbers.\nNow try the program written as follows.\n\nStart of birthday_problem notebook\n\nDownload notebook\nInteract\n\n\n\nn_with_same_birthday <- numeric(10000)\n\n# All the days of the year from \"1\" through \"365\"\nall_days <- 1:365\n\n# Do 10000 trials (experiments)\nfor (i in 1:10000) {\n # Generate 25 numbers randomly between \"1\" and \"365,\" put them in a.\n a <- sample(all_days, size=25, replace=TRUE)\n\n # Looking in a, count the number of multiples and put the result in\n # \"counts\".\n counts <- tabulate(a)\n\n # We request multiples > 1 because we are interested in any multiple,\n # whether it is a duplicate, triplicate, etc. 
Had we been interested only\n # in duplicates, we would have put in sum(counts == 2).\n n_duplicates <- sum(counts > 1)\n\n # Score the result of each trial to our store\n n_with_same_birthday[i] <- n_duplicates\n\n # End the loop for the trial, go back and repeat the trial until all 10000\n # are complete, then proceed.\n}\n\n# Determine how many trials had at least one multiple.\nk <- sum(n_with_same_birthday > 0)\n\n# Convert to a proportion.\nkk <- k / 10000\n\n# Print the result.\nmessage(kk)\n\n0.7823\n\n\nEnd of birthday_problem notebook\n\nWe have dealt with this example in a rather intuitive and unsystematic fashion. From here on, we will work in a more systematic, step-by-step manner. And from here on the problems form an orderly sequence of the classical types of problems in probability theory (Chapter 12 and Chapter 13), and inferential statistics (Chapter 20 to Chapter 28)." + }, + { + "objectID": "probability_theory_3.html#example-three-daughters-among-four-children", + "href": "probability_theory_3.html#example-three-daughters-among-four-children", + "title": "12  Probability Theory, Part 3", + "section": "12.2 Example: Three Daughters Among Four Children", + "text": "12.2 Example: Three Daughters Among Four Children\nThis problem illustrates a problem with two outcomes (Binomial 1) and sampling with Replacement Among Equally Likely Outcomes.\nWhat is the probability that exactly three of the four children in a four-child family will be daughters?2\nThe first step is to state that the approximate probability that a single birth will produce a daughter is 50-50 (1 in 2). This estimate is not strictly correct, because there are roughly 106 male children born to each 100 female children. But the approximation is close enough for most purposes, and the 50-50 split simplifies the job considerably. (Such “false” approximations are part of the everyday work of the scientist. The appropriate question is not whether or not a statement is “only” an approximation, but whether or not it is a good enough approximation for your purposes.)\nThe probability that a fair coin will turn up heads is .50 or 50-50, close to the probability of having a daughter. Therefore, flip a coin in groups of four flips, and count how often three of the flips produce heads. (You must decide in advance whether three heads means three girls or three boys.) It is as simple as that.\nIn resampling estimation it is of the highest importance to work in a careful, step-by-step fashion — to write down the steps in the estimation, and then to do the experiments just as described in the steps. Here is a set of steps that will lead to a correct answer about the probability of getting three daughters among four children:\n\nStep 1. Using coins, let “heads” equal “girl” and “tails” equal “boy.”\nStep 2. Throw four coins.\nStep 3. Examine whether the four coins fall with exactly three heads up. If so, write “yes” on a record sheet; otherwise write “no.”\nStep 4. Repeat step 2 perhaps two hundred times.\nStep 5. 
Count the proportion “yes.” This proportion is an estimate of the probability of obtaining exactly 3 daughters in 4 children.\n\nThe first few experimental trials might appear in the record sheet as follows (Table 12.1):\n\n\nTable 12.1: Example trials from the three-girls problem\n\n\nNumber of Heads\nYes or No\n\n\n\n\n1\nNo\n\n\n0\nNo\n\n\n3\nYes\n\n\n2\nNo\n\n\n1\nNo\n\n\n2\nNo\n\n\n…\n…\n\n\n…\n…\n\n\n…\n…\n\n\n\n\nThe probability of getting three daughters in four births could also be found with a deck of cards, a random number table, a die, or with R. For example, half the cards in a deck are black, so the probability of getting a black card (“daughter”) from a full deck is 1 in 2. Therefore, deal a card, record “daughter” or “son,” replace the card, shuffle, deal again, and so forth for 200 sets of four cards. Then count the proportion of groups of four cards in which you got exactly three daughters.\n\nStart of three_girls notebook\n\nDownload notebook\nInteract\n\n\n\ngirl_counts <- numeric(10000)\n\n# Do 10000 trials\nfor (i in 1:10000) {\n\n # Select 'girl' or 'boy' at random, four times.\n children <- sample(c('girl', 'boy'), size=4, replace=TRUE)\n\n # Count the number of girls and put the result in b.\n b <- sum(children == 'girl')\n\n # Keep track of each trial result in girl_counts.\n girl_counts[i] <- b\n\n # End this trial, repeat the experiment until 10000 trials are complete,\n # then proceed.\n}\n\n# Count the number of experiments where we got exactly 3 girls, and put this\n# result in n_three_girls.\nn_three_girls <- sum(girl_counts == 3)\n\n# Convert to a proportion.\nthree_girls_prop <- n_three_girls / 10000\n\n# Print the results.\nmessage(three_girls_prop)\n\n0.2392\n\n\nEnd of three_girls notebook\n\nNotice that the procedure outlined in the steps above would have been different (though almost identical) if we asked about the probability of three or more daughters rather than exactly three daughters among four children. For three or more daughters we would have scored “yes” on our score-keeping pad for either three or four heads, rather than for just three heads. Likewise, in the computer solution we would have used the statement n_three_girls <- sum(girl_counts >= 3).\nIt is important that, in this case, in contrast to what we did in the example from Section 11.2 (the introductory poker example), the card is replaced each time so that each card is dealt from a full deck. This method is known as sampling with replacement. One samples with replacement whenever the successive events are independent; in this case we assume that the chance of having a daughter remains the same (1 girl in 2 births) no matter what sex the previous births were 3. But, if the first card dealt is black and would not be replaced, the chance of the second card being black would no longer be 26 in 52 (.50), but rather 25 in 51 (.49); if the first three cards are black and would not be replaced, the chances of the fourth card’s being black would sink to 23 in 49 (.47).\nTo push the illustration further, consider what would happen if we used a deck of only six cards, half (3 of 6) black and half (3 of 6) red, instead of a deck of 52 cards. If the chosen card is replaced each time, the 6-card deck produces the same results as a 52-card deck; in fact, a two-card deck would do as well. But, if the sampling is done without replacement, it is impossible to obtain 4 “daughters” with the 6-card deck because there are only 3 “daughters” in the deck. 
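As a small aside (not part of the original card procedure), here is a minimal sketch of the six-card deck just described, with our own illustrative name six_card_deck. Sampling with replacement can occasionally give four "daughters", but sampling without replacement never can, because only three "daughter" cards exist.

# Three 'daughter' cards and three 'son' cards.
six_card_deck <- rep(c('daughter', 'son'), c(3, 3))

# Four draws with replacement: four daughters are possible (about 1 in 16).
sample(six_card_deck, size=4, replace=TRUE)

# Four draws without replacement: four daughters are impossible, because
# each 'daughter' card can be drawn at most once.
sample(six_card_deck, size=4)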
To repeat, then, whenever you want to estimate the probability of some series of events where each event is independent of the other, you must sample with replacement ." + }, + { + "objectID": "probability_theory_3.html#variations-of-the-daughters-problem", + "href": "probability_theory_3.html#variations-of-the-daughters-problem", + "title": "12  Probability Theory, Part 3", + "section": "12.3 Variations of the daughters problem", + "text": "12.3 Variations of the daughters problem\nIn later chapters we will frequently refer to a problem which is identical in basic structure to the problem of three girls in four children — the probability of getting 9 females in ten calf births if the probability of a female birth is (say) .5 — when we set this problem in the context of the possibility that a genetic engineering practice is effective in increasing the proportion of females (desirable for the production of milk).\nSo far we have assumed the simple case where we have a vector of values that we are sampling from, and we are selecting each of these values into the sample with equal probability.\nFor example, we started with the simple assumption that a child is just as likely to be born a boy as a girl. Our input is:\n\ninput_values = c('girl', 'boy')\n\nBy default, sample will draw the input values with equal probability. Here, we draw a sample (children) of four values from the input, where each value in children has an equal chance of being “girl” or “boy”.\n\nchildren <- sample(input_values, size=4, replace=TRUE)\nchildren\n\n[1] \"girl\" \"girl\" \"boy\" \"boy\" \n\n\nThat is, sample gives each element in input_values an equal chance of being selected as the next element in children.\nThat is fine if we have some simple probability to simulate, like 0.5. But now let us imagine we want to get more precise. We happen to know that any given birth is just slightly more likely to be a boy than a girl.4. For example, the proportion of boys born in the UK is 0.513. Hence the proportion of girls is 1-0.513 = 0.487." + }, + { + "objectID": "probability_theory_3.html#and-the-probp-argument", + "href": "probability_theory_3.html#and-the-probp-argument", + "title": "12  Probability Theory, Part 3", + "section": "12.4 sample and the prob argument", + "text": "12.4 sample and the prob argument\nWe could replicate this probability of 0.487 for ‘girl’ in the output sample by making an input array of 1000 strings, that contains 487 ‘girls’ and 513 ‘boys’:\n\nbig_girls <- rep(c('girl', 'boy'), c(487, 513))\n\nNow if we sample using the default in sample, each element in the input big_girls array will have the same chance of appearing in the sample, but because there are 487 ‘girls’, and 513 ‘boys’, each with an equal chance of appearing in the sample, we will get a ‘girl’ in roughly 487 out of every 1000 elements we draw, and a boy roughly 513 / 1000 times. That is, our chance of any one element of being a ‘girl’ is, as we want, 0.487.\n\n# Now each element has probability 0.487 of 'girl', 0.513 of 'boy'.\nrealistic_children <- sample(big_girls, size=4, replace=TRUE)\nrealistic_children\n\n[1] \"girl\" \"boy\" \"girl\" \"boy\" \n\n\nBut, there is an easier way than compiling a big 1000 element array, and that is to use the prob= argument to sample. This allows us to specify the probability with which we will draw each of the input elements into the output sample. 
For example, to draw ‘girl’ with probability 0.487 and ‘boy’ with probability 0.513, we would do:\n\n# Draw 'girl' with probability (p) 0.487 and 'boy' 0.513.\nchildren_again <- sample(c('girl', 'boy'), size=4, prob=c(0.487, 0.513),\n replace=TRUE)\nchildren_again\n\n[1] \"boy\" \"girl\" \"girl\" \"boy\" \n\n\nThe prob argument allows us to specify the probability of each element in the input vector — so if we had three elements in the input array, we would need three probabilities in prob. For example, let’s say we were looking at some poorly-entered hospital records, we might have ‘girl’ or ‘boy’ recorded as the child’s gender, but the record might be missing — ‘not-recorded’ — with a 19% chance:\n\n# Draw 'girl' with probability (p) 0.4, 'boy' with p=0.41, 'not-recorded' with\n# p=0.19.\nsample(c('girl', 'boy', 'not-recorded'), size=30, prob=c(0.4, 0.41, 0.19),\n replace=TRUE)\n\n [1] \"boy\" \"boy\" \"boy\" \"boy\" \"girl\" \n [6] \"girl\" \"boy\" \"boy\" \"boy\" \"girl\" \n[11] \"girl\" \"boy\" \"boy\" \"girl\" \"girl\" \n[16] \"boy\" \"girl\" \"girl\" \"girl\" \"girl\" \n[21] \"boy\" \"girl\" \"not-recorded\" \"not-recorded\" \"not-recorded\"\n[26] \"not-recorded\" \"boy\" \"not-recorded\" \"girl\" \"girl\" \n\n\n\n\n\n\n\n\nHow does the prob argument to sample work?\n\n\n\nYou might wonder how R does this trick of choosing the elements with different probabilities.\nOne way of doing this is to use uniform random numbers from 0 through 1. These are floating point numbers that can take any value, at random, from 0 through 1.\n\n# Run this chunk a few times to see random numbers anywhere from 0 through 1.\n# `runif` means \"Random UNIForm\".\nrunif(1)\n\n[1] 0.684\n\n\nBecause this random uniform number has an equal chance of being anywhere in the range 0 through 1, there is a 50% chance that any given number will be less then 0.5 and a 50% chance it is greater than 0.5. (Of course it could be exactly equal to 0.5, but this is vanishingly unlikely, so we will ignore that for now).\nSo, if we thought girls were exactly as likely as boys, we could select from ‘girl’ and ‘boy’ using this simple logic:\n\nif (runif(1) < 0.5) {\n result = 'girl'\n} else {\n result = 'boy'\n}\n\nBut, by the same logic, there is a 0.487 chance that the random uniform number will be less than 0.487 and a 0.513 chance it will be greater. So, if we wanted to give ourselves a 0.487 chance of ‘girl’, we could do:\n\nif (runif(1) < 0.487) {\n result = 'girl'\n} else {\n result = 'boy'\n}\n\nWe can extend the same kind of logic to three options. For example, there is a 0.4 chance the random uniform number will be less than 0.4, a 0.41 chance it will be somewhere between 0.4 and 0.81, and a 0.19 chance it will be greater than 0.81." + }, + { + "objectID": "probability_theory_3.html#the-daughters-problem-with-more-accurate-probabilities", + "href": "probability_theory_3.html#the-daughters-problem-with-more-accurate-probabilities", + "title": "12  Probability Theory, Part 3", + "section": "12.5 The daughters problem with more accurate probabilities", + "text": "12.5 The daughters problem with more accurate probabilities\nWe can use the probability argument to sample to do a more realistic simulation of the chance of a family with exactly three girls. 
In this case it is easy to make the chance for the R simulation, but much more difficult using physical devices like coins to simulate the randomness.\nRemember, the original code for the 50-50 case, has the following:\n\n# Select 'girl' or 'boy' at random, four times.\nchildren <- sample(c('girl', 'boy'), size=4, replace=TRUE)\n\n# Count the number of girls and put the result in b.\nb <- sum(children == 'girl')\n\nThe only change we need to the above, for the 0.487 - 0.513 case, is the one you see above:\n\n# Give 'girl' 48.7% of the time, 'boy' 51.3% of the time.\nchildren <- sample(c('girl', 'boy'), size=4, prob=c(0.487, 0.513),\n replace=TRUE)\n\n# Count the number of girls and put the result in b.\nb <- sum(children == 'girl')\n\nThe rest of the program remains unchanged." + }, + { + "objectID": "probability_theory_3.html#a-note-on-clarifying-and-labeling-problems", + "href": "probability_theory_3.html#a-note-on-clarifying-and-labeling-problems", + "title": "12  Probability Theory, Part 3", + "section": "12.6 A note on clarifying and labeling problems", + "text": "12.6 A note on clarifying and labeling problems\nIn conventional analytic texts and courses on inferential statistics, students are taught to distinguish between various classes of problems in order to decide which formula to apply. I doubt the wisdom of categorizing and labeling problems in that fashion, and the practice is unnecessary here. I consider it better that the student think through every new problem in the most fundamental terms. The exercise of this basic thinking avoids the mistakes that come from too-hasty and superficial pigeon-holing of problems into categories. Nevertheless, in order to help readers connect up the resampling material with the conventional curriculum of analytic methods, the examples presented here are given their conventional labels. And the examples given here cover the range of problems encountered in courses in probability and inferential statistics.\nTo repeat, one does not need to classify a problem when one proceeds with the Monte Carlo resampling method; you simply model the features of the situation you wish to analyze. In contrast, with conventional methods you must classify the situation and then apply procedures according to rules that depend upon the classification; often the decision about which rules to follow must be messy because classification is difficult in many cases, which contributes to the difficulty of choosing correct conventional formulaic methods." + }, + { + "objectID": "probability_theory_3.html#binomial-trials", + "href": "probability_theory_3.html#binomial-trials", + "title": "12  Probability Theory, Part 3", + "section": "12.7 Binomial trials", + "text": "12.7 Binomial trials\nThe problem of the three daughters in four births is known in the conventional literature as a “binomial sampling experiment with equally-likely outcomes.” “Binomial” means that the individual simple event (a birth or a coin flip) can have only two outcomes (boy or girl, heads or tails), “binomial” meaning “two names” in Latin.5\nA fundamental property of binomial processes is that the individual trials are independent , a concept discussed earlier. 
A binomial sampling process is a series of binomial (one-of-two-outcome) events about which one may ask many sorts of questions — the probability of exactly X heads (“successes”) in N trials, or the probability of X or more “successes” in N trials, and so on.\n“Equally likely outcomes” means we assume that the probability of a girl or boy in any one birth is the same (though this assumption is slightly contrary to fact); we represent this assumption with the equal-probability heads and tails of a coin. Shortly we will come to binomial sampling experiments where the probabilities of the individual outcomes are not equal.\nThe term “with replacement” was explained earlier; if we were to use a deck of red and black cards (instead of a coin) for this resampling experiment, we would replace the card each time a card is drawn.\nThe introductory poker example from Section 11.2, illustrated sampling without replacement, as will other examples to follow.\nThis problem would be done conventionally with the binomial theorem using probabilities of .5, or of .487 and .513, asking about 3 successes in 4 trials." + }, + { + "objectID": "probability_theory_3.html#example-three-or-more-successful-basketball-shots-in-five-attempts", + "href": "probability_theory_3.html#example-three-or-more-successful-basketball-shots-in-five-attempts", + "title": "12  Probability Theory, Part 3", + "section": "12.8 Example: Three or More Successful Basketball Shots in Five Attempts", + "text": "12.8 Example: Three or More Successful Basketball Shots in Five Attempts\nThis is an example of two-outcome sampling with unequally-likely outcomes, with replacement — a binomial experiment.\nWhat is the probability that a basketball player will score three or more baskets in five shots from a spot 30 feet from the basket, if on the average she succeeds with 25 percent of her shots from that spot?\nIn this problem the probabilities of “success” or “failure” are not equal, in contrast to the previous problem of the daughters. Instead of a 50-50 coin, then, an appropriate “model” would be a thumbtack that has a 25 percent chance of landing “up” when it falls, and a 75 percent chance of landing down.\nIf we lack a thumbtack known to have a 25 percent chance of landing “up,” we could use a card deck and let spades equal “success” and the other three suits represent “failure.” Our resampling experiment could then be done as follows:\n\nLet “spade” stand for “successful shot,” and the other suits stand for unsuccessful shot.\nDraw a card, record its suit (“spade” or “other”) and replace. Do so five times (for five shots).\nRecord whether the outcome of step 2 was three or more spades. If so indicate “yes,” and otherwise “no.”\nRepeat steps 2-4 perhaps four hundred times.\nCount the proportion “yes” out of the four hundred throws. That proportion estimates the probability of getting three or more baskets out of five shots if the probability of a single basket is .25.\n\nThe first four repetitions on your score sheet might look like this (Table 12.2):\n\n\nTable 12.2: First four repetitions of 3 or more shots simulation\n\n\nCard 1\nCard 2\nCard 3\nCard 4\nCard 5\nResult\n\n\n\n\nSpade\nOther\nOther\nOther\nOther\nNo\n\n\nOther\nOther\nOther\nOther\nOther\nNo\n\n\nSpade\nSpade\nOther\nSpade\nSpade\nYes\n\n\nOther\nSpade\nOther\nOther\nSpade\nNo\n\n\n\n\nInstead of cards, we could have used two-digit random numbers, with (say) “1-25” standing for “success,” and “26-00” (“00” in place of “100”) standing for failure. 
Then the steps would simply be:\n\nLet the random numbers “1-25” stand for “successful shot,” “26-00” for unsuccessful shot.\nDraw five random numbers;\nCount how many of the numbers are between “01” and “25.” If three or more, score “yes.”\nRepeat step 2 four hundred times.\n\nIf you understand the earlier “three_girls” program, then the program below should be easy: To create 10000 samples, we start with a for statement. We then sample 5 numbers between “1” and “4” into our variable a to simulate the 5 shots, each with a 25 percent — or 1 in 4 — chance of scoring. We decide that 1 will stand for a successful shot, and 2 through 4 will stand for a missed shot, and therefore we count (sum) the number of 1’s in a to determine the number of shots resulting in baskets in the current sample. The next step is to transfer the results of each trial to vector n_baskets. We then finish the loop with the } close brace. The final step is to search the vector n_baskets, after the 10000 samples have been generated and sum the times that 3 or more baskets were made. We place the results in n_more_than_2, calculate the proportion in propo_more_than_2, and then display the result.\n\nStart of basketball_shots notebook\n\nDownload notebook\nInteract\n\n\n\nn_baskets <- numeric(10000)\n\n# Do 10000 experimental trials.\nfor (i in 1:10000) {\n\n # Generate 5 random numbers, each between 1 and 4, put them in \"a\".\n # Let \"1\" represent a basket, \"2\" through \"4\" be a miss.\n a <- sample(1:4, size=5, replace=TRUE)\n\n # Count the number of baskets, put that result in b.\n b <- sum(a == 1)\n\n # Keep track of each experiment's results in z.\n n_baskets[i] <- b\n\n # End the experiment, go back and repeat until all 10000 are completed, then\n # proceed.\n}\n\n# Determine how many experiments produced more than two baskets, put that\n# result in k.\nn_more_than_2 <- sum(n_baskets > 2)\n\n# Convert to a proportion.\nprop_more_than_2 <- n_more_than_2 / 10000\n\n# Print the result.\nmessage(prop_more_than_2)\n\n0.1055\n\n\nEnd of basketball_shots notebook" + }, + { + "objectID": "probability_theory_3.html#note-to-the-student-of-analytic-probability-theory", + "href": "probability_theory_3.html#note-to-the-student-of-analytic-probability-theory", + "title": "12  Probability Theory, Part 3", + "section": "12.9 Note to the student of analytic probability theory", + "text": "12.9 Note to the student of analytic probability theory\nThis problem would be done conventionally with the binomial theorem, asking about the chance of getting 3 successes in 5 trials, with the probability of a success = .25." + }, + { + "objectID": "probability_theory_3.html#sec-one-black-archery", + "href": "probability_theory_3.html#sec-one-black-archery", + "title": "12  Probability Theory, Part 3", + "section": "12.10 Example: One in Black, Two in White, No Misses in Three Archery Shots", + "text": "12.10 Example: One in Black, Two in White, No Misses in Three Archery Shots\nThis is an example of a multiple outcome (multinomial) sampling with unequally likely outcomes; with replacement.\nAssume from past experience that a given archer puts 10 percent of his shots in the black (“bullseye”) and 60 percent of his shots in the white ring around the bullseye, but misses with 30 percent of his shots. How likely is it that in three shots the shooter will get exactly one bullseye, two in the white, and no misses? 
Notice that unlike the previous cases, in this example there are more than two outcomes for each trial.\nThis problem may be handled with a deck of three colors (or suits) of cards in proportions varying according to the probabilities of the various outcomes, and sampling with replacement. Using random numbers is simpler, however:\n\nStep 1. Let “1” = “bullseye,” “2-7” = “in the white,” and “8-0” = “miss.”\nStep 2. Choose three random numbers, and examine whether there are one “1” and two numbers “2-7.” If so, record “yes,” otherwise “no.”\nStep 3. Repeat step 2 perhaps 400 times, and count the proportion of “yeses.” This estimates the probability sought.\n\nThis problem would be handled in conventional probability theory with what is known as the Multinomial Distribution.\nThis problem may be quickly solved on the computer using R with the notebook labeled “bullseye” below. Bullseye has a complication not found in previous problems: It tests whether two different sorts of events both happen — a bullseye plus two shots in the white.\nAfter generating three randomly-drawn numbers between 1 and 10, we check with the sum function to see if there is a bullseye. If there is, the if statement tells the computer to continue with the operations, checking if there are two shots in the white; if there is no bullseye, the if statement tells the computer to end the trial and start another trial. A thousand repetitions are called for, the number of trials meeting the criteria are counted, and the results are then printed.\nIn addition to showing how this particular problem may be handled with R, the “bullseye” program teaches you some more fundamentals of computer programming. The if statement and the two loops, one within the other, are basic tools of programming.\n\nStart of bullseye notebook\n\nDownload notebook\nInteract\n\n\n\n# Make a vector to store the results of each trial.\nwhite_counts <- numeric(10000)\n\n# Do 10000 experimental trials\nfor (i in 1:10000) {\n\n # To represent 3 shots, generate 3 numbers at random between \"1\" and \"10\"\n # and put them in a. We will let a \"1\" denote a bullseye, \"2\"-\"7\" a shot in\n # the white, and \"8\"-\"10\" a miss.\n a <- sample(1:10, size=3, replace=TRUE)\n\n # Count the number of bullseyes, put that result in b.\n b <- sum(a == 1)\n\n # If there is exactly one bullseye, we will continue with counting the\n # other shots. (If there are no bullseyes, we need not bother — the\n # outcome we are interested in has not occurred.)\n if (b == 1) {\n\n # Count the number of shots in the white, put them in c. (Recall we are\n # doing this only if we got one bullseye.)\n c <- sum((a >= 2) & (a <=7))\n\n # Keep track of the results of this second count.\n white_counts[i] <- c\n\n # End the \"if\" sequence — we will do the following steps without regard\n # to the \"if\" condition.\n }\n\n # End the above experiment and repeat it until 10000 repetitions are\n # complete, then continue.\n}\n\n# Count the number of occasions on which there are two in the white and a\n# bullseye.\nn_desired <- sum(white_counts == 2)\n\n# Convert to a proportion.\nprop_desired <- n_desired / 10000\n\n# Print the results.\nmessage(prop_desired)\n\n0.1047\n\n\nEnd of bullseye notebook\n\nThis example illustrates the addition rule that was introduced and discussed in Chapter 9. In Section 12.10, a bullseye, an in-the-white shot, and a missed shot are “mutually exclusive” events because a single shot cannot result in more than one of the three possible outcomes. 
One can calculate the probability of either of two mutually-exclusive outcomes by adding their probabilities. The probability of either a bullseye or a shot in the white is .1 + .6 = .7. The probability of an arrow either in the white or a miss is .6 + .3 = .9. The logic of the addition rule is obvious when we examine the random numbers given to the outcomes. Seven of 10 random numbers belong to “bullseye” or “in the white,” and nine of 10 belong to “in the white” or “miss.”" + }, + { + "objectID": "probability_theory_3.html#example-two-groups-of-heart-patients", + "href": "probability_theory_3.html#example-two-groups-of-heart-patients", + "title": "12  Probability Theory, Part 3", + "section": "12.11 Example: Two Groups of Heart Patients", + "text": "12.11 Example: Two Groups of Heart Patients\nWe want to learn how likely it is that, by chance, Group A would have as few as two deaths more than Group B — Table 12.3:\n\n\nTable 12.3: Two Groups of Heart Patients\n\n\n\nLive\nDie\n\n\n\n\nGroup A\n79\n11\n\n\nGroup B\n21\n9\n\n\n\n\nThis problem, phrased here as a question in probability, is the prototype of a problem in statistics that we will consider later (which the conventional theory would handle with a “chi square distribution”). We can handle it in either of two ways, as follows:\nApproach A\n\nPut 120 balls into a bucket, 100 white (for live) and 20 black (for die).\nDraw 30 balls randomly and assign them to Group B; the others are assigned to Group A.\nCount the numbers of black balls in the two groups and determine whether Group A’s excess “deaths” (= black balls), compared to Group B, is two or fewer (or, what is equivalent in this case, whether there are 11 or fewer black balls in Group A); if so, write “Yes,” otherwise “No.”\nRepeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”\n\nA second way in which we shall think about this sort of problem is as follows:\nApproach B\n\nPut 120 balls into a bucket, 100 white (for live) and 20 black (for die) (as before).\nDraw balls one by one, replacing the drawn ball each time, until you have accumulated 90 balls for Group A and 30 balls for Group B. (You could, of course, just as well use a bucket with 4 white and 1 black balls, or 8 white and 2 black, in this approach.)\nAs in Approach A above, count the numbers of black balls in the two groups and determine whether Group A’s excess deaths are two or fewer; if so, write “Yes,” otherwise “No.”\nAs above, repeat steps 2 and 3 perhaps 10000 times and compute the proportion “Yes.”\n\nWe must also take into account the possibility of a similarly eye-catching “unbalanced” result with a much larger proportion of deaths in Group B. It will be a tough decision how to do so, but a reasonable option is simply to double the probability computed in the last step of Approach A or Approach B.\nDeciding which of these two approaches — the “permutation” (without replacement) and “bootstrap” (with replacement) methods — is the more appropriate is often a thorny matter; it will be discussed later in Chapter 24. In many cases, however, the two approaches will lead to similar results.\nLater, we will actually carry out these procedures with the aid of R, and estimate the probabilities we seek."
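The text above defers its own R implementation until later. As a preview, here is a minimal sketch of Approach A (sampling without replacement), in the style of the other notebooks in this chapter; the variable names are our own, and the 10,000 trials follow the step description above.

n_yes <- 0

# Do 10000 experimental trials.
for (i in 1:10000) {

  # A bucket of 120 balls: 100 'live' (white) and 20 'die' (black).
  bucket <- rep(c('live', 'die'), c(100, 20))

  # Shuffle the bucket and deal out all 120 balls.
  shuffled <- sample(bucket)

  # The first 90 balls form Group A, the remaining 30 form Group B.
  deaths_a <- sum(shuffled[1:90] == 'die')
  deaths_b <- sum(shuffled[91:120] == 'die')

  # Score a "yes" if Group A's excess deaths are two or fewer
  # (equivalently, if Group A has 11 or fewer deaths).
  if (deaths_a - deaths_b <= 2) {
    n_yes <- n_yes + 1
  }
}

# Convert to a proportion.
message(n_yes / 10000)

Approach B would differ only in drawing the 90 and 30 balls with replacement rather than without.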
+ }, + { + "objectID": "probability_theory_3.html#example-dispersion-of-a-sum-of-random-variables-hammer-lengths-heads-and-handles", + "href": "probability_theory_3.html#example-dispersion-of-a-sum-of-random-variables-hammer-lengths-heads-and-handles", + "title": "12  Probability Theory, Part 3", + "section": "12.12 Example: Dispersion of a Sum of Random Variables — Hammer Lengths — Heads and Handles", + "text": "12.12 Example: Dispersion of a Sum of Random Variables — Hammer Lengths — Heads and Handles\nThe distribution of lengths for hammer handles is as follows: 20 percent are 10 inches long, 30 percent are 10.1 inches, 30 percent are 10.2 inches, and 20 percent are 10.3 inches long. The distribution of lengths for hammer heads is as follows: 2.0 inches, 20 percent; 2.1 inches, 20 percent; 2.2 inches, 30 percent; 2.3 inches, 20 percent; 2.4 inches, 10 percent.\nIf you draw a handle and a head at random, what will be the mean total length? In Chapter 9 we saw that the conventional formulaic method gives the answer with a formula saying that the mean of the sums is the sum of the means, but it is easy to get the answer with simulation. But now we ask about the dispersion of the sum. There are formulaic rules for such measures as the variance. But consider this other example: What proportion of the hammers made with handles and heads drawn at random will have lengths equal to or greater than 12.4 inches? No simple formula will provide an answer. And if the number of categories is increased considerably, any formulaic approach will become burdensome if not undoable. But Monte Carlo simulation produces an answer quickly and easily, as follows:\n\nFill a bucket with:\n\n2 balls marked “10” (inches),\n3 balls marked “10.1”,\n3 marked “10.2”, and\n2 marked “10.3”.\n\nThis bucket represents the handles.\nFill another bucket with:\n\n2 balls marked “2.0”,\n2 balls marked “2.1”,\n3 balls marked “2.2”,\n2 balls marked “2.3” and\n1 ball marked “2.4”.\n\nThis bucket represents the heads.\nPick a ball from each of the “handles” and “heads” buckets, calculate the sum, and replace the balls.\nRepeat perhaps 200 times (more when you write a computer program), and calculate the proportion of the sums that are equal to or greater than 12.4 inches.\n\nYou may also want to forego learning the standard “rule,” and simply estimate the mean this way as well. As an exercise, compute the interquartile range — the difference between the 25th and the 75th percentiles."
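The bucket procedure above translates directly into a few lines of R. The following is only a sketch under the stated percentages — the handles and heads vectors below are our own stand-ins for the two buckets — but it estimates the proportion of hammers 12.4 inches or longer, as well as the mean and the interquartile range suggested as an exercise.

# The two buckets, in the proportions given above (tenths of each distribution).
handles <- rep(c(10, 10.1, 10.2, 10.3), c(2, 3, 3, 2))
heads <- rep(c(2.0, 2.1, 2.2, 2.3, 2.4), c(2, 2, 3, 2, 1))

n_trials <- 10000
total_lengths <- numeric(n_trials)

for (i in 1:n_trials) {
  # Draw one handle and one head at random and add their lengths.
  total_lengths[i] <- sample(handles, 1) + sample(heads, 1)
}

# Proportion of hammers 12.4 inches or longer.
message(sum(total_lengths >= 12.4) / n_trials)

# The mean total length — a check on the sum-of-means rule.
message(mean(total_lengths))

# The exercise: the interquartile range of the total lengths.
message(quantile(total_lengths, 0.75) - quantile(total_lengths, 0.25))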
+ }, + { + "objectID": "probability_theory_3.html#example-the-product-of-random-variables-theft-by-employees", + "href": "probability_theory_3.html#example-the-product-of-random-variables-theft-by-employees", + "title": "12  Probability Theory, Part 3", + "section": "12.13 Example: The Product of Random Variables — Theft by Employees", + "text": "12.13 Example: The Product of Random Variables — Theft by Employees\nThe distribution of the number of thefts per month you can expect in your business is as follows:\n\n\n\nNumber\nProbability\n\n\n\n\n0\n0.5\n\n\n1\n0.2\n\n\n2\n0.1\n\n\n3\n0.1\n\n\n4\n0.1\n\n\n\nThe amounts that may be stolen on any theft are as follows:\n\n\n\nAmount\nProbability\n\n\n\n\n$50\n0.4\n\n\n$75\n0.4\n\n\n$100\n0.1\n\n\n$125\n0.1\n\n\n\nThe same procedure as used above to estimate the mean length of hammers — add the lengths of handles and heads — can be used for this problem except that the results of the drawings from each bucket are multiplied rather than added.\nIn this case there is again a simple rule: The mean of the products equals the product of the means. But this rule holds only when the two urns are indeed independent of each other, as they are in this case.\nThe next two problems are a bit harder than the previous ones; you might skip them for now and come back to them a bit later. However, with the Monte Carlo simulation method they are within the grasp of any introductory student who has had just a bit of experience with the method. In contrast, a standard book whose lead author is Frederick Mosteller, as respected a statistician as there is, says of this type of problem: “Naturally, in this book we cannot expect to study such difficult problems in their full generality [that is, show how to solve them, rather than merely state them], but we can lay a foundation for their study.” (Mosteller, Rourke, and Thomas 1961, 5)" + }, + { + "objectID": "probability_theory_3.html#example-flipping-pennies-to-the-end", + "href": "probability_theory_3.html#example-flipping-pennies-to-the-end", + "title": "12  Probability Theory, Part 3", + "section": "12.14 Example: Flipping Pennies to the End", + "text": "12.14 Example: Flipping Pennies to the End\nTwo players, each with a stake of ten pennies, engage in the following game: A coin is tossed, and if it is (say) heads, player A gives player B a penny; if it is tails, player B gives player A a penny. What is the probability that one player will lose his or her entire stake of 10 pennies if they play for 200 tosses?\nThis is a classic problem in probability theory; it has many everyday applications in situations such as inventory management. For example, what is the probability of going out of stock of a given item in a given week if customers and deliveries arrive randomly? It also is a model for many processes in modern particle physics.\nSolution of the penny-matching problem with coins is straightforward. Repeatedly flip a coin and check if one player or the other reaches a zero balance before you reach 200 flips. Or with random numbers:\n\nNumbers “1-5” = head = “+1”; Numbers “6-0” = tail = “-1.”\nProceed down a series of 200 numbers, keeping a running tally of the “+1”’s and the “-1”’s. If the tally reaches “+10” or “-10” on or before the two-hundredth digit, record “yes”; otherwise record “no.”\nRepeat step 2 perhaps 400 or 10000 times, and calculate the proportion of “yeses.” This estimates the probability sought.\n\nThe following R program also solves the problem. 
The heart of the program starts at the line where the program models a coin flip with the statement: c = sample(1:2, size=1) After you study that, go back and notice the inner for loop starting with for (j in 1:200) { that describes the procedure for flipping a coin 200 times. Finally, note how the outer for (i in 1:10000) { loop simulates 10000 games, each game consisting of the 200 coin flips we generated with the inner for loop above.\n\nStart of pennies notebook\n\nDownload notebook\nInteract\n\n\n\nsomeone_won <- numeric(10000)\n\n# Do 10000 trials\nfor (i in 1:10000) {\n\n # Record the number 10: a's stake\n a_stake <- 10\n\n # Same for b\n b_stake <- 10\n\n # An indicator flag that will be set to \"1\" when somebody wins.\n flag <- 0\n\n # Repeat the following steps 200 times.\n # Notice we use \"j\" as the counter variable, to avoid overwriting\n # \"i\", the counter variable for the 10000 trials.\n for (j in 1:200) {\n # Generate the equivalent of a coin flip, letting 1 <- heads,\n # 2 <- tails\n c <- sample(1:2, size=1)\n\n # If it's a heads\n if (c == 1) {\n\n # Add 1 to b's stake\n b_stake <- b_stake + 1\n\n # Subtract 1 from a's stake\n a_stake <- a_stake - 1\n\n # End the \"if\" condition\n }\n\n # If it's a tails\n if (c == 2) {\n\n # Add one to a's stake\n a_stake <- a_stake + 1\n\n # Subtract 1 from b's stake\n b_stake <- b_stake - 1\n\n # End the \"if\" condition\n }\n\n # If a has won\n if (a_stake == 20) {\n\n # Set the indicator flag to 1\n flag <- 1\n }\n\n # If b has won\n if (b_stake == 20) {\n\n # Set the indicator flag to 1\n flag <- 1\n\n }\n\n # End the repeat loop for 200 plays (note that the indicator flag stays\n # at 0 if neither a nor b has won)\n }\n\n # Keep track of whether anybody won.\n someone_won[i] <- flag\n\n # End the 10000 trials\n}\n\n# Find out how often somebody won\nn_wins <- sum(someone_won)\n\n# Convert to a proportion\nprop_wins <- n_wins / 10000\n\n# Print the results\nmessage(prop_wins)\n\n0.8919\n\n\nEnd of pennies notebook\n\nA similar example: Your warehouse starts out with a supply of twelve capacirators. Every three days a new shipment of two capacirators is received. There is a .6 probability that a capacirator will be used each morning, and the same each afternoon. (It is as if a random drawing is made each half-day to see if a capacirator is used; two capacirators may be used in a single day, or one or none). How long will be it, on the average, before the warehouse runs out of stock?" + }, + { + "objectID": "probability_theory_3.html#example-a-drunks-random-walk", + "href": "probability_theory_3.html#example-a-drunks-random-walk", + "title": "12  Probability Theory, Part 3", + "section": "12.15 Example: A Drunk’s Random Walk", + "text": "12.15 Example: A Drunk’s Random Walk\nIf a drunk chooses the direction of each step randomly, will he ever get home? If he can only walk on the road on which he lives, the problem is almost the same as the gambler’s-ruin problem above (“pennies”). But if the drunk can go north-south as well as east-west, the problem becomes a bit different and interesting.\nLooking now at Figure 12.1 — what is the probability of the drunk reaching either his house (at 3 steps east, 2 steps north) or my house (1 west, 4 south) before he finishes taking twelve steps?\nOne way to handle the problem would be to use a four-directional spinner such as is used with a child’s board game, and then keep track of each step on a piece of graph paper. 
The reader may construct a R program as an exercise.\n\n\n\n\n\nFigure 12.1: Drunk random walk" + }, + { + "objectID": "probability_theory_3.html#sec-public-liquor", + "href": "probability_theory_3.html#sec-public-liquor", + "title": "12  Probability Theory, Part 3", + "section": "12.16 Example: public and private liquor pricing", + "text": "12.16 Example: public and private liquor pricing\nLet’s end this chapter with an actual example that will be used again in Chapter 13 when discussing probability in finite universes, and then at great length in the context of statistics in Chapter 24. This example also illustrates the close connection between problems in pure probability and those in statistical inference.\nAs of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned, and 16 “monopoly” states where the state government owns the retail liquor stores. (Some states were omitted for technical reasons.) These were the representative 1961 prices of a fifth of Seagram 7 Crown whiskey in the two sets of states (Table 12.4):\n\n\n\nTable 12.4: Whiskey prices by state category\n\n\n\n\n\n\n\n\nPrivate\nGovernment\n\n\n\n\n\n4.82\n4.65\n\n\n\n5.29\n4.55\n\n\n\n4.89\n4.11\n\n\n\n4.95\n4.15\n\n\n\n4.55\n4.2\n\n\n\n4.9\n4.55\n\n\n\n5.25\n3.8\n\n\n\n5.3\n4.0\n\n\n\n4.29\n4.19\n\n\n\n4.85\n4.75\n\n\n\n4.54\n4.74\n\n\n\n4.75\n4.5\n\n\n\n4.85\n4.1\n\n\n\n4.85\n4.0\n\n\n\n4.5\n5.05\n\n\n\n4.75\n4.2\n\n\n\n4.79\n\n\n\n\n4.85\n\n\n\n\n4.79\n\n\n\n\n4.95\n\n\n\n\n4.95\n\n\n\n\n4.75\n\n\n\n\n5.2\n\n\n\n\n5.1\n\n\n\n\n4.8\n\n\n\n\n4.29\n\n\n\n\n\n\n\n\nCount\n26\n16\n\n\nMean\n4.84\n4.35\n\n\n\n\n\n\n\n\n\n\nFigure 12.2: Whiskey prices by state category\n\n\n\n\nLet us consider that all these states’ prices constitute one single universe (an assumption whose justification will be discussed later). If so, one can ask: If these 42 states constitute a single universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference between the means that was actually observed)?\nThis can be thought of as problem in pure probability because we begin with a known universe and ask how it would behave with random drawings from it. We sample with replacement ; the decision to do so, rather than to sample without replacement (which is the way I had first done it, and for which there may be better justification) will be discussed later. We do so to introduce a “bootstrap”-type procedure (defined later) as follows: Write each of the forty-two observed state prices on a separate card. The shuffled deck simulated a situation in which each state has an equal chance for each price. Repeatedly deal groups of 16 and 26 cards, replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. For each trial, calculate the difference in mean prices.\nThese are the steps systematically:\n\nStep A: Write each of the 42 prices on a card and shuffle.\nSteps B and C (combined in this case): i) Draw cards randomly with replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $.49; if it is as great or greater than $.49, write “yes,” otherwise “no.”\nStep D: Repeat step B-C a hundred or a thousand times. 
Calculate the proportion “yes,” which estimates the probability we seek.\n\nThe probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state-ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices. The following notebook performs the operations described above.\n\nStart of liquor_prices notebook\n\nDownload notebook\nInteract\n\n\n\nfake_diffs <- numeric(10000)\n\npriv <- c(4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54,\n 4.75, 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75,\n 5.20, 5.10, 4.80, 4.29)\n\ngovt <- c(4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00, 4.19, 4.75, 4.74,\n 4.50, 4.10, 4.00, 5.05, 4.20)\n\nactual_diff <- mean(priv) - mean(govt)\n\n# Join the two vectors of data\nboth <- c(priv, govt)\n\n# Repeat 10000 simulation trials\nfor (i in 1:10000) {\n\n # Sample 26 with replacement for private group\n fake_priv <- sample(both, size=26, replace=TRUE)\n\n # Sample 16 with replacement for govt. group\n fake_govt <- sample(both, size=16, replace=TRUE)\n\n # Find the mean of the \"private\" group.\n p <- mean(fake_priv)\n\n # Mean of the \"govt.\" group\n g <- mean(fake_govt)\n\n # Difference in the means\n diff <- p - g\n\n # Keep score of the trials\n fake_diffs[i] <- diff\n}\n\n# Graph of simulation results to compare with the observed result.\nfig_title <- paste('Average price difference (Actual difference = ',\n round(actual_diff * 100),\n 'cents')\nhist(fake_diffs, main=fig_title, xlab='Difference in average prices (cents)')\n\n\n\n\n\n\n\n\nEnd of liquor_prices notebook\n\nThe results shown above — not even one “success” in 10,000 trials — imply that there is only a very small probability that two groups with mean prices as different as were observed would happen by chance if drawn with replacement from the universe of 42 observed prices.\nHere we think of these states as if they came from a non-finite universe, which is one possible interpretation for one particular context. However, in Chapter 13 we will postulate a finite universe, which is appropriate if it is reasonable to consider that these observations constitute the entire universe (aside from those states excluded from the analysis because of data complexities)." + }, + { + "objectID": "probability_theory_3.html#the-general-procedure", + "href": "probability_theory_3.html#the-general-procedure", + "title": "12  Probability Theory, Part 3", + "section": "12.17 The general procedure", + "text": "12.17 The general procedure\nChapter 25 generalizes what we have done in the probability problems above into a general procedure, which will in turn be a subpart of a general procedure for all of resampling.\n\n\n\n\nArbuthnot, John. 1710. “An Argument for Divine Providence, Taken from the Constant Regularity Observ’d in the Births of Both Sexes. By Dr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of the College of Physitians and the Royal Society.” Philosophical Transactions of the Royal Society of London 27 (328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011.\n\n\nMosteller, Frederick, Robert E. K. Rourke, and George Brinton Thomas Jr. 1961. Probability with Statistical Applications. 2nd ed. https://archive.org/details/probabilitywiths0000most." 
+ }, + { + "objectID": "probability_theory_4_finite.html#introduction", + "href": "probability_theory_4_finite.html#introduction", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.1 Introduction", + "text": "13.1 Introduction\nThe examples in Chapter 12 dealt with infinite universes , in which the probability of a given simple event is unaffected by the outcome of the previous simple event. But now we move on to finite universes, situations in which you begin with a given set of objects whose number is not enormous — say, a total of two, or two hundred, or two thousand. If we liken such a situation to a bucket containing balls of different colors each with a number on it, we are interested in the probability of drawing various sets of numbered and colored balls from the bucket on the condition that we do not replace balls after they are drawn.\nIn the cases addressed in this chapter, it is important to remember that the single events no longer are independent of each other. A typical situation in which sampling without replacement occurs is when items are chosen from a finite universe — for example, when children are selected randomly from a classroom. If the class has five boys and five girls, and if you were to choose three girls in a row, then the chance of selecting a fourth girl on the next choice obviously is lower than the chance that you would pick a girl on the first selection.\nThe key to dealing with this type of problem is the same as with earlier problems: You must choose a simulation procedure that produces simple events having the same probabilities as the simple events in the actual problem involving sampling without replacement. That is, you must make sure that your simulation does not allow duplication of events that have already occurred. The easiest way to sample without replacement with resampling techniques is by simply ignoring an outcome if it has already occurred.\nExamples Section 13.3.1 through Section 13.3.10 deal with some of the more important sorts of questions one may ask about drawings without replacement from such an urn. To get an overview, I suggest that you read over the summaries (in bold) introducing examples Section 13.3.1 to Section 13.3.10 before beginning to work through the examples themselves.\nThis chapter also revisits the general procedure used in solving problems in probability and statistics with simulation, here in connection with problems involving a finite universe. The steps that one follows in simulating the behavior of a universe of interest are set down in such fashion that one may, by random drawings, deduce the probability of various events. Having had by now the experience of working through the problems in Chapter 9 and Chapter 12, the reader should have a solid basis to follow the description of the general procedure which then helps in dealing with specific problems.\nLet us begin by describing some of the major sorts of problems with the aid of a bucket with six balls." + }, + { + "objectID": "probability_theory_4_finite.html#some-building-block-programs", + "href": "probability_theory_4_finite.html#some-building-block-programs", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.2 Some building-block programs", + "text": "13.2 Some building-block programs\nCase 1. 
Each of six balls is labeled with a number between “1” and “6.” We ask: What is the probability of choosing balls 1, 2, and 3 in that order if we choose three balls without replacement? Figure 13.1 diagrams the events we consider “success.”\n\nFigure 13.1: The Event Classified as “Success” for Case 1\n\nCase 2. We begin with the same bucket as in Case 1, but now ask the probability of choosing balls 1, 2, and 3 in any order if we choose three balls without replacement. Figure 13.2 diagrams two of the events we consider “success.” These possibilities include that which is shown in Figure 13.1 above, plus other possibilities.\n\nFigure 13.2: An Incomplete List of the Events Classified as “Success” for Case 2\n\nCase 3. The odd-numbered balls “1,” “3,” and “5” are painted red and the even-numbered balls “2,” “4,” and “6” are painted black. What is the probability of getting a red ball and then a black ball in that order? Some possibilities are illustrated in Figure 13.3, which includes the possibility shown in Figure 13.1. It also includes some but not all possibilities found in Figure 13.2; for example, Figure 13.2 includes choosing balls 2, 3 and 1 in that order, but Figure 13.3 does not.\n\nFigure 13.3: An Incomplete List of the Events Classified as “Success” for Case 3\n\nCase 4. What is the probability of getting two red balls and one black ball in any order?\n\nFigure 13.4: An Incomplete List of the Events Classified as “Success” for Case 4\n\nCase 5. Various questions about matching may be asked with respect to the six balls. For example, what is the probability of getting ball 1 on the first draw or ball 2 on the second draw or ball 3 on the third draw? (Figure 13.5) Or, what is the probability of getting all balls on the draws corresponding to their numbers?\n\nFigure 13.5: An Incomplete List of the Events Classified as “Success” for Case 5" + }, + { + "objectID": "probability_theory_4_finite.html#problems-in-finite-universes", + "href": "probability_theory_4_finite.html#problems-in-finite-universes", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.3 Problems in finite universes", + "text": "13.3 Problems in finite universes\n\n13.3.1 Example: four girls and one boy\nWhat is the probability of selecting four girls and one boy when selecting five students from a group of twenty-five girls and twenty-five boys? This is an example of sampling without replacement when there are two outcomes and the order does not matter.\nThe important difference between this example and the infinite-universe examples in the prior chapter is that the probability of obtaining a boy or a girl in a single simple event differs from one event to the next in this example, whereas it stays the same when the sampling is with replacement. To illustrate, the probability of a girl is .5 (25 out of 50) when the first student is chosen, but the probability of a girl is either 25/49 or 24/49 when the second student is chosen, depending on whether a boy or a girl was chosen on the first pick. Or after, say, three girls and one boy are picked, the probability of getting a girl on the next choice is (25-3)/(50-4) = 22/46, which is clearly not equal to .5.\nAs always, we must create a satisfactory analog to the process whose probability we want to learn.
In this case, we can use a deck of 50 cards, half red and half black, and deal out five cards without replacing them after each card is dealt; this simulates the choice of five students from among the fifty.\nWe can no longer use our procedure from before. If we designated “1-25” as being girls and “26-50” as being boys and then proceeded to draw random numbers, the probability of a girl would be the same on each pick.\nAt this point, it is important to note that — for this particular problem — we do not need to distinguish between particular girls (or boys). That is, it does not matter which girl (or boy) is selected in a given trial. Nor did we pay attention to the order in which we selected girls or boys. This is an instance of Case 4 discussed above. Subsequent problems will deal with situations where the order of selection, and the particular individuals, do matter.\nOur approach then is to mimic having the class in front of us: an array of 50 strings, half of the entries ‘boy’ and the other half ‘girl’. We then shuffle the class (the array), and choose the first N students (strings).\n\nStep 1. Create a list with 50 labels, half ‘boy’ and half ‘girl’.\nStep 2. Shuffle the class and select five students. Count whether there are four labels equal ‘girl’. If so, write “yes,” otherwise “no”.\nStep 3. Repeat step 2, say, 10,000 times, and count the proportion “yes”, which estimates the probability sought.\n\nThe results of a few experimental trials are shown in Table 13.1.\n\n\nTable 13.1: A few experimental trials of four girls and one boy\n\n\n\n\n\n\n\nExperiment\nStrings Chosen\nSuccess?\n\n\n 1\n‘girl’, ‘boy’, ‘boy’, ‘girl’, ‘boy’\nNo\n\n\n 2\n‘boy’, ‘girl’, ‘girl’, ‘girl’, ‘girl’\nYes\n\n\n 3\n‘girl, ’girl’, ‘girl’, ‘boy’, ‘girl’\nYes\n\n\n\n\nA solution to this problem with R is presented below.\n\nStart of four_girls_one_boy notebook\n\nDownload notebook\nInteract\n\n\n\nN <- 10000\ntrial_results <- numeric(N)\n\n# Constitute the set of 25 girls and 25 boys.\nwhole_class <- rep(c('girl', 'boy'), c(25, 25))\n\n# Repeat the following steps N times.\nfor (i in 1:N) {\n\n # Shuffle the numbers\n shuffled <- sample(whole_class)\n\n # Take the first 5 numbers, call them c.\n c <- shuffled[1:5]\n\n # Count how many girls there are, put the result in d.\n d <- sum(c == 'girl')\n\n # Keep track of each trial result in z.\n trial_results[i] <- d\n\n # End the experiment, go back and repeat until all 1000 trials are\n # complete.\n}\n\n# Count the number of times we got four girls, put the result in k.\nk <- sum(trial_results == 4)\n\n# Convert to a proportion.\nkk <- k / N\n\n# Print the result.\nmessage(kk)\n\n0.1481\n\n\nWe can also find the probabilities of other outcomes from a histogram of trial results obtained with the following command:\n\n# Do histogram, with one bin for each possible number.\nhist(trial_results, breaks=0:max(trial_results), main='# of girls')\n\n\n\n\n\n\n\n\nIn the resulting histogram we can see that in 15 percent of the trials, 4 of the 5 selected were girls.\nIt should be noted that for this problem — as for most other problems — there are several other resampling procedures that will also do the job correctly.\nIn analytic probability theory this problem is worked with a formula for “combinations.”\nEnd of four_girls_one_boy notebook\n\n\n\n13.3.2 Example: Five spades and four clubs in a bridge hand\n\nStart of five_spades_four_clubs notebook\n\nDownload notebook\nInteract\n\n\nThis is an example of multiple-outcome sampling without replacement, order does not 
matter.\nThe problem is similar to the example in Section 13.3.1, except that now there are four equally-likely outcomes instead of only two. An R solution is:\n\n# Constitute the deck of 52 cards.\n# Repeat the suit names 13 times each, to make a 52 card deck.\ndeck <- rep(c('spade', 'club', 'diamond', 'heart'), c(13, 13, 13, 13))\n# Show the deck\ndeck\n\n [1] \"spade\" \"spade\" \"spade\" \"spade\" \"spade\" \"spade\" \"spade\" \n [8] \"spade\" \"spade\" \"spade\" \"spade\" \"spade\" \"spade\" \"club\" \n[15] \"club\" \"club\" \"club\" \"club\" \"club\" \"club\" \"club\" \n[22] \"club\" \"club\" \"club\" \"club\" \"club\" \"diamond\" \"diamond\"\n[29] \"diamond\" \"diamond\" \"diamond\" \"diamond\" \"diamond\" \"diamond\" \"diamond\"\n[36] \"diamond\" \"diamond\" \"diamond\" \"diamond\" \"heart\" \"heart\" \"heart\" \n[43] \"heart\" \"heart\" \"heart\" \"heart\" \"heart\" \"heart\" \"heart\" \n[50] \"heart\" \"heart\" \"heart\" \n\n\n\nN <- 10000\ntrial_results <- numeric(N)\n\n# Repeat the trial N times.\nfor (i in 1:N) {\n\n # Shuffle the deck and draw 13 cards.\n hand <- sample(deck, 13) # replace=FALSE is the default.\n\n # Count the number of spades in \"hand\", put the result in \"n_spades\".\n n_spades <- sum(hand == 'spade')\n\n # If we have five spades, we'll continue on to count the clubs. If we don't\n # have five spades, the number of clubs is irrelevant — we have not gotten\n # the hand we are interested in.\n if (n_spades == 5) {\n # Count the clubs, put the result in \"n_clubs\"\n n_clubs <- sum(hand == 'club')\n # Keep track of the number of clubs in each trial\n trial_results[i] <- n_clubs\n }\n\n # End one experiment, go back and repeat until all N trials are done.\n}\n\n# Count the number of trials where we got 4 clubs. This is the answer we want -\n# the number of hands out of 1000 with 5 spades and 4 clubs. (Recall that we\n# only counted the clubs if the hand already had 5 spades.)\nn_5_and_4 <- sum(trial_results == 4)\n\n# Convert to a proportion.\nprop_5_and_4 <- n_5_and_4 / N\n\n# Print the result\nmessage(prop_5_and_4)\n\n0.022\n\n\nEnd of five_spades_four_clubs notebook\n\n\n\n13.3.3 Example: a total of fifteen points in a bridge hand\n\nStart of fifteen_points_in_bridge notebook\n\nDownload notebook\nInteract\n\n\nLet us assume that ace counts as 4, king = 3, queen = 2, and jack = 1.\n\n# Constitute a deck with 4 jacks (point value 1), 4 queens (value 2), 4\n# kings (value 3), 4 aces (value 4), and 36 other cards with no point\n# value\nwhole_deck <- rep(c(1, 2, 3, 4, 0), c(4, 4, 4, 4, 36))\nwhole_deck\n\n [1] 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n[39] 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n\n\n\nN <- 10000\ntrial_results <- numeric(N)\n\n# Do N trials.\nfor (i in 1:N) {\n # Shuffle the deck of cards and draw 13\n hand <- sample(whole_deck, size=13) # replace=FALSE is default.\n\n # Total the points.\n points <- sum(hand)\n\n # Keep score of the result.\n trial_results[i] <- points\n\n # End one experiment, go back and repeat until all N trials are done.\n}\n\n\n# Produce a histogram of trial results.\nhist(trial_results, breaks=0:max(trial_results), main='Points in bridge hands')\n\n\n\n\n\n\n\n\nFrom this histogram, we see that in about 4 percent of our trials we obtained a total of exactly 15 points. 
We can also compute this directly:\n\n# How many times did we have a hand with fifteen points?\nk <- sum(trial_results == 15)\n\n# Convert to a proportion.\nkk <- k / N\n\n# Show the result.\nkk\n\n[1] 0.0426\n\n\nEnd of fifteen_points_in_bridge notebook\n\n\n\n13.3.4 Example: Four girls then one boy from 25 girls and 25 boys\n\nStart of four_girls_then_one_boy_25 notebook\n\nDownload notebook\nInteract\n\n\nIn this problem, order matters; we are sampling without replacement, with two outcomes, several of each item.\nWhat is the probability of getting an ordered series of four girls and then one boy , from a universe of 25 girls and 25 boys? This illustrates Case 3 above. Clearly we can use the same sampling mechanism as in the example Section 13.3.1, but now we record “yes” for a smaller number of composite events.\nWe record “no” even if a single one boy is chosen but he is chosen 1st, 2nd, 3rd, or 4th, whereas in Section 13.3.1, such outcomes are recorded as “yes”-es.\n\nStep 1. Generate a class (vector) of length 50, consisting of 25 strings valued “boy” and 25 strings valued “girl”.\nStep 2. Shuffle the class array, and select the first five elements.\nStep 3. If the first five elements are exactly 'girl', 'girl', 'girl', 'girl', 'boy', write “yes,” otherwise “no.”\nStep 4. Repeat steps 2 and 3, say, 10,000 times, and count the proportion of “yes” results, which estimates the probability sought.\n\nLet us start the single trial procedure like so:\n\n# Constitute the set of 25 girls and 25 boys.\nwhole_class <- rep(c('girl', 'boy'), c(25, 25))\n\n# Shuffle the class into a random order.\nshuffled <- sample(whole_class)\n# Take the first 5 class members, call them c.\nc <- shuffled[1:5]\n# Show the result.\nc\n\n[1] \"boy\" \"boy\" \"boy\" \"boy\" \"girl\"\n\n\nOur next step (step 3) is to check whether c is exactly equal to the result of interest. The result of interest is:\n\n# The result we are looking for - four girls and then a boy.\nresult_of_interest <- rep(c('girl', 'boy'), c(4, 1 ))\nresult_of_interest\n\n[1] \"girl\" \"girl\" \"girl\" \"girl\" \"boy\" \n\n\nWe can then use a vector comparison with == to do an element by element (elementwise) check, asking whether the corresponding elements are equal:\n\n# A Boolean array, with True where corresponding elements are equal, False\n# otherwise.\nare_equal <- c == result_of_interest\nare_equal\n\n[1] FALSE FALSE FALSE FALSE FALSE\n\n\nWe are nearly finished with step 3 — it only remains to check whether all of the elements were equal, by checking whether all of the values in are_equal are TRUE.\nWe know that there are 5 elements, so we could check whether there are 5 TRUE values with sum:\n\n# Are there exactly 5 TRUE values in `are_equal`?\nsum(are_equal) == 5\n\n[1] FALSE\n\n\nAnother way to ask the same question is by using the all function on are_equal. This returns TRUE if all the elements in are_equal are TRUE, and FALSE otherwise.\n\n\n\n\n\n\nTesting whether all elements of a vector are the same\n\n\n\nThe all, applied to a Boolean vector (as here), checks whether all of the elements in the Boolean vector are TRUE. 
If so, it returns TRUE, otherwise, it returns FALSE.\nFor example:\n\n# All elements are TRUE, `all` returns TRUE\nall(c(TRUE, TRUE, TRUE, TRUE))\n\n[1] TRUE\n\n\n\n# At least one element is FALSE, `all` returns FALSE\nall(c(TRUE, TRUE, FALSE, TRUE))\n\n[1] FALSE\n\n\n\n\nHere is the full procedure for steps 2 and 3 (a single trial):\n\n# Shuffle the class into a random order.\nshuffled <- sample(whole_class)\n# Take the first 5 class members, call them c.\nc <- shuffled[1:5]\n# For each element, test whether the result is the result of interest.\nare_equal <- c == result_of_interest\n# Check whether we have the result we are looking for.\nis_four_girls_then_one_boy <- all(are_equal)\n\nAll that remains is to put the single trial procedure into a loop.\n\nN <- 10000\ntrial_results <- numeric(N)\n\n# Repeat the following steps 1000 times.\nfor (i in 1:N) {\n\n # Shuffle the class into a random order.\n shuffled <- sample(whole_class)\n # Take the first 5 class members, call them c.\n c <- shuffled[1:5]\n # For each element, test whether the result is the result of interest.\n are_equal <- c == result_of_interest\n # Check whether we have the result we are looking for.\n is_four_girls_then_one_boy <- all(are_equal)\n\n # Store the result of this trial.\n trial_results[i] <- is_four_girls_then_one_boy\n\n # End the experiment, go back and repeat until all N trials are\n # complete.\n}\n\n# Count the number of times we got four girls then a boy\nk <- sum(trial_results)\n\n# Convert to a proportion.\nkk <- k / N\n\n# Print the result.\nmessage(kk)\n\n0.0311\n\n\nThis type of problem is conventionally done with a permutation formula.\nEnd of four_girls_then_one_boy_25 notebook\n\n\n\n13.3.5 Example: repeat pairings from random pairing\n\nStart of university_icebreaker notebook\n\nDownload notebook\nInteract\n\n\nFirst put two groups of 10 people into 10 pairs. Then re-randomize the pairings. What is the chance that four or more pairs are the same in the second random pairing? This is a problem in the probability of matching by chance.\nTen representatives each from two universities, Birmingham and Berkeley, attend a meeting. As a social icebreaker, representatives are divided, randomly, into pairs consisting of one person from each university.\nIf they held a second round of the icebreaker, with a new random pairing, what is the chance that four or more pairs will be the same?\nIn approaching this problem, we start at the point where the first icebreaker is complete. We now have to determine what happens after the second round.\n\nStep 1. Let “ace” through “10” of hearts represent the ten representatives from Birmingham University. Let “ace” through “10” of spades be their allocated partners (in round one) from Berkeley.\nStep 2. Shuffle the hearts and deal them out in a row; shuffle the spades and deal in a row just below the hearts.\nStep 3. Count the pairs — a pair is one card from the heart row and one card from the spade row — that contain the same denomination. If 4 or more pairs match, record “yes,” otherwise “no.”\nStep 4. Repeat steps (2) and (3), say, 10,000 times.\nStep 5. Count the proportion “yes.” This estimates the probability of 4 or more pairs.\n\nExercise for the student: Write the steps to do this example with random numbers. 
The R solution follows below.\n\nN <- 10000\ntrial_results <- numeric(N)\n\n# Assign numbers to each student, according to their pair, after the first\n# icebreaker\nbirmingham <- 1:10\nberkeley <- 1:10\n\nfor (i in 1:N) {\n # Randomly shuffle the students from Berkeley\n shuffled_berkeley <- sample(berkeley)\n\n # Randomly shuffle the students from Birmingham\n # (This step is not really necessary — shuffling one array is enough to make the matching random.)\n shuffled_birmingham <- sample(birmingham)\n\n # Count in how many cases people landed with the same person as in the\n # first round, and store in trial_results.\n matches <- sum(shuffled_berkeley == shuffled_birmingham)\n trial_results[i] <- matches\n}\n\n# Count the number of times we got 4 or more people assigned to the same person\nk <- sum(trial_results >= 4)\n\n# Convert to a proportion.\nkk <- k / N\n\n# Print the result.\nmessage(kk)\n\n0.0203\n\n\nWe see that in about 2 percent of the trials did 4 or more couples end up being re-paired with their own partners. This can also be seen from the histogram:\nEnd of university_icebreaker notebook\n\n\n\n13.3.6 Example: Matching Santa Hats\n\nStart of santas_hats notebook\n\nDownload notebook\nInteract\n\n\nThe welcome staff at a restaurant mix up the hats of a party of six Christmas Santas. What is the probability that at least one will get their own hat?.\nAfter a long Christmas day, six Santas meet in the pub to let off steam. However, as luck would have it, their hosts have mixed up their hats. When the hats are returned, what is the chance that at least one Santa will get his own hat back?\nFirst, assign each of the six Santas a number, and place these numbers in an array. Next, shuffle the array (this represents the mixed-up hats) and compare to the original. The rest of the problem is the same as the pairs one from before, except that we are now interested in any trial where at least one (\\(\\ge 1\\)) Santa received the right hat.\n\nN <- 10000\ntrial_results <- numeric(N)\n\n# Assign numbers to each owner\nowners <- 1:6\n\n# Each hat gets the number of their owner\nhats <- 1:6\n\nfor (i in 1:N) {\n # Randomly shuffle the hats and compare to their owners\n shuffled_hats <- sample(hats)\n\n # In how many cases did at least one person get their hat back?\n trial_results[i] <- sum(shuffled_hats == owners) >= 1\n}\n\n# How many times, over all trials, did at least one person get their hat back?\nk <- sum(trial_results)\n\n# Convert to a proportion.\nkk <- k / N\n\n# Print the result.\nprint(kk)\n\n[1] 0.629\n\n\nWe see that in roughly 63 percent of the trials at least one Santa received their own hat back.\nEnd of santas_hats notebook\n\n\n\n13.3.7 Example: Twenty executives assigned to two divisions of a firm\n\nStart of twenty_executives notebook\n\nDownload notebook\nInteract\n\n\nThe top manager wants to spread the talent reasonably evenly, but she does not want to label particular executives with a quality rating and therefore considers distributing them with a random selection. She therefore wonders: What are probabilities of the best ten among the twenty being split among the divisions in the ratios 5 and 5, 4 and 6, 3 and 7, etc., if their names are drawn from a hat? 
One might imagine much the same sort of problem in choosing two teams for a football or baseball contest.\nOne may proceed as follows:\n\nPut 10 balls labeled “W” (for “worst”) and 10 balls labeled “B” (for “best”) in a bucket.\nDraw 10 balls without replacement and count the W’s.\nRepeat (say) 400 times.\nCount the number of times each split — 5 W’s and 5 B’s, 4 and 6, etc. — appears in the results.\n\nThe problem can be done with R as follows:\n\nN <- 10000\ntrial_results <- numeric(N)\n\nmanagers <- rep(c('Worst', 'Best'), c(10, 10))\n\nfor (i in 1:N) {\n  chosen <- sample(managers, 10)  # replace=FALSE is the default.\n  trial_results[i] <- sum(chosen == 'Best')\n}\n\nhist(trial_results, breaks=0:max(trial_results),\n     main='Number of best managers chosen')\n\nEnd of twenty_executives notebook\n\n\n\n13.3.8 Example: Executives Moving\n\nA major retail chain moves its store managers from city to city every three years in order to broaden individuals’ knowledge and experience. To make the procedure seem fair, the new locations are drawn at random. Nevertheless, the movement is not popular with managers’ families. Therefore, to make the system a bit sporting and to give people some hope of remaining in the same location, the chain allows managers to draw in the lottery the same posts they are now in. What are the probabilities that 1, 2, 3 … will get their present posts again if the number of managers is 30?\nThe problem can be solved with the following steps:\n\nNumber a set of green balls from “1” to “30” and put them into Bucket A. Number a set of red balls from “1” to “30” and then put them into Bucket B. For greater concreteness one could use 30 little numbered dolls in Bucket A and 30 little toy houses in Bucket B.\nShuffle Bucket A, and array all its green balls into a row (vector A). Array all the red balls from Bucket B into a second row B just below row A.\nCount how many green balls in row A have the same numbers as the red balls just below them, and record that number on a scoreboard.\nRepeat steps 2 and 3 perhaps 1000 times. Then count in the scoreboard the numbers of “0,” “1,” “2,” “3.”\n\n\n\n13.3.9 Example: State Liquor Systems Again\nLet’s end this chapter with the example of state liquor systems that we first examined in Chapter 12 and which will be discussed again later in the context of problems in statistics.\nRemember that as of 1963, there were 26 U.S. states in whose liquor systems the retail liquor stores are privately owned (“Private”), and 16 monopoly states where the state government owns the retail liquor stores (“Government”). See Table 12.4 for the prices in the Private and Government states.\nWe found the average prices were:\n\nPrivate: $4.84;\nGovernment: $4.35;\nDifference (Private - Government): $0.49.\n\nLet us now consider that all these states’ prices constitute one single finite universe. We ask: If these 42 states constitute a universe, and if they are all shuffled together, how likely is it that if one divides them into two samples at random (sampling without replacement), containing 16 and 26 observations respectively, the difference in mean prices turns out to be as great as $0.49 (the difference that was actually observed)?\nAgain we write each of the forty-two observed state prices on a separate card. The shuffled deck simulates a situation in which each state has an equal chance for each price.
Repeatedly deal groups of 16 and 26 cards, without replacing the cards as they are chosen, to simulate hypothetical monopoly-state and private-state samples. In each trial calculate the difference in mean prices.\nThe steps more systematically:\n\nStep A. Write each of the 42 prices on a card and shuffle.\nSteps B and C (combined in this case). i) Draw cards randomly without replacement into groups of 16 and 26 cards. Then ii) calculate the mean price difference between the groups, and iii) compare the simulation-trial difference to the observed mean difference of $4.84 - $4.35 = $0.49; if it is as great or greater than $0.49, write “yes,” otherwise “no.”\nStep D. Repeat steps B and C a hundred or a thousand times. Calculate the proportion “yes,” which estimates the probability we seek.\n\nThe probability that the postulated universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state-ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices.\nPlease notice that the only difference between this treatment of the problem and the treatment in Chapter 12 is that the drawing in this case is without replacement, whereas in Chapter 12 the drawing is with replacement.\nIn Chapter 12 we thought of these states as if they came from a non-finite universe, which is one possible interpretation in one context. But one can also reasonably think about them in another context — as if they constitute the entire universe (aside from those states excluded from the analysis because of data complexities). If so, one can ask: If these 42 states constitute a universe, how likely is it that one would choose two samples at random, containing 16 and 26 observations, that would have prices as different as $.49 (the difference that was actually observed)?\n\n\n13.3.10 Example: Five or More Spades in One Bridge Hand; Four Girls and a Boy\n\nStart of five_spades_four_girls notebook\n\nDownload notebook\nInteract\n\n\nThis is a compound problem: what are the chances of both five or more spades in one bridge hand, and four girls and a boy in a five-child family?\n“Compound” does not necessarily mean “complicated”. It means that the problem is a compound of two or more simpler problems.\nA natural way to handle such a compound problem is in stages, as we saw in the archery problem of Section 12.10. If a “success” is achieved in the first stage, go on to the second stage; if not, don’t go on. More specifically in this example:\n\nStep 1. Use a bridge card deck, and five coins with heads = “girl”.\nStep 2. Deal a 13-card bridge hand and count the spades. If fewer than 5 spades, record “no” and end the experimental trial. Otherwise, continue to step 3.\nStep 3. Throw five coins, and count “heads.” If four heads, record “yes,” otherwise record “no.”\nStep 4. Repeat steps 2 and 3 a thousand times.\nStep 5. Compute the proportion of “yes” results in step 3. This estimates the probability sought.\n\nThe R solution to this compound problem is neither long nor difficult. We tackle it almost as if the two parts of the problem were to be dealt with separately. We first determine, in a random bridge hand, whether 5 spades or more are dealt, as was done in the problem in Section 13.3.2. Then, if 5 or more spades are found, we use sample to generate a random family of 5 children.
This means that we need not generate families if 5 or more spades were not dealt to the bridge hand, because a “success” is only recorded if both conditions are met. After we record the number of girls in each sample of 5 children, we need only finish the loop (by } and then use sum to count the number of samples that had 4 girls, storing the result in k. Since we only drew samples of children for those trials in which a bridge hand of 5 spades had already been dealt, k will have the number of trials out of 10000 in which both conditions were met.\n\nN <- 10000\ntrial_results <- numeric(N)\n\n# Deck with 13 spades and 39 other cards\ndeck <- rep(c('spade', 'others'), c(13, 52 - 13))\n\nfor (i in 1:N) {\n # Shuffle deck and draw 13 cards\n hand <- sample(deck, 13) # replace=FALSE is default\n\n n_spades <- sum(hand == 'spade')\n\n if (n_spades >= 5) {\n # Generate a family, zeros for boys, ones for girls\n children <- sample(c('girl', 'boy'), 5, replace=TRUE)\n n_girls <- sum(children == 'girl')\n trial_results[i] <- n_girls\n }\n}\n\nk <- sum(trial_results == 4)\n\nkk <- k / N\n\nprint(kk)\n\n[1] 0.0262\n\n\nHere is an alternative approach to the same problem, but getting the result at the end of the loop, by combining Boolean vectors (see Section 10.5).\n\nN <- 10000\ntrial_spades <- numeric(N)\ntrial_girls <- numeric(N)\n\n# Deck with 13 spades and 39 other cards\ndeck <- rep(c('spade', 'other'), c(13, 39))\n\nfor (i in 1:N) {\n # Shuffle deck and draw 13 cards\n hand <- sample(deck, 13) # replace=FALSE is default\n # Count and store the number of spades.\n n_spades <- sum(hand == 'spade')\n trial_spades[i] <- n_spades\n\n # Generate a family, zeros for boys, ones for girls\n children <- sample(c('girl', 'boy'), 5, replace=TRUE)\n # Count and store the number of girls.\n n_girls <- sum(children == 'girl')\n trial_girls[i] <- n_girls\n}\n\nk <- sum((trial_spades >= 5) & (trial_girls == 4))\n\nkk <- k / N\n\n# Show the result\nmessage(kk)\n\n0.0271\n\n\nEnd of five_spades_four_girls notebook\n\n\n\n\n\n\n\nSpeed and readability\n\n\n\nThe last version is a fraction more expensive, but has the advantage that the condition we are testing for is summarized on one line. However, this would not be a good approach to take if the experiments were not completely unrelated." + }, + { + "objectID": "probability_theory_4_finite.html#summary", + "href": "probability_theory_4_finite.html#summary", + "title": "13  Probability Theory, Part 4: Estimating Probabilities from Finite Universes", + "section": "13.4 Summary", + "text": "13.4 Summary\nThis completes the discussion of problems in probability — that is, problems where we assume that the structure is known. Whereas Chapter 12 dealt with samples drawn from universes considered not finite , this chapter deals with problems drawn from finite universes and therefore you sample without replacement." + }, + { + "objectID": "sampling_variability.html#variability-and-small-samples", + "href": "sampling_variability.html#variability-and-small-samples", + "title": "14  On Variability in Sampling", + "section": "14.1 Variability and small samples", + "text": "14.1 Variability and small samples\nPerhaps the most important idea for sound statistical inference — the section of the book we are now beginning, in contrast to problems in probability, which we have studied in the previous chapters — is recognition of the presence of variability in the results of small samples . 
The fatal error of relying on too-small samples is all too common among economic forecasters, journalists, and others who deal with trends and public opinion. Athletes, sports coaches, sportswriters, and fans too frequently disregard this principle both in their decisions and in their discussion.\nOur intuitions often carry us far astray when the results vary from situation to situation — that is, when there is variability in outcomes — and when we have only a small sample of outcomes to look at.\nTo motivate the discussion, I’ll tell you something that almost no American sports fan will believe: There is no such thing as a slump in baseball batting. That is, a batter often goes an alarming number of at-bats without getting a hit, and everyone — the manager, the sportswriters, and the batter himself — assumes that something has changed, and the probability of the batter getting a hit is now lower than it was before the slump. It is common for the manager to replace the player for a while, and for the player and coaches to change the player’s hitting style so as to remedy the defect. But the chance of a given batter getting a hit is just the same after he has gone many at-bats without a hit as when he has been hitting well. A belief in slumps causes managers to play line-ups which may not be their best.\nBy “slump” I mean that a player’s probability of getting a hit in a given at-bat is lower during a period than during average periods. And when I say there is no such thing as a slump, I mean that the chances of getting a hit after any sequence of at-bats without a hit is not different than the long-run average.\nThe “hot hand” in basketball is another illusion. In practical terms, the hot hand does not exist — or rather — if it does, the effect is weak.1 The chance of a shooter scoring is more or less the same after they have just missed a flock of shots as when they have just sunk a long string. That is, the chance of scoring a basket is not appreciably higher after a run of successes than after a run of failures. But even professional teams choose plays on the basis of who supposedly has a hot hand.\nManagers who substitute for the “slumping” or “cold-handed” players with other players who, in the long run, have lower batting averages, or set up plays for the shooter who supposedly has a hot hand, make a mistake. The supposed hot hand in basketball, and the slump in baseball, are illusions because the observed long runs of outs, or of baskets, are statistical artifacts, due to ordinary random variability. The identification of slumps and hot hands is superstitious behavior, classic cases of the assignment of pattern to a series of events when there really is no pattern.\nHow do statisticians ascertain that slumps and hot hands are very weak effects, or do not exist? In brief, in baseball we simulate a hitter with a given average — say .250 — and compare the results with actual hitters of that average, to see whether they have “slumps” longer than the computer. The method of investigation is roughly as follows. You program a computer or other machine to behave the way a player would, given the player’s long-run average, on the assumption that each trial is a random drawing. For example, if a player has a .250 season-long batting average, the machine is programmed like a bucket containing three black balls and one white ball. 
Then for each simulated at bat, the machine shuffles the “balls” and draws one; it then records whether the result is black or white, after which the ball is replaced in the bucket. To study a season with four hundred at-bats, a simulated ball is drawn four hundred times.\nThe records of the player’s real season and the simulated season are then compared. If there really is such a thing as a non-random slump or streak, there will be fewer but longer “runs” of hits or outs in the real record than in the simulated record. On the other hand, if performance is independent from at-bat trial to at-bat trial, the actual record will change from hit to out and from out to hit as often as does the random simulated record. I suggested this sort of test for the existence of slumps in my 1969 book that first set forth the resampling method, a predecessor of this book.\nFor example, Table 14.1 shows the results of one 400 at-bat season for a simulated .250 hitter. (H = hit, O = out, sequential at-bats ordered vertically) Note the “slump” — 1 for 24 — in columns 7 & 8 (in bold).\n\n\nTable 14.1: 400 simulated at-bats (ordered vertically)\n\n\nO\nO\nO\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nH\nO\nO\n\n\nO\nO\nO\nO\nO\nH\nO\nO\nH\nH\nH\nO\nH\nH\nO\nO\n\n\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\nO\nH\nH\nO\nO\n\n\nO\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nH\nO\nO\nO\nH\n\n\nH\nO\nH\nO\nO\nH\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\n\n\nH\nO\nO\nH\nO\nO\nH\nH\nO\nH\nO\nO\nH\nO\nH\nO\n\n\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\nO\nO\nO\nO\nH\nO\n\n\nO\nO\nH\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nO\nO\n\n\nO\nH\nO\nO\nO\nO\nO\nO\nH\nH\nO\nO\nO\nH\nO\nO\n\n\nO\nH\nH\nO\nO\nO\nO\nH\nO\nH\nO\nO\nH\nO\nH\nO\n\n\nO\nO\nH\nH\nO\nH\nO\nH\nO\nH\nH\nH\nO\nO\nO\nO\n\n\nH\nO\nO\nO\nO\nO\nO\nO\nO\nH\nO\nH\nH\nO\nO\nO\n\n\nO\nH\nO\nO\nO\nH\nO\nO\nO\nO\nO\nO\nO\nO\nH\nH\n\n\nH\nO\nH\nO\nO\nO\nH\nO\nO\nO\nO\nH\nH\nO\nO\nH\n\n\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nH\nH\nH\nH\nO\n\n\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nH\nO\nO\nO\nO\n\n\nH\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\nO\n\n\nO\nH\nH\nH\nO\nO\nO\nH\nO\nH\nO\nO\nO\nO\nO\nO\n\n\nO\nH\nO\nH\nO\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\n\n\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nH\nO\nH\nO\nO\nH\n\n\nO\nH\nO\nO\nH\nO\nO\nO\nO\nO\nH\nO\nO\nO\nO\nO\n\n\nH\nH\nH\nO\nO\nO\nO\nH\nO\nO\nO\nO\nH\nO\nO\nH\n\n\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nO\nO\nO\nO\nH\nO\n\n\nO\nH\nO\nO\nO\nO\nO\nH\nH\nO\nO\nO\nO\nO\nO\nH\n\n\nO\nO\nO\nO\nO\nH\nO\nO\nO\nH\nO\nH\nO\nH\nO\nO\n\n\n\n\nHarry Roberts investigated the batting records of a sample of major leaguers.2 He compared players’ season-long records against the behavior of random-number drawings. If slumps existed rather than being a fiction of the imagination, the real players’ records would shift from a string of hits to a string of outs less frequently than would the random-number sequences. But in fact the number of shifts, and the average lengths of strings of hits and outs, are on average the same for players as for player-simulating random-number devices.\nOver long periods, averages may vary systematically, as Ty Cobb’s annual batting averages varied non-randomly from season to season, Roberts found. But in the short run, most individual and team performances have shown results similar to the outcomes that a lottery-type random number machine would produce.\nThomas Gilovich, Robert Vallone and Amos Twersky (1985) performed a similar study of basketball shooting. 
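Before turning to the details of the basketball studies, here is a minimal sketch in R of the simulated season and the run-counting just described. The code is ours, for illustration only; the variable names and the use of rle to measure run lengths are not part of the original notebooks.

# Simulate one 400 at-bat season for a .250 hitter, as in Table 14.1.
at_bats <- sample(c('H', 'O'), size=400, replace=TRUE, prob=c(0.25, 0.75))
# Count how often the record shifts between hit and out, as in Roberts' test.
n_shifts <- sum(at_bats[2:400] != at_bats[1:399])
# Find the longest run of outs (the longest apparent "slump").
run_lengths <- rle(at_bats)
longest_slump <- max(run_lengths$lengths[run_lengths$values == 'O'])
message('Number of shifts between hit and out: ', n_shifts)
message('Longest run of outs: ', longest_slump)

Each run of the sketch gives a different season; long hitless stretches like the 1 for 24 in Table 14.1 turn up regularly, even though every at-bat has exactly the same one-in-four chance of a hit.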
They examined the records of shots from the floor by the Philadelphia 76’ers, foul shots by the Boston Celtics, and a shooting experiment of Cornell University teams. They found that “basketball players and fans alike tend to believe that a player’s chance of hitting a shot are greater following a hit than following a miss on the previous shot. However, detailed analyses…provided no evidence for a positive correlation between the outcomes of successive shots.”\nTo put their conclusion differently, knowing whether a shooter has scored or not scored on the previous shot — or in any previous sequence of shots — is of absolutely no use in predicting whether the shooter will or will not score on the next shot. Similarly, knowledge of the past series of at-bats in baseball does not improve a prediction of whether a batter will get a hit this time.\nOf course a batter feels — and intensely — as if she or he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, both sandlot players and professionals feel confident that this time will be a hit, too. And after you have hit a bunch of baskets from all over the court, you feel as if you can’t miss.\nBut notice that card players get the same poignant feeling of being “hot” or “cold,” too. After a poker player “fills” several straights and flushes in a row, s/he feels s/he will hit the next one too. (Of course there are some players who feel just the opposite, that the “law of averages” is about to catch up with them.)\nYou will agree, I’m sure, that the cards don’t have any memory, and a player’s chance of filling a straight or flush remains the same no matter how he or she has done in the last series of hands. Clearly, then, a person can have a strong feeling that something is about to happen even when that feeling has no foundation. This supports the idea that even though a player in sports “feels” that s/he is in a slump or has a hot hand, this does not imply that the feeling has any basis in reality.\nWhy, when a batter is low in his/her mind because s/he has been making a lot of outs or for personal reasons, does her/ his batting not suffer? And why the opposite? Apparently at any given moment there are many influences operating upon a player’s performance in a variety of directions, with none of them clearly dominant. Hence there is no simple convincing explanation why a player gets a hit or an out, a basket or a miss, on any given attempt.\nBut though science cannot provide an explanation, the sports commentators always are ready to offer their analyses. Listen, for example, to how they tell you that Joe Zilch must have been trying extra hard just because of his slump. There is a sportswriter’s explanation for anything that happens.\nWhy do we believe the nonsense we hear about “momentum,” “comeback,” “she’s due this time,” and so on? The adult of the human species has a powerful propensity to believe that he or she can find a pattern even when there is no pattern to be found. Two decades ago I cooked up series of numbers with a random-number machine that looked as if they were prices on the stock market. Subjects in the experiment were told to buy and sell whichever stocks they chose. Then I gave them “another day’s prices,” and asked them to buy and sell again. The subjects did all kinds of fancy figuring, using an incredible variety of assumptions — even though there was no way for the figuring to help them. 
That is, people sought patterns even though there was no reason to believe that there were any patterns to be found.\nWhen I stopped the game before the ten buy-and-sell sessions the participants expected, people asked that the game continue. Then I would tell them that there was no basis for any patterns in the data. “Winning” or “losing” had no meaning. But the subjects demanded to continue anyway. They continued believing that they could find patterns even after I told them that the numbers were randomly looked up and not real stock prices.\nThe illusions in our thinking about sports have important counterparts in our thinking about such real-world phenomena as the climate, the stock market, and trends in the prices of raw materials such as mercury, copper and wheat. And private and public decisions made on the basis of faulty understanding of these real situations, caused by illusory thinking on the order of belief in slumps and hot hands, are often costly and sometimes disastrous.\nAn example of the belief that there are patterns when there are none: Systems for finding patterns in the stock market are peddled that have about the same reliability as advice from a racetrack tout — and millions buy them.\nOne of the scientific strands leading into research on variability was the body of studies that considers the behavior of stock prices as a “random walk.” That body of work asserts that a stock broker or chartist who claims to be able to find patterns in past price movements of stocks that will predict future movements should be listened to with about the same credulity as a racetrack tout or an astrologer. A second strand was the work in psychology in the last decade or two which has recognized that people’s estimates of uncertain events are systematically biased in a variety of interesting and knowable ways.\nThe U.S. government has made — and continues to make — blunders costing the public scores of billions of dollars, using slump-type fallacious reasoning about resources and energy. Forecasts are issued and policies are adopted based on the belief that a short-term increase in price constitutes a long-term trend. But the “experts” employed by the government to make such forecasts do no better on average than do private forecasters, and often the system of forecasting that they use is much more misleading than would be a random-number generating machine of the sort used in the baseball slump experiments.\nPlease look at the data in Figure 14.1 for the height of the Nile River over about half a century. Is it not natural to think that those data show a decline in the height of the river? One can imagine that if our modern communication technology existed then, the Cairo newspapers would have been calling for research to be done on the fall of the Nile, and the television anchors would have been warning the people to change their ways and use less water.\n\n\n\n\n\nFigure 14.1: Height of the Nile River Over Half of a Century\n\n\n\n\nLet’s look at Figure 14.2 which represents the data over an even longer period. What now would you say about the height of the Nile? Clearly the “threat” was non-existent, and only appeared threatening because the time span represented by the data was too short. The point of this display is that looking at too-short a segment of experience frequently leads us into error. And “too short” may be as long as a century.\n\n\n\nFigure 14.2: Variations in the height of Nile Flood in centimeters. 
The sloping line indicates the secular raising of the bed of the Nile by deposition of silt. From Brooks (1928)\n\n\nAnother example is the price of mercury, which is representative of all metals. Figure 14.3 shows a forecast made in 1976 by natural-scientist Earl Cook (1976). He combined a then-recent upturn in prices with the notion that there is a finite amount of mercury on the earth’s surface, plus the mathematical charm of plotting a second-degree polynomial with the computer. Figure 14.4 and Figure 14.5 show how the forecast was almost immediately falsified, and the price continued its long-run decline.\n\n\n\nFigure 14.3: The Price of Mercury from Cook (1976)\n\n\n\n\n\n\n\nFigure 14.4: Mercury Reserves, 1950-1990\n\n\n\n\n\n\n\n\n\nFigure 14.5: Mercury Price Indexes, 1950-1990\n\n\n\n\nLack of sound statistical intuition about variability can lead to manipulation of the public being by unscrupulous persons. Commodity funds sellers use a device of this sort to make their results look good (The Washington Post, Sep 28, 1987, p. 71). Some individual commodity traders inevitably do well in their private trading, just by chance. A firm then hires one of them, builds a public fund around him, and claims the private record for the fund’s own history. But of course the private record has no predictive power, any more than does the record of someone who happened to get ten heads in a row flipping coins.\nHow can we avoid falling into such traps? It is best to look at the longest possible sweep of history. That is, use the largest possible sample of observations to avoid sampling error. For copper we have data going back to the 18th century B.C. In Babylonia, over a period of 1000 years, the price of iron fell to one fifth of what it was under Hammurabi (almost 4000 years ago), and the price of copper then cost about a thousand times its current price in the U.S., relative to wages. So the inevitable short-run increases in price should be considered in this long-run context to avoid drawing unsound conclusions due to small-sample variability.\nProof that it is sound judgment to rely on the longest possible series is given by the accuracy of predictions one would have made in the past. In the context of copper, mercury, and other raw materials, we can refer to a sample of years in the past, and from those years imagine ourselves forecasting the following year. If you had bet every time that prices would go down in consonance with the long-run trend, you would have been a big winner on average." + }, + { + "objectID": "sampling_variability.html#regression-to-the-mean", + "href": "sampling_variability.html#regression-to-the-mean", + "title": "14  On Variability in Sampling", + "section": "14.2 Regression to the mean", + "text": "14.2 Regression to the mean\n\nUP, DOWN “The Dodgers demoted last year’s NL rookie of the year, OF Todd Hollandsworth (.237, 1 HR, 18 RBI) to AAA Albuquerque...” (Item in Washington Post , 6/14/97)\n\nIt is a well-known fact that the Rookie of the Year in a sport such as baseball seldom has as outstanding a season in their sophomore year. Why is this so? Let’s use the knowledge we have acquired of probability and simulation to explain this phenomenon.\nThe matter at hand might be thought of as a problem in pure probability — if one simply asks about the chance that a given player (the Rookie of the Year) will repeat. Or it could be considered a problem in statistics, as discussed in coming chapters. 
Let’s consider the matter in the context of baseball.\nImagine 10 mechanical “ball players,” each a machine that has three white balls (hits) and 7 black balls. Every time the machine goes to bat, you take a ball out of the machine, look to see if it is a hit or an out, and put it back. For each “ball player” you do this 100 times. One of them is going to do better than the others, and that one becomes the Rookie of the Year. See Table 14.2.\n\n\nTable 14.2: Rookie Seasons (100 at bats)\n\n\n# of Hits\nBatting Average\n\n\n\n\n32\n.320\n\n\n34\n.340\n\n\n33\n.330\n\n\n30\n.300\n\n\n35\n.350\n\n\n33\n.330\n\n\n30\n.300\n\n\n31\n.310\n\n\n28\n.280\n\n\n25\n.250\n\n\n\n\nWould you now expect that the player who happened to be the best among the top ten in the first year to again be the best among the top ten in the next year, also? The sports writers do. But of course this seldom happens. The Rookie of the Year in major-league baseball seldom has as outstanding a season in their sophomore year as in their rookie year. You can expect them to do better than the average of all sophomores, but not necessarily better than all of the rest of the group of talented players who are now sophomores. (Please notice that we are not saying that there is no long-run difference among the top ten rookies. But suppose there is. Table 14.3 shows the season’s performance for ten batters of differing performances).\n\n\nTable 14.3: Simulated season’s performance for 10 batters of differing “true” averages\n\n\n“True”\nRookie\n\n\n\n\n.270\n.340\n\n\n.270\n.240\n\n\n.280\n.330\n\n\n.280\n.300\n\n\n.300\n.280\n\n\n.300\n.420\n\n\n.320\n.340\n\n\n.320\n.350\n\n\n.330\n.260\n\n\n.330\n.330\n\n\n\n\nWe see from Table 14.3 that we have ten batters whose “true” batting averages range from .270 to .330. Their rookie year performance (400 at bats), simulated on the basis of their “true”average is on the right. Which one is the rookie of the year? It’s #6, who hit .420 during the rookie session. Will they do as well next year? Not likely — their “true” average is only .300.\n\nStart of sampling_variability notebook\n\nDownload notebook\nInteract\n\n\nTry generating some rookie “seasons” yourself with the following commands, ranging the batter’s “true” performance by changing the value of p_hit (the probability of a hit).\n\n# Simulate a rookie season of 400 at-bats.\n\n# You might try changing the value below and rerunning.\n# This is the true (long-run) probability of a hit for this batter.\np_hit <- 0.4\nmessage('True average is: ', p_hit)\n\nTrue average is: 0.4\n\n# We resample _with_ replacement here; the chances of a hit do not change\n# From at-bat to at-bat.\nat_bats <- sample(c('Hit', 'Out'), prob=c(p_hit, 1 - p_hit), size=400, replace=TRUE)\nsimulated_average <- sum(at_bats == 'Hit') / 400\n# Show the result\nmessage('Simulated average is: ', simulated_average)\n\nSimulated average is: 0.445\n\n\nSimulate a set of 10 or 20 such rookie seasons, and look at the one who did best. How did their rookie season compare to their “true” average?\nEnd of sampling_variability notebook\n\nThe explanation is the presence of variability . And lack of recognition of the role of variability is at the heart of much fallacious reasoning. Being alert to the role of variability is crucial.\nOr consider the example of having a superb meal at a restaurant — the best meal you have ever eaten. 
That fantastic meal is almost surely the combination of the restaurant being better than average, plus a lucky night for the chef and the dish you ordered. The next time you return you can expect a meal better than average, because the restaurant is better than average in the long run. But the meal probably will be less good than the superb one you had the first time, because there is no reason to believe that the chef will get so lucky again and that the same sort of variability will happen this time.\nThese examples illustrate the concept of “regression to the mean” — a confusingly-titled and very subtle effect caused by variability in results among successive samples drawn from the same population. This phenomenon was given its title more than a century ago by Francis Galton, one of the great founders of modern statistics, when at first he thought that the height of the human species was becoming more uniform, after he noticed that the children of the tallest and shortest parents usually are closer to the average of all people than their parents are. But later he discovered his fallacy — that the variability in heights of children of quite short and quite tall parents also causes some people to be even more exceptionally tall or short than their parents. So the spread in heights among humans remains much the same from generation to generation; there is no “regression to the mean.” The heart of the matter is that any exceptional observed case in a group is likely to be the result of two forces — a) an underlying propensity to differ from the average in one direction or the other, plus b) some chance sampling variability that happens (in the observed case) to push even further in the exceptional direction.\nA similar phenomenon arises in direct-mail marketing. When a firm tests many small samples of many lists of names and then focuses its mass mailings on the lists that performed best in the tests, the full list “rollouts” usually do not perform as well as the samples did in the initial tests. It took many years before mail-order experts (see especially (Burnett 1988)) finally understood that regression to the mean inevitably causes an important part of the dropoff from sample to rollout observed in the set of lists that give the very best results in a multi-list test.\nThe larger the test samples, the less the dropoff, of course, because larger samples reduce variability in results. But larger samples risk more money. So the test-sample-size decision for the marketer inevitably is a trade-off between accuracy and cost.\nAnd one last amusing example: After I (JLS) lectured to the class on this material, the student who had gotten the best grade on the first mid-term exam came up after class and said: “Does that mean that on the second mid-term I should expect to do well but not the best in the class?” And that’s exactly what happened: He had the second-best score in the class on the next midterm.\nA related problem arises when one conducts multiple tests, as when testing thousands of drugs for therapeutic value. Some of the drugs may appear to have a therapeutic effect just by chance. We will discuss this problem later when discussing hypothesis testing." + }, + { + "objectID": "sampling_variability.html#summary-and-conclusion", + "href": "sampling_variability.html#summary-and-conclusion", + "title": "14  On Variability in Sampling", + "section": "14.3 Summary and conclusion", + "text": "14.3 Summary and conclusion\nThe heart of statistics is clear thinking. 
One of the key elements in being a clear thinker is to have a sound gut understanding of statistical processes and variability. This chapter amplifies this point.\nA great benefit to using simulations rather than formulas to deal with problems in probability and statistics is that the presence and importance of variability becomes manifest in the course of the simulation work.\n\n\n\n\nBrooks, Charles Ernest Pelham. 1928. “Periodicities in the Nile Floods.” Memoirs of the Royal Meteorological Society 2 (12): 9--26. https://www.rmets.org/sites/default/files/papers/brooksmem2-12.pdf.\n\n\nBurnett, Ed. 1988. The Complete Direct Mail List Handbook: Everything You Need to Know about Lists and How to Use Them for Greater Profit. Englewood Cliffs, New Jersey: Prentice Hall. https://archive.org/details/completedirectma00burn.\n\n\nCook, Earl. 1976. “Limits to Exploitation of Nonrenewable Resources.” Science 191 (4228): 677–82. https://www.jstor.org/stable/pdf/1741483.pdf.\n\n\nGilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot Hand in Basketball: On the Misperception of Random Sequences.” Cognitive Psychology 17 (3): 295–314. https://www.joelvelasco.net/teaching/122/Gilo.Vallone.Tversky.pdf." + }, + { + "objectID": "monte_carlo.html#a-definition-and-general-procedure-for-monte-carlo-simulation", + "href": "monte_carlo.html#a-definition-and-general-procedure-for-monte-carlo-simulation", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.1 A definition and general procedure for Monte Carlo simulation", + "text": "15.1 A definition and general procedure for Monte Carlo simulation\nThis is what we shall mean by the term Monte Carlo simulation when discussing problems in probability: Using the given data-generating mechanism (such as a coin or die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples . That’s it in a nutshell. In some cases, it may also be appropriate to amplify this procedure with additional assumptions.\nThis definition fits both problems in pure probability as well as problems in statistics, but in the latter case the process is called resampling . The reason that the same definition fits is that at the core of every problem in inferential statistics lies a problem in probability ; that is, the procedure for handling every statistics problem is the procedure for handling a problem in probability. (There is related discussion of definitions in Chapter 8 and Chapter 20.)\nThe following series of steps should apply to all problems in probability. I’ll first state the procedure straight through without examples, and then show how it applies to individual examples.\n\nStep A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. The term “universe” refers to the system that is relevant for a single simple event.\nStep B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.\nStep C Describe any composite events. 
If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.\nStep D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.\n\nNow let us apply the general procedure to some examples to make it more concrete.\nHere are four problems to be used as illustrations:\n\nThree percent gizmos — if on average 3 percent of the gizmos sent out are defective, what is the chance that there will be more than 10 defectives in a shipment of 200?\nThree girls, 106 in 206 — what are the chances of getting three or more girls in the first four children, if the probability of a female birth is 106/206?\nLess than 20 baskets — what are the chances of Joe Hothand scoring 20 or fewer baskets in 57 shots if his long-run average is 47 percent?\nSame birthday in 25 — what is the probability of two or more people in a group of 25 persons having the same birthday — i. e., the same month and same day of the month?" + }, + { + "objectID": "monte_carlo.html#apply-step-a-construct-a-simulation-universe", + "href": "monte_carlo.html#apply-step-a-construct-a-simulation-universe", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.2 Apply step A — construct a simulation universe", + "text": "15.2 Apply step A — construct a simulation universe\nAs a reminder:\n\nStep A Construct a simulation “universe” of cards or dice or some other randomizing mechanism whose composition is similar to the universe whose behavior we wish to describe and investigate. The term “universe” refers to the system that is relevant for a single simple event.\n\nFor our example problems:\n\nThree percent gizmos: A random drawing with replacement from the set of numbers 1 through 100 with 1 through 3 designated as defective, simulates the system that produces 3 defective gizmos among 100.\nThree girls, 106 in 206: You could take two decks of cards, from which you take out both Aces of spades, and replace these with a Joker. You now have 103 cards (206 / 2), of which 53 (106 / 2) are red, counting the Joker as red. You could also use a random drawing from two sets of numbers, one comprising 1 through 106 and the other 107 through 206. Either universe can simulate the system that produces a single male or female birth, when we are estimating the probability of three girls in the first four children. Notice that in this universe the probability of a girl remains the same from trial event to trial event — that is, the trials are independent — demonstrating a universe from which we sample with replacement.\nLess than 20 baskets: A random drawing with replacement from a bucket containing a hundred balls, 47 red and 53 black, simulates the system that produces 47 percent baskets for Joe Hothand.\nSame birthday in 25: A random drawing with replacement from the numbers 1 through 365 simulates the system that produces a birthday.\n\nThis step A includes two operations:\n\nDecide which symbols will stand for the elements of the universe you will simulate.\nDetermine whether the sampling will be with or without replacement. (This can be ambiguous in a complex modeling situation.)\n\nHard thinking is required in order to determine the appropriate “real” universe whose properties interest you." 
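To make step A more concrete in code, here is a minimal sketch, in the style of the book's R examples, of how two of these universes might be written as vectors. The names are ours and are only for illustration.

# Three percent gizmos: the numbers 1 through 100, where we will treat
# 1 through 3 as standing for a defective gizmo.
gizmo_universe <- 1:100

# Same birthday in 25: the numbers 1 through 365, one for each day of the year.
birthday_universe <- 1:365

# Both universes will be sampled with replacement, for example:
one_gizmo <- sample(gizmo_universe, 1)
one_birthday <- sample(birthday_universe, 1)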
+ }, + { + "objectID": "monte_carlo.html#apply-step-b-specify-the-procedure", + "href": "monte_carlo.html#apply-step-b-specify-the-procedure", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.3 Apply step B — specify the procedure", + "text": "15.3 Apply step B — specify the procedure\n\nStep B Specify the procedure that produces a pseudo-sample which simulates the real-life sample in which we are interested. That is, specify the procedural rules by which the sample is drawn from the simulated universe. These rules must correspond to the behavior of the real universe in which you are interested. To put it another way, the simulation procedure must produce simple experimental events with the same probabilities that the simple events have in the real world.\n\nFor example:\n\nThree percent gizmos: For a single gizmo, you can draw a single number from an infinite universe. Or one can use a finite set with replacement and shuffling.\nThree girls, 106 in 206: In the case of three or more daughters among four children, you could use the deck of 103 cards, from Step A, of which 53 count as red. To simulate one child, you can draw a card and then replace it, noting female for a red card or a Joker. Or if you are using random numbers from the computer, the random numbers automatically simulate replacement. Just as the chances of having a boy or a girl do not change depending on the sex of the preceding child, so we want to ensure through sampling with replacement that the chances do not change each time we choose from the deck of cards.\nLess than 20 baskets: In the case of Joe Hothand’s shooting, the procedure is to consider the numbers 1 through 47 as “baskets,” and 48 through 100 as “misses,” with the same other considerations as the gizmos.\nSame birthday in 25: In the case of the birthday problem, the drawing must be with replacement, because the fact that you have drawn — say — a 10 (10th day in year), should not affect the chances of drawing 10 for a second person in the room.\n\nRecording the outcome of the sampling must be indicated as part of this step, e.g., “record ‘yes’ if girl or basket, ‘no’ if a boy or a miss.”" + }, + { + "objectID": "monte_carlo.html#apply-step-c-describe-any-composite-events", + "href": "monte_carlo.html#apply-step-c-describe-any-composite-events", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.4 Apply step C — describe any composite events", + "text": "15.4 Apply step C — describe any composite events\n\nStep C Describe any composite events. If several simple events must be combined into a composite event, and if the composite event was not described in the procedure in step B, describe it now.\n\nFor example:\n\nThree percent gizmos: For the gizmos, draw a sample of 200.\nThree girls, 106 in 206: For the three or more girls among four children, the procedure for each simple event of a single birth was described in step B. Now we must specify repeating the simple event four times, and counting whether the outcome is or is not three or more girls.\nLess than 20 baskets: In the case of Joe Hothand’s shots, we must draw 57 numbers to make up a sample of shots, and examine whether there are 20 or fewer baskets.\n\nRecording the results as “more than 10 defectives” or “10 or fewer,” “three or more girls” or “two or fewer girls,” and “20 or fewer baskets” or “21 or more,” is part of this step. This record indicates the results of all the trials and is the basis for a tabulation of the final result."
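As a preview of how the steps fit together, here is a minimal sketch (ours, not part of the original text) that applies steps A through D to the "same birthday in 25" problem; step D is taken up in general terms in the next section.

N <- 10000
trial_results <- numeric(N)

for (i in 1:N) {
  # Step A: the universe is the numbers 1 through 365, one per day of the year.
  # Step B: draw 25 birthdays, with replacement.
  birthdays <- sample(1:365, 25, replace=TRUE)
  # Step C: the composite event - do two or more people share a birthday?
  trial_results[i] <- any(duplicated(birthdays))
}

# Step D: the proportion of trials in which a birthday was shared.
k <- sum(trial_results)
kk <- k / N
message(kk)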
+ }, + { + "objectID": "monte_carlo.html#apply-step-d-calculate-the-probability", + "href": "monte_carlo.html#apply-step-d-calculate-the-probability", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.5 Apply step D — calculate the probability", + "text": "15.5 Apply step D — calculate the probability\n\nStep D. Calculate the probability of interest from the tabulation of outcomes of the resampling trials.\n\nFor example: the proportions of “yes” and “no,” and “20 or more” and “19 or fewer” estimate the probability we seek in step C.\nThe above procedure is similar to the procedure followed with the analytic formulaic method except that the latter method constructs notation and manipulates it." + }, + { + "objectID": "monte_carlo.html#summary", + "href": "monte_carlo.html#summary", + "title": "15  The Procedures of Monte Carlo Simulation (and Resampling)", + "section": "15.6 Summary", + "text": "15.6 Summary\nThis chapter gives a more general description of the specific steps used in prior chapters to solve problems in probability." + }, + { + "objectID": "standard_scores.html#household-income-and-congressional-districts", + "href": "standard_scores.html#household-income-and-congressional-districts", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.1 Household income and congressional districts", + "text": "16.1 Household income and congressional districts\nDemocratic congresswoman Marcy Kaptur has represented the 9th district of Ohio since 1983. Ohio’s 9th district is relatively working class, and the Democratic party has, traditionally, represented people with lower income. However, Kaptur has pointed out that this pattern appears to be changing; more of the high-income congressional districts now lean Democrat, and the Republican party is now more likely to represent lower-income districts. The French economist Thomas Piketty has described this phenomenon across several Western countries. Voters for left parties are now more likely to be highly educated and wealthy. He terms this shift “Brahmin Left Vs Merchant Right” (Piketty 2018). The data below come from a table Kaptur prepared that shows this pattern in the 2023 US congress. The table lists the top 20 districts by the median income of the households in that district, along with their representatives and their party.2\n\n\n\n\nTable 16.1: 20 most wealthy 2023 Congressional districts by household income\n\n\n\nAscending_Rank\nDistrict\nMedian Income\nRepresentative\nParty\n\n\n\n\n422\n422\nMD-3\n114804\nJ. Sarbanes\nDemocrat\n\n\n423\n423\nMA-5\n115618\nK. Clark\nDemocrat\n\n\n424\n424\nNY-12\n116070\nJ. Nadler\nDemocrat\n\n\n425\n425\nVA-8\n116332\nD. Beyer\nDemocrat\n\n\n426\n426\nMD-5\n117049\nS. Hoyer\nDemocrat\n\n\n427\n427\nNJ-11\n117198\nM. Sherrill\nDemocrat\n\n\n428\n428\nNY-3\n119185\nG. Santos\nRepublican\n\n\n429\n429\nCA-14\n119209\nE. Swalwell\nDemocrat\n\n\n430\n430\nNJ-7\n119567\nT. Kean\nRepublican\n\n\n431\n431\nNY-1\n120031\nN. LaLota\nRepublican\n\n\n432\n432\nWA-1\n120671\nS. DelBene\nDemocrat\n\n\n433\n433\nMD-8\n120948\nJ. Raskin\nDemocrat\n\n\n434\n434\nNY-4\n121979\nA. D’Esposito\nRepublican\n\n\n435\n435\nCA-11\n124456\nN. Pelosi\nDemocrat\n\n\n436\n436\nCA-15\n125855\nK. Mullin\nDemocrat\n\n\n437\n437\nCA-10\n135150\nM. DeSaulnier\nDemocrat\n\n\n438\n438\nVA-11\n139003\nG. Connolly\nDemocrat\n\n\n439\n439\nVA-10\n140815\nJ. Wexton\nDemocrat\n\n\n440\n440\nCA-16\n150720\nA. Eshoo\nDemocrat\n\n\n441\n441\nCA-17\n157049\nR. 
Khanna\nDemocrat\n\n\n\n\n\n\n\n\nYou may notice right away that many of the 20 richest districts have Democratic Party representatives.\nIn fact, if we look at all 441 congressional districts in Kaptur’s table, we find a large difference in the average median household income for Democrat and Republican districts; the Democrat districts are, on average, about 14% richer (Table 16.2).\n\n\n\n\nTable 16.2: Means for median household income by party\n\n\n\nMean of median household income\n\n\n\n\nDemocrat\n$76,933\n\n\nRepublican\n$67,474\n\n\n\n\n\n\n\n\nNext we are going to tip our hand, and show how we got these data. In previous chapters, we had chunks like this in which we enter the values we will analyze. These values come from the example we introduced in Section 12.16:\n\n# Liquor prices for US states with private market.\npriv <- c(4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54,\n 4.75, 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75,\n 5.20, 5.10, 4.80, 4.29)\n\nNow we have 441 values to enter, and it is time to introduce R’s standard tools for loading data.\n\n16.1.1 Comma-separated-values (CSV) format\nThe data we will load is in a file on disk called data/congress_2023.csv. These are data from Kaptur’s table in a comma-separated-values (CSV) format file. We refer to this file with its filename, containing the directory (data/) followed by the name of the file (congress_2023.csv), giving a filename of data/congress_2023.csv.\nThe CSV format is a very simple text format for storing table data. Usually, the first line of the CSV file contains the column names of the table, and the rest of the lines contain the row values. As the name suggests, commas (,) separate the column names in the first line, and the row values in the following lines. If you opened the data/congress_2023.csv file in some editor, such as Notepad on Windows or TextEdit on Mac, you would find that the first few lines looked like this:\n\nAscending_Rank,District,Median_Income,Representative,Party\n1,PR-At Large,22237,J. González-Colón,Republican\n2,AS-At Large,28352,A. Coleman,Republican\n3,MP-At Large,31362,G. Sablan,Democrat\n4,KY-5,37910,H. Rogers,Republican\n5,MS-2,37933,B. G. Thompson,Democrat\n\nWe are particularly interested in the column named Median_Income.\nYou may remember the idea of indexing, introduced in Section 7.6. Indexing occurs when we fetch data from within a container, such as a string or a vector. We do this by putting square brackets [] after the value we want to index into, and put something inside the brackets to say what we want.\nFor example, to get the first element of the priv vector above, we would use indexing, with priv[1].\nWe will use a similar idea to fetch the Median_Income column, once we have loaded the table into R’s own table structure, the data frame, described in the next section.\n\n\n16.1.2 Introducing R data frames\nR is a data analysis language, so, as you would expect, it is particularly good at loading data files, and presenting them to us as a useful table-like structure, called a data frame.\nWe start by using R to load our data file.
R has a special function to do this, called read.csv.\n\ndistrict_income <- read.csv('data/congress_2023.csv')\n\nWe have thus far done many operations that returned R vectors. read.csv returns a new type of value, called a data frame:\n\nclass(district_income)\n\n[1] \"data.frame\"\n\n\nA data frame is R’s own way of representing a table, with columns and rows. You can think of it as R’s version of a spreadsheet. Data frames are a fundamental type in R, and there are many functions that operate on them. Among them is the function head which selects (by default) the first six rows of whatever you send it. Here we select the first six rows of the data frame.\n\n# Show the first six rows in the data frame\nhead(district_income)\n\n Ascending_Rank District Median_Income Representative Party\n1 1 PR-At Large 22237 J. González-Colón Republican\n2 2 AS-At Large 28352 A. Coleman Republican\n3 3 MP-At Large 31362 G. Sablan Democrat\n4 4 KY-5 37910 H. Rogers Republican\n5 5 MS-2 37933 B. G. Thompson Democrat\n6 6 NY-15 40319 R. Torres Democrat\n\n\nThe data are in income order, sorted lowest to highest, so the first five districts are those with the lowest household income.\nWe are particularly interested in the column named Median_Income.\nYou can fetch columns of data from a data frame by using R’s $ syntax. The $ syntax means “fetch the thing named on the right of the $ attached to the value given to the left of the $”.\nSo, to get the data for the Median_Income column, we can write:\n\n# Use $ syntax to get a column of data from a data frame.\n# \"fetch the Median_Income thing from district_income\".\nincomes = district_income$Median_Income\n# The thing that comes back is our familiar R vector.\n# Show the first five values, by indexing with a slice.\nincomes[1:5]\n\n[1] 22237 28352 31362 37910 37933\n\n\n\n\n16.1.3 Incomes and Ranks\nWe now have the incomes values as a vector.\nThere are 441 values in the whole vector, one of each congressional district:\n\nlength(incomes)\n\n[1] 441\n\n\nWhile we are at it, let us also get the values from the “Ascending_Rank” column, with the same procedure. These are ranks from low to high, meaning 1 is the lowest median income, and 441 is the highest median income.\n\nlo_to_hi_ranks <- district_income$Ascending_Rank\n# Show the first five values, by indexing with a slice.\nlo_to_hi_ranks[1:5]\n\n[1] 1 2 3 4 5\n\n\nIn our case, the data frame has the Ascending_Rank column with the ranks we need, but if we need the ranks and we don’t have them, we can calculate them using the rank function.\n\n\n16.1.4 Calculating ranks\nAs you might expect rank accepts a vector as an input argument. Let’s say that there are n <- length(data) values in the vector that we pass to rank. The function returns a vector, length \\(n\\), where the elements are the ranks of each corresponding element in the input data vector. 
A rank value of 1 corresponds the lowest value in data (closest to negative infinity), and a rank of \\(n\\) corresponds to the highest value (closest to positive infinity).\nHere’s an example data vector to show how rank works.\n\n# The data.\ndata <- c(3, -1, 5, -2)\n# Corresponding ranks for the data.\nrank(data)\n\n[1] 3 2 4 1\n\n\nWe can use rank to recalculate the ranks for the congressional median household income values.\n\n# Recalculate the ranks.\nrecalculated_ranks <- rank(incomes)\n# Show the first 5 ranks.\nrecalculated_ranks[1:5]\n\n[1] 1 2 3 4 5" + }, + { + "objectID": "standard_scores.html#comparing-two-values-in-the-district-income-data", + "href": "standard_scores.html#comparing-two-values-in-the-district-income-data", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.2 Comparing two values in the district income data", + "text": "16.2 Comparing two values in the district income data\nLet us say that we have taken an interest in two particular members of Congress: the Speaker of the House of Representatives, Republican Kevin McCarthy, and the progressive activist and Democrat Alexandria Ocasio-Cortez. We will refer to both using their initials: KM for Kevin Owen McCarthy and AOC for Alexandra Ocasio-Cortez.\nBy scrolling through the CSV file, or (in our case) using some simple R code that we won’t cover now, we find the rows corresponding to McCarthy (KM) and Ocasio-Cortez (AOC) — Table 16.3.\n\n\n\n\nTable 16.3: Rows for Kevin McCarthy and Alexandra Ocasio-Cortez \n\n\nAscending_Rank\nDistrict\nMedian Income\nRepresentative\nParty\n\n\n\n\n81\nNY-14\n56129\nA. Ocasio-Cortez\nDemocrat\n\n\n295\nCA-20\n77205\nK. McCarthy\nRepublican\n\n\n\n\n\n\n\n\nThe rows show the rank of each congressional district in terms of median household income. The districts are ordered by this rank, so we can get their respective indices (positions) in the incomes vector from their rank. \n\n# Rank of McCarthy's district in terms of median household income.\nkm_rank = 295\n# Index (position) of McCarthy's value in the \"incomes\" vector.\n# This is the same as the rank.\nkm_index = km_rank\n\nNow we have the index (position) of KM’s value, we can find the household income for his district from the incomes vector:\n\n# Show the median household income from McCarthy's district\n# by indexing into the \"incomes\" vector:\nkm_income <- incomes[km_index]\nkm_income\n\n[1] 77205\n\n\nHere is the corresponding index and incomes value for AOC:\n\n# Index (position) of AOC's value in the \"incomes\" array.\naoc_rank = 81\naoc_index = aoc_rank\n# Show the median household income from AOC's district\n# by indexing into the \"incomes\" array:\naoc_income <- incomes[aoc_index]\naoc_income\n\n[1] 56129\n\n\nNotice that we fetch the same value for median household income from incomes as you see in the corresponding rows." + }, + { + "objectID": "standard_scores.html#comparing-values-with-ranks-and-quantile-positions", + "href": "standard_scores.html#comparing-values-with-ranks-and-quantile-positions", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.3 Comparing values with ranks and quantile positions", + "text": "16.3 Comparing values with ranks and quantile positions\nWe have KM’s and AOC’s district median household income values, but our next question might be — how unusual are these values?\nOf course, it depends what we mean by unusual. 
We might mean, are they greater or smaller than most of the other values?\nOne way of answering that question is simply looking at the rank of the values. If the rank is lower than \\(\\frac{441}{2} = 220.5\\) then this is a district with lower median income than most districts. If it is greater than \\(220.5\\) then it has higher median income than most districts. We see that KM’s district, with rank 295 is wealthier than most, whereas AOC’s district (rank 81) is poorer than most.\nBut we can’t interpret the ranks without remembering that there are 441 values, so — for example - a rank of 81 represents a relatively low value, whereas one of 295 is relatively high.\nWe would like some scale that tells us immediately whether this is a relatively low or a relatively high value, without having to remembering how many values there are.\nThis is a good use for quantile positions (QPs). The QP of a value tells you where the value ranks relative to the other values, on a scale from \\(0\\) through \\(1\\). A QP of \\(0\\) tells you this is the lowest-ranking value, and a QP of \\(1\\) tells you this is the highest-ranking value.\nWe can calculate the QP for each rank. Think of the low-to-high ranks as being a line starting at 1 (the lowest rank — for the lowest median income) and going up to 441 (the highest rank — for the highest median income).\nThe QP corresponding to any particular rank tells you how far along this line the rank is. Notice that the length of the line is the distance from the first to the last value, so 441 - 1 = 440.\nSo, if the rank was \\(1\\), then the value is at the start of the line. It has got \\(\\frac{0}{440}\\) of the way along the line, and the QP is \\(0\\). If the rank is \\(441\\), the value is at the end of the line, it has got \\(\\frac{440}{440}\\) of the way along the line and the QP is \\(1\\).\nNow consider the rank of \\(100\\). It has got \\(\\frac{(100 - 1)}{440}\\) of the way along the line, and the QP position is 0.22.\nMore generally, we can translate the high-to-low ranks to QPs with:\n\n# Length of the line defining quantile positions.\n# Start of line is rank 1 (quantile position 0).\n# End of line is rank 441 (quantile position 1).\ndistance <- length(lo_to_hi_ranks) - 1 # 440 in our case.\nquantile_positions <- (lo_to_hi_ranks - 1) / distance\n# Show the first five.\nquantile_positions[1:5]\n\n[1] 0.00000 0.00227 0.00455 0.00682 0.00909\n\n\nLet’s plot the ranks and the QPs together on the x-axis:\n\n\n\n\n\n\n\n\n\nThe QPs for KM and AOC tell us where their districts’ incomes are in the ranks, on a 0 to 1 scale:\n\nkm_quantile_position <- quantile_positions[km_index]\nkm_quantile_position\n\n[1] 0.668\n\n\n\naoc_quantile_position <- quantile_positions[aoc_index]\naoc_quantile_position\n\n[1] 0.182\n\n\nIf we multiply the QP by 100, we get the percentile positions — so the percentile position ranges from 0 through 100.\n\n# Percentile positions are just quantile positions * 100\nmessage('KM percentile position: ', km_quantile_position * 100)\n\nKM percentile position: 66.8181818181818\n\nmessage('AOC percentile position: ', aoc_quantile_position * 100)\n\nAOC percentile position: 18.1818181818182\n\n\nNow consider one particular QP: \\(0.5\\). The \\(0.5\\) QP is exactly half-way along the line from rank \\(1\\) to rank \\(441\\). 
In our case this corresponds to rank \\(\\frac{441 - 1}{2} + 1 = 221\\).\n\nmessage('Middle rank: ', lo_to_hi_ranks[221])\n\nMiddle rank: 221\n\nmessage('Quantile position: ', quantile_positions[221])\n\nQuantile position: 0.5\n\n\nThe value corresponding to any particular QP is the quantile value, or just the quantile for short. For a QP of 0.5, the quantile (quantile value) is:\n\n# Quantile value for 0.5\nmessage('Quantile value for QP of 0.5: ', incomes[221])\n\nQuantile value for QP of 0.5: 67407\n\n\nIn fact we can ask R for this value (quantile) directly, using the quantile function:\n\nquantile(incomes, 0.5)\n\n 50% \n67407 \n\n\n\n\n\n\n\n\nquantile and sorting\n\n\n\nIn our case, the incomes data is already sorted from lowest (at position 1 in the vector to highest (at position 441 in the vector). The quantile function does not need the data to be sorted; it does its own internal sorting to do the calculation.\nFor example, we could shuffle incomes into a random order, and still get the same values from quantile.\n\nshuffled_incomes <- sample(incomes)\n# Quantile still gives the same value.\nquantile(incomes, 0.5)\n\n 50% \n67407 \n\n\n\n\nAbove we have the 0.5 quantile — the value corresponding to the QP of 0.5.\nThe 0.5 quantile is an interesting value. By the definition of QP, exactly half of the remaining values (after excluding the 0.5 quantile value) have lower rank, and are therefore less than the 0.5 quantile value. Similarly exactly half of the remaining values are greater than the 0.5 quantile. You may recognize this as the median value. This is such a common quantile value that R has a function median as a shortcut for quantile(data, 0.5).\nAnother interesting QP is 0.25. We find the QP of 0.25 at rank:\n\nqp25_rank <- (441 - 1) * 0.25 + 1\nqp25_rank\n\n[1] 111\n\n\n\nmessage('Rank corresponding to QP 0.25: ', qp25_rank)\n\nRank corresponding to QP 0.25: 111\n\nmessage('0.25 quantile value: ', incomes[qp25_rank])\n\n0.25 quantile value: 58961\n\nmessage('0.25 quantile value using quantile: ', quantile(incomes, 0.25))\n\n0.25 quantile value using quantile: 58961\n\n\n\n\n\n\n\n\n\n\n\nCall the 0.25 quantile value \\(V\\). \\(V\\) is the number such that 25% of the remaining values are less than \\(V\\), and 75% are greater.\nNow let’s think about the 0.01 quantile. We don’t have an income value exactly corresponding to this QP, because there is no rank exactly corresponding to the 0.01 QP.\n\nrank_for_qp001 <- (441 - 1) * 0.01 + 1\nrank_for_qp001\n\n[1] 5.4\n\n\nLet’s have a look at the first 10 values for rank / QP and incomes:\n\n\n\n\n\n\n\n\n\nWhat then, is the quantile value for QP = 0.01? There are various ways to answer that question (Hyndman and Fan 1996), but one obvious way, and the default for R, is to draw a straight line up from the matching rank — or equivalently, down from the QP — then note where that line crosses the lines joining the values to the left and right of the QP on the graph above, and look across to the y-axis for the corresponding value:\n\n\n\n\n\n\n\n\n\n\nquantile(incomes, 0.01)\n\n 1% \n38887 \n\n\nThis is called the linear method — because it uses straight lines joining the points to estimate the quantile value for a QP that does not correspond to a whole-number rank.\n\n\n\n\n\n\nCalculating quantiles using the linear method\n\n\n\nWe gave a graphical explanation of how to calculate the quantile for a QP that does not correspond to whole-number rank in the data. 
A more formal way of getting the value using the numerical equivalent of the graphical method is linear interpolation. Linear interpolation calculates the quantile value as a weighted average of the quantile values for the QPs of the whole number ranks just less than, and just greater than the QP we are interested in. For example, let us return to the QP of \\(0.01\\). Let us remind ourselves of the QPs, whole-number ranks and corresponding values either side of the QP \\(0.01\\):\n\nRanks, QPs and corresponding values around QP of 0.01\n\n\nRank\nQuantile position\nQuantile value\n\n\n\n\n5\n0.0099\n37933\n\n\n5.4\n0.01\nV\n\n\n6\n0.0113\n40319\n\n\n\nWhat value should we should give \\(V\\) in the table? One answer is to take the average of the two values either side of the desired QP — in this case \\((37933 + 40319) / 2\\). We could write this same calculation as \\(37933 * 0.5 + 40319 * 0.5\\) — showing that we are giving equal weight (\\(0.5\\)) to the two values either side.\nBut giving both values equal weight doesn’t seem quite right, because the QP we want is closer to the QP for rank 5 (and corresponding value 37933) than it is to the QP for rank 6 (and corresponding value 40319). We should give more weight to the rank 5 value than the rank 6 value. Specifically the lower value is 0.4 rank units away from the QP rank we want, and the higher is 0.6 rank units away. So we give higher weight for shorter distance, and multiply the rank 5 value by \\(1 - 0.4 = 0.6\\), and the rank 6 value by \\(1 - 0.6 = 0.4\\). Therefore the weighted average is \\(37933 * 0.6 + 40319 * 0.4 = 38887.4\\). This is a mathematical way to get the value we described graphically, of tracking up from the rank of 5.4 to the line drawn between the values for rank 5 and 6, and reading off the y-value at which this track crosses that line." + }, + { + "objectID": "standard_scores.html#unusual-values-compared-to-the-distribution", + "href": "standard_scores.html#unusual-values-compared-to-the-distribution", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.4 Unusual values compared to the distribution", + "text": "16.4 Unusual values compared to the distribution\nNow we return the problem of whether KMs and AOCs districts are unusual in terms of their median household incomes. From what we have so far, we might conclude that AOC’s district is fairly poor, and KM’s district is relatively wealthy. But — are either of their districts unusual in their wealth or poverty?\nTo answer that question, we have to think about the distribution of values. Are either AOC’s or KM’s district outside the typical spread of values for districts?\nThe rest of this section is an attempt to answer what we could mean by outside and typical spread.\nLet us start with a histogram of the district incomes, marking the position of the KM and AOC districts.\n\n\n\n\n\n\n\n\n\nWhat could we mean by “outside” the “typical spread”. By outside, we mean somewhere away from the center of the distribution. Let us take the mean of the distribution to be its center, and add that to the plot.\n\nmean_income <- mean(incomes)" + }, + { + "objectID": "standard_scores.html#on-deviations", + "href": "standard_scores.html#on-deviations", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.5 On deviations", + "text": "16.5 On deviations\nNow let us ask what we could mean by typical spread. 
By spread we mean deviation either side of the center.\nWe can calculate how far away each income is from the mean, by subtracting the mean from all the income values. Call the result — the deviations from the mean, or deviations for short.\n\ndeviations <- incomes - mean(incomes)\n\nThe deviation values give, for each district, how far that district’s income is from the mean. Values near the mean will have small (positive or negative) values, and values further from the mean will have large (positive or negative) values. Here is a histogram of the deviation values.\n\n\n\n\n\n\n\n\n\nNotice that the shape of the distribution has not changed — all that changed is the position of the distribution on the x-axis. In fact, the distribution of deviations centers on zero — the deviations have a mean of (as near as the computer can accurately calculate) zero:\n\n# Show the mean of the deviations, rounded to 8 decimal places.\nround(mean(deviations), 8)\n\n[1] 0" + }, + { + "objectID": "standard_scores.html#the-mean-absolute-deviation", + "href": "standard_scores.html#the-mean-absolute-deviation", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.6 The mean absolute deviation", + "text": "16.6 The mean absolute deviation\nNow let us consider the deviation values for KM and AOC:\n\nmessage('Deviation for KM: ', deviations[km_index])\n\nDeviation for KM: 5098.03628117914\n\nmessage('Deviation for AOC: ', deviations[aoc_index])\n\nDeviation for AOC: -15977.9637188209\n\n\nWe have the same problem as before. Yes, we see that KM has a positive deviation, and therefore, that his district is more wealthy than average across the 441 districts. Conversely, AOC’s district has a negative deviation, and is poorer than average. But we still lack a standard measure of how far away from the mean each district is, in terms of the spread of values in the histogram.\nTo get such a standard measure, we would like an idea of a typical or average deviation. Then we will compare KM’s and AOC’s deviations to the average deviation, to see if they are unusually far from the mean.\nYou have just seen above that we cannot use the literal average (mean) of the deviations for this purpose because the positive and negative deviations will exactly cancel out, and the mean deviation will always be as near as the computer can calculate to zero.\nTo stop the negatives canceling the positives, we can simply knock the minus signs off all the negative deviations.\nThis is the job of the R abs function — where abs is short for absolute. The abs function will knock minus signs off negative values, like this:\n\nabs(c(-1, 0, 1, -2))\n\n[1] 1 0 1 2\n\n\nTo get an average of the deviations, regardless of whether they are positive or negative, we can take the mean of the absolute deviations, like this:\n\n# The Mean Absolute Deviation (MAD)\nabs_deviations <- abs(deviations)\nmad <- mean(abs_deviations)\n# Show the result\nmad\n\n[1] 15102\n\n\nThis is the Mean Absolute Deviation (MAD). It is one measure of the typical spread.
MAD is the average distance (regardless of positive or negative) of a value from the mean of the values.\nWe can get an idea of how typical a particular deviation is by dividing the deviation by the MAD value, like this:\n\nmessage('Deviation in MAD units for KM: ', deviations[km_index] / mad)\n\nDeviation in MAD units for KM: 0.337581239498037\n\nmessage('Deviation in MAD units AOC: ', deviations[aoc_index] / mad)\n\nDeviation in MAD units AOC: -1.05802714993755" + }, + { + "objectID": "standard_scores.html#the-standard-deviation", + "href": "standard_scores.html#the-standard-deviation", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.7 The standard deviation", + "text": "16.7 The standard deviation\nWe are interested in the average deviation, but we find that a simple average of the deviations from the mean always gives 0 (perhaps with some tiny calculation error), because the positive and negative deviations cancel exactly.\nThe MAD calculation solves this problem by knocking the signs off the negative values before we take the mean.\nAnother very popular way of solving the same problem is to precede the calculation by squaring all the deviations, like this:\n\nsquared_deviations <- deviations ** 2\n# Show the first five values.\nsquared_deviations[1:5]\n\n[1] 2.49e+09 1.91e+09 1.66e+09 1.17e+09 1.17e+09\n\n\n\n\n\n\n\n\nExponential format for showing very large and very small numbers\n\n\n\nThe squared_deviation values above appear in exponential notation (E-notation). Other terms for E-notation are scientific notation, scientific form, or standard form. E-notation is a useful way to express very large (far from 0) or very small (close to 0) numbers in a more compact form.\nE-notation represents a value as a floating point value \\(m\\) multiplied by 10 to the power of an exponent \\(n\\):\n\\[\nm * 10^n\n\\]\n\\(m\\) is a floating point number with one digit before the decimal point — so it can be any value from 1.0 through 9.9999… \\(n\\) is an integer (positive or negative whole number).\nFor example, the median household income of KM’s district is 77205 (dollars). We can express that same number in E-notation as \\(7.7205 * 10^4\\) . R writes this as 7.7205e4, where the number before the e is \\(m\\) and the number after the e is the exponent value \\(n\\). E-notation is another way of writing the number, because \\(7.7205 * 10^4 = 77205\\).\n\n7.7205e4 == 77205\n\n[1] TRUE\n\n\nIt is no great advantage to use E-notation in this case; 77205 is probably easier to read and understand than 7.7205e4. The notation comes into its own where you start to lose track of the powers of 10 when you read a number — and that does happen when the number becomes very long without E-notation. For example, \\(77205^2 = 5960612025\\). \\(5960612025\\) is long enough that you start having to count the digits to see how large it is. In E-notation, that number is 5.960612025e9. If you remember that \\(10^9\\) is one US billion, then the E-notation tells you at a glance that the value is about \\(5.9\\) billion.\nR makes its own decision whether to print out numbers using E-notation. 
This only affects the display of the numbers; the underlying values remain the same whether R chooses to show them in E-notation or not.\n\n\nThe process of squaring the deviations turns all the negative values into positive values.\nWe can then take the average (mean) of the squared deviations to give a measure of the typical squared deviation:\n\nmean_squared_deviation <- mean(squared_deviations)\nmean_squared_deviation\n\n[1] 3.86e+08\n\n\nRather confusingly, the field of statistics uses the term variance to refer to mean squared deviation value. Just to emphasize that naming, let’s do the same calculation but using “variance” as the variable name.\n\n# Statistics calls the mean squared deviation - the \"variance\"\nvariance <- mean(squared_deviations)\nvariance\n\n[1] 3.86e+08\n\n\nThe variance is the typical (in the sense of the mean) squared deviation. The units for the variance, in our case, would be squared dollars. But we are more interested in the typical deviation, in our original units – dollars rather than squared dollars.\nSo we take the square root of the mean squared deviation (the square root of the variance), to get the standard deviation. It is the standard deviation in the sense that it a measure of typical deviation, in the specific sense of the square root of the mean squared deviations.\n\n# The standard deviation is the square root of the mean squared deviation.\n# (and therefore, the square root of the variance).\nstandard_deviation <- sqrt(mean_squared_deviation)\nstandard_deviation\n\n[1] 19646\n\n\nThe standard deviation (the square root of the mean squared deviation) is a popular alternative to the Mean Absolute Deviation, as a measure of typical spread.\nFigure 16.1 shows another histogram of the income values, marking the mean, the mean plus or minus one standard deviation, and the mean plus or minus two standard deviations. You can see that the mean plus or minus one standard deviation includes a fairly large proportion of the data. The mean plus or minus two standard deviation includes much larger proportion.\n\n\n\n\n\nFigure 16.1: Income histogram plus or minus 1 and 2 standard deviations\n\n\n\n\nNow let us return to the question of how unusual our two congressional districts are in terms of the distribution. First we calculate the number of standard deviations of each district from the mean:\n\nkm_std_devs <- deviations[km_index] / standard_deviation\nmessage('Deviation in standard deviation units for KM: ',\n round(km_std_devs), 2)\n\nDeviation in standard deviation units for KM: 02\n\naoc_std_devs <- deviations[aoc_index] / standard_deviation\nmessage('Deviation in standard deviation units for AOC: ',\n round(aoc_std_devs), 2)\n\nDeviation in standard deviation units for AOC: -12\n\n\nThe values for each district are a re-expression of the income values in terms of the distribution. They give the distance from the mean (positive or negative) in units of standard deviation." 
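Rounded to two decimal places (assuming the km_std_devs and aoc_std_devs values just computed), the two distances are easier to read:

round(km_std_devs, 2)

[1] 0.26

round(aoc_std_devs, 2)

[1] -0.81

These are the figures we will come back to below: KM's district sits about a quarter of a standard deviation above the mean, and AOC's district a little under one standard deviation below it.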
+ }, + { + "objectID": "standard_scores.html#standard-scores", + "href": "standard_scores.html#standard-scores", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.8 Standard scores", + "text": "16.8 Standard scores\nWe will often find uses for the procedure we have just applied, where we take the original values (here, incomes) and:\n\nSubtract the mean to convert to deviations, then\nDivide by the standard deviation\n\nLet’s apply that procedure to all the incomes values.\nFirst we calculate the standard deviation:\n\ndeviations <- incomes - mean(incomes)\nincome_std <- sqrt(mean(deviations ** 2))\n\nThen we calculate standard scores:\n\ndeviations_in_stds <- deviations / income_std\ndeviations_in_stds[1:5]\n\n[1] -2.54 -2.23 -2.07 -1.74 -1.74\n\n\nThis procedure converts the original data (here incomes) to deviations from the mean in terms of the standard deviation. The resulting values are called standard scores or z-scores. One name for this procedure is “z-scoring”.\nIf you plot a histogram of the standard scores, you will see they have a mean of (actually exactly) 0, and a standard deviation of (actually exactly) 1.\n\n\n\n\n\n\n\n\n\nWith all this information — what should we conclude about the two districts in question? KM’s district is 0.26 standard deviations above the mean, but that’s not enough to conclude that it is unusual. We see from the histogram that a large proportion of the districts are at least this distance from the mean. We can calculate that proportion directly.\n\n# Distances (negative or positive) from the mean.\nabs_std_devs <- abs(deviations_in_stds)\n# Number where distance greater than KM distance.\nn_gt_km <- sum(abs_std_devs > km_std_devs)\nprop_gt_km <- n_gt_km / length(deviations_in_stds)\nmessage(\"Proportion of districts further from mean than KM: \",\n round(prop_gt_km, 2))\n\nProportion of districts further from mean than KM: 0.82\n\n\nA full 82% of districts are further from the mean than is KM’s district. KM’s district is richer than average, but not unusual. The benefit of the standard deviation distance is that we can see this directly from the value, without doing the calculation of proportions, because the standard deviation is a measure of typical spread, and KM’s district is well-within this measure.\nAOC’s district is -0.81 standard deviations from the mean. This is a little more unusual than KM’s score.\n\n# Number where distance greater than AOC distance.\n# Make AOC's distance positive to correspond to distance from the mean.\nn_gt_aoc <- sum(abs_std_devs > abs(aoc_std_devs))\nprop_gt_aoc <- n_gt_aoc / length(deviations_in_stds)\nmessage(\"Proportion of districts further from mean than AOC's district: \",\n round(prop_gt_aoc, 2))\n\nProportion of districts further from mean than AOC's district: 0.35\n\n\nOnly 35% of districts are further from the mean than AOC’s district, but this is still a reasonable proportion. We see from the standard score that AOC is within one standard deviation. AOC’s district is poorer than average, but not to a remarkable degree." + }, + { + "objectID": "standard_scores.html#standard-scores-to-compare-values-on-different-scales", + "href": "standard_scores.html#standard-scores-to-compare-values-on-different-scales", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.9 Standard scores to compare values on different scales", + "text": "16.9 Standard scores to compare values on different scales\nWhy are standard scores so useful? 
They allow us to compare values on very different scales.\nConsider the values in Table 16.4. Each row of the table corresponds to a team competing in the English Premier League (EPL) for the 2021-2022 season. For those of you with absolutely no interest in sports, the EPL is the league of the top 20 teams in English football, or soccer to our North American friends. The points column of the table gives the total number of points at the end of the 2021 season (from 38 games). The team gets 3 points for a win, and 1 point for a draw, so the maximum possible points from 38 games are \\(3 * 38 = 114\\). The wages column gives the estimated total wage bill in thousands of British Pounds (£1000).\n\n\n\n\nTable 16.4: 2021 points and wage bills (£1000s) for EPL teams \n\n\nteam\npoints\nwages\n\n\n\n\nManchester City\n93\n168572\n\n\nLiverpool\n92\n148772\n\n\nChelsea\n74\n187340\n\n\nTottenham Hotspur\n71\n110416\n\n\nArsenal\n69\n118074\n\n\nManchester United\n58\n238780\n\n\nWest Ham United\n56\n77936\n\n\nLeicester City\n52\n81590\n\n\nBrighton and Hove Albion\n51\n49820\n\n\nWolverhampton Wanderers\n51\n62756\n\n\nNewcastle United\n49\n73308\n\n\nCrystal Palace\n48\n71910\n\n\nBrentford\n46\n28606\n\n\nAston Villa\n45\n85330\n\n\nSouthampton\n40\n58657\n\n\nEverton\n39\n110202\n\n\nLeeds United\n38\n37354\n\n\nBurnley\n35\n40830\n\n\nWatford\n23\n42030\n\n\nNorwich City\n22\n31750\n\n\n\n\n\n\n\n\nLet’s say we own Crystal Palace Football Club. Crystal Palace was a bit below average in the league in terms of points. Now we are thinking about whether we should invest in higher-paid players for the coming season, to improve our points score, and therefore, league position.\nOne thing we might like to know is whether there is an association between the wage bill and the points scored.\nTo look at that, we can do a scatter plot. This is a plot with — say — wages on the x-axis, and points on the y-axis. For each team we have a pair of values — their wage bill and their points scored. For each team, we put a marker on the scatter plot at the coordinates given by the wage value (on the x-axis) and the points value (on the y-axis).\nHere is that plot for our EPL data in Table 16.4, with the Crystal Palace marker picked out in red.\n\n\n\n\n\n\n\n\n\nIt looks like there is a rough association of wages and points; teams that spend more in wages tend to have more points.\nAt the moment, the points and wages are in very different units. Points are on a possible scale of 0 (lose every game) to 38 * 3 = 114 (win every game). Wages are in thousands of pounds. Maybe we are not interested in the values in these units, but in how unusual the values are, in terms of wages, and in terms of points.\nThis is a good application of standard scores. Standard scores convert the original values to values on a standard scale, where 0 corresponds to an average value, 1 to a value one standard deviation above the mean, and -1 to a value one standard deviation below the mean. If we follow the standard score process for both points and wages, the values will be in the same standard units.\nTo do this calculation, we need the values from the table. We follow the same recipe as before, in loading the data with R.\n\npoints_wages = read.csv('data/premier_league.csv')\npoints = points_wages$points\nwages = points_wages$wages\n\nAs you recall, the standard deviation is the square root of the mean squared deviation. 
In code:\n\n# The standard deviation is the square root of the\n# mean squared deviation.\nwage_deviations <- wages - mean(wages)\nwage_std <- sqrt(mean(wage_deviations ** 2))\nwage_std\n\n[1] 55524\n\n\nNow we can apply the standard score procedure to wages. We divide the deviations by the standard deviation.\n\nstandard_wages <- (wages - mean(wages)) / wage_std\n\nWe apply the same procedure to the points:\n\npoint_deviations <- points - mean(points)\npoint_std = sqrt(mean(point_deviations ** 2))\nstandard_points = point_deviations / point_std\n\nNow, when we plot the standard score version of the points against the standard score version of the wages, we see that they are in comparable units, each with a mean of 0, and a spread (a standard deviation) of 1.\n\n\n\n\n\n\n\n\n\nLet us go back to our concerns as the owners of Crystal Palace. Counting down from the top in the table above, we see that Crystal Palace is the 12th row. Therefore, we can get the Crystal Palace wage value with:\n\ncp_index <- 12\ncp_wages <- wages[cp_index]\ncp_wages\n\n[1] 71910\n\n\nWe can get our wage bill in standard units in the same way:\n\ncp_standard_wages <- standard_wages[cp_index]\ncp_standard_wages\n\n[1] -0.347\n\n\nOur wage bill is a below average, but its still within striking distance of the mean.\nWe know that we are comparing ourselves against the other teams, so perhaps we want to increase our wage bill by one standard deviation, to push us above the mean, and somewhat away from the center of the pack. If we add one standard deviation to our wage bill, that increases the standard score of our wages by 1.\nBut — if we increase our wages by one standard deviation — how much can we expect that to increase our points — in standard units.\nThat is question about the strength of the association between two measures — here wages and points — and we will cover that topic in much more detail in Chapter 29. But, racing ahead — here is the answer to the question we have just posed — the amount we expect to gain in points, in standard units, if we increase our wages by one standard deviation (and therefore, 1 in standard units).\nFor reasons we won’t justify now, we calculate the \\(r\\) value of association between wages and points, like this:\n\nstandards_multiplied <- standard_wages * standard_points\nr = mean(standards_multiplied)\nr\n\n[1] 0.708\n\n\nThe \\(r\\) value is the answer to our question. For every one unit increase in standard scores in wages, we expect an increase of \\(r\\) (0.708) standard score units in points." + }, + { + "objectID": "standard_scores.html#conclusion", + "href": "standard_scores.html#conclusion", + "title": "16  Ranks, Quantiles and Standard Scores", + "section": "16.10 Conclusion", + "text": "16.10 Conclusion\nWhen we look at a set of values, we often ask questions about whether individual values are unusual or surprising. One way of doing that is to look at where the values are in the sorted order — for example, using the raw rank of values, or the proportion of values below this value — the quantiles or percentiles of a value. Another measure of interest is where a value is in comparison to the spread of all values either side of the mean. We use the term “deviations” to refer to the original values after we have subtracted the mean of the values. We can measure spread either side of the mean with metrics such as the mean of the absolute deviations (MAD) and the square root of the mean squared deviations (the standard deviation). 
One common use of the deviations and the standard deviation is to transform values into standard scores. These are the deviations divided by the standard deviation, and they transform values to have a standard mean (zero) and spread (standard deviation of 1). This can make it easier to compare sets of values with very different ranges and means.\n\n\n\n\nHyndman, Rob J, and Yanan Fan. 1996. “Sample Quantiles in Statistical Packages.” The American Statistician 50 (4): 361–65. https://www.jstor.org/stable/pdf/2684934.pdf.\n\n\nPiketty, Thomas. 2018. “Brahmin Left Vs Merchant Right: Rising Inequality & the Changing Structure of Political Conflict.” 2018. https://www.prsinstitute.org/downloads/related/economics/RisingInequalityandtheChangingStructureofPoliticalConflict1.pdf." + }, + { + "objectID": "inference_ideas.html#knowledge-without-probabilistic-statistical-inference", + "href": "inference_ideas.html#knowledge-without-probabilistic-statistical-inference", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.1 Knowledge without probabilistic statistical inference", + "text": "17.1 Knowledge without probabilistic statistical inference\nLet us distinguish two kinds of knowledge with which inference at large (that is, not just probabilistic statistical inference) is mainly concerned: a) one or more absolute measurements on one or more dimensions of a collection of one or more items — for example, your income, or the mean income of the people in your country; and b) comparative measurements and evaluations of two or more collections of items (especially whether they are equal or unequal)—for example, the mean income in Brazil compared to the mean income in Argentina. Types (a) and (b) both include asking whether there has been a change between one observation and another.\nWhat is the conceptual basis for gathering these types of knowledge about the world? I believe that our rock bottom conceptual tool is the assumption of what we may call sameness , or continuity , or constancy , or repetition , or equality , or persistence ; “constancy” and “continuity” will be the terms used most frequently here, and I shall use them interchangeably.\nContinuity is a non-statistical concept. It is a best guess about the next point beyond the known observations, without any idea of the accuracy of the estimate. It is like testing the ground ahead when walking in a marsh. It is local rather than global. We’ll talk a bit later about why continuity seems to be present in much of the world that we encounter.\nThe other great concept in statistical inference, and perhaps in all inference taken together, is representative (usually random) sampling, to be discussed in Chapter 18. Representative sampling — which depends upon the assumption of sameness (homogeneity) throughout the universe to be investigated — is quite different than continuity; representative sampling assumes that there is no greater chance of a connection between any two elements that might be drawn into the sample than between any other two elements; the order of drawing is immaterial. In contrast, continuity assumes that there is a greater chance of connection between two contiguous elements than between either one of the elements and any of the many other elements that are not contiguous to either. 
Indeed, the process of randomizing is a device for doing away with continuity and autocorrelation within some bounded closed system — the sample “frame.” It is an attempt to map (describe) the entire area ahead using the device of the systematic survey. Random representative sampling enables us to make probabilistic inferences about a population based on the evidence of a sample.\n\nTo return now to the concept of sameness: Examples of the principle are that we assume: a) our house will be in the same place tomorrow as today; b) a hammer will break an egg every time you hit the latter with the former (or even the former with the latter); c) if you observe that the first fifteen persons you see walking out of a door at the airport are male, the sixteenth probably will be male also; d) paths in the village stay much the same through a person’s life; e) religious ritual changes little through the decades; f) your best guess about tomorrow’s temperature or stock price is that will be the same as today’s. This principle of constancy is related to David Hume’s concept of constant conjunction .\nWhen my children were young, I would point to a tree on our lawn and ask: “Do you think that tree will be there tomorrow?” And when they would answer “Yes,” I’d ask, “Why doesn’t the tree fall?” That’s a tough question to answer.\nThere are two reasonable bases for predicting that the tree will be standing tomorrow. First and most compelling for most of us is that almost all trees continue standing from day to day, and this particular one has never fallen; hence, what has been in the past is likely to continue. This assessment requires no scientific knowledge of trees, yet it is a very functional way to approach most questions concerning the trees — such as whether to hang a clothesline from it, or whether to worry that it will fall on the house tonight. That is, we can predict the outcome in this case with very high likelihood of being correct even though we do not utilize anything that would be called either science or statistical inference. (But what do you reply when your child says: “Why should I wear a seat belt? I’ve never been in an accident”?)\nA second possible basis for prediction that the tree will be standing is scientific analysis of the tree’s roots — how the tree’s weight is distributed, its sickness or health, and so on. Let’s put aside this sort of scientific-engineering analysis for now.\nThe first basis for predicting that the tree will be standing tomorrow — sameness — is the most important heuristic device in all of knowledge-gathering. It is often a weak heuristic; certainly the prediction about the tree would be better grounded (!) after a skilled forester examines the tree. But persistence alone might be a better heuristic in a particular case than an engineering-scientific analysis alone.\nThis heuristic appears more obvious if the child — or the adult — were to respond to the question about the tree with another question: Why should I expect it to fall ? In the absence of some reason to expect change, it is quite reasonable to expect no change. And the child’s new question does not duck the central question we have asked about the tree, any more than one ducks a probability estimate by estimating the complementary probability (that is, unity minus the probability sought); indeed, this is a very sound strategy in many situations.\n\nConstancy can refer to location, time, relationship to another variable, or yet another dimension. Constancy may also be cyclical. 
Some cyclical changes can be charted or mapped with relative certainty — for example the life-cycles of persons, plants, and animals; the diurnal cycle of dark and light; and the yearly cycle of seasons. The courses of some diseases can also be charted. Hence these kinds of knowledge have long been well known.\nConsider driving along a road. One can predict that the price of the next gasoline station will be within a few cents of the gasoline station that you just passed. But as you drive further and further, the dispersion increases as you cross state lines and taxes differ. This illustrates continuity.\nThe attention to constancy can focus on a single event, such as leaves of similar shape appearing on the same plant. Or attention can focus on single sequences of “production,” as in the process by which a seed produces a tree. For example, let’s say you see two puppies — one that looks like a low-slung dachshund, and the other a huge mastiff. You also see two grown male dogs, also apparently dachshund and mastiff. If asked about the parentage of the small ones, you are likely — using the principle of sameness — to point — quickly and with surety — to the adult dogs of the same breed. (Here it is important to notice that this answer implicitly assumes that the fathers of the puppies are among these dogs. But the fathers might be somewhere else entirely; it is in these ways that the principle of sameness can lead you astray.)\nWhen applying the concept of sameness, the object of interest may be collections of data, as in Semmelweiss’s (1983, 64) data on the consistent differences in rates of maternal deaths from childbed fever in two clinics with different conditions (see Table 17.1), or the similarities in sex ratios from year to year in Graunt’s (1759, 304) data on christenings in London (Table 17.2), or the stark effect in John Snow’s (Winslow 1980, 276) data on the numbers of cholera cases associated with two London water suppliers (Table 17.3), or Kanehiro Takaki’s (Kornberg 1991, 9) discovery of the reduction in beriberi among Japanese sailors as a result of a change in diet (Table 17.4). These data seem so overwhelmingly clear cut that our naive statistical sense makes the relationships seem deterministic, and the conclusions seems straightforward. 
(But the same statistical sense frequently misleads us when considering sports and stock market data.)\n\n\nTable 17.1: Deaths of Mothers from childbed fever in two clinics\n\n\n\n\n\n\n\n\n\n\n\n\nFirst clinic\nSecond clinic\n\n\n\nBirths\nDeaths\nRate\nBirths\nDeaths\nRate\n\n\n\n\n1841\n3,036\n237\n7.7\n2,442\n86\n3.5\n\n\n1842\n3,287\n518\n15.8\n2,659\n202\n7.5\n\n\n1843\n3,060\n274\n8.9\n2,739\n164\n5.9\n\n\n1844\n3,157\n260\n8.2\n2,956\n68\n2.3\n\n\n1845\n3,492\n241\n6.8\n3,241\n66\n2.03\n\n\n1845\n4,010\n459\n11.4\n3,754\n105\n2.7\n\n\n\nTotal\n20,042\n1,989\n\n17,791\n691\n\n\n\nAverage\n\n\n9.92\n\n\n3.38\n\n\n\n\n\n\n\nTable 17.2: Ratio of number of male to number of female christenings in London\n\n\nPeriod\nMale / Female ratio\n\n\n\n\n1629-1636\n1.072\n\n\n1637-1640\n1.073\n\n\n1641-1648\n1.063\n\n\n1649-1656\n1.095\n\n\n1657-1660\n1.069\n\n\n\n\n\n\nTable 17.3: Rates of death from cholera for three water suppliers\n\n\nWater supplier\nCholera deaths per 10,000 houses\n\n\n\n\nSouthwark and Vauxhall\n71\n\n\nLambeth\n5\n\n\nRest of London\n9\n\n\n\n\n\n\nTable 17.4: Takaki’s Japanese Naval Records of Deaths from Beriberi\n\n\n\n\n\n\n\n\nYear\nDiet\nTotal Navy Personnel\nDeaths from Beriberi\n\n\n\n\n1880\nRice diet\n4,956\n1,725\n\n\n1881\nRice diet\n4,641\n1,165\n\n\n1882\nRice diet\n4,769\n1,929\n\n\n1883\nRice Diet\n5,346\n1,236\n\n\n1884\nChange to new diet\n5,638\n718\n\n\n1885\nNew diet\n6,918\n41\n\n\n1886\nNew diet\n8,475\n3\n\n\n1887\nNew diet\n9,106\n0\n\n\n1888\nNew diet\n9,184\n0\n\n\n\n\nConstancy and sameness can be seen in macro structures; consider, for example, the constant location of your house. Constancy can also be seen in micro aggregations — for example, the raindrops and rain that account for the predictably fluctuating height of the Nile, or the ratio of boys to girls born in London, cases in which we can average to see the “statistical” sameness. The total sum of the raindrops produces the level of a reservoir or a river from year to year, and the sum of the behaviors of collections of persons causes the birth rates in the various years.\nStatistical inference is only needed when a person thinks that s/he might have found a pattern but the pattern is not completely obvious to all. Probabilistic inference works to test — either to confirm or discount — the belief in the pattern’s existence. We will see such cases in the following chapter.\nPeople have always been forced to think about and act in situations that have not been constant — that is, situations where the amount of variability in the phenomenon makes it impossible to draw clear cut, sensible conclusions. For example, the appearance of game animals in given places and at given times has always been uncertain to hunters, and therefore it has always been difficult to know which target to hunt in which place at what time. And of course variability of the weather has always made it a very uncertain element. The behavior of one’s enemies and friends has always been uncertain, too, though uncertain in a manner different from the behavior of wild animals; there often is a gaming element in interactions with other humans. But in earlier times, data and techniques did not exist to enable us to bring statistical inference to bear." 
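Graunt's christening ratios in Table 17.2 make a small, concrete example of this kind of statistical sameness. A couple of lines of R (using the ratios from the table; the variable name is ours) show how tightly they cluster:

# Male / female christening ratios for Graunt's five periods (Table 17.2).
christening_ratios <- c(1.072, 1.073, 1.063, 1.095, 1.069)
# The average ratio (about 1.07) ...
mean(christening_ratios)
# ... and how far any one period strays from it (at most about 0.02).
max(abs(christening_ratios - mean(christening_ratios)))

Across three decades the ratio barely moves — the kind of constancy in aggregates that this chapter has been describing.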
+ }, + { + "objectID": "inference_ideas.html#the-treatment-of-uncertainty", + "href": "inference_ideas.html#the-treatment-of-uncertainty", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.2 The treatment of uncertainty", + "text": "17.2 The treatment of uncertainty\nThe purpose of statistical inference is to help us peer through the veil of variability when it obscures the main thrust of the data, so as to improve the decisions we make. Statistical inference (or in most cases, simply probabilistic estimation) can help:\n\na gambler deciding on the appropriate odds in a betting game when there seems to be little or no difference between two or more outcomes;\nan astronomer deciding upon one or another value as the central estimate for the location of a star when there is considerable variation in the observations s/he has made of the star;\na basketball coach pondering whether to remove from the game her best shooter who has heretofore done poorly tonight;\nan oil-drilling firm debating whether to follow up a test-well drilling with a full-bore drilling when the probability of success is not overwhelming but the payoff to a gusher could be large.\n\nReturning to the tree near the Simon house: Let’s change the facts. Assume now that one major part of the tree is mostly dead, and we expect a big winter storm tonight. What is the danger that the tree will fall on the house? Should we spend $1500 to have the mostly-dead third of it cut down? We know that last year a good many trees fell on houses in the neighborhood during such a storm.\nWe can gather some data on the proportion of old trees this size that fell on houses — about 5 in 100, so far as we can tell. Now it is no longer an open-and-shut case about whether the tree will be standing tomorrow, and we are using statistical inference to help us with our thinking. We proceed to find a set of trees that we consider similar to this one , and study the variation in the outcomes of such trees. So far we have estimated that the average for this group of trees — the mean (proportion) that fell in the last big storm — is 5 percent. Averages are much more “stable” — that is, more similar to each other — than are individual cases.\nNotice how we use the crucial concept of sameness: We assume that our tree is like the others we observed, or at least that it is not systematically different from most of them and it is more-or-less average.\nHow would our thinking be different if our data were that one tree in 10 had fallen instead of 5 in 100? This is a question in statistical inference.\n\nHow about if we investigate further and find that 4 of 40 elms fell, but only one of 60 oaks , and ours is an oak tree. Should we consider that oaks and elms have different chances of falling? Proceeding a bit further, we can think of the question as: Should we or should we not consider oaks and elms as different? This is the type of statistical inference called “hypothesis testing”: We apply statistical procedures to help us decide whether to treat the two classes of trees as the same or different. If we should consider them the same, our worries about the tree falling are greater than if we consider them different with respect to the chance of damage.1\nNotice that statistical inference was not necessary for accurate prediction when I asked the kids about the likelihood of a live tree falling on a day when there would be no storm. So it is with most situations we encounter. 
But when the assumption of constancy becomes shaky for one reason or another, as with the sick tree falling in a storm, we need a more refined form of thinking. We collect data on a large number of instances, inquire into whether the instances in which we are interested (our tree and the chance of it falling) are representative — that is, whether it resembles what we would get if we drew a sample randomly — and we then investigate the behavior of this large class of instances to see what light it throws on the instances(s) in which we are interested.\nThe procedure in this case — which we shall discuss in greater detail later on — is to ask: If oaks and elms are not different, how likely is it that only one of 60 oaks would fall whereas 4 of 40 elms would fall? Again, notice the assumption that our tree is “representative” of the other trees about which we have information — that it is not systematically different from most of them, but rather that it is more-or-less average. Our tree certainly was not chosen randomly from the set of trees we are considering. But for purposes of our analysis, we proceed as if it had been chosen randomly — because we deem it “representative.”\nThis is the first of two roles that the concept of randomness plays in statistical thinking. Here is an example of the second use of the concept of randomness: We conduct an experiment — plant elm and oak trees at randomly-selected locations on a plot of land, and then try to blow them down with a wind-making machine. (The random selection of planting spots is important because some locations on a plot of ground have different growing characteristics than do others.) Some purists object that only this sort of experimental sampling is a valid subject of statistical inference; it can never be appropriate, they say, to simply assume on the basis of other knowledge that the tree is representative. I regard that purist view as a helpful discipline on our thinking. But accepting its conclusion — that one should not apply statistical inference except to randomly-drawn or randomly-constituted samples — would take from us a tool that has proven useful in a variety of activities.\nAs discussed earlier in this chapter, the data in some (probably most) scientific situations are so overwhelming that one can proceed without probabilistic inference. Historical examples include those shown above of Semmelweiss and puerperal fever, and John Snow and cholera.2 But where there was lack of overwhelming evidence, the causation of many diseases long remained unclear for lack of statistical procedures. This led to superstitious beliefs and counter-productive behavior, such as quarantines against plague often were. Some effective practices also arose despite the lack of sound theory, however — the waxed costumes of doctors, and the burning of mattresses, despite the wrong theory about the causation of plague; see (Cipolla 1981).\nSo far I have spoken only of predictability and not of other elements of statistical knowledge such as understanding and control . This is simply because statistical correlation is the bed rock of most scientific understanding, and predictability. Later we will expand the discussion beyond predictability; it holds no sacred place here." 
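Although the machinery comes later in the book, it may help to see roughly what a resampling answer to the oak-and-elm question above would look like. The sketch below supposes that the two kinds of tree are equally likely to fall, shuffles the 5 fallen trees among all 100, and counts how often 4 or more of the fallen end up among the 40 elms. The number of trials, and this particular way of framing the question, are only illustrative.

# A sketch of a resampling answer to the oak / elm question.
n_trials <- 10000
count_extreme <- 0
for (i in 1:n_trials) {
  # 100 trees, 5 of which fell (1 means fell, 0 means stood).
  trees <- c(rep(1, 5), rep(0, 95))
  # Shuffle the trees and deal the first 40 to the "elm" group.
  shuffled <- sample(trees)
  elms_fallen <- sum(shuffled[1:40])
  # Count shuffles at least as lopsided as the observed 4 fallen elms.
  if (elms_fallen >= 4) {
    count_extreme <- count_extreme + 1
  }
}
# Proportion of shuffles as extreme as the observation.
count_extreme / n_trials

If that proportion is small, a world in which oaks and elms fall at the same rate would rarely produce a split as uneven as the one observed, and we would lean towards treating the two kinds of tree as different. This is exactly the style of reasoning taken up in the chapters on hypothesis testing.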
+ }, + { + "objectID": "inference_ideas.html#where-statistical-inference-becomes-crucial", + "href": "inference_ideas.html#where-statistical-inference-becomes-crucial", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.3 Where statistical inference becomes crucial", + "text": "17.3 Where statistical inference becomes crucial\nThere was little role for statistical inference until about three centuries ago because there existed very few scientific data. When scientific data began to appear, the need emerged for statistical inference to improve the interpretation of the data. As we saw, statistical inference is not needed when the evidence is overwhelming. A thousand cholera cases at one well and zero at another obviously does not require a statistical test. Neither would 999 cases to one, or even 700 cases to 300, because our inbred and learned statistical senses can detect that the two situations are different. But probabilistic inference is needed when the number of cases is relatively small or where for other reasons the data are somewhat ambiguous.\nFor example, when working with the 17th century data on births and deaths, John Graunt — great statistician though he was — drew wrong conclusions about some matters because he lacked modern knowledge of statistical inference. For example, he found that in the rural parish of Romsey “there were born 15 Females for 16 Males, whereas in London there were 13 for 14, which shows, that London is somewhat more apt to produce Males, then the country” (p. 71). He suggests that the “curious” inquire into the causes of this phenomenon, apparently not recognizing — and at that time he had no way to test — that the difference might be due solely to chance. He also notices (p. 94) that the variations in deaths among years in Romsey were greater than in London, and he attempted to explain this apparent fact (which is just a statistical artifact) rather than understanding that this is almost inevitable because Romsey is so much smaller than London. Because we have available to us the modern understanding of variability, we can now reach sound conclusions on these matters.3\nSummary statistics — such as the simple mean — are devices for reducing a large mass of data (inevitably confusing unless they are absolutely clear cut) to something one can manage to understand. And probabilistic inference is a device for determining whether patterns should be considered as facts or artifacts.\nHere is another example that illustrates the state of early quantitative research in medicine:\n\nExploring the effect of a common medicinal substance, Bőcker examined the effect of sasparilla on the nitrogenous and other constituents of the urine. An individual receiving a controlled diet was given a decoction of sasparilla for a period of twelve days, and the volume of urine passed daily was carefully measured. For a further twelve days that same individual, on the same diet, was given only distilled water, and the daily quantity of urine was again determined. The first series of researches gave the following figures (in cubic centimeters): 1,467, 1,744, 1,665, 1,220, 1,161, 1,369, 1,675, 2,199, 887, 1,634, 943, and 2,093 (mean = 1,499); the second series: 1,263, 1,740, 1,538, 1,526, 1,387, 1,422, 1,754, 1,320, 1,809, 2,139, 1,574, and 1,114 (mean = 1,549). Much uncertainty surrounded the exactitude of these measurements, but this played little role in the ensuing discussion. 
The fundamental issue was not the quality of the experimental data but how inferences were drawn from those data (Coleman 1987, 207).\n\nThe experimenter Böcker had no reliable way of judging whether the data for the two groups were or were not meaningfully different, and therefore he arrived at the unsound conclusion that there was indeed a difference. (Gustav Radicke used this example as the basis for early work on statistical significance (Støvring 1999).)\nAnother example: Joseph Lister convinced the scientific world of the germ theory of infection, and the possibility of preventing death with a disinfectant, with these data: Prior to the use of antiseptics — 16 post-operative deaths in 35 amputations; subsequent to the use of antiseptics — 6 deaths in 40 amputations (Winslow 1980, 303). But how sure could one be that a difference of that size might not occur just by chance? No one then could say, nor did anyone inquire, apparently.\nHere’s another example of great scientists falling into error because of a too-primitive approach to data (Feller 1968, 1:69–70): Charles Darwin wanted to compare two sets of measured data, each containing 16 observations. At Darwin’s request, Francis Galton compared the two sets of data by ranking each, and then comparing them pairwise. The a’s were ahead 13 times. Without knowledge of the actual probabilities Galton concluded that the treatment was effective. But, assuming perfect randomness, the probability that the a’s beat [the others] 13 times or more equals 3/16. This means that in three out of sixteen cases a perfectly ineffectual treatment would appear as good or better than the treatment classified as effective by Galton.\nThat is, Galton and Darwin reached an unsound conclusion. As Feller (1968, 1:70) says, “This shows that a quantitative analysis may be a valuable supplement to our rather shaky intuition”.\nLooking ahead, the key tool in situations like Graunt’s and Böcker’s and Lister’s is creating ceteris paribus — making “everything else the same” — with random selection in experiments, or at least with statistical controls in non-experimental situations." + }, + { + "objectID": "inference_ideas.html#conclusions", + "href": "inference_ideas.html#conclusions", + "title": "17  The Basic Ideas in Statistical Inference", + "section": "17.4 Conclusions", + "text": "17.4 Conclusions\nIn all knowledge-seeking and decision-making, our aim is to peer into the unknown and reduce our uncertainty a bit. The two main concepts that we use — the two great concepts in all of scientific knowledge-seeking, and perhaps in all practical thinking and decision-making — are a) continuity (or non-randomness) and the extent to which it applies in given situation, and b) random sampling, and the extent to which we can assume that our observations are indeed chosen by a random process.\n\n\n\n\nCipolla, C. M. 1981. Fighting the Plague in Seventeenth-Century Italy. Merle Curti Lectures. Madison, Wisconsin: University of Wisconsin Press. https://books.google.co.uk/books?id=Ct\\_OJYgnKCsC.\n\n\nColeman, William. 1987. “Experimental Physiology and Statistical Inference: The Therapeutic Trial in Nineteenth Century Germany.” In The Probabilistic Revolution: Volume 2: Ideas in the Sciences, edited by Lorenz Krüger, Gerd Gigerenzer, and Mary S. Morgan. An MIT Press Classic. MIT Press. https://books.google.co.uk/books?id=SLftmgEACAAJ.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its Applications: Volume i. 3rd ed. Vol. 1. New York: John Wiley & Sons. 
https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nGraunt, John. 1759. “Natural and Political Observations Mentioned in a Following Index and Made Upon the Bills of Mortality.” In Collection of Yearly Bills of Mortality, from 1657 to 1758 Inclusive, edited by Thomas Birch. London: A. Miller. https://archive.org/details/collectionyearl00hebegoog.\n\n\nHald, Anders. 1990. A History of Probability and Statistics and Their Applications Before 1750. New York: John Wiley & Sons. https://archive.org/details/historyofprobabi0000hald.\n\n\nKornberg, Arthur. 1991. For the Love of Enzymes: The Odyssey of a Biochemist. Cambridge, Massachusetts: Harvard University Press. https://archive.org/details/forloveofenzymes00arth.\n\n\nSemmelweis, Ignác Fülöp. 1983. The Etiology, Concept, and Prophylaxis of Childbed Fever. Translated by K. Codell Carter. Madison, Wisconsin: University of Wisconsin Press. https://archive.org/details/etiologyconcepta0000unse.\n\n\nStøvring, H. 1999. “On Radicke and His Method for Testing Mean Differences.” Journal of the Royal Statistical Society: Series D (The Statistician) 48 (2): 189–201. https://www.jstor.org/stable/pdf/2681185.pdf.\n\n\nWinslow, Charles-Edward Amory. 1980. The Conquest of Epidemic Disease: A Chapter in the History of Ideas. Madison, Wisconsin: University of Wisconsin Press. https://archive.org/details/conquestofepidem0000wins_p3k0." + }, + { + "objectID": "inference_intro.html#statistical-inference-and-random-sampling", + "href": "inference_intro.html#statistical-inference-and-random-sampling", + "title": "18  Introduction to Statistical Inference", + "section": "18.1 Statistical inference and random sampling", + "text": "18.1 Statistical inference and random sampling\nContinuity and sameness is the fundamental concept in inference in general, as discussed in Chapter 17. Random sampling is the second great concept in inference, and it distinguishes probabilistic statistical inference from non-statistical inference as well as from non-probabilistic inference based on statistical data.\nLet’s begin the discussion with a simple though unrealistic situation. Your friend Arista a) looks into a cardboard carton, b) reaches in, c) pulls out her hand, and d) shows you a green ball. What might you reasonably infer?\nYou might at least be fairly sure that the green ball came from the carton, though you recognize that Arista might have had it concealed in her hand when she reached into the carton. But there is not much more you might reasonably conclude at this point except that there was at least one green ball in the carton to start with. There could be no more balls; there could be many green balls and no others; there could be a thousand red balls and just one green ball; and there could be one green ball, a hundred balls of different colors, and two pounds of mud — given that she looked in first, it is not improbable that she picked out the only green ball among other material of different sorts.\nThere is not much you could say with confidence about the probability of yourself reaching into the same carton with your eyes closed and pulling out a single green ball. To use other language (which some philosophers might say is not appropriate here as the situation is too specific), there is little basis for induction about the contents of the box. Nor is the situation very different if your friend reaches in three times in a row and hands you a green ball each time.\nSo far we have put our question rather vaguely. 
Let us frame a more precise inquiry: What do we predict about the next item(s) we might draw from the carton? If we assume — based on who-knows-what information or notions — that another ball will emerge, we could simply use the principle of sameness and (until we see a ball of another color) predict that the next ball will be green, whether one or three or 100 balls is (are) drawn.\nBut now what about if Arista pulls out nine green balls and one red ball? The principle of sameness cannot be applied as simply as before. Based on the last previous ball, the next one will be red. But taking into account all the balls we have seen, the next will “probably” be green. We have no solid basis on which to go further. There cannot be any “solution” to the “problem” of reaching a general conclusion on the basis of these specific pieces of evidence.\nNow consider what you might conclude if you were told that a single green ball had been drawn with a random sampling procedure from a box containing nothing but balls. Knowledge that the sample was drawn randomly from a given universe is grounds for belief that one knows much more than if a sample were not drawn randomly. First, you would be sure — if you had reasonable basis to believe that the sampling really was random, which is not easy to guarantee — that the ball came from the box. Second, you would guess that the proportion of green balls is not very small, because if there are only a few green balls and many other-colored balls, it would be unusual — that is, the event would have a low probability — to draw a green ball. Not impossible, but unlikely. And we can compute the probability of drawing a green ball — or any other combination of colors — for different assumed compositions within the box . So the knowledge that the sampling process is random greatly increases our ability — or our confidence in our ability — to infer the contents of the box.\nLet us note well the strategy of the previous paragraph: Ask about the probability that one or more various possible contents of the box (the “universe”) will produce the observed sample , on the assumption that the sample was drawn randomly. This is the central strategy of all statistical inference , though I do not find it so stated elsewhere. We shall come back to this idea shortly.\nThere are several kinds of questions one might ask about the contents of the box. One general category includes questions about our best guesses of the box’s contents — that is, questions of estimation . Another category includes questions about our surety of that description, and our surety that the contents are similar or different from the contents of other boxes; the consideration of surety follows after estimates are made. The estimation questions can be subtle and unexpected (Savage 1972, chap. 15), but do not cause major controversy about the foundations of statistics. So we can quickly move on to questions about the extent of surety in our estimations.\nConsider your reaction if the sampling produces 10 green balls in a row, or 9 out of 10. If you had no other information (a very important assumption that we will leave aside for now), your best guess would be that the box contains all green balls, or a proportion of 9 of 10, in the two cases respectively. This estimation process seems natural enough.\nYou would be surprised if someone told you that instead of the box containing the proportion in the sample, it contained just half green balls. How surprised? 
Intuitively, the extent of your surprise would depend on the probability that a half-green “universe” would produce 10 or 9 green balls out of 10. This surprise is a key element in the logic of the hypothesis-testing branch of statistical inference.\nWe learn more about the likely contents of the box by asking about the probability that various specific populations of balls within the box would produce the particular sample that we received. That is, we can ask how likely a collection of 25 percent green balls is to produce (say) 9 of 10 green ones, and how likely collections of 50 percent, 75 percent, 90 percent (and any other collections of interest) are to produce the observed sample. That is, we ask about the consistency between any particular hypothesized collection within the box and the sample we observe. And it is reasonable to believe that those universes which have greater consistency with the observed sample — that is, those universes that are more likely to produce the observed sample — are more likely to be in the box than other universes. This (to repeat, as I shall repeat many times) is the basic strategy of statistical investigation. If we observe 9 of 10 green balls, we then determine that universes with (say) 9/10 and 10/10 green balls are more consistent with the observed evidence than are universes of 0/10 and 1/10 green balls. So by this process of considering specific universes that the box might contain, we make possible more specific inferences about the box’s probable contents based on the sample evidence than we could without this process.\nPlease notice the role of the assessment of probabilities here: By one technical means or another (either simulation or formulas), we assess the probabilities that a particular universe will produce the observed sample, and other samples as well.\nIt is of the highest importance to recognize that without additional knowledge (or assumption) one cannot make any statements about the probability of the sample having come from any particular universe , on the basis of the sample evidence. (Better read that last sentence again.) We can only speak about the probability that a particular universe will produce the observed sample, a very different matter. This issue will arise again very sharply in the context of confidence intervals.\nLet us generalize the steps in statistical inference:\n\nFrame the original question as: What is the chance of getting the observed sample x from population X? That is, what is probability of (If x then X)?\nProceed to this question: What kinds of samples does X produce, with which probability? That is, what is the probability of this particular x coming from X? That is, what is p(x|X)?\nActually investigate the behavior of X with respect to x and other samples. One can do this in two ways:\n\nUse the formulaic calculus of probability, perhaps resorting to Monte Carlo methods if an appropriate formula does not exist. Or,\nUse resampling (in the larger sense), the domain of which equals (all Monte Carlo experimentation) minus (the use of Monte Carlo methods for approximations, investigation of complex functions in statistics and other theoretical mathematics, and uses elsewhere in science). 
Resampling in its more restricted sense includes the bootstrap, permutation tests, and other non-parametric methods.\n\nInterpretation of the probabilities that result from step 3 in terms of\n\nacceptance or rejection of hypotheses, ii) surety of conclusions, or iii) inputs to decision theory.\n\n\nHere is a short definition of statistical inference:\n\nThe selection of a probabilistic model that might resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.\n\nWe will get even more specific about the procedure when we discuss the canonical procedures for hypothesis testing and for the finding of confidence intervals in the chapters on those subjects.\nThe discussion so far has been in the spirit of what is known as hypothesis testing . The result of a hypothesis test is a decision about whether or not one believes that the sample is likely to have been drawn randomly from the “benchmark universe” X. The logic is that if the probability of such a sample coming from that universe is low, we will then choose to believe the alternative — to wit, that the sample came from the universe that resembles the sample.\n\nThe underlying idea is that if an event would be very surprising if it really happened — as it would be very surprising if the dog had really eaten the homework (see Chapter 21) — we are inclined not to believe in that possibility. (This logic will be explored further in later chapters on hypothesis testing.)\nWe have so far assumed that our only relevant knowledge is the sample. And though we almost never lack some additional information, this can be a sensible way to proceed when we wish to suppress any other information or speculation. This suppression is controversial; those known as Bayesians or subjectivists want us to take into account all the information we have. But even they would not dispute suppressing information in certain cases — such as a teacher who does not want to know students’ IQ scores because s/he might want avoid the possibility of unconsciously being affected by that score, or an employer who wants not to know the potential employee’s ethnic or racial background even though the hiring process might be more “successful” on some metric, or a sports coach who refuses to pick the starting team each year until the players have competed for the positions.\n\nNow consider a variant on the green-ball situation discussed above. Assume now that you are told that samples of balls are alternately drawn from one of two specified universes — two buckets of balls, one with 50 percent green balls and the other with 80 percent green balls. Now you are shown a sample of nine green and one red balls drawn from one of those buckets. On the basis of your sample you can then say how probable it is that the sample came from one or the other universe . You proceed by computing the probabilities (often called the likelihoods in this situation) that each of those two universes would individually produce the observed samples — probabilities that you could arrive at with resampling, with Pascal’s Triangle, or with a table of binomial probabilities, or with the Normal approximation and the Z distribution, or with yet other devices. Those probabilities are .01 and .27, and the ratio of the two (0.1/.27) is a bit less than .04. That is, fair betting odds are about 1 to 27.\nLet us consider a genetics problem on this model. Plant A produces 3/4 black seeds and 1/4 reds; plant B produces all reds. You get a red seed. 
Which plant would you guess produced it? You surely would guess plant B. Now, how about 9 reds and a black, from Plants A and C, the latter producing 50 percent reds on average?\nTo put the question more precisely: What betting odds would you give that the one red seed came from plant B? Let us reason this way: If you do this again and again, 4 of 5 of the red seeds you see will come from plant B. Therefore, reasonable (or “fair”) odds are 4 to 1, because this is in accord with the ratios with which red seeds are produced by the two plants — 4/4 to 1/4.\nHow about the sample of 9 reds and a black, and plants A and C? It would make sense that the appropriate odds would be derived from the probabilities of the two plants producing that particular sample, probabilities which we computed above.\nNow let us move to a bit more complex problem: Consider two buckets — bucket G with 2 red and 1 black balls, and bucket H with 100 red and 100 black balls. Someone flips a coin to decide which bucket will be drawn from, reaches into that bucket, and chooses two balls without replacing the first one before drawing the second. Both are red. What are the odds that the sample came from bucket G? Clearly, the answer should derive from the probabilities that the two buckets would produce the observed sample.\n(Now just for fun, how about if the first ball drawn is thrown back after examining? What now are the appropriate odds?)\nLet’s restate the central issue. One can state the probability that a particular plant which produces on average 1 red and 3 black seeds will produce one red seed, or 5 reds among a sample of 10. But without further assumptions — such as the assumption above that the possibilities are limited to two specific universes — one cannot say how likely a given red seed is to have come from a given plant, even if we know that that plant produces only reds. (For example, it may have come from other plants producing only red seeds.)\nWhen we limit the possibilities to two universes (or to a larger set of specified universes) we are able to put a probability on one hypothesis or another. But to repeat, in many or most cases, one cannot reasonably assume it is only one or the other. And then we cannot state any odds that the sample came from a particular universe. This is a very difficult point to grasp, experience shows, but a crucial one. (It is the sort of subtle issue that makes statistics so difficult.)\nThe additional assumptions necessary to talk about the probability that the red seed came from a given plant are the stuff of statistical inference. And they must be combined with such “objective” probabilistic assessments as the probability that a 1-red-3-black plant will produce one red, or 5 reds among 10 seeds.\nNow let us move one step further. Instead of stating as a fact under our control that there is a .5 chance of the sample being drawn from each of the two buckets in the problem above, let us assume that we do not know the probability of each bucket being picked, but instead we estimate a probability of .5 for each bucket, based on a variety of other information that all is uncertain. 
But though the facts are now different, the most reasonable estimate of the odds that the observed sample was drawn from one or the other bucket will not be different than before — because in both situations we were working with a “prior probability” of .5.\n\nNow let us go a step further by allowing the universes from which the sample may have come to have different assumed probabilities as well as different compositions. That is, we now consider prior probabilities other than .5.\nHow do we decide which universe(s) to investigate for the probability of producing the observed sample, and of producing samples that are even less likely, in the sense of being more surprising? That judgment depends upon the purpose of your analysis, upon your point of view of how statistics ought to be done, and upon some other factors.\nIt should be noted that the logic described so far applies in exactly the same fashion whether we do our work estimating probabilities with the resampling method or with conventional methods. We can figure the probability of nine or more green chips from a universe of (say) p = .7 with either approach.\nSo far we have discussed the comparison of various hypotheses and possible universes. We must also consider where the consideration of the reliability of estimates comes in. This leads to the concept of confidence limits, which will be discussed in Chapter 26 and Chapter 27." + }, + { + "objectID": "inference_intro.html#samples-whose-observations-may-have-more-than-two-values", + "href": "inference_intro.html#samples-whose-observations-may-have-more-than-two-values", + "title": "18  Introduction to Statistical Inference", + "section": "18.2 Samples Whose Observations May Have More Than Two Values", + "text": "18.2 Samples Whose Observations May Have More Than Two Values\nSo far we have discussed samples and universes that we can characterize as proportions of elements which can have only one of two characteristics — green or other, in this case, which is equivalent to “1” or “0.” This expositional choice has been solely for clarity. All the ideas discussed above pertain just as well to samples whose observations may have more than two values, and which may be either discrete or continuous." + }, + { + "objectID": "inference_intro.html#summary-and-conclusions", + "href": "inference_intro.html#summary-and-conclusions", + "title": "18  Introduction to Statistical Inference", + "section": "18.3 Summary and conclusions", + "text": "18.3 Summary and conclusions\nA statistical question asks about the probabilities of a sample having arisen from various source universes in light of the evidence of a sample. In every case, the statistical answer comes from considering the behavior of particular specified universes in relation to the sample evidence and to the behavior of other possible universes. That is, a statistical problem is an exercise in postulating universes of interest and interpreting the probabilistic distributions of results of those universes. The preceding sentence is the key operational idea in statistical inference.\nDifferent sorts of realistic contexts call for different ways of framing the inquiry. For each of the established models there are types of problems which fit that model better than other models, and other types of problems for which the model is quite inappropriate.\nFundamental wisdom in statistics, as in all other contexts, is to employ a large tool kit rather than just applying only a hammer, screwdriver, or wrench no matter what the problem is at hand. 
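The likelihood comparisons above can be checked directly. The following is a minimal sketch, not code from the book: it estimates by simulation the two likelihoods for the nine-green-in-ten sample (roughly .01 under the 50-percent universe and .27 under the 80-percent universe), and then computes exactly the chance that bucket G (2 red, 1 black) or bucket H (100 red, 100 black) yields two red balls in two draws without replacement. The function name, trial count, and use of NumPy's default_rng are our own choices.

```python
import numpy as np

rng = np.random.default_rng()

def likelihood_nine_green(p_green, n_trials=100_000):
    """Estimate P(exactly 9 green in a sample of 10) for a universe
    whose proportion of green balls is p_green."""
    n_green = (rng.random((n_trials, 10)) < p_green).sum(axis=1)
    return np.mean(n_green == 9)

like_50 = likelihood_nine_green(0.5)    # about .01
like_80 = likelihood_nine_green(0.8)    # about .27
print(like_50, like_80, like_50 / like_80)   # ratio a bit under .04

# Bucket G has 2 red and 1 black ball; bucket H has 100 red and 100 black.
# Exact probability of drawing two reds in two draws without replacement:
p_two_red_G = (2 / 3) * (1 / 2)          # 1/3
p_two_red_H = (100 / 200) * (99 / 199)   # just under 1/4
print(p_two_red_G, p_two_red_H)
```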
(Philosopher Abraham Kaplan once stated Kaplan’s Law of scientific method: Give a small boy a hammer and there is nothing that he will encounter that does not require pounding.) Studying the text of a poem statistically to infer whether Shakespeare or Bacon was the more likely author is quite different than inferring whether bioengineer Smythe can produce an increase in the proportion of calves, and both are different from decisions about whether to remove a basketball player from the game or to produce a new product.\nSome key points: 1) In statistical inference as in all sound thinking, one’s purpose is central . All judgments should be made relative to that purpose, and in light of costs and benefits. (This is the spirit of the Neyman-Pearson approach). 2) One cannot avoid making judgments; the process of statistical inference cannot ever be perfectly routinized or objectified. Even in science, fitting a model to experience requires judgment. 3) The best ways to infer are different in different situations — economics, psychology, history, business, medicine, engineering, physics, and so on. 4) Different tools must be used when the situations call for them — sequential vs. fixed sampling, Neyman-Pearson vs. Fisher, and so on. 5) In statistical inference it is wise not to argue about the proper conclusion when the data and procedures are ambiguous. Instead, whenever possible, one should go back and get more data, hence lessening the importance of the efficiency of statistical tests. In some cases one cannot easily get more data, or even conduct an experiment, as in biostatistics with cancer patients. And with respect to the past one cannot produce more historical data. But one can gather more and different kinds of data, e.g. the history of research on smoking and lung cancer.\n\n\n\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New York: Dover Publications, Inc." + }, + { + "objectID": "point_estimation.html#ways-to-estimate-the-mean", + "href": "point_estimation.html#ways-to-estimate-the-mean", + "title": "19  Point Estimation", + "section": "19.1 Ways to estimate the mean", + "text": "19.1 Ways to estimate the mean\n\n19.1.1 The Method of Moments\nSince elementary school you have been taught to estimate the mean of a universe (or calculate the mean of a sample) by taking a simple arithmetic average. A fancy name for that process is “the method of moments.” It is the equivalent of estimating the center of gravity of a pole by finding the place where it will balance on your finger. If the pole has the same size and density all along its length, that balance point will be halfway between the endpoints, and the point may be thought of as the arithmetic average of the distances from the balance point of all the one-centimeter segments of the pole.\nConsider this example:\nExample: Twenty-nine Out of Fifty People Polled Say They Will Vote For The Democrat. Who Will Win The Election? The Relationship Between The Sample Proportion and The Population Proportion in a Two-Outcome Universe.\nYou take a random sample of 50 people in Maryland and ask which party’s candidate for governor they will vote for. Twenty-nine say they will vote for the Democrat. Let’s say it is reasonable to assume in this case that people will vote exactly as they say they will. 
The statistical question then facing you is: What proportion of the voters in Maryland will vote for the Democrat in the general election?\nYour intuitive best guess is that the proportion of the “universe” — which is composed of voters in the general election, in this case — will be the same as the proportion of the sample. That is, 58 percent = 29/50 is likely to be your guess about the proportion that will vote Democratic. Of course, your estimate may be too high or too low in this particular case, but in the long run — that is, if you take many samples like this one — on the average the sample mean will equal the universe (population) proportion, for reasons to be discussed later.\nThe sample mean seems to be the “natural” estimator of the population mean in this and many other cases. That is, it seems quite natural to say that the best estimate is the sample mean, and indeed it probably is best. But why? This is the problem of inverse probability that has bedeviled statisticians for two centuries.\nIf the only information that you have (or that seems relevant) is the evidence of the sample, then there would seem to be no basis for judging that the shape and location of the population differs to the “left” or “right” from that of the sample. That is often a strong argument.\nAnother way of saying much the same thing: If a sample has been drawn randomly, each single observation is a representative estimator of the mean; if you only have one observation, that observation is your best guess about the center of the distribution (if you have no reason to believe that the distribution of the population is peculiar — such as not being symmetrical). And therefore the sum of 2, 3…n of such observations (divided by their number) should have that same property, based on basic principles.\nBut if you are on a ship at sea and a leaf comes raining down from the sky, your best guess about the location of the tree from which it comes is not directly above you, and if two leaves fall, the midpoint of them is not the best location guess, either; you know that trees don’t grow at sea, and birds sometimes carry leaves out to sea.\nWe’ll return to this subject when we discuss criteria of methods.\n\n\n19.1.2 Expected Value and the Method of Moments\nConsider this gamble: You and another person roll a die. If it falls with the “6” upwards you get $4, and otherwise you pay $1. If you play 120 times, at the end of the day you would expect to have (20 * $4 - 100 * $1 =) -$20 dollars. We say that -$20 is your “expected value,” and your expected value per roll is (-$20 / 120 =) $.166 or the loss of 1/6 of a dollar. If you get $5 instead of $4, your expected value is $0.\nThis is exactly the same idea as the method of moments, and we even use the same term — “expected value,” or “expectation” — for the outcome of a calculation of the mean of a distribution. We say that the expected value for the success of rolling a “6” with a single cast of a die is 1/6, and that the expected value of rolling a “6” or a “5” is (1/6 + 1/6 = ) 2/6.\n\n\n19.1.3 The Maximum Likelihood Principle\nAnother way of thinking about estimation of the population mean asks: Which population(s) would, among the possible populations, have the highest probability of producing the observed sample? This criterion frequently produces the same answer as the method of moments, but in some situations the estimates differ. 
Furthermore, the logic of the maximum-likelihood principle is important.\nConsider that you draw without replacement six balls — 2 black and 4 white — from a bucket that contains twenty balls. What would you guess is the composition of the bucket from which they were drawn? Is it likely that those balls came from a bucket with 4 white and 16 black balls? Rather obviously not, because it would be most unusual to get all the 4 white balls in your draw. Indeed, we can estimate the probability of that happening with simulation or formula to be about .003.\nHow about a bucket with 2 black and 18 whites? The probability is much higher than with the previous bucket, but it still is low — about .075.\nLet us now estimate the probabilities for all buckets across the range of probabilities. In Figure 19.1 we see that the bucket with the highest probability of producing the observed sample has the same proportions of black and white balls as does the sample. This is called the “maximum likelihood universe.” Nor should this be very surprising, because that universe obviously has an equal chance of producing samples with proportions below and above that observed proportion — as was discussed in connection with the method of moments.\nWe should note, however, that the probability that even such a maximum-likelihood universe would produce exactly the observed sample is very low (though it has an even lower probability of producing any other sample).\n\n\n\n\n\nFigure 19.1: Number of White Balls in the Universe (N=20)" + }, + { + "objectID": "point_estimation.html#choice-of-estimation-method", + "href": "point_estimation.html#choice-of-estimation-method", + "title": "19  Point Estimation", + "section": "19.2 Choice of Estimation Method", + "text": "19.2 Choice of Estimation Method\nWhen should you base your estimate on the method of moments, or of maximum likelihood, or still some other principle? There is no general answer. Sound estimation requires that you think long and hard about the purpose of your estimation, and fit the method to the purpose. I am well aware that this is a very vague statement. But though it may be an uncomfortable idea to live with, guidance to sound statistical method must be vague because it requires sound judgment and deep knowledge of the particular set of facts about the situation at hand." + }, + { + "objectID": "point_estimation.html#criteria-of-estimates", + "href": "point_estimation.html#criteria-of-estimates", + "title": "19  Point Estimation", + "section": "19.3 Criteria of estimates", + "text": "19.3 Criteria of estimates\nHow should one judge the soundness of the process that produces an estimate? General criteria include representativeness and accuracy . But these are pretty vague; we’ll have to get more specific.\n\n19.3.1 Unbiasedness\nConcerning representativeness: We want a procedure that will not be systematically in error in one direction or another. In technical terms, we want an “unbiased estimate,” if possible. “Unbiased” in this case does not mean “friendly” or “unprejudiced,” but rather implies that on the average — that is, in the long run, after taking repeated samples — estimates that are too high will about balance (in percentage terms) those that are too low. The mean of the universe (or the proportion, if we are speaking of two-valued “binomial situations”) is a frequent object of our interest. 
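The curve sketched in Figure 19.1 can be reproduced with a few lines of arithmetic. Here is a sketch of our own (not the book's notebook code) that uses exact hypergeometric counting via math.comb; the variable and function names are our choices.

```python
from math import comb

N, n_draws = 20, 6            # balls in the bucket, balls drawn without replacement
obs_white, obs_black = 4, 2   # the observed sample

def prob_of_sample(n_white):
    """Probability that a bucket with n_white white balls (and N - n_white black)
    yields exactly 4 white and 2 black in 6 draws without replacement."""
    n_black = N - n_white
    return comb(n_white, obs_white) * comb(n_black, obs_black) / comb(N, n_draws)

probs = {w: prob_of_sample(w) for w in range(N + 1)}
print(round(probs[4], 3))    # 4 white, 16 black: about .003, as in the text
print(round(probs[18], 3))   # 18 white, 2 black: higher, but still low
print(max(probs, key=probs.get))   # maximum-likelihood composition: 13-14 white,
                                   # i.e. about the same proportion as the sample
```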
And the sample mean is (in most cases) an unbiased estimate of the population mean.\nLet’s now see an informal proof that the mean of a randomly drawn sample is an “unbiased” estimator of the population mean. That is, the errors of the sample means will cancel out after repeated samples because the mean of a large number of sample means approaches the population mean. A second “law” to be informally proven is that the size of the inaccuracy of a sample proportion is largest when the population proportion is near 50 percent, and smallest when it approaches zero percent or 100 percent.\nThe statement that the sample mean is an unbiased estimate of the population mean holds for many but not all kinds of samples — proportions of two-outcome (Democrat-Republican) events (as in this case) and also the means of many measured-data universes (heights, speeds, and so on) that we will come to later.\nBut, you object, I have only said that this is so; I haven’t proven it. Quite right. Now we will go beyond this simple assertion, though we won’t reach the level of formal proof. This discussion applies to conventional analytic statistical theory as well as to the resampling approach.\nWe want to know why the mean of a repeated sample — or the proportion, in the case of a binomial universe — tends to equal the mean (or the proportion) of the universe. Consider a population of one thousand voters. Split the population into random sub-populations of 500 voters each; let’s call these sub-populations by the name “samples.” Almost inevitably, the proportions voting Democratic in the samples will not exactly equal the “true” proportions in the population. (Why not? Well, why should they split evenly? There is no general reason why they should.) But if the sample proportions do not equal the population proportion, the two sample proportions must differ from the population proportion by the same amount, in opposite directions.\nIf the population proportion is 600/1000 = 60 percent, and one sample’s proportion is 340/500 = 68 percent, then the other sample’s proportion must be (600-340 = 260)/500 = 52 percent. So if in the very long run you would choose each of these two samples about half the time (as you would if you selected between the two samples randomly) the average of the sample proportions would be (68 percent + 52 percent)/2 = 60 percent. This shows that on the average the sample proportion is a fair and unbiased estimate of the population proportion — if the sample is half the size of the population.\nIf we now sub-divide each of our two samples of 500 (each of which was half the population size) into equal-size subsamples of 250 each, the same argument will hold for the proportions of the samples of 250 with respect to the sample of 500: The proportion of a 250-voter sample is an unbiased estimate of the proportion of the 500-voter sample from which it is drawn. It seems inductively reasonable, then, that if the proportion of a 250-voter sample is an unbiased estimate of the 500-voter sample from which it is drawn, and the proportion of a 500-voter sample is an unbiased estimate of the 1000-voter population, then the proportion of a 250-voter sample should be an unbiased estimate of the population proportion. And if so, this argument should hold for samples of 1/2 x 250 = 125, and so on — in fact for any size sample.\nThe argument given above is not a rigorous formal proof. 
But I doubt that the non-mathematician needs, or will benefit from, a more formal proof of this proposition. You are more likely to be persuaded if you demonstrate this proposition to yourself experimentally in the following manner:\n\nStep 1. Let “1-6” = Democrat, “7-10” = Republican\nStep 2. Choose a sample of, say, ten random numbers, and record the proportion Democrat (the sample proportion).\nStep 3. Repeat step 2 a thousand times.\nStep 4. Compute the mean of the sample proportions, and compare it to the population proportion of 60 percent. This result should be close enough to reassure you that on the average the sample proportion is an “unbiased” estimate of the population proportion, though in any particular sample it may be substantially off in either direction.\n\n\n\n19.3.2 Efficiency\nWe want an estimate to be accurate, in the sense that it is as close to the “actual” value of the parameter as possible. Sometimes it is possible to get more accuracy at the cost of biasing the estimate. More than that does not need to be said here.\n\n\n19.3.3 Maximum Likelihood\nKnowing that a particular value is the most likely of all values may be of importance in itself. For example, a person betting on one horse in a horse race is interested in his/her estimate of the winner having the highest possible probability, and is not the slightest bit interested in getting nearly the right horse. Maximum likelihood estimates are of particular interest in such situations.\nSee (Savage 1972, chap. 15), for many other criteria of estimators." + }, + { + "objectID": "point_estimation.html#criteria-of-the-criteria", + "href": "point_estimation.html#criteria-of-the-criteria", + "title": "19  Point Estimation", + "section": "19.4 Criteria of the Criteria", + "text": "19.4 Criteria of the Criteria\nWhat should we look for in choosing criteria? Logically, this question should precede the above list of criteria.\nSavage (1972, chap. 15) has urged that we should always think in terms of the consequences of choosing criteria, in light of our purposes in making the estimate. I believe that he is making an important point. But it often is very hard work to think the matter through all the way to the consequences of the criteria chosen. And in most cases, such fine inquiry is not needed, in the sense that the estimating procedure chosen will be the same no matter what consequences are considered.1" + }, + { + "objectID": "point_estimation.html#estimation-of-accuracy-of-the-point-estimate", + "href": "point_estimation.html#estimation-of-accuracy-of-the-point-estimate", + "title": "19  Point Estimation", + "section": "19.5 Estimation of accuracy of the point estimate", + "text": "19.5 Estimation of accuracy of the point estimate\nSo far we have discussed how to make a point estimate, and criteria of good estimators. We also are interested in estimating the accuracy of that estimate. That subject — which is harder to grapple with — is discussed in Chapter 26 and Chapter 27 on confidence intervals.\nMost important: One cannot sensibly talk about the accuracy of probabilities in the abstract, without reference to some set of facts. In the abstract, the notion of accuracy loses any meaning, and invites confusion and argument." 
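The four-step experiment described in Section 19.3.1 above translates directly into a short simulation. This is our sketch, assuming a universe that is 60 percent Democrat, samples of ten, and a thousand repeats, as in the steps; it is not code from the book.

```python
import numpy as np

rng = np.random.default_rng()

population_proportion = 0.60        # Step 1: digits 1-6 stand for Democrat
n_trials, sample_size = 1_000, 10   # Steps 2 and 3: samples of ten, repeated 1,000 times

samples = rng.random((n_trials, sample_size)) < population_proportion
sample_proportions = samples.mean(axis=1)

# Step 4: the mean of the sample proportions should land close to 60 percent,
# even though individual samples are often far off in either direction.
print(sample_proportions.mean())
print(sample_proportions.min(), sample_proportions.max())
```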
+ }, + { + "objectID": "point_estimation.html#sec-uses-of-mean", + "href": "point_estimation.html#sec-uses-of-mean", + "title": "19  Point Estimation", + "section": "19.6 Uses of the mean", + "text": "19.6 Uses of the mean\nLet’s consider when the use of a device such as the mean is valuable, in the context of the data on marksmen in Table 19.1.2. If we wish to compare marksman A versus marksman B, we can immediately see that marksman A hit the bullseye (80 shots for 3 points each time) as many times as marksman B hit either the bullseye or simply got in the black (30 shots for 3 points and 50 shots for 2 points), and A hit the black (2 points) as many times as B just got in the white (1 point). From these two comparisons covering all the shots, in both of which comparisons A does better, it is immediately obvious that marksman A is better than marksman B. We can say that A’s score dominates B’s score.\n\n\nTable 19.1: Score percentages by marksman\n\n\n\n\n\n\n\nScore\n# occurrences\nProbability\n\n\n\n\nMarksman A\n\n\n1\n0\n0\n\n\n2\n20\n.2\n\n\n3\n80\n.8\n\n\nMarksman B\n\n\n1\n20\n.2\n\n\n2\n50\n.5\n\n\n3\n30\n.3\n\n\nMarksman C\n\n\n1\n10\n.1\n\n\n2\n60\n.6\n\n\n3\n30\n.3\n\n\n\n\nWhen we turn to comparing marksman C to marksman D, however, we cannot say that one “dominates” the other as we could with the comparison of marksmen A and B. Therefore, we turn to a summarizing device. One such device that is useful here is the mean. For marksman C the mean score is \\((40 * 1) + (10 * 2) + (50 * 3) = 210\\), while for marksman D the mean score is \\((10 * 1) + (60 * 2) + (30 * 3) = 220\\). Hence we can say that D is better than C even though D’s score does not dominate C’s score in the bullseye category.\nAnother use of the mean (Gnedenko, Aleksandr, and Khinchin 1962, 68) is shown in the estimation of the number of matches that we need to start fires for an operation carried out 20 times in a day (Table 19.2). Let’s say that the number of cases where s/he needs 1, 2 … 5 matches to start a fire are as follows (along with their probabilities) based on the last 100 fires started:\n\n\nTable 19.2: Number of matches needed to start a fire\n\n\nNumber of Matches\nNumber of Cases\nProbabilities\n\n\n\n\n1\n7\n.16\n\n\n2\n16\n.16\n\n\n3\n55\n.55\n\n\n4\n21\n.21\n\n\n5\n1\n.01\n\n\n\n\nIf you know that the operator will be lighting twenty fires, you can estimate the number of matches that s/he will need by multiplying the mean number of matches (which turns out be \\(1 * .07 + 2 * 0.16 + 3 * 0.55 + 4 * 0.21 + 5 * 0.01 = 2.93\\)) in the observed experience by 20. Here you are using the mean as an indication of a representative case.\nIt is common for writers to immediately produce the data in the forms of percentages or probabilities. But I think it is important to include in our discussion the absolute numbers, because this is what one must begin with in practice. And keeping the absolute numbers in mind is likely to avoid some confusions that arise if one immediately goes to percentages or to probabilities.\nStill another use for the mean is when you have a set of observations with error in them. The mean of the observations probably is your best guess about which is the “right” one. Furthermore, the distance you are likely to be off the mark is less if you select the mean of the observations. An example might be a series of witnesses giving the police their guesses about the height of a man who overturned an outhouse. 
The mean probably is the best estimate to give to police officers as a description of the perpetrator (though it would be helpful to give the range of the observations as well).\nWe use the mean so often, in so many different circumstances, that we become used to it and never think about its nature. So let’s do so a bit now.\nDifferent statistical ideas are appropriate for business and engineering decisions, biometrics, econometrics, scientific explanation (the philosophers’ case), and other fields. So nothing said here holds everywhere and always.\nOne might ask: What is the “meaning” of a mean? But that is not a helpful question. Rather, we should ask about the uses of a mean. Usually a mean is used to summarize a set of data. As we saw with marksmen C and D, it often is difficult to look at a table of data and obtain an overall idea of how big or how small the observations are; the mean (or other measurements) can help. Or if you wish to compare two sets of data where the distributions of observations overlap each other, comparing the means of the two distributions can often help you better understand the matter.\nAnother complication is the confusion between description and estimation , which makes it difficult to decide where to place the topic of descriptive statistics in a textbook. For example, compare the mean income of all men in the U. S., as measured by the decennial census. This mean of the universe can have a very different meaning from the mean of a sample of men with respect to the same characteristic. The sample mean is a point estimate, a statistical device, whereas the mean of the universe is a description. The use of the mean as an estimator is fraught with complications. Still, maybe it is no more complicated than deciding what describer to use for a population. This entire matter is much more complex than it appears at first glance.\nWhen the sample size approaches in size the entire population — when the sample becomes closer and closer to being the same as the population — the two issues blend. What does that tell us? Anything? What is the relationship between a baseball player’s average for two weeks, and his/her lifetime average? This is subtle stuff — rivaling the subtleness of arguments about inference versus probability, and about the nature of confidence limits (see Chapter 26 and Chapter 27 ). Maybe the only solid answer is to try to stay super-clear on what you are doing for what purpose, and to ask continually what job you want the statistic (or describer) to do for you.\nThe issue of the relationship of sample size to population size arises here. If the sample size equals or approaches the population size, the very notion of estimation loses its meaning.\nThe notion of “best estimator” makes no sense in some situations, including the following: a) You draw one black ball from a bucket. You cannot put confidence intervals around your estimate of the proportion of black balls, except to say that the proportion is somewhere between 1 and 0. No one would proceed without bringing in more information. That is, when there is almost no information, you simply cannot make much of an estimate — and the resampling method breaks down, too. It does not help much to shift the discussion to the models of the buckets, because then the issue is the unknown population of the buckets, in which case we need to bring in our general knowledge. 
b) When the sample size equals or is close to the population size, as discussed in this section, the data are a description rather than an estimate, because the sample is getting to be much the same as the universe; that is, if there are twelve people in your family, and you randomly take a sample of the amount of sugar used by eight members of the family, the results of the sample cannot be very different than if you compute the amount for all twelve family members. In such a case, the interpretation of the mean becomes complex.\nUnderlying all estimation is the assumption of continuation, which follows from random sampling — that there is no reason to expect the next sample to be different from the present one in any particular fashion, mean or variation. But we do expect it to be different in some fashion because of sampling variability." + }, + { + "objectID": "point_estimation.html#conclusion", + "href": "point_estimation.html#conclusion", + "title": "19  Point Estimation", + "section": "19.7 Conclusion", + "text": "19.7 Conclusion\nA Newsweek article says, “According to a recent reader’s survey in Bride’s magazine, the average blowout [wedding] will set you back about $16,000” (Feb 15, 1993, p. 67). That use of the mean (I assume) for the average, rather than the median, could cost the parents of some brides a pretty penny. It could be that the cost for the average person — that is, the median expenditure — might be a lot less than $16,000. (A few million dollar weddings could have a huge effect on a survey mean.) An inappropriate standard of comparison might enter into some family discussions as a result of this article, and cause higher outlays than otherwise. This chapter helps one understand the nature of such estimates.\n\n\n\n\nGnedenko, Boris Vladimirovich, I Aleksandr, and Akovlevich Khinchin. 1962. An Elementary Introduction to the Theory of Probability. New York, NY, USA: Dover Publications, Inc. https://archive.org/details/gnedenko-khinchin-an-elementary-introduction-to-the-theory-of-probability.\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New York: Dover Publications, Inc." + }, + { + "objectID": "framing_questions.html#introduction", + "href": "framing_questions.html#introduction", + "title": "20  Framing Statistical Questions", + "section": "20.1 Introduction", + "text": "20.1 Introduction\nChapter 3 - Chapter 15 discussed problems in probability theory. That is, we have been estimating the probability of a composite event resulting from a system in which we know the probabilities of the simple events — the “parameters” of the situation.\nThen Chapter 17 - Chapter 19 discussed the underlying philosophy of statistical inference.\nNow we turn to inferential-statistical problems. Up until now, we have been estimating the complex probabilities of known universes — the topic of probability . Now as we turn to problems in statistics , we seek to learn the characteristics of an unknown system — the basic probabilities of its simple events and parameters. (Here we note again, however, that in the process of dealing with them, all statistical-inferential problems eventually are converted into problems of pure probability). To assess the characteristics of the system in such problems, we employ the characteristics of the sample(s) that have been drawn from it.\nFor further discussion on the distinction between inferential statistics and probability theory, see Chapter 2 - Chapter 3.\nThis chapter begins the topic of hypothesis testing . 
The issue is: whether to adjudge that a particular sample (or samples) come(s) from a particular universe. A two-outcome yes-no universe is discussed first. Then we move on to “measured-data” universes, which are more complex than yes-no outcomes because the variables can take on many values, and because we ask somewhat more complex questions about the relationships of the samples to the universes. This topic is continued in subsequent chapters.\nIn a typical hypothesis-testing problem presented in this chapter, one sample of hospital patients is treated with a new drug and a second sample is not treated but rather given a “placebo.” After obtaining results from the samples, the “null” or “test” or “benchmark” hypothesis would be that the resulting drug and placebo samples are drawn from the same universe. This device of the null hypothesis is the equivalent of stating that the drug had no effect on the patients. It is a special intellectual strategy developed to handle such statistical questions.\nWe start with the scientific question: Does the medicine have an effect? We then translate it into a testable statistical question: How likely is it that the sample means come from the same universe? This process of question-translation is the crucial step in hypothesis-testing and inferential statistics. The chapter then explains how to solve these problems using resampling methods after you have formulated the proper statistical question.\nThough the examples in the chapter mostly focus on tests of hypotheses, the procedures also apply to confidence intervals, which will be discussed later." + }, + { + "objectID": "framing_questions.html#translating-scientific-questions-into-probabilistic-and-statistical-questions", + "href": "framing_questions.html#translating-scientific-questions-into-probabilistic-and-statistical-questions", + "title": "20  Framing Statistical Questions", + "section": "20.2 Translating scientific questions into probabilistic and statistical questions", + "text": "20.2 Translating scientific questions into probabilistic and statistical questions\nThe first step in using probability and statistics is to translate the scientific question into a statistical question. Once you know exactly which prob-stats question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy (though subtle). The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms.\nThough this translation is difficult, it involves no mathematics. Rather, this step requires only hard thought. You cannot beg off by saying, “I have no brain for math!” The need is for a brain that will do clear thinking, rather than a brain especially talented in mathematics. A person who uses conventional methods can avoid this hard thinking by simply grabbing the formula for some test without understanding why s/he chooses that test. But resampling pushes you to do this thinking explicitly.\nThis crucial process of translating from a pre-statistical question to a statistical question takes place in all statistical inference. But its nature comes out most sharply with respect to testing hypotheses, so most of what will be said about it will be in that context." 
+ }, + { + "objectID": "framing_questions.html#the-three-types-of-questions", + "href": "framing_questions.html#the-three-types-of-questions", + "title": "20  Framing Statistical Questions", + "section": "20.3 The three types of questions", + "text": "20.3 The three types of questions\nLet’s consider the natures of conceptual, operational, and statistical questions.\n\n20.3.1 The Scientific Question\nA study for either scientific or decision-making purposes properly begins with a general question about the nature of the world — that is, a conceptual or theoretical question. One must then transform this question into an operational-empirical form that one can study scientifically. Thence comes the translation into a technical-statistical question.\nThe scientific-conceptual-theoretical question can be an issue of theory, or a policy choice, or the result of curiosity at large.\nExamples include: Can a bioengineer increase the chance of female calves being born? Is copper becoming less scarce? Are the prices of liquor systematically different in states where the liquor stores are publicly owned compared to states where they are privately owned? Does a new formulation of pig rations lead to faster hog growth? Was the rate of unemployment higher last month than the long-run average, or was the higher figure likely to be the result of sampling error? What are the margins of probable error for an unemployment survey?\n\n\n20.3.2 The Operational-Empirical Question\nThe operational-empirical question is framed in measurable quantities in a meaningful design. Examples include: How likely is this state of affairs (say, the new pig-food formulation) to cause an event such as was observed (say, the observed increase in hog growth)? How likely is it that the mean unemployment rate of a sample taken from the universe of interest (say, the labor force, with an unemployment rate of 10 percent) will be between 11 percent and 12 percent? What is the probability of getting three girls in the first four children if the probability of a girl is .48? How unlikely is it to get nine females out of ten calves in an experiment on your farm? Did the price of copper fall between 1800 and the present? These questions are in the form of empirical questions, which have already been transformed by operationalizing from scientific-conceptual questions.\n\n\n20.3.3 The Statistical Question\nAt this point one must decide whether the conceptual-scientific question is of the form of either a) or b):\n\nA test about whether some sample will frequently happen by chance rather than being very surprising — a test of the “significance” of a hypothesis. Such hypothesis testing takes the following form: How likely is a given “universe” to produce some sample like x? This leads to interpretation about: How likely is a given universe to be the cause of this observed sample?\nA question about the accuracy of the estimate of a parameter of the population based upon sample evidence (an inquiry about “confidence intervals”). This sort of question is considered by some (but not by me) to be a question in estimation — that is, one’s best guess about (say) the magnitude and probable error of the mean or median of a population. This is the form of a question about confidence limits — how likely is the mean to be between x and y?\n\nNotice that the statistical question is framed as a question in probability." 
+ }, + { + "objectID": "framing_questions.html#illustrative-translations", + "href": "framing_questions.html#illustrative-translations", + "title": "20  Framing Statistical Questions", + "section": "20.4 Illustrative translations", + "text": "20.4 Illustrative translations\nThe best way to explain how to translate a scientific question into a statistical question is to illustrate the process.\n\n20.4.1 Illustration A — beliefs about smoking\nWere doctors’ beliefs as of 1964 about the harmfulness of cigarette smoking (and doctors’ own smoking behavior) affected by the social groups among whom the doctors live (Simon 1967)? That was the theoretical question. We decided to define the doctors’ reference groups as the states in which they live, because data about doctors and smoking were available state by state (Modern Medicine, 1964). We could then translate this question into an operational and testable scientific hypothesis by asking this question: Do doctors in tobacco-economy states differ from doctors in other states in their smoking, and in their beliefs about smoking?\nWhich numbers would help us answer this question, and how do we interpret those numbers? We now were ready to ask the statistical question: Do doctors in tobacco-economy states “belong to the same universe” (with respect to smoking) as do other doctors? That is, do doctors in tobacco-economy states have the same characteristics — at least, those characteristics we are interested in, smoking in this case — as do other doctors? Later we shall see that the way to proceed is to consider the statistical hypothesis that these doctors do indeed belong to that same universe; that hypothesis and the universe will be called “benchmark hypothesis” and “benchmark universe” respectively — or in more conventional usage, the “null hypothesis.”\nIf the tobacco-economy doctors do indeed belong to the benchmark universe — that is, if the benchmark hypothesis is correct — then there is a 49/50 chance that doctors in some state other than the state in which tobacco is most important will have the highest rate of cigarette smoking. But in fact we observe that the state in which tobacco accounts for the largest proportion of the state’s income — North Carolina — had (as of 1964) a higher proportion of doctors who smoked than any other state. (Furthermore, a lower proportion of doctors in North Carolina than in any other state said that they believed that smoking is a health hazard.)\nOf course, it is possible that it was just chance that North Carolina doctors smoked most, but the chance is only 1 in 50 if the benchmark hypothesis is correct. Obviously, some state had to have the highest rate, and the chance for any other state was also 1 in 50. But, because our original scientific hypothesis was that North Carolina doctors’ smoking rate would be highest, and we then observed that it was highest even though the chance was only 1 in 50, the observation became interesting and meaningful to us. It means that the chances are strong that there was a connection between the importance of tobacco in the economy of a state and the rate of cigarette smoking among doctors living there (as of 1964).\nTo consider this problem from another direction, it would be rare for North Carolina to have the highest smoking rate for doctors if there were no special reason for it; in fact, it would occur only once in fifty times. 
But, if there were a special reason — and we hypothesize that the tobacco economy provides the reason — then it would not seem unusual or rare for North Carolina to have the highest rate; therefore we choose to believe in the not-so-unusual phenomenon, that the tobacco economy caused doctors to smoke cigarettes.\nLike many (most? all?) actual situations, the cigarettes and doctors’ smoking issue is a rather messy business. Did I have a clear-cut, theoretically-derived prediction before I began? Maybe I did a bit of “data dredging” — that is, maybe I started with a vague expectation, and only arrived at my sharp hypothesis after I saw the data. This would weaken the probabilistic interpretation of the test of significance — but this is something that a scientific investigator does not like to do because it weakens his/her claim for attention and chance of publication. On the other hand, if one were a Bayesian, one could claim that one had a prior probability that the observed effect would occur, and the observed data strengthens that prior; but this procedure would not seem proper to many other investigators. The only wholly satisfactory conclusion is to obtain more data — but as of 1993, there does not seem to have been another data set collected since 1964, and collecting a set by myself is not feasible.\nThis clearly is a case of statistical inference that one could argue about, though perhaps it is true that all cases where the data are sufficiently ambiguous as to require a test of significance are also sufficiently ambiguous that they are properly subject to argument.\nFor some decades the hypothetico-deductive framework was the leading point of view in empirical science. It insisted that the empirical and statistical investigation should be preceded by theory, and only propositions suggested by the theory should be tested. Investigators were not supposed to go back and forth from data to theory to testing. It is now clear that this is an ivory-tower irrelevance, and no one lived by the hypothetico-deductive strictures anyway — just pretended to. Furthermore, there is no sound reason to feel constrained by it, though it strengthens your conclusions if you had theoretical reason in advance to expect the finding you obtained.\n\n\n20.4.2 Illustration B — is it a cure?\nDoes medicine CCC cure some particular cancer? That’s the scientific question. So you give the medicine to six patients who have the cancer and you do not give it to six similar patients who have the cancer. Your sample contains only twelve people because it is not feasible for you to obtain a larger sample. Five of six “medicine” patients get well, two of six “no medicine” patients get well. Does the medicine cure the cancer? That is, if future cancer patients take the medicine, will their rate of recovery be higher than if they did not take the medicine?\nOne way to translate the scientific question into a statistical question is to ask: Do the “medicine” patients belong to the same universe as the “no medicine” patients? That is, we ask whether “medicine” patients still have the same chances of getting well from the cancer as do the “no medicine” patients, or whether the medicine has bettered the chances of those who took it and thus removed them from the original universe, with its original chances of getting well. The original universe, to which the “no medicine” patients must still belong, is the benchmark universe. 
Shortly we shall see that we proceed by comparing the observed results against the benchmark hypothesis that the “medicine” patients still belong to the benchmark universe — that is, they still have the same chance of getting well as the “no medicine” patients.\nWe want to know whether or not the medicine does any good. This question is the same as asking whether patients who take medicine are still in the same population (universe) as “no medicine” patients, or whether they now belong to a different population in which patients have higher chances of getting well. To recapitulate our translations, we move from asking: Does the medicine cure the cancer? to, Do “medicine” patients have the same chance of getting well as “no medicine” patients?; and finally, to: Do “medicine” patients belong to the same universe (population) as “no medicine” patients? Remember that “population” in this sense does not refer to the population at large, but rather to a group of cancer sufferers (perhaps an infinitely large group) who have given chances of getting well, on the average. Groups with different chances of getting well are called “different populations” (universes). Shortly we shall see how to answer this statistical question. We must keep in mind that our ultimate concern in cases like this one is to predict future results of the medicine, that is, to predict whether use of the medicine will lead to a higher recovery rate than would be observed without the medicine.\n\n\n20.4.3 Illustration C — a better method for teaching reading\nIs method Alpha a better method of teaching reading than method Beta? That is, will method Alpha produce a higher average reading score in the future than will method Beta? Twenty children taught to read with method Alpha have an average reading score of 79, whereas children taught with method Beta have an average score of 84. To translate this scientific question into a statistical question we ask: Do children taught with method Alpha come from the same universe (population) as children taught with method Beta? Again, “universe” (population) does not mean the town or social group the children come from, and indeed the experiment will make sense only if the children do come from the same population, in that sense of “population.” What we want to know is whether or not the children belong to the same statistical population (universe), defined according to their reading ability, after they have studied with method Alpha or method Beta.\n\n\n20.4.4 Illustration D — better fertilizer\nIf one plot of ground is treated with fertilizer, and another similar plot is not treated, the benchmark (null) hypothesis is that the corn raised on the treated plot is no different than the corn raised on the untreated lot — that is, that the corn from the treated plot comes from (“belongs to”) the same universe as the corn from the untreated plot. If our statistical test makes it seem very unlikely that a universe like that from which the untreated-plot corn comes would also produce corn such as came from the treated plot, then we are willing to believe that the fertilizer has an effect. For a psychological example, substitute the words “group of children” for “plot,” “special training” for “fertilizer,” and “I.Q. 
score” for “corn.”\nThere is nothing sacred about the benchmark (null) hypothesis of “no difference.” You could just as well test the benchmark hypothesis that the corn comes from a universe that averages 110 bushels per acre, if you have reason to be especially interested in knowing whether or not the fertilizer produces more than 110 bushels per acre. But in many cases it is reasonable to test the probability that a sample comes from the population that does not receive the special treatment of medicine, fertilizer, or training." + }, + { + "objectID": "framing_questions.html#generalizing-from-sample-to-universe", + "href": "framing_questions.html#generalizing-from-sample-to-universe", + "title": "20  Framing Statistical Questions", + "section": "20.5 Generalizing from sample to universe", + "text": "20.5 Generalizing from sample to universe\nSo far we have discussed the scientific question and the statistical question. Remember that there is always a generalization question, too: Do the statistical results from this particular sample of, say, rats apply to a universe of humans? This question can be answered only with wisdom, common sense, and general knowledge, and not with probability statistics.\nTranslating from a scientific question into a statistical question is mostly a matter of asking the probability that some given benchmark universe (population) will produce one or more observed samples. Notice that we must (at least for general scientific testing purposes) ask about a given universe whose composition we assume to be known , rather than about a range of universes, or about a universe whose properties are unknown. In fact, there is really only one question that probability statistics can answer: Given some particular benchmark universe of some stated composition, what is the probability that an observed sample would come from it? (Please notice the subtle but all-important difference between the words “would come” in the previous sentence, and the word “came.”) A variation of this question is: Given two (or more) samples, what is the probability that they would come from the same universe — that is, that the same universe would produce both of them? In this latter case, the relevant benchmark universe is implicitly the universe whose composition is the two samples combined.\nThe necessity for stating the characteristics of the universe in question becomes obvious when you think about it for a moment. Probability-statistical testing adds up to comparing a sample with a particular benchmark universe, and asking whether there probably is a difference between the sample and the universe. To carry out this comparison, we ask how likely it is that the benchmark universe would produce a sample like the observed sample.\n\nBut in order to find out whether or not a universe could produce a given sample, we must ask whether or not some particular universe — with stated characteristics — could produce the sample. There is no doubt that some universe could produce the sample by a random process; in fact, some universe did. The only sensible question, then, is whether or not a particular universe, with stated (or known) characteristics, is likely to produce such a sample. In the case of the medicine, the universe with which we compare the sample who took the medicine is the benchmark universe to which that sample would belong if the medicine had had no effect. 
This comparison leads to the benchmark (null) hypothesis that the sample comes from a population in which the medicine (or other experimental treatment) seems to have no effect. It is to avoid confusion inherent in the term “null hypothesis” that I replace it with the term “benchmark hypothesis.”\nThe concept of the benchmark (null) hypothesis is not easy to grasp. The best way to learn its meaning is to see how it is used in practice. For example, we say we are willing to believe that the medicine has an effect if it seems very unlikely from the number who get well that the patients given the medicine still belong to the same benchmark universe as the patients given no medicine at all — that is, if the benchmark hypothesis is unlikely." + }, + { + "objectID": "framing_questions.html#the-steps-in-statistical-inference", + "href": "framing_questions.html#the-steps-in-statistical-inference", + "title": "20  Framing Statistical Questions", + "section": "20.6 The steps in statistical inference", + "text": "20.6 The steps in statistical inference\nThese are the steps in conducting statistical inference:\n\nStep 1. Frame a question in the form of: What is the chance of getting the observed sample x from some specified population X? For example, what is the probability of getting a sample of 9 females and one male from a population where the probability of getting a single female is .48?\nStep 2. Reframe the question in the form of: What kinds of samples does population X produce, with which probabilities? That is, what is the probability of the observed sample x (9 females in 10 calves), given that the population is X (composed of 48 percent females)? Or in notation, what is \(P(x | X)\)?\nStep 3. Actually investigate the behavior of X with respect to x and other samples. This can be done in two ways:\n\n\nUse the calculus of probability (the formulaic method), perhaps resorting to the Monte Carlo method if an appropriate formula does not exist. Or\nResampling (in the larger sense), which equals the Monte Carlo method minus its use for approximations, investigation of complex functions in statistics and other theoretical mathematics, and non-resampling uses elsewhere in science. Resampling in the more restricted sense includes bootstrap, permutation, and other non-parametric methods. More about the resampling procedure follows in the paragraphs to come, and then in later chapters in the book. (A short simulation sketch of steps 1 to 3 appears at the end of this section.) \n\n\nStep 4. Interpret the probabilities that result from step 3 in terms of acceptance or rejection of hypotheses, surety of conclusions, and as inputs to decision theory.1\n\nThe following short definition of statistical inference summarizes the previous four steps:\n\nStatistical inference equals the selection of a probabilistic model to resemble the process you wish to investigate, the investigation of that model’s behavior, and the interpretation of the results.\n\nStating the steps to be followed in a procedure is an operational definition of the procedure. My belief in the clarifying power of this device (the operational definition) is embodied in the set of steps given in Chapter 15 for the various aspects of statistical inference. A canonical question-and-answer procedure for testing hypotheses will be found in Chapter 25, and one for confidence intervals will be found in Chapter 26."
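As an illustration of steps 1 to 3 for the nine-females-in-ten-calves example, here is a short simulation sketch of P(x | X): how often a universe in which each calf is female with probability .48 produces a sample as female-heavy as the one observed. This is our own sketch of the resampling option in step 3, not the book's notebook code; the trial count is arbitrary.

```python
import numpy as np

rng = np.random.default_rng()

p_female, n_calves, n_trials = 0.48, 10, 10_000

# Steps 2 and 3: simulate the benchmark universe X and see what samples it produces.
females = (rng.random((n_trials, n_calves)) < p_female).sum(axis=1)

print(np.mean(females == 9))    # probability of exactly nine females in ten calves
print(np.mean(females >= 9))    # probability of a sample at least that female-heavy
```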
+ }, + { + "objectID": "framing_questions.html#summary", + "href": "framing_questions.html#summary", + "title": "20  Framing Statistical Questions", + "section": "20.7 Summary", + "text": "20.7 Summary\nWe define resampling to include problems in inferential statistics as well as problems in probability as follows: Using the entire set of data you have in hand, or using the given data-generating mechanism (such as a die) that is a model of the process you wish to understand, produce new samples of simulated data, and examine the results of those samples. That’s it in a nutshell. In some cases, it may also be appropriate to amplify this procedure with additional assumptions.\nProblems in pure probability may at first seem different in nature than problems in statistical inference. But the same logic as stated in this definition applies to both varieties of problems. The difference is that in probability problems the “model” is known in advance — say, the model implicit in a deck of poker cards plus a game’s rules for dealing and counting the results — rather than the model being assumed to be best estimated by the observed data, as in resampling statistics.\nThe hardest job in using probability statistics, and the most important, is to translate the scientific question into a form to which statistics can give a sensible answer. You must translate scientific questions into the appropriate form for statistical operations , so that you know which operations to perform. This is the part of the job that requires hard, clear thinking — though it is non-mathematical thinking — and it is the part that someone else usually cannot easily do for you.\nOnce you know exactly which probability-statistical question you want to ask — that is, exactly which probability you want to determine — the rest of the work is relatively easy. The stage at which you are most likely to make mistakes is in stating the question you want to answer in probabilistic terms. Though this step is hard, it involves no mathematics . This step requires only hard, clear thinking . You cannot beg off by saying “I have no brain for math!” To flub this step is to admit that you have no brain for clear thinking, rather than no brain for mathematics.\n\n\n\n\nSimon, Julian Lincoln. 1967. “Doctors, Smoking, and Reference Groups.” Public Opinion Quarterly 31 (4): 646–47." + }, + { + "objectID": "testing_counts_1.html#introduction", + "href": "testing_counts_1.html#introduction", + "title": "21  Hypothesis-Testing with Counted Data, Part 1", + "section": "21.1 Introduction", + "text": "21.1 Introduction\nThe first task in inferential statistics is to make one or more point estimates — that is, to make one or more statements about how much there is of something we are interested in — including especially the mean and the dispersion. (That work goes under the label “estimation” and is discussed in Chapter 19.) Frequently the next step, after making such quantitative estimation of the universe from which a sample has been drawn, is to consider whether two or more samples are different from each other, or whether the single sample is different from a specified value; this work goes under the label “hypothesis testing.” We ask: Did something happen? Or: Is there a difference between two universes? 
These are yes-no questions.\nIn other cases, the next step is to inquire into the reliability of the estimates; this goes under the label “confidence intervals.” (Some writers include assessing reliability under the rubric of estimation, but I judge it better not to do so).\nSo: Having reviewed how to convert hypothesis-testing problems into statistically testable questions in Chapter 20, we now must ask: How does one employ resampling methods to make the statistical test? As is always the case when using resampling techniques, there is no unique series of steps by which to proceed. The crucial criterion in assessing the model is whether it accurately simulates the actual event. With hypothesis-testing problems, any number of models may be correct. Generally speaking, though, the model that makes fullest use of the quantitative information available from the data is the best model.\nWhen attempting to deduce the characteristics of a universe from sample data, or when asking whether a sample was drawn from a particular universe, a crucial issue is whether a “one-tailed test” or a “two-tailed test” should be applied. That is, in examining the results of our resampling experiment based on the benchmark universe, do we examine both ends of the frequency distribution, or just one? If there is strong reason to believe a priori that the difference between the benchmark (null) universe and the sample will be in a given direction — for example if you hypothesize that the sample mean will be smaller than the mean of the benchmark universe — you should then employ a one-tailed test . If you do not have strong basis for such a prediction, use the two-tailed test. As an example, when a scientist tests a new medication, his/her hypothesis would be that the number of patients who get well will be higher in the treated group than in the control group. Thus, s/he applies the one-tailed test. See the text below for more detail on one- and two-tailed tests.\nSome language first:\nHypothesis: In inferential statistics, a statement or claim about a universe that can be tested and that you wish to investigate.\nTesting: The process of investigating the validity of a hypothesis.\nBenchmark (or null) hypothesis: A particular hypothesis chosen for convenience when testing hypotheses in inferential statistics. For example, we could test the hypothesis that there is no difference between a sample and a given universe, or between two samples, or that a parameter is less than or greater than a certain value. The benchmark universe refers to this hypothesis. (The concept of the benchmark or null hypothesis was discussed in Chapter 9 and Chapter 20.)\nNow let us begin the actual statistical testing of various sorts of hypotheses about samples and populations." 
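Before we do, here is a minimal sketch of how the one-tailed and two-tailed choices differ in a resampling test. The numbers are made up purely for illustration: imagine a treated group in which 15 of 20 patients got well, tested against a benchmark universe in which the chance of any one patient getting well is 50 percent.

n_trials <- 10000
n_cured <- numeric(n_trials)
for (i in 1:n_trials) {
  # 20 simulated patients from the benchmark universe, where the chance
  # of any one patient getting well is 0.5.
  patients <- sample(c('well', 'not well'), size = 20, replace = TRUE)
  n_cured[i] <- sum(patients == 'well')
}
# One-tailed: count only trial results at least as favorable to the
# treatment as the observed 15 cures (roughly 0.02).
one_tail <- sum(n_cured >= 15) / n_trials
# Two-tailed: also count results as extreme in the other direction,
# 5 or fewer cures (roughly 0.04).
two_tail <- sum(n_cured >= 15 | n_cured <= 5) / n_trials
message('One-tailed: ', round(one_tail, 3), '  Two-tailed: ', round(two_tail, 3))

The only difference between the two figures is which trial results we choose to count, which is why the choice must rest on advance knowledge rather than on the data themselves.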
+ }, + { + "objectID": "testing_counts_1.html#should-a-single-sample-of-counted-data-be-considered-different-from-a-benchmark-universe", + "href": "testing_counts_1.html#should-a-single-sample-of-counted-data-be-considered-different-from-a-benchmark-universe", + "title": "21  Hypothesis-Testing with Counted Data, Part 1", + "section": "21.2 Should a single sample of counted data be considered different from a benchmark universe?", + "text": "21.2 Should a single sample of counted data be considered different from a benchmark universe?\n\n21.2.0.1 Example: Does Irradiation Affect the Sex Ratio in Fruit Flies?\nWhere the Benchmark Universe Mean (in this case, the Proportion) is Known, is the Mean (Proportion) of the Population Affected by the Treatment?)\nYou think you have developed a technique for irradiating the genes of fruit flies so that the sex ratio of the offspring will not be half males and half females. In the first twenty cases you treat, there are fourteen males and six females. Does this experimental result confirm that the irradiation does work?\nFirst convert the scientific question — whether or not the treatment affects the sex distribution — into a probability-statistical question: Is the observed sample likely to have come from a benchmark universe in which the sex ratio is one male to one female? The benchmark (null) hypothesis, then, is that the treatment makes no difference and the sample comes from the one-male-to-one-female universe. Therefore, we investigate how likely a one-to-one universe is to produce a distribution of fourteen or more of just one sex.\nA coin has a one-to-one (one out of two) chance of coming up tails. Therefore, we might flip a coin in groups of twenty flips, and count the number of heads in each twenty flips. Or we can use a random number table. The following steps will produce a sound estimate:\n\nStep 1. Let heads = male, tails = female.\nStep 2. Flip twenty coins and count the number of males. If 14 or more males occur, record “yes.” Also, if 6 or fewer males occur, record “yes” because this means we have gotten 14 or more females. Otherwise, record “no.”\nStep 3. Repeat step 2 perhaps 100 times.\nStep 4. Calculate the proportion “yes” in the 100 trials. This proportion estimates the probability that a fruit-fly population with a propensity to produce 50 percent males will by chance produce as many as 14 or as few as 6 males in a sample of 20 flies.\n\n\n\n\n\nTable 21.1: Results from 25 random trials for Fruitfly problem\n\n\nTrial no\n# of heads\n>=14 or <= 6\n\n\n\n\n1\n8\nNo\n\n\n2\n8\nNo\n\n\n3\n12\nNo\n\n\n4\n9\nNo\n\n\n5\n12\nNo\n\n\n6\n10\nNo\n\n\n7\n9\nNo\n\n\n8\n14\nYes\n\n\n9\n14\nYes\n\n\n10\n10\nNo\n\n\n11\n9\nNo\n\n\n12\n8\nNo\n\n\n13\n13\nNo\n\n\n14\n5\nYes\n\n\n15\n7\nNo\n\n\n16\n11\nNo\n\n\n17\n11\nNo\n\n\n18\n10\nNo\n\n\n19\n10\nNo\n\n\n20\n11\nNo\n\n\n21\n8\nNo\n\n\n22\n9\nNo\n\n\n23\n16\nYes\n\n\n24\n4\nYes\n\n\n25\n13\nNo\n\n\n\n\n\n\n\n\nTable 21.1 shows the results obtained in twenty-five trials of twenty flips each. In three of the twenty-five trials (12 percent) there were fourteen or more heads, which we call “males,” and in two of the twenty-five trials (8 percent) there six or fewer heads, meaning there were fourteen or more tails (“females”). We can therefore estimate that, even if the treatment does not affect the sex and the births over a long period really are one to one, five out of twenty-five times (20 percent) we would get fourteen or more of one sex or the other. 
Therefore, finding fourteen males out of twenty births is not overwhelming evidence that the treatment has any effect, even though the result is suggestive.\nHow accurate is the estimate? Seventy-five more trials were made, and of the 100 trials eight contained fourteen or more “males” (8 percent), and 9 trials contained fourteen or more “females” (9 percent), a total of 17 percent. So the first twenty-five trials gave a fairly reliable indication. As a matter of fact, analytically-based computation (not explained here) shows that the probability of getting fourteen or more females out of twenty births is .057 and, of course, the same for fourteen or more males from a one-to-one universe, implying a total probability of .114 of getting fourteen or more males or females.\nNow let us obtain larger and more accurate simulation samples with the computer. The key step in the R notebook below represents male fruit flies with the string 'male' and female fruit flies with the string 'female'. The sample function is then used to generate 20 of these strings with an equal probability that either string is selected. This simulates randomly choosing 20 fruit flies on the benchmark assumption — the “null hypothesis” — that each fruit fly has an equal chance of being a male or female. Now we want to discover the chances of getting more than 13 (i.e., 14 or more) males or more than 13 females under these conditions. So we use sum to count the number of males in each random sample and then store this value in the scores vector of this number for each sample. We repeat these steps 10,000 times.\nAfter ten thousand samples have been drawn, we count (sum) how often there were more than 13 males and then count the number of times there were fewer than 7 males (because if there were fewer than 7 males there must have been more than 13 females). When we add the two results together we have the probability that the results obtained from the sample of irradiated fruit flies would be obtained from a random sample of fruit flies.\n\nStart of fruit_fly notebook\n\nDownload notebook\nInteract\n\n\n\n# Set the number of trials\nn_trials <- 10000\n\n# set the sample size for each trial\nsample_size <- 20\n\n# An empty array to store the trials\nscores <- numeric(n_trials)\n\n# Do 1000 trials\nfor (i in 1:n_trials) {\n # Generate 20 simulated fruit flies, where each has an equal chance of being\n # male or female\n a <- sample(c('male', 'female'), size = sample_size, prob = c(0.5, 0.5),\n replace = TRUE)\n\n # count the number of males in the sample\n b <- sum(a == 'male')\n\n # store the result of this trial\n scores[i] <- b\n}\n\n# Produce a histogram of the trial results\ntitle_of_plot <- paste0(\"Number of males in\", n_trials, \" samples of \\n\", sample_size, \" simulated fruit flies\")\nhist(scores, xlab = 'Number of Males', main = title_of_plot)\n\n\n\n\n\n\n\n\nIn the histogram above, we see that in 16 percent of the trials, the number of males was 14 or more, or 6 or fewer. 
Or instead of reading the results from the histogram, we can calculate the result by tacking on the following commands to the above program:\n\n# Determine the number of trials in which we had 14 or more males.\nj <- sum(scores >= 14)\n\n# Determine the number of trials in which we had 6 or fewer males.\nk <- sum(scores <= 6)\n\n# Add the two results together.\nm <- j + k\n\n# Convert to a proportion.\nmm <- m/n_trials\n\n# Print the results.\nprint(mm)\n\n[1] 0.121\n\n\nEnd of fruit_fly notebook\n\n\nNotice that the strength of the evidence for the effectiveness of the radiation treatment depends upon the original question: whether or not the treatment had any effect on the sex of the fruit fly, which is a two-tailed question. If there were reason to believe at the start that the treatment could increase only the number of males , then we would focus our attention on the result that in only three of the twenty-five trials were fourteen or more males. There would then be only a 3/25 = 0.12 probability of getting the observed results by chance if the treatment really has no effect, rather than the weaker odds against obtaining fourteen or more of either males or females.\nTherefore, whether you decide to figure the odds of just fourteen or more males (what is called a “one-tail test”) or the odds for fourteen or more males plus fourteen or more females (a “two-tail test”), depends upon your advance knowledge of the subject. If you have no reason to believe that the treatment will have an effect only in the direction of creating more males and if you figure the odds for the one-tail test anyway, then you will be kidding yourself. Theory comes to bear here. If you have a strong hypothesis, deduced from a strong theory, that there will be more males, then you should figure one-tail odds, but if you have no such theory you should figure the weaker two-tail odds.1\nIn the case of the next problem concerning calves, we shall see that a one-tail test is appropriate because we have no interest in producing more male calves. Before leaving this example, let us review our intellectual strategy in handling the problem. First we observe a result (14 males in 20 flies) which differs from the proportion of the benchmark population (50 percent males). Because we have treated this sample with irradiation and observed a result that differs from the untreated benchmark-population’s mean, we speculate that the irradiation caused the sample to differ from the untreated population. We wish to check on whether this speculation is correct.\nWhen asking whether this speculation is correct, we are implicitly asking whether future irradiation would also produce a proportion of males higher than 50 percent. That is, we are implicitly asking whether irradiated flies would produce more samples with male proportions as high as 14/20 than would occur by chance in the absence of irradiation.\nIf samples as far away as 14/20 from the benchmark population mean of 10/20 would occur frequently by chance, then we would not be impressed with that experimental evidence as proof that irradiation does affect the sex ratio. Hence we set up a model that will tell us the frequency with which samples of 14 or more males out of 20 births would be observed by chance. Carrying out the resampling procedure tells us that perhaps a tenth of the time such samples would be observed by chance. That is not extremely frequent, but it is not infrequent either. 
Hence we would probably conclude that the evidence is provocative enough to justify further experimentation, but not so strong that we should immediately believe in the truth of this speculation.\nThe logic of attaching meaning to the probabilistic outcome of a test of a hypothesis is discussed in Chapter 22. There also is more about the concept of the level of significance in Chapter 22.\nBecause of the great importance of this sort of case, which brings out the basic principles particularly clearly, let us consider another example:\n\n\n21.2.1 Example: Does a treatment increase the female calf rate?\nWhat is the probability that among 10 calves born, 9 or more will be female?\nLet’s consider this question in the context of a set of queries for performing statistical inference that will be discussed further in Chapter 25.\nThe question: (From Hodges Jr and Lehmann (1970)): Female calves are more valuable than males. A bio-engineer claims to be able to cause more females to be born than the expected 50 percent rate. He conducts his procedure, and nine females are born out of the next 10 pregnancies among the treated cows. Should you believe his claim? That is, what is the probability of a result this (or more) surprising occurring by chance if his procedure has no effect? In this problem, we assume that on average 100 of 206 births are female, in contrast to the 50-50 benchmark universe in the previous problem.\nWhat is the purpose of the work?: Female calves are more valuable than male calves.\nStatistical inference?: Yes.\nConfidence interval or Test of hypothesis?: Test of hypothesis.\nWill you state the costs and benefits of various outcomes, or a loss function?: Yes. One need only say that the benefits are very large, and if the results are promising, it is worth gathering more data to confirm results.\nHow many samples of data are part of the hypothesis test?: One.\nWhat is the size of the first sample about which you wish to make significance statements?: Ten.\nWhat comparison(s) to make?: Compare the sample to the benchmark universe.\nWhat is the benchmark universe: that embodies the null hypothesis? 100/206 female.\nWhich symbols for the observed entities?: Balls in bucket, or numbers.\nWhat values or ranges of values?: We could write numbers 1 through 206 on pieces of paper, and take numbers 1-100 as “male” and 101-206 as “female”. Or we could use some other mechanism to give us a 100/206 chance of any one calf being female.\nFinite or infinite universe?: Infinite.\nWhich sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)?: Ten calves.\nWhat procedure to produce the sample entities?: Sampling with replacement.\nSimple (single step) or complex (multiple “if” drawings)?: Can think of it either way.\nWhat to record as the outcome of each resample trial?: The proportion (or number) of females.\nWhat is the criterion to be used in the test?: The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of 100/206 females.\n“One tail” or “two tail” test?: One tail, because the farmer is only interested in females. Finding a large proportion of males would not be of interest; it would not cause rejecting the null hypothesis.\nThe actual computation of probability may be done in several ways, as discussed earlier for four children and for ten cows. Conventional methods are discussed for comparison in Chapter 25. 
Here is the resampling solution in R.\n\nStart of female_calves notebook\n\nDownload notebook\nInteract\n\n\n\n# set the number of trials\nn_trials <- 10000\n\n# set the size of each sample\nsample_size <- 10\n\n# an array to store the results\nscores <- numeric(n_trials)\n\n# for 10000 repeats\nfor (i in 1:n_trials) {\n\n # generate 10 numbers between 1 and 206\n a <- sample(1:206, size = sample_size)\n\n # count how many numbers were between 101 and 206\n b <- sum((a >= 101) & ((a <= 206)))\n\n # store the result of the current trial\n scores[i] <- b\n}\n\n# plot a histogram of the scores\ntitle_of_plot <- paste0(\"Number of females in\", n_trials, \" samples of \\n\", sample_size, \" simulated calves\")\nhist(scores, xlab = 'Number of Females', main = title_of_plot)\n\n# count the number of scores that were greater than or equal to 9\nk <- sum(scores >= 9)\n\n# express as a proportion\nkk <- k / n_trials\n\n# show the proportion\nprint(paste(\"The probability of 9 or 10 females occurring by chance is\", kk))\n\n[1] \"The probability of 9 or 10 females occurring by chance is 0.011\"\n\n\n\n\n\n\n\n\n\nWe read from the result in vector kk in the “calves” program that the probability of 9 or 10 females occurring by chance is a bit more than one percent.\nEnd of female_calves notebook\n\n\n\n\n21.2.2 Example: A Public-Opinion Poll\nIs the Proportion of a Population Greater Than a Given Value?\nA municipal official wants to determine whether a majority of the town’s residents are for or against the awarding of a high-speed broadband internet contract, and he asks you to take a poll. You judge that the voter registration records are a fair representation of the universe in which the politician was interested, and you therefore decided to interview a random selection of registered voters. Of a sample of fifty people who expressed opinions, thirty said “yes” they were for the plan and twenty said “no,” they were against it. How conclusively do the results show that the people in town want this internet contract?\nNow comes some necessary subtle thinking in the interpretation of what seems like a simple problem. Notice that our aim in the analysis is to avoid the mistake of saying that the town favors the plan when in fact it does not favor the plan. Our chance of making this mistake is greatest when the voters are evenly split, so we choose as the benchmark (null) hypothesis that 50 percent of the town does not want the plan. This statement really means that “50 percent or more do not want the plan.” We could assess the probability of obtaining our result from a population that is split (say) 52-48 against, but such a probability would necessarily be even smaller, and we are primarily interested in assessing the maximum probability of being wrong. If the maximum probability of error turns out to be inconsequential, then we need not worry about less likely errors.\nThis problem is very much like the one-group fruit fly irradiation problem above. The only difference is that now we are comparing the observed sample against an arbitrary value of 50 percent (because that is the break-point in a situation where the majority decides) whereas in Section 21.2.0.1 we compared the observed sample against the normal population proportion (also 50 percent, because that is the normal proportion of males). But it really does not matter why we are comparing the observed sample to the figure of 50 percent; the procedure is the same in both cases. 
(Please notice that there is nothing special about the 50 percent figure; the same procedure would be followed for 20 percent or 85 percent.)\nIn brief, we a) take two pieces of paper, write “Yes” on one and “No” on the other, put them in a bucket b) draw a piece of paper from the bucket, record whether it was “Yes” or “No”, replace, and repeat 50 times c) count the number of “yeses” and “noes” in the first fifty draws, c) repeat for perhaps a hundred trials, then d) count the proportion of the trials in which a 50-50 universe would produce thirty or more “yes” answers.\nIn operational steps, the procedure is as follows:\n\nStep 1. “1-5” = no, “6-0” = yes.\nStep 2. In 50 random numbers, count the “yeses,” and record “false positive” if 30 or more “yeses.”\nStep 3. Repeat step 2 perhaps 100 times.\nStep 4. Calculate the proportion of experimental trials showing “false positive.” This estimates the probability that as many as 30 “yeses” would be observed by chance in a sample of 50 people if half (or more) are really against the plan.\n\n\n\n\n\nTable 21.2: Results from 20 random trials for contract poll problem\n\n\nTrial no\n# of \"Noes\"\n# of \"Yeses\"\n>= 30 \"Yeses\"\n\n\n\n\n1\n21\n29\n\n\n\n2\n25\n25\n\n\n\n3\n25\n25\n\n\n\n4\n25\n25\n\n\n\n5\n28\n22\n\n\n\n6\n28\n22\n\n\n\n7\n25\n25\n\n\n\n8\n28\n22\n\n\n\n9\n26\n24\n\n\n\n10\n22\n28\n\n\n\n11\n27\n23\n\n\n\n12\n25\n25\n\n\n\n13\n22\n28\n\n\n\n14\n24\n26\n\n\n\n15\n27\n23\n\n\n\n16\n27\n23\n\n\n\n17\n28\n22\n\n\n\n18\n26\n24\n\n\n\n19\n33\n17\n\n\n\n20\n23\n27\n\n\n\n\n\n\n\n\n\nIn Table 21.2, we see the results of twenty trials; 0 of 20 times (0 percent), 30 or more “yeses” were observed by chance. So our “significance level” or “prob value” is 0 percent, which is normally too high to feel confident that our poll results are reliable. This is the probability that as many as thirty of fifty people would say “yes” by chance if the population were “really” split evenly. (If the population were split so that more than 50 percent were against the plan, the probability would be even less that the observed results would occur by chance. In this sense, the benchmark hypothesis is conservative). On the other hand, if we had been counting the number of times there are 30 or more “No” votes that, in our setup, have the same odds as to 30 or more “Yes” votes, there would have been one. This indicates how samples can vary just by chance.\nTaken together, the evidence suggests that the mayor would be wise not to place very much confidence in the poll results, but rather ought to act with caution or else take a larger sample of voters.\n\nStart of contract_poll notebook\n\nDownload notebook\nInteract\n\n\nThis R notebook generates samples of 50 simulated voters on the assumption that only 50 percent are in favor of the contract. Then it counts (sums) the number of samples where over 29 (30 or more) of the 50 respondents said they were in favor of the contract. 
(That is, we use a “one-tailed test.”) The result in the kk variable is the chance of a “false positive,” that is, 30 or more people saying they favor a contract when support for the proposal is actually split evenly down the middle.\n\n# We will do 10,000 iterations.\nn <- 10000\n\n# Make an array of integers to store the \"Yes\" counts.\nyeses <- numeric(n)\n\nfor (i in 1:n) {\n answers <- sample(c('No', 'Yes'), size=50, replace=TRUE)\n yeses[i] <- sum(answers == 'Yes')\n}\n\n# Produce a histogram of the trial results.\n# Use integer bins for histogram, from 10 through 40.\nhist(yeses, breaks=10:40,\n main='Number of yes votes out of 50, in null universe')\n\n\n\n\n\n\n\n\nIn the histogram above, we see that about 11 percent of our trials had 30 or more voters in favor, despite the fact that they were drawn from a population that was split 50-50. R will calculate this proportion directly if we add the following commands to the above:\n\nk <- sum(yeses >= 30)\nkk <- k / n\nmessage('Proportion >= 30: ', round(kk, 2))\n\nProportion >= 30: 0.1\n\n\nEnd of contract_poll notebook\n\n\nThe section above discusses testing hypotheses about a single sample of counted data relative to a benchmark universe. This section discusses the issue of whether two samples with counted data should be considered the same or different.\n\n\n21.2.3 Example: Did the Trump-Clinton Poll Indicate that Trump Would Win?\n\nStart of trump_clinton notebook\n\nDownload notebook\nInteract\n\n\nWhat is the probability that a sample outcome such as actually observed (840 Trump, 660 Clinton) would occur by chance if Clinton is “really” ahead — that is, if Clinton has 50 percent (or more) of the support? To restate in sharper statistical language: What is the probability that the observed sample or one even more favorable to Trump would occur if the universe has a mean of 50 percent or below?\nHere is a procedure that responds to that question:\n\nCreate a benchmark universe with one ball marked “Trump” and another marked “Clinton”\nDraw a ball, record its marking, and replace. (We sample with replacement to simulate the practically-infinite population of U. S. voters.)\nRepeat step 2 1500 times and count the number of “Trump”s. If 840 or greater, record “Y”; otherwise, record “N.”\nRepeat steps 3 and 4 perhaps 1000 or 10,000 times, and count the number of “Y”s. The outcome estimates the probability that 840 or more Trump choices would occur if the universe is “really” half or more in favor of Clinton.\n\nThis procedure may be done as follows with R.\n\n# Number of repeats we will run.\nn <- 10000\n\n# Make an array to store the counts.\ntrumps <- numeric(n)\n\nfor (i in 1:n) {\n votes <- sample(c('Trump', 'Clinton'), size=1500, replace=TRUE)\n trumps[i] <- sum(votes == 'Trump')\n}\n\n# Integer bins from 675 through 825 in steps of 5.\nhist(trumps, breaks=seq(675, 826, by=5),\n main='Number of Trump voters of 1500 in null-world simulation')\n\n# How often >= 840 Trump votes in random draw?\nk <- sum(trumps >= 840)\n# As a proportion of simulated resamples.\nkk <- k / n\n\nmessage('Proportion voting for Trump: ', kk)\n\nProportion voting for Trump: 0\n\n\n\n\n\n\n\n\n\nThe value for kk is our estimate of the probability that Trump’s “victory” in the sample would occur by chance if he really were behind. 
In this case, our probability estimate is less than 1 in 10,000 (< 0.0001).\nEnd of trump_clinton notebook\n\n\n\n\n\n21.2.4 Example: Comparison of Possible Cancer Cure to Placebo\nDo Two Binomial Populations Differ in Their Proportions.\nSection 21.2.0.1 used an observed sample of male and female fruitflies to test the benchmark (null) hypothesis that the flies came from a universe with a one-to-one sex ratio, and the poll data problem also compared results to a 50-50 hypothesis. The calves problem also compared the results to a single benchmark universe — a proportion of 100/206 females. Now we want to compare two samples with each other , rather than comparing one sample with a hypothesized universe. That is, in this example we are not comparing one sample to a benchmark universe, but rather asking whether both samples come from the same universe. The universe from which both samples come, if both belong to the same universe, may be thought of as the benchmark universe, in this case.\nThe scientific question is whether pill P cures a rare cancer. A researcher gave pill P to six patients selected randomly from a group of twelve cancer patients; of the six, five got well. He gave an inactive placebo to the other six patients, and two of them got well. Does the evidence justify a conclusion that the pill has a curative effect?\n(An identical statistical example would serve for an experiment on methods of teaching reading to children. In such a situation the researcher would respond to inconclusive results by running the experiment on more subjects, but in cases like the cancer-pill example the researcher often cannot obtain more subjects.)\nWe can answer the stated question by combining the two samples and testing both samples against the resulting combined universe. In this case, the universe is twelve subjects, seven (5 + 2) of whom got well. How likely would such a universe produce two samples as far apart as five of six, and two of six, patients who get well? In other words, how often will two samples of six subjects, each drawn from a universe in which 7/12 of the patients get well, be as far apart as 5 - 2 = 3 patients in favor of the sample designated “pill”? This is obviously a one-tail test, for we have no reason to believe that the pill group might do less well than the placebo group.\nWe might construct a twelve-sided die, seven of whose sides are marked “get well.” Or put 12 pieces of paper in a bucket, seven with “get well” and five with “not well”. Or we would use pairs of numbers from the random-number table, with numbers “01-07” corresponding to get well, numbers “08-12” corresponding to “not get well,” and all other numbers omitted. (If you wish to save time, you can work out a system that uses more numbers and skips fewer, but that is up to you.) Designate the first six subjects “pill” and the next six subjects “placebo.”\nThe specific procedure might be as follows:\n\nStep 1. Write “get well” on seven pieces of paper, “not well” on another five. Put the 12 pieces of paper into a bucket.\nStep 2. Select two groups, “pill” and “placebo”, each with six random draws (with replacement) from the 12 pieces of paper.\nStep 3. Record how many “get well” in each group.\nStep 4. Subtract the result in group “placebo” from that in group “pill” (the difference may be negative).\nStep 5. Repeat steps 1-4 perhaps 100 times.\nStep 6. 
Compute the proportion of trials in which the pill does better by three or more cases.\n\n\n\n\n\nTable 21.3: Results from 25 random trials for pill/placebo\n\n\nTrial no\n# of pill cures\n# of placebo cures\nDifference\n\n\n\n\n1\n3\n4\n-1\n\n\n2\n4\n3\n1\n\n\n3\n3\n6\n-3\n\n\n4\n3\n5\n-2\n\n\n5\n5\n5\n0\n\n\n6\n3\n4\n-1\n\n\n7\n5\n1\n4\n\n\n8\n4\n4\n0\n\n\n9\n4\n4\n0\n\n\n10\n3\n4\n-1\n\n\n11\n2\n4\n-2\n\n\n12\n5\n3\n2\n\n\n13\n4\n6\n-2\n\n\n14\n3\n3\n0\n\n\n15\n4\n3\n1\n\n\n16\n3\n4\n-1\n\n\n17\n4\n5\n-1\n\n\n18\n4\n2\n2\n\n\n19\n4\n5\n-1\n\n\n20\n5\n2\n3\n\n\n21\n5\n4\n1\n\n\n22\n3\n4\n-1\n\n\n23\n5\n5\n0\n\n\n24\n3\n5\n-2\n\n\n25\n5\n3\n2\n\n\n\n\n\n\n\n\nIn the trials shown in Table 21.3, in two cases (8 percent) the difference between the randomly-drawn groups is three cases or greater. Apparently it is somewhat unusual — it happens 8 percent of the time — for this universe to generate “pill” samples in which the number of recoveries exceeds the number in the “placebo” samples by three or more. Therefore the answer to the scientific question, based on these samples, is that there is some reason to think that the medicine does have a favorable effect. But the investigator might sensibly await more data before reaching a firm conclusion about the pill’s efficiency, given the 8 percent probability.\n\nStart of pill_placebo notebook\n\nDownload notebook\nInteract\n\n\nNow for a R solution. Again, the benchmark hypothesis is that pill P has no effect, and we ask how often, on this assumption, the results that were obtained from the actual test of the pill would occur by chance.\nGiven that in the test 7 of 12 patients overall got well, the benchmark hypothesis assumes 7/12 to be the chances of any random patient being cured. We generate two similar samples of 6 patients, both taken from the same universe composed of the combined samples — the bootstrap procedure. We count (sum) the number who are “get well” in each sample. Then we subtract the number who got well in the “pill” sample from the number who got well in the “no-pill” sample. We record the resulting difference for each trial in the variable pill_betters.\nIn the actual test, 3 more patients got well in the sample given the pill than in the sample given the placebo. We therefore count how many of the trials yield results where the difference between the sample given the pill and the sample not given the pill was greater than 2 (equal to or greater than 3). This result is the probability that the results derived from the actual test would be obtained from random samples drawn from a population which has a constant cure rate, pill or no pill.\n\n# The bucket with the pieces of paper.\noptions <- rep(c('get well', 'not well'), c(7, 5))\n\nn <- 10000\n\npill_betters <- numeric(n)\n\nfor (i in 1:n) {\n pill <- sample(options, size=6, replace=TRUE)\n pill_cures <- sum(pill == 'get well')\n placebo <- sample(options, size=6, replace=TRUE)\n placebo_cures <- sum(placebo == 'get well')\n pill_betters[i] <- pill_cures - placebo_cures\n}\n\nhist(pill_betters, breaks=-6:6,\n main='Number of extra cures pill vs placebo in null universe')\n\n\n\n\n\n\n\n\nRecall our actual observed results: In the medicine group, three more patients were cured than in the placebo group. From the histogram, we see that in only about 8 percent of the simulated trials did the “medicine” group do as well or better. The results seem to suggest — but by no means conclusively — that the medicine’s performance is not due to chance. 
Further study would probably be warranted. The following commands added to the above program will calculate this proportion directly:\nEnd of pill_placebo notebook\n\n\nAs I (JLS) wrote when I first proposed this bootstrap method in 1969, this method is not the standard way of handling the problem; it is not even analogous to the standard analytic difference-of-proportions method (though since then it has become widely accepted). Though the method shown is quite direct and satisfactory, there are also many other resampling methods that one might construct to solve the same problem. By all means, invent your own statistics rather than simply trying to copy the methods described here; the examples given here only illustrate the process of inventing statistics rather than offering solutions for all classes of problems.\n\n\n21.2.5 Example: Did Attitudes About Marijuana Change?\n\nConsider two polls, each asking 1500 Americans about marijuana legalization. One poll, taken in 1980, found 52 percent of respondents in favor of decriminalization; the other, taken in 1985, found 46 percent in favor of decriminalization (Wonnacott and Wonnacott 1990, 275). Our null (benchmark) hypothesis is that both samples came from the same universe (the universe made up of the total of the two sets of observations). If so, let us then ask how likely would be two polls to produce results as different as were observed? Hence we construct a universe with a mean of 49 percent (the mean of the two polls of 52 percent and 46 percent), and repeatedly draw pairs of samples of size 1500 from it.\nTo see how the construction of the appropriate question is much more challenging intellectually than is the actual mathematics, let us consider another possibility suggested by a student: What about considering the universe to be the earlier poll with a mean of 52 percent, and then asking the probability that the later poll of 1500 people with a mean of 46 percent would come from it? Indeed, on first thought that procedure seems reasonable.\nUpon reflection — and it takes considerable thought on these matters to get them right — that would not be an appropriate procedure. The student’s suggested procedure would be the same as assuming that we had long-run solid knowledge of the universe, as if based on millions of observations, and then asking about the probability of a particular sample drawn from it. That does not correspond to the facts.\nThe only way to find the approach you eventually consider best — and there is no guarantee that it is indeed correct — is by close reference to the particular facts of the case.\n\n\n21.2.6 Example: Infarction and Cholesterol: Framingham Study\nIt is so important to understand the logic of hypothesis tests, and of the resampling method of doing them, that we will now tackle another problem similar to the preceding one.\nThis will be the first of several problems that use data from the famous Framingham study (drawn from Kahn and Sempos (1989)) concerning the development of myocardial infarction 16 years after the Framingham study began, for men ages 35- 44 with serum cholesterol above 250, compared to those with serum cholesterol below 250. The raw data are shown in Table 21.4. The data are from (Shurtleff 1970), cited in (Kahn and Sempos 1989, 12:61, Table 3-8). 
Kahn and Sempos divided the cases into “high” and “low” cholesterol.\n\n\nTable 21.4: Development of Myocardial Infarction in Men Aged 35-44 After 16 Years\n\n\nSerum Cholesterol\nDeveloped MI\nDidn’t Develop MI\nTotal\n\n\n\n\n> 250\n10\n125\n135\n\n\n<= 250\n21\n449\n470\n\n\n\n\nThe statistical logic properly begins by asking: How likely is that the two observed groups “really” came from the same “population” with respect to infarction rates? That is, we start with this question: How sure should one be that there is a difference in myocardial infarction rates between the high and low-cholesterol groups? Operationally, we address this issue by asking how likely it is that two groups as different in disease rates as the observed groups would be produced by the same “statistical universe.”\nKey step: We assume that the relevant “benchmark” or “null hypothesis” population (universe) is the composite of the two observed groups. That is, if there were no “true” difference in infarction rates between the two serum-cholesterol groups, and the observed disease differences occurred just because of sampling variation, the most reasonable representation of the population from which they came is the composite of the two observed groups.\nTherefore, we compose a hypothetical “benchmark” universe containing (135 + 470 =) 605 men at risk, and designate (10 + 21 =) 31 of them as infarction cases. We want to determine how likely it is that a universe like this one would produce — just by chance — two groups that differ as much as do the actually observed groups. That is, how often would random sampling from this universe produce one sub-sample of 135 men containing a large enough number of infarctions, and the other sub-sample of 470 men producing few enough infarctions, that the difference in occurrence rates would be as high as the observed difference of .029? (10/135 = .074, and 21/470 = .045, and .074 - .045 = .029).\nSo far, everything that has been said applies both to the conventional formulaic method and to the “new statistics” resampling method. But the logic is seldom explained to the reader of a piece of research — if indeed the researcher her/ himself grasps what the formula is doing. And if one just grabs for a formula with a prayer that it is the right one, one need never analyze the statistical logic of the problem at hand.\nNow we tackle this problem with a method that you would think of yourself if you began with the following mind-set: How can I simulate the mechanism whose operation I wish to understand? These steps will do the job:\n\nStep 1: Fill a bucket with 605 balls, 31 red (infarction) and the rest (605 — 31 = 574) green (no infarction).\nStep 2: Draw a sample of 135 (simulating the high serum-cholesterol group), one ball at a time and throwing it back after it is drawn to keep the simulated probability of an infarction the same throughout the sample; record the number of reds. Then do the same with another sample of 470 (the low serum-cholesterol group).\nStep 3: Calculate the difference in infarction rates for the two simulated groups, and compare it to the actual difference of .029; if the simulated difference is that large, record “Yes” for this trial; if not, record “No.”\nStep 4: Repeat steps 2 and 3 until a total of (say) 400 or 1000 trials have been completed. Compute the frequency with which the simulated groups produce a difference as great as actually observed. 
This frequency is an estimate of the probability that a difference as great as actually observed in Framingham would occur even if serum cholesterol has no effect upon myocardial infarction.\n\nThe procedure above can be carried out with balls in a bucket in a few hours. Yet it is natural to seek the added convenience of the computer to draw the samples. Here is a R program:\n\nStart of framingham_hearts notebook\n\nDownload notebook\nInteract\n\n\n\nn <- 10000\n\nmen <- rep(c('infarction', 'no infarction'), c(31, 574))\n\nn_high <- 135 # Number of men with high cholesterol\nn_low <- 470 # Number of men with low cholesterol\n\ninfarct_differences <- numeric(n)\n\nfor (i in 1:n) {\n highs <- sample(men, size=n_high, replace=TRUE)\n lows <- sample(men, size=n_low, replace=TRUE)\n high_infarcts <- sum(highs == 'infarction')\n low_infarcts <- sum(lows == 'infarction')\n high_prop <- high_infarcts / n_high\n low_prop <- low_infarcts / n_low\n infarct_differences[i] <- high_prop - low_prop\n}\n\nhist(infarct_differences, breaks=seq(-0.1, 0.1, by=0.005),\n main='Infarct proportion differences in null universe')\n\n# How often was the resampled difference >= the observed difference?\nk <- sum(infarct_differences >= 0.029)\n# Convert this result to a proportion\nkk <- k / n\n\nmessage('Proportion of trials with difference >= observed: ',\n round(kk, 2))\n\nProportion of trials with difference >= observed: 0.09\n\n\n\n\n\n\n\n\n\nThe results of the test using this program may be seen in the histogram. We find — perhaps surprisingly — that a difference as large as observed would occur by chance around 10 percent of the time. (If we were not guided by the theoretical expectation that high serum cholesterol produces heart disease, we might include the 10 percent difference going in the other direction, giving a 20 percent chance). Even a ten percent chance is sufficient to call into question the conclusion that high serum cholesterol is dangerous. At a minimum, this statistical result should call for more research before taking any strong action clinically or otherwise.\nEnd of framingham_hearts notebook\n\n\nWhere should one look to determine which procedures should be used to deal with a problem such as set forth above? Unlike the formulaic approach, the basic source is not a manual which sets forth a menu of formulas together with sets of rules about when they are appropriate. Rather, you consult your own understanding about what is happening in (say) the Framingham situation, and the question that needs to be answered, and then you construct a “model” that is as faithful to the facts as is possible. The bucket-sampling described above is such a model for the case at hand.\nTo connect up what we have done with the conventional approach, one could apply a z test (conceptually similar to the t test, but applicable to yes-no data; it is the Normal-distribution approximation to the large binomial distribution). Do so, we find that the results are much the same as the resampling result — an eleven percent probability.\nSomeone may ask: Why do a resampling test when you can use a standard device such as a z or t test? The great advantage of resampling is that it avoids using the wrong method. 
The researcher is more likely to arrive at sound conclusions with resampling because s/he can understand what s/he is doing, instead of blindly grabbing a formula which may be in error.\nThe textbook from which the problem is drawn is an excellent one; the difficulty of its presentation is an inescapable consequence of the formulaic approach to probability and statistics. The body of complex algebra and tables that only a rare expert understands down to the foundations constitutes an impenetrable wall to understanding. Yet without such understanding, there can be only rote practice, which leads to frustration and error.\n\n\n21.2.7 Example: Is One Pig Ration More Effective Than the Other?\nTesting For a Difference in Means With a Two-by-Two Classification.\nEach of two new types of ration is fed to twelve pigs. A farmer wants to know whether ration A or ration B is better.2 The weight gains in pounds for pigs fed on rations A and B are:\nA: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 31\nB: 26, 24, 28, 29, 30, 29, 31, 29, 32, 26, 28, 32\nThe statistical question may be framed as follows: should one consider that the pigs fed on the different rations come from the same universe with respect to weight gains?\nIn the actual experiment, 9 of the 12 pigs who were fed ration A also were in the top half of weight gains. How likely is it that one group of 12 randomly-chosen pigs would contain 9 of the 12 top weight gainers?\nOne approach to the problem is to divide the pigs into two groups — the twelve with the highest weight gains, and the twelve with the lowest weight gains — and examine whether an unusually large number of high-weight-gain pigs were fed on one or the other of the rations.\nWe can make this test by ordering and grouping the twenty four pigs:\nHigh-weight group:\n38 (ration A), 35 (A), 34 (A), 34 (A), 32 (B), 32 (A), 32 (A), 32 (B), 31 (A),\n31 (B), 31 (A), 31 (A)\nLow-weight group:\n30 (B), 29 (A), 29 (A), 29 (B), 29 (B), 29 (B), 28 (B), 28 (B), 26 (A), 26 (B),\n26 (B), 24 (B).\nAmong the twelve high-weight-gain pigs, nine were fed on ration A. We ask: Is this further from an even split than we are likely to get by chance? Let us take twelve red and twelve black cards, shuffle them, and deal out twelve cards (the other twelve need not be dealt out). Count the proportion of the hands in which one ration comes up nine or more times in the first twelve cards, to reflect ration A’s appearance nine times among the highest twelve weight gains. More specifically:\n\nStep 1. Constitute a deck of twelve red and twelve black cards, and shuffle.\nStep 2. Deal out twelve cards, count the number red, and record “yes” if there are nine or more of either red or black.\nStep 3. Repeat step 2 perhaps fifty times.\nStep 4. Compute the proportion “yes.” This proportion estimates the probability sought.\n\n\n\n\n\nTable 21.5: Results from 25 random trials for pig rations\n\n\nTrial no\n# red\n# black\n>=9 red or black\n\n\n\n\n1\n6\n6\n\n\n\n2\n7\n5\n\n\n\n3\n6\n6\n\n\n\n4\n5\n7\n\n\n\n5\n5\n7\n\n\n\n6\n7\n5\n\n\n\n7\n6\n6\n\n\n\n8\n7\n5\n\n\n\n9\n8\n4\n\n\n\n10\n3\n9\n+\n\n\n11\n5\n7\n\n\n\n12\n2\n10\n+\n\n\n13\n5\n7\n\n\n\n14\n8\n4\n\n\n\n15\n7\n5\n\n\n\n16\n5\n7\n\n\n\n17\n5\n7\n\n\n\n18\n5\n7\n\n\n\n19\n9\n3\n+\n\n\n20\n8\n4\n\n\n\n21\n5\n7\n\n\n\n22\n7\n5\n\n\n\n23\n4\n8\n\n\n\n24\n3\n9\n+\n\n\n25\n4\n8\n\n\n\n\n\n\n\n\n\nTable 21.5 shows the results of 25 trials. 
In four (marked by + signs) of the 25 (that is, 16 percent of the trials) there were nine or more either red or black cards in the first twelve cards. Again the results suggest that it would be slightly unusual for the results to favor one ration or the other so strongly just by chance if they come from the same universe.\nNow the R procedure to answer the question:\n\nStart of pig_rations notebook\n\nDownload notebook\nInteract\n\n\nThe ranks <- 1:24 statement creates a vector of numbers 1 through 24, which will represent the rankings of weight gains for each of the 24 pigs. We repeat the following procedure for 10000 trials. First we shuffle the elements of vector ranks so that the rank numbers for weight gains are randomized and placed in vector shuffled. We then select the first 12 elements of shuffled and place them in first_12; this represents the rankings of a randomly-selected group of 12 pigs. We next count (sum) in n_top the number of pigs whose rankings for weight gain were in the top half — that is, a rank of less than 13. We record that number in top_ranks, and then continue the loop, until we finish our n trials.\nSince we did not know beforehand the direction of the effect of ration A on weight gain, we want to count the times that either more than 8 of the random selection of 12 pigs were in the top half of the rankings, or that fewer than 4 of these pigs were in the top half of the weight gain rankings — (The latter is the same as counting the number of times that more than 8 of the 12 non-selected random pigs were in the top half in weight gain.)\nWe do so with the final two sum statements. By adding the two results n_gte_9 and n_lte_3 together, we have the number of times out of 10,000 that differences in weight gains in two groups as dramatic as those obtained in the actual experiment would occur by chance.\n\n# Constitute the set of the weight gain rank orders. ranks is now a vector\n# consisting of the numbers 1 — 24, in that order.\nranks <- 1:24\n\nn <- 10000\n\ntop_ranks <- numeric(n)\n\nfor (i in 1:n) {\n # Shuffle the ranks of the weight gains.\n shuffled <- sample(ranks)\n # Take the first 12 ranks.\n first_12 <- shuffled[1:12]\n # Determine how many of these randomly selected 12 ranks are less than\n # 12 (i.e. 1-12), put that result in n_top.\n n_top <- sum(first_12 <= 12)\n # Keep track of each trial result in top_ranks\n top_ranks[i] <- n_top\n}\n\nhist(top_ranks, breaks=1:11,\n main='Number of top 12 ranks in pig-ration trials')\n\n\n\n\n\n\n\n\nWe see from the histogram that, in about 3 percent of the trials, either more than 8 or fewer than 4 top half ranks (1-12) made it into the random group of twelve that we selected. R will calculate this for us as follows:\n\n# Determine how many of the trials yielded 9 or more top ranks.\nn_gte_9 <- sum(top_ranks >= 9)\n# Determine how many trials yielded 3 or fewer of the top ranks.\n# If there were 3 or fewer, then 9 or more of the top ranks must\n# have been in the other group (not selected).\nn_lte_3 <- sum(top_ranks <= 3)\n# Add the two together.\nn_both <- n_gte_9 + n_lte_3\n# Convert to a proportion.\nprop_both <- n_both / n\n\nmessage('Trial proportion >=9 top ranks in either group: ',\n round(prop_both, 2))\n\nTrial proportion >=9 top ranks in either group: 0.04\n\n\nThe decisions that are warranted on the basis of the results depend upon one’s purpose. If writing a scientific paper on the merits of ration A is the ultimate purpose, it would be sensible to test another batch of pigs to get further evidence. 
(Or you could proceed to employ another sort of test for a slightly more precise evaluation.) But if the goal is a decision on which type of ration to buy for a small farm and they are the same price, just go ahead and buy ration A because, even if it is no better than ration B, you have strong evidence that it is no worse .\nEnd of pig_rations notebook\n\n\n\n\n21.2.8 Example: Do Planet Densities Differ?\nConsider the five planets known to the ancient world.\nMosteller and Rourke (1973, 17–19) ask us to compare the densities of the three planets farther from the sun than is the earth (Mars, density 0.71; Jupiter, 0.24; and Saturn, 0.12) against the densities of the planets closer to the sun than is the earth (Mercury, 0.68; Venus, 0.94).\nThe average density of the distant planets is .357, of the closer planets is .81. Is this difference (.353) statistically surprising, or is it likely to occur in a chance ordering of these planets?\nWe can answer this question with a permutation test; such sampling without replacement makes sense here because we are considering the entire set of planets, rather than a sample drawn from a larger population of planets (the word “population” is used here, rather than “universe,” to avoid confusion.) And because the number of objects is so small, one could examine all possible arrangements (permutations), and see how many have (say) differences in mean densities between the two groups as large as observed.\nAnother method that Mosteller and Rourke suggest is by a comparison of the density ranks of the two sets, where Saturn has rank 1 and Venus has rank 5. This might have a scientific advantage if the sample data are dominated by a single “outlier,” whose domination is removed when we rank the data.\nWe see that the sum of the ranks for the “closer” set is 3+5=8. We can then ask: If the ranks were assigned at random, how likely is it that a set of two planets would have a sum as large as 8? Again, because the sample is small, we can examine all the possible permutations, as Mosteller and Rourke do in Table 3-1 (Mosteller and Rourke 1973, 56) (Substitute “Closer” for “B,” “Further” for “A”). In two of the ten permutations, a sum of ranks as great as 8 is observed, so the probability of a result as great as observed happening by chance is 20 percent, using these data. (We could just as well consider the difference in mean ranks between the two groups: (8/2 - 7/3 = 10 / 6 = 1.67).\n\n\nTo illuminate the logic of this test, consider comparing the heights of two samples of trees. If sample A has the five tallest trees, and sample B has the five shortest trees, the difference in mean ranks will be (6+7+8+9+10=) 40 — (1+2+3+4+5=) 15, the largest possible difference. If the groups are less sharply differentiated — for example, if sample A has #3 and sample B has #8 — the difference in ranks will be less than the maximum of 40, as you can quickly verify.\nThe method we have just used is called a Mann-Whitney test, though that label is usually applied when the data are too many to examine all the possible permutations; in that case one conventionally uses a table prepared by formula. 
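Because there are only ten ways to choose which two of the five density ranks fall to the “closer” group, this small case can also be checked by complete enumeration rather than by a published table or by shuffling. Here is a minimal sketch using R’s combn function; it counts how many of the ten arrangements give the two “closer” planets a rank sum as large as the observed 8:

# All possible pairs of ranks (out of 1 through 5) that could fall to the
# two "closer" planets; combn returns one pair per column, 10 columns in all.
closer_pairs <- combn(5, 2)
# The rank sum for each possible pair.
rank_sums <- colSums(closer_pairs)
# How many of the 10 arrangements give a sum as large as the observed 8?
sum(rank_sums >= 8)
# The exact probability: 2 out of 10, or 20 percent.
sum(rank_sums >= 8) / ncol(closer_pairs)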
In the case where there are too many for a complete permutation test, our resampling algorithm is as follows (though we’ll continue with the planets example):\n\nCompute the mean ranks of the two groups.\nCalculate the difference between the means computed in step 1.\nCreate a bucket containing the ranks from 1 to the number of observations (5, in the case of the planets)\nShuffle the ranks.\nSince we are working with the ranked data, we must draw without replacement, because there can only be one #3, one #7, and so on. So draw the number of observations corresponding to the number of observations — 2 “Closer” and 3 “Further.”\nCompute the mean ranks of the two simulated groups of planets.\nCalculate the difference between the means computed in step 5 and record.\nRepeat steps 4 through 7 perhaps 1000 times.\nCount how often the shuffled difference in ranks exceeds the observed difference from step 2 (1.67).\n\n\nStart of planet_densities notebook\n\nDownload notebook\nInteract\n\n\n\n# Steps 1 and 2.\nactual_mean_diff <- 8 / 2 - 7 / 3\n\n# Step 3\nranks <- 1:5\n\nn <- 10000\n\nmean_differences <- numeric(n)\n\nfor (i in 1:n) {\n # Step 4\n shuffled <- sample(ranks)\n # Step 5\n closer <- shuffled[1:2] # First 2\n further <- shuffled[3:5] # Last 3\n # Step 6\n mean_close <- mean(closer)\n mean_far <- mean(further)\n # Step 7\n mean_differences[i] <- mean_close - mean_far\n}\n\n# Step 9\nk <- sum(mean_differences >= actual_mean_diff)\nprob <- k / n\n\nmessage('Proportion of trials with mean difference >= 1.67: ',\n round(prob, 2))\n\nProportion of trials with mean difference >= 1.67: 0.2\n\n\nInterpretation: 20 percent of the time, random shufflings produced a difference in ranks as great as or greater than observed. Hence, on the strength of this evidence, we should not conclude that there is a statistically surprising difference in densities between the further planets and the closer planets.\nEnd of planet_densities notebook" + }, + { + "objectID": "testing_counts_1.html#conclusion", + "href": "testing_counts_1.html#conclusion", + "title": "21  Hypothesis-Testing with Counted Data, Part 1", + "section": "21.3 Conclusion", + "text": "21.3 Conclusion\nThis chapter has begun the actual work of testing hypotheses. The next chapter continues with discussion of somewhat more complex problems with counted data — more complex to think about, but no more difficult to actually treat mathematically with resampling simulation. If you have understood the general logic of the procedures used up until this point, you are in command of all the necessary conceptual knowledge to construct your own tests to answer any statistical question. A lot more practice, working on a variety of problems, obviously would help. But the key elements are simple: 1) Model the real situation accurately, 2) experiment with the model, and 3) compare the results of the model with the observed results.\n\n\n\n\nDixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to Statistical Analysis.”\n\n\nHodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic Concepts of Probability and Statistics. 2nd ed. San Francisco, California: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9.\n\n\nKahn, Harold A, and Christopher T Sempos. 1989. Statistical Methods in Epidemiology. Vol. 12. Monographs in Epidemiology and Biostatistics. New York: Oxford University Press. https://www.google.co.uk/books/edition/Statistical_Methods_in_Epidemiology/YERYAgAAQBAJ.\n\n\nMosteller, Frederick, and Robert E. K. Rourke. 
1973. Sturdy Statistics: Nonparametrics and Order Statistics. Addison-Wesley Publishing Company.\n\n\nShurtleff, Dewey. 1970. “Some Characteristics Related to the Incidence of Cardiovascular Disease and Death: Framingham Study, 16-Year Follow-up.” Section 26. Edited by William B. Kannel and Tavia Gordon. The Framingham Study: An Epidemiological Investigation of Cardiovascular Disease. Washington, D.C.: U.S. Government Printing Office. https://upload.wikimedia.org/wikipedia/commons/6/6d/The_Framingham_study_-_an_epidemiological_investigation_of_cardiovascular_disease_sec.26_1970_%28IA_framinghamstudye00kann_25%29.pdf.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory Statistics. 5th ed. New York: John Wiley & Sons." + }, + { + "objectID": "significance.html#the-logic-of-hypothesis-tests", + "href": "significance.html#the-logic-of-hypothesis-tests", + "title": "22  The Concept of Statistical Significance in Testing Hypotheses", + "section": "22.1 The logic of hypothesis tests", + "text": "22.1 The logic of hypothesis tests\nLet’s address the logic of hypothesis tests by considering a variety of examples in everyday thinking:\nConsider the nine-year-old who tells the teacher that the dog ate the homework. Why does the teacher not accept the child’s excuse? Clearly it is because the event would be too “unusual.” But why do we think that way?\nLet’s speculate that you survey a million adults, and only three report that they have ever heard of a real case where a dog ate somebody’s homework. You are a teacher, and a student comes in without homework and says that a dog ate the homework. It could have happened — your survey reports that it really has happened in three lifetimes out of a million. But the event happens only very infrequently .\nTherefore, you probably conclude that because the event is so unlikely, something else must have happened — and the likeliest alternative is that the student did not do the homework. The logic is that if an event seems very unlikely, it would therefore surprise us greatly if it were to actually happen, and therefore we assume that there must be a better explanation. This is why we look askance at unlikely coincidences when they are to someone’s benefit.\nThe same line of reasoning was the logic of John Arbuthnot’s hypothesis test (1710) about the ratio of births by sex in the first published hypothesis test, though his extension of logic to God’s design as an alternative hypothesis goes beyond the standard modern framework. It is also the implicit logic in the research on puerperal fever, cholera, and beri-beri, the data for which were shown in Chapter 17, though no explicit mention was made of probability in those cases.\nTwo students sat next to each other at an ACT college-entrance examination in Kentucky in 1987. Out of 219 questions, 211 of the answers were identical, including many that were wrong. Student A was a high school athlete in Kentucky who had failed two previous SAT exams, and Student B thought he saw Student A copying from him. Should one believe that Student A cheated? (The Washington Post , April 19, 1992, p. D2.)\nYou say to yourself: It would be most unlikely that the two test-takers would answer that many questions identically by chance — and we can compute how unlikely that event would be. Because that event is so unlikely, we therefore conclude that one or both cheated. And indeed, the testing service invalidated the athlete’s exam. 
On the other hand, if all the questions that were answered identically were correct , the result might not be unreasonable. If we knew in how many cases they made the same mistakes , the inquiry would have been clearer, but the newspaper did not contain those details.\nThe court is hearing a murder case. There is no eye-witness, and the evidence consists of such facts as the height and weight and age of the person charged, and other circumstantial evidence. Only one person in 50 million has such characteristics, and you find such a person. Will you convict the person, or will you believe that the evidence was just a coincidence? Of course the evidence might have occurred by bad luck, but the probability is very, very small (1 in 50 million). Will you therefore conclude that because the chance is so small, it is reasonable to assume that the person charged committed the crime?\nSometimes the unusual really happens — the court errs by judging that the wrong person did it, and that person goes to prison or even is executed. The best we can do is to make the criterion strict: “Beyond a reasonable doubt.” (People ask: What probability does that criterion represent? But the court will not provide a numerical answer.)\nSomebody says to you: I am going to deal out five cards and it will be a royal flush — ten, jack, queen, king, and ace of the same suit. The person deals the cards and lo and behold! the royal flush appears. Do you think the occurrence happened just by chance? No, you are likely to be very dubious that it happened by chance. Therefore, you believe there must be some other explanation — that the person fixed the cards, for example.\nNote: You don’t attach the same meaning to any other permutation (say 3, 6, 7, 7, and king of various suits), even though that permutation is just as rare — unless the person announced exactly that permutation in advance.\nIndeed, even if the person says nothing , you will be surprised at a royal flush, because this hand has meaning , whereas another given set of five cards do not have any special meaning.\nYou see six Volvos in one home’s driveway, and you conclude that it is a Volvo club meeting, or a Volvo salesperson’s meeting. Why? Because it is unlikely that six people not connected formally by Volvo ownership would be friends of the same person.\nTwo important points complicate the concept of statistical significance:\n\nWith a large enough sample, every treatment or variable will seem different from every other. Two faces of even a good die (say, “1” and “2”) will produce different results in the very very long run.\nStatistical significance does not imply economic or social significance. Two faces of a die may be statistically different in a huge sample of throws, but a 1/10,000 difference between them is too small to make an economic difference in betting. Statistical significance is only a filter . If it appears, one should then proceed to decide whether there is substantive significance.\n\nInterpreting statistical significance is sometimes complex, especially when the interpretation depends heavily upon your prior expectations — as it often does. For example, how should a basketball coach decide whether or not to bench a player for poor performance after a series of missed shots at the basket?\nConsider Coach John Thompson who, after Charles Smith missed 10 of 12 shots in the 1989 Georgetown-Notre Dame NCAA game, took Smith out of the game for a time (The Washington Post, March 20, 1989, p. C1). 
The scientific or decision problem is: Should the coach consider that Smith is not now a 47 percent shooter as he normally is, and therefore the coach should bench him? The statistical question is: How likely is a shooter with a 47 percent average to produce 10 of 12 misses? The key issue in the statistical question concerns the total number of shot attempts we should consider.\nWould Coach Thompson take Smith out of the game after he missed one shot? Clearly not. Why not? Because one “expects” Smith to miss a shot half the time, and missing one shot therefore does not seem unusual.\nHow about after Smith misses two shots in a row? For the same reason the coach still would not bench him, because this event happens “often” — more specifically, about once in every sequence of four shots.\nHow about after 9 misses out of ten shots? Notice the difference between this case and 9 females among ten calves. In the case of the calves, we expected half females because the experiment is a single isolated trial. The event considered by itself has a small enough probability that it seems unexpected rather than expected. (“Unexpected” seems to be closely related to “happens seldom” or “unusual” in our psychology.) And an event that happens seldom seems to call for explanation, and also seems to promise that it will yield itself to explanation by some unusual concatenation of forces. That is, unusual events lead us to think that they have unusual causes; that is the nub of the matter. (But on the other hand, one can sometimes benefit by paying attention to unusual events, as scientists know when they investigate outliers.)\nIn basketball shooting, we expect 47 percent of Smith’s individual shots to be successful, and we also expect that average for each set of shots. But we also expect some sets of shots to be far from that average because we observe many sets; such variation is inevitable. So when we see a single set of 9 misses in ten shots, we are not very surprised.\nBut how about 29 misses in 30 shots? At some point, one must start to pay attention. (And of course we would pay more attention if beforehand, and never at any other time, the player said, “I can’t see the basket today. My eyes are dim.”)\nSo, how should one proceed? Perhaps proceed the same way as with a coin that keeps coming down heads a very large proportion of the throws, over a long series of tosses: At some point you examine it to see if it has two heads. But if your investigation is negative, in the absence of an indication other than the behavior in question , you continue to believe that there is no explanation and you assume that the event is “chance” and should not be acted upon . In the same way, a coach might ask a player if there is an explanation for the many misses. But if the player answers “no,” the coach should not bench him. (There are difficulties here with truth-telling, of course, but let that go for now.)\nThe key point for the basketball case and other repetitive situations is not to judge that there is an unusual explanation from the behavior of a single sample alone , just as with a short sequence of stock-price changes.\nWe all need to learn that “irregular” (a good word here) sequences are less unusual than they seem to the naked intuition. A streak of 10 out of 12 misses for a 47 percent shooter occurs about 3 percent of the time. That is, about every 33 shots Smith takes, he will begin a sequence of 12 shots that will end with 3 or fewer baskets — perhaps once in every couple of games. 
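As a check on the figure just quoted, that a 47 percent shooter misses 10 or more of a given set of 12 shots about 3 percent of the time, here is a minimal R sketch (not one of the book's programs; the 10,000-sequence count is an arbitrary choice):

n_sequences <- 10000
# Number of baskets made in each simulated set of 12 shots by a 47 percent shooter.
makes <- rbinom(n_sequences, size = 12, prob = 0.47)
# Ten or more misses means 2 or fewer baskets made.
prop_bad <- sum(makes <= 2) / n_sequences
message('Proportion of 12-shot sets with 10 or more misses: ', round(prop_bad, 3))

The proportion should come out near 0.03, consistent with the "about 3 percent" figure above.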
This does not seem “very” unusual, perhaps. And if the coach treats each such case as unusual, he will be losing some of the services of a better player than he replaces him with.\nIn brief, how hard one should search for an explanation should depend on the probability of the event. But one should (almost) assume the absence of an explanation unless one actually finds it.\nBayesian analysis (Chapter 31) could be brought to bear upon the matter, bringing in your prior probabilities based on the knowledge of research that has shown that there is no such thing as a “hot hand” in basketball (see Chapter 14), together with some sort of cost-benefit error-loss calculation comparing Smith and the next best available player." + }, + { + "objectID": "significance.html#the-concept-of-statistical-significance", + "href": "significance.html#the-concept-of-statistical-significance", + "title": "22  The Concept of Statistical Significance in Testing Hypotheses", + "section": "22.2 The concept of statistical significance", + "text": "22.2 The concept of statistical significance\n“Significance level” is a common term in probability statistics. It corresponds roughly to the probability that the assumed benchmark universe could give rise to a sample as extreme as the observed sample by chance. The results of Example 16-1 would be phrased as follows: The hypothesis that the radiation treatment affects the sex of the fruit fly offspring is accepted as true at the probability level of .16 (sometimes stated as the 16 percent level of significance). (A more common way of expressing this idea would be to say that the hypothesis is not rejected at the .16 probability level or the 16 percent level of significance. But “not rejected” and “accepted” really do mean much the same thing, despite some arguments to the contrary.) This kind of statistical work is called hypothesis testing.\nThe question of which significance level should be considered “significant” is difficult. How great must a coincidence be before you refuse to believe that it is only a coincidence? It has been conventional in social science to say that if the probability that something happens by chance is less than 5 percent, it is significant. But sometimes the stiffer standard of 1 percent is used. Actually, any fixed cut-off significance level is arbitrary. (And even the whole notion of saying that a hypothesis “is true” or “is not true” is sometimes not useful.) Whether a one-tailed or two-tailed test is used will influence your significance level, and this is why care must be taken in making that choice.\n\n\n\n\nArbuthnot, John. 1710. “An Argument for Divine Providence, Taken from the Constant Regularity Observ’d in the Births of Both Sexes. By Dr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of the College of Physitians and the Royal Society.” Philosophical Transactions of the Royal Society of London 27 (328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011." + }, + { + "objectID": "testing_counts_2.html#comparisons-among-more-than-two-samples-of-counted-data", + "href": "testing_counts_2.html#comparisons-among-more-than-two-samples-of-counted-data", + "title": "23  The Statistics of Hypothesis-Testing with Counted Data, Part 2", + "section": "23.1 Comparisons among more than two samples of counted data", + "text": "23.1 Comparisons among more than two samples of counted data\nExample 17-1: Do Any of Four Treatments Affect the Sex Ratio in Fruit Flies? 
(When the Benchmark Universe Proportion is Known, Is the Propor tion of the Binomial Population Affected by Any of the Treatments?) (Program “4treat”)\nSuppose that, instead of experimenting with just one type of radiation treatment on the flies (as in Example 15-1), you try four different treatments, which we shall label A, B, C, and D. Treatment A produces fourteen males and six females, but treatments B, C, and D produce ten, eleven, and ten males, respectively. It is immediately obvious that there is no reason to think that treatment B, C, or D affects the sex ratio. But what about treatment A?\nA frequent and dangerous mistake made by young scientists is to scrounge around in the data for the most extreme result, and then treat it as if it were the only result. In the context of this example, it would be fallacious to think that the probability of the fourteen-males-to-six females split observed for treatment A is the same as the probability that we figured for a single experiment in Example 15-1. Instead, we must consider that our benchmark universe is composed of four sets of twenty trials, each trial having a 50-50 probability of being male. We can consider that our previous trials 1-4 in Example 15-1 constitute a single new trial, and each subsequent set of four previous trials constitute another new trial. We then ask how likely a new trial of our sets of twenty flips is to produce one set with fourteen or more of one or the other sex.\nLet us make the procedure explicit, but using random numbers instead of coins this time:\nStep 1. Let “1-5” = males, “6-0” = females\nStep 2. Choose four groups of twenty numbers. If for any group there are 14 or more males, record “yes”; if 13 or less, record “no.”\nStep 3. Repeat perhaps 1000 times.\nStep 4. Calculate the proportion “yes” in the 1000 trials. This proportion estimates the probability that a fruit fly population with a proportion of 50 percent males will produce as many as 14 males in at least one of four samples of 20 flies.\nWe begin the trials with data as in Table 17-1. In two of the six simulation trials, more than one sample shows 14 or more males. Another trial shows fourteen or more females . Without even concerning ourselves about whether we should be looking at males or females, or just males, or needing to do more trials, we can see that it would be very common indeed to have one of four treatments show fourteen or more of one sex just by chance. This discovery clearly indicates that a result that would be fairly unusual (three in twenty-five) for a single sample alone is commonplace in one of four observed samples.\nTable 17-1\nNumber of “Males” in Groups of 20 (Based on Random Numbers)\nTrial Group A Group B Group C Group D Yes / No\n>= 14 or <= 6\n\n\n\n1\n11\n12\n8\n12\nNo\n\n\n2\n12\n7\n9\n8\nNo\n\n\n3\n6\n10\n10\n10\nYes\n\n\n4\n9\n9\n12\n7\nNo\n\n\n5\n14\n12\n13\n10\nYes\n\n\n6\n11\n14\n9\n7\nYes\n\n\n\nA key point of the RESAMPLING STATS program “4TREAT” is that each sample consists of four sets of 20 randomly generated hypothetical fruit flies. And if we consider 1000 trials, we will be examining 4000 sets of 20 fruit flies.\nIn each trial we GENERATE up to 4 random samples of 20 fruit flies, and for each, we count the number of males (“1”s) and then check whether that group has more than 13 of either sex (actually, more than 13 “1”s or less than 7 “1”s). 
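For readers who want to try this in R, here is a minimal sketch of the Step 1-4 simulation above; the walk-through of the RESAMPLING STATS program "4TREAT" continues below. (The 10,000-trial count and the use of rbinom in place of drawing random digits are choices of this sketch, not the book's program.)

n_trials <- 10000
extreme <- numeric(n_trials)
for (i in 1:n_trials) {
    # Four groups of 20 flies, each fly male with probability 0.5.
    males <- rbinom(4, size = 20, prob = 0.5)
    # "Yes" if any group shows 14 or more of either sex (14 or more, or 6 or fewer, males).
    extreme[i] <- any(males >= 14 | males <= 6)
}
message('Proportion of trials with at least one extreme group: ',
        round(sum(extreme) / n_trials, 2))

As with the hand-simulated table above, the proportion is large; an extreme-looking group among four is commonplace under the benchmark universe.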
If it does, then we change J to 1, which informs us that for this sample, at least 1 group of 20 fruit flies had results as unusual as the results from the fruit flies exposed to the four treatments.\nAfter the 1000 runs are made, we count the number of trials where one sample had a group of fruit flies with 14 or more of either sex, and PRINT the results.\n\n' Program file: \"4treat.rss\"\n\nREPEAT 1000\n ' Do 1000 experiments.\n COPY (0) j\n ' j indicates whether we have obtained a trial group with 14 or more of\n ' either sex. We start at \"0\" (= no).\n REPEAT 4\n ' Repeat the following steps 4 times to constitute 4 trial groups of 20\n ' flies each.\n GENERATE 20 1,2 a\n ' Generate randomly 20 \"1\"s and \"2\"s and put them in a; let \"1\"\n\n ' = male.\n COUNT a =1 b\n ' Count the number of males, put the result in b.\n IF b >= 14\n ' If the result is 14 or more males, then\n END\n COPY (1) j\n ' Set the indicator to \"1.\"\n\n ' End the IF condition.\n IF b <= 6\n ' If the result is 6 or fewer males (the same as 14 or more females), then\n END\n COPY (1) j\n ' Set the indicator to \"1.\"\n\n ' End the IF condition.\n END\nEND\n' End the procedure for one group, go back and repeat until all four\n' groups have been done.\nSCORE j z\n' j now tells us whether we got a result as extreme as that observed (j =\n' \"1\" if we did, j = \"0\" if not). We must keep track in z of this result\n' for each experiment.\n\n' End one experiment, go back and repeat until all 1000 are complete.\nCOUNT z =1 k\n' Count the number of experiments in which we had results as extreme as\n' those observed.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the result.\n\n' Note: The file \"4treat\" on the Resampling Stats software disk contains\n' this set of commands.\nIn one set of 1000 trials, there were more than 13 or less than 7 males 33 percent of the time — clearly not an unusual occurrence.\nExample 17-2: Do Four Psychological Treatments Differ in Effectiveness? (Do Several Two-Outcome Samples Differ Among Themselves in Their Propor tions? (Program “4treat1”)\nConsider four different psychological treatments designed to rehabilitate juvenile delinquents. Instead of a numerical test score, there is only a “yes” or a “no” answer as to whether the juvenile has been rehabilitated or has gotten into trouble again. Label the treatments P, R, S, and T, each of which is administered to a separate group of twenty juvenile delinquents. The number of rehabilitations per group has been: P, 17; R, 10; S, 10; T, 7. Is it improbable that all four groups come from the same universe?\nThis problem is like the placebo vs. cancer-cure problem, but now there are more than two samples. It is also like the four-sample irradiated-fruit flies example (Example 17-1), except that now we are not asking whether any or some of the samples differ from a given universe (50-50 sex ratio in that case). Rather, we are now asking whether there are differences among the samples themselves. Please keep in mind that we are still dealing with two-outcome (yes-or-no, well-or-sick) problems. Later we shall take up problems that are similar except that the outcomes are “quantitative.”\nIf all four groups were drawn from the same universe, that universe has an estimated rehabilitation rate of 17/20 + 10/20 + 10/20 + 7/20 = 44/80 = 55/100, because the observed data taken as a whole constitute our best guess as to the nature of the universe from which they come — again, if they all come from the same universe. 
(Please think this matter over a bit, because it is important and subtle. It may help you to notice the absence of any other information about the universe from which they have all come, if they have come from the same universe.)\nTherefore, select twenty two-digit numbers for each group from the random-number table, marking “yes” for each number “1-55” and “no” for each number “56-100.” Conduct a number of such trials. Then count the proportion of times that the difference between the highest and lowest groups is larger than the widest observed difference, the difference between P and T (17-7 = 10). In Table 17-2, none of the first six trials shows anywhere near as large a difference as the observed range of 10, suggesting that it would be rare for four treatments that are “really” similar to show so great a difference. There is thus reason to believe that P and T differ in their effects.\nTable 7-2\nResults of Six Random Trials for Problem “Delinquents”\n\n\n\nTrial\nP\nR\nS\nT\nLargest Minus Smallest\n\n\n1\n11\n9\n8\n12\n4\n\n\n2\n10\n10\n12\n12\n2\n\n\n3\n9\n12\n8\n12\n4\n\n\n4\n9\n11\n12\n10\n3\n\n\n5\n10\n10\n11\n12\n1\n\n\n6\n11\n11\n9\n11\n2\n\n\n\nThe strategy of the RESAMPLING STATS solution to “Delinquents” is similar to the strategy for previous problems in this chapter. The benchmark (null) hypothesis is that the treatments do not differ in their effects observed, and we estimate the probability that the observed results would occur by chance using the benchmark universe. The only new twist is that we must instruct the computer to find the groups with the highest and the lowest numbers of rehabilitations.\nUsing RESAMPLING STATS we GENERATE four “treatments,” each represented by 20 numbers, each number randomly selected between 1 and 100. We let 1-55 = success, 56-100\n= failure. Follow along in the program for the rest of the procedure:\n\n' Program file: \"4treat1.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 20 1,100 a\n ' The first treatment group, where \"1-55\" = success, \"56-100\" = failure\n GENERATE 20 1,100 b\n ' The second group\n GENERATE 20 1,100 c\n ' The third group\n GENERATE 20 1,100 d\n ' The fourth group\n COUNT a <=55 aa\n ' Count the first group's successes\n COUNT b <=55 bb\n ' Same for second, third & fourth groups\n COUNT c <=55 cc\n COUNT d <=55 dd\nEND\nSUBTRACT aa bb ab\n' Now find all the pairwise differences in successes among the groups\nSUBTRACT aa cc ac\nSUBTRACT aa dd ad\nSUBTRACT bb cc bc\nSUBTRACT bb dd bd\nSUBTRACT cc dd cd\nCONCAT ab ac ad bc bd cd e\n' Concatenate, or join, all the differences in a single vector e\nABS e f\n' Since we are interested only in the magnitude of the difference, not its\n' direction, we take the ABSolute value of all the differences.\nMAX f g\n' Find the largest of all the differences\nSCORE g z\n' Keep score of the largest\n\n' End a trial, go back and repeat until all 1000 are complete.\nCOUNT z >=10 k\n' How many of the trials yielded a maximum difference greater than the\n' observed maximum difference?\nDIVIDE k 1000 kk\n' Convert to a proportion\nPRINT kk\n' Note: The file \"4treat1\" on the Resampling Stats software disk contains\n' this set of commands.\nOne percent of the experiments with randomly generated treatments from a common success rate of .55 produced differences in excess of the observed maximum difference (10).\nAn alternative approach to this problem would be to deal with each result’s departure from the mean, rather than the largest difference among the pairs. 
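Before turning to that alternative, here is a minimal R sketch of the largest-minus-smallest test just described; rbinom stands in for selecting twenty two-digit random numbers per group and counting those in 1-55, and the 10,000-trial count is a choice of this sketch rather than the book's program:

n_trials <- 10000
max_diffs <- numeric(n_trials)
for (i in 1:n_trials) {
    # Four groups of 20 delinquents, each with a 55 percent chance of rehabilitation.
    rehabs <- rbinom(4, size = 20, prob = 0.55)
    # Largest minus smallest number of rehabilitations across the four groups.
    max_diffs[i] <- max(rehabs) - min(rehabs)
}
k <- sum(max_diffs >= 10)  # observed range: 17 - 7 = 10
message('Proportion of trials with a range of 10 or more: ', round(k / n_trials, 3))

The proportion should be small, on the order of the one percent found with the RESAMPLING STATS program above.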
Once again, we want to deal with absolute departures, since we are interested only in magnitude of difference. We could take the absolute value of the differences, as above, but we will try something different here. Squaring the differences also renders them all positive: this is a common approach in statistics.\nThe first step is to examine our data and calculate this measure: The mean is 11, the differences are 6, 1, 1, and 4, the\nsquared differences are 36, 1, 1, and 16, and their sum is 54. Our experiment will be, as before, to constitute four groups of 20 at random from a universe with a 55 percent rehabilitation rate. We then calculate this same measure for the random groups. If it is frequently larger than 54, then we conclude that a uniform cure rate of 55 percent could easily have produced the observed results. The program that follows also GENERATES the four treatments by using a REPEAT loop, rather than spelling out the GENERATE command 4 times as above. In RESAMPLING STATS:\n\n' Program file: \"testing_counts_2_02.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n REPEAT 4\n ' Repeat the following steps 4 times to constitute 4 groups of 20 and\n ' count their rehabilitation rates.\n GENERATE 20 1,100 a\n ' Randomly generate 20 numbers between 1 and 100 and put them in a; let\n ' 1-55 = rehabilitation, 56-100 no rehab.\n COUNT a between 1 55 b\n ' Count the number of rehabs, put the result in b.\n SCORE b w\n ' Keep track of the 4 rehab rates for the group of 20.\n END\n ' End the procedure for one group of 20, go back and repeat until all 4\n ' are done.\n MEAN w x\n ' Calculate the mean\n SUMSQRDEV w x y\n ' Find the sum of squared deviations between group rehab rates (w) and the\n ' overall rate (x).\n SCORE y z\n ' Keep track of the result for each trial.\n CLEAR w\n ' Erase the contents of w to prepare for the next trial.\nEND\n' End one experiment, go back and repeat until all 1000 are complete.\nHISTOGRAM z\n' Produce a histogram of trial results.\n4 Treatments\n\nsum of squared differences\nFrom this histogram, we see that in only 1 percent of the cases did our trial sum of squared differences equal or exceed 54, confirming our conclusion that this is an unusual result. We can have RESAMPLING STATS calculate this proportion:\n\n' Program file: \"4treat2.rss\"\n\nCOUNT z >= 54 k\n' Determine how many trials produced differences as great as those\n' observed.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the results.\n\n' Note: The file \"4treat2\" on the Resampling Stats software disk contains\n' this set of commands.\nThe conventional way to approach this problem would be with what is known as a “chi-square test.”\nExample 17-3: Three-way Comparison\nIn a national election poll of 750 respondents in May, 1992, George Bush got 36 percent of the preferences (270 voters), Ross Perot got 30 percent (225 voters), and Bill Clinton got 28 percent (210 voters) ( Wall Street Journal, October 29, 1992, A16). Assuming that the poll was representative of actual voting, how likely is it that Bush was actually behind and just came out ahead in this poll by chance? Or to put it differently, what was the probability that Bush actually had a plurality of support, rather than that his apparent advantage was a matter of sampling variability? 
We test this by constructing a universe in which Bush is slightly behind (in practice, just equal), and then drawing samples to see how likely it is that those samples will show Bush ahead.\nWe must first find that universe — among all possible universes that yield a conclusion contrary to the conclusion shown by the data, and one in which we are interested — that has the highest probability of producing the observed sample. With a two-person race the universe is obvious: a universe that is evenly split except for a single vote against “our” candidate who is now in the lead, i.e. in practice a 50-50 universe. In that simple case we then ask the probability that that universe would produce a sample as far out in the direction of the conclusion drawn from the observed sample as the observed sample.\nWith a three-person race, however, the decision is not obvious (and if this problem becomes too murky for you, skip over it; it is included here more for fun than anything else). And there is no standard method for handling this problem in conventional statistics (a solution in terms of a confidence interval was first offered in 1992, and that one is very complicated and not very satisfactory to me). But the sort of thinking that we must labor to accomplish is also required for any conventional solution; the difficulty is inherent in the problem, rather than being inherent in resampling, and resampling will be at least as simple and understandable as any formulaic approach.\nThe relevant universe is (or so I think) a universe that is 35 Bush — 35 Perot — 30 Clinton (for a race where the poll indicates a 36-30-28 split); the 35-35-30 universe is of interest because it is the universe that is closest to the observed sample that does not provide a win for Bush (leaving out the “undecideds” for convenience); it is roughly analogous to the 50-50 split in the two-person race, though a clear-cut argument would require a lot more discussion. A universe that is split 34-34-32, or any of the other possible universes, is less likely to produce a 36-30-28 sample (such as was observed) than is a 35-35-30 universe, I believe, but that is a checkable matter. (In technical terms, it might be a “maximum likelihood universe” that we are looking for.)\nWe might also try a 36-36-28 universe to see if that produces a result very different than the 35-35-30 universe.\nAmong those universes where Bush is behind (or equal), a universe that is split 50-50-0 (with just one extra vote for the closest opponent to Bush) would be the most likely to produce a 6 percent difference between the top two candidates by chance, but we are not prepared to believe that the voters are split in such a fashion. This assumption shows that we are bringing some judgments to bear from outside the observed data.\nFor now, the point is not how to discover the appropriate benchmark hypothesis, but rather its criterion — which is, I repeat, that universe (among all possible universes) that yields a conclusion contrary to the conclusion shown by the data (and in which we are interested) and that (among such universes that yield such a conclusion) has the highest probability of producing the observed sample.\nLet’s go through the logic again: 1) Bush apparently has a 6 percent lead over the second-place candidate. 2) We ask if the second-place candidate might be ahead if all voters were polled. 
We test that by setting up a universe in which the second-place candidate is infinitesimally ahead (in practice, we make the two top candidates equal in our hypothetical universe). And we make the third-place candidate somewhere close to the top two candidates. 3) We then draw samples from this universe and observe how often the result is a 6 percent lead for the top candidate (who starts off just below equal in the universe).\nFrom here on, the procedure is straightforward: Determine how likely that universe is to produce a sample as far (or further) away in the direction of “our” candidate winning. (One could do something like this even if the candidate of interest were not now in the lead.)\nThis problem teaches again that one must think explicitly about the choice of a benchmark hypothesis. The grounds for the choice of the benchmark hypothesis should precede the program, or should be included as an extended comment within the program.\nThis program embodies the previous line of thought.\n\n' Program file: \"testing_counts_2_04.rss\"\n\nURN 35#1 35#2 30#3 univ 1= Bush, 2= Perot, 3=Clinton\nREPEAT 1000\n SAMPLE 750 univ samp\n ' Take a sample of 750 votes\n COUNT samp =1 bush\n ' Count the Bush voters, etc.\n COUNT samp =2 pero\n ' Perot voters\n COUNT samp =3 clin\n ' Clinton voters\n CONCAT pero clin others\n ' Join Perot & Clinton votes\n MAX others second\n ' Find the larger of the other two\n SUBTRACT bush second d\n ' Find Bush's margin over 2nd\n SCORE d z\nEND\nHISTOGRAM z\nCOUNT z >=46 m\n' Compare to the observed margin in the sample of 750 corresponding to a 6\n' percent margin by Bush over 2nd place finisher (rounded)\nDIVIDE m 1000 mm\nPRINT mm\n\n\n\nFigure 23.1: Samples of 750 Voters:\n\n\nThe result is — Bush’s margin over 2nd (mm) = 0.018.\nWhen we run this program with a 36-36-28 split, we also get a similar result — 2.6 percent. That is, the analysis shows a probability of only 2.6 percent that Bush would score a 6 percentage point “victory” in the sample, by chance, if the universe were split as specified. So Bush could feels reasonably confident that at the time the poll was taken, he was ahead of the other two candidates." + }, + { + "objectID": "testing_counts_2.html#paired-comparisons-with-counted-data", + "href": "testing_counts_2.html#paired-comparisons-with-counted-data", + "title": "23  The Statistics of Hypothesis-Testing with Counted Data, Part 2", + "section": "23.2 Paired Comparisons With Counted Data", + "text": "23.2 Paired Comparisons With Counted Data\nExample 17-4: The Pig Rations Again, But Comparing Pairs of Pigs (Paired-Comparison Test) (Program “Pigs2”)\nTo illustrate how several different procedures can reasonably be used to deal with a given problem, here is another way to decide whether pig ration A is “really” better: We can assume that the order of the pig scores listed within each ration group is random — perhaps the order of the stalls the pigs were kept in, or their alphabetical-name order, or any other random order not related to their weights . Match the first pig eating ration A with the first pig eating ration B, and also match the second pigs, the third pigs, and so forth. Then count the number of matched pairs on which ration A does better. On nine of twelve pairings ration A does better, that is, 31.0 > 26.0, 34.0 > 24.0, and so forth.\nNow we can ask: If the two rations are equally good, how often will one ration exceed the other nine or more times out of twelve, just by chance? 
This is the same as asking how often either heads or tails will come up nine or more times in twelve tosses. (This is a “two-tailed” test because, as far as we know, either ration may be as good as or better than the other.) Once we have decided to treat the problem in this manner, it is quite similar to Example 15-1 (the first fruitfly irradiation problem). We ask how likely it is that the outcome will be as far away as the observed outcome (9 “heads” of 12) from 6 of 12 (which is what we expect to get by chance in this case if the two rations are similar).\nSo we conduct perhaps fifty trials as in Table 17-3, where an asterisk denotes nine or more heads or tails.\nStep 1. Let odd numbers equal “A better” and even numbers equal “B better.”\nStep 2. Examine 12 random digits and check whether 9 or more, or 3 or less, are odd. If so, record “yes,” otherwise “no.”\nStep 3. Repeat step 2 fifty times.\nStep 4. Compute the proportion “yes,” which estimates the probability sought.\nThe results are shown in Table 17-3.\nIn 8 of 50 simulation trials, one or the other ration had nine or more tosses in its favor. Therefore, we estimate the probability to be .16 (eight of fifty) that samples this different would be generated by chance if the samples came from the same universe.\nTable 17-3\nResults From Fifty Simulation Trials Of The Problem “Pigs2”\n\n\n\n\n\n\n\n\n\n\n\nTrial\nHeads” or Odds”\n(Ration A)\n“Tails” or “Evems”\n(Ration B)\nTrial\n“Heads” or Odds”\n(Ration A)\n“Tails” or “Evens”\n(Ration B)\n\n\n1\n6\n6\n26\n6\n6\n\n\n2\n4\n8\n27\n5\n7\n\n\n3\n6\n6\n28\n7\n5\n\n\n4\n7\n5\n29\n4\n8\n\n\n* 5\n3\n9\n30\n6\n6\n\n\n6\n5\n7\n* 31\n9\n3\n\n\n7\n8\n4\n* 32\n2\n10\n\n\n8\n6\n6\n33\n7\n5\n\n\n9\n7\n5\n34\n5\n7\n\n\n*10\n9\n3\n35\n6\n6\n\n\n11\n7\n5\n36\n8\n4\n\n\n*12\n3\n9\n37\n6\n6\n\n\n13\n5\n7\n38\n4\n8\n\n\n14\n6\n6\n39\n5\n7\n\n\n15\n6\n6\n40\n8\n4\n\n\n16\n8\n4\n41\n5\n7\n\n\n17\n5\n7\n42\n6\n6\n\n\n*18\n9\n3\n43\n5\n7\n\n\n19\n6\n6\n44\n7\n5\n\n\n20\n7\n5\n45\n6\n6\n\n\n21\n4\n8\n46\n4\n8\n\n\n* 22\n10\n2\n47\n5\n7\n\n\n23\n6\n6\n48\n5\n7\n\n\n24\n5\n7\n49\n8\n4\n\n\n*25\n3\n9\n50\n7\n5\n\n\n\nNow for a RESAMPLING STATS program and results. “Pigs2” is different from “Pigs1” in that it compares the weight-gain results of pairs of pigs, instead of simply looking at the rankings for weight gains.\nThe key to “Pigs2” is the GENERATE statement. If we assume that ration A does not have an effect on weight gain (which is the “benchmark” or “null” hypothesis), then the results of the actual experiment would be no different than if we randomly GENERATE numbers “1” and “2” and treat a “1” as a larger weight gain for the ration A pig, and a “2” as a larger weight gain for the ration B pig. Both events have a .5 chance of occurring for each pair of pigs because if the rations had no effect on weight gain (the null hypothesis), ration A pigs would have larger weight gains about half of the time. The next step is to COUNT the number of times that the weight gains of one group (call it the group fed with ration A) were larger than the weight gains of the other (call it the group fed with ration B). The complete program follows:\n\n' Program file: \"pigs2.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 12 1,2 a\n ' Generate randomly 12 \"1\"s and \"2\"s, put them in a. 
This represents 12\n ' \"pairings\" where \"1\" = ration a \"wins,\" \"2\" = ration b = \"wins.\"\n COUNT a =1 b\n ' Count the number of \"pairings\" where ration a won, put the result in b.\n SCORE b z\n ' Keep track of the result in z\nEND\n' End the trial, go back and repeat until all 100 trials are complete.\nCOUNT z >= 9 j\n' Determine how often we got 9 or more \"wins\" for ration a.\nCOUNT z <= 3 k\n' Determine how often we got 3 or fewer \"wins\" for ration a.\nADD j k m\n' Add the two together\nDIVIDE m 100 mm\n' Convert to a proportion\nPRINT mm\n' Print the result.\n\n' Note: The file \"pigs2\" on the Resampling Stats software disk contains\n' this set of commands.\nNotice how we proceeded in Examples 15-6 and 17-4. The data were originally quantitative — weight gains in pounds for each pig. But for simplicity we classified the data into simpler counted-data formats. The first format (Example 15-6) was a rank order, from highest to lowest. The second format (Example 17-4) was simply higher-lower, obtained by randomly pairing the observations (using alphabetical letter, or pig’s stall number, or whatever was the cause of the order in which the data were presented to be random). Classifying the data in either of these ways loses some information and makes the subsequent tests somewhat cruder than more refined analysis could provide (as we shall see in the next chapter), but the loss of efficiency is not crucial in many such cases. We shall see how to deal directly with the quantitative data in Chapter 24.\nExample 17-5: Merged Firms Compared to Two Non-Merged Groups\nIn a study by Simon, Mokhtari, and Simon (1996), a set of 33 advertising agencies that merged over a period of years were each compared to entities within two groups (each also of 33 firms) that did not merge; one non-merging group contained firms of roughly the same size as the final merged entities, and the other non-merging group contained pairs of non-merging firms whose total size was roughly the same as the total size of the merging entities.\nThe idea behind the matching was that each pair of merged firms was compared against\n\na pair of contemporaneous firms that were roughly the same size as the merging firms before the merger, and\na single firm that was roughly the same size as the merged entity after the merger.\nHere (Table 17-4) are the data (provided by the authors):\nTable 17-4\nRevenue Growth In Year 1 Following Merger\nSet # Merged Match1 
Match2\n\n\n\n1\n-0.20000\n0.02564\n0.000000\n\n\n2\n-0.34831\n-0.12500\n0.080460\n\n\n3\n0.07514\n0.06322\n-0.023121\n\n\n4\n0.12613\n-0.04199\n0.164671\n\n\n5\n-0.10169\n0.08000\n0.277778\n\n\n6\n0.03784\n0.14907\n0.430168\n\n\n7\n0.11616\n0.15183\n0.142857\n\n\n8\n-0.09836\n0.03774\n0.040000\n\n\n9\n0.02137\n0.07661\n.0111111\n\n\n10\n-0.01711\n0.28434\n0.189139\n\n\n11\n-0.36478\n0.13907\n0.038869\n\n\n12\n0.08814\n0.03874\n0.094792\n\n\n13\n-0.26316\n0.05641\n0.045139\n\n\n14\n-0.04938\n0.05371\n0.008333\n\n\n15\n0.01146\n0.04805\n0.094817\n\n\n16\n0.00975\n0.19816\n0.060929\n\n\n17\n0.07143\n0.42083\n-0.024823\n\n\n18\n0.00183\n0.07432\n0.053191\n\n\n19\n0.00482\n-0.00707\n0.050083\n\n\n20\n-0.05399\n0.17152\n0.109524\n\n\n21\n0.02270\n0.02788\n-0.022456\n\n\n22\n0.05984\n0.04857\n0.167064\n\n\n23\n-0.05987\n0.02643\n0.020676\n\n\n24\n-0.08861\n-0.05927\n0.077067\n\n\n25\n-0.02483\n-0.01839\n0.059633\n\n\n26\n0.07643\n0.01262\n0.034635\n\n\n27\n-0.00170\n-0.04549\n0.053571\n\n\n28\n-0.21975\n0.34309\n0.042789\n\n\n29\n0.38237\n0.22105\n0.115773\n\n\n30\n-0.00676\n0.25494\n0.237047\n\n\n31\n-0.16298\n0.01124\n0.190476\n\n\n32\n0.19182\n0.15048\n0.151994\n\n\n33\n0.06116\n0.17045\n0.093525\n\n\n\nComparisons were made in several years before and after the mergings to see whether the merged entities did better or worse than the non-merging entities they were matched with by the researchers, but for simplicity we may focus on just one of the more important years in which they were compared — say, the revenue growth rates in the year after the merger.\nHere are those average revenue growth rates for the three groups:\nYear’s rev. growth\n\n\n\nMERGED\n-0.0213\n\n\nMATCH 1\n0.092085\n\n\nMATCH 2\n0.095931\n\n\n\nWe could do a general test to determine whether there are differences among the means of the three groups, as was done in the “Differences Among 4 Pig Rations” problem (Section 24.0.1). However, we note that there may be considerable variation from one matched set to another — variation which can obscure the overall results if we resample from a large general bucket.\nTherefore, we use the following resampling procedure that maintains the separation between matched sets by converting each observation into a rank (1, 2 or 3) within the matched set.\nHere (Table 17-5) are those ranks:\nTable 17-5\nRanked Within Matched Set (1 = worst, 3 = best)\nSet # Merged Match1 Match2\n\n\n\n1\n1\n3\n2\n\n\n2\n1\n2\n3\n\n\n3\n3\n2\n1\n\n\n4\n2\n1\n3\n\n\n5\n1\n2\n3\n\n\n6\n1\n3\n2\n\n\n7\n1\n3\n2\n\n\n8\n1\n2\n3\n\n\n9\n1\n2\n3\n\n\n10\n1\n2\n3\n\n\n11\n1\n3\n2\n\n\n12\n2\n1\n3\n\n\n13\n1\n3\n2\n\n\n14\n1\n3\n2\n\n\n15\n1\n2\n3\n\n\n16\n1\n3\n2\n\n\n17\n2\n3\n1\n\n\n18\n1\n3\n2\n\n\n\n\n\n\nSet #\nMerged\nMatch1\nMatch2\n\n\n19\n2\n1\n3\n\n\n20\n1\n3\n2\n\n\n21\n2\n2\n3\n\n\n22\n2\n2\n3\n\n\n23\n1\n3\n2\n\n\n24\n1\n2\n3\n\n\n25\n1\n2\n3\n\n\n26\n3\n1\n2\n\n\n27\n2\n1\n3\n\n\n28\n1\n3\n2\n\n\n29\n3\n2\n1\n\n\n30\n1\n3\n2\n\n\n31\n1\n2\n3\n\n\n32\n3\n1\n2\n\n\n33\n1\n3\n2\n\n\n\nThese are the average ranks for the three groups (1 = worst, 3\n= best):\n\n\n\nMERGED\n1.45\n\n\nMATCH 1\n2.18\n\n\nMATCH 2\n2.36\n\n\n\nIs it possible that the merged group received such a low (poor) average ranking just by chance? The null hypothesis is that the ranks within each set were assigned randomly, and that “merged” came out so poorly just by chance. 
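The next paragraphs spell out the simulation procedure and give a RESAMPLING STATS program; as a preview, here is a minimal R sketch of the same test (the 10,000-trial count is a choice of this sketch):

n_trials <- 10000
mean_ranks <- numeric(n_trials)
for (i in 1:n_trials) {
    # Assign each of the 33 matched sets a rank of 1, 2 or 3 at random.
    ranks <- sample(1:3, size = 33, replace = TRUE)
    mean_ranks[i] <- mean(ranks)
}
k <- sum(mean_ranks <= 1.45)
message('Proportion of trials with a mean rank of 1.45 or lower: ', k / n_trials)

As with the program below, random assignment essentially never produces an average rank as low as the observed 1.45.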
The following procedure simulates random assignment of ranks to the “merged” group:\n\nRandomly select 33 integers between “1” and “3” (inclusive).\nFind the average rank & record.\nRepeat steps 1 and 2, say, 1000 times.\nFind out how often the average rank is as low as 1.45\n\n\nHere’s a RESAMPLING STATS program (“merge.sta”):\n\n' Program file: \"testing_counts_2_06.rss\"\n\nREPEAT 1000\n GENERATE 33 (1 2 3) ranks\n MEAN ranks ranksum\n SCORE ranksum z\nEND\nHISTOGRAM z\nCOUNT z <=1.45 k\nDIVIDE k 1000 kk\nPRINT kk\n\nResult: kk = 0\nInterpretation: 1000 random selections of 33 ranks never produced an average as low as the observed average. Therefore we rule out chance as an explanation for the poor ranking of the merged firms.\nExactly the same technique might be used in experimental medical studies wherein subjects in an experimental group are matched with two different entities that receive placebos or control treatments.\nFor example, there have been several recent three-way tests of treatments for depression: drug therapy versus cognitive therapy versus combined drug and cognitive therapy. If we are interested in the combined drug-therapy treatment in particular, comparing it to standard existing treatments, we can proceed in the same fashion as in the merger problem.\nWe might just as well consider the real data from the merger as hypothetical data for a proposed test in 33 triplets of people that have been matched within triplet by sex, age, and years of education. The three treatments were to be chosen randomly within each triplet.\nAssume that we now switch scales from the merger data, so that #1 = best and #3 = worst, and that the outcomes on a series of tests were ranked from best (#1) to worst (#3) within each triplet. Assume that the combined drug-and-therapy regime has the best average rank. How sure can we be that the observed result would not occur by chance? Here are the data from the merger study, seen here as Table 17-5-b:\nTable 17-5-b\nRanked Therapies Within Matched Patient Triplets\n(hypothetical data identical to merger data) (1 = best, 3 = worst)\nTriplet # Therapy Only Combined Drug Only\n\n\n\n1\n1\n3\n2\n\n\n2\n1\n2\n3\n\n\n3\n3\n2\n1\n\n\n4\n2\n1\n3\n\n\n5\n1\n2\n3\n\n\n6\n1\n3\n2\n\n\n7\n1\n3\n2\n\n\n8\n1\n2\n3\n\n\n9\n1\n2\n3\n\n\n10\n1\n2\n3\n\n\n11\n1\n3\n2\n\n\n12\n2\n1\n3\n\n\n13\n1\n3\n2\n\n\n14\n1\n3\n2\n\n\n15\n1\n2\n3\n\n\n16\n1\n3\n2\n\n\n17\n2\n3\n1\n\n\n18\n1\n3\n2\n\n\n19\n2\n1\n3\n\n\n20\n1\n3\n2\n\n\n21\n2\n1\n3\n\n\n22\n2\n1\n3\n\n\n23\n1\n3\n2\n\n\n24\n1\n2\n3\n\n\n25\n1\n2\n3\n\n\n26\n3\n1\n2\n\n\n27\n2\n1\n3\n\n\n28\n1\n3\n2\n\n\n29\n3\n2\n1\n\n\n30\n1\n3\n2\n\n\n31\n1\n2\n3\n\n\n32\n3\n1\n2\n\n\n33\n1\n3\n2\n\n\n\nThese are the average ranks for the three groups (“1” = best, “3”= worst):\n\n\n\nCombined\n1.45\n\n\nDrug\n2.18\n\n\nTherapy\n2.36\n\n\n\nIn these hypothetical data, the average rank for the drug and therapy regime is 1.45. Is it likely that the regimes do not “really” differ with respect to effectiveness, and that the drug and therapy regime came out with the best rank just by the luck of the draw? We test by asking, “If there is no difference, what is the probability that the treatment of interest will get an average rank this good, just by chance?”\nWe proceed exactly as with the solution for the merger problem (see above).\nIn the above problems, we did not concern ourselves with chance outcomes for the other therapies (or the matched firms) because they were not our primary focus. 
If, in actual fact, one of them had done exceptionally well or poorly, we would have paid little notice because their performance was not the object of the study. We needed, therefore, only to guard against the possibility that chance good luck for our therapy of interest might have led us to a hasty conclusion.\nSuppose now that we are not interested primarily in the combined drug-therapy treatment, and that we have three treatments being tested, all on equal footing. It is no longer sufficient to ask the question “What is the probability that the combined therapy could come out this well just by chance?” We must now ask “What is the probability that any of the therapies could have come out this well by chance?” (Perhaps you can guess that this probability will be higher than the probability that our chosen therapy will do so well by chance.)\nHere is a resampling procedure that will answer this question:\n\nPut the numbers “1”, “2” and “3” (corresponding to ranks) in a bucket\nShuffle the numbers and deal them out to three locations that correspond to treatments (call the locations “t1,” “t2,” and “t3”)\nRepeat step two another 32 times (for a total of 33 repetitions, for 33 matched triplets)\nFind the average rank for each location (treatment.\nRecord the minimum (best) score.\nRepeat steps 2-4, say, 1000 times.\nFind out how often the minimum average rank for any treatment is as low as 1.45\n\n\n' Program file: \"testing_counts_2_07.rss\"\n\nNUMBERS (1 2 3) a\n' Step 1 above\nREPEAT 1000\n ' Step 6\n REPEAT 33\n ' Step 3\n SHUFFLE a a\n ' Step 2\n SCORE a t1 t2 t3\n ' Step 2\n END\n ' Step 3\n MEAN t1 tt1\n ' Step 4\n MEAN t2 tt2\n MEAN t3 tt3\n CLEAR t1\n ' Clear the vectors where we've stored the ranks for this trial (must do\n ' this whenever we have a SCORE statement that's part of a \"nested\" repeat\n ' loop)\n CLEAR t2\n CLEAR t3\n CONCAT tt1 tt2 tt3 b\n ' Part of step 5\n MIN b bb\n ' Part of step 5\n SCORE bb z\n ' Part of step 5\nEND\n' Step 6\nHISTOGRAM z\nCOUNT z <=1.45 k\n' Step 7\nDIVIDE k 1000 kk\nPRINT kk\nInterpretation: 1000 random shufflings of 33 ranks, apportioned to three “treatments,” never produced for the best treatment in the three an average as low as the observed average, therefore we rule out chance as an explanation for the success of the combined therapy.\nAn interesting feature of the mergers (or depression treatment) problem is that it would be hard to find a conventional test that would handle this three-way comparison in an efficient manner. Certainly it would be impossible to find a test that does not require formulae and tables that only a talented professional statistician could manage satisfactorily, and even s/ he is not likely to fully understand those formulaic procedures.\n\nResult: kk = 0" + }, + { + "objectID": "testing_counts_2.html#technical-note", + "href": "testing_counts_2.html#technical-note", + "title": "23  The Statistics of Hypothesis-Testing with Counted Data, Part 2", + "section": "23.3 Technical note", + "text": "23.3 Technical note\nSome of the tests introduced in this chapter are similar to standard nonparametric rank and sign tests. They differ less in the structure of the test statistic than in the way in which significance is assessed (the comparison is to multiple simulations of a model based on the benchmark hypothesis, rather than to critical values calculated analytically).\n\n\n\n\nSimon, Julian Lincoln, Manouchehr Mokhtari, and Daniel H Simon. 1996. “Are Mergers Beneficial or Detrimental? 
Evidence from Advertising Agencies.” International Journal of the Economics of Business 3 (1): 69–82." + }, + { + "objectID": "testing_measured.html#differences-among-four-means", + "href": "testing_measured.html#differences-among-four-means", + "title": "24  The Statistics of Hypothesis-Testing With Measured Data", + "section": "24.1 Differences among four means", + "text": "24.1 Differences among four means\nExample 18-6: Differences Among Four Pig Rations (Test for Differences Among Means of More Than Two Samples of Measured Data) (File “PIGS4”)\nIn Examples 15-1 and 15-4 we investigated whether or not the results shown by a single sample are sufficiently different from a null (benchmark) hypothesis so that the sample is unlikely to have come from the null-hypothesis benchmark universe. In Examples 15-7, 17-1, and 18-1 we then investigated whether or not the results shown by two samples suggest that both had come from the same universe, a universe that was assumed to be the composite of the two samples. Now as in Example 17-2 we investigate whether or not several samples come from the same universe, except that now we work with measured data rather than with counted data.\nIf one experiments with each of 100 different pig foods on twelve pigs, some of the foods will show much better results than will others just by chance , just as one family in sixteen is likely to have the very “high” number of 4 daughters in its first four children. Therefore, it is wrong reasoning to try out the 100 pig foods, select the food that shows the best results, and then compare it statistically with the average (sum) of all the other foods (or worse, with the poorest food). With such a procedure and enough samples, you will surely find one (or more) that seems very atypical statistically. A bridge hand with 12 or 13 spades seems very atypical, too, but if you deal enough bridge hands you will sooner or later get one with 12 or 13 spades — as a purely chance phenomenon, dealt randomly from a standard deck. Therefore we need a test that prevents our falling into such traps. Such a test usually operates by taking into account the differences among all the foods that were tried.\nThe method of Example 18-1 can be extended to handle this problem. Assume that four foods were each tested on twelve pigs. The weight gains in pounds for the pigs fed on foods A and B were as before. For foods C and D the weight gains were:\nRation C: 30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33\nRation D: 32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25\nNow construct a benchmark universe of forty-eight index cards, one for each weight gain. Then deal out sets of four hands randomly. More specifically:\nStep 1. Constitute a universe of the forty-eight observed weight gains in the four samples, writing the weight gains on cards.\nStep 2. Draw four groups of twelve weight gains, with replacement, since we are drawing from a hypothesized infinite universe in which consecutive draws are independent. Determine whether the difference between the lowest and highest group means is as large or larger than the observed difference. If so write “yes,” otherwise “no.”\nStep 3. Repeat step 2 fifty times.\nStep 4. Count the trials in which the differences between the simulated groups with the highest and lowest means are as large or larger than the differences between the means of the highest and lowest observed samples. 
The proportion of such trials to the total number of trials is the probability that all four samples would differ as much as do the observed samples if they (in technical terms) come from the same universe.\nThe problem “Pigs4,” as handled by the steps given above, is quite similar to the way we handled Example TKTK, except that the data are measured (in pounds of weight gain) rather than simply counted (the number of rehabilitations).\nInstead of working through a program for the procedure outlined above, let us consider a different approach to the problem — computing the difference between each pair of foods, six differences in all, converting all minus (-) signs to (+) differences. Then we can total the six differences, and compare the total with the sum of the six differences in the observed sample. The proportion of the resampling trials in which the observed sample sum is exceeded by the sum of the differences in the trials is the probability that the observed samples would differ as much as they do if they come from the same universe.5\nOne naturally wonders whether this latter test statistic is better than the range, as discussed above. It would seem obvious that using the information contained in all four samples should increase the precision of the estimate. And indeed it is so, as you can confirm for yourself by comparing the results of the two approaches. But in the long run, the estimate provided by the two approaches would be much the same. That is, there is no reason to think that one or another of the estimates is biased . However, successive samples from the population would steady down faster to the true value using the four-groupbased estimate than they would using the range. That is, the four-group-based estimate would require a smaller sample of pigs.\nIs there reason to prefer one or the other approach from the point of view of some decision that might be made? One might think that the range procedure throws light on which one of the foods is best in a way that the four-group-based approach does not. But this is not correct. Both approaches answer this question, and only this question: Are the results from the four foods likely to have resulted from the same “universe” of weight gains or not? If one wants to know whether the best food is similar to, say, all the other three, the appropriate approach would be a two -sample approach similar to various two -sample examples discussed earlier. (It would be still another question to ask whether the best food is different from the worst. One would then use a procedure different from either of those discussed above.)\nIf the foods cost the same, one would not need even a twosample analysis to decide which food to feed. Feed the one whose results are best in the experiment, without bothering to ask whether it is “really” the best; you can’t go wrong as long as it doesn’t cost more to use it. (One could inquire about the probability that the food yielding the best results in the experiment would attain those results by chance even if it was worse than the others by some stipulated amount, but pursuing that line of thought may be left to the student as an exercise.)\nIn the problem “Pigs4,” we want a measure of how the groups differ. The obvious first step is to add up the total weight gains for each group: 382, 344, 364, 335. The next step is to calculate the differences between all the possible combinations of groups: 382-344=38, 382-364=18, 382-335=47, 344-364= -20, 344-335=9, 364-335=29." 
+ }, + { + "objectID": "testing_measured.html#using-squared-differences", + "href": "testing_measured.html#using-squared-differences", + "title": "24  The Statistics of Hypothesis-Testing With Measured Data", + "section": "24.2 Using Squared Differences", + "text": "24.2 Using Squared Differences\nHere we face a choice. We could work with the absolute differences — that is, the results of the subtractions — treating each result as a positive number even if it is negative. We have seen this approach before. Therefore let us now take the opportunity of showing another approach. Instead of working with the absolute differences, we square each difference, and then SUM the squares. An advantage of working with the squares is that they are positive — a negative number squared is positive — which is convenient. Additionally, conventional statistics works mainly with squared quantities, and therefore it is worth getting familiar with that point of view. The squared differences in this case add up to 5096.\nUsing RESAMPLING STATS, we shuffle all the weight gains together, select four random groups, and determine whether the squared differences in the resample exceed 5096. If they do so with regularity, then we conclude that the observed differences could easily have occurred by chance.\nWith the CONCAT command, we string the four vectors into a single vector. After SHUFFLEing the 48-pig weight-gain vector G into H, we TAKE four randomized samples. And we compute the squared differences between the pairs of groups and SUM the squared differences just as we did above for the observed groups.\nLast, we examine how often the simulated-trials data produce differences among the groups as large as (or larger than) the actually observed data — 5096.\n\n' Program file: \"pigs4.rss\"\n\nNUMBERS (34 29 26 32 35 38 31 34 30 29 32 31) a\nNUMBERS (26 24 28 29 30 29 32 26 31 29 32 28) b\nNUMBERS (30 30 32 31 29 27 25 30 31 32 34 33) c\nNUMBERS (32 25 31 26 32 27 28 29 29 28 23 25) d\n' (Record the data for the 4 foods)\nCONCAT a b c d g\n' Combine the four vectors into g\nREPEAT 1000\n ' Do 1000 trials\n SHUFFLE g h\n ' Shuffle all the weight gains.\n SAMPLE 12 h p\n ' Take 4 random samples, with replacement.\n SAMPLE 12 h q\n SAMPLE 12 h r\n SAMPLE 12 h s\n SUM p i\n ' Sum the weight gains for the 4 resamples.\n SUM q j\n SUM r k\n SUM s l\n SUBTRACT i j ij\n ' Find the differences between all the possible pairs of resamples.\n SUBTRACT i k ik\n SUBTRACT i l il\n SUBTRACT j k jk\n SUBTRACT j l jl\n SUBTRACT k l kl\n MULTIPLY ij ij ijsq\n ' Find the squared differences.\n MULTIPLY ik ik iksq\n MULTIPLY il il ilsq\n MULTIPLY jk jk jksq\n MULTIPLY jl jl jlsq\n MULTIPLY kl kl klsq\n ADD ijsq iksq ilsq jksq jlsq klsq total\n ' Add them together.\n SCORE total z\n ' Keep track of the total for each trial.\nEND\n' End one trial, go back and repeat until 1000 trials are complete.\nHISTOGRAM z\n' Produce a histogram of the trial results.\nCOUNT z >= 5096 k\n' Find out how many trials produced differences among groups as great as\n' or greater than those observed.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the result.\n\n' Note: The file \"pigs4\" on the Resampling Stats software disk contains\n' this set of commands.\nPIGS4: Differences Among Four Pig Rations\n\nsums of squares\nWe find that our observed sum of squares — 5096 — was exceeded by randomly-drawn sums of squares in only 3 percent of our trials. We conclude that the four treatments are likely not all similar." 
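For the same test in R, here is a minimal sketch of the shuffle-and-score procedure of Section 24.2; the 10,000-trial count and the helper function are choices of this sketch, not the book's program. (Replacing the squaring with abs() gives the sum-of-absolute-differences variant mentioned in Section 24.1.)

# Weight gains for the four rations, as listed in the program above.
a <- c(34, 29, 26, 32, 35, 38, 31, 34, 30, 29, 32, 31)
b <- c(26, 24, 28, 29, 30, 29, 32, 26, 31, 29, 32, 28)
c_ <- c(30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33)  # "c" itself is R's combine function
d <- c(32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25)
all_gains <- c(a, b, c_, d)

# Sum of squared differences between all pairs of group totals.
sum_sq_diff <- function(totals) {
    diffs <- outer(totals, totals, '-')
    sum(diffs[upper.tri(diffs)] ^ 2)
}

observed <- sum_sq_diff(c(sum(a), sum(b), sum(c_), sum(d)))  # 5096 for these data

n_trials <- 10000
results <- numeric(n_trials)
for (i in 1:n_trials) {
    # Draw four resamples of 12 weight gains, with replacement, from the pooled data.
    totals <- replicate(4, sum(sample(all_gains, 12, replace = TRUE)))
    results[i] <- sum_sq_diff(totals)
}
k <- sum(results >= observed)
message('Proportion of trials with sum of squares >= observed: ', round(k / n_trials, 3))

The proportion should be small; compare the roughly 3 percent found in the RESAMPLING STATS run above.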
+ }, + { + "objectID": "testing_measured.html#exercises", + "href": "testing_measured.html#exercises", + "title": "24  The Statistics of Hypothesis-Testing With Measured Data", + "section": "24.3 Exercises", + "text": "24.3 Exercises\nSolutions for problems may be found in the section titled “Exercise Solutions” at the back of this book.\nExercise 18-1\nThe data shown in Table 18-3 (Hollander and Wolfe 1999, 39, Table 3.1) might be data for the outcomes of two different mechanics, showing the length of time until the next overhaul is needed for nine pairs of similar vehicles. Or they could be two readings made by different instruments on the same sample of rock. In fact, they represent data for two successive tests for depression on the Hamilton scale, before and after drug therapy.\n\nTable 18-3\nHamilton Depression Scale Values\n\nPatient #  Score Before  Score After\n1  1.83  .878\n2  .50  .647\n3  1.62  .598\n4  2.48  2.05\n5  1.68  1.06\n6  1.88  1.29\n7  1.55  1.06\n8  3.06  3.14\n9  1.3  1.29\n\nThe task is to perform a test that will help decide whether there is a difference in the depression scores at the two visits (or the performances of the two mechanics). Perform both a bootstrap test and a permutation test, and give some reason for preferring one to the other in principle. How much do they differ in practice?\nExercise 18-2\nThirty-six of 72 (.5) taxis surveyed in Pittsburgh had visible seatbelts. Seventy-seven of 129 taxis in Chicago (.597) had visible seatbelts. Calculate a confidence interval for the difference in proportions, estimated at -.097. (Source: Peskun, Peter H., “A New Confidence Interval Method Based on the Normal Approximation for the Difference of Two Binomial Probabilities,” Journal of the American Statistical Association, 6/93, p. 656).\n\n\n\n\nChung, James H, and Donald AS Fraser. 1958. “Randomization Tests for a Multivariate Two-Sample Problem.” Journal of the American Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf.\n\n\nDwass, Meyer. 1957. “Modified Randomization Tests for Nonparametric Hypotheses.” The Annals of Mathematical Statistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf.\n\n\nEfron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to the Bootstrap.” In Monographs on Statistics and Applied Probability, edited by David R Cox, David V Hinkley, Nancy Reid, Donald B Rubin, and Bernard W Silverman. Vol. 57. New York: Chapman & Hall.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nHollander, Myles, and Douglas A Wolfe. 1999. Nonparametric Statistical Methods. 2nd ed. Wiley Series in Probability and Statistics: Applied Probability and Statistics. New York: John Wiley & Sons, Inc. https://archive.org/details/nonparametricsta0000ed2holl.\n\n\nPitman, Edwin JG. 1937. “Significance Tests Which May Be Applied to Samples from Any Populations.” Supplement to the Journal of the Royal Statistical Society 4 (1): 119–30. https://www.jstor.org/stable/pdf/2984124.pdf.\n\n\nSimon, Julian Lincoln, and David M Simon. 1996. “The Effects of Regulations on State Liquor Prices.” Empirica 23: 303–16."
+ }, + { + "objectID": "testing_procedures.html#introduction", + "href": "testing_procedures.html#introduction", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.1 Introduction", + "text": "25.1 Introduction\nThe previous chapters have presented procedures for making statistical inferences that apply to both testing hypotheses and constructing confidence intervals. This chapter focuses on specific procedures for testing hypotheses.\nThe general idea in testing hypotheses is to ask: Is there some other universe which might well have produced the observed sample? So we consider alternative hypotheses. This is a straightforward exercise in probability, asking about behavior of one or more universes. The choice of which other universe(s) to examine depends upon purposes and other considerations." + }, + { + "objectID": "testing_procedures.html#canonical-question-and-answer-procedure-for-testing-hypotheses", + "href": "testing_procedures.html#canonical-question-and-answer-procedure-for-testing-hypotheses", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.2 Canonical question-and-answer procedure for testing hypotheses", + "text": "25.2 Canonical question-and-answer procedure for testing hypotheses" + }, + { + "objectID": "testing_procedures.html#skeleton-procedure-for-testing-hypotheses", + "href": "testing_procedures.html#skeleton-procedure-for-testing-hypotheses", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.3 Skeleton procedure for testing hypotheses", + "text": "25.3 Skeleton procedure for testing hypotheses\nAkin to the skeleton procedures for questions in probability and confidence intervals shown elsewhere\nThe following series of questions will be repeated below in the context of a specific inference.\nWhat is the question? What is the purpose to be served by answering the question?\nIs this a “probability” or a “statistics” question?\nAssuming the Question is a Statistical Inference Question\nWhat is the form of the statistics question?\nHypothesis test, or confidence interval, or other inference? One must first decide whether the conceptual-scientific question is of the form a) a test about the probability that some sample is likely to happen by chance rather than being very surprising (a test of a hypothesis), or b) a question about the accuracy of the estimate of a parameter of the population based upon sample evidence (a confidence interval):\nAssuming the Question Concerns Testing Hypotheses\nWill you state the costs and benefits of various outcomes, perhaps in the form of a “loss function”? If “yes,” what are they?\nHow many samples of data have been observed?\nOne, two, more than two?\nWhat is the description of the observed sample(s)?\nRaw data?\nWhich characteristic(s) (parameters) of the population are of interest to you?\nWhat are the statistics of the sample(s) that refer to this (these) characteristic(s) in which you are interested?\nWhat comparison(s) to make?\nSamples to each other?\nSample to particular universe(s)? If so, which?\nWhat is the benchmark (null) universe?\nThis may include presenting the raw data and/or such summary statistics as the computed mean, median, standard deviation, range, interquartile range, other:\nIf there is to be a Neyman-Pearson-type alternative universe, what is it?
(In most cases the answer to this technical question is “no.”)\nWhich symbols for the observed entities?\nDiscrete or continuous?\nWhat values or ranges of values?\nWhich sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? (Answer: samples the same size as has been observed)\n[Here one may continue with the conventional method, using perhaps a t or f or chi-square test or whatever: Everything up to now is the same whether continuing with resampling or with standard parametric test.]\nWhat procedure will be used to produce the resampled entities?\nRandomly drawn?\nSimple (single step) or complex (multiple “if” drawings)?\nWhat procedure to produce resample?\nWhich universe will you draw them from? With or without replacement?\nWhat size resamples? Number of resample trials?\nWhat to record as outcome of each resample trial?\nMean, median, or whatever of resample?\nClassifying the outcomes\nWhat is the criterion of significance to be used in evaluating the results of the test?\nStating the distribution of results\nGraph of each statistic recorded — occurrences for each value.\nCount the outcomes that exceed criterion and divide by number of trials." + }, + { + "objectID": "testing_procedures.html#an-example-can-the-bio-engineer-increase-the-female-calf-rate", + "href": "testing_procedures.html#an-example-can-the-bio-engineer-increase-the-female-calf-rate", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.4 An example: can the bio-engineer increase the female calf rate?", + "text": "25.4 An example: can the bio-engineer increase the female calf rate?\nThe question. (from (Hodges Jr and Lehmann 1970, 310): Female calves are more valuable than male calves. A bio-engineer claims to have a method that can produce more females. He tests the procedure on ten of your pregnant cows, and the result is nine females. Should you believe that his method has some effect? That is, what is the probability of a result this surprising occurring by chance?\nThe purpose: Female calves are more valuable than male.\nInference? Yes.\nTest of hypothesis? Yes.\nWill you state the costs and benefits of various outcomes (or a loss function)? We need only say that the benefits of a method that works are very large, and if the results are promising, it is worth gathering more data to confirm results.\nHow many samples of data are part of the significance test? One\nWhat is the size of the first sample about which you wish to make significance statements? Ten.\nWhat comparison(s) to make? Compare sample to benchmark universe.\nWhat is the benchmark universe that embodies the null hypothesis? 50-50 female, or 100/206 female.\nIf there is to be a Neyman-Pearson alternative universe , what is it? None.\nWhich symbols for the observed entities? Balls in bucket, or numbers.\nWhat values or ranges of values? 0-1, (1-100), or 101-206.\nFinite or infinite? Infinite.\nWhich sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? Ten calves compared to universe.\nWhat procedure to produce entities? Sampling with replacement,\nSimple (single step) or complex (multiple “if” drawings)? One can think of it either way.\nWhat to record as outcome of each resample trial? The proportion (or number) of females.\nWhat is the criterion to be used in the test? The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of half females. 
(Or frame in terms of a significance level.)\n“One-tail” or “two-tail” test? One tail, because the farmer is only interested in females: Finding a large proportion of males would not be of interest, and would not cause one to reject the null hypothesis.\nComputation of the probability sought. The actual computation of probability may be done with several formulaic or sample-space methods, and with several resampling methods: I will first show a resampling method and then several conventional methods. The following material, which allows one to compare resampling and conventional methods, is more germane to the earlier explication of resampling taken altogether in earlier chapters than it is to the theory of hypothesis tests discussed in this chapter, but it is more expedient to present it here." + }, + { + "objectID": "testing_procedures.html#computation-of-probabilities-with-resampling", + "href": "testing_procedures.html#computation-of-probabilities-with-resampling", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.5 Computation of Probabilities with Resampling", + "text": "25.5 Computation of Probabilities with Resampling\nWe can do the problem by hand as follows:\n\nConstitute a bucket with either one blue and one pink ball, or 106 blue and 100 pink balls.\nDraw ten balls with replacement, count pinks, and record.\nRepeat step (2), say, 400 times.\nCalculate proportion of results with 9 or 10 pinks.\n\nOr, we can take advantage of the speed and efficiency of the computer as follows:\n\nn <- 10000\n\nfemales <- numeric(n)\n\nfor (i in 1:n) {\n samp <- sample(c('female', 'male'), size=10, replace=TRUE)\n females[i] <- sum(samp == 'female')\n}\n\nhist(females)\n\nk <- sum(females >= 9)\nkk <- k / n\nmessage('Proportion with >= 9 females: ', kk)\n\nProportion with >= 9 females: 0.011\n\nThis outcome implies that there is roughly a one percent chance that one would observe 9 or 10 female births in a single sample of 10 calves if the probability of a female on each birth is .5. This outcome should help the decision-maker decide about the plausibility of the bio-engineer’s claim to be able to increase the probability of female calves being born." + }, + { + "objectID": "testing_procedures.html#conventional-methods", + "href": "testing_procedures.html#conventional-methods", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.6 Conventional methods", + "text": "25.6 Conventional methods\n\n25.6.1 The Sample Space and First Principles\nAssume for a moment that our problem is a smaller one and therefore much easier — the probability of getting two females in two calves if the probability of a female is .5. One could then map out what mathematicians call the “sample space,” a technique that (in its simplest form) assigns to each outcome a single point, and find the proportion of points that correspond to a “success.” We list all four possible combinations — FF, FM, MF, MM. Now we look at the ratio of the number of combinations that have 2 females to the total, which is 1/4. We may then interpret this probability.\nWe might also use this method for (say) five female calves in a row. We can make a list of possibilities such as FFFFF, MFFFF, MMFFF, MMMFF … MFMFM … MMMMM. There will be 2*2*2*2*2 = 32 possibilities, and 64 and 128 possibilities for six and seven calves respectively. 
But when we get as high as ten calves, this method would become very troublesome.\n\n\n25.6.2 Sample Space Calculations\nFor two females in a row, we could use the well known, and very simple, multiplication rule; we could do so even for ten females in a row. But calculating the probability of nine females in ten is a bit more complex.\n\n\n25.6.3 Pascal’s Triangle\nOne can use Pascal’s Triangle to obtain binomial coefficients for p = .5 and a sample size of 10, focusing on those for 9 or 10 successes. Then calculate the proportion of the total cases with 9 or 10 “successes” in one direction, to find the proportion of cases that pass beyond the criterion of 9 females. The method of Pascal’s Triangle requires more complete understanding of the probabilistic system than does the resampling simulation described above because Pascal’s Triangle requires that one understand the entire structure; simulation requires only that you follow the rules of the model.\n\n\n25.6.4 The Quincunx\nThe quincunx — a device that filters tiny balls through a set of bumper points not unlike a pinball machine, mentioned here simply for completeness — is more a simulation method than theoretical, but it may be considered “conventional.” Hence, it is included here.\n\n\n25.6.5 Table of Binomial Coefficients\nPascal’s Triangle becomes cumbersome or impractical with large numbers — say, 17 females of 20 births — or with probabilities other than .5. One might produce the binomial coefficients by algebraic multiplication, but that, too, becomes tedious even with small sample sizes. One can also use the pre-computed table of binomial coefficients found in any standard text. But the probabilities for n = 10 and 9 or 10 females are too small to be shown.\n\n\n25.6.6 Binomial Formula\nFor larger sample sizes, one can use the binomial formula. The binomial formula gives no deeper understanding of the statistical structure than does the Triangle (but it does yield a deeper understanding of the pure mathematics). With very large numbers, even the binomial formula is cumbersome.\n\n\n25.6.7 The Normal Approximation\nWhen the sample size becomes too large for any of the above methods, one can then use the Normal approximation, which yields results close to the binomial (as seen very nicely in the output of the quincunx). But use of the Normal distribution requires an estimate of the standard deviation, which can be derived either by formula or by resampling. (See a more extended parallel discussion in Chapter 27 on confidence intervals for the Bush-Dukakis comparison.)\nThe desired probability can be obtained from the Z formula and a standard table of the Normal distribution found in every elementary text.\nThe Z table can be made less mysterious if we generate it with simulation, or with graph paper or Archimedes’ method, using as raw material (say) five “continuous” (that is, non-binomial) distributions, many of which are skewed: 1) Draw samples of (say) 50 or 100. 2) Plot the means to see that the Normal shape is the outcome. Then 3) standardize with the standard deviation by marking the standard deviations onto the histograms.\nThe aim of the above exercise and the heart of the conventional parametric method is to compare the sample result — the mean — to a standardized plot of the means of samples drawn from the universe of interest to see how likely it is that that universe produces means deviating as much from the universe mean as does our observed sample mean. 
The steps are:\n\nEstablish the Normal shape — from the exercise above, or from the quincunx or Pascal’s Triangle or the binomial formula or the formula for the Normal approximation or some other device.\nStandardize that shape in standard deviations.\nCompute the Z score for the sample mean — that is, its deviation from the universe mean in standard deviations.\nExamine the Normal (or really, tables computed from graph paper, etc.) to find the probability of a mean deviating that far by chance.\n\nThis is the canon of the procedure for most parametric work in statistics. (For some small samples, accuracy is improved with an adjustment.)" + }, + { + "objectID": "testing_procedures.html#choice-of-the-benchmark-universebruce", + "href": "testing_procedures.html#choice-of-the-benchmark-universebruce", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.7 Choice of the benchmark universe1", + "text": "25.7 Choice of the benchmark universe1\nIn the example of the ten calves, the choice of a benchmark universe — a universe that (on average) produces equal proportions of males and females — seems rather straightforward and even automatic, requiring no difficult judgments. But in other cases the process requires more judgments.\nLet’s consider another case where the choice of a benchmark universe requires no difficult judgments. Assume the U.S. Department of Labor’s Bureau of Labor Statistics (BLS) takes a very large sample — say, 20,000 persons — and finds a 10 percent unemployment rate. At some later time another but smaller sample is drawn — 2,000 persons — showing an 11 percent unemployment rate. Should BLS conclude that unemployment has risen, or is there a large chance that the difference between 10 percent and 11 percent is due to sample variability? In this case, it makes rather obvious sense to ask how often a sample of 2,000 drawn from a universe of 10 percent unemployment (ignoring the variability in the larger sample) will be as different as 11 percent due solely to sample variability? This problem differs from that of the calves only in the proportions and the sizes of the samples.\nLet’s change the facts and assume that a very large sample had not been drawn and only a sample of 2,000 had been taken, indicating 11 percent unemployment. A policy-maker asks the probability that unemployment is above ten percent. It would still seem rather straightforward to ask how often a universe of 10 percent unemployment would produce a sample of 2000 with a proportion of 11 percent unemployed.\nStill another problem where the choice of benchmark hypothesis is relatively straightforward: Say that BLS takes two samples of 2000 persons a month apart, and asks whether there is a difference in the results. Pooling the two samples and examining how often two samples drawn from the pooled universe would be as different as observed seems obvious.\nOne of the reasons that the above cases — especially the two-sample case — seem so clear-cut is that the variance of the benchmark hypothesis is not an issue, being implied by the fact that the samples deal with proportions. If the data were continuous, however, this issue would quickly arise. Consider, for example, that the BLS might take the same sorts of samples and ask unemployed persons the lengths of time they had been unemployed. Comparing a small sample to a very large one would be easy to decide about. 
And even comparing two small samples might be straightforward — simply pooling them as is.\nBut what about if you have a sample of 2,000 with data on lengths of unemployment spells with a mean of 30 days, and you are asked the probability that it comes from a universe with a mean of 25 days? Now there arises the question about the amount of variability to assume for that benchmark universe. Should it be the variability observed in the sample? That is probably an overestimate, because a universe with a smaller mean would probably have a smaller variance, too. So some judgment is required; there cannot be an automatic “objective” process here, whether one proceeds with the conventional or the resampling method.\nThe example of the comparison of liquor retailing systems in Section 24.0.2 provides more material on this subject." + }, + { + "objectID": "testing_procedures.html#why-is-statistics-and-hypothesis-testing-so-difficult", + "href": "testing_procedures.html#why-is-statistics-and-hypothesis-testing-so-difficult", + "title": "25  General Procedures for Testing Hypotheses", + "section": "25.8 Why is statistics — and hypothesis testing — so difficult?", + "text": "25.8 Why is statistics — and hypothesis testing — so difficult?\nWhy is statistics such a difficult subject? The aforegoing procedural outline provides a window to the explanation. Hypothesis testing — as is also true of the construction of confidence intervals (but unlike simple probability problems) — involves a very long chain of reasoning, perhaps longer than in any other realm of systematic thinking. Furthermore, many decisions in the process require judgment that goes beyond technical analysis. All this emerges as one proceeds through the skeleton procedure above with any specific example.\n(Bayes’ rule also is very difficult intuitively, but that probably is a result of the twists and turns required in all complex problems in conditional probability. Decision-tree analysis is counter-intuitive, too, probably because it starts at the end instead of the beginning of the story, as we are usually accustomed to doing.)\n\n\n\n\nHodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic Concepts of Probability and Statistics. 2nd ed. San Francisco, California: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9." + }, + { + "objectID": "confidence_1.html#introduction", + "href": "confidence_1.html#introduction", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.1 Introduction", + "text": "26.1 Introduction\nThis chapter discusses how to assess the accuracy of a point estimate of the mean, median, or other statistic of a sample. We want to know: How close is our estimate of (say) the sample mean likely to be to the population mean? The chapter begins with an intuitive discussion of the relationship between a) a statistic derived from sample data, and b) a parameter of a universe from which the sample is drawn. Then we discuss the actual construction of confidence intervals using two different approaches which produce the same numbers though they have different logic. The following chapter shows illustrations of these procedures.\nThe accuracy of an estimate is a hard intellectual nut to crack, so hard that for hundreds of years statisticians and scientists wrestled with the problem with little success; it was not until the last century or two that much progress was made. The kernel of the problem is learning the extent of the variation in the population. 
But whereas the sample mean can be used straightforwardly to estimate the population mean, the extent of variation in the sample does not directly estimate the extent of the variation in the population, because the variation differs at different places in the distribution, and there is no reason to expect it to be symmetrical around the estimate or the mean.\nThe intellectual difficulty of confidence intervals is one reason why they are less prominent in statistics literature and practice than are tests of hypotheses (though statisticians often favor confidence intervals). Another reason is that tests of hypotheses are more fundamental for pure science because they address the question that is at the heart of all knowledge-getting: “Should these groups be considered different or the same ?” The statistical inference represented by confidence limits addresses what seems to be a secondary question in most sciences (though not in astronomy or perhaps physics): “How reliable is the estimate?” Still, confidence intervals are very important in some applied sciences such as geology — estimating the variation in grades of ores, for example — and in some parts of business and industry.\nConfidence intervals and hypothesis tests are not disjoint ideas. Indeed, hypothesis testing of a single sample against a benchmark value is (in all schools of thought, I believe) operationally identical with the most common way (Approach 1 below) of constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different for confidence limits and hypothesis tests.\nThe logic of confidence intervals is on shakier ground, in my judgment, than that of hypothesis testing, though there are many thoughtful and respected statisticians who argue that the logic of confidence intervals is better grounded and leads less often to error.\nConfidence intervals are considered by many to be part of the same topic as estimation , being an estimation of accuracy, in their view. And confidence intervals and hypothesis testing are seen as sub-cases of each other by some people. Whatever the importance of these distinctions among these intellectual tasks in other contexts, they need not concern us here." + }, + { + "objectID": "confidence_1.html#estimating-the-accuracy-of-a-sample-mean", + "href": "confidence_1.html#estimating-the-accuracy-of-a-sample-mean", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.2 Estimating the accuracy of a sample mean", + "text": "26.2 Estimating the accuracy of a sample mean\nIf one draws a sample that is very, very large — large enough so that one need not worry about sample size and dispersion in the case at hand — from a universe whose characteristics one knows , one then can deduce the probability that the sample mean will fall within a given distance of the population mean. Intuitively, it seems as if one should also be able to reverse the process — to infer something about the location of the population mean from the sample mean . But this inverse inference turns out to be a slippery business indeed.\nLet’s put it differently: It is all very well to say — as one logically may — that on average the sample mean (or other point estimator) equals a population parameter in most situations.\nBut what about the result of any particular sample? 
How accurate or inaccurate an estimate of the population mean is the sample likely to produce?\nBecause the logic of confidence intervals is subtle, most statistics texts skim right past the conceptual difficulties, and go directly to computation. Indeed, the topic of confidence intervals has been so controversial that some eminent statisticians refuse to discuss it at all. And when the concept is combined with the conventional algebraic treatment, the composite is truly baffling; the formal mathematics makes impossible any intuitive understanding. For students, “pluginski” is the only viable option for passing exams.\nWith the resampling method, however, the estimation of confidence intervals is easy. The topic then is manageable though subtle and challenging — sometimes pleasurably so. Even beginning undergraduates can enjoy the subtlety and find that it feels good to stretch the brain and get down to fundamentals.\nOne thing is clear: Despite the subtlety of the topic, the accuracy of estimates must be dealt with, one way or another.\nI hope the discussion below resolves much of the confusion of the topic." + }, + { + "objectID": "confidence_1.html#the-logic-of-confidence-intervals", + "href": "confidence_1.html#the-logic-of-confidence-intervals", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.3 The logic of confidence intervals", + "text": "26.3 The logic of confidence intervals\nTo preview the treatment of confidence intervals presented below: We do not learn about the reliability of sample estimates of the mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because this cannot be done in principle . Instead, we investigate the behavior of various universes in the neighborhood of the sample, universes whose characteristics are chosen on the basis of their similarity to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes that are implicitly suggested by the sample evidence but are not logically implied by that evidence.\nThe examples worked in the following chapter help explain why statistics is a difficult subject. The procedure required to transit successfully from the original question to a statistical probability, and then through a sensible interpretation of the probability, involves a great many choices about the appropriate model based on analysis of the problem at hand; a wrong choice at any point dooms the procedure. The actual computation of the probability — whether done with formulaic probability theory or with resampling simulation — is only a very small part of the procedure, and it is the least difficult part if one proceeds with resampling. The difficulties in the statistical process are not mathematical but rather stem from the hard clear thinking needed to understand the nature of the situation and to ascertain the appropriate way to model it.\nAgain, the purpose of a confidence interval is to help us assess the reliability of a statistic of the sample — for example, its mean or median — as an estimator of the parameter of the universe. 
The line of thought runs as follows: It is possible to map the distribution of the means (or other such parameter) of samples of any given size (the size of interest in any investigation usually being the size of the observed sample) and of any given pattern of dispersion (which we will assume for now can be estimated from the sample) that a universe in the neighborhood of the sample will produce. For example, we can compute how large an interval to the right and left of a postulated universe’s mean is required to include 45 percent of the samples on either side of the mean.\nWhat cannot be done is to draw conclusions from sample evidence about the nature of the universe from which it was drawn, in the absence of some information about the set of universes from which it might have been drawn. That is, one can investigate the behavior of one or more specified universes, and discover the absolute and relative probabilities that the given specified universe(s) might produce such a sample. But the universe(s) to be so investigated must be specified in advance (which is consistent with the Bayesian view of statistics). To put it differently, we can employ probability theory to learn the pattern(s) of results produced by samples drawn from a particular specified universe, and then compare that pattern to the observed sample. But we cannot infer the probability that that sample was drawn from any given universe in the absence of knowledge of the other possible sources of the sample. That is a subtle difference, I know, but I hope that the following discussion makes it understandable." + }, + { + "objectID": "confidence_1.html#computing-confidence-intervals", + "href": "confidence_1.html#computing-confidence-intervals", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.4 Computing confidence intervals", + "text": "26.4 Computing confidence intervals\nIn the first part of the discussion we shall leave aside the issue of estimating the extent of the dispersion — a troublesome matter, but one which seldom will result in unsound conclusions even if handled crudely. To start from scratch again: The first — and seemingly straightforward — step is to estimate the mean of the population based on the sample data. The next and more complex step is to ask about the range of values (and their probabilities) that the estimate of the mean might take — that is, the construction of confidence intervals. It seems natural to assume that if our best guess about the population mean is the value of the sample mean, our best guesses about the various values that the population mean might take if unbiased sampling error causes discrepancies between population parameters and sample statistics, should be values clustering around the sample mean in a symmetrical fashion (assuming that asymmetry is not forced by the distribution — as for example, the binomial is close to symmetric near its middle values). But how far away from the sample mean might the population mean be?\nLet’s walk slowly through the logic, going back to basics to enhance intuition. Let’s start with the familiar saying, “The apple doesn’t fall far from the tree.” Imagine that you are in a very hypothetical place where an apple tree is above you, and you are not allowed to look up at the tree, whose trunk has an infinitely thin diameter. You see an apple on the ground. You must now guess where the trunk (center) of the tree is. The obvious guess for the location of the trunk is right above the apple. 
But the trunk is not likely to be exactly above the apple because of the small probability of the trunk being at any particular location, due to sampling dispersion.\nThough you find it easy to make a best guess about where the mean is (the true trunk), with the given information alone you have no way of making an estimate of the probability that the mean is one place or another, other than that the probability is the same that the tree is to the north or south, east or west, of you. You have no idea about how far the center of the tree is from you. You cannot even put a maximum on the distance it is from you, and without a maximum you could not even reasonably assume a rectangular distribution, or a Normal distribution, or any other.\nNext you see two apples. What guesses do you make now? The midpoint between the two obviously is your best guess about the location of the center of the tree. But still there is no way to estimate the probability distribution of the location of the center of the tree.\nNow assume you are given still another piece of information: The outermost spread of the tree’s branches (the range) equals the distance between the two apples you see. With this information, you could immediately locate the boundaries of the location of the center of the tree. But this is only because the answer you sought was given to you in disguised form.\nYou could, however, come up with some statements of relative probabilities. In the absence of prior information on where the tree might be, you would offer higher odds that the center (the trunk) is in any unit of area close to the center of your two apples than in a unit of area far from the center. That is, if you are told that either one apple, or two apples, came from one of two specified trees whose locations are given , with no reason to believe it is one tree or the other (later, we can put other prior probabilities on the two trees), and you are also told the dispersions, you now can put relative probabilities on one tree or the other being the source. (Note to the advanced student: This is like the Neyman-Pearson procedure, and it is easily reconciled with the Bayesian point of view to be explored later. One can also connect this concept of relative probability to the Fisherian concept of maximum likelihood — which is a probability relative to all others). And you could list from high to low the probabilities for each unit of area in the neighborhood of your apple sample. But this procedure is quite different from making any single absolute numerical probability estimate of the location of the mean.\nNow let’s say you see 10 apples on the ground. Of course your best estimate is that the trunk of the tree is at their arithmetic center. But how close to the actual tree trunk (the population mean) is your estimate likely to be? This is the question involved in confidence intervals. We want to estimate a range (around the center, which we estimate with the center mean of the sample, we said) within which we are pretty sure that the trunk lies.\nTo simplify, we consider variation along only one dimension — that is, on (say) a north-south line rather than on two dimensions (the entire surface).\nWe first note that you have no reason to estimate the trunk’s location to be outside the sample pattern, or at its edge, though it could be so in principle.\nIf the pattern of the 10 apples is tight, you imagine the pattern of the likely locations of the population mean to be tight; if not, not. 
That is, it is intuitively clear that there is some connection between how spread out are the sample observations and your confidence about the location of the population mean . For example, consider two patterns of a thousand apples, one with twice the spread of another, where we measure spread by (say) the diameter of the circle that holds the inner half of the apples for each tree, or by the standard deviation. It makes sense that if the two patterns have the same center point (mean), you would put higher odds on the tree with the smaller spread being within some given distance — say, a foot — of the estimated mean. But what odds would you give on that bet?" + }, + { + "objectID": "confidence_1.html#procedure-for-estimating-confidence-intervals", + "href": "confidence_1.html#procedure-for-estimating-confidence-intervals", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.5 Procedure for estimating confidence intervals", + "text": "26.5 Procedure for estimating confidence intervals\nHere is a canonical list of questions that help organize one’s thinking when constructing confidence intervals. The list is comparable to the lists for questions in probability and for hypothesis testing provided in earlier chapters. This set of questions will be applied operationally in Chapter 27.\nWhat Is The Question?\nWhat is the purpose to be served by answering the question? Is this a “probability” or a “statistics” question?\nIf the Question Is a Statistical Inference Question:\nWhat is the form of the statistics question?\nHypothesis test or confidence limits or other inference?\nAssuming Question Is About Confidence Limits:\nWhat is the description of the sample that has been observed?\nRaw data?\nStatistics of the sample?\nWhich universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess of the properties of the universe whose parameter you wish to make statements about? Finite or infinite? Bayesian possibilities?\nWhich parameter do you wish to make statements about?\nMean, median, standard deviation, range, interquartile range, other?\nWhich symbols for the observed entities?\nDiscrete or continuous?\nWhat values or ranges of values?\nIf the universe is as guessed at, for which samples do you wish to estimate the variation? (Answer: samples the same size as has been observed)\nHere one may continue with the conventional method, using perhaps a t or F or chi-square test or whatever. Everything up to now is the same whether continuing with resampling or with standard parametric test.\nWhat procedure to produce the original entities in the sample?\nWhat universe will you draw them from? Random selection?\nWhat size resample?\nSimple (single step) or complex (multiple “if” drawings)?\nWhat procedure to produce resamples?\nWith or without replacement? Number of drawings?\nWhat to record as result of resample drawing?\nMean, median, or whatever of resample\nStating the Distribution of Results\nHistogram, frequency distribution, other?\nChoice Of Confidence Bounds\nOne or two-tailed?\n90%, 95%, etc.?\nComputation of Probabilities Within Chosen Bounds" + }, + { + "objectID": "confidence_1.html#summary", + "href": "confidence_1.html#summary", + "title": "26  Confidence Intervals, Part 1: Assessing the Accuracy of Samples", + "section": "26.6 Summary", + "text": "26.6 Summary\nThis chapter discussed the theoretical basis for assessing the accuracy of population averages from sample data. 
The following chapter shows two very different approaches to confidence intervals, and provides examples of the computations." + }, + { + "objectID": "confidence_2.html#approach-1-the-distance-between-sample-and-population-mean", + "href": "confidence_2.html#approach-1-the-distance-between-sample-and-population-mean", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.1 Approach 1: The distance between sample and population mean", + "text": "27.1 Approach 1: The distance between sample and population mean\nIf the study of probability can tell us the probability that a given population will produce a sample with a mean at a given distance x from the population mean, and if a sample is an unbiased estimator of the population, then it seems natural to turn the matter around and interpret the same sort of data as telling us the probability that the estimate of the population mean is that far from the “actual” population mean. A fly in the ointment is our lack of knowledge of the dispersion, but we can safely put that aside for now. (See below, however.)\nThis first approach begins by assuming that the universe that actually produced the sample has the same amount of dispersion (but not necessarily the same mean) that one would estimate from the sample. One then produces (either with resampling or with Normal distribution theory) the distribution of sample means that would occur with repeated sampling from that designated universe with samples the size of the observed sample. One can then compute the distance between the (assumed) population mean and (say) the inner 45 percent of sample means on each side of the actually observed sample mean.\nThe crucial step is to shift vantage points. We look from the sample to the universe, instead of from a hypothesized universe to simulated samples (as we have done so far). This same interval as computed above must be the relevant distance as when one looks from the sample to the universe. Putting this algebraically, we can state (on the basis of either simulation or formal calculation) that for any given population S, and for any given distance \\(d\\) from its mean \\(\\mu\\), that \\(P((\\mu - \\bar{x}) < d) = \\alpha\\), where \\(\\bar{x}\\) is a randomly generated sample mean and \\(\\alpha\\) is the probability resulting from the simulation or calculation.\nThe above equation focuses on the deviation of various sample means (\\(\\bar{x}\\)) from a stated population mean (\\(\\mu\\)). But we are logically entitled to read the algebra in another fashion, focusing on the deviation of \\(\\mu\\) from a randomly generated sample mean. This implies that for any given randomly generated sample mean we observe, the same probability (\\(\\alpha\\)) describes the probability that \\(\\mu\\) will be at a distance \\(d\\) or less from the observed \\(\\bar{x}\\). (I believe that this is the logic underlying the conventional view of confidence intervals, but I have yet to find a clear-cut statement of it; in any case, it appears to be logically correct.)\nTo repeat this difficult idea in slightly different words: If one draws a sample (large enough to not worry about sample size and dispersion), one can say in advance that there is a probability \\(p\\) that the sample mean (\\(\\bar{x}\\)) will fall within \\(z\\) standard deviations of the population mean (\\(\\mu\\)). One estimates the population dispersion from the sample. 
If there is a probability \\(p\\) that \\(\\bar{x}\\) is within \\(z\\) standard deviations of \\(\\mu\\), then with probability \\(p\\), \\(\\mu\\) must be within that same \\(z\\) standard deviations of \\(\\bar{x}\\). To repeat, this is, I believe, the heart of the standard concept of the confidence interval, to the extent that there is thought through consensus on the matter.\nSo we can state for such populations the probability that the distance between the population and sample means will be \\(d\\) or less. Or with respect to a given distance, we can say that the probability that the population and sample means will be that close together is \\(p\\).\nThat is, we start by focusing on how much the sample mean diverges from the known population mean. But then — and to repeat once more this key conceptual step — we refocus our attention to begin with the sample mean and then discuss the probability that the population mean will be within a given distance. The resulting distance is what we call the “confidence interval.”\nPlease notice that the distribution (universe) assumed at the beginning of this approach did not include the assumption that the distribution is centered on the sample mean or anywhere else. It is true that the sample mean is used for purposes of reporting the location of the estimated universe mean . But despite how the subject is treated in the conventional approach, the estimated population mean is not part of the work of constructing confidence intervals. Rather, the calculations apply in the same way to all universes in the neighborhood of the sample (which are assumed, for the purpose of the work, to have the same dispersion). And indeed, it must be so, because the probability that the universe from which the sample was drawn is centered exactly at the sample mean is very small.\nThis independence of the confidence-intervals construction from the mean of the sample (and the mean of the estimated universe) is surprising at first, but after a bit of thought it makes sense.\nIn this first approach, as noted more generally above, we do not make estimates of the confidence intervals on the basis of any logical inference from any one particular sample to any one particular universe, because this cannot be done in principle ; it is the futile search for this connection that for decades roiled the brains of so many statisticians and now continues to trouble the minds of so many students. Instead, we investigate the behavior of (in this first approach) the universe that has a higher probability of producing the observed sample than does any other universe (in the absence of any additional evidence to the contrary), and whose characteristics are chosen on the basis of its resemblance to the sample. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of one or more hypothesized universes, the universe(s) being implicitly suggested by the sample evidence but not logically implied by that evidence. And there are no grounds for dispute about exactly what is being done — only about how to interpret the results.\nOne difficulty with the above approach is that the estimate of the population dispersion does not rest on sound foundations; this matter will be discussed later, but it is not likely to lead to a seriously misleading conclusion.\nA second difficulty with this approach is in interpreting the result. 
What is the justification for focusing our attention on a universe centered on the sample mean? While this particular universe may be more likely than any other, it undoubtedly has a low probability. And indeed, the statement of the confidence intervals refers to the probabilities that the sample has come from universes other than the universe centered at the sample mean, and quite a distance from it.\nMy answer to this question does not rest on a set of meaningful mathematical axioms, and I assert that a meaningful axiomatic answer is impossible in principle. Rather, I reason that we should consider the behavior of this universe because other universes near it will produce much the same results, differing only in dispersion from this one, and this difference is not likely to be crucial; this last assumption is all-important, of course. True, we do not know what the dispersion might be for the “true” universe. But elsewhere (Simon, forthcoming) I argue that the concept of the “true universe” is not helpful — or maybe even worse than nothing — and should be forsworn. And we can postulate a dispersion for any other universe we choose to investigate. That is, for this postulation we unabashedly bring in any other knowledge we may have. The defense for such an almost-arbitrary move would be that this is a second-order matter relative to the location of the estimated universe mean, and therefore it is not likely to lead to serious error. (This sort of approximative guessing sticks in the throats of many trained mathematicians, of course, who want to feel an unbroken logic leading backwards into the mists of axiom formation. But the axioms themselves inevitably are chosen arbitrarily just as there is arbitrariness in the practice at hand, though the choice process for axioms is less obvious and more hallowed by having been done by the masterminds of the past. (See Simon (1998), on the necessity for judgment.) The absence of a sequence of equations leading from some first principles to the procedure described in the paragraph above is evidence of what is felt to be missing by those who crave logical justification. The key equation in this approach is formally unassailable, but it seems to come from nowhere.)\nIn the examples in the following chapter may be found computations for two population distributions — one binomial and one quantitative — of the histograms of the sample means produced with this procedure.\nOperationally, we use the observed sample mean, together with an estimate of the dispersion from the sample, to estimate a mean and dispersion for the population. Then with reference to the sample mean we state a combination of a distance (on each side) and a probability pertaining to the population mean. The computational examples will illustrate this procedure.\nOnce we have obtained a numerical answer, we must decide how to interpret it. There is a natural and almost irresistible tendency to talk about the probability that the mean of the universe lies within the intervals, but this has proven confusing and controversial. Interpretation in terms of a repeated process is not very satisfying intuitively.1\nIn my view, it is not worth arguing about any “true” interpretation of these computations. 
One could sensibly interpret the computations in terms of the odds a decision maker, given the evidence, would reasonably offer about the relative probabilities that the sample came from one of two specified universes (one of them probably being centered on the sample); this does provide some information on reliability, but this procedure departs from the concept of confidence intervals.\n\n27.1.1 Example: Counted Data: The Accuracy of Political Polls\nConsider the reliability of a randomly selected 1988 presidential election poll, showing 840 intended votes for Bush and 660 intended votes for Dukakis out of 1500 (Wonnacott and Wonnacott 1990, 5). Let us work through the logic of this example.\n\n\nWhat is the question? Stated technically, what are the 95% confidence limits for the proportion of Bush supporters in the population? (The proportion is the mean of a binomial population or sample, of course.) More broadly, within which bounds could one confidently believe that the population proportion was likely to lie? At this stage of the work, we must already have translated the conceptual question (in this case, a decision-making question from the point of view of the candidates) into a statistical question. (See Chapter 20 on translating questions into statistical form.)\nWhat is the purpose to be served by answering this question? There is no sharp and clear answer in this case. The goal could be to satisfy public curiosity, or strategy planning for a candidate (though a national proportion is not as helpful for planning strategy as state data would be). A secondary goal might be to help guide decisions about the sample size of subsequent polls.\nIs this a “probability” or a “probability-statistics” question? The latter; we wish to infer from sample to population rather than the converse.\nGiven that this is a statistics question: What is the form of the statistics question — confidence limits or hypothesis testing? Confidence limits.\nGiven that the question is about confidence limits: What is the description of the sample that has been observed? a) The raw sample data — the observed numbers of interviewees are 840 for Bush and 660 for Dukakis — constitutes the best description of the universe. The statistics of the sample are the given proportions — 56 percent for Bush, 44 percent for Dukakis.\nWhich universe? (Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe about whose parameter you wish to make statements? The best guess is that the population proportion is the sample proportion — that is, the population contains 56 percent Bush votes, 44 percent Dukakis votes.\nPossibilities for Bayesian analysis? Not in this case, unless you believe that the sample was biased somehow.\nWhich parameter(s) do you wish to make statements about? Mean, median, standard deviation, range, interquartile range, other? We wish to estimate the proportion in favor of Bush (or Dukakis).\nWhich symbols for the observed entities? Perhaps 56 green and 44 yellow balls, if a bucket is used, or “0” and “1” if the computer is used.\nDiscrete or continuous distribution? In principle, discrete. (All distributions must be discrete in practice.)\nWhat values or ranges of values?* “0” or “1.”\nFinite or infinite? Infinite — the sample is small relative to the population.\nIf the universe is what you guess it to be, for which samples do you wish to estimate the variation? 
A sample the same size as the observed poll.\n\nHere one may continue either with resampling or with the conventional method. Everything done up to now would be the same whether continuing with resampling or with a standard parametric test." + }, + { + "objectID": "confidence_2.html#conventional-calculational-methods", + "href": "confidence_2.html#conventional-calculational-methods", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.2 Conventional Calculational Methods", + "text": "27.2 Conventional Calculational Methods\nEstimating the Distribution of Differences Between Sample and Population Means With the Normal Distribution.\nIn the conventional approach, one could in principle work from first principles with lists and sample space, but that would surely be too cumbersome. One could work with binomial proportions, but this problem has too large a sample for tree-drawing and quincunx techniques; even the ordinary textbook table of binomial coefficients is too small for this job. Calculating binomial coefficients also is a big job. So instead one would use the Normal approximation to the binomial formula.\n(Note to the beginner: The distribution of means that we manipulate has the Normal shape because of the operation of the Law of Large Numbers (The Central Limit theorem). Sums and averages, when the sample is reasonably large, take on this shape even if the underlying distribution is not Normal. This is a truly astonishing property of randomly drawn samples — the distribution of their means quickly comes to resemble a “Normal” distribution, no matter the shape of the underlying distribution. We then standardize it with the standard deviation or other devices so that we can state the probability distribution of the sampling error of the mean for any sample of reasonable size.)\nThe exercise of creating the Normal shape empirically is simply a generalization of particular cases such as we will later create here for the poll by resampling simulation. One can also go one step further and use the formula of de Moivre-Laplace-Gauss to describe the empirical distributions, and to serve instead of the empirical distributions. Looking ahead now, the difference between resampling and the conventional approach can be said to be that in the conventional approach we simply plot the Gaussian distribution very carefully, and use a formula instead of the empirical histograms, afterwards putting the results in a standardized table so that we can read them quickly without having to recreate the curve each time we use it. More about the nature of the Normal distribution may be found in Simon (forthcoming).\nAll the work done above uses the information specified previously — the sample size of 1500, the drawing with replacement, the observed proportion as the criterion." 
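As an illustration of the conventional calculation sketched in this section, a few lines of R give the Normal-approximation 95 percent interval for the Bush proportion. This is our sketch of the textbook formula, not a listing from the book; p_hat and se are our names.

# Normal approximation to the binomial for the Bush-Dukakis poll (our sketch).
n <- 1500                                # poll size
p_hat <- 840 / n                         # observed Bush proportion, 0.56
se <- sqrt(p_hat * (1 - p_hat) / n)      # estimated standard error of the proportion
c(p_hat - 1.96 * se, p_hat + 1.96 * se)  # roughly 0.535 to 0.585

The resampling counterpart of this calculation is walked through step by step in the next section.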
+ }, + { + "objectID": "confidence_2.html#confidence-intervals-empirically-with-resampling", + "href": "confidence_2.html#confidence-intervals-empirically-with-resampling", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.3 Confidence Intervals Empirically — With Resampling", + "text": "27.3 Confidence Intervals Empirically — With Resampling\nEstimating the Distribution of Differences Between Sample and Population Means By Resampling\n\nWhat procedure to produce entities?: Random selection from bucket or computer.\nSimple (single step) or complex (multiple “if” drawings)?: Simple.\nWhat procedure to produce resamples? That is, with or without replacement? With replacement.\nNumber of drawings observations in actual sample, and hence, number of drawings in resamples? 1500.\nWhat to record as result of each resample drawing? Mean, median, or whatever of resample? The proportion is what we seek.\nStating the distribution of results : The distribution of proportions for the trial samples.\nChoice of confidence bounds? : 95%, two tails (choice made by the textbook that posed the problem).\nComputation of probabilities within chosen bounds : Read the probabilistic result from the histogram of results.\nComputation of upper and lower confidence bounds: Locate the values corresponding to the 2.5th and 97.5th percentile of the resampled proportions.\n\nBecause the theory of confidence intervals is so abstract (even with the resampling method of computation), let us now walk through this resampling demonstration slowly, using the conventional Approach 1 described previously. We first produce a sample, and then see how the process works in reverse to estimate the reliability of the sample, using the Bush-Dukakis poll as an example. The computer program follows below.\n\nStep 1: Draw a sample of 1500 voters from a universe that, based on the observed sample, is 56 percent for Bush, 44 percent for Dukakis. The first such sample produced by the computer happens to be 53 percent for Bush; it might have been 58 percent, or 55 percent, or very rarely, 49 percent for Bush.\nStep 2: Repeat step 1 perhaps 400 or 1000 times.\nStep 3: Estimate the distribution of means (proportions) of samples of size 1500 drawn from this 56-44 percent Bush- Dukakis universe; the resampling result is shown below.\nStep 4: In a fashion similar to what was done in steps 13, now compute the 95 percent confidence intervals for some other postulated universe mean — say 53% for Bush, 47% for Dukakis. This step produces a confidence interval that is not centered on the sample mean and the estimated universe mean, and hence it shows the independence of the procedure from that magnitude. And we now compare the breadth of the estimated confidence interval generated with the 53-47 percent universe against the confidence interval derived from the corresponding distribution of sample means generated by the “true” Bush-Dukakis population of 56 percent — 44 percent. If the procedure works well, the results of the two procedures should be similar.\n\nNow we interpret the results using this first approach. The histogram shows the probability that the difference between the sample mean and the population mean — the error in the sample result — will be about 2.5 percentage points too low. It follows that about 47.5 percent (half of 95 percent) of the time, a sample like this one will be between the population mean and 2.5 percent too low. 
We do not know the actual population mean. But for any observed sample like this one, we can say that there is a 47.5 percent chance that the distance between it and the mean of the population that generated it is minus 2.5 percent or less.\nNow a crucial step: We turn around the statement just above, and say that there is an 47.5 percent chance that the population mean is less than three percentage points higher than the mean of a sample drawn like this one, but at or above the sample mean. (And we do the same for the other side of the sample mean.) So to recapitulate: We observe a sample and its mean. We estimate the error by experimenting with one or more universes in that neighborhood, and we then give the probability that the population mean is within that margin of error from the sample mean.\n\n27.3.1 Example: Measured Data Example — the Bootstrap\nA feed merchant decides to experiment with a new pig ration — ration A — on twelve pigs. To obtain a random sample, he provides twelve customers (selected at random) with sufficient food for one pig. After 4 weeks, the 12 pigs experience an average gain of 508 ounces. The weight gain of the individual pigs are as follows: 496, 544, 464, 416, 512, 560, 608, 544, 480, 466, 512, 496.\nThe merchant sees that the ration produces results that are quite variable (from a low of 466 ounces to a high of 560 ounces) and is therefore reluctant to advertise an average weight gain of 508 ounces. He speculates that a different sample of pigs might well produce a different average weight gain.\nUnfortunately, it is impractical to sample additional pigs to gain additional information about the universe of weight gains. The merchant must rely on the data already gathered. How can these data be used to tell us more about the sampling variability of the average weight gain?\nRecalling that all we know about the universe of weight gains is the sample we have observed, we can replicate that sample millions of times, creating a “pseudo-universe” that embodies all our knowledge about the real universe. We can then draw additional samples from this pseudo-universe and see how they behave.\nMore specifically, we replicate each observed weight gain millions of times — we can imagine writing each result that many times on separate pieces of paper — then shuffle those weight gains and pick out a sample of 12. Average the weight gain for that sample, and record the result. Take repeated samples, and record the result for each. We can then make a histogram of the results; it might look something like this:\n\n\n\n\n\n\n\n\n\nThough we do not know the true average weight gain, we can use this histogram to estimate the bounds within which it falls. The merchant can consider various weight gains for advertising purposes, and estimate the probability that the true weight gain falls below the value. For example, he might wish to advertise a weight gain of 500 ounces. Examining the histogram, we see that about 36% of our samples yielded weight gains less than 500 ounces. The merchant might wish to choose a lower weight gain to advertise, to reduce the risk of overstating the effectiveness of the ration.\nThis illustrates the “bootstrap” method. By re-using our original sample many times (and using nothing else), we are able to make inferences about the population from which the sample came. This problem would conventionally be addressed with the “t-test.”\n\n\n27.3.2 Example: Measured Data Example: Estimating Tree Diameters\n\nWhat is the question? 
A horticulturist is experimenting with a new type of tree. She plants 20 of them on a plot of land, and measures their trunk diameter after two years. She wants to establish a 90% confidence interval for the population average trunk diameter. For the data given below, calculate the mean of the sample and calculate (or describe a simulation procedure for calculating) a 90% confidence interval around the mean. Here are the 20 diameters, in centimeters and in no particular order (Table 27.1):\n\n\nTable 27.1: Tree Diameters, in Centimeters\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n8.5\n7.6\n9.3\n5.5\n11.4\n6.9\n6.5\n12.9\n8.7\n4.8\n\n\n4.2\n8.1\n6.5\n5.8\n6.7\n2.4\n11.1\n7.1\n8.8\n7.2\n\n\n\n\nWhat is the purpose to be served by answering the question? Either research & development, or pure science.\nIs this a “probability” or a “statistics” question? Statistics.\nWhat is the form of the statistics question? Confidence limits.\nWhat is the description of the sample that has been observed? The raw data as shown above.\nStatistics of the sample ? Mean of the tree data.\nWhich universe? Assuming that the observed sample is representative of the universe from which it is drawn, what is your best guess about the properties of the universe whose parameter you wish to make statements about? Answer: The universe is like the sample above but much, much bigger. That is, in the absence of other information, we imagine this “bootstrap” universe as a collection of (say) one million trees of 8.5 centimeters width, one million of 7.2 centimeters, and so on. We’ll see in a moment that the device of sampling with replacement makes it unnecessary for us to work with such a large universe; by replacing each element after we draw it in a resample, we achieve the same effect as creating an almost-infinite universe from which to draw the resamples. (Are there possibilities for Bayesian analysis?) No Bayesian prior information will be included.\nWhich parameter do you wish to make statements about? The mean.\nWhich symbols for the observed entities? Cards or computer entries with numbers 8.5…7.2, sample of an infinite size.\nIf the universe is as guessed at, for which samples do you wish to estimate the variation? Samples of size 20.\n\nHere one may continue with the conventional method. Everything up to now is the same whether continuing with resampling or with a standard parametric test. The information listed above is the basis for a conventional test.\nContinuing with resampling:\n\nWhat procedure will be used to produce the trial entities? Random selection: simple (single step), not complex (multiple “if”) sample drawings).\nWhat procedure to produce resamples? With replacement. As noted above, sampling with replacement allows us to forego creating a very large bootstrap universe; replacing the elements after we draw them achieves the same effect as would an infinite universe.\nNumber of drawings? 20 trees\nWhat to record as result of resample drawing? The mean.\nHow to state the distribution of results? See histogram.\nChoice of confidence bounds? 90%, two-tailed.\nComputation of values of the resample statistic corresponding to chosen confidence bounds? Read from histogram.\n\nAs has been discussed in Chapter 19, it often is more appropriate to work with the median than with the mean. One reason is that the median is not so sensitive to the extreme observations as is the mean. 
Another reason is that one need not assume a Normal distribution for the universe under study: this consideration affects conventional statistics but usually does not affect resampling, but it is worth keeping mind when a statistician is making a choice between a parametric (that is, Normal-based) and a non-parametric procedure.\n\n\n27.3.3 Example: Determining a Confidence Interval for the Median Aluminum Content in Theban Jars\nData for the percentages of aluminum content in a sample of 18 ancient Theban jars (Catling and Jones 1977) are as follows, arranged in ascending order: 11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5, 15.0, 15.1, 15.8, 16.0, 16.3, 16.5, 16.9, 17.0, 17.2, 17.5, 19.0. Consider now putting a confidence interval around the median of 15.45 (halfway between the middle observations 15.1 and 15.8).\nOne may simply estimate a confidence interval around the median with a bootstrap procedure by substituting the median for the mean in the usual bootstrap procedure for estimating a confidence limit around the mean, as follows:\n\ndata = c(11.4, 13.4, 13.5, 13.8, 13.9, 14.4, 14.5,\n 15.0, 15.1, 15.8, 16.0, 16.3, 16.5, 16.9,\n 17.0, 17.2, 17.5, 19.0)\n\nobserved_median <- median(data)\n\nn <- 10000\nmedians <- numeric(n)\n\nfor (i in 1:n) {\n sample <- sample(data, replace=TRUE)\n medians[i] <- median(sample)\n}\n\nhist(medians)\n\nmessage('Observed median aluminum content: ', observed_median)\n\nObserved median aluminum content: 15.45\n\npp <- quantile(medians, c(0.025, 0.975))\nmessage('Estimate of 95 percent confidence interval: ', pp[1], ' - ', pp[2])\n\nEstimate of 95 percent confidence interval: 14.15 - 16.6\n\n\n\n\n\n\n\n\n\n(This problem would be approached conventionally with a binomial procedure leading to quite wide confidence intervals (Deshpande, Gore, and Shanubhogue 1995, 32)).\n\n\n\n27.3.4 Example: Confidence Interval for the Median Price Elasticity of Demand for Cigarettes\nThe data for a measure of responsiveness of demand to a price change (the “elasticity” — percent change in demand divided by percent change in price) are shown for cigarette price changes as follows (Table 27.2). I (JLS) computed the data from cigarette sales data preceding and following a tax change in a state (Lyon and Simon 1968).\n\n\nTable 27.2: Price elasticity of demand in various states at various dates\n\n\n\n\n\n\n\n\n\n\n\n\n1.725\n1.139\n.957\n.863\n.802\n.517\n.407\n.304\n\n\n.204\n.125\n.122\n.106\n.031\n-.032\n-.1\n-.142\n\n\n-.174\n-.234\n-.240\n-.251\n-.277\n-.301\n-.302\n-.302\n\n\n-.307\n-.328\n-.329\n-.346\n-.357\n-.376\n-.377\n-.383\n\n\n-.385\n-.393\n-.444\n-.482\n-.511\n-.538\n-.541\n-.549\n\n\n-.554\n-.600\n-.613\n-.644\n-.692\n-.713\n-.724\n-.734\n\n\n-.749\n-.752\n-.753\n-.766\n-.805\n-.866\n-.926\n-.971\n\n\n-.972\n-.975\n-1.018\n-1.024\n-1.066\n-1.118\n-1.145\n-1.146\n\n\n-1.157\n-1.282\n-1.339\n-1.420\n-1.443\n-1.478\n-2.041\n-2.092\n\n\n-7.100\n\n\n\n\n\n\n\n\n\n\n\nThe positive observations (implying an increase in demand when the price rises) run against all theory, but can be considered to be the result simply of measurement errors, and treated as they stand. Aside from this minor complication, the reader may work this example similarly to the case of the Theban jars. 
Consider this program:\n\ndata = c(\n 1.725, 1.139, 0.957, 0.863, 0.802, 0.517, 0.407, 0.304,\n 0.204, 0.125, 0.122, 0.106, 0.031, -0.032, -0.1, -0.142,\n -0.174, -0.234, -0.240, -0.251, -0.277, -0.301, -0.302, -0.302,\n -0.307, -0.328, -0.329, -0.346, -0.357, -0.376, -0.377, -0.383,\n -0.385, -0.393, -0.444, -0.482, -0.511, -0.538, -0.541, -0.549,\n -0.554, -0.600, -0.613, -0.644, -0.692, -0.713, -0.724, -0.734,\n -0.749, -0.752, -0.753, -0.766, -0.805, -0.866, -0.926, -0.971,\n -0.972, -0.975, -1.018, -1.024, -1.066, -1.118, -1.145, -1.146,\n -1.157, -1.282, -1.339, -1.420, -1.443, -1.478, -2.041, -2.092,\n -7.100)\n\ndata_median <- median(data)\n\nn <- 10000\n\nmedians <- numeric(n)\n\nfor (i in 1:n) {\n sample <- sample(data, replace=TRUE)\n medians[i] <- median(sample)\n}\n\nhist(medians)\n\nmessage('Observed median elasticity: ', data_median)\n\nObserved median elasticity: -0.511\n\npp <- quantile(medians, c(0.025, 0.975))\nmessage('Estimate of 95 percent confidence interval: ',\n pp[1], ' - ', pp[2])\n\nEstimate of 95 percent confidence interval: -0.692 - -0.357" + }, + { + "objectID": "confidence_2.html#measured-data-example-confidence-intervals-for-a-difference-between-two-means", + "href": "confidence_2.html#measured-data-example-confidence-intervals-for-a-difference-between-two-means", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.4 Measured Data Example: Confidence Intervals For a Difference Between Two Means", + "text": "27.4 Measured Data Example: Confidence Intervals For a Difference Between Two Means\nThis is another example from the mice data.\nReturning to the data on the survival times of the two groups of mice in Section 24.0.3. It is the view of this book that confidence intervals should be calculated for a difference between two groups only if one is reasonably satisfied that the difference is not due to chance. Some statisticians might choose to compute a confidence interval in this case nevertheless, some because they believe that the confidence-interval machinery is more appropriate to deciding whether the difference is the likely outcome of chance than is the machinery of a hypothesis test in which you are concerned with the behavior of a benchmark or null universe. So let us calculate a confidence interval for these data, which will in any case demonstrate the technique for determining a confidence interval for a difference between two samples.\nOur starting point is our estimate for the difference in mean survival times between the two samples — 30.63 days. We ask “How much might this estimate be in error? If we drew additional samples from the control universe and additional samples from the treatment universe, how much might they differ from this result?”\nWe do not have the ability to go back to these universes and draw more samples, but from the samples themselves we can create hypothetical universes that embody all that we know about the treatment and control universes. We imagine replicating each element in each sample millions of times to create a hypothetical control universe and (separately) a hypothetical treatment universe. 
Then we can draw samples (separately) from these hypothetical universes to see how reliable our original estimate of the difference in means (30.63 days) is.\nActually, we use a shortcut — instead of copying each sample element a million times, we simply replace it after drawing it for our resample, thus creating a universe that is effectively infinite.\nHere are the steps:\n\nStep 1: Consider the two samples separately as the relevant universes.\nStep 2: Draw a sample of 7 with replacement from the treatment group and calculate the mean.\nStep 3: Draw a sample of 9 with replacement from the control group and calculate the mean.\nStep 4: Calculate the difference in means (treatment minus control) and record it.\nStep 5: Repeat steps 2-4 many times.\nStep 6: Review the distribution of resample means; the 5th and 95th percentiles are estimates of the endpoints of a 90 percent confidence interval.\n\nHere is an R example:\n\ntreatment = c(94, 38, 23, 197, 99, 16, 141)\ncontrol = c(52, 10, 40, 104, 51, 27, 146, 30, 46)\n\nobserved_diff <- mean(treatment) - mean(control)\n\nn <- 10000\nmean_delta <- numeric(n)\n\nfor (i in 1:n) {\n treatment_sample <- sample(treatment, replace=TRUE)\n control_sample <- sample(control, replace=TRUE)\n mean_delta[i] <- mean(treatment_sample) - mean(control_sample)\n}\n\nhist(mean_delta)\n\nmessage('Observed difference in means: ', round(observed_diff, 2))\n\nObserved difference in means: 30.63\n\npp <- quantile(mean_delta, c(0.05, 0.95))\nmessage('Estimate of 90 percent confidence interval: ',\n round(pp[1], 2), ' - ', round(pp[2], 2))\n\nEstimate of 90 percent confidence interval: -13.76 - 75.43\n\n[histogram of the resampled differences in means]\n\nInterpretation: This means that one can be 90 percent confident that the mean of the difference (which is estimated to be 30.63) falls between -13.76 and 75.43. So the reliability of the estimate of the mean difference is very low." + }, + { + "objectID": "confidence_2.html#count-data-example-confidence-limit-on-a-proportion-framingham-cholesterol-data", + "href": "confidence_2.html#count-data-example-confidence-limit-on-a-proportion-framingham-cholesterol-data", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.5 Count Data Example: Confidence Limit on a Proportion, Framingham Cholesterol Data", + "text": "27.5 Count Data Example: Confidence Limit on a Proportion, Framingham Cholesterol Data\nThe Framingham cholesterol data were used in Section 21.2.6 to illustrate the first classic question in statistical inference — interpretation of sample data for testing hypotheses. Now we use the same data for the other main theme in statistical inference — the estimation of confidence intervals. Indeed, the bootstrap method discussed above was originally devised for estimation of confidence intervals. The bootstrap method may also be used to calculate the appropriate sample size for experiments and surveys, another important topic in statistics.\nConsider for now just the data for the sub-group of 135 high-cholesterol men in Table 21.4. Our second classic statistical question is as follows: How much confidence should we have that if we were to take a much larger sample than was actually obtained, the sample mean (that is, the proportion 10/135 = .07) would be in some close vicinity of the observed sample mean? 
Let us first carry out a resampling procedure to answer the questions, waiting until afterwards to discuss the logic of the inference.\n\nConstruct a bucket containing 135 balls — 10 red (infarction) and 125 green (no infarction) to simulate the universe as we guess it to be.\nMix, choose a ball, record its color, replace it, and repeat 135 times (to simulate a sample of 135 men).\nRecord the number of red balls among the 135 balls drawn.\nRepeat steps 2-3 perhaps 10000 times, and observe how much the total number of reds varies from sample to sample. We arbitrarily denote the boundary lines that include 47.5 percent of the hypothetical samples on each side of the sample mean as the 95 percent “confidence limits” around the mean of the actual population.\n\nHere is a R program:\n\nmen <- rep(c(1, 0), c(10, 125))\n\nn <- 10000\nz <- numeric(n)\n\nfor (i in 1:n) {\n sample <- sample(men, replace=TRUE)\n infarctions <- sum(sample == 1)\n z[i] <- infarctions / 135\n}\n\nhist(z)\n\npp <- quantile(z, c(0.025, 0.975))\nmessage('Estimate of 95 percent confidence interval: ',\n round(pp[1], 2), ' - ', round(pp[2], 2))\n\nEstimate of 95 percent confidence interval: 0.04 - 0.12\n\n\n\n\n\n\n\n\n\n(The result is the 95 percent confidence interval, enclosing 95 percent of the resample results)\nThe variation in the histogram above highlights the fact that a sample containing only 10 cases of infarction is very small, and the number of observed cases — or the proportion of cases — necessarily varies greatly from sample to sample. Perhaps the most important implication of this statistical analysis, then, is that we badly need to collect additional data.\nAgain, this is a classic problem in confidence intervals, found in all subject fields. The language used in the cholesterol-infarction example is exactly the same as the language used for the Bush-Dukakis poll above except for labels and numbers.\nAs noted above, the philosophic logic of confidence intervals is quite deep and controversial, less obvious than for the hypothesis test. The key idea is that we can estimate for any given universe the probability P that a sample’s mean will fall within any given distance D of the universe’s mean; we then turn this around and assume that if we know the sample mean, the probability is P that the universe mean is within distance D of it. This inversion is more slippery than it may seem. But the logic is exactly the same for the formulaic method and for resampling. The only difference is how one estimates the probabilities — either with a numerical resampling simulation (as here), or with a formula or other deductive mathematical device (such as counting and partitioning all the possibilities, as Galileo did when he answered a gambler’s question about three dice). And when one uses the resampling method, the probabilistic calculations are the least demanding part of the work. One then has mental capacity available to focus on the crucial part of the job — framing the original question soundly, choosing a model for the facts so as to properly resemble the actual situation, and drawing appropriate inferences from the simulation." 
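As an aside, the exhaustive counting that the text attributes to Galileo is easy to reproduce on the computer. The sketch below is our own illustration, not part of the original example: it enumerates all 6 x 6 x 6 = 216 equally likely outcomes for three dice and counts the ways of rolling a total of 9 and of 10, the sums usually associated with the gambler's puzzle Galileo addressed.

# Enumerate every outcome for three dice and count totals of 9 and 10.
rolls <- expand.grid(die1=1:6, die2=1:6, die3=1:6)
totals <- rowSums(rolls)
message('Ways to total 9:  ', sum(totals == 9), ' of 216')
message('Ways to total 10: ', sum(totals == 10), ' of 216')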
+ }, + { + "objectID": "confidence_2.html#approach-2-probability-of-various-universes-producing-this-sample", + "href": "confidence_2.html#approach-2-probability-of-various-universes-producing-this-sample", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.6 Approach 2: Probability of various universes producing this sample", + "text": "27.6 Approach 2: Probability of various universes producing this sample\nA second approach to the general question of estimate accuracy is to analyze the behavior of a variety of universes centered at other points on the line, rather than the universe centered on the sample mean. One can ask the probability that a distribution centered away from the sample mean, with a given dispersion, would produce (say) a 10-apple scatter having a mean as far away from the given point as the observed sample mean. If we assume the situation to be symmetric, we can find a point at which we can say that a distribution centered there would have only a (say) 5 percent chance of producing the observed sample. And we can also say that a distribution even further away from the sample mean would have an even lower probability of producing the given sample. But we cannot turn the matter around and say that there is any particular chance that the distribution that actually produced the observed sample is between that point and the center of the sample.\nImagine a situation where you are standing on one side of a canyon, and you are hit by a baseball, the only ball in the vicinity that day. Based on experiments, you can estimate that a baseball thrower who you see standing on the other side of the canyon has only a 5 percent chance of hitting you with a single throw. But this does not imply that the source of the ball that hit you was someone else standing in the middle of the canyon, because that is patently impossible. That is, your knowledge about the behavior of the “boundary” universe does not logically imply anything about the existence and behavior of any other universes. But just as in the discussion of testing hypotheses, if you know that one possibility is unlikely, it is reasonable that as a result you will draw conclusions about other possibilities in the context of your general knowledge and judgment.\nWe can find the “boundary” distribution(s) we seek if we a) specify a measure of dispersion, and b) try every point along the line leading away from the sample mean, until we find that distribution that produces samples such as that observed with a (say) 5 percent probability or less.\nTo estimate the dispersion, in many cases we can safely use an estimate based on the sample dispersion, using either resampling or Normal distribution theory. The hardest cases for resampling are a) a very small sample of data, and b) a proportion near 0 or near 1.0 (because the presence or absence in the sample of a small number of observations can change the estimate radically, and therefore a large sample is needed for reliability). In such situations one should use additional outside information, or Normal distribution theory, or both.\nWe can also create a confidence interval in the following fashion: We can first estimate the dispersion for a universe in the general neighborhood of the sample mean, using various devices to be “conservative,” if we like.2 Given the estimated dispersion, we then estimate the probability distribution of various amounts of error between observed sample means and the population mean. 
We can do this with resampling simulation as follows: a) Create other universes at various distances from the sample mean, but with other characteristics similar to the universe that we postulate for the immediate neighborhood of the sample, and b) experiment with those universes. One can also apply the same logic with a more conventional parametric approach, using general knowledge of the sampling distribution of the mean, based on Normal distribution theory or previous experience with resampling. We shall not discuss the latter method here.\nAs with approach 1, we do not make any probability statements about where the population mean may be found. Rather, we discuss only what various hypothetical universes might produce , and make inferences about the “actual” population’s characteristics by comparison with those hypothesized universes.\nIf we are interested in (say) a 95 percent confidence interval, we want to find the distribution on each side of the sample mean that would produce a sample with a mean that far away only 2.5 percent of the time (2 * .025 = 1-.95). A shortcut to find these “border distributions” is to plot the sampling distribution of the mean at the center of the sample, as in Approach 1. Then find the (say) 2.5 percent cutoffs at each end of that distribution. On the assumption of equal dispersion at the two points along the line, we now reproduce the previously-plotted distribution with its centroid (mean) at those 2.5 percent points on the line. The new distributions will have 2.5 percent of their areas on the other side of the mean of the sample.\n\n27.6.1 Example: Approach 2 for Counted Data: the Bush-Dukakis Poll\nLet’s implement Approach 2 for counted data, using for comparison the Bush-Dukakis poll data discussed earlier in the context of Approach 1.\nWe seek to state, for universes that we select on the basis that their results will interest us, the probability that they (or it, for a particular universe) would produce a sample as far or farther away from the mean of the universe in question as the mean of the observed sample — 56 percent for Bush. The most interesting universe is that which produces such a sample only about 5 percent of the time, simply because of the correspondence of this value to a conventional breakpoint in statistical inference. So we could experiment with various universes by trial and error to find this universe.\nWe can learn from our previous simulations of the Bush — Dukakis poll in Approach 1 that about 95 percent of the samples fall within .025 on either side of the sample mean (which we had been implicitly assuming is the location of the population mean). If we assume (and there seems no reason not to) that the dispersions of the universes we experiment with are the same, we will find (by symmetry) that the universe we seek is centered on those points .025 away from .56, or .535 and .585.\nFrom the standpoint of Approach 2, then, the conventional sample formula that is centered at the mean can be considered a shortcut to estimating the boundary distributions. We say that the boundary is at the point that centers a distribution which has only a (say) 2.5 percent chance of producing the observed sample; it is that distribution which is the subject of the discussion, and not the distribution which is centered at \\(\\mu = \\bar{x}\\). 
Results of these simulations are shown in Figure 27.1.\n\n\n\nFigure 27.1: Approach 2 for Bush-Dukakis problem\n\n\nAbout these distributions centered at .535 and .585 — or more importantly for understanding an election situation, the universe centered at .535 — one can say: Even if the “true” value is as low as 53.5 percent for Bush, there is only a 2 ½ percent chance that a sample as high as 56 percent pro-Bush would be observed. (The values of a 2 ½ percent probability and a 2 ½ percent difference between 56 percent and 53.5 percent coincide only by chance in this case.) It would be even more revealing in an election situation to make a similar statement about the universe located at 50-50, but this would bring us almost entirely within the intellectual ambit of hypothesis testing.\nTo restate, then: Moving progressively farther away from the sample mean, we can eventually find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can then say that this point represents a “limit” or “boundary” so that the interval between it and the sample mean may be called a confidence interval.\n\n\n27.6.2 Example: Approach 2 for Measured Data: The Diameters of Trees\nTo implement Approach 2 for measured data, one may proceed exactly as with Approach 1 above except that the output of the simulation with the sample mean as midpoint will be used for guidance about where to locate trial universes for Approach 2. The results for the tree diameter data (Table 27.1) are shown in Figure 27.2.\n\n\n\nFigure 27.2: Approach 2 for tree diameters" + }, + { + "objectID": "confidence_2.html#interpretation-of-approach-2", + "href": "confidence_2.html#interpretation-of-approach-2", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.7 Interpretation of Approach 2", + "text": "27.7 Interpretation of Approach 2\nNow to interpret the results of the second approach: Assume that the sample is not drawn in a biased fashion (such as the wind blowing all the apples in the same direction), and that the population has the same dispersion as the sample. We can then say that distributions centered at the two endpoints of the 95 percent confidence interval (each of them including a tail in the direction of the observed sample mean with 2.5 percent of the area), or even further away from the sample mean, will produce the observed sample only 5 percent of the time or less .\nThe result of the second approach is more in the spirit of a hypothesis test than of the usual interpretation of confidence intervals. Another statement of the result of the second approach is: We postulate a given universe — say, a universe at (say) the two-tailed 95 percent boundary line. We then say: The probability that the observed sample would be produced by a universe with a mean as far (or further) from the observed sample’s mean as the universe under investigation is only 2.5 percent. This is similar to the probability value interpretation of a hypothesis-test framework. It is not a direct statement about the location of the mean of the universe from which the sample has been drawn. But it is certainly reasonable to derive a betting-odds interpretation of the statement just above, to wit: The chances are 2½ in 100 (or, the odds are 2½ to 97½ ) that a population located here would generate a sample with a mean as far away as the observed sample. 
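A minimal R sketch of the Approach 2 simulation just described for the Bush-Dukakis case might look like the following; the 10,000-trial count and the 1/0 coding of votes are our own choices for illustration. It asks how often a universe that is 53.5 percent for Bush produces a sample of 1500 that is 56 percent or more for Bush.

# Approach 2: sample repeatedly from a universe centered at 53.5% for Bush
# and see how often the sample reaches the observed 56% or more.
n_trials <- 10000
n_sample <- 1500
props <- numeric(n_trials)
for (i in 1:n_trials) {
  votes <- sample(c(1, 0), size=n_sample, replace=TRUE, prob=c(0.535, 0.465))
  props[i] <- mean(votes)
}
message('Proportion of samples at or above 56% for Bush: ',
        round(mean(props >= 0.56), 3))

If the simulation behaves as the text describes, this proportion should come out near 0.025.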
And it would seem legitimate to proceed to the further betting-odds statement that (assuming we have no additional information) the odds are 97 ½ to 2 ½ that the mean of the universe that generated this sample is no farther away from the sample mean than the mean of the boundary universe under discussion. About this statement there is nothing slippery, and its meaning should not be controversial.\nHere again the tactic for interpreting the statistical procedure is to restate the facts of the behavior of the universe that we are manipulating and examining at that moment. We use a heuristic device to find a particular distribution — the one that is at (say) the 97 ½ –2 ½ percent boundary — and simply state explicitly what the distribution tells us implicitly: The probability of this distribution generating the observed sample (or a sample even further removed) is 2 ½ percent. We could go on to say (if it were of interest to us at the moment) that because the probability of this universe generating the observed sample is as low as it is, we “reject” the “hypothesis” that the sample came from a universe this far away or further. Or in other words, we could say that because we would be very surprised if the sample were to have come from this universe, we instead believe that another hypothesis is true. The “other” hypothesis often is that the universe that generated the sample has a mean located at the sample mean or closer to it than the boundary universe.\nThe behavior of the universe at the 97 ½ –2 ½ percent boundary line can also be interpreted in terms of our “confidence” about the location of the mean of the universe that generated the observed sample. We can say: At this boundary point lies the end of the region within which we would bet 97 ½ to 2 ½ that the mean of the universe that generated this sample lies to the (say) right of it.\nAs noted in the preview to this chapter, we do not learn about the reliability of sample estimates of the population mean (and other parameters) by logical inference from any one particular sample to any one particular universe, because in principle this cannot be done . Instead, in this second approach we investigate the behavior of various universes at the borderline of the neighborhood of the sample, those universes being chosen on the basis of their resemblances to the sample. We seek, for example, to find the universes that would produce samples with the mean of the observed sample less than (say) 5 percent of the time. In this way the estimation of confidence intervals is like all other statistical inference: One investigates the probabilistic behavior of hypothesized universes, the hypotheses being implicitly suggested by the sample evidence but not logically implied by that evidence.\nApproaches 1 and 2 may (if one chooses) be seen as identical conceptually as well as (in many cases) computationally (except for the asymmetric distributions mentioned earlier). But as I see it, the interpretation of them is rather different, and distinguishing them helps one’s intuitive understanding." + }, + { + "objectID": "confidence_2.html#exercises", + "href": "confidence_2.html#exercises", + "title": "27  Confidence Intervals, Part 2: The Two Approaches to Estimating Confidence Intervals", + "section": "27.8 Exercises", + "text": "27.8 Exercises\nSolutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.\n\n27.8.1 Exercise 1\nIn a sample of 200 people, 7 percent are found to be unemployed. 
Determine a 95 percent confidence interval for the true population proportion.\n\n\n27.8.2 Exercise 2\nA sample of 20 batteries is tested, and the average lifetime is 28.85 months. Establish a 95 percent confidence interval for the true average value. The sample values (lifetimes in months) are listed below.\n30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31\n\n\n27.8.3 Exercise 3\nSuppose we have 10 measurements of Optical Density on a batch of HIV negative control:\n.02 .026 .023 .017 .022 .019 .018 .018 .017 .022\nDerive a 95 percent confidence interval for the sample mean. Are there enough measurements to produce a satisfactory answer?\n\n\n\n\nCatling, HW, and RE Jones. 1977. “A Reinvestigation of the Provenance of the Inscribed Stirrup Jars Found at Thebes.” Archaeometry 19 (2): 137–46.\n\n\nDeshpande, Jayant V, AP Gore, and A Shanubhogue. 1995. Statistical Analysis of Nonnormal Data. Taylor & Francis. https://www.google.co.uk/books/edition/Statistical_Analysis_of_Nonnormal_Data/sS0on2XqwwoC.\n\n\nLee, Peter M. 2012. Bayesian Statistics: An Introduction. 4th ed. Wiley Online Library. https://www.york.ac.uk/depts/maths/histstat/pml1/bayes/book.htm.\n\n\nLyon, Herbert L, and Julian Lincoln Simon. 1968. “Price Elasticity of the Demand for Cigarettes in the United States.” American Journal of Agricultural Economics 50 (4): 888–95.\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New York: Dover Publications, Inc.\n\n\nSimon, Julian Lincoln. 1998. “The Philosophy and Practice of Resampling Statistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory Statistics. 5th ed. New York: John Wiley & Sons." + }, + { + "objectID": "reliability_average.html#the-problem-of-uncertainty-about-the-dispersion", + "href": "reliability_average.html#the-problem-of-uncertainty-about-the-dispersion", + "title": "28  Some Last Words About the Reliability of Sample Averages", + "section": "28.1 The problem of uncertainty about the dispersion", + "text": "28.1 The problem of uncertainty about the dispersion\nThe inescapable difficulty of estimating the amount of dispersion in the population has greatly exercised statisticians over the years. Hence I must try to clarify the matter. Yet in practice this issue turns out not to be the likely source of much error even if one is somewhat wrong about the extent of dispersion, and therefore we should not let it be a stumbling block in the way of our producing estimates of the accuracy of samples in estimating population parameters.\nStudent’s t test was designed to get around the problem of the lack of knowledge of the population dispersion. But Wallis and Roberts wrote about the t test: “[F]ar-reaching as have been the consequences of the t distribution for technical statistics, in elementary applications it does not differ enough from the normal distribution…to justify giving beginners this added complexity.” (Wallis and Roberts 1956, x) “Although Student’s t and the F ratio are explained…the student…is advised not ordinarily to use them himself but to use the shortcut methods… These, being non-parametric and involving simpler computations, are more nearly foolproof in the hands of the beginner — and, ordinarily, only a little less powerful.” (p. xi)1\nIf we knew the population parameter — the proportion, in the case we will discuss — we could easily determine how inaccurate the sample proportion is likely to be. 
If, for example, we wanted to know about the likely inaccuracy of the proportion of a sample of 100 voters drawn from a population of a million that is 60% Democratic, we could simply simulate drawing (say) 200 samples of 100 voters from such a universe, and examine the average inaccuracy of the 200 sample proportions.\nBut in fact we do not know the characteristics of the actual universe. Rather, the nature of the actual universe is what we seek to learn about. Of course, if the amount of variation among samples were the same no matter what the Republican-Democrat proportions in the universe, the issue would still be simple, because we could then estimate the average inaccuracy of the sample proportion for any universe and then assume that it would hold for our universe. But it is reasonable to suppose that the amount of variation among samples will be different for different Democrat-Republican proportions in the universe.\nLet us first see why the amount of variation among samples drawn from a given universe is different with different relative proportions of the events in the universe. Consider a universe of 999,999 Democrats and one Republican. Most samples of 100 taken from this universe will contain 100 Democrats. A few (and only a very, very few) samples will contain 99 Democrats and one Republican. So the biggest possible difference between the sample proportion and the population proportion (99.9999%) is less than one percent (for the very few samples of 99% Democrats). And most of the time the difference will only be the tiny difference between a sample of 100 Democrats (sample proportion = 100%), and the population proportion of 99.9999%.\nCompare the above to the possible difference between a sample of 100 from a universe of half a million Republicans and half a million Democrats. At worst a sample could be off by as much as 50% (if it got zero Republicans or zero Democrats), and at best it is unlikely to get exactly 50 of each. So it will almost always be off by 1% or more.\nIt seems, therefore, intuitively reasonable (and in fact it is true) that the likely difference between a sample proportion and the population proportion is greatest with a 50%-50% universe, least with a 0%-100% universe, and somewhere in between for probabilities, in the fashion of Figure 28.1.\n\n\n\n\n\nFigure 28.1: Relationship Between the Population Proportion and the Likely Error In a Sample\n\n\n\n\nPerhaps it will help to clarify the issue of estimating dispersion if we consider this: If we compare estimates for a second sample based on a) the population , versus b) the first sample , the former will be more accurate than the latter, because of the sampling variation in the first sample that affects the latter estimate. But we cannot estimate that sampling variation without knowing more about the population." 
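The relationship sketched in Figure 28.1 can be reproduced by simulation. The following R sketch is our own illustration (the grid of proportions and the 1,000 trials per proportion are arbitrary choices): for each population proportion it draws many samples of 100 and records the average absolute difference between the sample proportion and the population proportion.

# Likely error of a sample proportion (n = 100) for different population proportions.
set.seed(42)                      # only to make the sketch reproducible
pop_props <- seq(0, 1, by=0.05)
avg_error <- numeric(length(pop_props))
for (j in seq_along(pop_props)) {
  p <- pop_props[j]
  sample_props <- rbinom(1000, size=100, prob=p) / 100   # 1000 simulated samples
  avg_error[j] <- mean(abs(sample_props - p))
}
plot(pop_props, avg_error, type='b',
     xlab='Population proportion', ylab='Likely (mean absolute) error')

The plotted curve is largest near a 50%-50% universe and shrinks toward zero at the extremes, matching the reasoning above.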
+ }, + { + "objectID": "reliability_average.html#notes-on-the-use-of-confidence-intervals", + "href": "reliability_average.html#notes-on-the-use-of-confidence-intervals", + "title": "28  Some Last Words About the Reliability of Sample Averages", + "section": "28.2 Notes on the use of confidence intervals", + "text": "28.2 Notes on the use of confidence intervals\n\nConfidence intervals are used more frequently in the physical sciences — indeed, the concept was developed for use in astronomy — than in bio-statistics and in the social sciences; in these latter fields, measurement is less often the main problem and the distinction between hypotheses often is difficult.\nSome statisticians suggest that one can do hypothesis tests with the confidence-interval concept. But that seems to me equivalent to suggesting that one can get from New York to Chicago by flying first to Los Angeles. Additionally, the logic of hypothesis tests is much clearer than the logic of confidence intervals, and it corresponds to our intuitions so much more easily.\nDiscussions of confidence intervals sometimes assert that one cannot make a probability statement about where the population mean may be, yet can make statements about the probability that a particular set of samples may bound that mean.\n\nIf we agree that our interest is upcoming events and probably decision-making, then we obviously are interested in putting betting odds on the location of the population mean (and subsequent samples). And a statement about process will not help us with that, but only a probability statement.\nMoving progressively farther away from the sample mean, we can find a universe that has only some (any) specified small probability of producing a sample like the one observed. One can say that this point represents a “limit” or “boundary” between which and the sample mean may be called a confidence interval, I suppose.\nThis issue is discussed in more detail in Simon (1998, published online)." + }, + { + "objectID": "reliability_average.html#overall-summary-and-conclusions-about-confidence-intervals", + "href": "reliability_average.html#overall-summary-and-conclusions-about-confidence-intervals", + "title": "28  Some Last Words About the Reliability of Sample Averages", + "section": "28.3 Overall summary and conclusions about confidence intervals", + "text": "28.3 Overall summary and conclusions about confidence intervals\nThe first task in statistics is to measure how much — to make a quantitative estimate of the universe from which a given sample has been drawn, including especially the average and the dispersion; the theory of point estimation is discussed in Chapter 19.\nThe next task is to make inferences about the meaning of the estimates. A hypothesis test helps us decide whether two or more universes are the same or different from each other. In contrast, the confidence interval concept helps us decide on the reliability of an estimate.\nConfidence intervals and hypothesis tests are not entirely disjoint. In fact, hypothesis testing of a single sample against a benchmark value is, under all interpretations, I think, operationally identical with constructing a confidence interval and checking whether it includes that benchmark value. But the underlying reasoning is different because the questions which they are designed to answer are different.\nHaving now worked through the entire procedure of producing a confidence interval, it should be glaringly obvious why statistics is such a difficult subject. 
The procedure is very long, and involves a very large number of logical steps. Such a long logical train is very hard to control intellectually, and very hard to follow with one’s intuition. The actual computation of the probabilities is the very least of it, almost a trivial exercise.\n\n\n\n\nSimon, Julian Lincoln. 1998. “The Philosophy and Practice of Resampling Statistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New Approach. New York: The Free Press." + }, + { + "objectID": "correlation_causation.html#preview", + "href": "correlation_causation.html#preview", + "title": "29  Correlation and Causation", + "section": "29.1 Preview", + "text": "29.1 Preview\nThe correlation (speaking in a loose way for now) between two variables measures the strength of the relationship between them. A positive “linear” correlation between two variables x and y implies that high values of x are associated with high values of y, and that low values of x are associated with low values of y. A negative correlation implies the opposite; high values of x are associated with low values of y. By definition a “correlation coefficient” close to zero indicates little or no linear relationship between two variables; correlation coefficients close to 1 and -1 denote a strong positive or negative relationship. We will generally use a simpler measure of correlation than the correlation coefficient, however.\nOne way to measure correlation with the resampling method is to rank both variables from highest to lowest, and investigate how often in randomly-generated samples the rankings of the two variables are as close to each other as the rankings in the observed variables. A better approach, because it uses more of the quantitative information contained in the data though it requires more computation, is to multiply the values for the corresponding pairs of values for the two variables, and compare the sum of the resulting products to the analogous sum for randomly-generated pairs of the observed variable values. The last section of the chapter shows how the strength of a relationship can be determined when the data are counted, rather than measured. First comes some discussion of the philosophical issues involved in correlation and causation." + }, + { + "objectID": "correlation_causation.html#introduction-to-correlation-and-causation", + "href": "correlation_causation.html#introduction-to-correlation-and-causation", + "title": "29  Correlation and Causation", + "section": "29.2 Introduction to correlation and causation", + "text": "29.2 Introduction to correlation and causation\nThe questions in examples Section 12.1 to Section 13.3.3 have been stated in the following form: Does the independent variable (say, irradiation; or type of pig ration) have an effect upon the dependent variable (say, sex of fruit flies; or weight gain of pigs)? This is another way to state the following question: Is there a causal relationship between the independent variable(s) and the dependent variable? (“Independent” or “control” is the name we give to the variable(s) the researcher believes is (are) responsible for changes in the other variable, which we call the “dependent” or “response” variable.)\nA causal relationship cannot be defined perfectly neatly. Even an experiment does not determine perfectly whether a relationship deserves to be called “causal” because, among other reasons, the independent variable may not be clear-cut. 
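To make the sum-of-products idea concrete before the worked examples that follow, here is a minimal R sketch using small made-up vectors x and y (the data and the 10,000 shuffles are purely illustrative assumptions).

# Sum-of-products test for association: compare the observed sum of x*y with
# the sums obtained when the pairing of x and y is shuffled at random.
x <- c(2, 4, 6, 8, 10)
y <- c(1, 3, 2, 5, 6)
observed <- sum(x * y)

n <- 10000
results <- numeric(n)
for (i in 1:n) {
  results[i] <- sum(x * sample(y))   # sample(y) re-pairs the y values at random
}

message('Proportion of shuffles with a sum of products as large as observed: ',
        round(mean(results >= observed), 3))

A small proportion here indicates that high values of x are paired with high values of y more closely than random pairing would usually produce.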
For example, even if cigarette smoking experimentally produces cancer in rats, it might be the paper and not the tobacco that causes the cancer. Or consider the fabled gentlemen who got experimentally drunk on bourbon and soda on Monday night, scotch and soda on Tuesday night, and brandy and soda on Wednesday night — and stayed sober Thursday night by drinking nothing. With a vast inductive leap of scientific imagination, they treated their experience as an empirical demonstration that soda, the common element each evening, was the cause of the inebriated state they had experienced. Notice that their deduction was perfectly sound, given only the recent evidence they had. Other knowledge of the world is necessary to set them straight. That is, even in a controlled experiment there is often no way except subject-matter knowledge to avoid erroneous conclusions about causality. Nothing except substantive knowledge or scientific intuition would have led them to the recognition that it is the alcohol rather than the soda that made them drunk, as long as they always took soda with their drinks . And no statistical procedure can suggest to them that they ought to experiment with the presence and absence of soda. If this is true for an experiment, it must also be true for an uncontrolled study.\nHere are some tests that a relationship usually must pass to be called causal. That is, a working definition of a particular causal relationship is expressed in a statement that has these important characteristics:\n\nIt is an association that is strong enough so that the observer believes it to have a predictive (explanatory) power great enough to be scientifically useful or interesting. For example, he is not likely to say that wearing glasses causes (or is a cause of) auto accidents if the observed correlation is .07, even if the sample is large enough to make the correlation statistically significant. In other words, unimportant relationships are not likely to be labeled causal.\nVarious observers may well differ in judging whether or not an association is strong enough to be important and therefore “causal.” And the particular field in which the observer works may affect this judgment. This is an indication that whether or not a relationship is dubbed “causal” involves a good deal of human judgment and is subject to dispute.\nThe “side conditions” must be sufficiently few and sufficiently observable so that the relationship will apply under a wide enough range of conditions to be considered useful or interesting. In other words, the relationship must not require too many “if”s, “and”s, and “but”s in order to hold . For example, one might say that an increase in income caused an increase in the birth rate if this relationship were observed everywhere. But, if the relationship were found to hold only in developed countries, among the educated classes, and among the higher-income groups, then it would be less likely to be called “causal” — even if the correlation were extremely high once the specified conditions had been met. A similar example can be made of the relationship between income and happiness.\nFor a relationship to be called “causal,” there should be sound reason to believe that, even if the control variable were not the “real” cause (and it never is), other relevant “hidden” and “real” cause variables must also change consistently with changes in the control variables. 
That is, a variable being manipulated may reasonably be called “causal” if the real variable for which it is believed to be a proxy must always be tied intimately to it. (Between two variables, v and w, v may be said to be the “more real” cause and w a “spurious” cause, if v and w require the same side conditions, except that v does not require w as a side condition.) This third criterion (non-spuriousness) is of particular importance to policy makers. The difference between it and the previous criterion for side conditions is that a plenitude of very restrictive side conditions may take the relationship out of the class of causal relationships, even though the effects of the side conditions are known . This criterion of nonspuriousness concerns variables that are as yet unknown and unevaluated but that have a possible ability to upset the observed association.\nExamples of spurious relationships and hidden-third-factor causation are commonplace. For a single example, toy sales rise in December. There is no danger in saying that December causes an increase in toy sales, even though it is “really” Christmas that causes the increase, because Christmas and December practically always accompany each other.\nBelief that the relationship is not spurious is increased if many likely variables have been investigated and none removes the relationship. This is further demonstration that the test of whether or not an association should be called “causal” cannot be a logical one; there is no way that one can express in symbolic logic the fact that many other variables have been tried without changing the relationship in question.\nThe more tightly a relationship is bound into (that is, deduced from, compatible with, and logically connected to) a general framework of theory, the stronger is its claim to be called “causal.” For an economics example, observed positive relationships between the interest rate and business investment and between profits and investment are more likely to be called “causal” than is the relationship between liquid assets and investment. This is so because the first two statements can be deduced from classical price theory, whereas the third statement cannot. Connection to a theoretical framework provides support for belief that the side conditions necessary for the statement to hold true are not restrictive and that the likelihood of spurious correlation is not great; because a statement is logically connected to the rest of the system, the statement tends to stand or fall as the rest of the system stands or falls. And, because the rest of the system of economic theory has, over a long period of time and in a wide variety of tests, been shown to have predictive power, a statement connected with it is cloaked in this mantle.\n\nThe social sciences other than economics do not have such well-developed bodies of deductive theory, and therefore this criterion of causality does not weigh as heavily in sociology, for instance, as in economics. Rather, the other social sciences seem to substitute a weaker and more general criterion, that is, whether or not the statement of the relationship is accompanied by other statements that seem to “explain” the “mechanism” by which the relationship operates. Consider, for example, the relationship between the phases of the moon and the suicide rate. The reason that sociologists do not call it causal is that there are no auxiliary propositions that explain the relationship and describe an operative mechanism. 
On the other hand, the relationship between broken homes and juvenile delinquency is often referred to as “causal,” in large part because a large body of psychoanalytic theory serves to explain why a child raised without one or the other parent, or in the presence of parental strife, should not adjust readily.\nFurthermore, one can never decide with perfect certainty whether in any given situation one variable “causes” a particular change in another variable. At best, given your particular purposes in investigating a phenomena, you may be safe in judging that very likely there is causal influence.\nIn brief, it is correct to say (as it is so often said) that correlation does not prove causation — if we add the word “completely” to make it “correlation does not completely prove causation.” On the other hand, causation can never be “proven” completely by correlation or any other tool or set of tools, including experimentation. The best we can do is make informed judgments about whether to call a relationship causal.\nIt is clear, however, that in any situation where we are interested in the possibility of causation, we must at least know whether there is a relationship (correlation) between the variables of interest; the existence of a relationship is necessary for a relationship to be judged causal even if it is not sufficient to receive the causal label. And in other situations where we are not even interested in causality, but rather simply want to predict events or understand the structure of a system, we may be interested in the existence of relationships quite apart from questions about causations. Therefore our next set of problems deals with the probability of there being a relationship between two measured variables, variables that can take on any values (say, the values on a test of athletic scores) rather than just two values (say, whether or not there has been irradiation.)1\nAnother way to think about such problems is to ask whether two variables are independent of each other — that is, whether you know anything about the value of one variable if you know the value of the other in a particular case — or whether they are not independent but rather are related." + }, + { + "objectID": "correlation_causation.html#a-note-on-association-compared-to-testing-a-hypothesis", + "href": "correlation_causation.html#a-note-on-association-compared-to-testing-a-hypothesis", + "title": "29  Correlation and Causation", + "section": "29.3 A Note on Association Compared to Testing a Hypothesis", + "text": "29.3 A Note on Association Compared to Testing a Hypothesis\nProblems in which we investigate a) whether there is an association , versus b) whether there is a difference between just two groups, often look very similar, especially when the data constitute a 2-by-2 table. There is this important difference between the two types of analysis, however: Questions about association refer to variables — say weight and age — and it never makes sense to ask whether there is a difference between variables (except when asking whether they measure the same quantity). Questions about similarity or difference refer to groups of individuals , and in such a situation it does make sense to ask whether or not two groups are observably different from each other.\nExample 23-1: Is Athletic Ability Directly Related to Intelligence? (Is There Correlation Between Two Variables or Are They Independent?) 
(Program “Ability1”)\nA scientist often wants to know whether or not two characteristics go together, that is, whether or not they are correlated (that is, related or associated). For example, do youths with high athletic ability tend to also have high I.Q.s?\nHypothetical physical-education scores of a group of ten high-school boys are shown in Table 23-1, ordered from high to low, along with the I.Q. score for each boy. The ranks for each student’s athletic and I.Q. scores are then shown in columns 3 and 4.\nTable 23-1\nHypothetical Athletic and I.Q. Scores for High School Boys\n\n\n\nAthletic Score\nI.Q. Score\nAthletic Rank\nI.Q.Rank\n\n\n(1)\n(2)\n(3)\n(4)\n\n\n97\n114\n1\n3\n\n\n94\n120\n2\n1\n\n\n93\n107\n3\n7\n\n\n90\n113\n4\n4\n\n\n87\n118\n5\n2\n\n\n86\n101\n6\n8\n\n\n86\n109\n7\n6\n\n\n85\n110\n8\n5\n\n\n81\n100\n9\n9\n\n\n76\n99\n10\n10\n\n\n\nWe want to know whether a high score on athletic ability tends to be found along with a high I.Q. score more often than would be expected by chance. Therefore, our strategy is to see how often high scores on both variables are found by chance. We do this by disassociating the two variables and making two separate and independent universes, one composed of the athletic scores and another of the I.Q. scores. Then we draw pairs of observations from the two universes at random, and compare the experimental patterns that occur by chance to what actually is observed to occur in the world.\nThe first testing scheme we shall use is similar to our first approach to the pig rations — splitting the results into just “highs” and “lows.” We take ten cards, one of each denomination from “ace” to “10,” shuffle, and deal five cards to correspond to the first five athletic ranks. The face values then correspond to the\nI.Q. ranks. Under the benchmark hypothesis the athletic ranks will not be associated with the I.Q. ranks. Add the face values in the first five cards in each trial; the first hand includes 2, 4, 5, 6, and 9, so the sum is 26. Record, shuffle, and repeat perhaps ten times. Then compare the random results to the sum of the observed ranks of the five top athletes, which equals 17.\nThe following steps describe a slightly different procedure than that just described, because this one may be easier to understand:\nStep 1. Convert the athletic and I.Q. scores to ranks. Then constitute a universe of spades, “ace” to “10,” to correspond to the athletic ranks, and a universe of hearts, “ace” to “10,” to correspond to the IQ ranks.\nStep 2. Deal out the well-shuffled cards into pairs, each pair with an athletic score and an I.Q. score.\nStep 3. Locate the cards with the top five athletic ranks, and add the I.Q. rank scores on their paired cards. Compare this sum to the observed sum of 17. If 17 or less, indicate “yes,” otherwise “no.” (Why do we use “17 or less” rather than “less than 17”? Because we are asking the probability of a score this low or lower .)\nStep 4. Repeat steps 2 and 3 ten times.\nStep 5. Calculate the proportion “yes.” This estimates the probability sought.\nIn Table 23-2 we see that the observed sum (17) is lower than the sum of the top 5 ranks in all but one (shown by an asterisk) of the ten random trials (trial 5), which suggests that there is a good chance (9 in 10) that the five best athletes will not have I.Q. scores that high by chance. But it might be well to deal some more to get a more reliable average. 
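Before dealing more hands, it may help to see the five steps just listed written out for a computer. Here is one way to express them in Python with the NumPy package (this is only a sketch, and the variable names are ours); the RESAMPLING STATS program for the same problem appears a little further on.

import numpy as np

rng = np.random.default_rng()

observed_sum = 17          # sum of the I.Q. ranks of the five best athletes
n_trials = 10_000
trial_sums = np.zeros(n_trials)

for i in range(n_trials):
    ranks = rng.permutation(np.arange(1, 11))  # shuffle the I.Q. ranks 1 through 10
    trial_sums[i] = ranks[:5].sum()            # ranks that fall, by chance, with the top 5 athletes

print('Proportion of trials with sum <= 17:', np.mean(trial_sums <= observed_sum))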
We add thirty hands, and thirty-nine of the total forty hands exceed the observed rank value, so the probability that the observed correlation of athletic and I.Q. scores would occur by chance is about\n.025. In other words, if there is no real association between the variables, the probability that the top 5 ranks would sum to a number this low or lower is only 1 in 40, and it therefore seems reasonable to believe that high athletic ability tends to accompany a high I.Q.\nTable 23-2\nResults of 40 Random Trials of The Problem “Ability”\n(Note: Observed sum of IQ ranks: 17)\n\n\n\nTrial\nSum of IQ Ranks\nYes or No\n\n\n1\n26\nNo\n\n\n2\n23\nNo\n\n\n3\n22\nNo\n\n\n4\n37\nNo\n\n\n* 5\n16\nYes\n\n\n6\n22\nNo\n\n\n7\n22\nNo\n\n\n8\n28\nNo\n\n\n9\n38\nNo\n\n\n10\n22\nNo\n\n\n11\n35\nNo\n\n\n12\n36\nNo\n\n\n13\n31\nNo\n\n\n14\n29\nNo\n\n\n15\n32\nNo\n\n\n16\n25\nNo\n\n\n17\n25\nNo\n\n\n18\n29\nNo\n\n\n19\n25\nNo\n\n\n20\n22\nNo\n\n\n21\n30\nNo\n\n\n22\n31\nNo\n\n\n23\n35\nNo\n\n\n24\n25\nNo\n\n\n25\n33\nNo\n\n\n26\n30\nNo\n\n\n27\n24\nNo\n\n\n28\n29\nNo\n\n\n29\n30\nNo\n\n\n30\n31\nNo\n\n\n31\n30\nNo\n\n\n32\n21\nNo\n\n\n33\n25\nNo\n\n\n34\n19\nNo\n\n\n35\n29\nNo\n\n\n36\n23\nNo\n\n\n37\n23\nNo\n\n\n38\n34\nNo\n\n\n39\n23\nNo\n\n\n40\n26\nNo\n\n\n\nThe RESAMPLING STATS program “Ability1” creates an array containing the I.Q. rankings of the top 5 students in athletics. The SUM of these I.Q. rankings constitutes the observed result to be tested against randomly-drawn samples. We observe that the actual I.Q. rankings of the top five athletes sums to 17. The more frequently that the sum of 5 randomly-generated rankings (out of 10) is as low as this observed number, the higher is the probability that there is no relationship between athletic performance and I.Q. based on these data.\nFirst we record the NUMBERS “1” through “10” into vector\nA. Then we SHUFFLE the numbers so the rankings are in a random order. Then TAKE the first 5 of these numbers and put them in another array, D, and SUM them, putting the result in E. We repeat this procedure 1000 times, recording each result in a scorekeeping vector: Z. Graphing Z, we get a HIS- TOGRAM that shows us how often our randomly assigned sums are equal to or below 17.\n\n' Program file: \"correlation_causation_00.rss\"\n\nREPEAT 1000\n ' Repeat the experiment 1000 times.\n NUMBERS 1,10 a\n ' Constitute the set of I.Q. ranks.\n SHUFFLE a b\n ' Shuffle them.\n TAKE b 1,5 d\n ' Take the first 5 ranks.\n SUM d e\n ' Sum those ranks.\n SCORE e z\n ' Keep track of the result of each trial.\nEND\n' End the experiment, go back and repeat.\nHISTOGRAM z\n' Produce a histogram of trial results.\nABILITY1: Random Selection of 5 Out of 10 Ranks\n\nSum of top 5 ranks\nWe see that in only about 2% of the trials did random selection of ranks produce a total of 17 or lower. RESAMPLING STATS will calculate this for us directly:\n\n' Program file: \"ability1.rss\"\n\nCOUNT z <= 17 k\n' Determine how many trials produced sums of ranks \\<= 17 by chance.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the results.\n\n' Note: The file \"ability1\" on the Resampling Stats software disk contains\n' this set of commands.\nWhy do we sum the ranks of the first five athletes and compare them with the second five athletes, rather than comparing the top three, say, with the bottom seven? Indeed, we could have looked at the top three, two, four, or even six or seven. 
The first reason for splitting the group in half is that an even split uses the available information more fully, and therefore we obtain greater efficiency. (I cannot prove this formally here, but perhaps it makes intuitive sense to you.) A second reason is that getting into the habit of always looking at an even split reduces the chances that you will pick and choose in such a manner as to fool yourself. For example, if the I.Q. ranks of the top five athletes were 3, 2, 1, 10, and 9, we would be deceiving ourselves if, after looking the data over, we drew the line between athletes 3 and 4. (More generally, choosing an appropriate measure before examining the data will help you avoid fooling yourself in such matters.)\nA simpler but less efficient approach to this same problem is to classify the top-half athletes by whether or not they were also in the top half of the I.Q. scores. Of the first five athletes actually observed, four were in the top five I.Q. scores. We can then shuffle five black and five red cards and see how often four or more (that is, four or five) blacks come up with the first five cards. The proportion of times that four or more blacks occurs in the trial is the probability that an association as strong as that observed might occur by chance even if there is no association. Table 23-3 shows a proportion of five trials out of twenty.\nIn the RESAMPLING STATS program “Ability2” we first note that the top 5 athletes had 4 of the top 5 I.Q. scores. So we constitute the set of 10 IQ rankings (vector A). We then SHUFFLE A and TAKE 5 I.Q. rankings (out of 10). We COUNT how many are in the top 5, and keep SCORE of the result. After REPEATing 1000 times, we find out how often we select 4 of the top 5.\nTable 23-3\nResults of 20 Random Trials of the Problem “ABILITY2”\nObserved Score: 4\n\n\n\nTrial\nScore\nYes or No\n\n\n1\n4\nYes\n\n\n2\n2\nNo\n\n\n3\n2\nNo\n\n\n4\n2\nNo\n\n\n5\n3\nNo\n\n\n6\n2\nNo\n\n\n7\n4\nYes\n\n\n8\n3\nNo\n\n\n9\n3\nNo\n\n\n10\n4\nYes\n\n\n11\n3\nNo\n\n\n12\n1\nNo\n\n\n13\n3\nNo\n\n\n14\n3\nNo\n\n\n15\n4\nYes\n\n\n16\n3\nNo\n\n\n17\n2\nNo\n\n\n18\n2\nNo\n\n\n19\n2\nNo\n\n\n20\n4\nYes\n\n\n\n\n' Program file: \"ability2.rss\"\n\nREPEAT 1000\n ' Do 1000 experiments.\n NUMBERS 1,10 a\n ' Constitute the set of I.Q. ranks.\n SHUFFLE a b\n ' Shuffle them.\n TAKE b 1,5 c\n ' Take the first 5 ranks.\n COUNT c between 1 5 d\n ' Of those 5, count how many are among the top half of the ranks (1-5).\n SCORE d z\n ' Keep track of that result in z\nEND\n' End one experiment, go back and repeat until all 1000 are complete.\nCOUNT z >= 4 k\n' Determine how many trials produced 4 or more top ranks by chance.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Print the result.\n\n' Note: The file \"ability2\" on the Resampling Stats software disk contains\n' this set of commands.\nSo far we have proceeded on the theory that if there is any relationship between athletics and I.Q., then the better athletes have higher rather than lower I.Q. scores. The justification for this assumption is that past research suggests that it is probably true. But if we had not had the benefit of that past research, we would then have had to proceed somewhat differently; we would have had to consider the possibility that the top five athletes could have I.Q. scores either higher or lower than those of the other students. The results of the “two-tail” test would have yielded odds weaker than those we observed.\nExample 23-2: Athletic Ability and I.Q. 
a Third Way.\n(Program “Ability3”).\nExample 23-1 investigated the relationship between I.Q. and athletic score by ranking the two sets of scores. But ranking of scores loses some efficiency because it uses only an “ordinal” (rank-ordered) rather than a “cardinal” (measured) scale; the numerical shadings and relative relationships are lost when we convert to ranks. Therefore let us consider a test of correlation that uses the original cardinal numerical scores.\nFirst a little background: Figure 29.1 and Figure 29.2 show two hypothetical cases of very high association among the I.Q. and athletic scores used in previous examples. Figure 29.1 indicates that the higher the I.Q. score, the higher the athletic score. With a boy’s athletic score you can thus predict quite well his I.Q. score by means of a hand-drawn line — or vice versa. The same is true of Figure 29.2, but in the opposite direction. Notice that even though athletic score is on the x-axis (horizontal) and I.Q. score is on the y-axis (vertical), the athletic score does not cause the I.Q. score. (It is an unfortunate deficiency of such diagrams that some variable must arbitrarily be placed on the x-axis, whether you intend to suggest causation or not.)\n\n\n\n\n\nFigure 29.1: Hypothetical Scores for I.Q. and Athletic Ability — 1\n\n\n\n\n\n\n\n\n\nFigure 29.2: Hypothetical Scores for I.Q. and Athletic Ability — 2\n\n\n\n\nIn Figure 29.3, which plots the scores as given in table 23-1 the prediction of athletic score given I.Q. score, or vice versa, is less clear-cut than in Figure 29.1. On the basis of Figure 29.3 alone, one can say only that there might be some association between the two variables.\n\n\n\n\n\nFigure 29.3: Given Scores for I.Q. and Athletic Ability" + }, + { + "objectID": "correlation_causation.html#correlation-sum-of-products", + "href": "correlation_causation.html#correlation-sum-of-products", + "title": "29  Correlation and Causation", + "section": "29.4 Correlation: sum of products", + "text": "29.4 Correlation: sum of products\nNow let us take advantage of a handy property of numbers. The more closely two sets of numbers match each other in order, the higher the sums of their products. Consider the following arrays of the numbers 1, 2, and 3:\n1 x 1 = 1\n2 x 2 = 4 (columns in matching order) 3 x 3 = 9\nSUM = 14\n1 x 2 = 2\n2 x 3 = 6 (columns not in matching order) 3 x 1 = 3\nSUM = 11\nI will not attempt a mathematical proof, but the reader is encouraged to try additional combinations to be sure that the highest sum is obtained when the order of the two columns is the same. Likewise, the lowest sum is obtained when the two columns are in perfectly opposite order:\n1 x 3 = 3\n2 x 2 = 4 (columns in opposite order) 3 x 1 = 3\nSUM = 10\nConsider the cases in Table 23-4 which are chosen to illustrate a perfect (linear) association between x (Column 1) and y 1 (Column 2), and also between x (Column 1) and y 2 (Column 4); the numbers shown in Columns 3 and 5 are those that would be consistent with perfect associations. Notice the sum of the multiples of the x and y values in the two cases. It is either higher ( xy 1) or lower ( xy 2) than for any other possible way of arranging the y ’s. 
Any other arrangement of the y’s ( y 3, in Column 6, for example, chosen at random), when multiplied by the x ’s in Column 1, ( xy 3), produces a sum that falls somewhere between the sums of xy 1 and xy 2, as is the case with any other set of y 3’s which is not perfectly correlated with the x ’s.\nTable 23-5, below, shows that the sum of the products of the observed I.Q. scores multiplied by athletic scores (column 7) is between the sums that would occur if the I.Q. scores were ranked from best to worst (column 3) and worst to best (column 5). The extent of correlation (association) can thus be measured by whether the sum of the multiples of the observed x\nand y values is relatively much higher or much lower than are sums of randomly-chosen pairs of x and y .\nTable 23-4\nComparison of Sums of Multiplications\n\n\n\nStrong Positive Relationship\nStrong Negative Relationship\nRandom Pairings\n\n\n\n\n\n\nX\nY1\nX*Y1\nY2\nX*Y2\nY3\nX*Y3\n\n\n2\n2\n4\n10\n20\n4\n8\n\n\n4\n4\n16\n8\n32\n8\n32\n\n\n6\n6\n36\n6\n36\n6\n36\n\n\n8\n8\n64\n4\n48\n2\n16\n\n\n10\n10\n100\n2\n20\n10\n100\n\n\nSUMS:\n\n220\n\n156\n\n192\n\n\n\nTable 23-5\nSums of Products: IQ and Athletic Scores\n\n\n\n1\n2\n3\n4\n5\n6\n7\n\n\nAthletic\nHypothetical\nCol. 1 x\nHypothetical\nCol. 1 x\nActual\nCol. 1 x\n\n\nScore\nI.Q.\nCol.2\nI.Q.\nCol. 4\nI.Q.\nCol.6\n\n\n97\n120\n11640\n99\n9603\n114\n11058\n\n\n94\n118\n11092\n100\n9400\n120\n11280\n\n\n93\n114\n10602\n101\n9393\n107\n9951\n\n\n90\n113\n10170\n107\n9630\n113\n10170\n\n\n87\n110\n9570\n109\n9483\n118\n10266\n\n\n86\n109\n9374\n110\n8460\n101\n8686\n\n\n86\n107\n9202\n113\n9718\n109\n9374\n\n\n85\n101\n8585\n114\n9690\n110\n9350\n\n\n81\n100\n8100\n118\n9558\n100\n8100\n\n\n76\n99\n7524\n120\n9120\n99\n7524\n\n\nSUMS:\n\n95859\n\n95055\n\n95759\n\n\n\n3 Cases:\n\nPerfect positive correlation (hypothetical); column 3\nPerfect negative correlation (hypothetical); column 5\nObserved; column 7\n\nNow we attack the I.Q. and athletic-score problem using the property of numbers just discussed. First multiply the x and y values of the actual observations, and sum them to be 95,759 (Table 23-5). Then write the ten observed I.Q. scores on cards, and assign the cards in random order to the ten athletes, as shown in column 1 in Table 23-6.\nMultiply by the x’s, and sum as in Table 23-7. If the I.Q. scores and athletic scores are positively associated , that is, if high I.Q.s and high athletic scores go together, then the sum of the multiplications for the observed sample will be higher than for most of the random trials. (If high I.Q.s go with low athletic scores, the sum of the multiplications for the observed sample will be lower than most of the random trials.)\nTable 23-6\nRandom Drawing of I.Q. 
Scores and Pairing (Randomly) Against Athletic Scores (20 Trials)\nTrial Number\nAthletic 1 2 3 4 5 6 7 8 9 10\nScore\n\n\n\n97\n114\n109\n110\n118\n107\n114\n107\n120\n100\n114\n\n\n94\n101\n113\n113\n101\n118\n100\n110\n109\n120\n107\n\n\n93\n107\n118\n100\n99\n120\n101\n114\n99\n110\n113\n\n\n90\n113\n101\n118\n114\n101\n113\n100\n118\n99\n99\n\n\n87\n120\n100\n101\n100\n110\n107\n113\n114\n101\n118\n\n\n86\n100\n110\n120\n107\n113\n110\n118\n101\n118\n101\n\n\n86\n110\n107\n99\n109\n100\n120\n120\n113\n114\n120\n\n\n85\n99\n99\n104\n120\n99\n109\n101\n107\n109\n109\n\n\n81\n118\n120\n114\n110\n114\n99\n99\n100\n107\n109\n\n\n76\n109\n114\n109\n113\n109\n118\n109\n110\n113\n110\n\n\nTrial Number\n\n\n\n\n\n\n\n\n\n\n\n\nAthletic Score\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n\n\n97\n109\n118\n101\n109\n107\n100\n99\n113\n99\n110\n\n\n94\n101\n110\n114\n118\n101\n107\n114\n101\n109\n113\n\n\n93\n120\n120\n100\n120\n114\n113\n100\n100\n120\n100\n\n\n90\n110\n118\n109\n110\n99\n109\n107\n109\n110\n99\n\n\n87\n100\n100\n120\n99\n118\n114\n110\n110\n107\n101\n\n\n86\n118\n99\n107\n100\n109\n118\n113\n118\n100\n118\n\n\n86\n99\n101\n99\n101\n100\n99\n101\n107\n114\n120\n\n\n85\n107\n114\n110\n114\n120\n110\n120\n120\n118\n100\n\n\n81\n114\n107\n113\n113\n110\n101\n109\n114\n101\n100\n\n\n76\n113\n109\n118\n107\n113\n120\n118\n99\n118\n107\n\n\n\nTable 23-7\nResults of Sum Products for Above 20 Random Trials\n\n\n\nTrial\nSum of Multiplications\nTrial\nSum of Multiplications\n\n\n1\n95,430\n11\n95,406\n\n\n2\n95,426\n12\n95,622\n\n\n3\n95,446\n13\n95,250\n\n\n4\n95,381\n14\n95,599\n\n\n5\n95,542\n15\n95,323\n\n\n6\n95,362\n16\n95,308\n\n\n7\n95,508\n17\n95,220\n\n\n8\n95,590\n18\n95,443\n\n\n9\n95,379\n19\n95,421\n\n\n10\n95,532\n20\n95,528\n\n\n\nMore specifically, by the steps:\nStep 1. Write the ten I.Q. scores on one set of cards, and the ten athletic scores on another set of cards.\nStep 2. Pair the I.Q. and athletic-score cards at random. Multiply the scores in each pair, and add the results of the ten multiplications.\nStep 3. Subtract the experimental sum in step 2 from the observed sum, 95,759.\nStep 4. Repeat steps 2 and 3 twenty times.\nStep 5. Compute the proportion of trials where the difference is negative, which estimates the probability that an association as strong as the observed would occur by chance.\nThe sums of the multiplications for 20 trials are shown in Table 23-7. No random-trial sum was as high as the observed sum, which suggests that the probability of an association this strong happening by chance is so low as to approach zero. (An empirically-observed probability is never actually zero.)\nThis program can be solved particularly easily with RESAMPLING STATS. The arrays A and B in program “Ability3” list the athletic scores and the I.Q. scores respectively of 10 “actual” students ordered from highest to lowest athletic score. We MULTIPLY the corresponding elements of these arrays and proceed to compare the sum of these multiplications to the sums of experimental multiplications in which the elements are selected randomly.\nFinally, we COUNT the trials in which the sum of the products of the randomly-paired athletic and I.Q. 
scores equals or exceeds the sum of the products in the observed data.\n\n' Program file: \"correlation_causation_03.rss\"\n\nNUMBERS (97 94 93 90 87 86 86 85 81 76) a\n' Record athletic scores, highest to lowest.\nNUMBERS (114 120 107 113 118 101 109 110 100 99) b\n' Record corresponding IQ scores for those students.\nMULTIPLY a b c\n' Multiply the two sets of scores together.\nSUM c d\n' Sum the results — the \"observed value.\"\nREPEAT 1000\n ' Do 1000 experiments.\n SHUFFLE a e\n ' Shuffle the athletic scores so we can pair them against IQ scores.\n MULTIPLY e b f\n ' Multiply the shuffled athletic scores by the I.Q. scores. (Note that we\n ' could shuffle the I.Q. scores too but it would not achieve any greater\n ' randomization.)\n SUM f j\n ' Sum the randomized multiplications.\n SUBTRACT d j k\n ' Subtract the sum from the sum of the \"observed\" multiplication.\n SCORE k z\n ' Keep track of the result in z.\nEND\n' End one trial, go back and repeat until 1000 trials are complete.\nHISTOGRAM z\n' Obtain a histogram of the trial results.\nRandom Sums of Products\nATHLETES & IQ SCORES\n\nobserved sum less random sum\nWe see that obtaining a chance trial result as great as that observed was rare. RESAMPLING STATS will calculate this proportion for us:\n\n' Program file: \"ability3.rss\"\n\nCOUNT z <= 0 k\n' Determine in how many trials the random sum of products was less than\n' the observed sum of products.\nDIVIDE k 1000 kk\n' Convert to a proportion.\nPRINT kk\n' Note: The file \"ability3\" on the Resampling Stats software disk contains\n' this set of commands.\nExample 23-3: Correlation Between Adherence to Medication Regime and Change in Cholesterol\nEfron and Tibshirani (1993, 72) show data on the extents to which 164 men a) took the drug prescribed to them (cholostyramine), and b) showed a decrease in total plasma cholesterol. 
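Before turning to those data, here is one way the sum-of-products shuffle test just carried out for the athletic and I.Q. scores might be written in Python with NumPy. It is only a sketch, with variable names of our own choosing; the scores and the observed sum of 95,759 come from Table 23-5.

import numpy as np

rng = np.random.default_rng()

athletic = np.array([97, 94, 93, 90, 87, 86, 86, 85, 81, 76])
iq = np.array([114, 120, 107, 113, 118, 101, 109, 110, 100, 99])

observed = np.sum(athletic * iq)   # sum of products for the actual pairing: 95,759

n_trials = 10_000
count_as_high = 0
for _ in range(n_trials):
    shuffled_iq = rng.permutation(iq)            # break the observed pairing at random
    if np.sum(athletic * shuffled_iq) >= observed:
        count_as_high += 1

print('Proportion of random pairings at least as high:', count_as_high / n_trials)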
Table 23-8 shows these values (note that a positive value in the “decrease in cholesterol” column denotes a decrease in cholesterol, while a negative value denotes an increase.)\nTable 23-8\n\n\n\nTaken\nTaken\nTaken\n\nTaken\n\n\n0 -5.25\n27\n-1.50 71\n59.50\n95 32.50\n\n\n0 -7.25\n28\n23.50 71\n14.75\n95 70.75\n\n\n0 -6.25\n29\n33.00 72\n63.00\n95 18.25\n\n\n0 11.50\n31\n4.25 72\n0.00\n95 76.00\n\n\n2 21.00\n32\n18.75 73\n42.00\n95 75.75\n\n\n2 -23.00\n32\n8.50 74\n41.25\n95 78.75\n\n\n2 5.75\n33\n3.25 75\n36.25\n95 54.75\n\n\n3 3.25\n33\n27.75 76\n66.50\n95 77.00\n\n\n3 8.75\n34\n30.75 77\n61.75\n96 68.00\n\n\n4 8.75\n34\n-1.50 77\n14.00\n96 73.00\n\n\n4 -10.25\n34\n1.00 78\n36.00\n96 28.75\n\n\n7 -10.50\n34\n7.75 78\n39.50\n96 26.75\n\n\n8 19.75\n35\n-15.75 81\n1.00\n96 56.00\n\n\n8 -0.50\n36\n33.50 82\n53.50\n96 47.50\n\n\n8 29.25\n36\n36.25 84\n46.50\n96 30.25\n\n\n8 36.25\n37\n5.50 85\n51.00\n96 21.00\n\n\n9 10.75\n38\n25.50 85\n39.00\n97 79.00\n\n\n9 19.50\n41\n20.25 87\n-0.25\n97 69.00\n\n\n9 17.25\n43\n33.25 87\n1.00\n97 80.00\n\n\n10 3.50\n45\n56.75 87\n46.75\n97 86.00\n\n\n10 11.25\n45\n4.25 87\n11.50\n98 54.75\n\n\n11 -13.00\n47\n32.50 87\n2.75\n98 26.75\n\n\n12 24.00\n50\n54.50 88\n48.75\n98 80.00\n\n\n13 2.50\n50\n-4.25 89\n56.75\n98 42.25\n\n\n15 3.00\n51\n42.75 90\n29.25\n98 6.00\n\n\n15 5.50\n51\n62.75 90\n72.50\n98 104.75\n\n\n16 21.25\n52\n64.25 91\n41.75\n98 94.25\n\n\n16 29.75\n53\n30.25 92\n48.50\n98 41.25\n\n\n17 7.50\n54\n14.75 92\n61.25\n98 40.25\n\n\n18 -16.50\n54\n47.25 92\n29.50\n99 51.50\n\n\n20 4.50\n56\n18.00 92\n59.75\n99 82.75\n\n\n20 39.00\n57\n13.75 93\n71.00\n99 85.00\n\n\n21 -5.75\n57\n48.75 93\n37.75\n99 70.00\n\n\n21 -21.00\n58\n43.00 93\n41.00\n100 92.00\n\n\n21 0.25\n60\n27.75 93\n9.75\n100 73.75\n\n\n22 -10.25\n62\n44.50 93\n53.75\n100 54.00\n\n\n24 -0.50\n64\n22.50 94\n62.50\n100 69.50\n\n\n25 -19.00\n64\n-14.50 94\n39.00\n100 101.50\n\n\n25 15.75\n64\n-20.75 94\n3.25\n100 68.00\n\n\n26 6.00\n67\n46.25 94\n60.00\n100 44.75\n\n\n27 10.50\n68\n39.50 95\n113.25\n100 86.75\n\n\n\n% Prescribed Dosage\nDecrease in Cholesterol\n% Prescribed Dosage\nDecrease in Cholesterol\n% Prescribed Dosage\nDecrease in Cholesterol\n% Prescribed Dosage\nDecrease in Cholesterol\nThe aim is to assess the effect of the compliance on the improvement. There are two related issues:\n\nWhat form of regression should be fitted to these data, which we address later, and\nIs there reason to believe that the relationship is meaningful? That is, we wish to ascertain if there is any meaningful correlation between the variables — because if there is no relationship between the variables, there is no basis for regressing one on the other. Sometimes people jump ahead in the latter question to first run the regression and then ask whether the regression slope coefficient(s) is (are) different than zero, but this usually is not sound practice. The sensible way to proceed is first to graph the data to see whether there is visible indication of a relationship.\n\nEfron and Tibshirani do this, and they find sufficient intuitive basis in the graph to continue the analysis. 
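In Python, such a graph takes only a few lines with Matplotlib. The sketch below plots just the first ten pairs from Table 23-8 to keep the listing short; in practice one would plot all 164 pairs.

import matplotlib.pyplot as plt

# First ten pairs from Table 23-8: percent of prescribed dosage taken,
# and decrease in total plasma cholesterol.
dosage_pct = [0, 0, 0, 0, 2, 2, 2, 3, 3, 4]
chol_decrease = [-5.25, -7.25, -6.25, 11.50, 21.00, -23.00, 5.75, 3.25, 8.75, 8.75]

plt.scatter(dosage_pct, chol_decrease)
plt.xlabel('% of prescribed dosage taken')
plt.ylabel('Decrease in cholesterol')
plt.show()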
The next step is to investigate whether a measure of relationship is statistically significant; this we do as follows (program “inp10”):\n\nMultiply the observed values for each of the 164 participants on the independent x variable (cholostyramine — percent of prescribed dosage actually taken) and the dependent y variable (cholesterol), and sum the results — it’s 439,140.\nRandomly shuffle the dependent variable y values among the participants. The sampling is being done without replacement, though an equally good argument could be made for sampling with replacement; the results do not differ meaningfully, however, because the sample size is so large.\nThen multiply these x and y hypothetical values for each of the 164 participants, sum the results and record.\nRepeat steps 2 and 3 perhaps 1000 times.\nDetermine how often the shuffled sum-of-products exceeds the observed value (439,140).\n\nThe following program in RESAMPLING STATS provides the solution:\n\n' Program file: \"correlation_causation_05.rss\"\n\nREAD FILE “inp10” x y\n' Data\nMULTIPLY x y xy\n' Step 1 above\nSUM xy xysum\n' Note: xysum = 439,140 (4.3914e+05)\nREPEAT 1000\n ' Do 1000 simulations (step 4 above)\n SHUFFLE x xrandom\n ' Step 2 above\n MULTIPLY xrandom y xy\n ' Step 3 above\n SUM xy newsum\n ' Step 3 above\n SCORE newsum scrboard\n ' Step 3 above\nEND\n' Step 4 above\nCOUNT scorboard >=439140 prob\n' Step 5 above\nPRINT xysum prob\n' Result: prob = 0. Interpretation: 1000 simulated random shufflings never\n' produced a sum-of-products as high as the observed value. Hence we rule\n' out random chance as an explanation for the observed correlation.\nExample 23-3: Is There A Relationship Between Drinking Beer And Being In Favor of Selling Beer? (Testing for a Relationship Between Counted-Data Variables.) (Program “Beerpoll”)\nThe data for athletic ability and I.Q. were measured. Therefore, we could use them in their original “cardinal” form, or we could split them up into “high” and “low” groups. Often, however, the individual observations are recorded only as “yes” or “no,” which makes it more difficult to ascertain the existence of a relationship. Consider the poll responses in Table 23-8 to two public-opinion survey questions: “Do you drink beer?” and “Are you in favor of local option on the sale of beer?”.2\n\nTable 23-9\nResults of Observed Sample For Problem “Beerpoll”\n\n\n\nDo you favor local option on the sale of beer?\nDo you drink beer?\n\n\n\n\n\nYes\nNo\nTotal\n\n\nFavor\n45\n20\n65\n\n\nDon’t Favor\n7\n6\n13\n\n\nTotal\n52\n26\n78\n\n\n\nHere is the statistical question: Is a person’s opinion on “local option” related to whether or not he drinks beer? Our resampling solution begins by noting that there are seventy-eight respondents, sixty-five of whom approve local option and thirteen of whom do not. Therefore write “approve” on sixty-five index cards and “not approve” on thirteen index cards. Now take another set of seventy-eight index cards, preferably of a different color, and write “yes” on fifty-two of them and “no” on twenty-six of them, corresponding to the numbers of people who do and do not drink beer in the sample. Now lay them down in random pairs , one from each pile.\nIf there is a high association between the variables, then real life observations will bunch up in the two diagonal cells in the upper left and lower right in Table 23-8. (Ignore the “total” data for now.) Therefore, subtract one sum of two diagonal cells from the other sum for the observed data: (45 + 6) - (20 + 7) = 24. 
Then compare this difference to the comparable differences found in random trials. The proportion of times that the simulated-trial difference exceeds the observed difference is the probability that the observed difference of +24 might occur by chance, even if there is no relationship between the two variables. (Notice that, in this case, we are working on the assumption that beer drinking is positively associated with approval of local option and not the inverse. We are interested only in differences that are equal to or exceed +24 when the northeast-southwest diagonal is subtracted from the northwest-southeast diagonal.)\nWe can carry out a resampling test with this procedure:\nStep 1. Write “approve” on 65 and “disapprove” on 13 red index cards, respectively; write “Drink” and “Don’t drink” on 52 and 26 white cards, respectively.\nStep 2. Pair the two sets of cards randomly. Count the numbers of the four possible pairs: (1) “approve-drink,” (2) “disapprove-don’t drink,” (3) “disapprove-drink,” and (4) “approve-don’t drink.” Record the number of these combinations, as in Table 23-10, where columns 1-4 correspond to the four cells in Table 23-9.\nStep 3. Add (column 1 plus column 4) and (column 2 plus column 3), and subtract the result in the second parenthesis from the result in the first parenthesis. If the difference is equal to or greater than 24, record “yes,” otherwise “no.”\nStep 4. Repeat steps 2 and 3 perhaps a hundred times.\nStep 5. Calculate the proportion “yes,” which estimates the probability that an association this great or greater would be observed by chance.\nTable 23-10\nResults of One Random Trial of the Problem “Beerpoll”\n\n\n\n\n\n\n\n\n\n\n\n\n(1)\n(2)\n(3)\n(4)\n(5)\n\n\nTrial\nApprove Yes\nApprove No\nDisappr ove Yes\nDisappr ove No\n(Col 1 + Col 4) -\n(Col 2 + Col 3)\n\n\n\n1 43 22 9 4 47-31=16\nA series of ten trials in this case (see Table 23-9) indicates that the observed difference is very often exceeded, which suggests that there is no relationship between beer drinking and opinion.\nThe RESAMPLING STATS program “Beerpoll” does this repetitively. From the “actual” sample results we know that 52 respondents drink beer and 26 do not. We create the vector “drink” with 52 “1”s for those who drink beer, and 26 “2”s for those who do not. We also create the vector “sale” with 65 “1”s (approve) and 13 “2”s (disapprove). In the actual sample, 51 of the 78 respondents had “consistent” responses to the two questions — that is, people who both favor the sale of beer and drink beer, or who are against the sale of beer and do not drink beer. We want to randomly pair the responses to the two questions to compare against that observed result to test the relationship.\nTo accomplish this aim, we REPEAT the following procedure 1000 times. We SHUFFLE drink to drink$ so that the responses are randomly ordered. Now when we SUBTRACT the corresponding elements of the two arrays, a “0” will appear in each element of the new array c for which there was consistency in the response of the two questions. We therefore COUNT the times that c equals “0” and place this result in d, and the number of times c does not equal 0, and place this result in e. Find the difference (d minus e), and SCORE this to z.\nSCORE Z stores for each trial the number of consistent responses minus inconsistent responses. 
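The same shuffling logic can also be written in Python with NumPy. This is a sketch with our own variable names; the counts (52 and 26 drinkers and non-drinkers, 65 and 13 in favor and opposed) and the observed difference of 24 come from Table 23-9 and the discussion above.

import numpy as np

rng = np.random.default_rng()

drink = np.repeat([1, 0], [52, 26])   # 1 = drinks beer, 0 = does not
sale = np.repeat([1, 0], [65, 13])    # 1 = favors local option, 0 = does not

observed_diff = 24                    # consistent minus inconsistent responses in the sample

n_trials = 10_000
diffs = np.zeros(n_trials)
for i in range(n_trials):
    shuffled_drink = rng.permutation(drink)       # pair the two answers at random
    consistent = np.sum(shuffled_drink == sale)   # 1-with-1 or 0-with-0 pairs
    diffs[i] = consistent - (len(sale) - consistent)

print('Proportion of trials with difference >= 24:', np.mean(diffs >= observed_diff))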
To determine whether the results of the actual sample indicate a relationship between the responses to the two questions, we check how often the random trials had a difference (between consistent and inconsistent responses) as great as 24, the value in the observed sample.

' Program file: "beerpoll.rss"

URN 52#1 26#0 drink
' Constitute the set of 52 beer drinkers, represented by 52 "1"s, and the
' set of 26 non-drinkers, represented by 26 "0"s.
URN 65#1 13#0 sale
' The same set of individuals classified by whether they favor ("1" — 65
' people) or don't favor ("0" — 13 people) the sale of beer.

' Note: sale is now the vector {1 1 1 1 1 1 ... 0 0 0 0 0 ...} where 1 =
' people in favor, 0 = people opposed.
REPEAT 1000
 ' Repeat the experiment 1000 times.
 SHUFFLE drink drink$
 ' Shuffle the beer drinkers/non-drinkers, call the shuffled set drink$.
 ' Note: drink$ is now a vector like {1 1 1 0 1 0 0 1 0 1 1 0 0 ...}
 ' where 1 = drinker, 0 = non-drinker.
 SUBTRACT drink$ sale c
 ' Subtract the favor/don't favor set from the drink/don't drink set.
 ' Consistent responses are someone who drinks favoring the sale of beer (a
 ' "1" and a "1") or someone who doesn't drink opposing the sale of beer.
 ' When subtracted, consistent responses (and only consistent responses)
 ' produce a "0."
 COUNT c =0 d
 ' Count the number of consistent responses (those equal to "0").
 COUNT c <> 0 e
 ' Count the "inconsistent" responses (those not equal to "0").
 SUBTRACT d e f
 ' Find the difference.
 SCORE f z
 ' Keep track of the results of each trial.
END
' End one trial, go back and repeat until all 1000 trials are complete.
HISTOGRAM z
' Produce a histogram of the trial results.

' Note: The file "beerpoll" on the Resampling Stats software disk contains
' this set of commands.
Are Drinkers More Likely to Favor Local Option & Vice Versa

# consistent responses thru chance draw
The actual results showed a difference of 24. In the histogram we see that a difference that large or larger happened just by chance pairing — without any relationship between the two variables — 23% of the time. Hence, we conclude that there is little evidence of a relationship between the two variables.
Though the test just described may generally be appropriate for data of this sort, it may well not be appropriate in some particular case. Let’s consider a set of data where even if the test showed that an association existed, we would not believe the test result to be meaningful.
Suppose the survey results had been as presented in Table 23-11. We see that non-beer drinkers have a higher rate of approval of allowing beer drinking, which does not accord with experience or reason. Hence, without additional explanation we would not believe that a meaningful relationship exists among these variables even if the test showed one to exist. (Still another reason to doubt that a relationship exists is that the absolute differences are too small — there is only a 6% difference in disapproval between drink and don’t drink groups — to mean anything to anyone.
On both grounds, then, it makes sense simply to act as if there were no difference between the two groups and to run no test .).\nTable 23-11\nBeer Poll In Which Results Are Not In Accord With Expectation Or Reason\n\n\n\n\n% Approve\n% Disapprove\nTotal\n\n\nBeer Drinkers\n71%\n29%\n100%\n\n\nNon-Beer Drinkers\n77%\n23%\n100%\n\n\n\nThe lesson to be learned from this is that one should inspect the data carefully before applying a statistical test, and only test for “significance” if the apparent relationships accord with theory, general understanding, and common sense.\nExample 23-4: Do Athletes Really Have “Slumps”? (Are Successive Events in a Series Independent, or is There a Relationship Between Them?)\nThe important concept of independent events was introduced earlier. Various scientific and statistical decisions depend upon whether or not a series of events is independent. But how does one know whether or not the events are independent? Let us consider a baseball example.\nBaseball players and their coaches believe that on some days and during some weeks a player will bat better than on other days and during other weeks. And team managers and coaches act on the belief that there are periods in which players do poorly — slumps — by temporarily replacing the player with another after a period of poor performance. The underlying belief is that a series of failures indicates a temporary (or permanent) change in the player’s capacity to play well, and it therefore makes sense to replace him until the evil spirit passes on, either of its own accord or by some change in the player’s style.\nBut even if his hits come randomly, a player will have runs of good luck and runs of bad luck just by chance — just as does a card player. The problem, then, is to determine whether (a) the runs of good and bad batting are merely runs of chance, and the probability of success for each event remains the same throughout the series of events — which would imply that the batter’s ability is the same at all times, and coaches should not take recent performance heavily into account when deciding which players should play; or (b) whether a batter really does have a tendency to do better at some times than at others, which would imply that there is some relationship between the occurrence of success in one trial event and the probability of success in the next trial event, and therefore that it is reasonable to replace players from time to time.\nLet’s analyze the batting of a player we shall call “Slug.” Here are the results of Slug’s first 100 times at bat during the 1987 season (“H” = hit, “X” = out):\nX X X X X X H X X H X H H X X X X X X X X H X X X X X H X X X X H H X X X X X H X X H X H X X X H H X X X X X H X H X X X X H H X H H X X X X X X X X X X H X X X H X X H X X H X H X X H X X X H X X X.\nNow, do Slug’s hits tend to come in bunches? That would be the case if he really did have a tendency to do better at some times than at others. Therefore, let us compare Slug’s results with those of a deck of cards or a set of random numbers that we know has no tendency to do better at some times than at others.\nDuring this period of 100 times at bat, Slug has averaged one hit in every four times at bat — a .250 batting average. This average is the same as the chance of one card suit’s coming up. We designate hearts as “hits” and prepare a deck of 100 cards, twenty-five “H”s (hearts, or “hit”) and seventy-five “X”s (other suit, or “out”). 
Here is the sequence in which the 100 randomly-shuffled cards fell:\nX X H X X X X H H X X X H H H X X X X X H X X X H X X H X X X X H X H H X X X X X X X X X H X X X X X X H H X X X X X H H H X X X X X X H X H X H X X H X H X X X X X X X X X H X X X X X X X H H H X X.\nNow we can compare whether or not Slug’s hits are bunched up more than they would be by random chance; we can do so by counting the clusters (also called “runs”) of consecutive hits and outs for Slug and for the cards. Slug had forty-three clusters, which is more than the thirty-seven clusters in the cards; it therefore does not seem that there is a tendency for Slug’s hits to cluster together. (A larger number of clusters indicates a lower tendency to cluster.)\nOf course, the single trial of 100 cards shown above might have an unusually high or low number of clusters. To be safer, lay out, (say,) ten trials of 100 cards each, and compare Slug’s number of clusters with the various trials. The proportion of trials with more clusters than Slug’s indicates whether or not Slug’s hits have a tendency to bunch up. (But caution: This proportion cannot be interpreted directly as a probability.)\nNow the steps:\nStep 1. Constitute a bucket with 3 slips of paper that say “out” and one that says “hit.” Or “01-25” = hits (H), “26-00” = outs (X), Slug’s long-run average.\nStep 2. Sample 100 slips of paper, with replacement, record “hit” or “out” each time, or write a series of “H’s” or “X’s” corresponding to 100 numbers, each selected randomly between 1 and 100.\nStep 3. Count the number of “clusters,” that is, the number of “runs” of the same event, “H”s or “X”s.\nStep 4. Compare the outcome in step 3 with Slug’s outcome, 43 clusters. If 43 or fewer; write “yes,” otherwise “no.”\nStep 5. Repeat steps 2-4 a hundred times.\nStep 6. Compute the proportion “yes.” This estimates the probability that Slug’s record is not characterized by more “slumps” than would be caused by chance. A very low proportion of “yeses” indicates longer (and hence fewer) “streaks” and “slumps” than would result by chance.\nIn RESAMPLING STATS, we can do this experiment 1000 times.\n\n' Program file: \"sluggo.rss\"\n\nREPEAT 1000\n URN 3#0 1#1 a\n SAMPLE 100 a b\n ' Sample 100 \"at-bats\" from a\n RUNS b >=1 c\n ' How many runs (of any length \\>=1) are there in the 100 at-bats?\n SCORE c z\nEND\nHISTOGRAM z\n' Note: The file \"sluggo\" on the Resampling Stats software disk contains\n' this set of commands.\nExamining the histogram, we see that 43 runs is not at all an unusual occurrence:\n“Runs” in 100 At-Bats\n\n# “runs” of same outcome\nThe manager wants to look at this matter in a somewhat different fashion, however. He insists that the existence of slumps is proven by the fact that the player sometimes does not get a hit for an abnormally long period of time. One way of testing whether or not the coach is right is by comparing an average player’s longest slump in a 100-at-bat season with the longest run of outs in the first card trial. Assume that Slug is a player picked at random . Then compare Slug’s longest slump — say, 10 outs in a row — with the longest cluster of a single simulated 100-at-bat trial with the cards, 9 outs. This result suggests that Slug’s apparent slump might well have resulted by chance.\nThe estimate can be made more accurate by taking the average longest slump (cluster of outs) in ten simulated 400-at-bat trials. But notice that we do not compare Slug’s slump against the longest slump found in ten such simulated trials. 
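For readers following along in Python, here is one way to count the number of runs, and the length of the longest run of outs, in a simulated 100-at-bat season. The helper functions are our own, and the .250 chance of a hit is Slug’s long-run average.

import numpy as np

rng = np.random.default_rng()

def count_runs(outcomes):
    # A new run starts at the first at-bat and at every change of outcome.
    return 1 + int(np.sum(outcomes[1:] != outcomes[:-1]))

def longest_out_run(outcomes):
    # Length of the longest unbroken string of outs (0 = out, 1 = hit).
    longest = current = 0
    for result in outcomes:
        current = current + 1 if result == 0 else 0
        longest = max(longest, current)
    return longest

season = rng.choice([1, 0], size=100, p=[0.25, 0.75])  # 1 = hit, 0 = out
print('Number of runs:', count_runs(season))
print('Longest slump (outs in a row):', longest_out_run(season))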
We want to know the longest cluster of outs that would be found under average conditions, and the hand with the longest slump is not average or typical. Determining whether to compare Slug’s slump with the average longest slump or with the longest of the ten longest slumps is a decision of crucial importance. There are no mathematical or logical rules to help you. What is required is hard, clear thinking. Experience can help you think clearly, of course, but these decisions are not easy or obvious even to the most experienced statisticians.
The coach may then refer to the protracted slump of one of the twenty-five players on his team to prove that slumps really occur. But, of twenty-five random 100-at-bat trials, one will contain a slump longer than any of the other twenty-four, and that slump will be considerably longer than average. A fair comparison, then, would be between the longest slump of his longest-slumping player, and the longest run of outs found among twenty-five random trials. In fact, the longest run among twenty-five hands of 100 cards was fifteen outs in a row. And, if we had set some of the hands for lower (and higher) batting averages than .250, the longest slump in the cards would have been even longer.
Research by Roberts and his students at the University of Chicago shows that in fact slumps do not exist, as I conjectured in the first publication of this material in 1969. (Of course, a batter feels as if he has a better chance of getting a hit at some times than at other times. After a series of successful at-bats, sandlot players and professionals alike feel confident — just as gamblers often feel that they’re on a “streak.” But there seems to be no connection between a player’s performance and whether he feels hot or cold, astonishing as that may be.)
Averages over longer periods may vary systematically, as Ty Cobb’s annual batting average varied non-randomly from season to season, Roberts found. But short-run analyses of day-to-day and week-to-week individual and team performances in most sports have shown results similar to the outcomes that a lottery-type random-number machine would produce.
Remember, too, the study by Gilovich, Vallone, and Tversky of basketball mentioned in Chapter 14. To repeat, their analyses “provided no evidence for a positive correlation between the outcomes of successive shots.” That is, knowing whether a shooter has or has not scored on the previous shot — or in any previous sequence of shots — is useless for predicting whether he will score again.
The species Homo sapiens apparently has a powerful propensity to believe that one can find a pattern even when there is no pattern to be found. Two decades ago I cooked up several series of random numbers that looked like weekly prices of publicly-traded stocks. Players in the experiment were told to buy and sell stocks as they chose. Then I repeatedly gave them “another week’s prices,” and allowed them to buy and sell again. The players did all kinds of fancy calculating, using a wild variety of assumptions — although there was no possible way that the figuring could help them.
When I stopped the game before completing the 10 buy-and-sell sessions they expected, subjects would ask that the game go on. Then I would tell them that there was no basis to believe that there were patterns in the data, because the “prices” were just randomly-generated numbers. Winning or losing therefore did not depend upon the subjects’ skill.
Nevertheless, they demanded that the game not stop until the 10 “weeks” had been played, so they could find out whether they “won” or “lost.”\nThis study of batting illustrates how one can test for independence among various trials. The trials are independent if each observation is randomly chosen with replacement from the universe, in which case there is no reason to believe that one observation will be related to the observations directly before and after; as it is said, “the coin has no memory.”\nThe year-to-year level of Lake Michigan is an example in which observations are not independent. If Lake Michigan is very high in one year, it is likely to be higher than average the following year because some of the high level carries over from one year into the next.3 We could test this hypothesis by writing down whether the level in each year from, say, 1860 to 1975 was higher or lower than the median level for those years. We would then count the number of runs of “higher” and “lower” and compare the number of runs of “black” and “red” with a deck of that many cards; we would find fewer runs in the lake level than in an average hand of 116 (1976-1860) cards, though this test is hardly necessary. (But are the changes in Lake Michigan’s level independent from year to year? If the level went up last year, is there a better than 50-50 chance that the level will also go up this year? The answer to this question is not so obvious. One could compare the numbers of runs of ups and downs against an average hand of cards, just as with the hits and outs in baseball.)\nExercise for students: How could one check whether the successive numbers in a random-number table are independent?" + }, + { + "objectID": "correlation_causation.html#exercises", + "href": "correlation_causation.html#exercises", + "title": "29  Correlation and Causation", + "section": "29.5 Exercises", + "text": "29.5 Exercises\nSolutions for problems may be found in the section titled, “Exercise Solutions” at the back of this book.\nExercise 23-1\nTable 23-12 shows voter participation rates in the various states in the 1844 presidential election. Should we conclude that there was a negative relationship between the participation rate (a) and the vote spread (b) between the parties in the election? (Adapted from (Noreen 1989, 20, Table 2-4):\nTable 23-12\nVoter Participation In The 1844 Presidential Election\n\n\n\nState\nParticipation (a)\nSpread (b)\n\n\nMaine\n67.5\n13\n\n\nNew Hampshire\n65.6\n19\n\n\nVermont\n65.7\n18\n\n\nMassachusetts\n59.3\n12\n\n\nRhode Island\n39.8\n20\n\n\nConnecticut\n76.1\n5\n\n\nNew York\n73.6\n1\n\n\nNew Jersey\n81.6\n1\n\n\nPennsylvania\n75.5\n2\n\n\nDelaware\n85.0\n3\n\n\nMaryland\n80.3\n5\n\n\nVirginia\n54.5\n6\n\n\nNorth Carolina\n79.1\n5\n\n\nGeorgia\n94.0\n4\n\n\nKentucky\n80.3\n8\n\n\nTennessee\n89.6\n1\n\n\nLouisiana\n44.7\n3\n\n\nAlabama\n82.7\n8\n\n\nMississippi\n89.7\n13\n\n\nOhio\n83.6\n2\n\n\nIndiana\n84.9\n2\n\n\nIllinois\n76.3\n12\n\n\nMissouri\n74.7\n17\n\n\nArkansas\n68.8\n26\n\n\nMichigan\n79.3\n6\n\n\nNational Average\n74.9\n9\n\n\n\nThe observed correlation coefficient between voter participation and spread is -.37398. Is this more negative that what might occur by chance, if no correlation exists?\nExercise 23-2\nWe would like to know whether, among major-league baseball players, home runs (per 500 at-bats) and strikeouts (per 500 at-bat’s) are correlated. We first use the procedure as used above for I.Q. and athletic ability — multiplying the elements within each pair. 
(We will later use a more “sophisticated” measure, the correlation coefficient.)\nThe data for 18 randomly-selected players in the 1989 season are as follows, as they would appear in the first lines of the program.\n\n' Program file: \"correlation_causation_08.rss\"\n\nNUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32) homeruns\nNUMBERS (135 153 120 161 138 175 126 200 205 147 165 124 169 156 36 98 82 131) strikeout\n' Exercise: Complete this program.\nExercise 23-3\nIn the previous example relating strikeouts and home runs, we used the procedure of multiplying the elements within each pair. Now we use a more “sophisticated” measure, the correlation coefficient, which is simply a standardized form of the multiplicands, but sufficiently well known that we calculate it with a pre-set command.\nExercise: Write a program that uses the correlation coefficient to test the significance of the association between home runs and strikeouts.\nExercise 23-4\nAll the other things equal, an increase in a country’s money supply is inflationary and should have a negative impact on the exchange rate for the country’s currency. The data in the following table were computed using data from tables in the 1983/1984 Statistical Yearbook of the United Nations :\nTable 23-13\nMoney Supply and Exchange Rate Changes\n% Change % Change % Change % Change\nExch. Rate Money Supply Exch. Rate Money Supply\n\n\n\nAustralia\n0.089\n0.035\nBelgium\n0.134\n0.003\n\n\nBotswana\n0.351\n0.085\nBurma\n0.064\n0.155\n\n\nBurundi\n0.064\n0.064\nCanada\n0.062\n0.209\n\n\nChile\n0.465\n0.126\nChina\n0.411\n0.555\n\n\nCosta Rica\n0.100\n0.100\nCyprus\n0.158\n0.044\n\n\nDenmark\n0.140\n0.351\nEcuador\n0.242\n0.356\n\n\nFiji\n0.093\n0.000\nFinland\n0.124\n0.164\n\n\nFrance\n0.149\n0.090\nGermany\n0.156\n0.061\n\n\nGreece\n0.302\n0.202\nHungary\n0.133\n0.049\n\n\nIndia\n0.187\n0.184\nIndonesia\n0.080\n0.132\n\n\nItaly\n0.167\n0.124\nJamaica\n0.504\n0.237\n\n\nJapan\n0.081\n0.069\nJordan\n0.092\n0.010\n\n\nKenya\n0.144\n0.141\nKorea\n0.040\n0.006\n\n\nKuwait\n0.038\n-0.180\nLebanon\n0.619\n0.065\n\n\nMadagascar\n0.337\n0.244\nMalawi\n0.205\n0.203\n\n\nMalaysia\n0.037\n-0.006\nMalta\n0.003\n0.003\n\n\nMauritania\n0.180\n0.192\nMauritius\n0.226\n0.136\n\n\nMexico\n0.338\n0.599\nMorocco\n0.076\n0.076\n\n\nNetherlands\n0.158\n0.078\nNew Zealand\n0.370\n0.098\n\n\nNigeria\n0.079\n0.082\nNorway\n0.177\n0.242\n\n\nPapua\n0.075\n0.209\nPhilippines\n0.411\n0.035\n\n\nPortugal\n0.288\n0.166\nRomania\n-0.029\n0.039\n\n\nRwanda\n0.059\n0.083\nSamoa\n0.348\n0.118\n\n\nSaudi Arabia\n0.023\n0.023\nSeychelles\n0.063\n0.031\n\n\nSingapore\n0.024\n0.030\nSolomon Is\n0.101\n0.526\n\n\nSomalia\n0.481\n0.238\nSouth Africa\n0.624\n0.412\n\n\nSpain\n0.107\n0.086\nSri Lanka\n0.051\n0.141\n\n\nSwitzerland\n0.186\n0.186\nTunisia\n0.193\n0.068\n\n\nTurkey\n0.573\n0.181\nUK\n0.255\n0.154\n\n\nUSA\n0.000\n0.156\nVanatuva\n0.008\n0.331\n\n\nYemen\n0.253\n0.247\nYugoslavia\n0.685\n0.432\n\n\nZaire\n0.343\n0.244\nZambia\n0.457\n0.094\n\n\nZimbabwe\n0.359\n0.164\n\n\n\n\n\n\nPercentage changes in exchange rates and money supply between 1983 and 1984 for various countries.\nAre changes in the exchange rates and in money supplies related to each other? That is, are they correlated?\n\nExercise: Should the algorithm of non-computer resampling steps be similar to the algorithm for I.Q. and athletic ability shown in the text? 
One can also work with the correlation coefficient rather then the sum-of-products method, and expect to get the same result.\n\nWrite a series of non-computer resampling steps to solve this problem.\nWrite a computer program to implement those steps.\n\n\n\n\n\nDixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to Statistical Analysis.”\n\n\nEfron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to the Bootstrap.” In Monographs on Statistics and Applied Probability, edited by David R Cox, David V Hinkley, Nancy Reid, Donald B Rubin, and Bernard W Silverman. Vol. 57. New York: Chapman & Hall.\n\n\nNoreen, Eric W. 1989. Computer-Intensive Methods for Testing Hypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research Methods in Social Science. 3rd ed. New York: Random House.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New Approach. New York: The Free Press." + }, + { + "objectID": "how_big_sample.html#issues-in-determining-sample-size", + "href": "how_big_sample.html#issues-in-determining-sample-size", + "title": "30  How Large a Sample?", + "section": "30.1 Issues in determining sample size", + "text": "30.1 Issues in determining sample size\nSometime in the course of almost every study — preferably early in the planning stage — the researcher must decide how large a sample to take. Deciding the size of sample to take is likely to puzzle and distress you at the beginning of your research career. You have to decide somehow, but there are no simple, obvious guides for the decision.\nFor example, one of the first studies I worked on was a study of library economics (Fussler and Simon 1961), which required taking a sample of the books from the library’s collections. Sampling was expensive, and we wanted to take a correctly sized sample. But how large should the sample be? The longer we searched the literature, and the more people we asked, the more frustrated we got because there just did not seem to be a clear-cut answer. Eventually we found out that, even though there are some fairly rational ways of fixing the sample size, most sample sizes in most studies are fixed simply (and irrationally) by the amount of money that is available or by the sample size that similar research has used in the past.\nThe rational way to choose a sample size is by weighing the benefits you can expect in information against the cost of increasing the sample size. In principle you should continue to increase the sample size until the benefit and cost of an additional sampled unit are equal.1\nThe benefit of additional information is not easy to estimate even in applied research, and it is extraordinarily difficult to estimate in basic research. Therefore, it has been the practice of researchers to set up target goals of the degree of accuracy they wish to achieve, or to consider various degrees of accuracy that might be achieved with various sample sizes, and then to balance the degree of accuracy with the cost of achieving that accuracy. 
The bulk of this chapter is devoted to learning how the sample size is related to accuracy in simple situations.\nIn complex situations, however, and even in simple situations for beginners, you are likely to feel frustrated by the difficulties of relating accuracy to sample size, in which case you cry out to a supervisor, “Don’t give me complicated methods, just give me a rough number based on your greatest experience.” My inclination is to reply to you, “Sometimes life is hard and there is no shortcut.” On the other hand, perhaps you can get more information than misinformation out of knowing sample sizes that have been used in other studies. Table 24-1 shows the middle (modal), 25th percentile, and 75th percentile scores for — please keep this in mind — National Opinion Surveys in the top panel. The bottom panel shows how subgroup analyses affect sample size.\nPretest sample sizes are smaller, of course, perhaps 25-100 observations. Samples in research for Master’s and Ph.D. theses are likely to be closer to a pretest than to national samples.\nTable 24-1\nMost Common Sample Sizes Used for National and Regional Studies By Subject Matter\nSubject Matter National Regional\n\n\n\nSubject Matter\nMode\nQ3\nQ1\nMode\nQ3\nQ1\n\n\nFinancial\n1000+\n—\n—\n100 40\n0 50\n\n\n\nMedical\n1000+\n1000+\n500\n1000+ 10\n00+ 25\n0\n\n\nOther Behavior\n1000+\n—\n—\n700 10\n00 30\n0\n\n\nAttitudes\n1000+\n1000+\n500\n700 10\n00 40\n0\n\n\nLaboratory Experiments\n—\n—\n—\n100 20\n0 50\n\n\n\n\nTypical Sample Sizes for Studies of Human and Institutional Populations\nPeople or Households Institutions\n\n\n\n\nPeople or house\nholds\nInstitutions\n\n\n\nSubgroup Analyses\nNational\nSpecial\nNational\nSpecial\n\n\nNone or few\n1000-1500\n200-500\n200-500\n50-200\n\n\nAverage\n1500-2500\n500-1000\n500-1000\n200-500\n\n\nMany\n2500+\n1000+\n1000+\n500+\n\n\n\nSOURCE: From Applied Sampling, by Seymour Sudman (1976, 86 — 87) copyright Academic Press, reprinted by permission.\nOnce again, the sample size ought to depend on the proportions of the sample that have the characteristics you are interested in, the extent to which you want to learn about subgroups as well as the universe as a whole, and of course the purpose of your study, the value of the information, and the cost. Also, keep in mind that the added information that you obtain from an additional sample observation tends to be smaller as the sample size gets larger. You must quadruple the sample to halve the error.\nNow let us consider some specific cases. The first examples taken up here are from the descriptive type of study, and the latter deal with sample sizes in relationship research." + }, + { + "objectID": "how_big_sample.html#some-practical-examples", + "href": "how_big_sample.html#some-practical-examples", + "title": "30  How Large a Sample?", + "section": "30.2 Some practical examples", + "text": "30.2 Some practical examples\nExample 24-1\nWhat proportion of the homes in Countryville are tuned into television station WCNT’s ten o’clock news program? That is the question your telephone survey aims to answer, and you want to know how many randomly selected homes you must telephone to obtain a sufficiently large sample.\nBegin by guessing the likeliest answer, say 30 percent in this case. Do not worry if you are off by 5 per cent or even 10 per cent; and you will probably not be further off than that. Select a first-approximation sample size of perhaps 400; this number is selected from my general experience, but it is just a starting point. 
Then proceed through the first 400 numbers in the random-number table, marking down a yes for numbers 1-3 and no for numbers 4-10 (because 3/10 was your estimate of the proportion listening). Then add the number of yes and no . Carry out perhaps ten sets of such trials, the results of which are in Table 24-2.\nTable 24-2\n% DIFFERENCE FROM\nTrial Number “Yes” Number “No” Expected Mean of 30%\n\n\n\n\n(120 “Yes”)\n\n\n\n\n1\n115\n285\n1.25\n\n\n2\n119\n281\n0.25\n\n\n3\n116\n284\n1.00\n\n\n4\n114\n286\n1.50\n\n\n5\n107\n293\n3.25\n\n\n6\n116\n284\n1.00\n\n\n7\n132\n268\n3.00\n\n\n8\n123\n277\n0.75\n\n\n9\n121\n279\n0.25\n\n\n10\n114\n286\n1.50\n\n\nMean\n\n\n1.37\n\n\n\nBased on these ten trials, you can estimate that if you take a sample of 400 and if the “real” viewing level is 30 percent, your average percentage error will be 1.375 percent on either side of 30 percent. That is, with a sample of 400, half the time your error will be greater than 1.375 percent if 3/10 of the universe is listening.\nNow you must decide whether the estimated error is small enough for your needs. If you want greater accuracy than a sample of 400 will give you, increase the sample size, using this important rule of thumb: To cut the error in half, you must quadruple the sample size. In other words, if you want a sample that will give you an error of only 0.55 percent on the average, you must increase the sample size to 1,600 interviews. Similarly, if you cut the sample size to 100, the average error will be only 2.75 percent (double 1.375 percent) on either side of 30 percent. If you distrust this rule of thumb, run ten or so trials on sample sizes of 100 or 1,600, and see what error you can expect to obtain on the average.\nIf the “real” viewership is 20 percent or 40 percent, instead of 30 percent, the accuracy you will obtain from a sample size of 400 will not be very different from an “actual” viewership of 30 percent, so do not worry about that too much, as long as you are in the right general vicinity.\nAccuracy is slightly greater in smaller universes but only slightly. For example, a sample of 400 would give perfect accuracy if Countryville had only 400 residents. And a sample of 400 will give slightly greater accuracy for a town of 800 residents than for a city of 80,000 residents. But, beyond the point at which the sample is a large fraction of the total universe, there is no difference in accuracy with increases in the size of universe. This point is very important. For any given level of accuracy, identical sample sizes give the same level of accuracy for Podunk (population 8,000) or New York City (population 8 million). The ratio of the sample size to the population of Podunk or New York City means nothing at all, even though it intuitively seems to be important.\nThe size of the sample must depend upon which population or subpopulations you wish to describe. For example, Alfred Kinsey’s sample size for the classic “Sexual Behavior in the Human Male” (1948) would have seemed large, by customary practice, for generalizations about the United States population as a whole. But, as Kinsey explains: “… the chief concern of the present study is an understanding of the sexual behavior of each segment of the population, and that it is only secondarily concerned with generalization for the population as a whole.” (1948, 82, italics added). Therefore Kinsey’s sample had to include subsamples large enough to obtain the desired accuracy in each of these sub-universes. The U.S. 
Census offers a similar illustration. When the U.S. Bureau of the Census aims to estimate only a total or an average for the United States as a whole — as, for example, in the Current Population Survey estimate of unemployment — a sample of perhaps 50,000 is big enough. But the decennial census aims to make estimates for all the various communities in the country, estimates that require adequate subsamples in each of these sub-universes; such is the justification for the decennial census’ sample size of so many millions. Television ratings illustrate both types of purpose. Nielsen ratings, for example, are sold primarily to national network advertisers. These advertisers on national television networks usually sell their goods all across the country and are therefore interested primarily in the total United States viewership for a program, rather than in the viewership in various demographic subgroups. The appropriate calculations for Nielsen sample size will therefore refer to the total United States sample. But other organizations sell rating services to local television and radio stations for use in soliciting advertising over the local stations rather than over the network as a whole. Each local sample must then be large enough to provide reasonable accuracy, and, considered as a whole, the samples for the local stations therefore add up to a much larger sample than the Nielsen and other nationwide samples.\nThe problem may be handled with the following R program. This program represents viewers with the string 'viewers' and non-viewers as 'not viewers'. It then asks sample to choose randomly between 'viewer' and 'not viewer' with a 30% (p=0.3) chance of getting a 'viewer' and a 70% chance of getting a 'not viewer'. It gets a sample of 400 such numbers, counts (with sum the “viewers” then finds how much this sample diverges from the expected number of viewers (30% of 400 = 120). It repeats this procedure 10000 times, and then calculates the average divergence.\n\nStart of viewer_numbers notebook\n\nDownload notebook\nInteract\n\n\n\n# set the number of trials\nn_trials <- 10000\n\n# an empty array to store the scores\nscores <- numeric(n_trials)\n\n# What are the options to choose from?\noptions <- c('viewer', 'not viewer')\n\n# do n_trials trials\nfor (i in 1:n_trials) {\n\n # Choose 'viewer' 30% of the time.\n a <- sample(options, size=400, prob=c(0.3, 0.7), replace=TRUE)\n\n # count the viewers\n b <- sum(a == 'viewer')\n\n # how different from expected?\n c <- 120 - b\n\n # absolute value of the difference\n d <- abs(c)\n\n # express as a proportion of sample\n e <- d / 400\n\n # keep score of the result\n scores[i] <- e\n}\n\n# find the mean divergence\nk <- mean(scores)\n\n# Show the result\nk\n\n[1] 0.0182\n\n\n\nEnd of viewer_numbers notebook\n\nIt is a simple matter to go back and try a sample size of (say) 1600 rather than 400, and examine the effect on the mean difference.\nExample 24-2\nThis example, like Example 24-1, illustrates the choice of sample size for estimating a summarization statistic. Later examples deal with sample sizes for probability statistics.\nHark back to the pig-ration problems presented earlier, and consider the following set of pig weight-gains recorded for ration A: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 30. Assume that\nour purpose now is to estimate the average weight gain for ration A, so that the feed company can advertise to farmers how much weight gain to expect from ration A. 
If the universe is made up of pig weight-gains like those we observed, we can simulate the universe with, say, 1 million weight gains of thirty-one pounds, 1 million of thirty-four pounds, and so on for the twelve observed weight gains. Or, more conveniently, as accuracy will not be affected much, we can make up a universe of say, thirty cards for each thirty-one-pound gain, thirty cards for each thirty-four-pound gains and so forth, yielding a deck of 30 x 12 = 360 cards. Then shuffle, and, just for a starting point, try sample sizes of twelve pigs. The means of the samples for twenty such trials are as in Table 24-3.\nNow ask yourself whether a sample size of twelve pigs gives you enough accuracy. There is a .5 chance that the mean for the sample will be more than .65 or .92 pound (the two median deviations) or (say) .785 pound (the midpoint of the two medians) from the mean of the universe that generates such samples, which in this situation is 31.75 pounds. Is this close enough? That is up to you to decide in light of the purposes for which you are running the experiment. (The logic of the inference you make here is inevitably murky, and use of the term “real mean” can make it even murkier, as is seen in the discussion in Chapters 20-22 on confidence intervals.)\nTo see how accuracy is affected by larger samples, try a sample size of forty-eight “pigs” dealt from the same deck. (But, if the sample size were to be much larger than forty-eight, you might need a “universe” greater than 360 cards.) The results of twenty trials are in Table 24-4.\nIn half the trials with a sample size of forty-eight the difference between the sample mean and the “real” mean of 31.75 will be .36 or .37 pound (the median deviations), smaller than with the values of .65 and .92 for samples of 12 pigs. Again, is this too little accuracy for you? If so, increase the sample size further.\nTable 24-3\n\n\n\n\n\n\n\n\n\n\n\nTrial\nMean\nAbsolut e Devisatio n of Trial Mean\nfrom Actual Mean\nTrial\nMean\nAbsolut e Deviation of Trial Mean\nfrom Actual Mean\n\n\n1\n31.77\n.02\n11\n32.10\n.35\n\n\n2\n32.27\n1.52\n12\n30.67\n1.08\n\n\n3\n31.75\n.00\n13\n32.42\n.67\n\n\n4\n30.83\n.92\n14\n30.67\n1.08\n\n\n5\n30.52\n1.23\n15\n32.25\n.50\n\n\n6\n31.60\n.15\n16\n31.60\n.15\n\n\n7\n32.46\n.71\n17\n32.33\n.58\n\n\n8\n31.10\n.65\n18\n33.08\n1.33\n\n\n9\n32.42\n.35\n19\n33.01\n1.26\n\n\n10\n30.60\n1.15\n20\n30.60\n1.15\n\n\nMean\n\n\n\n\n31.75\n\n\n\nThe attentive reader of this example may have been troubled by this question: How do you know what kind of a distribution of values is contained in the universe before the sample is taken? The answer is that you guess, just as in Example 24-1 you guessed at the mean of the universe. If you guess wrong, you will get either more accuracy or less accuracy than you expected from a given sample size, but the results will not be fatal; if you obtain more accuracy than you wanted, you have wasted some money, and, if you obtain less accuracy, your sample dispersion will tell you so, and you can then augment the sample to boost the accuracy. 
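Here, as a rough sketch only, is how the card-and-deck procedure just described might be run in R: we draw repeated samples of 12 (or 48) weight gains, with replacement, from the twelve observed gains, and record how far each sample mean falls from the observed mean of 31.75 pounds. The 10,000 trials and the function name mean_error_for_size are our own choices and not part of the original procedure.

# The twelve observed weight gains for ration A.
gains <- c(31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 30)

n_trials <- 10000

# Typical (median) deviation of the resample mean from the observed
# mean of 31.75, for a given sample size, sampling with replacement.
mean_error_for_size <- function(sampsize) {
    means <- numeric(n_trials)
    for (i in 1:n_trials) {
        resample <- sample(gains, size=sampsize, replace=TRUE)
        means[i] <- mean(resample)
    }
    median(abs(means - mean(gains)))
}

mean_error_for_size(12)
mean_error_for_size(48)

If the rule of thumb above holds, the figure for samples of 48 should come out at roughly half the figure for samples of 12.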
But an error in guessing will not introduce error into your final results.\nTable 24-4\n\n\n\n\n\n\n\n\n\n\n\nTrial\nMean\nAbsolut e Deviation of Trial Mean\nfrom Actual Mean\nTrial\nMean\nAbsolut e Deviation of Trial Mean\nfrom Actual Mean\n\n\n1\n31.80\n.05\n11\n31.93\n.18\n\n\n2\n32.27\n.52\n12\n32.40\n.65\n\n\n3\n31.82\n.07\n13\n31.32\n.43\n\n\n4\n31.39\n.36\n14\n32.07\n.68\n\n\n5\n31.22\n.53\n15\n32.03\n.28\n\n\n6\n31.88\n.13\n16\n31.95\n.20\n\n\n7\n31.37\n.38\n17\n31.75\n.00\n\n\n8\n31.48\n.27\n18\n31.11\n.64\n\n\n9\n31.20\n.55\n19\n31.96\n.21\n\n\n10\n32.01\n.26\n20\n31.32\n.43\n\n\nMean\n\n\n\n\n31.75\n\n\n\nThe guess should be based on something, however. One source for guessing is your general knowledge of the likely dispersion; for example, if you were estimating male heights in Rhode Island, you would be able to guess what proportion of observations would fall within 2 inches, 4 inches, 6 inches, and 8 inches, perhaps, of the real value. Or, much better yet, a very small pretest will yield quite satisfactory estimates of the dispersion.\nHere is a RESAMPLING STATS program that will let you try different sample sizes, and then take bootstrap samples to determine the range of sampling error. You set the sample size with the DATA command, and the NUMBERS command records the data. Above I noted that we could sample without replacement from a “deck” of thirty “31”’s, thirty “34”’s, etc, as a substitute for creating a universe of a million “31”’s, a million “34”’s, etc. We can achieve the same effect if we replace each card after we sample it; this is equivalent to creating a “deck” of an infinite number of “31”’s, “34”’s, etc. That is what the SAMPLE command does, below. Note that the sample size is determined by the value of the “sampsize” variable, which you set at the beginning. From here on the program takes the MEAN of each sample, keeps SCORE of that result, and produces a HISTOGRAM. The PERCENTILE command will also tell you what values enclose 90% of all sample results, excluding those below the 5th percentile and above the 95th percentile.\nHere is a program for a sample size of 12.\n\n' Program file: \"how_big_sample_01.rss\"\n\nDATA (12) sampsize\nNUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a\nREPEAT 1000\n SAMPLE sampsize a b\n MEAN b c\n SCORE c z\nEND\nHISTOGRAM z\nPERCENTILE z (5 95) k\nPRINT k\n' **Bin Center Freq Pct Cum Pct**\n\n\n\n\n29.0\n\n2\n0.2\n0.2\n\n\n29.5\n\n4\n0.4\n0.6\n\n\n30.0\n\n30\n3.0\n3.6\n\n\n30.5\n\n71\n7.1\n10.7\n\n\n31.0\n\n162\n16.2\n26.9\n\n\n31.5\n\n209\n20.9\n47.8\n\n\n32.0\n\n237\n23.7\n71.5\n\n\n32.5\n\n143\n14.3\n85.8\n\n\n33.0\n\n90\n9.0\n94.8\n\n\n33.5\n\n37\n3.7\n98.5\n\n\n34.0\n\n12\n1.2\n99.7\n\n\n34.5\n\n3\n0.3\n100.0\n\n\nk = 30.417\n33.25\n\n\n\n\n\n\nExample 24-3\nThis is the first example of sample-size estimation for probability (testing) statistics, rather than the summarization statistics dealt with above.\nRecall the problem of the sex of fruit-fly offspring discussed in Example 15-1. The question now is, how large a sample is needed to determine whether the radiation treatment results in a sex ratio other than a 50-50 male-female split?\nThe first step is, as usual, difficult but necessary. As the researcher, you must guess what the sex ratio will be if the treatment does have an effect. 
Let’s say that you use all your general knowledge of genetics and of this treatment and that you guess the sex ratio will be 75 percent males and 25 percent females if the treatment alters the ratio from 50-50.\nIn the random-number table let “01-25” stand for females and “26-00” for males. Take twenty successive pairs of numbers for each trial, and run perhaps fifty trials, as in Table 24-5.\nTable 24-5\n\n\n\n1\n4\n16\n18\n7\n13\n34\n4\n16\n\n\n2\n6\n14\n19\n3\n17\n35\n6\n14\n\n\n3\n6\n14\n20\n7\n13\n36\n3\n17\n\n\n4\n5\n15\n21\n4\n16\n37\n8\n12\n\n\n5\n5\n15\n22\n4\n16\n38\n4\n16\n\n\n6\n3\n17\n23\n5\n15\n39\n3\n17\n\n\n7\n7\n13\n24\n8\n12\n40\n6\n14\n\n\n8\n6\n14\n25\n4\n16\n41\n5\n15\n\n\n9\n3\n17\n26\n1\n19\n42\n2\n18\n\n\n10\n2\n18\n27\n5\n15\n43\n8\n12\n\n\n11\n6\n14\n28\n3\n17\n44\n4\n16\n\n\n12\n1\n19\n29\n8\n12\n45\n6\n14\n\n\n13\n6\n14\n30\n8\n12\n46\n5\n15\n\n\n14\n3\n17\n31\n5\n15\n47\n3\n17\n\n\n15\n1\n19\n32\n3\n17\n48\n5\n15\n\n\n16\n5\n15\n33\n4\n16\n49\n3\n17\n\n\n17\n5\n15\n\n\n\n50\n5\n15\n\n\n\nTrial Females Males Trial Females Males Trial Females Males\nIn Example 15-1 with a sample of twenty flies that contained fourteen or more males, we found only an 8% probability that such an extreme sample would result from a 50-50 universe. Therefore, if we observe such an extreme sample, we rule out a 50-50 universe.\nNow Table 24-5 tells us that, if the ratio is really 75 to 25, then a sample of twenty will show fourteen or more males forty-two of fifty times (84 percent of the time). If we take a sample of twenty flies and if the ratio is really 75-25, we will make the correct decision by deciding that the split is not 50-50 84 percent of the time.\nPerhaps you are not satisfied with reaching the right conclusion only 84 percent of the time. In that case, still assuming that the ratio will really be 75-25 if it is not 50-50, you need to take a sample larger than twenty flies. How much larger? That depends on how much surer you want to be. Follow the same procedure for a sample size of perhaps eighty flies. First work out for a sample of eighty, as was done in Example 15-1 for a sample of twenty, the number of males out of eighty that you would need to find for the odds to be, say, 9 to 1 that the universe is not 50-50; your estimate turns out to be forty-eight males. Then run fifty trials of eighty flies each on the basis of 75-25 probability, and see how often you would not get as many as forty-eight males in the sample. Table 24-6 shows the results we got. 
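The same check can be run on a computer. The sketch below, in R, is one possible way to do it and is not part of the original procedure: it draws eighty flies at a time from a 75-25 male-female universe and counts how often a sample falls short of forty-eight males. Changing size to 20 and the cut-off to 14 gives the corresponding check for samples of twenty flies.

n_trials <- 10000
n_males <- numeric(n_trials)

for (i in 1:n_trials) {
    # 80 flies from a universe that is 75 percent male.
    flies <- sample(c('male', 'female'), size=80, prob=c(0.75, 0.25), replace=TRUE)
    n_males[i] <- sum(flies == 'male')
}

# How often does a sample of 80 fail to reach 48 males?
sum(n_males < 48) / n_trials

Such shortfalls should turn out to be very rare, in line with the hand-run trials of Table 24-6.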
No trial was anywhere near as low as forty-eight, which suggests that a sample of eighty is larger than necessary if the split is really 75-25.\nTable 24-6\n\n\nTrial Females Males Trial Females Males Trial Females Males\n\n\n\n1\n21\n59\n18\n13\n67\n34\n21\n59\n\n\n2\n22\n58\n19\n19\n61\n35\n17\n63\n\n\n3\n13\n67\n20\n17\n63\n36\n22\n58\n\n\n4\n15\n65\n21\n17\n63\n37\n19\n61\n\n\n5\n22\n58\n22\n18\n62\n38\n21\n59\n\n\n6\n21\n59\n23\n26\n54\n39\n21\n59\n\n\n7\n13\n67\n24\n20\n60\n40\n21\n59\n\n\n8\n24\n56\n25\n16\n64\n41\n21\n59\n\n\n9\n16\n64\n26\n22\n58\n42\n18\n62\n\n\n10\n21\n59\n27\n16\n64\n43\n19\n61\n\n\n11\n20\n60\n28\n21\n59\n44\n17\n63\n\n\n12\n19\n61\n29\n22\n58\n45\n13\n67\n\n\n13\n21\n59\n30\n21\n59\n46\n16\n64\n\n\n14\n17\n63\n31\n22\n58\n47\n21\n59\n\n\n15\n22\n68\n32\n19\n61\n48\n16\n64\n\n\n16\n22\n68\n33\n10\n70\n49\n17\n63\n\n\n17\n17\n63\n\n\n\n50\n21\n59\n\n\n\nTable 24-7\nTrial Females Males Trial Females Males Trial Females Males\n\n\n\n1\n35\n45\n18\n32\n48\n34\n35\n45\n\n\n2\n36\n44\n19\n28\n52\n35\n36\n44\n\n\n3\n35\n45\n20\n32\n48\n36\n29\n51\n\n\n4\n35\n45\n21\n33\n47\n37\n36\n44\n\n\n5\n36\n44\n22\n37\n43\n38\n36\n44\n\n\n6\n36\n44\n23\n36\n44\n39\n31\n49\n\n\n7\n36\n44\n24\n31\n49\n40\n29\n51\n\n\n8\n34\n46\n25\n27\n53\n41\n30\n50\n\n\n9\n34\n46\n26\n30\n50\n42\n35\n45\n\n\n10\n29\n51\n27\n31\n49\n43\n32\n48\n\n\n11\n29\n51\n28\n33\n47\n44\n30\n50\n\n\n12\n32\n48\n29\n37\n43\n45\n37\n43\n\n\n13\n29\n51\n30\n30\n50\n46\n31\n49\n\n\n14\n31\n49\n31\n31\n49\n47\n36\n44\n\n\n15\n28\n52\n32\n32\n48\n48\n34\n64\n\n\n16\n33\n47\n33\n34\n46\n49\n29\n51\n\n\n17\n36\n44\n\n\n\n50\n37\n43\n\n\n\n\nIt is obvious that, if the split you guess at is 60 to 40 rather than 75 to 25, you will need a bigger sample to obtain the “correct” result with the same probability. For example, run some eighty-fly random-number trials with 1-40 representing males and 51-100 representing females. Table 24-7 shows that only twenty-four of fifty (48 percent) of the trials reach the necessary cut-off at which one would judge that a sample of eighty really does not come from a universe that is split 50-50; therefore, a sample of eighty is not big enough if the split is 60-40.\nTo review the main principles of this example: First, the closer together the two possible universes from which you think the sample might have come (50-50 and 60-40 are closer together than are 50-50 and 75-25), the larger the sample needed to distinguish between them. Second, the surer you want to be that you reach the right decision based upon the sample evidence, the larger the sample you need.\nThe problem may be handled with the following RESAMPLING STATS program. We construct a benchmark universe that is 60-40 male-female, and take samples of size 80, observing whether the numbers of males and females differs enough in these resamples to rule out a 50-50 universe. 
Recall that we need at least 48 males to say that the proportion of males is not 50%.\n\n' Program file: \"how_big_sample_02.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 80 1,10 a\n ' Generate 80 \"flies,\" each represented by a number between 1 and 10 where\n ' \\<= 6 is a male\n COUNT a <=6 b\n ' Count the males\n SCORE b z\n ' Keep score\nEND\nCOUNT z >=48 k\n' How many of the trials produced more than 48 males?\nDIVIDE k 1000 kk\n' Convert to a proportion\nPRINT kk\n' If the result \"kk\" is close to 1, we then know that samples of size 80\n' will almost always produce samples with enough males to avoid misleading\n' us into thinking that they could have come from a universe in which\n' males and females are split 50-50.\nExample 24-3\nReferring back to Example 15-3, on the cable-television poll, how large a sample should you have taken? Pretend that the data have not yet been collected. You need some estimate of how the results will turn out before you can select a sample size. But you have not the foggiest idea how the results will turn out. Therefore, go out and take a very small sample, maybe ten people, to give you some idea of whether people will split quite evenly or unevenly. Seven of your ten initial interviews say they are for CATV. How large a sample do you now need to provide an answer of which you can be fairly sure?\nUsing the techniques of the previous chapter, we estimate roughly that from a sample of fifty people at least thirty-two would have to vote the same way for you to believe that the odds are at least 19 to 1 that the sample does not misrepresent the universe, that is, that the sample does not show a majority different from that of the whole universe if you polled everyone. This estimate is derived from the resampling experiment described in example 15-3. The table shows that if half the people (or more) are against cable television, only one in twenty times will thirty-two (or more) people of a sample of fifty say that they are for cable television; that is, only one of twenty trials with a 50-50 universe will produce as many as thirty-two yeses if a majority of the population is against it.\nTherefore, designate numbers 1-30 as no and 31-00 as yes in the random-number table (that is, 70 percent, as in your estimate based on your presample of ten), work through a trial sample size of fifty, and count the number of yeses . Run through perhaps ten or fifteen trials, and reckon how often the observed number of yeses exceeds thirty-two, the number you must exceed for a result you can rely on. 
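With a computer we can run many more than ten or fifteen such trials. One possible sketch of the same procedure in R is the following; the variable names and the 10,000 trials are our own choices, and a RESAMPLING STATS version of the same test appears a little further on.

n_trials <- 10000
n_yeses <- numeric(n_trials)

for (i in 1:n_trials) {
    # A sample of 50 people from a universe that is 70 percent "yes".
    answers <- sample(c('yes', 'no'), size=50, prob=c(0.7, 0.3), replace=TRUE)
    n_yeses[i] <- sum(answers == 'yes')
}

# Proportion of samples that reach the cut-off of 32 "yeses".
sum(n_yeses >= 32) / n_trials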
In Table 24-8 we see that a sample of fifty respondents, from a universe split 70-30, will show that many yeses a preponderant proportion of the time — in fact, in fifteen of fifteen experiments; therefore, the sample size of fifty is large enough if the split is “really” 70-30.\nTable 24-8\n\n\n\nTrial\nNo\nYes\nTrial\nNo\nYes\n\n\n1\n13\n37\n9\n15\n35\n\n\n2\n14\n36\n10\n9\n41\n\n\n3\n18\n32\n11\n15\n35\n\n\n4\n10\n40\n12\n15\n35\n\n\n5\n13\n37\n13\n9\n41\n\n\n6\n15\n35\n14\n16\n34\n\n\n7\n14\n36\n15\n17\n33\n\n\n\nThe following RESAMPLING STATS program takes samples of size 50 from a universe that is 70% “yes.” It then observes how often such samples produce more than 32 “yeses” — the number we must get if we are to be sure that the sample is not from a 50/50 universe.\n\n' Program file: \"how_big_sample_03.rss\"\n\nREPEAT 1000\n ' Do 1000 trials\n GENERATE 50 1,10 a\n ' Generate 50 numbers between 1 and 10, let 1-7 = yes.\n COUNT a <=7 b\n ' Count the \"yeses\"\n SCORE b z\n ' Keep score of the result\nEND\nCOUNT z >=32 k\n' Count how often the sample result \\>= our 32 cutoff (recall that samples\n' with 32 or fewer \"yeses\" cannot be ruled out of a 50/50 universe)\nDIVIDE k 1000 kk\n' Convert to a proportion\nIf “kk” is close to 1, we can be confident that this sample will be large enough to avoid a result that we might mistakenly think comes from a 50/50 universe (provided that the real universe is 70% favorable).\nExample 24-4\nHow large a sample is needed to determine whether there is any difference between the two pig rations in Example 15-7? The first step is to guess the results of the tests. You estimate that the average for ration A will be a weight gain of thirty-two pounds. You further guess that twelve pigs on ration A might gain thirty-six, thirty-five, thirty-four, thirty-three, thirty-three, thirty-two, thirty-two, thirty-one, thirty-one, thirty, twentynine, and twenty-eight pounds. This set of guesses has an equal number of pigs above and below the average and more pigs close to the average than farther away. That is, there are more pigs at 33 and 31 pounds than at 36 and 28 pounds. This would seem to be a reasonable distribution of pigs around an average of 32 pounds. In similar fashion, you guess an average weight gain of 28 pounds for ration B and a distribution of 32, 31, 30, 29, 29, 28, 28, 27, 27, 26, 25, and 24 pounds.\nLet us review the basic strategy. We want to find a sample size large enough so that a large proportion of the time it will reveal a difference between groups big enough to be accepted as not attributable to chance. First, then, we need to find out how big the difference must be to be accepted as evidence that the difference is not attributable to chance. We do so from trials with samples that size from the benchmark universe. We state that a difference larger than the benchmark universe will usually produce is not attributable to chance.\nIn this case, let us try samples of 12 pigs on each ration. First we draw two samples from a combined benchmark universe made up of the results that we have guessed will come from ration A and ration B. (The procedure is the same as was followed in Example 15-7.) We find that in 19 out of 20 trials the difference between the two observed groups of 12 pigs was 3 pounds or less. Now we investigate how often samples of 12 pigs, drawn from the separate universes, will show a mean difference as large as 3 pounds. 
We do so by making up a deck of 25 or 50 cards for each of the 12 hypothesized A’s and each of the 12 B’s, with the ration name and the weight gain written on it — that is, a deck of, say, 300 cards for each ration. Then from each deck we draw a set of 12 cards at random, record the group averages, and find the difference.\nHere is the same work done with more runs on the computer:\n\n' Program file: \"how_big_sample_04.rss\"\n\nNUMBERS (31 34 29 26 32 35 38 34 32 31 30 29) a\nNUMBERS (32 32 31 30 29 29 29 28 28 26 26 24) b\nREPEAT 1000\n SAMPLE 12 a aa\n MEAN aa aaa\n SAMPLE 12 b bb\n MEAN bb bbb\n SUBTRACT aaa bbb c\n SCORE c z\nEND\nHISTOGRAM z\n' **Difference in mean weights between resamples**\n\nTherefore, two samples of twelve pigs each are clearly large enough, and, in fact, even smaller samples might be sufficient if the universes are really like those we guessed at. If, on the other hand, the differences in the guessed universes had been smaller, then twelve-pig groups would have seemed too small and we would then have had to try out larger sample sizes, say forty-eight pigs in each group and perhaps 200 pigs in each group if forty-eight were not enough. And so on until the sample size is large enough to promise the accuracy we want. (In that case, the decks would also have to be much larger, of course.)\nIf we had guessed different universes for the two rations, then the sample sizes required would have been larger or smaller. If we had guessed the averages for the two samples to be closer together, then we would have needed larger samples. Also, if we had guessed the weight gains within each universe to be less spread out, the samples could have been smaller and vice versa.\nThe following RESAMPLING STATS program first records the data from the two samples, and then draws from decks of infinite size by sampling with replacement from the original samples.\n\n' Program file: \"how_big_sample_05.rss\"\n\nDATA (36 35 34 33 33 32 32 31 31 30 29 28) a\nDATA (32 31 30 29 29 28 28 27 27 26 25 24) b\nREPEAT 1000\n SAMPLE 12 a aa\n ' Draw a sample of 12 from ration a with replacement (this is like drawing\n ' from a large deck made up of many replicates of the elements in a)\n SAMPLE 12 b bb\n ' Same for b\n MEAN aa aaa\n ' Find the averages of the resamples\n MEAN bb bbb\n SUBTRACT aaa bbb c\n ' Find the difference\n SCORE c z\nEND\nCOUNT z >=3 k\n' How often did the difference exceed the cutoff point for our\n' significance test of 3 pounds?\nDIVIDE k 1000 kk\nPRINT kk\n' If kk is close to zero, we know that the sample size is large enough\n' that samples drawn from the universes we have hypothesized will not\n' mislead us into thinking that they could come from the same universe." + }, + { + "objectID": "how_big_sample.html#step-wise-sample-size-determination", + "href": "how_big_sample.html#step-wise-sample-size-determination", + "title": "30  How Large a Sample?", + "section": "30.3 Step-wise sample-size determination", + "text": "30.3 Step-wise sample-size determination\nOften it is wisest to determine the sample size as you go along, rather than fixing it firmly in advance. In sequential sampling, you continue sampling until the split is sufficiently even to make you believe you have a reliable answer.\nRelated techniques work in a series of jumps from sample size to sample size. Step-wise sampling makes it less likely that you will take a sample that is much larger than necessary. 
For example, in the cable-television case, if you took a sample of perhaps fifty you could see whether the split was as wide as 32-18, which you figure you need for 9 to 1 odds that your answer is right. If the split were not that wide, you would sample another fifty, another 100, or however large a sample you needed until you reached a split wide enough to satisfy you that your answer was reliable and that you really knew which way the entire universe would vote.\nStep-wise sampling is not always practical, however, and the cable-television telephone-survey example is unusually favorable for its use. One major pitfall is that the early responses to a mail survey, for example, do not provide a random sample of the whole, and therefore it is a mistake simply to look at the early returns when the split is not wide enough to justify a verdict. If you have listened to early radio or television reports of election returns, you know how misleading the reports from the first precincts can be if we regard them as a fair sample of the whole.2\nStratified sampling is another device that helps reduce the sample size required, by balancing the amounts of information you obtain in the various strata. (Cluster sampling does not reduce the sample size. Rather, it aims to reduce the cost of obtaining a sample that will produce a given level of accuracy.)" + }, + { + "objectID": "how_big_sample.html#summary", + "href": "how_big_sample.html#summary", + "title": "30  How Large a Sample?", + "section": "30.4 Summary", + "text": "30.4 Summary\nSample sizes are too often determined on the basis of convention or of the available budget. A more rational method of choosing the size of the sample is by balancing the diminution of error expected with a larger sample, and its value, against the cost of increasing the sample size. The relationship of various sample sizes to various degrees of accuracy can be estimated with resampling methods, which are illustrated here.\n\n\n\n\nFussler, Herman Howe, and Julian Lincoln Simon. 1961. Patterns in the Use of Books in Large Research Libraries. Chicago: University of Chicago Library.\n\n\nHansen, Morris H, William N Hurwitz, and William G Madow. 1953. “Sample Survey Methods and Theory. Vol. I. Methods and Applications.” https://archive.org/details/SampleSurveyMethodsAndTheoryVol1.\n\n\nKinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948. “Sexual Behavior in the Human Male.” W. B. Saunders Company. https://books.google.co.uk/books?id=pfMKrY3VvigC.\n\n\nLorie, James Hirsch, and Harry V Roberts. 1951. Basic Methods of Marketing Research. McGraw-Hill.\n\n\nSchlaifer, Robert. 1961. Introduction to Statistics for Business Decisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl.\n\n\nSudman, Seymour. 1976. Applied Sampling. New York: Academic Press. https://archive.org/details/appliedsampling0000unse." + }, + { + "objectID": "bayes_simulation.html#simple-decision-problems", + "href": "bayes_simulation.html#simple-decision-problems", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.1 Simple decision problems", + "text": "31.1 Simple decision problems\n\n31.1.1 Assessing the Likelihood That a Used Car Will Be Sound\nConsider a problem in estimating the soundness of a used car one considers purchasing (after (Wonnacott and Wonnacott 1990, 93–94)). Seventy percent of the cars are known to be OK on average, and 30 percent are faulty. 
Of the cars that are really OK, a mechanic correctly identifies 80 percent as “OK” but says that 20 percent are “faulty”; of those that are faulty, the mechanic correctly identifies 90 percent as faulty and says (incorrectly) that 10 percent are OK.\nWe wish to know the probability that if the mechanic says a car is “OK,” it really is faulty. Phrased differently, what is the probability of a car being faulty if the mechanic said it was OK?\nWe can get the desired probabilities directly by simulation without knowing Bayes’ rule, as we shall see. But one must be able to model the physical problem correctly in order to proceed with the simulation; this requirement of a clearly visualized model is a strong point in favor of simulation.\n\nNote that we are only interested in outcomes where the mechanic approved a car.\nFor each car, generate a label of either “faulty” or “working” with probabilities of 0.3 and 0.7, respectively.\nFor each faulty car, we generate one of two labels, “approved” or “not approved” with probabilities 0.1 and 0.9, respectively.\nFor each working car, we generate one of two labels, “approved” or “not approved” with probabilities 0.8 and 0.2, respectively.\nOut of all cars “approved”, count how many are “faulty”. The ratio between these numbers is our answer.\n\nHere is the whole thing:\nThe answer looks to be somewhere between 5 and 6%. The code clearly follows the description step by step, but it is also quite slow. If we can improve the code, we may be able to do our simulation with more cars, and get a more accurate answer.\nLet’s use arrays to store the states of all cars in the lot simultaneously:\nThe code now runs much faster, and with a larger number of cars we see that the answer is closer to a 5% chance of a car being broken after it has been approved by a mechanic.\n\n\n31.1.2 Calculation without simulation\nSimulation forces us to model our problem clearly and concretely in code. Such code is most often easier to reason about than opaque statistical methods. Running the simulation gives a good sense of what the correct answer should be. Thereafter, we can still look into different — sometimes more elegant or accurate — ways of modeling and solving the problem.\nLet’s examine the following diagram of our car selection:\n\nWe see that there are two paths, highlighted, that result in a car being approved by a mechanic. Either a car can be working, and correctly identified as such by a mechanic; or the car can be broken, while the mechanic mistakenly determines it to be working. Our question only pertains to these two paths, so we do not need to study the rest of the tree.\nIn the long run, in our simulation, about 70% of the cars will end with the label “working”, and about 30% will end up with the label “faulty”. We just took 10000 sample cars above but, in fact, the larger the number of cars we take, the closer we will get to 70% “working” and 30% “faulty”. So, with many samples, we can think of 70% of these samples flowing down the “working” path, and 30% flowing along the “faulty” path.\nNow, we want to know, of all the cars approved by a mechanic, how many are faulty:\n\\[ \\frac{\\mathrm{cars}_{\\mathrm{faulty}}}{\\mathrm{cars}_{\\mathrm{approved}}} \\]\nWe follow the two highlighted paths in the tree:\n\nOf a large sample of cars, 30% are faulty. Of these, 10% are approved by a mechanic. That is, 30% * 10% = 3% of all cars.\nOf all cars, 70% work. Of these, 80% are approved by a mechanic. 
That is, 70% * 80% = 56% of all cars.\n\nThe percentage of faulty cars, out of approved cars, becomes:\n\\[\n3\\% / (56\\% + 3\\%) = 5.08\\%\n\\]\nNotation-wise, it is a bit easier to calculate these sums using proportions rather than percentages:\n\nFaulty cars approved by a mechanic: 0.3 * 0.1 = 0.03\nWorking cars approved by a mechanic: 0.7 * 0.8 = 0.56\n\nFraction of faulty cars out of approved cars: 0.03 / (0.03 + 0.56) = 0.0508\nWe see that every time the tree branches, it filters the cars: some go to one branch, the rest to another. In our code, we used the AND (&) operator to find the intersection between faulty AND approved cars, i.e., to filter out from all faulty cars only the cars that were ALSO approved." + }, + { + "objectID": "bayes_simulation.html#probability-interpretation", + "href": "bayes_simulation.html#probability-interpretation", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.2 Probability interpretation", + "text": "31.2 Probability interpretation\n\n31.2.1 Probability from proportion\nIn these examples, we often calculate proportions. In the given simulation:\n\nHow many cars are approved by a mechanic? 59/100.\nHow many of those 59 were faulty? 3/59.\n\nWe often also count how commonly events occur: “it rained 4 out of the 10 days”.\nAn extension of this idea is to predict the probability of an event occurring, based on what we had seen in the past. We can say “out of 100 days, there was some rain on 20 of them; we therefore estimate that the probability of rain occurring is 20/100”. Of course, this is not a complex or very accurate weather model; for that, we’d need to take other factors—such as season—into consideration. Overall, the more observations we have, the better our probability estimates become. We discussed this idea previously in “The Law of Large Numbers”.\n\n\n31.2.1.1 Ratios of proportions\nAt our mechanic’s yard, we can ask “how many red cars here are faulty”? To calculate that, we’d first count the number of red cars, then the number of those red cars that are also broken, then calculate the ratio: red_cars_faulty / red_cars.\nWe could just as well have worked in percentages: percentage_of_red_cars_broken / percentage_of_cars_that_are_red, since that is (red_cars_broken / 100) / (red_cars / 100)—the same ratio calculated before.\nOur point is that the denominator doesn’t matter when calculating ratios, so we could just as well have written:\n(red_cars_broken / all_cars) / (red_cars / all_cars)\nor\n\\[\nP(\\text{cars that are red and that are broken}) / P(\\text{red cars})\n\\]\n\n\n\n\n31.2.2 Probability relationships: conditional probability\nHere’s one way of writing the probability that a car is broken:\n\\[\nP(\\text{car is broken})\n\\]\nWe can shorten “car is broken” to B, and write the same thing as:\n\\[\nP(B)\n\\]\nSimilarly, we could write the probability that a car is red as:\n\\[\nP(R)\n\\]\nWe might also want to express the conditional probability, as in the probability that the car is broken, given that we already know that the car is red:\n\\[\nP(\\text{car is broken GIVEN THAT car is red})\n\\]\nThat is getting getting pretty verbose, so we will shorten this as we did above:\n\\[\nP(B \\text{ GIVEN THAT } R)\n\\]\nTo make things even more compact, we write “GIVEN THAT” as a vertical bar | — so the whole thing becomes:\n\\[\nP(B | R)\n\\]\nWe read this as “the probability that the car is broken given that the car is red”. Such a probability is known as a conditional probability. 
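To make the idea concrete, here is a small sketch in R (it is not the notebook code referred to earlier) that estimates the car-lot conditional probability as a ratio of proportions. The one million simulated cars and the variable names are our own choices.

n <- 1000000  # number of simulated cars

# Label each car as faulty (30%) or working (70%).
faulty <- sample(c(TRUE, FALSE), size=n, prob=c(0.3, 0.7), replace=TRUE)

# Simulate the mechanic's verdict for each kind of car.
approved <- logical(n)
approved[faulty] <- sample(c(TRUE, FALSE), size=sum(faulty), prob=c(0.1, 0.9), replace=TRUE)
approved[!faulty] <- sample(c(TRUE, FALSE), size=sum(!faulty), prob=c(0.8, 0.2), replace=TRUE)

# P(broken GIVEN THAT approved) as a ratio of proportions.
mean(faulty & approved) / mean(approved)

The result should land close to the 5.08% worked out above.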
We discuss these in more details in Ch TKTK.\n\nIn our original problem, we ask what the chance is of a car being broken given that a mechanic approved it. As discussed under “Ratios of proportions”, it can be calculated with:\n\\[\nP(\\text{car broken | mechanic approved})\n= P(\\text{car broken and mechanic approved}) / P(\\text{mechanic approved})\n\\]\nWe have already used \\(B\\) to mean “broken” (above), so let us use \\(A\\) to mean “mechanic approved”. Then we can write the statement above in a more compact way:\n\\[\nP(B | A) = P(B \\text{ and } A) / P(A)\n\\]\nTo put this generally, conditional probabilities for two events \\(X\\) and \\(Y\\) can be written as:\n\\(P(X | Y) = P(X \\text{ and } Y) / P(Y)\\)\nWhere (again) \\(\\text{ and }\\) means that both events occur.\n\n\n31.2.3 Example: conditional probability\nLet’s discuss a very relevant example. You get a COVID test, and the test is negative. Now, you would like to know what the chance is of you having COVID.\nWe have the following information:\n\n1.5% of people in your area have COVID\nThe false positive rate of the tests (i.e., that they detect COVID when it is absent) is very low at 0.5%\nThe false negative rate (i.e., that they fail to detect COVID when it is present) is quite high at 40%\n\n\nAgain, we start with our simulation.\nThis gives around 0.006 or 0.6%.\nNow that we have a rough indication of what the answer should be, let’s try and calculate it directly, based on the tree of informatiom shown earlier.\nWe will use these abbreviations:\n\n\\(C^+\\) means Covid positive (you do actually have Covid).\n\\(C^-\\) means Covid negative (you do not actually have Covid).\n\\(T^+\\) means the Covid test was positive.\n\\(T^-\\) means the Covid test was negative.\n\nFor example \\(P(C^+ | T^-)\\) is the probability (\\(P\\)) that you do actually have Covid (\\(C^+\\)) given that (\\(|\\)) the test was negative (\\(T^-\\)).\nWe would like to know the probability of having COVID given that your test was negative (\\(P(C^+ | T^-)\\)). Using the conditional probability relationship from above, we can write:\n\\[\nP(C^+ | T^-) = P(C^+ \\text{ and } T^-) / P(T^-)\n\\]\nWe see from the tree diagram that \\(P(C^+ \\text{ and } T^-) = P(T^- | C^+) * P(C^+) = .4 * .015 = 0.006\\).\n\nWe observe that \\(P(T^-) = P(T^- \\text{ and } C^-) + P(T^- \\text{ and } C^+)\\), i.e. that we can obtain a negative test result through two paths, having COVID or not having COVID. We expand these further as conditional probabilities:\n\\(P(T^- \\text{ and } C^-) = P(T^- | C^-) * P(C^-)\\)\nand\n\\(P(T^- \\text{ and } C^+) = P(T^- | C^+) * P(C^+)\\).\nWe can now calculate\n\\[\nP(T^-) = P(T^- | C^-) * P(C^-) + P(T^- | C^+) * P(C^+)\n\\]\n\\[\n= .995 * .985 + .4 * .015 = 0.986\n\\]\nThe answer, then, is:\n\\(P(C^+ | T^-) = 0.006 / 0.986 = 0.0061\\) or 0.61%.\nThis matches very closely our simulation result, so we have some confidence that we have done the calculation correctly.\n\n\n31.2.4 Estimating Driving Risk for Insurance Purposes\nAnother sort of introductory problem, following after (Feller 1968, p 122):\nA mutual insurance company charges its members according to the risk of having an car accident. It is known that there are two classes of people — 80 percent of the population with good driving judgment and with a probability of .06 of having an accident each year, and 20 percent with poor judgment and a probability of .6 of having an accident each year. The company’s policy is to charge $100 for each percent of risk, i. 
e., a driver with a probability of .6 should pay 60*$100 = $6000.\nIf nothing is known of a driver except that they had an accident last year, what fee should they pay?\nAnother way to phrase this question is: given that a driver had an accident last year, what is the probability of them having an accident overall?\nWe will proceed as follows:\n\nGenerate a population of N people. Label each as good driver or poor driver.\nSimulate the last year for each person: did they have an accident or not?\nSelect only the ones that had an accident last year.\nAmong those, calculate what their average risk is of making an accident. This will indicate the appropriate insurance premium.\n\nThe answer should be around 4450 USD.\n\n\n31.2.5 Screening for Disease\n\nThis is a classic Bayesian problem (quoted by Tversky and Kahneman (1982, 154), from Cascells et al. (1978, 999)):\n\nIf a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?\n\nTversky and Kahneman note that among the respondents — students and staff at Harvard Medical School — “the most common response, given by almost half of the participants, was 95%” — very much the wrong answer.\nTo obtain an answer by simulation, we may rephrase the question above with (hypothetical) absolute numbers as follows:\nIf a test to detect a disease whose prevalence has been estimated to be about 100,000 in the population of 100 million persons over age 40 (that is, about 1 in a thousand) has been observed to have a false positive rate of 60 in 1200 observations, and never gives a negative result if a person really has the disease, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?\nIf the raw numbers are not available, the problem can be phrased in such terms as “about 1 case in 1000” and “about 5 false positives in 100 cases.”)\nOne may obtain an answer as follows:\n\nConstruct bucket A with 999 white beads and 1 black bead, and bucket B with 95 green beads and 5 red beads. A more complete problem that also discusses false negatives would need a third bucket.\nPick a bead from bucket A. If black, record “T,” replace the bead, and end the trial. If white, continue to step 3.\nIf a white bead is drawn from bucket A, select a bead from bucket B. If red, record “F” and replace the bead, and if green record “N” and replace the bead.\nRepeat steps 2-4 perhaps 10,000 times, and in the results count the proportion of “T”s to (“T”s plus “F”s) ignoring the “N”s).\nOf course 10,000 draws would be tedious, but even after a few hundred draws a person would be likely to draw the correct conclusion that the proportion of “T”s to (“T”s plus “F”s) would be small. And it is easy with a computer to do 10,000 trials very quickly.\nNote that the respondents in the Cascells et al. study were not naive; the medical staff members were supposed to understand statistics. Yet most doctors and other personnel offered wrong answers. If simulation can do better than the standard deductive method, then simulation would seem to be the method of choice. And only one piece of training for simulation is required: Teach the habit of saying “I’ll simulate it” and then actually doing so." 
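For readers who want to try it, here is one possible R rendering of the bucket procedure described above; the labels “T”, “F” and “N” follow the text, while the 10,000 trials and the variable names are our own choices.

n_trials <- 10000
result <- character(n_trials)

for (i in 1:n_trials) {
    # Bucket A: does this person have the disease?  (about 1 in 1,000)
    has_disease <- sample(c(TRUE, FALSE), size=1, prob=c(0.001, 0.999))
    if (has_disease) {
        # The test never misses a real case, so record a true positive.
        result[i] <- 'T'
    } else {
        # Bucket B: a healthy person still tests positive about 5% of the time.
        false_positive <- sample(c(TRUE, FALSE), size=1, prob=c(0.05, 0.95))
        result[i] <- ifelse(false_positive, 'F', 'N')
    }
}

# Proportion of true positives among all positive results, ignoring the "N"s.
sum(result == 'T') / (sum(result == 'T') + sum(result == 'F'))

The proportion should come out at roughly 2 percent, nowhere near the 95 percent that most of the respondents gave.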
+ }, + { + "objectID": "bayes_simulation.html#fundamental-problems-in-statistical-practice", + "href": "bayes_simulation.html#fundamental-problems-in-statistical-practice", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.3 Fundamental problems in statistical practice", + "text": "31.3 Fundamental problems in statistical practice\nBox and Tiao (1992) begin their classic exposition of Bayesian statistics with the analysis of a famous problem first published by Fisher (1959, 18).\n\n…there are mice of two colors, black and brown. The black mice are of two genetic kinds, homozygotes (BB) and heterozygotes (Bb), and the brown mice are of one kind (bb). It is known from established genetic theory that the probabilities associated with offspring from various matings are as listed in Table 31.1.\n\n\n\nTable 31.1: Probabilities for Genetic Character of Mice Offspring (Box and Tiao 1992, 12–14)\n\n\n\nBB (black)\nBb (black)\nbb (brown)\n\n\n\n\nBB mated with bb\n0\n1\n0\n\n\nBb mated with bb\n0\n½\n½\n\n\nBb mated with Bb\n¼\n½\n¼\n\n\n\n\nSuppose we have a “test” mouse which has been produced by a mating between two (Bb) mice and is black. What is the genetic kind of this mouse?\nTo answer that, we look at the information in the last line of the table: it shows that the probabilities of a test mouse is of kind BB and Bb are precisely known, and are 1/3 and 2/3 respectively ((1/4)/(1/4 + 1/2) vs (1/2)/(1/4 + 1/2)). We call this our “prior” estimate — in other words, our estimate before seeing data.\nSuppose the test mouse is now mated with a brown mouse (of kind bb) and produces seven black offspring. Before, we thought that it was more likely for the parent to be of kind Bb than of kind BB. But if that were true, then we would have expected to have seen some brown offspring (the probability of mating Bb with bb resulting in brown offspring is given as 0.5). Therefore, we sense that it may now be more likely that the parent was of type BB instead. How do we quantify that?\nOne can calculate, as Fisher (1959, 19) did, the probabilities after seeing the data (we call this the posterior probability). This is typically done using using Bayes’ rule.\nBut instead of doing that, let’s take the easy route out and simulate the situation instead.\n\nWe begin, as do Box and Tiao, by restricting our attention to the third line in Table Table 31.1. We draw a mouse with label ‘BB’, ‘Bb’, or ‘bb’, using those probabilities. We were told that the “test mouse” is black, so if we draw ‘bb’, we try again. (Alternatively, we could draw ‘BB’ and ‘Bb’ with probabilities of 1/3 and 2/3 respectively.)\nWe now want to examine the offspring of the test mouse when mated with a brown “bb” mouse. Specifically, we are only interested in cases where all offspring were black. We will store the genetic kind of the parents of such offspring so that we can count them later.\nIf our test mouse is “BB”, we already know that all their offspring will be black (“Bb”). Thus, store “BB” in the parent list.\nIf our test mouse is “Bb”, we have a bit more work to do. Draw seven offspring from the middle row of Table tbl-mice-genetics. If all the offspring are black, store “Bb” in the parent list.\nRepeat steps 1-3 perhaps 10000 times.\nNow, out of all parents count the numbers of “BB” vs “Bb”.\n\nWe will do a naïve implementation that closely follows the logic described above, followed by a slightly optimized version.\nWe see that all the offspring being black considerably changes the situation! 
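A naïve implementation of the steps listed above might look like the following R sketch; this is a rough rendering rather than the notebook code referred to in the text, and the 10,000 trials and the variable names are our own choices.

n_trials <- 10000
parents <- character(0)

for (i in 1:n_trials) {
    # Prior: the black test mouse is BB with probability 1/3, Bb with 2/3.
    test_mouse <- sample(c('BB', 'Bb'), size=1, prob=c(1/3, 2/3))
    if (test_mouse == 'BB') {
        # BB mated with bb gives only black (Bb) offspring, so keep this parent.
        parents <- c(parents, 'BB')
    } else {
        # Bb mated with bb gives black offspring with probability 1/2.
        offspring <- sample(c('black', 'brown'), size=7, replace=TRUE)
        if (all(offspring == 'black')) {
            parents <- c(parents, 'Bb')
        }
    }
}

# Posterior odds of BB against Bb, given seven black offspring.
sum(parents == 'BB') / sum(parents == 'Bb')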
We started with the odds being 2:1 in favor of Bb vs BB. The “posterior” or “after the evidence” ratio is closer to 64:1 in favor of BB! (1973, pp. 12-14)\nLet’s tune the code a bit to run faster. Instead of doing the trials one mouse at a time, we will do the whole bunch together.\nThis yields a similar result, but in much shorter time — which means we can increase the number of trials and get a more accurate result.\n\nCreating the correct simulation procedure is not trivial, because Bayesian reasoning is subtle — a reason it has been the cause of controversy for more than two centuries. But it certainly is not easier to create a correct procedure using analytic tools (except in the cookbook sense of plug-and-pray). And the difficult mathematics that underlie the analytic method (see e.g. (Box and Tiao 1992, Appendix A1.1) make it almost impossible for the statistician to fully understand the procedure from beginning to end. If one is interested in insight, the simulation procedure might well be preferred.1" + }, + { + "objectID": "bayes_simulation.html#problems-based-on-normal-and-other-distributions", + "href": "bayes_simulation.html#problems-based-on-normal-and-other-distributions", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.4 Problems based on normal and other distributions", + "text": "31.4 Problems based on normal and other distributions\nThis section should be skipped by all except advanced practitioners of statistics.\nMuch of the work in Bayesian analysis for scientific purposes treats the combining of prior distributions having Normal and other standard shapes with sample evidence which may also be represented with such standard functions. The mathematics involved often is formidable, though some of the calculational formulas are fairly simple and even intuitive.\nThese problems may be handled with simulation by replacing the Normal (or other) distribution with the original raw data when data are available, or by a set of discrete sub-universes when distributions are subjective.\nMeasured data from a continuous distribution present a special problem because the probability of any one observed value is very low, often approaching zero, and hence the probability of a given set of observed values usually cannot be estimated sensibly; this is the reason for the conventional practice of working with a continuous distribution itself, of course. But a simulation necessarily works with discrete values. A feasible procedure must bridge this gulf.\nThe logic for a problem of Schlaifer’s (1961, example 17.1) will only be sketched out. The procedure is rather novel, but it has not heretofore been published and therefore must be considered tentative and requiring particular scrutiny.\n\n31.4.1 An Intermediate Problem in Conditional Probability\nSchlaifer employs a quality-control problem for his leading example of Bayesian estimation with Normal sampling. A chemical manufacturer wants to estimate the amount of yield of a crucial ingredient X in a batch of raw material in order to decide whether it should receive special handling. The yield ranges between 2 and 3 pounds (per gallon), and the manufacturer has compiled the distribution of the last 100 batches.\nThe manufacturer currently uses the decision rule that if the mean of nine samples from the batch (which vary only because of measurement error, which is the reason that he takes nine samples rather than just one) indicates that the batch mean is greater than 2.5 gallons, the batch is accepted. 
The first question Schlaifer asks, as a sampling-theory waystation to the more general question, is the likelihood that a given batch with any given yield — say 2.3 gallons — will produce a set of samples with a mean as great or greater than 2.5 gallons.\nWe are told that the manufacturer has in hand nine samples from a given batch; they are 1.84, 1.75, 1.39, 1.65, 3.53, 1.03,\n2.73, 2.86, and 1.96, with a mean of 2.08. Because we are also told that the manufacturer considers the extent of sample variation to be the same at all yield levels, we may — if we are again working with 2.3 as our example of a possible universe — therefore add (2.3 minus 2.08 =) 0.22 to each of these nine observations, so as to constitute a bootstrap-type universe; we do this on the grounds that this is our best guess about the constitution of that distribution with a mean at (say) 2.3.\nWe then repeatedly draw samples of nine observations from this distribution (centered at 2.3) to see how frequently its mean exceeds 2.5. This work is so straightforward that we need not even state the steps in the procedure.\n\n\n31.4.2 Estimating the Posterior Distribution\nNext we estimate the posterior distribution. Figure 31.1 shows the prior distribution of batch yields, based on 100 previous batches.\n\n\n\n\n\nFigure 31.1: Posterior distribution of batch yields\n\n\n\n\nNotation: S m = set of batches (where total S = 100) with a particular mean m (say, m = 2.1). x i = particular observation (say, x 3 = 1.03). s = the set of x i .\nWe now perform for each of the S m (categorized into the tenth-of-gallon divisions between 2.1 and 3.0 gallons), each corresponding to one of the yields ranging from 2.1 to 3.0, the same sort of sampling operation performed for S m=2.3 in the previous problem. But now, instead of using the manufacturer’s decision criterion of 2.5, we construct an interval of arbitrary width around the sample mean of 2.08 — say at .1 intervals from 2.03 to 2.13 — and then work with the weighted proportions of sample means that fall into this interval.\n\nUsing a bootstrap-like approach, we presume that the sub-universe of observations related to each S m equals the mean of that S m — say, 2.1) plus (minus) the mean of the x i (equals 2.05) added to (subtracted from) each of the nine x i , say, 1.03 + .05 = 1.08. For a distribution centered at 2.3, the values would be (1.84 + .22 = 2.06, 1.75 + .22 = 1.97…).\nWorking with the distribution centered at 2.3 as an example: Constitute a universe of the values (1.84+.22=2.06, 1.75 + .22 = 1.97…). Here we may notice that the variability in the sample enters into the analysis at this point, rather than when the sample evidence is combined with the prior distribution; this is in contrast to conventional Bayesian practice where the posterior is the result of the prior and sample means weighted by the reciprocals of the variances (see e.g. (Box and Tiao 1992, 17 and Appendix A1.1)).\nDraw nine observations from this universe (with replacement, of course), compute the mean, and record.\nRepeat step 2 perhaps 1000 times and plot the distribution of outcomes.\nCompute the percentages of the means within (say) .5 on each side of the sample mean, i. e. from 2.03–2.13. 
The resulting number — call it UP i — is the un-standardized (un-normalized) effect of this sub-distribution in the posterior distribution.\nRepeat steps 1-5 to cover each other possible batch yield from 2.0 to 3.0 (2.3 was just done).\nWeight each of these sub-distributions — actually, its UP i — by its prior probability, and call that WP i -.\nStandardize the WP i s to a total probability of 1.0. The result is the posterior distribution. The value found is 2.283, which the reader may wish to compare with a theoretically-obtained result (which Schlaifer does not give).\n\nThis procedure must be biased because the numbers of “hits” will differ between the two sides of the mean for all sub-distributions except that one centered at the same point as the sample, but the extent and properties of this bias are as-yet unknown. The bias would seem to be smaller as the interval is smaller, but a small interval requires a large number of simulations; a satisfactorily narrow interval surely will contain relatively few trials, which is a practical problem of still-unknown dimensions.\nAnother procedure — less theoretically justified and probably more biased — intended to get around the problem of the narrowness of the interval, is as follows:\n\n(5a.) Compute the percentages of the means on each side of the sample mean, and note the smaller of the two (or in another possible process, the difference of the two). The resulting number — call it UP i — is the un-standardized (un-normalized) weight of this sub-distribution in the posterior distribution.\n\nAnother possible criterion — a variation on the procedure in 5a — is the difference between the two tails; for a universe with the same mean as the sample, this difference would be zero." + }, + { + "objectID": "bayes_simulation.html#conclusion", + "href": "bayes_simulation.html#conclusion", + "title": "31  Bayesian Analysis by Simulation", + "section": "31.5 Conclusion", + "text": "31.5 Conclusion\nAll but the simplest problems in conditional probability are confusing to the intuition even if not difficult mathematically. But when one tackles Bayesian and other problems in probability with experimental simulation methods rather than with logic, neither simple nor complex problems need be difficult for experts or beginners.\nThis chapter shows how simulation can be a helpful and illuminating way to approach problems in Bayesian analysis.\nSimulation has two valuable properties for Bayesian analysis:\n\nIt can provide an effective way to handle problems whose analytic solution may be difficult or impossible.\nSimulation can provide insight to problems that otherwise are difficult to understand fully, as is peculiarly the case with Bayesian analysis.\n\nBayesian problems of updating estimates can be handled easily and straightforwardly with simulation, whether the data are discrete or continuous. The process and the results tend to be intuitive and transparent. Simulation works best with the original raw data rather than with abstractions from them via percentages and distributions. This can aid the understanding as well as facilitate computation.\n\n\n\n\nBox, George E. P., and George C. Tiao. 1992. Bayesian Inference in Statistical Analysis. New York: Wiley & Sons, Inc. https://www.google.co.uk/books/edition/Bayesian_Inference_in_Statistical_Analys/T8Askeyk1k4C.\n\n\nCascells, Ward, Arno Schoenberger, and Thomas B. Grayboys. 1978. “Interpretation by Physicians of Clinical Laboratory Results.” New England Journal of Medicine 299: 999–1001. 
https://www.nejm.org/doi/full/10.1056/NEJM197811022991808.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its Applications: Volume i. 3rd ed. Vol. 1. New York: John Wiley & Sons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nFisher, Ronald Aylmer. 1959. “Statistical Methods and Scientific Inference.” https://archive.org/details/statisticalmetho0000fish.\n\n\nPeirce, Charles Sanders. 1923. Chance, Love, and Logic: Philosophical Essays. New York: Harcourt Brace & Company, Inc. https://www.gutenberg.org/files/65274/65274-h/65274-h.htm.\n\n\nSchlaifer, Robert. 1961. Introduction to Statistics for Business Decisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl.\n\n\nTversky, Amos, and Daniel Kahneman. 1982. “Evidential Impact of Base Rates.” In Judgement Under Uncertainty: Heuristics and Biases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky. Cambridge: Cambridge University Press. https://www.google.co.uk/books/edition/Judgment_Under_Uncertainty/_0H8gwj4a1MC.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory Statistics. 5th ed. New York: John Wiley & Sons." + }, + { + "objectID": "exercise_solutions.html#solution-18-2", + "href": "exercise_solutions.html#solution-18-2", + "title": "32  Exercise Solutions", + "section": "32.1 Solution 18-2", + "text": "32.1 Solution 18-2\n\nURN 36#1 36#0 pit\nURN 77#1 52#0 chi\nREPEAT 1000\n SAMPLE 72 pit pit$\n SAMPLE 129 chi chi$\n MEAN pit$ p\n MEAN chi$ c\n SUBTRACT p c d\n SCORE d scrboard\nEND\nHISTOGRAM scrboard\nPERCENTILE scrboard (2.5 97.5) interval\nPRINT interval\n\nResults:\nINTERVAL = -0.25921 0.039083 (estimated 95 percent confidence interval)." + }, + { + "objectID": "exercise_solutions.html#solution-21-1", + "href": "exercise_solutions.html#solution-21-1", + "title": "32  Exercise Solutions", + "section": "32.2 Solution 21-1", + "text": "32.2 Solution 21-1\n\nREPEAT 1000\n GENERATE 200 1,100 a\n COUNT a <= 7 b\n DIVIDE b 200 c\n SCORE c scrboard\nEND\nHISTOGRAM scrboard\nPERCENTILE z (2.5 97.5) interval\nPRINT interval\n\nResult:\nINTERVAL = 0.035 0.105 [estimated 95 percent confidence interval]" + }, + { + "objectID": "exercise_solutions.html#solution-21-2", + "href": "exercise_solutions.html#solution-21-2", + "title": "32  Exercise Solutions", + "section": "32.3 Solution 21-2", + "text": "32.3 Solution 21-2\nWe use the “bootstrap” technique of drawing many bootstrap re-samples with replacement from the original sample, and observing how the re-sample means are distributed.\n\nNUMBERS (30 32 31 28 31 29 29 24 30 31 28 28 32 31 24 23 31 27 27 31) a\n\nREPEAT 1000\n ' Do 1000 trials or simulations\n SAMPLE 20 a b\n ' Draw 20 lifetimes from a, randomly and with replacement\n MEAN b c\n ' Find the average lifetime of the 20\n SCORE c scrboard\n ' Keep score\nEND\n\nHISTOGRAM scrboard\n' Graph the experiment results\n\nPERCENTILE scrboard (2.5 97.5) interval\n' Identify the 2.5th and 97.5th percentiles. 
These percentiles will\n' enclose 95 percent of the resample means.\n\nResult:\nINTERVAL = 27.7 30.05 [estimated 95 percent confidence interval]" + }, + { + "objectID": "exercise_solutions.html#solution-21-3", + "href": "exercise_solutions.html#solution-21-3", + "title": "32  Exercise Solutions", + "section": "32.4 Solution 21-3", + "text": "32.4 Solution 21-3\n\nNUMBERS (.02 .026 .023 .017 .022 .019 .018 .018 .017 .022) a\nREPEAT 1000\n SAMPLE 10 a b\n MEAN b c\n SCORE c scrboard\nEND\nHISTOGRAM scrboard\nPERCENTILE scrboard (2.5 97.5) interval\nPRINT interval\n\nResult:\nINTERVAL = 0.0187 0.0219 [estimated 95 percent confidence interval]" + }, + { + "objectID": "exercise_solutions.html#solution-23-1", + "href": "exercise_solutions.html#solution-23-1", + "title": "32  Exercise Solutions", + "section": "32.5 Solution 23-1", + "text": "32.5 Solution 23-1\n\nCreate two groups of paper cards: 25 with participation rates, and 25 with the spread values. Arrange the cards in pairs in accordance with the table, and compute the correlation coefficient between the shuffled participation and spread variables.\nShuffle one of the sets, say that with participation, and compute correlation between shuffled participation and spread.\nRepeat step 2 many, say 1000, times. Compute the proportion of the trials in which correlation was at least as negative as that for the original data.\n\n\nDATA (67.5 65.6 65.7 59.3 39.8 76.1 73.6 81.6 75.5 85.0 80.3\n54.5 79.1 94.0 80.3 89.6 44.7 82.7 89.7 83.6 84.9 76.3 74.7\n68.8 79.3) partic1\n\nDATA (13 19 18 12 20 5 1 1 2 3 5 6 5 4 8 1 3 18 13 2 2 12 17 26 6)\nspread1\n\nCORR partic1 spread1 corr\n\n' compute correlation - it’s -.37\nREPEAT 1000\n SHUFFLE partic1 partic2\n ' shuffle the participation rates\n CORR partic2 spread1 corrtria\n ' compute re-sampled correlation\n SCORE corrtria z\n ' keep the value in the scoreboard\nEND\nHISTOGRAM z\nCOUNT z <= -.37 n\n' count the trials when result <= -.37\nDIVIDE n 1000 prob\n' compute the proportion of such trials\nPRINT prob\nConclusion: The results of 5 Monte Carlo experiments each of a thousand such simulations are as follows:\nprob = 0.028, 0.045, 0.036, 0.04, 0.025.\nFrom this we may conclude that the voter participation rates probably are negatively related to the vote spread in the election. The actual value of the correlation (-.37398) cannot be explained by chance alone. In our Monte Carlo simulation of the null-hypothesis a correlation that negative is found only 3 percent — 4 percent of the time.\nDistribution of the test statistic’s value in 1000 independent trials corresponding to the null-hypothesis:" + }, + { + "objectID": "exercise_solutions.html#solution-23-2", + "href": "exercise_solutions.html#solution-23-2", + "title": "32  Exercise Solutions", + "section": "32.6 Solution 23-2", + "text": "32.6 Solution 23-2\n\nNUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)\nhomeruns\nNUMBERS (135 153 120 161 138 175 126 200 205 147 165 124\n169 156 36 98 82 131) strikeout\nMULTIPLY homerun strikeout r\nSUM r s\nREPEAT 1000\n SHUFFLE strikeout strikout2\n MULTIPLY strikout2 homeruns c\n SUM c cc\n SUBTRACT s cc d\n SCORE d scrboard\nEND\nHISTOGRAM scrboard\nCOUNT scrboard >=s k\nDIVIDE k 1000 kk\nPRINT kk\n\nResult: kk = 0\nInterpretation: In 1000 simulations, random shuffling never produced a value as high as observed. Therefore, we conclude that random chance could not be responsible for the observed degree of correlation." 
+ }, + { + "objectID": "exercise_solutions.html#solution-23-3", + "href": "exercise_solutions.html#solution-23-3", + "title": "32  Exercise Solutions", + "section": "32.7 Solution 23-3", + "text": "32.7 Solution 23-3\n\nNUMBERS (14 20 0 38 9 38 22 31 33 11 40 5 15 32 3 29 5 32)\nhomeruns\nNUMBERS (135 153 120 161 138 175 126 200 205 147 165 124\n169 156 36 98 82 131) strikeou\nCORR homeruns strikeou r\n REPEAT 1000\n SHUFFLE strikeou strikou2\n CORR strikou2 homeruns r$\n SCORE r$ scrboard\nEND\nHISTOGRAM scrboard\nCOUNT scrboard >=0.62 k\nDIVIDE k 1000 kk\nPRINT kk r\n\nResult: kk = .001\nInterpretation: A correlation coefficient as high as the observed value (.62) occurred only 1 out of 1000 times by chance. Hence, we rule out chance as an explanation for such a high value of the correlation coefficient." + }, + { + "objectID": "exercise_solutions.html#solution-23-4", + "href": "exercise_solutions.html#solution-23-4", + "title": "32  Exercise Solutions", + "section": "32.8 Solution 23-4", + "text": "32.8 Solution 23-4\n\nREAD FILE “noreen2.dat” exrate msuppl\n' read data from file\nCORR exrate msuppl stat\n' compute correlation stat (it’s .419)\nREPEAT 1000\n SHUFFLE msuppl msuppl$\n ' shuffle money supply values\n CORR exrate msuppl$ stat$\n ' compute correlation\n SCORE stat$ scrboard\n ' keep the value in a scoreboard\nEND\nPRINT stat\nHISTOGRAM scrboard\nCOUNT scrboard >=0.419 k\nDIVIDE k 1000 prob\nPRINT prob\nDistribution of the correlation after permutation of the data:\n\nResult: prob = .001\nInterpretation: The observed correlation (.419) between the exchange rate and the money supply is seldom exceeded by random experiments with these data. Thus, the observed result 0.419 cannot be explained by chance alone and we conclude that it is statistically significant." + }, + { + "objectID": "acknowlegements.html#for-the-second-edition", + "href": "acknowlegements.html#for-the-second-edition", + "title": "33  Acknowledgements", + "section": "33.1 For the second edition", + "text": "33.1 For the second edition\nMany people have helped in the long evolution of this work. First was the late Max Beberman, who in 1967 immediately recognized the potential of resampling statistics for high school students as well as for all others. Louis Guttman and Joseph Doob provided important encouragement about the theoretical and practical value of resampling statistics. Allen Holmes cooperated with me in teaching the first class at University High School in Urbana, Illinois, in 1967. Kenneth Travers found and supervised several PhD students — David Atkinson and Carolyn Shevokas outstanding among them — who experimented with resampling statistics in high school and college classrooms and proved its effectiveness; Travers also carried the message to many secondary school teachers in person and in his texts. In 1973 Dan Weidenfield efficiently wrote the first program for the mainframe (then called “Simple Stats”). Derek Kumar wrote the first interactive program for the Apple II. Chad McDaniel developed the IBM version, with touchup by Henry van Kuijk and Yoram Kochavi. Carlos Puig developed the powerful 1990 version of the program. William E. Kirwan, Robert Dorfman, and Rudolf Lamone have provided their good offices for us to harness the resources of the University of Maryland and, in particular, the College of Business and Management. Terry Oswald worked day and night with great dedication on the program and on commercial details to start the marketing of RESAMPLING STATS. 
In mid-1989, Peter Bruce assumed the overall stewardship of RESAMPLING STATS, and has been proceeding with energy, good judgment, and courage. He has contributed to this volume in many ways, always excellently (including the writing and re-writing of programs, as well as explanations of the bootstrap and of the interpretation of p-values). Vladimir Koliadin wrote the code for several of the problems in this edition, and Cheinan Marks programmed the Windows and Macintosh versions of Resampling Stats. Toni York handled the typesetting and desktop publishing through various iterations, Barbara Shaw provided expert proofreading and desktop publishing services for the second printing of the second edition, and Chris Brest produced many of the figures. Thanks to all of you, and to others who should be added to the list." + }, + { + "objectID": "technical_note.html", + "href": "technical_note.html", + "title": "34  Technical Note to the Professional Reader", + "section": "", + "text": "The material presented in this book fits together with the technical literature as follows: Though I (JLS) had proceeded from first principles rather than from the literature, I have from the start cited work by Chung and Fraser (1958) and Meyer Dwass (1957) They suggested taking samples of permutations in a two-sample test as a way of extending the applicability of Fisher’s randomization test (1935; 1960, chap. III, section 21). Resampling with replacement from a single sample to determine sample statistic variability was suggested by Simon (1969). Independent work by Efron (1979) explored the properties of this technique (Efron termed it the “bootstrap”) and lent it theoretical support. The notion of using these techniques routinely and in preference to conventional techniques based on Gaussian assumptions was suggested by Simon (1969) and by Simon, Atkinson, and Shevokas (1976).\n\n\n\n\nChung, James H, and Donald AS Fraser. 1958. “Randomization Tests for a Multivariate Two-Sample Problem.” Journal of the American Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf.\n\n\nDwass, Meyer. 1957. “Modified Randomization Tests for Nonparametric Hypotheses.” The Annals of Mathematical Statistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf.\n\n\nEfron, Bradley. 1979. “Bootstrap Methods; Another Look at the Jackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh: Oliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nSimon, Julian Lincoln. 1969. Basic Research Methods in Social Science. 1st ed. New York: Random House.\n\n\nSimon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976. “Probability and Statistics: Experimental Results of a Radically Different Teaching Method.” The American Mathematical Monthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf." + }, + { + "objectID": "references.html", + "href": "references.html", + "title": "References", + "section": "", + "text": "Ani Adhikari, John DeNero, and David Wagner. 2021. Computational and\nInferential Thinking: The Foundations of Data Science. https://inferentialthinking.com. https://inferentialthinking.com.\n\n\nArbuthnot, John. 1710. 
“An Argument for Divine Providence, Taken\nfrom the Constant Regularity Observ’d in the Births of Both Sexes. By\nDr. John Arbuthnott, Physitian in Ordinary to Her Majesty, and Fellow of\nthe College of Physitians and the Royal Society.”\nPhilosophical Transactions of the Royal Society of London 27\n(328): 186–90. https://royalsocietypublishing.org/doi/pdf/10.1098/rstl.1710.0011.\n\n\nBarnett, Vic. 1982. Comparative Statistical Inference. 2nd ed.\nWiley Series in Probability and Mathematical Statistics. Chichester:\nJohn Wiley & Sons. https://archive.org/details/comparativestati0000barn.\n\n\nBox, George E. P., and George C. Tiao. 1992. Bayesian Inference in\nStatistical Analysis. New York: Wiley & Sons, Inc.\nhttps://www.google.co.uk/books/edition/Bayesian_Inference_in_Statistical_Analys/T8Askeyk1k4C.\n\n\nBrooks, Charles Ernest Pelham. 1928. “Periodicities in the Nile\nFloods.” Memoirs of the Royal Meteorological Society 2\n(12): 9--26. https://www.rmets.org/sites/default/files/papers/brooksmem2-12.pdf.\n\n\nBulmer, M. G. 1979. Principles of Statistics. New York, NY:\nDover Publications, inc. https://archive.org/details/principlesofstat0000bulm.\n\n\nBurnett, Ed. 1988. The Complete Direct Mail List Handbook:\nEverything You Need to Know about Lists and How to Use Them for Greater\nProfit. Englewood Cliffs, New Jersey: Prentice Hall. https://archive.org/details/completedirectma00burn.\n\n\nCascells, Ward, Arno Schoenberger, and Thomas B. Grayboys. 1978.\n“Interpretation by Physicians of Clinical Laboratory\nResults.” New England Journal of Medicine 299: 999–1001.\nhttps://www.nejm.org/doi/full/10.1056/NEJM197811022991808.\n\n\nCatling, HW, and RE Jones. 1977. “A Reinvestigation of the\nProvenance of the Inscribed Stirrup Jars Found at Thebes.”\nArchaeometry 19 (2): 137–46.\n\n\nChung, James H, and Donald AS Fraser. 1958. “Randomization Tests\nfor a Multivariate Two-Sample Problem.” Journal of the\nAmerican Statistical Association 53 (283): 729–35. https://www.jstor.org/stable/pdf/2282050.pdf.\n\n\nCipolla, C. M. 1981. Fighting the Plague in Seventeenth-Century\nItaly. Merle Curti Lectures. Madison, Wisconsin: University of\nWisconsin Press. https://books.google.co.uk/books?id=Ct\\_OJYgnKCsC.\n\n\nCobb, George W. 2007. “The Introductory Statistics Course: A\nPtolemaic Curriculum?” Technology Innovations in Statistics\nEducation 1 (1). https://escholarship.org/uc/item/6hb3k0nz.\n\n\nColeman, William. 1987. “Experimental Physiology and Statistical\nInference: The Therapeutic Trial in Nineteenth Century\nGermany.” In The Probabilistic Revolution:\nVolume 2: Ideas in the Sciences, edited by Lorenz Krüger, Gerd\nGigerenzer, and Mary S. Morgan. An MIT Press Classic. MIT Press. https://books.google.co.uk/books?id=SLftmgEACAAJ.\n\n\nCook, Earl. 1976. “Limits to Exploitation of Nonrenewable\nResources.” Science 191 (4228): 677–82. https://www.jstor.org/stable/pdf/1741483.pdf.\n\n\nDavenport, Thomas H, and DJ Patil. 2012. “Data Scientist: The\nSexiest Job of the 21st Century.” Harvard Business\nReview 90 (10): 70–76. https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century.\n\n\nDeshpande, Jayant V, AP Gore, and A Shanubhogue. 1995. Statistical\nAnalysis of Nonnormal Data. Taylor & Francis. https://www.google.co.uk/books/edition/Statistical_Analysis_of_Nonnormal_Data/sS0on2XqwwoC.\n\n\nDixon, Wilfrid J, and Frank J Massey Jr. 1983. “Introduction to\nStatistical Analysis.”\n\n\nDonoho, David. 2017. 
“50 Years of Data Science.”\nJournal of Computational and Graphical Statistics 26 (4):\n745–66. http://courses.csail.mit.edu/18.337/2015/docs/50YearsDataScience.pdf.\n\n\nDunleavy, Kieron, Stefania Pittaluga, John Janik, Nicole Grant, Margaret\nShovlin, Richard Little, Robert Yarchoan, Seth Steinberg, Elaine S.\nJaffe, and Wyndham H. Wilson. 2006. “Novel\nTreatment of Burkitt Lymphoma with Dose-Adjusted EPOCH-Rituximab:\nPreliminary Results Showing Excellent Outcome.”\nBlood 108 (11): 2736–36. https://doi.org/10.1182/blood.V108.11.2736.2736.\n\n\nDwass, Meyer. 1957. “Modified Randomization Tests for\nNonparametric Hypotheses.” The Annals of Mathematical\nStatistics, 181–87. https://www.jstor.org/stable/pdf/2237031.pdf.\n\n\nEfron, Bradley. 1979. “Bootstrap Methods; Another Look at the\nJackknife.” The Annals of Statistics 7 (1): 1–26. http://www.econ.uiuc.edu/~econ508/Papers/efron79.pdf.\n\n\nEfron, Bradley, and Robert J Tibshirani. 1993. “An Introduction to\nthe Bootstrap.” In Monographs on Statistics and Applied\nProbability, edited by David R Cox, David V Hinkley, Nancy Reid,\nDonald B Rubin, and Bernard W Silverman. Vol. 57. New York:\nChapman & Hall.\n\n\nFeller, William. 1968. An Introduction to Probability Theory and Its\nApplications: Volume i. 3rd ed. Vol. 1. New York: John Wiley &\nSons. https://www.google.co.uk/books/edition/An_Introduction_to_Probability_Theory_an/jbkdAQAAMAAJ.\n\n\nFeynman, Richard P., and Ralph Leighton. 1988. What Do You\nCare What Other People Think? Further Adventures of a Curious\nCharacter. New York, NY: W. W. Norton; Company, Inc. https://archive.org/details/whatdoyoucarewha0000feyn_x5w7.\n\n\nFisher, Ronald Aylmer. 1935. The Design of Experiments. 1st ed.\nEdinburgh: Oliver and Boyd Ltd. https://archive.org/details/in.ernet.dli.2015.502684.\n\n\n———. 1959. “Statistical Methods and Scientific Inference.”\nhttps://archive.org/details/statisticalmetho0000fish.\n\n\n———. 1960. The Design of Experiments. 7th ed. Edinburgh:\nOliver and Boyd Ltd. https://archive.org/details/designofexperime0000rona_q7u5.\n\n\nFussler, Herman Howe, and Julian Lincoln Simon. 1961. Patterns in\nthe Use of Books in Large Research Libraries. Chicago: University\nof Chicago Library.\n\n\nGardner, Martin. 1985. Mathematical Magic Show. Penguin Books\nLtd, Harmondsworth.\n\n\n———. 2001. The Colossal Book of Mathematics. W.W. Norton &\nCompany Inc., New York. https://archive.org/details/B-001-001-265.\n\n\nGilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot\nHand in Basketball: On the Misperception of Random Sequences.”\nCognitive Psychology 17 (3): 295–314. https://www.joelvelasco.net/teaching/122/Gilo.Vallone.Tversky.pdf.\n\n\nGnedenko, Boris Vladimirovich, I Aleksandr, and Akovlevich Khinchin.\n1962. An Elementary Introduction to the Theory of Probability.\nNew York, NY, USA: Dover Publications, Inc. https://archive.org/details/gnedenko-khinchin-an-elementary-introduction-to-the-theory-of-probability.\n\n\nGoldberg, Samuel. 1986. Probability: An Introduction. Courier\nCorporation. https://www.google.co.uk/books/edition/Probability/CmzFx9rB_FcC.\n\n\nGraunt, John. 1759. “Natural and Political Observations Mentioned\nin a Following Index and Made Upon the Bills of Mortality.” In\nCollection of Yearly Bills of Mortality, from 1657 to 1758\nInclusive, edited by Thomas Birch. London: A. Miller. https://archive.org/details/collectionyearl00hebegoog.\n\n\nHald, Anders. 1990. A History of Probability and Statistics and\nTheir Applications Before 1750. 
New York: John Wiley & Sons. https://archive.org/details/historyofprobabi0000hald.\n\n\nHansen, Morris H, William N Hurwitz, and William G Madow. 1953.\n“Sample Survey Methods and Theory. Vol. I. Methods and\nApplications.” https://archive.org/details/SampleSurveyMethodsAndTheoryVol1.\n\n\nHodges Jr, Joseph Lawson, and Erich Leo Lehmann. 1970. Basic\nConcepts of Probability and Statistics. 2nd ed. San Francisco,\nCalifornia: Holden-Day, Inc. https://archive.org/details/basicconceptsofp0000unse_m8m9.\n\n\nHollander, Myles, and Douglas A Wolfe. 1999. Nonparametric\nStatistical Methods. 2nd ed. Wiley Series in Probability and\nStatistics: Applied Probability and Statistics. New York: John Wiley\n& Sons, Inc. https://archive.org/details/nonparametricsta0000ed2holl.\n\n\nHyndman, Rob J, and Yanan Fan. 1996. “Sample Quantiles in\nStatistical Packages.” The American Statistician 50 (4):\n361–65. https://www.jstor.org/stable/pdf/2684934.pdf.\n\n\nKahn, Harold A, and Christopher T Sempos. 1989. Statistical Methods\nin Epidemiology. Vol. 12. Monographs in Epidemiology and\nBiostatistics. New York: Oxford University Press. https://www.google.co.uk/books/edition/Statistical_Methods_in_Epidemiology/YERYAgAAQBAJ.\n\n\nKinsey, Alfred C, Wardell B Pomeroy, and Clyde E Martin. 1948.\n“Sexual Behavior in the Human Male.” W. B. Saunders\nCompany. https://books.google.co.uk/books?id=pfMKrY3VvigC.\n\n\nKornberg, Arthur. 1991. For the Love of Enzymes: The Odyssey of a\nBiochemist. Cambridge, Massachusetts: Harvard University Press. https://archive.org/details/forloveofenzymes00arth.\n\n\nKotz, Samuel, and Norman Lloyd Johnson. 1992. Breakthroughs in\nStatistics. New York: Springer-Verlag.\n\n\nLee, Peter M. 2012. Bayesian Statistics: An Introduction. 4th\ned. Wiley Online Library. https://www.york.ac.uk/depts/maths/histstat/pml1/bayes/book.htm.\n\n\nLorie, James Hirsch, and Harry V Roberts. 1951. Basic Methods of\nMarketing Research. McGraw-Hill.\n\n\nLyon, Herbert L, and Julian Lincoln Simon. 1968. “Price Elasticity\nof the Demand for Cigarettes in the United States.” American\nJournal of Agricultural Economics 50 (4): 888–95.\n\n\nMartineau, Adrian R, David A Jolliffe, Richard L Hooper, Lauren\nGreenberg, John F Aloia, Peter Bergman, Gal Dubnov-Raz, et al. 2017.\n“Vitamin D Supplementation to Prevent Acute\nRespiratory Tract Infections: Systematic Review and Meta-Analysis of\nIndividual Participant Data.” Bmj 356.\n\n\nMcCabe, George P, and Linda Doyle McCabe. 1989. Instructor’s Guide\nwith Solutions for Introduction to the Practice of Statistics. New\nYork: W. H. Freeman.\n\n\nMosteller, Frederick. 1987. Fifty Challenging Problems in\nProbability with Solutions. Courier Corporation.\n\n\nMosteller, Frederick, and Robert E. K. Rourke. 1973. Sturdy\nStatistics: Nonparametrics and Order Statistics. Addison-Wesley\nPublishing Company.\n\n\nMosteller, Frederick, Robert E. K. Rourke, and George Brinton Thomas Jr.\n1961. Probability with Statistical Applications. 2nd ed. https://archive.org/details/probabilitywiths0000most.\n\n\nNoreen, Eric W. 1989. Computer-Intensive Methods for Testing\nHypotheses. New York: John Wiley & Sons. https://archive.org/details/computerintensiv0000nore.\n\n\nPeirce, Charles Sanders. 1923. Chance, Love, and Logic:\nPhilosophical Essays. New York: Harcourt Brace & Company, Inc.\nhttps://www.gutenberg.org/files/65274/65274-h/65274-h.htm.\n\n\nPiketty, Thomas. 2018. “Brahmin Left Vs Merchant Right: Rising\nInequality & the Changing Structure of Political Conflict.”\n2018. 
https://www.prsinstitute.org/downloads/related/economics/RisingInequalityandtheChangingStructureofPoliticalConflict1.pdf.\n\n\nPitman, Edwin JG. 1937. “Significance Tests Which May Be Applied\nto Samples from Any Populations.” Supplement to the Journal\nof the Royal Statistical Society 4 (1): 119–30. https://www.jstor.org/stable/pdf/2984124.pdf.\n\n\nRaiffa, Howard. 1968. “Decision Analysis: Introductory Lectures on\nChoices Under Uncertainty.” https://archive.org/details/decisionanalysis0000raif.\n\n\nRuark, Arthur Edward, and Harold Clayton Urey. 1930. Atoms,\nMoleculues and Quanta. New York, NY: McGraw-Hill book\ncompany, inc. https://archive.org/details/atomsmoleculesqu00ruar.\n\n\nRussell, Bertrand. 1945. A History of Western\nPhilosophy. New York: Simon; Schuster.\n\n\nSavage, Leonard J. 1972. The Foundations of Statistics. New\nYork: Dover Publications, Inc.\n\n\nSavant, Marilyn vos. 1990. “Ask Marilyn.” 1990. https://web.archive.org/web/20160318182523/http://marilynvossavant.com/game-show-problem.\n\n\nSchlaifer, Robert. 1961. Introduction to Statistics for Business\nDecisions. New York: MacGraw-Hill. https://archive.org/details/introductiontost00schl.\n\n\nSelvin, Steve. 1975. “Letters to the Editor.” The\nAmerican Statistician 29 (1): 67. http://www.jstor.org/stable/2683689.\n\n\nSemmelweis, Ignác Fülöp. 1983. The Etiology, Concept, and\nProphylaxis of Childbed Fever. Translated by K. Codell Carter.\nMadison, Wisconsin: University of Wisconsin Press. https://archive.org/details/etiologyconcepta0000unse.\n\n\nShurtleff, Dewey. 1970. “Some Characteristics Related to the\nIncidence of Cardiovascular Disease and Death: Framingham Study, 16-Year\nFollow-up.” Section 26. Edited by William B. Kannel and Tavia\nGordon. The Framingham Study: An Epidemiological Investigation of\nCardiovascular Disease. Washington, D.C.: U.S. Government Printing\nOffice. https://upload.wikimedia.org/wikipedia/commons/6/6d/The_Framingham_study_-_an_epidemiological_investigation_of_cardiovascular_disease_sec.26_1970_%28IA_framinghamstudye00kann_25%29.pdf.\n\n\nSimon, Julian Lincoln. 1967. “Doctors, Smoking, and Reference\nGroups.” Public Opinion Quarterly 31 (4): 646–47.\n\n\n———. 1969. Basic Research Methods in Social Science. 1st ed.\nNew York: Random House.\n\n\n———. 1992. Resampling: The New Statistics. 1st ed.\nArlington, VA: Resampling Stats Inc.\n\n\n———. 1998. “The Philosophy and Practice of Resampling\nStatistics.” 1998. http://www.juliansimon.org/writings/Resampling_Philosophy.\n\n\nSimon, Julian Lincoln, David T Atkinson, and Carolyn Shevokas. 1976.\n“Probability and Statistics: Experimental Results of a Radically\nDifferent Teaching Method.” The American Mathematical\nMonthly 83 (9): 733–39. https://www.jstor.org/stable/pdf/2318961.pdf.\n\n\nSimon, Julian Lincoln, and Paul Burstein. 1985. Basic Research\nMethods in Social Science. 3rd ed. New York: Random House.\n\n\nSimon, Julian Lincoln, and Allen Holmes. 1969. “A New Way to Teach\nProbability Statistics.” The Mathematics Teacher 62 (4):\n283–88.\n\n\nSimon, Julian Lincoln, Manouchehr Mokhtari, and Daniel H Simon. 1996.\n“Are Mergers Beneficial or Detrimental? Evidence from Advertising\nAgencies.” International Journal of the Economics of\nBusiness 3 (1): 69–82.\n\n\nSimon, Julian Lincoln, and David M Simon. 1996. “The Effects of\nRegulations on State Liquor Prices.” Empirica 23:\n303–16.\n\n\nStøvring, H. 1999. 
“On Radicke and His Method for Testing Mean\nDifferences.” Journal of the Royal Statistical Society:\nSeries D (The Statistician) 48 (2): 189–201. https://www.jstor.org/stable/pdf/2681185.pdf.\n\n\nSudman, Seymour. 1976. Applied Sampling. New York:\nAcademic Press. https://archive.org/details/appliedsampling0000unse.\n\n\nTukey, John W. 1977. Exploratory Data Analysis. Reading, MA,\nUSA: Addison-Wesley.\n\n\nTversky, Amos, and Daniel Kahneman. 1982. “Evidential Impact of\nBase Rates.” In Judgement Under Uncertainty: Heuristics and\nBiases, edited by Daniel Kahneman, Paul Slovic, and Amos Tversky.\nCambridge: Cambridge University Press. https://www.google.co.uk/books/edition/Judgment_Under_Uncertainty/_0H8gwj4a1MC.\n\n\nVazsonyi, Andrew. 1999. “Which Door Has the Cadillac.”\nDecision Line 30 (1): 17–19. https://web.archive.org/web/20140413131827/http://www.decisionsciences.org/DecisionLine/Vol30/30_1/vazs30_1.pdf.\n\n\nWallis, Wilson Allen, and Harry V Roberts. 1956. Statistics, a New\nApproach. New York: The Free Press.\n\n\nWhitworth, William Allen. 1897. DCC Exercises in Choice\nand Chance. Cambridge, UK: Deighton Bell; Co. https://archive.org/details/dccexerciseschoi00whit.\n\n\nWinslow, Charles-Edward Amory. 1980. The Conquest of Epidemic\nDisease: A Chapter in the History of Ideas. Madison, Wisconsin:\nUniversity of Wisconsin Press. https://archive.org/details/conquestofepidem0000wins_p3k0.\n\n\nWonnacott, Thomas H, and Ronald J Wonnacott. 1990. Introductory\nStatistics. 5th ed. New York: John Wiley & Sons.\n\n\nZhou, Qixing, Christopher E Gibson, and Robert H Foy. 2000.\n“Long-Term Changes of Nitrogen and Phosphorus Loadings to a Large\nLake in North-West Ireland.” Water Research 34 (3):\n922–26. https://doi.org/10.1016/S0043-1354(99)00199-2." + } +] \ No newline at end of file diff --git a/r-book/significance.html b/r-book/significance.html new file mode 100644 index 00000000..d0319dfc --- /dev/null +++ b/r-book/significance.html @@ -0,0 +1,688 @@ + + + + + + + + + +Resampling statistics - 22  The Concept of Statistical Significance in Testing Hypotheses + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

22  The Concept of Statistical Significance in Testing Hypotheses

+
+ + + +
+ + + + +
+ + +
+ +

This chapter interprets the concept of statistical significance and the term “significant” in connection with the logic of significance tests. It also discusses the concept of “level of significance.”

+
+

22.1 The logic of hypothesis tests

+

Let’s address the logic of hypothesis tests by considering a variety of examples in everyday thinking:

+

Consider the nine-year-old who tells the teacher that the dog ate the homework. Why does the teacher not accept the child’s excuse? Clearly it is because the event would be too “unusual.” But why do we think that way?

+

Let’s speculate that you survey a million adults, and only three report that they have ever heard of a real case where a dog ate somebody’s homework. You are a teacher, and a student comes in without homework and says that a dog ate the homework. It could have happened — your survey reports that it really has happened in three lifetimes out of a million. But the event happens only very infrequently.

+

Therefore, you probably conclude that because the event is so unlikely, something else must have happened — and the likeliest alternative is that the student did not do the homework. The logic is that if an event seems very unlikely, it would greatly surprise us if it actually happened, so we assume that there must be a better explanation. This is why we look askance at unlikely coincidences when they are to someone’s benefit.

+

The same line of reasoning underlay John Arbuthnot’s (1710) test of the ratio of births by sex, the first published hypothesis test, though his extension of the logic to God’s design as an alternative hypothesis goes beyond the standard modern framework. It is also the implicit logic in the research on puerperal fever, cholera, and beri-beri, the data for which were shown in Chapter 17, though no explicit mention was made of probability in those cases.

+

Two students sat next to each other at an ACT college-entrance examination in Kentucky in 1987. Out of 219 questions, 211 of the answers were identical, including many that were wrong. Student A was a high school athlete in Kentucky who had failed two previous SAT exams, and Student B thought he saw Student A copying from him. Should one believe that Student A cheated? (The Washington Post, April 19, 1992, p. D2.)

+

You say to yourself: It would be most unlikely that the two test-takers would answer that many questions identically by chance — and we can compute how unlikely that event would be. Because that event is so unlikely, we therefore conclude that one or both cheated. And indeed, the testing service invalidated the athlete’s exam. On the other hand, if all the questions that were answered identically were correct, the result might not be unreasonable. If we knew in how many cases they made the same mistakes, the inquiry would have been clearer, but the newspaper did not contain those details.
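As a rough illustration of how one might compute it, here is a short simulation sketch in Python (the language of the Python edition of this book). The chance that two test-takers working independently mark the same answer on any one question is not reported, so the 0.5 used below is purely an assumed figure, and a generous one for a multiple-choice exam; even so, 211 matches out of 219 essentially never happens by chance.

```python
import numpy as np

rng = np.random.default_rng()

n_questions = 219        # questions on the exam
observed_matches = 211   # identical answers reported
p_match = 0.5            # assumed chance of agreeing on any single question (illustrative)

# Each trial imagines one pair of independent test-takers and counts how
# many of the 219 questions they happen to answer identically.
n_trials = 100_000
matches = rng.binomial(n_questions, p_match, size=n_trials)

proportion = np.mean(matches >= observed_matches)
print("Proportion of trials with 211 or more matches:", proportion)
# Prints 0.0 in practice -- the true probability is far smaller than 1 in 100,000.
```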

+

The court is hearing a murder case. There is no eye-witness, and the evidence consists of such facts as the height and weight and age of the person charged, and other circumstantial evidence. Only one person in 50 million has such characteristics, and you find such a person. Will you convict the person, or will you believe that the evidence was just a coincidence? Of course the evidence might have occurred by bad luck, but the probability is very, very small (1 in 50 million). Will you therefore conclude that because the chance is so small, it is reasonable to assume that the person charged committed the crime?

+

Sometimes the unusual really happens — the court errs by judging that the wrong person did it, and that person goes to prison or even is executed. The best we can do is to make the criterion strict: “Beyond a reasonable doubt.” (People ask: What probability does that criterion represent? But the court will not provide a numerical answer.)

+

Somebody says to you: I am going to deal out five cards and it will be a royal flush — ten, jack, queen, king, and ace of the same suit. The person deals the cards and lo and behold! the royal flush appears. Do you think the occurrence happened just by chance? No, you are likely to be very dubious that it happened by chance. Therefore, you believe there must be some other explanation — that the person fixed the cards, for example.

+

Note: You don’t attach the same meaning to any other permutation (say 3, 6, 7, 7, and king of various suits), even though that permutation is just as rare — unless the person announced exactly that permutation in advance.

+

Indeed, even if the person says nothing, you will be surprised at a royal flush, because this hand has meaning, whereas another given set of five cards does not have any special meaning.
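To put a number on that rarity, here is a small sketch (not from the text) that deals many five-card hands at random and counts the royal flushes, alongside the exact probability of 4 divided by the number of possible five-card hands, which is about 1 chance in 650,000.

```python
import math
import numpy as np

rng = np.random.default_rng()

# Cards are numbered 0-51: suit = card // 13, rank = card % 13.
# Ranks 8 through 12 stand for ten, jack, queen, king and ace.
ROYAL_RANKS = {8, 9, 10, 11, 12}

def is_royal_flush(hand):
    suits = {card // 13 for card in hand}
    ranks = {card % 13 for card in hand}
    return len(suits) == 1 and ranks == ROYAL_RANKS

n_trials = 1_000_000
hits = sum(is_royal_flush(rng.choice(52, size=5, replace=False))
           for _ in range(n_trials))

print("Royal flushes in", n_trials, "random deals:", hits)
print("Exact probability:", 4 / math.comb(52, 5))   # about 1 in 650,000
```

A million random deals will typically turn up only one or two royal flushes, which is the point: when the hand appears on demand, chance is not a plausible explanation.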

+

You see six Volvos in one home’s driveway, and you conclude that it is a Volvo club meeting, or a Volvo salesperson’s meeting. Why? Because it is unlikely that six friends of the same person would all own Volvos unless they were connected formally by that ownership.

+

Two important points complicate the concept of statistical significance:

+
    +
  1. With a large enough sample, every treatment or variable will seem different from every other. Two faces of even a good die (say, “1” and “2”) will produce different results in the very, very long run.
  +
  2. Statistical significance does not imply economic or social significance. Two faces of a die may be statistically different in a huge sample of throws, but a 1/10,000 difference between them is too small to make an economic difference in betting. Statistical significance is only a filter. If it appears, one should then proceed to decide whether there is substantive significance (see the sketch after this list).
  +
+
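Here is a sketch of the first point, using the book’s benchmark-universe style of reasoning but with details that are our own assumptions rather than anything from the text: give face “1” a probability higher than face “2” by one part in 10,000, throw the die first a modest and then an enormous number of times, and ask how often a truly even-handed universe would produce a gap between the two faces as large as the one observed.

```python
import numpy as np

rng = np.random.default_rng()

# A die that is almost, but not quite, fair: face "1" is more likely than
# face "2" by one part in 10,000 (an assumed bias, for illustration only).
probs = np.array([1/6 + 0.00005, 1/6 - 0.00005, 1/6, 1/6, 1/6, 1/6])

for n_throws in (100_000, 1_000_000_000):
    counts = rng.multinomial(n_throws, probs)      # throws landing on each face
    n1, n2 = int(counts[0]), int(counts[1])
    observed_gap = abs(n1 - n2)

    # Benchmark universe: if the two faces were truly equal, each throw showing
    # a "1" or a "2" would be equally likely to be either.  Resample that
    # universe 10,000 times and see how often it matches the observed gap.
    fake_n1 = rng.binomial(n1 + n2, 0.5, size=10_000)
    fake_gap = np.abs(2 * fake_n1 - (n1 + n2))
    proportion = np.mean(fake_gap >= observed_gap)

    print(f"{n_throws:>13,} throws: gap between the faces = {observed_gap:,}, "
          f"proportion of benchmark trials at least that large = {proportion}")
```

With 100,000 throws the tiny bias is usually invisible (the benchmark proportion is typically sizable), while with a billion throws the benchmark universe essentially never reproduces the observed gap, so the difference registers as “statistically significant” even though it remains one part in 10,000 and of no practical interest.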

Interpreting statistical significance is sometimes complex, especially when the interpretation depends heavily upon your prior expectations — as it often does. For example, how should a basketball coach decide whether or not to bench a player for poor performance after a series of missed shots at the basket?

+

Consider Coach John Thompson, who, after Charles Smith missed 10 of 12 shots in the 1989 Georgetown-Notre Dame NCAA game, took Smith out of the game for a time (The Washington Post, March 20, 1989, p. C1). The scientific or decision problem is: Should the coach conclude that Smith is not now the 47 percent shooter he normally is, and therefore bench him? The statistical question is: How likely is a shooter with a 47 percent average to miss 10 of 12 shots? The key issue in the statistical question is the total number of shot attempts we should consider.

+

Would Coach Thompson take Smith out of the game after he missed one shot? Clearly not. Why not? Because one “expects” Smith to miss a shot half the time, and missing one shot therefore does not seem unusual.

+

How about after Smith misses two shots in a row? For the same reason the coach still would not bench him, because this event happens “often” — more specifically, about once in every sequence of four shots.

+

How about after 9 misses out of ten shots? Notice the difference between this case and 9 females among ten calves. In the case of the calves, we expected half females because the experiment is a single isolated trial. The event considered by itself has a small enough probability that it seems unexpected rather than expected. (“Unexpected” seems to be closely related to “happens seldom” or “unusual” in our psychology.) And an event that happens seldom seems to call for explanation, and also seems to promise that it will yield itself to explanation by some unusual concatenation of forces. That is, unusual events lead us to think that they have unusual causes; that is the nub of the matter. (But on the other hand, one can sometimes benefit by paying attention to unusual events, as scientists know when they investigate outliers.)

+

In basketball shooting, we expect 47 percent of Smith’s individual shots to be successful, and we also expect that average for each set of shots. But we also expect some sets of shots to be far from that average because we observe many sets; such variation is inevitable. So when we see a single set of 9 misses in ten shots, we are not very surprised.

+

But how about 29 misses in 30 shots? At some point, one must start to pay attention. (And of course we would pay more attention if beforehand, and never at any other time, the player said, “I can’t see the basket today. My eyes are dim.”)

+

So, how should one proceed? Perhaps proceed the same way as with a coin that keeps coming down heads a very large proportion of the throws, over a long series of tosses: At some point you examine it to see if it has two heads. But if your investigation is negative, in the absence of an indication other than the behavior in question, you continue to believe that there is no explanation and you assume that the event is “chance” and should not be acted upon. In the same way, a coach might ask a player if there is an explanation for the many misses. But if the player answers “no,” the coach should not bench him. (There are difficulties here with truth-telling, of course, but let that go for now.)

+

The key point for the basketball case and other repetitive situations is not to judge that there is an unusual explanation from the behavior of a single sample alone, just as with a short sequence of stock-price changes.

+

We all need to learn that “irregular” (a good word here) sequences are less unusual than they seem to the naked intuition. A streak of 10 out of 12 misses for a 47 percent shooter occurs about 3 percent of the time. That is, about every 33 shots Smith takes, he will begin a sequence of 12 shots that will end with 2 or fewer baskets — perhaps once in every couple of games. This does not seem “very” unusual, perhaps. And if the coach treats each such case as unusual, he will be losing some of the services of a better player than he replaces him with.
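That 3 percent figure is easy to check by simulation. Here is a minimal sketch in Python, using only the 47 percent shooting figure from the text: simulate many sets of 12 shots and count how often no more than 2 of them go in.

```python
import numpy as np

rng = np.random.default_rng()

n_trials = 100_000
# Each trial is one set of 12 shots, each shot going in with probability 0.47.
baskets = rng.binomial(12, 0.47, size=n_trials)

proportion = np.mean(baskets <= 2)    # 10 or more misses out of the 12 shots
print("Proportion of 12-shot sets with 10 or more misses:", proportion)
# Typically prints a value near 0.03 -- about 3 percent, or roughly one
# 12-shot stretch in every 33.
```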

+

In brief, how hard one should search for an explanation should depend on the probability of the event. But one should (almost) assume the absence of an explanation unless one actually finds it.

+

Bayesian analysis (Chapter 31) could be brought to bear upon the matter, bringing in your prior probabilities based on research showing that there is no such thing as a “hot hand” in basketball (see Chapter 14), together with some sort of cost-benefit error-loss calculation comparing Smith and the next best available player.

+
+
+

22.2 The concept of statistical significance

+

“Significance level” is a common term in probability and statistics. It corresponds roughly to the probability that the assumed benchmark universe could give rise to a sample as extreme as the observed sample by chance. The results of Example 16-1 would be phrased as follows: The hypothesis that the radiation treatment affects the sex of the fruit fly offspring is accepted as true at the probability level of .16 (sometimes stated as the 16 percent level of significance). (A more common way of expressing this idea would be to say that the hypothesis is not rejected at the .16 probability level or the 16 percent level of significance. But “not rejected” and “accepted” really do mean much the same thing, despite some arguments to the contrary.) This kind of statistical work is called hypothesis testing.

+

The question of which significance level should be considered “significant” is difficult. How great must a coincidence be before you refuse to believe that it is only a coincidence? It has been conventional in social science to say that if the probability that something happens by chance is less than 5 percent, it is significant. But sometimes the stiffer standard of 1 percent is used. Actually, any fixed cut-off significance level is arbitrary. (And even the whole notion of saying that a hypothesis “is true” or “is not true” is sometimes not useful.) Whether a one-tailed or two-tailed test is used will influence your significance level, and this is why care must be taken in making that choice.

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/site_libs/bootstrap/bootstrap-icons.css b/r-book/site_libs/bootstrap/bootstrap-icons.css new file mode 100644 index 00000000..94f19404 --- /dev/null +++ b/r-book/site_libs/bootstrap/bootstrap-icons.css @@ -0,0 +1,2018 @@ +@font-face { + font-display: block; + font-family: "bootstrap-icons"; + src: +url("./bootstrap-icons.woff?2ab2cbbe07fcebb53bdaa7313bb290f2") format("woff"); +} + +.bi::before, +[class^="bi-"]::before, +[class*=" bi-"]::before { + display: inline-block; + font-family: bootstrap-icons !important; + font-style: normal; + font-weight: normal !important; + font-variant: normal; + text-transform: none; + line-height: 1; + vertical-align: -.125em; + -webkit-font-smoothing: antialiased; + -moz-osx-font-smoothing: grayscale; +} + +.bi-123::before { content: "\f67f"; } +.bi-alarm-fill::before { content: "\f101"; } +.bi-alarm::before { content: "\f102"; } +.bi-align-bottom::before { content: "\f103"; } +.bi-align-center::before { content: "\f104"; } +.bi-align-end::before { content: "\f105"; } +.bi-align-middle::before { content: "\f106"; } +.bi-align-start::before { content: "\f107"; } +.bi-align-top::before { content: "\f108"; } +.bi-alt::before { content: "\f109"; } +.bi-app-indicator::before { content: "\f10a"; } +.bi-app::before { content: "\f10b"; } +.bi-archive-fill::before { content: "\f10c"; } +.bi-archive::before { content: "\f10d"; } +.bi-arrow-90deg-down::before { content: "\f10e"; } +.bi-arrow-90deg-left::before { content: "\f10f"; } +.bi-arrow-90deg-right::before { content: "\f110"; } +.bi-arrow-90deg-up::before { content: "\f111"; } +.bi-arrow-bar-down::before { content: "\f112"; } +.bi-arrow-bar-left::before { content: "\f113"; } +.bi-arrow-bar-right::before { content: "\f114"; } +.bi-arrow-bar-up::before { content: "\f115"; } +.bi-arrow-clockwise::before { content: "\f116"; } +.bi-arrow-counterclockwise::before { content: "\f117"; } +.bi-arrow-down-circle-fill::before { content: "\f118"; } +.bi-arrow-down-circle::before { content: "\f119"; } +.bi-arrow-down-left-circle-fill::before { content: "\f11a"; } +.bi-arrow-down-left-circle::before { content: "\f11b"; } +.bi-arrow-down-left-square-fill::before { content: "\f11c"; } +.bi-arrow-down-left-square::before { content: "\f11d"; } +.bi-arrow-down-left::before { content: "\f11e"; } +.bi-arrow-down-right-circle-fill::before { content: "\f11f"; } +.bi-arrow-down-right-circle::before { content: "\f120"; } +.bi-arrow-down-right-square-fill::before { content: "\f121"; } +.bi-arrow-down-right-square::before { content: "\f122"; } +.bi-arrow-down-right::before { content: "\f123"; } +.bi-arrow-down-short::before { content: "\f124"; } +.bi-arrow-down-square-fill::before { content: "\f125"; } +.bi-arrow-down-square::before { content: "\f126"; } +.bi-arrow-down-up::before { content: "\f127"; } +.bi-arrow-down::before { content: "\f128"; } +.bi-arrow-left-circle-fill::before { content: "\f129"; } +.bi-arrow-left-circle::before { content: "\f12a"; } +.bi-arrow-left-right::before { content: "\f12b"; } +.bi-arrow-left-short::before { content: "\f12c"; } +.bi-arrow-left-square-fill::before { content: "\f12d"; } +.bi-arrow-left-square::before { content: "\f12e"; } +.bi-arrow-left::before { content: "\f12f"; } +.bi-arrow-repeat::before { content: "\f130"; } +.bi-arrow-return-left::before { content: "\f131"; } +.bi-arrow-return-right::before { content: "\f132"; } +.bi-arrow-right-circle-fill::before { content: "\f133"; } +.bi-arrow-right-circle::before { content: "\f134"; } 
+.bi-arrow-right-short::before { content: "\f135"; } +.bi-arrow-right-square-fill::before { content: "\f136"; } +.bi-arrow-right-square::before { content: "\f137"; } +.bi-arrow-right::before { content: "\f138"; } +.bi-arrow-up-circle-fill::before { content: "\f139"; } +.bi-arrow-up-circle::before { content: "\f13a"; } +.bi-arrow-up-left-circle-fill::before { content: "\f13b"; } +.bi-arrow-up-left-circle::before { content: "\f13c"; } +.bi-arrow-up-left-square-fill::before { content: "\f13d"; } +.bi-arrow-up-left-square::before { content: "\f13e"; } +.bi-arrow-up-left::before { content: "\f13f"; } +.bi-arrow-up-right-circle-fill::before { content: "\f140"; } +.bi-arrow-up-right-circle::before { content: "\f141"; } +.bi-arrow-up-right-square-fill::before { content: "\f142"; } +.bi-arrow-up-right-square::before { content: "\f143"; } +.bi-arrow-up-right::before { content: "\f144"; } +.bi-arrow-up-short::before { content: "\f145"; } +.bi-arrow-up-square-fill::before { content: "\f146"; } +.bi-arrow-up-square::before { content: "\f147"; } +.bi-arrow-up::before { content: "\f148"; } +.bi-arrows-angle-contract::before { content: "\f149"; } +.bi-arrows-angle-expand::before { content: "\f14a"; } +.bi-arrows-collapse::before { content: "\f14b"; } +.bi-arrows-expand::before { content: "\f14c"; } +.bi-arrows-fullscreen::before { content: "\f14d"; } +.bi-arrows-move::before { content: "\f14e"; } +.bi-aspect-ratio-fill::before { content: "\f14f"; } +.bi-aspect-ratio::before { content: "\f150"; } +.bi-asterisk::before { content: "\f151"; } +.bi-at::before { content: "\f152"; } +.bi-award-fill::before { content: "\f153"; } +.bi-award::before { content: "\f154"; } +.bi-back::before { content: "\f155"; } +.bi-backspace-fill::before { content: "\f156"; } +.bi-backspace-reverse-fill::before { content: "\f157"; } +.bi-backspace-reverse::before { content: "\f158"; } +.bi-backspace::before { content: "\f159"; } +.bi-badge-3d-fill::before { content: "\f15a"; } +.bi-badge-3d::before { content: "\f15b"; } +.bi-badge-4k-fill::before { content: "\f15c"; } +.bi-badge-4k::before { content: "\f15d"; } +.bi-badge-8k-fill::before { content: "\f15e"; } +.bi-badge-8k::before { content: "\f15f"; } +.bi-badge-ad-fill::before { content: "\f160"; } +.bi-badge-ad::before { content: "\f161"; } +.bi-badge-ar-fill::before { content: "\f162"; } +.bi-badge-ar::before { content: "\f163"; } +.bi-badge-cc-fill::before { content: "\f164"; } +.bi-badge-cc::before { content: "\f165"; } +.bi-badge-hd-fill::before { content: "\f166"; } +.bi-badge-hd::before { content: "\f167"; } +.bi-badge-tm-fill::before { content: "\f168"; } +.bi-badge-tm::before { content: "\f169"; } +.bi-badge-vo-fill::before { content: "\f16a"; } +.bi-badge-vo::before { content: "\f16b"; } +.bi-badge-vr-fill::before { content: "\f16c"; } +.bi-badge-vr::before { content: "\f16d"; } +.bi-badge-wc-fill::before { content: "\f16e"; } +.bi-badge-wc::before { content: "\f16f"; } +.bi-bag-check-fill::before { content: "\f170"; } +.bi-bag-check::before { content: "\f171"; } +.bi-bag-dash-fill::before { content: "\f172"; } +.bi-bag-dash::before { content: "\f173"; } +.bi-bag-fill::before { content: "\f174"; } +.bi-bag-plus-fill::before { content: "\f175"; } +.bi-bag-plus::before { content: "\f176"; } +.bi-bag-x-fill::before { content: "\f177"; } +.bi-bag-x::before { content: "\f178"; } +.bi-bag::before { content: "\f179"; } +.bi-bar-chart-fill::before { content: "\f17a"; } +.bi-bar-chart-line-fill::before { content: "\f17b"; } +.bi-bar-chart-line::before { content: "\f17c"; } 
{ content: "\f782"; } +.bi-postcard-heart-fill::before { content: "\f783"; } +.bi-postcard-heart::before { content: "\f784"; } +.bi-postcard::before { content: "\f785"; } +.bi-search-heart-fill::before { content: "\f786"; } +.bi-search-heart::before { content: "\f787"; } +.bi-sliders2-vertical::before { content: "\f788"; } +.bi-sliders2::before { content: "\f789"; } +.bi-trash3-fill::before { content: "\f78a"; } +.bi-trash3::before { content: "\f78b"; } +.bi-valentine::before { content: "\f78c"; } +.bi-valentine2::before { content: "\f78d"; } +.bi-wrench-adjustable-circle-fill::before { content: "\f78e"; } +.bi-wrench-adjustable-circle::before { content: "\f78f"; } +.bi-wrench-adjustable::before { content: "\f790"; } +.bi-filetype-json::before { content: "\f791"; } +.bi-filetype-pptx::before { content: "\f792"; } +.bi-filetype-xlsx::before { content: "\f793"; } +.bi-1-circle-1::before { content: "\f794"; } +.bi-1-circle-fill-1::before { content: "\f795"; } +.bi-1-circle-fill::before { content: "\f796"; } +.bi-1-circle::before { content: "\f797"; } +.bi-1-square-fill::before { content: "\f798"; } +.bi-1-square::before { content: "\f799"; } +.bi-2-circle-1::before { content: "\f79a"; } +.bi-2-circle-fill-1::before { content: "\f79b"; } +.bi-2-circle-fill::before { content: "\f79c"; } +.bi-2-circle::before { content: "\f79d"; } +.bi-2-square-fill::before { content: "\f79e"; } +.bi-2-square::before { content: "\f79f"; } +.bi-3-circle-1::before { content: "\f7a0"; } +.bi-3-circle-fill-1::before { content: "\f7a1"; } +.bi-3-circle-fill::before { content: "\f7a2"; } +.bi-3-circle::before { content: "\f7a3"; } +.bi-3-square-fill::before { content: "\f7a4"; } +.bi-3-square::before { content: "\f7a5"; } +.bi-4-circle-1::before { content: "\f7a6"; } +.bi-4-circle-fill-1::before { content: "\f7a7"; } +.bi-4-circle-fill::before { content: "\f7a8"; } +.bi-4-circle::before { content: "\f7a9"; } +.bi-4-square-fill::before { content: "\f7aa"; } +.bi-4-square::before { content: "\f7ab"; } +.bi-5-circle-1::before { content: "\f7ac"; } +.bi-5-circle-fill-1::before { content: "\f7ad"; } +.bi-5-circle-fill::before { content: "\f7ae"; } +.bi-5-circle::before { content: "\f7af"; } +.bi-5-square-fill::before { content: "\f7b0"; } +.bi-5-square::before { content: "\f7b1"; } +.bi-6-circle-1::before { content: "\f7b2"; } +.bi-6-circle-fill-1::before { content: "\f7b3"; } +.bi-6-circle-fill::before { content: "\f7b4"; } +.bi-6-circle::before { content: "\f7b5"; } +.bi-6-square-fill::before { content: "\f7b6"; } +.bi-6-square::before { content: "\f7b7"; } +.bi-7-circle-1::before { content: "\f7b8"; } +.bi-7-circle-fill-1::before { content: "\f7b9"; } +.bi-7-circle-fill::before { content: "\f7ba"; } +.bi-7-circle::before { content: "\f7bb"; } +.bi-7-square-fill::before { content: "\f7bc"; } +.bi-7-square::before { content: "\f7bd"; } +.bi-8-circle-1::before { content: "\f7be"; } +.bi-8-circle-fill-1::before { content: "\f7bf"; } +.bi-8-circle-fill::before { content: "\f7c0"; } +.bi-8-circle::before { content: "\f7c1"; } +.bi-8-square-fill::before { content: "\f7c2"; } +.bi-8-square::before { content: "\f7c3"; } +.bi-9-circle-1::before { content: "\f7c4"; } +.bi-9-circle-fill-1::before { content: "\f7c5"; } +.bi-9-circle-fill::before { content: "\f7c6"; } +.bi-9-circle::before { content: "\f7c7"; } +.bi-9-square-fill::before { content: "\f7c8"; } +.bi-9-square::before { content: "\f7c9"; } +.bi-airplane-engines-fill::before { content: "\f7ca"; } +.bi-airplane-engines::before { content: "\f7cb"; } 
+.bi-airplane-fill::before { content: "\f7cc"; } +.bi-airplane::before { content: "\f7cd"; } +.bi-alexa::before { content: "\f7ce"; } +.bi-alipay::before { content: "\f7cf"; } +.bi-android::before { content: "\f7d0"; } +.bi-android2::before { content: "\f7d1"; } +.bi-box-fill::before { content: "\f7d2"; } +.bi-box-seam-fill::before { content: "\f7d3"; } +.bi-browser-chrome::before { content: "\f7d4"; } +.bi-browser-edge::before { content: "\f7d5"; } +.bi-browser-firefox::before { content: "\f7d6"; } +.bi-browser-safari::before { content: "\f7d7"; } +.bi-c-circle-1::before { content: "\f7d8"; } +.bi-c-circle-fill-1::before { content: "\f7d9"; } +.bi-c-circle-fill::before { content: "\f7da"; } +.bi-c-circle::before { content: "\f7db"; } +.bi-c-square-fill::before { content: "\f7dc"; } +.bi-c-square::before { content: "\f7dd"; } +.bi-capsule-pill::before { content: "\f7de"; } +.bi-capsule::before { content: "\f7df"; } +.bi-car-front-fill::before { content: "\f7e0"; } +.bi-car-front::before { content: "\f7e1"; } +.bi-cassette-fill::before { content: "\f7e2"; } +.bi-cassette::before { content: "\f7e3"; } +.bi-cc-circle-1::before { content: "\f7e4"; } +.bi-cc-circle-fill-1::before { content: "\f7e5"; } +.bi-cc-circle-fill::before { content: "\f7e6"; } +.bi-cc-circle::before { content: "\f7e7"; } +.bi-cc-square-fill::before { content: "\f7e8"; } +.bi-cc-square::before { content: "\f7e9"; } +.bi-cup-hot-fill::before { content: "\f7ea"; } +.bi-cup-hot::before { content: "\f7eb"; } +.bi-currency-rupee::before { content: "\f7ec"; } +.bi-dropbox::before { content: "\f7ed"; } +.bi-escape::before { content: "\f7ee"; } +.bi-fast-forward-btn-fill::before { content: "\f7ef"; } +.bi-fast-forward-btn::before { content: "\f7f0"; } +.bi-fast-forward-circle-fill::before { content: "\f7f1"; } +.bi-fast-forward-circle::before { content: "\f7f2"; } +.bi-fast-forward-fill::before { content: "\f7f3"; } +.bi-fast-forward::before { content: "\f7f4"; } +.bi-filetype-sql::before { content: "\f7f5"; } +.bi-fire::before { content: "\f7f6"; } +.bi-google-play::before { content: "\f7f7"; } +.bi-h-circle-1::before { content: "\f7f8"; } +.bi-h-circle-fill-1::before { content: "\f7f9"; } +.bi-h-circle-fill::before { content: "\f7fa"; } +.bi-h-circle::before { content: "\f7fb"; } +.bi-h-square-fill::before { content: "\f7fc"; } +.bi-h-square::before { content: "\f7fd"; } +.bi-indent::before { content: "\f7fe"; } +.bi-lungs-fill::before { content: "\f7ff"; } +.bi-lungs::before { content: "\f800"; } +.bi-microsoft-teams::before { content: "\f801"; } +.bi-p-circle-1::before { content: "\f802"; } +.bi-p-circle-fill-1::before { content: "\f803"; } +.bi-p-circle-fill::before { content: "\f804"; } +.bi-p-circle::before { content: "\f805"; } +.bi-p-square-fill::before { content: "\f806"; } +.bi-p-square::before { content: "\f807"; } +.bi-pass-fill::before { content: "\f808"; } +.bi-pass::before { content: "\f809"; } +.bi-prescription::before { content: "\f80a"; } +.bi-prescription2::before { content: "\f80b"; } +.bi-r-circle-1::before { content: "\f80c"; } +.bi-r-circle-fill-1::before { content: "\f80d"; } +.bi-r-circle-fill::before { content: "\f80e"; } +.bi-r-circle::before { content: "\f80f"; } +.bi-r-square-fill::before { content: "\f810"; } +.bi-r-square::before { content: "\f811"; } +.bi-repeat-1::before { content: "\f812"; } +.bi-repeat::before { content: "\f813"; } +.bi-rewind-btn-fill::before { content: "\f814"; } +.bi-rewind-btn::before { content: "\f815"; } +.bi-rewind-circle-fill::before { content: "\f816"; } 
+.bi-rewind-circle::before { content: "\f817"; } +.bi-rewind-fill::before { content: "\f818"; } +.bi-rewind::before { content: "\f819"; } +.bi-train-freight-front-fill::before { content: "\f81a"; } +.bi-train-freight-front::before { content: "\f81b"; } +.bi-train-front-fill::before { content: "\f81c"; } +.bi-train-front::before { content: "\f81d"; } +.bi-train-lightrail-front-fill::before { content: "\f81e"; } +.bi-train-lightrail-front::before { content: "\f81f"; } +.bi-truck-front-fill::before { content: "\f820"; } +.bi-truck-front::before { content: "\f821"; } +.bi-ubuntu::before { content: "\f822"; } +.bi-unindent::before { content: "\f823"; } +.bi-unity::before { content: "\f824"; } +.bi-universal-access-circle::before { content: "\f825"; } +.bi-universal-access::before { content: "\f826"; } +.bi-virus::before { content: "\f827"; } +.bi-virus2::before { content: "\f828"; } +.bi-wechat::before { content: "\f829"; } +.bi-yelp::before { content: "\f82a"; } +.bi-sign-stop-fill::before { content: "\f82b"; } +.bi-sign-stop-lights-fill::before { content: "\f82c"; } +.bi-sign-stop-lights::before { content: "\f82d"; } +.bi-sign-stop::before { content: "\f82e"; } +.bi-sign-turn-left-fill::before { content: "\f82f"; } +.bi-sign-turn-left::before { content: "\f830"; } +.bi-sign-turn-right-fill::before { content: "\f831"; } +.bi-sign-turn-right::before { content: "\f832"; } +.bi-sign-turn-slight-left-fill::before { content: "\f833"; } +.bi-sign-turn-slight-left::before { content: "\f834"; } +.bi-sign-turn-slight-right-fill::before { content: "\f835"; } +.bi-sign-turn-slight-right::before { content: "\f836"; } +.bi-sign-yield-fill::before { content: "\f837"; } +.bi-sign-yield::before { content: "\f838"; } +.bi-ev-station-fill::before { content: "\f839"; } +.bi-ev-station::before { content: "\f83a"; } +.bi-fuel-pump-diesel-fill::before { content: "\f83b"; } +.bi-fuel-pump-diesel::before { content: "\f83c"; } +.bi-fuel-pump-fill::before { content: "\f83d"; } +.bi-fuel-pump::before { content: "\f83e"; } +.bi-0-circle-fill::before { content: "\f83f"; } +.bi-0-circle::before { content: "\f840"; } +.bi-0-square-fill::before { content: "\f841"; } +.bi-0-square::before { content: "\f842"; } +.bi-rocket-fill::before { content: "\f843"; } +.bi-rocket-takeoff-fill::before { content: "\f844"; } +.bi-rocket-takeoff::before { content: "\f845"; } +.bi-rocket::before { content: "\f846"; } +.bi-stripe::before { content: "\f847"; } +.bi-subscript::before { content: "\f848"; } +.bi-superscript::before { content: "\f849"; } +.bi-trello::before { content: "\f84a"; } +.bi-envelope-at-fill::before { content: "\f84b"; } +.bi-envelope-at::before { content: "\f84c"; } +.bi-regex::before { content: "\f84d"; } +.bi-text-wrap::before { content: "\f84e"; } +.bi-sign-dead-end-fill::before { content: "\f84f"; } +.bi-sign-dead-end::before { content: "\f850"; } +.bi-sign-do-not-enter-fill::before { content: "\f851"; } +.bi-sign-do-not-enter::before { content: "\f852"; } +.bi-sign-intersection-fill::before { content: "\f853"; } +.bi-sign-intersection-side-fill::before { content: "\f854"; } +.bi-sign-intersection-side::before { content: "\f855"; } +.bi-sign-intersection-t-fill::before { content: "\f856"; } +.bi-sign-intersection-t::before { content: "\f857"; } +.bi-sign-intersection-y-fill::before { content: "\f858"; } +.bi-sign-intersection-y::before { content: "\f859"; } +.bi-sign-intersection::before { content: "\f85a"; } +.bi-sign-merge-left-fill::before { content: "\f85b"; } +.bi-sign-merge-left::before { content: "\f85c"; } 
+.bi-sign-merge-right-fill::before { content: "\f85d"; } +.bi-sign-merge-right::before { content: "\f85e"; } +.bi-sign-no-left-turn-fill::before { content: "\f85f"; } +.bi-sign-no-left-turn::before { content: "\f860"; } +.bi-sign-no-parking-fill::before { content: "\f861"; } +.bi-sign-no-parking::before { content: "\f862"; } +.bi-sign-no-right-turn-fill::before { content: "\f863"; } +.bi-sign-no-right-turn::before { content: "\f864"; } +.bi-sign-railroad-fill::before { content: "\f865"; } +.bi-sign-railroad::before { content: "\f866"; } +.bi-building-add::before { content: "\f867"; } +.bi-building-check::before { content: "\f868"; } +.bi-building-dash::before { content: "\f869"; } +.bi-building-down::before { content: "\f86a"; } +.bi-building-exclamation::before { content: "\f86b"; } +.bi-building-fill-add::before { content: "\f86c"; } +.bi-building-fill-check::before { content: "\f86d"; } +.bi-building-fill-dash::before { content: "\f86e"; } +.bi-building-fill-down::before { content: "\f86f"; } +.bi-building-fill-exclamation::before { content: "\f870"; } +.bi-building-fill-gear::before { content: "\f871"; } +.bi-building-fill-lock::before { content: "\f872"; } +.bi-building-fill-slash::before { content: "\f873"; } +.bi-building-fill-up::before { content: "\f874"; } +.bi-building-fill-x::before { content: "\f875"; } +.bi-building-fill::before { content: "\f876"; } +.bi-building-gear::before { content: "\f877"; } +.bi-building-lock::before { content: "\f878"; } +.bi-building-slash::before { content: "\f879"; } +.bi-building-up::before { content: "\f87a"; } +.bi-building-x::before { content: "\f87b"; } +.bi-buildings-fill::before { content: "\f87c"; } +.bi-buildings::before { content: "\f87d"; } +.bi-bus-front-fill::before { content: "\f87e"; } +.bi-bus-front::before { content: "\f87f"; } +.bi-ev-front-fill::before { content: "\f880"; } +.bi-ev-front::before { content: "\f881"; } +.bi-globe-americas::before { content: "\f882"; } +.bi-globe-asia-australia::before { content: "\f883"; } +.bi-globe-central-south-asia::before { content: "\f884"; } +.bi-globe-europe-africa::before { content: "\f885"; } +.bi-house-add-fill::before { content: "\f886"; } +.bi-house-add::before { content: "\f887"; } +.bi-house-check-fill::before { content: "\f888"; } +.bi-house-check::before { content: "\f889"; } +.bi-house-dash-fill::before { content: "\f88a"; } +.bi-house-dash::before { content: "\f88b"; } +.bi-house-down-fill::before { content: "\f88c"; } +.bi-house-down::before { content: "\f88d"; } +.bi-house-exclamation-fill::before { content: "\f88e"; } +.bi-house-exclamation::before { content: "\f88f"; } +.bi-house-gear-fill::before { content: "\f890"; } +.bi-house-gear::before { content: "\f891"; } +.bi-house-lock-fill::before { content: "\f892"; } +.bi-house-lock::before { content: "\f893"; } +.bi-house-slash-fill::before { content: "\f894"; } +.bi-house-slash::before { content: "\f895"; } +.bi-house-up-fill::before { content: "\f896"; } +.bi-house-up::before { content: "\f897"; } +.bi-house-x-fill::before { content: "\f898"; } +.bi-house-x::before { content: "\f899"; } +.bi-person-add::before { content: "\f89a"; } +.bi-person-down::before { content: "\f89b"; } +.bi-person-exclamation::before { content: "\f89c"; } +.bi-person-fill-add::before { content: "\f89d"; } +.bi-person-fill-check::before { content: "\f89e"; } +.bi-person-fill-dash::before { content: "\f89f"; } +.bi-person-fill-down::before { content: "\f8a0"; } +.bi-person-fill-exclamation::before { content: "\f8a1"; } +.bi-person-fill-gear::before { 
content: "\f8a2"; } +.bi-person-fill-lock::before { content: "\f8a3"; } +.bi-person-fill-slash::before { content: "\f8a4"; } +.bi-person-fill-up::before { content: "\f8a5"; } +.bi-person-fill-x::before { content: "\f8a6"; } +.bi-person-gear::before { content: "\f8a7"; } +.bi-person-lock::before { content: "\f8a8"; } +.bi-person-slash::before { content: "\f8a9"; } +.bi-person-up::before { content: "\f8aa"; } +.bi-scooter::before { content: "\f8ab"; } +.bi-taxi-front-fill::before { content: "\f8ac"; } +.bi-taxi-front::before { content: "\f8ad"; } +.bi-amd::before { content: "\f8ae"; } +.bi-database-add::before { content: "\f8af"; } +.bi-database-check::before { content: "\f8b0"; } +.bi-database-dash::before { content: "\f8b1"; } +.bi-database-down::before { content: "\f8b2"; } +.bi-database-exclamation::before { content: "\f8b3"; } +.bi-database-fill-add::before { content: "\f8b4"; } +.bi-database-fill-check::before { content: "\f8b5"; } +.bi-database-fill-dash::before { content: "\f8b6"; } +.bi-database-fill-down::before { content: "\f8b7"; } +.bi-database-fill-exclamation::before { content: "\f8b8"; } +.bi-database-fill-gear::before { content: "\f8b9"; } +.bi-database-fill-lock::before { content: "\f8ba"; } +.bi-database-fill-slash::before { content: "\f8bb"; } +.bi-database-fill-up::before { content: "\f8bc"; } +.bi-database-fill-x::before { content: "\f8bd"; } +.bi-database-fill::before { content: "\f8be"; } +.bi-database-gear::before { content: "\f8bf"; } +.bi-database-lock::before { content: "\f8c0"; } +.bi-database-slash::before { content: "\f8c1"; } +.bi-database-up::before { content: "\f8c2"; } +.bi-database-x::before { content: "\f8c3"; } +.bi-database::before { content: "\f8c4"; } +.bi-houses-fill::before { content: "\f8c5"; } +.bi-houses::before { content: "\f8c6"; } +.bi-nvidia::before { content: "\f8c7"; } +.bi-person-vcard-fill::before { content: "\f8c8"; } +.bi-person-vcard::before { content: "\f8c9"; } +.bi-sina-weibo::before { content: "\f8ca"; } +.bi-tencent-qq::before { content: "\f8cb"; } +.bi-wikipedia::before { content: "\f8cc"; } diff --git a/r-book/site_libs/bootstrap/bootstrap-icons.woff b/r-book/site_libs/bootstrap/bootstrap-icons.woff new file mode 100644 index 00000000..18d21d45 Binary files /dev/null and b/r-book/site_libs/bootstrap/bootstrap-icons.woff differ diff --git a/r-book/site_libs/bootstrap/bootstrap.min.css b/r-book/site_libs/bootstrap/bootstrap.min.css new file mode 100644 index 00000000..a519bf95 --- /dev/null +++ b/r-book/site_libs/bootstrap/bootstrap.min.css @@ -0,0 +1,10 @@ +/*! + * Bootstrap v5.1.3 (https://getbootstrap.com/) + * Copyright 2011-2021 The Bootstrap Authors + * Copyright 2011-2021 Twitter, Inc. 
+ * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) + */@import"https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@300;400;700&display=swap";:root{--bs-blue: #2780e3;--bs-indigo: #6610f2;--bs-purple: #613d7c;--bs-pink: #e83e8c;--bs-red: #ff0039;--bs-orange: #f0ad4e;--bs-yellow: #ff7518;--bs-green: #3fb618;--bs-teal: #20c997;--bs-cyan: #9954bb;--bs-white: #fff;--bs-gray: #6c757d;--bs-gray-dark: #373a3c;--bs-gray-100: #f8f9fa;--bs-gray-200: #e9ecef;--bs-gray-300: #dee2e6;--bs-gray-400: #ced4da;--bs-gray-500: #adb5bd;--bs-gray-600: #6c757d;--bs-gray-700: #495057;--bs-gray-800: #373a3c;--bs-gray-900: #212529;--bs-default: #373a3c;--bs-primary: #2780e3;--bs-secondary: #373a3c;--bs-success: #3fb618;--bs-info: #9954bb;--bs-warning: #ff7518;--bs-danger: #ff0039;--bs-light: #f8f9fa;--bs-dark: #373a3c;--bs-default-rgb: 55, 58, 60;--bs-primary-rgb: 39, 128, 227;--bs-secondary-rgb: 55, 58, 60;--bs-success-rgb: 63, 182, 24;--bs-info-rgb: 153, 84, 187;--bs-warning-rgb: 255, 117, 24;--bs-danger-rgb: 255, 0, 57;--bs-light-rgb: 248, 249, 250;--bs-dark-rgb: 55, 58, 60;--bs-white-rgb: 255, 255, 255;--bs-black-rgb: 0, 0, 0;--bs-body-color-rgb: 55, 58, 60;--bs-body-bg-rgb: 255, 255, 255;--bs-font-sans-serif: "Source Sans Pro", -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";--bs-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;--bs-gradient: linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0));--bs-root-font-size: 17px;--bs-body-font-family: var(--bs-font-sans-serif);--bs-body-font-size: 1rem;--bs-body-font-weight: 400;--bs-body-line-height: 1.5;--bs-body-color: #373a3c;--bs-body-bg: #fff}*,*::before,*::after{box-sizing:border-box}:root{font-size:var(--bs-root-font-size)}body{margin:0;font-family:var(--bs-body-font-family);font-size:var(--bs-body-font-size);font-weight:var(--bs-body-font-weight);line-height:var(--bs-body-line-height);color:var(--bs-body-color);text-align:var(--bs-body-text-align);background-color:var(--bs-body-bg);-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:rgba(0,0,0,0)}hr{margin:1rem 0;color:inherit;background-color:currentColor;border:0;opacity:.25}hr:not([size]){height:1px}h6,.h6,h5,.h5,h4,.h4,h3,.h3,h2,.h2,h1,.h1{margin-top:0;margin-bottom:.5rem;font-weight:400;line-height:1.2}h1,.h1{font-size:calc(1.325rem + 0.9vw)}@media(min-width: 1200px){h1,.h1{font-size:2rem}}h2,.h2{font-size:calc(1.29rem + 0.48vw)}@media(min-width: 1200px){h2,.h2{font-size:1.65rem}}h3,.h3{font-size:calc(1.27rem + 0.24vw)}@media(min-width: 1200px){h3,.h3{font-size:1.45rem}}h4,.h4{font-size:1.25rem}h5,.h5{font-size:1.1rem}h6,.h6{font-size:1rem}p{margin-top:0;margin-bottom:1rem}abbr[title],abbr[data-bs-original-title]{text-decoration:underline dotted;-webkit-text-decoration:underline dotted;-moz-text-decoration:underline dotted;-ms-text-decoration:underline dotted;-o-text-decoration:underline dotted;cursor:help;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}ol,ul{padding-left:2rem}ol,ul,dl{margin-top:0;margin-bottom:1rem}ol ol,ul ul,ol ul,ul ol{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 1rem;padding:.625rem 1.25rem;border-left:.25rem solid #e9ecef}blockquote p:last-child,blockquote ul:last-child,blockquote 
ol:last-child{margin-bottom:0}b,strong{font-weight:bolder}small,.small{font-size:0.875em}mark,.mark{padding:.2em;background-color:#fcf8e3}sub,sup{position:relative;font-size:0.75em;line-height:0;vertical-align:baseline}sub{bottom:-0.25em}sup{top:-0.5em}a{color:#2780e3;text-decoration:underline;-webkit-text-decoration:underline;-moz-text-decoration:underline;-ms-text-decoration:underline;-o-text-decoration:underline}a:hover{color:#1f66b6}a:not([href]):not([class]),a:not([href]):not([class]):hover{color:inherit;text-decoration:none}pre,code,kbd,samp{font-family:var(--bs-font-monospace);font-size:1em;direction:ltr /* rtl:ignore */;unicode-bidi:bidi-override}pre{display:block;margin-top:0;margin-bottom:1rem;overflow:auto;font-size:0.875em;color:#000;background-color:#f7f7f7;padding:.5rem;border:1px solid #dee2e6}pre code{background-color:rgba(0,0,0,0);font-size:inherit;color:inherit;word-break:normal}code{font-size:0.875em;color:#9753b8;background-color:#f7f7f7;padding:.125rem .25rem;word-wrap:break-word}a>code{color:inherit}kbd{padding:.4rem .4rem;font-size:0.875em;color:#fff;background-color:#212529}kbd kbd{padding:0;font-size:1em;font-weight:700}figure{margin:0 0 1rem}img,svg{vertical-align:middle}table{caption-side:bottom;border-collapse:collapse}caption{padding-top:.5rem;padding-bottom:.5rem;color:#6c757d;text-align:left}th{text-align:inherit;text-align:-webkit-match-parent}thead,tbody,tfoot,tr,td,th{border-color:inherit;border-style:solid;border-width:0}label{display:inline-block}button{border-radius:0}button:focus:not(:focus-visible){outline:0}input,button,select,optgroup,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}select:disabled{opacity:1}[list]::-webkit-calendar-picker-indicator{display:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button}button:not(:disabled),[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled){cursor:pointer}::-moz-focus-inner{padding:0;border-style:none}textarea{resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{float:left;width:100%;padding:0;margin-bottom:.5rem;font-size:calc(1.275rem + 0.3vw);line-height:inherit}@media(min-width: 1200px){legend{font-size:1.5rem}}legend+*{clear:left}::-webkit-datetime-edit-fields-wrapper,::-webkit-datetime-edit-text,::-webkit-datetime-edit-minute,::-webkit-datetime-edit-hour-field,::-webkit-datetime-edit-day-field,::-webkit-datetime-edit-month-field,::-webkit-datetime-edit-year-field{padding:0}::-webkit-inner-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:textfield}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-color-swatch-wrapper{padding:0}::file-selector-button{font:inherit}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}iframe{border:0}summary{display:list-item;cursor:pointer}progress{vertical-align:baseline}[hidden]{display:none !important}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:calc(1.625rem + 4.5vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-1{font-size:5rem}}.display-2{font-size:calc(1.575rem + 3.9vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-2{font-size:4.5rem}}.display-3{font-size:calc(1.525rem + 3.3vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-3{font-size:4rem}}.display-4{font-size:calc(1.475rem + 
2.7vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-4{font-size:3.5rem}}.display-5{font-size:calc(1.425rem + 2.1vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-5{font-size:3rem}}.display-6{font-size:calc(1.375rem + 1.5vw);font-weight:300;line-height:1.2}@media(min-width: 1200px){.display-6{font-size:2.5rem}}.list-unstyled{padding-left:0;list-style:none}.list-inline{padding-left:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-right:.5rem}.initialism{font-size:0.875em;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote>:last-child{margin-bottom:0}.blockquote-footer{margin-top:-1rem;margin-bottom:1rem;font-size:0.875em;color:#6c757d}.blockquote-footer::before{content:"— "}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid #dee2e6;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:0.875em;color:#6c757d}.grid{display:grid;grid-template-rows:repeat(var(--bs-rows, 1), 1fr);grid-template-columns:repeat(var(--bs-columns, 12), 1fr);gap:var(--bs-gap, 1.5rem)}.grid .g-col-1{grid-column:auto/span 1}.grid .g-col-2{grid-column:auto/span 2}.grid .g-col-3{grid-column:auto/span 3}.grid .g-col-4{grid-column:auto/span 4}.grid .g-col-5{grid-column:auto/span 5}.grid .g-col-6{grid-column:auto/span 6}.grid .g-col-7{grid-column:auto/span 7}.grid .g-col-8{grid-column:auto/span 8}.grid .g-col-9{grid-column:auto/span 9}.grid .g-col-10{grid-column:auto/span 10}.grid .g-col-11{grid-column:auto/span 11}.grid .g-col-12{grid-column:auto/span 12}.grid .g-start-1{grid-column-start:1}.grid .g-start-2{grid-column-start:2}.grid .g-start-3{grid-column-start:3}.grid .g-start-4{grid-column-start:4}.grid .g-start-5{grid-column-start:5}.grid .g-start-6{grid-column-start:6}.grid .g-start-7{grid-column-start:7}.grid .g-start-8{grid-column-start:8}.grid .g-start-9{grid-column-start:9}.grid .g-start-10{grid-column-start:10}.grid .g-start-11{grid-column-start:11}@media(min-width: 576px){.grid .g-col-sm-1{grid-column:auto/span 1}.grid .g-col-sm-2{grid-column:auto/span 2}.grid .g-col-sm-3{grid-column:auto/span 3}.grid .g-col-sm-4{grid-column:auto/span 4}.grid .g-col-sm-5{grid-column:auto/span 5}.grid .g-col-sm-6{grid-column:auto/span 6}.grid .g-col-sm-7{grid-column:auto/span 7}.grid .g-col-sm-8{grid-column:auto/span 8}.grid .g-col-sm-9{grid-column:auto/span 9}.grid .g-col-sm-10{grid-column:auto/span 10}.grid .g-col-sm-11{grid-column:auto/span 11}.grid .g-col-sm-12{grid-column:auto/span 12}.grid .g-start-sm-1{grid-column-start:1}.grid .g-start-sm-2{grid-column-start:2}.grid .g-start-sm-3{grid-column-start:3}.grid .g-start-sm-4{grid-column-start:4}.grid .g-start-sm-5{grid-column-start:5}.grid .g-start-sm-6{grid-column-start:6}.grid .g-start-sm-7{grid-column-start:7}.grid .g-start-sm-8{grid-column-start:8}.grid .g-start-sm-9{grid-column-start:9}.grid .g-start-sm-10{grid-column-start:10}.grid .g-start-sm-11{grid-column-start:11}}@media(min-width: 768px){.grid .g-col-md-1{grid-column:auto/span 1}.grid .g-col-md-2{grid-column:auto/span 2}.grid .g-col-md-3{grid-column:auto/span 3}.grid .g-col-md-4{grid-column:auto/span 4}.grid .g-col-md-5{grid-column:auto/span 5}.grid .g-col-md-6{grid-column:auto/span 6}.grid .g-col-md-7{grid-column:auto/span 7}.grid .g-col-md-8{grid-column:auto/span 8}.grid .g-col-md-9{grid-column:auto/span 9}.grid .g-col-md-10{grid-column:auto/span 10}.grid 
.g-col-md-11{grid-column:auto/span 11}.grid .g-col-md-12{grid-column:auto/span 12}.grid .g-start-md-1{grid-column-start:1}.grid .g-start-md-2{grid-column-start:2}.grid .g-start-md-3{grid-column-start:3}.grid .g-start-md-4{grid-column-start:4}.grid .g-start-md-5{grid-column-start:5}.grid .g-start-md-6{grid-column-start:6}.grid .g-start-md-7{grid-column-start:7}.grid .g-start-md-8{grid-column-start:8}.grid .g-start-md-9{grid-column-start:9}.grid .g-start-md-10{grid-column-start:10}.grid .g-start-md-11{grid-column-start:11}}@media(min-width: 992px){.grid .g-col-lg-1{grid-column:auto/span 1}.grid .g-col-lg-2{grid-column:auto/span 2}.grid .g-col-lg-3{grid-column:auto/span 3}.grid .g-col-lg-4{grid-column:auto/span 4}.grid .g-col-lg-5{grid-column:auto/span 5}.grid .g-col-lg-6{grid-column:auto/span 6}.grid .g-col-lg-7{grid-column:auto/span 7}.grid .g-col-lg-8{grid-column:auto/span 8}.grid .g-col-lg-9{grid-column:auto/span 9}.grid .g-col-lg-10{grid-column:auto/span 10}.grid .g-col-lg-11{grid-column:auto/span 11}.grid .g-col-lg-12{grid-column:auto/span 12}.grid .g-start-lg-1{grid-column-start:1}.grid .g-start-lg-2{grid-column-start:2}.grid .g-start-lg-3{grid-column-start:3}.grid .g-start-lg-4{grid-column-start:4}.grid .g-start-lg-5{grid-column-start:5}.grid .g-start-lg-6{grid-column-start:6}.grid .g-start-lg-7{grid-column-start:7}.grid .g-start-lg-8{grid-column-start:8}.grid .g-start-lg-9{grid-column-start:9}.grid .g-start-lg-10{grid-column-start:10}.grid .g-start-lg-11{grid-column-start:11}}@media(min-width: 1200px){.grid .g-col-xl-1{grid-column:auto/span 1}.grid .g-col-xl-2{grid-column:auto/span 2}.grid .g-col-xl-3{grid-column:auto/span 3}.grid .g-col-xl-4{grid-column:auto/span 4}.grid .g-col-xl-5{grid-column:auto/span 5}.grid .g-col-xl-6{grid-column:auto/span 6}.grid .g-col-xl-7{grid-column:auto/span 7}.grid .g-col-xl-8{grid-column:auto/span 8}.grid .g-col-xl-9{grid-column:auto/span 9}.grid .g-col-xl-10{grid-column:auto/span 10}.grid .g-col-xl-11{grid-column:auto/span 11}.grid .g-col-xl-12{grid-column:auto/span 12}.grid .g-start-xl-1{grid-column-start:1}.grid .g-start-xl-2{grid-column-start:2}.grid .g-start-xl-3{grid-column-start:3}.grid .g-start-xl-4{grid-column-start:4}.grid .g-start-xl-5{grid-column-start:5}.grid .g-start-xl-6{grid-column-start:6}.grid .g-start-xl-7{grid-column-start:7}.grid .g-start-xl-8{grid-column-start:8}.grid .g-start-xl-9{grid-column-start:9}.grid .g-start-xl-10{grid-column-start:10}.grid .g-start-xl-11{grid-column-start:11}}@media(min-width: 1400px){.grid .g-col-xxl-1{grid-column:auto/span 1}.grid .g-col-xxl-2{grid-column:auto/span 2}.grid .g-col-xxl-3{grid-column:auto/span 3}.grid .g-col-xxl-4{grid-column:auto/span 4}.grid .g-col-xxl-5{grid-column:auto/span 5}.grid .g-col-xxl-6{grid-column:auto/span 6}.grid .g-col-xxl-7{grid-column:auto/span 7}.grid .g-col-xxl-8{grid-column:auto/span 8}.grid .g-col-xxl-9{grid-column:auto/span 9}.grid .g-col-xxl-10{grid-column:auto/span 10}.grid .g-col-xxl-11{grid-column:auto/span 11}.grid .g-col-xxl-12{grid-column:auto/span 12}.grid .g-start-xxl-1{grid-column-start:1}.grid .g-start-xxl-2{grid-column-start:2}.grid .g-start-xxl-3{grid-column-start:3}.grid .g-start-xxl-4{grid-column-start:4}.grid .g-start-xxl-5{grid-column-start:5}.grid .g-start-xxl-6{grid-column-start:6}.grid .g-start-xxl-7{grid-column-start:7}.grid .g-start-xxl-8{grid-column-start:8}.grid .g-start-xxl-9{grid-column-start:9}.grid .g-start-xxl-10{grid-column-start:10}.grid .g-start-xxl-11{grid-column-start:11}}.table{--bs-table-bg: transparent;--bs-table-accent-bg: 
transparent;--bs-table-striped-color: #373a3c;--bs-table-striped-bg: rgba(0, 0, 0, 0.05);--bs-table-active-color: #373a3c;--bs-table-active-bg: rgba(0, 0, 0, 0.1);--bs-table-hover-color: #373a3c;--bs-table-hover-bg: rgba(0, 0, 0, 0.075);width:100%;margin-bottom:1rem;color:#373a3c;vertical-align:top;border-color:#dee2e6}.table>:not(caption)>*>*{padding:.5rem .5rem;background-color:var(--bs-table-bg);border-bottom-width:1px;box-shadow:inset 0 0 0 9999px var(--bs-table-accent-bg)}.table>tbody{vertical-align:inherit}.table>thead{vertical-align:bottom}.table>:not(:first-child){border-top:2px solid #b6babc}.caption-top{caption-side:top}.table-sm>:not(caption)>*>*{padding:.25rem .25rem}.table-bordered>:not(caption)>*{border-width:1px 0}.table-bordered>:not(caption)>*>*{border-width:0 1px}.table-borderless>:not(caption)>*>*{border-bottom-width:0}.table-borderless>:not(:first-child){border-top-width:0}.table-striped>tbody>tr:nth-of-type(odd)>*{--bs-table-accent-bg: var(--bs-table-striped-bg);color:var(--bs-table-striped-color)}.table-active{--bs-table-accent-bg: var(--bs-table-active-bg);color:var(--bs-table-active-color)}.table-hover>tbody>tr:hover>*{--bs-table-accent-bg: var(--bs-table-hover-bg);color:var(--bs-table-hover-color)}.table-primary{--bs-table-bg: #d4e6f9;--bs-table-striped-bg: #c9dbed;--bs-table-striped-color: #000;--bs-table-active-bg: #bfcfe0;--bs-table-active-color: #000;--bs-table-hover-bg: #c4d5e6;--bs-table-hover-color: #000;color:#000;border-color:#bfcfe0}.table-secondary{--bs-table-bg: #d7d8d8;--bs-table-striped-bg: #cccdcd;--bs-table-striped-color: #000;--bs-table-active-bg: #c2c2c2;--bs-table-active-color: #000;--bs-table-hover-bg: #c7c8c8;--bs-table-hover-color: #000;color:#000;border-color:#c2c2c2}.table-success{--bs-table-bg: #d9f0d1;--bs-table-striped-bg: #cee4c7;--bs-table-striped-color: #000;--bs-table-active-bg: #c3d8bc;--bs-table-active-color: #000;--bs-table-hover-bg: #c9dec1;--bs-table-hover-color: #000;color:#000;border-color:#c3d8bc}.table-info{--bs-table-bg: #ebddf1;--bs-table-striped-bg: #dfd2e5;--bs-table-striped-color: #000;--bs-table-active-bg: #d4c7d9;--bs-table-active-color: #000;--bs-table-hover-bg: #d9ccdf;--bs-table-hover-color: #000;color:#000;border-color:#d4c7d9}.table-warning{--bs-table-bg: #ffe3d1;--bs-table-striped-bg: #f2d8c7;--bs-table-striped-color: #000;--bs-table-active-bg: #e6ccbc;--bs-table-active-color: #000;--bs-table-hover-bg: #ecd2c1;--bs-table-hover-color: #000;color:#000;border-color:#e6ccbc}.table-danger{--bs-table-bg: #ffccd7;--bs-table-striped-bg: #f2c2cc;--bs-table-striped-color: #000;--bs-table-active-bg: #e6b8c2;--bs-table-active-color: #000;--bs-table-hover-bg: #ecbdc7;--bs-table-hover-color: #000;color:#000;border-color:#e6b8c2}.table-light{--bs-table-bg: #f8f9fa;--bs-table-striped-bg: #ecedee;--bs-table-striped-color: #000;--bs-table-active-bg: #dfe0e1;--bs-table-active-color: #000;--bs-table-hover-bg: #e5e6e7;--bs-table-hover-color: #000;color:#000;border-color:#dfe0e1}.table-dark{--bs-table-bg: #373a3c;--bs-table-striped-bg: #414446;--bs-table-striped-color: #fff;--bs-table-active-bg: #4b4e50;--bs-table-active-color: #fff;--bs-table-hover-bg: #46494b;--bs-table-hover-color: #fff;color:#fff;border-color:#4b4e50}.table-responsive{overflow-x:auto;-webkit-overflow-scrolling:touch}@media(max-width: 575.98px){.table-responsive-sm{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 767.98px){.table-responsive-md{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 
991.98px){.table-responsive-lg{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 1199.98px){.table-responsive-xl{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media(max-width: 1399.98px){.table-responsive-xxl{overflow-x:auto;-webkit-overflow-scrolling:touch}}.form-label,.shiny-input-container .control-label{margin-bottom:.5rem}.col-form-label{padding-top:calc(0.375rem + 1px);padding-bottom:calc(0.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(0.5rem + 1px);padding-bottom:calc(0.5rem + 1px);font-size:1.25rem}.col-form-label-sm{padding-top:calc(0.25rem + 1px);padding-bottom:calc(0.25rem + 1px);font-size:0.875rem}.form-text{margin-top:.25rem;font-size:0.875em;color:#6c757d}.form-control{display:block;width:100%;padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#373a3c;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;border-radius:0;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control{transition:none}}.form-control[type=file]{overflow:hidden}.form-control[type=file]:not(:disabled):not([readonly]){cursor:pointer}.form-control:focus{color:#373a3c;background-color:#fff;border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-control::-webkit-date-and-time-value{height:1.5em}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}.form-control::file-selector-button{padding:.375rem .75rem;margin:-0.375rem -0.75rem;margin-inline-end:.75rem;color:#373a3c;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control::file-selector-button{transition:none}}.form-control:hover:not(:disabled):not([readonly])::file-selector-button{background-color:#dde0e3}.form-control::-webkit-file-upload-button{padding:.375rem .75rem;margin:-0.375rem -0.75rem;margin-inline-end:.75rem;color:#373a3c;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media(prefers-reduced-motion: reduce){.form-control::-webkit-file-upload-button{transition:none}}.form-control:hover:not(:disabled):not([readonly])::-webkit-file-upload-button{background-color:#dde0e3}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;line-height:1.5;color:#373a3c;background-color:rgba(0,0,0,0);border:solid rgba(0,0,0,0);border-width:1px 0}.form-control-plaintext.form-control-sm,.form-control-plaintext.form-control-lg{padding-right:0;padding-left:0}.form-control-sm{min-height:calc(1.5em + 0.5rem + 2px);padding:.25rem .5rem;font-size:0.875rem}.form-control-sm::file-selector-button{padding:.25rem .5rem;margin:-0.25rem -0.5rem;margin-inline-end:.5rem}.form-control-sm::-webkit-file-upload-button{padding:.25rem .5rem;margin:-0.25rem -0.5rem;margin-inline-end:.5rem}.form-control-lg{min-height:calc(1.5em + 1rem + 2px);padding:.5rem 
1rem;font-size:1.25rem}.form-control-lg::file-selector-button{padding:.5rem 1rem;margin:-0.5rem -1rem;margin-inline-end:1rem}.form-control-lg::-webkit-file-upload-button{padding:.5rem 1rem;margin:-0.5rem -1rem;margin-inline-end:1rem}textarea.form-control{min-height:calc(1.5em + 0.75rem + 2px)}textarea.form-control-sm{min-height:calc(1.5em + 0.5rem + 2px)}textarea.form-control-lg{min-height:calc(1.5em + 1rem + 2px)}.form-control-color{width:3rem;height:auto;padding:.375rem}.form-control-color:not(:disabled):not([readonly]){cursor:pointer}.form-control-color::-moz-color-swatch{height:1.5em}.form-control-color::-webkit-color-swatch{height:1.5em}.form-select{display:block;width:100%;padding:.375rem 2.25rem .375rem .75rem;-moz-padding-start:calc(0.75rem - 3px);font-size:1rem;font-weight:400;line-height:1.5;color:#373a3c;background-color:#fff;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23373a3c' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:right .75rem center;background-size:16px 12px;border:1px solid #ced4da;border-radius:0;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out;appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none}@media(prefers-reduced-motion: reduce){.form-select{transition:none}}.form-select:focus{border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-select[multiple],.form-select[size]:not([size="1"]){padding-right:.75rem;background-image:none}.form-select:disabled{background-color:#e9ecef}.form-select:-moz-focusring{color:rgba(0,0,0,0);text-shadow:0 0 0 #373a3c}.form-select-sm{padding-top:.25rem;padding-bottom:.25rem;padding-left:.5rem;font-size:0.875rem}.form-select-lg{padding-top:.5rem;padding-bottom:.5rem;padding-left:1rem;font-size:1.25rem}.form-check,.shiny-input-container .checkbox,.shiny-input-container .radio{display:block;min-height:1.5rem;padding-left:0;margin-bottom:.125rem}.form-check .form-check-input,.form-check .shiny-input-container .checkbox input,.form-check .shiny-input-container .radio input,.shiny-input-container .checkbox .form-check-input,.shiny-input-container .checkbox .shiny-input-container .checkbox input,.shiny-input-container .checkbox .shiny-input-container .radio input,.shiny-input-container .radio .form-check-input,.shiny-input-container .radio .shiny-input-container .checkbox input,.shiny-input-container .radio .shiny-input-container .radio input{float:left;margin-left:0}.form-check-input,.shiny-input-container .checkbox input,.shiny-input-container .checkbox-inline input,.shiny-input-container .radio input,.shiny-input-container .radio-inline input{width:1em;height:1em;margin-top:.25em;vertical-align:top;background-color:#fff;background-repeat:no-repeat;background-position:center;background-size:contain;border:1px solid rgba(0,0,0,.25);appearance:none;-webkit-appearance:none;-moz-appearance:none;-ms-appearance:none;-o-appearance:none;color-adjust:exact;-webkit-print-color-adjust:exact}.form-check-input[type=radio],.shiny-input-container .checkbox input[type=radio],.shiny-input-container .checkbox-inline input[type=radio],.shiny-input-container .radio input[type=radio],.shiny-input-container .radio-inline input[type=radio]{border-radius:50%}.form-check-input:active,.shiny-input-container .checkbox input:active,.shiny-input-container .checkbox-inline 
input:active,.shiny-input-container .radio input:active,.shiny-input-container .radio-inline input:active{filter:brightness(90%)}.form-check-input:focus,.shiny-input-container .checkbox input:focus,.shiny-input-container .checkbox-inline input:focus,.shiny-input-container .radio input:focus,.shiny-input-container .radio-inline input:focus{border-color:#93c0f1;outline:0;box-shadow:0 0 0 .25rem rgba(39,128,227,.25)}.form-check-input:checked,.shiny-input-container .checkbox input:checked,.shiny-input-container .checkbox-inline input:checked,.shiny-input-container .radio input:checked,.shiny-input-container .radio-inline input:checked{background-color:#2780e3;border-color:#2780e3}.form-check-input:checked[type=checkbox],.shiny-input-container .checkbox input:checked[type=checkbox],.shiny-input-container .checkbox-inline input:checked[type=checkbox],.shiny-input-container .radio input:checked[type=checkbox],.shiny-input-container .radio-inline input:checked[type=checkbox]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10l3 3l6-6'/%3e%3c/svg%3e")}.form-check-input:checked[type=radio],.shiny-input-container .checkbox input:checked[type=radio],.shiny-input-container .checkbox-inline input:checked[type=radio],.shiny-input-container .radio input:checked[type=radio],.shiny-input-container .radio-inline input:checked[type=radio]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='2' fill='%23fff'/%3e%3c/svg%3e")}.form-check-input[type=checkbox]:indeterminate,.shiny-input-container .checkbox input[type=checkbox]:indeterminate,.shiny-input-container .checkbox-inline input[type=checkbox]:indeterminate,.shiny-input-container .radio input[type=checkbox]:indeterminate,.shiny-input-container .radio-inline input[type=checkbox]:indeterminate{background-color:#2780e3;border-color:#2780e3;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10h8'/%3e%3c/svg%3e")}.form-check-input:disabled,.shiny-input-container .checkbox input:disabled,.shiny-input-container .checkbox-inline input:disabled,.shiny-input-container .radio input:disabled,.shiny-input-container .radio-inline input:disabled{pointer-events:none;filter:none;opacity:.5}.form-check-input[disabled]~.form-check-label,.form-check-input[disabled]~span,.form-check-input:disabled~.form-check-label,.form-check-input:disabled~span,.shiny-input-container .checkbox input[disabled]~.form-check-label,.shiny-input-container .checkbox input[disabled]~span,.shiny-input-container .checkbox input:disabled~.form-check-label,.shiny-input-container .checkbox input:disabled~span,.shiny-input-container .checkbox-inline input[disabled]~.form-check-label,.shiny-input-container .checkbox-inline input[disabled]~span,.shiny-input-container .checkbox-inline input:disabled~.form-check-label,.shiny-input-container .checkbox-inline input:disabled~span,.shiny-input-container .radio input[disabled]~.form-check-label,.shiny-input-container .radio input[disabled]~span,.shiny-input-container .radio input:disabled~.form-check-label,.shiny-input-container .radio input:disabled~span,.shiny-input-container .radio-inline input[disabled]~.form-check-label,.shiny-input-container .radio-inline 
1rem;margin-bottom:0;font-size:1rem;background-color:#f0f0f0;border-bottom:1px solid rgba(0,0,0,.2)}.popover-header:empty{display:none}.popover-body{padding:1rem 1rem;color:#373a3c}.carousel{position:relative}.carousel.pointer-event{touch-action:pan-y;-webkit-touch-action:pan-y;-moz-touch-action:pan-y;-ms-touch-action:pan-y;-o-touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:left;width:100%;margin-right:-100%;backface-visibility:hidden;-webkit-backface-visibility:hidden;-moz-backface-visibility:hidden;-ms-backface-visibility:hidden;-o-backface-visibility:hidden;transition:transform .6s ease-in-out}@media(prefers-reduced-motion: reduce){.carousel-item{transition:none}}.carousel-item.active,.carousel-item-next,.carousel-item-prev{display:block}.carousel-item-next:not(.carousel-item-start),.active.carousel-item-end{transform:translateX(100%)}.carousel-item-prev:not(.carousel-item-end),.active.carousel-item-start{transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;transform:none}.carousel-fade .carousel-item.active,.carousel-fade .carousel-item-next.carousel-item-start,.carousel-fade .carousel-item-prev.carousel-item-end{z-index:1;opacity:1}.carousel-fade .active.carousel-item-start,.carousel-fade .active.carousel-item-end{z-index:0;opacity:0;transition:opacity 0s .6s}@media(prefers-reduced-motion: reduce){.carousel-fade .active.carousel-item-start,.carousel-fade .active.carousel-item-end{transition:none}}.carousel-control-prev,.carousel-control-next{position:absolute;top:0;bottom:0;z-index:1;display:flex;display:-webkit-flex;align-items:center;-webkit-align-items:center;justify-content:center;-webkit-justify-content:center;width:15%;padding:0;color:#fff;text-align:center;background:none;border:0;opacity:.5;transition:opacity .15s ease}@media(prefers-reduced-motion: reduce){.carousel-control-prev,.carousel-control-next{transition:none}}.carousel-control-prev:hover,.carousel-control-prev:focus,.carousel-control-next:hover,.carousel-control-next:focus{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{left:0}.carousel-control-next{right:0}.carousel-control-prev-icon,.carousel-control-next-icon{display:inline-block;width:2rem;height:2rem;background-repeat:no-repeat;background-position:50%;background-size:100% 100%}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3e%3c/svg%3e")}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;right:0;bottom:0;left:0;z-index:2;display:flex;display:-webkit-flex;justify-content:center;-webkit-justify-content:center;padding:0;margin-right:15%;margin-bottom:1rem;margin-left:15%;list-style:none}.carousel-indicators [data-bs-target]{box-sizing:content-box;flex:0 1 auto;-webkit-flex:0 1 auto;width:30px;height:3px;padding:0;margin-right:3px;margin-left:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border:0;border-top:10px 
solid rgba(0,0,0,0);border-bottom:10px solid rgba(0,0,0,0);opacity:.5;transition:opacity .6s ease}@media(prefers-reduced-motion: reduce){.carousel-indicators [data-bs-target]{transition:none}}.carousel-indicators .active{opacity:1}.carousel-caption{position:absolute;right:15%;bottom:1.25rem;left:15%;padding-top:1.25rem;padding-bottom:1.25rem;color:#fff;text-align:center}.carousel-dark .carousel-control-prev-icon,.carousel-dark .carousel-control-next-icon{filter:invert(1) grayscale(100)}.carousel-dark .carousel-indicators [data-bs-target]{background-color:#000}.carousel-dark .carousel-caption{color:#000}@keyframes spinner-border{to{transform:rotate(360deg) /* rtl:ignore */}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:-0.125em;border:.25em solid currentColor;border-right-color:rgba(0,0,0,0);border-radius:50%;animation:.75s linear infinite spinner-border}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@keyframes spinner-grow{0%{transform:scale(0)}50%{opacity:1;transform:none}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:-0.125em;background-color:currentColor;border-radius:50%;opacity:0;animation:.75s linear infinite spinner-grow}.spinner-grow-sm{width:1rem;height:1rem}@media(prefers-reduced-motion: reduce){.spinner-border,.spinner-grow{animation-duration:1.5s;-webkit-animation-duration:1.5s;-moz-animation-duration:1.5s;-ms-animation-duration:1.5s;-o-animation-duration:1.5s}}.offcanvas{position:fixed;bottom:0;z-index:1045;display:flex;display:-webkit-flex;flex-direction:column;-webkit-flex-direction:column;max-width:100%;visibility:hidden;background-color:#fff;background-clip:padding-box;outline:0;transition:transform .3s ease-in-out}@media(prefers-reduced-motion: reduce){.offcanvas{transition:none}}.offcanvas-backdrop{position:fixed;top:0;left:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.offcanvas-backdrop.fade{opacity:0}.offcanvas-backdrop.show{opacity:.5}.offcanvas-header{display:flex;display:-webkit-flex;align-items:center;-webkit-align-items:center;justify-content:space-between;-webkit-justify-content:space-between;padding:1rem 1rem}.offcanvas-header .btn-close{padding:.5rem .5rem;margin-top:-0.5rem;margin-right:-0.5rem;margin-bottom:-0.5rem}.offcanvas-title{margin-bottom:0;line-height:1.5}.offcanvas-body{flex-grow:1;-webkit-flex-grow:1;padding:1rem 1rem;overflow-y:auto}.offcanvas-start{top:0;left:0;width:400px;border-right:1px solid rgba(0,0,0,.2);transform:translateX(-100%)}.offcanvas-end{top:0;right:0;width:400px;border-left:1px solid rgba(0,0,0,.2);transform:translateX(100%)}.offcanvas-top{top:0;right:0;left:0;height:30vh;max-height:100%;border-bottom:1px solid rgba(0,0,0,.2);transform:translateY(-100%)}.offcanvas-bottom{right:0;left:0;height:30vh;max-height:100%;border-top:1px solid rgba(0,0,0,.2);transform:translateY(100%)}.offcanvas.show{transform:none}.placeholder{display:inline-block;min-height:1em;vertical-align:middle;cursor:wait;background-color:currentColor;opacity:.5}.placeholder.btn::before{display:inline-block;content:""}.placeholder-xs{min-height:.6em}.placeholder-sm{min-height:.8em}.placeholder-lg{min-height:1.2em}.placeholder-glow .placeholder{animation:placeholder-glow 2s ease-in-out infinite}@keyframes placeholder-glow{50%{opacity:.2}}.placeholder-wave{mask-image:linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%);-webkit-mask-image:linear-gradient(130deg, #000 55%, rgba(0, 0, 0, 0.8) 75%, #000 95%);mask-size:200% 100%;-webkit-mask-size:200% 
100%;animation:placeholder-wave 2s linear infinite}@keyframes placeholder-wave{100%{mask-position:-200% 0%;-webkit-mask-position:-200% 0%}}.clearfix::after{display:block;clear:both;content:""}.link-default{color:#373a3c}.link-default:hover,.link-default:focus{color:#2c2e30}.link-primary{color:#2780e3}.link-primary:hover,.link-primary:focus{color:#1f66b6}.link-secondary{color:#373a3c}.link-secondary:hover,.link-secondary:focus{color:#2c2e30}.link-success{color:#3fb618}.link-success:hover,.link-success:focus{color:#329213}.link-info{color:#9954bb}.link-info:hover,.link-info:focus{color:#7a4396}.link-warning{color:#ff7518}.link-warning:hover,.link-warning:focus{color:#cc5e13}.link-danger{color:#ff0039}.link-danger:hover,.link-danger:focus{color:#cc002e}.link-light{color:#f8f9fa}.link-light:hover,.link-light:focus{color:#f9fafb}.link-dark{color:#373a3c}.link-dark:hover,.link-dark:focus{color:#2c2e30}.ratio{position:relative;width:100%}.ratio::before{display:block;padding-top:var(--bs-aspect-ratio);content:""}.ratio>*{position:absolute;top:0;left:0;width:100%;height:100%}.ratio-1x1{--bs-aspect-ratio: 100%}.ratio-4x3{--bs-aspect-ratio: 75%}.ratio-16x9{--bs-aspect-ratio: 56.25%}.ratio-21x9{--bs-aspect-ratio: 42.8571428571%}.fixed-top{position:fixed;top:0;right:0;left:0;z-index:1030}.fixed-bottom{position:fixed;right:0;bottom:0;left:0;z-index:1030}.sticky-top{position:sticky;top:0;z-index:1020}@media(min-width: 576px){.sticky-sm-top{position:sticky;top:0;z-index:1020}}@media(min-width: 768px){.sticky-md-top{position:sticky;top:0;z-index:1020}}@media(min-width: 992px){.sticky-lg-top{position:sticky;top:0;z-index:1020}}@media(min-width: 1200px){.sticky-xl-top{position:sticky;top:0;z-index:1020}}@media(min-width: 1400px){.sticky-xxl-top{position:sticky;top:0;z-index:1020}}.hstack{display:flex;display:-webkit-flex;flex-direction:row;-webkit-flex-direction:row;align-items:center;-webkit-align-items:center;align-self:stretch;-webkit-align-self:stretch}.vstack{display:flex;display:-webkit-flex;flex:1 1 auto;-webkit-flex:1 1 auto;flex-direction:column;-webkit-flex-direction:column;align-self:stretch;-webkit-align-self:stretch}.visually-hidden,.visually-hidden-focusable:not(:focus):not(:focus-within){position:absolute !important;width:1px !important;height:1px !important;padding:0 !important;margin:-1px !important;overflow:hidden !important;clip:rect(0, 0, 0, 0) !important;white-space:nowrap !important;border:0 !important}.stretched-link::after{position:absolute;top:0;right:0;bottom:0;left:0;z-index:1;content:""}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.vr{display:inline-block;align-self:stretch;-webkit-align-self:stretch;width:1px;min-height:1em;background-color:currentColor;opacity:.25}.align-baseline{vertical-align:baseline !important}.align-top{vertical-align:top !important}.align-middle{vertical-align:middle !important}.align-bottom{vertical-align:bottom !important}.align-text-bottom{vertical-align:text-bottom !important}.align-text-top{vertical-align:text-top !important}.float-start{float:left !important}.float-end{float:right !important}.float-none{float:none !important}.opacity-0{opacity:0 !important}.opacity-25{opacity:.25 !important}.opacity-50{opacity:.5 !important}.opacity-75{opacity:.75 !important}.opacity-100{opacity:1 !important}.overflow-auto{overflow:auto !important}.overflow-hidden{overflow:hidden !important}.overflow-visible{overflow:visible !important}.overflow-scroll{overflow:scroll !important}.d-inline{display:inline 
!important}.d-inline-block{display:inline-block !important}.d-block{display:block !important}.d-grid{display:grid !important}.d-table{display:table !important}.d-table-row{display:table-row !important}.d-table-cell{display:table-cell !important}.d-flex{display:flex !important}.d-inline-flex{display:inline-flex !important}.d-none{display:none !important}.shadow{box-shadow:0 .5rem 1rem rgba(0,0,0,.15) !important}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075) !important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175) !important}.shadow-none{box-shadow:none !important}.position-static{position:static !important}.position-relative{position:relative !important}.position-absolute{position:absolute !important}.position-fixed{position:fixed !important}.position-sticky{position:sticky !important}.top-0{top:0 !important}.top-50{top:50% !important}.top-100{top:100% !important}.bottom-0{bottom:0 !important}.bottom-50{bottom:50% !important}.bottom-100{bottom:100% !important}.start-0{left:0 !important}.start-50{left:50% !important}.start-100{left:100% !important}.end-0{right:0 !important}.end-50{right:50% !important}.end-100{right:100% !important}.translate-middle{transform:translate(-50%, -50%) !important}.translate-middle-x{transform:translateX(-50%) !important}.translate-middle-y{transform:translateY(-50%) !important}.border{border:1px solid #dee2e6 !important}.border-0{border:0 !important}.border-top{border-top:1px solid #dee2e6 !important}.border-top-0{border-top:0 !important}.border-end{border-right:1px solid #dee2e6 !important}.border-end-0{border-right:0 !important}.border-bottom{border-bottom:1px solid #dee2e6 !important}.border-bottom-0{border-bottom:0 !important}.border-start{border-left:1px solid #dee2e6 !important}.border-start-0{border-left:0 !important}.border-default{border-color:#373a3c !important}.border-primary{border-color:#2780e3 !important}.border-secondary{border-color:#373a3c !important}.border-success{border-color:#3fb618 !important}.border-info{border-color:#9954bb !important}.border-warning{border-color:#ff7518 !important}.border-danger{border-color:#ff0039 !important}.border-light{border-color:#f8f9fa !important}.border-dark{border-color:#373a3c !important}.border-white{border-color:#fff !important}.border-1{border-width:1px !important}.border-2{border-width:2px !important}.border-3{border-width:3px !important}.border-4{border-width:4px !important}.border-5{border-width:5px !important}.w-25{width:25% !important}.w-50{width:50% !important}.w-75{width:75% !important}.w-100{width:100% !important}.w-auto{width:auto !important}.mw-100{max-width:100% !important}.vw-100{width:100vw !important}.min-vw-100{min-width:100vw !important}.h-25{height:25% !important}.h-50{height:50% !important}.h-75{height:75% !important}.h-100{height:100% !important}.h-auto{height:auto !important}.mh-100{max-height:100% !important}.vh-100{height:100vh !important}.min-vh-100{min-height:100vh !important}.flex-fill{flex:1 1 auto !important}.flex-row{flex-direction:row !important}.flex-column{flex-direction:column !important}.flex-row-reverse{flex-direction:row-reverse !important}.flex-column-reverse{flex-direction:column-reverse !important}.flex-grow-0{flex-grow:0 !important}.flex-grow-1{flex-grow:1 !important}.flex-shrink-0{flex-shrink:0 !important}.flex-shrink-1{flex-shrink:1 !important}.flex-wrap{flex-wrap:wrap !important}.flex-nowrap{flex-wrap:nowrap !important}.flex-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-0{gap:0 !important}.gap-1{gap:.25rem !important}.gap-2{gap:.5rem 
!important}.gap-3{gap:1rem !important}.gap-4{gap:1.5rem !important}.gap-5{gap:3rem !important}.justify-content-start{justify-content:flex-start !important}.justify-content-end{justify-content:flex-end !important}.justify-content-center{justify-content:center !important}.justify-content-between{justify-content:space-between !important}.justify-content-around{justify-content:space-around !important}.justify-content-evenly{justify-content:space-evenly !important}.align-items-start{align-items:flex-start !important}.align-items-end{align-items:flex-end !important}.align-items-center{align-items:center !important}.align-items-baseline{align-items:baseline !important}.align-items-stretch{align-items:stretch !important}.align-content-start{align-content:flex-start !important}.align-content-end{align-content:flex-end !important}.align-content-center{align-content:center !important}.align-content-between{align-content:space-between !important}.align-content-around{align-content:space-around !important}.align-content-stretch{align-content:stretch !important}.align-self-auto{align-self:auto !important}.align-self-start{align-self:flex-start !important}.align-self-end{align-self:flex-end !important}.align-self-center{align-self:center !important}.align-self-baseline{align-self:baseline !important}.align-self-stretch{align-self:stretch !important}.order-first{order:-1 !important}.order-0{order:0 !important}.order-1{order:1 !important}.order-2{order:2 !important}.order-3{order:3 !important}.order-4{order:4 !important}.order-5{order:5 !important}.order-last{order:6 !important}.m-0{margin:0 !important}.m-1{margin:.25rem !important}.m-2{margin:.5rem !important}.m-3{margin:1rem !important}.m-4{margin:1.5rem !important}.m-5{margin:3rem !important}.m-auto{margin:auto !important}.mx-0{margin-right:0 !important;margin-left:0 !important}.mx-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-3{margin-right:1rem !important;margin-left:1rem !important}.mx-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-5{margin-right:3rem !important;margin-left:3rem !important}.mx-auto{margin-right:auto !important;margin-left:auto !important}.my-0{margin-top:0 !important;margin-bottom:0 !important}.my-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-0{margin-top:0 !important}.mt-1{margin-top:.25rem !important}.mt-2{margin-top:.5rem !important}.mt-3{margin-top:1rem !important}.mt-4{margin-top:1.5rem !important}.mt-5{margin-top:3rem !important}.mt-auto{margin-top:auto !important}.me-0{margin-right:0 !important}.me-1{margin-right:.25rem !important}.me-2{margin-right:.5rem !important}.me-3{margin-right:1rem !important}.me-4{margin-right:1.5rem !important}.me-5{margin-right:3rem !important}.me-auto{margin-right:auto !important}.mb-0{margin-bottom:0 !important}.mb-1{margin-bottom:.25rem !important}.mb-2{margin-bottom:.5rem !important}.mb-3{margin-bottom:1rem !important}.mb-4{margin-bottom:1.5rem !important}.mb-5{margin-bottom:3rem !important}.mb-auto{margin-bottom:auto !important}.ms-0{margin-left:0 !important}.ms-1{margin-left:.25rem !important}.ms-2{margin-left:.5rem !important}.ms-3{margin-left:1rem 
!important}.ms-4{margin-left:1.5rem !important}.ms-5{margin-left:3rem !important}.ms-auto{margin-left:auto !important}.p-0{padding:0 !important}.p-1{padding:.25rem !important}.p-2{padding:.5rem !important}.p-3{padding:1rem !important}.p-4{padding:1.5rem !important}.p-5{padding:3rem !important}.px-0{padding-right:0 !important;padding-left:0 !important}.px-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-3{padding-right:1rem !important;padding-left:1rem !important}.px-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-5{padding-right:3rem !important;padding-left:3rem !important}.py-0{padding-top:0 !important;padding-bottom:0 !important}.py-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-0{padding-top:0 !important}.pt-1{padding-top:.25rem !important}.pt-2{padding-top:.5rem !important}.pt-3{padding-top:1rem !important}.pt-4{padding-top:1.5rem !important}.pt-5{padding-top:3rem !important}.pe-0{padding-right:0 !important}.pe-1{padding-right:.25rem !important}.pe-2{padding-right:.5rem !important}.pe-3{padding-right:1rem !important}.pe-4{padding-right:1.5rem !important}.pe-5{padding-right:3rem !important}.pb-0{padding-bottom:0 !important}.pb-1{padding-bottom:.25rem !important}.pb-2{padding-bottom:.5rem !important}.pb-3{padding-bottom:1rem !important}.pb-4{padding-bottom:1.5rem !important}.pb-5{padding-bottom:3rem !important}.ps-0{padding-left:0 !important}.ps-1{padding-left:.25rem !important}.ps-2{padding-left:.5rem !important}.ps-3{padding-left:1rem !important}.ps-4{padding-left:1.5rem !important}.ps-5{padding-left:3rem !important}.font-monospace{font-family:var(--bs-font-monospace) !important}.fs-1{font-size:calc(1.325rem + 0.9vw) !important}.fs-2{font-size:calc(1.29rem + 0.48vw) !important}.fs-3{font-size:calc(1.27rem + 0.24vw) !important}.fs-4{font-size:1.25rem !important}.fs-5{font-size:1.1rem !important}.fs-6{font-size:1rem !important}.fst-italic{font-style:italic !important}.fst-normal{font-style:normal !important}.fw-light{font-weight:300 !important}.fw-lighter{font-weight:lighter !important}.fw-normal{font-weight:400 !important}.fw-bold{font-weight:700 !important}.fw-bolder{font-weight:bolder !important}.lh-1{line-height:1 !important}.lh-sm{line-height:1.25 !important}.lh-base{line-height:1.5 !important}.lh-lg{line-height:2 !important}.text-start{text-align:left !important}.text-end{text-align:right !important}.text-center{text-align:center !important}.text-decoration-none{text-decoration:none !important}.text-decoration-underline{text-decoration:underline !important}.text-decoration-line-through{text-decoration:line-through !important}.text-lowercase{text-transform:lowercase !important}.text-uppercase{text-transform:uppercase !important}.text-capitalize{text-transform:capitalize !important}.text-wrap{white-space:normal !important}.text-nowrap{white-space:nowrap !important}.text-break{word-wrap:break-word !important;word-break:break-word !important}.text-default{--bs-text-opacity: 1;color:rgba(var(--bs-default-rgb), var(--bs-text-opacity)) !important}.text-primary{--bs-text-opacity: 1;color:rgba(var(--bs-primary-rgb), var(--bs-text-opacity)) !important}.text-secondary{--bs-text-opacity: 
1;color:rgba(var(--bs-secondary-rgb), var(--bs-text-opacity)) !important}.text-success{--bs-text-opacity: 1;color:rgba(var(--bs-success-rgb), var(--bs-text-opacity)) !important}.text-info{--bs-text-opacity: 1;color:rgba(var(--bs-info-rgb), var(--bs-text-opacity)) !important}.text-warning{--bs-text-opacity: 1;color:rgba(var(--bs-warning-rgb), var(--bs-text-opacity)) !important}.text-danger{--bs-text-opacity: 1;color:rgba(var(--bs-danger-rgb), var(--bs-text-opacity)) !important}.text-light{--bs-text-opacity: 1;color:rgba(var(--bs-light-rgb), var(--bs-text-opacity)) !important}.text-dark{--bs-text-opacity: 1;color:rgba(var(--bs-dark-rgb), var(--bs-text-opacity)) !important}.text-black{--bs-text-opacity: 1;color:rgba(var(--bs-black-rgb), var(--bs-text-opacity)) !important}.text-white{--bs-text-opacity: 1;color:rgba(var(--bs-white-rgb), var(--bs-text-opacity)) !important}.text-body{--bs-text-opacity: 1;color:rgba(var(--bs-body-color-rgb), var(--bs-text-opacity)) !important}.text-muted{--bs-text-opacity: 1;color:#6c757d !important}.text-black-50{--bs-text-opacity: 1;color:rgba(0,0,0,.5) !important}.text-white-50{--bs-text-opacity: 1;color:rgba(255,255,255,.5) !important}.text-reset{--bs-text-opacity: 1;color:inherit !important}.text-opacity-25{--bs-text-opacity: 0.25}.text-opacity-50{--bs-text-opacity: 0.5}.text-opacity-75{--bs-text-opacity: 0.75}.text-opacity-100{--bs-text-opacity: 1}.bg-default{--bs-bg-opacity: 1;background-color:rgba(var(--bs-default-rgb), var(--bs-bg-opacity)) !important}.bg-primary{--bs-bg-opacity: 1;background-color:rgba(var(--bs-primary-rgb), var(--bs-bg-opacity)) !important}.bg-secondary{--bs-bg-opacity: 1;background-color:rgba(var(--bs-secondary-rgb), var(--bs-bg-opacity)) !important}.bg-success{--bs-bg-opacity: 1;background-color:rgba(var(--bs-success-rgb), var(--bs-bg-opacity)) !important}.bg-info{--bs-bg-opacity: 1;background-color:rgba(var(--bs-info-rgb), var(--bs-bg-opacity)) !important}.bg-warning{--bs-bg-opacity: 1;background-color:rgba(var(--bs-warning-rgb), var(--bs-bg-opacity)) !important}.bg-danger{--bs-bg-opacity: 1;background-color:rgba(var(--bs-danger-rgb), var(--bs-bg-opacity)) !important}.bg-light{--bs-bg-opacity: 1;background-color:rgba(var(--bs-light-rgb), var(--bs-bg-opacity)) !important}.bg-dark{--bs-bg-opacity: 1;background-color:rgba(var(--bs-dark-rgb), var(--bs-bg-opacity)) !important}.bg-black{--bs-bg-opacity: 1;background-color:rgba(var(--bs-black-rgb), var(--bs-bg-opacity)) !important}.bg-white{--bs-bg-opacity: 1;background-color:rgba(var(--bs-white-rgb), var(--bs-bg-opacity)) !important}.bg-body{--bs-bg-opacity: 1;background-color:rgba(var(--bs-body-bg-rgb), var(--bs-bg-opacity)) !important}.bg-transparent{--bs-bg-opacity: 1;background-color:rgba(0,0,0,0) !important}.bg-opacity-10{--bs-bg-opacity: 0.1}.bg-opacity-25{--bs-bg-opacity: 0.25}.bg-opacity-50{--bs-bg-opacity: 0.5}.bg-opacity-75{--bs-bg-opacity: 0.75}.bg-opacity-100{--bs-bg-opacity: 1}.bg-gradient{background-image:var(--bs-gradient) !important}.user-select-all{user-select:all !important}.user-select-auto{user-select:auto !important}.user-select-none{user-select:none !important}.pe-none{pointer-events:none !important}.pe-auto{pointer-events:auto !important}.rounded{border-radius:.25rem !important}.rounded-0{border-radius:0 !important}.rounded-1{border-radius:.2em !important}.rounded-2{border-radius:.25rem !important}.rounded-3{border-radius:.3rem !important}.rounded-circle{border-radius:50% !important}.rounded-pill{border-radius:50rem 
!important}.rounded-top{border-top-left-radius:.25rem !important;border-top-right-radius:.25rem !important}.rounded-end{border-top-right-radius:.25rem !important;border-bottom-right-radius:.25rem !important}.rounded-bottom{border-bottom-right-radius:.25rem !important;border-bottom-left-radius:.25rem !important}.rounded-start{border-bottom-left-radius:.25rem !important;border-top-left-radius:.25rem !important}.visible{visibility:visible !important}.invisible{visibility:hidden !important}@media(min-width: 576px){.float-sm-start{float:left !important}.float-sm-end{float:right !important}.float-sm-none{float:none !important}.d-sm-inline{display:inline !important}.d-sm-inline-block{display:inline-block !important}.d-sm-block{display:block !important}.d-sm-grid{display:grid !important}.d-sm-table{display:table !important}.d-sm-table-row{display:table-row !important}.d-sm-table-cell{display:table-cell !important}.d-sm-flex{display:flex !important}.d-sm-inline-flex{display:inline-flex !important}.d-sm-none{display:none !important}.flex-sm-fill{flex:1 1 auto !important}.flex-sm-row{flex-direction:row !important}.flex-sm-column{flex-direction:column !important}.flex-sm-row-reverse{flex-direction:row-reverse !important}.flex-sm-column-reverse{flex-direction:column-reverse !important}.flex-sm-grow-0{flex-grow:0 !important}.flex-sm-grow-1{flex-grow:1 !important}.flex-sm-shrink-0{flex-shrink:0 !important}.flex-sm-shrink-1{flex-shrink:1 !important}.flex-sm-wrap{flex-wrap:wrap !important}.flex-sm-nowrap{flex-wrap:nowrap !important}.flex-sm-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-sm-0{gap:0 !important}.gap-sm-1{gap:.25rem !important}.gap-sm-2{gap:.5rem !important}.gap-sm-3{gap:1rem !important}.gap-sm-4{gap:1.5rem !important}.gap-sm-5{gap:3rem !important}.justify-content-sm-start{justify-content:flex-start !important}.justify-content-sm-end{justify-content:flex-end !important}.justify-content-sm-center{justify-content:center !important}.justify-content-sm-between{justify-content:space-between !important}.justify-content-sm-around{justify-content:space-around !important}.justify-content-sm-evenly{justify-content:space-evenly !important}.align-items-sm-start{align-items:flex-start !important}.align-items-sm-end{align-items:flex-end !important}.align-items-sm-center{align-items:center !important}.align-items-sm-baseline{align-items:baseline !important}.align-items-sm-stretch{align-items:stretch !important}.align-content-sm-start{align-content:flex-start !important}.align-content-sm-end{align-content:flex-end !important}.align-content-sm-center{align-content:center !important}.align-content-sm-between{align-content:space-between !important}.align-content-sm-around{align-content:space-around !important}.align-content-sm-stretch{align-content:stretch !important}.align-self-sm-auto{align-self:auto !important}.align-self-sm-start{align-self:flex-start !important}.align-self-sm-end{align-self:flex-end !important}.align-self-sm-center{align-self:center !important}.align-self-sm-baseline{align-self:baseline !important}.align-self-sm-stretch{align-self:stretch !important}.order-sm-first{order:-1 !important}.order-sm-0{order:0 !important}.order-sm-1{order:1 !important}.order-sm-2{order:2 !important}.order-sm-3{order:3 !important}.order-sm-4{order:4 !important}.order-sm-5{order:5 !important}.order-sm-last{order:6 !important}.m-sm-0{margin:0 !important}.m-sm-1{margin:.25rem !important}.m-sm-2{margin:.5rem !important}.m-sm-3{margin:1rem !important}.m-sm-4{margin:1.5rem !important}.m-sm-5{margin:3rem 
!important}.m-sm-auto{margin:auto !important}.mx-sm-0{margin-right:0 !important;margin-left:0 !important}.mx-sm-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-sm-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-sm-3{margin-right:1rem !important;margin-left:1rem !important}.mx-sm-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-sm-5{margin-right:3rem !important;margin-left:3rem !important}.mx-sm-auto{margin-right:auto !important;margin-left:auto !important}.my-sm-0{margin-top:0 !important;margin-bottom:0 !important}.my-sm-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-sm-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-sm-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-sm-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-sm-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-sm-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-sm-0{margin-top:0 !important}.mt-sm-1{margin-top:.25rem !important}.mt-sm-2{margin-top:.5rem !important}.mt-sm-3{margin-top:1rem !important}.mt-sm-4{margin-top:1.5rem !important}.mt-sm-5{margin-top:3rem !important}.mt-sm-auto{margin-top:auto !important}.me-sm-0{margin-right:0 !important}.me-sm-1{margin-right:.25rem !important}.me-sm-2{margin-right:.5rem !important}.me-sm-3{margin-right:1rem !important}.me-sm-4{margin-right:1.5rem !important}.me-sm-5{margin-right:3rem !important}.me-sm-auto{margin-right:auto !important}.mb-sm-0{margin-bottom:0 !important}.mb-sm-1{margin-bottom:.25rem !important}.mb-sm-2{margin-bottom:.5rem !important}.mb-sm-3{margin-bottom:1rem !important}.mb-sm-4{margin-bottom:1.5rem !important}.mb-sm-5{margin-bottom:3rem !important}.mb-sm-auto{margin-bottom:auto !important}.ms-sm-0{margin-left:0 !important}.ms-sm-1{margin-left:.25rem !important}.ms-sm-2{margin-left:.5rem !important}.ms-sm-3{margin-left:1rem !important}.ms-sm-4{margin-left:1.5rem !important}.ms-sm-5{margin-left:3rem !important}.ms-sm-auto{margin-left:auto !important}.p-sm-0{padding:0 !important}.p-sm-1{padding:.25rem !important}.p-sm-2{padding:.5rem !important}.p-sm-3{padding:1rem !important}.p-sm-4{padding:1.5rem !important}.p-sm-5{padding:3rem !important}.px-sm-0{padding-right:0 !important;padding-left:0 !important}.px-sm-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-sm-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-sm-3{padding-right:1rem !important;padding-left:1rem !important}.px-sm-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-sm-5{padding-right:3rem !important;padding-left:3rem !important}.py-sm-0{padding-top:0 !important;padding-bottom:0 !important}.py-sm-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-sm-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-sm-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-sm-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-sm-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-sm-0{padding-top:0 !important}.pt-sm-1{padding-top:.25rem !important}.pt-sm-2{padding-top:.5rem !important}.pt-sm-3{padding-top:1rem !important}.pt-sm-4{padding-top:1.5rem !important}.pt-sm-5{padding-top:3rem !important}.pe-sm-0{padding-right:0 !important}.pe-sm-1{padding-right:.25rem !important}.pe-sm-2{padding-right:.5rem !important}.pe-sm-3{padding-right:1rem !important}.pe-sm-4{padding-right:1.5rem !important}.pe-sm-5{padding-right:3rem 
!important}.pb-sm-0{padding-bottom:0 !important}.pb-sm-1{padding-bottom:.25rem !important}.pb-sm-2{padding-bottom:.5rem !important}.pb-sm-3{padding-bottom:1rem !important}.pb-sm-4{padding-bottom:1.5rem !important}.pb-sm-5{padding-bottom:3rem !important}.ps-sm-0{padding-left:0 !important}.ps-sm-1{padding-left:.25rem !important}.ps-sm-2{padding-left:.5rem !important}.ps-sm-3{padding-left:1rem !important}.ps-sm-4{padding-left:1.5rem !important}.ps-sm-5{padding-left:3rem !important}.text-sm-start{text-align:left !important}.text-sm-end{text-align:right !important}.text-sm-center{text-align:center !important}}@media(min-width: 768px){.float-md-start{float:left !important}.float-md-end{float:right !important}.float-md-none{float:none !important}.d-md-inline{display:inline !important}.d-md-inline-block{display:inline-block !important}.d-md-block{display:block !important}.d-md-grid{display:grid !important}.d-md-table{display:table !important}.d-md-table-row{display:table-row !important}.d-md-table-cell{display:table-cell !important}.d-md-flex{display:flex !important}.d-md-inline-flex{display:inline-flex !important}.d-md-none{display:none !important}.flex-md-fill{flex:1 1 auto !important}.flex-md-row{flex-direction:row !important}.flex-md-column{flex-direction:column !important}.flex-md-row-reverse{flex-direction:row-reverse !important}.flex-md-column-reverse{flex-direction:column-reverse !important}.flex-md-grow-0{flex-grow:0 !important}.flex-md-grow-1{flex-grow:1 !important}.flex-md-shrink-0{flex-shrink:0 !important}.flex-md-shrink-1{flex-shrink:1 !important}.flex-md-wrap{flex-wrap:wrap !important}.flex-md-nowrap{flex-wrap:nowrap !important}.flex-md-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-md-0{gap:0 !important}.gap-md-1{gap:.25rem !important}.gap-md-2{gap:.5rem !important}.gap-md-3{gap:1rem !important}.gap-md-4{gap:1.5rem !important}.gap-md-5{gap:3rem !important}.justify-content-md-start{justify-content:flex-start !important}.justify-content-md-end{justify-content:flex-end !important}.justify-content-md-center{justify-content:center !important}.justify-content-md-between{justify-content:space-between !important}.justify-content-md-around{justify-content:space-around !important}.justify-content-md-evenly{justify-content:space-evenly !important}.align-items-md-start{align-items:flex-start !important}.align-items-md-end{align-items:flex-end !important}.align-items-md-center{align-items:center !important}.align-items-md-baseline{align-items:baseline !important}.align-items-md-stretch{align-items:stretch !important}.align-content-md-start{align-content:flex-start !important}.align-content-md-end{align-content:flex-end !important}.align-content-md-center{align-content:center !important}.align-content-md-between{align-content:space-between !important}.align-content-md-around{align-content:space-around !important}.align-content-md-stretch{align-content:stretch !important}.align-self-md-auto{align-self:auto !important}.align-self-md-start{align-self:flex-start !important}.align-self-md-end{align-self:flex-end !important}.align-self-md-center{align-self:center !important}.align-self-md-baseline{align-self:baseline !important}.align-self-md-stretch{align-self:stretch !important}.order-md-first{order:-1 !important}.order-md-0{order:0 !important}.order-md-1{order:1 !important}.order-md-2{order:2 !important}.order-md-3{order:3 !important}.order-md-4{order:4 !important}.order-md-5{order:5 !important}.order-md-last{order:6 !important}.m-md-0{margin:0 !important}.m-md-1{margin:.25rem 
!important}.m-md-2{margin:.5rem !important}.m-md-3{margin:1rem !important}.m-md-4{margin:1.5rem !important}.m-md-5{margin:3rem !important}.m-md-auto{margin:auto !important}.mx-md-0{margin-right:0 !important;margin-left:0 !important}.mx-md-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-md-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-md-3{margin-right:1rem !important;margin-left:1rem !important}.mx-md-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-md-5{margin-right:3rem !important;margin-left:3rem !important}.mx-md-auto{margin-right:auto !important;margin-left:auto !important}.my-md-0{margin-top:0 !important;margin-bottom:0 !important}.my-md-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-md-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-md-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-md-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-md-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-md-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-md-0{margin-top:0 !important}.mt-md-1{margin-top:.25rem !important}.mt-md-2{margin-top:.5rem !important}.mt-md-3{margin-top:1rem !important}.mt-md-4{margin-top:1.5rem !important}.mt-md-5{margin-top:3rem !important}.mt-md-auto{margin-top:auto !important}.me-md-0{margin-right:0 !important}.me-md-1{margin-right:.25rem !important}.me-md-2{margin-right:.5rem !important}.me-md-3{margin-right:1rem !important}.me-md-4{margin-right:1.5rem !important}.me-md-5{margin-right:3rem !important}.me-md-auto{margin-right:auto !important}.mb-md-0{margin-bottom:0 !important}.mb-md-1{margin-bottom:.25rem !important}.mb-md-2{margin-bottom:.5rem !important}.mb-md-3{margin-bottom:1rem !important}.mb-md-4{margin-bottom:1.5rem !important}.mb-md-5{margin-bottom:3rem !important}.mb-md-auto{margin-bottom:auto !important}.ms-md-0{margin-left:0 !important}.ms-md-1{margin-left:.25rem !important}.ms-md-2{margin-left:.5rem !important}.ms-md-3{margin-left:1rem !important}.ms-md-4{margin-left:1.5rem !important}.ms-md-5{margin-left:3rem !important}.ms-md-auto{margin-left:auto !important}.p-md-0{padding:0 !important}.p-md-1{padding:.25rem !important}.p-md-2{padding:.5rem !important}.p-md-3{padding:1rem !important}.p-md-4{padding:1.5rem !important}.p-md-5{padding:3rem !important}.px-md-0{padding-right:0 !important;padding-left:0 !important}.px-md-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-md-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-md-3{padding-right:1rem !important;padding-left:1rem !important}.px-md-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-md-5{padding-right:3rem !important;padding-left:3rem !important}.py-md-0{padding-top:0 !important;padding-bottom:0 !important}.py-md-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-md-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-md-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-md-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-md-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-md-0{padding-top:0 !important}.pt-md-1{padding-top:.25rem !important}.pt-md-2{padding-top:.5rem !important}.pt-md-3{padding-top:1rem !important}.pt-md-4{padding-top:1.5rem !important}.pt-md-5{padding-top:3rem !important}.pe-md-0{padding-right:0 !important}.pe-md-1{padding-right:.25rem !important}.pe-md-2{padding-right:.5rem 
!important}.pe-md-3{padding-right:1rem !important}.pe-md-4{padding-right:1.5rem !important}.pe-md-5{padding-right:3rem !important}.pb-md-0{padding-bottom:0 !important}.pb-md-1{padding-bottom:.25rem !important}.pb-md-2{padding-bottom:.5rem !important}.pb-md-3{padding-bottom:1rem !important}.pb-md-4{padding-bottom:1.5rem !important}.pb-md-5{padding-bottom:3rem !important}.ps-md-0{padding-left:0 !important}.ps-md-1{padding-left:.25rem !important}.ps-md-2{padding-left:.5rem !important}.ps-md-3{padding-left:1rem !important}.ps-md-4{padding-left:1.5rem !important}.ps-md-5{padding-left:3rem !important}.text-md-start{text-align:left !important}.text-md-end{text-align:right !important}.text-md-center{text-align:center !important}}@media(min-width: 992px){.float-lg-start{float:left !important}.float-lg-end{float:right !important}.float-lg-none{float:none !important}.d-lg-inline{display:inline !important}.d-lg-inline-block{display:inline-block !important}.d-lg-block{display:block !important}.d-lg-grid{display:grid !important}.d-lg-table{display:table !important}.d-lg-table-row{display:table-row !important}.d-lg-table-cell{display:table-cell !important}.d-lg-flex{display:flex !important}.d-lg-inline-flex{display:inline-flex !important}.d-lg-none{display:none !important}.flex-lg-fill{flex:1 1 auto !important}.flex-lg-row{flex-direction:row !important}.flex-lg-column{flex-direction:column !important}.flex-lg-row-reverse{flex-direction:row-reverse !important}.flex-lg-column-reverse{flex-direction:column-reverse !important}.flex-lg-grow-0{flex-grow:0 !important}.flex-lg-grow-1{flex-grow:1 !important}.flex-lg-shrink-0{flex-shrink:0 !important}.flex-lg-shrink-1{flex-shrink:1 !important}.flex-lg-wrap{flex-wrap:wrap !important}.flex-lg-nowrap{flex-wrap:nowrap !important}.flex-lg-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-lg-0{gap:0 !important}.gap-lg-1{gap:.25rem !important}.gap-lg-2{gap:.5rem !important}.gap-lg-3{gap:1rem !important}.gap-lg-4{gap:1.5rem !important}.gap-lg-5{gap:3rem !important}.justify-content-lg-start{justify-content:flex-start !important}.justify-content-lg-end{justify-content:flex-end !important}.justify-content-lg-center{justify-content:center !important}.justify-content-lg-between{justify-content:space-between !important}.justify-content-lg-around{justify-content:space-around !important}.justify-content-lg-evenly{justify-content:space-evenly !important}.align-items-lg-start{align-items:flex-start !important}.align-items-lg-end{align-items:flex-end !important}.align-items-lg-center{align-items:center !important}.align-items-lg-baseline{align-items:baseline !important}.align-items-lg-stretch{align-items:stretch !important}.align-content-lg-start{align-content:flex-start !important}.align-content-lg-end{align-content:flex-end !important}.align-content-lg-center{align-content:center !important}.align-content-lg-between{align-content:space-between !important}.align-content-lg-around{align-content:space-around !important}.align-content-lg-stretch{align-content:stretch !important}.align-self-lg-auto{align-self:auto !important}.align-self-lg-start{align-self:flex-start !important}.align-self-lg-end{align-self:flex-end !important}.align-self-lg-center{align-self:center !important}.align-self-lg-baseline{align-self:baseline !important}.align-self-lg-stretch{align-self:stretch !important}.order-lg-first{order:-1 !important}.order-lg-0{order:0 !important}.order-lg-1{order:1 !important}.order-lg-2{order:2 !important}.order-lg-3{order:3 !important}.order-lg-4{order:4 
!important}.order-lg-5{order:5 !important}.order-lg-last{order:6 !important}.m-lg-0{margin:0 !important}.m-lg-1{margin:.25rem !important}.m-lg-2{margin:.5rem !important}.m-lg-3{margin:1rem !important}.m-lg-4{margin:1.5rem !important}.m-lg-5{margin:3rem !important}.m-lg-auto{margin:auto !important}.mx-lg-0{margin-right:0 !important;margin-left:0 !important}.mx-lg-1{margin-right:.25rem !important;margin-left:.25rem !important}.mx-lg-2{margin-right:.5rem !important;margin-left:.5rem !important}.mx-lg-3{margin-right:1rem !important;margin-left:1rem !important}.mx-lg-4{margin-right:1.5rem !important;margin-left:1.5rem !important}.mx-lg-5{margin-right:3rem !important;margin-left:3rem !important}.mx-lg-auto{margin-right:auto !important;margin-left:auto !important}.my-lg-0{margin-top:0 !important;margin-bottom:0 !important}.my-lg-1{margin-top:.25rem !important;margin-bottom:.25rem !important}.my-lg-2{margin-top:.5rem !important;margin-bottom:.5rem !important}.my-lg-3{margin-top:1rem !important;margin-bottom:1rem !important}.my-lg-4{margin-top:1.5rem !important;margin-bottom:1.5rem !important}.my-lg-5{margin-top:3rem !important;margin-bottom:3rem !important}.my-lg-auto{margin-top:auto !important;margin-bottom:auto !important}.mt-lg-0{margin-top:0 !important}.mt-lg-1{margin-top:.25rem !important}.mt-lg-2{margin-top:.5rem !important}.mt-lg-3{margin-top:1rem !important}.mt-lg-4{margin-top:1.5rem !important}.mt-lg-5{margin-top:3rem !important}.mt-lg-auto{margin-top:auto !important}.me-lg-0{margin-right:0 !important}.me-lg-1{margin-right:.25rem !important}.me-lg-2{margin-right:.5rem !important}.me-lg-3{margin-right:1rem !important}.me-lg-4{margin-right:1.5rem !important}.me-lg-5{margin-right:3rem !important}.me-lg-auto{margin-right:auto !important}.mb-lg-0{margin-bottom:0 !important}.mb-lg-1{margin-bottom:.25rem !important}.mb-lg-2{margin-bottom:.5rem !important}.mb-lg-3{margin-bottom:1rem !important}.mb-lg-4{margin-bottom:1.5rem !important}.mb-lg-5{margin-bottom:3rem !important}.mb-lg-auto{margin-bottom:auto !important}.ms-lg-0{margin-left:0 !important}.ms-lg-1{margin-left:.25rem !important}.ms-lg-2{margin-left:.5rem !important}.ms-lg-3{margin-left:1rem !important}.ms-lg-4{margin-left:1.5rem !important}.ms-lg-5{margin-left:3rem !important}.ms-lg-auto{margin-left:auto !important}.p-lg-0{padding:0 !important}.p-lg-1{padding:.25rem !important}.p-lg-2{padding:.5rem !important}.p-lg-3{padding:1rem !important}.p-lg-4{padding:1.5rem !important}.p-lg-5{padding:3rem !important}.px-lg-0{padding-right:0 !important;padding-left:0 !important}.px-lg-1{padding-right:.25rem !important;padding-left:.25rem !important}.px-lg-2{padding-right:.5rem !important;padding-left:.5rem !important}.px-lg-3{padding-right:1rem !important;padding-left:1rem !important}.px-lg-4{padding-right:1.5rem !important;padding-left:1.5rem !important}.px-lg-5{padding-right:3rem !important;padding-left:3rem !important}.py-lg-0{padding-top:0 !important;padding-bottom:0 !important}.py-lg-1{padding-top:.25rem !important;padding-bottom:.25rem !important}.py-lg-2{padding-top:.5rem !important;padding-bottom:.5rem !important}.py-lg-3{padding-top:1rem !important;padding-bottom:1rem !important}.py-lg-4{padding-top:1.5rem !important;padding-bottom:1.5rem !important}.py-lg-5{padding-top:3rem !important;padding-bottom:3rem !important}.pt-lg-0{padding-top:0 !important}.pt-lg-1{padding-top:.25rem !important}.pt-lg-2{padding-top:.5rem !important}.pt-lg-3{padding-top:1rem !important}.pt-lg-4{padding-top:1.5rem !important}.pt-lg-5{padding-top:3rem 
!important}.pe-lg-0{padding-right:0 !important}.pe-lg-1{padding-right:.25rem !important}.pe-lg-2{padding-right:.5rem !important}.pe-lg-3{padding-right:1rem !important}.pe-lg-4{padding-right:1.5rem !important}.pe-lg-5{padding-right:3rem !important}.pb-lg-0{padding-bottom:0 !important}.pb-lg-1{padding-bottom:.25rem !important}.pb-lg-2{padding-bottom:.5rem !important}.pb-lg-3{padding-bottom:1rem !important}.pb-lg-4{padding-bottom:1.5rem !important}.pb-lg-5{padding-bottom:3rem !important}.ps-lg-0{padding-left:0 !important}.ps-lg-1{padding-left:.25rem !important}.ps-lg-2{padding-left:.5rem !important}.ps-lg-3{padding-left:1rem !important}.ps-lg-4{padding-left:1.5rem !important}.ps-lg-5{padding-left:3rem !important}.text-lg-start{text-align:left !important}.text-lg-end{text-align:right !important}.text-lg-center{text-align:center !important}}@media(min-width: 1200px){.float-xl-start{float:left !important}.float-xl-end{float:right !important}.float-xl-none{float:none !important}.d-xl-inline{display:inline !important}.d-xl-inline-block{display:inline-block !important}.d-xl-block{display:block !important}.d-xl-grid{display:grid !important}.d-xl-table{display:table !important}.d-xl-table-row{display:table-row !important}.d-xl-table-cell{display:table-cell !important}.d-xl-flex{display:flex !important}.d-xl-inline-flex{display:inline-flex !important}.d-xl-none{display:none !important}.flex-xl-fill{flex:1 1 auto !important}.flex-xl-row{flex-direction:row !important}.flex-xl-column{flex-direction:column !important}.flex-xl-row-reverse{flex-direction:row-reverse !important}.flex-xl-column-reverse{flex-direction:column-reverse !important}.flex-xl-grow-0{flex-grow:0 !important}.flex-xl-grow-1{flex-grow:1 !important}.flex-xl-shrink-0{flex-shrink:0 !important}.flex-xl-shrink-1{flex-shrink:1 !important}.flex-xl-wrap{flex-wrap:wrap !important}.flex-xl-nowrap{flex-wrap:nowrap !important}.flex-xl-wrap-reverse{flex-wrap:wrap-reverse !important}.gap-xl-0{gap:0 !important}.gap-xl-1{gap:.25rem !important}.gap-xl-2{gap:.5rem !important}.gap-xl-3{gap:1rem !important}.gap-xl-4{gap:1.5rem !important}.gap-xl-5{gap:3rem !important}.justify-content-xl-start{justify-content:flex-start !important}.justify-content-xl-end{justify-content:flex-end !important}.justify-content-xl-center{justify-content:center !important}.justify-content-xl-between{justify-content:space-between !important}.justify-content-xl-around{justify-content:space-around !important}.justify-content-xl-evenly{justify-content:space-evenly !important}.align-items-xl-start{align-items:flex-start !important}.align-items-xl-end{align-items:flex-end !important}.align-items-xl-center{align-items:center !important}.align-items-xl-baseline{align-items:baseline !important}.align-items-xl-stretch{align-items:stretch !important}.align-content-xl-start{align-content:flex-start !important}.align-content-xl-end{align-content:flex-end !important}.align-content-xl-center{align-content:center !important}.align-content-xl-between{align-content:space-between !important}.align-content-xl-around{align-content:space-around !important}.align-content-xl-stretch{align-content:stretch !important}.align-self-xl-auto{align-self:auto !important}.align-self-xl-start{align-self:flex-start !important}.align-self-xl-end{align-self:flex-end !important}.align-self-xl-center{align-self:center !important}.align-self-xl-baseline{align-self:baseline !important}.align-self-xl-stretch{align-self:stretch !important}.order-xl-first{order:-1 !important}.order-xl-0{order:0 !important}.order-xl-1{order:1 
tbody.gt_table_body{border-top-width:1px;border-bottom-width:1px;border-bottom-color:var(--quarto-border-color);border-top-color:currentColor}div.columns{display:initial;gap:initial}div.column{display:inline-block;overflow-x:initial;vertical-align:top;width:50%}.code-annotation-tip-content{word-wrap:break-word}.code-annotation-container-hidden{display:none !important}dl.code-annotation-container-grid{display:grid;grid-template-columns:min-content auto}dl.code-annotation-container-grid dt{grid-column:1}dl.code-annotation-container-grid dd{grid-column:2}pre.sourceCode.code-annotation-code{padding-right:0}code.sourceCode .code-annotation-anchor{z-index:100;position:absolute;right:.5em;left:inherit;background-color:rgba(0,0,0,0)}:root{--mermaid-bg-color: #fff;--mermaid-edge-color: #373a3c;--mermaid-node-fg-color: #373a3c;--mermaid-fg-color: #373a3c;--mermaid-fg-color--lighter: #4f5457;--mermaid-fg-color--lightest: #686d71;--mermaid-font-family: Source Sans Pro, -apple-system, BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol;--mermaid-label-bg-color: #fff;--mermaid-label-fg-color: #2780e3;--mermaid-node-bg-color: rgba(39, 128, 227, 0.1);--mermaid-node-fg-color: #373a3c}@media print{:root{font-size:11pt}#quarto-sidebar,#TOC,.nav-page{display:none}.page-columns .content{grid-column-start:page-start}.fixed-top{position:relative}.panel-caption,.figure-caption,figcaption{color:#666}}.code-copy-button{position:absolute;top:0;right:0;border:0;margin-top:5px;margin-right:5px;background-color:rgba(0,0,0,0);z-index:3}.code-copy-button:focus{outline:none}.code-copy-button-tooltip{font-size:.75em}pre.sourceCode:hover>.code-copy-button>.bi::before{display:inline-block;height:1rem;width:1rem;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:1rem 1rem}pre.sourceCode:hover>.code-copy-button-checked>.bi::before{background-image:url('data:image/svg+xml,')}pre.sourceCode:hover>.code-copy-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}pre.sourceCode:hover>.code-copy-button-checked:hover>.bi::before{background-image:url('data:image/svg+xml,')}main ol ol,main ul ul,main ol ul,main ul ol{margin-bottom:1em}ul>li:not(:has(>p))>ul,ol>li:not(:has(>p))>ul,ul>li:not(:has(>p))>ol,ol>li:not(:has(>p))>ol{margin-bottom:0}ul>li:not(:has(>p))>ul>li:has(>p),ol>li:not(:has(>p))>ul>li:has(>p),ul>li:not(:has(>p))>ol>li:has(>p),ol>li:not(:has(>p))>ol>li:has(>p){margin-top:1rem}body{margin:0}main.page-columns>header>h1.title,main.page-columns>header>.title.h1{margin-bottom:0}@media(min-width: 992px){body .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.fullcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] 35px [page-end-inset page-end] 5fr [screen-end-inset] 1.5em}body.slimcontent:not(.floating):not(.docked) 
.page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset] 35px [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.listing:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 850px - 3em )) [body-content-end] 3em [body-end] 50px [body-end-outset] minmax(0px, 250px) [page-end-inset] minmax(50px, 100px) [page-end] 1fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 175px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 175px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] minmax(25px, 50px) [page-start-inset] minmax(50px, 150px) [body-start-outset] minmax(25px, 50px) [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] minmax(25px, 50px) [body-end-outset] minmax(50px, 150px) [page-end-inset] minmax(25px, 50px) [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(50px, 100px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 50px [page-start-inset] minmax(50px, 150px) [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.slimcontent 
.page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(450px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start] minmax(50px, 100px) [page-start-inset] 50px [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(0px, 200px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 50px [page-start-inset] minmax(50px, 150px) [body-start-outset] 50px [body-start] 1.5em [body-content-start] minmax(450px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(50px, 150px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] minmax(25px, 50px) [page-start-inset] minmax(50px, 150px) [body-start-outset] minmax(25px, 50px) [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] minmax(25px, 50px) [body-end-outset] minmax(50px, 150px) [page-end-inset] minmax(25px, 50px) [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}}@media(max-width: 991.98px){body .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.fullcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.slimcontent:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.listing:not(.floating):not(.docked) .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset] 5fr [body-start] 1.5em [body-content-start] minmax(500px, calc( 1250px - 3em )) [body-content-end body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 145px) 
[body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start] 35px [page-start-inset] minmax(0px, 145px) [body-start-outset] 35px [body-start] 1.5em [body-content-start] minmax(450px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1.5em [body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(75px, 150px) [page-end-inset] 25px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 1000px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc( 800px - 3em )) [body-content-end] 1.5em [body-end body-end-outset page-end-inset page-end] 4fr [screen-end-inset] 1.5em [screen-end]}body.docked.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.docked.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 50px [body-end-outset] minmax(25px, 50px) [page-end-inset] 50px [page-end] 5fr [screen-end-inset] 1.5em [screen-end]}body.floating.slimcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 1.5em [body-end] 35px [body-end-outset] minmax(75px, 145px) [page-end-inset] 35px [page-end] 4fr [screen-end-inset] 1.5em [screen-end]}body.floating.listing .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset] 5fr [page-start page-start-inset body-start-outset body-start] 1em [body-content-start] minmax(500px, calc( 750px - 3em )) [body-content-end] 
1.5em [body-end] 50px [body-end-outset] minmax(75px, 150px) [page-end-inset] 25px [page-end] 4fr [screen-end-inset] 1.5em [screen-end]}}@media(max-width: 767.98px){body .page-columns,body.fullcontent:not(.floating):not(.docked) .page-columns,body.slimcontent:not(.floating):not(.docked) .page-columns,body.docked .page-columns,body.docked.slimcontent .page-columns,body.docked.fullcontent .page-columns,body.floating .page-columns,body.floating.slimcontent .page-columns,body.floating.fullcontent .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}body:not(.floating):not(.docked) .page-columns.toc-left .page-columns{display:grid;gap:0;grid-template-columns:[screen-start] 1.5em [screen-start-inset page-start page-start-inset body-start-outset body-start body-content-start] minmax(0px, 1fr) [body-content-end body-end body-end-outset page-end-inset page-end screen-end-inset] 1.5em [screen-end]}nav[role=doc-toc]{display:none}}body,.page-row-navigation{grid-template-rows:[page-top] max-content [contents-top] max-content [contents-bottom] max-content [page-bottom]}.page-rows-contents{grid-template-rows:[content-top] minmax(max-content, 1fr) [content-bottom] minmax(60px, max-content) [page-bottom]}.page-full{grid-column:screen-start/screen-end !important}.page-columns>*{grid-column:body-content-start/body-content-end}.page-columns.column-page>*{grid-column:page-start/page-end}.page-columns.column-page-left>*{grid-column:page-start/body-content-end}.page-columns.column-page-right>*{grid-column:body-content-start/page-end}.page-rows{grid-auto-rows:auto}.header{grid-column:screen-start/screen-end;grid-row:page-top/contents-top}#quarto-content{padding:0;grid-column:screen-start/screen-end;grid-row:contents-top/contents-bottom}body.floating .sidebar.sidebar-navigation{grid-column:page-start/body-start;grid-row:content-top/page-bottom}body.docked .sidebar.sidebar-navigation{grid-column:screen-start/body-start;grid-row:content-top/page-bottom}.sidebar.toc-left{grid-column:page-start/body-start;grid-row:content-top/page-bottom}.sidebar.margin-sidebar{grid-column:body-end/page-end;grid-row:content-top/page-bottom}.page-columns .content{grid-column:body-content-start/body-content-end;grid-row:content-top/content-bottom;align-content:flex-start}.page-columns .page-navigation{grid-column:body-content-start/body-content-end;grid-row:content-bottom/page-bottom}.page-columns .footer{grid-column:screen-start/screen-end;grid-row:contents-bottom/page-bottom}.page-columns .column-body{grid-column:body-content-start/body-content-end}.page-columns .column-body-fullbleed{grid-column:body-start/body-end}.page-columns .column-body-outset{grid-column:body-start-outset/body-end-outset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-body-outset table{background:#fff}.page-columns .column-body-outset-left{grid-column:body-start-outset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-body-outset-left 
table{background:#fff}.page-columns .column-body-outset-right{grid-column:body-content-start/body-end-outset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-body-outset-right table{background:#fff}.page-columns .column-page{grid-column:page-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page table{background:#fff}.page-columns .column-page-inset{grid-column:page-start-inset/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-inset table{background:#fff}.page-columns .column-page-inset-left{grid-column:page-start-inset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-inset-left table{background:#fff}.page-columns .column-page-inset-right{grid-column:body-content-start/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-inset-right figcaption table{background:#fff}.page-columns .column-page-left{grid-column:page-start/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-left table{background:#fff}.page-columns .column-page-right{grid-column:body-content-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-page-right figcaption table{background:#fff}#quarto-content.page-columns #quarto-margin-sidebar,#quarto-content.page-columns #quarto-sidebar{z-index:1}@media(max-width: 991.98px){#quarto-content.page-columns #quarto-margin-sidebar.collapse,#quarto-content.page-columns #quarto-sidebar.collapse,#quarto-content.page-columns #quarto-margin-sidebar.collapsing,#quarto-content.page-columns #quarto-sidebar.collapsing{z-index:1055}}#quarto-content.page-columns main.column-page,#quarto-content.page-columns main.column-page-right,#quarto-content.page-columns main.column-page-left{z-index:0}.page-columns .column-screen-inset{grid-column:screen-start-inset/screen-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset table{background:#fff}.page-columns .column-screen-inset-left{grid-column:screen-start-inset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-left table{background:#fff}.page-columns .column-screen-inset-right{grid-column:body-content-start/screen-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-right table{background:#fff}.page-columns .column-screen{grid-column:screen-start/screen-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen table{background:#fff}.page-columns .column-screen-left{grid-column:screen-start/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-left table{background:#fff}.page-columns .column-screen-right{grid-column:body-content-start/screen-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-right table{background:#fff}.page-columns .column-screen-inset-shaded{grid-column:screen-start/screen-end;padding:1em;background:#f8f9fa;z-index:998;transform:translate3d(0, 0, 0);margin-bottom:1em}.zindex-content{z-index:998;transform:translate3d(0, 0, 0)}.zindex-modal{z-index:1055;transform:translate3d(0, 0, 0)}.zindex-over-content{z-index:999;transform:translate3d(0, 0, 0)}img.img-fluid.column-screen,img.img-fluid.column-screen-inset-shaded,img.img-fluid.column-screen-inset,img.img-fluid.column-screen-inset-left,img.img-fluid.column-screen-inset-right,img.img-fluid.column-screen-left,img.img-fluid.column-screen-right{width:100%}@media(min-width: 
992px){.margin-caption,div.aside,aside:not(.footnotes),.column-margin{grid-column:body-end/page-end !important;z-index:998}.column-sidebar{grid-column:page-start/body-start !important;z-index:998}.column-leftmargin{grid-column:screen-start-inset/body-start !important;z-index:998}.no-row-height{height:1em;overflow:visible}}@media(max-width: 991.98px){.margin-caption,div.aside,aside:not(.footnotes),.column-margin{grid-column:body-end/page-end !important;z-index:998}.no-row-height{height:1em;overflow:visible}.page-columns.page-full{overflow:visible}.page-columns.toc-left .margin-caption,.page-columns.toc-left div.aside,.page-columns.toc-left aside:not(.footnotes),.page-columns.toc-left .column-margin{grid-column:body-content-start/body-content-end !important;z-index:998;transform:translate3d(0, 0, 0)}.page-columns.toc-left .no-row-height{height:initial;overflow:initial}}@media(max-width: 767.98px){.margin-caption,div.aside,aside:not(.footnotes),.column-margin{grid-column:body-content-start/body-content-end !important;z-index:998;transform:translate3d(0, 0, 0)}.no-row-height{height:initial;overflow:initial}#quarto-margin-sidebar{display:none}#quarto-sidebar-toc-left{display:none}.hidden-sm{display:none}}.panel-grid{display:grid;grid-template-rows:repeat(1, 1fr);grid-template-columns:repeat(24, 1fr);gap:1em}.panel-grid .g-col-1{grid-column:auto/span 1}.panel-grid .g-col-2{grid-column:auto/span 2}.panel-grid .g-col-3{grid-column:auto/span 3}.panel-grid .g-col-4{grid-column:auto/span 4}.panel-grid .g-col-5{grid-column:auto/span 5}.panel-grid .g-col-6{grid-column:auto/span 6}.panel-grid .g-col-7{grid-column:auto/span 7}.panel-grid .g-col-8{grid-column:auto/span 8}.panel-grid .g-col-9{grid-column:auto/span 9}.panel-grid .g-col-10{grid-column:auto/span 10}.panel-grid .g-col-11{grid-column:auto/span 11}.panel-grid .g-col-12{grid-column:auto/span 12}.panel-grid .g-col-13{grid-column:auto/span 13}.panel-grid .g-col-14{grid-column:auto/span 14}.panel-grid .g-col-15{grid-column:auto/span 15}.panel-grid .g-col-16{grid-column:auto/span 16}.panel-grid .g-col-17{grid-column:auto/span 17}.panel-grid .g-col-18{grid-column:auto/span 18}.panel-grid .g-col-19{grid-column:auto/span 19}.panel-grid .g-col-20{grid-column:auto/span 20}.panel-grid .g-col-21{grid-column:auto/span 21}.panel-grid .g-col-22{grid-column:auto/span 22}.panel-grid .g-col-23{grid-column:auto/span 23}.panel-grid .g-col-24{grid-column:auto/span 24}.panel-grid .g-start-1{grid-column-start:1}.panel-grid .g-start-2{grid-column-start:2}.panel-grid .g-start-3{grid-column-start:3}.panel-grid .g-start-4{grid-column-start:4}.panel-grid .g-start-5{grid-column-start:5}.panel-grid .g-start-6{grid-column-start:6}.panel-grid .g-start-7{grid-column-start:7}.panel-grid .g-start-8{grid-column-start:8}.panel-grid .g-start-9{grid-column-start:9}.panel-grid .g-start-10{grid-column-start:10}.panel-grid .g-start-11{grid-column-start:11}.panel-grid .g-start-12{grid-column-start:12}.panel-grid .g-start-13{grid-column-start:13}.panel-grid .g-start-14{grid-column-start:14}.panel-grid .g-start-15{grid-column-start:15}.panel-grid .g-start-16{grid-column-start:16}.panel-grid .g-start-17{grid-column-start:17}.panel-grid .g-start-18{grid-column-start:18}.panel-grid .g-start-19{grid-column-start:19}.panel-grid .g-start-20{grid-column-start:20}.panel-grid .g-start-21{grid-column-start:21}.panel-grid .g-start-22{grid-column-start:22}.panel-grid .g-start-23{grid-column-start:23}@media(min-width: 576px){.panel-grid .g-col-sm-1{grid-column:auto/span 1}.panel-grid 
.g-col-sm-2{grid-column:auto/span 2}.panel-grid .g-col-sm-3{grid-column:auto/span 3}.panel-grid .g-col-sm-4{grid-column:auto/span 4}.panel-grid .g-col-sm-5{grid-column:auto/span 5}.panel-grid .g-col-sm-6{grid-column:auto/span 6}.panel-grid .g-col-sm-7{grid-column:auto/span 7}.panel-grid .g-col-sm-8{grid-column:auto/span 8}.panel-grid .g-col-sm-9{grid-column:auto/span 9}.panel-grid .g-col-sm-10{grid-column:auto/span 10}.panel-grid .g-col-sm-11{grid-column:auto/span 11}.panel-grid .g-col-sm-12{grid-column:auto/span 12}.panel-grid .g-col-sm-13{grid-column:auto/span 13}.panel-grid .g-col-sm-14{grid-column:auto/span 14}.panel-grid .g-col-sm-15{grid-column:auto/span 15}.panel-grid .g-col-sm-16{grid-column:auto/span 16}.panel-grid .g-col-sm-17{grid-column:auto/span 17}.panel-grid .g-col-sm-18{grid-column:auto/span 18}.panel-grid .g-col-sm-19{grid-column:auto/span 19}.panel-grid .g-col-sm-20{grid-column:auto/span 20}.panel-grid .g-col-sm-21{grid-column:auto/span 21}.panel-grid .g-col-sm-22{grid-column:auto/span 22}.panel-grid .g-col-sm-23{grid-column:auto/span 23}.panel-grid .g-col-sm-24{grid-column:auto/span 24}.panel-grid .g-start-sm-1{grid-column-start:1}.panel-grid .g-start-sm-2{grid-column-start:2}.panel-grid .g-start-sm-3{grid-column-start:3}.panel-grid .g-start-sm-4{grid-column-start:4}.panel-grid .g-start-sm-5{grid-column-start:5}.panel-grid .g-start-sm-6{grid-column-start:6}.panel-grid .g-start-sm-7{grid-column-start:7}.panel-grid .g-start-sm-8{grid-column-start:8}.panel-grid .g-start-sm-9{grid-column-start:9}.panel-grid .g-start-sm-10{grid-column-start:10}.panel-grid .g-start-sm-11{grid-column-start:11}.panel-grid .g-start-sm-12{grid-column-start:12}.panel-grid .g-start-sm-13{grid-column-start:13}.panel-grid .g-start-sm-14{grid-column-start:14}.panel-grid .g-start-sm-15{grid-column-start:15}.panel-grid .g-start-sm-16{grid-column-start:16}.panel-grid .g-start-sm-17{grid-column-start:17}.panel-grid .g-start-sm-18{grid-column-start:18}.panel-grid .g-start-sm-19{grid-column-start:19}.panel-grid .g-start-sm-20{grid-column-start:20}.panel-grid .g-start-sm-21{grid-column-start:21}.panel-grid .g-start-sm-22{grid-column-start:22}.panel-grid .g-start-sm-23{grid-column-start:23}}@media(min-width: 768px){.panel-grid .g-col-md-1{grid-column:auto/span 1}.panel-grid .g-col-md-2{grid-column:auto/span 2}.panel-grid .g-col-md-3{grid-column:auto/span 3}.panel-grid .g-col-md-4{grid-column:auto/span 4}.panel-grid .g-col-md-5{grid-column:auto/span 5}.panel-grid .g-col-md-6{grid-column:auto/span 6}.panel-grid .g-col-md-7{grid-column:auto/span 7}.panel-grid .g-col-md-8{grid-column:auto/span 8}.panel-grid .g-col-md-9{grid-column:auto/span 9}.panel-grid .g-col-md-10{grid-column:auto/span 10}.panel-grid .g-col-md-11{grid-column:auto/span 11}.panel-grid .g-col-md-12{grid-column:auto/span 12}.panel-grid .g-col-md-13{grid-column:auto/span 13}.panel-grid .g-col-md-14{grid-column:auto/span 14}.panel-grid .g-col-md-15{grid-column:auto/span 15}.panel-grid .g-col-md-16{grid-column:auto/span 16}.panel-grid .g-col-md-17{grid-column:auto/span 17}.panel-grid .g-col-md-18{grid-column:auto/span 18}.panel-grid .g-col-md-19{grid-column:auto/span 19}.panel-grid .g-col-md-20{grid-column:auto/span 20}.panel-grid .g-col-md-21{grid-column:auto/span 21}.panel-grid .g-col-md-22{grid-column:auto/span 22}.panel-grid .g-col-md-23{grid-column:auto/span 23}.panel-grid .g-col-md-24{grid-column:auto/span 24}.panel-grid .g-start-md-1{grid-column-start:1}.panel-grid .g-start-md-2{grid-column-start:2}.panel-grid 
.g-start-md-3{grid-column-start:3}.panel-grid .g-start-md-4{grid-column-start:4}.panel-grid .g-start-md-5{grid-column-start:5}.panel-grid .g-start-md-6{grid-column-start:6}.panel-grid .g-start-md-7{grid-column-start:7}.panel-grid .g-start-md-8{grid-column-start:8}.panel-grid .g-start-md-9{grid-column-start:9}.panel-grid .g-start-md-10{grid-column-start:10}.panel-grid .g-start-md-11{grid-column-start:11}.panel-grid .g-start-md-12{grid-column-start:12}.panel-grid .g-start-md-13{grid-column-start:13}.panel-grid .g-start-md-14{grid-column-start:14}.panel-grid .g-start-md-15{grid-column-start:15}.panel-grid .g-start-md-16{grid-column-start:16}.panel-grid .g-start-md-17{grid-column-start:17}.panel-grid .g-start-md-18{grid-column-start:18}.panel-grid .g-start-md-19{grid-column-start:19}.panel-grid .g-start-md-20{grid-column-start:20}.panel-grid .g-start-md-21{grid-column-start:21}.panel-grid .g-start-md-22{grid-column-start:22}.panel-grid .g-start-md-23{grid-column-start:23}}@media(min-width: 992px){.panel-grid .g-col-lg-1{grid-column:auto/span 1}.panel-grid .g-col-lg-2{grid-column:auto/span 2}.panel-grid .g-col-lg-3{grid-column:auto/span 3}.panel-grid .g-col-lg-4{grid-column:auto/span 4}.panel-grid .g-col-lg-5{grid-column:auto/span 5}.panel-grid .g-col-lg-6{grid-column:auto/span 6}.panel-grid .g-col-lg-7{grid-column:auto/span 7}.panel-grid .g-col-lg-8{grid-column:auto/span 8}.panel-grid .g-col-lg-9{grid-column:auto/span 9}.panel-grid .g-col-lg-10{grid-column:auto/span 10}.panel-grid .g-col-lg-11{grid-column:auto/span 11}.panel-grid .g-col-lg-12{grid-column:auto/span 12}.panel-grid .g-col-lg-13{grid-column:auto/span 13}.panel-grid .g-col-lg-14{grid-column:auto/span 14}.panel-grid .g-col-lg-15{grid-column:auto/span 15}.panel-grid .g-col-lg-16{grid-column:auto/span 16}.panel-grid .g-col-lg-17{grid-column:auto/span 17}.panel-grid .g-col-lg-18{grid-column:auto/span 18}.panel-grid .g-col-lg-19{grid-column:auto/span 19}.panel-grid .g-col-lg-20{grid-column:auto/span 20}.panel-grid .g-col-lg-21{grid-column:auto/span 21}.panel-grid .g-col-lg-22{grid-column:auto/span 22}.panel-grid .g-col-lg-23{grid-column:auto/span 23}.panel-grid .g-col-lg-24{grid-column:auto/span 24}.panel-grid .g-start-lg-1{grid-column-start:1}.panel-grid .g-start-lg-2{grid-column-start:2}.panel-grid .g-start-lg-3{grid-column-start:3}.panel-grid .g-start-lg-4{grid-column-start:4}.panel-grid .g-start-lg-5{grid-column-start:5}.panel-grid .g-start-lg-6{grid-column-start:6}.panel-grid .g-start-lg-7{grid-column-start:7}.panel-grid .g-start-lg-8{grid-column-start:8}.panel-grid .g-start-lg-9{grid-column-start:9}.panel-grid .g-start-lg-10{grid-column-start:10}.panel-grid .g-start-lg-11{grid-column-start:11}.panel-grid .g-start-lg-12{grid-column-start:12}.panel-grid .g-start-lg-13{grid-column-start:13}.panel-grid .g-start-lg-14{grid-column-start:14}.panel-grid .g-start-lg-15{grid-column-start:15}.panel-grid .g-start-lg-16{grid-column-start:16}.panel-grid .g-start-lg-17{grid-column-start:17}.panel-grid .g-start-lg-18{grid-column-start:18}.panel-grid .g-start-lg-19{grid-column-start:19}.panel-grid .g-start-lg-20{grid-column-start:20}.panel-grid .g-start-lg-21{grid-column-start:21}.panel-grid .g-start-lg-22{grid-column-start:22}.panel-grid .g-start-lg-23{grid-column-start:23}}@media(min-width: 1200px){.panel-grid .g-col-xl-1{grid-column:auto/span 1}.panel-grid .g-col-xl-2{grid-column:auto/span 2}.panel-grid .g-col-xl-3{grid-column:auto/span 3}.panel-grid .g-col-xl-4{grid-column:auto/span 4}.panel-grid .g-col-xl-5{grid-column:auto/span 5}.panel-grid 
.g-col-xl-6{grid-column:auto/span 6}.panel-grid .g-col-xl-7{grid-column:auto/span 7}.panel-grid .g-col-xl-8{grid-column:auto/span 8}.panel-grid .g-col-xl-9{grid-column:auto/span 9}.panel-grid .g-col-xl-10{grid-column:auto/span 10}.panel-grid .g-col-xl-11{grid-column:auto/span 11}.panel-grid .g-col-xl-12{grid-column:auto/span 12}.panel-grid .g-col-xl-13{grid-column:auto/span 13}.panel-grid .g-col-xl-14{grid-column:auto/span 14}.panel-grid .g-col-xl-15{grid-column:auto/span 15}.panel-grid .g-col-xl-16{grid-column:auto/span 16}.panel-grid .g-col-xl-17{grid-column:auto/span 17}.panel-grid .g-col-xl-18{grid-column:auto/span 18}.panel-grid .g-col-xl-19{grid-column:auto/span 19}.panel-grid .g-col-xl-20{grid-column:auto/span 20}.panel-grid .g-col-xl-21{grid-column:auto/span 21}.panel-grid .g-col-xl-22{grid-column:auto/span 22}.panel-grid .g-col-xl-23{grid-column:auto/span 23}.panel-grid .g-col-xl-24{grid-column:auto/span 24}.panel-grid .g-start-xl-1{grid-column-start:1}.panel-grid .g-start-xl-2{grid-column-start:2}.panel-grid .g-start-xl-3{grid-column-start:3}.panel-grid .g-start-xl-4{grid-column-start:4}.panel-grid .g-start-xl-5{grid-column-start:5}.panel-grid .g-start-xl-6{grid-column-start:6}.panel-grid .g-start-xl-7{grid-column-start:7}.panel-grid .g-start-xl-8{grid-column-start:8}.panel-grid .g-start-xl-9{grid-column-start:9}.panel-grid .g-start-xl-10{grid-column-start:10}.panel-grid .g-start-xl-11{grid-column-start:11}.panel-grid .g-start-xl-12{grid-column-start:12}.panel-grid .g-start-xl-13{grid-column-start:13}.panel-grid .g-start-xl-14{grid-column-start:14}.panel-grid .g-start-xl-15{grid-column-start:15}.panel-grid .g-start-xl-16{grid-column-start:16}.panel-grid .g-start-xl-17{grid-column-start:17}.panel-grid .g-start-xl-18{grid-column-start:18}.panel-grid .g-start-xl-19{grid-column-start:19}.panel-grid .g-start-xl-20{grid-column-start:20}.panel-grid .g-start-xl-21{grid-column-start:21}.panel-grid .g-start-xl-22{grid-column-start:22}.panel-grid .g-start-xl-23{grid-column-start:23}}@media(min-width: 1400px){.panel-grid .g-col-xxl-1{grid-column:auto/span 1}.panel-grid .g-col-xxl-2{grid-column:auto/span 2}.panel-grid .g-col-xxl-3{grid-column:auto/span 3}.panel-grid .g-col-xxl-4{grid-column:auto/span 4}.panel-grid .g-col-xxl-5{grid-column:auto/span 5}.panel-grid .g-col-xxl-6{grid-column:auto/span 6}.panel-grid .g-col-xxl-7{grid-column:auto/span 7}.panel-grid .g-col-xxl-8{grid-column:auto/span 8}.panel-grid .g-col-xxl-9{grid-column:auto/span 9}.panel-grid .g-col-xxl-10{grid-column:auto/span 10}.panel-grid .g-col-xxl-11{grid-column:auto/span 11}.panel-grid .g-col-xxl-12{grid-column:auto/span 12}.panel-grid .g-col-xxl-13{grid-column:auto/span 13}.panel-grid .g-col-xxl-14{grid-column:auto/span 14}.panel-grid .g-col-xxl-15{grid-column:auto/span 15}.panel-grid .g-col-xxl-16{grid-column:auto/span 16}.panel-grid .g-col-xxl-17{grid-column:auto/span 17}.panel-grid .g-col-xxl-18{grid-column:auto/span 18}.panel-grid .g-col-xxl-19{grid-column:auto/span 19}.panel-grid .g-col-xxl-20{grid-column:auto/span 20}.panel-grid .g-col-xxl-21{grid-column:auto/span 21}.panel-grid .g-col-xxl-22{grid-column:auto/span 22}.panel-grid .g-col-xxl-23{grid-column:auto/span 23}.panel-grid .g-col-xxl-24{grid-column:auto/span 24}.panel-grid .g-start-xxl-1{grid-column-start:1}.panel-grid .g-start-xxl-2{grid-column-start:2}.panel-grid .g-start-xxl-3{grid-column-start:3}.panel-grid .g-start-xxl-4{grid-column-start:4}.panel-grid .g-start-xxl-5{grid-column-start:5}.panel-grid .g-start-xxl-6{grid-column-start:6}.panel-grid 
.g-start-xxl-7{grid-column-start:7}.panel-grid .g-start-xxl-8{grid-column-start:8}.panel-grid .g-start-xxl-9{grid-column-start:9}.panel-grid .g-start-xxl-10{grid-column-start:10}.panel-grid .g-start-xxl-11{grid-column-start:11}.panel-grid .g-start-xxl-12{grid-column-start:12}.panel-grid .g-start-xxl-13{grid-column-start:13}.panel-grid .g-start-xxl-14{grid-column-start:14}.panel-grid .g-start-xxl-15{grid-column-start:15}.panel-grid .g-start-xxl-16{grid-column-start:16}.panel-grid .g-start-xxl-17{grid-column-start:17}.panel-grid .g-start-xxl-18{grid-column-start:18}.panel-grid .g-start-xxl-19{grid-column-start:19}.panel-grid .g-start-xxl-20{grid-column-start:20}.panel-grid .g-start-xxl-21{grid-column-start:21}.panel-grid .g-start-xxl-22{grid-column-start:22}.panel-grid .g-start-xxl-23{grid-column-start:23}}main{margin-top:1em;margin-bottom:1em}h1,.h1,h2,.h2{color:#4b4f51;margin-top:2rem;margin-bottom:1rem;font-weight:600}h1.title,.title.h1{margin-top:0}h2,.h2{border-bottom:1px solid #dee2e6;padding-bottom:.5rem}h3,.h3{font-weight:600}h3,.h3,h4,.h4{opacity:.9;margin-top:1.5rem}h5,.h5,h6,.h6{opacity:.9}.header-section-number{color:#747a7f}.nav-link.active .header-section-number{color:inherit}mark,.mark{padding:0em}.panel-caption,caption,.figure-caption{font-size:.9rem}.panel-caption,.figure-caption,figcaption{color:#747a7f}.table-caption,caption{color:#373a3c}.quarto-layout-cell[data-ref-parent] caption{color:#747a7f}.column-margin figcaption,.margin-caption,div.aside,aside,.column-margin{color:#747a7f;font-size:.825rem}.panel-caption.margin-caption{text-align:inherit}.column-margin.column-container p{margin-bottom:0}.column-margin.column-container>*:not(.collapse){padding-top:.5em;padding-bottom:.5em;display:block}.column-margin.column-container>*.collapse:not(.show){display:none}@media(min-width: 768px){.column-margin.column-container .callout-margin-content:first-child{margin-top:4.5em}.column-margin.column-container .callout-margin-content-simple:first-child{margin-top:3.5em}}.margin-caption>*{padding-top:.5em;padding-bottom:.5em}@media(max-width: 767.98px){.quarto-layout-row{flex-direction:column}}.nav-tabs .nav-item{margin-top:1px;cursor:pointer}.tab-content{margin-top:0px;border-left:#dee2e6 1px solid;border-right:#dee2e6 1px solid;border-bottom:#dee2e6 1px solid;margin-left:0;padding:1em;margin-bottom:1em}@media(max-width: 767.98px){.layout-sidebar{margin-left:0;margin-right:0}}.panel-sidebar,.panel-sidebar .form-control,.panel-input,.panel-input .form-control,.selectize-dropdown{font-size:.9rem}.panel-sidebar .form-control,.panel-input .form-control{padding-top:.1rem}.tab-pane div.sourceCode{margin-top:0px}.tab-pane>p{padding-top:1em}.tab-content>.tab-pane:not(.active){display:none !important}div.sourceCode{background-color:rgba(233,236,239,.65);border:1px solid rgba(233,236,239,.65);border-radius:.25rem}pre.sourceCode{background-color:rgba(0,0,0,0)}pre.sourceCode{border:none;font-size:.875em;overflow:visible !important;padding:.4em}.callout pre.sourceCode{padding-left:0}div.sourceCode{overflow-y:hidden}.callout div.sourceCode{margin-left:initial}.blockquote{font-size:inherit;padding-left:1rem;padding-right:1.5rem;color:#747a7f}.blockquote h1:first-child,.blockquote .h1:first-child,.blockquote h2:first-child,.blockquote .h2:first-child,.blockquote h3:first-child,.blockquote .h3:first-child,.blockquote h4:first-child,.blockquote .h4:first-child,.blockquote h5:first-child,.blockquote .h5:first-child{margin-top:0}pre{background-color:initial;padding:initial;border:initial}p 
code:not(.sourceCode),li code:not(.sourceCode),td code:not(.sourceCode){background-color:#f7f7f7;padding:.2em}nav p code:not(.sourceCode),nav li code:not(.sourceCode),nav td code:not(.sourceCode){background-color:rgba(0,0,0,0);padding:0}td code:not(.sourceCode){white-space:pre-wrap}#quarto-embedded-source-code-modal>.modal-dialog{max-width:1000px;padding-left:1.75rem;padding-right:1.75rem}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-body{padding:0}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-body div.sourceCode{margin:0;padding:.2rem .2rem;border-radius:0px;border:none}#quarto-embedded-source-code-modal>.modal-dialog>.modal-content>.modal-header{padding:.7rem}.code-tools-button{font-size:1rem;padding:.15rem .15rem;margin-left:5px;color:#6c757d;background-color:rgba(0,0,0,0);transition:initial;cursor:pointer}.code-tools-button>.bi::before{display:inline-block;height:1rem;width:1rem;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:1rem 1rem}.code-tools-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}#quarto-embedded-source-code-modal .code-copy-button>.bi::before{background-image:url('data:image/svg+xml,')}#quarto-embedded-source-code-modal .code-copy-button-checked>.bi::before{background-image:url('data:image/svg+xml,')}.sidebar{will-change:top;transition:top 200ms linear;position:sticky;overflow-y:auto;padding-top:1.2em;max-height:100vh}.sidebar.toc-left,.sidebar.margin-sidebar{top:0px;padding-top:1em}.sidebar.toc-left>*,.sidebar.margin-sidebar>*{padding-top:.5em}.sidebar.quarto-banner-title-block-sidebar>*{padding-top:1.65em}figure .quarto-notebook-link{margin-top:.5em}.quarto-notebook-link{font-size:.75em;color:#6c757d;margin-bottom:1em;text-decoration:none;display:block}.quarto-notebook-link:hover{text-decoration:underline;color:#2780e3}.quarto-notebook-link::before{display:inline-block;height:.75rem;width:.75rem;margin-bottom:0em;margin-right:.25em;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:.75rem .75rem}.quarto-alternate-notebooks i.bi,.quarto-alternate-formats i.bi{margin-right:.4em}.quarto-notebook .cell-container{display:flex}.quarto-notebook .cell-container .cell{flex-grow:4}.quarto-notebook .cell-container .cell-decorator{padding-top:1.5em;padding-right:1em;text-align:right}.quarto-notebook .cell-code code{white-space:pre-wrap}.quarto-notebook h2,.quarto-notebook .h2{border-bottom:none}.sidebar .quarto-alternate-formats a,.sidebar .quarto-alternate-notebooks a{text-decoration:none}.sidebar .quarto-alternate-formats a:hover,.sidebar .quarto-alternate-notebooks a:hover{color:#2780e3}.sidebar .quarto-alternate-notebooks h2,.sidebar .quarto-alternate-notebooks .h2,.sidebar .quarto-alternate-formats h2,.sidebar .quarto-alternate-formats .h2,.sidebar nav[role=doc-toc]>h2,.sidebar nav[role=doc-toc]>.h2{font-size:.875rem;font-weight:400;margin-bottom:.5rem;margin-top:.3rem;font-family:inherit;border-bottom:0;padding-bottom:0;padding-top:0px}.sidebar .quarto-alternate-notebooks h2,.sidebar .quarto-alternate-notebooks .h2,.sidebar .quarto-alternate-formats h2,.sidebar .quarto-alternate-formats .h2{margin-top:1rem}.sidebar nav[role=doc-toc]>ul a{border-left:1px solid #e9ecef;padding-left:.6rem}.sidebar .quarto-alternate-notebooks h2>ul a,.sidebar .quarto-alternate-notebooks .h2>ul a,.sidebar .quarto-alternate-formats h2>ul a,.sidebar .quarto-alternate-formats .h2>ul 
a{border-left:none;padding-left:.6rem}.sidebar .quarto-alternate-notebooks ul a:empty,.sidebar .quarto-alternate-formats ul a:empty,.sidebar nav[role=doc-toc]>ul a:empty{display:none}.sidebar .quarto-alternate-notebooks ul,.sidebar .quarto-alternate-formats ul,.sidebar nav[role=doc-toc] ul{padding-left:0;list-style:none;font-size:.875rem;font-weight:300}.sidebar .quarto-alternate-notebooks ul li a,.sidebar .quarto-alternate-formats ul li a,.sidebar nav[role=doc-toc]>ul li a{line-height:1.1rem;padding-bottom:.2rem;padding-top:.2rem;color:inherit}.sidebar nav[role=doc-toc] ul>li>ul>li>a{padding-left:1.2em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>a{padding-left:2.4em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>ul>li>a{padding-left:3.6em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>ul>li>ul>li>a{padding-left:4.8em}.sidebar nav[role=doc-toc] ul>li>ul>li>ul>li>ul>li>ul>li>ul>li>a{padding-left:6em}.sidebar nav[role=doc-toc] ul>li>a.active,.sidebar nav[role=doc-toc] ul>li>ul>li>a.active{border-left:1px solid #2780e3;color:#2780e3 !important}.sidebar nav[role=doc-toc] ul>li>a:hover,.sidebar nav[role=doc-toc] ul>li>ul>li>a:hover{color:#2780e3 !important}kbd,.kbd{color:#373a3c;background-color:#f8f9fa;border:1px solid;border-radius:5px;border-color:#dee2e6}div.hanging-indent{margin-left:1em;text-indent:-1em}.citation a,.footnote-ref{text-decoration:none}.footnotes ol{padding-left:1em}.tippy-content>*{margin-bottom:.7em}.tippy-content>*:last-child{margin-bottom:0}.table a{word-break:break-word}.table>thead{border-top-width:1px;border-top-color:#dee2e6;border-bottom:1px solid #b6babc}.callout{margin-top:1.25rem;margin-bottom:1.25rem;border-radius:.25rem;overflow-wrap:break-word}.callout .callout-title-container{overflow-wrap:anywhere}.callout.callout-style-simple{padding:.4em .7em;border-left:5px solid;border-right:1px solid #dee2e6;border-top:1px solid #dee2e6;border-bottom:1px solid #dee2e6}.callout.callout-style-default{border-left:5px solid;border-right:1px solid #dee2e6;border-top:1px solid #dee2e6;border-bottom:1px solid #dee2e6}.callout .callout-body-container{flex-grow:1}.callout.callout-style-simple .callout-body{font-size:.9rem;font-weight:400}.callout.callout-style-default .callout-body{font-size:.9rem;font-weight:400}.callout.callout-titled .callout-body{margin-top:.2em}.callout:not(.no-icon).callout-titled.callout-style-simple .callout-body{padding-left:1.6em}.callout.callout-titled>.callout-header{padding-top:.2em;margin-bottom:-0.2em}.callout.callout-style-simple>div.callout-header{border-bottom:none;font-size:.9rem;font-weight:600;opacity:75%}.callout.callout-style-default>div.callout-header{border-bottom:none;font-weight:600;opacity:85%;font-size:.9rem;padding-left:.5em;padding-right:.5em}.callout.callout-style-default div.callout-body{padding-left:.5em;padding-right:.5em}.callout.callout-style-default div.callout-body>:first-child{margin-top:.5em}.callout>div.callout-header[data-bs-toggle=collapse]{cursor:pointer}.callout.callout-style-default .callout-header[aria-expanded=false],.callout.callout-style-default .callout-header[aria-expanded=true]{padding-top:0px;margin-bottom:0px;align-items:center}.callout.callout-titled .callout-body>:last-child:not(.sourceCode),.callout.callout-titled .callout-body>div>:last-child:not(.sourceCode){margin-bottom:.5rem}.callout:not(.callout-titled) .callout-body>:first-child,.callout:not(.callout-titled) .callout-body>div>:first-child{margin-top:.25rem}.callout:not(.callout-titled) .callout-body>:last-child,.callout:not(.callout-titled) 
.callout-body>div>:last-child{margin-bottom:.2rem}.callout.callout-style-simple .callout-icon::before,.callout.callout-style-simple .callout-toggle::before{height:1rem;width:1rem;display:inline-block;content:"";background-repeat:no-repeat;background-size:1rem 1rem}.callout.callout-style-default .callout-icon::before,.callout.callout-style-default .callout-toggle::before{height:.9rem;width:.9rem;display:inline-block;content:"";background-repeat:no-repeat;background-size:.9rem .9rem}.callout.callout-style-default .callout-toggle::before{margin-top:5px}.callout .callout-btn-toggle .callout-toggle::before{transition:transform .2s linear}.callout .callout-header[aria-expanded=false] .callout-toggle::before{transform:rotate(-90deg)}.callout .callout-header[aria-expanded=true] .callout-toggle::before{transform:none}.callout.callout-style-simple:not(.no-icon) div.callout-icon-container{padding-top:.2em;padding-right:.55em}.callout.callout-style-default:not(.no-icon) div.callout-icon-container{padding-top:.1em;padding-right:.35em}.callout.callout-style-default:not(.no-icon) div.callout-title-container{margin-top:-1px}.callout.callout-style-default.callout-caution:not(.no-icon) div.callout-icon-container{padding-top:.3em;padding-right:.35em}.callout>.callout-body>.callout-icon-container>.no-icon,.callout>.callout-header>.callout-icon-container>.no-icon{display:none}div.callout.callout{border-left-color:#6c757d}div.callout.callout-style-default>.callout-header{background-color:#6c757d}div.callout-note.callout{border-left-color:#2780e3}div.callout-note.callout-style-default>.callout-header{background-color:#e9f2fc}div.callout-note:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-note.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-note .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-tip.callout{border-left-color:#3fb618}div.callout-tip.callout-style-default>.callout-header{background-color:#ecf8e8}div.callout-tip:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-tip.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-tip .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-warning.callout{border-left-color:#ff7518}div.callout-warning.callout-style-default>.callout-header{background-color:#fff1e8}div.callout-warning:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-warning.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-warning .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-caution.callout{border-left-color:#f0ad4e}div.callout-caution.callout-style-default>.callout-header{background-color:#fef7ed}div.callout-caution:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-caution.callout-titled .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-caution .callout-toggle::before{background-image:url('data:image/svg+xml,')}div.callout-important.callout{border-left-color:#ff0039}div.callout-important.callout-style-default>.callout-header{background-color:#ffe6eb}div.callout-important:not(.callout-titled) .callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-important.callout-titled 
.callout-icon::before{background-image:url('data:image/svg+xml,');}div.callout-important .callout-toggle::before{background-image:url('data:image/svg+xml,')}.quarto-toggle-container{display:flex;align-items:center}.quarto-reader-toggle .bi::before,.quarto-color-scheme-toggle .bi::before{display:inline-block;height:1rem;width:1rem;content:"";background-repeat:no-repeat;background-size:1rem 1rem}.sidebar-navigation{padding-left:20px}.navbar .quarto-color-scheme-toggle:not(.alternate) .bi::before{background-image:url('data:image/svg+xml,')}.navbar .quarto-color-scheme-toggle.alternate .bi::before{background-image:url('data:image/svg+xml,')}.sidebar-navigation .quarto-color-scheme-toggle:not(.alternate) .bi::before{background-image:url('data:image/svg+xml,')}.sidebar-navigation .quarto-color-scheme-toggle.alternate .bi::before{background-image:url('data:image/svg+xml,')}.quarto-sidebar-toggle{border-color:#dee2e6;border-bottom-left-radius:.25rem;border-bottom-right-radius:.25rem;border-style:solid;border-width:1px;overflow:hidden;border-top-width:0px;padding-top:0px !important}.quarto-sidebar-toggle-title{cursor:pointer;padding-bottom:2px;margin-left:.25em;text-align:center;font-weight:400;font-size:.775em}#quarto-content .quarto-sidebar-toggle{background:#fafafa}#quarto-content .quarto-sidebar-toggle-title{color:#373a3c}.quarto-sidebar-toggle-icon{color:#dee2e6;margin-right:.5em;float:right;transition:transform .2s ease}.quarto-sidebar-toggle-icon::before{padding-top:5px}.quarto-sidebar-toggle.expanded .quarto-sidebar-toggle-icon{transform:rotate(-180deg)}.quarto-sidebar-toggle.expanded .quarto-sidebar-toggle-title{border-bottom:solid #dee2e6 1px}.quarto-sidebar-toggle-contents{background-color:#fff;padding-right:10px;padding-left:10px;margin-top:0px !important;transition:max-height .5s ease}.quarto-sidebar-toggle.expanded .quarto-sidebar-toggle-contents{padding-top:1em;padding-bottom:10px}.quarto-sidebar-toggle:not(.expanded) .quarto-sidebar-toggle-contents{padding-top:0px !important;padding-bottom:0px}nav[role=doc-toc]{z-index:1020}#quarto-sidebar>*,nav[role=doc-toc]>*{transition:opacity .1s ease,border .1s ease}#quarto-sidebar.slow>*,nav[role=doc-toc].slow>*{transition:opacity .4s ease,border .4s ease}.quarto-color-scheme-toggle:not(.alternate).top-right .bi::before{background-image:url('data:image/svg+xml,')}.quarto-color-scheme-toggle.alternate.top-right .bi::before{background-image:url('data:image/svg+xml,')}#quarto-appendix.default{border-top:1px solid #dee2e6}#quarto-appendix.default{background-color:#fff;padding-top:1.5em;margin-top:2em;z-index:998}#quarto-appendix.default .quarto-appendix-heading{margin-top:0;line-height:1.4em;font-weight:600;opacity:.9;border-bottom:none;margin-bottom:0}#quarto-appendix.default .footnotes ol,#quarto-appendix.default .footnotes ol li>p:last-of-type,#quarto-appendix.default .quarto-appendix-contents>p:last-of-type{margin-bottom:0}#quarto-appendix.default .quarto-appendix-secondary-label{margin-bottom:.4em}#quarto-appendix.default .quarto-appendix-bibtex{font-size:.7em;padding:1em;border:solid 1px #dee2e6;margin-bottom:1em}#quarto-appendix.default .quarto-appendix-bibtex code.sourceCode{white-space:pre-wrap}#quarto-appendix.default .quarto-appendix-citeas{font-size:.9em;padding:1em;border:solid 1px #dee2e6;margin-bottom:1em}#quarto-appendix.default .quarto-appendix-heading{font-size:1em !important}#quarto-appendix.default *[role=doc-endnotes]>ol,#quarto-appendix.default 
.quarto-appendix-contents>*:not(h2):not(.h2){font-size:.9em}#quarto-appendix.default section{padding-bottom:1.5em}#quarto-appendix.default section *[role=doc-endnotes],#quarto-appendix.default section>*:not(a){opacity:.9;word-wrap:break-word}.btn.btn-quarto,div.cell-output-display .btn-quarto{color:#cbcccc;background-color:#373a3c;border-color:#373a3c}.btn.btn-quarto:hover,div.cell-output-display .btn-quarto:hover{color:#cbcccc;background-color:#555859;border-color:#4b4e50}.btn-check:focus+.btn.btn-quarto,.btn.btn-quarto:focus,.btn-check:focus+div.cell-output-display .btn-quarto,div.cell-output-display .btn-quarto:focus{color:#cbcccc;background-color:#555859;border-color:#4b4e50;box-shadow:0 0 0 .25rem rgba(77,80,82,.5)}.btn-check:checked+.btn.btn-quarto,.btn-check:active+.btn.btn-quarto,.btn.btn-quarto:active,.btn.btn-quarto.active,.show>.btn.btn-quarto.dropdown-toggle,.btn-check:checked+div.cell-output-display .btn-quarto,.btn-check:active+div.cell-output-display .btn-quarto,div.cell-output-display .btn-quarto:active,div.cell-output-display .btn-quarto.active,.show>div.cell-output-display .btn-quarto.dropdown-toggle{color:#fff;background-color:#5f6163;border-color:#4b4e50}.btn-check:checked+.btn.btn-quarto:focus,.btn-check:active+.btn.btn-quarto:focus,.btn.btn-quarto:active:focus,.btn.btn-quarto.active:focus,.show>.btn.btn-quarto.dropdown-toggle:focus,.btn-check:checked+div.cell-output-display .btn-quarto:focus,.btn-check:active+div.cell-output-display .btn-quarto:focus,div.cell-output-display .btn-quarto:active:focus,div.cell-output-display .btn-quarto.active:focus,.show>div.cell-output-display .btn-quarto.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(77,80,82,.5)}.btn.btn-quarto:disabled,.btn.btn-quarto.disabled,div.cell-output-display .btn-quarto:disabled,div.cell-output-display .btn-quarto.disabled{color:#fff;background-color:#373a3c;border-color:#373a3c}nav.quarto-secondary-nav.color-navbar{background-color:#f8f9fa;color:#545555}nav.quarto-secondary-nav.color-navbar h1,nav.quarto-secondary-nav.color-navbar .h1,nav.quarto-secondary-nav.color-navbar .quarto-btn-toggle{color:#545555}@media(max-width: 991.98px){body.nav-sidebar .quarto-title-banner{margin-bottom:0;padding-bottom:0}body.nav-sidebar #title-block-header{margin-block-end:0}}p.subtitle{margin-top:.25em;margin-bottom:.5em}code a:any-link{color:inherit;text-decoration-color:#6c757d}/*! 
light */div.observablehq table thead tr th{background-color:var(--bs-body-bg)}input,button,select,optgroup,textarea{background-color:var(--bs-body-bg)}.code-annotated .code-copy-button{margin-right:1.25em;margin-top:0;padding-bottom:0;padding-top:3px}.code-annotation-gutter-bg{background-color:#fff}.code-annotation-gutter{background-color:rgba(233,236,239,.65)}.code-annotation-gutter,.code-annotation-gutter-bg{height:100%;width:calc(20px + .5em);position:absolute;top:0;right:0}dl.code-annotation-container-grid dt{margin-right:1em;margin-top:.25rem}dl.code-annotation-container-grid dt{font-family:var(--bs-font-monospace);color:#4f5457;border:solid #4f5457 1px;border-radius:50%;height:22px;width:22px;line-height:22px;font-size:11px;text-align:center;vertical-align:middle;text-decoration:none}dl.code-annotation-container-grid dt[data-target-cell]{cursor:pointer}dl.code-annotation-container-grid dt[data-target-cell].code-annotation-active{color:#fff;border:solid #aaa 1px;background-color:#aaa}pre.code-annotation-code{padding-top:0;padding-bottom:0}pre.code-annotation-code code{z-index:3}#code-annotation-line-highlight-gutter{width:100%;border-top:solid rgba(170,170,170,.2666666667) 1px;border-bottom:solid rgba(170,170,170,.2666666667) 1px;z-index:2;background-color:rgba(170,170,170,.1333333333)}#code-annotation-line-highlight{margin-left:-4em;width:calc(100% + 4em);border-top:solid rgba(170,170,170,.2666666667) 1px;border-bottom:solid rgba(170,170,170,.2666666667) 1px;z-index:2;background-color:rgba(170,170,170,.1333333333)}code.sourceCode .code-annotation-anchor.code-annotation-active{background-color:var(--quarto-hl-normal-color, #aaaaaa);border:solid var(--quarto-hl-normal-color, #aaaaaa) 1px;color:#e9ecef;font-weight:bolder}code.sourceCode .code-annotation-anchor{font-family:var(--bs-font-monospace);color:var(--quarto-hl-co-color);border:solid var(--quarto-hl-co-color) 1px;border-radius:50%;height:18px;width:18px;font-size:9px;margin-top:2px}code.sourceCode button.code-annotation-anchor{padding:2px}code.sourceCode a.code-annotation-anchor{line-height:18px;text-align:center;vertical-align:middle;cursor:default;text-decoration:none}@media print{.page-columns .column-screen-inset{grid-column:page-start-inset/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset table{background:#fff}.page-columns .column-screen-inset-left{grid-column:page-start-inset/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-left table{background:#fff}.page-columns .column-screen-inset-right{grid-column:body-content-start/page-end-inset;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-inset-right table{background:#fff}.page-columns .column-screen{grid-column:page-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen table{background:#fff}.page-columns .column-screen-left{grid-column:page-start/body-content-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-left table{background:#fff}.page-columns .column-screen-right{grid-column:body-content-start/page-end;z-index:998;transform:translate3d(0, 0, 0)}.page-columns .column-screen-right table{background:#fff}.page-columns .column-screen-inset-shaded{grid-column:page-start-inset/page-end-inset;padding:1em;background:#f8f9fa;z-index:998;transform:translate3d(0, 0, 
0);margin-bottom:1em}}.quarto-video{margin-bottom:1em}.table>thead{border-top-width:0}.table>:not(caption)>*:not(:last-child)>*{border-bottom-color:#ebeced;border-bottom-style:solid;border-bottom-width:1px}.table>:not(:first-child){border-top:1px solid #b6babc;border-bottom:1px solid inherit}.table tbody{border-bottom-color:#b6babc}a.external:after{display:inline-block;height:.75rem;width:.75rem;margin-bottom:.15em;margin-left:.25em;content:"";vertical-align:-0.125em;background-image:url('data:image/svg+xml,');background-repeat:no-repeat;background-size:.75rem .75rem}div.sourceCode code a.external:after{content:none}a.external:after:hover{cursor:pointer}.quarto-ext-icon{display:inline-block;font-size:.75em;padding-left:.3em}.code-with-filename .code-with-filename-file{margin-bottom:0;padding-bottom:2px;padding-top:2px;padding-left:.7em;border:var(--quarto-border-width) solid var(--quarto-border-color);border-radius:var(--quarto-border-radius);border-bottom:0;border-bottom-left-radius:0%;border-bottom-right-radius:0%}.code-with-filename div.sourceCode,.reveal .code-with-filename div.sourceCode{margin-top:0;border-top-left-radius:0%;border-top-right-radius:0%}.code-with-filename .code-with-filename-file pre{margin-bottom:0}.code-with-filename .code-with-filename-file,.code-with-filename .code-with-filename-file pre{background-color:rgba(219,219,219,.8)}.quarto-dark .code-with-filename .code-with-filename-file,.quarto-dark .code-with-filename .code-with-filename-file pre{background-color:#555}.code-with-filename .code-with-filename-file strong{font-weight:400}.quarto-title-banner{margin-bottom:1em;color:#545555;background:#f8f9fa}.quarto-title-banner .code-tools-button{color:#878888}.quarto-title-banner .code-tools-button:hover{color:#545555}.quarto-title-banner .code-tools-button>.bi::before{background-image:url('data:image/svg+xml,')}.quarto-title-banner .code-tools-button:hover>.bi::before{background-image:url('data:image/svg+xml,')}.quarto-title-banner .quarto-title .title{font-weight:600}.quarto-title-banner .quarto-categories{margin-top:.75em}@media(min-width: 992px){.quarto-title-banner{padding-top:2.5em;padding-bottom:2.5em}}@media(max-width: 991.98px){.quarto-title-banner{padding-top:1em;padding-bottom:1em}}main.quarto-banner-title-block>section:first-child>h2,main.quarto-banner-title-block>section:first-child>.h2,main.quarto-banner-title-block>section:first-child>h3,main.quarto-banner-title-block>section:first-child>.h3,main.quarto-banner-title-block>section:first-child>h4,main.quarto-banner-title-block>section:first-child>.h4{margin-top:0}.quarto-title .quarto-categories{display:flex;flex-wrap:wrap;row-gap:.5em;column-gap:.4em;padding-bottom:.5em;margin-top:.75em}.quarto-title .quarto-categories .quarto-category{padding:.25em .75em;font-size:.65em;text-transform:uppercase;border:solid 1px;border-radius:.25rem;opacity:.6}.quarto-title .quarto-categories .quarto-category a{color:inherit}#title-block-header.quarto-title-block.default .quarto-title-meta{display:grid;grid-template-columns:repeat(2, 1fr)}#title-block-header.quarto-title-block.default .quarto-title .title{margin-bottom:0}#title-block-header.quarto-title-block.default .quarto-title-author-orcid img{margin-top:-0.2em;height:.8em;width:.8em}#title-block-header.quarto-title-block.default .quarto-description p:last-of-type{margin-bottom:0}#title-block-header.quarto-title-block.default .quarto-title-meta-contents p,#title-block-header.quarto-title-block.default .quarto-title-authors 
p,#title-block-header.quarto-title-block.default .quarto-title-affiliations p{margin-bottom:.1em}#title-block-header.quarto-title-block.default .quarto-title-meta-heading{text-transform:uppercase;margin-top:1em;font-size:.8em;opacity:.8;font-weight:400}#title-block-header.quarto-title-block.default .quarto-title-meta-contents{font-size:.9em}#title-block-header.quarto-title-block.default .quarto-title-meta-contents a{color:#373a3c}#title-block-header.quarto-title-block.default .quarto-title-meta-contents p.affiliation:last-of-type{margin-bottom:.1em}#title-block-header.quarto-title-block.default p.affiliation{margin-bottom:.1em}#title-block-header.quarto-title-block.default .description,#title-block-header.quarto-title-block.default .abstract{margin-top:0}#title-block-header.quarto-title-block.default .description>p,#title-block-header.quarto-title-block.default .abstract>p{font-size:.9em}#title-block-header.quarto-title-block.default .description>p:last-of-type,#title-block-header.quarto-title-block.default .abstract>p:last-of-type{margin-bottom:0}#title-block-header.quarto-title-block.default .description .abstract-title,#title-block-header.quarto-title-block.default .abstract .abstract-title{margin-top:1em;text-transform:uppercase;font-size:.8em;opacity:.8;font-weight:400}#title-block-header.quarto-title-block.default .quarto-title-meta-author{display:grid;grid-template-columns:1fr 1fr}.quarto-title-tools-only{display:flex;justify-content:right}body{-webkit-font-smoothing:antialiased}.badge.bg-light{color:#373a3c}.progress .progress-bar{font-size:8px;line-height:8px}/*# sourceMappingURL=038018dfc50d695214e8253e62c2ede5.css.map */ diff --git a/r-book/site_libs/bootstrap/bootstrap.min.js b/r-book/site_libs/bootstrap/bootstrap.min.js new file mode 100644 index 00000000..cc0a2556 --- /dev/null +++ b/r-book/site_libs/bootstrap/bootstrap.min.js @@ -0,0 +1,7 @@ +/*! 
+ * Bootstrap v5.1.3 (https://getbootstrap.com/) + * Copyright 2011-2021 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors) + * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) + */ +!function(t,e){"object"==typeof exports&&"undefined"!=typeof module?module.exports=e():"function"==typeof define&&define.amd?define(e):(t="undefined"!=typeof globalThis?globalThis:t||self).bootstrap=e()}(this,(function(){"use strict";const t="transitionend",e=t=>{let e=t.getAttribute("data-bs-target");if(!e||"#"===e){let i=t.getAttribute("href");if(!i||!i.includes("#")&&!i.startsWith("."))return null;i.includes("#")&&!i.startsWith("#")&&(i=`#${i.split("#")[1]}`),e=i&&"#"!==i?i.trim():null}return e},i=t=>{const i=e(t);return i&&document.querySelector(i)?i:null},n=t=>{const i=e(t);return i?document.querySelector(i):null},s=e=>{e.dispatchEvent(new Event(t))},o=t=>!(!t||"object"!=typeof t)&&(void 0!==t.jquery&&(t=t[0]),void 0!==t.nodeType),r=t=>o(t)?t.jquery?t[0]:t:"string"==typeof t&&t.length>0?document.querySelector(t):null,a=(t,e,i)=>{Object.keys(i).forEach((n=>{const s=i[n],r=e[n],a=r&&o(r)?"element":null==(l=r)?`${l}`:{}.toString.call(l).match(/\s([a-z]+)/i)[1].toLowerCase();var l;if(!new RegExp(s).test(a))throw new TypeError(`${t.toUpperCase()}: Option "${n}" provided type "${a}" but expected type "${s}".`)}))},l=t=>!(!o(t)||0===t.getClientRects().length)&&"visible"===getComputedStyle(t).getPropertyValue("visibility"),c=t=>!t||t.nodeType!==Node.ELEMENT_NODE||!!t.classList.contains("disabled")||(void 0!==t.disabled?t.disabled:t.hasAttribute("disabled")&&"false"!==t.getAttribute("disabled")),h=t=>{if(!document.documentElement.attachShadow)return null;if("function"==typeof t.getRootNode){const e=t.getRootNode();return e instanceof ShadowRoot?e:null}return t instanceof ShadowRoot?t:t.parentNode?h(t.parentNode):null},d=()=>{},u=t=>{t.offsetHeight},f=()=>{const{jQuery:t}=window;return t&&!document.body.hasAttribute("data-bs-no-jquery")?t:null},p=[],m=()=>"rtl"===document.documentElement.dir,g=t=>{var e;e=()=>{const e=f();if(e){const i=t.NAME,n=e.fn[i];e.fn[i]=t.jQueryInterface,e.fn[i].Constructor=t,e.fn[i].noConflict=()=>(e.fn[i]=n,t.jQueryInterface)}},"loading"===document.readyState?(p.length||document.addEventListener("DOMContentLoaded",(()=>{p.forEach((t=>t()))})),p.push(e)):e()},_=t=>{"function"==typeof t&&t()},b=(e,i,n=!0)=>{if(!n)return void _(e);const o=(t=>{if(!t)return 0;let{transitionDuration:e,transitionDelay:i}=window.getComputedStyle(t);const n=Number.parseFloat(e),s=Number.parseFloat(i);return n||s?(e=e.split(",")[0],i=i.split(",")[0],1e3*(Number.parseFloat(e)+Number.parseFloat(i))):0})(i)+5;let r=!1;const a=({target:n})=>{n===i&&(r=!0,i.removeEventListener(t,a),_(e))};i.addEventListener(t,a),setTimeout((()=>{r||s(i)}),o)},v=(t,e,i,n)=>{let s=t.indexOf(e);if(-1===s)return t[!i&&n?t.length-1:0];const o=t.length;return s+=i?1:-1,n&&(s=(s+o)%o),t[Math.max(0,Math.min(s,o-1))]},y=/[^.]*(?=\..*)\.|.*/,w=/\..*/,E=/::\d+$/,A={};let T=1;const O={mouseenter:"mouseover",mouseleave:"mouseout"},C=/^(mouseenter|mouseleave)/i,k=new 
Set(["click","dblclick","mouseup","mousedown","contextmenu","mousewheel","DOMMouseScroll","mouseover","mouseout","mousemove","selectstart","selectend","keydown","keypress","keyup","orientationchange","touchstart","touchmove","touchend","touchcancel","pointerdown","pointermove","pointerup","pointerleave","pointercancel","gesturestart","gesturechange","gestureend","focus","blur","change","reset","select","submit","focusin","focusout","load","unload","beforeunload","resize","move","DOMContentLoaded","readystatechange","error","abort","scroll"]);function L(t,e){return e&&`${e}::${T++}`||t.uidEvent||T++}function x(t){const e=L(t);return t.uidEvent=e,A[e]=A[e]||{},A[e]}function D(t,e,i=null){const n=Object.keys(t);for(let s=0,o=n.length;sfunction(e){if(!e.relatedTarget||e.relatedTarget!==e.delegateTarget&&!e.delegateTarget.contains(e.relatedTarget))return t.call(this,e)};n?n=t(n):i=t(i)}const[o,r,a]=S(e,i,n),l=x(t),c=l[a]||(l[a]={}),h=D(c,r,o?i:null);if(h)return void(h.oneOff=h.oneOff&&s);const d=L(r,e.replace(y,"")),u=o?function(t,e,i){return function n(s){const o=t.querySelectorAll(e);for(let{target:r}=s;r&&r!==this;r=r.parentNode)for(let a=o.length;a--;)if(o[a]===r)return s.delegateTarget=r,n.oneOff&&j.off(t,s.type,e,i),i.apply(r,[s]);return null}}(t,i,n):function(t,e){return function i(n){return n.delegateTarget=t,i.oneOff&&j.off(t,n.type,e),e.apply(t,[n])}}(t,i);u.delegationSelector=o?i:null,u.originalHandler=r,u.oneOff=s,u.uidEvent=d,c[d]=u,t.addEventListener(a,u,o)}function I(t,e,i,n,s){const o=D(e[i],n,s);o&&(t.removeEventListener(i,o,Boolean(s)),delete e[i][o.uidEvent])}function P(t){return t=t.replace(w,""),O[t]||t}const j={on(t,e,i,n){N(t,e,i,n,!1)},one(t,e,i,n){N(t,e,i,n,!0)},off(t,e,i,n){if("string"!=typeof e||!t)return;const[s,o,r]=S(e,i,n),a=r!==e,l=x(t),c=e.startsWith(".");if(void 0!==o){if(!l||!l[r])return;return void I(t,l,r,o,s?i:null)}c&&Object.keys(l).forEach((i=>{!function(t,e,i,n){const s=e[i]||{};Object.keys(s).forEach((o=>{if(o.includes(n)){const n=s[o];I(t,e,i,n.originalHandler,n.delegationSelector)}}))}(t,l,i,e.slice(1))}));const h=l[r]||{};Object.keys(h).forEach((i=>{const n=i.replace(E,"");if(!a||e.includes(n)){const e=h[i];I(t,l,r,e.originalHandler,e.delegationSelector)}}))},trigger(t,e,i){if("string"!=typeof e||!t)return null;const n=f(),s=P(e),o=e!==s,r=k.has(s);let a,l=!0,c=!0,h=!1,d=null;return o&&n&&(a=n.Event(e,i),n(t).trigger(a),l=!a.isPropagationStopped(),c=!a.isImmediatePropagationStopped(),h=a.isDefaultPrevented()),r?(d=document.createEvent("HTMLEvents"),d.initEvent(s,l,!0)):d=new CustomEvent(e,{bubbles:l,cancelable:!0}),void 0!==i&&Object.keys(i).forEach((t=>{Object.defineProperty(d,t,{get:()=>i[t]})})),h&&d.preventDefault(),c&&t.dispatchEvent(d),d.defaultPrevented&&void 0!==a&&a.preventDefault(),d}},M=new Map,H={set(t,e,i){M.has(t)||M.set(t,new Map);const n=M.get(t);n.has(e)||0===n.size?n.set(e,i):console.error(`Bootstrap doesn't allow more than one instance per element. 
Bound instance: ${Array.from(n.keys())[0]}.`)},get:(t,e)=>M.has(t)&&M.get(t).get(e)||null,remove(t,e){if(!M.has(t))return;const i=M.get(t);i.delete(e),0===i.size&&M.delete(t)}};class B{constructor(t){(t=r(t))&&(this._element=t,H.set(this._element,this.constructor.DATA_KEY,this))}dispose(){H.remove(this._element,this.constructor.DATA_KEY),j.off(this._element,this.constructor.EVENT_KEY),Object.getOwnPropertyNames(this).forEach((t=>{this[t]=null}))}_queueCallback(t,e,i=!0){b(t,e,i)}static getInstance(t){return H.get(r(t),this.DATA_KEY)}static getOrCreateInstance(t,e={}){return this.getInstance(t)||new this(t,"object"==typeof e?e:null)}static get VERSION(){return"5.1.3"}static get NAME(){throw new Error('You have to implement the static method "NAME", for each component!')}static get DATA_KEY(){return`bs.${this.NAME}`}static get EVENT_KEY(){return`.${this.DATA_KEY}`}}const R=(t,e="hide")=>{const i=`click.dismiss${t.EVENT_KEY}`,s=t.NAME;j.on(document,i,`[data-bs-dismiss="${s}"]`,(function(i){if(["A","AREA"].includes(this.tagName)&&i.preventDefault(),c(this))return;const o=n(this)||this.closest(`.${s}`);t.getOrCreateInstance(o)[e]()}))};class W extends B{static get NAME(){return"alert"}close(){if(j.trigger(this._element,"close.bs.alert").defaultPrevented)return;this._element.classList.remove("show");const t=this._element.classList.contains("fade");this._queueCallback((()=>this._destroyElement()),this._element,t)}_destroyElement(){this._element.remove(),j.trigger(this._element,"closed.bs.alert"),this.dispose()}static jQueryInterface(t){return this.each((function(){const e=W.getOrCreateInstance(this);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}R(W,"close"),g(W);const $='[data-bs-toggle="button"]';class z extends B{static get NAME(){return"button"}toggle(){this._element.setAttribute("aria-pressed",this._element.classList.toggle("active"))}static jQueryInterface(t){return this.each((function(){const e=z.getOrCreateInstance(this);"toggle"===t&&e[t]()}))}}function q(t){return"true"===t||"false"!==t&&(t===Number(t).toString()?Number(t):""===t||"null"===t?null:t)}function F(t){return t.replace(/[A-Z]/g,(t=>`-${t.toLowerCase()}`))}j.on(document,"click.bs.button.data-api",$,(t=>{t.preventDefault();const e=t.target.closest($);z.getOrCreateInstance(e).toggle()})),g(z);const U={setDataAttribute(t,e,i){t.setAttribute(`data-bs-${F(e)}`,i)},removeDataAttribute(t,e){t.removeAttribute(`data-bs-${F(e)}`)},getDataAttributes(t){if(!t)return{};const e={};return Object.keys(t.dataset).filter((t=>t.startsWith("bs"))).forEach((i=>{let n=i.replace(/^bs/,"");n=n.charAt(0).toLowerCase()+n.slice(1,n.length),e[n]=q(t.dataset[i])})),e},getDataAttribute:(t,e)=>q(t.getAttribute(`data-bs-${F(e)}`)),offset(t){const e=t.getBoundingClientRect();return{top:e.top+window.pageYOffset,left:e.left+window.pageXOffset}},position:t=>({top:t.offsetTop,left:t.offsetLeft})},V={find:(t,e=document.documentElement)=>[].concat(...Element.prototype.querySelectorAll.call(e,t)),findOne:(t,e=document.documentElement)=>Element.prototype.querySelector.call(e,t),children:(t,e)=>[].concat(...t.children).filter((t=>t.matches(e))),parents(t,e){const i=[];let n=t.parentNode;for(;n&&n.nodeType===Node.ELEMENT_NODE&&3!==n.nodeType;)n.matches(e)&&i.push(n),n=n.parentNode;return i},prev(t,e){let i=t.previousElementSibling;for(;i;){if(i.matches(e))return[i];i=i.previousElementSibling}return[]},next(t,e){let 
i=t.nextElementSibling;for(;i;){if(i.matches(e))return[i];i=i.nextElementSibling}return[]},focusableChildren(t){const e=["a","button","input","textarea","select","details","[tabindex]",'[contenteditable="true"]'].map((t=>`${t}:not([tabindex^="-"])`)).join(", ");return this.find(e,t).filter((t=>!c(t)&&l(t)))}},K="carousel",X={interval:5e3,keyboard:!0,slide:!1,pause:"hover",wrap:!0,touch:!0},Y={interval:"(number|boolean)",keyboard:"boolean",slide:"(boolean|string)",pause:"(string|boolean)",wrap:"boolean",touch:"boolean"},Q="next",G="prev",Z="left",J="right",tt={ArrowLeft:J,ArrowRight:Z},et="slid.bs.carousel",it="active",nt=".active.carousel-item";class st extends B{constructor(t,e){super(t),this._items=null,this._interval=null,this._activeElement=null,this._isPaused=!1,this._isSliding=!1,this.touchTimeout=null,this.touchStartX=0,this.touchDeltaX=0,this._config=this._getConfig(e),this._indicatorsElement=V.findOne(".carousel-indicators",this._element),this._touchSupported="ontouchstart"in document.documentElement||navigator.maxTouchPoints>0,this._pointerEvent=Boolean(window.PointerEvent),this._addEventListeners()}static get Default(){return X}static get NAME(){return K}next(){this._slide(Q)}nextWhenVisible(){!document.hidden&&l(this._element)&&this.next()}prev(){this._slide(G)}pause(t){t||(this._isPaused=!0),V.findOne(".carousel-item-next, .carousel-item-prev",this._element)&&(s(this._element),this.cycle(!0)),clearInterval(this._interval),this._interval=null}cycle(t){t||(this._isPaused=!1),this._interval&&(clearInterval(this._interval),this._interval=null),this._config&&this._config.interval&&!this._isPaused&&(this._updateInterval(),this._interval=setInterval((document.visibilityState?this.nextWhenVisible:this.next).bind(this),this._config.interval))}to(t){this._activeElement=V.findOne(nt,this._element);const e=this._getItemIndex(this._activeElement);if(t>this._items.length-1||t<0)return;if(this._isSliding)return void j.one(this._element,et,(()=>this.to(t)));if(e===t)return this.pause(),void this.cycle();const i=t>e?Q:G;this._slide(i,this._items[t])}_getConfig(t){return t={...X,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(K,t,Y),t}_handleSwipe(){const t=Math.abs(this.touchDeltaX);if(t<=40)return;const e=t/this.touchDeltaX;this.touchDeltaX=0,e&&this._slide(e>0?J:Z)}_addEventListeners(){this._config.keyboard&&j.on(this._element,"keydown.bs.carousel",(t=>this._keydown(t))),"hover"===this._config.pause&&(j.on(this._element,"mouseenter.bs.carousel",(t=>this.pause(t))),j.on(this._element,"mouseleave.bs.carousel",(t=>this.cycle(t)))),this._config.touch&&this._touchSupported&&this._addTouchEventListeners()}_addTouchEventListeners(){const t=t=>this._pointerEvent&&("pen"===t.pointerType||"touch"===t.pointerType),e=e=>{t(e)?this.touchStartX=e.clientX:this._pointerEvent||(this.touchStartX=e.touches[0].clientX)},i=t=>{this.touchDeltaX=t.touches&&t.touches.length>1?0:t.touches[0].clientX-this.touchStartX},n=e=>{t(e)&&(this.touchDeltaX=e.clientX-this.touchStartX),this._handleSwipe(),"hover"===this._config.pause&&(this.pause(),this.touchTimeout&&clearTimeout(this.touchTimeout),this.touchTimeout=setTimeout((t=>this.cycle(t)),500+this._config.interval))};V.find(".carousel-item 
img",this._element).forEach((t=>{j.on(t,"dragstart.bs.carousel",(t=>t.preventDefault()))})),this._pointerEvent?(j.on(this._element,"pointerdown.bs.carousel",(t=>e(t))),j.on(this._element,"pointerup.bs.carousel",(t=>n(t))),this._element.classList.add("pointer-event")):(j.on(this._element,"touchstart.bs.carousel",(t=>e(t))),j.on(this._element,"touchmove.bs.carousel",(t=>i(t))),j.on(this._element,"touchend.bs.carousel",(t=>n(t))))}_keydown(t){if(/input|textarea/i.test(t.target.tagName))return;const e=tt[t.key];e&&(t.preventDefault(),this._slide(e))}_getItemIndex(t){return this._items=t&&t.parentNode?V.find(".carousel-item",t.parentNode):[],this._items.indexOf(t)}_getItemByOrder(t,e){const i=t===Q;return v(this._items,e,i,this._config.wrap)}_triggerSlideEvent(t,e){const i=this._getItemIndex(t),n=this._getItemIndex(V.findOne(nt,this._element));return j.trigger(this._element,"slide.bs.carousel",{relatedTarget:t,direction:e,from:n,to:i})}_setActiveIndicatorElement(t){if(this._indicatorsElement){const e=V.findOne(".active",this._indicatorsElement);e.classList.remove(it),e.removeAttribute("aria-current");const i=V.find("[data-bs-target]",this._indicatorsElement);for(let e=0;e{j.trigger(this._element,et,{relatedTarget:o,direction:d,from:s,to:r})};if(this._element.classList.contains("slide")){o.classList.add(h),u(o),n.classList.add(c),o.classList.add(c);const t=()=>{o.classList.remove(c,h),o.classList.add(it),n.classList.remove(it,h,c),this._isSliding=!1,setTimeout(f,0)};this._queueCallback(t,n,!0)}else n.classList.remove(it),o.classList.add(it),this._isSliding=!1,f();a&&this.cycle()}_directionToOrder(t){return[J,Z].includes(t)?m()?t===Z?G:Q:t===Z?Q:G:t}_orderToDirection(t){return[Q,G].includes(t)?m()?t===G?Z:J:t===G?J:Z:t}static carouselInterface(t,e){const i=st.getOrCreateInstance(t,e);let{_config:n}=i;"object"==typeof e&&(n={...n,...e});const s="string"==typeof e?e:n.slide;if("number"==typeof e)i.to(e);else if("string"==typeof s){if(void 0===i[s])throw new TypeError(`No method named "${s}"`);i[s]()}else n.interval&&n.ride&&(i.pause(),i.cycle())}static jQueryInterface(t){return this.each((function(){st.carouselInterface(this,t)}))}static dataApiClickHandler(t){const e=n(this);if(!e||!e.classList.contains("carousel"))return;const i={...U.getDataAttributes(e),...U.getDataAttributes(this)},s=this.getAttribute("data-bs-slide-to");s&&(i.interval=!1),st.carouselInterface(e,i),s&&st.getInstance(e).to(s),t.preventDefault()}}j.on(document,"click.bs.carousel.data-api","[data-bs-slide], [data-bs-slide-to]",st.dataApiClickHandler),j.on(window,"load.bs.carousel.data-api",(()=>{const t=V.find('[data-bs-ride="carousel"]');for(let e=0,i=t.length;et===this._element));null!==s&&o.length&&(this._selector=s,this._triggerArray.push(e))}this._initializeChildren(),this._config.parent||this._addAriaAndCollapsedClass(this._triggerArray,this._isShown()),this._config.toggle&&this.toggle()}static get Default(){return rt}static get NAME(){return ot}toggle(){this._isShown()?this.hide():this.show()}show(){if(this._isTransitioning||this._isShown())return;let t,e=[];if(this._config.parent){const t=V.find(ut,this._config.parent);e=V.find(".collapse.show, .collapse.collapsing",this._config.parent).filter((e=>!t.includes(e)))}const i=V.findOne(this._selector);if(e.length){const n=e.find((t=>i!==t));if(t=n?pt.getInstance(n):null,t&&t._isTransitioning)return}if(j.trigger(this._element,"show.bs.collapse").defaultPrevented)return;e.forEach((e=>{i!==e&&pt.getOrCreateInstance(e,{toggle:!1}).hide(),t||H.set(e,"bs.collapse",null)}));const 
n=this._getDimension();this._element.classList.remove(ct),this._element.classList.add(ht),this._element.style[n]=0,this._addAriaAndCollapsedClass(this._triggerArray,!0),this._isTransitioning=!0;const s=`scroll${n[0].toUpperCase()+n.slice(1)}`;this._queueCallback((()=>{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct,lt),this._element.style[n]="",j.trigger(this._element,"shown.bs.collapse")}),this._element,!0),this._element.style[n]=`${this._element[s]}px`}hide(){if(this._isTransitioning||!this._isShown())return;if(j.trigger(this._element,"hide.bs.collapse").defaultPrevented)return;const t=this._getDimension();this._element.style[t]=`${this._element.getBoundingClientRect()[t]}px`,u(this._element),this._element.classList.add(ht),this._element.classList.remove(ct,lt);const e=this._triggerArray.length;for(let t=0;t{this._isTransitioning=!1,this._element.classList.remove(ht),this._element.classList.add(ct),j.trigger(this._element,"hidden.bs.collapse")}),this._element,!0)}_isShown(t=this._element){return t.classList.contains(lt)}_getConfig(t){return(t={...rt,...U.getDataAttributes(this._element),...t}).toggle=Boolean(t.toggle),t.parent=r(t.parent),a(ot,t,at),t}_getDimension(){return this._element.classList.contains("collapse-horizontal")?"width":"height"}_initializeChildren(){if(!this._config.parent)return;const t=V.find(ut,this._config.parent);V.find(ft,this._config.parent).filter((e=>!t.includes(e))).forEach((t=>{const e=n(t);e&&this._addAriaAndCollapsedClass([t],this._isShown(e))}))}_addAriaAndCollapsedClass(t,e){t.length&&t.forEach((t=>{e?t.classList.remove(dt):t.classList.add(dt),t.setAttribute("aria-expanded",e)}))}static jQueryInterface(t){return this.each((function(){const e={};"string"==typeof t&&/show|hide/.test(t)&&(e.toggle=!1);const i=pt.getOrCreateInstance(this,e);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t]()}}))}}j.on(document,"click.bs.collapse.data-api",ft,(function(t){("A"===t.target.tagName||t.delegateTarget&&"A"===t.delegateTarget.tagName)&&t.preventDefault();const e=i(this);V.find(e).forEach((t=>{pt.getOrCreateInstance(t,{toggle:!1}).toggle()}))})),g(pt);var mt="top",gt="bottom",_t="right",bt="left",vt="auto",yt=[mt,gt,_t,bt],wt="start",Et="end",At="clippingParents",Tt="viewport",Ot="popper",Ct="reference",kt=yt.reduce((function(t,e){return t.concat([e+"-"+wt,e+"-"+Et])}),[]),Lt=[].concat(yt,[vt]).reduce((function(t,e){return t.concat([e,e+"-"+wt,e+"-"+Et])}),[]),xt="beforeRead",Dt="read",St="afterRead",Nt="beforeMain",It="main",Pt="afterMain",jt="beforeWrite",Mt="write",Ht="afterWrite",Bt=[xt,Dt,St,Nt,It,Pt,jt,Mt,Ht];function Rt(t){return t?(t.nodeName||"").toLowerCase():null}function Wt(t){if(null==t)return window;if("[object Window]"!==t.toString()){var e=t.ownerDocument;return e&&e.defaultView||window}return t}function $t(t){return t instanceof Wt(t).Element||t instanceof Element}function zt(t){return t instanceof Wt(t).HTMLElement||t instanceof HTMLElement}function qt(t){return"undefined"!=typeof ShadowRoot&&(t instanceof Wt(t).ShadowRoot||t instanceof ShadowRoot)}const Ft={name:"applyStyles",enabled:!0,phase:"write",fn:function(t){var e=t.state;Object.keys(e.elements).forEach((function(t){var i=e.styles[t]||{},n=e.attributes[t]||{},s=e.elements[t];zt(s)&&Rt(s)&&(Object.assign(s.style,i),Object.keys(n).forEach((function(t){var e=n[t];!1===e?s.removeAttribute(t):s.setAttribute(t,!0===e?"":e)})))}))},effect:function(t){var 
e=t.state,i={popper:{position:e.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};return Object.assign(e.elements.popper.style,i.popper),e.styles=i,e.elements.arrow&&Object.assign(e.elements.arrow.style,i.arrow),function(){Object.keys(e.elements).forEach((function(t){var n=e.elements[t],s=e.attributes[t]||{},o=Object.keys(e.styles.hasOwnProperty(t)?e.styles[t]:i[t]).reduce((function(t,e){return t[e]="",t}),{});zt(n)&&Rt(n)&&(Object.assign(n.style,o),Object.keys(s).forEach((function(t){n.removeAttribute(t)})))}))}},requires:["computeStyles"]};function Ut(t){return t.split("-")[0]}function Vt(t,e){var i=t.getBoundingClientRect();return{width:i.width/1,height:i.height/1,top:i.top/1,right:i.right/1,bottom:i.bottom/1,left:i.left/1,x:i.left/1,y:i.top/1}}function Kt(t){var e=Vt(t),i=t.offsetWidth,n=t.offsetHeight;return Math.abs(e.width-i)<=1&&(i=e.width),Math.abs(e.height-n)<=1&&(n=e.height),{x:t.offsetLeft,y:t.offsetTop,width:i,height:n}}function Xt(t,e){var i=e.getRootNode&&e.getRootNode();if(t.contains(e))return!0;if(i&&qt(i)){var n=e;do{if(n&&t.isSameNode(n))return!0;n=n.parentNode||n.host}while(n)}return!1}function Yt(t){return Wt(t).getComputedStyle(t)}function Qt(t){return["table","td","th"].indexOf(Rt(t))>=0}function Gt(t){return(($t(t)?t.ownerDocument:t.document)||window.document).documentElement}function Zt(t){return"html"===Rt(t)?t:t.assignedSlot||t.parentNode||(qt(t)?t.host:null)||Gt(t)}function Jt(t){return zt(t)&&"fixed"!==Yt(t).position?t.offsetParent:null}function te(t){for(var e=Wt(t),i=Jt(t);i&&Qt(i)&&"static"===Yt(i).position;)i=Jt(i);return i&&("html"===Rt(i)||"body"===Rt(i)&&"static"===Yt(i).position)?e:i||function(t){var e=-1!==navigator.userAgent.toLowerCase().indexOf("firefox");if(-1!==navigator.userAgent.indexOf("Trident")&&zt(t)&&"fixed"===Yt(t).position)return null;for(var i=Zt(t);zt(i)&&["html","body"].indexOf(Rt(i))<0;){var n=Yt(i);if("none"!==n.transform||"none"!==n.perspective||"paint"===n.contain||-1!==["transform","perspective"].indexOf(n.willChange)||e&&"filter"===n.willChange||e&&n.filter&&"none"!==n.filter)return i;i=i.parentNode}return null}(t)||e}function ee(t){return["top","bottom"].indexOf(t)>=0?"x":"y"}var ie=Math.max,ne=Math.min,se=Math.round;function oe(t,e,i){return ie(t,ne(e,i))}function re(t){return Object.assign({},{top:0,right:0,bottom:0,left:0},t)}function ae(t,e){return e.reduce((function(e,i){return e[i]=t,e}),{})}const le={name:"arrow",enabled:!0,phase:"main",fn:function(t){var e,i=t.state,n=t.name,s=t.options,o=i.elements.arrow,r=i.modifiersData.popperOffsets,a=Ut(i.placement),l=ee(a),c=[bt,_t].indexOf(a)>=0?"height":"width";if(o&&r){var h=function(t,e){return re("number"!=typeof(t="function"==typeof t?t(Object.assign({},e.rects,{placement:e.placement})):t)?t:ae(t,yt))}(s.padding,i),d=Kt(o),u="y"===l?mt:bt,f="y"===l?gt:_t,p=i.rects.reference[c]+i.rects.reference[l]-r[l]-i.rects.popper[c],m=r[l]-i.rects.reference[l],g=te(o),_=g?"y"===l?g.clientHeight||0:g.clientWidth||0:0,b=p/2-m/2,v=h[u],y=_-d[c]-h[f],w=_/2-d[c]/2+b,E=oe(v,w,y),A=l;i.modifiersData[n]=((e={})[A]=E,e.centerOffset=E-w,e)}},effect:function(t){var e=t.state,i=t.options.element,n=void 0===i?"[data-popper-arrow]":i;null!=n&&("string"!=typeof n||(n=e.elements.popper.querySelector(n)))&&Xt(e.elements.popper,n)&&(e.elements.arrow=n)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function ce(t){return t.split("-")[1]}var he={top:"auto",right:"auto",bottom:"auto",left:"auto"};function de(t){var 
e,i=t.popper,n=t.popperRect,s=t.placement,o=t.variation,r=t.offsets,a=t.position,l=t.gpuAcceleration,c=t.adaptive,h=t.roundOffsets,d=!0===h?function(t){var e=t.x,i=t.y,n=window.devicePixelRatio||1;return{x:se(se(e*n)/n)||0,y:se(se(i*n)/n)||0}}(r):"function"==typeof h?h(r):r,u=d.x,f=void 0===u?0:u,p=d.y,m=void 0===p?0:p,g=r.hasOwnProperty("x"),_=r.hasOwnProperty("y"),b=bt,v=mt,y=window;if(c){var w=te(i),E="clientHeight",A="clientWidth";w===Wt(i)&&"static"!==Yt(w=Gt(i)).position&&"absolute"===a&&(E="scrollHeight",A="scrollWidth"),w=w,s!==mt&&(s!==bt&&s!==_t||o!==Et)||(v=gt,m-=w[E]-n.height,m*=l?1:-1),s!==bt&&(s!==mt&&s!==gt||o!==Et)||(b=_t,f-=w[A]-n.width,f*=l?1:-1)}var T,O=Object.assign({position:a},c&&he);return l?Object.assign({},O,((T={})[v]=_?"0":"",T[b]=g?"0":"",T.transform=(y.devicePixelRatio||1)<=1?"translate("+f+"px, "+m+"px)":"translate3d("+f+"px, "+m+"px, 0)",T)):Object.assign({},O,((e={})[v]=_?m+"px":"",e[b]=g?f+"px":"",e.transform="",e))}const ue={name:"computeStyles",enabled:!0,phase:"beforeWrite",fn:function(t){var e=t.state,i=t.options,n=i.gpuAcceleration,s=void 0===n||n,o=i.adaptive,r=void 0===o||o,a=i.roundOffsets,l=void 0===a||a,c={placement:Ut(e.placement),variation:ce(e.placement),popper:e.elements.popper,popperRect:e.rects.popper,gpuAcceleration:s};null!=e.modifiersData.popperOffsets&&(e.styles.popper=Object.assign({},e.styles.popper,de(Object.assign({},c,{offsets:e.modifiersData.popperOffsets,position:e.options.strategy,adaptive:r,roundOffsets:l})))),null!=e.modifiersData.arrow&&(e.styles.arrow=Object.assign({},e.styles.arrow,de(Object.assign({},c,{offsets:e.modifiersData.arrow,position:"absolute",adaptive:!1,roundOffsets:l})))),e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-placement":e.placement})},data:{}};var fe={passive:!0};const pe={name:"eventListeners",enabled:!0,phase:"write",fn:function(){},effect:function(t){var e=t.state,i=t.instance,n=t.options,s=n.scroll,o=void 0===s||s,r=n.resize,a=void 0===r||r,l=Wt(e.elements.popper),c=[].concat(e.scrollParents.reference,e.scrollParents.popper);return o&&c.forEach((function(t){t.addEventListener("scroll",i.update,fe)})),a&&l.addEventListener("resize",i.update,fe),function(){o&&c.forEach((function(t){t.removeEventListener("scroll",i.update,fe)})),a&&l.removeEventListener("resize",i.update,fe)}},data:{}};var me={left:"right",right:"left",bottom:"top",top:"bottom"};function ge(t){return t.replace(/left|right|bottom|top/g,(function(t){return me[t]}))}var _e={start:"end",end:"start"};function be(t){return t.replace(/start|end/g,(function(t){return _e[t]}))}function ve(t){var e=Wt(t);return{scrollLeft:e.pageXOffset,scrollTop:e.pageYOffset}}function ye(t){return Vt(Gt(t)).left+ve(t).scrollLeft}function we(t){var e=Yt(t),i=e.overflow,n=e.overflowX,s=e.overflowY;return/auto|scroll|overlay|hidden/.test(i+s+n)}function Ee(t){return["html","body","#document"].indexOf(Rt(t))>=0?t.ownerDocument.body:zt(t)&&we(t)?t:Ee(Zt(t))}function Ae(t,e){var i;void 0===e&&(e=[]);var n=Ee(t),s=n===(null==(i=t.ownerDocument)?void 0:i.body),o=Wt(n),r=s?[o].concat(o.visualViewport||[],we(n)?n:[]):n,a=e.concat(r);return s?a:a.concat(Ae(Zt(r)))}function Te(t){return Object.assign({},t,{left:t.x,top:t.y,right:t.x+t.width,bottom:t.y+t.height})}function Oe(t,e){return e===Tt?Te(function(t){var e=Wt(t),i=Gt(t),n=e.visualViewport,s=i.clientWidth,o=i.clientHeight,r=0,a=0;return 
n&&(s=n.width,o=n.height,/^((?!chrome|android).)*safari/i.test(navigator.userAgent)||(r=n.offsetLeft,a=n.offsetTop)),{width:s,height:o,x:r+ye(t),y:a}}(t)):zt(e)?function(t){var e=Vt(t);return e.top=e.top+t.clientTop,e.left=e.left+t.clientLeft,e.bottom=e.top+t.clientHeight,e.right=e.left+t.clientWidth,e.width=t.clientWidth,e.height=t.clientHeight,e.x=e.left,e.y=e.top,e}(e):Te(function(t){var e,i=Gt(t),n=ve(t),s=null==(e=t.ownerDocument)?void 0:e.body,o=ie(i.scrollWidth,i.clientWidth,s?s.scrollWidth:0,s?s.clientWidth:0),r=ie(i.scrollHeight,i.clientHeight,s?s.scrollHeight:0,s?s.clientHeight:0),a=-n.scrollLeft+ye(t),l=-n.scrollTop;return"rtl"===Yt(s||i).direction&&(a+=ie(i.clientWidth,s?s.clientWidth:0)-o),{width:o,height:r,x:a,y:l}}(Gt(t)))}function Ce(t){var e,i=t.reference,n=t.element,s=t.placement,o=s?Ut(s):null,r=s?ce(s):null,a=i.x+i.width/2-n.width/2,l=i.y+i.height/2-n.height/2;switch(o){case mt:e={x:a,y:i.y-n.height};break;case gt:e={x:a,y:i.y+i.height};break;case _t:e={x:i.x+i.width,y:l};break;case bt:e={x:i.x-n.width,y:l};break;default:e={x:i.x,y:i.y}}var c=o?ee(o):null;if(null!=c){var h="y"===c?"height":"width";switch(r){case wt:e[c]=e[c]-(i[h]/2-n[h]/2);break;case Et:e[c]=e[c]+(i[h]/2-n[h]/2)}}return e}function ke(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=void 0===n?t.placement:n,o=i.boundary,r=void 0===o?At:o,a=i.rootBoundary,l=void 0===a?Tt:a,c=i.elementContext,h=void 0===c?Ot:c,d=i.altBoundary,u=void 0!==d&&d,f=i.padding,p=void 0===f?0:f,m=re("number"!=typeof p?p:ae(p,yt)),g=h===Ot?Ct:Ot,_=t.rects.popper,b=t.elements[u?g:h],v=function(t,e,i){var n="clippingParents"===e?function(t){var e=Ae(Zt(t)),i=["absolute","fixed"].indexOf(Yt(t).position)>=0&&zt(t)?te(t):t;return $t(i)?e.filter((function(t){return $t(t)&&Xt(t,i)&&"body"!==Rt(t)})):[]}(t):[].concat(e),s=[].concat(n,[i]),o=s[0],r=s.reduce((function(e,i){var n=Oe(t,i);return e.top=ie(n.top,e.top),e.right=ne(n.right,e.right),e.bottom=ne(n.bottom,e.bottom),e.left=ie(n.left,e.left),e}),Oe(t,o));return r.width=r.right-r.left,r.height=r.bottom-r.top,r.x=r.left,r.y=r.top,r}($t(b)?b:b.contextElement||Gt(t.elements.popper),r,l),y=Vt(t.elements.reference),w=Ce({reference:y,element:_,strategy:"absolute",placement:s}),E=Te(Object.assign({},_,w)),A=h===Ot?E:y,T={top:v.top-A.top+m.top,bottom:A.bottom-v.bottom+m.bottom,left:v.left-A.left+m.left,right:A.right-v.right+m.right},O=t.modifiersData.offset;if(h===Ot&&O){var C=O[s];Object.keys(T).forEach((function(t){var e=[_t,gt].indexOf(t)>=0?1:-1,i=[mt,gt].indexOf(t)>=0?"y":"x";T[t]+=C[i]*e}))}return T}function Le(t,e){void 0===e&&(e={});var i=e,n=i.placement,s=i.boundary,o=i.rootBoundary,r=i.padding,a=i.flipVariations,l=i.allowedAutoPlacements,c=void 0===l?Lt:l,h=ce(n),d=h?a?kt:kt.filter((function(t){return ce(t)===h})):yt,u=d.filter((function(t){return c.indexOf(t)>=0}));0===u.length&&(u=d);var f=u.reduce((function(e,i){return e[i]=ke(t,{placement:i,boundary:s,rootBoundary:o,padding:r})[Ut(i)],e}),{});return Object.keys(f).sort((function(t,e){return f[t]-f[e]}))}const xe={name:"flip",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name;if(!e.modifiersData[n]._skip){for(var s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0===r||r,l=i.fallbackPlacements,c=i.padding,h=i.boundary,d=i.rootBoundary,u=i.altBoundary,f=i.flipVariations,p=void 0===f||f,m=i.allowedAutoPlacements,g=e.options.placement,_=Ut(g),b=l||(_!==g&&p?function(t){if(Ut(t)===vt)return[];var e=ge(t);return[be(t),e,be(e)]}(g):[ge(g)]),v=[g].concat(b).reduce((function(t,i){return 
t.concat(Ut(i)===vt?Le(e,{placement:i,boundary:h,rootBoundary:d,padding:c,flipVariations:p,allowedAutoPlacements:m}):i)}),[]),y=e.rects.reference,w=e.rects.popper,E=new Map,A=!0,T=v[0],O=0;O=0,D=x?"width":"height",S=ke(e,{placement:C,boundary:h,rootBoundary:d,altBoundary:u,padding:c}),N=x?L?_t:bt:L?gt:mt;y[D]>w[D]&&(N=ge(N));var I=ge(N),P=[];if(o&&P.push(S[k]<=0),a&&P.push(S[N]<=0,S[I]<=0),P.every((function(t){return t}))){T=C,A=!1;break}E.set(C,P)}if(A)for(var j=function(t){var e=v.find((function(e){var i=E.get(e);if(i)return i.slice(0,t).every((function(t){return t}))}));if(e)return T=e,"break"},M=p?3:1;M>0&&"break"!==j(M);M--);e.placement!==T&&(e.modifiersData[n]._skip=!0,e.placement=T,e.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function De(t,e,i){return void 0===i&&(i={x:0,y:0}),{top:t.top-e.height-i.y,right:t.right-e.width+i.x,bottom:t.bottom-e.height+i.y,left:t.left-e.width-i.x}}function Se(t){return[mt,_t,gt,bt].some((function(e){return t[e]>=0}))}const Ne={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(t){var e=t.state,i=t.name,n=e.rects.reference,s=e.rects.popper,o=e.modifiersData.preventOverflow,r=ke(e,{elementContext:"reference"}),a=ke(e,{altBoundary:!0}),l=De(r,n),c=De(a,s,o),h=Se(l),d=Se(c);e.modifiersData[i]={referenceClippingOffsets:l,popperEscapeOffsets:c,isReferenceHidden:h,hasPopperEscaped:d},e.attributes.popper=Object.assign({},e.attributes.popper,{"data-popper-reference-hidden":h,"data-popper-escaped":d})}},Ie={name:"offset",enabled:!0,phase:"main",requires:["popperOffsets"],fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.offset,o=void 0===s?[0,0]:s,r=Lt.reduce((function(t,i){return t[i]=function(t,e,i){var n=Ut(t),s=[bt,mt].indexOf(n)>=0?-1:1,o="function"==typeof i?i(Object.assign({},e,{placement:t})):i,r=o[0],a=o[1];return r=r||0,a=(a||0)*s,[bt,_t].indexOf(n)>=0?{x:a,y:r}:{x:r,y:a}}(i,e.rects,o),t}),{}),a=r[e.placement],l=a.x,c=a.y;null!=e.modifiersData.popperOffsets&&(e.modifiersData.popperOffsets.x+=l,e.modifiersData.popperOffsets.y+=c),e.modifiersData[n]=r}},Pe={name:"popperOffsets",enabled:!0,phase:"read",fn:function(t){var e=t.state,i=t.name;e.modifiersData[i]=Ce({reference:e.rects.reference,element:e.rects.popper,strategy:"absolute",placement:e.placement})},data:{}},je={name:"preventOverflow",enabled:!0,phase:"main",fn:function(t){var e=t.state,i=t.options,n=t.name,s=i.mainAxis,o=void 0===s||s,r=i.altAxis,a=void 0!==r&&r,l=i.boundary,c=i.rootBoundary,h=i.altBoundary,d=i.padding,u=i.tether,f=void 0===u||u,p=i.tetherOffset,m=void 0===p?0:p,g=ke(e,{boundary:l,rootBoundary:c,padding:d,altBoundary:h}),_=Ut(e.placement),b=ce(e.placement),v=!b,y=ee(_),w="x"===y?"y":"x",E=e.modifiersData.popperOffsets,A=e.rects.reference,T=e.rects.popper,O="function"==typeof m?m(Object.assign({},e.rects,{placement:e.placement})):m,C={x:0,y:0};if(E){if(o||a){var k="y"===y?mt:bt,L="y"===y?gt:_t,x="y"===y?"height":"width",D=E[y],S=E[y]+g[k],N=E[y]-g[L],I=f?-T[x]/2:0,P=b===wt?A[x]:T[x],j=b===wt?-T[x]:-A[x],M=e.elements.arrow,H=f&&M?Kt(M):{width:0,height:0},B=e.modifiersData["arrow#persistent"]?e.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},R=B[k],W=B[L],$=oe(0,A[x],H[x]),z=v?A[x]/2-I-$-R-O:P-$-R-O,q=v?-A[x]/2+I+$+W+O:j+$+W+O,F=e.elements.arrow&&te(e.elements.arrow),U=F?"y"===y?F.clientTop||0:F.clientLeft||0:0,V=e.modifiersData.offset?e.modifiersData.offset[e.placement][y]:0,K=E[y]+z-V-U,X=E[y]+q-V;if(o){var Y=oe(f?ne(S,K):S,D,f?ie(N,X):N);E[y]=Y,C[y]=Y-D}if(a){var 
Q="x"===y?mt:bt,G="x"===y?gt:_t,Z=E[w],J=Z+g[Q],tt=Z-g[G],et=oe(f?ne(J,K):J,Z,f?ie(tt,X):tt);E[w]=et,C[w]=et-Z}}e.modifiersData[n]=C}},requiresIfExists:["offset"]};function Me(t,e,i){void 0===i&&(i=!1);var n=zt(e);zt(e)&&function(t){var e=t.getBoundingClientRect();e.width,t.offsetWidth,e.height,t.offsetHeight}(e);var s,o,r=Gt(e),a=Vt(t),l={scrollLeft:0,scrollTop:0},c={x:0,y:0};return(n||!n&&!i)&&(("body"!==Rt(e)||we(r))&&(l=(s=e)!==Wt(s)&&zt(s)?{scrollLeft:(o=s).scrollLeft,scrollTop:o.scrollTop}:ve(s)),zt(e)?((c=Vt(e)).x+=e.clientLeft,c.y+=e.clientTop):r&&(c.x=ye(r))),{x:a.left+l.scrollLeft-c.x,y:a.top+l.scrollTop-c.y,width:a.width,height:a.height}}function He(t){var e=new Map,i=new Set,n=[];function s(t){i.add(t.name),[].concat(t.requires||[],t.requiresIfExists||[]).forEach((function(t){if(!i.has(t)){var n=e.get(t);n&&s(n)}})),n.push(t)}return t.forEach((function(t){e.set(t.name,t)})),t.forEach((function(t){i.has(t.name)||s(t)})),n}var Be={placement:"bottom",modifiers:[],strategy:"absolute"};function Re(){for(var t=arguments.length,e=new Array(t),i=0;ij.on(t,"mouseover",d))),this._element.focus(),this._element.setAttribute("aria-expanded",!0),this._menu.classList.add(Je),this._element.classList.add(Je),j.trigger(this._element,"shown.bs.dropdown",t)}hide(){if(c(this._element)||!this._isShown(this._menu))return;const t={relatedTarget:this._element};this._completeHide(t)}dispose(){this._popper&&this._popper.destroy(),super.dispose()}update(){this._inNavbar=this._detectNavbar(),this._popper&&this._popper.update()}_completeHide(t){j.trigger(this._element,"hide.bs.dropdown",t).defaultPrevented||("ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._popper&&this._popper.destroy(),this._menu.classList.remove(Je),this._element.classList.remove(Je),this._element.setAttribute("aria-expanded","false"),U.removeDataAttribute(this._menu,"popper"),j.trigger(this._element,"hidden.bs.dropdown",t))}_getConfig(t){if(t={...this.constructor.Default,...U.getDataAttributes(this._element),...t},a(Ue,t,this.constructor.DefaultType),"object"==typeof t.reference&&!o(t.reference)&&"function"!=typeof t.reference.getBoundingClientRect)throw new TypeError(`${Ue.toUpperCase()}: Option "reference" provided type "object" without a required "getBoundingClientRect" method.`);return t}_createPopper(t){if(void 0===Fe)throw new TypeError("Bootstrap's dropdowns require Popper (https://popper.js.org)");let e=this._element;"parent"===this._config.reference?e=t:o(this._config.reference)?e=r(this._config.reference):"object"==typeof this._config.reference&&(e=this._config.reference);const i=this._getPopperConfig(),n=i.modifiers.find((t=>"applyStyles"===t.name&&!1===t.enabled));this._popper=qe(e,this._menu,i),n&&U.setDataAttribute(this._menu,"popper","static")}_isShown(t=this._element){return t.classList.contains(Je)}_getMenuElement(){return V.next(this._element,ei)[0]}_getPlacement(){const t=this._element.parentNode;if(t.classList.contains("dropend"))return ri;if(t.classList.contains("dropstart"))return ai;const e="end"===getComputedStyle(this._menu).getPropertyValue("--bs-position").trim();return t.classList.contains("dropup")?e?ni:ii:e?oi:si}_detectNavbar(){return null!==this._element.closest(".navbar")}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_getPopperConfig(){const 
t={placement:this._getPlacement(),modifiers:[{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"offset",options:{offset:this._getOffset()}}]};return"static"===this._config.display&&(t.modifiers=[{name:"applyStyles",enabled:!1}]),{...t,..."function"==typeof this._config.popperConfig?this._config.popperConfig(t):this._config.popperConfig}}_selectMenuItem({key:t,target:e}){const i=V.find(".dropdown-menu .dropdown-item:not(.disabled):not(:disabled)",this._menu).filter(l);i.length&&v(i,e,t===Ye,!i.includes(e)).focus()}static jQueryInterface(t){return this.each((function(){const e=hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}static clearMenus(t){if(t&&(2===t.button||"keyup"===t.type&&"Tab"!==t.key))return;const e=V.find(ti);for(let i=0,n=e.length;ie+t)),this._setElementAttributes(di,"paddingRight",(e=>e+t)),this._setElementAttributes(ui,"marginRight",(e=>e-t))}_disableOverFlow(){this._saveInitialAttribute(this._element,"overflow"),this._element.style.overflow="hidden"}_setElementAttributes(t,e,i){const n=this.getWidth();this._applyManipulationCallback(t,(t=>{if(t!==this._element&&window.innerWidth>t.clientWidth+n)return;this._saveInitialAttribute(t,e);const s=window.getComputedStyle(t)[e];t.style[e]=`${i(Number.parseFloat(s))}px`}))}reset(){this._resetElementAttributes(this._element,"overflow"),this._resetElementAttributes(this._element,"paddingRight"),this._resetElementAttributes(di,"paddingRight"),this._resetElementAttributes(ui,"marginRight")}_saveInitialAttribute(t,e){const i=t.style[e];i&&U.setDataAttribute(t,e,i)}_resetElementAttributes(t,e){this._applyManipulationCallback(t,(t=>{const i=U.getDataAttribute(t,e);void 0===i?t.style.removeProperty(e):(U.removeDataAttribute(t,e),t.style[e]=i)}))}_applyManipulationCallback(t,e){o(t)?e(t):V.find(t,this._element).forEach(e)}isOverflowing(){return this.getWidth()>0}}const pi={className:"modal-backdrop",isVisible:!0,isAnimated:!1,rootElement:"body",clickCallback:null},mi={className:"string",isVisible:"boolean",isAnimated:"boolean",rootElement:"(element|string)",clickCallback:"(function|null)"},gi="show",_i="mousedown.bs.backdrop";class bi{constructor(t){this._config=this._getConfig(t),this._isAppended=!1,this._element=null}show(t){this._config.isVisible?(this._append(),this._config.isAnimated&&u(this._getElement()),this._getElement().classList.add(gi),this._emulateAnimation((()=>{_(t)}))):_(t)}hide(t){this._config.isVisible?(this._getElement().classList.remove(gi),this._emulateAnimation((()=>{this.dispose(),_(t)}))):_(t)}_getElement(){if(!this._element){const t=document.createElement("div");t.className=this._config.className,this._config.isAnimated&&t.classList.add("fade"),this._element=t}return this._element}_getConfig(t){return(t={...pi,..."object"==typeof t?t:{}}).rootElement=r(t.rootElement),a("backdrop",t,mi),t}_append(){this._isAppended||(this._config.rootElement.append(this._getElement()),j.on(this._getElement(),_i,(()=>{_(this._config.clickCallback)})),this._isAppended=!0)}dispose(){this._isAppended&&(j.off(this._element,_i),this._element.remove(),this._isAppended=!1)}_emulateAnimation(t){b(t,this._getElement(),this._config.isAnimated)}}const vi={trapElement:null,autofocus:!0},yi={trapElement:"element",autofocus:"boolean"},wi=".bs.focustrap",Ei="backward";class 
Ai{constructor(t){this._config=this._getConfig(t),this._isActive=!1,this._lastTabNavDirection=null}activate(){const{trapElement:t,autofocus:e}=this._config;this._isActive||(e&&t.focus(),j.off(document,wi),j.on(document,"focusin.bs.focustrap",(t=>this._handleFocusin(t))),j.on(document,"keydown.tab.bs.focustrap",(t=>this._handleKeydown(t))),this._isActive=!0)}deactivate(){this._isActive&&(this._isActive=!1,j.off(document,wi))}_handleFocusin(t){const{target:e}=t,{trapElement:i}=this._config;if(e===document||e===i||i.contains(e))return;const n=V.focusableChildren(i);0===n.length?i.focus():this._lastTabNavDirection===Ei?n[n.length-1].focus():n[0].focus()}_handleKeydown(t){"Tab"===t.key&&(this._lastTabNavDirection=t.shiftKey?Ei:"forward")}_getConfig(t){return t={...vi,..."object"==typeof t?t:{}},a("focustrap",t,yi),t}}const Ti="modal",Oi="Escape",Ci={backdrop:!0,keyboard:!0,focus:!0},ki={backdrop:"(boolean|string)",keyboard:"boolean",focus:"boolean"},Li="hidden.bs.modal",xi="show.bs.modal",Di="resize.bs.modal",Si="click.dismiss.bs.modal",Ni="keydown.dismiss.bs.modal",Ii="mousedown.dismiss.bs.modal",Pi="modal-open",ji="show",Mi="modal-static";class Hi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._dialog=V.findOne(".modal-dialog",this._element),this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._isShown=!1,this._ignoreBackdropClick=!1,this._isTransitioning=!1,this._scrollBar=new fi}static get Default(){return Ci}static get NAME(){return Ti}toggle(t){return this._isShown?this.hide():this.show(t)}show(t){this._isShown||this._isTransitioning||j.trigger(this._element,xi,{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._isAnimated()&&(this._isTransitioning=!0),this._scrollBar.hide(),document.body.classList.add(Pi),this._adjustDialog(),this._setEscapeEvent(),this._setResizeEvent(),j.on(this._dialog,Ii,(()=>{j.one(this._element,"mouseup.dismiss.bs.modal",(t=>{t.target===this._element&&(this._ignoreBackdropClick=!0)}))})),this._showBackdrop((()=>this._showElement(t))))}hide(){if(!this._isShown||this._isTransitioning)return;if(j.trigger(this._element,"hide.bs.modal").defaultPrevented)return;this._isShown=!1;const t=this._isAnimated();t&&(this._isTransitioning=!0),this._setEscapeEvent(),this._setResizeEvent(),this._focustrap.deactivate(),this._element.classList.remove(ji),j.off(this._element,Si),j.off(this._dialog,Ii),this._queueCallback((()=>this._hideModal()),this._element,t)}dispose(){[window,this._dialog].forEach((t=>j.off(t,".bs.modal"))),this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}handleUpdate(){this._adjustDialog()}_initializeBackDrop(){return new bi({isVisible:Boolean(this._config.backdrop),isAnimated:this._isAnimated()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_getConfig(t){return t={...Ci,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Ti,t,ki),t}_showElement(t){const 
e=this._isAnimated(),i=V.findOne(".modal-body",this._dialog);this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE||document.body.append(this._element),this._element.style.display="block",this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.scrollTop=0,i&&(i.scrollTop=0),e&&u(this._element),this._element.classList.add(ji),this._queueCallback((()=>{this._config.focus&&this._focustrap.activate(),this._isTransitioning=!1,j.trigger(this._element,"shown.bs.modal",{relatedTarget:t})}),this._dialog,e)}_setEscapeEvent(){this._isShown?j.on(this._element,Ni,(t=>{this._config.keyboard&&t.key===Oi?(t.preventDefault(),this.hide()):this._config.keyboard||t.key!==Oi||this._triggerBackdropTransition()})):j.off(this._element,Ni)}_setResizeEvent(){this._isShown?j.on(window,Di,(()=>this._adjustDialog())):j.off(window,Di)}_hideModal(){this._element.style.display="none",this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._isTransitioning=!1,this._backdrop.hide((()=>{document.body.classList.remove(Pi),this._resetAdjustments(),this._scrollBar.reset(),j.trigger(this._element,Li)}))}_showBackdrop(t){j.on(this._element,Si,(t=>{this._ignoreBackdropClick?this._ignoreBackdropClick=!1:t.target===t.currentTarget&&(!0===this._config.backdrop?this.hide():"static"===this._config.backdrop&&this._triggerBackdropTransition())})),this._backdrop.show(t)}_isAnimated(){return this._element.classList.contains("fade")}_triggerBackdropTransition(){if(j.trigger(this._element,"hidePrevented.bs.modal").defaultPrevented)return;const{classList:t,scrollHeight:e,style:i}=this._element,n=e>document.documentElement.clientHeight;!n&&"hidden"===i.overflowY||t.contains(Mi)||(n||(i.overflowY="hidden"),t.add(Mi),this._queueCallback((()=>{t.remove(Mi),n||this._queueCallback((()=>{i.overflowY=""}),this._dialog)}),this._dialog),this._element.focus())}_adjustDialog(){const t=this._element.scrollHeight>document.documentElement.clientHeight,e=this._scrollBar.getWidth(),i=e>0;(!i&&t&&!m()||i&&!t&&m())&&(this._element.style.paddingLeft=`${e}px`),(i&&!t&&!m()||!i&&t&&m())&&(this._element.style.paddingRight=`${e}px`)}_resetAdjustments(){this._element.style.paddingLeft="",this._element.style.paddingRight=""}static jQueryInterface(t,e){return this.each((function(){const i=Hi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===i[t])throw new TypeError(`No method named "${t}"`);i[t](e)}}))}}j.on(document,"click.bs.modal.data-api",'[data-bs-toggle="modal"]',(function(t){const e=n(this);["A","AREA"].includes(this.tagName)&&t.preventDefault(),j.one(e,xi,(t=>{t.defaultPrevented||j.one(e,Li,(()=>{l(this)&&this.focus()}))}));const i=V.findOne(".modal.show");i&&Hi.getInstance(i).hide(),Hi.getOrCreateInstance(e).toggle(this)})),R(Hi),g(Hi);const Bi="offcanvas",Ri={backdrop:!0,keyboard:!0,scroll:!1},Wi={backdrop:"boolean",keyboard:"boolean",scroll:"boolean"},$i="show",zi=".offcanvas.show",qi="hidden.bs.offcanvas";class Fi extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._isShown=!1,this._backdrop=this._initializeBackDrop(),this._focustrap=this._initializeFocusTrap(),this._addEventListeners()}static get NAME(){return Bi}static get Default(){return Ri}toggle(t){return 
this._isShown?this.hide():this.show(t)}show(t){this._isShown||j.trigger(this._element,"show.bs.offcanvas",{relatedTarget:t}).defaultPrevented||(this._isShown=!0,this._element.style.visibility="visible",this._backdrop.show(),this._config.scroll||(new fi).hide(),this._element.removeAttribute("aria-hidden"),this._element.setAttribute("aria-modal",!0),this._element.setAttribute("role","dialog"),this._element.classList.add($i),this._queueCallback((()=>{this._config.scroll||this._focustrap.activate(),j.trigger(this._element,"shown.bs.offcanvas",{relatedTarget:t})}),this._element,!0))}hide(){this._isShown&&(j.trigger(this._element,"hide.bs.offcanvas").defaultPrevented||(this._focustrap.deactivate(),this._element.blur(),this._isShown=!1,this._element.classList.remove($i),this._backdrop.hide(),this._queueCallback((()=>{this._element.setAttribute("aria-hidden",!0),this._element.removeAttribute("aria-modal"),this._element.removeAttribute("role"),this._element.style.visibility="hidden",this._config.scroll||(new fi).reset(),j.trigger(this._element,qi)}),this._element,!0)))}dispose(){this._backdrop.dispose(),this._focustrap.deactivate(),super.dispose()}_getConfig(t){return t={...Ri,...U.getDataAttributes(this._element),..."object"==typeof t?t:{}},a(Bi,t,Wi),t}_initializeBackDrop(){return new bi({className:"offcanvas-backdrop",isVisible:this._config.backdrop,isAnimated:!0,rootElement:this._element.parentNode,clickCallback:()=>this.hide()})}_initializeFocusTrap(){return new Ai({trapElement:this._element})}_addEventListeners(){j.on(this._element,"keydown.dismiss.bs.offcanvas",(t=>{this._config.keyboard&&"Escape"===t.key&&this.hide()}))}static jQueryInterface(t){return this.each((function(){const e=Fi.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t]||t.startsWith("_")||"constructor"===t)throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}j.on(document,"click.bs.offcanvas.data-api",'[data-bs-toggle="offcanvas"]',(function(t){const e=n(this);if(["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this))return;j.one(e,qi,(()=>{l(this)&&this.focus()}));const i=V.findOne(zi);i&&i!==e&&Fi.getInstance(i).hide(),Fi.getOrCreateInstance(e).toggle(this)})),j.on(window,"load.bs.offcanvas.data-api",(()=>V.find(zi).forEach((t=>Fi.getOrCreateInstance(t).show())))),R(Fi),g(Fi);const Ui=new Set(["background","cite","href","itemtype","longdesc","poster","src","xlink:href"]),Vi=/^(?:(?:https?|mailto|ftp|tel|file|sms):|[^#&/:?]*(?:[#/?]|$))/i,Ki=/^data:(?:image\/(?:bmp|gif|jpeg|jpg|png|tiff|webp)|video\/(?:mpeg|mp4|ogg|webm)|audio\/(?:mp3|oga|ogg|opus));base64,[\d+/a-z]+=*$/i,Xi=(t,e)=>{const i=t.nodeName.toLowerCase();if(e.includes(i))return!Ui.has(i)||Boolean(Vi.test(t.nodeValue)||Ki.test(t.nodeValue));const n=e.filter((t=>t instanceof RegExp));for(let t=0,e=n.length;t{Xi(t,r)||i.removeAttribute(t.nodeName)}))}return n.body.innerHTML}const Qi="tooltip",Gi=new Set(["sanitize","allowList","sanitizeFn"]),Zi={animation:"boolean",template:"string",title:"(string|element|function)",trigger:"string",delay:"(number|object)",html:"boolean",selector:"(string|boolean)",placement:"(string|function)",offset:"(array|string|function)",container:"(string|element|boolean)",fallbackPlacements:"array",boundary:"(string|element)",customClass:"(string|function)",sanitize:"boolean",sanitizeFn:"(null|function)",allowList:"object",popperConfig:"(null|object|function)"},Ji={AUTO:"auto",TOP:"top",RIGHT:m()?"left":"right",BOTTOM:"bottom",LEFT:m()?"right":"left"},tn={animation:!0,template:'',trigger:"hover 
focus",title:"",delay:0,html:!1,selector:!1,placement:"top",offset:[0,0],container:!1,fallbackPlacements:["top","right","bottom","left"],boundary:"clippingParents",customClass:"",sanitize:!0,sanitizeFn:null,allowList:{"*":["class","dir","id","lang","role",/^aria-[\w-]*$/i],a:["target","href","title","rel"],area:[],b:[],br:[],col:[],code:[],div:[],em:[],hr:[],h1:[],h2:[],h3:[],h4:[],h5:[],h6:[],i:[],img:["src","srcset","alt","title","width","height"],li:[],ol:[],p:[],pre:[],s:[],small:[],span:[],sub:[],sup:[],strong:[],u:[],ul:[]},popperConfig:null},en={HIDE:"hide.bs.tooltip",HIDDEN:"hidden.bs.tooltip",SHOW:"show.bs.tooltip",SHOWN:"shown.bs.tooltip",INSERTED:"inserted.bs.tooltip",CLICK:"click.bs.tooltip",FOCUSIN:"focusin.bs.tooltip",FOCUSOUT:"focusout.bs.tooltip",MOUSEENTER:"mouseenter.bs.tooltip",MOUSELEAVE:"mouseleave.bs.tooltip"},nn="fade",sn="show",on="show",rn="out",an=".tooltip-inner",ln=".modal",cn="hide.bs.modal",hn="hover",dn="focus";class un extends B{constructor(t,e){if(void 0===Fe)throw new TypeError("Bootstrap's tooltips require Popper (https://popper.js.org)");super(t),this._isEnabled=!0,this._timeout=0,this._hoverState="",this._activeTrigger={},this._popper=null,this._config=this._getConfig(e),this.tip=null,this._setListeners()}static get Default(){return tn}static get NAME(){return Qi}static get Event(){return en}static get DefaultType(){return Zi}enable(){this._isEnabled=!0}disable(){this._isEnabled=!1}toggleEnabled(){this._isEnabled=!this._isEnabled}toggle(t){if(this._isEnabled)if(t){const e=this._initializeOnDelegatedTarget(t);e._activeTrigger.click=!e._activeTrigger.click,e._isWithActiveTrigger()?e._enter(null,e):e._leave(null,e)}else{if(this.getTipElement().classList.contains(sn))return void this._leave(null,this);this._enter(null,this)}}dispose(){clearTimeout(this._timeout),j.off(this._element.closest(ln),cn,this._hideModalHandler),this.tip&&this.tip.remove(),this._disposePopper(),super.dispose()}show(){if("none"===this._element.style.display)throw new Error("Please use show on visible elements");if(!this.isWithContent()||!this._isEnabled)return;const t=j.trigger(this._element,this.constructor.Event.SHOW),e=h(this._element),i=null===e?this._element.ownerDocument.documentElement.contains(this._element):e.contains(this._element);if(t.defaultPrevented||!i)return;"tooltip"===this.constructor.NAME&&this.tip&&this.getTitle()!==this.tip.querySelector(an).innerHTML&&(this._disposePopper(),this.tip.remove(),this.tip=null);const n=this.getTipElement(),s=(t=>{do{t+=Math.floor(1e6*Math.random())}while(document.getElementById(t));return t})(this.constructor.NAME);n.setAttribute("id",s),this._element.setAttribute("aria-describedby",s),this._config.animation&&n.classList.add(nn);const o="function"==typeof this._config.placement?this._config.placement.call(this,n,this._element):this._config.placement,r=this._getAttachment(o);this._addAttachmentClass(r);const{container:a}=this._config;H.set(n,this.constructor.DATA_KEY,this),this._element.ownerDocument.documentElement.contains(this.tip)||(a.append(n),j.trigger(this._element,this.constructor.Event.INSERTED)),this._popper?this._popper.update():this._popper=qe(this._element,n,this._getPopperConfig(r)),n.classList.add(sn);const l=this._resolvePossibleFunction(this._config.customClass);l&&n.classList.add(...l.split(" ")),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>{j.on(t,"mouseover",d)}));const c=this.tip.classList.contains(nn);this._queueCallback((()=>{const 
t=this._hoverState;this._hoverState=null,j.trigger(this._element,this.constructor.Event.SHOWN),t===rn&&this._leave(null,this)}),this.tip,c)}hide(){if(!this._popper)return;const t=this.getTipElement();if(j.trigger(this._element,this.constructor.Event.HIDE).defaultPrevented)return;t.classList.remove(sn),"ontouchstart"in document.documentElement&&[].concat(...document.body.children).forEach((t=>j.off(t,"mouseover",d))),this._activeTrigger.click=!1,this._activeTrigger.focus=!1,this._activeTrigger.hover=!1;const e=this.tip.classList.contains(nn);this._queueCallback((()=>{this._isWithActiveTrigger()||(this._hoverState!==on&&t.remove(),this._cleanTipClass(),this._element.removeAttribute("aria-describedby"),j.trigger(this._element,this.constructor.Event.HIDDEN),this._disposePopper())}),this.tip,e),this._hoverState=""}update(){null!==this._popper&&this._popper.update()}isWithContent(){return Boolean(this.getTitle())}getTipElement(){if(this.tip)return this.tip;const t=document.createElement("div");t.innerHTML=this._config.template;const e=t.children[0];return this.setContent(e),e.classList.remove(nn,sn),this.tip=e,this.tip}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),an)}_sanitizeAndSetContent(t,e,i){const n=V.findOne(i,t);e||!n?this.setElementContent(n,e):n.remove()}setElementContent(t,e){if(null!==t)return o(e)?(e=r(e),void(this._config.html?e.parentNode!==t&&(t.innerHTML="",t.append(e)):t.textContent=e.textContent)):void(this._config.html?(this._config.sanitize&&(e=Yi(e,this._config.allowList,this._config.sanitizeFn)),t.innerHTML=e):t.textContent=e)}getTitle(){const t=this._element.getAttribute("data-bs-original-title")||this._config.title;return this._resolvePossibleFunction(t)}updateAttachment(t){return"right"===t?"end":"left"===t?"start":t}_initializeOnDelegatedTarget(t,e){return e||this.constructor.getOrCreateInstance(t.delegateTarget,this._getDelegateConfig())}_getOffset(){const{offset:t}=this._config;return"string"==typeof t?t.split(",").map((t=>Number.parseInt(t,10))):"function"==typeof t?e=>t(e,this._element):t}_resolvePossibleFunction(t){return"function"==typeof t?t.call(this._element):t}_getPopperConfig(t){const e={placement:t,modifiers:[{name:"flip",options:{fallbackPlacements:this._config.fallbackPlacements}},{name:"offset",options:{offset:this._getOffset()}},{name:"preventOverflow",options:{boundary:this._config.boundary}},{name:"arrow",options:{element:`.${this.constructor.NAME}-arrow`}},{name:"onChange",enabled:!0,phase:"afterWrite",fn:t=>this._handlePopperPlacementChange(t)}],onFirstUpdate:t=>{t.options.placement!==t.placement&&this._handlePopperPlacementChange(t)}};return{...e,..."function"==typeof this._config.popperConfig?this._config.popperConfig(e):this._config.popperConfig}}_addAttachmentClass(t){this.getTipElement().classList.add(`${this._getBasicClassPrefix()}-${this.updateAttachment(t)}`)}_getAttachment(t){return Ji[t.toUpperCase()]}_setListeners(){this._config.trigger.split(" ").forEach((t=>{if("click"===t)j.on(this._element,this.constructor.Event.CLICK,this._config.selector,(t=>this.toggle(t)));else if("manual"!==t){const 
e=t===hn?this.constructor.Event.MOUSEENTER:this.constructor.Event.FOCUSIN,i=t===hn?this.constructor.Event.MOUSELEAVE:this.constructor.Event.FOCUSOUT;j.on(this._element,e,this._config.selector,(t=>this._enter(t))),j.on(this._element,i,this._config.selector,(t=>this._leave(t)))}})),this._hideModalHandler=()=>{this._element&&this.hide()},j.on(this._element.closest(ln),cn,this._hideModalHandler),this._config.selector?this._config={...this._config,trigger:"manual",selector:""}:this._fixTitle()}_fixTitle(){const t=this._element.getAttribute("title"),e=typeof this._element.getAttribute("data-bs-original-title");(t||"string"!==e)&&(this._element.setAttribute("data-bs-original-title",t||""),!t||this._element.getAttribute("aria-label")||this._element.textContent||this._element.setAttribute("aria-label",t),this._element.setAttribute("title",""))}_enter(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusin"===t.type?dn:hn]=!0),e.getTipElement().classList.contains(sn)||e._hoverState===on?e._hoverState=on:(clearTimeout(e._timeout),e._hoverState=on,e._config.delay&&e._config.delay.show?e._timeout=setTimeout((()=>{e._hoverState===on&&e.show()}),e._config.delay.show):e.show())}_leave(t,e){e=this._initializeOnDelegatedTarget(t,e),t&&(e._activeTrigger["focusout"===t.type?dn:hn]=e._element.contains(t.relatedTarget)),e._isWithActiveTrigger()||(clearTimeout(e._timeout),e._hoverState=rn,e._config.delay&&e._config.delay.hide?e._timeout=setTimeout((()=>{e._hoverState===rn&&e.hide()}),e._config.delay.hide):e.hide())}_isWithActiveTrigger(){for(const t in this._activeTrigger)if(this._activeTrigger[t])return!0;return!1}_getConfig(t){const e=U.getDataAttributes(this._element);return Object.keys(e).forEach((t=>{Gi.has(t)&&delete e[t]})),(t={...this.constructor.Default,...e,..."object"==typeof t&&t?t:{}}).container=!1===t.container?document.body:r(t.container),"number"==typeof t.delay&&(t.delay={show:t.delay,hide:t.delay}),"number"==typeof t.title&&(t.title=t.title.toString()),"number"==typeof t.content&&(t.content=t.content.toString()),a(Qi,t,this.constructor.DefaultType),t.sanitize&&(t.template=Yi(t.template,t.allowList,t.sanitizeFn)),t}_getDelegateConfig(){const t={};for(const e in this._config)this.constructor.Default[e]!==this._config[e]&&(t[e]=this._config[e]);return t}_cleanTipClass(){const t=this.getTipElement(),e=new RegExp(`(^|\\s)${this._getBasicClassPrefix()}\\S+`,"g"),i=t.getAttribute("class").match(e);null!==i&&i.length>0&&i.map((t=>t.trim())).forEach((e=>t.classList.remove(e)))}_getBasicClassPrefix(){return"bs-tooltip"}_handlePopperPlacementChange(t){const{state:e}=t;e&&(this.tip=e.elements.popper,this._cleanTipClass(),this._addAttachmentClass(this._getAttachment(e.placement)))}_disposePopper(){this._popper&&(this._popper.destroy(),this._popper=null)}static jQueryInterface(t){return this.each((function(){const e=un.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(un);const fn={...un.Default,placement:"right",offset:[0,8],trigger:"click",content:"",template:''},pn={...un.DefaultType,content:"(string|element|function)"},mn={HIDE:"hide.bs.popover",HIDDEN:"hidden.bs.popover",SHOW:"show.bs.popover",SHOWN:"shown.bs.popover",INSERTED:"inserted.bs.popover",CLICK:"click.bs.popover",FOCUSIN:"focusin.bs.popover",FOCUSOUT:"focusout.bs.popover",MOUSEENTER:"mouseenter.bs.popover",MOUSELEAVE:"mouseleave.bs.popover"};class gn extends un{static get Default(){return fn}static get NAME(){return"popover"}static get 
Event(){return mn}static get DefaultType(){return pn}isWithContent(){return this.getTitle()||this._getContent()}setContent(t){this._sanitizeAndSetContent(t,this.getTitle(),".popover-header"),this._sanitizeAndSetContent(t,this._getContent(),".popover-body")}_getContent(){return this._resolvePossibleFunction(this._config.content)}_getBasicClassPrefix(){return"bs-popover"}static jQueryInterface(t){return this.each((function(){const e=gn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}g(gn);const _n="scrollspy",bn={offset:10,method:"auto",target:""},vn={offset:"number",method:"string",target:"(string|element)"},yn="active",wn=".nav-link, .list-group-item, .dropdown-item",En="position";class An extends B{constructor(t,e){super(t),this._scrollElement="BODY"===this._element.tagName?window:this._element,this._config=this._getConfig(e),this._offsets=[],this._targets=[],this._activeTarget=null,this._scrollHeight=0,j.on(this._scrollElement,"scroll.bs.scrollspy",(()=>this._process())),this.refresh(),this._process()}static get Default(){return bn}static get NAME(){return _n}refresh(){const t=this._scrollElement===this._scrollElement.window?"offset":En,e="auto"===this._config.method?t:this._config.method,n=e===En?this._getScrollTop():0;this._offsets=[],this._targets=[],this._scrollHeight=this._getScrollHeight(),V.find(wn,this._config.target).map((t=>{const s=i(t),o=s?V.findOne(s):null;if(o){const t=o.getBoundingClientRect();if(t.width||t.height)return[U[e](o).top+n,s]}return null})).filter((t=>t)).sort(((t,e)=>t[0]-e[0])).forEach((t=>{this._offsets.push(t[0]),this._targets.push(t[1])}))}dispose(){j.off(this._scrollElement,".bs.scrollspy"),super.dispose()}_getConfig(t){return(t={...bn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}}).target=r(t.target)||document.documentElement,a(_n,t,vn),t}_getScrollTop(){return this._scrollElement===window?this._scrollElement.pageYOffset:this._scrollElement.scrollTop}_getScrollHeight(){return this._scrollElement.scrollHeight||Math.max(document.body.scrollHeight,document.documentElement.scrollHeight)}_getOffsetHeight(){return this._scrollElement===window?window.innerHeight:this._scrollElement.getBoundingClientRect().height}_process(){const t=this._getScrollTop()+this._config.offset,e=this._getScrollHeight(),i=this._config.offset+e-this._getOffsetHeight();if(this._scrollHeight!==e&&this.refresh(),t>=i){const t=this._targets[this._targets.length-1];this._activeTarget!==t&&this._activate(t)}else{if(this._activeTarget&&t0)return this._activeTarget=null,void this._clear();for(let e=this._offsets.length;e--;)this._activeTarget!==this._targets[e]&&t>=this._offsets[e]&&(void 0===this._offsets[e+1]||t`${e}[data-bs-target="${t}"],${e}[href="${t}"]`)),i=V.findOne(e.join(","),this._config.target);i.classList.add(yn),i.classList.contains("dropdown-item")?V.findOne(".dropdown-toggle",i.closest(".dropdown")).classList.add(yn):V.parents(i,".nav, .list-group").forEach((t=>{V.prev(t,".nav-link, .list-group-item").forEach((t=>t.classList.add(yn))),V.prev(t,".nav-item").forEach((t=>{V.children(t,".nav-link").forEach((t=>t.classList.add(yn)))}))})),j.trigger(this._scrollElement,"activate.bs.scrollspy",{relatedTarget:t})}_clear(){V.find(wn,this._config.target).filter((t=>t.classList.contains(yn))).forEach((t=>t.classList.remove(yn)))}static jQueryInterface(t){return this.each((function(){const e=An.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method 
named "${t}"`);e[t]()}}))}}j.on(window,"load.bs.scrollspy.data-api",(()=>{V.find('[data-bs-spy="scroll"]').forEach((t=>new An(t)))})),g(An);const Tn="active",On="fade",Cn="show",kn=".active",Ln=":scope > li > .active";class xn extends B{static get NAME(){return"tab"}show(){if(this._element.parentNode&&this._element.parentNode.nodeType===Node.ELEMENT_NODE&&this._element.classList.contains(Tn))return;let t;const e=n(this._element),i=this._element.closest(".nav, .list-group");if(i){const e="UL"===i.nodeName||"OL"===i.nodeName?Ln:kn;t=V.find(e,i),t=t[t.length-1]}const s=t?j.trigger(t,"hide.bs.tab",{relatedTarget:this._element}):null;if(j.trigger(this._element,"show.bs.tab",{relatedTarget:t}).defaultPrevented||null!==s&&s.defaultPrevented)return;this._activate(this._element,i);const o=()=>{j.trigger(t,"hidden.bs.tab",{relatedTarget:this._element}),j.trigger(this._element,"shown.bs.tab",{relatedTarget:t})};e?this._activate(e,e.parentNode,o):o()}_activate(t,e,i){const n=(!e||"UL"!==e.nodeName&&"OL"!==e.nodeName?V.children(e,kn):V.find(Ln,e))[0],s=i&&n&&n.classList.contains(On),o=()=>this._transitionComplete(t,n,i);n&&s?(n.classList.remove(Cn),this._queueCallback(o,t,!0)):o()}_transitionComplete(t,e,i){if(e){e.classList.remove(Tn);const t=V.findOne(":scope > .dropdown-menu .active",e.parentNode);t&&t.classList.remove(Tn),"tab"===e.getAttribute("role")&&e.setAttribute("aria-selected",!1)}t.classList.add(Tn),"tab"===t.getAttribute("role")&&t.setAttribute("aria-selected",!0),u(t),t.classList.contains(On)&&t.classList.add(Cn);let n=t.parentNode;if(n&&"LI"===n.nodeName&&(n=n.parentNode),n&&n.classList.contains("dropdown-menu")){const e=t.closest(".dropdown");e&&V.find(".dropdown-toggle",e).forEach((t=>t.classList.add(Tn))),t.setAttribute("aria-expanded",!0)}i&&i()}static jQueryInterface(t){return this.each((function(){const e=xn.getOrCreateInstance(this);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t]()}}))}}j.on(document,"click.bs.tab.data-api",'[data-bs-toggle="tab"], [data-bs-toggle="pill"], [data-bs-toggle="list"]',(function(t){["A","AREA"].includes(this.tagName)&&t.preventDefault(),c(this)||xn.getOrCreateInstance(this).show()})),g(xn);const Dn="toast",Sn="hide",Nn="show",In="showing",Pn={animation:"boolean",autohide:"boolean",delay:"number"},jn={animation:!0,autohide:!0,delay:5e3};class Mn extends B{constructor(t,e){super(t),this._config=this._getConfig(e),this._timeout=null,this._hasMouseInteraction=!1,this._hasKeyboardInteraction=!1,this._setListeners()}static get DefaultType(){return Pn}static get Default(){return jn}static get NAME(){return Dn}show(){j.trigger(this._element,"show.bs.toast").defaultPrevented||(this._clearTimeout(),this._config.animation&&this._element.classList.add("fade"),this._element.classList.remove(Sn),u(this._element),this._element.classList.add(Nn),this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.remove(In),j.trigger(this._element,"shown.bs.toast"),this._maybeScheduleHide()}),this._element,this._config.animation))}hide(){this._element.classList.contains(Nn)&&(j.trigger(this._element,"hide.bs.toast").defaultPrevented||(this._element.classList.add(In),this._queueCallback((()=>{this._element.classList.add(Sn),this._element.classList.remove(In),this._element.classList.remove(Nn),j.trigger(this._element,"hidden.bs.toast")}),this._element,this._config.animation)))}dispose(){this._clearTimeout(),this._element.classList.contains(Nn)&&this._element.classList.remove(Nn),super.dispose()}_getConfig(t){return 
t={...jn,...U.getDataAttributes(this._element),..."object"==typeof t&&t?t:{}},a(Dn,t,this.constructor.DefaultType),t}_maybeScheduleHide(){this._config.autohide&&(this._hasMouseInteraction||this._hasKeyboardInteraction||(this._timeout=setTimeout((()=>{this.hide()}),this._config.delay)))}_onInteraction(t,e){switch(t.type){case"mouseover":case"mouseout":this._hasMouseInteraction=e;break;case"focusin":case"focusout":this._hasKeyboardInteraction=e}if(e)return void this._clearTimeout();const i=t.relatedTarget;this._element===i||this._element.contains(i)||this._maybeScheduleHide()}_setListeners(){j.on(this._element,"mouseover.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"mouseout.bs.toast",(t=>this._onInteraction(t,!1))),j.on(this._element,"focusin.bs.toast",(t=>this._onInteraction(t,!0))),j.on(this._element,"focusout.bs.toast",(t=>this._onInteraction(t,!1)))}_clearTimeout(){clearTimeout(this._timeout),this._timeout=null}static jQueryInterface(t){return this.each((function(){const e=Mn.getOrCreateInstance(this,t);if("string"==typeof t){if(void 0===e[t])throw new TypeError(`No method named "${t}"`);e[t](this)}}))}}return R(Mn),g(Mn),{Alert:W,Button:z,Carousel:st,Collapse:pt,Dropdown:hi,Modal:Hi,Offcanvas:Fi,Popover:gn,ScrollSpy:An,Tab:xn,Toast:Mn,Tooltip:un}})); +//# sourceMappingURL=bootstrap.bundle.min.js.map \ No newline at end of file diff --git a/r-book/site_libs/clipboard/clipboard.min.js b/r-book/site_libs/clipboard/clipboard.min.js new file mode 100644 index 00000000..1103f811 --- /dev/null +++ b/r-book/site_libs/clipboard/clipboard.min.js @@ -0,0 +1,7 @@ +/*! + * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */ +!function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e():"function"==typeof define&&define.amd?define([],e):"object"==typeof exports?exports.ClipboardJS=e():t.ClipboardJS=e()}(this,function(){return n={686:function(t,e,n){"use strict";n.d(e,{default:function(){return b}});var e=n(279),i=n.n(e),e=n(370),u=n.n(e),e=n(817),r=n.n(e);function c(t){try{return document.execCommand(t)}catch(t){return}}var a=function(t){t=r()(t);return c("cut"),t};function o(t,e){var n,o,t=(n=t,o="rtl"===document.documentElement.getAttribute("dir"),(t=document.createElement("textarea")).style.fontSize="12pt",t.style.border="0",t.style.padding="0",t.style.margin="0",t.style.position="absolute",t.style[o?"right":"left"]="-9999px",o=window.pageYOffset||document.documentElement.scrollTop,t.style.top="".concat(o,"px"),t.setAttribute("readonly",""),t.value=n,t);return e.container.appendChild(t),e=r()(t),c("copy"),t.remove(),e}var f=function(t){var 
e=1.anchorjs-link,.anchorjs-link:focus{opacity:1}",u.sheet.cssRules.length),u.sheet.insertRule("[data-anchorjs-icon]::after{content:attr(data-anchorjs-icon)}",u.sheet.cssRules.length),u.sheet.insertRule('@font-face{font-family:anchorjs-icons;src:url(data:n/a;base64,AAEAAAALAIAAAwAwT1MvMg8yG2cAAAE4AAAAYGNtYXDp3gC3AAABpAAAAExnYXNwAAAAEAAAA9wAAAAIZ2x5ZlQCcfwAAAH4AAABCGhlYWQHFvHyAAAAvAAAADZoaGVhBnACFwAAAPQAAAAkaG10eASAADEAAAGYAAAADGxvY2EACACEAAAB8AAAAAhtYXhwAAYAVwAAARgAAAAgbmFtZQGOH9cAAAMAAAAAunBvc3QAAwAAAAADvAAAACAAAQAAAAEAAHzE2p9fDzz1AAkEAAAAAADRecUWAAAAANQA6R8AAAAAAoACwAAAAAgAAgAAAAAAAAABAAADwP/AAAACgAAA/9MCrQABAAAAAAAAAAAAAAAAAAAAAwABAAAAAwBVAAIAAAAAAAIAAAAAAAAAAAAAAAAAAAAAAAMCQAGQAAUAAAKZAswAAACPApkCzAAAAesAMwEJAAAAAAAAAAAAAAAAAAAAARAAAAAAAAAAAAAAAAAAAAAAQAAg//0DwP/AAEADwABAAAAAAQAAAAAAAAAAAAAAIAAAAAAAAAIAAAACgAAxAAAAAwAAAAMAAAAcAAEAAwAAABwAAwABAAAAHAAEADAAAAAIAAgAAgAAACDpy//9//8AAAAg6cv//f///+EWNwADAAEAAAAAAAAAAAAAAAAACACEAAEAAAAAAAAAAAAAAAAxAAACAAQARAKAAsAAKwBUAAABIiYnJjQ3NzY2MzIWFxYUBwcGIicmNDc3NjQnJiYjIgYHBwYUFxYUBwYGIwciJicmNDc3NjIXFhQHBwYUFxYWMzI2Nzc2NCcmNDc2MhcWFAcHBgYjARQGDAUtLXoWOR8fORYtLTgKGwoKCjgaGg0gEhIgDXoaGgkJBQwHdR85Fi0tOAobCgoKOBoaDSASEiANehoaCQkKGwotLXoWOR8BMwUFLYEuehYXFxYugC44CQkKGwo4GkoaDQ0NDXoaShoKGwoFBe8XFi6ALjgJCQobCjgaShoNDQ0NehpKGgobCgoKLYEuehYXAAAADACWAAEAAAAAAAEACAAAAAEAAAAAAAIAAwAIAAEAAAAAAAMACAAAAAEAAAAAAAQACAAAAAEAAAAAAAUAAQALAAEAAAAAAAYACAAAAAMAAQQJAAEAEAAMAAMAAQQJAAIABgAcAAMAAQQJAAMAEAAMAAMAAQQJAAQAEAAMAAMAAQQJAAUAAgAiAAMAAQQJAAYAEAAMYW5jaG9yanM0MDBAAGEAbgBjAGgAbwByAGoAcwA0ADAAMABAAAAAAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAH//wAP) format("truetype")}',u.sheet.cssRules.length)),u=document.querySelectorAll("[id]"),t=[].map.call(u,function(A){return A.id}),i=0;i\]./()*\\\n\t\b\v\u00A0]/g,"-").replace(/-{2,}/g,"-").substring(0,this.options.truncate).replace(/^-+|-+$/gm,"").toLowerCase()},this.hasAnchorJSLink=function(A){var e=A.firstChild&&-1<(" "+A.firstChild.className+" ").indexOf(" anchorjs-link "),A=A.lastChild&&-1<(" "+A.lastChild.className+" ").indexOf(" anchorjs-link ");return e||A||!1}}}); +// @license-end \ No newline at end of file diff --git a/r-book/site_libs/quarto-html/popper.min.js b/r-book/site_libs/quarto-html/popper.min.js new file mode 100644 index 00000000..2269d669 --- /dev/null +++ b/r-book/site_libs/quarto-html/popper.min.js @@ -0,0 +1,6 @@ +/** + * @popperjs/core v2.11.4 - MIT License + */ + +!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t((e="undefined"!=typeof globalThis?globalThis:e||self).Popper={})}(this,(function(e){"use strict";function t(e){if(null==e)return window;if("[object Window]"!==e.toString()){var t=e.ownerDocument;return t&&t.defaultView||window}return e}function n(e){return e instanceof t(e).Element||e instanceof Element}function r(e){return e instanceof t(e).HTMLElement||e instanceof HTMLElement}function o(e){return"undefined"!=typeof ShadowRoot&&(e instanceof t(e).ShadowRoot||e instanceof ShadowRoot)}var i=Math.max,a=Math.min,s=Math.round;function f(e,t){void 0===t&&(t=!1);var n=e.getBoundingClientRect(),o=1,i=1;if(r(e)&&t){var a=e.offsetHeight,f=e.offsetWidth;f>0&&(o=s(n.width)/f||1),a>0&&(i=s(n.height)/a||1)}return{width:n.width/o,height:n.height/i,top:n.top/i,right:n.right/o,bottom:n.bottom/i,left:n.left/o,x:n.left/o,y:n.top/i}}function c(e){var n=t(e);return{scrollLeft:n.pageXOffset,scrollTop:n.pageYOffset}}function p(e){return e?(e.nodeName||"").toLowerCase():null}function 
u(e){return((n(e)?e.ownerDocument:e.document)||window.document).documentElement}function l(e){return f(u(e)).left+c(e).scrollLeft}function d(e){return t(e).getComputedStyle(e)}function h(e){var t=d(e),n=t.overflow,r=t.overflowX,o=t.overflowY;return/auto|scroll|overlay|hidden/.test(n+o+r)}function m(e,n,o){void 0===o&&(o=!1);var i,a,d=r(n),m=r(n)&&function(e){var t=e.getBoundingClientRect(),n=s(t.width)/e.offsetWidth||1,r=s(t.height)/e.offsetHeight||1;return 1!==n||1!==r}(n),v=u(n),g=f(e,m),y={scrollLeft:0,scrollTop:0},b={x:0,y:0};return(d||!d&&!o)&&(("body"!==p(n)||h(v))&&(y=(i=n)!==t(i)&&r(i)?{scrollLeft:(a=i).scrollLeft,scrollTop:a.scrollTop}:c(i)),r(n)?((b=f(n,!0)).x+=n.clientLeft,b.y+=n.clientTop):v&&(b.x=l(v))),{x:g.left+y.scrollLeft-b.x,y:g.top+y.scrollTop-b.y,width:g.width,height:g.height}}function v(e){var t=f(e),n=e.offsetWidth,r=e.offsetHeight;return Math.abs(t.width-n)<=1&&(n=t.width),Math.abs(t.height-r)<=1&&(r=t.height),{x:e.offsetLeft,y:e.offsetTop,width:n,height:r}}function g(e){return"html"===p(e)?e:e.assignedSlot||e.parentNode||(o(e)?e.host:null)||u(e)}function y(e){return["html","body","#document"].indexOf(p(e))>=0?e.ownerDocument.body:r(e)&&h(e)?e:y(g(e))}function b(e,n){var r;void 0===n&&(n=[]);var o=y(e),i=o===(null==(r=e.ownerDocument)?void 0:r.body),a=t(o),s=i?[a].concat(a.visualViewport||[],h(o)?o:[]):o,f=n.concat(s);return i?f:f.concat(b(g(s)))}function x(e){return["table","td","th"].indexOf(p(e))>=0}function w(e){return r(e)&&"fixed"!==d(e).position?e.offsetParent:null}function O(e){for(var n=t(e),i=w(e);i&&x(i)&&"static"===d(i).position;)i=w(i);return i&&("html"===p(i)||"body"===p(i)&&"static"===d(i).position)?n:i||function(e){var t=-1!==navigator.userAgent.toLowerCase().indexOf("firefox");if(-1!==navigator.userAgent.indexOf("Trident")&&r(e)&&"fixed"===d(e).position)return null;var n=g(e);for(o(n)&&(n=n.host);r(n)&&["html","body"].indexOf(p(n))<0;){var i=d(n);if("none"!==i.transform||"none"!==i.perspective||"paint"===i.contain||-1!==["transform","perspective"].indexOf(i.willChange)||t&&"filter"===i.willChange||t&&i.filter&&"none"!==i.filter)return n;n=n.parentNode}return null}(e)||n}var j="top",E="bottom",D="right",A="left",L="auto",P=[j,E,D,A],M="start",k="end",W="viewport",B="popper",H=P.reduce((function(e,t){return e.concat([t+"-"+M,t+"-"+k])}),[]),T=[].concat(P,[L]).reduce((function(e,t){return e.concat([t,t+"-"+M,t+"-"+k])}),[]),R=["beforeRead","read","afterRead","beforeMain","main","afterMain","beforeWrite","write","afterWrite"];function S(e){var t=new Map,n=new Set,r=[];function o(e){n.add(e.name),[].concat(e.requires||[],e.requiresIfExists||[]).forEach((function(e){if(!n.has(e)){var r=t.get(e);r&&o(r)}})),r.push(e)}return e.forEach((function(e){t.set(e.name,e)})),e.forEach((function(e){n.has(e.name)||o(e)})),r}function C(e){return e.split("-")[0]}function q(e,t){var n=t.getRootNode&&t.getRootNode();if(e.contains(t))return!0;if(n&&o(n)){var r=t;do{if(r&&e.isSameNode(r))return!0;r=r.parentNode||r.host}while(r)}return!1}function V(e){return Object.assign({},e,{left:e.x,top:e.y,right:e.x+e.width,bottom:e.y+e.height})}function N(e,r){return r===W?V(function(e){var n=t(e),r=u(e),o=n.visualViewport,i=r.clientWidth,a=r.clientHeight,s=0,f=0;return o&&(i=o.width,a=o.height,/^((?!chrome|android).)*safari/i.test(navigator.userAgent)||(s=o.offsetLeft,f=o.offsetTop)),{width:i,height:a,x:s+l(e),y:f}}(e)):n(r)?function(e){var t=f(e);return 
t.top=t.top+e.clientTop,t.left=t.left+e.clientLeft,t.bottom=t.top+e.clientHeight,t.right=t.left+e.clientWidth,t.width=e.clientWidth,t.height=e.clientHeight,t.x=t.left,t.y=t.top,t}(r):V(function(e){var t,n=u(e),r=c(e),o=null==(t=e.ownerDocument)?void 0:t.body,a=i(n.scrollWidth,n.clientWidth,o?o.scrollWidth:0,o?o.clientWidth:0),s=i(n.scrollHeight,n.clientHeight,o?o.scrollHeight:0,o?o.clientHeight:0),f=-r.scrollLeft+l(e),p=-r.scrollTop;return"rtl"===d(o||n).direction&&(f+=i(n.clientWidth,o?o.clientWidth:0)-a),{width:a,height:s,x:f,y:p}}(u(e)))}function I(e,t,o){var s="clippingParents"===t?function(e){var t=b(g(e)),o=["absolute","fixed"].indexOf(d(e).position)>=0&&r(e)?O(e):e;return n(o)?t.filter((function(e){return n(e)&&q(e,o)&&"body"!==p(e)})):[]}(e):[].concat(t),f=[].concat(s,[o]),c=f[0],u=f.reduce((function(t,n){var r=N(e,n);return t.top=i(r.top,t.top),t.right=a(r.right,t.right),t.bottom=a(r.bottom,t.bottom),t.left=i(r.left,t.left),t}),N(e,c));return u.width=u.right-u.left,u.height=u.bottom-u.top,u.x=u.left,u.y=u.top,u}function _(e){return e.split("-")[1]}function F(e){return["top","bottom"].indexOf(e)>=0?"x":"y"}function U(e){var t,n=e.reference,r=e.element,o=e.placement,i=o?C(o):null,a=o?_(o):null,s=n.x+n.width/2-r.width/2,f=n.y+n.height/2-r.height/2;switch(i){case j:t={x:s,y:n.y-r.height};break;case E:t={x:s,y:n.y+n.height};break;case D:t={x:n.x+n.width,y:f};break;case A:t={x:n.x-r.width,y:f};break;default:t={x:n.x,y:n.y}}var c=i?F(i):null;if(null!=c){var p="y"===c?"height":"width";switch(a){case M:t[c]=t[c]-(n[p]/2-r[p]/2);break;case k:t[c]=t[c]+(n[p]/2-r[p]/2)}}return t}function z(e){return Object.assign({},{top:0,right:0,bottom:0,left:0},e)}function X(e,t){return t.reduce((function(t,n){return t[n]=e,t}),{})}function Y(e,t){void 0===t&&(t={});var r=t,o=r.placement,i=void 0===o?e.placement:o,a=r.boundary,s=void 0===a?"clippingParents":a,c=r.rootBoundary,p=void 0===c?W:c,l=r.elementContext,d=void 0===l?B:l,h=r.altBoundary,m=void 0!==h&&h,v=r.padding,g=void 0===v?0:v,y=z("number"!=typeof g?g:X(g,P)),b=d===B?"reference":B,x=e.rects.popper,w=e.elements[m?b:d],O=I(n(w)?w:w.contextElement||u(e.elements.popper),s,p),A=f(e.elements.reference),L=U({reference:A,element:x,strategy:"absolute",placement:i}),M=V(Object.assign({},x,L)),k=d===B?M:A,H={top:O.top-k.top+y.top,bottom:k.bottom-O.bottom+y.bottom,left:O.left-k.left+y.left,right:k.right-O.right+y.right},T=e.modifiersData.offset;if(d===B&&T){var R=T[i];Object.keys(H).forEach((function(e){var t=[D,E].indexOf(e)>=0?1:-1,n=[j,E].indexOf(e)>=0?"y":"x";H[e]+=R[n]*t}))}return H}var G={placement:"bottom",modifiers:[],strategy:"absolute"};function J(){for(var e=arguments.length,t=new Array(e),n=0;n=0?-1:1,i="function"==typeof n?n(Object.assign({},t,{placement:e})):n,a=i[0],s=i[1];return a=a||0,s=(s||0)*o,[A,D].indexOf(r)>=0?{x:s,y:a}:{x:a,y:s}}(n,t.rects,i),e}),{}),s=a[t.placement],f=s.x,c=s.y;null!=t.modifiersData.popperOffsets&&(t.modifiersData.popperOffsets.x+=f,t.modifiersData.popperOffsets.y+=c),t.modifiersData[r]=a}},ie={left:"right",right:"left",bottom:"top",top:"bottom"};function ae(e){return e.replace(/left|right|bottom|top/g,(function(e){return ie[e]}))}var se={start:"end",end:"start"};function fe(e){return e.replace(/start|end/g,(function(e){return se[e]}))}function ce(e,t){void 0===t&&(t={});var n=t,r=n.placement,o=n.boundary,i=n.rootBoundary,a=n.padding,s=n.flipVariations,f=n.allowedAutoPlacements,c=void 0===f?T:f,p=_(r),u=p?s?H:H.filter((function(e){return _(e)===p})):P,l=u.filter((function(e){return 
c.indexOf(e)>=0}));0===l.length&&(l=u);var d=l.reduce((function(t,n){return t[n]=Y(e,{placement:n,boundary:o,rootBoundary:i,padding:a})[C(n)],t}),{});return Object.keys(d).sort((function(e,t){return d[e]-d[t]}))}var pe={name:"flip",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name;if(!t.modifiersData[r]._skip){for(var o=n.mainAxis,i=void 0===o||o,a=n.altAxis,s=void 0===a||a,f=n.fallbackPlacements,c=n.padding,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.flipVariations,h=void 0===d||d,m=n.allowedAutoPlacements,v=t.options.placement,g=C(v),y=f||(g===v||!h?[ae(v)]:function(e){if(C(e)===L)return[];var t=ae(e);return[fe(e),t,fe(t)]}(v)),b=[v].concat(y).reduce((function(e,n){return e.concat(C(n)===L?ce(t,{placement:n,boundary:p,rootBoundary:u,padding:c,flipVariations:h,allowedAutoPlacements:m}):n)}),[]),x=t.rects.reference,w=t.rects.popper,O=new Map,P=!0,k=b[0],W=0;W=0,S=R?"width":"height",q=Y(t,{placement:B,boundary:p,rootBoundary:u,altBoundary:l,padding:c}),V=R?T?D:A:T?E:j;x[S]>w[S]&&(V=ae(V));var N=ae(V),I=[];if(i&&I.push(q[H]<=0),s&&I.push(q[V]<=0,q[N]<=0),I.every((function(e){return e}))){k=B,P=!1;break}O.set(B,I)}if(P)for(var F=function(e){var t=b.find((function(t){var n=O.get(t);if(n)return n.slice(0,e).every((function(e){return e}))}));if(t)return k=t,"break"},U=h?3:1;U>0;U--){if("break"===F(U))break}t.placement!==k&&(t.modifiersData[r]._skip=!0,t.placement=k,t.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function ue(e,t,n){return i(e,a(t,n))}var le={name:"preventOverflow",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name,o=n.mainAxis,s=void 0===o||o,f=n.altAxis,c=void 0!==f&&f,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.padding,h=n.tether,m=void 0===h||h,g=n.tetherOffset,y=void 0===g?0:g,b=Y(t,{boundary:p,rootBoundary:u,padding:d,altBoundary:l}),x=C(t.placement),w=_(t.placement),L=!w,P=F(x),k="x"===P?"y":"x",W=t.modifiersData.popperOffsets,B=t.rects.reference,H=t.rects.popper,T="function"==typeof y?y(Object.assign({},t.rects,{placement:t.placement})):y,R="number"==typeof T?{mainAxis:T,altAxis:T}:Object.assign({mainAxis:0,altAxis:0},T),S=t.modifiersData.offset?t.modifiersData.offset[t.placement]:null,q={x:0,y:0};if(W){if(s){var V,N="y"===P?j:A,I="y"===P?E:D,U="y"===P?"height":"width",z=W[P],X=z+b[N],G=z-b[I],J=m?-H[U]/2:0,K=w===M?B[U]:H[U],Q=w===M?-H[U]:-B[U],Z=t.elements.arrow,$=m&&Z?v(Z):{width:0,height:0},ee=t.modifiersData["arrow#persistent"]?t.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},te=ee[N],ne=ee[I],re=ue(0,B[U],$[U]),oe=L?B[U]/2-J-re-te-R.mainAxis:K-re-te-R.mainAxis,ie=L?-B[U]/2+J+re+ne+R.mainAxis:Q+re+ne+R.mainAxis,ae=t.elements.arrow&&O(t.elements.arrow),se=ae?"y"===P?ae.clientTop||0:ae.clientLeft||0:0,fe=null!=(V=null==S?void 0:S[P])?V:0,ce=z+ie-fe,pe=ue(m?a(X,z+oe-fe-se):X,z,m?i(G,ce):G);W[P]=pe,q[P]=pe-z}if(c){var le,de="x"===P?j:A,he="x"===P?E:D,me=W[k],ve="y"===k?"height":"width",ge=me+b[de],ye=me-b[he],be=-1!==[j,A].indexOf(x),xe=null!=(le=null==S?void 0:S[k])?le:0,we=be?ge:me-B[ve]-H[ve]-xe+R.altAxis,Oe=be?me+B[ve]+H[ve]-xe-R.altAxis:ye,je=m&&be?function(e,t,n){var r=ue(e,t,n);return r>n?n:r}(we,me,Oe):ue(m?we:ge,me,m?Oe:ye);W[k]=je,q[k]=je-me}t.modifiersData[r]=q}},requiresIfExists:["offset"]};var de={name:"arrow",enabled:!0,phase:"main",fn:function(e){var t,n=e.state,r=e.name,o=e.options,i=n.elements.arrow,a=n.modifiersData.popperOffsets,s=C(n.placement),f=F(s),c=[A,D].indexOf(s)>=0?"height":"width";if(i&&a){var p=function(e,t){return 
z("number"!=typeof(e="function"==typeof e?e(Object.assign({},t.rects,{placement:t.placement})):e)?e:X(e,P))}(o.padding,n),u=v(i),l="y"===f?j:A,d="y"===f?E:D,h=n.rects.reference[c]+n.rects.reference[f]-a[f]-n.rects.popper[c],m=a[f]-n.rects.reference[f],g=O(i),y=g?"y"===f?g.clientHeight||0:g.clientWidth||0:0,b=h/2-m/2,x=p[l],w=y-u[c]-p[d],L=y/2-u[c]/2+b,M=ue(x,L,w),k=f;n.modifiersData[r]=((t={})[k]=M,t.centerOffset=M-L,t)}},effect:function(e){var t=e.state,n=e.options.element,r=void 0===n?"[data-popper-arrow]":n;null!=r&&("string"!=typeof r||(r=t.elements.popper.querySelector(r)))&&q(t.elements.popper,r)&&(t.elements.arrow=r)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function he(e,t,n){return void 0===n&&(n={x:0,y:0}),{top:e.top-t.height-n.y,right:e.right-t.width+n.x,bottom:e.bottom-t.height+n.y,left:e.left-t.width-n.x}}function me(e){return[j,D,E,A].some((function(t){return e[t]>=0}))}var ve={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(e){var t=e.state,n=e.name,r=t.rects.reference,o=t.rects.popper,i=t.modifiersData.preventOverflow,a=Y(t,{elementContext:"reference"}),s=Y(t,{altBoundary:!0}),f=he(a,r),c=he(s,o,i),p=me(f),u=me(c);t.modifiersData[n]={referenceClippingOffsets:f,popperEscapeOffsets:c,isReferenceHidden:p,hasPopperEscaped:u},t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-reference-hidden":p,"data-popper-escaped":u})}},ge=K({defaultModifiers:[Z,$,ne,re]}),ye=[Z,$,ne,re,oe,pe,le,de,ve],be=K({defaultModifiers:ye});e.applyStyles=re,e.arrow=de,e.computeStyles=ne,e.createPopper=be,e.createPopperLite=ge,e.defaultModifiers=ye,e.detectOverflow=Y,e.eventListeners=Z,e.flip=pe,e.hide=ve,e.offset=oe,e.popperGenerator=K,e.popperOffsets=$,e.preventOverflow=le,Object.defineProperty(e,"__esModule",{value:!0})})); + diff --git a/r-book/site_libs/quarto-html/quarto-syntax-highlighting.css b/r-book/site_libs/quarto-html/quarto-syntax-highlighting.css new file mode 100644 index 00000000..d9fd98f0 --- /dev/null +++ b/r-book/site_libs/quarto-html/quarto-syntax-highlighting.css @@ -0,0 +1,203 @@ +/* quarto syntax highlight colors */ +:root { + --quarto-hl-ot-color: #003B4F; + --quarto-hl-at-color: #657422; + --quarto-hl-ss-color: #20794D; + --quarto-hl-an-color: #5E5E5E; + --quarto-hl-fu-color: #4758AB; + --quarto-hl-st-color: #20794D; + --quarto-hl-cf-color: #003B4F; + --quarto-hl-op-color: #5E5E5E; + --quarto-hl-er-color: #AD0000; + --quarto-hl-bn-color: #AD0000; + --quarto-hl-al-color: #AD0000; + --quarto-hl-va-color: #111111; + --quarto-hl-bu-color: inherit; + --quarto-hl-ex-color: inherit; + --quarto-hl-pp-color: #AD0000; + --quarto-hl-in-color: #5E5E5E; + --quarto-hl-vs-color: #20794D; + --quarto-hl-wa-color: #5E5E5E; + --quarto-hl-do-color: #5E5E5E; + --quarto-hl-im-color: #00769E; + --quarto-hl-ch-color: #20794D; + --quarto-hl-dt-color: #AD0000; + --quarto-hl-fl-color: #AD0000; + --quarto-hl-co-color: #5E5E5E; + --quarto-hl-cv-color: #5E5E5E; + --quarto-hl-cn-color: #8f5902; + --quarto-hl-sc-color: #5E5E5E; + --quarto-hl-dv-color: #AD0000; + --quarto-hl-kw-color: #003B4F; +} + +/* other quarto variables */ +:root { + --quarto-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; +} + +pre > code.sourceCode > span { + color: #003B4F; +} + +code span { + color: #003B4F; +} + +code.sourceCode > span { + color: #003B4F; +} + +div.sourceCode, +div.sourceCode pre.sourceCode { + color: #003B4F; +} + +code span.ot { + color: #003B4F; + font-style: inherit; +} + +code 
span.at { + color: #657422; + font-style: inherit; +} + +code span.ss { + color: #20794D; + font-style: inherit; +} + +code span.an { + color: #5E5E5E; + font-style: inherit; +} + +code span.fu { + color: #4758AB; + font-style: inherit; +} + +code span.st { + color: #20794D; + font-style: inherit; +} + +code span.cf { + color: #003B4F; + font-style: inherit; +} + +code span.op { + color: #5E5E5E; + font-style: inherit; +} + +code span.er { + color: #AD0000; + font-style: inherit; +} + +code span.bn { + color: #AD0000; + font-style: inherit; +} + +code span.al { + color: #AD0000; + font-style: inherit; +} + +code span.va { + color: #111111; + font-style: inherit; +} + +code span.bu { + font-style: inherit; +} + +code span.ex { + font-style: inherit; +} + +code span.pp { + color: #AD0000; + font-style: inherit; +} + +code span.in { + color: #5E5E5E; + font-style: inherit; +} + +code span.vs { + color: #20794D; + font-style: inherit; +} + +code span.wa { + color: #5E5E5E; + font-style: italic; +} + +code span.do { + color: #5E5E5E; + font-style: italic; +} + +code span.im { + color: #00769E; + font-style: inherit; +} + +code span.ch { + color: #20794D; + font-style: inherit; +} + +code span.dt { + color: #AD0000; + font-style: inherit; +} + +code span.fl { + color: #AD0000; + font-style: inherit; +} + +code span.co { + color: #5E5E5E; + font-style: inherit; +} + +code span.cv { + color: #5E5E5E; + font-style: italic; +} + +code span.cn { + color: #8f5902; + font-style: inherit; +} + +code span.sc { + color: #5E5E5E; + font-style: inherit; +} + +code span.dv { + color: #AD0000; + font-style: inherit; +} + +code span.kw { + color: #003B4F; + font-style: inherit; +} + +.prevent-inlining { + content: " { + // Find any conflicting margin elements and add margins to the + // top to prevent overlap + const marginChildren = window.document.querySelectorAll( + ".column-margin.column-container > * " + ); + + let lastBottom = 0; + for (const marginChild of marginChildren) { + if (marginChild.offsetParent !== null) { + // clear the top margin so we recompute it + marginChild.style.marginTop = null; + const top = marginChild.getBoundingClientRect().top + window.scrollY; + console.log({ + childtop: marginChild.getBoundingClientRect().top, + scroll: window.scrollY, + top, + lastBottom, + }); + if (top < lastBottom) { + const margin = lastBottom - top; + marginChild.style.marginTop = `${margin}px`; + } + const styles = window.getComputedStyle(marginChild); + const marginTop = parseFloat(styles["marginTop"]); + + console.log({ + top, + height: marginChild.getBoundingClientRect().height, + marginTop, + total: top + marginChild.getBoundingClientRect().height + marginTop, + }); + lastBottom = top + marginChild.getBoundingClientRect().height + marginTop; + } + } +}; + +window.document.addEventListener("DOMContentLoaded", function (_event) { + // Recompute the position of margin elements anytime the body size changes + if (window.ResizeObserver) { + const resizeObserver = new window.ResizeObserver( + throttle(layoutMarginEls, 50) + ); + resizeObserver.observe(window.document.body); + } + + const tocEl = window.document.querySelector('nav.toc-active[role="doc-toc"]'); + const sidebarEl = window.document.getElementById("quarto-sidebar"); + const leftTocEl = window.document.getElementById("quarto-sidebar-toc-left"); + const marginSidebarEl = window.document.getElementById( + "quarto-margin-sidebar" + ); + // function to determine whether the element has a previous sibling that is active + const 
prevSiblingIsActiveLink = (el) => { + const sibling = el.previousElementSibling; + if (sibling && sibling.tagName === "A") { + return sibling.classList.contains("active"); + } else { + return false; + } + }; + + // fire slideEnter for bootstrap tab activations (for htmlwidget resize behavior) + function fireSlideEnter(e) { + const event = window.document.createEvent("Event"); + event.initEvent("slideenter", true, true); + window.document.dispatchEvent(event); + } + const tabs = window.document.querySelectorAll('a[data-bs-toggle="tab"]'); + tabs.forEach((tab) => { + tab.addEventListener("shown.bs.tab", fireSlideEnter); + }); + + // fire slideEnter for tabby tab activations (for htmlwidget resize behavior) + document.addEventListener("tabby", fireSlideEnter, false); + + // Track scrolling and mark TOC links as active + // get table of contents and sidebar (bail if we don't have at least one) + const tocLinks = tocEl + ? [...tocEl.querySelectorAll("a[data-scroll-target]")] + : []; + const makeActive = (link) => tocLinks[link].classList.add("active"); + const removeActive = (link) => tocLinks[link].classList.remove("active"); + const removeAllActive = () => + [...Array(tocLinks.length).keys()].forEach((link) => removeActive(link)); + + // activate the anchor for a section associated with this TOC entry + tocLinks.forEach((link) => { + link.addEventListener("click", () => { + if (link.href.indexOf("#") !== -1) { + const anchor = link.href.split("#")[1]; + const heading = window.document.querySelector( + `[data-anchor-id=${anchor}]` + ); + if (heading) { + // Add the class + heading.classList.add("reveal-anchorjs-link"); + + // function to show the anchor + const handleMouseout = () => { + heading.classList.remove("reveal-anchorjs-link"); + heading.removeEventListener("mouseout", handleMouseout); + }; + + // add a function to clear the anchor when the user mouses out of it + heading.addEventListener("mouseout", handleMouseout); + } + } + }); + }); + + const sections = tocLinks.map((link) => { + const target = link.getAttribute("data-scroll-target"); + if (target.startsWith("#")) { + return window.document.getElementById(decodeURI(`${target.slice(1)}`)); + } else { + return window.document.querySelector(decodeURI(`${target}`)); + } + }); + + const sectionMargin = 200; + let currentActive = 0; + // track whether we've initialized state the first time + let init = false; + + const updateActiveLink = () => { + // The index from bottom to top (e.g. 
reversed list) + let sectionIndex = -1; + if ( + window.innerHeight + window.pageYOffset >= + window.document.body.offsetHeight + ) { + sectionIndex = 0; + } else { + sectionIndex = [...sections].reverse().findIndex((section) => { + if (section) { + return window.pageYOffset >= section.offsetTop - sectionMargin; + } else { + return false; + } + }); + } + if (sectionIndex > -1) { + const current = sections.length - sectionIndex - 1; + if (current !== currentActive) { + removeAllActive(); + currentActive = current; + makeActive(current); + if (init) { + window.dispatchEvent(sectionChanged); + } + init = true; + } + } + }; + + const inHiddenRegion = (top, bottom, hiddenRegions) => { + for (const region of hiddenRegions) { + if (top <= region.bottom && bottom >= region.top) { + return true; + } + } + return false; + }; + + const categorySelector = "header.quarto-title-block .quarto-category"; + const activateCategories = (href) => { + // Find any categories + // Surround them with a link pointing back to: + // #category=Authoring + try { + const categoryEls = window.document.querySelectorAll(categorySelector); + for (const categoryEl of categoryEls) { + const categoryText = categoryEl.textContent; + if (categoryText) { + const link = `${href}#category=${encodeURIComponent(categoryText)}`; + const linkEl = window.document.createElement("a"); + linkEl.setAttribute("href", link); + for (const child of categoryEl.childNodes) { + linkEl.append(child); + } + categoryEl.appendChild(linkEl); + } + } + } catch { + // Ignore errors + } + }; + function hasTitleCategories() { + return window.document.querySelector(categorySelector) !== null; + } + + function offsetRelativeUrl(url) { + const offset = getMeta("quarto:offset"); + return offset ? offset + url : url; + } + + function offsetAbsoluteUrl(url) { + const offset = getMeta("quarto:offset"); + const baseUrl = new URL(offset, window.location); + + const projRelativeUrl = url.replace(baseUrl, ""); + if (projRelativeUrl.startsWith("/")) { + return projRelativeUrl; + } else { + return "/" + projRelativeUrl; + } + } + + // read a meta tag value + function getMeta(metaName) { + const metas = window.document.getElementsByTagName("meta"); + for (let i = 0; i < metas.length; i++) { + if (metas[i].getAttribute("name") === metaName) { + return metas[i].getAttribute("content"); + } + } + return ""; + } + + async function findAndActivateCategories() { + const currentPagePath = offsetAbsoluteUrl(window.location.href); + const response = await fetch(offsetRelativeUrl("listings.json")); + if (response.status == 200) { + return response.json().then(function (listingPaths) { + const listingHrefs = []; + for (const listingPath of listingPaths) { + const pathWithoutLeadingSlash = listingPath.listing.substring(1); + for (const item of listingPath.items) { + if ( + item === currentPagePath || + item === currentPagePath + "index.html" + ) { + // Resolve this path against the offset to be sure + // we already are using the correct path to the listing + // (this adjusts the listing urls to be rooted against + // whatever root the page is actually running against) + const relative = offsetRelativeUrl(pathWithoutLeadingSlash); + const baseUrl = window.location; + const resolvedPath = new URL(relative, baseUrl); + listingHrefs.push(resolvedPath.pathname); + break; + } + } + } + + // Look up the tree for a nearby linting and use that if we find one + const nearestListing = findNearestParentListing( + offsetAbsoluteUrl(window.location.pathname), + listingHrefs + ); + if 
(nearestListing) { + activateCategories(nearestListing); + } else { + // See if the referrer is a listing page for this item + const referredRelativePath = offsetAbsoluteUrl(document.referrer); + const referrerListing = listingHrefs.find((listingHref) => { + const isListingReferrer = + listingHref === referredRelativePath || + listingHref === referredRelativePath + "index.html"; + return isListingReferrer; + }); + + if (referrerListing) { + // Try to use the referrer if possible + activateCategories(referrerListing); + } else if (listingHrefs.length > 0) { + // Otherwise, just fall back to the first listing + activateCategories(listingHrefs[0]); + } + } + }); + } + } + if (hasTitleCategories()) { + findAndActivateCategories(); + } + + const findNearestParentListing = (href, listingHrefs) => { + if (!href || !listingHrefs) { + return undefined; + } + // Look up the tree for a nearby linting and use that if we find one + const relativeParts = href.substring(1).split("/"); + while (relativeParts.length > 0) { + const path = relativeParts.join("/"); + for (const listingHref of listingHrefs) { + if (listingHref.startsWith(path)) { + return listingHref; + } + } + relativeParts.pop(); + } + + return undefined; + }; + + const manageSidebarVisiblity = (el, placeholderDescriptor) => { + let isVisible = true; + let elRect; + + return (hiddenRegions) => { + if (el === null) { + return; + } + + // Find the last element of the TOC + const lastChildEl = el.lastElementChild; + + if (lastChildEl) { + // Converts the sidebar to a menu + const convertToMenu = () => { + for (const child of el.children) { + child.style.opacity = 0; + child.style.overflow = "hidden"; + } + + nexttick(() => { + const toggleContainer = window.document.createElement("div"); + toggleContainer.style.width = "100%"; + toggleContainer.classList.add("zindex-over-content"); + toggleContainer.classList.add("quarto-sidebar-toggle"); + toggleContainer.classList.add("headroom-target"); // Marks this to be managed by headeroom + toggleContainer.id = placeholderDescriptor.id; + toggleContainer.style.position = "fixed"; + + const toggleIcon = window.document.createElement("i"); + toggleIcon.classList.add("quarto-sidebar-toggle-icon"); + toggleIcon.classList.add("bi"); + toggleIcon.classList.add("bi-caret-down-fill"); + + const toggleTitle = window.document.createElement("div"); + const titleEl = window.document.body.querySelector( + placeholderDescriptor.titleSelector + ); + if (titleEl) { + toggleTitle.append( + titleEl.textContent || titleEl.innerText, + toggleIcon + ); + } + toggleTitle.classList.add("zindex-over-content"); + toggleTitle.classList.add("quarto-sidebar-toggle-title"); + toggleContainer.append(toggleTitle); + + const toggleContents = window.document.createElement("div"); + toggleContents.classList = el.classList; + toggleContents.classList.add("zindex-over-content"); + toggleContents.classList.add("quarto-sidebar-toggle-contents"); + for (const child of el.children) { + if (child.id === "toc-title") { + continue; + } + + const clone = child.cloneNode(true); + clone.style.opacity = 1; + clone.style.display = null; + toggleContents.append(clone); + } + toggleContents.style.height = "0px"; + const positionToggle = () => { + // position the element (top left of parent, same width as parent) + if (!elRect) { + elRect = el.getBoundingClientRect(); + } + toggleContainer.style.left = `${elRect.left}px`; + toggleContainer.style.top = `${elRect.top}px`; + toggleContainer.style.width = `${elRect.width}px`; + }; + positionToggle(); + + 
toggleContainer.append(toggleContents); + el.parentElement.prepend(toggleContainer); + + // Process clicks + let tocShowing = false; + // Allow the caller to control whether this is dismissed + // when it is clicked (e.g. sidebar navigation supports + // opening and closing the nav tree, so don't dismiss on click) + const clickEl = placeholderDescriptor.dismissOnClick + ? toggleContainer + : toggleTitle; + + const closeToggle = () => { + if (tocShowing) { + toggleContainer.classList.remove("expanded"); + toggleContents.style.height = "0px"; + tocShowing = false; + } + }; + + // Get rid of any expanded toggle if the user scrolls + window.document.addEventListener( + "scroll", + throttle(() => { + closeToggle(); + }, 50) + ); + + // Handle positioning of the toggle + window.addEventListener( + "resize", + throttle(() => { + elRect = undefined; + positionToggle(); + }, 50) + ); + + window.addEventListener("quarto-hrChanged", () => { + elRect = undefined; + }); + + // Process the click + clickEl.onclick = () => { + if (!tocShowing) { + toggleContainer.classList.add("expanded"); + toggleContents.style.height = null; + tocShowing = true; + } else { + closeToggle(); + } + }; + }); + }; + + // Converts a sidebar from a menu back to a sidebar + const convertToSidebar = () => { + for (const child of el.children) { + child.style.opacity = 1; + child.style.overflow = null; + } + + const placeholderEl = window.document.getElementById( + placeholderDescriptor.id + ); + if (placeholderEl) { + placeholderEl.remove(); + } + + el.classList.remove("rollup"); + }; + + if (isReaderMode()) { + convertToMenu(); + isVisible = false; + } else { + // Find the top and bottom o the element that is being managed + const elTop = el.offsetTop; + const elBottom = + elTop + lastChildEl.offsetTop + lastChildEl.offsetHeight; + + if (!isVisible) { + // If the element is current not visible reveal if there are + // no conflicts with overlay regions + if (!inHiddenRegion(elTop, elBottom, hiddenRegions)) { + convertToSidebar(); + isVisible = true; + } + } else { + // If the element is visible, hide it if it conflicts with overlay regions + // and insert a placeholder toggle (or if we're in reader mode) + if (inHiddenRegion(elTop, elBottom, hiddenRegions)) { + convertToMenu(); + isVisible = false; + } + } + } + } + }; + }; + + const tabEls = document.querySelectorAll('a[data-bs-toggle="tab"]'); + for (const tabEl of tabEls) { + const id = tabEl.getAttribute("data-bs-target"); + if (id) { + const columnEl = document.querySelector( + `${id} .column-margin, .tabset-margin-content` + ); + if (columnEl) + tabEl.addEventListener("shown.bs.tab", function (event) { + const el = event.srcElement; + if (el) { + const visibleCls = `${el.id}-margin-content`; + // walk up until we find a parent tabset + let panelTabsetEl = el.parentElement; + while (panelTabsetEl) { + if (panelTabsetEl.classList.contains("panel-tabset")) { + break; + } + panelTabsetEl = panelTabsetEl.parentElement; + } + + if (panelTabsetEl) { + const prevSib = panelTabsetEl.previousElementSibling; + if ( + prevSib && + prevSib.classList.contains("tabset-margin-container") + ) { + const childNodes = prevSib.querySelectorAll( + ".tabset-margin-content" + ); + for (const childEl of childNodes) { + if (childEl.classList.contains(visibleCls)) { + childEl.classList.remove("collapse"); + } else { + childEl.classList.add("collapse"); + } + } + } + } + } + + layoutMarginEls(); + }); + } + } + + // Manage the visibility of the toc and the sidebar + const marginScrollVisibility = 
manageSidebarVisiblity(marginSidebarEl, { + id: "quarto-toc-toggle", + titleSelector: "#toc-title", + dismissOnClick: true, + }); + const sidebarScrollVisiblity = manageSidebarVisiblity(sidebarEl, { + id: "quarto-sidebarnav-toggle", + titleSelector: ".title", + dismissOnClick: false, + }); + let tocLeftScrollVisibility; + if (leftTocEl) { + tocLeftScrollVisibility = manageSidebarVisiblity(leftTocEl, { + id: "quarto-lefttoc-toggle", + titleSelector: "#toc-title", + dismissOnClick: true, + }); + } + + // Find the first element that uses formatting in special columns + const conflictingEls = window.document.body.querySelectorAll( + '[class^="column-"], [class*=" column-"], aside, [class*="margin-caption"], [class*=" margin-caption"], [class*="margin-ref"], [class*=" margin-ref"]' + ); + + // Filter all the possibly conflicting elements into ones + // the do conflict on the left or ride side + const arrConflictingEls = Array.from(conflictingEls); + const leftSideConflictEls = arrConflictingEls.filter((el) => { + if (el.tagName === "ASIDE") { + return false; + } + return Array.from(el.classList).find((className) => { + return ( + className !== "column-body" && + className.startsWith("column-") && + !className.endsWith("right") && + !className.endsWith("container") && + className !== "column-margin" + ); + }); + }); + const rightSideConflictEls = arrConflictingEls.filter((el) => { + if (el.tagName === "ASIDE") { + return true; + } + + const hasMarginCaption = Array.from(el.classList).find((className) => { + return className == "margin-caption"; + }); + if (hasMarginCaption) { + return true; + } + + return Array.from(el.classList).find((className) => { + return ( + className !== "column-body" && + !className.endsWith("container") && + className.startsWith("column-") && + !className.endsWith("left") + ); + }); + }); + + const kOverlapPaddingSize = 10; + function toRegions(els) { + return els.map((el) => { + const boundRect = el.getBoundingClientRect(); + const top = + boundRect.top + + document.documentElement.scrollTop - + kOverlapPaddingSize; + return { + top, + bottom: top + el.scrollHeight + 2 * kOverlapPaddingSize, + }; + }); + } + + let hasObserved = false; + const visibleItemObserver = (els) => { + let visibleElements = [...els]; + const intersectionObserver = new IntersectionObserver( + (entries, _observer) => { + entries.forEach((entry) => { + if (entry.isIntersecting) { + if (visibleElements.indexOf(entry.target) === -1) { + visibleElements.push(entry.target); + } + } else { + visibleElements = visibleElements.filter((visibleEntry) => { + return visibleEntry !== entry; + }); + } + }); + + if (!hasObserved) { + hideOverlappedSidebars(); + } + hasObserved = true; + }, + {} + ); + els.forEach((el) => { + intersectionObserver.observe(el); + }); + + return { + getVisibleEntries: () => { + return visibleElements; + }, + }; + }; + + const rightElementObserver = visibleItemObserver(rightSideConflictEls); + const leftElementObserver = visibleItemObserver(leftSideConflictEls); + + const hideOverlappedSidebars = () => { + marginScrollVisibility(toRegions(rightElementObserver.getVisibleEntries())); + sidebarScrollVisiblity(toRegions(leftElementObserver.getVisibleEntries())); + if (tocLeftScrollVisibility) { + tocLeftScrollVisibility( + toRegions(leftElementObserver.getVisibleEntries()) + ); + } + }; + + window.quartoToggleReader = () => { + // Applies a slow class (or removes it) + // to update the transition speed + const slowTransition = (slow) => { + const manageTransition = (id, slow) => { + 
const el = document.getElementById(id); + if (el) { + if (slow) { + el.classList.add("slow"); + } else { + el.classList.remove("slow"); + } + } + }; + + manageTransition("TOC", slow); + manageTransition("quarto-sidebar", slow); + }; + const readerMode = !isReaderMode(); + setReaderModeValue(readerMode); + + // If we're entering reader mode, slow the transition + if (readerMode) { + slowTransition(readerMode); + } + highlightReaderToggle(readerMode); + hideOverlappedSidebars(); + + // If we're exiting reader mode, restore the non-slow transition + if (!readerMode) { + slowTransition(!readerMode); + } + }; + + const highlightReaderToggle = (readerMode) => { + const els = document.querySelectorAll(".quarto-reader-toggle"); + if (els) { + els.forEach((el) => { + if (readerMode) { + el.classList.add("reader"); + } else { + el.classList.remove("reader"); + } + }); + } + }; + + const setReaderModeValue = (val) => { + if (window.location.protocol !== "file:") { + window.localStorage.setItem("quarto-reader-mode", val); + } else { + localReaderMode = val; + } + }; + + const isReaderMode = () => { + if (window.location.protocol !== "file:") { + return window.localStorage.getItem("quarto-reader-mode") === "true"; + } else { + return localReaderMode; + } + }; + let localReaderMode = null; + + const tocOpenDepthStr = tocEl?.getAttribute("data-toc-expanded"); + const tocOpenDepth = tocOpenDepthStr ? Number(tocOpenDepthStr) : 1; + + // Walk the TOC and collapse/expand nodes + // Nodes are expanded if: + // - they are top level + // - they have children that are 'active' links + // - they are directly below an link that is 'active' + const walk = (el, depth) => { + // Tick depth when we enter a UL + if (el.tagName === "UL") { + depth = depth + 1; + } + + // It this is active link + let isActiveNode = false; + if (el.tagName === "A" && el.classList.contains("active")) { + isActiveNode = true; + } + + // See if there is an active child to this element + let hasActiveChild = false; + for (child of el.children) { + hasActiveChild = walk(child, depth) || hasActiveChild; + } + + // Process the collapse state if this is an UL + if (el.tagName === "UL") { + if (tocOpenDepth === -1 && depth > 1) { + el.classList.add("collapse"); + } else if ( + depth <= tocOpenDepth || + hasActiveChild || + prevSiblingIsActiveLink(el) + ) { + el.classList.remove("collapse"); + } else { + el.classList.add("collapse"); + } + + // untick depth when we leave a UL + depth = depth - 1; + } + return hasActiveChild || isActiveNode; + }; + + // walk the TOC and expand / collapse any items that should be shown + + if (tocEl) { + walk(tocEl, 0); + updateActiveLink(); + } + + // Throttle the scroll event and walk peridiocally + window.document.addEventListener( + "scroll", + throttle(() => { + if (tocEl) { + updateActiveLink(); + walk(tocEl, 0); + } + if (!isReaderMode()) { + hideOverlappedSidebars(); + } + }, 5) + ); + window.addEventListener( + "resize", + throttle(() => { + if (!isReaderMode()) { + hideOverlappedSidebars(); + } + }, 10) + ); + hideOverlappedSidebars(); + highlightReaderToggle(isReaderMode()); +}); + +// grouped tabsets +window.addEventListener("pageshow", (_event) => { + function getTabSettings() { + const data = localStorage.getItem("quarto-persistent-tabsets-data"); + if (!data) { + localStorage.setItem("quarto-persistent-tabsets-data", "{}"); + return {}; + } + if (data) { + return JSON.parse(data); + } + } + + function setTabSettings(data) { + localStorage.setItem( + "quarto-persistent-tabsets-data", + 
JSON.stringify(data) + ); + } + + function setTabState(groupName, groupValue) { + const data = getTabSettings(); + data[groupName] = groupValue; + setTabSettings(data); + } + + function toggleTab(tab, active) { + const tabPanelId = tab.getAttribute("aria-controls"); + const tabPanel = document.getElementById(tabPanelId); + if (active) { + tab.classList.add("active"); + tabPanel.classList.add("active"); + } else { + tab.classList.remove("active"); + tabPanel.classList.remove("active"); + } + } + + function toggleAll(selectedGroup, selectorsToSync) { + for (const [thisGroup, tabs] of Object.entries(selectorsToSync)) { + const active = selectedGroup === thisGroup; + for (const tab of tabs) { + toggleTab(tab, active); + } + } + } + + function findSelectorsToSyncByLanguage() { + const result = {}; + const tabs = Array.from( + document.querySelectorAll(`div[data-group] a[id^='tabset-']`) + ); + for (const item of tabs) { + const div = item.parentElement.parentElement.parentElement; + const group = div.getAttribute("data-group"); + if (!result[group]) { + result[group] = {}; + } + const selectorsToSync = result[group]; + const value = item.innerHTML; + if (!selectorsToSync[value]) { + selectorsToSync[value] = []; + } + selectorsToSync[value].push(item); + } + return result; + } + + function setupSelectorSync() { + const selectorsToSync = findSelectorsToSyncByLanguage(); + Object.entries(selectorsToSync).forEach(([group, tabSetsByValue]) => { + Object.entries(tabSetsByValue).forEach(([value, items]) => { + items.forEach((item) => { + item.addEventListener("click", (_event) => { + setTabState(group, value); + toggleAll(value, selectorsToSync[group]); + }); + }); + }); + }); + return selectorsToSync; + } + + const selectorsToSync = setupSelectorSync(); + for (const [group, selectedName] of Object.entries(getTabSettings())) { + const selectors = selectorsToSync[group]; + // it's possible that stale state gives us empty selections, so we explicitly check here. 
+ if (selectors) { + toggleAll(selectedName, selectors); + } + } +}); + +function throttle(func, wait) { + let waiting = false; + return function () { + if (!waiting) { + func.apply(this, arguments); + waiting = true; + setTimeout(function () { + waiting = false; + }, wait); + } + }; +} + +function nexttick(func) { + return setTimeout(func, 0); +} diff --git a/r-book/site_libs/quarto-html/tippy.css b/r-book/site_libs/quarto-html/tippy.css new file mode 100644 index 00000000..e6ae635c --- /dev/null +++ b/r-book/site_libs/quarto-html/tippy.css @@ -0,0 +1 @@ +.tippy-box[data-animation=fade][data-state=hidden]{opacity:0}[data-tippy-root]{max-width:calc(100vw - 10px)}.tippy-box{position:relative;background-color:#333;color:#fff;border-radius:4px;font-size:14px;line-height:1.4;white-space:normal;outline:0;transition-property:transform,visibility,opacity}.tippy-box[data-placement^=top]>.tippy-arrow{bottom:0}.tippy-box[data-placement^=top]>.tippy-arrow:before{bottom:-7px;left:0;border-width:8px 8px 0;border-top-color:initial;transform-origin:center top}.tippy-box[data-placement^=bottom]>.tippy-arrow{top:0}.tippy-box[data-placement^=bottom]>.tippy-arrow:before{top:-7px;left:0;border-width:0 8px 8px;border-bottom-color:initial;transform-origin:center bottom}.tippy-box[data-placement^=left]>.tippy-arrow{right:0}.tippy-box[data-placement^=left]>.tippy-arrow:before{border-width:8px 0 8px 8px;border-left-color:initial;right:-7px;transform-origin:center left}.tippy-box[data-placement^=right]>.tippy-arrow{left:0}.tippy-box[data-placement^=right]>.tippy-arrow:before{left:-7px;border-width:8px 8px 8px 0;border-right-color:initial;transform-origin:center right}.tippy-box[data-inertia][data-state=visible]{transition-timing-function:cubic-bezier(.54,1.5,.38,1.11)}.tippy-arrow{width:16px;height:16px;color:#333}.tippy-arrow:before{content:"";position:absolute;border-color:transparent;border-style:solid}.tippy-content{position:relative;padding:5px 9px;z-index:1} \ No newline at end of file diff --git a/r-book/site_libs/quarto-html/tippy.umd.min.js b/r-book/site_libs/quarto-html/tippy.umd.min.js new file mode 100644 index 00000000..ca292be3 --- /dev/null +++ b/r-book/site_libs/quarto-html/tippy.umd.min.js @@ -0,0 +1,2 @@ +!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?module.exports=t(require("@popperjs/core")):"function"==typeof define&&define.amd?define(["@popperjs/core"],t):(e=e||self).tippy=t(e.Popper)}(this,(function(e){"use strict";var t={passive:!0,capture:!0},n=function(){return document.body};function r(e,t,n){if(Array.isArray(e)){var r=e[t];return null==r?Array.isArray(n)?n[t]:n:r}return e}function o(e,t){var n={}.toString.call(e);return 0===n.indexOf("[object")&&n.indexOf(t+"]")>-1}function i(e,t){return"function"==typeof e?e.apply(void 0,t):e}function a(e,t){return 0===t?e:function(r){clearTimeout(n),n=setTimeout((function(){e(r)}),t)};var n}function s(e,t){var n=Object.assign({},e);return t.forEach((function(e){delete n[e]})),n}function u(e){return[].concat(e)}function c(e,t){-1===e.indexOf(t)&&e.push(t)}function p(e){return e.split("-")[0]}function f(e){return[].slice.call(e)}function l(e){return Object.keys(e).reduce((function(t,n){return void 0!==e[n]&&(t[n]=e[n]),t}),{})}function d(){return document.createElement("div")}function v(e){return["Element","Fragment"].some((function(t){return o(e,t)}))}function m(e){return o(e,"MouseEvent")}function g(e){return!(!e||!e._tippy||e._tippy.reference!==e)}function h(e){return v(e)?[e]:function(e){return 
o(e,"NodeList")}(e)?f(e):Array.isArray(e)?e:f(document.querySelectorAll(e))}function b(e,t){e.forEach((function(e){e&&(e.style.transitionDuration=t+"ms")}))}function y(e,t){e.forEach((function(e){e&&e.setAttribute("data-state",t)}))}function w(e){var t,n=u(e)[0];return null!=n&&null!=(t=n.ownerDocument)&&t.body?n.ownerDocument:document}function E(e,t,n){var r=t+"EventListener";["transitionend","webkitTransitionEnd"].forEach((function(t){e[r](t,n)}))}function O(e,t){for(var n=t;n;){var r;if(e.contains(n))return!0;n=null==n.getRootNode||null==(r=n.getRootNode())?void 0:r.host}return!1}var x={isTouch:!1},C=0;function T(){x.isTouch||(x.isTouch=!0,window.performance&&document.addEventListener("mousemove",A))}function A(){var e=performance.now();e-C<20&&(x.isTouch=!1,document.removeEventListener("mousemove",A)),C=e}function L(){var e=document.activeElement;if(g(e)){var t=e._tippy;e.blur&&!t.state.isVisible&&e.blur()}}var D=!!("undefined"!=typeof window&&"undefined"!=typeof document)&&!!window.msCrypto,R=Object.assign({appendTo:n,aria:{content:"auto",expanded:"auto"},delay:0,duration:[300,250],getReferenceClientRect:null,hideOnClick:!0,ignoreAttributes:!1,interactive:!1,interactiveBorder:2,interactiveDebounce:0,moveTransition:"",offset:[0,10],onAfterUpdate:function(){},onBeforeUpdate:function(){},onCreate:function(){},onDestroy:function(){},onHidden:function(){},onHide:function(){},onMount:function(){},onShow:function(){},onShown:function(){},onTrigger:function(){},onUntrigger:function(){},onClickOutside:function(){},placement:"top",plugins:[],popperOptions:{},render:null,showOnCreate:!1,touch:!0,trigger:"mouseenter focus",triggerTarget:null},{animateFill:!1,followCursor:!1,inlinePositioning:!1,sticky:!1},{allowHTML:!1,animation:"fade",arrow:!0,content:"",inertia:!1,maxWidth:350,role:"tooltip",theme:"",zIndex:9999}),k=Object.keys(R);function P(e){var t=(e.plugins||[]).reduce((function(t,n){var r,o=n.name,i=n.defaultValue;o&&(t[o]=void 0!==e[o]?e[o]:null!=(r=R[o])?r:i);return t}),{});return Object.assign({},e,t)}function j(e,t){var n=Object.assign({},t,{content:i(t.content,[e])},t.ignoreAttributes?{}:function(e,t){return(t?Object.keys(P(Object.assign({},R,{plugins:t}))):k).reduce((function(t,n){var r=(e.getAttribute("data-tippy-"+n)||"").trim();if(!r)return t;if("content"===n)t[n]=r;else try{t[n]=JSON.parse(r)}catch(e){t[n]=r}return t}),{})}(e,t.plugins));return n.aria=Object.assign({},R.aria,n.aria),n.aria={expanded:"auto"===n.aria.expanded?t.interactive:n.aria.expanded,content:"auto"===n.aria.content?t.interactive?null:"describedby":n.aria.content},n}function M(e,t){e.innerHTML=t}function V(e){var t=d();return!0===e?t.className="tippy-arrow":(t.className="tippy-svg-arrow",v(e)?t.appendChild(e):M(t,e)),t}function I(e,t){v(t.content)?(M(e,""),e.appendChild(t.content)):"function"!=typeof t.content&&(t.allowHTML?M(e,t.content):e.textContent=t.content)}function S(e){var t=e.firstElementChild,n=f(t.children);return{box:t,content:n.find((function(e){return e.classList.contains("tippy-content")})),arrow:n.find((function(e){return e.classList.contains("tippy-arrow")||e.classList.contains("tippy-svg-arrow")})),backdrop:n.find((function(e){return e.classList.contains("tippy-backdrop")}))}}function N(e){var t=d(),n=d();n.className="tippy-box",n.setAttribute("data-state","hidden"),n.setAttribute("tabindex","-1");var r=d();function o(n,r){var o=S(t),i=o.box,a=o.content,s=o.arrow;r.theme?i.setAttribute("data-theme",r.theme):i.removeAttribute("data-theme"),"string"==typeof 
r.animation?i.setAttribute("data-animation",r.animation):i.removeAttribute("data-animation"),r.inertia?i.setAttribute("data-inertia",""):i.removeAttribute("data-inertia"),i.style.maxWidth="number"==typeof r.maxWidth?r.maxWidth+"px":r.maxWidth,r.role?i.setAttribute("role",r.role):i.removeAttribute("role"),n.content===r.content&&n.allowHTML===r.allowHTML||I(a,e.props),r.arrow?s?n.arrow!==r.arrow&&(i.removeChild(s),i.appendChild(V(r.arrow))):i.appendChild(V(r.arrow)):s&&i.removeChild(s)}return r.className="tippy-content",r.setAttribute("data-state","hidden"),I(r,e.props),t.appendChild(n),n.appendChild(r),o(e.props,e.props),{popper:t,onUpdate:o}}N.$$tippy=!0;var B=1,H=[],U=[];function _(o,s){var v,g,h,C,T,A,L,k,M=j(o,Object.assign({},R,P(l(s)))),V=!1,I=!1,N=!1,_=!1,F=[],W=a(we,M.interactiveDebounce),X=B++,Y=(k=M.plugins).filter((function(e,t){return k.indexOf(e)===t})),$={id:X,reference:o,popper:d(),popperInstance:null,props:M,state:{isEnabled:!0,isVisible:!1,isDestroyed:!1,isMounted:!1,isShown:!1},plugins:Y,clearDelayTimeouts:function(){clearTimeout(v),clearTimeout(g),cancelAnimationFrame(h)},setProps:function(e){if($.state.isDestroyed)return;ae("onBeforeUpdate",[$,e]),be();var t=$.props,n=j(o,Object.assign({},t,l(e),{ignoreAttributes:!0}));$.props=n,he(),t.interactiveDebounce!==n.interactiveDebounce&&(ce(),W=a(we,n.interactiveDebounce));t.triggerTarget&&!n.triggerTarget?u(t.triggerTarget).forEach((function(e){e.removeAttribute("aria-expanded")})):n.triggerTarget&&o.removeAttribute("aria-expanded");ue(),ie(),J&&J(t,n);$.popperInstance&&(Ce(),Ae().forEach((function(e){requestAnimationFrame(e._tippy.popperInstance.forceUpdate)})));ae("onAfterUpdate",[$,e])},setContent:function(e){$.setProps({content:e})},show:function(){var e=$.state.isVisible,t=$.state.isDestroyed,o=!$.state.isEnabled,a=x.isTouch&&!$.props.touch,s=r($.props.duration,0,R.duration);if(e||t||o||a)return;if(te().hasAttribute("disabled"))return;if(ae("onShow",[$],!1),!1===$.props.onShow($))return;$.state.isVisible=!0,ee()&&(z.style.visibility="visible");ie(),de(),$.state.isMounted||(z.style.transition="none");if(ee()){var u=re(),p=u.box,f=u.content;b([p,f],0)}A=function(){var e;if($.state.isVisible&&!_){if(_=!0,z.offsetHeight,z.style.transition=$.props.moveTransition,ee()&&$.props.animation){var t=re(),n=t.box,r=t.content;b([n,r],s),y([n,r],"visible")}se(),ue(),c(U,$),null==(e=$.popperInstance)||e.forceUpdate(),ae("onMount",[$]),$.props.animation&&ee()&&function(e,t){me(e,t)}(s,(function(){$.state.isShown=!0,ae("onShown",[$])}))}},function(){var e,t=$.props.appendTo,r=te();e=$.props.interactive&&t===n||"parent"===t?r.parentNode:i(t,[r]);e.contains(z)||e.appendChild(z);$.state.isMounted=!0,Ce()}()},hide:function(){var e=!$.state.isVisible,t=$.state.isDestroyed,n=!$.state.isEnabled,o=r($.props.duration,1,R.duration);if(e||t||n)return;if(ae("onHide",[$],!1),!1===$.props.onHide($))return;$.state.isVisible=!1,$.state.isShown=!1,_=!1,V=!1,ee()&&(z.style.visibility="hidden");if(ce(),ve(),ie(!0),ee()){var 
i=re(),a=i.box,s=i.content;$.props.animation&&(b([a,s],o),y([a,s],"hidden"))}se(),ue(),$.props.animation?ee()&&function(e,t){me(e,(function(){!$.state.isVisible&&z.parentNode&&z.parentNode.contains(z)&&t()}))}(o,$.unmount):$.unmount()},hideWithInteractivity:function(e){ne().addEventListener("mousemove",W),c(H,W),W(e)},enable:function(){$.state.isEnabled=!0},disable:function(){$.hide(),$.state.isEnabled=!1},unmount:function(){$.state.isVisible&&$.hide();if(!$.state.isMounted)return;Te(),Ae().forEach((function(e){e._tippy.unmount()})),z.parentNode&&z.parentNode.removeChild(z);U=U.filter((function(e){return e!==$})),$.state.isMounted=!1,ae("onHidden",[$])},destroy:function(){if($.state.isDestroyed)return;$.clearDelayTimeouts(),$.unmount(),be(),delete o._tippy,$.state.isDestroyed=!0,ae("onDestroy",[$])}};if(!M.render)return $;var q=M.render($),z=q.popper,J=q.onUpdate;z.setAttribute("data-tippy-root",""),z.id="tippy-"+$.id,$.popper=z,o._tippy=$,z._tippy=$;var G=Y.map((function(e){return e.fn($)})),K=o.hasAttribute("aria-expanded");return he(),ue(),ie(),ae("onCreate",[$]),M.showOnCreate&&Le(),z.addEventListener("mouseenter",(function(){$.props.interactive&&$.state.isVisible&&$.clearDelayTimeouts()})),z.addEventListener("mouseleave",(function(){$.props.interactive&&$.props.trigger.indexOf("mouseenter")>=0&&ne().addEventListener("mousemove",W)})),$;function Q(){var e=$.props.touch;return Array.isArray(e)?e:[e,0]}function Z(){return"hold"===Q()[0]}function ee(){var e;return!(null==(e=$.props.render)||!e.$$tippy)}function te(){return L||o}function ne(){var e=te().parentNode;return e?w(e):document}function re(){return S(z)}function oe(e){return $.state.isMounted&&!$.state.isVisible||x.isTouch||C&&"focus"===C.type?0:r($.props.delay,e?0:1,R.delay)}function ie(e){void 0===e&&(e=!1),z.style.pointerEvents=$.props.interactive&&!e?"":"none",z.style.zIndex=""+$.props.zIndex}function ae(e,t,n){var r;(void 0===n&&(n=!0),G.forEach((function(n){n[e]&&n[e].apply(n,t)})),n)&&(r=$.props)[e].apply(r,t)}function se(){var e=$.props.aria;if(e.content){var t="aria-"+e.content,n=z.id;u($.props.triggerTarget||o).forEach((function(e){var r=e.getAttribute(t);if($.state.isVisible)e.setAttribute(t,r?r+" "+n:n);else{var o=r&&r.replace(n,"").trim();o?e.setAttribute(t,o):e.removeAttribute(t)}}))}}function ue(){!K&&$.props.aria.expanded&&u($.props.triggerTarget||o).forEach((function(e){$.props.interactive?e.setAttribute("aria-expanded",$.state.isVisible&&e===te()?"true":"false"):e.removeAttribute("aria-expanded")}))}function ce(){ne().removeEventListener("mousemove",W),H=H.filter((function(e){return e!==W}))}function pe(e){if(!x.isTouch||!N&&"mousedown"!==e.type){var t=e.composedPath&&e.composedPath()[0]||e.target;if(!$.props.interactive||!O(z,t)){if(u($.props.triggerTarget||o).some((function(e){return O(e,t)}))){if(x.isTouch)return;if($.state.isVisible&&$.props.trigger.indexOf("click")>=0)return}else ae("onClickOutside",[$,e]);!0===$.props.hideOnClick&&($.clearDelayTimeouts(),$.hide(),I=!0,setTimeout((function(){I=!1})),$.state.isMounted||ve())}}}function fe(){N=!0}function le(){N=!1}function de(){var e=ne();e.addEventListener("mousedown",pe,!0),e.addEventListener("touchend",pe,t),e.addEventListener("touchstart",le,t),e.addEventListener("touchmove",fe,t)}function ve(){var e=ne();e.removeEventListener("mousedown",pe,!0),e.removeEventListener("touchend",pe,t),e.removeEventListener("touchstart",le,t),e.removeEventListener("touchmove",fe,t)}function me(e,t){var n=re().box;function 
r(e){e.target===n&&(E(n,"remove",r),t())}if(0===e)return t();E(n,"remove",T),E(n,"add",r),T=r}function ge(e,t,n){void 0===n&&(n=!1),u($.props.triggerTarget||o).forEach((function(r){r.addEventListener(e,t,n),F.push({node:r,eventType:e,handler:t,options:n})}))}function he(){var e;Z()&&(ge("touchstart",ye,{passive:!0}),ge("touchend",Ee,{passive:!0})),(e=$.props.trigger,e.split(/\s+/).filter(Boolean)).forEach((function(e){if("manual"!==e)switch(ge(e,ye),e){case"mouseenter":ge("mouseleave",Ee);break;case"focus":ge(D?"focusout":"blur",Oe);break;case"focusin":ge("focusout",Oe)}}))}function be(){F.forEach((function(e){var t=e.node,n=e.eventType,r=e.handler,o=e.options;t.removeEventListener(n,r,o)})),F=[]}function ye(e){var t,n=!1;if($.state.isEnabled&&!xe(e)&&!I){var r="focus"===(null==(t=C)?void 0:t.type);C=e,L=e.currentTarget,ue(),!$.state.isVisible&&m(e)&&H.forEach((function(t){return t(e)})),"click"===e.type&&($.props.trigger.indexOf("mouseenter")<0||V)&&!1!==$.props.hideOnClick&&$.state.isVisible?n=!0:Le(e),"click"===e.type&&(V=!n),n&&!r&&De(e)}}function we(e){var t=e.target,n=te().contains(t)||z.contains(t);"mousemove"===e.type&&n||function(e,t){var n=t.clientX,r=t.clientY;return e.every((function(e){var t=e.popperRect,o=e.popperState,i=e.props.interactiveBorder,a=p(o.placement),s=o.modifiersData.offset;if(!s)return!0;var u="bottom"===a?s.top.y:0,c="top"===a?s.bottom.y:0,f="right"===a?s.left.x:0,l="left"===a?s.right.x:0,d=t.top-r+u>i,v=r-t.bottom-c>i,m=t.left-n+f>i,g=n-t.right-l>i;return d||v||m||g}))}(Ae().concat(z).map((function(e){var t,n=null==(t=e._tippy.popperInstance)?void 0:t.state;return n?{popperRect:e.getBoundingClientRect(),popperState:n,props:M}:null})).filter(Boolean),e)&&(ce(),De(e))}function Ee(e){xe(e)||$.props.trigger.indexOf("click")>=0&&V||($.props.interactive?$.hideWithInteractivity(e):De(e))}function Oe(e){$.props.trigger.indexOf("focusin")<0&&e.target!==te()||$.props.interactive&&e.relatedTarget&&z.contains(e.relatedTarget)||De(e)}function xe(e){return!!x.isTouch&&Z()!==e.type.indexOf("touch")>=0}function Ce(){Te();var t=$.props,n=t.popperOptions,r=t.placement,i=t.offset,a=t.getReferenceClientRect,s=t.moveTransition,u=ee()?S(z).arrow:null,c=a?{getBoundingClientRect:a,contextElement:a.contextElement||te()}:o,p=[{name:"offset",options:{offset:i}},{name:"preventOverflow",options:{padding:{top:2,bottom:2,left:5,right:5}}},{name:"flip",options:{padding:5}},{name:"computeStyles",options:{adaptive:!s}},{name:"$$tippy",enabled:!0,phase:"beforeWrite",requires:["computeStyles"],fn:function(e){var t=e.state;if(ee()){var n=re().box;["placement","reference-hidden","escaped"].forEach((function(e){"placement"===e?n.setAttribute("data-placement",t.placement):t.attributes.popper["data-popper-"+e]?n.setAttribute("data-"+e,""):n.removeAttribute("data-"+e)})),t.attributes.popper={}}}}];ee()&&u&&p.push({name:"arrow",options:{element:u,padding:3}}),p.push.apply(p,(null==n?void 0:n.modifiers)||[]),$.popperInstance=e.createPopper(c,z,Object.assign({},n,{placement:r,onFirstUpdate:A,modifiers:p}))}function Te(){$.popperInstance&&($.popperInstance.destroy(),$.popperInstance=null)}function Ae(){return f(z.querySelectorAll("[data-tippy-root]"))}function Le(e){$.clearDelayTimeouts(),e&&ae("onTrigger",[$,e]),de();var t=oe(!0),n=Q(),r=n[0],o=n[1];x.isTouch&&"hold"===r&&o&&(t=o),t?v=setTimeout((function(){$.show()}),t):$.show()}function 
De(e){if($.clearDelayTimeouts(),ae("onUntrigger",[$,e]),$.state.isVisible){if(!($.props.trigger.indexOf("mouseenter")>=0&&$.props.trigger.indexOf("click")>=0&&["mouseleave","mousemove"].indexOf(e.type)>=0&&V)){var t=oe(!1);t?g=setTimeout((function(){$.state.isVisible&&$.hide()}),t):h=requestAnimationFrame((function(){$.hide()}))}}else ve()}}function F(e,n){void 0===n&&(n={});var r=R.plugins.concat(n.plugins||[]);document.addEventListener("touchstart",T,t),window.addEventListener("blur",L);var o=Object.assign({},n,{plugins:r}),i=h(e).reduce((function(e,t){var n=t&&_(t,o);return n&&e.push(n),e}),[]);return v(e)?i[0]:i}F.defaultProps=R,F.setDefaultProps=function(e){Object.keys(e).forEach((function(t){R[t]=e[t]}))},F.currentInput=x;var W=Object.assign({},e.applyStyles,{effect:function(e){var t=e.state,n={popper:{position:t.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};Object.assign(t.elements.popper.style,n.popper),t.styles=n,t.elements.arrow&&Object.assign(t.elements.arrow.style,n.arrow)}}),X={mouseover:"mouseenter",focusin:"focus",click:"click"};var Y={name:"animateFill",defaultValue:!1,fn:function(e){var t;if(null==(t=e.props.render)||!t.$$tippy)return{};var n=S(e.popper),r=n.box,o=n.content,i=e.props.animateFill?function(){var e=d();return e.className="tippy-backdrop",y([e],"hidden"),e}():null;return{onCreate:function(){i&&(r.insertBefore(i,r.firstElementChild),r.setAttribute("data-animatefill",""),r.style.overflow="hidden",e.setProps({arrow:!1,animation:"shift-away"}))},onMount:function(){if(i){var e=r.style.transitionDuration,t=Number(e.replace("ms",""));o.style.transitionDelay=Math.round(t/10)+"ms",i.style.transitionDuration=e,y([i],"visible")}},onShow:function(){i&&(i.style.transitionDuration="0ms")},onHide:function(){i&&y([i],"hidden")}}}};var $={clientX:0,clientY:0},q=[];function z(e){var t=e.clientX,n=e.clientY;$={clientX:t,clientY:n}}var J={name:"followCursor",defaultValue:!1,fn:function(e){var t=e.reference,n=w(e.props.triggerTarget||t),r=!1,o=!1,i=!0,a=e.props;function s(){return"initial"===e.props.followCursor&&e.state.isVisible}function u(){n.addEventListener("mousemove",f)}function c(){n.removeEventListener("mousemove",f)}function p(){r=!0,e.setProps({getReferenceClientRect:null}),r=!1}function f(n){var r=!n.target||t.contains(n.target),o=e.props.followCursor,i=n.clientX,a=n.clientY,s=t.getBoundingClientRect(),u=i-s.left,c=a-s.top;!r&&e.props.interactive||e.setProps({getReferenceClientRect:function(){var e=t.getBoundingClientRect(),n=i,r=a;"initial"===o&&(n=e.left+u,r=e.top+c);var s="horizontal"===o?e.top:r,p="vertical"===o?e.right:n,f="horizontal"===o?e.bottom:r,l="vertical"===o?e.left:n;return{width:p-l,height:f-s,top:s,right:p,bottom:f,left:l}}})}function l(){e.props.followCursor&&(q.push({instance:e,doc:n}),function(e){e.addEventListener("mousemove",z)}(n))}function d(){0===(q=q.filter((function(t){return t.instance!==e}))).filter((function(e){return e.doc===n})).length&&function(e){e.removeEventListener("mousemove",z)}(n)}return{onCreate:l,onDestroy:d,onBeforeUpdate:function(){a=e.props},onAfterUpdate:function(t,n){var i=n.followCursor;r||void 0!==i&&a.followCursor!==i&&(d(),i?(l(),!e.state.isMounted||o||s()||u()):(c(),p()))},onMount:function(){e.props.followCursor&&!o&&(i&&(f($),i=!1),s()||u())},onTrigger:function(e,t){m(t)&&($={clientX:t.clientX,clientY:t.clientY}),o="focus"===t.type},onHidden:function(){e.props.followCursor&&(p(),c(),i=!0)}}}};var G={name:"inlinePositioning",defaultValue:!1,fn:function(e){var 
t,n=e.reference;var r=-1,o=!1,i=[],a={name:"tippyInlinePositioning",enabled:!0,phase:"afterWrite",fn:function(o){var a=o.state;e.props.inlinePositioning&&(-1!==i.indexOf(a.placement)&&(i=[]),t!==a.placement&&-1===i.indexOf(a.placement)&&(i.push(a.placement),e.setProps({getReferenceClientRect:function(){return function(e){return function(e,t,n,r){if(n.length<2||null===e)return t;if(2===n.length&&r>=0&&n[0].left>n[1].right)return n[r]||t;switch(e){case"top":case"bottom":var o=n[0],i=n[n.length-1],a="top"===e,s=o.top,u=i.bottom,c=a?o.left:i.left,p=a?o.right:i.right;return{top:s,bottom:u,left:c,right:p,width:p-c,height:u-s};case"left":case"right":var f=Math.min.apply(Math,n.map((function(e){return e.left}))),l=Math.max.apply(Math,n.map((function(e){return e.right}))),d=n.filter((function(t){return"left"===e?t.left===f:t.right===l})),v=d[0].top,m=d[d.length-1].bottom;return{top:v,bottom:m,left:f,right:l,width:l-f,height:m-v};default:return t}}(p(e),n.getBoundingClientRect(),f(n.getClientRects()),r)}(a.placement)}})),t=a.placement)}};function s(){var t;o||(t=function(e,t){var n;return{popperOptions:Object.assign({},e.popperOptions,{modifiers:[].concat(((null==(n=e.popperOptions)?void 0:n.modifiers)||[]).filter((function(e){return e.name!==t.name})),[t])})}}(e.props,a),o=!0,e.setProps(t),o=!1)}return{onCreate:s,onAfterUpdate:s,onTrigger:function(t,n){if(m(n)){var o=f(e.reference.getClientRects()),i=o.find((function(e){return e.left-2<=n.clientX&&e.right+2>=n.clientX&&e.top-2<=n.clientY&&e.bottom+2>=n.clientY})),a=o.indexOf(i);r=a>-1?a:r}},onHidden:function(){r=-1}}}};var K={name:"sticky",defaultValue:!1,fn:function(e){var t=e.reference,n=e.popper;function r(t){return!0===e.props.sticky||e.props.sticky===t}var o=null,i=null;function a(){var s=r("reference")?(e.popperInstance?e.popperInstance.state.elements.reference:t).getBoundingClientRect():null,u=r("popper")?n.getBoundingClientRect():null;(s&&Q(o,s)||u&&Q(i,u))&&e.popperInstance&&e.popperInstance.update(),o=s,i=u,e.state.isMounted&&requestAnimationFrame(a)}return{onMount:function(){e.props.sticky&&a()}}}};function Q(e,t){return!e||!t||(e.top!==t.top||e.right!==t.right||e.bottom!==t.bottom||e.left!==t.left)}return F.setDefaultProps({plugins:[Y,J,G,K],render:N}),F.createSingleton=function(e,t){var n;void 0===t&&(t={});var r,o=e,i=[],a=[],c=t.overrides,p=[],f=!1;function l(){a=o.map((function(e){return u(e.props.triggerTarget||e.reference)})).reduce((function(e,t){return e.concat(t)}),[])}function v(){i=o.map((function(e){return e.reference}))}function m(e){o.forEach((function(t){e?t.enable():t.disable()}))}function g(e){return o.map((function(t){var n=t.setProps;return t.setProps=function(o){n(o),t.reference===r&&e.setProps(o)},function(){t.setProps=n}}))}function h(e,t){var n=a.indexOf(t);if(t!==r){r=t;var s=(c||[]).concat("content").reduce((function(e,t){return e[t]=o[n].props[t],e}),{});e.setProps(Object.assign({},s,{getReferenceClientRect:"function"==typeof s.getReferenceClientRect?s.getReferenceClientRect:function(){var e;return null==(e=i[n])?void 0:e.getBoundingClientRect()}}))}}m(!1),v(),l();var 
b={fn:function(){return{onDestroy:function(){m(!0)},onHidden:function(){r=null},onClickOutside:function(e){e.props.showOnCreate&&!f&&(f=!0,r=null)},onShow:function(e){e.props.showOnCreate&&!f&&(f=!0,h(e,i[0]))},onTrigger:function(e,t){h(e,t.currentTarget)}}}},y=F(d(),Object.assign({},s(t,["overrides"]),{plugins:[b].concat(t.plugins||[]),triggerTarget:a,popperOptions:Object.assign({},t.popperOptions,{modifiers:[].concat((null==(n=t.popperOptions)?void 0:n.modifiers)||[],[W])})})),w=y.show;y.show=function(e){if(w(),!r&&null==e)return h(y,i[0]);if(!r||null!=e){if("number"==typeof e)return i[e]&&h(y,i[e]);if(o.indexOf(e)>=0){var t=e.reference;return h(y,t)}return i.indexOf(e)>=0?h(y,e):void 0}},y.showNext=function(){var e=i[0];if(!r)return y.show(0);var t=i.indexOf(r);y.show(i[t+1]||e)},y.showPrevious=function(){var e=i[i.length-1];if(!r)return y.show(e);var t=i.indexOf(r),n=i[t-1]||e;y.show(n)};var E=y.setProps;return y.setProps=function(e){c=e.overrides||c,E(e)},y.setInstances=function(e){m(!0),p.forEach((function(e){return e()})),o=e,m(!1),v(),l(),p=g(y),y.setProps({triggerTarget:a})},p=g(y),y},F.delegate=function(e,n){var r=[],o=[],i=!1,a=n.target,c=s(n,["target"]),p=Object.assign({},c,{trigger:"manual",touch:!1}),f=Object.assign({touch:R.touch},c,{showOnCreate:!0}),l=F(e,p);function d(e){if(e.target&&!i){var t=e.target.closest(a);if(t){var r=t.getAttribute("data-tippy-trigger")||n.trigger||R.trigger;if(!t._tippy&&!("touchstart"===e.type&&"boolean"==typeof f.touch||"touchstart"!==e.type&&r.indexOf(X[e.type])<0)){var s=F(t,f);s&&(o=o.concat(s))}}}}function v(e,t,n,o){void 0===o&&(o=!1),e.addEventListener(t,n,o),r.push({node:e,eventType:t,handler:n,options:o})}return u(l).forEach((function(e){var n=e.destroy,a=e.enable,s=e.disable;e.destroy=function(e){void 0===e&&(e=!0),e&&o.forEach((function(e){e.destroy()})),o=[],r.forEach((function(e){var t=e.node,n=e.eventType,r=e.handler,o=e.options;t.removeEventListener(n,r,o)})),r=[],n()},e.enable=function(){a(),o.forEach((function(e){return e.enable()})),i=!1},e.disable=function(){s(),o.forEach((function(e){return e.disable()})),i=!0},function(e){var n=e.reference;v(n,"touchstart",d,t),v(n,"mouseover",d),v(n,"focusin",d),v(n,"click",d)}(e)})),l},F.hideAll=function(e){var t=void 0===e?{}:e,n=t.exclude,r=t.duration;U.forEach((function(e){var t=!1;if(n&&(t=g(n)?e.reference===n:e.popper===n.popper),!t){var o=e.props.duration;e.setProps({duration:r}),e.hide(),e.state.isDestroyed||e.setProps({duration:o})}}))},F.roundArrow='',F})); + diff --git a/r-book/site_libs/quarto-nav/headroom.min.js b/r-book/site_libs/quarto-nav/headroom.min.js new file mode 100644 index 00000000..b08f1dff --- /dev/null +++ b/r-book/site_libs/quarto-nav/headroom.min.js @@ -0,0 +1,7 @@ +/*! + * headroom.js v0.12.0 - Give your page some headroom. 
Hide your header until you need it + * Copyright (c) 2020 Nick Williams - http://wicky.nillia.ms/headroom.js + * License: MIT + */ + +!function(t,n){"object"==typeof exports&&"undefined"!=typeof module?module.exports=n():"function"==typeof define&&define.amd?define(n):(t=t||self).Headroom=n()}(this,function(){"use strict";function t(){return"undefined"!=typeof window}function d(t){return function(t){return t&&t.document&&function(t){return 9===t.nodeType}(t.document)}(t)?function(t){var n=t.document,o=n.body,s=n.documentElement;return{scrollHeight:function(){return Math.max(o.scrollHeight,s.scrollHeight,o.offsetHeight,s.offsetHeight,o.clientHeight,s.clientHeight)},height:function(){return t.innerHeight||s.clientHeight||o.clientHeight},scrollY:function(){return void 0!==t.pageYOffset?t.pageYOffset:(s||o.parentNode||o).scrollTop}}}(t):function(t){return{scrollHeight:function(){return Math.max(t.scrollHeight,t.offsetHeight,t.clientHeight)},height:function(){return Math.max(t.offsetHeight,t.clientHeight)},scrollY:function(){return t.scrollTop}}}(t)}function n(t,s,e){var n,o=function(){var n=!1;try{var t={get passive(){n=!0}};window.addEventListener("test",t,t),window.removeEventListener("test",t,t)}catch(t){n=!1}return n}(),i=!1,r=d(t),l=r.scrollY(),a={};function c(){var t=Math.round(r.scrollY()),n=r.height(),o=r.scrollHeight();a.scrollY=t,a.lastScrollY=l,a.direction=ls.tolerance[a.direction],e(a),l=t,i=!1}function h(){i||(i=!0,n=requestAnimationFrame(c))}var u=!!o&&{passive:!0,capture:!1};return t.addEventListener("scroll",h,u),c(),{destroy:function(){cancelAnimationFrame(n),t.removeEventListener("scroll",h,u)}}}function o(t){return t===Object(t)?t:{down:t,up:t}}function s(t,n){n=n||{},Object.assign(this,s.options,n),this.classes=Object.assign({},s.options.classes,n.classes),this.elem=t,this.tolerance=o(this.tolerance),this.offset=o(this.offset),this.initialised=!1,this.frozen=!1}return s.prototype={constructor:s,init:function(){return s.cutsTheMustard&&!this.initialised&&(this.addClass("initial"),this.initialised=!0,setTimeout(function(t){t.scrollTracker=n(t.scroller,{offset:t.offset,tolerance:t.tolerance},t.update.bind(t))},100,this)),this},destroy:function(){this.initialised=!1,Object.keys(this.classes).forEach(this.removeClass,this),this.scrollTracker.destroy()},unpin:function(){!this.hasClass("pinned")&&this.hasClass("unpinned")||(this.addClass("unpinned"),this.removeClass("pinned"),this.onUnpin&&this.onUnpin.call(this))},pin:function(){this.hasClass("unpinned")&&(this.addClass("pinned"),this.removeClass("unpinned"),this.onPin&&this.onPin.call(this))},freeze:function(){this.frozen=!0,this.addClass("frozen")},unfreeze:function(){this.frozen=!1,this.removeClass("frozen")},top:function(){this.hasClass("top")||(this.addClass("top"),this.removeClass("notTop"),this.onTop&&this.onTop.call(this))},notTop:function(){this.hasClass("notTop")||(this.addClass("notTop"),this.removeClass("top"),this.onNotTop&&this.onNotTop.call(this))},bottom:function(){this.hasClass("bottom")||(this.addClass("bottom"),this.removeClass("notBottom"),this.onBottom&&this.onBottom.call(this))},notBottom:function(){this.hasClass("notBottom")||(this.addClass("notBottom"),this.removeClass("bottom"),this.onNotBottom&&this.onNotBottom.call(this))},shouldUnpin:function(t){return"down"===t.direction&&!t.top&&t.toleranceExceeded},shouldPin:function(t){return"up"===t.direction&&t.toleranceExceeded||t.top},addClass:function(t){this.elem.classList.add.apply(this.elem.classList,this.classes[t].split(" 
"))},removeClass:function(t){this.elem.classList.remove.apply(this.elem.classList,this.classes[t].split(" "))},hasClass:function(t){return this.classes[t].split(" ").every(function(t){return this.classList.contains(t)},this.elem)},update:function(t){t.isOutOfBounds||!0!==this.frozen&&(t.top?this.top():this.notTop(),t.bottom?this.bottom():this.notBottom(),this.shouldUnpin(t)?this.unpin():this.shouldPin(t)&&this.pin())}},s.options={tolerance:{up:0,down:0},offset:0,scroller:t()?window:null,classes:{frozen:"headroom--frozen",pinned:"headroom--pinned",unpinned:"headroom--unpinned",top:"headroom--top",notTop:"headroom--not-top",bottom:"headroom--bottom",notBottom:"headroom--not-bottom",initial:"headroom"}},s.cutsTheMustard=!!(t()&&function(){}.bind&&"classList"in document.documentElement&&Object.assign&&Object.keys&&requestAnimationFrame),s}); diff --git a/r-book/site_libs/quarto-nav/quarto-nav.js b/r-book/site_libs/quarto-nav/quarto-nav.js new file mode 100644 index 00000000..3b21201f --- /dev/null +++ b/r-book/site_libs/quarto-nav/quarto-nav.js @@ -0,0 +1,277 @@ +const headroomChanged = new CustomEvent("quarto-hrChanged", { + detail: {}, + bubbles: true, + cancelable: false, + composed: false, +}); + +window.document.addEventListener("DOMContentLoaded", function () { + let init = false; + + // Manage the back to top button, if one is present. + let lastScrollTop = window.pageYOffset || document.documentElement.scrollTop; + const scrollDownBuffer = 5; + const scrollUpBuffer = 35; + const btn = document.getElementById("quarto-back-to-top"); + const hideBackToTop = () => { + btn.style.display = "none"; + }; + const showBackToTop = () => { + btn.style.display = "inline-block"; + }; + if (btn) { + window.document.addEventListener( + "scroll", + function () { + const currentScrollTop = + window.pageYOffset || document.documentElement.scrollTop; + + // Shows and hides the button 'intelligently' as the user scrolls + if (currentScrollTop - scrollDownBuffer > lastScrollTop) { + hideBackToTop(); + lastScrollTop = currentScrollTop <= 0 ? 0 : currentScrollTop; + } else if (currentScrollTop < lastScrollTop - scrollUpBuffer) { + showBackToTop(); + lastScrollTop = currentScrollTop <= 0 ? 
0 : currentScrollTop; + } + + // Show the button at the bottom, hides it at the top + if (currentScrollTop <= 0) { + hideBackToTop(); + } else if ( + window.innerHeight + currentScrollTop >= + document.body.offsetHeight + ) { + showBackToTop(); + } + }, + false + ); + } + + function throttle(func, wait) { + var timeout; + return function () { + const context = this; + const args = arguments; + const later = function () { + clearTimeout(timeout); + timeout = null; + func.apply(context, args); + }; + + if (!timeout) { + timeout = setTimeout(later, wait); + } + }; + } + + function headerOffset() { + // Set an offset if there is are fixed top navbar + const headerEl = window.document.querySelector("header.fixed-top"); + if (headerEl) { + return headerEl.clientHeight; + } else { + return 0; + } + } + + function footerOffset() { + const footerEl = window.document.querySelector("footer.footer"); + if (footerEl) { + return footerEl.clientHeight; + } else { + return 0; + } + } + + function updateDocumentOffsetWithoutAnimation() { + updateDocumentOffset(false); + } + + function updateDocumentOffset(animated) { + // set body offset + const topOffset = headerOffset(); + const bodyOffset = topOffset + footerOffset(); + const bodyEl = window.document.body; + bodyEl.setAttribute("data-bs-offset", topOffset); + bodyEl.style.paddingTop = topOffset + "px"; + + // deal with sidebar offsets + const sidebars = window.document.querySelectorAll( + ".sidebar, .headroom-target" + ); + sidebars.forEach((sidebar) => { + if (!animated) { + sidebar.classList.add("notransition"); + // Remove the no transition class after the animation has time to complete + setTimeout(function () { + sidebar.classList.remove("notransition"); + }, 201); + } + + if (window.Headroom && sidebar.classList.contains("sidebar-unpinned")) { + sidebar.style.top = "0"; + sidebar.style.maxHeight = "100vh"; + } else { + sidebar.style.top = topOffset + "px"; + sidebar.style.maxHeight = "calc(100vh - " + topOffset + "px)"; + } + }); + + // allow space for footer + const mainContainer = window.document.querySelector(".quarto-container"); + if (mainContainer) { + mainContainer.style.minHeight = "calc(100vh - " + bodyOffset + "px)"; + } + + // link offset + let linkStyle = window.document.querySelector("#quarto-target-style"); + if (!linkStyle) { + linkStyle = window.document.createElement("style"); + linkStyle.setAttribute("id", "quarto-target-style"); + window.document.head.appendChild(linkStyle); + } + while (linkStyle.firstChild) { + linkStyle.removeChild(linkStyle.firstChild); + } + if (topOffset > 0) { + linkStyle.appendChild( + window.document.createTextNode(` + section:target::before { + content: ""; + display: block; + height: ${topOffset}px; + margin: -${topOffset}px 0 0; + }`) + ); + } + if (init) { + window.dispatchEvent(headroomChanged); + } + init = true; + } + + // initialize headroom + var header = window.document.querySelector("#quarto-header"); + if (header && window.Headroom) { + const headroom = new window.Headroom(header, { + tolerance: 5, + onPin: function () { + const sidebars = window.document.querySelectorAll( + ".sidebar, .headroom-target" + ); + sidebars.forEach((sidebar) => { + sidebar.classList.remove("sidebar-unpinned"); + }); + updateDocumentOffset(); + }, + onUnpin: function () { + const sidebars = window.document.querySelectorAll( + ".sidebar, .headroom-target" + ); + sidebars.forEach((sidebar) => { + sidebar.classList.add("sidebar-unpinned"); + }); + updateDocumentOffset(); + }, + }); + headroom.init(); + + let frozen = 
false; + window.quartoToggleHeadroom = function () { + if (frozen) { + headroom.unfreeze(); + frozen = false; + } else { + headroom.freeze(); + frozen = true; + } + }; + } + + window.addEventListener( + "hashchange", + function (e) { + if ( + getComputedStyle(document.documentElement).scrollBehavior !== "smooth" + ) { + window.scrollTo(0, window.pageYOffset - headerOffset()); + } + }, + false + ); + + // Observe size changed for the header + const headerEl = window.document.querySelector("header.fixed-top"); + if (headerEl && window.ResizeObserver) { + const observer = new window.ResizeObserver( + updateDocumentOffsetWithoutAnimation + ); + observer.observe(headerEl, { + attributes: true, + childList: true, + characterData: true, + }); + } else { + window.addEventListener( + "resize", + throttle(updateDocumentOffsetWithoutAnimation, 50) + ); + } + setTimeout(updateDocumentOffsetWithoutAnimation, 250); + + // fixup index.html links if we aren't on the filesystem + if (window.location.protocol !== "file:") { + const links = window.document.querySelectorAll("a"); + for (let i = 0; i < links.length; i++) { + if (links[i].href) { + links[i].href = links[i].href.replace(/\/index\.html/, "/"); + } + } + + // Fixup any sharing links that require urls + // Append url to any sharing urls + const sharingLinks = window.document.querySelectorAll( + "a.sidebar-tools-main-item" + ); + for (let i = 0; i < sharingLinks.length; i++) { + const sharingLink = sharingLinks[i]; + const href = sharingLink.getAttribute("href"); + if (href) { + sharingLink.setAttribute( + "href", + href.replace("|url|", window.location.href) + ); + } + } + + // Scroll the active navigation item into view, if necessary + const navSidebar = window.document.querySelector("nav#quarto-sidebar"); + if (navSidebar) { + // Find the active item + const activeItem = navSidebar.querySelector("li.sidebar-item a.active"); + if (activeItem) { + // Wait for the scroll height and height to resolve by observing size changes on the + // nav element that is scrollable + const resizeObserver = new ResizeObserver((_entries) => { + // The bottom of the element + const elBottom = activeItem.offsetTop; + const viewBottom = navSidebar.scrollTop + navSidebar.clientHeight; + + // The element height and scroll height are the same, then we are still loading + if (viewBottom !== navSidebar.scrollHeight) { + // Determine if the item isn't visible and scroll to it + if (elBottom >= viewBottom) { + navSidebar.scrollTop = elBottom; + } + + // stop observing now since we've completed the scroll + resizeObserver.unobserve(navSidebar); + } + }); + resizeObserver.observe(navSidebar); + } + } + } +}); diff --git a/r-book/site_libs/quarto-search/autocomplete.umd.js b/r-book/site_libs/quarto-search/autocomplete.umd.js new file mode 100644 index 00000000..619c57cc --- /dev/null +++ b/r-book/site_libs/quarto-search/autocomplete.umd.js @@ -0,0 +1,3 @@ +/*! @algolia/autocomplete-js 1.7.3 | MIT License | © Algolia, Inc. 
and contributors | https://github.com/algolia/autocomplete */ +!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t((e="undefined"!=typeof globalThis?globalThis:e||self)["@algolia/autocomplete-js"]={})}(this,(function(e){"use strict";function t(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function n(e){for(var n=1;n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function a(e,t){return function(e){if(Array.isArray(e))return e}(e)||function(e,t){var n=null==e?null:"undefined"!=typeof Symbol&&e[Symbol.iterator]||e["@@iterator"];if(null==n)return;var r,o,i=[],u=!0,a=!1;try{for(n=n.call(e);!(u=(r=n.next()).done)&&(i.push(r.value),!t||i.length!==t);u=!0);}catch(e){a=!0,o=e}finally{try{u||null==n.return||n.return()}finally{if(a)throw o}}return i}(e,t)||l(e,t)||function(){throw new TypeError("Invalid attempt to destructure non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function c(e){return function(e){if(Array.isArray(e))return s(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||l(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function l(e,t){if(e){if("string"==typeof e)return s(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);return"Object"===n&&e.constructor&&(n=e.constructor.name),"Map"===n||"Set"===n?Array.from(e):"Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)?s(e,t):void 0}}function s(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n=n?null===r?null:0:o}function S(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function I(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function E(e,t){var n=[];return Promise.resolve(e(t)).then((function(e){return Promise.all(e.filter((function(e){return Boolean(e)})).map((function(e){if(e.sourceId,n.includes(e.sourceId))throw new Error("[Autocomplete] The `sourceId` ".concat(JSON.stringify(e.sourceId)," is not unique."));n.push(e.sourceId);var t=function(e){for(var t=1;te.length)&&(t=e.length);for(var n=0,r=new Array(t);ne.length)&&(t=e.length);for(var n=0,r=new Array(t);n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}var ae,ce,le,se=null,pe=(ae=-1,ce=-1,le=void 0,function(e){var t=++ae;return Promise.resolve(e).then((function(e){return le&&t=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}var ye=["props","refresh","store"],be=["inputElement","formElement","panelElement"],Oe=["inputElement"],_e=["inputElement","maxLength"],Pe=["item","source"];function je(e,t){var 
n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function we(e){for(var t=1;t=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function Ee(e){var t=e.props,n=e.refresh,r=e.store,o=Ie(e,ye);return{getEnvironmentProps:function(e){var n=e.inputElement,o=e.formElement,i=e.panelElement;function u(e){!r.getState().isOpen&&r.pendingRequests.isEmpty()||e.target===n||!1===[o,i].some((function(t){return n=t,r=e.target,n===r||n.contains(r);var n,r}))&&(r.dispatch("blur",null),t.debug||r.pendingRequests.cancelAll())}return we({onTouchStart:u,onMouseDown:u,onTouchMove:function(e){!1!==r.getState().isOpen&&n===t.environment.document.activeElement&&e.target!==n&&n.blur()}},Ie(e,be))},getRootProps:function(e){return we({role:"combobox","aria-expanded":r.getState().isOpen,"aria-haspopup":"listbox","aria-owns":r.getState().isOpen?"".concat(t.id,"-list"):void 0,"aria-labelledby":"".concat(t.id,"-label")},e)},getFormProps:function(e){return e.inputElement,we({action:"",noValidate:!0,role:"search",onSubmit:function(i){var u;i.preventDefault(),t.onSubmit(we({event:i,refresh:n,state:r.getState()},o)),r.dispatch("submit",null),null===(u=e.inputElement)||void 0===u||u.blur()},onReset:function(i){var u;i.preventDefault(),t.onReset(we({event:i,refresh:n,state:r.getState()},o)),r.dispatch("reset",null),null===(u=e.inputElement)||void 0===u||u.focus()}},Ie(e,Oe))},getLabelProps:function(e){return we({htmlFor:"".concat(t.id,"-input"),id:"".concat(t.id,"-label")},e)},getInputProps:function(e){var i;function u(e){(t.openOnFocus||Boolean(r.getState().query))&&fe(we({event:e,props:t,query:r.getState().completion||r.getState().query,refresh:n,store:r},o)),r.dispatch("focus",null)}var a=e||{};a.inputElement;var c=a.maxLength,l=void 0===c?512:c,s=Ie(a,_e),p=A(r.getState()),f=function(e){return Boolean(e&&e.match(C))}((null===(i=t.environment.navigator)||void 0===i?void 0:i.userAgent)||""),d=null!=p&&p.itemUrl&&!f?"go":"search";return we({"aria-autocomplete":"both","aria-activedescendant":r.getState().isOpen&&null!==r.getState().activeItemId?"".concat(t.id,"-item-").concat(r.getState().activeItemId):void 0,"aria-controls":r.getState().isOpen?"".concat(t.id,"-list"):void 0,"aria-labelledby":"".concat(t.id,"-label"),value:r.getState().completion||r.getState().query,id:"".concat(t.id,"-input"),autoComplete:"off",autoCorrect:"off",autoCapitalize:"off",enterKeyHint:d,spellCheck:"false",autoFocus:t.autoFocus,placeholder:t.placeholder,maxLength:l,type:"search",onChange:function(e){fe(we({event:e,props:t,query:e.currentTarget.value.slice(0,l),refresh:n,store:r},o))},onKeyDown:function(e){!function(e){var t=e.event,n=e.props,r=e.refresh,o=e.store,i=ge(e,de);if("ArrowUp"===t.key||"ArrowDown"===t.key){var u=function(){var e=n.environment.document.getElementById("".concat(n.id,"-item-").concat(o.getState().activeItemId));e&&(e.scrollIntoViewIfNeeded?e.scrollIntoViewIfNeeded(!1):e.scrollIntoView(!1))},a=function(){var e=A(o.getState());if(null!==o.getState().activeItemId&&e){var 
n=e.item,u=e.itemInputValue,a=e.itemUrl,c=e.source;c.onActive(ve({event:t,item:n,itemInputValue:u,itemUrl:a,refresh:r,source:c,state:o.getState()},i))}};t.preventDefault(),!1===o.getState().isOpen&&(n.openOnFocus||Boolean(o.getState().query))?fe(ve({event:t,props:n,query:o.getState().query,refresh:r,store:o},i)).then((function(){o.dispatch(t.key,{nextActiveItemId:n.defaultActiveItemId}),a(),setTimeout(u,0)})):(o.dispatch(t.key,{}),a(),u())}else if("Escape"===t.key)t.preventDefault(),o.dispatch(t.key,null),o.pendingRequests.cancelAll();else if("Tab"===t.key)o.dispatch("blur",null),o.pendingRequests.cancelAll();else if("Enter"===t.key){if(null===o.getState().activeItemId||o.getState().collections.every((function(e){return 0===e.items.length})))return void(n.debug||o.pendingRequests.cancelAll());t.preventDefault();var c=A(o.getState()),l=c.item,s=c.itemInputValue,p=c.itemUrl,f=c.source;if(t.metaKey||t.ctrlKey)void 0!==p&&(f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),n.navigator.navigateNewTab({itemUrl:p,item:l,state:o.getState()}));else if(t.shiftKey)void 0!==p&&(f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),n.navigator.navigateNewWindow({itemUrl:p,item:l,state:o.getState()}));else if(t.altKey);else{if(void 0!==p)return f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),void n.navigator.navigate({itemUrl:p,item:l,state:o.getState()});fe(ve({event:t,nextState:{isOpen:!1},props:n,query:s,refresh:r,store:o},i)).then((function(){f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i))}))}}}(we({event:e,props:t,refresh:n,store:r},o))},onFocus:u,onBlur:y,onClick:function(n){e.inputElement!==t.environment.document.activeElement||r.getState().isOpen||u(n)}},s)},getPanelProps:function(e){return we({onMouseDown:function(e){e.preventDefault()},onMouseLeave:function(){r.dispatch("mouseleave",null)}},e)},getListProps:function(e){return we({role:"listbox","aria-labelledby":"".concat(t.id,"-label"),id:"".concat(t.id,"-list")},e)},getItemProps:function(e){var i=e.item,u=e.source,a=Ie(e,Pe);return we({id:"".concat(t.id,"-item-").concat(i.__autocomplete_id),role:"option","aria-selected":r.getState().activeItemId===i.__autocomplete_id,onMouseMove:function(e){if(i.__autocomplete_id!==r.getState().activeItemId){r.dispatch("mousemove",i.__autocomplete_id);var t=A(r.getState());if(null!==r.getState().activeItemId&&t){var u=t.item,a=t.itemInputValue,c=t.itemUrl,l=t.source;l.onActive(we({event:e,item:u,itemInputValue:a,itemUrl:c,refresh:n,source:l,state:r.getState()},o))}}},onMouseDown:function(e){e.preventDefault()},onClick:function(e){var a=u.getItemInputValue({item:i,state:r.getState()}),c=u.getItemUrl({item:i,state:r.getState()});(c?Promise.resolve():fe(we({event:e,nextState:{isOpen:!1},props:t,query:a,refresh:n,store:r},o))).then((function(){u.onSelect(we({event:e,item:i,itemInputValue:a,itemUrl:c,refresh:n,source:u,state:r.getState()},o))}))}},a)}}}function Ae(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function Ce(e){for(var t=1;t0},reshape:function(e){return e.sources}},e),{},{id:null!==(n=e.id)&&void 
0!==n?n:v(),plugins:o,initialState:H({activeItemId:null,query:"",completion:null,collections:[],isOpen:!1,status:"idle",context:{}},e.initialState),onStateChange:function(t){var n;null===(n=e.onStateChange)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onStateChange)||void 0===n?void 0:n.call(e,t)}))},onSubmit:function(t){var n;null===(n=e.onSubmit)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onSubmit)||void 0===n?void 0:n.call(e,t)}))},onReset:function(t){var n;null===(n=e.onReset)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onReset)||void 0===n?void 0:n.call(e,t)}))},getSources:function(n){return Promise.all([].concat(F(o.map((function(e){return e.getSources}))),[e.getSources]).filter(Boolean).map((function(e){return E(e,n)}))).then((function(e){return d(e)})).then((function(e){return e.map((function(e){return H(H({},e),{},{onSelect:function(n){e.onSelect(n),t.forEach((function(e){var t;return null===(t=e.onSelect)||void 0===t?void 0:t.call(e,n)}))},onActive:function(n){e.onActive(n),t.forEach((function(e){var t;return null===(t=e.onActive)||void 0===t?void 0:t.call(e,n)}))}})}))}))},navigator:H({navigate:function(e){var t=e.itemUrl;r.location.assign(t)},navigateNewTab:function(e){var t=e.itemUrl,n=r.open(t,"_blank","noopener");null==n||n.focus()},navigateNewWindow:function(e){var t=e.itemUrl;r.open(t,"_blank","noopener")}},e.navigator)})}(e,t),r=R(Te,n,(function(e){var t=e.prevState,r=e.state;n.onStateChange(Be({prevState:t,state:r,refresh:u},o))})),o=function(e){var t=e.store;return{setActiveItemId:function(e){t.dispatch("setActiveItemId",e)},setQuery:function(e){t.dispatch("setQuery",e)},setCollections:function(e){var n=0,r=e.map((function(e){return L(L({},e),{},{items:d(e.items).map((function(e){return L(L({},e),{},{__autocomplete_id:n++})}))})}));t.dispatch("setCollections",r)},setIsOpen:function(e){t.dispatch("setIsOpen",e)},setStatus:function(e){t.dispatch("setStatus",e)},setContext:function(e){t.dispatch("setContext",e)}}}({store:r}),i=Ee(Be({props:n,refresh:u,store:r},o));function u(){return fe(Be({event:new Event("input"),nextState:{isOpen:r.getState().isOpen},props:n,query:r.getState().query,refresh:u,store:r},o))}return n.plugins.forEach((function(e){var n;return null===(n=e.subscribe)||void 0===n?void 0:n.call(e,Be(Be({},o),{},{refresh:u,onSelect:function(e){t.push({onSelect:e})},onActive:function(e){t.push({onActive:e})}}))})),function(e){var t,n,r=e.metadata,o=e.environment;if(null===(t=o.navigator)||void 0===t||null===(n=t.userAgent)||void 0===n?void 0:n.includes("Algolia Crawler")){var i=o.document.createElement("meta"),u=o.document.querySelector("head");i.name="algolia:metadata",setTimeout((function(){i.content=JSON.stringify(r),u.appendChild(i)}),0)}}({metadata:ke({plugins:n.plugins,options:e}),environment:n.environment}),Be(Be({refresh:u},i),o)}var Ue=function(e,t,n,r){var o;t[0]=0;for(var i=1;i=5&&((o||!e&&5===r)&&(u.push(r,0,o,n),r=6),e&&(u.push(r,e,0,n),r=6)),o=""},c=0;c"===t?(r=1,o=""):o=t+o[0]:i?t===i?i="":o+=t:'"'===t||"'"===t?i=t:">"===t?(a(),r=1):r&&("="===t?(r=5,n=o,o=""):"/"===t&&(r<5||">"===e[c][l+1])?(a(),3===r&&(u=u[0]),r=u,(u=u[0]).push(2,0,r),r=0):" "===t||"\t"===t||"\n"===t||"\r"===t?(a(),r=2):o+=t),3===r&&"!--"===o&&(r=4,u=u[0])}return a(),u}(e)),t),arguments,[])).length>1?t:t[0]}var We=function(e){var t=e.environment,n=t.document.createElementNS("http://www.w3.org/2000/svg","svg");n.setAttribute("class","aa-ClearIcon"),n.setAttribute("viewBox","0 0 24 
24"),n.setAttribute("width","18"),n.setAttribute("height","18"),n.setAttribute("fill","currentColor");var r=t.document.createElementNS("http://www.w3.org/2000/svg","path");return r.setAttribute("d","M5.293 6.707l5.293 5.293-5.293 5.293c-0.391 0.391-0.391 1.024 0 1.414s1.024 0.391 1.414 0l5.293-5.293 5.293 5.293c0.391 0.391 1.024 0.391 1.414 0s0.391-1.024 0-1.414l-5.293-5.293 5.293-5.293c0.391-0.391 0.391-1.024 0-1.414s-1.024-0.391-1.414 0l-5.293 5.293-5.293-5.293c-0.391-0.391-1.024-0.391-1.414 0s-0.391 1.024 0 1.414z"),n.appendChild(r),n};function Qe(e,t){if("string"==typeof t){var n=e.document.querySelector(t);return"The element ".concat(JSON.stringify(t)," is not in the document."),n}return t}function $e(){for(var e=arguments.length,t=new Array(e),n=0;n2&&(u.children=arguments.length>3?lt.call(arguments,2):n),"function"==typeof e&&null!=e.defaultProps)for(i in e.defaultProps)void 0===u[i]&&(u[i]=e.defaultProps[i]);return _t(e,u,r,o,null)}function _t(e,t,n,r,o){var i={type:e,props:t,key:n,ref:r,__k:null,__:null,__b:0,__e:null,__d:void 0,__c:null,__h:null,constructor:void 0,__v:null==o?++pt:o};return null==o&&null!=st.vnode&&st.vnode(i),i}function Pt(e){return e.children}function jt(e,t){this.props=e,this.context=t}function wt(e,t){if(null==t)return e.__?wt(e.__,e.__.__k.indexOf(e)+1):null;for(var n;t0?_t(d.type,d.props,d.key,null,d.__v):d)){if(d.__=n,d.__b=n.__b+1,null===(f=g[s])||f&&d.key==f.key&&d.type===f.type)g[s]=void 0;else for(p=0;p0&&void 0!==arguments[0]?arguments[0]:[];return{get:function(){return e},add:function(t){var n=e[e.length-1];(null==n?void 0:n.isHighlighted)===t.isHighlighted?e[e.length-1]={value:n.value+t.value,isHighlighted:n.isHighlighted}:e.push(t)}}}(n?[{value:n,isHighlighted:!1}]:[]);return t.forEach((function(e){var t=e.split(Ht);r.add({value:t[0],isHighlighted:!0}),""!==t[1]&&r.add({value:t[1],isHighlighted:!1})})),r.get()}function Wt(e){return function(e){if(Array.isArray(e))return Qt(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||function(e,t){if(!e)return;if("string"==typeof e)return Qt(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);"Object"===n&&e.constructor&&(n=e.constructor.name);if("Map"===n||"Set"===n)return Array.from(e);if("Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n))return Qt(e,t)}(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function Qt(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n",""":'"',"'":"'"},Gt=new RegExp(/\w/i),Kt=/&(amp|quot|lt|gt|#39);/g,Jt=RegExp(Kt.source);function Yt(e,t){var n,r,o,i=e[t],u=(null===(n=e[t+1])||void 0===n?void 0:n.isHighlighted)||!0,a=(null===(r=e[t-1])||void 0===r?void 0:r.isHighlighted)||!0;return Gt.test((o=i.value)&&Jt.test(o)?o.replace(Kt,(function(e){return zt[e]})):o)||a!==u?i.isHighlighted:a}function Xt(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function Zt(e){for(var t=1;te.length)&&(t=e.length);for(var n=0,r=new Array(t);n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function mn(e){return function(e){if(Array.isArray(e))return 
vn(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||function(e,t){if(!e)return;if("string"==typeof e)return vn(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);"Object"===n&&e.constructor&&(n=e.constructor.name);if("Map"===n||"Set"===n)return Array.from(e);if("Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n))return vn(e,t)}(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function vn(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n0;if(!O.value.core.openOnFocus&&!t.query)return n;var r=Boolean(h.current||O.value.renderer.renderNoResults);return!n&&r||n},__autocomplete_metadata:{userAgents:Sn,options:e}}))})),j=p(n({collections:[],completion:null,context:{},isOpen:!1,query:"",activeItemId:null,status:"idle"},O.value.core.initialState)),w={getEnvironmentProps:O.value.renderer.getEnvironmentProps,getFormProps:O.value.renderer.getFormProps,getInputProps:O.value.renderer.getInputProps,getItemProps:O.value.renderer.getItemProps,getLabelProps:O.value.renderer.getLabelProps,getListProps:O.value.renderer.getListProps,getPanelProps:O.value.renderer.getPanelProps,getRootProps:O.value.renderer.getRootProps},S={setActiveItemId:P.value.setActiveItemId,setQuery:P.value.setQuery,setCollections:P.value.setCollections,setIsOpen:P.value.setIsOpen,setStatus:P.value.setStatus,setContext:P.value.setContext,refresh:P.value.refresh},I=d((function(){return Ve.bind(O.value.renderer.renderer.createElement)})),E=d((function(){return ct({autocomplete:P.value,autocompleteScopeApi:S,classNames:O.value.renderer.classNames,environment:O.value.core.environment,isDetached:_.value,placeholder:O.value.core.placeholder,propGetters:w,setIsModalOpen:k,state:j.current,translations:O.value.renderer.translations})}));function A(){tt(E.value.panel,{style:_.value?{}:wn({panelPlacement:O.value.renderer.panelPlacement,container:E.value.root,form:E.value.form,environment:O.value.core.environment})})}function C(e){j.current=e;var t={autocomplete:P.value,autocompleteScopeApi:S,classNames:O.value.renderer.classNames,components:O.value.renderer.components,container:O.value.renderer.container,html:I.value,dom:E.value,panelContainer:_.value?E.value.detachedContainer:O.value.renderer.panelContainer,propGetters:w,state:j.current,renderer:O.value.renderer.renderer},r=!g(e)&&!h.current&&O.value.renderer.renderNoResults||O.value.renderer.render;!function(e){var t=e.autocomplete,r=e.autocompleteScopeApi,o=e.dom,i=e.propGetters,u=e.state;nt(o.root,i.getRootProps(n({state:u,props:t.getRootProps({})},r))),nt(o.input,i.getInputProps(n({state:u,props:t.getInputProps({inputElement:o.input}),inputElement:o.input},r))),tt(o.label,{hidden:"stalled"===u.status}),tt(o.loadingIndicator,{hidden:"stalled"!==u.status}),tt(o.clearButton,{hidden:!u.query})}(t),function(e,t){var r=t.autocomplete,o=t.autocompleteScopeApi,u=t.classNames,a=t.html,c=t.dom,l=t.panelContainer,s=t.propGetters,p=t.state,f=t.components,d=t.renderer;if(p.isOpen){l.contains(c.panel)||"loading"===p.status||l.appendChild(c.panel),c.panel.classList.toggle("aa-Panel--stalled","stalled"===p.status);var m=p.collections.filter((function(e){var t=e.source,n=e.items;return t.templates.noResults||n.length>0})).map((function(e,t){var c=e.source,l=e.items;return 
d.createElement("section",{key:t,className:u.source,"data-autocomplete-source-id":c.sourceId},c.templates.header&&d.createElement("div",{className:u.sourceHeader},c.templates.header({components:f,createElement:d.createElement,Fragment:d.Fragment,items:l,source:c,state:p,html:a})),c.templates.noResults&&0===l.length?d.createElement("div",{className:u.sourceNoResults},c.templates.noResults({components:f,createElement:d.createElement,Fragment:d.Fragment,source:c,state:p,html:a})):d.createElement("ul",i({className:u.list},s.getListProps(n({state:p,props:r.getListProps({})},o))),l.map((function(e){var t=r.getItemProps({item:e,source:c});return d.createElement("li",i({key:t.id,className:u.item},s.getItemProps(n({state:p,props:t},o))),c.templates.item({components:f,createElement:d.createElement,Fragment:d.Fragment,item:e,state:p,html:a}))}))),c.templates.footer&&d.createElement("div",{className:u.sourceFooter},c.templates.footer({components:f,createElement:d.createElement,Fragment:d.Fragment,items:l,source:c,state:p,html:a})))})),v=d.createElement(d.Fragment,null,d.createElement("div",{className:u.panelLayout},m),d.createElement("div",{className:"aa-GradientBottom"})),h=m.reduce((function(e,t){return e[t.props["data-autocomplete-source-id"]]=t,e}),{});e(n(n({children:v,state:p,sections:m,elements:h},d),{},{components:f,html:a},o),c.panel)}else l.contains(c.panel)&&l.removeChild(c.panel)}(r,t)}function D(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};c();var t=O.value.renderer,n=t.components,r=u(t,In);y.current=Ge(r,O.value.core,{components:Ke(n,(function(e){return!e.value.hasOwnProperty("__autocomplete_componentName")})),initialState:j.current},e),m(),l(),P.value.refresh().then((function(){C(j.current)}))}function k(e){requestAnimationFrame((function(){var t=O.value.core.environment.document.body.contains(E.value.detachedOverlay);e!==t&&(e?(O.value.core.environment.document.body.appendChild(E.value.detachedOverlay),O.value.core.environment.document.body.classList.add("aa-Detached"),E.value.input.focus()):(O.value.core.environment.document.body.removeChild(E.value.detachedOverlay),O.value.core.environment.document.body.classList.remove("aa-Detached"),P.value.setQuery(""),P.value.refresh()))}))}return a((function(){var e=P.value.getEnvironmentProps({formElement:E.value.form,panelElement:E.value.panel,inputElement:E.value.input});return tt(O.value.core.environment,e),function(){tt(O.value.core.environment,Object.keys(e).reduce((function(e,t){return n(n({},e),{},o({},t,void 0))}),{}))}})),a((function(){var e=_.value?O.value.core.environment.document.body:O.value.renderer.panelContainer,t=_.value?E.value.detachedOverlay:E.value.panel;return _.value&&j.current.isOpen&&k(!0),C(j.current),function(){e.contains(t)&&e.removeChild(t)}})),a((function(){var e=O.value.renderer.container;return e.appendChild(E.value.root),function(){e.removeChild(E.value.root)}})),a((function(){var e=f((function(e){C(e.state)}),0);return b.current=function(t){var n=t.state,r=t.prevState;(_.value&&r.isOpen!==n.isOpen&&k(n.isOpen),_.value||!n.isOpen||r.isOpen||A(),n.query!==r.query)&&O.value.core.environment.document.querySelectorAll(".aa-Panel--scrollable").forEach((function(e){0!==e.scrollTop&&(e.scrollTop=0)}));e({state:n})},function(){b.current=void 0}})),a((function(){var e=f((function(){var e=_.value;_.value=O.value.core.environment.matchMedia(O.value.renderer.detachedMediaQuery).matches,e!==_.value?D({}):requestAnimationFrame(A)}),20);return 
O.value.core.environment.addEventListener("resize",e),function(){O.value.core.environment.removeEventListener("resize",e)}})),a((function(){if(!_.value)return function(){};function e(e){E.value.detachedContainer.classList.toggle("aa-DetachedContainer--modal",e)}function t(t){e(t.matches)}var n=O.value.core.environment.matchMedia(getComputedStyle(O.value.core.environment.document.documentElement).getPropertyValue("--aa-detached-modal-media-query"));e(n.matches);var r=Boolean(n.addEventListener);return r?n.addEventListener("change",t):n.addListener(t),function(){r?n.removeEventListener("change",t):n.removeListener(t)}})),a((function(){return requestAnimationFrame(A),function(){}})),n(n({},S),{},{update:D,destroy:function(){c()}})},e.getAlgoliaFacets=function(e){var t=En({transformResponse:function(e){return e.facetHits}}),r=e.queries.map((function(e){return n(n({},e),{},{type:"facet"})}));return t(n(n({},e),{},{queries:r}))},e.getAlgoliaResults=An,Object.defineProperty(e,"__esModule",{value:!0})})); + diff --git a/r-book/site_libs/quarto-search/fuse.min.js b/r-book/site_libs/quarto-search/fuse.min.js new file mode 100644 index 00000000..adc28356 --- /dev/null +++ b/r-book/site_libs/quarto-search/fuse.min.js @@ -0,0 +1,9 @@ +/** + * Fuse.js v6.6.2 - Lightweight fuzzy-search (http://fusejs.io) + * + * Copyright (c) 2022 Kiro Risk (http://kiro.me) + * All Rights Reserved. Apache Software License 2.0 + * + * http://www.apache.org/licenses/LICENSE-2.0 + */ +var e,t;e=this,t=function(){"use strict";function e(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function t(t){for(var n=1;ne.length)&&(t=e.length);for(var n=0,r=new Array(t);n0&&void 0!==arguments[0]?arguments[0]:1,t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:3,n=new Map,r=Math.pow(10,t);return{get:function(t){var i=t.match(C).length;if(n.has(i))return n.get(i);var o=1/Math.pow(i,.5*e),c=parseFloat(Math.round(o*r)/r);return n.set(i,c),c},clear:function(){n.clear()}}}var $=function(){function e(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},n=t.getFn,i=void 0===n?I.getFn:n,o=t.fieldNormWeight,c=void 0===o?I.fieldNormWeight:o;r(this,e),this.norm=E(c,3),this.getFn=i,this.isCreated=!1,this.setIndexRecords()}return o(e,[{key:"setSources",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[];this.docs=e}},{key:"setIndexRecords",value:function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[];this.records=e}},{key:"setKeys",value:function(){var e=this,t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[];this.keys=t,this._keysMap={},t.forEach((function(t,n){e._keysMap[t.id]=n}))}},{key:"create",value:function(){var e=this;!this.isCreated&&this.docs.length&&(this.isCreated=!0,g(this.docs[0])?this.docs.forEach((function(t,n){e._addString(t,n)})):this.docs.forEach((function(t,n){e._addObject(t,n)})),this.norm.clear())}},{key:"add",value:function(e){var t=this.size();g(e)?this._addString(e,t):this._addObject(e,t)}},{key:"removeAt",value:function(e){this.records.splice(e,1);for(var t=e,n=this.size();t2&&void 0!==arguments[2]?arguments[2]:{},r=n.getFn,i=void 0===r?I.getFn:r,o=n.fieldNormWeight,c=void 0===o?I.fieldNormWeight:o,a=new $({getFn:i,fieldNormWeight:c});return a.setKeys(e.map(_)),a.setSources(t),a.create(),a}function R(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},n=t.errors,r=void 
0===n?0:n,i=t.currentLocation,o=void 0===i?0:i,c=t.expectedLocation,a=void 0===c?0:c,s=t.distance,u=void 0===s?I.distance:s,h=t.ignoreLocation,l=void 0===h?I.ignoreLocation:h,f=r/e.length;if(l)return f;var d=Math.abs(a-o);return u?f+d/u:d?1:f}function N(){for(var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:[],t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:I.minMatchCharLength,n=[],r=-1,i=-1,o=0,c=e.length;o=t&&n.push([r,i]),r=-1)}return e[o-1]&&o-r>=t&&n.push([r,o-1]),n}var P=32;function W(e){for(var t={},n=0,r=e.length;n1&&void 0!==arguments[1]?arguments[1]:{},o=i.location,c=void 0===o?I.location:o,a=i.threshold,s=void 0===a?I.threshold:a,u=i.distance,h=void 0===u?I.distance:u,l=i.includeMatches,f=void 0===l?I.includeMatches:l,d=i.findAllMatches,v=void 0===d?I.findAllMatches:d,g=i.minMatchCharLength,y=void 0===g?I.minMatchCharLength:g,p=i.isCaseSensitive,m=void 0===p?I.isCaseSensitive:p,k=i.ignoreLocation,M=void 0===k?I.ignoreLocation:k;if(r(this,e),this.options={location:c,threshold:s,distance:h,includeMatches:f,findAllMatches:v,minMatchCharLength:y,isCaseSensitive:m,ignoreLocation:M},this.pattern=m?t:t.toLowerCase(),this.chunks=[],this.pattern.length){var b=function(e,t){n.chunks.push({pattern:e,alphabet:W(e),startIndex:t})},x=this.pattern.length;if(x>P){for(var w=0,L=x%P,S=x-L;w3&&void 0!==arguments[3]?arguments[3]:{},i=r.location,o=void 0===i?I.location:i,c=r.distance,a=void 0===c?I.distance:c,s=r.threshold,u=void 0===s?I.threshold:s,h=r.findAllMatches,l=void 0===h?I.findAllMatches:h,f=r.minMatchCharLength,d=void 0===f?I.minMatchCharLength:f,v=r.includeMatches,g=void 0===v?I.includeMatches:v,y=r.ignoreLocation,p=void 0===y?I.ignoreLocation:y;if(t.length>P)throw new Error(w(P));for(var m,k=t.length,M=e.length,b=Math.max(0,Math.min(o,M)),x=u,L=b,S=d>1||g,_=S?Array(M):[];(m=e.indexOf(t,L))>-1;){var O=R(t,{currentLocation:m,expectedLocation:b,distance:a,ignoreLocation:p});if(x=Math.min(O,x),L=m+k,S)for(var j=0;j=z;q-=1){var B=q-1,J=n[e.charAt(B)];if(S&&(_[B]=+!!J),K[q]=(K[q+1]<<1|1)&J,F&&(K[q]|=(A[q+1]|A[q])<<1|1|A[q+1]),K[q]&$&&(C=R(t,{errors:F,currentLocation:B,expectedLocation:b,distance:a,ignoreLocation:p}))<=x){if(x=C,(L=B)<=b)break;z=Math.max(1,2*b-L)}}if(R(t,{errors:F+1,currentLocation:b,expectedLocation:b,distance:a,ignoreLocation:p})>x)break;A=K}var U={isMatch:L>=0,score:Math.max(.001,C)};if(S){var V=N(_,d);V.length?g&&(U.indices=V):U.isMatch=!1}return U}(e,n,i,{location:c+o,distance:a,threshold:s,findAllMatches:u,minMatchCharLength:h,includeMatches:r,ignoreLocation:l}),p=y.isMatch,m=y.score,k=y.indices;p&&(g=!0),v+=m,p&&k&&(d=[].concat(f(d),f(k)))}));var y={isMatch:g,score:g?v/this.chunks.length:1};return g&&r&&(y.indices=d),y}}]),e}(),z=function(){function e(t){r(this,e),this.pattern=t}return o(e,[{key:"search",value:function(){}}],[{key:"isMultiMatch",value:function(e){return D(e,this.multiRegex)}},{key:"isSingleMatch",value:function(e){return D(e,this.singleRegex)}}]),e}();function D(e,t){var n=e.match(t);return n?n[1]:null}var K=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){var t=e===this.pattern;return{isMatch:t,score:t?0:1,indices:[0,this.pattern.length-1]}}}],[{key:"type",get:function(){return"exact"}},{key:"multiRegex",get:function(){return/^="(.*)"$/}},{key:"singleRegex",get:function(){return/^=(.*)$/}}]),n}(z),q=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){var 
t=-1===e.indexOf(this.pattern);return{isMatch:t,score:t?0:1,indices:[0,e.length-1]}}}],[{key:"type",get:function(){return"inverse-exact"}},{key:"multiRegex",get:function(){return/^!"(.*)"$/}},{key:"singleRegex",get:function(){return/^!(.*)$/}}]),n}(z),B=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){var t=e.startsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[0,this.pattern.length-1]}}}],[{key:"type",get:function(){return"prefix-exact"}},{key:"multiRegex",get:function(){return/^\^"(.*)"$/}},{key:"singleRegex",get:function(){return/^\^(.*)$/}}]),n}(z),J=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){var t=!e.startsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[0,e.length-1]}}}],[{key:"type",get:function(){return"inverse-prefix-exact"}},{key:"multiRegex",get:function(){return/^!\^"(.*)"$/}},{key:"singleRegex",get:function(){return/^!\^(.*)$/}}]),n}(z),U=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){var t=e.endsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[e.length-this.pattern.length,e.length-1]}}}],[{key:"type",get:function(){return"suffix-exact"}},{key:"multiRegex",get:function(){return/^"(.*)"\$$/}},{key:"singleRegex",get:function(){return/^(.*)\$$/}}]),n}(z),V=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){var t=!e.endsWith(this.pattern);return{isMatch:t,score:t?0:1,indices:[0,e.length-1]}}}],[{key:"type",get:function(){return"inverse-suffix-exact"}},{key:"multiRegex",get:function(){return/^!"(.*)"\$$/}},{key:"singleRegex",get:function(){return/^!(.*)\$$/}}]),n}(z),G=function(e){a(n,e);var t=l(n);function n(e){var i,o=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},c=o.location,a=void 0===c?I.location:c,s=o.threshold,u=void 0===s?I.threshold:s,h=o.distance,l=void 0===h?I.distance:h,f=o.includeMatches,d=void 0===f?I.includeMatches:f,v=o.findAllMatches,g=void 0===v?I.findAllMatches:v,y=o.minMatchCharLength,p=void 0===y?I.minMatchCharLength:y,m=o.isCaseSensitive,k=void 0===m?I.isCaseSensitive:m,M=o.ignoreLocation,b=void 0===M?I.ignoreLocation:M;return r(this,n),(i=t.call(this,e))._bitapSearch=new T(e,{location:a,threshold:u,distance:l,includeMatches:d,findAllMatches:g,minMatchCharLength:p,isCaseSensitive:k,ignoreLocation:b}),i}return o(n,[{key:"search",value:function(e){return this._bitapSearch.searchIn(e)}}],[{key:"type",get:function(){return"fuzzy"}},{key:"multiRegex",get:function(){return/^"(.*)"$/}},{key:"singleRegex",get:function(){return/^(.*)$/}}]),n}(z),H=function(e){a(n,e);var t=l(n);function n(e){return r(this,n),t.call(this,e)}return o(n,[{key:"search",value:function(e){for(var t,n=0,r=[],i=this.pattern.length;(t=e.indexOf(this.pattern,n))>-1;)n=t+i,r.push([t,n-1]);var o=!!r.length;return{isMatch:o,score:o?0:1,indices:r}}}],[{key:"type",get:function(){return"include"}},{key:"multiRegex",get:function(){return/^'"(.*)"$/}},{key:"singleRegex",get:function(){return/^'(.*)$/}}]),n}(z),Q=[K,H,B,J,V,U,q,G],X=Q.length,Y=/ +(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)/;function Z(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{};return e.split("|").map((function(e){for(var n=e.trim().split(Y).filter((function(e){return e&&!!e.trim()})),r=[],i=0,o=n.length;i1&&void 0!==arguments[1]?arguments[1]:{},i=n.isCaseSensitive,o=void 
0===i?I.isCaseSensitive:i,c=n.includeMatches,a=void 0===c?I.includeMatches:c,s=n.minMatchCharLength,u=void 0===s?I.minMatchCharLength:s,h=n.ignoreLocation,l=void 0===h?I.ignoreLocation:h,f=n.findAllMatches,d=void 0===f?I.findAllMatches:f,v=n.location,g=void 0===v?I.location:v,y=n.threshold,p=void 0===y?I.threshold:y,m=n.distance,k=void 0===m?I.distance:m;r(this,e),this.query=null,this.options={isCaseSensitive:o,includeMatches:a,minMatchCharLength:u,findAllMatches:d,ignoreLocation:l,location:g,threshold:p,distance:k},this.pattern=o?t:t.toLowerCase(),this.query=Z(this.pattern,this.options)}return o(e,[{key:"searchIn",value:function(e){var t=this.query;if(!t)return{isMatch:!1,score:1};var n=this.options,r=n.includeMatches;e=n.isCaseSensitive?e:e.toLowerCase();for(var i=0,o=[],c=0,a=0,s=t.length;a-1&&(n.refIndex=e.idx),t.matches.push(n)}}))}function ve(e,t){t.score=e.score}function ge(e,t){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:{},r=n.includeMatches,i=void 0===r?I.includeMatches:r,o=n.includeScore,c=void 0===o?I.includeScore:o,a=[];return i&&a.push(de),c&&a.push(ve),e.map((function(e){var n=e.idx,r={item:t[n],refIndex:n};return a.length&&a.forEach((function(t){t(e,r)})),r}))}var ye=function(){function e(n){var i=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},o=arguments.length>2?arguments[2]:void 0;r(this,e),this.options=t(t({},I),i),this.options.useExtendedSearch,this._keyStore=new S(this.options.keys),this.setCollection(n,o)}return o(e,[{key:"setCollection",value:function(e,t){if(this._docs=e,t&&!(t instanceof $))throw new Error("Incorrect 'index' type");this._myIndex=t||F(this.options.keys,this._docs,{getFn:this.options.getFn,fieldNormWeight:this.options.fieldNormWeight})}},{key:"add",value:function(e){k(e)&&(this._docs.push(e),this._myIndex.add(e))}},{key:"remove",value:function(){for(var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:function(){return!1},t=[],n=0,r=this._docs.length;n1&&void 0!==arguments[1]?arguments[1]:{},n=t.limit,r=void 0===n?-1:n,i=this.options,o=i.includeMatches,c=i.includeScore,a=i.shouldSort,s=i.sortFn,u=i.ignoreFieldNorm,h=g(e)?g(this._docs[0])?this._searchStringList(e):this._searchObjectList(e):this._searchLogical(e);return fe(h,{ignoreFieldNorm:u}),a&&h.sort(s),y(r)&&r>-1&&(h=h.slice(0,r)),ge(h,this._docs,{includeMatches:o,includeScore:c})}},{key:"_searchStringList",value:function(e){var t=re(e,this.options),n=this._myIndex.records,r=[];return n.forEach((function(e){var n=e.v,i=e.i,o=e.n;if(k(n)){var c=t.searchIn(n),a=c.isMatch,s=c.score,u=c.indices;a&&r.push({item:n,idx:i,matches:[{score:s,value:n,norm:o,indices:u}]})}})),r}},{key:"_searchLogical",value:function(e){var t=this,n=function(e,t){var n=(arguments.length>2&&void 0!==arguments[2]?arguments[2]:{}).auto,r=void 0===n||n,i=function e(n){var i=Object.keys(n),o=ue(n);if(!o&&i.length>1&&!se(n))return e(le(n));if(he(n)){var c=o?n[ce]:i[0],a=o?n[ae]:n[c];if(!g(a))throw new Error(x(c));var s={keyId:j(c),pattern:a};return r&&(s.searcher=re(a,t)),s}var u={children:[],operator:i[0]};return i.forEach((function(t){var r=n[t];v(r)&&r.forEach((function(t){u.children.push(e(t))}))})),u};return se(e)||(e=le(e)),i(e)}(e,this.options),r=function e(n,r,i){if(!n.children){var o=n.keyId,c=n.searcher,a=t._findMatches({key:t._keyStore.get(o),value:t._myIndex.getValueForItemAtKeyId(r,o),searcher:c});return a&&a.length?[{idx:i,item:r,matches:a}]:[]}for(var s=[],u=0,h=n.children.length;u1&&void 0!==arguments[1]?arguments[1]:{},n=t.getFn,r=void 0===n?I.getFn:n,i=t.fieldNormWeight,o=void 
0===i?I.fieldNormWeight:i,c=e.keys,a=e.records,s=new $({getFn:r,fieldNormWeight:o});return s.setKeys(c),s.setIndexRecords(a),s},ye.config=I,function(){ne.push.apply(ne,arguments)}(te),ye},"object"==typeof exports&&"undefined"!=typeof module?module.exports=t():"function"==typeof define&&define.amd?define(t):(e="undefined"!=typeof globalThis?globalThis:e||self).Fuse=t(); \ No newline at end of file diff --git a/r-book/site_libs/quarto-search/quarto-search.js b/r-book/site_libs/quarto-search/quarto-search.js new file mode 100644 index 00000000..f5d852d1 --- /dev/null +++ b/r-book/site_libs/quarto-search/quarto-search.js @@ -0,0 +1,1140 @@ +const kQueryArg = "q"; +const kResultsArg = "show-results"; + +// If items don't provide a URL, then both the navigator and the onSelect +// function aren't called (and therefore, the default implementation is used) +// +// We're using this sentinel URL to signal to those handlers that this +// item is a more item (along with the type) and can be handled appropriately +const kItemTypeMoreHref = "0767FDFD-0422-4E5A-BC8A-3BE11E5BBA05"; + +window.document.addEventListener("DOMContentLoaded", function (_event) { + // Ensure that search is available on this page. If it isn't, + // should return early and not do anything + var searchEl = window.document.getElementById("quarto-search"); + if (!searchEl) return; + + const { autocomplete } = window["@algolia/autocomplete-js"]; + + let quartoSearchOptions = {}; + let language = {}; + const searchOptionEl = window.document.getElementById( + "quarto-search-options" + ); + if (searchOptionEl) { + const jsonStr = searchOptionEl.textContent; + quartoSearchOptions = JSON.parse(jsonStr); + language = quartoSearchOptions.language; + } + + // note the search mode + if (quartoSearchOptions.type === "overlay") { + searchEl.classList.add("type-overlay"); + } else { + searchEl.classList.add("type-textbox"); + } + + // Used to determine highlighting behavior for this page + // A `q` query param is expected when the user follows a search + // to this page + const currentUrl = new URL(window.location); + const query = currentUrl.searchParams.get(kQueryArg); + const showSearchResults = currentUrl.searchParams.get(kResultsArg); + const mainEl = window.document.querySelector("main"); + + // highlight matches on the page + if (query !== null && mainEl) { + // perform any highlighting + highlight(escapeRegExp(query), mainEl); + + // fix up the URL to remove the q query param + const replacementUrl = new URL(window.location); + replacementUrl.searchParams.delete(kQueryArg); + window.history.replaceState({}, "", replacementUrl); + } + + // function to clear highlighting on the page when the search query changes + // (e.g. 
if the user edits the query or clears it) + let highlighting = true; + const resetHighlighting = (searchTerm) => { + if (mainEl && highlighting && query !== null && searchTerm !== query) { + clearHighlight(query, mainEl); + highlighting = false; + } + }; + + // Clear search highlighting when the user scrolls sufficiently + const resetFn = () => { + resetHighlighting(""); + window.removeEventListener("quarto-hrChanged", resetFn); + window.removeEventListener("quarto-sectionChanged", resetFn); + }; + + // Register this event after the initial scrolling and settling of events + // on the page + window.addEventListener("quarto-hrChanged", resetFn); + window.addEventListener("quarto-sectionChanged", resetFn); + + // Responsively switch to overlay mode if the search is present on the navbar + // Note that switching the sidebar to overlay mode requires more coordinate (not just + // the media query since we generate different HTML for sidebar overlays than we do + // for sidebar input UI) + const detachedMediaQuery = + quartoSearchOptions.type === "overlay" ? "all" : "(max-width: 991px)"; + + // If configured, include the analytics client to send insights + const plugins = configurePlugins(quartoSearchOptions); + + let lastState = null; + const { setIsOpen, setQuery, setCollections } = autocomplete({ + container: searchEl, + detachedMediaQuery: detachedMediaQuery, + defaultActiveItemId: 0, + panelContainer: "#quarto-search-results", + panelPlacement: quartoSearchOptions["panel-placement"], + debug: false, + openOnFocus: true, + plugins, + classNames: { + form: "d-flex", + }, + translations: { + clearButtonTitle: language["search-clear-button-title"], + detachedCancelButtonText: language["search-detached-cancel-button-title"], + submitButtonTitle: language["search-submit-button-title"], + }, + initialState: { + query, + }, + getItemUrl({ item }) { + return item.href; + }, + onStateChange({ state }) { + // Perhaps reset highlighting + resetHighlighting(state.query); + + // If the panel just opened, ensure the panel is positioned properly + if (state.isOpen) { + if (lastState && !lastState.isOpen) { + setTimeout(() => { + positionPanel(quartoSearchOptions["panel-placement"]); + }, 150); + } + } + + // Perhaps show the copy link + showCopyLink(state.query, quartoSearchOptions); + + lastState = state; + }, + reshape({ sources, state }) { + return sources.map((source) => { + try { + const items = source.getItems(); + + // Validate the items + validateItems(items); + + // group the items by document + const groupedItems = new Map(); + items.forEach((item) => { + const hrefParts = item.href.split("#"); + const baseHref = hrefParts[0]; + const isDocumentItem = hrefParts.length === 1; + + const items = groupedItems.get(baseHref); + if (!items) { + groupedItems.set(baseHref, [item]); + } else { + // If the href for this item matches the document + // exactly, place this item first as it is the item that represents + // the document itself + if (isDocumentItem) { + items.unshift(item); + } else { + items.push(item); + } + groupedItems.set(baseHref, items); + } + }); + + const reshapedItems = []; + let count = 1; + for (const [_key, value] of groupedItems) { + const firstItem = value[0]; + reshapedItems.push({ + ...firstItem, + type: kItemTypeDoc, + }); + + const collapseMatches = quartoSearchOptions["collapse-after"]; + const collapseCount = + typeof collapseMatches === "number" ? 
collapseMatches : 1; + + if (value.length > 1) { + const target = `search-more-${count}`; + const isExpanded = + state.context.expanded && + state.context.expanded.includes(target); + + const remainingCount = value.length - collapseCount; + + for (let i = 1; i < value.length; i++) { + if (collapseMatches && i === collapseCount) { + reshapedItems.push({ + target, + title: isExpanded + ? language["search-hide-matches-text"] + : remainingCount === 1 + ? `${remainingCount} ${language["search-more-match-text"]}` + : `${remainingCount} ${language["search-more-matches-text"]}`, + type: kItemTypeMore, + href: kItemTypeMoreHref, + }); + } + + if (isExpanded || !collapseMatches || i < collapseCount) { + reshapedItems.push({ + ...value[i], + type: kItemTypeItem, + target, + }); + } + } + } + count += 1; + } + + return { + ...source, + getItems() { + return reshapedItems; + }, + }; + } catch (error) { + // Some form of error occurred + return { + ...source, + getItems() { + return [ + { + title: error.name || "An Error Occurred While Searching", + text: + error.message || + "An unknown error occurred while attempting to perform the requested search.", + type: kItemTypeError, + }, + ]; + }, + }; + } + }); + }, + navigator: { + navigate({ itemUrl }) { + if (itemUrl !== offsetURL(kItemTypeMoreHref)) { + window.location.assign(itemUrl); + } + }, + navigateNewTab({ itemUrl }) { + if (itemUrl !== offsetURL(kItemTypeMoreHref)) { + const windowReference = window.open(itemUrl, "_blank", "noopener"); + if (windowReference) { + windowReference.focus(); + } + } + }, + navigateNewWindow({ itemUrl }) { + if (itemUrl !== offsetURL(kItemTypeMoreHref)) { + window.open(itemUrl, "_blank", "noopener"); + } + }, + }, + getSources({ state, setContext, setActiveItemId, refresh }) { + return [ + { + sourceId: "documents", + getItemUrl({ item }) { + if (item.href) { + return offsetURL(item.href); + } else { + return undefined; + } + }, + onSelect({ + item, + state, + setContext, + setIsOpen, + setActiveItemId, + refresh, + }) { + if (item.type === kItemTypeMore) { + toggleExpanded(item, state, setContext, setActiveItemId, refresh); + + // Toggle more + setIsOpen(true); + } + }, + getItems({ query }) { + if (query === null || query === "") { + return []; + } + + const limit = quartoSearchOptions.limit; + if (quartoSearchOptions.algolia) { + return algoliaSearch(query, limit, quartoSearchOptions.algolia); + } else { + // Fuse search options + const fuseSearchOptions = { + isCaseSensitive: false, + shouldSort: true, + minMatchCharLength: 2, + limit: limit, + }; + + return readSearchData().then(function (fuse) { + return fuseSearch(query, fuse, fuseSearchOptions); + }); + } + }, + templates: { + noResults({ createElement }) { + const hasQuery = lastState.query; + + return createElement( + "div", + { + class: `quarto-search-no-results${ + hasQuery ? 
"" : " no-query" + }`, + }, + language["search-no-results-text"] + ); + }, + header({ items, createElement }) { + // count the documents + const count = items.filter((item) => { + return item.type === kItemTypeDoc; + }).length; + + if (count > 0) { + return createElement( + "div", + { class: "search-result-header" }, + `${count} ${language["search-matching-documents-text"]}` + ); + } else { + return createElement( + "div", + { class: "search-result-header-no-results" }, + `` + ); + } + }, + footer({ _items, createElement }) { + if ( + quartoSearchOptions.algolia && + quartoSearchOptions.algolia["show-logo"] + ) { + const libDir = quartoSearchOptions.algolia["libDir"]; + const logo = createElement("img", { + src: offsetURL( + `${libDir}/quarto-search/search-by-algolia.svg` + ), + class: "algolia-search-logo", + }); + return createElement( + "a", + { href: "http://www.algolia.com/" }, + logo + ); + } + }, + + item({ item, createElement }) { + return renderItem( + item, + createElement, + state, + setActiveItemId, + setContext, + refresh + ); + }, + }, + }, + ]; + }, + }); + + window.quartoOpenSearch = () => { + setIsOpen(false); + setIsOpen(true); + focusSearchInput(); + }; + + // Remove the labeleledby attribute since it is pointing + // to a non-existent label + if (quartoSearchOptions.type === "overlay") { + const inputEl = window.document.querySelector( + "#quarto-search .aa-Autocomplete" + ); + if (inputEl) { + inputEl.removeAttribute("aria-labelledby"); + } + } + + // If the main document scrolls dismiss the search results + // (otherwise, since they're floating in the document they can scroll with the document) + window.document.body.onscroll = () => { + setIsOpen(false); + }; + + if (showSearchResults) { + setIsOpen(true); + focusSearchInput(); + } +}); + +function configurePlugins(quartoSearchOptions) { + const autocompletePlugins = []; + const algoliaOptions = quartoSearchOptions.algolia; + if ( + algoliaOptions && + algoliaOptions["analytics-events"] && + algoliaOptions["search-only-api-key"] && + algoliaOptions["application-id"] + ) { + const apiKey = algoliaOptions["search-only-api-key"]; + const appId = algoliaOptions["application-id"]; + + // Aloglia insights may not be loaded because they require cookie consent + // Use deferred loading so events will start being recorded when/if consent + // is granted. + const algoliaInsightsDeferredPlugin = deferredLoadPlugin(() => { + if ( + window.aa && + window["@algolia/autocomplete-plugin-algolia-insights"] + ) { + window.aa("init", { + appId, + apiKey, + useCookie: true, + }); + + const { createAlgoliaInsightsPlugin } = + window["@algolia/autocomplete-plugin-algolia-insights"]; + // Register the insights client + const algoliaInsightsPlugin = createAlgoliaInsightsPlugin({ + insightsClient: window.aa, + onItemsChange({ insights, insightsEvents }) { + const events = insightsEvents.map((event) => { + const maxEvents = event.objectIDs.slice(0, 20); + return { + ...event, + objectIDs: maxEvents, + }; + }); + + insights.viewedObjectIDs(...events); + }, + }); + return algoliaInsightsPlugin; + } + }); + + // Add the plugin + autocompletePlugins.push(algoliaInsightsDeferredPlugin); + return autocompletePlugins; + } +} + +// For plugins that may not load immediately, create a wrapper +// plugin and forward events and plugin data once the plugin +// is initialized. This is useful for cases like cookie consent +// which may prevent the analytics insights event plugin from initializing +// immediately. 
+function deferredLoadPlugin(createPlugin) { + let plugin = undefined; + let subscribeObj = undefined; + const wrappedPlugin = () => { + if (!plugin && subscribeObj) { + plugin = createPlugin(); + if (plugin && plugin.subscribe) { + plugin.subscribe(subscribeObj); + } + } + return plugin; + }; + + return { + subscribe: (obj) => { + subscribeObj = obj; + }, + onStateChange: (obj) => { + const plugin = wrappedPlugin(); + if (plugin && plugin.onStateChange) { + plugin.onStateChange(obj); + } + }, + onSubmit: (obj) => { + const plugin = wrappedPlugin(); + if (plugin && plugin.onSubmit) { + plugin.onSubmit(obj); + } + }, + onReset: (obj) => { + const plugin = wrappedPlugin(); + if (plugin && plugin.onReset) { + plugin.onReset(obj); + } + }, + getSources: (obj) => { + const plugin = wrappedPlugin(); + if (plugin && plugin.getSources) { + return plugin.getSources(obj); + } else { + return Promise.resolve([]); + } + }, + data: (obj) => { + const plugin = wrappedPlugin(); + if (plugin && plugin.data) { + plugin.data(obj); + } + }, + }; +} + +function validateItems(items) { + // Validate the first item + if (items.length > 0) { + const item = items[0]; + const missingFields = []; + if (item.href == undefined) { + missingFields.push("href"); + } + if (!item.title == undefined) { + missingFields.push("title"); + } + if (!item.text == undefined) { + missingFields.push("text"); + } + + if (missingFields.length === 1) { + throw { + name: `Error: Search index is missing the ${missingFields[0]} field.`, + message: `The items being returned for this search do not include all the required fields. Please ensure that your index items include the ${missingFields[0]} field or use index-fields in your _quarto.yml file to specify the field names.`, + }; + } else if (missingFields.length > 1) { + const missingFieldList = missingFields + .map((field) => { + return `${field}`; + }) + .join(", "); + + throw { + name: `Error: Search index is missing the following fields: ${missingFieldList}.`, + message: `The items being returned for this search do not include all the required fields. 
Please ensure that your index items includes the following fields: ${missingFieldList}, or use index-fields in your _quarto.yml file to specify the field names.`, + }; + } + } +} + +let lastQuery = null; +function showCopyLink(query, options) { + const language = options.language; + lastQuery = query; + // Insert share icon + const inputSuffixEl = window.document.body.querySelector( + ".aa-Form .aa-InputWrapperSuffix" + ); + + if (inputSuffixEl) { + let copyButtonEl = window.document.body.querySelector( + ".aa-Form .aa-InputWrapperSuffix .aa-CopyButton" + ); + + if (copyButtonEl === null) { + copyButtonEl = window.document.createElement("button"); + copyButtonEl.setAttribute("class", "aa-CopyButton"); + copyButtonEl.setAttribute("type", "button"); + copyButtonEl.setAttribute("title", language["search-copy-link-title"]); + copyButtonEl.onmousedown = (e) => { + e.preventDefault(); + e.stopPropagation(); + }; + + const linkIcon = "bi-clipboard"; + const checkIcon = "bi-check2"; + + const shareIconEl = window.document.createElement("i"); + shareIconEl.setAttribute("class", `bi ${linkIcon}`); + copyButtonEl.appendChild(shareIconEl); + inputSuffixEl.prepend(copyButtonEl); + + const clipboard = new window.ClipboardJS(".aa-CopyButton", { + text: function (_trigger) { + const copyUrl = new URL(window.location); + copyUrl.searchParams.set(kQueryArg, lastQuery); + copyUrl.searchParams.set(kResultsArg, "1"); + return copyUrl.toString(); + }, + }); + clipboard.on("success", function (e) { + // Focus the input + + // button target + const button = e.trigger; + const icon = button.querySelector("i.bi"); + + // flash "checked" + icon.classList.add(checkIcon); + icon.classList.remove(linkIcon); + setTimeout(function () { + icon.classList.remove(checkIcon); + icon.classList.add(linkIcon); + }, 1000); + }); + } + + // If there is a query, show the link icon + if (copyButtonEl) { + if (lastQuery && options["copy-button"]) { + copyButtonEl.style.display = "flex"; + } else { + copyButtonEl.style.display = "none"; + } + } + } +} + +/* Search Index Handling */ +// create the index +var fuseIndex = undefined; +async function readSearchData() { + // Initialize the search index on demand + if (fuseIndex === undefined) { + // create fuse index + const options = { + keys: [ + { name: "title", weight: 20 }, + { name: "section", weight: 20 }, + { name: "text", weight: 10 }, + ], + ignoreLocation: true, + threshold: 0.1, + }; + const fuse = new window.Fuse([], options); + + // fetch the main search.json + const response = await fetch(offsetURL("search.json")); + if (response.status == 200) { + return response.json().then(function (searchDocs) { + searchDocs.forEach(function (searchDoc) { + fuse.add(searchDoc); + }); + fuseIndex = fuse; + return fuseIndex; + }); + } else { + return Promise.reject( + new Error( + "Unexpected status from search index request: " + response.status + ) + ); + } + } + return fuseIndex; +} + +function inputElement() { + return window.document.body.querySelector(".aa-Form .aa-Input"); +} + +function focusSearchInput() { + setTimeout(() => { + const inputEl = inputElement(); + if (inputEl) { + inputEl.focus(); + } + }, 50); +} + +/* Panels */ +const kItemTypeDoc = "document"; +const kItemTypeMore = "document-more"; +const kItemTypeItem = "document-item"; +const kItemTypeError = "error"; + +function renderItem( + item, + createElement, + state, + setActiveItemId, + setContext, + refresh +) { + switch (item.type) { + case kItemTypeDoc: + return createDocumentCard( + createElement, + "file-richtext", 
+ item.title, + item.section, + item.text, + item.href + ); + case kItemTypeMore: + return createMoreCard( + createElement, + item, + state, + setActiveItemId, + setContext, + refresh + ); + case kItemTypeItem: + return createSectionCard( + createElement, + item.section, + item.text, + item.href + ); + case kItemTypeError: + return createErrorCard(createElement, item.title, item.text); + default: + return undefined; + } +} + +function createDocumentCard(createElement, icon, title, section, text, href) { + const iconEl = createElement("i", { + class: `bi bi-${icon} search-result-icon`, + }); + const titleEl = createElement("p", { class: "search-result-title" }, title); + const titleContainerEl = createElement( + "div", + { class: "search-result-title-container" }, + [iconEl, titleEl] + ); + + const textEls = []; + if (section) { + const sectionEl = createElement( + "p", + { class: "search-result-section" }, + section + ); + textEls.push(sectionEl); + } + const descEl = createElement("p", { + class: "search-result-text", + dangerouslySetInnerHTML: { + __html: text, + }, + }); + textEls.push(descEl); + + const textContainerEl = createElement( + "div", + { class: "search-result-text-container" }, + textEls + ); + + const containerEl = createElement( + "div", + { + class: "search-result-container", + }, + [titleContainerEl, textContainerEl] + ); + + const linkEl = createElement( + "a", + { + href: offsetURL(href), + class: "search-result-link", + }, + containerEl + ); + + const classes = ["search-result-doc", "search-item"]; + if (!section) { + classes.push("document-selectable"); + } + + return createElement( + "div", + { + class: classes.join(" "), + }, + linkEl + ); +} + +function createMoreCard( + createElement, + item, + state, + setActiveItemId, + setContext, + refresh +) { + const moreCardEl = createElement( + "div", + { + class: "search-result-more search-item", + onClick: (e) => { + // Handle expanding the sections by adding the expanded + // section to the list of expanded sections + toggleExpanded(item, state, setContext, setActiveItemId, refresh); + e.stopPropagation(); + }, + }, + item.title + ); + + return moreCardEl; +} + +function toggleExpanded(item, state, setContext, setActiveItemId, refresh) { + const expanded = state.context.expanded || []; + if (expanded.includes(item.target)) { + setContext({ + expanded: expanded.filter((target) => target !== item.target), + }); + } else { + setContext({ expanded: [...expanded, item.target] }); + } + + refresh(); + setActiveItemId(item.__autocomplete_id); +} + +function createSectionCard(createElement, section, text, href) { + const sectionEl = createSection(createElement, section, text, href); + return createElement( + "div", + { + class: "search-result-doc-section search-item", + }, + sectionEl + ); +} + +function createSection(createElement, title, text, href) { + const descEl = createElement("p", { + class: "search-result-text", + dangerouslySetInnerHTML: { + __html: text, + }, + }); + + const titleEl = createElement("p", { class: "search-result-section" }, title); + const linkEl = createElement( + "a", + { + href: offsetURL(href), + class: "search-result-link", + }, + [titleEl, descEl] + ); + return linkEl; +} + +function createErrorCard(createElement, title, text) { + const descEl = createElement("p", { + class: "search-error-text", + dangerouslySetInnerHTML: { + __html: text, + }, + }); + + const titleEl = createElement("p", { + class: "search-error-title", + dangerouslySetInnerHTML: { + __html: ` ${title}`, + }, + }); + const 
errorEl = createElement("div", { class: "search-error" }, [ + titleEl, + descEl, + ]); + return errorEl; +} + +function positionPanel(pos) { + const panelEl = window.document.querySelector( + "#quarto-search-results .aa-Panel" + ); + const inputEl = window.document.querySelector( + "#quarto-search .aa-Autocomplete" + ); + + if (panelEl && inputEl) { + panelEl.style.top = `${Math.round(panelEl.offsetTop)}px`; + if (pos === "start") { + panelEl.style.left = `${Math.round(inputEl.left)}px`; + } else { + panelEl.style.right = `${Math.round(inputEl.offsetRight)}px`; + } + } +} + +/* Highlighting */ +// highlighting functions +function highlightMatch(query, text) { + if (text) { + const start = text.toLowerCase().indexOf(query.toLowerCase()); + if (start !== -1) { + const startMark = ""; + const endMark = ""; + + const end = start + query.length; + text = + text.slice(0, start) + + startMark + + text.slice(start, end) + + endMark + + text.slice(end); + const startInfo = clipStart(text, start); + const endInfo = clipEnd( + text, + startInfo.position + startMark.length + endMark.length + ); + text = + startInfo.prefix + + text.slice(startInfo.position, endInfo.position) + + endInfo.suffix; + + return text; + } else { + return text; + } + } else { + return text; + } +} + +function clipStart(text, pos) { + const clipStart = pos - 50; + if (clipStart < 0) { + // This will just return the start of the string + return { + position: 0, + prefix: "", + }; + } else { + // We're clipping before the start of the string, walk backwards to the first space. + const spacePos = findSpace(text, pos, -1); + return { + position: spacePos.position, + prefix: "", + }; + } +} + +function clipEnd(text, pos) { + const clipEnd = pos + 200; + if (clipEnd > text.length) { + return { + position: text.length, + suffix: "", + }; + } else { + const spacePos = findSpace(text, clipEnd, 1); + return { + position: spacePos.position, + suffix: spacePos.clipped ? "…" : "", + }; + } +} + +function findSpace(text, start, step) { + let stepPos = start; + while (stepPos > -1 && stepPos < text.length) { + const char = text[stepPos]; + if (char === " " || char === "," || char === ":") { + return { + position: step === 1 ? 
stepPos : stepPos - step, + clipped: stepPos > 1 && stepPos < text.length, + }; + } + stepPos = stepPos + step; + } + + return { + position: stepPos - step, + clipped: false, + }; +} + +// removes highlighting as implemented by the mark tag +function clearHighlight(searchterm, el) { + const childNodes = el.childNodes; + for (let i = childNodes.length - 1; i >= 0; i--) { + const node = childNodes[i]; + if (node.nodeType === Node.ELEMENT_NODE) { + if ( + node.tagName === "MARK" && + node.innerText.toLowerCase() === searchterm.toLowerCase() + ) { + el.replaceChild(document.createTextNode(node.innerText), node); + } else { + clearHighlight(searchterm, node); + } + } + } +} + +function escapeRegExp(string) { + return string.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string +} + +// highlight matches +function highlight(term, el) { + const termRegex = new RegExp(term, "ig"); + const childNodes = el.childNodes; + + // walk back to front avoid mutating elements in front of us + for (let i = childNodes.length - 1; i >= 0; i--) { + const node = childNodes[i]; + + if (node.nodeType === Node.TEXT_NODE) { + // Search text nodes for text to highlight + const text = node.nodeValue; + + let startIndex = 0; + let matchIndex = text.search(termRegex); + if (matchIndex > -1) { + const markFragment = document.createDocumentFragment(); + while (matchIndex > -1) { + const prefix = text.slice(startIndex, matchIndex); + markFragment.appendChild(document.createTextNode(prefix)); + + const mark = document.createElement("mark"); + mark.appendChild( + document.createTextNode( + text.slice(matchIndex, matchIndex + term.length) + ) + ); + markFragment.appendChild(mark); + + startIndex = matchIndex + term.length; + matchIndex = text.slice(startIndex).search(new RegExp(term, "ig")); + if (matchIndex > -1) { + matchIndex = startIndex + matchIndex; + } + } + if (startIndex < text.length) { + markFragment.appendChild( + document.createTextNode(text.slice(startIndex, text.length)) + ); + } + + el.replaceChild(markFragment, node); + } + } else if (node.nodeType === Node.ELEMENT_NODE) { + // recurse through elements + highlight(term, node); + } + } +} + +/* Link Handling */ +// get the offset from this page for a given site root relative url +function offsetURL(url) { + var offset = getMeta("quarto:offset"); + return offset ? 
offset + url : url; +} + +// read a meta tag value +function getMeta(metaName) { + var metas = window.document.getElementsByTagName("meta"); + for (let i = 0; i < metas.length; i++) { + if (metas[i].getAttribute("name") === metaName) { + return metas[i].getAttribute("content"); + } + } + return ""; +} + +function algoliaSearch(query, limit, algoliaOptions) { + const { getAlgoliaResults } = window["@algolia/autocomplete-preset-algolia"]; + + const applicationId = algoliaOptions["application-id"]; + const searchOnlyApiKey = algoliaOptions["search-only-api-key"]; + const indexName = algoliaOptions["index-name"]; + const indexFields = algoliaOptions["index-fields"]; + const searchClient = window.algoliasearch(applicationId, searchOnlyApiKey); + const searchParams = algoliaOptions["params"]; + const searchAnalytics = !!algoliaOptions["analytics-events"]; + + return getAlgoliaResults({ + searchClient, + queries: [ + { + indexName: indexName, + query, + params: { + hitsPerPage: limit, + clickAnalytics: searchAnalytics, + ...searchParams, + }, + }, + ], + transformResponse: (response) => { + if (!indexFields) { + return response.hits.map((hit) => { + return hit.map((item) => { + return { + ...item, + text: highlightMatch(query, item.text), + }; + }); + }); + } else { + const remappedHits = response.hits.map((hit) => { + return hit.map((item) => { + const newItem = { ...item }; + ["href", "section", "title", "text"].forEach((keyName) => { + const mappedName = indexFields[keyName]; + if ( + mappedName && + item[mappedName] !== undefined && + mappedName !== keyName + ) { + newItem[keyName] = item[mappedName]; + delete newItem[mappedName]; + } + }); + newItem.text = highlightMatch(query, newItem.text); + return newItem; + }); + }); + return remappedHits; + } + }, + }); +} + +function fuseSearch(query, fuse, fuseOptions) { + return fuse.search(query, fuseOptions).map((result) => { + const addParam = (url, name, value) => { + const anchorParts = url.split("#"); + const baseUrl = anchorParts[0]; + const sep = baseUrl.search("\\?") > 0 ? "&" : "?"; + anchorParts[0] = baseUrl + sep + name + "=" + value; + return anchorParts.join("#"); + }; + + return { + title: result.item.title, + section: result.item.section, + href: addParam(result.item.href, kQueryArg, query), + text: highlightMatch(query, result.item.text), + }; + }); +} diff --git a/r-book/standard_scores.html b/r-book/standard_scores.html new file mode 100644 index 00000000..82df9720 --- /dev/null +++ b/r-book/standard_scores.html @@ -0,0 +1,1823 @@ + + + + + + + + + +Resampling statistics - 16  Ranks, Quantiles and Standard Scores + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

16  Ranks, Quantiles and Standard Scores

+
+ + + +
+ + + + +
+ + +
+ +

Imagine we have a set of measures, in some particular units. We may want some way to see quickly how these measures compare to one another, and how they may compare to other measures, in different units.

+

Ranks are one way of making an implicit comparison between values.1 Is the value large relative to the other values (with a high rank) — or is it small (with a low rank)?

+

We can convert ranks to quantile positions. Quantile positions are values from 0 through 1 that are closer to 1 for high rank values, and closer to 0 for low rank values. Each value in the data has a rank, and a corresponding quantile position. We can also look at the value corresponding to each quantile position, and these are the quantiles. You will see what we mean later in the chapter.

+

Ranks and quantile positions give an idea of whether the measure is high or low compared to the other values, but they do not immediately tell us whether the measure is exceptional or unusual. To do that, we may want to ask whether the measure falls outside the typical range of values — that is, how the measure compares to the distribution of values. One common way of doing this is to re-express the measures (values) as standard scores, where the standard score for a particular value tells you how far the value is from the center of the distribution, in terms of the typical spread of the distribution. (We will say more about what we mean by “typical” later.) Standard scores are particularly useful because they allow us to compare different types of measures on a standard scale. They translate the units of measurement into standard and comparable units.

+
+

16.1 Household income and congressional districts

+

Democratic congresswoman Marcy Kaptur has represented the 9th district of Ohio since 1983. Ohio’s 9th district is relatively working class, and the Democratic party has, traditionally, represented people with lower income. However, Kaptur has pointed out that this pattern appears to be changing; more of the high-income congressional districts now lean Democrat, and the Republican party is now more likely to represent lower-income districts. The French economist Thomas Piketty has described this phenomenon across several Western countries. Voters for left parties are now more likely to be highly educated and wealthy. He terms this shift “Brahmin Left Vs Merchant Right” (Piketty 2018). The data below come from a table Kaptur prepared that shows this pattern in the 2023 US congress. The table lists the top 20 districts by the median income of the households in that district, along with their representatives and their party.2

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 16.1: 20 most wealthy 2023 Congressional districts by household income
Ascending_Rank  District  Median Income  Representative   Party
422             MD-3      114804         J. Sarbanes      Democrat
423             MA-5      115618         K. Clark         Democrat
424             NY-12     116070         J. Nadler        Democrat
425             VA-8      116332         D. Beyer         Democrat
426             MD-5      117049         S. Hoyer         Democrat
427             NJ-11     117198         M. Sherrill      Democrat
428             NY-3      119185         G. Santos        Republican
429             CA-14     119209         E. Swalwell      Democrat
430             NJ-7      119567         T. Kean          Republican
431             NY-1      120031         N. LaLota        Republican
432             WA-1      120671         S. DelBene       Democrat
433             MD-8      120948         J. Raskin        Democrat
434             NY-4      121979         A. D’Esposito    Republican
435             CA-11     124456         N. Pelosi        Democrat
436             CA-15     125855         K. Mullin        Democrat
437             CA-10     135150         M. DeSaulnier    Democrat
438             VA-11     139003         G. Connolly      Democrat
439             VA-10     140815         J. Wexton        Democrat
440             CA-16     150720         A. Eshoo         Democrat
441             CA-17     157049         R. Khanna        Democrat
+
+ + +
+
+

You may notice right away that many of the 20 richest districts have Democratic Party representatives.

+

In fact, if we look at all 441 congressional districts in Kaptur’s table, we find a large difference in the average median household income for Democrat and Republican districts; the Democrat districts are, on average, about 14% richer (Table 16.2).

+
+
+
+ + + + + + + + + + + + + + + + + + +
Table 16.2: Means for median household income by party
Party       Mean of median household income
Democrat    $76,933
Republican  $67,474
+
+ + +
+
+

Next we are going to tip our hand, and show how we got these data. In previous chapters, we had chunks like this in which we enter the values we will analyze. These values come from the example we introduced in Section 12.16:

+
+
# Liquor prices for US states with private market.
+priv <- c(4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29, 4.85, 4.54,
+          4.75, 4.85, 4.85, 4.50, 4.75, 4.79, 4.85, 4.79, 4.95, 4.95, 4.75,
+          5.20, 5.10, 4.80, 4.29)
+
+

Now we have 441 values to enter, and it is time to introduce R’s standard tools for loading data.

+
+

16.1.1 Comma-separated-values (CSV) format

+

The data we will load is in a file on disk called data/congress_2023.csv. These are data from Kaptur’s table in a comma-separated-values (CSV) format file. We refer to this file with its filename, containing the directory (data/) followed by the name of the file (congress_2023.csv), giving a filename of data/congress_2023.csv.

+

The CSV format is a very simple text format for storing table data. Usually, the first line of the CSV file contains the column names of the table, and the rest of the lines contain the row values. As the name suggests, commas (,) separate the column names in the first line, and the row values in the following lines. If you opened the data/congress_2023.csv file in some editor, such as Notepad on Windows or TextEdit on Mac, you would find that the first few lines looked like this:

+
+
Ascending_Rank,District,Median_Income,Representative,Party
+1,PR-At Large,22237,J. González-Colón,Republican
+2,AS-At Large,28352,A. Coleman,Republican
+3,MP-At Large,31362,G. Sablan,Democrat
+4,KY-5,37910,H. Rogers,Republican
+5,MS-2,37933,B. G. Thompson,Democrat
+
+

We are particularly interested in the column named Median_Income.

+

You may remember the idea of indexing, introduced in Section 7.6. Indexing occurs when we fetch data from within a container, such as a string or an array. We do this by putting square brackets [] after the value we want to index into, and put something inside the brackets to say what we want.

+

For example, to get the first element of the priv vector above, we use indexing:
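(The original code chunk does not appear in this rendering; here is a minimal sketch of the indexing step, using the priv vector defined above.)

# Get the first element (the first price) from the priv vector.
priv[1]

This returns 4.82, the first value we entered.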

+


+
+
+

16.1.2 Introducing R data frames

+

R is a data analysis language, so, as you would expect, it is particularly good at loading data files, and presenting them to us as a useful table-like structure, called a data frame.

+

We start by using R to load our data file. R has a special function to do this, called read.csv.

+
+
district_income <- read.csv('data/congress_2023.csv')
+
+

We have thus far done many operations that returned R vectors. read.csv returns a new type of value, called a data frame:

+
+
class(district_income)
+
+
[1] "data.frame"
+
+
+

A data frame is R’s own way of representing a table, with columns and rows. You can think of it as R’s version of a spreadsheet. Data frames are a fundamental type in R, and there are many functions that operate on them. Among them is the function head which selects (by default) the first six rows of whatever you send it. Here we select the first six rows of the data frame.

+
+
# Show the first six rows in the data frame
+head(district_income)
+
+
  Ascending_Rank    District Median_Income    Representative      Party
+1              1 PR-At Large         22237 J. González-Colón Republican
+2              2 AS-At Large         28352        A. Coleman Republican
+3              3 MP-At Large         31362         G. Sablan   Democrat
+4              4        KY-5         37910         H. Rogers Republican
+5              5        MS-2         37933    B. G. Thompson   Democrat
+6              6       NY-15         40319         R. Torres   Democrat
+
+
+

The data are in income order, sorted lowest to highest, so the first six rows shown above are the districts with the lowest household income.

+

We are particularly interested in the column named Median_Income.

+

You can fetch columns of data from a data frame by using R’s $ syntax. The $ syntax means “fetch the thing named on the right of the $ attached to the value given to the left of the $”.

+

So, to get the data for the Median_Income column, we can write:

+
+
# Use $ syntax to get a column of data from a data frame.
+# "fetch the Median_Income thing from district_income".
+incomes <- district_income$Median_Income
+# The thing that comes back is our familiar R vector.
+# Show the first five values, by indexing with a slice.
+incomes[1:5]
+
+
[1] 22237 28352 31362 37910 37933
+
+
+
+
+

16.1.3 Incomes and Ranks

+

We now have the incomes values as a vector.

+

There are 441 values in the whole vector, one for each congressional district:

+
+
length(incomes)
+
+
[1] 441
+
+
+

While we are at it, let us also get the values from the “Ascending_Rank” column, with the same procedure. These are ranks from low to high, meaning 1 is the lowest median income, and 441 is the highest median income.

+
+
lo_to_hi_ranks <- district_income$Ascending_Rank
+# Show the first five values, by indexing with a slice.
+lo_to_hi_ranks[1:5]
+
+
[1] 1 2 3 4 5
+
+
+

In our case, the data frame has the Ascending_Rank column with the ranks we need, but if we need the ranks and we don’t have them, we can calculate them using the rank function.

+
+
+

16.1.4 Calculating ranks

+

As you might expect, rank accepts a vector as an input argument. Let’s say that there are n <- length(data) values in the vector that we pass to rank. The function returns a vector, length \(n\), where the elements are the ranks of each corresponding element in the input data vector. A rank value of 1 corresponds to the lowest value in data (closest to negative infinity), and a rank of \(n\) corresponds to the highest value (closest to positive infinity).

+

Here’s an example data vector to show how rank works.

+
+
# The data.
+data <- c(3, -1, 5, -2)
+# Corresponding ranks for the data.
+rank(data)
+
+
[1] 3 2 4 1
+
+
+

We can use rank to recalculate the ranks for the congressional median household income values.

+
+
# Recalculate the ranks.
+recalculated_ranks <- rank(incomes)
+# Show the first 5 ranks.
+recalculated_ranks[1:5]
+
+
[1] 1 2 3 4 5
+
+
+
+
+
+

16.2 Comparing two values in the district income data

+

Let us say that we have taken an interest in two particular members of Congress: the Speaker of the House of Representatives, Republican Kevin McCarthy, and the progressive activist and Democrat Alexandria Ocasio-Cortez. We will refer to both using their initials: KM for Kevin Owen McCarthy and AOC for Alexandria Ocasio-Cortez.

+

By scrolling through the CSV file, or (in our case) using some simple R code that we won’t cover now, we find the rows corresponding to McCarthy (KM) and Ocasio-Cortez (AOC) — Table 16.3.

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 16.3: Rows for Kevin McCarthy and Alexandria Ocasio-Cortez
Ascending_Rank  District  Median Income  Representative    Party
81              NY-14     56129          A. Ocasio-Cortez  Democrat
295             CA-20     77205          K. McCarthy       Republican
+
+ + +
+
+

The rows show the rank of each congressional district in terms of median household income. The districts are ordered by this rank, so we can get their respective indices (positions) in the incomes vector from their rank.

+
+
# Rank of McCarthy's district in terms of median household income.
+km_rank <- 295
+# Index (position) of McCarthy's value in the "incomes" vector.
+# This is the same as the rank.
+km_index <- km_rank
+
+

Now we have the index (position) of KM’s value, we can find the household income for his district from the incomes vector:

+
+
# Show the median household income from McCarthy's district
+# by indexing into the "incomes" vector:
+km_income <- incomes[km_index]
+km_income
+
+
[1] 77205
+
+
+

Here is the corresponding index and incomes value for AOC:

+
+
# Index (position) of AOC's value in the "incomes" array.
+aoc_rank <- 81
+aoc_index <- aoc_rank
+# Show the median household income from AOC's district
+# by indexing into the "incomes" array:
+aoc_income <- incomes[aoc_index]
+aoc_income
+
+
[1] 56129
+
+
+

Notice that we fetch the same value for median household income from incomes as you see in the corresponding rows.

+
+
+

16.3 Comparing values with ranks and quantile positions

+

We have KM’s and AOC’s district median household income values, but our next question might be — how unusual are these values?

+

Of course, it depends what we mean by unusual. We might mean, are they greater or smaller than most of the other values?

+

One way of answering that question is simply to look at the rank of the values. If the rank is lower than \(\frac{441}{2} = 220.5\), then this is a district with lower median income than most districts. If it is greater than \(220.5\), then it has higher median income than most districts. We see that KM’s district, with rank 295, is wealthier than most, whereas AOC’s district (rank 81) is poorer than most.
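We can check this quickly in R, reusing the km_rank and aoc_rank values defined earlier (a small sketch):

# Is each district's rank above the half-way rank of 441 / 2 = 220.5?
km_rank > 441 / 2
aoc_rank > 441 / 2

The first comparison gives TRUE and the second gives FALSE, matching what we just said.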

+

But we can’t interpret the ranks without remembering that there are 441 values, so — for example - a rank of 81 represents a relatively low value, whereas one of 295 is relatively high.

+

We would like some scale that tells us immediately whether this is a relatively low or a relatively high value, without having to remember how many values there are.

+

This is a good use for quantile positions (QPs). The QP of a value tells you where the value ranks relative to the other values, on a scale from \(0\) through \(1\). A QP of \(0\) tells you this is the lowest-ranking value, and a QP of \(1\) tells you this is the highest-ranking value.

+

We can calculate the QP for each rank. Think of the low-to-high ranks as being a line starting at 1 (the lowest rank — for the lowest median income) and going up to 441 (the highest rank — for the highest median income).

+

The QP corresponding to any particular rank tells you how far along this line the rank is. Notice that the length of the line is the distance from the first to the last value, so 441 - 1 = 440.

+

So, if the rank was \(1\), then the value is at the start of the line. It has got \(\frac{0}{440}\) of the way along the line, and the QP is \(0\). If the rank is \(441\), the value is at the end of the line, it has got \(\frac{440}{440}\) of the way along the line and the QP is \(1\).

+

Now consider the rank of \(100\). It has got \(\frac{(100 - 1)}{440}\) of the way along the line, and the QP is 0.225.
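In code, that calculation is (a small check of the arithmetic):

# Quantile position for the value at rank 100.
(100 - 1) / (441 - 1)

This gives 0.225.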

+

More generally, we can translate the low-to-high ranks to QPs with:

+
+
# Length of the line defining quantile positions.
+# Start of line is rank 1 (quantile position 0).
+# End of line is rank 441 (quantile position 1).
+distance <- length(lo_to_hi_ranks) - 1  # 440 in our case.
+quantile_positions <- (lo_to_hi_ranks - 1) / distance
+# Show the first five.
+quantile_positions[1:5]
+
+
[1] 0.00000 0.00227 0.00455 0.00682 0.00909
+
+
+

Let’s plot the ranks and the QPs together on the x-axis:

+
+
+
+
+

+
+
+
+
+

The QPs for KM and AOC tell us where their districts’ incomes are in the ranks, on a 0 to 1 scale:

+
+
km_quantile_position <- quantile_positions[km_index]
+km_quantile_position
+
+
[1] 0.668
+
+
+
+
aoc_quantile_position <- quantile_positions[aoc_index]
+aoc_quantile_position
+
+
[1] 0.182
+
+
+

If we multiply the QP by 100, we get the percentile positions — so the percentile position ranges from 0 through 100.

+
+
# Percentile positions are just quantile positions * 100
+message('KM percentile position: ', km_quantile_position * 100)
+
+
KM percentile position: 66.8181818181818
+
+
message('AOC percentile position: ', aoc_quantile_position * 100)
+
+
AOC percentile position: 18.1818181818182
+
+
+

Now consider one particular QP: \(0.5\). The \(0.5\) QP is exactly half-way along the line from rank \(1\) to rank \(441\). In our case this corresponds to rank \(\frac{441 - 1}{2} + 1 = 221\).

+
+
message('Middle rank: ', lo_to_hi_ranks[221])
+
+
Middle rank: 221
+
+
message('Quantile position: ', quantile_positions[221])
+
+
Quantile position: 0.5
+
+
+

The value corresponding to any particular QP is the quantile value, or just the quantile for short. For a QP of 0.5, the quantile (quantile value) is:

+
+
# Quantile value for 0.5
+message('Quantile value for QP of 0.5: ', incomes[221])
+
+
Quantile value for QP of 0.5: 67407
+
+
+

In fact we can ask R for this value (quantile) directly, using the quantile function:

+
+
quantile(incomes, 0.5)
+
+
  50% 
+67407 
+
+
+
+
+
+ +
+
+quantile and sorting +
+
+
+

In our case, the incomes data is already sorted from lowest (at position 1 in the vector) to highest (at position 441 in the vector). The quantile function does not need the data to be sorted; it does its own internal sorting to do the calculation.

+

For example, we could shuffle incomes into a random order, and still get the same values from quantile.

+
+
shuffled_incomes <- sample(incomes)
+# Quantile still gives the same value.
+quantile(shuffled_incomes, 0.5)
+
+
  50% 
+67407 
+
+
+
+
+

Above we have the 0.5 quantile — the value corresponding to the QP of 0.5.

+

The 0.5 quantile is an interesting value. By the definition of QP, exactly half of the remaining values (after excluding the 0.5 quantile value) have lower rank, and are therefore less than the 0.5 quantile value. Similarly exactly half of the remaining values are greater than the 0.5 quantile. You may recognize this as the median value. This is such a common quantile value that R has a function median as a shortcut for quantile(data, 0.5).
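For example (assuming the incomes vector from above):

# median is a shortcut for the 0.5 quantile.
median(incomes)

This gives the same value, 67407, as quantile(incomes, 0.5).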

+

Another interesting QP is 0.25. We find the QP of 0.25 at rank:

+
+
qp25_rank <- (441 - 1) * 0.25 + 1
+qp25_rank
+
+
[1] 111
+
+
+
+
message('Rank corresponding to QP 0.25: ', qp25_rank)
+
+
Rank corresponding to QP 0.25: 111
+
+
message('0.25 quantile value: ', incomes[qp25_rank])
+
+
0.25 quantile value: 58961
+
+
message('0.25 quantile value using quantile: ', quantile(incomes, 0.25))
+
+
0.25 quantile value using quantile: 58961
+
+
+
+
+
+
+

+
+
+
+
+

Call the 0.25 quantile value \(V\). \(V\) is the number such that 25% of the remaining values are less than \(V\), and 75% are greater.

+

Now let’s think about the 0.01 quantile. We don’t have an income value exactly corresponding to this QP, because there is no rank exactly corresponding to the 0.01 QP.

+
+
rank_for_qp001 <- (441 - 1) * 0.01 + 1
+rank_for_qp001
+
+
[1] 5.4
+
+
+

Let’s have a look at the first 10 values for rank / QP and incomes:

+
+
+
+
+

+
+
+
+
+

What, then, is the quantile value for QP = 0.01? There are various ways to answer that question (Hyndman and Fan 1996), but one obvious way, and the default for R, is to draw a straight line up from the matching rank — or equivalently, down from the QP — then note where that line crosses the line joining the values to the left and right of the QP on the graph above, and look across to the y-axis for the corresponding value:

+
+
+
+
+

+
+
+
+
+
+
quantile(incomes, 0.01)
+
+
   1% 
+38887 
+
+
+

This is called the linear method — because it uses straight lines joining the points to estimate the quantile value for a QP that does not correspond to a whole-number rank.

+
+
+
+ +
+
+Calculating quantiles using the linear method +
+
+
+

We gave a graphical explanation of how to calculate the quantile for a QP that does not correspond to whole-number rank in the data. A more formal way of getting the value using the numerical equivalent of the graphical method is linear interpolation. Linear interpolation calculates the quantile value as a weighted average of the quantile values for the QPs of the whole number ranks just less than, and just greater than the QP we are interested in. For example, let us return to the QP of \(0.01\). Let us remind ourselves of the QPs, whole-number ranks and corresponding values either side of the QP \(0.01\):

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
Ranks, QPs and corresponding values around QP of 0.01
Rank  Quantile position  Quantile value
5     0.0099             37933
5.4   0.01               V
6     0.0113             40319
+

What value should we give \(V\) in the table? One answer is to take the average of the two values either side of the desired QP — in this case \((37933 + 40319) / 2\). We could write this same calculation as \(37933 * 0.5 + 40319 * 0.5\) — showing that we are giving equal weight (\(0.5\)) to the two values either side.

+

But giving both values equal weight doesn’t seem quite right, because the QP we want is closer to the QP for rank 5 (and corresponding value 37933) than it is to the QP for rank 6 (and corresponding value 40319). We should give more weight to the rank 5 value than the rank 6 value. Specifically the lower value is 0.4 rank units away from the QP rank we want, and the higher is 0.6 rank units away. So we give higher weight for shorter distance, and multiply the rank 5 value by \(1 - 0.4 = 0.6\), and the rank 6 value by \(1 - 0.6 = 0.4\). Therefore the weighted average is \(37933 * 0.6 + 40319 * 0.4 = 38887.4\). This is a mathematical way to get the value we described graphically, of tracking up from the rank of 5.4 to the line drawn between the values for rank 5 and 6, and reading off the y-value at which this track crosses that line.
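Here is that weighted average in R (a minimal sketch, using the incomes vector from above):

# Linear interpolation by hand for the quantile at QP 0.01 (rank 5.4).
# Weight each neighboring value by one minus its distance in rank units.
0.6 * incomes[5] + 0.4 * incomes[6]

The result, 38887.4, matches (up to display rounding) the value that quantile(incomes, 0.01) gave above.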

+
+
+
+
+

16.4 Unusual values compared to the distribution

+

Now we return to the problem of whether KM’s and AOC’s districts are unusual in terms of their median household incomes. From what we have so far, we might conclude that AOC’s district is fairly poor, and KM’s district is relatively wealthy. But — is either of their districts unusual in its wealth or poverty?

+

To answer that question, we have to think about the distribution of values. Are either AOC’s or KM’s district outside the typical spread of values for districts?

+

The rest of this section is an attempt to answer what we could mean by outside and typical spread.

+

Let us start with a histogram of the district incomes, marking the position of the KM and AOC districts.

+
+
+
+
+

+
+
+
+
+

What could we mean by “outside” the “typical spread”? By outside, we mean somewhere away from the center of the distribution. Let us take the mean of the distribution to be its center, and add that to the plot.

+
+
mean_income <- mean(incomes)
+
+
+
+
+
+

+
+
+
+
+
+
+

16.5 On deviations

+

Now let us ask what we could mean by typical spread. By spread we mean deviation either side of the center.

+

We can calculate how far each income is from the mean, by subtracting the mean from all the income values. Call the result the deviations from the mean, or deviations for short.

+
+
deviations <- incomes - mean(incomes)
+
+

The deviation values give, for each district, how far that district’s income is from the mean. Values near the mean will have small (positive or negative) deviations, and values further from the mean will have large (positive or negative) deviations. Here is a histogram of the deviation values.

+
+
+
+
+

+
+
+
+
+

Notice that the shape of the distribution has not changed — all that changed is the position of the distribution on the x-axis. In fact, the distribution of deviations centers on zero — the deviations have a mean of (as near as the computer can accurately calculate) zero:

+
+
# Show the mean of the deviations, rounded to 8 decimal places.
+round(mean(deviations), 8)
+
+
[1] 0
+
+
+
+
+

16.6 The mean absolute deviation

+

Now let us consider the deviation value for KM and AOC:

+
+
message('Deviation for KM: ', deviations[km_index])
+
+
Deviation for KM: 5098.03628117914
+
+
message('Deviation for AOC: ', deviations[aoc_index])
+
+
Deviation for AOC: -15977.9637188209
+
+
+

We have the same problem as before. Yes, we see that KM has a positive deviation, and therefore, that his district is more wealthy than average across the 441 districts. Conversely AOC’s district has a negative deviation, and is poorer than average. But we still lack a standard measure of how far away from the mean each district is, in terms of the spread of values in the histogram.

+

To get such a standard measure, we would like an idea of a typical or average deviation. Then we will compare KM’s and AOC’s deviations to the average deviation, to see if they are unusually far from the mean.

+

You have just seen above that we cannot use the literal average (mean) of the deviations for this purpose because the positive and negative deviations will exactly cancel out, and the mean deviation will always be as near as the computer can calculate to zero.

+

To stop the negatives canceling the positives, we can simply knock the minus signs off all the negative deviations.

+

This is the job of the R abs function — where abs is short for absolute. The abs function will knock minus signs off negative values, like this:

+
+
abs(c(-1, 0, 1, -2))
+
+
[1] 1 0 1 2
+
+
+

To get an average of the deviations, regardless of whether they are positive or negative, we can take the mean of the absolute deviations, like this:

+
+
# The Mean Absolute Deviation (MAD)
+abs_deviations <- abs(deviations)
+mad <- mean(abs_deviations)
+# Show the result
+mad
+
+
[1] 15102
+
+
+

This is the Mean Absolute Deviation (MAD). It is one measure of the typical spread. MAD is the average distance (regardless of positive or negative) of a value from the mean of the values.

+

We can get an idea of how typical a particular deviation is by dividing the deviation by the MAD value, like this:

+
+
message('Deviation in MAD units for KM: ', deviations[km_index] / mad)
+
+
Deviation in MAD units for KM: 0.337581239498037
+
+
message('Deviation in MAD units AOC: ', deviations[aoc_index] / mad)
+
+
Deviation in MAD units AOC: -1.05802714993755
+
+
+
+
+

16.7 The standard deviation

+

We are interested in the average deviation, but we find that a simple average of the deviations from the mean always gives 0 (perhaps with some tiny calculation error), because the positive and negative deviations cancel exactly.

+

The MAD calculation solves this problem by knocking the signs off the negative values before we take the mean.

+

Another very popular way of solving the same problem is to precede the calculation by squaring all the deviations, like this:

+
+
squared_deviations <- deviations ** 2
+# Show the first five values.
+squared_deviations[1:5]
+
+
[1] 2.49e+09 1.91e+09 1.66e+09 1.17e+09 1.17e+09
+
+
+
+
+
+ +
+
+Exponential format for showing very large and very small numbers +
+
+
+

The squared_deviation values above appear in exponential notation (E-notation). Other terms for E-notation are scientific notation, scientific form, or standard form. E-notation is a useful way to express very large (far from 0) or very small (close to 0) numbers in a more compact form.

+

E-notation represents a value as a floating point value \(m\) multiplied by 10 to the power of an exponent \(n\):

+

\[ +m * 10^n +\]

+

\(m\) is a floating point number with one digit before the decimal point — so it can be any value from 1.0 through 9.9999… \(n\) is an integer (positive or negative whole number).

+

For example, the median household income of KM’s district is 77205 (dollars). We can express that same number in E-notation as \(7.7205 * 10^4\) . R writes this as 7.7205e4, where the number before the e is \(m\) and the number after the e is the exponent value \(n\). E-notation is another way of writing the number, because \(7.7205 * 10^4 = 77205\).

+
+
7.7205e4 == 77205
+
+
[1] TRUE
+
+
+

It is no great advantage to use E-notation in this case; 77205 is probably easier to read and understand than 7.7205e4. The notation comes into its own where you start to lose track of the powers of 10 when you read a number — and that does happen when the number becomes very long without E-notation. For example, \(77205^2 = 5960612025\). \(5960612025\) is long enough that you start having to count the digits to see how large it is. In E-notation, that number is 5.960612025e9. If you remember that \(10^9\) is one US billion, then the E-notation tells you at a glance that the value is about \(5.9\) billion.

+

R makes its own decision whether to print out numbers using E-notation. This only affects the display of the numbers; the underlying values remain the same whether R chooses to show them in E-notation or not.

+
+
+

The process of squaring the deviations turns all the negative values into positive values.

+

We can then take the average (mean) of the squared deviations to give a measure of the typical squared deviation:

+
+
mean_squared_deviation <- mean(squared_deviations)
+mean_squared_deviation
+
+
[1] 3.86e+08
+
+
+

Rather confusingly, the field of statistics uses the term variance to refer to the mean squared deviation. Just to emphasize that naming, let’s do the same calculation, but using “variance” as the variable name.

+
+
# Statistics calls the mean squared deviation - the "variance"
+variance <- mean(squared_deviations)
+variance
+
+
[1] 3.86e+08
+
+
+

The variance is the typical (in the sense of the mean) squared deviation. The units for the variance, in our case, would be squared dollars. But we are more interested in the typical deviation, in our original units – dollars rather than squared dollars.

+

So we take the square root of the mean squared deviation (the square root of the variance), to get the standard deviation. It is the standard deviation in the sense that it is a measure of typical deviation, in the specific sense of the square root of the mean squared deviations.

+
+
# The standard deviation is the square root of the mean squared deviation.
+# (and therefore, the square root of the variance).
+standard_deviation <- sqrt(mean_squared_deviation)
+standard_deviation
+
+
[1] 19646
+
+
+

The standard deviation (the square root of the mean squared deviation) is a popular alternative to the Mean Absolute Deviation, as a measure of typical spread.

+

Figure 16.1 shows another histogram of the income values, marking the mean, the mean plus or minus one standard deviation, and the mean plus or minus two standard deviations. You can see that the mean plus or minus one standard deviation includes a fairly large proportion of the data. The mean plus or minus two standard deviations includes a much larger proportion.

+
+
+
+
+

+
Figure 16.1: Income histogram plus or minus 1 and 2 standard deviations
+
+
+
+
+

Now let us return to the question of how unusual our two congressional districts are in terms of the distribution. First we calculate the number of standard deviations of each district from the mean:

+
+
km_std_devs <- deviations[km_index] / standard_deviation
+message('Deviation in standard deviation units for KM: ',
+        round(km_std_devs, 2))
+
+
Deviation in standard deviation units for KM: 0.26
+
+
aoc_std_devs <- deviations[aoc_index] / standard_deviation
+message('Deviation in standard deviation units for AOC: ',
+        round(aoc_std_devs, 2))
+
+
Deviation in standard deviation units for AOC: -0.81
+
+
+

The values for each district are a re-expression of the income values in terms of the distribution. They give the distance from the mean (positive or negative) in units of standard deviation.

+
+
+

16.8 Standard scores

+

We will often find uses for the procedure we have just applied, where we take the original values (here, incomes) and:

+
    +
  • Subtract the mean to convert to deviations, then
  • +
  • Divide by the standard deviation
  • +
+

Let’s apply that procedure to all the incomes values.

+

First we calculate the standard deviation:

+
+
deviations <- incomes - mean(incomes)
+income_std <- sqrt(mean(deviations ** 2))
+
+

Then we calculate standard scores:

+
+
deviations_in_stds <- deviations / income_std
+deviations_in_stds[1:5]
+
+
[1] -2.54 -2.23 -2.07 -1.74 -1.74
+
+
+

This procedure converts the original data (here incomes) to deviations from the mean in terms of the standard deviation. The resulting values are called standard scores or z-scores. One name for this procedure is “z-scoring”.

+

If you plot a histogram of the standard scores, you will see they have a mean of (actually exactly) 0, and a standard deviation of (actually exactly) 1.
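We can confirm that with a quick check, following the same square-root-of-mean-squared-deviation recipe for the spread (a small sketch):

# Mean of the standard scores (zero, give or take tiny calculation error).
round(mean(deviations_in_stds), 8)
# Spread of the standard scores, as the square root of the mean squared
# deviation. This comes out as 1.
sqrt(mean((deviations_in_stds - mean(deviations_in_stds)) ^ 2))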

+
+
+
+
+

+
+
+
+
+

With all this information — what should we conclude about the two districts in question? KM’s district is 0.26 standard deviations above the mean, but that’s not enough to conclude that it is unusual. We see from the histogram that a large proportion of the districts are at least this distance from the mean. We can calculate that proportion directly.

+
+
# Distances (negative or positive) from the mean.
+abs_std_devs <- abs(deviations_in_stds)
+# Number where distance greater than KM distance.
+n_gt_km <- sum(abs_std_devs > km_std_devs)
+prop_gt_km <- n_gt_km / length(deviations_in_stds)
+message("Proportion of districts further from mean than KM: ",
+        round(prop_gt_km, 2))
+
+
Proportion of districts further from mean than KM: 0.82
+
+
+

A full 82% of districts are further from the mean than is KM’s district. KM’s district is richer than average, but not unusual. The benefit of the standard deviation distance is that we can see this directly from the value, without doing the calculation of proportions, because the standard deviation is a measure of typical spread, and KM’s district is well within this measure.

+

AOC’s district is -0.81 standard deviations from the mean. This is a little more unusual than KM’s score.

+
+
# Number where distance greater than AOC distance.
+# Make AOC's distance positive to correspond to distance from the mean.
+n_gt_aoc <- sum(abs_std_devs > abs(aoc_std_devs))
+prop_gt_aoc <- n_gt_aoc / length(deviations_in_stds)
+message("Proportion of districts further from mean than AOC's district: ",
+        round(prop_gt_aoc, 2))
+
+
Proportion of districts further from mean than AOC's district: 0.35
+
+
+

Only 35% of districts are further from the mean than AOC’s district, but this is still a reasonable proportion. We see from the standard score that AOC is within one standard deviation. AOC’s district is poorer than average, but not to a remarkable degree.

+
+
+

16.9 Standard scores to compare values on different scales

+

Why are standard scores so useful? They allow us to compare values on very different scales.

+

Consider the values in Table 16.4. Each row of the table corresponds to a team competing in the English Premier League (EPL) for the 2021-2022 season. For those of you with absolutely no interest in sports, the EPL is the league of the top 20 teams in English football, or soccer to our North American friends. The points column of the table gives the total number of points at the end of the 2021-2022 season (from 38 games). The team gets 3 points for a win, and 1 point for a draw, so the maximum possible number of points from 38 games is \(3 * 38 = 114\). The wages column gives the estimated total wage bill in thousands of British Pounds (£1000s).

+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 16.4: 2021 points and wage bills (£1000s) for EPL teams
team                      points  wages
Manchester City           93      168572
Liverpool                 92      148772
Chelsea                   74      187340
Tottenham Hotspur         71      110416
Arsenal                   69      118074
Manchester United         58      238780
West Ham United           56      77936
Leicester City            52      81590
Brighton and Hove Albion  51      49820
Wolverhampton Wanderers   51      62756
Newcastle United          49      73308
Crystal Palace            48      71910
Brentford                 46      28606
Aston Villa               45      85330
Southampton               40      58657
Everton                   39      110202
Leeds United              38      37354
Burnley                   35      40830
Watford                   23      42030
Norwich City              22      31750
+
+ + +
+
+

Let’s say we own Crystal Palace Football Club. Crystal Palace was a bit below average in the league in terms of points. Now we are thinking about whether we should invest in higher-paid players for the coming season, to improve our points score, and therefore, league position.

+

One thing we might like to know is whether there is an association between the wage bill and the points scored.

+

To look at that, we can do a scatter plot. This is a plot with — say — wages on the x-axis, and points on the y-axis. For each team we have a pair of values — their wage bill and their points scored. For each team, we put a marker on the scatter plot at the coordinates given by the wage value (on the x-axis) and the points value (on the y-axis).

+

Here is that plot for our EPL data in Table 16.4, with the Crystal Palace marker picked out in red.

+
+
+
+
+

+
+
+
+
+

It looks like there is a rough association of wages and points; teams that spend more in wages tend to have more points.

+

At the moment, the points and wages are in very different units. Points are on a possible scale of 0 (lose every game) to 38 * 3 = 114 (win every game). Wages are in thousands of pounds. Maybe we are not interested in the values in these units, but in how unusual the values are, in terms of wages, and in terms of points.

+

This is a good application of standard scores. Standard scores convert the original values to values on a standard scale, where 0 corresponds to an average value, 1 to a value one standard deviation above the mean, and -1 to a value one standard deviation below the mean. If we follow the standard score process for both points and wages, the values will be in the same standard units.

+

To do this calculation, we need the values from the table. We follow the same recipe as before, in loading the data with R.

+
+
points_wages <- read.csv('data/premier_league.csv')
+points <- points_wages$points
+wages <- points_wages$wages
+
+
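For reference, here is a minimal sketch of the wages-against-points scatter plot shown earlier, using base R plotting (the book’s own figure code is not shown in this rendering):

# Scatter plot of points (y-axis) against wages (x-axis).
plot(wages, points,
     xlab = 'Wages (£1000s)', ylab = 'Points in the 2021-2022 season')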

As you recall, the standard deviation is the square root of the mean squared deviation. In code:

+
+
# The standard deviation is the square root of the
+# mean squared deviation.
+wage_deviations <- wages - mean(wages)
+wage_std <- sqrt(mean(wage_deviations ** 2))
+wage_std
+
+
[1] 55524
+
+
+

Now we can apply the standard score procedure to wages. We divide the deviations by the standard deviation.

+
+
standard_wages <- (wages - mean(wages)) / wage_std
+
+

We apply the same procedure to the points:

+
+
point_deviations <- points - mean(points)
+point_std <- sqrt(mean(point_deviations ** 2))
+standard_points <- point_deviations / point_std
+
+

Now, when we plot the standard score version of the points against the standard score version of the wages, we see that they are in comparable units, each with a mean of 0, and a spread (a standard deviation) of 1.

+
+
+
+
+

+
+
+
+
+

Let us go back to our concerns as the owners of Crystal Palace. Counting down from the top in the table above, we see that Crystal Palace is the 12th row. Therefore, we can get the Crystal Palace wage value with:

+
+
cp_index <- 12
+cp_wages <- wages[cp_index]
+cp_wages
+
+
[1] 71910
+
+
+

We can get our wage bill in standard units in the same way:

+
+
cp_standard_wages <- standard_wages[cp_index]
+cp_standard_wages
+
+
[1] -0.347
+
+
+

Our wage bill is a bit below average, but it’s still within striking distance of the mean.

+

We know that we are comparing ourselves against the other teams, so perhaps we want to increase our wage bill by one standard deviation, to push us above the mean, and somewhat away from the center of the pack. If we add one standard deviation to our wage bill, that increases the standard score of our wages by 1.
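As a quick illustration (a sketch; new_cp_wages is a hypothetical value, not part of the data):

# Hypothetical wage bill: Crystal Palace's current bill plus one standard
# deviation.
new_cp_wages <- cp_wages + wage_std
# Its standard score is exactly one unit higher than cp_standard_wages.
(new_cp_wages - mean(wages)) / wage_std

This gives -0.347 + 1 = 0.653 in standard units.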

+

But — if we increase our wages by one standard deviation — how much can we expect that to increase our points — in standard units?

+

That is a question about the strength of the association between two measures — here wages and points — and we will cover that topic in much more detail in Chapter 29. But, racing ahead — here is the answer to the question we have just posed — the amount we expect to gain in points, in standard units, if we increase our wages by one standard deviation (and therefore, 1 in standard units).

+

For reasons we won’t justify now, we calculate the \(r\) value of association between wages and points, like this:

+
+
standards_multiplied <- standard_wages * standard_points
+r <- mean(standards_multiplied)
+r
+
+
[1] 0.708
+
+
+

The \(r\) value is the answer to our question. For every one unit increase in standard scores in wages, we expect an increase of \(r\) (0.708) standard score units in points.

+
+
+

16.10 Conclusion

+

When we look at a set of values, we often ask questions about whether individual values are unusual or surprising. One way of doing that is to look at where the values are in the sorted order — for example, using the raw rank of values, or the proportion of values below this value — the quantile or percentile position of a value. Another measure of interest is where a value is in comparison to the spread of all values either side of the mean. We use the term “deviations” to refer to the original values after we have subtracted the mean of the values. We can measure spread either side of the mean with metrics such as the mean of the absolute deviations (MAD) and the square root of the mean squared deviations (the standard deviation). One common use of the deviations and the standard deviation is to transform values into standard scores. These are the deviations divided by the standard deviation, and they transform values to have a standard mean (zero) and spread (standard deviation of 1). This can make it easier to compare sets of values with very different ranges and means.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/standard_scores_files/figure-html/fig-mean-stds-1.png b/r-book/standard_scores_files/figure-html/fig-mean-stds-1.png new file mode 100644 index 00000000..1ae75dab Binary files /dev/null and b/r-book/standard_scores_files/figure-html/fig-mean-stds-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-104-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-104-1.png new file mode 100644 index 00000000..5810390a Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-104-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-111-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-111-1.png new file mode 100644 index 00000000..95ee4925 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-111-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-120-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-120-1.png new file mode 100644 index 00000000..0e121c96 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-120-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-41-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-41-1.png new file mode 100644 index 00000000..1fd4ecf6 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-41-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-61-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-61-1.png new file mode 100644 index 00000000..d76ece82 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-61-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-64-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-64-1.png new file mode 100644 index 00000000..e7ac08d8 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-64-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-65-3.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-65-3.png new file mode 100644 index 00000000..5eab5ac4 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-65-3.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-68-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-68-1.png new file mode 100644 index 00000000..db6e5524 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-68-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-71-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-71-1.png new file mode 100644 index 00000000..0e045692 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-71-1.png differ diff --git a/r-book/standard_scores_files/figure-html/unnamed-chunk-74-1.png b/r-book/standard_scores_files/figure-html/unnamed-chunk-74-1.png new file mode 100644 index 00000000..dd9b28a1 Binary files /dev/null and b/r-book/standard_scores_files/figure-html/unnamed-chunk-74-1.png differ diff --git a/r-book/style.css b/r-book/style.css new file mode 100644 index 00000000..aad2ca98 --- /dev/null +++ b/r-book/style.css @@ -0,0 +1,76 @@ +.rmdcomment { + padding: 1em 1em 1em 4em; + margin-bottom: 10px; + background: #f5f5f5; + position:relative; +} + +.rmdcomment:before { + content: "\f075"; + font-family: FontAwesome; + 
left:10px; + position:absolute; + top:0px; + font-size: 45px; + } + +/* Unfortunately we need !important because of the + * extreme specificity of the Bookdown CSS rule for a elements + * at this level of the class heirarchy */ +.nb-links a { + background-color:#477DCA !important; + color:#FFF !important; + border-radius:3px; + display:inline-block; + font-size:1.2em; + font-weight:700; + padding:.4em 1em; + margin-bottom: .5em; + } + +.interact-button:hover { + text-decoration:none; +} + +div.interact-context { + display: inline; + padding-left: 1em; + font-weight: 600; +} + +.notebook-link:hover { + text-decoration:none; +} + +table { + font-size: 80%; + border-bottom: 1px solid darkgray; + margin-bottom: 4rem !important; +} + +table caption { + text-align: left; + font-size: 125%; + font-weight: bold; + overflow-x: visible; + white-space: pre; + border-bottom: 1px solid darkgray; + margin-bottom: 1.5rem; +} + +.lightable-paper { + width: auto; +} + +.question::before { + content: "Question:"; + font-weight: bold; +} + +.question { + border: 1px solid black; + background: #F5E1FD; + padding-left: 1rem; + padding-right: 1rem; + padding-top: 1rem; +} diff --git a/r-book/technical_note.html b/r-book/technical_note.html new file mode 100644 index 00000000..01cbfa11 --- /dev/null +++ b/r-book/technical_note.html @@ -0,0 +1,661 @@ + + + + + + + + + +Resampling statistics - 34  Technical Note to the Professional Reader + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

34  Technical Note to the Professional Reader

+
+ + + +
+ + + + +
+ + +
+ +

The material presented in this book fits together with the technical literature as follows: Though I (JLS) had proceeded from first principles rather than from the literature, I have from the start cited work by Chung and Fraser (1958) and Meyer Dwass (1957). They suggested taking samples of permutations in a two-sample test as a way of extending the applicability of Fisher’s randomization test (1935; 1960, chap. III, section 21). Resampling with replacement from a single sample to determine sample statistic variability was suggested by Simon (1969). Independent work by Efron (1979) explored the properties of this technique (Efron termed it the “bootstrap”) and lent it theoretical support. The notion of using these techniques routinely and in preference to conventional techniques based on Gaussian assumptions was suggested by Simon (1969) and by Simon, Atkinson, and Shevokas (1976).

+ + + + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/testing_counts_1.html b/r-book/testing_counts_1.html new file mode 100644 index 00000000..5ea735b5 --- /dev/null +++ b/r-book/testing_counts_1.html @@ -0,0 +1,2027 @@ + + + + + + + + + +Resampling statistics - 21  Hypothesis-Testing with Counted Data, Part 1 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

21  Hypothesis-Testing with Counted Data, Part 1

+
+ + + +
+ + + + +
+ + +
+ +
+

21.1 Introduction

+

The first task in inferential statistics is to make one or more point estimates — that is, to make one or more statements about how much there is of something we are interested in — including especially the mean and the dispersion. (That work goes under the label “estimation” and is discussed in Chapter 19.) Frequently the next step, after making such quantitative estimation of the universe from which a sample has been drawn, is to consider whether two or more samples are different from each other, or whether the single sample is different from a specified value; this work goes under the label “hypothesis testing.” We ask: Did something happen? Or: Is there a difference between two universes? These are yes-no questions.

+

In other cases, the next step is to inquire into the reliability of the estimates; this goes under the label “confidence intervals.” (Some writers include assessing reliability under the rubric of estimation, but I judge it better not to do so).

+

So: Having reviewed how to convert hypothesis-testing problems into statistically testable questions in Chapter 20, we now must ask: How does one employ resampling methods to make the statistical test? As is always the case when using resampling techniques, there is no unique series of steps by which to proceed. The crucial criterion in assessing the model is whether it accurately simulates the actual event. With hypothesis-testing problems, any number of models may be correct. Generally speaking, though, the model that makes fullest use of the quantitative information available from the data is the best model.

+

When attempting to deduce the characteristics of a universe from sample data, or when asking whether a sample was drawn from a particular universe, a crucial issue is whether a “one-tailed test” or a “two-tailed test” should be applied. That is, in examining the results of our resampling experiment based on the benchmark universe, do we examine both ends of the frequency distribution, or just one? If there is strong reason to believe a priori that the difference between the benchmark (null) universe and the sample will be in a given direction — for example if you hypothesize that the sample mean will be smaller than the mean of the benchmark universe — you should then employ a one-tailed test. If you do not have a strong basis for such a prediction, use the two-tailed test. As an example, when a scientist tests a new medication, his/her hypothesis would be that the number of patients who get well will be higher in the treated group than in the control group. Thus, s/he applies the one-tailed test. See the text below for more detail on one- and two-tailed tests.

+

Some language first:

+

Hypothesis: In inferential statistics, a statement or claim about a universe that can be tested and that you wish to investigate.

+

Testing: The process of investigating the validity of a hypothesis.

+

Benchmark (or null) hypothesis: A particular hypothesis chosen for convenience when testing hypotheses in inferential statistics. For example, we could test the hypothesis that there is no difference between a sample and a given universe, or between two samples, or that a parameter is less than or greater than a certain value. The benchmark universe refers to this hypothesis. (The concept of the benchmark or null hypothesis was discussed in Chapter 9 and Chapter 20.)

+

Now let us begin the actual statistical testing of various sorts of hypotheses about samples and populations.

+
+
+

21.2 Should a single sample of counted data be considered different from a benchmark universe?

+
+

21.2.0.1 Example: Does Irradiation Affect the Sex Ratio in Fruit Flies?

+

Where the Benchmark Universe Mean (in this case, the Proportion) is Known, is the Mean (Proportion) of the Population Affected by the Treatment?

+

You think you have developed a technique for irradiating the genes of fruit flies so that the sex ratio of the offspring will not be half males and half females. In the first twenty cases you treat, there are fourteen males and six females. Does this experimental result confirm that the irradiation does work?

+

First convert the scientific question — whether or not the treatment affects the sex distribution — into a probability-statistical question: Is the observed sample likely to have come from a benchmark universe in which the sex ratio is one male to one female? The benchmark (null) hypothesis, then, is that the treatment makes no difference and the sample comes from the one-male-to-one-female universe. Therefore, we investigate how likely a one-to-one universe is to produce a distribution of fourteen or more of just one sex.

+

A coin has a one-to-one (one out of two) chance of coming up tails. Therefore, we might flip a coin in groups of twenty flips, and count the number of heads in each twenty flips. Or we can use a random number table. The following steps will produce a sound estimate:

+
    +
  • Step 1. Let heads = male, tails = female.
  • +
  • Step 2. Flip twenty coins and count the number of males. If 14 or more males occur, record “yes.” Also, if 6 or fewer males occur, record “yes” because this means we have gotten 14 or more females. Otherwise, record “no.”
  • +
  • Step 3. Repeat step 2 perhaps 100 times.
  • +
  • Step 4. Calculate the proportion “yes” in the 100 trials. This proportion estimates the probability that a fruit-fly population with a propensity to produce 50 percent males will by chance produce as many as 14 or as few as 6 males in a sample of 20 flies.
  • +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Table 21.1: Results from 25 random trials for Fruitfly problem
Trial no  # of heads  >=14 or <=6
1         8           No
2         8           No
3         12          No
4         9           No
5         12          No
6         10          No
7         9           No
8         14          Yes
9         14          Yes
10        10          No
11        9           No
12        8           No
13        13          No
14        5           Yes
15        7           No
16        11          No
17        11          No
18        10          No
19        10          No
20        11          No
21        8           No
22        9           No
23        16          Yes
24        4           Yes
25        13          No
+
+ + +
+
+

Table 21.1 shows the results obtained in twenty-five trials of twenty flips each. In three of the twenty-five trials (12 percent) there were fourteen or more heads, which we call “males,” and in two of the twenty-five trials (8 percent) there were six or fewer heads, meaning there were fourteen or more tails (“females”). We can therefore estimate that, even if the treatment does not affect the sex ratio and the births over a long period really are one to one, five out of twenty-five times (20 percent) we would get fourteen or more of one sex or the other. Therefore, finding fourteen males out of twenty births is not overwhelming evidence that the treatment has any effect, even though the result is suggestive.

+

How accurate is the estimate? Seventy-five more trials were made, and of the 100 trials eight contained fourteen or more “males” (8 percent), and nine contained fourteen or more “females” (9 percent), a total of 17 percent. So the first twenty-five trials gave a fairly reliable indication. As a matter of fact, analytically-based computation (not explained here) shows that the probability of getting fourteen or more females out of twenty births is .057 and, of course, the same for fourteen or more males from a one-to-one universe, implying a total probability of .114 of getting fourteen or more males or females.
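The analytic computation is not explained in the book, but for the curious, one way to check those numbers is with R’s built-in binomial distribution functions (a sketch):

# Probability of 14 or more males in 20 births, when each birth is male
# with probability 0.5.
p_top <- pbinom(13, size = 20, prob = 0.5, lower.tail = FALSE)
p_top
# By symmetry, the probability of 14 or more of either sex is twice that.
2 * p_top

The results are close to the .057 and .114 quoted above.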

+

Now let us obtain larger and more accurate simulation samples with the computer. The key step in the R notebook below represents male fruit flies with the string 'male' and female fruit flies with the string 'female'. The sample function is then used to generate 20 of these strings with an equal probability that either string is selected. This simulates randomly choosing 20 fruit flies on the benchmark assumption — the “null hypothesis” — that each fruit fly has an equal chance of being a male or female. Now we want to discover the chances of getting more than 13 (i.e., 14 or more) males or more than 13 females under these conditions. So we use sum to count the number of males in each random sample, and then store this count in the scores vector. We repeat these steps 10,000 times.

+

After ten thousand samples have been drawn, we count (sum) how often there were more than 13 males and then count the number of times there were fewer than 7 males (because if there were fewer than 7 males there must have been more than 13 females). When we add the two results together we have the probability that the results obtained from the sample of irradiated fruit flies would be obtained from a random sample of fruit flies.

+
+

Start of fruit_fly notebook

+ + +
+
# Set the number of trials
+n_trials <- 10000
+
+# set the sample size for each trial
+sample_size <- 20
+
+# An empty array to store the trials
+scores <- numeric(n_trials)
+
+# Do 10,000 trials
+for (i in 1:n_trials) {
+    # Generate 20 simulated fruit flies, where each has an equal chance of being
+    # male or female
+    a <- sample(c('male', 'female'), size = sample_size, prob = c(0.5, 0.5),
+    replace = TRUE)
+
+    # count the number of males in the sample
+    b <- sum(a == 'male')
+
+    # store the result of this trial
+    scores[i] <- b
+}
+
+# Produce a histogram of the trial results
+title_of_plot <- paste0("Number of males in ", n_trials, " samples of \n", sample_size, " simulated fruit flies")
+hist(scores, xlab = 'Number of Males', main = title_of_plot)
+
+
+
+

+
+
+
+
+

In the histogram above, we see that in about 12 percent of the trials, the number of males was 14 or more, or 6 or fewer. Or instead of reading the results from the histogram, we can calculate the result by tacking on the following commands to the above program:

+
+
# Determine the number of trials in which we had 14 or more males.
+j <- sum(scores >= 14)
+
+# Determine the number of trials in which we had 6 or fewer males.
+k <- sum(scores <= 6)
+
+# Add the two results together.
+m <- j + k
+
+# Convert to a proportion.
+mm <- m/n_trials
+
+# Print the results.
+print(mm)
+
+
[1] 0.121
+
+
+

End of fruit_fly notebook

+
+ +

Notice that the strength of the evidence for the effectiveness of the radiation treatment depends upon the original question: whether or not the treatment had any effect on the sex of the fruit fly, which is a two-tailed question. If there were reason to believe at the start that the treatment could increase only the number of males , then we would focus our attention on the result that in only three of the twenty-five trials were fourteen or more males. There would then be only a 3/25 = 0.12 probability of getting the observed results by chance if the treatment really has no effect, rather than the weaker odds against obtaining fourteen or more of either males or females.

+

Therefore, whether you decide to figure the odds of just fourteen or more males (what is called a “one-tail test”) or the odds for fourteen or more males plus fourteen or more females (a “two-tail test”), depends upon your advance knowledge of the subject. If you have no reason to believe that the treatment will have an effect only in the direction of creating more males and if you figure the odds for the one-tail test anyway, then you will be kidding yourself. Theory comes to bear here. If you have a strong hypothesis, deduced from a strong theory, that there will be more males, then you should figure one-tail odds, but if you have no such theory you should figure the weaker two-tail odds.1

+

In the case of the next problem concerning calves, we shall see that a one-tail test is appropriate because we have no interest in producing more male calves. Before leaving this example, let us review our intellectual strategy in handling the problem. First we observe a result (14 males in 20 flies) which differs from the proportion of the benchmark population (50 percent males). Because we have treated this sample with irradiation and observed a result that differs from the untreated benchmark-population’s mean, we speculate that the irradiation caused the sample to differ from the untreated population. We wish to check on whether this speculation is correct.

+

When asking whether this speculation is correct, we are implicitly asking whether future irradiation would also produce a proportion of males higher than 50 percent. That is, we are implicitly asking whether irradiated flies would produce more samples with male proportions as high as 14/20 than would occur by chance in the absence of irradiation.

+

If samples as far away as 14/20 from the benchmark population mean of 10/20 would occur frequently by chance, then we would not be impressed with that experimental evidence as proof that irradiation does affect the sex ratio. Hence we set up a model that will tell us the frequency with which samples of 14 or more males out of 20 births would be observed by chance. Carrying out the resampling procedure tells us that perhaps a tenth of the time such samples would be observed by chance. That is not extremely frequent, but it is not infrequent either. Hence we would probably conclude that the evidence is provocative enough to justify further experimentation, but not so strong that we should immediately believe in the truth of this speculation.

+

The logic of attaching meaning to the probabilistic outcome of a test of a hypothesis is discussed in Chapter 22. There also is more about the concept of the level of significance in Chapter 22.

+

Because of the great importance of this sort of case, which brings out the basic principles particularly clearly, let us consider another example:

+
+
+

21.2.1 Example: Does a treatment increase the female calf rate?

+

What is the probability that among 10 calves born, 9 or more will be female?

+

Let’s consider this question in the context of a set of queries for performing statistical inference that will be discussed further in Chapter 25.

+

The question: (From Hodges Jr and Lehmann (1970)): Female calves are more valuable than males. A bio-engineer claims to be able to cause more females to be born than the expected 50 percent rate. He conducts his procedure, and nine females are born out of the next 10 pregnancies among the treated cows. Should you believe his claim? That is, what is the probability of a result this (or more) surprising occurring by chance if his procedure has no effect? In this problem, we assume that on average 100 of 206 births are female, in contrast to the 50-50 benchmark universe in the previous problem.

+

What is the purpose of the work?: Female calves are more valuable than male calves.

+

Statistical inference?: Yes.

+

Confidence interval or Test of hypothesis?: Test of hypothesis.

+

Will you state the costs and benefits of various outcomes, or a loss function?: Yes. One need only say that the benefits are very large, and if the results are promising, it is worth gathering more data to confirm results.

+

How many samples of data are part of the hypothesis test?: One.

+

What is the size of the first sample about which you wish to make significance statements?: Ten.

+

What comparison(s) to make?: Compare the sample to the benchmark universe.

+

What is the benchmark universe that embodies the null hypothesis?: 100/206 female.

+

Which symbols for the observed entities?: Balls in bucket, or numbers.

+

What values or ranges of values?: We could write numbers 1 through 206 on pieces of paper, and take numbers 1-100 as “male” and 101-206 as “female”. Or we could use some other mechanism to give us a 100/206 chance of any one calf being female.

+

Finite or infinite universe?: Infinite.

+

Which sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)?: Ten calves.

+

What procedure to produce the sample entities?: Sampling with replacement.

+

Simple (single step) or complex (multiple “if” drawings)?: Can think of it either way.

+

What to record as the outcome of each resample trial?: The proportion (or number) of females.

+

What is the criterion to be used in the test?: The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of 100/206 females.

+

“One tail” or “two tail” test?: One tail, because the farmer is only interested in females. Finding a large proportion of males would not be of interest; it would not cause rejecting the null hypothesis.

+

The actual computation of probability may be done in several ways, as discussed earlier for four children and for ten cows. Conventional methods are discussed for comparison in Chapter 25. Here is the resampling solution in R.

+
+

Start of female_calves notebook

+ + +
+
# set the number of trials
+n_trials <- 10000
+
+# set the size of each sample
+sample_size <- 10
+
+# an array to store the results
+scores <- numeric(n_trials)
+
+# for 10000 repeats
+for (i in 1:n_trials) {
+
+    # generate 10 numbers between 1 and 206, sampling with replacement
+    # to match the infinite benchmark universe described above
+    a <- sample(1:206, size = sample_size, replace = TRUE)
+
+    # count how many numbers were between 101 and 206
+    b <- sum((a >= 101) & ((a <= 206)))
+
+    # store the result of the current trial
+    scores[i] <- b
+}
+
+# plot a histogram of the scores
+title_of_plot <- paste0("Number of females in ", n_trials, " samples of \n", sample_size, " simulated calves")
+hist(scores, xlab = 'Number of Females', main = title_of_plot)
+
+# count the number of scores that were greater than or equal to 9
+k <- sum(scores >= 9)
+
+# express as a proportion
+kk <- k / n_trials
+
+# show the proportion
+print(paste("The probability of 9 or 10 females occurring by chance is", kk))
+
+
[1] "The probability of 9 or 10 females occurring by chance is 0.011"
+
+
+
+
+

+
+
+
+
+

We read from the result in vector kk in the “calves” program that the probability of 9 or 10 females occurring by chance is a bit more than one percent.

+

End of female_calves notebook

+
+ +
+
+

21.2.2 Example: A Public-Opinion Poll

+

Is the Proportion of a Population Greater Than a Given Value?

+

A municipal official wants to determine whether a majority of the town’s residents are for or against the awarding of a high-speed broadband internet contract, and he asks you to take a poll. You judge that the voter registration records are a fair representation of the universe in which the politician was interested, and you therefore decide to interview a random selection of registered voters. Of a sample of fifty people who expressed opinions, thirty said “yes” they were for the plan and twenty said “no,” they were against it. How conclusively do the results show that the people in town want this internet contract?

+

Now comes some necessary subtle thinking in the interpretation of what seems like a simple problem. Notice that our aim in the analysis is to avoid the mistake of saying that the town favors the plan when in fact it does not favor the plan. Our chance of making this mistake is greatest when the voters are evenly split, so we choose as the benchmark (null) hypothesis that 50 percent of the town does not want the plan. This statement really means that “50 percent or more do not want the plan.” We could assess the probability of obtaining our result from a population that is split (say) 52-48 against, but such a probability would necessarily be even smaller, and we are primarily interested in assessing the maximum probability of being wrong. If the maximum probability of error turns out to be inconsequential, then we need not worry about less likely errors.

+

This problem is very much like the one-group fruit fly irradiation problem above. The only difference is that now we are comparing the observed sample against an arbitrary value of 50 percent (because that is the break-point in a situation where the majority decides) whereas in Section 21.2.0.1 we compared the observed sample against the normal population proportion (also 50 percent, because that is the normal proportion of males). But it really does not matter why we are comparing the observed sample to the figure of 50 percent; the procedure is the same in both cases. (Please notice that there is nothing special about the 50 percent figure; the same procedure would be followed for 20 percent or 85 percent.)

+

In brief, we a) take two pieces of paper, write “Yes” on one and “No” on the other, and put them in a bucket; b) draw a piece of paper from the bucket, record whether it was “Yes” or “No”, replace it, and repeat 50 times; c) count the number of “yeses” and “noes” in those fifty draws; d) repeat for perhaps a hundred trials; then e) count the proportion of the trials in which a 50-50 universe would produce thirty or more “yes” answers.

+

In operational steps, the procedure is as follows:

+
  • Step 1. “1-5” = no, “6-0” = yes.
  • Step 2. In 50 random numbers, count the “yeses,” and record “false positive” if 30 or more “yeses.”
  • Step 3. Repeat step 2 perhaps 100 times.
  • Step 4. Calculate the proportion of experimental trials showing “false positive.” This estimates the probability that as many as 30 “yeses” would be observed by chance in a sample of 50 people if half (or more) are really against the plan.
+
+
Table 21.2: Results from 20 random trials for contract poll problem

Trial no   # of "Noes"   # of "Yeses"   >= 30 "Yeses"
1          21            29             No
2          25            25             No
3          25            25             No
4          25            25             No
5          28            22             No
6          28            22             No
7          25            25             No
8          28            22             No
9          26            24             No
10         22            28             No
11         27            23             No
12         25            25             No
13         22            28             No
14         24            26             No
15         27            23             No
16         27            23             No
17         28            22             No
18         26            24             No
19         33            17             No
20         23            27             No
+
+ + +
+
+

In Table 21.2, we see the results of twenty trials; 0 of 20 times (0 percent), 30 or more “yeses” were observed by chance. So our estimated “significance level” or “prob value” from these twenty hand trials is 0 percent, which would normally be low enough to make us feel confident that the poll results are reliable. This is the probability that as many as thirty of fifty people would say “yes” by chance if the population were “really” split evenly. (If the population were split so that more than 50 percent were against the plan, the probability that the observed results would occur by chance would be even smaller. In this sense, the benchmark hypothesis is conservative.) On the other hand, if we had been counting the number of times there are 30 or more “No” votes (which, in our setup, have the same chance as 30 or more “Yes” votes), there would have been one such trial. This indicates how much samples can vary just by chance, and that twenty trials are too few to pin the probability down; the larger simulation below puts it near 10 percent.
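As a cross-check on the hand trials (not part of the original procedure), the exact chance of 30 or more “yes” answers among 50 under the 50-50 benchmark can be computed from the binomial distribution; it comes to roughly 10 percent, in line with the larger simulation below.

# Exact probability of 30 or more "yes" answers out of 50 when the
# population is split 50-50.
1 - pbinom(29, size = 50, prob = 0.5)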

+

Taken together, the evidence suggests that the mayor would be wise not to place very much confidence in the poll results, but rather ought to act with caution or else take a larger sample of voters.

+
+

Start of contract_poll notebook

+ + +

This R notebook generates samples of 50 simulated voters on the assumption that only 50 percent are in favor of the contract. Then it counts (sums) the number of samples where over 29 (30 or more) of the 50 respondents said they were in favor of the contract. (That is, we use a “one-tailed test.”) The result in the kk variable is the chance of a “false positive,” that is, 30 or more people saying they favor a contract when support for the proposal is actually split evenly down the middle.

+
+
# We will do 10,000 iterations.
+n <- 10000
+
+# Make an array of integers to store the "Yes" counts.
+yeses <- numeric(n)
+
+for (i in 1:n) {
+    answers <- sample(c('No', 'Yes'), size=50, replace=TRUE)
+    yeses[i] <- sum(answers == 'Yes')
+}
+
+# Produce a histogram of the trial results.
+# Use integer bins for histogram, from 10 through 40.
+hist(yeses, breaks=10:40,
+     main='Number of yes votes out of 50, in null universe')
+
+
+
+

+
+
+
+
+

In the histogram above, we see that about 10 percent of our trials had 30 or more voters in favor, despite the fact that they were drawn from a population that was split 50-50. R will calculate this proportion directly if we add the following commands to the above:

+
+
k <- sum(yeses >= 30)
+kk <- k / n
+message('Proportion >= 30: ', round(kk, 2))
+
+
Proportion >= 30: 0.1
+
+
+

End of contract_poll notebook

+
+ +

The section above discusses testing hypotheses about a single sample of counted data relative to a benchmark universe. This section discusses the issue of whether two samples with counted data should be considered the same or different.

+
+
+

21.2.3 Example: Did the Trump-Clinton Poll Indicate that Trump Would Win?

+
+

Start of trump_clinton notebook

+ + +

What is the probability that a sample outcome such as actually observed (840 Trump, 660 Clinton) would occur by chance if Clinton is “really” ahead — that is, if Clinton has 50 percent (or more) of the support? To restate in sharper statistical language: What is the probability that the observed sample or one even more favorable to Trump would occur if the universe has a mean of 50 percent or below?

+

Here is a procedure that responds to that question:

+
  1. Create a benchmark universe with one ball marked “Trump” and another marked “Clinton”.
  2. Draw a ball, record its marking, and replace. (We sample with replacement to simulate the practically-infinite population of U. S. voters.)
  3. Repeat step 2 1500 times and count the number of “Trump”s. If 840 or greater, record “Y”; otherwise, record “N.”
  4. Repeat steps 2 and 3 perhaps 1000 or 10,000 times, and count the number of “Y”s. The outcome estimates the probability that 840 or more Trump choices would occur if the universe is “really” half or more in favor of Clinton.

This procedure may be done as follows with R.

+
+
# Number of repeats we will run.
+n <- 10000
+
+# Make an array to store the counts.
+trumps <- numeric(n)
+
+for (i in 1:n) {
+    votes <- sample(c('Trump', 'Clinton'), size=1500, replace=TRUE)
+    trumps[i] <- sum(votes == 'Trump')
+}
+
+# Integer bins from 675 through 825 in steps of 5.
+hist(trumps, breaks=seq(675, 826, by=5),
+     main='Number of Trump voters of 1500 in null-world simulation')
+
+# How often >= 840 Trump votes in random draw?
+k <- sum(trumps >= 840)
+# As a proportion of simulated resamples.
+kk <- k / n
+
+message('Proportion voting for Trump: ', kk)
+
+
Proportion voting for Trump: 0
+
+
+
+
+

+
+
+
+
+

The value for kk is our estimate of the probability that Trump’s “victory” in the sample would occur by chance if he really were behind. In this case, our probability estimate is less than 1 in 10,000 (< 0.0001).

+

End of trump_clinton notebook
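As a quick cross-check (not part of the notebook above), the exact binomial tail probability of 840 or more Trump choices out of 1500 under a 50-50 universe can be computed directly; it is far smaller than 1 in 10,000, which is why the simulation never produced such a sample.

# Exact probability of 840 or more Trump voters in a sample of 1500
# when the true split is 50-50.
1 - pbinom(839, size = 1500, prob = 0.5)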

+
+ + +
+
+

21.2.4 Example: Comparison of Possible Cancer Cure to Placebo

+

Do Two Binomial Populations Differ in Their Proportions.

+

Section 21.2.0.1 used an observed sample of male and female fruitflies to test the benchmark (null) hypothesis that the flies came from a universe with a one-to-one sex ratio, and the poll data problem also compared results to a 50-50 hypothesis. The calves problem also compared the results to a single benchmark universe — a proportion of 100/206 females. Now we want to compare two samples with each other , rather than comparing one sample with a hypothesized universe. That is, in this example we are not comparing one sample to a benchmark universe, but rather asking whether both samples come from the same universe. The universe from which both samples come, if both belong to the same universe, may be thought of as the benchmark universe, in this case.

+

The scientific question is whether pill P cures a rare cancer. A researcher gave pill P to six patients selected randomly from a group of twelve cancer patients; of the six, five got well. He gave an inactive placebo to the other six patients, and two of them got well. Does the evidence justify a conclusion that the pill has a curative effect?

+

(An identical statistical example would serve for an experiment on methods of teaching reading to children. In such a situation the researcher would respond to inconclusive results by running the experiment on more subjects, but in cases like the cancer-pill example the researcher often cannot obtain more subjects.)

+

We can answer the stated question by combining the two samples and testing both samples against the resulting combined universe. In this case, the universe is twelve subjects, seven (5 + 2) of whom got well. How likely would such a universe produce two samples as far apart as five of six, and two of six, patients who get well? In other words, how often will two samples of six subjects, each drawn from a universe in which 7/12 of the patients get well, be as far apart as 5 - 2 = 3 patients in favor of the sample designated “pill”? This is obviously a one-tail test, for we have no reason to believe that the pill group might do less well than the placebo group.

+

We might construct a twelve-sided die, seven of whose sides are marked “get well.” Or put 12 pieces of paper in a bucket, seven with “get well” and five with “not well”. Or we would use pairs of numbers from the random-number table, with numbers “01-07” corresponding to get well, numbers “08-12” corresponding to “not get well,” and all other numbers omitted. (If you wish to save time, you can work out a system that uses more numbers and skips fewer, but that is up to you.) Designate the first six subjects “pill” and the next six subjects “placebo.”

+

The specific procedure might be as follows:

+
  • Step 1. Write “get well” on seven pieces of paper, “not well” on another five. Put the 12 pieces of paper into a bucket.
  • Step 2. Select two groups, “pill” and “placebo”, each with six random draws (with replacement) from the 12 pieces of paper.
  • Step 3. Record how many “get well” in each group.
  • Step 4. Subtract the result in group “placebo” from that in group “pill” (the difference may be negative).
  • Step 5. Repeat steps 1-4 perhaps 100 times.
  • Step 6. Compute the proportion of trials in which the pill does better by three or more cases.
+
+
Table 21.3: Results from 25 random trials for pill/placebo

Trial no   # of pill cures   # of placebo cures   Difference
1          3                 4                    -1
2          4                 3                    1
3          3                 6                    -3
4          3                 5                    -2
5          5                 5                    0
6          3                 4                    -1
7          5                 1                    4
8          4                 4                    0
9          4                 4                    0
10         3                 4                    -1
11         2                 4                    -2
12         5                 3                    2
13         4                 6                    -2
14         3                 3                    0
15         4                 3                    1
16         3                 4                    -1
17         4                 5                    -1
18         4                 2                    2
19         4                 5                    -1
20         5                 2                    3
21         5                 4                    1
22         3                 4                    -1
23         5                 5                    0
24         3                 5                    -2
25         5                 3                    2
+
+ + +
+
+

In the trials shown in Table 21.3, in two cases (8 percent) the difference between the randomly-drawn groups is three cases or greater. Apparently it is somewhat unusual — it happens 8 percent of the time — for this universe to generate “pill” samples in which the number of recoveries exceeds the number in the “placebo” samples by three or more. Therefore the answer to the scientific question, based on these samples, is that there is some reason to think that the medicine does have a favorable effect. But the investigator might sensibly await more data before reaching a firm conclusion about the pill’s efficiency, given the 8 percent probability.

+
+

Start of pill_placebo notebook

+ + +

Now for an R solution. Again, the benchmark hypothesis is that pill P has no effect, and we ask how often, on this assumption, the results that were obtained from the actual test of the pill would occur by chance.

+

Given that in the test 7 of 12 patients overall got well, the benchmark hypothesis assumes 7/12 to be the chances of any random patient being cured. We generate two similar samples of 6 patients, both taken from the same universe composed of the combined samples — the bootstrap procedure. We count (sum) the number who are “get well” in each sample. Then we subtract the number who got well in the “no-pill” (placebo) sample from the number who got well in the “pill” sample. We record the resulting difference for each trial in the variable pill_betters.

+

In the actual test, 3 more patients got well in the sample given the pill than in the sample given the placebo. We therefore count how many of the trials yield results where the difference between the sample given the pill and the sample not given the pill was greater than 2 (equal to or greater than 3). This result is the probability that the results derived from the actual test would be obtained from random samples drawn from a population which has a constant cure rate, pill or no pill.

+
+
# The bucket with the pieces of paper.
+options <- rep(c('get well', 'not well'), c(7, 5))
+
+n <- 10000
+
+pill_betters <- numeric(n)
+
+for (i in 1:n) {
+    pill <- sample(options, size=6, replace=TRUE)
+    pill_cures <- sum(pill == 'get well')
+    placebo <- sample(options, size=6, replace=TRUE)
+    placebo_cures <- sum(placebo == 'get well')
+    pill_betters[i] <- pill_cures - placebo_cures
+}
+
+hist(pill_betters, breaks=-6:6,
+     main='Number of extra cures pill vs placebo in null universe')
+
+
+
+

+
+
+
+
+

Recall our actual observed results: In the medicine group, three more patients were cured than in the placebo group. From the histogram, we see that in only about 8 percent of the simulated trials did the “medicine” group do better than the “placebo” group by three or more cases. The results seem to suggest — but by no means conclusively — that the medicine’s performance is not due to chance. Further study would probably be warranted. The following commands added to the above program will calculate this proportion directly:

+
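Here is one way to do that calculation, using the pill_betters and n variables defined in the program above (a sketch in the spirit of the earlier notebooks, not necessarily the original commands):

# Count the trials in which the pill group had three or more extra cures.
k <- sum(pill_betters >= 3)
# Convert to a proportion of all trials.
kk <- k / n
message('Proportion with pill difference >= 3: ', round(kk, 2))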

End of pill_placebo notebook

+
+ +

As I (JLS) wrote when I first proposed this bootstrap method in 1969, this method is not the standard way of handling the problem; it is not even analogous to the standard analytic difference-of-proportions method (though since then it has become widely accepted). Though the method shown is quite direct and satisfactory, there are also many other resampling methods that one might construct to solve the same problem. By all means, invent your own statistics rather than simply trying to copy the methods described here; the examples given here only illustrate the process of inventing statistics rather than offering solutions for all classes of problems.

+
+
+

21.2.5 Example: Did Attitudes About Marijuana Change?

+ +

Consider two polls, each asking 1500 Americans about marijuana legalization. One poll, taken in 1980, found 52 percent of respondents in favor of decriminalization; the other, taken in 1985, found 46 percent in favor of decriminalization (Wonnacott and Wonnacott 1990, 275). Our null (benchmark) hypothesis is that both samples came from the same universe (the universe made up of the total of the two sets of observations). If so, how likely would two polls be to produce results as different as those observed? Hence we construct a universe with a mean of 49 percent (the mean of the two polls of 52 percent and 46 percent), and repeatedly draw pairs of samples of size 1500 from it.
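No notebook accompanies this example, but a minimal R sketch of the procedure just described might look like the following. The 49 percent universe and the 6-percentage-point criterion come from the text; everything else is an illustrative assumption.

# Draw pairs of polls of 1500 from a universe that is 49 percent "in favor",
# and see how often the two polls differ by 6 percentage points or more.
n <- 10000
differences <- numeric(n)
for (i in 1:n) {
    poll_1980 <- sample(c('favor', 'against'), size = 1500, replace = TRUE,
                        prob = c(0.49, 0.51))
    poll_1985 <- sample(c('favor', 'against'), size = 1500, replace = TRUE,
                        prob = c(0.49, 0.51))
    differences[i] <- (sum(poll_1980 == 'favor') - sum(poll_1985 == 'favor')) / 1500
}
# Proportion of simulated pairs at least as far apart as the observed
# 52 percent - 46 percent = 6 percentage points.
sum(abs(differences) >= 0.06) / n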

+

To see how the construction of the appropriate question is much more challenging intellectually than is the actual mathematics, let us consider another possibility suggested by a student: What about considering the universe to be the earlier poll with a mean of 52 percent, and then asking the probability that the later poll of 1500 people with a mean of 46 percent would come from it? Indeed, on first thought that procedure seems reasonable.

+

Upon reflection — and it takes considerable thought on these matters to get them right — that would not be an appropriate procedure. The student’s suggested procedure would be the same as assuming that we had long-run solid knowledge of the universe, as if based on millions of observations, and then asking about the probability of a particular sample drawn from it. That does not correspond to the facts.

+

The only way to find the approach you eventually consider best — and there is no guarantee that it is indeed correct — is by close reference to the particular facts of the case.

+
+
+

21.2.6 Example: Infarction and Cholesterol: Framingham Study

+

It is so important to understand the logic of hypothesis tests, and of the resampling method of doing them, that we will now tackle another problem similar to the preceding one.

+

This will be the first of several problems that use data from the famous Framingham study (drawn from Kahn and Sempos (1989)) concerning the development of myocardial infarction 16 years after the Framingham study began, for men ages 35-44 with serum cholesterol above 250, compared to those with serum cholesterol below 250. The raw data are shown in Table 21.4. The data are from (Shurtleff 1970), cited in (Kahn and Sempos 1989, 12:61, Table 3-8). Kahn and Sempos divided the cases into “high” and “low” cholesterol.

+
Table 21.4: Development of Myocardial Infarction in Men Aged 35-44 After 16 Years

Serum Cholesterol   Developed MI   Didn't Develop MI   Total
> 250               10             125                 135
<= 250              21             449                 470
+
+

The statistical logic properly begins by asking: How likely is it that the two observed groups “really” came from the same “population” with respect to infarction rates? That is, we start with this question: How sure should one be that there is a difference in myocardial infarction rates between the high and low-cholesterol groups? Operationally, we address this issue by asking how likely it is that two groups as different in disease rates as the observed groups would be produced by the same “statistical universe.”

+

Key step: We assume that the relevant “benchmark” or “null hypothesis” population (universe) is the composite of the two observed groups. That is, if there were no “true” difference in infarction rates between the two serum-cholesterol groups, and the observed disease differences occurred just because of sampling variation, the most reasonable representation of the population from which they came is the composite of the two observed groups.

+

Therefore, we compose a hypothetical “benchmark” universe containing (135 + 470 =) 605 men at risk, and designate (10 + 21 =) 31 of them as infarction cases. We want to determine how likely it is that a universe like this one would produce — just by chance — two groups that differ as much as do the actually observed groups. That is, how often would random sampling from this universe produce one sub-sample of 135 men containing a large enough number of infarctions, and the other sub-sample of 470 men producing few enough infarctions, that the difference in occurrence rates would be as high as the observed difference of .029? (10/135 = .074, and 21/470 = .045, and .074 - .045 = .029).

+

So far, everything that has been said applies both to the conventional formulaic method and to the “new statistics” resampling method. But the logic is seldom explained to the reader of a piece of research — if indeed the researcher her/himself grasps what the formula is doing. And if one just grabs for a formula with a prayer that it is the right one, one need never analyze the statistical logic of the problem at hand.

+

Now we tackle this problem with a method that you would think of yourself if you began with the following mind-set: How can I simulate the mechanism whose operation I wish to understand? These steps will do the job:

+
  • Step 1: Fill a bucket with 605 balls, 31 red (infarction) and the rest (605 - 31 = 574) green (no infarction).
  • Step 2: Draw a sample of 135 (simulating the high serum-cholesterol group), one ball at a time and throwing it back after it is drawn to keep the simulated probability of an infarction the same throughout the sample; record the number of reds. Then do the same with another sample of 470 (the low serum-cholesterol group).
  • Step 3: Calculate the difference in infarction rates for the two simulated groups, and compare it to the actual difference of .029; if the simulated difference is that large, record “Yes” for this trial; if not, record “No.”
  • Step 4: Repeat steps 2 and 3 until a total of (say) 400 or 1000 trials have been completed. Compute the frequency with which the simulated groups produce a difference as great as actually observed. This frequency is an estimate of the probability that a difference as great as actually observed in Framingham would occur even if serum cholesterol has no effect upon myocardial infarction.

The procedure above can be carried out with balls in a bucket in a few hours. Yet it is natural to seek the added convenience of the computer to draw the samples. Here is an R program:

+
+

Start of framingham_hearts notebook

+ + +
+
n <- 10000
+
+men <- rep(c('infarction', 'no infarction'), c(31, 574))
+
+n_high <- 135  # Number of men with high cholesterol
+n_low <- 470  # Number of men with low cholesterol
+
+infarct_differences <- numeric(n)
+
+for (i in 1:n) {
+    highs <- sample(men, size=n_high, replace=TRUE)
+    lows <- sample(men, size=n_low, replace=TRUE)
+    high_infarcts <- sum(highs == 'infarction')
+    low_infarcts <- sum(lows == 'infarction')
+    high_prop <- high_infarcts / n_high
+    low_prop <- low_infarcts / n_low
+    infarct_differences[i] <- high_prop - low_prop
+}
+
+hist(infarct_differences, breaks=seq(-0.1, 0.1, by=0.005),
+     main='Infarct proportion differences in null universe')
+
+# How often was the resampled difference >= the observed difference?
+k <- sum(infarct_differences >= 0.029)
+# Convert this result to a proportion
+kk <- k / n
+
+message('Proportion of trials with difference >= observed: ',
+        round(kk, 2))
+
+
Proportion of trials with difference >= observed: 0.09
+
+
+
+
+

+
+
+
+
+

The results of the test using this program may be seen in the histogram. We find — perhaps surprisingly — that a difference as large as observed would occur by chance around 10 percent of the time. (If we were not guided by the theoretical expectation that high serum cholesterol produces heart disease, we might include the 10 percent difference going in the other direction, giving a 20 percent chance). Even a ten percent chance is sufficient to call into question the conclusion that high serum cholesterol is dangerous. At a minimum, this statistical result should call for more research before taking any strong action clinically or otherwise.

+

End of framingham_hearts notebook

+
+ +

Where should one look to determine which procedures should be used to deal with a problem such as set forth above? Unlike the formulaic approach, the basic source is not a manual which sets forth a menu of formulas together with sets of rules about when they are appropriate. Rather, you consult your own understanding about what is happening in (say) the Framingham situation, and the question that needs to be answered, and then you construct a “model” that is as faithful to the facts as is possible. The bucket-sampling described above is such a model for the case at hand.

+

To connect up what we have done with the conventional approach, one could apply a z test (conceptually similar to the t test, but applicable to yes-no data; it is the Normal-distribution approximation to the large binomial distribution). Doing so, we find that the results are much the same as the resampling result — an eleven percent probability.
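For readers who want to see that conventional calculation, here is a rough sketch of the z approximation in R. This is a hand-rolled version using the pooled proportion, not the textbook's exact formula, and a continuity correction would nudge the answer; it lands in the same general neighborhood as the resampling and textbook figures.

# Two-proportion z test for the Framingham counts, using the pooled rate.
p_high <- 10 / 135
p_low <- 21 / 470
p_pooled <- 31 / 605
se <- sqrt(p_pooled * (1 - p_pooled) * (1 / 135 + 1 / 470))
z <- (p_high - p_low) / se
# One-tailed probability of a difference this large under the null.
1 - pnorm(z)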

+

Someone may ask: Why do a resampling test when you can use a standard device such as a z or t test? The great advantage of resampling is that it avoids using the wrong method. The researcher is more likely to arrive at sound conclusions with resampling because s/he can understand what s/he is doing, instead of blindly grabbing a formula which may be in error.

+

The textbook from which the problem is drawn is an excellent one; the difficulty of its presentation is an inescapable consequence of the formulaic approach to probability and statistics. The body of complex algebra and tables that only a rare expert understands down to the foundations constitutes an impenetrable wall to understanding. Yet without such understanding, there can be only rote practice, which leads to frustration and error.

+
+
+

21.2.7 Example: Is One Pig Ration More Effective Than the Other?

+

Testing For a Difference in Means With a Two-by-Two Classification.

+

Each of two new types of ration is fed to twelve pigs. A farmer wants to know whether ration A or ration B is better.2 The weight gains in pounds for pigs fed on rations A and B are:

+

A: 31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 31

+

B: 26, 24, 28, 29, 30, 29, 31, 29, 32, 26, 28, 32

+

The statistical question may be framed as follows: should one consider that the pigs fed on the different rations come from the same universe with respect to weight gains?

+

In the actual experiment, 9 of the 12 pigs who were fed ration A also were in the top half of weight gains. How likely is it that one group of 12 randomly-chosen pigs would contain 9 of the 12 top weight gainers?

+

One approach to the problem is to divide the pigs into two groups — the twelve with the highest weight gains, and the twelve with the lowest weight gains — and examine whether an unusually large number of high-weight-gain pigs were fed on one or the other of the rations.

+

We can make this test by ordering and grouping the twenty four pigs:

+

High-weight group:

+

38 (ration A), 35 (A), 34 (A), 34 (A), 32 (B), 32 (A), 32 (A), 32 (B), 31 (A),

+

31 (B), 31 (A), 31 (A)

+

Low-weight group:

+

30 (B), 29 (A), 29 (A), 29 (B), 29 (B), 29 (B), 28 (B), 28 (B), 26 (A), 26 (B),

+

26 (B), 24 (B).

+

Among the twelve high-weight-gain pigs, nine were fed on ration A. We ask: Is this further from an even split than we are likely to get by chance? Let us take twelve red and twelve black cards, shuffle them, and deal out twelve cards (the other twelve need not be dealt out). Count the proportion of the hands in which one ration comes up nine or more times in the first twelve cards, to reflect ration A’s appearance nine times among the highest twelve weight gains. More specifically:

+
  • Step 1. Constitute a deck of twelve red and twelve black cards, and shuffle.
  • Step 2. Deal out twelve cards, count the number red, and record “yes” if there are nine or more of either red or black.
  • Step 3. Repeat step 2 perhaps fifty times.
  • Step 4. Compute the proportion “yes.” This proportion estimates the probability sought.
+
+
Table 21.5: Results from 25 random trials for pig rations

Trial no   # red   # black   >= 9 red or black
1          6       6
2          7       5
3          6       6
4          5       7
5          5       7
6          7       5
7          6       6
8          7       5
9          8       4
10         3       9         +
11         5       7
12         2       10        +
13         5       7
14         8       4
15         7       5
16         5       7
17         5       7
18         5       7
19         9       3         +
20         8       4
21         5       7
22         7       5
23         4       8
24         3       9         +
25         4       8
+
+ + +
+
+

Table 21.5 shows the results of 25 trials. In four (marked by + signs) of the 25 (that is, 16 percent of the trials) there were nine or more either red or black cards in the first twelve cards. Again the results suggest that it would be slightly unusual for the results to favor one ration or the other so strongly just by chance if they come from the same universe.

+

Now the R procedure to answer the question:

+
+

Start of pig_rations notebook

+ + +

The ranks <- 1:24 statement creates a vector of numbers 1 through 24, which will represent the rankings of weight gains for each of the 24 pigs. We repeat the following procedure for 10000 trials. First we shuffle the elements of vector ranks so that the rank numbers for weight gains are randomized and placed in vector shuffled. We then select the first 12 elements of shuffled and place them in first_12; this represents the rankings of a randomly-selected group of 12 pigs. We next count (sum) in n_top the number of pigs whose rankings for weight gain were in the top half — that is, a rank of less than 13. We record that number in top_ranks, and then continue the loop, until we finish our n trials.

+

Since we did not know beforehand the direction of the effect of ration A on weight gain, we want to count the times that either more than 8 of the random selection of 12 pigs were in the top half of the rankings, or that fewer than 4 of these pigs were in the top half of the weight gain rankings — (The latter is the same as counting the number of times that more than 8 of the 12 non-selected random pigs were in the top half in weight gain.)

+

We do so with the final two sum statements. By adding the two results n_gte_9 and n_lte_3 together, we have the number of times out of 10,000 that differences in weight gains in two groups as dramatic as those obtained in the actual experiment would occur by chance.

+
+
# Constitute the set of the weight gain rank orders. ranks is now a vector
+# consisting of the numbers 1 — 24, in that order.
+ranks <- 1:24
+
+n <- 10000
+
+top_ranks <- numeric(n)
+
+for (i in 1:n) {
+    # Shuffle the ranks of the weight gains.
+    shuffled <- sample(ranks)
+    # Take the first 12 ranks.
+    first_12 <- shuffled[1:12]
+    # Determine how many of these randomly selected 12 ranks are in the
+    # top half (i.e. 12 or less), and put that result in n_top.
+    n_top <- sum(first_12 <= 12)
+    # Keep track of each trial result in top_ranks
+    top_ranks[i] <- n_top
+}
+
+hist(top_ranks, breaks=1:11,
+     main='Number of top 12 ranks in pig-ration trials')
+
+
+
+

+
+
+
+
+

We see from the histogram that, in about 4 percent of the trials, either more than 8 or fewer than 4 top half ranks (1-12) made it into the random group of twelve that we selected. R will calculate this for us as follows:

+
+
# Determine how many of the trials yielded 9 or more top ranks.
+n_gte_9 <- sum(top_ranks >= 9)
+# Determine how many trials yielded 3 or fewer of the top ranks.
+# If there were 3 or fewer, then 9 or more of the top ranks must
+# have been in the other group (not selected).
+n_lte_3 <- sum(top_ranks <= 3)
+# Add the two together.
+n_both <- n_gte_9 + n_lte_3
+# Convert to a proportion.
+prop_both <- n_both / n
+
+message('Trial proportion >=9 top ranks in either group: ',
+        round(prop_both, 2))
+
+
Trial proportion >=9 top ranks in either group: 0.04
+
+
+

The decisions that are warranted on the basis of the results depend upon one’s purpose. If writing a scientific paper on the merits of ration A is the ultimate purpose, it would be sensible to test another batch of pigs to get further evidence. (Or you could proceed to employ another sort of test for a slightly more precise evaluation.) But if the goal is a decision on which type of ration to buy for a small farm and they are the same price, just go ahead and buy ration A because, even if it is no better than ration B, you have strong evidence that it is no worse.

+

End of pig_rations notebook

+
+ +
+
+

21.2.8 Example: Do Planet Densities Differ?

+

Consider the five planets known to the ancient world.

+

Mosteller and Rourke (1973, 17–19) ask us to compare the densities of the three planets farther from the sun than is the earth (Mars, density 0.71; Jupiter, 0.24; and Saturn, 0.12) against the densities of the planets closer to the sun than is the earth (Mercury, 0.68; Venus, 0.94).

+

The average density of the distant planets is .357, of the closer planets is .81. Is this difference (.45) statistically surprising, or is it likely to occur in a chance ordering of these planets?

+

We can answer this question with a permutation test; such sampling without replacement makes sense here because we are considering the entire set of planets, rather than a sample drawn from a larger population of planets (the word “population” is used here, rather than “universe,” to avoid confusion.) And because the number of objects is so small, one could examine all possible arrangements (permutations), and see how many have (say) differences in mean densities between the two groups as large as observed.

+

Another method that Mosteller and Rourke suggest is by a comparison of the density ranks of the two sets, where Saturn has rank 1 and Venus has rank 5. This might have a scientific advantage if the sample data are dominated by a single “outlier,” whose domination is removed when we rank the data.

+

We see that the sum of the ranks for the “closer” set is 3+5=8. We can then ask: If the ranks were assigned at random, how likely is it that a set of two planets would have a sum as large as 8? Again, because the sample is small, we can examine all the possible permutations, as Mosteller and Rourke do in Table 3-1 (Mosteller and Rourke 1973, 56) (substitute “Closer” for “B,” “Further” for “A”). In two of the ten permutations, a sum of ranks as great as 8 is observed, so the probability of a result as great as observed happening by chance is 20 percent, using these data. (We could just as well consider the difference in mean ranks between the two groups: 8/2 - 7/3 = 10/6 = 1.67.)
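Because there are only ten possible ways to pick which two of the five ranks belong to the “closer” group, the complete enumeration that Mosteller and Rourke carry out by hand can also be done in a couple of lines of R; this is a cross-check, not the notebook that follows.

# All 10 ways to choose which 2 of the 5 ranks go to the "closer" planets.
closer_pairs <- combn(5, 2)
closer_sums <- colSums(closer_pairs)
# Proportion of arrangements with a "closer" rank sum as large as the observed 8.
mean(closer_sums >= 8)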

+ + +

To illuminate the logic of this test, consider comparing the heights of two samples of trees. If sample A has the five tallest trees, and sample B has the five shortest trees, the difference in rank sums will be (6+7+8+9+10 =) 40 minus (1+2+3+4+5 =) 15, or 25, the largest possible difference. If the groups are less sharply differentiated — for example, if sample A has #3 and sample B has #8 — the difference in rank sums will be less than the maximum of 25, as you can quickly verify.

+

The method we have just used is called a Mann-Whitney test, though that label is usually applied when the data are too many to examine all the possible permutations; in that case one conventionally uses a table prepared by formula. In the case where there are too many for a complete permutation test, our resampling algorithm is as follows (though we’ll continue with the planets example):

+
  1. Compute the mean ranks of the two groups.
  2. Calculate the difference between the means computed in step 1.
  3. Create a bucket containing the ranks from 1 to the number of observations (5, in the case of the planets).
  4. Shuffle the ranks.
  5. Since we are working with the ranked data, we must draw without replacement, because there can only be one #3, one #7, and so on. So draw the number of observations in each group: 2 “Closer” and 3 “Further.”
  6. Compute the mean ranks of the two simulated groups of planets.
  7. Calculate the difference between the means computed in step 6 and record it.
  8. Repeat steps 4 through 7 perhaps 1000 times.
  9. Count how often the shuffled difference in ranks exceeds the observed difference from step 2 (1.67).
+

Start of planet_densities notebook

+ + +
+
# Steps 1 and 2.
+actual_mean_diff <- 8 / 2 - 7 / 3
+
+# Step 3
+ranks <- 1:5
+
+n <- 10000
+
+mean_differences <- numeric(n)
+
+for (i in 1:n) {
+    # Step 4
+    shuffled <- sample(ranks)
+    # Step 5
+    closer <- shuffled[1:2]  # First 2
+    further <- shuffled[3:5] # Last 3
+    # Step 6
+    mean_close <- mean(closer)
+    mean_far <- mean(further)
+    # Step 7
+    mean_differences[i] <- mean_close - mean_far
+}
+
+# Step 9
+k <- sum(mean_differences >= actual_mean_diff)
+prob <- k / n
+
+message('Proportion of trials with mean difference >= 1.67: ',
+        round(prob, 2))
+
+
Proportion of trials with mean difference >= 1.67: 0.2
+
+
+

Interpretation: 20 percent of the time, random shufflings produced a difference in ranks as great as or greater than observed. Hence, on the strength of this evidence, we should not conclude that there is a statistically surprising difference in densities between the further planets and the closer planets.

+

End of planet_densities notebook

+
+ +
+
+
+

21.3 Conclusion

+

This chapter has begun the actual work of testing hypotheses. The next chapter continues with discussion of somewhat more complex problems with counted data — more complex to think about, but no more difficult to actually treat mathematically with resampling simulation. If you have understood the general logic of the procedures used up until this point, you are in command of all the necessary conceptual knowledge to construct your own tests to answer any statistical question. A lot more practice, working on a variety of problems, obviously would help. But the key elements are simple: 1) Model the real situation accurately, 2) experiment with the model, and 3) compare the results of the model with the observed results.

+ + + +
+ + +
+ + +
diff --git a/r-book/testing_counts_2.html b/r-book/testing_counts_2.html
new file mode 100644
index 00000000..e4deaf4d
--- /dev/null
+++ b/r-book/testing_counts_2.html
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

23  The Statistics of Hypothesis-Testing with Counted Data, Part 2

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+

Here’s the bad-news-good-news message again: The bad news is that the subject of inferential statistics is extremely difficult — not because it is complex but rather because it is subtle. The cause of the difficulty is that the world around us is difficult to understand, and spoon-fed mathematical simplifications which you manipulate mechanically simply mislead you into thinking you understand that about which you have not got a clue.

+

The good news is that you — and that means you , even if you say you are “no good at math” — can understand these problems with a layperson’s hard thinking, even if you have no mathematical background beyond arithmetic and you think that you have no mathematical capability. That’s because the difficulty lies in such matters as pin-pointing the right question, and understanding how to interpret your results.

+

The problems in the previous chapter were tough enough. But this chapter considers problems with additional complications, such as when there are more than two groups, or paired comparisons for the same units of observation.

+
+

23.1 Comparisons among more than two samples of counted data

+

Example 17-1: Do Any of Four Treatments Affect the Sex Ratio in Fruit Flies? (When the Benchmark Universe Proportion is Known, Is the Proportion of the Binomial Population Affected by Any of the Treatments?) (Program “4treat”)

+

Suppose that, instead of experimenting with just one type of radiation treatment on the flies (as in Example 15-1), you try four different treatments, which we shall label A, B, C, and D. Treatment A produces fourteen males and six females, but treatments B, C, and D produce ten, eleven, and ten males, respectively. It is immediately obvious that there is no reason to think that treatment B, C, or D affects the sex ratio. But what about treatment A?

+

A frequent and dangerous mistake made by young scientists is to scrounge around in the data for the most extreme result, and then treat it as if it were the only result. In the context of this example, it would be fallacious to think that the probability of the fourteen-males-to-six females split observed for treatment A is the same as the probability that we figured for a single experiment in Example 15-1. Instead, we must consider that our benchmark universe is composed of four sets of twenty trials, each trial having a 50-50 probability of being male. We can consider that our previous trials 1-4 in Example 15-1 constitute a single new trial, and each subsequent set of four previous trials constitute another new trial. We then ask how likely a new trial of our sets of twenty flips is to produce one set with fourteen or more of one or the other sex.

+

Let us make the procedure explicit, but using random numbers instead of coins this time:

+

Step 1. Let “1-5” = males, “6-0” = females

+

Step 2. Choose four groups of twenty numbers. If for any group there are 14 or more males, record “yes”; if 13 or less, record “no.”

+

Step 3. Repeat perhaps 1000 times.

+

Step 4. Calculate the proportion “yes” in the 1000 trials. This proportion estimates the probability that a fruit fly population with a proportion of 50 percent males will produce as many as 14 males in at least one of four samples of 20 flies.

+

We begin the trials with data as in Table 17-1. In two of the six simulation trials, at least one sample shows 14 or more males. Another trial shows fourteen or more females . Without even concerning ourselves about whether we should be looking at males or females, or just males, or needing to do more trials, we can see that it would be very common indeed to have one of four treatments show fourteen or more of one sex just by chance. This discovery clearly indicates that a result that would be fairly unusual (three in twenty-five) for a single sample alone is commonplace in one of four observed samples.

+

Table 17-1

+

Number of “Males” in Groups of 20 (Based on Random Numbers)

+

Trial   Group A   Group B   Group C   Group D   >= 14 or <= 6?
1       11        12        8         12        No
2       12        7         9         8         No
3       6         10        10        10        Yes
4       9         9         12        7         No
5       14        12        13        10        Yes
6       11        14        9         7         Yes
+

A key point of the RESAMPLING STATS program “4TREAT” is that each sample consists of four sets of 20 randomly generated hypothetical fruit flies. And if we consider 1000 trials, we will be examining 4000 sets of 20 fruit flies.

+

In each trial we GENERATE up to 4 random samples of 20 fruit flies, and for each, we count the number of males (“1”s) and then check whether that group has more than 13 of either sex (actually, more than 13 “1”s or less than 7 “1”s). If it does, then we change J to 1, which informs us that for this sample, at least 1 group of 20 fruit flies had results as unusual as the results from the fruit flies exposed to the four treatments.

+

After the 1000 runs are made, we count the number of trials where one sample had a group of fruit flies with 14 or more of either sex, and PRINT the results.

+ +
' Program file: "4treat.rss"
+
+REPEAT 1000
+    ' Do 1000 experiments.
+    COPY (0) j
+    ' j indicates whether we have obtained a trial group with 14 or more of
+    ' either sex. We start at "0" (= no).
+    REPEAT 4
+        ' Repeat the following steps 4 times to constitute 4 trial groups of 20
+        ' flies each.
+        GENERATE 20 1,2 a
+        ' Generate randomly 20 "1"s and "2"s and put them in a; let "1"
+
+        ' = male.
+        COUNT a =1 b
+        ' Count the number of males, put the result in b.
+        IF b >= 14
+            ' If the result is 14 or more males, then
+            COPY (1) j
+            ' Set the indicator to "1."
+        END
+        ' End the IF condition.
+        IF b <= 6
+            ' If the result is 6 or fewer males (the same as 14 or more females), then
+            COPY (1) j
+            ' Set the indicator to "1."
+        END
+        ' End the IF condition.
+    END
+    ' End the procedure for one group, go back and repeat until all four
+    ' groups have been done.
+    SCORE j z
+    ' j now tells us whether we got a result as extreme as that observed (j =
+    ' "1" if we did, j = "0" if not). We must keep track in z of this result
+    ' for each experiment.
+END
+' End one experiment, go back and repeat until all 1000 are complete.
+COUNT z =1 k
+' Count the number of experiments in which we had results as extreme as
+' those observed.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the result.
+
+' Note: The file "4treat" on the Resampling Stats software disk contains
+' this set of commands.
+

In one set of 1000 trials, there were more than 13 or less than 7 males 33 percent of the time — clearly not an unusual occurrence.
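For readers who want to follow along on a computer, here is a rough NumPy sketch of the same simulation; the variable names and the trial count are our own choices, and the printed proportion will vary from run to run.

import numpy as np

rng = np.random.default_rng()
n_trials = 10_000
extreme = 0
for _ in range(n_trials):
    # Four groups of 20 flies; count the males ("1"s) in each group.
    males = rng.integers(0, 2, size=(4, 20)).sum(axis=1)
    # Did any of the four groups show 14 or more of either sex?
    if ((males >= 14) | (males <= 6)).any():
        extreme += 1
print(extreme / n_trials)  # estimated probability of at least one extreme group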

+

Example 17-2: Do Four Psychological Treatments Differ in Effectiveness? (Do Several Two-Outcome Samples Differ Among Themselves in Their Proportions?) (Program “4treat1”)

+

Consider four different psychological treatments designed to rehabilitate juvenile delinquents. Instead of a numerical test score, there is only a “yes” or a “no” answer as to whether the juvenile has been rehabilitated or has gotten into trouble again. Label the treatments P, R, S, and T, each of which is administered to a separate group of twenty juvenile delinquents. The number of rehabilitations per group has been: P, 17; R, 10; S, 10; T, 7. Is it improbable that all four groups come from the same universe?

+

This problem is like the placebo vs. cancer-cure problem, but now there are more than two samples. It is also like the four-sample irradiated-fruit flies example (Example 17-1), except that now we are not asking whether any or some of the samples differ from a given universe (50-50 sex ratio in that case). Rather, we are now asking whether there are differences among the samples themselves. Please keep in mind that we are still dealing with two-outcome (yes-or-no, well-or-sick) problems. Later we shall take up problems that are similar except that the outcomes are “quantitative.”

+

If all four groups were drawn from the same universe, that universe has an estimated rehabilitation rate of (17 + 10 + 10 + 7) / (20 + 20 + 20 + 20) = 44/80 = 55/100, because the observed data taken as a whole constitute our best guess as to the nature of the universe from which they come — again, if they all come from the same universe. (Please think this matter over a bit, because it is important and subtle. It may help you to notice the absence of any other information about the universe from which they have all come, if they have come from the same universe.)

+

Therefore, select twenty two-digit numbers for each group from the random-number table, marking “yes” for each number “1-55” and “no” for each number “56-100.” Conduct a number of such trials. Then count the proportion of times that the difference between the highest and lowest groups is larger than the widest observed difference, the difference between P and T (17-7 = 10). In Table 17-2, none of the first six trials shows anywhere near as large a difference as the observed range of 10, suggesting that it would be rare for four treatments that are “really” similar to show so great a difference. There is thus reason to believe that P and T differ in their effects.

+

Table 17-2

+

Results of Six Random Trials for Problem “Delinquents”

Trial | P | R | S | T | Largest Minus Smallest
1 | 11 | 9 | 8 | 12 | 4
2 | 10 | 10 | 12 | 12 | 2
3 | 9 | 12 | 8 | 12 | 4
4 | 9 | 11 | 12 | 10 | 3
5 | 10 | 10 | 11 | 12 | 2
6 | 11 | 11 | 9 | 11 | 2

The strategy of the RESAMPLING STATS solution to “Delinquents” is similar to the strategy for previous problems in this chapter. The benchmark (null) hypothesis is that the treatments do not differ in their effects observed, and we estimate the probability that the observed results would occur by chance using the benchmark universe. The only new twist is that we must instruct the computer to find the groups with the highest and the lowest numbers of rehabilitations.

+

Using RESAMPLING STATS we GENERATE four “treatments,” each represented by 20 numbers, each number randomly selected between 1 and 100. We let 1-55 = success, 56-100 = failure. Follow along in the program for the rest of the procedure:

+ +
' Program file: "4treat1.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 20 1,100 a
+    ' The first treatment group, where "1-55" = success, "56-100" = failure
+    GENERATE 20 1,100 b
+    ' The second group
+    GENERATE 20 1,100 c
+    ' The third group
+    GENERATE 20 1,100 d
+    ' The fourth group
+    COUNT a <=55 aa
+    ' Count the first group's successes
+    COUNT b <=55 bb
+    ' Same for second, third & fourth groups
+    COUNT c <=55 cc
+    COUNT d <=55 dd
+    SUBTRACT aa bb ab
+    ' Now find all the pairwise differences in successes among the groups
+    SUBTRACT aa cc ac
+    SUBTRACT aa dd ad
+    SUBTRACT bb cc bc
+    SUBTRACT bb dd bd
+    SUBTRACT cc dd cd
+    CONCAT ab ac ad bc bd cd e
+    ' Concatenate, or join, all the differences in a single vector e
+    ABS e f
+    ' Since we are interested only in the magnitude of the difference, not its
+    ' direction, we take the ABSolute value of all the differences.
+    MAX f g
+    ' Find the largest of all the differences
+    SCORE g z
+    ' Keep score of the largest
+END
+' End a trial, go back and repeat until all 1000 are complete.
+COUNT z >=10 k
+' How many of the trials yielded a maximum difference greater than the
+' observed maximum difference?
+DIVIDE k 1000 kk
+' Convert to a proportion
+PRINT kk
+' Note: The file "4treat1" on the Resampling Stats software disk contains
+' this set of commands.
+

One percent of the experiments with randomly generated treatments from a common success rate of .55 produced differences in excess of the observed maximum difference (10).
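As a cross-check, a small Python translation of the same test might look like the sketch below; the names and the number of trials are our own, not part of the original program.

import numpy as np
from itertools import combinations

rng = np.random.default_rng()
n_trials = 10_000
max_diffs = np.empty(n_trials)
for i in range(n_trials):
    # Four groups of 20, each subject rehabilitated with probability 0.55.
    counts = (rng.random((4, 20)) < 0.55).sum(axis=1)
    # Largest absolute difference among the six pairs of groups.
    max_diffs[i] = max(abs(int(x) - int(y)) for x, y in combinations(counts, 2))
print(np.mean(max_diffs >= 10))  # proportion at least as large as the observed difference of 10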

+

An alternative approach to this problem would be to deal with each result’s departure from the mean, rather than the largest difference among the pairs. Once again, we want to deal with absolute departures, since we are interested only in magnitude of difference. We could take the absolute value of the differences, as above, but we will try something different here. Squaring the differences also renders them all positive: this is a common approach in statistics.

+

The first step is to examine our data and calculate this measure: The mean is 11, the differences are 6, 1, 1, and 4, the squared differences are 36, 1, 1, and 16, and their sum is 54. Our experiment will be, as before, to constitute four groups of 20 at random from a universe with a 55 percent rehabilitation rate. We then calculate this same measure for the random groups. If it is frequently larger than 54, then we conclude that a uniform cure rate of 55 percent could easily have produced the observed results. The program that follows also GENERATES the four treatments by using a REPEAT loop, rather than spelling out the GENERATE command 4 times as above. In RESAMPLING STATS:

+ +
' Program file: "testing_counts_2_02.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    REPEAT 4
+        ' Repeat the following steps 4 times to constitute 4 groups of 20 and
+        ' count their rehabilitation rates.
+        GENERATE 20 1,100 a
+        ' Randomly generate 20 numbers between 1 and 100 and put them in a; let
+        ' 1-55 = rehabilitation, 56-100 no rehab.
+        COUNT a between 1 55 b
+        ' Count the number of rehabs, put the result in b.
+        SCORE b w
+        ' Keep track of the 4 rehab rates for the group of 20.
+    END
+    ' End the procedure for one group of 20, go back and repeat until all 4
+    ' are done.
+    MEAN w x
+    ' Calculate the mean
+    SUMSQRDEV w x y
+    ' Find the sum of squared deviations between group rehab rates (w) and the
+    ' overall rate (x).
+    SCORE y z
+    ' Keep track of the result for each trial.
+    CLEAR w
+    ' Erase the contents of w to prepare for the next trial.
+END
+' End one experiment, go back and repeat until all 1000 are complete.
+HISTOGRAM z
+' Produce a histogram of trial results.
+

[Histogram of trial results. Title: 4 Treatments. Horizontal axis: sum of squared differences.]

+

From this histogram, we see that in only 1 percent of the cases did our trial sum of squared differences equal or exceed 54, confirming our conclusion that this is an unusual result. We can have RESAMPLING STATS calculate this proportion:

+ +
' Program file: "4treat2.rss"
+
+COUNT z >= 54 k
+' Determine how many trials produced differences as great as those
+' observed.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the results.
+
+' Note: The file "4treat2" on the Resampling Stats software disk contains
+' this set of commands.
+
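A compact Python sketch of the sum-of-squared-deviations version follows; it is our own translation under the same assumptions (a 55 percent rehabilitation rate, four groups of 20), not the program from the text.

import numpy as np

rng = np.random.default_rng()
n_trials = 10_000
sum_sq = np.empty(n_trials)
for i in range(n_trials):
    # Four random groups of 20 drawn from a universe with a 55 percent rehab rate.
    counts = (rng.random((4, 20)) < 0.55).sum(axis=1)
    # Sum of squared deviations of the group counts from their own mean.
    sum_sq[i] = np.sum((counts - counts.mean()) ** 2)
print(np.mean(sum_sq >= 54))  # proportion at least as extreme as the observed 54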

The conventional way to approach this problem would be with what is known as a “chi-square test.”
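For comparison, the conventional calculation can be run with SciPy; this is a sketch we have added, and the exact statistic and p-value depend on the correction options chosen.

import numpy as np
from scipy.stats import chi2_contingency

rehabs = np.array([17, 10, 10, 7])        # treatments P, R, S, T
table = np.vstack([rehabs, 20 - rehabs])  # 2 x 4 table of "yes" and "no" counts
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)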

+

Example 17-3: Three-way Comparison

+

In a national election poll of 750 respondents in May, 1992, George Bush got 36 percent of the preferences (270 voters), Ross Perot got 30 percent (225 voters), and Bill Clinton got 28 percent (210 voters) ( Wall Street Journal, October 29, 1992, A16). Assuming that the poll was representative of actual voting, how likely is it that Bush was actually behind and just came out ahead in this poll by chance? Or to put it differently, what was the probability that Bush actually had a plurality of support, rather than that his apparent advantage was a matter of sampling variability? We test this by constructing a universe in which Bush is slightly behind (in practice, just equal), and then drawing samples to see how likely it is that those samples will show Bush ahead.

+

We must first find that universe — among all possible universes that yield a conclusion contrary to the conclusion shown by the data, and one in which we are interested — that has the highest probability of producing the observed sample. With a two-person race the universe is obvious: a universe that is evenly split except for a single vote against “our” candidate who is now in the lead, i.e. in practice a 50-50 universe. In that simple case we then ask the probability that that universe would produce a sample as far out in the direction of the conclusion drawn from the observed sample as the observed sample.

+

With a three-person race, however, the decision is not obvious (and if this problem becomes too murky for you, skip over it; it is included here more for fun than anything else). And there is no standard method for handling this problem in conventional statistics (a solution in terms of a confidence interval was first offered in 1992, and that one is very complicated and not very satisfactory to me). But the sort of thinking that we must labor to accomplish is also required for any conventional solution; the difficulty is inherent in the problem, rather than being inherent in resampling, and resampling will be at least as simple and understandable as any formulaic approach.

+

The relevant universe is (or so I think) a universe that is 35 Bush — 35 Perot — 30 Clinton (for a race where the poll indicates a 36-30-28 split); the 35-35-30 universe is of interest because it is the universe that is closest to the observed sample that does not provide a win for Bush (leaving out the “undecideds” for convenience); it is roughly analogous to the 50-50 split in the two-person race, though a clear-cut argument would require a lot more discussion. A universe that is split 34-34-32, or any of the other possible universes, is less likely to produce a 36-30-28 sample (such as was observed) than is a 35-35-30 universe, I believe, but that is a checkable matter. (In technical terms, it might be a “maximum likelihood universe” that we are looking for.)

+

We might also try a 36-36-28 universe to see if that produces a result very different than the 35-35-30 universe.

+

Among those universes where Bush is behind (or equal), a universe that is split 50-50-0 (with just one extra vote for the closest opponent to Bush) would be the most likely to produce a 6 percent difference between the top two candidates by chance, but we are not prepared to believe that the voters are split in such a fashion. This assumption shows that we are bringing some judgments to bear from outside the observed data.

+

For now, the point is not how to discover the appropriate benchmark hypothesis, but rather its criterion — which is, I repeat, that universe (among all possible universes) that yields a conclusion contrary to the conclusion shown by the data (and in which we are interested) and that (among such universes that yield such a conclusion) has the highest probability of producing the observed sample.

+

Let’s go through the logic again: 1) Bush apparently has a 6 percent lead over the second-place candidate. 2) We ask if the second-place candidate might be ahead if all voters were polled. We test that by setting up a universe in which the second-place candidate is infinitesimally ahead (in practice, we make the two top candidates equal in our hypothetical universe). And we make the third-place candidate somewhere close to the top two candidates. 3) We then draw samples from this universe and observe how often the result is a 6 percent lead for the top candidate (who starts off just below equal in the universe).

+

From here on, the procedure is straightforward: Determine how likely that universe is to produce a sample as far (or further) away in the direction of “our” candidate winning. (One could do something like this even if the candidate of interest were not now in the lead.)

+

This problem teaches again that one must think explicitly about the choice of a benchmark hypothesis. The grounds for the choice of the benchmark hypothesis should precede the program, or should be included as an extended comment within the program.

+

This program embodies the previous line of thought.

+ +
' Program file: "testing_counts_2_04.rss"
+
+URN 35#1 35#2 30#3 univ
+' 1 = Bush, 2 = Perot, 3 = Clinton
+REPEAT 1000
+    SAMPLE 750 univ samp
+    ' Take a sample of 750 votes
+    COUNT samp =1 bush
+    ' Count the Bush voters, etc.
+    COUNT samp =2 pero
+    ' Perot voters
+    COUNT samp =3 clin
+    ' Clinton voters
+    CONCAT pero clin others
+    ' Join Perot & Clinton votes
+    MAX others second
+    ' Find the larger of the other two
+    SUBTRACT bush second d
+    ' Find Bush's margin over 2nd
+    SCORE d z
+END
+HISTOGRAM z
+COUNT z >=46 m
+' Compare to the observed margin in the sample of 750 corresponding to a 6
+' percent margin by Bush over 2nd place finisher (rounded)
+DIVIDE m 1000 mm
+PRINT mm
+
+
+

+
Figure 23.1: Samples of 750 Voters:
+
+
+

The result is — Bush’s margin over 2nd (mm) = 0.018.

+

When we run this program with a 36-36-28 split, we also get a similar result — 2.6 percent. That is, the analysis shows a probability of only 2.6 percent that Bush would score a 6 percentage point “victory” in the sample, by chance, if the universe were split as specified. So Bush could feel reasonably confident that at the time the poll was taken, he was ahead of the other two candidates.
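Here is a rough NumPy sketch of the three-way test, with our own variable names and with the 35-35-30 benchmark universe described above:

import numpy as np

rng = np.random.default_rng()
n_trials = 10_000
margins = np.empty(n_trials)
for i in range(n_trials):
    # A poll of 750 drawn from a universe split 35% Bush, 35% Perot, 30% Clinton.
    sample = rng.choice([1, 2, 3], size=750, p=[0.35, 0.35, 0.30])
    bush = np.sum(sample == 1)
    second = max(np.sum(sample == 2), np.sum(sample == 3))
    margins[i] = bush - second
# Observed margin in the poll: 270 - 225 = 45 votes, i.e. 6 percentage points.
print(np.mean(margins >= 45))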

+
+
+

23.2 Paired Comparisons With Counted Data

+

Example 17-4: The Pig Rations Again, But Comparing Pairs of Pigs (Paired-Comparison Test) (Program “Pigs2”)

+

To illustrate how several different procedures can reasonably be used to deal with a given problem, here is another way to decide whether pig ration A is “really” better: We can assume that the order of the pig scores listed within each ration group is random — perhaps the order of the stalls the pigs were kept in, or their alphabetical-name order, or any other random order not related to their weights. Match the first pig eating ration A with the first pig eating ration B, and also match the second pigs, the third pigs, and so forth. Then count the number of matched pairs on which ration A does better. On nine of twelve pairings ration A does better, that is, 31.0 > 26.0, 34.0 > 24.0, and so forth.

+

Now we can ask: If the two rations are equally good, how often will one ration exceed the other nine or more times out of twelve, just by chance? This is the same as asking how often either heads or tails will come up nine or more times in twelve tosses. (This is a “two-tailed” test because, as far as we know, either ration may be as good as or better than the other.) Once we have decided to treat the problem in this manner, it is quite similar to Example 15-1 (the first fruitfly irradiation problem). We ask how likely it is that the outcome will be as far away as the observed outcome (9 “heads” of 12) from 6 of 12 (which is what we expect to get by chance in this case if the two rations are similar).

+

So we conduct perhaps fifty trials as in Table 17-3, where an asterisk denotes nine or more heads or tails.

+

Step 1. Let odd numbers equal “A better” and even numbers equal “B better.”

+

Step 2. Examine 12 random digits and check whether 9 or more, or 3 or less, are odd. If so, record “yes,” otherwise “no.”

+

Step 3. Repeat step 2 fifty times.

+

Step 4. Compute the proportion “yes,” which estimates the probability sought.

+

The results are shown in Table 17-3.

+

In 8 of 50 simulation trials, one or the other ration had nine or more tosses in its favor. Therefore, we estimate the probability to be .16 (eight of fifty) that samples this different would be generated by chance if the samples came from the same universe.

+

Table 17-3

+

Results From Fifty Simulation Trials Of The Problem “Pigs2”

Trial | "Heads" or "Odds" (Ration A) | "Tails" or "Evens" (Ration B) | Trial | "Heads" or "Odds" (Ration A) | "Tails" or "Evens" (Ration B)
1 | 6 | 6 | 26 | 6 | 6
2 | 4 | 8 | 27 | 5 | 7
3 | 6 | 6 | 28 | 7 | 5
4 | 7 | 5 | 29 | 4 | 8
* 5 | 3 | 9 | 30 | 6 | 6
6 | 5 | 7 | * 31 | 9 | 3
7 | 8 | 4 | * 32 | 2 | 10
8 | 6 | 6 | 33 | 7 | 5
9 | 7 | 5 | 34 | 5 | 7
* 10 | 9 | 3 | 35 | 6 | 6
11 | 7 | 5 | 36 | 8 | 4
* 12 | 3 | 9 | 37 | 6 | 6
13 | 5 | 7 | 38 | 4 | 8
14 | 6 | 6 | 39 | 5 | 7
15 | 6 | 6 | 40 | 8 | 4
16 | 8 | 4 | 41 | 5 | 7
17 | 5 | 7 | 42 | 6 | 6
* 18 | 9 | 3 | 43 | 5 | 7
19 | 6 | 6 | 44 | 7 | 5
20 | 7 | 5 | 45 | 6 | 6
21 | 4 | 8 | 46 | 4 | 8
* 22 | 10 | 2 | 47 | 5 | 7
23 | 6 | 6 | 48 | 5 | 7
24 | 5 | 7 | 49 | 8 | 4
* 25 | 3 | 9 | 50 | 7 | 5

Now for a RESAMPLING STATS program and results. “Pigs2” is different from “Pigs1” in that it compares the weight-gain results of pairs of pigs, instead of simply looking at the rankings for weight gains.

+

The key to “Pigs2” is the GENERATE statement. If we assume that ration A does not have an effect on weight gain (which is the “benchmark” or “null” hypothesis), then the results of the actual experiment would be no different than if we randomly GENERATE numbers “1” and “2” and treat a “1” as a larger weight gain for the ration A pig, and a “2” as a larger weight gain for the ration B pig. Both events have a .5 chance of occurring for each pair of pigs because if the rations had no effect on weight gain (the null hypothesis), ration A pigs would have larger weight gains about half of the time. The next step is to COUNT the number of times that the weight gains of one group (call it the group fed with ration A) were larger than the weight gains of the other (call it the group fed with ration B). The complete program follows:

+ +
' Program file: "pigs2.rss"
+
+REPEAT 1000
+    ' Do 1000 trials
+    GENERATE 12 1,2 a
+    ' Generate randomly 12 "1"s and "2"s, put them in a. This represents 12
+    ' "pairings" where "1" = ration a "wins," "2" = ration b = "wins."
+    COUNT a =1 b
+    ' Count the number of "pairings" where ration a won, put the result in b.
+    SCORE b z
+    ' Keep track of the result in z
+END
+' End the trial, go back and repeat until all 1000 trials are complete.
+COUNT z >= 9 j
+' Determine how often we got 9 or more "wins" for ration a.
+COUNT z <= 3 k
+' Determine how often we got 3 or fewer "wins" for ration a.
+ADD j k m
+' Add the two together
+DIVIDE m 1000 mm
+' Convert to a proportion
+PRINT mm
+' Print the result.
+
+' Note: The file "pigs2" on the Resampling Stats software disk contains
+' this set of commands.
+
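The same paired-comparison test can be sketched in Python; this is our own rough translation (the coin flips become random 0/1 draws), not the book's ported code.

import numpy as np

rng = np.random.default_rng()
n_trials = 10_000
# For each trial, 12 pairings; under the null hypothesis each pairing is a fair coin flip.
wins_for_a = rng.integers(0, 2, size=(n_trials, 12)).sum(axis=1)
# Two-tailed: 9 or more wins for either ration.
print(np.mean((wins_for_a >= 9) | (wins_for_a <= 3)))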

Notice how we proceeded in Examples 15-6 and 17-4. The data were originally quantitative — weight gains in pounds for each pig. But for simplicity we classified the data into simpler counted-data formats. The first format (Example 15-6) was a rank order, from highest to lowest. The second format (Example 17-4) was simply higher-lower, obtained by randomly pairing the observations (using alphabetical letter, or pig’s stall number, or whatever was the cause of the order in which the data were presented to be random). Classifying the data in either of these ways loses some information and makes the subsequent tests somewhat cruder than more refined analysis could provide (as we shall see in the next chapter), but the loss of efficiency is not crucial in many such cases. We shall see how to deal directly with the quantitative data in Chapter 24.

+

Example 17-5: Merged Firms Compared to Two Non-Merged Groups

+

In a study by Simon, Mokhtari, and Simon (1996), a set of 33 advertising agencies that merged over a period of years were each compared to entities within two groups (each also of 33 firms) that did not merge; one non-merging group contained firms of roughly the same size as the final merged entities, and the other non-merging group contained pairs of non-merging firms whose total size was roughly the same as the total size of the merging entities.

+

The idea behind the matching was that each pair of merged firms was compared against

+
1. a pair of contemporaneous firms that were roughly the same size as the merging firms before the merger, and

2. a single firm that was roughly the same size as the merged entity after the merger.

    Here (Table 17-4) are the data (provided by the authors):

    +

    Table 17-4

    +

    Revenue Growth In Year 1 Following Merger

    +

Set # | Merged | Match1 | Match2
1 | -0.20000 | 0.02564 | 0.000000
2 | -0.34831 | -0.12500 | 0.080460
3 | 0.07514 | 0.06322 | -0.023121
4 | 0.12613 | -0.04199 | 0.164671
5 | -0.10169 | 0.08000 | 0.277778
6 | 0.03784 | 0.14907 | 0.430168
7 | 0.11616 | 0.15183 | 0.142857
8 | -0.09836 | 0.03774 | 0.040000
9 | 0.02137 | 0.07661 | 0.111111
10 | -0.01711 | 0.28434 | 0.189139
11 | -0.36478 | 0.13907 | 0.038869
12 | 0.08814 | 0.03874 | 0.094792
13 | -0.26316 | 0.05641 | 0.045139
14 | -0.04938 | 0.05371 | 0.008333
15 | 0.01146 | 0.04805 | 0.094817
16 | 0.00975 | 0.19816 | 0.060929
17 | 0.07143 | 0.42083 | -0.024823
18 | 0.00183 | 0.07432 | 0.053191
19 | 0.00482 | -0.00707 | 0.050083
20 | -0.05399 | 0.17152 | 0.109524
21 | 0.02270 | 0.02788 | -0.022456
22 | 0.05984 | 0.04857 | 0.167064
23 | -0.05987 | 0.02643 | 0.020676
24 | -0.08861 | -0.05927 | 0.077067
25 | -0.02483 | -0.01839 | 0.059633
26 | 0.07643 | 0.01262 | 0.034635
27 | -0.00170 | -0.04549 | 0.053571
28 | -0.21975 | 0.34309 | 0.042789
29 | 0.38237 | 0.22105 | 0.115773
30 | -0.00676 | 0.25494 | 0.237047
31 | -0.16298 | 0.01124 | 0.190476
32 | 0.19182 | 0.15048 | 0.151994
33 | 0.06116 | 0.17045 | 0.093525

    Comparisons were made in several years before and after the mergings to see whether the merged entities did better or worse than the non-merging entities they were matched with by the researchers, but for simplicity we may focus on just one of the more important years in which they were compared — say, the revenue growth rates in the year after the merger.

    +

    Here are those average revenue growth rates for the three groups:

    +

    Year’s rev. growth

MERGED | -0.0213
MATCH 1 | 0.092085
MATCH 2 | 0.095931

    We could do a general test to determine whether there are differences among the means of the three groups, as was done in the “Differences Among 4 Pig Rations” problem (Section 24.0.1). However, we note that there may be considerable variation from one matched set to another — variation which can obscure the overall results if we resample from a large general bucket.

    +

    Therefore, we use the following resampling procedure that maintains the separation between matched sets by converting each observation into a rank (1, 2 or 3) within the matched set.

    +

    Here (Table 17-5) are those ranks:

    +

    Table 17-5

    +

    Ranked Within Matched Set (1 = worst, 3 = best)

    +

Set # | Merged | Match1 | Match2
1 | 1 | 3 | 2
2 | 1 | 2 | 3
3 | 3 | 2 | 1
4 | 2 | 1 | 3
5 | 1 | 2 | 3
6 | 1 | 3 | 2
7 | 1 | 3 | 2
8 | 1 | 2 | 3
9 | 1 | 2 | 3
10 | 1 | 2 | 3
11 | 1 | 3 | 2
12 | 2 | 1 | 3
13 | 1 | 3 | 2
14 | 1 | 3 | 2
15 | 1 | 2 | 3
16 | 1 | 3 | 2
17 | 2 | 3 | 1
18 | 1 | 3 | 2
19 | 2 | 1 | 3
20 | 1 | 3 | 2
21 | 2 | 1 | 3
22 | 2 | 1 | 3
23 | 1 | 3 | 2
24 | 1 | 2 | 3
25 | 1 | 2 | 3
26 | 3 | 1 | 2
27 | 2 | 1 | 3
28 | 1 | 3 | 2
29 | 3 | 2 | 1
30 | 1 | 3 | 2
31 | 1 | 2 | 3
32 | 3 | 1 | 2
33 | 1 | 3 | 2

These are the average ranks for the three groups (1 = worst, 3 = best):

MERGED | 1.45
MATCH 1 | 2.18
MATCH 2 | 2.36

    Is it possible that the merged group received such a low (poor) average ranking just by chance? The null hypothesis is that the ranks within each set were assigned randomly, and that “merged” came out so poorly just by chance. The following procedure simulates random assignment of ranks to the “merged” group:

    +
1. Randomly select 33 integers between “1” and “3” (inclusive).

2. Find the average rank & record.

3. Repeat steps 1 and 2, say, 1000 times.

4. Find out how often the average rank is as low as 1.45.
+

Here’s a RESAMPLING STATS program (“merge.sta”):

+ +
' Program file: "testing_counts_2_06.rss"
+
+REPEAT 1000
+    GENERATE 33 (1 2 3) ranks
+    MEAN ranks ranksum
+    SCORE ranksum z
+END
+HISTOGRAM z
+COUNT z <=1.45 k
+DIVIDE k 1000 kk
+PRINT kk
+

+

Result: kk = 0

+

Interpretation: 1000 random selections of 33 ranks never produced an average as low as the observed average. Therefore we rule out chance as an explanation for the poor ranking of the merged firms.
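A short NumPy sketch of the same test follows; it is our own translation, and the observed average rank of 1.45 is taken from the table above.

import numpy as np

rng = np.random.default_rng()
n_trials = 10_000
# For each trial, draw 33 ranks at random from {1, 2, 3} and average them.
mean_ranks = rng.integers(1, 4, size=(n_trials, 33)).mean(axis=1)
print(np.mean(mean_ranks <= 1.45))  # how often chance alone gives an average rank of 1.45 or lower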

+

Exactly the same technique might be used in experimental medical studies wherein subjects in an experimental group are matched with two different entities that receive placebos or control treatments.

+

For example, there have been several recent three-way tests of treatments for depression: drug therapy versus cognitive therapy versus combined drug and cognitive therapy. If we are interested in the combined drug-therapy treatment in particular, comparing it to standard existing treatments, we can proceed in the same fashion as in the merger problem.

+

We might just as well consider the real data from the merger as hypothetical data for a proposed test in 33 triplets of people that have been matched within triplet by sex, age, and years of education. The three treatments were to be chosen randomly within each triplet.

+

Assume that we now switch scales from the merger data, so that #1 = best and #3 = worst, and that the outcomes on a series of tests were ranked from best (#1) to worst (#3) within each triplet. Assume that the combined drug-and-therapy regime has the best average rank. How sure can we be that the observed result would not occur by chance? Here are the data from the merger study, seen here as Table 17-5-b:

+

Table 17-5-b

+

Ranked Therapies Within Matched Patient Triplets

+

(hypothetical data identical to merger data) (1 = best, 3 = worst)

+

Triplet # | Combined | Drug Only | Therapy Only
1 | 1 | 3 | 2
2 | 1 | 2 | 3
3 | 3 | 2 | 1
4 | 2 | 1 | 3
5 | 1 | 2 | 3
6 | 1 | 3 | 2
7 | 1 | 3 | 2
8 | 1 | 2 | 3
9 | 1 | 2 | 3
10 | 1 | 2 | 3
11 | 1 | 3 | 2
12 | 2 | 1 | 3
13 | 1 | 3 | 2
14 | 1 | 3 | 2
15 | 1 | 2 | 3
16 | 1 | 3 | 2
17 | 2 | 3 | 1
18 | 1 | 3 | 2
19 | 2 | 1 | 3
20 | 1 | 3 | 2
21 | 2 | 1 | 3
22 | 2 | 1 | 3
23 | 1 | 3 | 2
24 | 1 | 2 | 3
25 | 1 | 2 | 3
26 | 3 | 1 | 2
27 | 2 | 1 | 3
28 | 1 | 3 | 2
29 | 3 | 2 | 1
30 | 1 | 3 | 2
31 | 1 | 2 | 3
32 | 3 | 1 | 2
33 | 1 | 3 | 2

These are the average ranks for the three groups (“1” = best, “3” = worst):

Combined | 1.45
Drug | 2.18
Therapy | 2.36

In these hypothetical data, the average rank for the drug and therapy regime is 1.45. Is it likely that the regimes do not “really” differ with respect to effectiveness, and that the drug and therapy regime came out with the best rank just by the luck of the draw? We test by asking, “If there is no difference, what is the probability that the treatment of interest will get an average rank this good, just by chance?”

+

We proceed exactly as with the solution for the merger problem (see above).

+

In the above problems, we did not concern ourselves with chance outcomes for the other therapies (or the matched firms) because they were not our primary focus. If, in actual fact, one of them had done exceptionally well or poorly, we would have paid little notice because their performance was not the object of the study. We needed, therefore, only to guard against the possibility that chance good luck for our therapy of interest might have led us to a hasty conclusion.

+

Suppose now that we are not interested primarily in the combined drug-therapy treatment, and that we have three treatments being tested, all on equal footing. It is no longer sufficient to ask the question “What is the probability that the combined therapy could come out this well just by chance?” We must now ask “What is the probability that any of the therapies could have come out this well by chance?” (Perhaps you can guess that this probability will be higher than the probability that our chosen therapy will do so well by chance.)

+

Here is a resampling procedure that will answer this question:

+
1. Put the numbers “1”, “2” and “3” (corresponding to ranks) in a bucket.

2. Shuffle the numbers and deal them out to three locations that correspond to treatments (call the locations “t1,” “t2,” and “t3”).

3. Repeat step 2 another 32 times (for a total of 33 repetitions, for 33 matched triplets).

4. Find the average rank for each location (treatment).

5. Record the minimum (best) score.

6. Repeat steps 2 through 5, say, 1000 times.

7. Find out how often the minimum average rank for any treatment is as low as 1.45.
+ +
' Program file: "testing_counts_2_07.rss"
+
+NUMBERS (1 2 3) a
+' Step 1 above
+REPEAT 1000
+    ' Step 6
+    REPEAT 33
+        ' Step 3
+        SHUFFLE a a
+        ' Step 2
+        SCORE a t1 t2 t3
+        ' Step 2
+    END
+    ' Step 3
+    MEAN t1 tt1
+    ' Step 4
+    MEAN t2 tt2
+    MEAN t3 tt3
+    CLEAR t1
+    ' Clear the vectors where we've stored the ranks for this trial (must do
+    ' this whenever we have a SCORE statement that's part of a "nested" repeat
+    ' loop)
+    CLEAR t2
+    CLEAR t3
+    CONCAT tt1 tt2 tt3 b
+    ' Part of step 5
+    MIN b bb
+    ' Part of step 5
+    SCORE bb z
+    ' Part of step 5
+END
+' Step 6
+HISTOGRAM z
+COUNT z <=1.45 k
+' Step 7
+DIVIDE k 1000 kk
+PRINT kk
+

Result: kk = 0

Interpretation: 1000 random shufflings of 33 ranks, apportioned to three “treatments,” never produced an average as low as the observed average for the best of the three treatments. Therefore we rule out chance as an explanation for the success of the combined therapy.
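A rough Python rendering of the same procedure, with our own names; here the ranks within each triplet are a random permutation of 1, 2, 3, so the three treatment "locations" are kept in step.

import numpy as np

rng = np.random.default_rng()
n_trials = 10_000
min_means = np.empty(n_trials)
for i in range(n_trials):
    # For each of the 33 triplets, deal the ranks 1, 2, 3 at random
    # to the three treatment "locations".
    ranks = np.array([rng.permutation([1, 2, 3]) for _ in range(33)])
    min_means[i] = ranks.mean(axis=0).min()  # best (lowest) average rank in this trial
print(np.mean(min_means <= 1.45))  # how often the best of the three looks this good by chance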

+

An interesting feature of the mergers (or depression treatment) problem is that it would be hard to find a conventional test that would handle this three-way comparison in an efficient manner. Certainly it would be impossible to find a test that does not require formulae and tables that only a talented professional statistician could manage satisfactorily, and even s/he is not likely to fully understand those formulaic procedures.

+

+


+
+
+

23.3 Technical note

+

Some of the tests introduced in this chapter are similar to standard nonparametric rank and sign tests. They differ less in the structure of the test statistic than in the way in which significance is assessed (the comparison is to multiple simulations of a model based on the benchmark hypothesis, rather than to critical values calculated analytically).

+ + + +
+ +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/testing_measured.html b/r-book/testing_measured.html new file mode 100644 index 00000000..4f6a7e13 --- /dev/null +++ b/r-book/testing_measured.html @@ -0,0 +1,1617 @@ + + + + + + + + + +Resampling statistics - 24  The Statistics of Hypothesis-Testing With Measured Data + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

24  The Statistics of Hypothesis-Testing With Measured Data

+
+ + + +
+ + + + +
+ + +
+ +
+
+
+ +
+
+Draft page partially ported from original PDF +
+
+
+

This page is an automated and partial import from the original second-edition PDF.

+

We are in the process of updating this page for formatting, and porting any code from the original RESAMPLING-STATS language to Python and R.

+

Feel free to read this version for the sense, but expect there to be multiple issues with formatting.

+

We will remove this warning when the page has adequate formatting, and we have ported the code.

+
+
+ +

Chapter 21 and Chapter 19 discussed testing a hypothesis with data that either arrive in dichotomized (yes-no) form, or come as data in situations where it is convenient to dichotomize. We next consider hypothesis testing using measured data. Conventional statistical practice employs such devices as the “t-test” and “analysis of variance.” In contrast to those complex devices, the resampling method does not differ greatly from what has been discussed in previous chapters.

+
+

24.0.1 Example: The Pig Rations Still Once Again, Using Measured Data

+

(Testing for the Difference Between Means of Two Equal-Sized Samples of Measured-Data Observations) (Program “Pigs3”)

+

Let us now treat the pig-food problem without converting the quantitative data into qualitative data, because a conversion always loses information.

+

The term “lose information” can be understood intuitively. Consider two sets of three sacks of corn. Set A includes sacks containing, respectively, one pound, two pounds, and three pounds. Set B includes sacks of one pound, two pounds, and a hundred pounds. If we rank the sacks by weight, the two sets can no longer be distinguished. The one-pound and two-pound sacks have ranks one and two in both cases, and their relative places in their sets are the same. But if we know not only that the one-pound sack is the smallest of its set and the three-pound or hundred-pound sack is the largest, but also that the largest sack is three pounds (or a hundred pounds), we have more information about a set than if we only know the ranks of its sacks.

+

Rank data are also known as “ordinal” data, whereas data measured in (say) pounds are known as “cardinal” data. Even though converting from cardinal (measured) to ordinal (ranked) data loses information, the conversion may increase convenience, and may therefore be worth doing in some cases.

+

+

We begin a measured-data procedure by noting that if the two pig foods are the same, then each of the observed weight gains came from the same benchmark universe . This is the basic tactic in our statistical strategy. That is, if the two foods came from the same universe, our best guess about the composition of that universe is that it includes weight gains just like the twenty-four we have observed , and in the same proportions, because that is all the information that we have about the universe; this is the bootstrap method. Since ours is (by definition) a sample from an infinite (or at least, a very large) universe of possible weight gains, we assume that there are many weight gains in the universe just like the ones we have observed, in the same proportion as we have observed them. For example, we assume that 2/24 of the universe is composed of 34-pound weight gains, as seen in Figure 18-1:

+

Figure 18-1

+

We recognize, of course, that weight gains other than the exact ones we observed certainly would occur in repeated experiments. And if we thought it reasonable to do so, we could assume that the “distribution” of the weight gains would follow a regular “smooth” shape such as Figure 18-2. But deciding just how to draw Figure 18-2 from the data in Figure 18-1 requires that we make arbitrary assumptions about unknown conditions. And if we were to draw Figure 18-2 in a form that would be sufficiently regular for conventional mathematical analysis, we might have to make some very strong assumptions going far beyond the observed data.

+

Drawing a smooth curve such as Figure 18-2 from the raw data in Figure 18-1 might be satisfactory — if done with wisdom and good judgment. But there is no necessity to draw such a smooth curve, in this case or in most cases. We can proceed by assuming simply that the benchmark universe — the universe to which we shall compare our samples, conventionally called the “null” or “hypothetical” universe — is composed only of elements similar to the observations we have in hand. We thereby lose no efficiency and avoid making unsound assumptions.

[Figure 18-2: smoothed distribution of weight gains; horizontal axis: size of weight gain, mean = 30.2; vertical axis: relative probability.]

+

To carry out our procedure in practice: 1) Write down each of the twenty-four weight gains on a blank index card. We then have one card each for 31, 34, 29, 26, and so on. 2) Shuffle the twenty-four cards thoroughly, and pick one card. 3) Record the weight gain, and replace the card. (Recall that we are treating the weight gains as if they come from an infinite universe — that is, as if the probability of selecting any amount is the same no matter which others are selected randomly. Another way to say this is to state that each selection is independent of each other selection. If we did not replace the card before selecting the next weight gain, the selections would no longer be independent. See Chapter 11 for further discussion of this issue.) 4) Repeat this process until you have made two sets of 12 observations. 5) Call the first hand “food A” and the second hand “food B.” Determine the average weight gain for the two hands, and record it as in Table 18-1. Repeat this procedure many times.

+

In operational steps:

+

Step 1. Write down each observed weight gain on a card, e.g. 31, 34, 29...

+

Step 2. Shuffle and deal a card.

+

Step 3. Record the weight and replace the card.

+

Step 4. Repeat steps 2 and 3 eleven more times; call this group A.

+

Step 5. Repeat steps 2-3 another twelve times; call this group B.

+

Step 6. Calculate the mean weight gain of each group.

+

Step 7. Subtract the mean of group A from the mean of group B and record. If larger (more positive) than 3.16 (the difference between the observed means) or more negative than -3.16, record “more.” Otherwise record “less.”

+

Step 8. Repeat this procedure perhaps fifty times, and calculate the proportion “more.” This estimates the probability sought.

+

In none of the first ten simulated trials did the difference in the means of the random hands exceed the observed difference (3.16 pounds, in the top line in the table) between foods A and B. (The difference between group totals tells the same story and is faster, requiring no division calculations.)

+

In the old days before a computer was always easily available, I would quit making trials at such a point, confident that a difference in means as great as observed is not likely to happen by chance. (Using the convenient “multiplication rule” described in Chapter 9, we can estimate the probability of such an occurrence happening by chance in 10 successive trials as \(\frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} \cdots = \left(\frac{1}{2}\right)^{10} = 1/1024 \approx .001\) = .1 percent, a small chance indeed.) Nevertheless, let us press on to do 50 trials.

+

Table 18-1

Results of Fifty Random Samples for the Problem “PIGS3”

Trial # | Mean of First 12 Observations (First Hand) | Mean of Second 12 Observations (Second Hand) | Difference | Greater or Less Than Observed Difference
Observed | 382 / 12 = 31.83 | 344 / 12 = 28.67 | 3.16 |
1 | 368 / 12 = 30.67 | 357 / 12 = 29.75 | .92 | Less
2 | 364 / 12 = 30.33 | 361 / 12 = 30.08 | .25 | Less
3 | 352 / 12 = 29.33 | 373 / 12 = 31.08 | (1.75) | Less
4 | 378 / 12 = 31.50 | 347 / 12 = 28.92 | 2.58 | Less
5 | 365 / 12 = 30.42 | 360 / 12 = 30.00 | .42 | Less
6 | 352 / 12 = 29.33 | 373 / 12 = 31.08 | (1.75) | Less
7 | 355 / 12 = 29.58 | 370 / 12 = 30.83 | (1.25) | Less
8 | 366 / 12 = 30.50 | 359 / 12 = 29.92 | .58 | Less
9 | 360 / 12 = 30.00 | 365 / 12 = 30.42 | (.42) | Less
10 | 355 / 12 = 29.58 | 370 / 12 = 30.83 | (1.25) | Less
11 | 359 / 12 = 29.92 | 366 / 12 = 30.50 | (.58) | Less
12 | 369 / 12 = 30.75 | 356 / 12 = 29.67 | 1.08 | Less
13 | 360 / 12 = 30.00 | 365 / 12 = 30.42 | (.42) | Less
14 | 377 / 12 = 31.42 | 348 / 12 = 29.00 | 2.42 | Less
15 | 365 / 12 = 30.42 | 360 / 12 = 30.00 | .42 | Less
16 | 364 / 12 = 30.33 | 361 / 12 = 30.08 | .25 | Less
17 | 363 / 12 = 30.25 | 362 / 12 = 30.17 | .08 | Less
18 | 365 / 12 = 30.42 | 360 / 12 = 30.00 | .42 | Less
19 | 369 / 12 = 30.75 | 356 / 12 = 29.67 | 1.08 | Less
20 | 369 / 12 = 30.75 | 356 / 12 = 29.67 | 1.08 | Less
21 | 369 / 12 = 30.75 | 356 / 12 = 29.67 | 1.08 | Less
22 | 364 / 12 = 30.33 | 361 / 12 = 30.08 | .25 | Less
23 | 363 / 12 = 30.25 | 362 / 12 = 30.17 | .08 | Less
24 | 363 / 12 = 30.25 | 362 / 12 = 30.17 | .08 | Less
25 | 364 / 12 = 30.33 | 361 / 12 = 30.08 | .25 | Less
26 | 359 / 12 = 29.92 | 366 / 12 = 30.50 | (.58) | Less
27 | 362 / 12 = 30.17 | 363 / 12 = 30.25 | (.08) | Less
28 | 362 / 12 = 30.17 | 363 / 12 = 30.25 | (.08) | Less
29 | 373 / 12 = 31.08 | 352 / 12 = 29.33 | 1.75 | Less
30 | 367 / 12 = 30.58 | 358 / 12 = 29.83 | .75 | Less
31 | 376 / 12 = 31.33 | 349 / 12 = 29.08 | 2.25 | Less
32 | 365 / 12 = 30.42 | 360 / 12 = 30.00 | .42 | Less
33 | 357 / 12 = 29.75 | 368 / 12 = 30.67 | (1.42) | Less
34 | 349 / 12 = 29.08 | 376 / 12 = 31.33 | (2.25) | Less
35 | 356 / 12 = 29.67 | 369 / 12 = 30.75 | (1.08) | Less
36 | 359 / 12 = 29.92 | 366 / 12 = 30.50 | (.58) | Less
37 | 372 / 12 = 31.00 | 353 / 12 = 29.42 | 1.58 | Less
38 | 368 / 12 = 30.67 | 357 / 12 = 29.75 | .92 | Less
39 | 344 / 12 = 28.67 | 382 / 12 = 31.83 | (3.16) | Equal
40 | 365 / 12 = 30.42 | 360 / 12 = 30.00 | .42 | Less
41 | 375 / 12 = 31.25 | 350 / 12 = 29.17 | 2.08 | Less
42 | 353 / 12 = 29.42 | 372 / 12 = 31.00 | (1.58) | Less
43 | 357 / 12 = 29.75 | 368 / 12 = 30.67 | (.92) | Less
44 | 363 / 12 = 30.25 | 362 / 12 = 30.17 | .08 | Less
45 | 353 / 12 = 29.42 | 372 / 12 = 31.00 | (1.58) | Less
46 | 354 / 12 = 29.50 | 371 / 12 = 30.92 | (1.42) | Less
47 | 353 / 12 = 29.42 | 372 / 12 = 31.00 | (1.58) | Less
48 | 366 / 12 = 30.50 | 359 / 12 = 29.92 | .58 | Less
49 | 364 / 12 = 30.33 | 361 / 12 = 30.08 | .25 | Less
50 | 370 / 12 = 30.83 | 355 / 12 = 29.58 | 1.25 | Less

+

Table 18-1 shows fifty trials of which only one (the thirty-ninth) is as “far out” as the observed samples. These data give us an estimate of the probability that, if the two foods come from the same universe, a difference this great or greater would occur just by chance. (Compare this 2 percent estimate with the probability of roughly 1 percent estimated with the conventional t test — a “significance level” of 1 percent.) On the average, the test described in this section yields a significance level as high as such mathematical-probability tests as the t test — that is, it is just as efficient — though the tests described in Examples 15-6 and 17-1 are likely to be less efficient because they convert measured data to ranked or classified data. 1

+

It is not appropriate to say that these data give us an estimate of the probability that the foods “do not come” from the same universe. This is because we can never state a probability that a sample came from a given universe unless the alternatives are fully specified in advance.2

+

This example also illustrates how the dispersion within samples affects the difficulty of finding out whether the samples differ from each other. For example, the average weight gain for food A was 32 pounds, versus 29 pounds for food B. If all the food A-fed pigs had gained weight within a range of say 29.9 and 30.1 pounds, and if all the food B-fed pigs had gained weight within a range of 28.9 and 29.1 pounds — that is, if the highest weight gain in food B had been lower than the lowest weight gain in food A — then there would be no question that food A is better, and even fewer observations would have made this statistically conclusive. Variation (dispersion) is thus of great importance in statistics and in the social sciences. The larger the dispersion among the observations within the samples, the larger the sample size necessary to make a conclusive comparison between two groups or reliable estimates of summarization statistics. (The dispersion might be measured by the mean absolute deviation (the average absolute difference between the mean and the individual observations, treating both plus and minus differences as positive), the variance (the average squared difference between the mean and the observations), the standard deviation (the square root of the variance), the range (the difference between the smallest and largest observations), or some other device.)

+ +

If you are performing your tests by hand rather than using a computer (a good exercise even nowadays when computers are so accessible), you might prefer to work with the median instead of the mean, because the median requires less computation. (The median also has the advantage of being less influenced by a single far-out observation that might be quite atypical; all measures have their special advantages and disadvantages.) Simply compare the difference in medians of the twelve-pig resamples to the difference in medians of the actual samples, just as was done with the means. The only operational difference is to substitute the word “median” for the word “mean” in the steps listed above. You may need a somewhat larger number of trials when working with medians, however, for they tend to be less precise than means.

+ +

The RESAMPLING STATS program compares the difference in the sums of the weight gains for the actual pigs against the difference resulting from two randomly-chosen groups of pigs, using the same numerical weight gains of individual pigs as were obtained in the actual experiment. If the differences in average weight gains of the randomly ordered groups are rarely as large as the difference in weight gains from the actual sets of pigs fed food A-alpha and food B-beta, then we can conclude that the foods do make a difference in pigs’ weight gains.

+

Note first that pigs in group A gained a total of 382 pounds while group B gained a total of 344 pounds — 38 fewer. To minimize computations, we will deal with totals like these, not averages.

+

First we construct vectors A and B of the weight gains of the pigs fed with the two foods. Then we combine the two vectors into one long vector and select two groups of 12 randomly and with replacement (the two SAMPLE commands). We SUM the weight gains for the two resamples, and calculate the difference. We keep SCORE of those differences, graph them on a HISTOGRAM, and see how many times resample A exceeded resample B by at least 38 pounds, or vice versa (we are testing whether the two are different, not whether food A produces larger weight gains).

+ +
' Program file: "testing_measured_00.rss"
+
+NUMBERS (31 34 29 26 32 35 38 34 31 29 32 31) a
+' Record group a's weight gains.
+NUMBERS (26 24 28 29 30 29 31 29 32 26 28 32) b
+' Record group b's weight gains.
+CONCAT a b c
+' Combine a and b together in one long vector.
+REPEAT 1000
+    ' Do 1000 experiments.
+    SAMPLE 12 c d
+    ' Take a "resample" of 12 with replacement from c and put it in d.
+    SAMPLE 12 c e
+    ' Take another "resample."
+    SUM d dd
+    ' Sum the first "resample."
+    SUM e ee
+    ' Sum the second "resample."
+    SUBTRACT dd ee f
+    ' Calculate the difference between the two resamples.
+    SCORE f z
+    ' Keep track of each trial result.
+END
+' End one experiment, go back and repeat until all trials are complete,
+' then proceed.
+HISTOGRAM z
+' Produce a histogram of trial results.
+

[Histogram of trial results. Title: PIGS3: Difference Between Two Resamples. Horizontal axis: sum of weight gains, 1st resample less 2nd.]

+

From this histogram we see that none of the trials produced a difference between groups as large as that observed (or larger). RESAMPLING STATS will calculate this for us with the following commands:

+ +
' Program file: "pigs3.rss"
+
+COUNT z >= 38 k
+' Determine how many of the trials produced a difference between resamples
+' >= 38.
+COUNT z <= -38 l
+' Likewise for a difference of -38.
+ADD k l m
+' Add the two together.
+DIVIDE m 1000 mm
+' Convert to a proportion.
+PRINT mm
+' Print the result.
+
+' Note: The file "pigs3" on the Resampling Stats software disk contains
+' this set of commands.
+
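For the measured-data version, here is a rough NumPy sketch that mirrors the two resamples drawn with replacement; the weight gains are those listed in the program above, and everything else (names, trial count) is our own choice.

import numpy as np

rng = np.random.default_rng()
a = np.array([31, 34, 29, 26, 32, 35, 38, 34, 31, 29, 32, 31])  # food A weight gains
b = np.array([26, 24, 28, 29, 30, 29, 31, 29, 32, 26, 28, 32])  # food B weight gains
both = np.concatenate([a, b])

n_trials = 10_000
diffs = np.empty(n_trials)
for i in range(n_trials):
    # Two resamples of 12, drawn with replacement from the combined gains.
    first = rng.choice(both, size=12, replace=True)
    second = rng.choice(both, size=12, replace=True)
    diffs[i] = first.sum() - second.sum()
# Two-tailed: how often does the difference in sums reach the observed 38 pounds?
print(np.mean((diffs >= 38) | (diffs <= -38)))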
+
+

24.0.2 Example: Is There a Difference in Liquor Prices Between State-Run and Privately-Run Systems?

+

This is an example of testing for differences between means of unequal-sized samples of measured data.

+

In the 1960s I studied the price of liquor in the sixteen “monopoly” states (where the state government owns the retail liquor stores) compared to the twenty-six states in which retail liquor stores are privately owned. (Some states were omitted for technical reasons. And it is interesting to note that the situation and the price pattern has changed radically since then.) These data were introduced in the context of a problem in probability in Chapter 12.

+

These were the representative 1961 prices of a fifth of Seagram 7 Crown whiskey in the two sets of states:3

+

16 monopoly states: $4.65, $4.55, $4.11, $4.15, $4.20, $4.55, $3.80, $4.00, $4.19, $4.75, $4.74, $4.50, $4.10, $4.00, $5.05, $4.20. Mean = $4.35.

26 private-ownership states: $4.82, $5.29, $4.89, $4.95, $4.55, $4.90, $5.25, $5.30, $4.29, $4.85, $4.54, $4.75, $4.85, $4.85, $4.50, $4.75, $4.79, $4.85, $4.79, $4.95, $4.95, $4.75, $5.20, $5.10, $4.80, $4.29. Mean = $4.84.

+

The economic question that underlay the investigation — having both theoretical and policy ramifications — is as follows: Does state ownership affect prices? The empirical question is whether the prices in the two sets of states were systematically different. In statistical terms, we wish to test the hypothesis that there was a difference between the groups of states related to their mode of liquor distribution, or whether the observed $.49 differential in means might well have occurred by happenstance. In other words, we want to know whether the two sub-groups of states differed systematically in their liquor prices, or whether the observed pattern could well have been produced by chance variability.

+

The first step is to examine the two sets of data graphically to see whether there was such a clear-cut difference between them — of the order of Snow’s data on cholera, or the Japanese Navy data on beri-beri — that no test was necessary. The separate displays, and then the two combined together, are shown in Figure 24.1; the answer is not clear-cut and hence a formal test is necessary.

+ + +
+
+
+
+

+
Figure 24.1: Liquor prices by government and private
+
+
+
+
+

At first I used a resampling permutation test as follows: Assuming that the entire universe of possible prices consists of the set of events that were observed, because that is all the information available about the universe, I wrote each of the forty-two observed state prices on a separate card. The shuffled deck simulated a situation in which each state has an equal chance for each price.

+

On the “null hypothesis” that the two groups’ prices do not reflect different price-setting mechanisms, but rather differ only by chance, I then examined how often that simulated universe stochastically produces groups with results as different as observed in 1961. I repeatedly dealt groups of 16 and 26 cards, without replacing the cards, to simulate hypothetical monopoly-state and private-state samples, each time calculating the difference in mean prices.

+

The probability that the benchmark null-hypothesis universe would produce a difference between groups as large or larger than observed in 1961 is estimated by how frequently the mean of the group of randomly-chosen sixteen prices from the simulated state-ownership universe is less than (or equal to) the mean of the actual sixteen state-ownership prices. If the simulated difference between the randomly-chosen groups was frequently equal to or greater than observed in 1961, one would not conclude that the observed difference was due to the type of retailing system because it could well have been due to chance variation.

+

The results — not even one “success” in 10,000 trials — imply that there is a very small probability that two groups with mean prices as different as were observed would happen by chance if drawn from the universe of 42 observed prices. So we “reject the null hypothesis” and instead find persuasive the proposition that the type of liquor distribution system influences the prices that consumers pay.4
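The permutation test just described can be sketched in Python as follows; this is our own rendering of the shuffled-deck procedure, not the program referred to in the text, and names and the trial count are ours.

import numpy as np

rng = np.random.default_rng()
monopoly = np.array([4.65, 4.55, 4.11, 4.15, 4.20, 4.55, 3.80, 4.00,
                     4.19, 4.75, 4.74, 4.50, 4.10, 4.00, 5.05, 4.20])
private = np.array([4.82, 5.29, 4.89, 4.95, 4.55, 4.90, 5.25, 5.30, 4.29,
                    4.85, 4.54, 4.75, 4.85, 4.85, 4.50, 4.75, 4.79, 4.85,
                    4.79, 4.95, 4.95, 4.75, 5.20, 5.10, 4.80, 4.29])
prices = np.concatenate([monopoly, private])
observed = private.mean() - monopoly.mean()  # about $0.49

n_trials = 10_000
count = 0
for _ in range(n_trials):
    shuffled = rng.permutation(prices)  # shuffle the 42 price "cards"
    fake_monopoly = shuffled[:16]       # deal 16 to the "monopoly" group
    fake_private = shuffled[16:]        # the remaining 26 to the "private" group
    if fake_private.mean() - fake_monopoly.mean() >= observed:
        count += 1
print(count / n_trials)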

+

As I shall discuss later, the logical framework of this resampling version of the permutation test differs greatly from the formulaic version, which would have required heavy computation. The standard conventional alternative would be a Student’s t-test, in which the user simply plugs into an unintuitive formula and reads the result from a table. And because of the unequal numbers of cases and unequal dispersions in the two samples, an appropriate t-test is far from obvious, whereas resampling is not made more difficult by such realistic complications.

+ +

A program to handle the liquor problem with an infinite-universe bootstrap distribution simply substitutes the random sampling command SAMPLE for the SHUFFLE/TAKE commands. The results of the new test are indistinguishable from those in the program given above.
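Continuing the sketch above, the bootstrap variant only changes how the two hypothetical groups are drawn: with replacement, from the combined prices, instead of dealing from a shuffled deck.

# Bootstrap variant of the sketch above (reuses rng and prices).
fake_monopoly = rng.choice(prices, size=16, replace=True)
fake_private = rng.choice(prices, size=26, replace=True)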

+

Still another difficult question is whether any hypothesis test is appropriate, because the states were not randomly selected for inclusion in one group or another, and the results could be caused by factors other than the liquor system; this applies to both the above methods. The states constitute the entire universe in which we are interested, rather than being a sample taken from some larger universe as with a biological experiment or a small survey sample. But this objection pertains to a conventional test as well as to resampling methods. And a similar question arises throughout medical and social science — to the two water suppliers between which John Snow detected vast differences in cholera rates, to rates of lung cancer in human smokers, to analyses of changes in speeding laws, and so on.

+

The appropriate question is not whether the units were assigned randomly, however, but whether there is strong reason to believe that the results are not meaningful because they are the result of a particular “hidden” variable.

+

These debates about fundamentals illustrate the unsettled state of statistical thinking about basic issues. Other disciplines also have their controversies about fundamentals. But in statistics these issues arise as early as the introductory course, because all but the most contrived problems are shot through with these questions. Instructors and researchers usually gloss over these matters, as Gigerenzer et al. show (The Empire of Chance). Again, because with resampling one does not become immersed in the difficult mathematical techniques that underlie conventional methods, one is quicker to see these difficult questions, which apply equally to conventional methods and resampling.

+ +

Example 18-3: Is There a Difference Between Treatments to Prevent Low Birthweights?

+

Next we consider the use of resampling with measured data to test the hypothesis that drug A prevents low birthweights (Rosner, 1982, p. 257). The data for the treatment and control groups are shown in Table 18-2.

+

Table 18-2

+

Birthweights in a Clinical Trial to Test a Drug for Preventing Low Birthweights

Treatment Group    Control Group
6.9                6.4
7.6                6.7
7.3                5.4
7.6                8.2
6.8                5.3
7.2                6.6
8.0                5.8
5.5                5.7
5.8                6.2
7.3                7.1
8.2                7.0
6.9                6.9
6.8                5.6
5.7                4.2
8.6                6.8
Average: 7.08      Average: 6.26
+

Source: Rosner, Table 8.7

+

The treatment group averaged .82 pounds more than the control group. Here is a resampling approach to the problem:

+
  1. If the drug has no effect, our best guess about the “universe” of birthweights is that it is composed of (say) a million each of the observed weights, all lumped together. In other words, in the absence of any other information or compelling theory, we assume that the combination of our samples is our best estimate of the universe. Hence let us write each of the birthweights on a card, and put them into a hat. Drawing them one by one and then replacing them is the operational equivalent of a very large (but equal) number of each birthweight.

  2. Repeatedly draw two samples of 15 birthweights each, and check how frequently the difference between the two resample means is as large as, or larger than, the actual difference of .82 pounds.

We find in the RESAMPLING STATS program below that only 1 percent of the pairs of hypothetical resamples produced means that differed by as much as .82. We therefore conclude that the observed difference is unlikely to have occurred by chance.

+ +
' Program file: "testing_measured_02.rss"
+
+NUMBERS (6.9 7.6 7.3 7.6 6.8 7.2 8.0 5.5 5.8 7.3 8.2 6.9 6.8 5.7 8.6) treat
+NUMBERS (6.4 6.7 5.4 8.2 5.3 6.6 5.8 5.7 6.2 7.1 7.0 6.9 5.6 4.2 6.8) control
+CONCAT treat control all
+' Combine all birthweight observations in same vector
+REPEAT 1000
+    ' Do 1000 simulations
+    SAMPLE 15 all treat$
+    ' Take a resample of 15 from all birth weights (the $ indicates a
+    ' resampling counterpart to a real sample)
+    SAMPLE 15 all control$
+    ' Take a second, similar resample
+    MEAN treat$ mt
+    ' Find the means of the two resamples
+    MEAN control$ mc
+    SUBTRACT mt mc dif
+    ' Find the difference between the means of the two resamples
+    SCORE dif z
+    ' Keep score of the result
+END
+' End the simulation experiment, go back and repeat
+HISTOGRAM z
+' Produce a histogram of the resample differences
+COUNT z >= 0.82 k
+' How often did resample differences exceed the observed difference of
+' .82?
+

+

Resample differences in pounds

+

Result: Only 1.3 percent of the pairs of resamples produced means that differed by as much as .82. We can conclude that the observed difference is unlikely to have occurred by chance.
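For readers working in R rather than RESAMPLING STATS, a rough translation of the program above might look like this (our own sketch, not part of the original program):

# Rough R equivalent of "testing_measured_02.rss" (our translation).
treat <- c(6.9, 7.6, 7.3, 7.6, 6.8, 7.2, 8.0, 5.5, 5.8, 7.3, 8.2, 6.9, 6.8, 5.7, 8.6)
control <- c(6.4, 6.7, 5.4, 8.2, 5.3, 6.6, 5.8, 5.7, 6.2, 7.1, 7.0, 6.9, 5.6, 4.2, 6.8)
all_weights <- c(treat, control)      # combine all birthweights into one vector

n <- 1000
z <- numeric(n)
for (i in 1:n) {
    treat_samp <- sample(all_weights, 15, replace=TRUE)     # resample of 15
    control_samp <- sample(all_weights, 15, replace=TRUE)   # second, similar resample
    z[i] <- mean(treat_samp) - mean(control_samp)           # difference between means
}
hist(z)
k <- sum(z >= 0.82)    # how often the resample difference reached .82
message('Proportion >= .82: ', k / n)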

+
+
+

24.0.3 Example: Bootstrap Sampling with Replacement

+

Efron and Tibshirani (1993, 11) present this as their basic problem illustrating the bootstrap method: Seven mice were given a new medical treatment intended to improve their survival rates after surgery, and nine mice were not treated. The numbers of days the treated mice survived were 94, 38, 23, 197, 99, 16, and 141, whereas the numbers of days the untreated mice (the control group) survived were 52, 10, 40, 104, 51, 27, 146, 30, and 46. The question we ask is: Did the treatment prolong survival, or might chance variation be responsible for the observed difference in mean survival times?

+

We start by supposing the treatment did NOT prolong survival and that chance was responsible. If that is so, then we consider that the two groups came from the same universe. Now we’d like to know how likely it is that two groups drawn from this common universe would differ as much as the two observed groups differ.

+

If we had unlimited time and money, we would seek additional samples in the same way that we obtained these. Lacking time and money, we create a hypothetical universe that embodies everything we know about such a common universe. We imagine replicating each sample element millions of times to create an almost infinite universe that looks just like our samples. Then we can take resamples from this hypothetical universe and see how they behave.

+

Even on a computer, creating such a large universe is tedious so we use a shortcut. We replace each element after we pick it for a resample. That way, our hypothetical (bootstrap) universe is effectively infinite.

+

The following procedure will serve:

+
  1. Calculate the difference between the means of the two observed samples — it’s 30.63 days in favor of the treated mice.

  2. Consider the two samples combined (16 observations) as the relevant universe to resample from.

  3. Draw 7 hypothetical observations with replacement and designate them “Treatment”; draw 9 hypothetical observations with replacement and designate them “Control.”

  4. Compute and record the difference between the means of the two samples.

  5. Repeat steps 3 and 4 perhaps 1000 times.

  6. Determine how often the resampled difference exceeds the observed difference of 30.63.

The following program (“mice2smp”) follows the above procedure:

+ +
' Program file: "testing_measured_03.rss"
+
+NUMBERS (94 38 23 197 99 16 141) treatmt
+' treatment group
+NUMBERS (52 10 40 104 51 27 146 30 46) control
+' control group
+CONCAT treatmt control u
+' U is our universe (step 2 above)
+REPEAT 1000
+    ' step 5 above
+    SAMPLE 7 u treatmt$
+    ' step 3 above
+    SAMPLE 9 u control$
+    ' step 3
+    MEAN treatmt$ tmean
+    ' step 4
+    MEAN control$ cmean
+    ' step 4
+    SUBTRACT tmean cmean diff
+    ' step 4
+    SCORE diff scrboard
+    ' step 4
+END
+' step 5
+HISTOGRAM scrboard
+COUNT scrboard >=30.63 k
+' step 6
+DIVIDE k 1000 prob
+PRINT prob
+

+

Result: PROB = 0.112

+

Interpretation: 1000 simulated resamples (of sizes 7 and 9) from a combined universe produced a difference as big as 30.63 more than 11 percent of the time. We cannot rule out the possibility that chance might be responsible for the observed advantage of the treatment group.
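A rough R counterpart of the bootstrap program above (our own translation) is:

# Rough R equivalent of "testing_measured_03.rss" (our translation).
treatmt <- c(94, 38, 23, 197, 99, 16, 141)            # treatment group
control <- c(52, 10, 40, 104, 51, 27, 146, 30, 46)    # control group
u <- c(treatmt, control)                              # the combined universe

n <- 1000
scrboard <- numeric(n)
for (i in 1:n) {
    t_samp <- sample(u, 7, replace=TRUE)   # 7 "treatment" observations, with replacement
    c_samp <- sample(u, 9, replace=TRUE)   # 9 "control" observations, with replacement
    scrboard[i] <- mean(t_samp) - mean(c_samp)
}
hist(scrboard)
prob <- sum(scrboard >= 30.63) / n
message('Proportion >= 30.63: ', prob)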

+

Example 18-5: Permutation Sampling Without Replacement

+

This section discusses at some length the question of when sampling with replacement (the bootstrap), and sampling without replacement (permutation or “exact” test) are the appropriate resampling methods. The case at hand seems like a clearcut case where the bootstrap is appropriate. (Note that in this case we draw both samples from a combined universe consisting of all observations, whether we do so with or without replacement.) Nevertheless, let us see how the technique would differ if one were to consider that the permutation test is appropriate. The algorithm would then be as follows (with the steps that are the same as above labeled “a” and those that are different labeled “b”):

+

1a. Calculate the difference between the means of the two observed samples – it’s 30.63 days in favor of the treated mice.

+

2a. Consider the two samples combined (16 observations) as the relevant universe to resample from.

+

3b. Draw 7 hypothetical observations without replacement and designate them “Treatment”; draw 9 hypothetical observations without replacement and designate them “Control.”

+

4a. Compute and record the difference between the means of the two samples.

+

5a. Repeat steps 3 and 4 perhaps 1000 times.

+

6a. Determine how often the resampled difference exceeds the observed difference of 30.63.

+

Here is the RESAMPLING STATS program:

+ +
' Program file: "testing_measured_04.rss"
+
+NUMBERS (94 38 23 197 99 16 141) treatmt
+' treatment group
+NUMBERS (52 10 40 104 51 27 146 30 46) control
+' control group
+CONCAT treatmt control u
+' U is our universe (step 2 above)
+REPEAT 1000
+    ' step 5 above
+    SHUFFLE u ushuf
+    TAKE ushuf 1,7 treatmt$
+    ' step 3 above
+    TAKE ushuf 8,16 control$
+    ' step 3
+    MEAN treatmt$ tmean
+    ' step 4
+    MEAN control$ cmean
+    ' step 4
+    SUBTRACT tmean cmean diff
+    ' step 4
+    SCORE diff scrboard
+    ' step 4
+END
+' step 5
+HISTOGRAM scrboard
+COUNT scrboard >=30.63 k
+' step 6
+DIVIDE k 1000 prob
+PRINT prob
+

+

Result: prob = 0.145

+

Interpretation: 1000 simulated resamples (of sizes 7 and 9) from a combined universe produced a difference as big as 30.63 more than 14 percent of the time. We therefore should not rule out the possibility that chance might be responsible for the observed advantage of the treatment group.
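The corresponding R sketch for the permutation version, shuffling and dealing without replacement, might be:

# Rough R equivalent of "testing_measured_04.rss" (our translation).
treatmt <- c(94, 38, 23, 197, 99, 16, 141)
control <- c(52, 10, 40, 104, 51, 27, 146, 30, 46)
u <- c(treatmt, control)

n <- 1000
scrboard <- numeric(n)
for (i in 1:n) {
    ushuf <- sample(u)       # shuffle all 16 observations
    t_samp <- ushuf[1:7]     # first 7 become "treatment"
    c_samp <- ushuf[8:16]    # remaining 9 become "control"
    scrboard[i] <- mean(t_samp) - mean(c_samp)
}
hist(scrboard)
prob <- sum(scrboard >= 30.63) / n
message('Proportion >= 30.63: ', prob)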

+
+
+

24.1 Differences among four means

+

Example 18-6: Differences Among Four Pig Rations (Test for Differences Among Means of More Than Two Samples of Measured Data) (File “PIGS4”)

+

In Examples 15-1 and 15-4 we investigated whether or not the results shown by a single sample are sufficiently different from a null (benchmark) hypothesis so that the sample is unlikely to have come from the null-hypothesis benchmark universe. In Examples 15-7, 17-1, and 18-1 we then investigated whether or not the results shown by two samples suggest that both had come from the same universe, a universe that was assumed to be the composite of the two samples. Now as in Example 17-2 we investigate whether or not several samples come from the same universe, except that now we work with measured data rather than with counted data.

+

If one experiments with each of 100 different pig foods on twelve pigs, some of the foods will show much better results than will others just by chance, just as one family in sixteen is likely to have the very “high” number of 4 daughters in its first four children. Therefore, it is wrong reasoning to try out the 100 pig foods, select the food that shows the best results, and then compare it statistically with the average (sum) of all the other foods (or worse, with the poorest food). With such a procedure and enough samples, you will surely find one (or more) that seems very atypical statistically. A bridge hand with 12 or 13 spades seems very atypical, too, but if you deal enough bridge hands you will sooner or later get one with 12 or 13 spades — as a purely chance phenomenon, dealt randomly from a standard deck. Therefore we need a test that prevents our falling into such traps. Such a test usually operates by taking into account the differences among all the foods that were tried.

+

The method of Example 18-1 can be extended to handle this problem. Assume that four foods were each tested on twelve pigs. The weight gains in pounds for the pigs fed on foods A and B were as before. For foods C and D the weight gains were:

+

Ration C: 30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33

+

Ration D: 32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25

+

Now construct a benchmark universe of forty-eight index cards, one for each weight gain. Then deal out sets of four hands randomly. More specifically:

+

Step 1. Constitute a universe of the forty-eight observed weight gains in the four samples, writing the weight gains on cards.

+

Step 2. Draw four groups of twelve weight gains, with replacement, since we are drawing from a hypothesized infinite universe in which consecutive draws are independent. Determine whether the difference between the lowest and highest group means is as large or larger than the observed difference. If so write “yes,” otherwise “no.”

+

Step 3. Repeat step 2 fifty times.

+

Step 4. Count the trials in which the differences between the simulated groups with the highest and lowest means are as large or larger than the differences between the means of the highest and lowest observed samples. The proportion of such trials to the total number of trials is the probability that all four samples would differ as much as do the observed samples if they (in technical terms) come from the same universe.

+

The problem “Pigs4,” as handled by the steps given above, is quite similar to the way we handled Example TKTK, except that the data are measured (in pounds of weight gain) rather than simply counted (the number of rehabilitations).

+

Instead of working through a program for the procedure outlined above, let us consider a different approach to the problem — computing the difference between each pair of foods, six differences in all, converting all minus (-) signs to (+) differences. Then we can total the six differences, and compare the total with the sum of the six differences in the observed sample. The proportion of the resampling trials in which the observed sample sum is exceeded by the sum of the differences in the trials is the probability that the observed samples would differ as much as they do if they come from the same universe.5

+

One naturally wonders whether this latter test statistic is better than the range, as discussed above. It would seem obvious that using the information contained in all four samples should increase the precision of the estimate. And indeed it is so, as you can confirm for yourself by comparing the results of the two approaches. But in the long run, the estimate provided by the two approaches would be much the same. That is, there is no reason to think that one or another of the estimates is biased. However, successive samples from the population would steady down faster to the true value using the four-group-based estimate than they would using the range. That is, the four-group-based estimate would require a smaller sample of pigs.
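To make that comparison yourself, a short R sketch of the range-based statistic described in Steps 1 through 4 above might look as follows; it is our own sketch, not a program from the original text, and it uses the same weight-gain data as the squared-differences program below.

# Sketch of the range-based procedure (Steps 1-4 above).
a <- c(34, 29, 26, 32, 35, 38, 31, 34, 30, 29, 32, 31)
b <- c(26, 24, 28, 29, 30, 29, 32, 26, 31, 29, 32, 28)
cc <- c(30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33)
d <- c(32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25)
g <- c(a, b, cc, d)
obs_means <- c(mean(a), mean(b), mean(cc), mean(d))
obs_range <- max(obs_means) - min(obs_means)    # observed high-low difference

n <- 1000    # more trials than the fifty suggested above, for a steadier estimate
ranges <- numeric(n)
for (i in 1:n) {
    sim_means <- numeric(4)
    for (j in 1:4) {
        sim_means[j] <- mean(sample(g, 12, replace=TRUE))   # draw 12 with replacement
    }
    ranges[i] <- max(sim_means) - min(sim_means)
}
message('Proportion of trials with range >= observed: ', mean(ranges >= obs_range))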

+

Is there reason to prefer one or the other approach from the point of view of some decision that might be made? One might think that the range procedure throws light on which one of the foods is best in a way that the four-group-based approach does not. But this is not correct. Both approaches answer this question, and only this question: Are the results from the four foods likely to have resulted from the same “universe” of weight gains or not? If one wants to know whether the best food is similar to, say, all the other three, the appropriate approach would be a two-sample approach similar to various two-sample examples discussed earlier. (It would be still another question to ask whether the best food is different from the worst. One would then use a procedure different from either of those discussed above.)

+

If the foods cost the same, one would not need even a two-sample analysis to decide which food to feed. Feed the one whose results are best in the experiment, without bothering to ask whether it is “really” the best; you can’t go wrong as long as it doesn’t cost more to use it. (One could inquire about the probability that the food yielding the best results in the experiment would attain those results by chance even if it was worse than the others by some stipulated amount, but pursuing that line of thought may be left to the student as an exercise.)

+

In the problem “Pigs4,” we want a measure of how the groups differ. The obvious first step is to add up the total weight gains for each group: 381, 344, 364, 335. The next step is to calculate the differences between all the possible combinations of groups: 381-344=37, 381-364=17, 381-335=46, 344-364=-20, 344-335=9, 364-335=29.

+
+
+

24.2 Using Squared Differences

+

Here we face a choice. We could work with the absolute differences — that is, the results of the subtractions — treating each result as a positive number even if it is negative. We have seen this approach before. Therefore let us now take the opportunity of showing another approach. Instead of working with the absolute differences, we square each difference, and then SUM the squares. An advantage of working with the squares is that they are positive — a negative number squared is positive — which is convenient. Additionally, conventional statistics works mainly with squared quantities, and therefore it is worth getting familiar with that point of view. The squared differences in this case add up to 5096.
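The observed statistic can be checked with a few lines of R, using the same data that appear in the program below:

# Check the observed test statistic: group totals, pairwise differences,
# and the sum of squared differences (should be 5096).
a <- c(34, 29, 26, 32, 35, 38, 31, 34, 30, 29, 32, 31)
b <- c(26, 24, 28, 29, 30, 29, 32, 26, 31, 29, 32, 28)
cc <- c(30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33)
d <- c(32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25)
totals <- c(sum(a), sum(b), sum(cc), sum(d))          # 381, 344, 364, 335
diffs <- combn(totals, 2, function(x) x[1] - x[2])    # all six pairwise differences
sum(diffs ^ 2)                                        # 5096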

+

Using RESAMPLING STATS, we shuffle all the weight gains together, select four random groups, and determine whether the squared differences in the resample exceed 5096. If they do so with regularity, then we conclude that the observed differences could easily have occurred by chance.

+

With the CONCAT command, we string the four vectors into a single vector. After SHUFFLEing the 48-pig weight-gain vector G into H, we use SAMPLE to draw four randomized resamples of 12, with replacement. And we compute the squared differences between the pairs of groups and SUM the squared differences just as we did above for the observed groups.

+

Last, we examine how often the simulated-trials data produce differences among the groups as large as (or larger than) the actually observed data — 5096.

+ +
' Program file: "pigs4.rss"
+
+NUMBERS (34 29 26 32 35 38 31 34 30 29 32 31) a
+NUMBERS (26 24 28 29 30 29 32 26 31 29 32 28) b
+NUMBERS (30 30 32 31 29 27 25 30 31 32 34 33) c
+NUMBERS (32 25 31 26 32 27 28 29 29 28 23 25) d
+' (Record the data for the 4 foods)
+CONCAT a b c d g
+' Combine the four vectors into g
+REPEAT 1000
+    ' Do 1000 trials
+    SHUFFLE g h
+    ' Shuffle all the weight gains.
+    SAMPLE 12 h p
+    ' Take 4 random samples, with replacement.
+    SAMPLE 12 h q
+    SAMPLE 12 h r
+    SAMPLE 12 h s
+    SUM p i
+    ' Sum the weight gains for the 4 resamples.
+    SUM q j
+    SUM r k
+    SUM s l
+    SUBTRACT i j ij
+    ' Find the differences between all the possible pairs of resamples.
+    SUBTRACT i k ik
+    SUBTRACT i l il
+    SUBTRACT j k jk
+    SUBTRACT j l jl
+    SUBTRACT k l kl
+    MULTIPLY ij ij ijsq
+    ' Find the squared differences.
+    MULTIPLY ik ik iksq
+    MULTIPLY il il ilsq
+    MULTIPLY jk jk jksq
+    MULTIPLY jl jl jlsq
+    MULTIPLY kl kl klsq
+    ADD ijsq iksq ilsq jksq jlsq klsq total
+    ' Add them together.
+    SCORE total z
+    ' Keep track of the total for each trial.
+END
+' End one trial, go back and repeat until 1000 trials are complete.
+HISTOGRAM z
+' Produce a histogram of the trial results.
+COUNT z >= 5096 k
+' Find out how many trials produced differences among groups as great as
+' or greater than those observed.
+DIVIDE k 1000 kk
+' Convert to a proportion.
+PRINT kk
+' Print the result.
+
+' Note: The file "pigs4" on the Resampling Stats software disk contains
+' this set of commands.
+

Histogram of trial results: sums of squares (PIGS4: Differences Among Four Pig Rations)

+

We find that our observed sum of squares — 5096 — was exceeded by randomly-drawn sums of squares in only 3 percent of our trials. We conclude that the four treatments are likely not all similar.
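For completeness, here is a rough R translation of the squared-differences test above (our own sketch):

# Rough R equivalent of "pigs4.rss" (our translation).
a <- c(34, 29, 26, 32, 35, 38, 31, 34, 30, 29, 32, 31)
b <- c(26, 24, 28, 29, 30, 29, 32, 26, 31, 29, 32, 28)
cc <- c(30, 30, 32, 31, 29, 27, 25, 30, 31, 32, 34, 33)
d <- c(32, 25, 31, 26, 32, 27, 28, 29, 29, 28, 23, 25)
g <- c(a, b, cc, d)

n <- 1000
z <- numeric(n)
for (i in 1:n) {
    sums <- numeric(4)
    for (j in 1:4) {
        sums[j] <- sum(sample(g, 12, replace=TRUE))   # one resampled group of 12
    }
    pair_diffs <- combn(sums, 2, function(x) x[1] - x[2])
    z[i] <- sum(pair_diffs ^ 2)     # sum of squared pairwise differences
}
hist(z)
kk <- sum(z >= 5096) / n
message('Proportion of trials >= 5096: ', kk)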

+
+
+

24.3 Exercises

+

Solutions for problems may be found in the section titled “Exercise Solutions” at the back of this book.

+

Exercise 18-1

+

The data shown in Table 18-3 (Hollander and Wolfe 1999, 39, Table 3.1) might be data for the outcomes of two different mechanics, showing the length of time until the next overhaul is needed for nine pairs of similar vehicles. Or they could be two readings made by different instruments on the same sample of rock. In fact, they represent data for two successive tests for depression on the Hamilton scale, before and after drug therapy.

+ +

Table 18-3

+

Hamilton Depression Scale Values

Patient #    Score Before    Score After
1            1.83            .878
2            .50             .647
3            1.62            .598
4            2.48            2.05
5            1.68            1.06
6            1.88            1.29
7            1.55            1.06
8            3.06            3.14
9            1.30            1.29
+

The task is to perform a test that will help decide whether there is a difference in the depression scores at the two visits (or the performances of the two mechanics). Perform both a bootstrap test and a permutation test, and give some reason for preferring one to the other in principle. How much do they differ in practice?

+

Exercise 18-2

+

Thirty-six of 72 (.5) taxis surveyed in Pittsburgh had visible seatbelts. Seventy-seven of 129 taxis in Chicago (.597) had visible seatbelts. Calculate a confidence interval for the difference in proportions, estimated at -.097. (Source: Peskun, Peter H., “A New Confidence Interval Method Based on the Normal Approximation for the Difference of Two Binomial Probabilities,” Journal of the American Statistical Association, 6/93, p. 656.)

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/testing_procedures.html b/r-book/testing_procedures.html new file mode 100644 index 00000000..5109a76f --- /dev/null +++ b/r-book/testing_procedures.html @@ -0,0 +1,872 @@ + + + + + + + + + +Resampling statistics - 25  General Procedures for Testing Hypotheses + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

25  General Procedures for Testing Hypotheses

+
+ + + +
+ + + + +
+ + +
+ +
+

25.1 Introduction

+

The previous chapters have presented procedures for making statistical inferences that apply to both testing hypotheses and constructing confidence intervals. This chapter focuses on specific procedures for testing hypotheses.

+

The general idea in testing hypotheses is to ask: Is there some other universe which might well have produced the observed sample? So we consider alternative hypotheses. This is a straightforward exercise in probability, asking about the behavior of one or more universes. The choice of which other universe(s) to examine depends upon the purposes at hand and other considerations.

+
+
+

25.2 Canonical question-and-answer procedure for testing hypotheses

+
+
+

25.3 Skeleton procedure for testing hypotheses

+

This is akin to the skeleton procedures for questions in probability and for confidence intervals shown elsewhere.

+

The following series of questions will be repeated below in the context of a specific inference.

+

What is the question? What is the purpose to be served by answering the question?

+

Is this a “probability” or a “statistics” question?

+

Assuming the Question is a Statistical Inference Question

+

What is the form of the statistics question?

+

Hypothesis test, or confidence interval, or other inference? One must first decide whether the conceptual-scientific question is of the form a) a test about the probability that some sample is likely to happen by chance rather than being very surprising (a test of a hypothesis), or b) a question about the accuracy of the estimate of a parameter of the population based upon sample evidence (a confidence interval):

+

Assuming the Question Concerns Testing Hypotheses

+

Will you state the costs and benefits of various outcomes, perhaps in the form of a “loss function”? If “yes,” what are they?

+

How many samples of data have been observed?

+

One, two, more than two?

+

What is the description of the observed sample(s)?

+

Raw data?

+

Which characteristic(s) (parameters) of the population are of interest to you?

+

What are the statistics of the sample(s) that refer to this (these) characteristics(s) in which you are interested?

+

What comparison(s) to make?

+

Samples to each other?

+

Sample to particular universe(s)? If so, which?

+

What is the benchmark (null) universe?

+

This may include presenting the raw data and/or such summary statistics as the computed mean, median, standard deviation, range, interquartile range, other:

+

If there is to be a Neyman-Pearson-type alternative universe, what is it? (In most cases the answer to this technical question is “no.”)

+

Which symbols for the observed entities?

+

Discrete or continuous?

+

What values or ranges of values?

+

Which sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? (Answer: samples the same size as has been observed)

+

[Here one may continue with the conventional method, using perhaps a t or f or chi-square test or whatever: Everything up to now is the same whether continuing with resampling or with standard parametric test.]

+

What procedure will be used to produce the resampled entities?

+

Randomly drawn?

+

Simple (single step) or complex (multiple “if” drawings)?

+

What procedure to produce resample?

+

Which universe will you draw them from? With or without replacement?

+

What size resamples? Number of resample trials?

+

What to record as outcome of each resample trial?

+

Mean, median, or whatever of resample?

+

Classifying the outcomes

+

What is the criterion of significance to be used in evaluating the results of the test?

+

Stating the distribution of results

+

Graph of each statistic recorded — occurrences for each value.

+

Count the outcomes that exceed criterion and divide by number of trials.

+
+
+

25.4 An example: can the bio-engineer increase the female calf rate?

+

The question (from Hodges Jr and Lehmann 1970, 310): Female calves are more valuable than male calves. A bio-engineer claims to have a method that can produce more females. He tests the procedure on ten of your pregnant cows, and the result is nine females. Should you believe that his method has some effect? That is, what is the probability of a result this surprising occurring by chance?

+

The purpose: Female calves are more valuable than male calves.

+

Inference? Yes.

+

Test of hypothesis? Yes.

+

Will you state the costs and benefits of various outcomes (or a loss function)? We need only say that the benefits of a method that works are very large, and if the results are promising, it is worth gathering more data to confirm results.

+

How many samples of data are part of the significance test? One

+

What is the size of the first sample about which you wish to make significance statements? Ten.

+

What comparison(s) to make? Compare sample to benchmark universe.

+

What is the benchmark universe that embodies the null hypothesis? 50-50 female, or 100/206 female.

+

If there is to be a Neyman-Pearson alternative universe , what is it? None.

+

Which symbols for the observed entities? Balls in bucket, or numbers.

+

What values or ranges of values? 0-1, (1-100), or 101-206.

+

Finite or infinite? Infinite.

+

Which sample(s) do you wish to compare to which, or to the null universe (and perhaps to the alternative universe)? Ten calves compared to universe.

+

What procedure to produce entities? Sampling with replacement.

+

Simple (single step) or complex (multiple “if” drawings)? One can think of it either way.

+

What to record as outcome of each resample trial? The proportion (or number) of females.

+

What is the criterion to be used in the test? The probability that in a sample of ten calves, nine (or more) females would be drawn by chance from the benchmark universe of half females. (Or frame in terms of a significance level.)

+

“One-tail” or “two-tail” test? One tail, because the farmer is only interested in females: Finding a large proportion of males would not be of interest, and would not cause one to reject the null hypothesis.

+

Computation of the probability sought. The actual computation of probability may be done with several formulaic or sample-space methods, and with several resampling methods: I will first show a resampling method and then several conventional methods. The following material, which allows one to compare resampling and conventional methods, is more germane to the earlier explication of resampling taken altogether in earlier chapters than it is to the theory of hypothesis tests discussed in this chapter, but it is more expedient to present it here.

+
+
+

25.5 Computation of Probabilities with Resampling

+

We can do the problem by hand as follows:

+
  1. Constitute a bucket with either one blue and one pink ball, or 106 blue and 100 pink balls.

  2. Draw ten balls with replacement, count pinks, and record.

  3. Repeat step (2) say 400 times.

  4. Calculate proportion of results with 9 or 10 pinks.

Or, we can take advantage of the speed and efficiency of the computer as follows:

+
+
n <- 10000
+
+females <- numeric(n)
+
+for (i in 1:n) {
+    samp <- sample(c('female', 'male'), size=10, replace=TRUE)
+    females[i] <- sum(samp == 'female')
+}
+
+hist(females)
+
+k <- sum(females >= 9)
+kk <- k / n
+message('Proportion with >= 9 females: ', kk)
+
+
Proportion with >= 9 females: 0.011
+
+
+
+
+

+
+
+
+
+

This outcome implies that there is roughly a one percent chance that one would observe 9 or 10 female births in a single sample of 10 calves if the probability of a female on each birth is .5. This outcome should help the decision-maker decide about the plausibility of the bio-engineer’s claim to be able to increase the probability of female calves being born.

+
+
+

25.6 Conventional methods

+
+

25.6.1 The Sample Space and First Principles

+

Assume for a moment that our problem is a smaller one and therefore much easier — the probability of getting two females in two calves if the probability of a female is .5. One could then map out what mathematicians call the “sample space,” a technique that (in its simplest form) assigns to each outcome a single point, and find the proportion of points that correspond to a “success.” We list all four possible combinations — FF, FM, MF, MM. Now we look at the ratio of the number of combinations that have 2 females to the total, which is 1/4. We may then interpret this probability.
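If you prefer to let the computer do the listing, a small R illustration (not part of the original text) enumerates the two-calf sample space directly:

# Enumerate the sample space for two calves and count the "both female" outcomes.
space <- expand.grid(first=c('F', 'M'), second=c('F', 'M'))   # FF, MF, FM, MM
space
both_female <- space$first == 'F' & space$second == 'F'
sum(both_female) / nrow(space)    # 1/4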

+

We might also use this method for (say) five female calves in a row. We can make a list of possibilities such as FFFFF, MFFFF, MMFFF, MMMFFF … MFMFM … MMMMM. There will be 2*2*2*2*2 = 32 possibilities, and 64 and 128 possibilities for six and seven calves respectively. But when we get as high as ten calves, this method would become very troublesome.

+
+
+

25.6.2 Sample Space Calculations

+

For two females in a row, we could use the well known, and very simple, multiplication rule; we could do so even for ten females in a row. But calculating the probability of nine females in ten is a bit more complex.

+
+
+

25.6.3 Pascal’s Triangle

+

One can use Pascal’s Triangle to obtain binomial coefficients for p = .5 and a sample size of 10, focusing on those for 9 or 10 successes. Then calculate the proportion of the total cases with 9 or 10 “successes” in one direction, to find the proportion of cases that pass beyond the criterion of 9 females. The method of Pascal’s Triangle requires more complete understanding of the probabilistic system than does the resampling simulation described above because Pascal’s Triangle requires that one understand the entire structure; simulation requires only that you follow the rules of the model.

+
+
+

25.6.4 The Quincunx

+

The quincunx — a device that filters tiny balls through a set of bumper points not unlike a pinball machine, mentioned here simply for completeness — is more a simulation method than theoretical, but it may be considered “conventional.” Hence, it is included here.

+
+
+

25.6.5 Table of Binomial Coefficients

+

Pascal’s Triangle becomes cumbersome or impractical with large numbers — say, 17 females of 20 births — or with probabilities other than .5. One might produce the binomial coefficients by algebraic multiplication, but that, too, becomes tedious even with small sample sizes. One can also use the pre-computed table of binomial coefficients found in any standard text. But the probabilities for n = 10 and 9 or 10 females are too small to be shown.

+
+
+

25.6.6 Binomial Formula

+

For larger sample sizes, one can use the binomial formula. The binomial formula gives no deeper understanding of the statistical structure than does the Triangle (but it does yield a deeper understanding of the pure mathematics). With very large numbers, even the binomial formula is cumbersome.
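In R, the binomial formula is available as the built-in dbinom function; for the calf problem it gives essentially the same answer as the simulation shown earlier, about .011:

# Exact binomial probability of 9 or 10 females in 10 births, with p = .5.
sum(dbinom(9:10, size=10, prob=0.5))    # 11/1024, about 0.0107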

+
+
+

25.6.7 The Normal Approximation

+

When the sample size becomes too large for any of the above methods, one can then use the Normal approximation, which yields results close to the binomial (as seen very nicely in the output of the quincunx). But use of the Normal distribution requires an estimate of the standard deviation, which can be derived either by formula or by resampling. (See a more extended parallel discussion in Chapter 27 on confidence intervals for the Bush-Dukakis comparison.)

+

The desired probability can be obtained from the Z formula and a standard table of the Normal distribution found in every elementary text.
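A rough sketch of that calculation in R, using a continuity correction (the exact cut-off is a matter of convention), might be:

# Normal approximation for 9 or more females out of 10 with p = .5.
p <- 0.5
n_births <- 10
mu <- n_births * p                      # mean number of females = 5
sigma <- sqrt(n_births * p * (1 - p))   # standard deviation, about 1.58
1 - pnorm(8.5, mean=mu, sd=sigma)       # continuity-corrected tail, roughly 0.013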

+

The Z table can be made less mysterious if we generate it with simulation, or with graph paper or Archimedes’ method, using as raw material (say) five “continuous” (that is, non-binomial) distributions, many of which are skewed: 1) Draw samples of (say) 50 or 100. 2) Plot the means to see that the Normal shape is the outcome. Then 3) standardize with the standard deviation by marking the standard deviations onto the histograms.

+

The aim of the above exercise and the heart of the conventional parametric method is to compare the sample result — the mean — to a standardized plot of the means of samples drawn from the universe of interest to see how likely it is that that universe produces means deviating as much from the universe mean as does our observed sample mean. The steps are:

+
  1. Establish the Normal shape — from the exercise above, or from the quincunx or Pascal’s Triangle or the binomial formula or the formula for the Normal approximation or some other device.

  2. Standardize that shape in standard deviations.

  3. Compute the Z score for the sample mean — that is, its deviation from the universe mean in standard deviations.

  4. Examine the Normal (or really, tables computed from graph paper, etc.) to find the probability of a mean deviating that far by chance.

This is the canon of the procedure for most parametric work in statistics. (For some small samples, accuracy is improved with an adjustment.)

+
+
+
+

25.7 Choice of the benchmark universe1

+

In the example of the ten calves, the choice of a benchmark universe — a universe that (on average) produces equal proportions of males and females — seems rather straightforward and even automatic, requiring no difficult judgments. But in other cases the process requires more judgments.

+

Let’s consider another case where the choice of a benchmark universe requires no difficult judgments. Assume the U.S. Department of Labor’s Bureau of Labor Statistics (BLS) takes a very large sample — say, 20,000 persons — and finds a 10 percent unemployment rate. At some later time another but smaller sample is drawn — 2,000 persons — showing an 11 percent unemployment rate. Should BLS conclude that unemployment has risen, or is there a large chance that the difference between 10 percent and 11 percent is due to sample variability? In this case, it makes rather obvious sense to ask how often a sample of 2,000 drawn from a universe of 10 percent unemployment (ignoring the variability in the larger sample) would be as different as 11 percent due solely to sample variability. This problem differs from that of the calves only in the proportions and the sizes of the samples.
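A quick R simulation of that question (our own illustration) might run as follows:

# How often does a sample of 2,000 from a 10 percent unemployment universe
# show an unemployment rate of 11 percent or more?
n_trials <- 10000
rates <- numeric(n_trials)
for (i in 1:n_trials) {
    samp <- sample(c('unemployed', 'employed'), size=2000,
                   replace=TRUE, prob=c(0.10, 0.90))
    rates[i] <- mean(samp == 'unemployed')
}
mean(rates >= 0.11)    # proportion of simulated samples at 11 percent or more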

+

Let’s change the facts and assume that a very large sample had not been drawn and only a sample of 2,000 had been taken, indicating 11 percent unemployment. A policy-maker asks the probability that unemployment is above ten percent. It would still seem rather straightforward to ask how often a universe of 10 percent unemployment would produce a sample of 2000 with a proportion of 11 percent unemployed.

+

Still another problem where the choice of benchmark hypothesis is relatively straightforward: Say that BLS takes two samples of 2000 persons a month apart, and asks whether there is a difference in the results. Pooling the two samples and examining how often two samples drawn from the pooled universe would be as different as observed seems obvious.

+

One of the reasons that the above cases — especially the two-sample case — seem so clear-cut is that the variance of the benchmark hypothesis is not an issue, being implied by the fact that the samples deal with proportions. If the data were continuous, however, this issue would quickly arise. Consider, for example, that the BLS might take the same sorts of samples and ask unemployed persons the lengths of time they had been unemployed. Comparing a small sample to a very large one would be easy to decide about. And even comparing two small samples might be straightforward — simply pooling them as is.

+

But what about if you have a sample of 2,000 with data on lengths of unemployment spells with a mean of 30 days, and you are asked the probability that it comes from a universe with a mean of 25 days? Now there arises the question about the amount of variability to assume for that benchmark universe. Should it be the variability observed in the sample? That is probably an overestimate, because a universe with a smaller mean would probably have a smaller variance, too. So some judgment is required; there cannot be an automatic “objective” process here, whether one proceeds with the conventional or the resampling method.

+

The example of the comparison of liquor retailing systems in Section 24.0.2 provides more material on this subject.

+
+
+

25.8 Why is statistics — and hypothesis testing — so difficult?

+

Why is statistics such a difficult subject? The foregoing procedural outline provides a window to the explanation. Hypothesis testing — as is also true of the construction of confidence intervals (but unlike simple probability problems) — involves a very long chain of reasoning, perhaps longer than in any other realm of systematic thinking. Furthermore, many decisions in the process require judgment that goes beyond technical analysis. All this emerges as one proceeds through the skeleton procedure above with any specific example.

+

(Bayes’ rule also is very difficult intuitively, but that probably is a result of the twists and turns required in all complex problems in conditional probability. Decision-tree analysis is counter-intuitive, too, probably because it starts at the end instead of the beginning of the story, as we are usually accustomed to doing.)

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/r-book/testing_procedures_files/figure-html/unnamed-chunk-2-1.png b/r-book/testing_procedures_files/figure-html/unnamed-chunk-2-1.png new file mode 100644 index 00000000..fa8e8594 Binary files /dev/null and b/r-book/testing_procedures_files/figure-html/unnamed-chunk-2-1.png differ diff --git a/r-book/what_is_probability.html b/r-book/what_is_probability.html new file mode 100644 index 00000000..2af8b1cc --- /dev/null +++ b/r-book/what_is_probability.html @@ -0,0 +1,909 @@ + + + + + + + + + +Resampling statistics - 3  What is probability? + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

3  What is probability?

+
+ + + +
+ + + + +
+ + +
+ +
+

“Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales.” — Bertrand Russell (1945, p. xiv).

+
+
+

3.1 Introduction

+

The central concept for dealing with uncertainty is probability. Hence we must inquire into the “meaning” of the term probability. (The term “meaning” is in quotes because it can be a confusing word.)

+

You have been using the notion of probability all your life when drawing conclusions about what you expect to happen, and in reaching decisions in your public and personal lives.

+

You wonder: Will the kick from the 45 yard line go through the uprights? How much oil can you expect from the next well you drill, and what value should you assign to that prospect? Will you make money if you invest in tech stocks for the medium term, or should you spread your investments across the stock market? Will the next Space-X launch end in disaster? Your answers to these questions rest on the probabilities you estimate.

+

And you act on the basis of probabilities: You pay extra for a low-interest loan if you think that interest rates are going to go up. You bet heavily on a poker hand if there is a high probability that you have the best hand. A hospital decides not to buy another ambulance when the administrator judges that there is a low probability that all the other ambulances will ever be in use at once. NASA decides whether or not to send off the space shuttle this morning as scheduled.

+

The idea of probability is essential when we reason about uncertainty, and so this chapter discusses what is meant by such key terms as “probability,” “chance”, “sample,” and “universe.” It discusses the nature and the usefulness of the concept of probability as used in this book, and it touches on the source of basic estimates of probability that are the raw material of statistical inferences.

+
+
+

3.2 The “Meaning” of “Probability”

+

Probability is difficult to define (Feller 1968), but here is a useful informal starting point:

+
+

A probability is a number from 0 through 1 that reflects how likely it is that a particular event will happen.

+
+

Any particular stated probability is an assertion that indicates how likely you believe it is that an event will occur.

+

If you give an event a probability of 0 you mean that you are certain it will not happen. If you give probability 1 to an event, you mean you are certain that it will happen. For example, if I give you one card from a deck that you know contains only the standard 52 cards — before you look at the card, you can give probability 0 to the card being a joker, because you are certain the pack does not contain any joker cards. If I then select only the 13 spades from that deck, and give you a card from that selection, you will say there is probability 1 that the card is a black card, because all the spades are black cards.

+

A probability estimate of .2 indicates that you think there is twice as great a chance of the event happening as if you had estimated a probability of .1. This is the rock-bottom interpretation of the term “probability,” and the heart of the concept. 1

+

The idea of probability arises when you are not sure about what will happen in an uncertain situation. For example, you may lack information and therefore can only make an estimate. If someone asks you your name, you do not use the concept of probability to answer; you know the answer to a very high degree of surety. To be sure, there is some chance that you do not know your own name, but for all practical purposes you can be quite sure of the answer. If someone asks you who will win tomorrow’s baseball game, however, there is a considerable chance that you will be wrong no matter what you say. Whenever there is a reasonable chance that your prediction will be wrong, the concept of probability can help you.

+

The concept of probability helps you to answer the question, “How likely is it that…?” The purpose of the study of probability and statistics is to help you make sound appraisals of statements about the future, and good decisions based upon those appraisals. The concept of probability is especially useful when you have a sample from a larger set of data — a “universe” — and you want to know the probability of various degrees of likeness between the sample and the universe. (The universe of events you are sampling from is also called the “population,” a concept to be discussed below.) Perhaps the universe of your study is all high school graduates in 2018. You might then want to know, for example, the probability that the universe’s average SAT (university entrance) score will not differ from your sample’s average SAT by more than some arbitrary number of SAT points — say, ten points.

+

We have said that a probability statement is about the future. Well, usually. Occasionally you might state a probability about your future knowledge of past events — that is, “I think I’ll find out that…” — or even about the unknown past. (Historians use probabilities to measure their uncertainty about whether events occurred in the past, and the courts do, too, though the courts hesitate to say so explicitly.)

+

Sometimes one knows a probability, such as in the case of a gambler playing black on an honest roulette wheel, or an insurance company issuing a policy on an event with which it has had a lot of experience, such as a life insurance policy. But often one does not know the probability of a future event. Therefore, our concept of probability must include situations where extensive data are not available.

+

All of the many techniques used to estimate probabilities should be thought of as proxies for the actual probability. For example, if Mission Control at Space Central simulates what should and probably will happen in space if a valve is turned aboard a space craft just now being built, the test result on the ground is a proxy for the real probability of what will happen when the crew turn the valve in the planned mission.

+

In some cases, it is difficult to conceive of any data that can serve as a proxy. For example, the director of the CIA, Robert Gates, said in 1993 “that in May 1989, the CIA reported that the problems in the Soviet Union were so serious and the situation so volatile that Gorbachev had only a 50-50 chance of surviving the next three to four years unless he retreated from his reform policies” (The Washington Post , January 17, 1993, p. A42). Can such a statement be based on solid enough data to be more than a crude guess?

+

The conceptual probability in any specific situation is an interpretation of all the evidence that is then available . For example, a wise biomedical worker’s estimate of the chance that a given therapy will have a positive effect on a sick patient should be an interpretation of the results of not just one study in isolation, but of the results of that study plus everything else that is known about the disease and the therapy. A wise policymaker in business, government, or the military will base a probability estimate on a wide variety of information and knowledge. The same is even true of an insurance underwriter who bases a life-insurance or shipping-insurance rate not only on extensive tables of long-time experience but also on recent knowledge of other kinds. Each situation asks us to make a choice of the best method of estimating a probability — whether that estimate is objective — from a frequency series — or subjective, from the distillation of other experience.

+
+
+

3.3 The nature and meaning of the concept of probability

+

It is confusing and unnecessary to inquire what probability “really” is. (Indeed, the terms “really” and “is,” alone or in combination, are major sources of confusion in statistics and in other logical and scientific discussions, and it is often wise to avoid their use.) Various concepts of probability — which correspond to various common definitions of the term — are useful in particular contexts. This book contains many examples of the use of probability. Work with them will gradually develop a sound understanding of the concept.

+

There are two major concepts and points of view about probability — frequency and degrees of belief. Each is useful in some situations but not in others. Though they may seem incompatible in principle, there almost never is confusion about which is appropriate in a given situation.

+
  1. Frequency. The probability of an event can be said to be the proportion of times that the event has taken place in the past, usually based on a long series of trials. Insurance companies use this when they estimate the probability that a thirty-five-year-old teacher will die during a period for which he wants to buy an insurance policy. (Notice this shortcoming: Sometimes you must bet upon events that have never or only infrequently taken place before, and so you cannot reasonably reckon the proportion of times they occurred one way or the other in the past.)

  2. Degree of belief. The probability that an event will take place or that a statement is true can be said to correspond to the odds at which you would bet that the event will take place. (Notice a shortcoming of this concept: You might be willing to accept a five-dollar bet at 2-1 odds that your team will win the game, but you might be unwilling to bet a hundred dollars at the same odds.)

See (Barnett 1982, chap. 3) for an in-depth discussion of different approaches to probability.

+

The connection between gambling and immorality or vice troubles some people about gambling examples. On the other hand, the immediacy and consequences of the decisions that the gambler has to make give the subject a special tang. There are several reasons why statistics use so many gambling examples — and especially tossing coins, throwing dice, and playing cards:

+
  1. Historical. The theory of probability began with gambling examples of dice analyzed by Cardano, Galileo, and then by Pascal and Fermat.

  2. Generality. These examples are not related to any particular walk of life, and therefore they can be generalized to applications in any walk of life. Students in any field — business, medicine, science — can feel equally at home with gambling examples.

  3. Sharpness. These examples are particularly stark, and unencumbered by the baggage of particular walks of life or special uses.

  4. Universality. Many other texts use these same examples, and therefore the use of them connects up this book with the main body of writing about probability and statistics.

Often we’ll begin with a gambling example and then consider an example in one of the professional fields — such as business and other decision-making activities, biostatistics and medicine, social science and natural science — and everyday living. People in one field often can benefit from examples in others; for example, medical students should understand the need for business decision-making in terms of medical practice, as well as the biostatistical examples. And social scientists should understand the decision-making aspects of statistics if they have any interest in the use of their work in public policy.

+
+
+

3.4 Back to Proxies

+

Example of a proxy: The “probability risk assessments” (PRAs) that are made for the chances of failures of nuclear power plants are based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. A PRA can cost a nuclear facility $5 million.

+

Another example: If a manager of a high-street store looks at the sales of a particular brand of smart watches in the last two Decembers, and on that basis guesses how likely it is that she will run out of stock if she orders 200 smart watches, then the last two years’ experience is serving as a proxy for future experience. If a sales manager just “intuits” that the odds are 3 to 1 (a probability of .75) that the main local competitor will not meet a price cut, then all her past experience summed into her intuition is a proxy for the probability that it will really happen. Whether any proxy is a good or bad one depends on the wisdom of the person choosing the proxy and making the probability estimates.

+

How does one estimate a probability in practice? This involves practical skills not very different from the practical skills required to estimate with accuracy the length of a golf shot, the number of carpenters you will need to build a house, or the time it will take you to walk to a friend’s house; we will consider elsewhere some ways to improve your practical skills in estimating probabilities. For now, let us simply categorize and consider in the next section various ways of estimating an ordinary garden variety of probability, which is called an “unconditional” probability.

+
+
+

3.5 The various ways of estimating probabilities

+

Consider the probability of drawing an even-numbered spade from a deck of poker cards (consider the queen as even and the jack and king as odd). Here are several general methods of estimation, where we define each method in terms of the operations we use to make the estimate:

+
    +
  1. Experience.

    +

    The first possible source for an estimate of the probability of drawing an even-numbered spade is the purely empirical method of experience . If you have watched card games casually from time to time, you might simply guess at the proportion of times you have seen even-numbered spades appear — say, “about 1 in 15” or “about 1 in 9” (which is almost correct) or something like that. (If you watch long enough you might come to estimate something like 6 in 52.)

    +

    General information and experience are also the source for estimating the probability that the sales of a particular brand of smart watch this December will be between 200 and 250, based on sales the last two Decembers; that your team will win the football game tomorrow; that war will break out next year; or that a United States astronaut will reach Mars before a Russian astronaut. You simply put together all your relevant prior experience and knowledge, and then make an educated guess.

    +

    Observation of repeated events can help you estimate the probability that a machine will turn out a defective part or that a child can memorize four nonsense syllables correctly in one attempt. You watch repeated trials of similar events and record the results.

    +

    Data on the mortality rates for people of various ages in a particular country in a given decade are the basis for estimating the probabilities of death, which are then used by the actuaries of an insurance company to set life insurance rates. This is systematized experience — called a frequency series .

    +

    No frequency series can speak for itself in a perfectly objective manner. Many judgments inevitably enter into compiling every frequency series — deciding which frequency series to use for an estimate, choosing which part of the frequency series to use, and so on. For example, should the insurance company use only its records from last year, which will be too few to provide as much data as is preferable, or should it also use death records from years further back, when conditions were slightly different, together with data from other sources? (Of course, no two deaths — indeed, no events of any kind — are exactly the same. But under many circumstances they are practically the same, and science is only interested in such “practical” considerations.)

    +

    Given that we have to use judgment in probability estimates, the reader may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities; the two terms are working synonyms.

    +

    There is no logical difference between the sort of probability that the life insurance company estimates on the basis of its “frequency series” of past death rates, and the manager’s estimates of the sales of smart watches in December, based on sales in that month in the past two years. 2

    +

    The concept of a probability based on a frequency series can be rendered almost useless when all the observations are repetitions of a single magnitude — for example, the case of all successes and zero failures of space-shuttle launches prior to the Challenger shuttle tragedy in the 1980s; in those data alone there was almost no basis to estimate the probability of a shuttle failure. (Probabilists have made some rather peculiar attempts over the centuries to estimate probabilities from the length of a zero-defect time series — such as the fact that the sun has never failed to rise (foggy days aside!) — based on the undeniable fact that the longer such a series is, the smaller the probability of a failure; see e.g., (Whitworth 1897, xix–xli). However, one surely has more information on which to act when one has a long series of observations of the same magnitude rather than a short series).

  2. +
  3. Simulated experience.

    +

    A second possible source of probability estimates is empirical scientific investigation with repeated trials of the phenomenon. This is an empirical method even when the empirical trials are simulations. In the case of the even-numbered spades, the empirical scientific procedure is to shuffle the cards, deal one card, record whether or not the card is an even-numbered spade, replace the card, and repeat the steps a good many times. The proportion of times you observe an even-numbered spade is a probability estimate based on a frequency series. (A short simulation sketch of exactly this procedure appears just after this list.)

    +

    You might reasonably ask why we do not just count the number of even-numbered spades in the deck of fifty-two cards — using the sample space analysis you see below. No reason at all. But that procedure would not work if you wanted to estimate the probability of a baseball batter getting a hit or a cigarette lighter producing flame.

    +

    Some varieties of poker are so complex that experiment is the only feasible way to estimate the probabilities a player needs to know.

    +

    The resampling approach to statistics produces estimates of most probabilities with this sort of experimental “Monte Carlo” method. More about this later.

  4. +
  5. Sample space analysis and first principles.

    +

    A third source of probability estimates is counting the possibilities — the quintessential theoretical method. For example, by examination of an ordinary die one can determine that there are six different numbers that can come up. One can then determine that the probability of getting (say) either a “1” or a “2,” on a single throw, is 2/6 = 1/3, because two among the six possibilities are “1” or “2.” One can similarly determine that there are two possibilities of getting a “1” plus a “6” out of thirty-six possibilities when rolling two dice, yielding a probability estimate of 2/36 = 1/18. (The second sketch after this list counts that two-dice sample space directly.)

    +

    Estimating probabilities by counting the possibilities has two requirements: 1) that the possibilities all be known (and therefore limited), and few enough to be studied easily; and 2) that the probability of each particular possibility be known, for example, that the probabilities of all sides of the dice coming up are equal, that is, equal to 1/6.

  6. +
  7. Mathematical shortcuts to sample-space analysis.

    +

    A fourth source of probability estimates is mathematical calculations. If one knows by other means that the probability of a spade is 1/4 and the probability of an even-numbered card is 6/13, one can use probability calculation rules to calculate that the probability of turning up an even-numbered spade is 6/52 (that is, 1/4 × 6/13). If one knows that the probability of a spade is 1/4 and the probability of a heart is 1/4, one can then calculate that the probability of getting a heart or a spade is 1/2 (that is, 1/4 + 1/4). The point here is not the particular calculation procedures, which we will touch on later, but rather that one can often calculate the desired probability on the basis of already-known probabilities.

    +

    It is possible to estimate probabilities with mathematical calculation only if one knows by other means the probabilities of some related events. For example, there is no possible way of mathematically calculating that a child will memorize four nonsense syllables correctly in one attempt; empirical knowledge is necessary.

  8. +
  9. Kitchen-sink methods.

    +

    In addition to the above four categories of estimation procedures, the statistical imagination may produce estimates in still other ways such as a) the salesman’s seat-of-the-pants estimate of what the competition’s price will be next quarter, based on who-knows-what gossip, long-time acquaintance with the competitors, and so on, and b) the probability risk assessments (PRAs) that are made for the chances of failures of nuclear power plants based, not on long experience or even on laboratory experiment, but rather on theorizing of various kinds — using pieces of prior experience wherever possible, of course. Any of these methods may be a combination of theoretical and empirical methods.

  10. +
+
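
As a concrete illustration of the second source above (simulated experience), here is a minimal sketch in Python with NumPy, the tools this book uses throughout, of the deal-and-replace procedure for even-numbered spades. The representation of the deck is our own assumption for illustration: ranks run from 1 (ace) to 13 (king), so the even-numbered ranks are 2, 4, 6, 8, 10 and 12, six of the thirteen, matching the 6/52 figure discussed above.

```python
# A sketch of estimating P(even-numbered spade) by repeated simulated deals.
# Assumption for illustration: ranks run 1 (ace) to 13 (king), so the
# even-numbered ranks are 2, 4, 6, 8, 10, 12, six of the thirteen ranks.
import numpy as np

rng = np.random.default_rng()

suits = np.repeat(['spade', 'heart', 'diamond', 'club'], 13)  # 52 cards
ranks = np.tile(np.arange(1, 14), 4)

n_trials = 100_000
hits = 0
for _ in range(n_trials):
    i = rng.integers(52)  # deal one card at random, then put it back
    if suits[i] == 'spade' and ranks[i] % 2 == 0:
        hits += 1

print('Estimated P(even-numbered spade):', hits / n_trials)
print('Exact value by counting:', 6 / 52)
```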
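The third source (sample-space analysis) can also be written out directly. The short sketch below simply enumerates the 36 equally likely outcomes of two dice and counts those that show a “1” and a “6”.

```python
# Count the sample space of two dice: how many of the 36 equally likely
# outcomes show a "1" and a "6" (in either order)?
count = 0
for first_die in range(1, 7):
    for second_die in range(1, 7):
        if {first_die, second_die} == {1, 6}:
            count += 1

print(count, 'of 36 outcomes, probability', count / 36)  # 2/36 = 1/18
```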

As an example of an organization struggling with kitchen-sink methods, consider the estimation of the probability of failure for the tragic flight of the Challenger shuttle, as described by the famous physicist and Nobel laureate Richard Feynman. This is a very real case that includes just about every sort of complication that enters into estimating probabilities.

+
+

…Mr. Ullian told us that 5 out of 127 rockets that he had looked at had failed — a rate of about 4 percent. He took that 4 percent and divided it by 4, because he assumed a manned flight would be safer than an unmanned one. He came out with about a 1 percent chance of failure, and that was enough to warrant the destruct charges.

+

But NASA [the space agency in charge] told Mr. Ullian that the probability of failure was more like 1 in \(10^5\).

+

I tried to make sense out of that number. “Did you say 1 in \(10^5\)?”

+

“That’s right; 1 in 100,000.”

+

“That means you could fly the shuttle every day for an average of 300 years between accidents — every day, one flight, for 300 years — which is obviously crazy!”

+

“Yes, I know,” said Mr. Ullian. “I moved my number up to 1 in 1000 to answer all of NASA’s claims — that they were much more careful with manned flights, that the typical rocket isn’t a valid comparison, etcetera.”

+

But then a new problem came up: the Jupiter probe, Galileo, was going to use a power supply that runs on heat generated by radioactivity. If the shuttle carrying Galileo failed, radioactivity could be spread over a large area. So the argument continued: NASA kept saying 1 in 100,000 and Mr. Ullian kept saying 1 in 1000, at best.

+

Mr. Ullian also told us about the problems he had in trying to talk to the man in charge, Mr. Kingsbury: he could get appointments with underlings, but he never could get through to Kingsbury and find out how NASA got its figure of 1 in 100,000 (Feynman and Leighton 1988, 179–80).

+
+

Feynman tried to ascertain more about the origins of the figure of 1 in 100,000 that entered into NASA’s calculations. He performed an experiment with the engineers:

+
+

…“Here’s a piece of paper each. Please write on your paper the answer to this question: what do you think is the probability that a flight would be uncompleted due to a failure in this engine?”

+

They write down their answers and hand in their papers. One guy wrote “99-44/100% pure” (copying the Ivory soap slogan), meaning about 1 in 200. Another guy wrote something very technical and highly quantitative in the standard statistical way, carefully defining everything, that I had to translate — which also meant about 1 in 200. The third guy wrote, simply, “1 in 300.”

+

Mr. Lovingood’s paper, however, said:

+

“Cannot quantify. Reliability is judged from:

+
  • past experience
  • quality control in manufacturing
  • engineering judgment”
+

“Well,” I said, “I’ve got four answers, and one of them weaseled.” I turned to Mr. Lovingood: “I think you weaseled.”

+

“I don’t think I weaseled.”

+

“You didn’t tell me what your confidence was, sir; you told me how you determined it. What I want to know is: after you determined it, what was it?”

+

He says, “100 percent” — the engineers’ jaws drop, my jaw drops; I look at him, everybody looks at him — “uh, uh, minus epsilon!”

+

So I say, “Well, yes; that’s fine. Now, the only problem is, WHAT IS EPSILON?”

+

He says, “\(10^{-5}\).” It was the same number that Mr. Ullian had told us about: 1 in 100,000.

+

I showed Mr. Lovingood the other answers and said, “You’ll be interested to know that there is a difference between engineers and management here — a factor of more than 300.”

+

He says, “Sir, I’ll be glad to send you the document that contains this estimate, so you can understand it.”

+

Later, Mr. Lovingood sent me that report. It said things like “The probability of mission success is necessarily very close to 1.0” — does that mean it is close to 1.0, or it ought to be close to 1.0? — and “Historically, this high degree of mission success has given rise to a difference in philosophy between unmanned and manned space flight programs; i.e., numerical probability versus engineering judgment.” As far as I can tell, “engineering judgment” means they’re just going to make up numbers! The probability of an engine-blade failure was given as a universal constant, as if all the blades were exactly the same, under the same conditions. The whole paper was quantifying everything. Just about every nut and bolt was in there: “The chance that a HPHTP pipe will burst is \(10^{-7}\).” You can’t estimate things like that; a probability of 1 in 10,000,000 is almost impossible to estimate. It was clear that the numbers for each part of the engine were chosen so that when you add everything together you get 1 in 100,000. (Feynman and Leighton 1988, 182–83).

+
+

We see in the Challenger shuttle case very mixed kinds of inputs to actual estimates of probabilities. They include frequency series of past flights of other rockets, judgments about the relevance of experience with that different sort of rocket, adjustments for special temperature conditions (cold), and much much more. There also were complex computational processes in arriving at the probabilities that were made the basis for the launch decision. And most impressive of all, of course, are the extraordinary differences in estimates made by various persons (or perhaps we should talk of various statuses and roles) which make a mockery of the notion of objective estimation in this case.

+

Working with different sorts of estimation methods in different sorts of situations is not new; practical statisticians do so all the time. We argue that we should make no apology for doing so.

+

The concept of probability varies from one field of endeavor to another; it is different in the law, in science, and in business. The concept is most straightforward in decision-making situations such as business and gambling; there it is crystal-clear that one’s interest is entirely in making accurate predictions so as to advance the interests of oneself and one’s group. The concept is most difficult in social science, where there is considerable doubt about the aims and values of an investigation. In sum, one should not think of what a probability “is” but rather how best to estimate it. In practice, neither in actual decision-making situations nor in scientific work — nor in classes — do people experience difficulties estimating probabilities because of philosophical confusions. Only philosophers and mathematicians worry — and even they really do not need to worry — about the “meaning” of probability3.

+
+
+

3.6 The relationship of probability to other magnitudes

+

An important argument in favor of approaching the concept of probability as an estimate is that an estimate of a probability often (though not always) is the opposite side of the coin from an estimate of a physical quantity such as time or space.

+

For example, uncertainty about the probability that one will finish a task within 9 minutes is another way of labeling the uncertainty that the time required to finish the task will be less than 9 minutes. Hence, if estimation is appropriate for time in this case, it should be equally appropriate for probability. The same is true for the probability that the quantity of smart watches sold will be between 200 and 250 units.

+

Hence the concept of probability, and its estimation in any particular case, should be no more puzzling than is the “dual” concept of time or distance or quantities of smart watches. That is, lack of certainty about the probability that an event will occur is not different in nature from lack of certainty about the amount of time or distance in the event. There is no essential difference between asking whether a part 2 inches in length will be the next to emerge from the machine, asking what the length of the next part will be, and asking the length of the part that just emerged (if it has not yet been measured).

+

The information available for the measurement of (say) the length of a car or the location of a star is exactly the same information that is available with respect to the concept of probability in those situations. That is, one may have ten disparate observations of a car’s length which then constitute a probability distribution, and the same for the altitude of a star in the heavens.

+

In a book of puzzles about probability (Mosteller 1987, problem 42), this problem appears: “If a stick is broken in two at random, what is the average length of the smaller piece?” This particular puzzle does not even mention probability explicitly, and no one would feel the need to write a scholarly treatise on the meaning of the word “length” here, any more than one would do so if the question were about an astronomer’s average observation of the angle of a star at a given time or place, or the average height of boards cut by a carpenter, or the average size of a basketball team. Nor would one write a treatise about the “meaning” of “time” if a similar puzzle involved the average time between two bird calls. Yet a rephrasing of the problem reveals its tie to the concept of probability, to wit: What is the probability that the smaller piece will be (say) more than half the length of the larger piece? Or, what is the probability distribution of the sizes of the shorter piece?
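
A short simulation makes Mosteller’s stick puzzle concrete. This is only a sketch, under the usual reading of “broken at random”: the break point is uniform along a stick of length 1.

```python
# Break a unit stick at a uniformly random point, many times over, and look
# at the smaller piece: its average length, and the probability that it is
# more than half the length of the larger piece.
import numpy as np

rng = np.random.default_rng()
breaks = rng.uniform(0, 1, size=100_000)    # random break points
smaller = np.minimum(breaks, 1 - breaks)    # length of the smaller piece
larger = np.maximum(breaks, 1 - breaks)     # length of the larger piece

print('Average length of the smaller piece:', smaller.mean())        # near 0.25
print('P(smaller piece > half the larger):', np.mean(smaller > larger / 2))
```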

+

The duality of the concepts of probability and physical entities also emerges in Whitworth’s discussion (1897) of fair betting odds:

+
+

…What sum ought you fairly give or take now, while the event is undetermined, in exchange for the assurance that you shall receive a stated sum (say $1,000) if the favourable event occur? The chance of receiving $1,000 is worth something. It is not as good as the certainty of receiving $1,000, and therefore it is worth less than $1,000. But the prospect or expectation or chance, however slight, is a commodity which may be bought and sold. It must have its price somewhere between zero and $1,000. (p. xix.)

+
+
+

…And the ratio of the expectation to the full sum to be received is what is called the chance of the favourable event. For instance, if we say that the chance is 1/5, it is equivalent to saying that $200 is the fair price of the contingent $1,000. (p. xx.)…

+
+
+

The fair price can sometimes be calculated mathematically from a priori considerations: sometimes it can be deduced from statistics, that is, from the recorded results of observation and experiment. Sometimes it can only be estimated generally, the estimate being founded on a limited knowledge or experience. If your expectation depends on the drawing of a ticket in a raffle, the fair price can be calculated from abstract considerations: if it depend upon your outliving another person, the fair price can be inferred from recorded statistics: if it depend upon a benefactor not revoking his will, the fair price depends upon the character of your benefactor, his habit of changing his mind, and other circumstances upon the knowledge of which you base your estimate. But if in any of these cases you determine that $300 is the sum which you ought fairly to accept for your prospect, this is equivalent to saying that your chance, whether calculated or estimated, is 3/10... (p. xx.)

+
+

It is indubitable that along with frequency data, a wide variety of other information will affect the odds at which a reasonable person will bet. If the two concepts of probability stand on a similar footing here, why should they not be on a similar footing in all discussion of probability? I can think of no reason that they should not be so treated.

+

Scholars write about the “discovery” of the concept of probability in one century or another. But is it not likely that even in pre-history, when a fisherperson was asked how long the big fish was, s/he sometimes extended her/his arms and said, “About this long, but I’m not exactly sure,” and when a scout was asked how many of the enemy there were, s/he answered, “I don’t know for sure...probably about fifty.” The uncertainty implicit in these statements is the functional equivalent of probability statements. There simply is no need to make such heavy work of the probability concept as the philosophers and mathematicians and historians have done.

+
+
+

3.7 What is “chance”?

+

The study of probability focuses on events with randomness — that is, events about which there is uncertainty whether or not they will occur. And the uncertainty refers to your knowledge rather than to the event itself. For example, consider this physical illustration with a remote control. The remote control has a front end that should point at the TV that it controls, and a back end that will usually be pointing at me, the user of the remote control. Call the front the TV end, and the back the sofa end of the remote control.

+

I spin the remote control like a baton twirler. If I hold it at the sofa end and attempt to flip it so that it turns only half a revolution, I can be almost sure that I will correctly get the TV end and not the sofa end. And if I attempt to flip it a full revolution, again I can almost surely get the sofa end successfully. It is not a random event whether I catch the sofa end or the TV end (here ignoring those throws when I catch neither end) when doing only half a revolution or one revolution. The result is quite predictable in both these simple maneuvers so far.

+

When I say the result is “predictable,” I mean that you would not bet with me about whether this time I’ll get the TV or the sofa end. So we say that the outcome of my flip aiming at half a revolution is not “random.”

+

When I twirl the remote control so little, I control (almost completely) whether the sofa end or the TV end comes down to my hand; this is the same as saying that the outcome does not occur by chance.

+

The terms “random” and “chance” implicitly mean that you believe that I cannot control or cannot know in advance what will happen.

+

Whether this twirl will be the rare time I miss, however, should be considered chance. Though you would not bet at even odds on my catching the sofa end versus the TV end if there is to be only a half or one full revolution, you might bet — at (say) odds of 50 to 1 — that I will make a mistake and get it wrong, or drop it. So the very same flip can be seen as random or determined depending on what aspect of it we are looking at.

+

Of course you would not bet against me about my not making a mistake, because the bet might cause me to make a mistake purposely. This “moral hazard” is a problem that emerges when a person buys life insurance and may commit suicide, or when a boxer may lose a fight purposely. The people who stake money on those events say that such an outcome is “fixed” (a very appropriate word) and not random.

+

Now I attempt more difficult maneuvers with the remote control. I can do \(1\frac{1}{2}\) flips pretty well, and two full revolutions with some success — maybe even \(2\frac{1}{2}\) flips on a good day. But when I get much beyond that, I cannot determine very well whether I’ll get the sofa or the TV end. The outcome gradually becomes less and less predictable — that is, more and more random.

+

If I flip the remote control so that it revolves three or more times, I can hardly control the process at all, and hence I cannot predict well whether I’ll get the sofa end or the TV end. With 5 revolutions I have absolutely no control over the outcome; I cannot predict the outcome better than 50-50. At that point, getting the sofa end or the TV end has become a completely random event for our purposes, just like flipping a coin high in the air. So at that point we say that “chance” controls the outcome, though that word is just a synonym for my lack of ability to control and predict the outcome. “Chance” can be thought to stand for the myriad small factors that influence the outcome.

+

We see the same gradual increase in randomness with increasing numbers of shuffles of cards. After one shuffle, a skilled magician can know where every card is, and after two shuffles there is still much order that s/he can work with. But after (say) five shuffles, the magician no longer has any power to predict and control, and the outcome of any draw can then be thought of as random chance.

+

At what point do we say that the outcome is “random” or “pure chance” as to whether my hand will grasp the TV end, the sofa end, or some other spot? There is no sharp boundary to this transition. Rather, the transition is gradual; this is the crucial idea, and one that I have not seen stated before.

+

Whether or not we refer to the outcome as random depends upon the twirler’s skill, which influences how predictable the event is. A baton twirler or juggler might be able to do ten flips with a non-random outcome; if the twirler is an expert and the outcome is highly predictable, we say it is not random but rather is determined.

+

Again, this shows that the randomness is not a property of the physical event, but rather of a person’s knowledge and skill.

+
+
+

3.8 What Do We Mean by “Random”?

+

We have defined “chance” and “random” as the absence of predictive power and/or explanation and/or control. Here we should not confuse the concepts of determinacy-indeterminacy and predictable-unpredictable. What matters for decision purposes is whether you can predict. Whether the process is “really” determinate is largely a matter of definition and labeling, an unnecessary philosophical controversy for our purposes (and perhaps for any other purpose) 4.

+

The remote control in the previous demonstration becomes unpredictable — that is, random — even though it still is subject to similar physical processes as when it is predictable. I do not deny in principle that these processes can be “understood,” or that one could produce a machine that would — like a baton twirler — make the course of the remote control predictable for many turns. But in practice we cannot make the predictions — and it is the practical reality, rather than the principle, that matters here.

+

When I flip the remote control half a turn or one turn, I control (almost completely) whether it comes down at the sofa end or the TV end, so we do not say that the outcome is chance. Much the same can be said about what happens to the predictability of drawing a given card as one increases the number of times one shuffles a deck of cards.

+

Consider, too, a set of fake dice that I roll. Before you know they are fake, you assume that the probabilities of various outcomes are a matter of chance. But after you know that the dice are loaded, you no longer assume that the outcome is chance. This illustrates how the probabilities you work with are influenced by your knowledge of the facts of the situation.

+

Admittedly, this way of thinking about probability takes some getting used to. Events may appear to be random, but in fact, we can predict them — and vice versa. For example, suppose a magician does a simple trick with dice such as this one:

+
+

The magician turns her back while a spectator throws three dice on the table. He is instructed to add the faces. He then picks up any one die, adding the number on the bottom to the previous total. This same die is rolled again. The number it now shows is also added to the total. The magician turns around. She calls attention to the fact that she has no way of knowing which of the three dice was used for the second roll. She picks up the dice, shakes them in her hand a moment, then correctly announces the final sum.

+
+

Method: When the spectator rolls the dice, they get three numbers, one from each of the three dice. Call these numbers \(a\), \(b\) and \(c\). Then he chooses one die — it doesn’t matter which, but let’s say he chooses the third die, with value \(c\). He adds the bottom of the third die to the total. Here’s the trick — the numbers on opposite faces of a standard die always add up to 7 — 1 is opposite 6, 2 is opposite 5, and 3 is opposite 4. So the total is now \(a + b + 7\). Then the spectator rolls the third die again, to get a new number \(d\). The total is now \(a + b + 7 + d\). When the magician turns around she can see what \(a\) and \(b\) and \(d\) are, so to get the right final total, she just needs to add 7 (Gardner 1985, p259). Ben Sparks does a nice demonstration of the trick on the Numberphile YouTube channel.
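
To check the method just described, here is a small simulation sketch in which the computer plays the spectator; it confirms that the sum the magician announces, \(a + b + d + 7\), always matches the spectator’s running total.

```python
# Simulate the dice trick: roll three dice, add the bottom face of one die,
# re-roll that die and add its new top face. The magician, seeing only the
# final faces a, b and d, announces a + b + d + 7.
import numpy as np

rng = np.random.default_rng()
for _ in range(10_000):
    a, b, c = rng.integers(1, 7, size=3)    # the spectator's first roll
    total = a + b + c + (7 - c)             # add the bottom face of the chosen die
    d = rng.integers(1, 7)                  # re-roll the chosen die
    total += d
    assert total == a + b + d + 7           # the magician's announced sum
print('The announced sum matched in all 10,000 simulated rounds.')
```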

+

The point here is that, until you know the trick, you cannot predict the final sum, so to you, as to the spectator, the result is random. If you do know the trick, you can predict the result, and it is not random. Whether something is “random” or not depends on what you know.

+

Consider the distributions of heights of various groups of living things (including people). When we consider all living things taken together, the shape of the overall distribution — many individuals at the tiny end where the viruses are found, and very few individuals at the tall end where the giraffes are — is determined mostly by the distribution of species that have different mean heights. Hence we can explain the shape of that distribution, and we do not say that it is determined by “chance.” But with a homogeneous cohort of a single species — say, all 25-year-old human females in the U.S. — our best description of the shape of the distribution is “chance.” With situations in between, the shape is partly due to identifiable factors — e.g. age — and partly due to “chance.”

+

Or consider the case of a basketball shooter: What causes her or him to make (or not make) a basket this shot, after a string of successes? Much must be ascribed to chance variation. But what causes a given shooter to be very good or very poor relative to other players? For that explanation we can point to such factors as the amount of practice or natural talent.

+

Again, all this has nothing to do with whether the mechanism is “really” chance, unlike the arguments that have been raging in physics for a century. That is the point of the remote control demonstration. Our knowledge and our power to predict the outcome shift gradually from non-chance (that is, “determined”) to chance (“not determined”), even though the same sort of physical mechanism produces each throw of the remote control.

+

Earlier I mentioned that when we say that chance controls the outcome of the remote control flip after (say) five revolutions, we mean that there are many small forces that affect the outcome. The effect of each force is not known, and each is independent of the other. None of these forces is large enough for me (as the remote control twirler) to deal with, or else I would deal with it and be able to improve my control and my ability to predict the outcome. This concept of many small influences — “small” meaning in practice influences whose effects cannot be identified and allowed for, whose effects are not knowable, and which are independent of each other — is important in statistical inference. For example, as we will see later, when we add many unpredictable deviations together and plot the distribution of the result, we end up with the famous and very common bell-shaped normal distribution; this striking result comes about because of a mathematical phenomenon called the Central Limit Theorem. We will show this at work later in the book.
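
Here is a minimal sketch of that idea. The particular numbers (100 small deviations per outcome, uniform between -1 and 1, 10,000 outcomes) are arbitrary choices for illustration; the bell shape of the histogram is the point.

```python
# Each simulated outcome is the sum of 100 small, independent, unpredictable
# deviations. A histogram of many such sums is close to the bell-shaped
# normal curve, as the Central Limit Theorem predicts.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()
deviations = rng.uniform(-1, 1, size=(10_000, 100))
sums = deviations.sum(axis=1)

plt.hist(sums, bins=50)
plt.title('Sums of 100 small independent deviations')
plt.show()
```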

+
+
+

3.9 Randomness from the computer

+

We now have the idea of random variation as being variation we cannot predict. For example, when we flip the remote control through many rotations, we can no longer easily predict which end will land in our hand. We can call the result of any particular flip random, because we cannot predict whether the result will be the TV end or the sofa end.

+

We still know some things about the result — it will be one of two options — TV or sofa (unless we drop it). But we cannot predict which. We say the result of each flip is random if we cannot do anything to improve our prediction of 50% for TV (or sofa) end on the next flip.

+

We are not saying the result is random in any deep, non-deterministic sense — we are only saying we can treat the result as random, because we cannot predict it.

+

Now consider getting random numbers from the computer, where the numbers can either be 0 or 1. This is rather like tossing a fair coin, where the results are 0 and 1 rather than “heads” and “tails”.

+

When we ask the computer for a random choice between 0 and 1, we accept it is random-enough, or random-like, if we can’t do anything to predict which of 0 or 1 we will get on any one trial. We can’t do better than guessing that the next value will be — say — 0 — and whichever number we guess, we will only ever have a 50% chance of being correct. We are not saying the computer is giving truly random numbers in some deep sense, only numbers we cannot distinguish from truly random numbers, because we cannot do anything to predict them. The technical term for random numbers from the computer is therefore pseudo-random — meaning, like random numbers, in the sense they are effectively unpredictable. Effectively unpredictable means there is no practical way for you, or even a very powerful computer, to do anything to improve your prediction of the next number in the series.
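
For example, we can ask NumPy for a run of such random-enough 0s and 1s; nothing in the earlier values helps us predict the next one. A minimal sketch:

```python
# Ask the computer for pseudo-random 0s and 1s, like flips of a fair coin.
import numpy as np

rng = np.random.default_rng()
flips = rng.integers(0, 2, size=20)    # twenty values, each 0 or 1
print(flips)
print('Proportion of 1s:', flips.mean())   # close to 0.5 over many flips
```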

+
+
+

3.10 The philosophers’ dispute about the concept of probability

+

Those who call themselves “objectivists” or “frequentists” and those who call themselves “personalists” or “Bayesians” have been arguing for hundreds or even thousands of years about the “nature” of probability. The objectivists insist (correctly) that any estimation not based on a series of observations is subject to potential bias, from which they conclude (incorrectly) that we should never think of probability that way. They are worried about the perversion of science, the substitution of arbitrary assessments for value-free data-gathering. The personalists argue (correctly) that in many situations it is not possible to obtain sufficient data to avoid considerable judgment. Indeed, if a probability is about the future, some judgment is always required — about which observations will be relevant, and so on. They sometimes conclude (incorrectly) that the objectivists’ worries are unimportant.

+

As is so often the case, the various sides in the argument have different sorts of situations in mind. As we have seen, the arguments disappear if one thinks operationally with respect to the purpose of the work, rather than in terms of properties, as mentioned earlier.

+

Here is an example of the difficulty of focusing on the supposed properties of the mechanism or situation: The mathematical theorist asserts that the probability of a die falling with the “5” side up is 1/6, on the basis of the physics of equally-weighted sides. But if one rolls a particular die a million times, and it turns up “5” less than 1/6 of the time, one surely would use the observed proportion as the practical estimate. The probabilities of various outcomes with cheap dice may depend upon the number of pips drilled out on a side. In 20,000 throws of a red die and 20,000 throws of a white die, the proportions of 3’s and 4’s were, respectively, .159 and .146, .145 and .142 — all far below the expected proportions of .167. That is, 3’s and 4’s occurred about 11 percent less often than if the dice had been perfectly formed, a difference that could make a big difference in a gambling game (Bulmer 1979, 18).

+

It is reasonable to think of both the engineering method (the theoretical approach) and the empirical method (experimentation and data collection) as two alternative ways to estimate a probability. The two methods use different processes and different proxies for the probability you wish to estimate. One must adduce additional knowledge to decide which method to use in any given situation. It is sensible to use the empirical method when data are available. (But use both together whenever possible.)

+

In view of the inevitably subjective nature of probability estimates, you may prefer to talk about “degrees of belief” instead of probabilities. That’s fine, just as long as it is understood that we operate with degrees of belief in exactly the same way as we operate with probabilities. The two terms are working synonyms.

+

Most important: One cannot sensibly talk about probabilities in the abstract, without reference to some set of facts. The topic then loses its meaning, and invites confusion and argument. This also is a reason why a general formalization of the probability concept does not make sense.

+
+
+

3.11 The relationship of probability to the concept of resampling

+

There is no all-agreed definition of the concept of the resampling method in statistics. Unlike some other writers, I prefer to apply the term to problems in both pure probability and statistics. This set of examples may illustrate:

+
    +
  1. Consider asking about the number of hits one would expect from a 0.250 (25 percent) batter in a 400 at-bat season. One would call this a problem in “probability.” The sampling distribution of the batter’s results can be calculated by formula or produced by Monte Carlo simulation. (A short simulation sketch appears just after this list.)

  2. +
  3. Now consider examining the number of hits in a given batter’s season, and asking how likely that number (or fewer) is to occur by chance if the batter’s long-run batting average is 0.250. One would call this a problem in “statistics.” But just as in example (1) above, the answer can be calculated by formula or produced by Monte Carlo simulation. And the calculation or simulation is exactly the same as used in (1).

    +

    Here the term “resampling” might be applied to the simulation with considerable agreement among people familiar with the term, but perhaps not by all such persons.

  4. +
  5. Next consider an observed distribution of distances that a batter’s hits travel in a season with 100 hits, with an observed mean of 150 feet per hit. One might ask how likely it is that a sample of 10 hits drawn with replacement from the observed distribution of hit lengths (with a mean of 150 feet) would have a mean greater than 160 feet, and one could easily produce an answer with repeated Monte Carlo samples. Traditionally this would be called a problem in probability.

  6. +
  7. Next consider that a batter gets 10 hits with a mean of 160 feet, and one wishes to estimate the probability that the sample would be produced by a distribution as specified in (3). This is a problem in statistics, and by 1996 it had become common statistical practice to treat it with a resampling method. The actual simulation would, however, be identical to the work described in (3).

  8. +
+
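
Here is a minimal sketch of examples (1) and (2) above. The observed season of 80 hits in example (2) is a made-up number for illustration; the rest follows the description in the list.

```python
# Example (1): the sampling distribution of hits for a long-run 0.250 batter
# over a 400 at-bat season, produced by Monte Carlo simulation.
# Example (2): how often a season of 80 hits or fewer (an illustrative,
# made-up observation) would occur by chance for such a batter.
import numpy as np

rng = np.random.default_rng()
n_seasons = 10_000
at_bats = 400

# Each at-bat is a hit with probability 0.25; count the hits in each season.
hit_or_out = rng.random(size=(n_seasons, at_bats)) < 0.25
hits_per_season = hit_or_out.sum(axis=1)

print('Average hits per simulated season:', hits_per_season.mean())  # about 100
observed_hits = 80
print('P(80 hits or fewer by chance):', np.mean(hits_per_season <= observed_hits))
```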

Because the work in (4) and (2) differs only in that question (4) involves measured data and question (2) involves counted data, there seems no reason to discriminate between the two cases with respect to the term “resampling.” With respect to the pairs of cases (1) and (2), and (3) and (4), there is no difference in the actual work performed, though there is a difference in the way the question is framed. I would therefore urge that the label “resampling” be applied to (1) and (3) as well as to (2) and (4), to bring out the important fact that the procedure is the same as in resampling questions in statistics.

+

One could easily produce examples like (1) and (2) for cases that are similar except that the drawing is without replacement, as in the sampling version of Fisher’s permutation test — for example, a tea taster (Fisher 1935; Fisher 1960, chap. II, section 5). And one could adduce the example of prices in different state liquor control systems (see Section 12.16) which is similar to cases (3) and (4) except that sampling without replacement seems appropriate. Again, the analogs to cases (2) and (4) would generally be called “resampling.”

+

The concept of resampling is defined in a more precise way in Section 8.9.

+
+
+

3.12 Conclusion

+

We define “chance” as the absence of predictive power and/or explanation and/or control.

+

When the remote control rotates more than three or four turns I cannot control the outcome — whether TV or sofa end — with any accuracy. That is to say, I cannot predict much better than 50-50 with more than four rotations. So we then say that the outcome is determined by “chance.”

+

As to those persons who wish to inquire into what the situation “really” is: I hope they agree that we do not need to do so to proceed with our work. I hope all will agree that the outcome of flipping the remote control gradually becomes unpredictable (random) though still subject to similar physical processes as when predictable. I do not deny in principle that these processes can be “understood”; certainly one can develop a machine (or a baton twirler) that will make the outcome predictable for many turns. But this has nothing to do with whether the mechanism is “really” something one wants to say is influenced by “chance.” This is the point of the remote control demonstration. The outcome traverses from non-chance (determined) to chance (not determined) in a smooth way even though the physical mechanism that produces the revolutions remains much the same over the traverse.

+ + + +
+ + +
+ + +
+ + + + \ No newline at end of file diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 00000000..26aa25ac --- /dev/null +++ b/requirements.txt @@ -0,0 +1,7 @@ +# Python requirements to run code in Simon resampling book. +# Install with: +# $ pip install -r requirements.txt +numpy +scipy +matplotlib +pandas