Here's your next reading assignment. Read **Sections 4.3.4-4.4** in your textbook and answer the following questions by 8 a.m., Wednesday, March 14th. (Yes, that's Pi Day. No, I don't think that's the coolest thing ever.) Be sure to log in to the blog (using the link near the bottom of the sidebar) before leaving your answers in the comment section below.

- Consider the hypothesis test H_{0}: μ = 100 vs. H_{A}: μ < 100. Suppose the p-value for this test using a particular sample turns out to be p = 0.04. What probability does this p-value represent? Be as specific as possible.
- Example 4.38 on page 167 provides a justification for the advice to never change a two-sided test to a one-sided test after observing the data. Do you accept this justification? Why or why not?
- In the example discussed in class on Monday, the hypothesis test H_{0}: μ = 494 vs. H_{A}: μ > 494 was conducted and found to have a p-value of p = .2743. Under what conditions would you be okay to reject the null hypothesis given this p-value?
- What's one question you have about the reading?
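For concreteness, here is a minimal sketch in Python of how a one-sided p-value like the 0.04 in the first question could arise. The sample mean, standard deviation, and sample size below are made-up numbers chosen only for illustration; they are not from the textbook or the class example.

```python
from statistics import NormalDist

# Hypothetical sample statistics (illustrative only, not from the book):
mu_0 = 100      # null-hypothesis mean, H0: mu = 100
x_bar = 96.0    # observed sample mean
s = 15.0        # sample standard deviation
n = 45          # sample size

# Standard error of the mean and the z statistic
se = s / n ** 0.5
z = (x_bar - mu_0) / se

# One-sided p-value for HA: mu < 100 is the lower-tail area
p_value = NormalDist().cdf(z)
print(f"z = {z:.3f}, p = {p_value:.4f}")
```

With these made-up numbers, z comes out to about -1.79 and p to about 0.037, in the same ballpark as the 0.04 in the question.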

1) The p-value is the probability of getting data at least as favorable to the alternative hypothesis as ours, assuming the null hypothesis is true.

2) His justification is relevant as a warning. We could switch tests after seeing the data if we are careful to minimize our Type 1 errors by doubling our alpha value.


3) If the sample size is very small, there is solid evidence to reject the null hypothesis if p = .2743.

4) If we take multiple samples, can we "average" the p-values across those samples to determine the probability of getting the average of those samples given H0?

1. This p-value represents the probability of getting a sample mean at least this far from the hypothesized mean just by chance, assuming the null hypothesis is true. So if the true mean for this population is 100, there is a 4% chance that we would get a value as far from 100 as the sample mean we observed.

2. Well I don't really understand their justification in terms of the likelihood of a Type I error, but it makes sense that once you start a certain test, you shouldn't change the parameters you're looking at just to make the data look better.

3. We typically reject the null hypothesis when the p-value is less than the significance level. So to reject the null hypothesis, H0: μ = 494, the significance level for this situation would have to be greater than .2743, which is high considering the typical significance level is .05.

4. Example 4.38 doesn't quite make sense to me. I don't really understand why there's a higher chance of making a Type 1 error. It seems that by only looking at one side of the curve, you're only half as likely to consider values with p-values below the significance level. Doesn't this mean that you would only reject the null hypothesis half the time you should, meaning you're making a Type II error?

1) The p-value represents the probability, holding Ho true, that the data would favor HA at least this strongly. A smaller p-value indicates stronger grounds to reject Ho.

2) The justification can be accepted, since it is more likely to cause error when switching from a two-sided to a one-sided test. Besides, making changes after observing the data can also introduce bias into the test.

3) If the p-value is less than alpha (the significance level).

4)No question.

1. This p-value represents that there is less than a 4% chance that a mean of less than 100 would be seen, assuming the null hypothesis is true. Since there is such a small chance (<5%), this gives us good evidence to refute the null hypothesis in most cases.

2. If we just chose one side without having seen the data, we would reject H0 about 5% of the time, as shown in the question above. If, however, we can see the data and the two-sided scenario, we can choose the best "side" and redo the one-sided test with that side. However, since our answer considers both sides, our error doubles from 5% to 10%. This gives us a p-value that isn't suited for comparison to our significance level, so we are in danger of arriving at the wrong conclusion.
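The 5% + 5% arithmetic in this answer can be checked directly from the normal tail areas. A quick sketch, assuming the usual one-sided critical value of about 1.645 for α = 0.05:

```python
from statistics import NormalDist

z_crit = 1.645  # one-sided critical value for alpha = 0.05

upper_tail = 1 - NormalDist().cdf(z_crit)   # P(Z > 1.645) under H0
lower_tail = NormalDist().cdf(-z_crit)      # P(Z < -1.645) under H0

# If we are allowed to pick whichever side the data favors,
# we reject whenever Z lands in EITHER tail:
total_type1 = upper_tail + lower_tail
print(f"{upper_tail:.4f} + {lower_tail:.4f} = {total_type1:.4f}")
```

Each tail contributes about 0.05, so cherry-picking the side recovers the 10% Type 1 error rate the answer describes.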

3. If a type 2 error is more dangerous or more costly than a type 1 error, we should choose a higher significance level. If it's higher than our p-value, then we can reject the null hypothesis.

4. How is conducting a two sided test with a p-value of half the significance value, then turning it into a one-sided test once you've identified the good side different from just doing a one sided test where the p-value is the same as the significance value? They both have the same error rate, are they both adequate to overthrow the null hypothesis?

1) It means that the probability that another data set will reflect the alternative hypothesis at least as well as did the current data set, given that the null hypothesis is true, is about 0.04. So the smaller the p value, the less likely it is that this data set supports the alternative hypothesis purely by chance (shows correlation).

2) In the example it makes sense. According to the book, the way they went about setting up a one-sided test would double the likelihood of error as compared to the two-sided test. However I am not positive it is true in every case--if perhaps you changed the acceptable range to match your expected error for two-sided tests you could avoid this problem.

3) If it were more costly to fail to reject the null hypothesis. Say the school desperately needed higher test scores, and even small improvements would be favorable. Then a higher tolerance would be okay.

4) Is it just a judgment call if the p-value we are deciding with is exactly met by the data? If they match, do you reject the null hypothesis or not?

1) This probability represents the likelihood that this sample mean would occur (4%), given that the null hypothesis is true. Without knowing the significance level, we cannot make any more decisions on this data.

2) Yes, if you change your assumptions after looking at the data, you are removing objectivity from your experiment. That would be the same as going back to change your hypothesis after you wrote your conclusion because the data didn't support what you thought.

3) You would be more willing to reject the null hypothesis if the opportunity costs of not doing so were high. You would therefore have a very high significance level.

4) Can we do more examples of determining significance levels? I'm a fan of the realistic examples, and I liked this reading for that reason 🙂

1. A p-value of 0.04 means that only 4% of random samples could be expected to have a sample mean less than 100.

2. Yes. For either test (one-sided or two-sided) you will have 5% error. But if you want to compare one-sided to two-sided, you have to do two one-sided tests, doubling the possible error.

3. In order to reject the null hypothesis, the p-value must be less than the significance level. If the significance level were 0.3 (that is, a 70% confidence level), then we could reject the null hypothesis.

1) The p-value is the probability, assuming that the null hypothesis is true, of getting data at least as favorable as the data supporting the alternative hypothesis.

2) Switching tests may be discouraged, but doubling the alpha value to minimize the dangers of Type I error would allow this switch.

3) Based on p = .2743, a small sample size would be grounds to reject the null hypothesis.

4) Can p-values be used as data points to find a mean p-value for probability over different samples?

1) The p-value represents the probability of observing a mean as extreme as the sampling mean, given that the null hypothesis is true.

2) Yes. However, we could switch tests after observing the data if we are careful to minimize Type 1 error by increasing the alpha value.

3) If we have a very large alpha value, to minimize Type II errors (p > 0.28) or if the sample size is very small or is strongly skewed.

4) How exactly does minimizing Type I errors increase Type II errors?

1. P is the conditional probability of the alternative H happening if we assume Ho is true.

2. Yes because we will make more Type 1 errors.

3. If our alpha is greater than .2743.

4. I'm confused about the difference between a one-sided and a two-sided hypothesis.

1. This represents that, if using the .05 significance level, it is reasonable to reject the null hypothesis.

2. Yes, since the error margins for one-sided portions only account for the one side.

3. If we were using a significance level of .30.

4. None

1) There is 0.04 probability of observing a result as extreme as this one, given the null hypothesis is true.

2) Actually I did not understand why they are advising us against it.

3) It would be OK to reject the null hypothesis if p-value < alpha (significance level).

4) I really don't understand why it is never OK to change two-sided tests to one-sided tests after observing data.

1.

This p-value is the probability that the data observed is as favorable or more to the alternative hypothesis Ha, if the null hypothesis were true. So in this case, if the null hypothesis were true, the probability of observing the sample mean is only 0.04, meaning we can most likely reject the null hypothesis because it is below the significance level of 0.05.

2.

I do accept this justification. There is in total 10% error because we would incorrectly reject the null hypothesis 5% of the time and would incorrectly accept the null hypothesis 5% of the time. This would be twice the error rate of our accepted significance level of 0.05.

3.

It would be okay to reject the null hypothesis if an extremely large significance level is required, like in the case of a type 2 error when the safety of human lives is involved.

4.

What are other circumstances where a high p-value can be rejected?

1) A p-value of .04 would mean that the probability of the data having a mean of less than 100 is 4 percent.

2) Yes, it makes sense that the probability of a Type 1 error would double if you switch from a two-sided test to a one sided test.

3) We could reject the null hypothesis if the significance level is over .2743.

1. It means that if the null hypothesis of mu = 100 were true, then there would be a probability of .04 that the observed sample mean would occur. This is less than the value of .05 (significance level) meaning we could reject the null hypothesis in this case.

2. Yes, from the example it is clear that you will make the mistake of committing a Type I error twice as often.

3. If the significance level was greater than .2743 we could reject the null hypothesis.

4. The changing from two sided to one sided example is still kind of confusing

1. The probability represented by the p-value is the probability that our data supports the alternative hypothesis as opposed to the null hypothesis.

2. Yes, because they adequately described how it would double type 1 errors by mirroring it on either side.

3. That the school was required to do better or it would be shut down.

4. none.

1) The p-value is the probability that, assuming the null hypothesis to be true, the result being tested in the alternative hypothesis (or one deviating from the null hypothesis mean even more) would occur. In this case, there is a 4% chance that an observed value lower than 100 (or any observed value lower than it) would occur in a random set of data taken from the set with a mean of 100.

2) I'm not quite sure why they are adding the probabilities for when the alternative hypothesis is changed based on high and low observed values and associating this combined value for a Type 1 error with only the low or high change to the alternative hypothesis. The logic behind it doesn't make sense, but at the same time it makes sense to set up your hypothesis test before observing values so that you as the researcher do not have a skewed opinion.

3) When there is a large risk associated with accepting the null hypothesis when the alternative hypothesis is actually true.

4) Since I won't be in lecture (refer to my email for the reason), are there any exercises that you suggest I work to go along with the reading?

A p-value of 0.04 means that, if the null hypothesis (μ = 100) was true, then the calculated average that we received from our sample is very unlikely (4% of the time, it would happen). A p-value that small typically means that we can reject the null hypothesis, or at the very least, means that the alternative hypothesis (μ < 100) is more likely to occur.

No, this example is fairly confusing. How can you get 5% error if only one half of the graph is relevant, but you can only safely reject the null hypothesis if the value is within the significance level? I'm confused, and I would like this explained in class if possible. This is also my question about the reading.

You could reject the null hypothesis if the p-value was less than your significance level. While it is not likely to have such a high significance level (28%?), if it was that high, you could safely reject the null hypothesis.

1) The p-value is the probability of observing data at least as favorable to the alternative hypothesis as our current data set, if the null hypothesis was true. In this case, there's a probability of .04 that we'll see data that supports our alternative hypothesis.

2) We could probably double our alpha value to reduce the risk of Type 1 errors if we switch from two-sided to one-sided, so I don't entirely accept this justification as law.

3) Wasn't in class (sick), but I think if the sample was really small I'd be OK accepting this P-value

4) where does the value p <.05 come from when rejecting Ho?

1) The p-value is the probability that the selected data has a mean value smaller than 100.

2) The justification tells us to be careful with Type 1 errors by doubling our alpha value.

3) If the sample size is very small, it's okay to reject the null hypothesis.

1. This p-value means that if the null hypothesis is true, the probability of observing a mean greater than or equal to 100 is only 0.04.

2. I didn't quite get it. Does it mean you will have a smaller p-value when switching from a two-sided to a one-sided test? Is the Type 1 error therefore increased?

3. If a Type 2 error is more dangerous and costly than a Type 1 error, a higher p-value such as 0.2743 can be accepted to reject the null hypothesis.

4. As in 2, I don't quite understand what really happens when switching from two-sided to one-sided.

1) p-value is the probability that given Ho is true, the sample data set that we have would seem to suggest (or in favor of) HA to be true. In this example, there is a 4% chance that our data set would suggest that Ho is false and we adopt HA: μ < 100 to be true.

2) Maybe not. I guess my argument is: had we gone with the double-sided test and the null hypothesis was true, wouldn't we have incorrectly failed to reject the null hypothesis 5% of the time?

3) I would reject the null hypothesis if the significance level is at least equal to or bigger than .3

4) I am still confused about the reasoning of comparing the p-value to the significance level. At one point, we say that if the p-value is smaller than the significance level, we reject the null hypothesis. Doesn't a small p-value suggest that there is very little chance that the data set we have follows the alternative hypothesis, and hence that we are less likely to reject Ho? Also, how do we check for skewness without making block diagrams before we can apply the normal model to the data set?

1.) This p-value represents the probability of observing such an extreme sample mean given that the null hypothesis is true.

2.) Yes; if you switch to a one-sided test after observing a data set, you are biased by that data set. For example, if you are using a two-sided test at a significance level of 0.05, you should reject the null hypothesis if you achieve an upper or lower p-value of 0.025. By switching to a one-sided test after observing the data, you will end up rejecting the null hypothesis if you achieve an upper or lower p-value of 0.05. You will in effect be switching to a significance level of 0.1 instead of the 0.05 you intended. This makes you twice as likely to reject the null hypothesis and twice as likely to make a Type 1 error.
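This answer's doubling argument can also be checked by simulation: draw samples with the null hypothesis true, let the analyst pick whichever side the data favors after seeing it, and count false rejections. The population values and sample size below are arbitrary illustrative choices, not the class example's actual numbers:

```python
import random
from statistics import NormalDist

random.seed(0)  # reproducible

mu_0, sigma, n = 494, 100, 30        # arbitrary illustrative values
z_crit = NormalDist().inv_cdf(0.95)  # one-sided cutoff, about 1.645
trials = 20_000

rejections = 0
for _ in range(trials):
    sample = [random.gauss(mu_0, sigma) for _ in range(n)]
    x_bar = sum(sample) / n
    z = (x_bar - mu_0) / (sigma / n ** 0.5)
    # Choosing the "best" side after seeing the data means we
    # reject whenever |z| exceeds the ONE-sided cutoff:
    if abs(z) > z_crit:
        rejections += 1

rate = rejections / trials
print(f"Type I error rate: {rate:.3f}")  # close to 0.10, not 0.05
```

The simulated Type 1 error rate lands near 10%, twice the nominal 5%, matching the answer above.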

3. If the significance level was greater than or equal to 0.2743 assuming it is a one sided test.

4. What's a good example of when it would be better to use a higher significance level versus a low one?

1) The area under the normal distribution curve, so X-bar could be either a positive or a negative value. This means that if the null hypothesis were true, we would never get a large sample mean.

2) Well, it really depends on the requirement; sometimes it is best to use two-sided and sometimes it is best to use one-sided.

3) If the null hypothesis is not true.

4) I am still confused by this topic in general. Why do we care about all this null hypothesis stuff? This subject drives me nuts.

1. The p-value quantifies how strongly the data favors HA over H0. We typically regard a p-value of < 0.05, i.e. this value, as small enough to reject H0. For this, we could reject that μ = 100 and accept that μ < μ0. Only if we received a *high enough* (note: not low enough) Z score would we have half of the allowable error range.

3. If the SE was small (ie a small sample size), it would be enough to reject the H0.

4. Nothing really.

1. The p-value is the probability of observing data at least as favorable to the alternative hypothesis as our current data set, if the null hypothesis was true.

2. In fact, the justification is just a warning. We can still change a two-sided test to a one-sided test after observing the data; just remember to double the alpha value.

3. I have no idea... maybe the sample size must be very small???

4. I still can't understand Example 4.38 very well. Does the example mean that if we use the wrong method, the Type 1 error will always be 10% when alpha equals 5%?

1.) The p-value of 0.04 means that if the null hypothesis were true, the probability of observing such a sample is only 4%.

2.) I do accept this justification. Changing from a two-sided test to a one-sided test is changing from, for example, considering the outer 5% of a distribution (2.5% on each tail) to considering 5% on only either the left or right tail. This means that we will make twice as many Type I errors as intended.

3.) I would be comfortable rejecting the null hypothesis if the standard error of the population parameter is large.

4.) Are either type I or type II errors typically more detrimental, or does it entirely depend on the particular experiment?

1. A very small probability; so small that we should reject the null hypothesis.

2. I'm not sure if I completely buy it, since the sample mean can either be smaller or larger than the null value, not both...

3. Since the p-value is > 0.05, the null hypothesis can only be rejected if the null value is not in a specified (for example, 95%) confidence interval.

4. Can we actually calculate the probability in (1)?

1. The p-value represents the probability that the mean is 0.2743.

4. I thought that the justification discussed in Problem 2 on this quiz is very unclear. I had a very difficult time understanding what it was talking about.

My answers to 2 and 3 appear to be missing, so here they are:

2. I accept this justification because it shows that changing a two-sided test to a one-sided test just takes the best case scenario of each case, causing double the errors.

3. It would be ok to reject the null hypothesis given this p-value if the significance level was > 0.2743.

1. A p-value this low means that we can reject the null hypothesis and that we will get data as predicted by the alternate hypothesis.

2. Only switch tests if it minimizes Type 1 error; otherwise, the advice is correct.

3. The sample size needs to be small to reject the null.

4. How many forms of hypothesis testing are there?

1. This p-value means that there is a 4% chance that the sample mean was produced through random variation. Since this p-value is low, there is strong evidence to support the alternative.

2. I accept this justification. If we suddenly change from a two-sided test to a one-sided test, we will double our significance level and sabotage our hypothesis test.

3. We could reject the null hypothesis given this p-value if our significance level is higher than .2743. If it is lower, we will not be able to reject the null.

4. I didn't understand 4.4 much at all.

1) The p-value is .04. This means there is a 4% chance of finding data which is at least as favorable as the alternative hypothesis. This also assumes that the null hypothesis is true.

2) While his justification is acceptable, we may still make this alteration if we account for the change. This would include minimizing our Type 1 errors by increasing our alpha value.

3) It would be okay to reject the null hypothesis given the condition of a very small sample size.

4) I am not sure when the null hypothesis is true versus when it is false.

1. The p-value represents the probability of observing a sample mean that is at least as favorable to the alternate hypothesis (μ < 100) as the current sample mean if the null hypothesis (μ = 100) is true.

2. I accept the book's justification. Their argument seems to make sense to me because you shouldn't change what you're doing to get better statistics from your data.

3. It would be okay to reject the null hypothesis given this p-value if type-2 error would be extremely dangerous or costly. However, the p-value of .2743 is high and we should probably not reject the null hypothesis.

1. This p-value reflects how strongly the data favors μ < 100 over μ = 100. With a p-value of 0.04, we can safely reject that μ = 100 because 0.04 < 0.05.

2. No, because we can only switch to one of the one-sided tests, and this leaves only a 5% chance of making a Type 1 error.

3. If accepting HA could be very dangerous and we have chosen a high alpha value.

4. Is there a way to calculate the p-value without having the s value?

3. In the example discussed in class on Monday, the hypothesis test H0: μ = 494 vs. HA: μ > 494 was conducted and found to have a p-value of p=.2743. Under what conditions would you be okay to reject the null hypothesis given this p-value?

The null hypothesis had a lower p value

4. What's one question you have about the reading?

Can we get pie in class for pie day?

1. The probability of observing the sample mean if the null hypothesis was true.

2. No; one-sided hypotheses can be used if the data is carefully observed beforehand.

3. If the sample is obtained incorrectly or is too small.

1. It represents the probability of even observing a sample of that value.

2. By switching the type of test, we double the possibility of experiencing a type-1 error. For a test like this, their outcome is justified.

3. If there are a lot of forces pushing the significance level up, like pressure to change programs, the high p-value of .27 could be accepted. Otherwise, we have a large chance of making a Type 1 error and we shouldn't reject the null hypothesis.

4. Is there any non-subjective way of changing the significance level?

In the example discussed in class on Monday, the hypothesis test H0: μ = 494 vs. HA: μ > 494 was conducted and found to have a p-value of p=.2743. Under what conditions would you be okay to reject the null hypothesis given this p-value?

Under no conditions. The p-value should be much lower. You could accept it, but you'd have to declare your lack of confidence.

What's one question you have about the reading?

More examples and integrate this with confidence intervals.

A)

The p-value is the probability of observing data at least as favorable to the alternative hypothesis as our current data set, if the null hypothesis was true. (the definition from the book).

B)

I would go with the book's justification; I would say that I accept it, because this is the book's reasoning:

Suppose the sample mean was larger than the null value, μ0 (e.g. μ0 would represent 7 if H0: μ = 7). Then if we can flip to a one-sided test, we would use HA: μ > μ0. Now if we obtain any observation with a Z score greater than 1.65, we would reject H0. If the null hypothesis is true, we incorrectly reject the null hypothesis about 5% of the time when the sample mean is above the null value, as shown in Figure 4.17.

Suppose the sample mean was smaller than the null value. Then if we change to a one-sided test, we would use HA: μ < μ0. If x̄ had a Z score smaller than -1.65, we would reject H0. If the null hypothesis was true, then we would observe such a case about 5% of the time.

By examining these two scenarios, we can determine that we will make a Type 1 Error 5% + 5% = 10% of the time if we are allowed to swap to the "best" one-sided test for the data. This is twice the error rate we prescribed with our significance level: α = 0.05 (!).

C)

When the p-value is smaller than what we found. Besides that, when we have a small sample size.

D)

It is not clear to me what the main difference between Type 1 and Type 2 errors is.

In the example discussed in class on Monday, the hypothesis test H0: μ = 494 vs. HA: μ > 494 was conducted and found to have a p-value of p=.2743. Under what conditions would you be okay to reject the null hypothesis given this p-value?

Answer: If our significance level alpha is greater than .2743, or, stated another way, if our confidence level is only intended to be less than 72.57 percent. Most of the time much higher confidence levels are desired, so it would be an unusual case that we would reject the null hypothesis given a p-value such as the above.
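As a rough sketch of the decision rule this answer describes (reject H0 when p < α), and of the z-score that a p-value of .2743 corresponds to. Treating the class example as a one-sided z-test is an assumption here, since its details aren't reproduced in this thread:

```python
from statistics import NormalDist

p_value = 0.2743
# The z statistic that produces this upper-tail p-value:
z = NormalDist().inv_cdf(1 - p_value)
print(f"z = {z:.2f}")  # roughly 0.6, well short of the 1.645 cutoff

def reject_null(p, alpha):
    """Standard decision rule: reject H0 when p < alpha."""
    return p < alpha

print(reject_null(p_value, 0.05))  # False at the usual level
print(reject_null(p_value, 0.30))  # True only at an unusually lax level
```

The code just restates the answer above: only a significance level above .2743 (i.e., a confidence level below about 72.6%) would let us reject here.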

What's one question you have about the reading?

Answer: Actually this is about the question set, but for the question above (#3), what sorts of situations would there be that we were using a confidence level that is so low?

**NOTE:** My computer ran out of power just moments before I was about to submit my answer, so I had to plug in and restart, which is why I am a few minutes late with my post. I hope you can still accept it.

1. From the definition in the book, the p-value is the probability of observing data at least as favorable to the alternative hypothesis as our current data set, if the null hypothesis is true. A p-value of 0.04 says that there is a 4% chance of obtaining positive results through random chance. Since 4% is pretty low, this is a favorable p-value for accepting the alternative hypothesis.

2. I agree that you shouldn’t switch from a two-sided test to a one-sided test once you’ve seen the data, unless you are somehow able to reduce the Type 1 error and increase the significance level.

3. The sample size in our class example is 86 students. However, if I didn’t know that this would be the case, I would be likely to reject the null hypothesis with a very small sample size.

4. What is the cutoff for acceptable p-values versus ones that are too high? Is there some sort of confidence interval for p-values that I can construct? Or at least maybe a cutoff point so that I can exactly quantify when a p-value is too big and when one is small enough.

1) That there is a 4% chance of choosing a sample that supports the alternative hypothesis and rejects the null hypothesis as much as this one does.

2) Yes, but it seems more of a logical decision not to count your data twice than some hard rule of mathematics the way they explain it.

3) The p-value we obtained in class basically signified that there was a 27% chance that the increase in scores seen was a fluke rather than an actual result of changing the math curriculum. I would reject the null hypothesis and implement the new math curriculum if the costs of implementation were low enough that the benefits of a small increase in test scores outweighed the risks associated with new costs.

4) Nope