P-values in Statistical Statements

 

Nature 506, 150–152 (13 February 2014)

Statistical Errors

Regina Nuzzo

[paraphrase]

When UK statistician Ronald Fisher introduced the P value in the 1920s, he did not mean it to be a definitive test. He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: worthy of a second look. The idea was to run an experiment, then see if the results were consistent with what random chance might produce. Researchers would first set up a 'null hypothesis' that they wanted to disprove, such as there being no correlation or no difference between two groups. Next, they would play the devil's advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as what was actually observed. This probability was the P value. The smaller it was, suggested Fisher, the greater the likelihood that the straw-man null hypothesis was false.

Researchers need to realize the limits of conventional statistics. They should instead bring into their analysis elements of scientific judgement about the plausibility of a hypothesis and study limitations that are normally banished to the discussion section: results of identical or similar experiments, proposed mechanisms, clinical knowledge and so on. Experienced researchers have said there are three questions a scientist might want to ask after a study: 'What is the evidence?' 'What should I believe?' and 'What should I do?' One method cannot answer all these questions. The numbers are where the scientific discussion should start, not end.

[end of paraphrase]
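As an illustration of the procedure described in the first paragraph of the paraphrase, the sketch below computes a P value by simulation. It assumes a null hypothesis of 'no difference between two groups' and counts how often randomly shuffling the group labels produces a difference in means at least as extreme as the one observed. The data, the group names and the choice of test statistic are assumptions made up for this example, not anything taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical measurements for two groups (illustrative values only).
    group_a = np.array([4.1, 5.0, 6.2, 5.8, 4.9, 5.5])
    group_b = np.array([5.9, 6.4, 7.1, 6.0, 6.8, 5.7])

    # Observed test statistic: difference in group means.
    observed = group_b.mean() - group_a.mean()

    # Null hypothesis: the groups do not differ, so the labels are exchangeable.
    # Shuffle the labels many times and count how often chance alone gives a
    # difference at least as extreme as the one observed.
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    n_shuffles = 100_000

    extreme = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        diff = pooled[n_a:].mean() - pooled[:n_a].mean()
        if abs(diff) >= abs(observed):
            extreme += 1

    p_value = extreme / n_shuffles
    print(f"observed difference: {observed:.2f}, two-sided P value: {p_value:.3f}")

The smaller this fraction, the rarer the observed result would be if chance alone were at work, which is the sense in which Fisher took a small P value to mean a finding was worthy of a second look.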
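The closing point about weighing the plausibility of a hypothesis can be made concrete with a short calculation: how often would a 'significant' finding be a false alarm for hypotheses of differing prior plausibility? The significance threshold, statistical power and prior probabilities below are illustrative assumptions chosen for this sketch, not figures from the article.

    # How often is a significant result a false alarm, as a function of how
    # plausible the hypothesis was before the study? All numbers here are
    # illustrative assumptions.
    alpha = 0.05   # chance of a significant result when there is no real effect
    power = 0.80   # chance of a significant result when the effect is real

    for prior in (0.05, 0.25, 0.50):   # prior plausibility that the effect exists
        true_positives = power * prior
        false_positives = alpha * (1 - prior)
        false_alarm_rate = false_positives / (true_positives + false_positives)
        print(f"prior plausibility {prior:.0%}: "
              f"chance a significant finding is a false alarm {false_alarm_rate:.0%}")

Under these assumptions, a significant result for a long-shot hypothesis is more likely than not to be a false alarm, while the same P value for an even-odds hypothesis is far more trustworthy, which is why the numbers are where the scientific discussion should start rather than end.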

 

Link to — Statisticians warn over Misuse of P Values