"Statistical Hypothesis Inference Testing"
Perhaps that is not the best way to describe the use of P values in research, but I think it is the one part of the recent article published in Nature that caught my attention. The article, "Scientific method: Statistical errors: P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume," is a good read on the topic, one that revisits questions about the merit of one of our most commonly used statistical tools.
Number hacking, or P-hacking, has led some articles to be perceived as more valuable than they actually are at face value. That is probably the main issue, and the background of the P value may surprise some: it began as a simple means of analysis that was never intended to be the be-all and end-all tool. That is perhaps the biggest struggle when teaching the value of statistics to students and healthcare professionals, especially in the era of evidence-based medicine.
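To make the P-hacking problem concrete, here is a minimal simulation sketch (my own illustration, not from the Nature article): if a researcher tests 20 independent null hypotheses and reports whichever one crosses P < 0.05, the chance of at least one "significant" false positive is roughly 64%, not 5%. The function names and parameters below are hypothetical, chosen just for this demo.

```python
import math
import random

def p_value_two_sided(heads, n):
    """Normal-approximation two-sided p-value for testing a fair coin (null p = 0.5)."""
    z = (heads - n * 0.5) / math.sqrt(n * 0.25)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

def p_hacking_rate(num_runs=5000, tests_per_run=20, flips=100, alpha=0.05, seed=1):
    """Fraction of runs in which at least one of `tests_per_run` tests on
    pure-noise data crosses alpha -- the family-wise false-positive rate."""
    rng = random.Random(seed)
    hacked = 0
    for _ in range(num_runs):
        for _ in range(tests_per_run):
            heads = sum(rng.random() < 0.5 for _ in range(flips))
            if p_value_two_sided(heads, flips) < alpha:
                hacked += 1
                break  # the "researcher" stops at the first significant result
    return hacked / num_runs

# Analytic expectation: 1 - (1 - 0.05)^20
print(round(1 - (1 - 0.05) ** 20, 2))  # → 0.64
```

The simulated rate lands near the analytic value (slightly higher, since the normal approximation on discrete coin flips rejects a bit more than 5% per test); the point is simply that per-test significance says nothing about significance after a hunt through many tests.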
One thing that has stood out to me is something a champion of EBM said at a conference I attended. The gist was: medicine rests so much on the assumption of P < 0.05, while the rest of the sciences are much stricter. Would you board a plane that would land you at your destination with only that level of certainty? I think a shake-up may come in the future, but most likely not for some time. Even so, I am not sure my own mindset would adjust quickly enough if it did, after thinking this way for so long.
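The "much stricter" comparison can be put in numbers (my own illustration, not from the speaker or the article): particle physics conventionally requires a five-sigma result before claiming a discovery, while P < 0.05 corresponds to only about two sigma under a normal approximation. A quick sketch:

```python
import math

def two_sided_p(z):
    """Two-sided tail probability of the standard normal at |z| sigma."""
    return math.erfc(abs(z) / math.sqrt(2))

print(two_sided_p(1.96))  # roughly 0.05 -- the medical convention
print(two_sided_p(5.0))   # roughly 6e-7 -- the (two-sided) physics bar
```

On that scale, the five-sigma standard is tens of thousands of times more demanding than P < 0.05, which is the force of the airplane analogy.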
Nuzzo R. Scientific method: Statistical errors: P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume. Nature 506: 150–152 (13 February 2014).