Scientific Skepticism | Dr. Steven J. Allen

97% is a number you might have heard a lot in the last few years. That’s the number of scientists who supposedly believe in global warming theory. That 97% claim is questionable, but let’s ask the more important question: why do we find the idea of consensus convincing at all? The terms “Global Warming Skeptic” and “Climate Change Skeptic” are used as insults, but those who use this line of attack ignore that science only works when there are skeptics. Science is rooted in replicable research and experimentation. A scientist examines an existing set of facts and concocts a theory that explains those facts. He or she makes a prediction to test that theory. If the prediction comes true, that constitutes evidence to support the theory. If the prediction fails, that undermines the theory, and the scientist goes back to the drawing board. It doesn’t matter whether a scientist is on the payroll of the American Cancer Society or a tobacco company, whether he or she is a Communist, a Jew, or a Baptist, beats his or her spouse, or volunteers at a soup kitchen. Only the evidence counts.

But what happens when someone gets the evidence wrong and it needs correction? That’s what critical peer review, aka “skepticism,” is for. In the biomedical sciences, non-replication rates are estimated to range from 75 to 90 percent. Venture capital firms now take it for granted that 50 percent of published academic studies cannot be replicated. Imagine what would be done in those cases if there were no skeptics. Business and medicine would be at a standstill. If climate skeptics end up being correct, those attempting to silence them will go down in history alongside the members of the “scientific consensus” that, in years past, agreed that the earth was the center of the universe, that continental drift was impossible, that canals existed on Mars, and that evils such as white supremacy and eugenics were scientifically true.

When told of a publication entitled “100 Authors Against Einstein,” Albert Einstein reputedly said, “Why one hundred? If I were wrong, one would have been enough.” Science cannot function if skeptics are harassed and ostracized. When someone challenges a scientific consensus with facts and logic, that should be encouraged, not dismissed for political reasons. Argument, not anathemas, is the way to approach scientific issues surrounding climate change. To learn more, you can read our study on climate change advocacy at climatedollars.org. I’m Dr. Steven J. Allen. Thanks for watching.

Is Most Published Research Wrong?

In 2011 an article was published in the reputable "Journal of Personality and Social Psychology". It was called "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect" or, in other words, proof that people can see into the future. The paper reported on nine experiments. In one, participants were shown two curtains on a computer screen and asked to predict which one had an image behind it; the other just covered a blank wall. Once the participant made their selection, the computer randomly positioned an image behind one of the curtains, then the selected curtain was pulled back to show either the image or the blank wall. The images were randomly selected from one of three categories: neutral, negative, or erotic. If participants selected the curtain covering the image, this was considered a hit.

Now, with there being two curtains and the image positioned randomly behind one of them, you would expect the hit rate to be about fifty percent. And that is exactly what the researchers found, at least for negative and neutral images. For erotic images, however, the hit rate was fifty-three percent. Does that mean that we can see into the future? Is that slight deviation significant? Well, to assess significance, scientists usually turn to p-values, a statistic that tells you how likely a result at least this extreme is if the null hypothesis is true. In this case the null hypothesis would just be that people couldn't actually see into the future and the 53-percent result was due to lucky guesses. For this study the p-value was .01, meaning there was just a one-percent chance of getting a hit rate of fifty-three percent or higher from simple luck. p-values less than .05 are generally considered significant and worthy of publication, but you might want to use a higher bar before you accept that humans can accurately perceive the future and, say, invite the study's author on your news program; but hey, it's your choice.
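To make the p-value machinery concrete, here is a minimal sketch of computing a one-sided binomial p-value under the null hypothesis of pure guessing. The 1,000-guess sample size below is hypothetical, chosen only for illustration; the original paper reports its own trial counts.

```python
from math import comb

def binomial_p_value(hits, trials, p_null=0.5):
    """P(at least this many hits) if each guess succeeds with probability p_null."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical example: a 53% hit rate over 1,000 guesses
print(binomial_p_value(hits=530, trials=1000))  # ~0.03, just under the .05 bar
```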

After all, the .05 threshold was arbitrarily selected by Ronald Fisher in a book he published in 1925. But this raises the question: how much of the published research literature is actually false? The intuitive answer seems to be five percent. I mean, if everyone is using p less than .05 as a cut-off for statistical significance, you would expect five of every hundred results to be false positives. But that, unfortunately, grossly underestimates the problem, and here's why. Imagine you're a researcher in a field where there are a thousand hypotheses currently being investigated. Let's assume that ten percent of them reflect true relationships and the rest are false, but no one of course knows which are which; that's the whole point of doing the research. Now, assuming the experiments are pretty well designed, they should correctly identify around, say, 80 of the hundred true relationships. This is known as a statistical power of eighty percent, so 20 results are false negatives; perhaps the sample size was too small or the measurements were not sensitive enough. Now consider that, of those 900 false hypotheses, using a p-value threshold of .05, forty-five will be incorrectly identified as true.

As for the rest, they will be correctly identified as false, but most journals rarely publish null results: they make up just ten to thirty percent of papers depending on the field. That means the papers that eventually get published will include 80 true positive results, 45 false positive results, and maybe 20 true negative results. Nearly a third of published results will be wrong even with the system working normally. Things get even worse if studies are underpowered, and analysis shows they typically are, if there is a higher ratio of false-to-true hypotheses being tested, or if the researchers are biased. All of this was pointed out in a 2005 paper entitled "Why Most Published Research Findings Are False". So, recently, researchers in a number of fields have attempted to quantify the problem by replicating some prominent past results. The Reproducibility Project repeated a hundred psychology studies but found only thirty-six percent had a statistically significant result the second time around, and the strength of the measured relationships was on average half that of the original studies.
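The arithmetic behind that "nearly a third" figure can be written out as a short sketch, using the same illustrative numbers as the narration (1,000 hypotheses, 10% true, 80% power, a .05 threshold, and roughly 20 published null results):

```python
hypotheses = 1000
true_fraction = 0.10
power = 0.80   # chance a real effect is correctly detected
alpha = 0.05   # chance a false hypothesis slips through as "significant"

true_hyps = hypotheses * true_fraction        # 100
false_hyps = hypotheses - true_hyps           # 900

true_positives = power * true_hyps            # 80
false_positives = alpha * false_hyps          # 45
published_nulls = 20                          # the handful of negative results that get printed

published = true_positives + false_positives + published_nulls
print(false_positives / published)            # ~0.31: nearly a third of published results are wrong
```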

An attempted verification of 53 studies considered landmarks in the basic science of cancer only managed to reproduce six, even working closely with the original studies' authors. These results are even worse than I just calculated. The reason for this is nicely illustrated by a 2015 study showing that eating a bar of chocolate every day can help you lose weight faster. In this case the participants were randomly allocated to one of three treatment groups: one went on a low-carb diet, another on the same low-carb diet plus a 1.5-ounce bar of chocolate per day, and the third group was the control, instructed just to maintain their regular eating habits. At the end of three weeks the control group had neither lost nor gained weight, but both low-carb groups had lost an average of five pounds per person. The group that ate chocolate, however, lost weight ten percent faster than the non-chocolate eaters. The finding was statistically significant, with a p-value less than .05.

As you might expect, this news spread like wildfire: to the front page of Bild, the most widely circulated daily newspaper in Europe, and into the Daily Star, the Irish Examiner, the Huffington Post and even Shape Magazine. Unfortunately, the whole thing had been faked, kind of. I mean, the researchers did perform the experiment exactly as they described, but they intentionally designed it to increase the likelihood of false positives: the sample size was incredibly small, just five people per treatment group, and for each person 18 different measurements were tracked, including weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, and so on; so if weight loss didn't show a significant difference, there were plenty of other factors that might have. So the headline could have been "chocolate lowers cholesterol" or "increases sleep quality" or… something. The point is, a p-value is only really valid for a single measure. Once you're comparing a whole slew of variables, the probability that at least one of them gives you a false positive goes way up, and this is known as "p-hacking".
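A minimal sketch of why tracking many outcomes inflates the false-positive rate, assuming for simplicity that the 18 measurements are independent and each is tested at the .05 level:

```python
alpha = 0.05     # significance threshold per measurement
measures = 18    # number of outcomes tracked per participant

p_at_least_one_false_positive = 1 - (1 - alpha) ** measures
print(round(p_at_least_one_false_positive, 2))  # ~0.6: better-than-even odds of a spurious "finding"
```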

Researchers can also make a lot of decisions about their analysis that can decrease the p-value. For example, let's say you analyze your data and find it nearly reaches statistical significance, so you decide to collect just a few more data points to be sure. Then, if the p-value drops below .05, you stop collecting data, confident that these additional data points could only have made the result more significant if there were really a true relationship there. But numerical simulations show that relationships can cross the significance threshold by adding more data points even though a much larger sample would show that there really is no relationship. In fact, there are a great number of ways to increase the likelihood of significant results, like having two dependent variables, adding more observations, controlling for gender, or dropping one of three conditions. Combining these strategies together increases the likelihood of a false positive to over sixty percent, and that is using p less than .05. Now, if you think this is just a problem for psychology, neuroscience or medicine, consider the pentaquark, an exotic particle made up of five quarks, as opposed to the regular three for protons or neutrons.
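Here is a minimal simulation of that "peek, then top up the sample" strategy, assuming a simple one-sample z-test on data with no real effect; the starting sample size, top-up size and number of runs are arbitrary choices for illustration:

```python
import math
import random

def z_test_p(sample):
    """Two-sided p-value for 'mean is zero', treating the variance as known (sigma = 1)."""
    z = abs(sum(sample)) / math.sqrt(len(sample))
    return math.erfc(z / math.sqrt(2))

def one_run(n_start=20, n_extra=10):
    data = [random.gauss(0, 1) for _ in range(n_start)]
    if z_test_p(data) < 0.05:
        return True                       # "significant" on the first look
    data += [random.gauss(0, 1) for _ in range(n_extra)]   # collect a few more points
    return z_test_p(data) < 0.05          # look again

runs = 20_000
rate = sum(one_run() for _ in range(runs)) / runs
print(rate)   # noticeably above the nominal 0.05, even though there is no real effect
```

With a single extra look the inflation is modest, but it grows with every additional peek at the data.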

Particle physics employs particularly stringent requirements for statistical significance, referred to as 5-sigma, or about one chance in 3.5 million of getting a false positive. But in 2002 a Japanese experiment found evidence for the Theta-plus pentaquark, and in the two years that followed, 11 other independent experiments looked for and found evidence of that same pentaquark with very high levels of statistical significance. From July 2003 to May 2004 a theoretical paper on pentaquarks was published on average every other day. But alas, it was a false discovery: later experimental attempts to confirm the Theta-plus pentaquark using greater statistical power failed to find any trace of its existence. The problem was that those first scientists weren't blind to the data; they knew how the numbers were generated and what answer they expected to get, and the way the data was cut and analyzed, or p-hacked, produced the false finding.
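As an aside, the "one chance in 3.5 million" quoted for the 5-sigma standard is just the one-sided tail probability of a standard normal distribution beyond five standard deviations, which a couple of lines can verify:

```python
import math

# One-sided tail probability beyond 5 sigma for a standard normal distribution
p = 0.5 * math.erfc(5 / math.sqrt(2))
print(p, round(1 / p))   # ~2.9e-7, i.e. roughly 1 chance in 3.5 million
```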

Now, most scientists aren't p-hacking maliciously; there are legitimate decisions to be made about how to collect, analyze and report data, and these decisions impact the statistical significance of results. For example, 29 different research groups were given the same data and asked to determine if dark-skinned soccer players are more likely to be given red cards. Using identical data, some groups found there was no significant effect, while others concluded dark-skinned players were three times as likely to receive a red card. The point is that data doesn't speak for itself; it must be interpreted. Looking at those results, it seems that dark-skinned players are more likely to get red-carded, but certainly not three times as likely. Consensus helps in this case, but for most results only one research group provides the analysis, and therein lies the problem of incentives: scientists have huge incentives to publish papers; in fact, their careers depend on it. As one scientist, Brian Nosek, puts it: "There is no cost to getting things wrong, the cost is not getting them published". Journals are far more likely to publish results that reach statistical significance, so if a method of data analysis results in a p-value less than .05, then you're likely to go with that method.

Publication is also more likely if the result is novel and unexpected, which encourages researchers to investigate more and more unlikely hypotheses, further decreasing the ratio of true to spurious relationships that are tested. Now, what about replication? Isn't science meant to self-correct by having other scientists replicate the findings of an initial discovery? In theory yes, but in practice it's more complicated. Take the precognition study from the start of this video: three researchers attempted to replicate one of those experiments, and what did they find? Well, surprise surprise, the hit rate they obtained was not significantly different from chance. When they tried to publish their findings in the same journal as the original paper, they were rejected. The reason? The journal refuses to publish replication studies. So if you're a scientist, the successful strategy is clear: don't even attempt replication studies, because few journals will publish them, and there is a very good chance that your results won't be statistically significant anyway, in which case, instead of being able to convince colleagues of the lack of reproducibility of an effect, you will be accused of just not doing it right.

So a far better approach is to test novel and unexpected hypotheses and then p-hack your way to a statistically significant result. Now, I don't want to be too cynical about this, because over the past 10 years things have started changing for the better. Many scientists acknowledge the problems I've outlined and are starting to take steps to correct them: more large-scale replication studies have been undertaken in the last 10 years; there's a site, Retraction Watch, dedicated to publicizing papers that have been withdrawn; there are online repositories for unpublished negative results; and there is a move towards submitting hypotheses and methods for peer review before conducting experiments, with the guarantee that the research will be published regardless of the results so long as the procedure is followed. This eliminates publication bias, promotes higher-powered studies and lessens the incentive for p-hacking. The thing I find most striking about the reproducibility crisis in science is not the prevalence of incorrect information in published scientific journals. After all, getting to the truth, we know, is hard, and mathematically not everything that is published can be correct.

What gets me is the thought that even trying our best to figure out what's true, using our most sophisticated and rigorous mathematical tools, peer review, and the standards of practice, we still get it wrong so often. So how frequently do we delude ourselves when we're not using the scientific method? As flawed as our science may be, it is far and away more reliable than any other way of knowing that we have. This episode of Veritasium was supported in part by these fine people on Patreon and by Audible.com, the leading provider of audiobooks online, with hundreds of thousands of titles in all areas of literature, including fiction, nonfiction and periodicals. Audible offers a free 30-day trial to anyone who watches this channel; just go to audible.com/veritasium so they know I sent you. A book I'd recommend is called "The Invention of Nature" by Andrea Wulf, which is a biography of Alexander von Humboldt, an adventurer and naturalist who actually inspired Darwin to board the Beagle. You can download that book or any other of your choosing for a one-month free trial at audible.com/veritasium.

So, as always, I want to thank Audible for supporting me, and I really want to thank you for watching.

All Scientific Papers Should Be Free; Here’s Why They’re Not

If science drops in a field but no other researchers are around to hear it, does it further the academic area of study? Howdy researchers, Trace here for DNews. Science is a process, it’s a way of thinking about the world around us. Most of these scientific processes are thought through and then published in a journal, but to read them you have to pay! Shouldn’t all this scientific knowledge be FREE!? Firstly, science is mostly paid for by grants from governments, non-profits, foundations, universities, corporations or others with deep pockets. We did a video about it. But, even though the science was paid for, that’s just the first half of the equation… the other half is the scientific journal. The first journals were published over 350 years ago as a way to organize new scientific knowledge, and that continues today. According to the International Association of Scientific, Technical and Medical Publishers, 2.5 million new scientific papers are published each year in over 28,000 different journals.
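As a quick back-of-the-envelope check on that publication rate, a sketch using only the figures quoted above:

```python
papers_per_year = 2_500_000
seconds_per_year = 365 * 24 * 60 * 60

print(seconds_per_year / papers_per_year)   # ~12.6 seconds between new papers
```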

That works out to a new paper roughly every 13 seconds (and you thought we’d run out of stuff for DNews 😉). Researchers need others to read their paper so it can affect their field. So, they freely send their treasured manuscripts to journals for peer review and publication. When a manuscript comes in, specialists select and send the best manuscripts to volunteer experts in the field who are “carefully selected based on… expertise, research area, and lack of bias” for peer review. After that, the papers are copy-edited, compiled into an issue of the journal, physically printed and then shipped and/or published online! They’re, like, the nerdiest magazines in the world. All this costs money… According to a study in PLOS One, this whole process can cost 20 to 40 dollars per page, depending on how many papers the journal receives and how many it has to reject. Someone has to pay for that, and there are three ways this can happen: authors can submit for free and readers/subscribers pay (called the traditional model), or authors pay and readers get it for free (called open access), or both authors and readers pay! English-language journals alone were worth 10 billion dollars in 2013! I know what you’re thinking: just put them on the internet! Save on shipping, like newspapers and magazines! Well, even though publishers don’t have to print and ship big books of papers anymore, they often still do.

And, even if the journals were only online, servers and bandwidth need to be paid for, and that ain’t cheap. Publishing requires dollah bills, y’all, and someone has to pay, and everyone gets their money differently… For example: the American Association for the Advancement of Science (AAAS) publishes the Science journals, and the Public Library of Science publishes PLOS One among others; both are nonprofits. But, while PLOS uses an open-access (free to you) model, Triple-A-S publishes six journals: five with a traditional model (you pay) and one open access. Plus, there are for-profit publishers like Macmillan Publishers, which owns the journal Nature (and offers a mix of traditional and open-access options). And the giant Reed Elsevier (now called RELX) publishes over 2,000 journals, some of which are open access and some traditional! So, though some publishers are non-profits, they don’t always give it to YOU for free, and those that do can still charge researchers up to 2,900 dollars to publish! Others make money off scientific research, which makes some people feel icky.

The whole thing is confusing, and it raises the question: which is worse, for-profits charging universities or readers for access, or open-access journals charging authors? Shrug. The debate rages. Many scientists argue that since the peer review is provided for free by the scientific community, and the papers are provided for free by the scientific community, access to the papers should. be. free. The EU agrees, ordering that any publicly funded papers be made free by 2020, pushing toward open access to science! In the US, where many of the papers originate, some scientists are calling for boycotts on for-profit publishing. In the end, there was a time when practitioners needed a physical reference to the latest scientific achievements. In the days before the internet, getting a journal in the mail must have been both exciting and illuminating, but now, thanks to digital publishing… this whole pay-for-science model is bound to change… People WANT the knowledge to be free, but no one knows how to do it.

As y’all know, more research is always needed, but should that research be behind a paywall? Let us know down in the comments, and make sure you subscribe so you get more DNews every day. You can also come find us on Twitter, @seeker. But for more on how much science actually costs, watch this video.

Ocean Temperatures – Changing Planet

The world’s oceans cover more than 70 percent of Earth’s surface. Millions of creatures, great and small, call the oceans home. These massive bodies of water play a crucial role in maintaining the planet’s delicate environmental balance, from supporting a complex food chain, to affecting global weather patterns. But rising air temperatures are warming the oceans and bringing dramatic impacts felt around the globe. Dr. TONY KNAP (Bermuda Institute of Ocean Sciences): One of the things warming does in, say, areas off the United States, is create a much bigger pool of warm water at the surface of the ocean that lends a huge amount of energy to hurricanes and tropical cyclones. THOMPSON: Dr. Tony Knap is the director of the Bermuda Institute of Ocean Sciences, or BIOS. Famous for its luxurious golf courses and pink sand beaches, Bermuda is also home to one of the world’s leading institutes for ocean studies, with a focus on water temperatures.

KNAP: Here off Bermuda, we have probably a better view of it than many other people are going to have over time. THOMPSON: Bermuda is located over 600 miles, or almost 1,000 kilometers, from the coast of North Carolina, in an area of the Atlantic Ocean called the Sargasso Sea. KNAP: We like to think of the Sargasso Sea in the North Atlantic as the canary in the coal mine. It’s the smallest ocean, it’s between North America and Europe, and we think if we are going to see changes, we will see them first here in the ocean off Bermuda. THOMPSON: Scientists at BIOS have been measuring the temperature of the ocean since 1954, making it one of the world’s longest ongoing studies of ocean data. KNAP: Well, you measure the temperature of the ocean in many ways. In the old days you used to do it with buckets and thermometers. Now you use sophisticated instruments called conductivity, temperature and depth recorders. THOMPSON: These recorders, called CTDs, are large measuring instruments lowered deep into the water at specific locations in the ocean. On this day, Knap and his team are headed to “Station S.”

QUENTIN LEWIS, Jr. (Captain, R/V Atlantic Explorer): The weather is not going to be our friend today, unfortunately. The wind's out of the west, it’s 35-40 with some higher gusts. The seas are anywhere from 14 to 16 feet or higher. THOMPSON: Lowered to a depth of three kilometers, or just under two miles, the CTD records temperature, salinity and carbon dioxide levels, and captures water samples. KNAP: This is a screen for the output on the CTD. The temperature will be in red, blue is salinity or the saltiness, and yellow is the oxygen content. THOMPSON: At BIOS, all of the data is then carefully logged and analyzed. Dr. NICK BATES (Bermuda Institute of Ocean Sciences): With this instrument we can see changes that happen over the season, over the year. And then from year to year.

THOMPSON: Using ocean temperature data going back several decades, BIOS research can trace the warming trend. In the past 56 years, the ocean temperature has risen half a degree Celsius. KNAP: Since 1954 we’ve seen, on average, the temperature increasing by a small amount, equivalent to what is really a half a watt per year, which doesn’t seem like a lot, but over the whole of the ocean, it’s a lot. THOMPSON: What’s a half a watt? KNAP: It’s not much. It’s about a 100th of a degree per year. It’s not a lot. THOMPSON: But that small a difference can have a huge impact? KNAP: Yeah. THOMPSON: Really? KNAP: Yeah, because it’s going on every year. You think about how big the ocean is, and how deep it is, and how much energy it has, I mean it’s a tremendous source of heat. THOMPSON: So where is that warming coming from? KNAP: The warming, we believe, is due to changes in CO2 in the atmosphere, the atmosphere getting warmer and the surface of the ocean getting warmer.

And that transfer of heat is being made into the ocean. THOMPSON: So what is the impact of a warmer ocean? The rising temperature causes the ocean to expand, which raises sea levels. KNAP: The tide's going up by 3.2 millimeters a year. Half of that is attributed to the ocean warming down to 700 meters. The ocean is on average 4,000 meters deep, so it has a lot more to expand. THOMPSON: Warming temperatures also impact the growth rates of certain organisms at the very bottom of the ocean food chain, like phytoplankton. And so if you see changes in phytoplankton, does that mean that we are going to see changes in the food chain in the ocean? KNAP: If the organisms that eat those organisms, OK, eat the plankton, for example, can’t eat those plankton, then yes, you’ll see changes. THOMPSON: And the small changes being recorded could bring even stronger storms.

This report, published in 2005 in Science Magazine, shows the gradual rise in the number of Category 4 and 5 hurricanes over recent years. An increase in storm intensity like this, many scientists believe, is the result of the warming of the oceans. KNAP: You think about how big the ocean is, and how deep it is, and how much energy it has. Even if you look at the difference in hurricane intensity, etc., one, one and a half degrees centigrade in a water column of one hundred meters makes a massive amount of difference. THOMPSON: Small changes with big consequences for the creatures in the sea and all the people who live along the coasts.