Guy Scientist, A “True” Story by a Fictional Character

(Jazz) Host: Ladies and Gentlemen… the NASA Climate Scientist Formerly Known… as Josh Willis. (applause) (laughter) Yeah, I used to be a regular guy. Just an average Joe named Josh. Willis. Sure, I was a climate scientist, I worked for NASA, but deep down I was ordinary. Like you people. Then one day I snapped, like an overstretched balloon. I lost all aspects of modesty and humility. I realized I was more than a scientist, and a guy. I… was Guy Scientist! Climate Crusader for Truth, Social Justice and the Environmental Way. It was a bright, clear Tuesday afternoon, in a state known for its sunshine. The kind of Southern California day that makes you wish you'd called in sick and headed for the Getty with a bottle of two-buck Chuck and a footlong from Subway. But today was no picnic. I was headed right into the belly of the beast. One of the most conservative places known to man. Orange County.

I'd been invited to the Newport Beach Country Club to give a talk on global warming to some group of good ol' boys. They were called the "Bluejays" or the "Sparrow Club". Somethin' like that. They were Old World power brokers, CEOs of Fortune 500 companies, rich oil barons. The kind of men who don't drive hybrids and want to make America great again. But I was ready. I'd given my global warming talk a thousand times to a thousand different school children and soccer moms and city council members. I showed up early. That way I could clock the old timers and pass judgment on them as they entered. The place was fancy. Expensive carpets. Hardwood tables covered in white linen, and more oak on the walls than a barrel of Jack Daniel's. My suspicions were confirmed as they started to arrive: they were old, alright. They had more pacemakers than Dave Brubeck's rhythm section. And white, too. I've seen more diversity in a bowl of basmati rice. Matter of fact, everybody in that place with a skin tone darker than Donald Trump's teeth was wearing a tuxedo and handing out hors d'oeuvres. I had my work cut out for me, alright.

But they were crafty. They fed me prime beef first. It was delicious. And then it was showtime. I was flying high. I told a few jokes to get 'em in the mood. Like, uh… It's so hot in the Arctic… I said, it's SO hot in the Arctic. (How hot is it?) There we go. It's so hot the polar bears are threatening to build a wall to keep the brown bears from moving north. Yeah, you guys get it, but not this crowd. No, no… my punchlines landed like a lead brick on Spanish tile. I moved on. I moved on to some charts and graphs. I provided incontrovertible evidence that the Earth was warming faster now than at any time in the last 10,000 years. I looked out into the audience. They were not impressed. Matter of fact, I've seen more trust in the eyes of a five-year-old on the Metro, clutching an Elmo doll in his tiny, white-knuckled hands. It was time to bring out the big guns.

Time for the balloon gag. That's right. The balloon gag is a simple physics experiment designed to illustrate the heat capacity of water. You see, the oceans absorb more than 95 percent of the heat trapped by greenhouse gases. Why? Because water, that's why. Water sucks up heat faster than a desperate housewife downs mojitos on a hot summer day. And once it gets in the ocean, heat stays for a thousand years– just like your in-laws after dinner. I pulled out my balloon. I inflated it with air. I flipped open my trusty Zippo. The tall, lanky flame moved closer and closer to the skin of the balloon until… Bam! It exploded like a firecracker on Cinco de Mayo. Now I had their attention. I explained that the balloon, filled with air, couldn't take the heat. But fill up a balloon with water, and it can take more flames than Sean Spicer at a press conference. Simple physics. Flames can't pop a balloon filled with water. I pulled out my water balloon.

I held it up high. This was it. This was the moment I won the hearts and minds of the climate deniers. I opened my trusty Zippo. I brought the flame toward the skin of the balloon and… Bam! It exploded like a bottle of cheap champagne across the bow of an oil tanker. Instantly, my arm was soaked and water rained down onto the expensive carpet in a river of liquid shame. A small brown man appeared out of nowhere in a white tuxedo and laid a napkin over the wet spot on the floor. Apparently, these guys were so rich they didn't even have to obey the laws of physics. That was the moment I knew. That nice guy climate scientist, Josh Willis?… His days were numbered. He had to change. After the incident, they peppered me with questions about climate data and natural cycles. I gave 'em all the right answers, but there was no more winning hearts and minds that day. I packed up my things and headed for the door.

I looked up into the sun. At least it was still shining. You win this round, Sparrow–Blue Jay Club, I said. But you haven't heard the last of Guy Scientist. (Jazz) (Cheers)

Scientific Skepticism | Dr. Steven J. Allen

97% is a number you might have heard a lot in the last few years. That’s the number of scientists who supposedly believe in global warming theory. That 97% claim is questionable, but let’s ask the more important question: why do we find the idea of consensus convincing at all? The terms “Global Warming Skeptic” and “Climate Change Skeptic” are insults, but those who use this line of attack ignore that science only works when there are skeptics. Science is rooted in replicable research and experimentation. A scientist examines an existing set of facts, and concocts a theory that explains those facts. He or she makes a prediction to test that theory. If the prediction comes true, that constitutes evidence to support the theory. If the prediction fails, that undermines the theory, and the scientist goes back to the drawing board. It doesn’t matter whether a scientist is on the payroll of the American Cancer Society or a tobacco company, whether he is a Communist, or a Jew or a Baptist, beats her spouse, or volunteers at a soup kitchen. Only the evidence counts.

But what happens when someone gets the evidence wrong and it needs correction? That's what critical peer review, aka "skepticism," is for. In biomedical sciences, non-replication rates are estimated to range from 75 to 90 percent. Venture capital firms now take it for granted that 50 percent of published academic studies cannot be replicated. Imagine what would be done in those cases if there were no skeptics. Business and medicine would be at a standstill. If climate skeptics end up being correct, those attempting to silence them will go down in history alongside the members of the "scientific consensus" that, in years past, agreed that the earth was the center of the universe, that continental drift was impossible, that canals existed on Mars, and that evils such as white supremacy and eugenics were scientifically true.

When told of a publication entitled "100 Authors Against Einstein," Albert Einstein reputedly said, "Why one hundred? If I were wrong, one would have been enough." Science cannot function if skeptics are harassed and ostracized. When someone is challenging a scientific consensus with facts and logic, that's to be encouraged, not dismissed due to politics. Argument, not anathemas, is the way to approach scientific issues surrounding climate change. To learn more, you can read our study on Climate Change advocacy. I'm Dr. Steven J. Allen. Thanks for watching.

Is Most Published Research Wrong?

In 2011 an article was published in the reputable Journal of Personality and Social Psychology. It was called "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect" – or, in other words, proof that people can see into the future. The paper reported on nine experiments. In one, participants were shown two curtains on a computer screen and asked to predict which one had an image behind it; the other just covered a blank wall. Once the participant made their selection, the computer randomly positioned an image behind one of the curtains, then the selected curtain was pulled back to show either the image or the blank wall. The images were randomly selected from one of three categories: neutral, negative, or erotic. If participants selected the curtain covering the image, this was considered a hit.

Now, with there being two curtains and the image positioned randomly behind one of them, you would expect the hit rate to be about fifty percent. And that is exactly what the researchers found, at least for negative and neutral images. For erotic images, however, the hit rate was fifty-three percent. Does that mean that we can see into the future? Is that slight deviation significant? Well, to assess significance, scientists usually turn to p-values, a statistic that tells you how likely a result at least this extreme is if the null hypothesis is true. In this case the null hypothesis would just be that people couldn't actually see into the future and the 53-percent result was due to lucky guesses. For this study the p-value was .01, meaning there was just a one-percent chance of getting a hit rate of fifty-three percent or higher from simple luck. P-values less than .05 are generally considered significant and worthy of publication, but you might want to use a higher bar before you accept that humans can accurately perceive the future and, say, invite the study's author on your news program; but hey, it's your choice.
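As a sketch of where a p-value like that comes from: under the null hypothesis each guess is a fair coin flip, so the one-sided p-value is just a binomial tail probability. The trial count below is illustrative only; the study's actual per-condition trial numbers aren't given here.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p),
    i.e. the chance of at least k hits under the null hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A 53% hit rate over an illustrative 1000 two-curtain trials:
print(round(binom_tail(530, 1000), 3))  # ~0.03
```

Note that the same observed rate yields a very different p-value at different sample sizes, which is why the raw 53%-versus-50% gap by itself tells you very little.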

After all, the .05 threshold was arbitrarily selected by Ronald Fisher in a book he published in 1925. But this raises the question: how much of the published research literature is actually false? The intuitive answer seems to be five percent. I mean, if everyone is using p less than .05 as a cut-off for statistical significance, you would expect five of every hundred results to be false positives. But that, unfortunately, grossly underestimates the problem, and here's why. Imagine you're a researcher in a field where there are a thousand hypotheses currently being investigated. Let's assume that ten percent of them reflect true relationships and the rest are false, but no one of course knows which are which; that's the whole point of doing the research. Now, assuming the experiments are pretty well designed, they should correctly identify around, say, 80 of the hundred true relationships. This is known as a statistical power of eighty percent. So 20 results are false negatives; perhaps the sample size was too small or the measurements were not sensitive enough. Now consider that of those 900 false hypotheses, using a p-value threshold of .05, forty-five will be incorrectly considered true.

As for the rest, they will be correctly identified as false, but most journals rarely publish null results: they make up just ten to thirty percent of papers, depending on the field. That means the papers that eventually get published will include 80 true positive results, 45 false positive results and maybe 20 true negative results. Nearly a third of published results will be wrong, even with the system working normally. Things get even worse if studies are underpowered, and analysis shows they typically are, if there is a higher ratio of false-to-true hypotheses being tested, or if the researchers are biased. All of this was pointed out in a 2005 paper entitled "Why Most Published Research Findings Are False". So, recently, researchers in a number of fields have attempted to quantify the problem by replicating some prominent past results. The Reproducibility Project repeated a hundred psychology studies but found only thirty-six percent had a statistically significant result the second time around, and the strength of measured relationships was on average half that of the original studies.
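That bookkeeping is easy to check with a few lines of arithmetic; the 20 published null results are the illustrative figure from the argument above.

```python
hypotheses = 1000
true_frac = 0.10      # assume 10% of tested hypotheses are actually true
power = 0.80          # chance a real effect is detected
alpha = 0.05          # false positive rate per false hypothesis

true_hyps = hypotheses * true_frac              # 100 real relationships
false_hyps = hypotheses - true_hyps             # 900 dead ends

true_positives = power * true_hyps              # 80 correct discoveries
false_positives = alpha * false_hyps            # 45 flukes that look real
published_nulls = 20                            # the few null results that get in

published = true_positives + false_positives + published_nulls
print(false_positives / published)  # 45/145 ~= 0.31: nearly a third wrong
```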

An attempted verification of 53 studies considered landmarks in the basic science of cancer only managed to reproduce six, even working closely with the original studies' authors. These results are even worse than I just calculated. The reason for this is nicely illustrated by a 2015 study showing that eating a bar of chocolate every day can help you lose weight faster. In this case the participants were randomly allocated to one of three treatment groups: one went on a low-carb diet, another on the same low-carb diet plus a 1.5 ounce bar of chocolate per day, and the third group was the control, instructed just to maintain their regular eating habits. At the end of three weeks the control group had neither lost nor gained weight, but both low-carb groups had lost an average of five pounds per person. The group that ate chocolate, however, lost weight ten percent faster than the non-chocolate eaters. The finding was statistically significant, with a p-value less than .05.

As you might expect, this news spread like wildfire: to the front page of Bild, the most widely circulated daily newspaper in Europe, and into the Daily Star, the Irish Examiner, the Huffington Post and even Shape Magazine. Unfortunately, the whole thing had been faked. Kind of. I mean, the researchers did perform the experiment exactly as they described, but they intentionally designed it to increase the likelihood of false positives: the sample size was incredibly small, just five people per treatment group, and for each person 18 different measurements were tracked, including weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, and so on. So if weight loss didn't show a significant difference, there were plenty of other factors that might have. The headline could have been "chocolate lowers cholesterol" or "increases sleep quality" or… something. The point is: a p-value is only really valid for a single measure. Once you're comparing a whole slew of variables, the probability that at least one of them gives you a false positive goes way up, and this is known as "p-hacking". Researchers can make a lot of decisions about their analysis that decrease the p-value. For example, let's say you analyze your data and you find it nearly reaches statistical significance, so you decide to collect just a few more data points to be sure. Then, if the p-value drops below .05, you stop collecting data, confident that these additional data points could only have made the result more significant if there were really a true relationship there. But numerical simulations show that relationships can cross the significance threshold by adding more data points, even though a much larger sample would show that there really is no relationship. In fact, there are a great number of ways to increase the likelihood of significant results, like having two dependent variables, adding more observations, controlling for gender, or dropping one of three conditions. Combining these strategies increases the likelihood of a false positive to over sixty percent, and that is using p less than .05. Now, if you think this is just a problem for psychology, neuroscience or medicine, consider the pentaquark, an exotic particle made up of five quarks, as opposed to the regular three for protons or neutrons.
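The multiple-comparisons effect behind the chocolate study is easy to quantify. If the 18 tracked measures were independent (a simplifying assumption; real physiological measures are correlated), the chance that at least one crosses p < .05 by luck alone is:

```python
alpha = 0.05   # per-test significance threshold
measures = 18  # weight, cholesterol, sodium, sleep quality, ...

# Chance that NO test gives a false positive is (1 - alpha)**measures,
# so the chance of at least one spurious "significant" result is:
family_wise_error = 1 - (1 - alpha) ** measures
print(round(family_wise_error, 2))  # about 0.6
```

This is the standard family-wise error rate; correcting for it (e.g. a Bonferroni adjustment, dividing alpha by the number of tests) is exactly the step a p-hacked analysis skips.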

Particle physics employs particularly stringent requirements for statistical significance, referred to as 5-sigma: one chance in 3.5 million of getting a false positive. But in 2002 a Japanese experiment found evidence for the Theta-plus pentaquark, and in the two years that followed, 11 other independent experiments looked for and found evidence of that same pentaquark with very high levels of statistical significance. From July 2003 to May 2004 a theoretical paper on pentaquarks was published on average every other day. But alas, it was a false discovery: later experimental attempts to confirm the Theta-plus pentaquark using greater statistical power failed to find any trace of its existence. The problem was those first scientists weren't blind to the data; they knew how the numbers were generated and what answer they expected to get, and the way the data was cut and analyzed, or p-hacked, produced the false finding. Now, most scientists aren't p-hacking maliciously. There are legitimate decisions to be made about how to collect, analyze and report data, and these decisions impact the statistical significance of results.
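For comparison, the 5-sigma convention translates into a p-value directly from the Gaussian tail; nothing here is specific to particle physics, it is just the standard normal distribution.

```python
from math import erfc, sqrt

def sigma_to_p(sigma):
    """One-sided Gaussian tail probability for a given sigma level."""
    return 0.5 * erfc(sigma / sqrt(2))

print(sigma_to_p(5))     # ~2.9e-07, i.e. about 1 chance in 3.5 million
print(sigma_to_p(1.96))  # ~0.025: roughly the familiar p < .05 (two-sided)
```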

For example, 29 different research groups were given the same data and asked to determine if dark-skinned soccer players are more likely to be given red cards. Using identical data, some groups found there was no significant effect, while others concluded dark-skinned players were three times as likely to receive a red card. The point is that data doesn't speak for itself; it must be interpreted. Looking at those results, it seems that dark-skinned players are more likely to get red-carded, but certainly not three times as likely. Consensus helps in this case, but for most results only one research group provides the analysis, and therein lies the problem of incentives: scientists have huge incentives to publish papers; in fact, their careers depend on it. As one scientist, Brian Nosek, puts it: "There is no cost to getting things wrong. The cost is not getting them published." Journals are far more likely to publish results that reach statistical significance, so if a method of data analysis results in a p-value less than .05, then you're likely to go with that method. Publication is also more likely if the result is novel and unexpected, which encourages researchers to investigate more and more unlikely hypotheses, which further decreases the ratio of true to spurious relationships that are tested. Now, what about replication? Isn't science meant to self-correct by having other scientists replicate the findings of an initial discovery? In theory, yes, but in practice it's more complicated. Take the precognition study from the start of this video: three researchers attempted to replicate one of those experiments, and what did they find? Well, surprise surprise, the hit rate they obtained was not significantly different from chance. When they tried to publish their findings in the same journal as the original paper, they were rejected. The reason? The journal refuses to publish replication studies. So if you're a scientist, the successful strategy is clear: don't even attempt replication studies, because few journals will publish them, and there is a very good chance that your results won't be statistically significant anyway, in which case, instead of being able to convince colleagues of the lack of reproducibility of an effect, you will be accused of just not doing it right.

So a far better approach is to test novel and unexpected hypotheses and then p-hack your way to a statistically significant result. Now, I don't want to be too cynical about this, because over the past 10 years things have started changing for the better. Many scientists acknowledge the problems I've outlined and are starting to take steps to correct them: there have been more large-scale replication studies undertaken in the last 10 years; there's a site, Retraction Watch, dedicated to publicizing papers that have been withdrawn; there are online repositories for unpublished negative results; and there is a move towards submitting hypotheses and methods for peer review before conducting experiments, with the guarantee that research will be published regardless of results so long as the procedure is followed. This eliminates publication bias, promotes higher-powered studies and lessens the incentive for p-hacking. The thing I find most striking about the reproducibility crisis in science is not the prevalence of incorrect information in published scientific journals. After all, getting to the truth we know is hard, and mathematically not everything that is published can be correct.

What gets me is the thought that even trying our best to figure out what's true, using our most sophisticated and rigorous tools – peer review and the standards of practice – we still get it wrong so often. So how frequently do we delude ourselves when we're not using the scientific method? As flawed as our science may be, it is far and away more reliable than any other way of knowing that we have. This episode of Veritasium was supported in part by these fine people on Patreon and by Audible, the leading provider of audiobooks online, with hundreds of thousands of titles in all areas of literature, including fiction, nonfiction and periodicals. Audible offers a free 30-day trial to anyone who watches this channel; just go to audible.com/veritasium so they know I sent you. A book I'd recommend is called "The Invention of Nature" by Andrea Wulf, which is a biography of Alexander von Humboldt, an adventurer and naturalist who actually inspired Darwin to board the Beagle. You can download that book or any other of your choosing for a one-month free trial at audible.com/veritasium. So, as always, I want to thank Audible for supporting me, and I really want to thank you for watching.

7 CRAZY Recent Breakthroughs in SCIENCE in 2017

For all those celebrity deaths and insane political shenanigans, 2016 actually gave us some pretty weird scientific developments too. From batteries that run on pee through to the world's first three-parent baby, it was a pretty nutso year. But if January's developments are anything to go by, then 2017 is gonna be even weirder, because in the past month we've seen a human-pig hybrid, a skin-printing machine and the potential discovery of a material theorised over a hundred years ago. This is our list of seven crazy recent scientific breakthroughs. Number 7: Skin on Demand. Making your own human skin suit is tough work these days, what with all the DNA to clear up, the funny looks at the dry cleaners, not to mention the kerfuffle in constructing a watertight alibi to fool the Feds. But thanks to a group of Spanish scientists this problem no longer exists, as they've developed the world's first 3D bioprinter capable of producing fully-functional human skin.

This printer was the result of a collaboration between the Universidad Carlos III de Madrid and the less flamboyantly named BioDan Group, who specialise in regenerative medicine. Their material mimics the structure of skin using a layer of collagen-producing fibroblasts, and it's so close to the real thing it can be used in a wide range of fields, such as testing cosmetics, creating android epidermis, covering human skin loss, and of course the creation of a snappy little waistcoat for daddy. Number 6: Pig Man. In the real-life sequel to Babe nobody wanted or asked for, researchers at California's Salk Institute announced in late January the successful creation of a human-pig hybrid in the laboratory. Now, I'm not sure making a creature that's addicted to eating strips of its own buttocks is something I'd refer to as a success, but that's because Johnny Cynical over here doesn't understand the ramifications of this amazing development. The point of creating a human-pig chimera wasn't to exhibit it in some circus freak-show; it was to provide a potential new source of human organs for transplant. In this experiment, pig embryos were injected with human cells to see if they could survive, and now that we know they can, we think it may eventually be possible to grow human organs inside animals to make up the organ donor shortfall.

Wow, meat, milk, skin and now organs? Thanks, animals, you do a lot for us. Those damn vegetables have got a lot of catching up to do, haven't you, Mr Aubergine? Number 5: A Fitting End To Fillings. I hate going to the dentist, which is why I've pulled out all of my own teeth and now I pay strangers to chew my food for me. But if you still own all your original chompers, then a trip to the mouth doctor may soon be a lot less painful, thanks to a strange discovery made just a few weeks back. Researchers at King's College London found that a drug used to help treat Alzheimer's has a nifty little side effect: namely, it can encourage your teeth to repair themselves. Your teeth already do this on their own using dentine, but they don't produce enough to fill large holes or cracks. However, with a kick up the pants from a drug called Tideglusib, an enzyme which prevents dentine formation is turned off, and damage can be repaired naturally within as little as six weeks. I mean, that sounds great and all, but it's not as much fun as paying a guy down the bus station to spit up food in your mouth like a little baby bird. Number 4: A New Type of Life. Ever wonder why the movie Gattaca was called Gattaca? It's because the letters G, T, A and C are the initials of the four natural bases: Guanine, Thymine, Cytosine and Adenine.

These pair up to form the base pairs of the DNA ladder, and different arrangements of these pairs create different lifeforms. Everything from bacteria and baboons through to people and Penelope Cruz – who is not a person, she is a Goddess – everything is based on just four natural bases; until some crazy scientists decided to add two more. On 23rd January 2017, researchers at The Scripps Research Institute announced the creation of an organism which held two artificial bases within its genetic code, making it the world's first semi-synthetic organism. Such a development has many possible applications, including the creation of organisms tailored to fight certain diseases. But right now I'm more worried about the title of that movie. Gaxyttaxcy? Xygattyaxca? It's like they didn't even think about the ramifications of what they were doing to Ethan Hawke's finest work. Number 3: An End to Old Age? In another piece of scientific razzle dazzle from the guys and girls at the Scripps Research Institute, we may have just made one of the key discoveries in the fight against cancer and aging.

In mid-January a protein was identified which is responsible for determining the length of your telomeres, which is important, as this in turn dictates how quickly your cells age and whether they're likely to mutate into cancer. Telomeres are like your cells' little clocks, and this protein, named TZAP, could be seen as some form of battery, determining how long the clock runs for. If we can stretch your telomeres we may be able to delay the aging process, but if they're unnaturally long they begin to pose an increased cancer risk. It's like riding a see-saw with whirring blades above and a pit of sex-raptors beneath you – you wanna aim for somewhere in the middle. Thankfully, TZAP naturally prevents your telomeres growing too much by trimming them to keep them nice and short, and a further understanding of how it does this could help us get rid of tumours and wrinkles all at once.

Awesome, those are two of the top three things I hate the most…along with sex-raptors of course. Number 2: Hot Damn Did you know that the Red Hot Chili Peppers can reduce your chances of death? Unfortunately we’re talking about the food and not those delightful LA funk-monkeys, but that’s not gonna stop me using a bazillion song-title puns in this entry. So how does it work? Tell me baby. Well if you listen to me for One Hot Minute I will. Researchers at the Larner College of Medicine in Vermont used data taken from 16,000 Americans over 23 years, and they discovered that those who Dosed their food with spicy chilies enjoyed a 13% reduction in mortality rates from heart disease and stroke. Obviously you Can’t Stop death forever, because passing over to the Otherside is inevitable. But even if you survive a stroke you can be left in a seriously debilitating condition, as each one leaves Scar Tissue on your brain which can trigger seizures, leaving your life’s Fortune Faded. So the knowledge that we can reduce strokes and heart attacks is clearly no Minor Thing.

By The Way, this revelation is old news to some, as historically, many people Around The World already believed that spices contain mystical healing properties. But this is the first time it's been confirmed scientifically. And do you know who's excited about this the most? Me and my me and my me and my me and my me and my friends. We love spicy food. Number 1: Metallic Hydrogen. The existence of a metallic form of hydrogen was first theorised in 1935 by Eugene Wigner and Hillard Bell Huntington, with the knowledge that if the lightest of all elements could be turned into a metal, it would prove to be a revolutionary breakthrough for technology. Super-efficient vehicles, improved electricity grids, stupidly fast computers and even space-faring craft are just some of the possible applications for metallic hydrogen, so you can understand why the scientific community collectively soiled itself on January 27th 2017, when a group of Harvard scientists claimed they'd managed to create some.

Their experiment used two diamonds to crush liquid hydrogen at a temperature far below freezing point, because the pressure needed to create this substance is greater than you’d find at the centre of the Earth. The metallic hydrogen is still stuck between the two diamonds at the time of writing, as it must be released gradually to see if it can exist in a stable form at room temperature, so it remains to be seen whether this potentially ground-breaking material actually can be used with purpose. And furthermore, some physicists doubt whether the results of this experiment even prove anything at all, saying that further evidence needs to be submitted to give this discovery credence. But I guess we’ll find out soon enough if those naughty boys are telling porky pies or not. So that’s our list, but if you’re after more science-based intrigue of a different flavour, why not check out our recent video on the seven most devastating things mankind could discover, because these are the kind of breakthroughs you better hope we never make in our lifetimes.


Particle Accelerators Reimagined – with Suzie Sheehy

In 1927, a tall man from New Zealand called Ernest Rutherford stood not far from this location at the Royal Society. And as the new president of that institution, he had this to say– that he desired a copious supply of atoms and electrons which have an individual energy far transcending that of alpha and beta particles. Now, the reason he was saying this is because a while before, in his lab, they'd been doing the so-called famous gold foil experiments, where they had taken alpha particles from radioactive decay and impinged them on a piece of gold foil. Now, those of you who know the story know that what they were expecting was that these alpha particles would go straight through, and some of them would be deflected a little bit. And what they actually found was that just a few of these alpha particles came straight back at them, in the direction they were fired from.

And that was a real surprise. And now we know that they had discovered the nucleus of the atom– the tiny nucleus at the center. But Rutherford understood that in order to learn more, in order to dig deeper into the atom, in order to understand our universe at a deeper level, he was going to have to find something with a bit more energy– some projectiles which went a little bit faster. And so, effectively, what he was asking for at that time was the particle accelerator. And boy, did he get some. So 90 years on, this is what we think of now when we talk about a particle accelerator. This is the Large Hadron Collider, in case you're not familiar with it. It is a 27 kilometer long ring underneath the border between Switzerland and France. And now we think of these machines as huge behemoths, really. They are incredible feats of engineering, design, science, and even culture– breaking down boundaries between different countries.

But the question is, are these things useful? I mean, when you think about what we're looking at there, and what Rutherford was looking at with the atom– atoms are tiny. And there are loads, and loads, and loads of them in anything useful. So in the universe, for example, if I was to add up all the stars in the universe, there would be 10 to the 29 in scientific notation. That's 1 with 29 zeros after it. That's the same number as the number of atoms just in the people sitting in this room– not even in the chairs and the ground. So there are as many atoms in us– in this room– as there are stars in the entire universe. Remember that the universe is 13.8 billion years old and pretty massive. So atoms being very, very tiny, it doesn't immediately make sense that they're going to be that useful. And especially the particles inside them– the subatomic particles– it's not entirely obvious that they're going to be useful either.
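The atoms-in-a-room claim is easy to sanity-check with rough numbers. The figures below (70 kg per person, an average atomic mass of about 7 g/mol for body tissue, a 20-person audience) are back-of-envelope assumptions, not from the talk:

```python
AVOGADRO = 6.022e23      # atoms per mole
avg_atomic_mass = 7.0    # g/mol, rough average for a human body (mostly H, O, C, N)
body_mass_g = 70_000     # a 70 kg person

atoms_per_person = body_mass_g / avg_atomic_mass * AVOGADRO
audience = 20
print(f"{atoms_per_person:.0e} atoms per person")
print(f"{atoms_per_person * audience:.0e} atoms in a {audience}-person audience")
```

On these numbers a person carries roughly 6 x 10^27 atoms, so even a small audience is already at the 10^29 scale.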

And actually, if we go back further in history, we find that the initial people who worked with other types of particles were also skeptical about their use. One of the famous Friday evening discourses here was given by J.J. Thomson, the physicist, where he demonstrated the particle which later became known as the electron. And he actually came back. And this isn't that well-known– but the Royal Institution kindly dug through their archives for me and they gave me this document, beautifully titled allthediscoursesever.xls. And there was a whole host of lovely information in there. And one of the things they found for me was this, from J.J. Thomson. And he says– and I'll have to read up here because my screen is tiny– "if there are any among my audience, any who, 20 years ago, listened to the announcement I made here of the existence of electrons, they will, I think, admit that they would have been skeptical if they'd been told they would, in another 20 years, be listening to another discourse on the commercial application of these electrons.

For electrons are so small that it takes about 1,700 of them to give a mass [INAUDIBLE] out of an atom of hydrogen, and they move at such a rate"– blah, blah, blah. "So such properties appear rather transcendental and not promising from a practical point of view." How wrong he was. And in fact, there was a toast that famously went around the Cavendish Lab– it's J.J. Thomson's favorite quote– "to the electron… May it never be of any use to anybody." So I thought, with particle accelerators, what I'd start with is to show you how we can actually make some particles. So I have here a very small particle accelerator, rather like the one that J.J. Thomson would have used. And to power it, over here, I have a high voltage oscillator, which is actually an induction coil, which Faraday– himself presenting in here– would have been very proud of. So I'm going to switch this on, and it actually converts the DC from a battery here into a very high voltage AC. And then over this side– fingers crossed my camera is working.

If we dim the lights a little– perhaps I can move that on there. There we are. OK. So that is actually generating a beam of electrons. So to generate electrons is quite easy. You, more or less, apply a voltage to a piece of metal and they start jumping out. So this is what's called a cathode ray tube. And some of you will be familiar with those because they were in the back of televisions for many, many years, before we had flat panel ones. So this has some of the basic components of a particle accelerator. It starts with some particles. And then the next thing we have to do is give them some energy. And in this, all I'm doing is applying a high voltage across the terminals to rip the electrons out of one side and attract them to the other side. Now the other thing I can do with this beam of particles is I can actually move it around. And that's an incredibly important part of a particle accelerator– being able to control the beam of electrons.

And to do that, we actually use magnets. So I have, literally, just a simple bar magnet here. And I can show you that the beam bends when I bring it near. So this is just a simple magnetic field. I hope you can see that up there. I'll do that again. And this is a basic property of charged particles in a magnetic field. They will bend around a corner. So if I hold the magnet in the opposite direction, then, as predicted, we should get the opposite effect. So we'll turn it around a few times there. So that has a few of the basic components of a particle accelerator. And I'll just come back over here. But I think you might not be surprised to hear that they get a little bit more complicated than that. And we'll get onto that in a little bit. I'm just going to switch that off.
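The behavior in that demonstration follows from the Lorentz force law, F = qv × B: flipping the magnet reverses the field and so reverses the sideways push. A minimal sketch, with illustrative numbers rather than anything measured from the demonstration:

```python
# The force on a moving charge in a magnetic field is F = q v x B,
# so reversing B reverses the bending direction of the beam.
Q_ELECTRON = -1.602e-19            # electron charge, coulombs

def cross(a, b):
    # 3D vector cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, v, b):
    return tuple(q * c for c in cross(v, b))

v = (1e7, 0.0, 0.0)                # electron moving along x at 10^7 m/s
b_up = (0.0, 0.0, 1e-3)            # 1 millitesla field along +z

f_up = lorentz_force(Q_ELECTRON, v, b_up)
f_down = lorentz_force(Q_ELECTRON, v, tuple(-c for c in b_up))
print(f_up, f_down)  # equal-magnitude, opposite sideways forces
```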

A lot of people think, when you start talking about particle accelerators, that we should start from the very start– from generating particles– and then build up how we get to the very high energies near the speed of light. But actually, when we design accelerators, we start from the other end. We start with: what would you like a beam of particles to actually do? So it might be, for example, that you would like to sterilize medical products. And in that case, what you'd need is a 10 megaelectron-volt beam of electrons, so quite a bit higher in energy than this one. And in that case, a high intensity beam of electrons is sent through all of the medical products that go into your hospital. So syringes, bandages, and that kind of thing are sent through on a conveyor belt, irradiated with electrons, and that's able to kill every single germ or bit of bacteria that might be on those products when they're made.

So that translates from a use back into the design of the accelerator. And that's a commercial system that's sold on the market to do that. Or maybe instead, I want a beam of something to, say, scan some cargo. In that case, you could start with a fairly similar system with electrons, fire those electrons into a heavy metal target to generate x-rays, and send the x-rays through your cargo, and by doing so, map out the density and whether there's any contraband inside the cargo. And these are used as well. And there's another option, actually, at the moment, of potentially using neutrons to do the same kind of scanning. So that's another thing. Perhaps you might want to treat cancer. Are you getting the impression there's a couple of uses here of particle accelerators? I hope so.

You might want to treat cancer. And actually, radiotherapy LINACs, as we call them– linear accelerators– are some of the most ubiquitous accelerators around. There's five or six of them in most major hospitals. And this is actually just a small electron accelerator. Again, it smashes electrons into a metal target, generates x-rays, and that's what's used to treat cancer in something like 40% of successful treatment cases. So that's a huge application. Now I think it's fair to say J.J. Thomson did not predict that. We've come quite a long way. And that's sort of electrons, but we can also move on to other types of particles. We know how to generate beams of protons and use those to generate other things as well, with our understanding of isotopes.

So one other area in medicine where we use protons in particular is to generate radioisotopes. And one example of where they might be used is in PET scans– positron emission tomography scans. And they're great because they also use our understanding of antimatter. So in a positron emission tomography scan, if you don't know, someone is fed a small amount of what's called fluorodeoxyglucose. It's a sweet liquid. And in that is a tiny bit of radioactive fluorine-18, which has been generated by a particle accelerator, by firing a beam onto a target to produce that radioactive isotope. When it's inside the body, it emits positrons– it's a beta emitter. Now positrons, being antimatter, when they come in contact with normal matter– electrons– they annihilate.

So they literally disappear and, instead, generate two photons– two particles of light– in exactly opposite directions. So we're able to catch those photons, going in opposite directions, every time they happen, and build up a picture of what's happening inside the person's body. And because this fluorodeoxyglucose concentrates in high metabolic areas, that means you're more likely to map out areas where there might be cancer, heart disease, et cetera. So that's another potential use. So if you add them all up, far from being just particle physics machines, there are loads and loads of these things. So there's over 35,000 particle accelerators in the world. And I've just given a bit of a pie chart of how they're broken down. So something like 45% of them are used for radiotherapy, and then most of the rest of them are used for industrial purposes– so whether that's treating things, scanning things, treating radial tires, changing the properties of gemstones, treating dirty drinking water to clean it up. All kinds of different things.
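The annihilation arithmetic above is worth checking in a couple of lines: an electron and a positron at rest carry a total rest energy of 2mec², which the two back-to-back photons share equally. That is why PET detectors look for coincident pairs of roughly 511 keV photons.

```python
# Rest energy of an electron-positron pair, split between two photons.
M_E = 9.109e-31    # electron (and positron) mass, kg
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt

pair_rest_energy_kev = 2 * M_E * C**2 / EV / 1e3  # total for the pair
photon_energy_kev = pair_rest_energy_kev / 2       # each photon gets half

print(f"each photon carries about {photon_energy_kev:.0f} keV")  # ~511 keV
```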

So you might get the idea that, actually, these things are pretty useful machines. And for me, working on them, I'd really quite like to see what else we could do with them. But before we go into that, we're going to have to understand a little bit more about how they work. So in the cathode ray tube before, I had just a single voltage supply– just a single voltage that the particles moved through, gaining energy as they did so. But that's quite limiting, because you can only gain as much energy as that voltage gives you. But the other way you can do it is to take a voltage and try and re-use it again and again. And that's what this little demonstration here is supposed to show you, in one second. So this is powered by a Van de Graaff generator– and I should say Van de Graaff accelerators were one of the original types of particle accelerator. They use a rubber belt to build up static electricity, which collects on the top of the dome. And then I've attached that high voltage, which is about 30,000 volts on this device, onto my plastic bowl here, onto four strips which are crossed in the center. So those four strips in the center get charged up, and the other ones around the outside I've kept at ground.

So what happens here– and it's a very, very simple model of an accelerator– is I have a ping pong ball covered in conducting paint. So it picks up the charge on the charge strip and gets pushed away. And then it rolls around a bit, dumps the charge on the grounded strip, but keeps rolling. So every time it goes over a charged strip, it gets a little bit of a kick. And so you saw, when I turned it on, it started in the middle and slowly, it built up some speed, built up momentum, and now it's limited by friction as to how fast it could go. Otherwise, I'm sure it could reach the speed of light. I'm sure. I'm sure it could. So this is a very basic model of a particle accelerator, but there's a little bit of a problem with how that one in particular operates. I'm just going to switch him off.
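The ping-pong-ball model above can be put in one toy formula: if each pass over a charged strip adds a fixed amount of kinetic energy, the speed grows like the square root of the number of kicks (friction ignored). The kick energy and ball mass below are made-up numbers for illustration.

```python
# Toy model of the repeated-kick accelerator: fixed energy per kick,
# so kinetic energy grows linearly and speed grows as its square root.
import math

KICK_ENERGY = 1.0e-6   # joules per strip crossing (illustrative)
MASS = 2.7e-3          # roughly a ping pong ball, kg

def speed_after(kicks):
    energy = kicks * KICK_ENERGY
    return math.sqrt(2 * energy / MASS)   # v = sqrt(2E/m)

speeds = [speed_after(n) for n in (1, 4, 16)]
print(speeds)  # quadrupling the number of kicks doubles the speed
```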

So one of the flaws in that demonstration actually– and it is lovely, but it's flawed– is that I've just got a single voltage there, which I'm really using again and again. But in order to do that, I actually have to change the charge on the particle. And real particles don't change charge, sadly. So we have to come up with another way. And in the real world, we actually use radio frequency cavities, which I'll show you in a moment. But this was the idea that Ernest Lawrence had when he invented a type of particle accelerator called the cyclotron. So what he was doing was taking a single, oscillating voltage and using it again and again. And the particle would gain energy. And because it was in a simple magnetic field, it would spiral outwards as it did.

So as you gain energy, the particle's going to spiral outwards. And so these machines were limited in terms of energy– physically, by their size. So this is a 1 megaelectron-volt, I think, proton machine that he built with his graduate student, Milton Stanley Livingston, who did a lot of the practical work. And cyclotrons really became the cornerstone of nuclear physics research for a long time. And they're still used today, especially in things like radioisotope production and actually new forms of cancer treatment as well. So that's the cyclotron. The other type of accelerator I'd like to introduce you to– because there's two circular types which are related to my work, and that's one of them– the other type is the synchrotron.
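The size limit on a cyclotron falls out of one relation: in a fixed field B, a particle of momentum p circles with radius r = p/(qB), so more momentum means a wider spiral and, eventually, an impractically large magnet. A non-relativistic sketch for protons in an assumed 1 tesla field:

```python
# Orbit radius r = p / (qB) for protons in a fixed magnetic field.
# Non-relativistic, so p = sqrt(2 m E_kinetic); fine below ~10 MeV.
import math

Q = 1.602e-19       # proton charge, coulombs
M_P = 1.673e-27     # proton mass, kg
B = 1.0             # tesla, an illustrative field strength

def radius_m(kinetic_energy_ev):
    p = math.sqrt(2 * M_P * kinetic_energy_ev * Q)  # momentum, kg m/s
    return p / (Q * B)

r_low = radius_m(1e5)    # 100 keV protons
r_high = radius_m(1e7)   # 10 MeV protons: 100x the energy, 10x the radius
print(r_low, r_high)
```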

And this now is the type of machine that we use to reach higher and higher energies. The cyclotron was limited by the size of the magnet that you could use. So instead, we had to come up with another idea of how we could reach higher energies without having to have these huge, huge magnets that were just incredibly heavy. And then a guy called Marcus Oliphant– an Australian, actually– invented the machine called the synchrotron. Now this machine's a bit different because it looks quite different from the cyclotron. It just has a single large ring with a series of different magnets around the ring. So what we have to do is ramp up the strength of those magnets in time with the acceleration of the particles in order to keep everything synchronized. And that's where it gets its name from– that's where the synchrotron actually comes from. And there are three main components of that– there's dipole magnets, which do the bending, quadrupole magnets, which we'll come onto in a minute, and then there's these RF cavities that I alluded to before. So I've just got a little video here. Here we go.

So this is what a radio frequency cavity looks like on the Large Hadron Collider. And that thing's probably about this tall. This thing operates at 400 megahertz, so the oscillations in that happen 400 million times per second. And it's fed by a high voltage radio frequency signal. So inside that cavity, effectively, there's an electromagnetic wave that goes up, and down, and up, and down, 400 million times a second. And as the particles go through, they have to be timed exactly so that when the field is up, in accelerating mode, they get a kick forward. And when the field is down, they don't see the field– because we can't always have the field up. So that's quite a large one. That's 400 megahertz, so they're quite big cavities. I actually have here the world's smallest radio frequency cavity– I'm lucky to have one. This was developed for a new project at CERN called the Compact Linear Collider.
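The timing condition above can be sketched in a few lines: a particle of unit charge passing a cavity gains roughly V·sin(phase) electron-volts, so arriving at the crest gives a full forward kick and arriving half a cycle late gives a decelerating one. The 2 MV peak voltage here is an illustrative number, not an LHC specification.

```python
# Energy gain per cavity pass ~ V * sin(arrival phase), for unit charge.
import math

FREQ = 400e6        # 400 MHz, as quoted for the LHC cavities
VOLTAGE = 2e6       # peak accelerating voltage in volts (illustrative)

def energy_gain_ev(arrival_time_s):
    phase = 2 * math.pi * FREQ * arrival_time_s
    return VOLTAGE * math.sin(phase)   # eV gained by a unit charge

on_time = energy_gain_ev(1 / (4 * FREQ))              # arrives at the crest
half_cycle_late = energy_gain_ev(3 / (4 * FREQ))      # field has reversed
print(on_time, half_cycle_late)  # full kick forward vs. full kick backward
```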

So that was 400 megahertz, that one. This one is 30 gigahertz– so very, very high frequency. So the particles would travel through the center of this device. And the RF is fed in through some waveguides, which are on the top, and these ones are really tiny. And inside there is where the particles actually gain some energy as they go through. But it's not very easy to see exactly what happens inside of there, so I've got a really simple demonstration to show you how a radio frequency electromagnetic field can give some energy to some particles. So if we can dim the lights a little bit? I have a plasma ball here, in the center. I know it's pretty, but ignore the plasma bit. It's generated using a 30 kilohertz oscillation, which is emanating electromagnetic waves outwards, which is what's forming the plasma.

But also, those waves continue outside the confines of the plasma ball, which means that if I put some particles in the way, those particles actually get accelerated. And you'll notice that, actually, I'm not touching that. And actually, I can ground it as well. So I can sort of turn it on and off– sort of creating a little bit of a circuit. So it does work if I touch it, but, actually, one of the main things I want to show you is the sort of RF waves coming out here, accelerating the particles. And that's why it actually switches on and off, just in proximity. A new party trick for you. So yes. So that's how we give particles energy now. But we need to go a little bit further than that. And we need to understand how particles are focused as well. So it would be easy to assume that all you have to do is get the particles, give them energy, bend them round in a corner, job done.

No. We actually have to keep them focused as well. And we have a problem with that because the types of magnets we use– and any type of magnet– can't focus a beam of particles in both dimensions at once. So if I squeeze it horizontally, it's pulled apart vertically. And I'll solve this dilemma for you in a little while, but first, I just want to show you– this is a real, physical quadrupole magnet over here. It weighs about 30 kilos, so I wouldn't try picking it up. But this one is for a fairly medium energy electron beam. But once we get up to the very, very high energy– say, proton beams– we need much, much stronger magnets. And that's why you get these huge ones at the Large Hadron Collider. So all of that then in the synchrotron has to be synchronized together. So we have to have the accelerating cavities, we have to have the bending magnets, and we have to have the focusing system all acting on the beam in perfect timing in order to accelerate the beam. And on most synchrotrons, we use something like a sinusoidal cycle of the magnetic field and we link everything to that.

So this is just showing you what that cycle would look like. We would inject the beam at the low point of the cycle. As the particles are accelerated, the field increases. And then we would extract the beam at the top. Now that's a limitation for the synchrotron because it means– great as they are, and they can reach whatever energy you want, as long as you have strong enough magnets– they have a cycle limitation. Most of them only cycle once or a few times a second. The rapid cycling versions get up to sort of 50 to 70 times per second. So that's a limitation that will become important in a little while. Now backtracking a little– at the start, I was showing you lots of applications. We face many, many challenges today, especially in the 21st century. And as a scientist, like me, you don't have to go very far to pick a challenge, you just have to watch the headline news. This morning I wrote down– what was there? There was climate change, overpopulation, food and water shortages, incurable diseases, aging populations, security and terror threats, or our planet being destroyed by an asteroid– not an astronaut, sorry.
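The magnet cycle described above can be written down directly: the field follows a sinusoid between a minimum (injection) and a maximum (extraction), and you get at most one pulse of beam per cycle. The 50 Hz rate and the field values below are illustrative, in the spirit of the rapid-cycling machines mentioned.

```python
# A sinusoidal synchrotron magnet cycle: inject at the field minimum,
# extract half a cycle later at the maximum.
import math

REP_RATE = 50.0          # magnet cycles per second (illustrative)
B_MIN, B_MAX = 0.2, 1.0  # tesla, illustrative field range

def field(t):
    mid = (B_MAX + B_MIN) / 2
    amp = (B_MAX - B_MIN) / 2
    return mid - amp * math.cos(2 * math.pi * REP_RATE * t)

inject_B = field(0.0)                   # bottom of the cycle
extract_B = field(1 / (2 * REP_RATE))   # half a cycle later, at the top
pulses_per_second = REP_RATE            # one extraction per magnet cycle
print(inject_B, extract_B, pulses_per_second)
```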

An asteroid. Don't get too depressed, though. And yet I've chosen to work on these. I've chosen to work on particle accelerators when all those glorious challenges are out there. And the reason is because I believe in, and I want to use, what we've learned from this field and these machines to help solve some of these real challenges facing us today. Now the next generation of accelerators for particle physics could take any form. So we're researching lots of different options– whether that's a very long, straight-line linear accelerator, whether that's a circular accelerator, or even a more exotic one colliding different types of particles– say particles called muons, which are like the heavier version of the electron.

And they're brilliant and that's really pushing our technology forward. But there are other areas of accelerator science which are pushing us in a slightly different direction. So in the accelerator world, we talk about there being two frontiers. There's the energy frontier– and, in particle physics, that's where we're going with that. We're trying to get to higher and higher energies in order to reach heavier and heavier mass and more rare and exotic particles. But there's also something called the intensity frontier. And that's the one that I work on. And the intensity frontier tends to lead us towards different applications. It could also lead us towards new particle physics applications. But one of the main ones is actually to generate neutrons using a high intensity beam of protons and then use those neutrons to do other things. In the UK especially, people are really good at that because we have a spallation neutron source called ISIS, which is at the Rutherford Appleton Lab in Oxfordshire.

And that's been going for more than 30 years. And it's generating wonderful science from all kinds of fields, using neutrons to investigate matter, and materials, and biology, and aircraft wings, and oil pipeline blockages, and how to save babies with [INAUDIBLE], and all kinds of amazing science. So on the one hand, to generate more science that way, we need to understand how we can generate more neutrons using a particle accelerator. But there are actually other challenges which are pushing our field further, and further, and further. And one of those is how we might be able to deal with the nuclear waste problem, or parts of the nuclear waste problem. And there's an idea out there called accelerator-driven subcritical reactors. Some of you may have heard of this already. Now this idea is to take a very high intensity proton beam, smash it into a target, generate neutrons in the same way we do in the ISIS accelerator, and then use those neutrons to drive existing nuclear waste– especially minor actinides, high level nuclear waste– through its cycle in order to reduce the lifetime that it would have to be stored for.

So one particularly popular idea is to mix in an element called thorium. And thorium is actually a fertile element, not a fissile one– unlike parts of uranium. But thorium is about twice as abundant in the earth as uranium, and you don't have to refine it. So if you mix in thorium, and then you mix in these existing types of nuclear waste from existing reactor fleets, you would be able to bombard it with neutrons from the accelerator, transmute the nuclear waste, and get rid of it. If you did it in the right way and with enough power coming in, you could generate energy from that process as well. But this is an incredibly, incredibly challenging application. So let me give you a sense of where we're at versus where we'd like to be. This is a plot, which is often used in my part of the accelerator field, which shows the beam power of different accelerators, which I'll explain in a moment. But all you need to know is the energies are on the x-axis and the beam current– so how many particles per second– is on the y-axis. And we can see sort of different generations of machines there.

So you can see, for example, some of the high energy particle physics machines are very high on the energy axis, not so high up the beam current axis. And so when we multiply those two numbers to give a beam power, it's maybe not that high– maybe 0.1 megawatts. On the other hand, the optimum energy for generating neutrons is about 1 gigaelectron-volt. And so you'll see, on the left-hand side but up at the top, a bunch of facilities and machines which are more attuned to this intensity frontier– slightly lower energy, but generating quite enormous beam currents. And actually, state of the art at the moment is to get to about 1 megawatt, or just over 1 megawatt, which has been done at the SNS spallation source in the US and at PSI, in Switzerland.
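The beam-power bookkeeping behind that plot is a one-line identity: since 1 eV per elementary charge is 1 joule per coulomb, power in watts is simply the particle energy in eV times the beam current in amps. So the roughly 1 MW state of the art quoted above corresponds to about 1 mA at 1 GeV, and a ten-times-higher current at the same energy would give 10 MW. The currents below are illustrative round numbers, not facility specifications.

```python
# Beam power: (energy per particle in eV) x (current in amps) = watts,
# because eV/charge * charge/second = joules/second.
def beam_power_watts(energy_ev, current_amps):
    return energy_ev * current_amps

today = beam_power_watts(1e9, 1e-3)     # ~1 mA at 1 GeV: about 1 MW
tenfold = beam_power_watts(1e9, 1e-2)   # 10x the current: about 10 MW
print(today / 1e6, tenfold / 1e6)       # in megawatts
```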

Now where we need to be for these future applications is at least 10 megawatts, if not 100. So we need to be 10 to 100 times more powerful than we are at the moment. Now I haven't mentioned reliability yet. When you get higher in power, it's much harder to make your machine reliable, so that you can leave it on all the time, run it all the time. But a real application, like transmuting nuclear waste, would require us to be switched on all the time. So we actually have to be up to 1,000 times more reliable– so fewer small trips and small problems than we have at the moment. And we've never designed an accelerator with that in mind. So just to backtrack a second– how might we actually do that? Which parameters do we have to play with? Well, to get to high power, which is what we need, we have sort of three pieces of the puzzle– there's the energy– but, as I said, that's kind of fixed, because if we're generating neutrons, 1 GeV is about right– then there's the particles per beam– and that's a limitation which I'll get to in a little while.

And then there's the repetition rate– how many times per second you can run the machine– because that limits your average intensity over time. And I said before that the two machines I was looking at are the cyclotron and the synchrotron. The cyclotron is limited in energy. It can't get up to 1 GeV, so it can't reach the optimum energy that we need for this application. The synchrotron is limited in its repetition rate. So in order to generate a high average current, it has to try and operate at many cycles a second, or to really, really ramp up how many particles there are in the machine at one time, which is really problematic. But there is actually another option. And this is a type of machine that I have specialized in. It's called a fixed field alternating gradient accelerator. Right now, this isn't going to make a huge amount of sense, but the main points are in the title– it's a fixed field, so that means we don't ramp the magnetic field in time, and alternating gradient.

And this alternating gradient has something to do with the focusing system, which I'll explain a little bit more about in a second. So it uses the same focusing as a synchrotron, so we can reach high energies, but also the fixed magnetic field of the cyclotron– which means no cycling limitation, no giant single magnet, and things like that. So the beam in this machine does actually spiral outwards a little bit, but only a small amount. And we've arranged the magnetic field to increase with radius in a very particular way, so that as the beam spirals outward, it sees a higher and higher field. So it sees a field like a synchrotron's, but we don't have to ramp the thing in time. And in terms of high intensity, this could be a huge advantage to us in the future. And to understand a little bit more about that, I actually want to talk about how we trap particles and how we focus them.

I've kind of been alluding to this magnet that squeezed one way and didn't the other. So I've got, over here, a very visual demonstration to show you how this works. So if I have my particles in a beam and they're traveling through a series of magnets, when they go through, say, a focusing magnet, they're going to see a magnetic field which controls them like this– so if a particle is too far that way, it'll be pushed back to the center. And if it's too far this way, it will be pushed back to the center. But unfortunately, when it goes through the other type of magnet, it will be defocused. So no matter where it is, it's always going to be pulled away from the center and defocused. Now I think, to those of you looking at this demonstration, there's an obvious solution to how we solve that problem and how we make that focusing stable.

And that is– well, in this case, we have to alternate the gradient of that focusing. That's what I mean by alternating gradient. And in this particular case, we can do that by actually physically spinning this device. It creates quite a wind. Here's the difficult bit. I have to try and trap a particle with it. Let's have a go. Thank you very much. And again, we'll come back to that in a little while as well. So if we get the alternating gradient correct, and the right speed, and the right everything, then we can trap our particles. And that's quite a fundamental thing in accelerator physics. And it leads to this principle that we call strong focusing. And this is why the synchrotron was such a great invention and allowed us to reach higher energies– because by alternating the magnets back and forth between focusing and defocusing, we were able to focus in both planes.
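The alternating-gradient idea above has a standard textbook formulation: model each element of the lattice as a 2×2 transfer matrix (thin lenses and drifts), multiply them up into one cell of focus-drift-defocus-drift, and the motion is stable when the magnitude of the cell matrix's trace is below 2. A minimal sketch; the focal lengths and drift lengths are illustrative, not from any real machine.

```python
# Thin-lens FODO cell: focus (+f), drift, defocus (-f), drift.
# Alternating-gradient focusing is net-stable when |trace| < 2.
import numpy as np

def thin_lens(f):
    # focusing for f > 0, defocusing for f < 0
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def fodo_cell(f, L):
    # rightmost matrix acts first: focus, then drift, defocus, drift
    return drift(L) @ thin_lens(-f) @ drift(L) @ thin_lens(f)

M_stable = fodo_cell(f=2.0, L=1.0)     # gentle lenses: |trace| < 2
M_unstable = fodo_cell(f=0.4, L=1.0)   # lenses too strong: motion blows up
print(np.trace(M_stable), np.trace(M_unstable))
```

Note that the determinant of each matrix is 1, so the cell matrix conserves phase-space area, as the physics requires.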

And not only that, but we can focus in both planes more strongly than any other way we know of focusing particles– it's sort of analogous to, but not quite like, lenses for light. But the other slight problem is that you can't just choose any operating point. Now this is, genuinely, a plot from my PhD thesis. I'm actually not kidding. So what I was showing you before, with this Paul trap– I've set it to a particular speed. I set it running– this saddle shape. And I put the particle on. And it was trapped for a short while before it flew off again. But that actually only works at certain speeds. So this diagram here is showing you– on the x and the y-axis– kind of the focusing strengths, so the curvature, on one. And let's say we're looking at the speed on the other axis. It doesn't matter. There's two parameters there. We're changing a couple of them.

And only for certain sets of those parameters is it stable. The green and the red show that it has to be stable in two different ways. So if I set this running again– and those of you, especially if you're higher up, you might be able to observe this quite closely. I have to feel exactly when it's in the right spot. There we go. So if you were to observe that closely– whoa. I'll try again– you would actually notice that there's a couple of different types of oscillations. There's sort of one round this way, a sort of radial one, and then there's also a vertical one. And both of those have to be correct in order for it to stay put. So let me just put that back on there again. Oh, I think that's the sweet spot. There we go. I'm not the best at doing that. We have other people who are better at it.

Otherwise, if it's not rotating at the right speed, or if it's not in the sweet spot, it goes flying off. So for example, if I do it much, much slower– a little bit too slow. Come on– and if I pop it on then, I think, based on just intuition, you can probably tell that that's not going to work. But I'll give it a fair try anyway. So more or less, it just builds up and comes straight off. And I'll do the same thing at an incredibly dizzying speed. Full power? Yeah. Yeah? All right. Full power. Woo. It gives off quite a wind. All right. I'll pop that back on there. And again, complete rubbish. Completely unstable. And that's because, as this diagram shows you, you can only be stable in particular regions.

And the fascinating thing is that the mathematics that describes this saddle shape and the mathematics that describes the focusing in a particle accelerator are pretty much identical. And this thing has a name. It's called a Paul trap. And I'll come back to that in a little while. So we have to set up our accelerator with a focusing system in a very specific way, so that it sits in one of these stable regions. We usually use the region which is in the bottom left corner of that plot, because it's fairly easy to reach with normal magnet strengths and things like that. But you can see that it's a relatively small area, so we can't just put magnets anywhere. We actually have to choose them and design them very, very carefully. But I showed you before, with this device here, that when we do that, we set off oscillations. Now in physics, in any system which has an oscillation, there's one problem which we're always going to run into, which is the problem of resonances.

Now when I say the word resonance, I'm sure most of you know what I'm talking about. If you have a system that's oscillating, and you occasionally kick it, and you kick it in the same way every time, it builds up and builds up exponentially, and you get a resonance– and, in this case, your beam goes flying into the wall of your particle accelerator, which isn't such a good thing. So this diagram is showing you, on the x-axis, say, the oscillation rate of one of these oscillations. So maybe that's the one around this way. And the vertical axis is showing you the oscillation rate in the other direction. So they interact as well. And one of the reasons why I can never get this thing to stay more than about 30 seconds is because the turntable isn't precisely flat– because there are imperfections in how it's built. Not that it's imperfect– it's beautiful.
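The "kick it the same way every time" mechanism is easy to simulate: rotate a particle's phase-space coordinates by the tune each turn, then apply a small identical kick. When the tune is an integer, the kicks add coherently and the amplitude grows turn after turn; off resonance, they largely cancel. All numbers here are illustrative.

```python
# Kicked oscillator: one phase-space rotation per turn, plus a small
# identical kick. An integer tune makes the kicks add up coherently.
import math

def final_amplitude(tune, turns=200, kick=0.01):
    x, xp = 0.0, 0.0
    phase = 2 * math.pi * tune
    c, s = math.cos(phase), math.sin(phase)
    for _ in range(turns):
        x, xp = c * x + s * xp, -s * x + c * xp  # one turn of oscillation
        xp += kick                               # same small kick each turn
    return math.hypot(x, xp)

on_resonance = final_amplitude(tune=1.0)    # integer tune: grows every turn
off_resonance = final_amplitude(tune=0.31)  # kicks mostly cancel out
print(on_resonance, off_resonance)
```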

It's beautifully made, but it's not submicron level, just saying. So we will always have little imperfections which build up. So when we're designing an accelerator, we have a really tough choice to make, which is: what value do I choose for those oscillations to keep my beam in the machine? And that's why I've shown you this diagram, because it's pretty tough to spot. Your guess is, literally, as good as mine, in that case. But we do– we choose a specific point in that diagram. We might choose a few. We have some flexibility, so we might operate the machine in slightly different ways. And usually, we try and place that spot as far away as we can from any resonance lines, and especially the crossing points, where different orders of resonances actually cross over. And they can be driven by magnet misalignments, they can be driven by magnetic fields having a slightly wrong shape– all kinds of things.

So that sounds kind of like a disaster, but we do, in fact, as I said, operate 35,000 accelerators quite successfully. Thank you very much. But we had always designed accelerators to stay away from these resonances until a couple of years ago, when we came up with a new type of accelerator, which was one of these fixed field alternating gradient machines that I talked about before. But we actually simplified it right down. The machine I showed you before had quite a complicated magnetic field shape. And someone said, well, what happens if we just simplify the field and just use these so-called quadrupole magnets, which have a nice, linear field shape? And everyone said, well, you'll get resonances. Yeah. OK. So what we found was that we have resonances all the way through the acceleration cycle. But we let that happen intentionally, because one of the things you need to know about resonances is they need time to build up.

So this type of machine– its name was EMMA, the Electron Model for Many Applications– was, literally, the first of its kind in the world where we intentionally crossed through resonances in the acceleration cycle– major resonances, which everyone else said, that's never going to work. But the theory was, if we went quickly enough, we'd be able to cross through them because there wouldn't be enough time for those resonances to build up and destroy the beam. And that's what we demonstrated and published back in 2012. And the plot on the bottom left of the screen there, the lower half of that plot just shows a red and black line coming down. That's our measurement of what we call the tune– that is, the oscillation rates– in this accelerator as they go through the acceleration cycle. So you can see that it sort of decreases over time. And that was our demonstration that we were, indeed, crossing through resonances. And we did, indeed, manage to accelerate particles, get them out the other end, and show that, actually, if you go fast enough, this system works. Now unfortunately, that machine can't be applied to very high intensity very easily.
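The EMMA trick of out-running a resonance can be seen with the same kind of toy model: sweep the tune through a fixed driving frequency, and compare a slow sweep with a fast one. Again, every number here is invented for illustration; the point is only the scaling, not the specific machine:

```python
import math

def cross_resonance(n_turns, kick=0.001):
    """Sweep the tune linearly from 0.20 to 0.30, passing through a
    driving term fixed at tune 0.25. A slow sweep (many turns) lingers
    near the resonance and the amplitude grows; a fast sweep gets
    through before the kicks can build up coherently."""
    x, p, phase = 0.0, 0.0, 0.0
    for turn in range(n_turns):
        tune = 0.20 + 0.10 * turn / n_turns    # linearly swept tune
        angle = 2 * math.pi * tune
        x, p = (x * math.cos(angle) + p * math.sin(angle),
                -x * math.sin(angle) + p * math.cos(angle))
        phase += 2 * math.pi * 0.25             # fixed driving frequency
        p += kick * math.cos(phase)
    return math.hypot(x, p)  # residual amplitude after the crossing

slow = cross_resonance(n_turns=20000)
fast = cross_resonance(n_turns=200)
print(slow, fast)  # the slow crossing leaves a much larger amplitude
```

This is the same qualitative behaviour EMMA demonstrated: cross the resonance quickly enough and the beam comes out the other side largely unharmed.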

And there's an extra complication. When we have charged particles and they're all in the same place, we have, in physics, the Coulomb force– literally, the repulsion of different charges against each other. It might not have occurred to people to think, well, hang on, there's all these particles, they're really dense, they're going through this tube– aren't they repelling against each other? Yes. They are. And you know what? It's a real pain. If accelerators were like this, where we had one particle– one, like this– life would be a dream. Unfortunately, the more particles we try and cram in there, especially in high intensity machines, the worse our problems get. Because instead of them having one oscillation period– instead of having one tune– all the particles are interacting with each other.

They're also, mind you, interacting with the beam pipe, with the magnets. There's a lot of electromagnetic fields going around there. And so when we have that diagram where I said before we choose a point– it ain't a point anymore. Instead, our beam becomes this spread of different particles where every particle has, more or less, a different tune. And this is really what limits us when it comes to designing higher and higher intensity machines. The more particles we cram in there, the bigger this spread gets, the more resonances we run into, the more beam loss we create, and the more risk we have of, literally, melting the beam pipe of the accelerator, which we can't afford to do. So there's a couple of different ways that we could potentially think about solving that. And one way is to design the type of machine that I work on, which is this fixed field alternating gradient accelerator. But another one is something a little bit wackier, which is to take a synchrotron and add a special insertion of magnets, which kind of does away with the existence, in the physical equations, of resonances, which sounds rather confusing.

It is. It's a very theoretical concept. It's called an integrable optics accelerator. And that's being driven by Fermilab in the United States. So there are a couple of ideas. There are also, of course, other ways you could do this. You could use a giant linear accelerator, although I work on circular ones because I think, in the future, the linear ones will be too large and costly. And I think we ought to be looking at a generation of smaller, circular machines. So that's when we have to come back to this device over here because understanding how those accelerators work is really hard. If I try and run a simulation of billions– literally tens of billions– of particles in one of these machines interacting with each other, interacting with the beam pipe, interacting with the magnets, generating secondary particles, doing all kinds of– this takes weeks and weeks, on huge clusters of computers, in order to run a single simulation to see what my beam is doing.

If I try and study it in a real accelerator, it takes weeks and weeks of beam time. And I showed you before that these machines are in use all the time. So the ISIS neutron source, for example, when that's on, it runs 24/7. There's very little time for someone to do a beam study. And there's, particularly, no time for anyone to intentionally lose any beam because we can't because it would generate radiation. So a few years ago, I was wondering, how can we study very intense accelerators in the future without building the accelerator first? Because it's a pretty big job. And that's when I came across this idea for the first time of the Paul trap. And I came across some papers from a group from Hiroshima University in Japan, who I now collaborate with. And they are using these devices to actually study beams of particle accelerators and intense beams, in particular.

So this is, I promise, the only slide I have in here with some serious equations in it. This describes the Hamiltonian of beam motion. On the left, that's the Hamiltonian for beam motion in an accelerator. Now a Hamiltonian sort of describes the overall motion in a system– physicists will be familiar with it. If you're not, all I want you to recognize in that equation is how similar it is to the other one. So the other equation is the Hamiltonian for what we call a Paul trap. And if you sort of compare all the different pieces of that equation, you'll see that they're very, very similar in form. And what these colleagues at Hiroshima University had realized is that they could actually use a Paul trap– a small one, rather than a large one like this– to actually study the physics of the beams of accelerators. So just to point out a few terms in that equation– on the left, the p_x and p_y, those are the momenta, and then there's a focusing term, which is the k term, and then x squared minus y squared, which is the sort of hyperbolic shape here.

And then, on the far right-hand side, there's this phi_sc. And on the left, there's a phi as well. This phi is what we call a space charge term. That describes this defocusing, weird, annoying effect from all the high intensity interactions between the different particles. So this device I discovered was able to not just simulate the beams in just about any accelerator– because we, literally, can dial it in– but it was also able to simulate the intense dynamics of those beams. And that was very, very exciting to me. So the first thing I did was I had to learn what the system is like. I'm an accelerator physicist. I don't use these things usually. So this is an electric quadrupole trap. So the top right there shows an image of the quadrupole mode excitation of one of these traps, where we apply an electric RF field at 1 megahertz and we change that in time– kind of like we change this in time as we spin it.
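For readers who want to see the comparison written out: in a commonly used form (reconstructed from the description above, so treat the exact coefficients as an assumption rather than a transcription of the slide), the two Hamiltonians are

```latex
% Transverse beam motion in an accelerator; s = distance along the machine
H_{\mathrm{acc}} = \tfrac{1}{2}\left(p_x^2 + p_y^2\right)
                 + \tfrac{1}{2}\,K(s)\left(x^2 - y^2\right)
                 + \phi_{\mathrm{sc}}(x, y; s)

% Ion motion in a linear Paul trap; t = time, and the RF focusing
% strength K(t) plays the role the magnet lattice strength K(s) plays above
H_{\mathrm{trap}} = \tfrac{1}{2}\left(p_x^2 + p_y^2\right)
                  + \tfrac{1}{2}\,K(t)\left(x^2 - y^2\right)
                  + \phi_{\mathrm{sc}}(x, y; t)
```

The structural identity– the same quadratic focusing term and the same space charge potential phi_sc, with distance along the ring swapped for time– is what lets a tabletop trap stand in for a large accelerator.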

And that changing quadrupole field does exactly the same thing that our focusing in the accelerator does. But this is fixed in space, so it's like a tabletop experiment. So in the accelerator, our beam is traveling through. It's going through magnets like this, it's going through RF cavities. And it's experiencing these focusing and defocusing forces. In this trap, we take argon ions, which we ionize with a little gun of electrons, and then we do the same thing, but we actually do it in time instead of in space as the beam travels through. And so that means that we can't necessarily look at all the acceleration based effects, but we can look at all these beautiful oscillations, resonances, and high intensity effects as well. And so I started working with these guys a couple of years ago. And the first thing we looked at was to actually recreate the EMMA experiment that I showed you before and the resonance crossing that happens in that. So that brings me now to what I'm working on at the moment as well as designing new types of accelerators.

I'm also building one of these Paul trap devices. And one of the things I find really incredible is how, in physics, the mathematics and the description of these systems actually translates between lots of different physical systems. So the beam in the accelerator, the beam inside one of these Paul traps, and the beam or the particle on this Paul trap as well actually all have the same equations of motion. And we can relate them to one another and use it to study. So this is a picture of my Paul trap. It's called IBEX– intense beam experiment– which we're building at the moment. And this is a picture of the design and then the manufacture. And I'm proud to say that this is really an up-to-the-minute discourse because this is a photograph I, literally, took yesterday. I went up to [INAUDIBLE] Laboratory up north and I took this photograph of this chamber, which actually just blew me away. It's so beautiful. It's cleaned for ultra high vacuum.

And inside that is mounted this Paul trap mechanism. And into that, very soon, we'll also add a load of electrical connections, put the lid on, pump it down, and then we'll be able to start running experiments to explore the intensity frontiers of accelerators with it. So I want to come back, just for a moment, to sort of speculate, because I've shown you, all the way through, the design of particle accelerators– how they work, how they're accelerated, how we bend the beam, how we focus, and how we supply voltage. And there's a load of technology that's come a long, long way over decades and up to the present day. And alongside that, of course– alongside the development of technology and driving the development of that technology– is the field of particle physics, which a couple of you in the room are familiar with.

But let me go through very quickly some of the discoveries that we've made. So I talked before about the discovery of the electron– J.J. Thomson, in this theater. Well, the particle which later became known as the electron because, apparently, his nemesis named it, not him. And then we're looking at the discovery and understanding of the photon. And then other types of slightly stranger particles. So there's muons. Now I mentioned before, muons are like heavier versions of the electron. There's also an even heavier version of the muon, called the tau, which is further down there, in sort of medium blue, which was only discovered in the late '70s. So there's three generations of matter that we've discovered. We don't quite understand why there's three. And as well as those particles– the electron, muon, and tau– there's all the ones that make up the rest of our matter. So there's ones that make up the protons and neutrons and those are the quarks. So on the left hand side there, there's down, strange, up, charm, bottom, and, eventually, top, as they were discovered in series.

And those are the six different types of quarks which go in– and only the up and down versions of those quarks go into making our normal matter. Again, there's three generations. We don't know why. And then, of course, there's all the other force carrying particles– things like the photon, which does all the electromagnetic forces that we've been playing with, the gluon, which is holding the proton and neutron together with those quarks, the W boson and Z boson, which mediate the sort of weak interactions, and then, of course, recently discovered as well, the Higgs boson, which is the fundamental mechanism of how things get mass. That's a very quick rundown of the standard model of particle physics. And then, of course, there's other things in there which exist– there's neutrinos as well. They're really mysterious particles. And those are the ones, for example, that we could generate with the next generation of intense beam accelerators.

And so we have these different strands of particle physics going into the future. But what I'd like to ask you is, well, going back to the start of my lecture, and looking at J.J. Thomson, and his inability to predict what we were going to use the electron for, and, admittedly, his complete inability to predict– I think we're at an interesting place in history at the moment because of our slight inability, ourselves, to predict how we're going to use all of this knowledge in the future. So I've shown you a few applications of protons. There's lots of other applications of ion beams and things. Maybe there's applications of muons in the future. But I want to sort of leave you with the thought that having learnt to understand and control these beams of particles, it opens up the question of, exactly what could we do with these beams of particles and these accelerators in the future? So in the future, if someone's going to give a toast in my presence, I'd like it to be something like this– I'd like it to be, "to the particle accelerator… May it be of use to everybody." Thank you very much. I also am aware of people from the high energy physics community investing in plasma wakefield technologies– not only for high energy, actually, but also further applications.

I just wondered if you had any comments on that? Do you think it's a runner? Should we be worrying more about that? Or is it interesting?

Bill Nye Quizzes TODAY Anchors About Science Myths And Facts | TODAY







The Science of Climate Change Q&A – Australian Academy of Science

The science of climate change is unlike anything else I have ever seen as a scientist. It makes people across the world feel like they're experts in a way that they never do in science. This document tries to explain why science is so concerned about climate change, in words and ways that hopefully the average politician or the average punter can understand. If you start from the simple physics, it's cause and effect. If you're going to pump greenhouse gases into the atmosphere, sorry guys, it's going to get hotter. It's clear from the scientific evidence that climate is changing, and it's also clear that the major cause of that change is human activity. So increased greenhouse gases in the atmosphere. Temperatures have already risen, so we have to learn to cope with the existing rise in temperatures. Which ain't just going to go away. We need to adapt to survive in the warmer climate that has already developed.

In the time that human civilisation has been here, climate has been remarkably stable. There have been changes, but the changes we see now over the last 150 years are equal to or greater than those, and the projections we have for the next 100 years are far greater than the changes we've seen over the last 5-6000 years. And as you gradually increase the temperature, you also cause a whole range of climatic changes. Places that were dry become wetter and vice versa, and of course we as humans have put all our infrastructure in places suitable for the current climate. And when we suddenly go through and talk about changing the climate by 2 degrees Celsius, you are asking to essentially scrap all this money we've spent. It means most of Sydney could eventually be underwater, not tomorrow but in hundreds of years. It is a big issue. It's people sitting in local councils.

It's people sitting in state government. It's people sitting in business and industry who actually have the answers. It is actually mobilizing those people to recognise the risks from climate change and to do something about them. When there is even a tiny gap of information, that becomes a 50/50 argument. That is, the science may be right or maybe it's wrong: 50/50, even though it's 99 to 1. And I'm not engaging in hyperbole. We're not riding in some sort of wave that's short-term. This is real. It is our best bet and I am 99% confident that science has got the right answer.

What is Natural Selection?

This episode of Stated Clearly was only possible with support from our viewers, and from a company dedicated to developing and delivering to patients new treatments for Alzheimer's. Stated Clearly presents: What is Natural Selection? Natural selection is one of several key concepts contained within the theory of evolution. To understand exactly what natural selection is and why it's so important, let's first take a quick look at two other evolutionary concepts: Descent with Modification and the overarching idea of Common Descent. Descent with Modification is the observable fact that when parents have children, those children often look and behave slightly different than their parents, and slightly different than each other. They descend from their parents with modifications. The differences found in offspring are partially due to random genetic mutations.

Common Descent is the idea that all life on Earth is related. We descended from a common ancestor. Through the gradual process of descent with modification over many, many generations, a single original species is thought to have given rise to all the life we see today. The common descent of all life on earth is not a directly observable fact. We have no way of going back in time to watch it happen. Instead, Common Descent is a conclusion based on a massive collection of observable facts. Facts found independently in the study of fossils, genetics, comparative anatomy, mathematics, biochemistry, and species distribution. Because the evidence for common descent is so overwhelming, the concept has been around since ancient times. In the past, however, it was rejected by many philosophers and scientists for one main reason: You cannot get order and complexity from random chaos alone. The bodies and behaviors of living things are extremely complex and orderly.

Descent with Modification simply produces random variation. All through history no one could explain how complex life arose from simple life through random variation, until Charles Darwin discovered Natural Selection. Charles Darwin, who lived from 1809 to 1882 was a naturalist: someone who studies nature. At the start of his career he traveled the world by ship, collecting and documenting plants and animals. During his travels, Darwin became very interested in the idea of common descent. He noticed that islands contain species of plants and animals unique to those islands, they can't be found anyplace else on earth, but they often look and behave surprisingly similar to creatures found on nearby continents.

Tortoises on the Galapagos islands can be distinguished from those of Africa; meanwhile, with the exception of size, they're almost identical to a species found nearby in South America. Darwin believed the similarities could be best explained through Common Descent. Long ago a tortoise from the mainland may have drifted to the islands, possibly on a raft of storm debris, and once arriving, laid her eggs. Random changes caused by Descent with Modification over thousands of years eventually transformed the island creatures and the mainland creatures so much that they could no longer be considered the same species. This idea made good sense to Darwin except for one thing: the island creatures he found were not just randomly different from their mainland cousins, they were specially adapted for island life. The Galapagos is a collection of 18 main islands, many of which are home to tortoises.

The larger islands have lots of grass and vegetation. Tortoises there grow extra heavy and have dome-like shells. Some of the smaller islands have very little grass, forcing the tortoises to feed on island cactus. The best cactus pads grow on the tops of these plants. Fortunately, tortoises on these islands are equipped with expanded front legs and saddle-like shells, allowing them to stretch their necks extra long to reach their food. It's almost as if these island creatures have been perfectly sculpted to survive within their unique environments. How did this sculpting take place? Random Descent with Modification alone could never do such a thing. Darwin drew upon his knowledge of selective breeding to answer this question. For thousands of years, farmers have been taking wild plants and animals, and through the process of selective breeding, have sculpted the original wild forms into new domestic forms much better suited for human use and consumption. The process is slow but simple: if a single plant produces a hundred seeds, most will grow to be nearly identical to the parent plant.

A few, however, will be slightly different. Some variations are undesirable: smaller size, bitter taste, vulnerability to disease, and so on. Other variations are highly valued! Thicker, sweeter leaves, for example. If a farmer only allows the best plants to reproduce and create seeds for the next crop, small positive changes will add up over multiple generations, eventually producing a dramatically superior vegetable. You might be surprised to hear that broccoli, cauliflower, kale, brussels sprouts, and cabbages are all just different breeds of a single type of weed commonly found along the shores of the English Channel. The evolution of this original plant into all the varieties we see today was carefully guided by different farmers around the world, who simply selected for different traits. It's important to note that the farmer doesn't actually create anything. Random Descent with Modification creates new traits. The farmer simply chooses which of those new creations are allowed to reproduce, and which are not. Darwin proposed that nature itself is also capable of selection.

It may not have an intelligent brain like a farmer, but nature is an extremely dangerous place in which to live. There are germs which can kill you. Animals that can eat you. You could die of heat exhaustion. You could die of exposure to the cold. When parents produce a variety of offspring, nature, simply by being difficult to survive in, decides which of those variations get to live and reproduce, and which do not. Over multiple generations, creatures become more and more fit for survival and reproduction within their specific environments. Darwin called this process Natural Selection. Since Darwin first put forth his idea in the mid 1800s, Natural Selection has been studied and witnessed numerous times in nature and in the science lab. What started out as a mere idea is now officially an observable fact! Darwin's discovery has greatly expanded our understanding of the natural world. It has led to amazing new breakthroughs, and it finally allowed scientists to seriously consider the idea of Common Descent.

So to sum things up, what exactly is natural selection? Natural Selection is the process by which random evolutionary changes are selected for by nature in a consistent, orderly, non-random way. Through the process of descent with modification, new traits are randomly produced. Nature then carefully decides which of those new traits to keep. Positive changes add up over multiple generations; negative traits are quickly discarded. Through this simple ongoing process, nature, even though it may not have a thinking mind, is capable of producing incredibly complex and beautiful creations. I'm Jon Perry, and that's Natural Selection, stated clearly! That's it for this episode. If you enjoyed it, subscribe to us on YouTube and follow us on our Facebook page. If needed, I can be contacted directly from our website.
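That summary is essentially an algorithm: random variation, then non-random selection, repeated. Here is a deliberately minimal sketch in Python (the numeric "trait", the optimum of 10.0, and all other parameters are invented purely for illustration):

```python
import random

def evolve(generations=300, pop_size=100, mutation=0.1):
    """Toy natural selection: each individual is a single number, its trait.
    Offspring inherit the parent's trait with a small random modification
    (descent with modification); the environment then keeps only the
    better-suited half (selection). Fitness = closeness of trait to 10.0."""
    random.seed(42)                         # deterministic for repeatability
    population = [0.0] * pop_size           # everyone starts poorly adapted
    for _ in range(generations):
        # descent with modification: each survivor leaves two varied offspring
        offspring = [t + random.gauss(0, mutation)
                     for t in population for _ in range(2)]
        # natural selection: only the best-adapted half survives
        offspring.sort(key=lambda t: abs(t - 10.0))
        population = offspring[:pop_size]
    return sum(population) / pop_size       # mean trait after evolution

print(round(evolve(), 2))  # mean trait ends up close to the optimum of 10.0
```

The selection step never creates anything, just as the video says of the farmer: all the novelty comes from the random modifications, yet the population still climbs steadily toward the optimum.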

Why humans are so bad at thinking about climate change

"We are hurtling toward the day when climate change could be irreversible." "Rising sea levels already altering this nation’s coast." "China’s capital is choking in its worst pollution of the year." "5% of species will become extinct." "Sea levels rising, glaciers melting." Okay. Enough. I get it. It’s not like I don’t care about polar bears and melting ice caps. I’m a conservation scientist, so of course I care. I’ve dedicated my entire career to this. But over the years, one thing has become clear to me: We need to change the way we talk about climate change. This doom-and-gloom messaging just isn’t working; we seem to want to tune it out. And this fear, this guilt, we know from psychology is not conducive to engagement. It's rather the opposite. It makes people passive, because when I feel fearful or guilty, I will withdraw from the issue and try to think about something else that makes me feel better. And with a problem this overwhelming, it’s pretty easy to just turn away and kick the can down the road. Somebody else can deal with it.

So it’s no wonder that scientists and policymakers have been struggling with this issue too. So I like to say that climate change is the policy problem from hell. You almost couldn't design a worse problem as a fit with our underlying psychology or the way our institutions make decisions. Many Americans continue to think of climate change as a distant problem: distant in time, that the impacts won't be felt for a generation or more; and distant in space, that this is about polar bears or maybe some developing countries. Again, it’s not like we don’t care about these things — it’s just such a complicated problem. But the thing is, we’ve faced enormous, scary climate issues before. Remember the hole in the ozone layer? As insurmountable as that seemed in the 1970s and ’80s, we were able to wrap our heads around that and take action.

People got this very simple, easy to understand, concrete image of this protective layer around the Earth, kind of like a roof, protecting us, in this case, from ultraviolet light, which by the way has the direct health consequence of potentially giving you skin cancer. Okay, so now you've got my attention. And so then they came up with this fabulous term, the “ozone hole.” Terrible problem, great term. People also got a concrete image of how we even ended up with this problem. For decades, chlorofluorocarbons, or CFCs, were the main ingredient in a lot of products, like aerosol spray cans. Then scientists discovered that CFCs were actually destroying the atmospheric ozone. People could look at their own hairspray and say, “Do I want to destroy the planet because of my hairspray? I mean, god no.” And so what's interesting is that sales of hairspray and those kinds of products and underarm aerosols started dropping quite dramatically.

People listened to scientists and took action. Now scientists predict that the hole in the ozone layer will be healed around 2050. That’s actually pretty amazing. And while stopping the use of one product is actually pretty easy, climate change caused by greenhouse gases … that’s much trickier. Because the sources are more complicated, and for the most part, they’re totally invisible. Right now, there is CO2 pouring out of tailpipes, there is CO2 pouring out of buildings, there is CO2 pouring out of smokestacks, but you can't see it. The fundamental cause of this problem is largely invisible to most of us. I mean, if CO2 was black, we would have dealt with this issue a long time ago. So CO2 touches every part of our lives — our cars, the places we work, the food we eat.

For now, let’s just focus on one thing: our energy use. How do we make that visible? That was the initial goal of UCLA’s Engage project, one of the nation’s largest behavioral experiments in energy conservation. What we're trying to do is to figure out how to frame information about electricity usage so that people save energy and conserve electricity. The idea is that electricity is relatively invisible to people. The research team outfitted part of a student housing complex with meters that tracked real-time usage of appliances and then sent them weekly reports. So you can see how much energy the stove used versus the dishwasher or the fridge. We realized, because of this project, the fridge was like the monster. So lucky for them, their landlord upgraded their fridge to an energy-efficient one. They also learned other energy-saving tips, like unplugging their dishwasher when not in use and air-drying their clothes during the summer months. And researchers, in turn, discovered where people were willing to cut back. The Engage project wanted to know what types of messaging could motivate people to change their behavior. We wanted to see, over time– over a year and with repeated messages– how people behave, and how that impacts consumer behavior. And what we found is that it's very different.

Some households were sent personalized emails with their energy bill about how they could save money; others learned how their energy usage impacted the environment and children’s health. Those who received messages about saving money did nothing. It was totally ineffective because electricity is relatively cheap. But emails sent that linked the amount of pollutants produced to rates of childhood asthma and cancer — well, those led to an 8% drop in energy use, and 19% in households with kids. Now, in a separate study, researchers brought social competition into the mix. First, they hung posters around a dorm building to publicly showcase how students were really doing: red dots for energy wasters, green for those doing a good job, and a shiny gold star for those going above and beyond. This social pressure approach led to a 20% reduction in energy use. This strategy was also used at Paulina’s complex, and it definitely brought out her competitive streak. For me, the competition was what motivated me, because seeing your apartment number and telling you that you are doing at the average, but you are not the best, was like, Why? I’m doing everything you are telling me to do.

I always wanted the gold star, because it was like, “Oh, my god, I want to be like the less consumption of energy in the whole building.” And psychology studies have proved this. We are social creatures, and as individualistic as we can be, turns out we do care about how we compare to others. And yes, we do like to be the best. Some people don’t want to say, Oh, I'm like the average. No, my usage is different and I want to be able to act on it. And people can act on it because with these meters, they can now see their exact impact. A company called Opower is playing with this idea of social competition. They work with over 100 utility companies to provide personalized energy reports to millions of customers around the world. Now consumers can not only see their energy use but how it compares to their neighbors’. Like the UCLA study found, this subtle social pressure encourages consumers to save energy.

It’s been so effective that in 2016, Opower was able to generate the equivalent of two terawatt-hours of electricity savings. That’s enough to power every home in Miami for more than a year. And they’re not alone. Even large companies are tapping into behavioral science to move the dial. Virgin Atlantic Airways gave a select group of pilots feedback on their fuel use. Over the course of a year, they collectively saved over 6,800 tons of fuel by making some simple changes: Adjusting their altitudes, routes, and speed reduced their carbon dioxide emissions by over 21,000 tons. These behavioral “nudges” do seem to be advancing how we as a society deal with some pretty complicated climate change issues, but it turns out we’re just getting started. There is no “quick fix.” We need people changing their companies, changing their business models, changing the products and services they provide. This is about broader-scale change. And part of this change includes embracing what makes us human.

It can’t just be a guilt trip about dying polar bears or driving around in gas guzzlers. We need to talk about our wins as well — like how we’re making progress, really being aware of our energy use, and taking advantage of that competitive spirit we all have in order to move us from a state of apathy to action. Global warming is by far the biggest issue of our time. Climate Lab is a new series from Vox and the University of California, and we’ll be exploring some surprising ways we can tackle this problem. If you want to learn more, head to

All Scientific Papers Should Be Free; Here’s Why They’re Not

If science drops in a field but no other researchers are around to hear it, does it further the academic area of study? Howdy researchers, Trace here for DNews. Science is a process, it’s a way of thinking about the world around us. Most of these scientific processes are thought through and then published in a journal, but to read them you have to pay! Shouldn’t all this scientific knowledge be FREE!? Firstly, science is mostly paid for by grants from governments, non-profits, foundations, universities, corporations or others with deep pockets. We did a video about it. But, even though the science was paid for, that’s just the first half of the equation… the other half is the scientific journal. The first journals were published over 350 years ago as a way to organize new scientific knowledge, and that continues today. According to the International Association of Scientific, Technical and Medical Publishers, 2.5 million new scientific papers are published each year in over 28,000 different journals.

A new paper is published every 20 seconds (and you thought we’d run out of stuff for DNews 😉). Researchers need others to read their paper so it can affect their field. So, they freely send their treasured manuscripts to journals for peer review and publication. When a manuscript comes in, specialists select the best manuscripts and send them to volunteer experts in the field who are “carefully selected based on… expertise, research area, and lack of bias” for peer review. After that, the papers are copy-edited, compiled into an issue of the journal, physically printed, and then shipped and/or published online! They’re, like, the nerdiest magazines in the world. All this costs money… According to a study in PLOS One, this whole process can cost 20 to 40 dollars per page, depending on how many papers the journal receives and how many it has to reject. Someone has to pay for that, and there are three ways this can happen: authors can submit for free and readers/subscribers pay (called the traditional model), or authors pay and readers get it for free (called open access), or both authors and readers pay! English-language journals alone were worth $10 billion in 2013! I know what you’re thinking: just put them on the internet! Save on shipping, like newspapers and magazines! Well, even though publishers don’t have to print and ship big books of papers anymore, they often still do.
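That per-page cost varies with rejections because a journal pays to review every submission but only publishes a fraction of them. Here’s a minimal sketch of that arithmetic; all the numbers are illustrative assumptions, not values from the PLOS One study.

```python
# Illustrative sketch of why per-page cost depends on rejection rates.
# All numbers are assumptions for illustration, not from the study.
def cost_per_published_page(review_cost_per_submission: float,
                            production_cost_per_page: float,
                            pages_per_paper: float,
                            acceptance_rate: float) -> float:
    """Review work happens on every submission, but only accepted
    papers carry the cost, so the review cost is divided by the
    acceptance rate before being spread over a paper's pages."""
    review_share = review_cost_per_submission / acceptance_rate
    return production_cost_per_page + review_share / pages_per_paper

# A journal accepting half its submissions vs. one accepting a quarter:
print(cost_per_published_page(100, 15, 10, 0.50))  # 35.0
print(cost_per_published_page(100, 15, 10, 0.25))  # 55.0
```

The same logic explains why highly selective journals are the most expensive per published page: the review costs of all the rejected papers land on the few that make it through.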

And even if the journals were only online, servers and bandwidth need to be paid for, and that ain’t cheap. Publishing requires dollah bills, y’all, and someone has to pay, and everyone gets their money differently… For example: the American Association for the Advancement of Science (AAAS) publishes the Science journals, and the Public Library of Science publishes PLOS One, among others; both are nonprofits. But while PLOS uses an open-access (free to you) model, Triple-A-S publishes six journals: five with a traditional model (you pay) and one open access. Plus, there are for-profit publishers like Macmillan Publishers, which owns the journal Nature (and offers a mix of traditional and open-access options). And the giant Reed Elsevier (now called RELX) publishes over 2,000 journals, some of which are open access and some traditional! So, though some publishers are nonprofits, they don’t always give it to YOU for free, and those that do can still charge researchers up to $2,900 to publish! Others make money off scientific research, which makes some people feel icky.

The whole thing is confusing. Which is worse: for-profits charging universities or readers for access, or open-access journals charging authors? Shrug. The debate rages. Many scientists argue that since the peer review is provided for free by the scientific community, and the papers are provided for free by the scientific community, access to the papers should. be. free. The EU agrees, ordering that all publicly funded papers be made free by 2020, pushing toward open access to science! In the US, where many of the papers originate, some scientists are calling for boycotts of for-profit publishers. There was a time when practitioners needed a physical reference to the latest scientific achievements. In the days before the internet, getting a journal in the mail must have been both exciting and illuminating. But now, thanks to digital publishing, this whole pay-for-science model is bound to change… People WANT the knowledge to be free, but no one knows how to do it.

As y’all know, more research is always needed, but should that research be behind a paywall? Let us know down in the comments, and make sure you subscribe so you get more DNews every day. You can also come find us on Twitter, @seeker. But for more on how much science actually costs, watch this video.