Some scientists spend decades trying to catch a glimpse of a rare process. But with good experimental design and a lot of luck, they often need only a handful of signals to make a discovery.
In 2009, University of Naples physicist Giovanni de Lellis had a routine. Almost every day, he would sit at a microscope to examine the data from his experiment, the Oscillation Project with Emulsion-tRacking Apparatus, or OPERA, located in Gran Sasso, Italy. He was seeking the same thing he had been looking for since 1996, when he was with the CHORUS experiment at CERN: a tau neutrino.
More specifically, he was looking for evidence of a muon neutrino oscillating into a tau neutrino.
Neutrinos come in three flavors: electron, muon and tau. At the time, scientists knew that they oscillated, changing flavors as they traveled at close to the speed of light. But they had never seen a muon neutrino transform into a tau neutrino.
Until November 30, 2009. On that day, de Lellis and the rest of the OPERA collaboration spotted their first tau neutrino in a beam of muon neutrinos coming from the CERN research center 730 kilometers away.
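The flavor change OPERA hunted for can be sketched with the standard two-flavor oscillation formula, P = sin²(2θ) · sin²(1.27 Δm² L / E). The mixing and beam parameters below are illustrative round numbers chosen for the CERN-to-Gran Sasso baseline, not OPERA's published values:

```python
import math

def appearance_probability(sin2_2theta, dm2_ev2, length_km, energy_gev):
    """Two-flavor oscillation probability: P = sin^2(2θ) · sin^2(1.27 Δm² L / E),
    with Δm² in eV², L in km and E in GeV."""
    phase = 1.27 * dm2_ev2 * length_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative values (assumed): near-maximal mixing, an atmospheric-scale
# mass splitting, the 730 km baseline and a multi-GeV beam energy.
p = appearance_probability(sin2_2theta=1.0, dm2_ev2=2.5e-3,
                           length_km=730.0, energy_gev=17.0)
print(f"P(nu_mu -> nu_tau) = {p:.3f}")  # a percent-level probability
```

The percent-level probability, combined with the difficulty of identifying a tau lepton, is why candidate events were so scarce.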
“Normally, what you would do is look and look, and nothing comes,” says de Lellis, now spokesperson for the OPERA collaboration. “So it's quite an exciting moment when you spot your event.”
For physicists seeking rare events, patience is key. Experiments like these often involve many years of waiting for a signal to appear. Some phenomena, such as neutrinoless double-beta decay, proton decay and dark matter, continue to elude researchers, despite decades of searching.
Still, scientists hope that after the lengthy wait, there will be a worthwhile reward. Finding neutrinoless double-beta decay would let researchers know that neutrinos are actually their own antiparticles and help explain why there’s more matter than antimatter. Discovering proton decay would test several grand unified theories—and let us know that one of the key components of atoms doesn’t last forever. And discovering dark matter would finally tell us what makes up about a quarter of the mass and energy in the universe.
“These are really hard experiments,” says Reina Maruyama, a physicist at Yale University working on neutrinoless double-beta decay experiment CUORE (Cryogenic Underground Observatory for Rare Events) as well as a number of direct dark matter searches. “But they will help answer really fundamental questions that have implications for how the universe was put together.”

Seeking signs, cutting noise
For the OPERA collaboration, finding a likely tau neutrino candidate was just the beginning. Hours of additional work, including further analyses and verification from other scientists, were required to confirm that the signal didn’t originate from another source.
Luckily, the first signal passed all the checks, and the team was able to observe four more candidate events in the following years. By 2015, the team had gathered enough data to confirm that muon neutrinos had transformed into tau neutrinos. More specifically, they were able to achieve a 5-sigma result, the gold standard of discovery in particle physics, meaning there is only about a 1 in 3.5 million chance that background fluctuations alone would produce a signal at least that strong.
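That "1 in 3.5 million" is just the one-sided tail probability of a Gaussian distribution at five standard deviations, which can be checked with a few lines of Python:

```python
import math

def one_sided_p_value(n_sigma):
    """Probability of a background fluctuation at least n_sigma above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

p = one_sided_p_value(5.0)
print(f"5-sigma p-value: {p:.2e}")   # ~2.87e-07
print(f"roughly 1 in {1 / p:,.0f}")  # ~1 in 3.5 million
```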
For some experiments, seeing as few as two or three events could be enough to make a discovery, says Tiziano Camporesi, a physicist working on the CMS experiment at CERN. This was true when scientists at CERN’s Super Proton Synchrotron discovered the Z boson, a neutral elementary particle carrying the weak force, in 1983. “The Z boson discovery was basically made looking at three events,” Camporesi says, “but these three events were so striking that no other kind of particle being produced at the accelerator at the time could fake it.”
There are a number of ways scientists can improve their odds of catching an elusive event. In general, they can boost signals by making their detectors bigger and by improving the speed and precision with which they record incoming events.
But a lot depends on background noise: How prevalent are other phenomena that could create a false signal that looks like the one the scientists are searching for?
When it comes to rare events, scientists often have to go to great lengths to eliminate—or at least reduce—all sources of potential background noise. “Designing an experiment that is immune to background is challenging,” says Augusto Ceccucci, spokesperson for NA62, an experiment searching for an extremely rare kaon decay.
For its part, NA62 scientists remove background noise by, for example, studying only the decay products that coincide in time with the passage of incoming particles from a kaon beam, and carefully identifying the characteristics of signals that could mimic what they’re looking for so they can eliminate them.
The Super Cryogenic Dark Matter Search experiment, or SuperCDMS, led by SLAC National Accelerator Laboratory, goes to great lengths to protect its detectors from cosmic rays, particles that regularly rain down on Earth from space. To eliminate this source of background, scientists shield the detectors with iron, ship them by ground and sea, and operate them deep underground. “So it would not take many dark matter particles detected to satisfy the 5-sigma detection rule,” says Fermilab’s Dan Bauer, spokesperson for SuperCDMS.
At particle accelerators, the search for rare phenomena looks a little different. Rather than simply waiting for a particle to show up in a detector, physicists try to create rare particles and processes in collisions. The more elusive a phenomenon is, the more collisions it takes to find it. Thus, at the Large Hadron Collider, “in order to achieve smaller and smaller probability of production, we're getting more and more intense beams,” Camporesi says.
Triangulating the results of different experiments can help scientists build a picture of the particles or processes they’re looking for without actually finding them. For example, by understanding what dark matter is not, physicists can constrain what it could be. “You take combinations of different experiments and you start rejecting different hypotheses,” Maruyama says.
Only time will tell whether scientists will be able to detect neutrinoless double-beta decay, proton decay, dark matter or other rare events that have yet to be spotted at physicists’ detectors. But once they do, and once scientists know what specific signatures to look for, Maruyama says, “it becomes a lot easier to look for these things, and you can go ahead and study the heck out of them.”
Halina Abramowicz leads the group effort to decide the future of European particle physics.
Physics projects are getting bigger, more global, more collaborative and more advanced than ever—with long lead times for complex physics machines. That translates into more international planning to set the course for the future.
In 2014, the United States particle physics community set its priorities for the coming years using recommendations from the Particle Physics Project Prioritization Panel, or P5. In 2020, the European community will refresh its vision with the European Strategy Update for Particle Physics.
The first European strategy launched in 2006 and was revisited in 2013. In 2019, teams will gather input through planning meetings in preparation for the next refresh.
Halina Abramowicz, a physicist who works on the ATLAS experiment at CERN’s Large Hadron Collider and the FCAL research and development collaboration through Tel Aviv University, is the chair of the massive undertaking. During a visit to Fermilab to provide US-based scientists with an overview of the process, she sat down with Symmetry writer Lauren Biron to discuss the future of physics in Europe.

What do you hope to achieve with the next European Strategy Update for Particle Physics?

HA:
Europe is a very good example of the fact that particle physics is very international, because of the size of the infrastructure that we need to progress, and because of the financial constraints.
The community of physicists working on particle physics is very large; Europe has probably about 10,000 physicists. They have different interests, different expertise, and somehow, we have to make sure to have a very balanced program, such that the community is satisfied, and that at the same time it remains attractive, dynamic, and pushing the science forward. We have to take into account the interests of various national programs, universities, existing smaller laboratories, CERN, and make sure that there is a complementarity, a spread of activities—because that’s the way to keep the field attractive, that is, to be able to answer more questions faster.

How do you decide when to revisit the European plan for particle physics?

HA:
Once the Higgs was discovered, it became clear that it was time to revisit the strategy, and the first update happened in 2013. The recommendation was to vigorously pursue the preparations for the high-luminosity upgrade of the [Large Hadron Collider]. The high-luminosity LHC program was formally approved by the CERN Council in September 2016. By the end of 2018, the LHC experiments will have collected almost a factor of 10 more data. It will be a good time to reflect on the latest results, to think about mid-term plans, to discuss what are the different options to consider next and their possible timelines, and to ponder what would make sense as we look into the long-term future.
The other aspect which is very important is the fact that the process is called “strategy,” rather than “roadmap,” because it is a discussion not only of the scientific goals and associated projects, but also of how to achieve them. The strategy basically is about everything that the community should be doing in order to achieve the roadmap.

What’s the difference between a strategy and a roadmap?

HA:
The roadmap is about prioritizing the scientific goals and about the way to address them, while the strategy covers also all the different aspects to consider in order to make the program a success. For example, outreach is part of the strategy. We have to make sure we are doing something that society knows about and is interested in. Education: making sure we share our knowledge in a way which is understandable. Detector developments. Technology transfer. Work with industry. Making sure the byproducts of our activities can also be used for society. It’s a much wider view.

What is your role in this process?

HA:
The role of the secretary of the strategy is to organize the process and to chair the discussions so that there is an orderly process. At this stage, we have one year to prepare all the elements of the process that are needed—i.e. to collect the input. In the near future we will have to nominate people for the physics preparatory group that will help us organize the open symposium, which is basically the equivalent of a town-hall meeting.
The hope is that if it’s well organized and we can reach a consensus, especially on the most important aspects, the outcome will come from the community. We have to make sure through interaction with the European community and the worldwide community that we aren’t forgetting anything. The more inputs we have, the better. It is very important that the process be open.
The first year we debate the physics goals and try to organize the community around a possible plan. Then comes the process that is maybe a little shorter than a year, during which the constraints related to funding and interests of various national communities have to be integrated. I’m of course also hoping that we will get, as an input to the strategy discussions, some national roadmaps. It’s the role of the chair to keep this process flowing.

Can you tell us a little about your background and how you came to serve as the chair for the European Strategy Update?

HA:
That’s a good question. I really don’t know. I did my PhD in 1978; I was one of the youngest PhDs of Warsaw University, thus I’ve spent 40 years in the field. That means that I have participated in at least five large experiments and at least two or three smaller projects. I have a very broad view—not necessarily a deep view—but a broad view of what’s happening.

There are major particle physics projects going on around the world, like DUNE in the US and Belle II in Japan. How much will the panel look beyond Europe to coordinate activities, and how will it incorporate feedback from scientists on those projects?

HA:
This is one of the issues that was very much discussed during my visit. We shouldn’t try to organize the whole world—in fact, a little bit of competition is very healthy. And complementarity is also very important.
At the physics-level discussions, we’ll make sure that we have representatives from the United States and other countries so we are provided with all the information. As I was discussing with many people here, if there are ideas, experiments or existing collaborations which already include European partners, then of course, there is no issue [because the European partners will provide input to the strategy].

How do you see Europe working with Asia, in particular China, which has ambitions for a major collider?

HA:
Collaboration is very important, and at the global level we have to find the right balance between competition, which is stimulating, and complementarity. So we’re very much hoping to have one representative from China in the physics preparatory group, because China seems to have ambitions to realize some of the projects which have been discussed. And I’m not talking only about the equivalent of [the Future Circular Collider]; they are also thinking about an [electron-positron] circular collider, and there are also other projects that could potentially be realized in China. I also think that if the Chinese community decides on one of these projects, it may need contributions from around the world. Funding is an important aspect for any future project, but it is also important to reach a critical mass of expertise, especially for large research infrastructures.

This is a huge effort. What are some of the benefits and challenges of meeting with physicists from across Europe to come up with a single plan?

HA:
The benefits are obvious. The more input we have, the fuller the picture we have, and the more likely we are to converge on something that satisfies maybe not everybody, but at least the majority—which I think is very important for a good feeling in the community.
The challenges are also obvious. On one hand, we rely very much on individuals and their creative ideas. These are usually the people who also happen to be the big pushers and tend to generate most controversies. So we will have to find a balance to keep the process interesting but constructive. There is no doubt that there will be passionate and exciting discussions that will need to happen; this is part of the process. There would be no point in only discussing issues on which we all agree.
The various physics communities, in the ideal situation, get organized. We have the neutrino community, [electron-positron collider] community, precision measurements community, the axion community—and here you can see all kinds of divisions. But if these communities can get organized and come up with what one could call their own white paper, or what I would call a 10-page proposal, of how various projects could be lined up, and what would be the advantages or disadvantages of such an approach, then the job will be very easy.

And that input is what you’re aiming to get by December 2018?

HA:
Yes, yes.

How far does the strategy look out?

HA:
It doesn’t have an end date. This is why one of the requests for the input is for people to estimate the time scale—how much time would be needed to prepare and to realize the project. This will allow us to build a timeline.
We have at present a large project that is approved: the high-luminosity LHC. This will keep an important part of our community busy for the next 10 to 20 years. But will the entire community remain fully committed for the whole duration of the program if there are no major discoveries?
I’m not sure that we can be fed intellectually by one project. I think we need more than one. There’s a diversity program—diversity in the sense of trying to maximize the physics output by asking questions which can be answered with the existing facilities. Maybe this is the time to pause and diversify while waiting for the next big step.

Do you see any particular topics that you think are likely to come up in the discussion?

HA:
There are many questions on the table. For example, should we go for a proton-proton or an [electron-positron] program? There are, for instance, voices advocating for a dedicated Higgs factory, which would allow us to make measurements of the Higgs properties to a precision that would be extremely hard to achieve at the LHC. So we will have to discuss if the next machine should be an [electron-positron] machine and check whether it is realistic and on what time scale.
One of the subjects that I’m pretty sure will come up as well is about pushing the accelerating technologies. Are we getting to the limit of what we can do with the existing technologies, and is it time to think about something else?
To learn more about the European Strategy Update for Particle Physics, watch Abramowicz’s colloquium at Fermilab.
Work has begun on an upgrade to the Facility for Advanced Accelerator Experimental Tests at SLAC National Accelerator Laboratory.
The Department of Energy’s SLAC National Accelerator Laboratory has started to assemble a new facility for revolutionary accelerator technologies that could make future accelerators 100 to 1000 times smaller and boost their capabilities.
The project is an upgrade to the Facility for Advanced Accelerator Experimental Tests (FACET), a DOE Office of Science user facility that operated from 2011 to 2016. FACET-II will produce beams of highly energetic electrons like its predecessor, but with even better quality. These beams will primarily be used to develop plasma acceleration techniques, which could lead to next-generation particle colliders that enhance our understanding of nature’s fundamental particles and forces, and novel X-ray lasers that provide us with unparalleled views of ultrafast processes in the atomic world around us.
FACET-II will be a unique facility that will help keep the US at the forefront of accelerator science, says SLAC’s Vitaly Yakimenko, project director. “Its high-quality beams will enable us to develop novel acceleration methods. In particular, those studies will bring us close to turning plasma acceleration into actual scientific applications.”
The DOE has now approved the $26 million project. The new facility, which is expected to be completed by the end of 2019, will also operate as an Office of Science user facility—a federally sponsored facility for advanced accelerator research available on a competitive, peer-reviewed basis to scientists from around the world.
“As a strategically important national user facility, FACET-II will allow us to explore the feasibility and applications of plasma-driven accelerator technology,” says James Siegrist, associate director of the High Energy Physics program of DOE’s Office of Science, which stewards advanced accelerator R&D in the US for the development of applications in science and society. “We’re looking forward to seeing the groundbreaking science in this area that FACET-II promises, with the potential for significant reduction of the size and cost of future accelerators, including free-electron lasers and medical accelerators.”
Bruce Dunham, head of SLAC’s Accelerator Directorate, says, “Our lab was built on accelerator technology and continues to push innovations in the field. We’re excited to see FACET-II move forward.”

Surfing the plasma wake
The new facility will build on the successes of FACET, where scientists already demonstrated that the plasma technique can very efficiently boost the energy of electrons and their antimatter particles, positrons. In this method, researchers send a bunch of very energetic particles through a hot ionized gas, or plasma, creating a plasma wake for a trailing bunch to “surf” on and gain energy.
In conventional accelerators, particles draw energy from a radio-frequency field inside metal structures. However, these structures can only support a limited energy gain per distance before breaking down. Therefore, accelerators that generate very high energies become very long, and very expensive. The plasma wakefield approach promises to break new ground. Future plasma accelerators could, for example, unfold the same acceleration power as SLAC’s historic 2-mile-long copper accelerator in just a few meters.
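The scale of that difference is easy to estimate from accelerating gradients alone. The figures below are rough, assumed ballpark values (tens of megavolts per meter for conventional copper structures, tens of gigavolts per meter demonstrated in plasma wakefield experiments), not FACET-II specifications:

```python
TARGET_ENERGY_EV = 50e9  # ~50 GeV energy gain, the order of SLAC's historic linac

# Assumed ballpark gradients (energy gain per meter of accelerator), in volts/meter:
CONVENTIONAL_GRADIENT = 20e6  # ~20 MV/m for copper radio-frequency structures
PLASMA_GRADIENT = 50e9        # ~50 GV/m reported in plasma wakefield experiments

# Length needed = target energy / gradient
length_conventional = TARGET_ENERGY_EV / CONVENTIONAL_GRADIENT
length_plasma = TARGET_ENERGY_EV / PLASMA_GRADIENT

print(f"Conventional: {length_conventional:,.0f} m")  # 2,500 m
print(f"Plasma:       {length_plasma:,.0f} m")        # 1 m
```

Under these assumptions the same energy gain takes kilometers of conventional structures but only about a meter of plasma, which is the promise behind the "few meters" comparison above.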
Researchers will use FACET-II for crucial developments before plasma accelerators can become a reality. “We need to show that we’re able to preserve the quality of the beam as it passes through plasma,” says SLAC’s Mark Hogan, FACET-II project scientist. “High-quality beams are an absolute requirement for future applications in particle and X-ray laser physics.”
The FACET-II facility is currently funded to operate with electrons, but its design allows adding the capability to produce and accelerate positrons later—a step that would enable the development of plasma-based electron-positron particle colliders for particle physics experiments.
Another important objective is the development of novel electron sources that could lead to next-generation light sources, such as brighter-than-ever X-ray lasers. These powerful discovery machines provide scientists with unprecedented views of the ever-changing atomic world and open up new avenues for research in chemistry, biology and materials science.
Other science goals for FACET-II include compact wakefield accelerators that use certain electrical insulators instead of plasma, as well as diagnostics and computational tools that will accurately measure and simulate the physics of the new facility’s powerful electron beams. Science goals are being developed with regular input from the FACET user community.
“The approval for FACET-II is an exciting milestone for the science community,” says Chandrashekhar Joshi, a researcher from the University of California, Los Angeles, and longtime collaborator of SLAC’s plasma acceleration team. “The facility will push the boundaries of accelerator science, discover new and unexpected physics and substantially contribute to the nation’s coordinated effort in advanced accelerator R&D.”
Editor's note: This article is based on a press release issued by SLAC National Accelerator Laboratory.
New research results have potentially identified a fourth type of neutrino: the sterile neutrino.
New research results have potentially identified a fourth type of neutrino, a “sterile neutrino” particle. If the result holds up in future experiments, the particle would pose a challenge for the Standard Model of particle physics. The work, conducted by a multi-institutional team at Fermi National Accelerator Laboratory near Chicago, corroborates results found in the 1990s at the Los Alamos Liquid Scintillator Neutrino Detector (LSND).
“This is a fascinating step forward for particle physics,” says Los Alamos National Laboratory Director Terry Wallace. “Of course, this needs to be tempered by the fact that future observations are required to really clinch this beyond any shadow of doubt. Modifications to the Standard Model, including dark matter, new particles and other interpretations of this result are still speculative. What stands with high confidence is that two experiments with completely different approaches observe the same effect.”
Neutrinos, traditionally understood to come in three flavors (electron, muon and tau), have been measured by the Mini Booster Neutrino Experiment, MiniBooNE, for some 15 years. The key to the current result is the detection of more electron-flavored neutrinos than expected. Neutrinos normally oscillate among the three types, but the excess of electron-flavored ones suggests that some of the muon neutrinos became sterile neutrinos for a time during the regular oscillations and then transitioned into electron neutrinos in the next oscillation phase, which would explain the higher electron-neutrino counts.
“We cannot say definitively that it’s sterile neutrinos, but we can conclusively say something fundamental is going on,” says Richard Van de Water, the Los Alamos co-lead on the project.
“Over the past 20 to 30 years, neutrino oscillations have been observed from one flavor to another,” he says. “But in the late 1990s the LSND at Los Alamos saw evidence of electron neutrinos in the beam, and if our observations were correct, a much heavier type of neutrino was also in existence. That’s where the idea of sterile neutrinos came about.”
The MiniBooNE experiment was designed differently from LSND, yet has shown similar results.
“I’m careful about saying it’s sterile neutrinos. It could be, but it could be something else,” Van de Water says. “The same effect has now been observed in two experiments, with a small chance that there is a mistake in both experiments. We still see what shouldn’t be there, and it could be even more exciting. Speculating, I like to think of this as the first hint of the dark sector, perhaps interacting through neutrinos, and this could be a way to probe dark matter and dark energy.”
In a recent article, Fermilab’s deputy director for research, Joe Lykken, noted that the latest results from the MiniBooNE collaboration provide even stronger motivation for the three new short-baseline neutrino experiments at Fermilab that are based on liquid argon technology.
“These are exciting times for neutrino scientists,” Lykken says.
The MiniBooNE detector was established by a collaborative effort at Fermilab in 2002 to search for unusual neutrino interactions. In recent years the detector has collected data to study background interactions related to the short-baseline neutrino program. About 50 scientists from 20 institutions continue to work on the analysis of data recorded by the experiment.
Editor's note: A version of this article was originally published as a Los Alamos National Laboratory press release.
Watch SLAC theorist Lance Dixon write out a new formula that will contribute to a better understanding of certain particle collisions.
Physicists on experiments at the Large Hadron Collider study the results of high-energy particle collisions, often searching for surprises that their formulas don't predict. Finding such a surprise could lead to the discovery of new particles, properties or forces.
One of the formulas they use to predict the outcome of collisions is the EEC, which stands for energy-energy correlation. The EEC measures how much energy, in the form of particles, goes into two detectors placed at a specific angle to one another.
A group including theorist Lance Dixon of the US Department of Energy's SLAC National Accelerator Laboratory and former postdoc Hua Xing Zhu recently figured out the formula for the biggest correction to EEC in decades.
It’s a formula their paper calls “remarkably simple.” For the video below, Dixon offered to write it down.

Video: Theorists love giant formulas (even more than coffee)
Physicists see top quarks and Higgs bosons emanating from the same collisions in new results from the Large Hadron Collider.
Today two experiments at the Large Hadron Collider at CERN announced a discovery that finally links the two heaviest known particles: the top quark and the Higgs boson. The CMS and ATLAS experiments have seen simultaneous production of both particles during a rare subatomic process. This is the first time scientists have measured the Higgs boson’s direct interaction with top quarks.
“This observation connects for the first time directly the two heaviest elementary particles of the Standard Model: the top quark, which was discovered in 1995 at the Tevatron by the CDF and DZero experiments, and the Higgs boson,” says Boaz Klima, a scientist at the US Department of Energy’s Fermi National Accelerator Laboratory and the CMS publication board chair.
The Higgs boson was predicted in the 1960s and discovered by the CMS and ATLAS experiments in 2012 using particle collisions generated by the LHC.
Fundamental particles gain mass through their interaction with the Higgs field, so it would make sense that the top quark—the most massive particle ever discovered—would have a strong coupling with the Higgs boson. But scientists say they need to test every aspect of the theory in order to fully verify it.
Before its discovery, theorists had a good picture of how the Higgs boson was supposed to behave, according to the Standard Model of particle physics. Now that LHC physicists can nimbly produce and study Higgs bosons, the next step is to scrutinize these predictions and see if they hold water. A big question has been whether the Higgs boson can interact with quarks and, if so, what this relationship might look like.
“The Higgs boson was originally predicted because it helped explain why some force-carrying bosons had mass while others remained massless,” says Anadi Canepa, the new head of the CMS Department at Fermilab. “However, the Higgs also endows quarks with mass.”
Even though scientists suspected that the Higgs boson interacts more strongly with the massive top quark than any other, all evidence until recently was below the threshold required to claim a discovery. These new results—one paper published today in Physical Review Letters from the CMS collaboration and another paper submitted by the ATLAS collaboration—show definitively that the Higgs boson interacts with the top quark as predicted and open up a new door to explore these interactions further.
The top quark played a key role in Higgs research even before scientists found the Higgs. Theorists used measurements of the top quark to help them zero in on the mass of the Higgs boson prior to its discovery, and the top quark is helping physicists understand the strength of the Higgs field at different energies. The top quark also plays a huge role in Higgs boson production.
“Much of what we think we know about the Higgs boson hinges on its relationship with the top quark,” says Rachel Hyneman, a graduate student at the University of Michigan who worked on the ATLAS analysis. “We believe that roughly 90 percent of Higgs bosons are produced through virtual top quarks.”
The proton-proton collisions inside the LHC produce long chain reactions that often involve multiple steps and players. These new studies focused on the rare process in which two gluons inside the colliding protons fuse and produce two virtual top quarks, which are quantum mechanical fluctuations and not yet fully formed discrete particles.
“When these nascent top quarks recombine, they normally pop out a single Higgs boson,” Hyneman says. “But 1 percent of the time, this solitary Higgs is accompanied by two real top quarks. This is what we set out to find.”
Because Higgs bosons and top quarks are short-lived particles, they almost immediately transform into more-stable daughter particles, many of which also decay. This rapid transition from one generation to the next makes it challenging—though not impossible—to retrace the lineage of the detected daughter particles back to their common ancestor.
“We looked at many different decay modes of Higgs bosons,” says Chris Neu, a physicist at the University of Virginia who worked on the CMS analysis. “This process is so rare that we needed to combine results from different Higgs signatures to maximize our sensitivity and establish the top-Higgs signal.”
The next step is to precisely measure this coupling strength and determine if it matches the predictions.
“Directly measuring the coupling of the top quark to the Higgs boson is a fundamental test of the Standard Model,” says Sally Dawson, a senior physicist and theorist at DOE’s Brookhaven National Laboratory. “This measurement limits the possibilities for new heavy particles that may interact with the top quark.”
Further studies will continue to explore the behavior of the Higgs boson and how it fits into the universal mosaic of matter.
People with disabilities are underrepresented in STEM.
When sociologist and broadcaster Tom Shakespeare was a graduate student at King’s College, Cambridge in the early 1990s, he sent a request to a physicist who was on his way to becoming the most famous scientist in modern history.
Shakespeare was part of a group organizing a campaign to raise awareness about the need for better access for students with disabilities. And he wanted Stephen Hawking in on it.
“We wrote to him and said, ‘Would you support us?’” Shakespeare says. “And he did.”
Hawking, who died on March 14, is known for many things: his groundbreaking theories about black holes, his gift for communicating science to the public, his bestselling books and television appearances. He was also one of the longest-surviving patients with amyotrophic lateral sclerosis (ALS), living with the neurodegenerative disease for more than 50 years.
The combination made him famous and something of an ambassador for the 1 billion people on the planet living with some form of disability. He demonstrated the complexity and individuality of disability, while shattering misconceptions about the levels at which people with disabilities can contribute to society.
But despite his decades of living and working in the public eye, scientists with disabilities are still fighting to be counted equally in academia and to get the access they need to fulfill their potential.
“Hawking’s example had great impact on many people,” says Aaron Schaal, a mathematical physicist who, like Hawking, uses a wheelchair and communicates largely via computer. “He showed non-disabled people that disability need not have any influence on one’s mind. However, there are still many prejudices regarding academics with disabilities.”

Opening doors
Hawking relied on round-the-clock assistance from a team of aides; a wheelchair equipped with state-of-the-art technology and voice synthesizers; and personalized accommodations in his home and places of work. “I realize that I am very lucky, in many ways,” Hawking wrote in an introduction to the World Health Organization’s first and only report on the global status of people with disabilities. “My success in theoretical physics has ensured that I am supported to live a worthwhile life. It is very clear that the majority of people with disabilities in the world have an extremely difficult time with everyday survival, let alone productive employment and personal fulfillment.”
According to a 2017 National Science Foundation report, 12.6 percent of the US population—approximately 40 million people—has some form of disability. But they are underrepresented in science: People with disabilities make up only 8.6 percent of employed scientists and engineers and 6.1 percent of physicists. This is despite the fact that undergraduates with disabilities enroll in STEM majors at roughly the same rate—about one in four—as those without.
And while only 3.6 percent of non-disabled people in science and engineering cite illness or disability as a reason for not working, 34 percent of people with disabilities do, indicating that disability is still a significant barrier to employment.
When physicist Claire Malone finished her master’s degree in 2014, she decided to do her PhD research at the University of Cambridge, with a long-term placement at CERN, the largest particle physics laboratory in the world. Malone has cerebral palsy, which affects her movements, speech and the use of her hands. She uses a wheelchair to get around and a computer to help her communicate.
She says she found CERN to be very accommodating, but there were still limitations that made her work more difficult. Her off-site accommodation had a door she could not open from the inside without assistance, and while she had an accessible office to work in on-site, she couldn’t reach much of the lab’s campus.
“I had a great room for me and my scribe to work in, but the rest of the site was pretty inaccessible,” Malone says. “So I was physically isolated from colleagues and missed out on some of the team interaction and exchange of ideas. I guess that is the trouble with [CERN’s] good old 1950s architecture.”
Physical accessibility is not the only issue. Born with limited sight, physicist John Gardner lost his vision entirely at 49 after what should have been a simple eye procedure. Throughout his career, he encountered numerous obstacles. For example, a commonly used tool in physics is an oscilloscope, which measures electrical signals over time. But oscilloscopes display these signals visually in a chart, which is essentially useless if you can’t see.
“There are things like laboratory instruments that are unnecessarily inaccessible,” Gardner says. “It makes it hard for a blind person to do lab experiments that they’re otherwise perfectly capable of doing.”
Graphical information is so crucial, and so entirely inaccessible for blind people, that Gardner founded a company to create tactile graphs, eventually retiring from physics to focus on the mission. Gardner, in other words, approached his exclusion from science by doing what scientists do best: creating solutions to seemingly impossible problems.
Malone, too, devised workarounds to her challenges. “I think the biggest difficulty I have had to overcome in my physics studies is not being able to pick up a pencil and quickly scribble down a calculation,” she says. “Mathematics is the language of physics and most students develop an intuitive feel for how math ‘should work’ by ‘playing’ with equations on the page.” To make up for this, she developed a system of manipulating equations in her mind’s eye.
And Schaal, now a mathematical physicist at Ludwig Maximilians University of Munich, invented a communication board when he was just 9. The Plexiglass board is covered with letters, numbers and mathematical symbols, which he looks at in sequence to spell out words and do math.
Hawking himself had to constantly innovate: As both Schaal and Malone point out, his condition, unlike each of theirs, was progressive, meaning he needed to change his adaptations along with his changing body. But, as he demonstrated, day-to-day barriers need not impede a larger goal.
“I can’t drive a car until self-driving cars come along,” Gardner says. “But that doesn’t keep me from getting from place A to place B.”

Learning by doing
In his book “The Panda’s Thumb,” evolutionary biologist Stephen Jay Gould wrote, “I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.”
It’s a sentiment that resonates with many groups who have been marginalized—“the centuries of blind people who have wasted away in sheltered workshops, institutions and rocking chairs across the world,” as Louisiana Tech Director of the Institute on Blindness Edward Bell put it—and it’s a core element of Hawking’s legacy.
“The point about Hawking is that he showed that if you do accommodate, you do include, you’re going to get great results,” Shakespeare says, “because disabled people have the same talent as everybody else, and sometimes more.”
The authors of the WHO report noted that the global economy suffers when 1 billion people, nearly 15 percent of the world’s population, are relegated to the sidelines. Productivity suffers and tax revenue is lost, and the effects compound as family members take time off to care for loved ones who can’t work. One Canadian study estimated the cost at 6.7 percent of the nation’s GDP, a loss of around $100 billion in US dollars annually.
Some institutions are working to improve this. CERN established an official diversity policy in 2014, building on a previous code of conduct that emphasizes respect in the workplace. The lab has instituted more options to work remotely, provides things like parking permits and online information about accessible paths, and frequently hosts talks, workshops and seminars on inclusivity. (Shakespeare spoke at CERN in 2013.)
The lab also recently began a project to create earmarked positions for students with disabilities as part of its internship program, to help the students develop necessary skills for employment while increasing diversity on the lab’s campus and in the broader pipeline of future scientists.
“Disability is a development issue,” says CERN Diversity Analyst Ioanna Koutava. “Research shows that disability may increase the risk of poverty and poverty may increase the risk of disability. With these positions we hope to give an opportunity to candidates to come work in a leading scientific institution... and for us to increase the inclusiveness of the organization by increasing our exposure to individual situations.”
Hawking himself saw inclusivity as a necessary effort, writing in his introduction to the WHO report, “We have a moral duty to remove the barriers to participation, and to invest sufficient funding and expertise to unlock the vast potential of people with disabilities.”
Hawking, with perseverance, genius and a bit of luck, lived his vast potential. He inspired untold millions, broke down barriers and changed minds about what is truly possible, both in the cosmos and here on Earth.
“Many of us look to him as someone who showed what you could achieve,” Shakespeare says. “He, in return, was willing to be seen as a disabled person for the point of trying to get more people a better deal in society.”
This neutrino-watchers’ season preview gives you the rundown on what to expect from neutrino research in the coming years.
There’s a lot to look forward to in the world of neutrinos, tiny particles that are constantly streaming through us unnoticed.
According to theorist Alexander Friedland of SLAC National Accelerator Laboratory, if you looked at the field of neutrino research 20 years ago, you wouldn’t recognize it compared to what it is now. “The developments have been absolutely remarkable,” he says. “It has evolved so much.”
Twenty years ago, in 1998, neutrinos exposed a shortcoming of the Standard Model of particle physics, scientists’ best explanation of the fundamental particles and forces that make up everything. According to the Standard Model, neutrinos should have no mass. But according to the observations of the Super-Kamiokande and then the Sudbury Neutrino Observatory experiments, they did. Scientists already knew that neutrinos came in three types; the discovery that they also shift from one type to another as they fly along at nearly the speed of light meant that they must have mass.
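The flavor change those experiments observed follows a simple quantum mechanical formula. As a rough illustration, here is the standard two-flavor approximation of the oscillation probability; the baseline, beam energy and mixing parameters below are illustrative choices of ours, not any experiment's published numbers.

```python
import math

def oscillation_probability(sin2_2theta, delta_m2_ev2, length_km, energy_gev):
    """Two-flavor approximation:
    P(flavor a -> flavor b) = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, L in km and E in GeV."""
    phase = 1.27 * delta_m2_ev2 * length_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# Roughly OPERA-like setup: a 730 km baseline, a 17 GeV muon-neutrino
# beam, and atmospheric-sector mixing parameters.
p = oscillation_probability(sin2_2theta=1.0, delta_m2_ev2=2.5e-3,
                            length_km=730, energy_gev=17)
print(f"P(nu_mu -> nu_tau) ~ {p:.3f}")  # about 2 percent
```

The small probability is why appearance experiments like OPERA see only a handful of tau neutrino events over years of running.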
Many mysteries remain about these particles with minuscule masses: Do neutrinos actually come in four types, as suggested by some experiments? What are the masses of neutrinos? Are neutrinos their own antiparticles? What can neutrinos tell us about the Standard Model, astrophysical phenomena and the formation of the universe?
Our current neutrino experiments have all gotten to a sort of midway point, says Lindley Winslow, a physicist at MIT. “We’re refueling and looking at the maps and figuring out our next steps into this really uncharted land,” she says. “It’s a little bit of a time to congratulate ourselves that we got to this point and then make the big push to the unknown.”
With Neutrino 2018, the XXVIII International Conference on Neutrino Physics and Astrophysics, right around the corner, we asked some neutrino experts for their quick takes on the roster of experiments going into this season and their predictions for upcoming victories in the field. Here’s what they had to say.

Chasing hidden flavors
Neutrinos are known to oscillate among three known types, or flavors, as they move through space: electron, muon and tau. But in 1995, physicists working on the Liquid Scintillator Neutrino Detector, or LSND, at Los Alamos National Laboratory stumbled upon clues that there may be an extra flavor hiding on the sidelines. They called it a “sterile neutrino,” a neutrino flavor that would not interact like the others.
“Neutrinos outnumber electrons, protons and neutrons in today’s universe by a factor of 10 billion,” says physicist Joshua Spitz of the University of Michigan.
“Given this, it’s easy to see that the existence of a fourth type of neutrino, and corresponding mixing to the other three, would have significantly affected the evolution of the universe. Specifically, large scale structure, galaxy formation, dark matter, cosmic microwave background observables, and the creation and abundance of heavy elements could all be affected by the addition of a new type of neutrino.”
In the years since the LSND anomaly, physicists have been designing experiments geared toward chasing down this hidden flavor. In 2002, the MiniBooNE experiment began collecting data on the question at Fermi National Accelerator Laboratory.
Results have thus far shown an excess of MiniBooNE events that is consistent with the LSND signal, but it isn’t clear how this fits into a model of sterile neutrinos. The co-spokespeople for MiniBooNE, Richard Van de Water and Rex Tayloe, plan to present updated results at Neutrino 2018 that will add significant new information.
“The results will provide new information and insights into the question of the LSND and MiniBooNE excesses, especially the question of the consistency of the two data sets indicating whether new physics such as sterile neutrinos, or other more complicated models, are at play,” Van de Water says.
In addition, new, more sensitive experiments are just starting to come online. MiniBooNE’s successor is an experiment called MicroBooNE; it is expected to release its first physics results in the coming year. MicroBooNE will eventually be joined at Fermilab by ICARUS and SBND, forming a suite of three detectors known as the Short-Baseline Neutrino Program.
Beyond these accelerator-based experiments—which also include the Japan-based JSNS2—a number of radioactive-source and reactor-based experiments, including PROSPECT, STEREO, DANSS, CHANDLER and SOLID, are also working and hope to catch the theorized sterile neutrino sometime in the near future.

Tackling the mass ordering
Just as we know there are at least three different flavors of neutrinos, we also know that there are three different neutrino masses. But how these mass states are ordered is still a mystery. There are two possible ways neutrino mass states can be ordered: normal or inverted. Although many signs are pointing towards a normal ordering, the final call is still in review.
Knowing whether neutrinos have a normal or inverted mass ordering can help scientists test other models of the universe, such as one in which the four forces of nature unite into one at high energies.
In contrast with the short-baseline experiments searching for sterile neutrinos, experiments tackling the question of mass ordering are built to go long. The two major long-baseline experiments in operation are the T2K experiment hosted by KEK accelerator laboratory, which monitors a beam of neutrinos traveling more than 180 miles across Japan, and NOvA hosted by Fermilab, which studies a beam that originates about 500 miles from the detector in the United States. Fermilab just completed an upgrade of its accelerators, and the detector for the T2K experiment will gain sensitivity with an upgrade this summer. Reactor-based experiments, such as the Daya Bay Reactor Neutrino Experiment in China, are also involved in the investigation.
Many of the experts consulted for this article—including André de Gouvêa at Northwestern and Friedland at SLAC—say they are looking forward to a slew of results in the next few years from NOvA and T2K that could bring us closer than ever to figuring out the mass ordering.
According to Spitz at Michigan, telescope-based observations of large-scale structure are also quickly gaining sensitivity to measuring the sum of the neutrino masses by looking at its influence on the gravitational clumping of matter in the early universe. Combining this with other results might allow scientists to uncover the neutrino mass ordering.
“Seeing agreement between NOvA, T2K and telescopic observations of this property of the neutrinos will be absolutely extraordinary,” he says, “and seeing disagreement might even be more interesting. This will truly be ‘astroparticle physics,’ when we can start relating the properties of the neutrino to the formation of the universe.”
Other experiments are working to measure the combined mass of the three types of neutrinos. KATRIN, a neutrino experiment in Germany with a 200-ton spectrometer at its core, has just started taking data. The experiment will measure the energy of the electrons spit out during the decay of the radioactive isotope tritium and look for very slight distortions that will clue researchers in to the neutrino’s absolute mass.
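The endpoint distortion KATRIN is after can be sketched with the simplest textbook form of the beta spectrum near its endpoint. This is a toy illustration, not KATRIN's actual likelihood analysis; the endpoint value is the commonly quoted tritium figure, and the neutrino masses are placeholders.

```python
import math

# Toy model: near the tritium endpoint E0, the beta spectrum falls off
# roughly as dN/dE ~ (E0 - E) * sqrt((E0 - E)^2 - m_nu^2), so a nonzero
# neutrino mass m_nu cuts the spectrum off a distance m_nu below E0.

E0 = 18575.0  # tritium beta-decay endpoint energy, in eV

def rate(e_ev, m_nu_ev):
    gap = E0 - e_ev
    if gap <= m_nu_ev:
        return 0.0  # kinematically forbidden: no electrons above E0 - m_nu
    return gap * math.sqrt(gap**2 - m_nu_ev**2)

# Two eV below the endpoint, a 1 eV neutrino mass visibly depletes the rate:
print(rate(E0 - 2.0, 0.0))  # massless case
print(rate(E0 - 2.0, 1.0))  # smaller, because some energy goes into m_nu
```

Only a tiny fraction of tritium decays land this close to the endpoint, which is part of why the measurement is so technically demanding.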
“The absolute neutrino mass is one of these things that oscillation experiments can’t see at all,” says Alexander Himmel, a physicist at Fermilab. “We’re seeing the very beginning of data-taking with KATRIN. It’s a very technically challenging experiment and it’s been slow to get up and running, so over the next few years we’re looking forward to getting really beautiful measurements from them, which I think will be very exciting.”
Project 8, another experiment going after the absolute mass of the neutrino, will also use tritium, instead measuring the energy of individual electrons by tracking the frequency of their spiraling motion in a magnetic field. Although the goal of Project 8 is to demonstrate the technology, physicists hope to scale up the technique in the future.

Blowing the whistle on neutrino fouls
Most of the particles in our universe have corresponding antiparticles, which carry charges equal in magnitude but opposite in sign to those of their partners.
Scientists believe that during the Big Bang, there should have been equal amounts of matter and antimatter in the universe. But when matter and antimatter collide, they annihilate. This match should have ended in a tie, with matter and antimatter cancelling each other out and leaving behind nothing but energy.
And yet somehow, as you can guess from the matter-packed world we live in, matter was victorious. Scientists are still trying to figure out why. This is where charge-parity violation comes into play.
For a while, physicists believed there had to be some sort of symmetry between the behavior of particles and their antimatter teammates, called CP symmetry. This means that if antineutrinos subbed in for neutrinos, the universe should treat them identically. But if this symmetry is somehow broken, it might explain how matter got the upper hand.
Long-baseline experiments such as NOvA and T2K, with assistance from reactor-based experiments such as Daya Bay, have set out to track the oscillations of neutrinos and antineutrinos to determine if they are fundamentally different. That would indicate that CP is broken, offering a possible explanation for why matter took home the win in the creation of the universe.
According to Friedland, one of the major neutrino announcements expected soon is the release of antineutrino run data from the NOvA experiment, which, in combination with T2K, will either strengthen existing hints of CP violation or send teams of scientists running off in some new direction.
“We are seeing hints that something interesting is happening between neutrinos and antineutrinos,” says Kendall Mahn, a physicist at Michigan State University. “We’re trying to take more data to see if this is going to turn into something really exciting or fizzle out. It just shows us that we’re really on the leading edge of something.”
Another possible symmetry-breaking that might have had a hand in sculpting the universe as we know it is called lepton number violation. This would occur if neutrinos were actually their own antiparticles. Scientists are testing this hypothesis by looking for a process in which neutrinos act as their own opposites and cancel one another out: neutrinoless double-beta decay.
Experiments such as CUORE, Majorana Demonstrator, GERDA and NEXT are on the offensive, all having recently published new results. Results from KamLAND-Zen 800 are also anticipated by the end of the year.
“Just turning the detector on was a feat in itself,” says Winslow at MIT, referring to CUORE. “Now we have the hard job of keeping it running for five years and getting the ultimate sensitivity where we actually think we should be able to see something.”

The Standard Model fitness test
Scientists aren’t just studying neutrinos in neutrino experiments; they’re also creating tests of the Standard Model. Last summer, physicists involved in the COHERENT experiment hosted at Oak Ridge National Laboratory were able to measure for the first time a phenomenon predicted by the Standard Model that had been sought for four decades without detection. The phenomenon, known as coherent elastic neutrino-nucleus scattering, also comes into play in the explosions of supernovae.
In coherent elastic neutrino-nucleus scattering, a neutrino hitting the nucleus of an atom does not just hit one part of it—a proton or a neutron—but rather kicks the entire nucleus as a whole.
“It’s like hitting a bowling ball with a ping pong ball,” says Kate Scholberg, a physicist at Duke. “Neutrinos almost never interact, but this cross-section is so large that the probability of a collision is 100 times more than for a regular neutrino interaction. The problem is that when you hit a bowling ball with a ping pong ball, it’s hard to get the bowling ball rolling very fast, there’s a really low-energy recoil [that is difficult to observe].”
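Scholberg's bowling-ball analogy can be put into numbers with basic two-body kinematics. The following is a back-of-envelope sketch with illustrative figures of ours (a roughly 30 MeV neutrino, typical of a stopped-pion source, and a cesium nucleus like those in COHERENT's CsI detector), not COHERENT's published analysis.

```python
# Maximum kinetic energy a nucleus of mass M can pick up from a
# neutrino of energy E in an elastic collision:
#   T_max = 2*E^2 / (M + 2*E)
# (the neutrino mass is negligible here).

U_TO_MEV = 931.5  # one atomic mass unit, in MeV/c^2

def max_recoil_mev(e_nu_mev, mass_u):
    m_mev = mass_u * U_TO_MEV
    return 2 * e_nu_mev**2 / (m_mev + 2 * e_nu_mev)

# A ~30 MeV neutrino hitting a cesium nucleus (~133 u) deposits
# only around 15 keV of recoil energy:
t_mev = max_recoil_mev(30, 133)
print(f"max recoil ~ {t_mev * 1000:.1f} keV")
```

The heavier the nucleus, the larger the coherent cross-section but the smaller the recoil, which is exactly the detection challenge Scholberg describes.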
Over the next few months, COHERENT, which currently holds the title of world’s smallest neutrino detector, will continue publishing results, searching for this effect in different nuclei, eventually leading to larger detectors capable of searching for additional oscillation effects.
Taking different approaches is key in propelling neutrino research forward, says Janet Conrad, a physicist at MIT. Another instrument she’s looking forward to using for precision measurements that test the Standard Model is IceCube, the giant South Pole neutrino observatory that consists of a cubic kilometer of Antarctic ice.
“IceCube is a unique detector that has produced nice dark matter results and a really interesting sterile neutrino limit,” she says, “but I think what many people don’t realize is what a fantastic beyond-Standard-Model search detector IceCube actually is. And it’s just getting better as we understand the detectors more and more. Within the particle physics community, IceCube is the dark horse running up next to us that we haven’t yet recognized.”

The wild card
When a massive star explodes, the first messengers it sends across the galaxy are its speedy, unhindered neutrinos. Because these neutrinos escape from the star’s collapsing core, they contain information about the early stages of supernova events that is not available in any other way.
In 1987, Supernova 1987A exploded in a nearby galaxy. Kamiokande-II, the Irvine-Michigan-Brookhaven detector and the Baksan Neutrino Observatory each recorded a burst of neutrino events from the explosion. The detections allowed scientists to confirm theoretical models of what goes on in the heart of these violent stellar explosions.
Although we’re not sure when the next galactic supernova will go off, the idea that it could happen in the coming decades—during a time when there are a growing number of neutrino experiments in operation—is exciting to many, including Scholberg and Friedland.
“The rate of supernova explosions is estimated to be two to three per century in our galaxy,” says Friedland, “which is about the same rate as large earthquakes occurring in the Bay Area. In the case of supernovae, just as in the case of earthquakes, we don’t know if one will go off tomorrow, but it definitely pays to be prepared.”
At the moment, Scholberg says, seven large neutrino detectors could observe a galactic supernova, and more will join them in the coming years. Seeing a nearby supernova would allow us to pursue many detailed questions about distant astrophysical phenomena, which will better inform our theories of the universe.

Going into overtime
Before the end of this decade, additional experiments such as JUNO, an underground observatory in China that will build on the successes of the Daya Bay reactor experiment, will come online.
In the next 10 to 15 years, experiments will continue to grow and improve. The Fermilab-hosted Deep Underground Neutrino Experiment, DUNE, will send neutrinos racing more than 800 miles across the United States to better understand their oscillations and potentially definitively answer some of our current questions.
Each question scientists answer is tied to other questions, and every point scored brings physicists ever closer to triumphs that could revolutionize our picture of the universe, from its tiniest particles to its largest-scale astrophysical phenomena.
“Every day I come into work and we take a little step forward to some new understanding,” Mahn says. “There’s more stuff out there and we’re getting closer to it.”
The Story Collider visits Fermilab to highlight true stories from scientists.
How do snails, shooting stars and science fiction books all relate to physics? They’re just a few examples of where Fermilab scientists and other guest speakers drew inspiration for a recent edition of The Story Collider.
“Stories underlie a lot of what we do as scientists, whether we know it or not,” says Cindy Joe, a Fermilab engineering physicist. “We have a lot of beautiful stories, both science-related and not, but as scientists we sometimes pretend we’re above the emotional part of what we do. But it’s okay for emotion to underlie it.”
The Story Collider features storytellers in podcasts and live shows across the country—everyone from comedians and doctors to poets and physicists. It aspires to humanize its speakers and show that at the basis of every profession, including the sciences, is a person with hopes, dreams, desires and struggles.
On May 12, The Story Collider visited Fermilab with hosts Erin Barker and Kellie Vinal to explore some personal stories from people affiliated with the lab. It was the culmination of the spring season of the Fermilab Arts and Lecture Series, which organizes and hosts events like concerts, theater productions and public lectures at the lab.
The evening saw both laughter and tears. Joe told the story of her pet snail who helped her through difficult times at the beginning of her physics career, when she often felt overlooked and ignored. But caring for a small, often overlooked and non-traditional pet helped Joe realize her worth as a person and a scientist.
“I realized that my core belief that every single person had fundamental, inherent value should maybe also apply to myself,” she said. “That my different perspective was important. That my experiences were real. That my contributions were good. That I deserved no less gentle kindness and consideration than anyone else. And maybe I should treat myself like it.”
Don Lincoln, Fermilab senior scientist and book author, told the audience about an accomplishment he is especially proud of: inspiring a young woman to pursue the sciences through his writing. He emphasized that writing popular science books for a general audience is a crucial method of inspiring young scientists.
“There was someone out there— someone who had the ability and passion to learn but didn’t even know that a career in physics existed,” he said.
Fermilab scientist emeritus Mike Albrow painted a picture of the night sky for his audience. The same night sky stirred him as both a child and an adult, always creating “a feeling of being all alone in the vast emptiness of it all.” He told the audience how much of a detriment light pollution is to the night sky and to kids (and adults) who want to look at the stars.
Visual artist and first-ever Fermilab artist-in-residence Lindsay Olson walked the audience through intermingling science and art—and how she fell in love with science in the middle of a wastewater treatment plant. At Fermilab, despite feeling intimidated by high-energy physics, she relied on her curiosity to explore and then show through her art that you don’t need a PhD to be fascinated by physics.
Finally, Fermilab senior scientist Herman White described when a small and coincidental connection—his roots in Alabama—became a way for him to connect to people and share his science with them.
“Especially now, it’s incredibly important to connect the public to science and change their perception of it,” White says. “We need to relate to people on a human level.”
Joe notes that many scientists aren’t used to telling stories, but their stories are an opportunity both to convey the value of science and create relationships with people outside of the field. She highlights that science is part of everyone’s life, no matter where they come from or what they do for a living.
“The underlying theme is that science is human,” she says. “We can all tell stories about science, no matter its role in our lives, by sharing our feelings, thoughts and background. And our stories as scientists are really just our stories as humans.”
Engineering the incredible, dependable, shrinkable Deep Underground Neutrino Experiment.
The Deep Underground Neutrino Experiment, designed to solve mysteries about tiny particles called neutrinos, is growing by the day. More than 1000 scientists from over 30 countries are now collaborating on the project. Construction of prototype detectors is well underway.
Engineers are getting ready to carve out space for the mammoth particle detector a mile below ground.
The international project is hosted by the Department of Energy’s Fermi National Accelerator Laboratory outside of Chicago—and it has people cracking engineering puzzles all around the globe. Here are five incredible engineering and design feats related to building the biggest liquid-argon neutrino experiment in the world.

1. The DUNE detector modules can (and will) shrink by about half a foot (16.5 centimeters) when filled with liquid argon.
Each of the large DUNE detector modules in South Dakota will be about 175 feet (58 meters) long, but everything has to be able to comfortably shrink when chilled to negative 300 degrees Fahrenheit (negative 184 degrees Celsius). The exterior box that holds all of the cold material and detector components, also known as the cryostat, will survive thanks to something akin to origami. It will be made of square panels with folds on all sides, creating raised bumps or corrugations around each square. As DUNE cools by hundreds of degrees to liquid argon temperatures, the vessel can actually stay the same size because of those folds; the corrugation provides extra material that can spread out as the flat areas shrink. But inside, the components will be on the move. Many of the major detector components within the cryostat will be attached to the ceiling with a dynamic suspension system that allows them to move up to half a foot as they chill.

2. Researchers must engineer a new kind of target to withstand the barrage of particles it will take to make the world’s most intense high-energy neutrino beam for DUNE.
Targets are the material that a proton beam interacts with to produce neutrinos. The Fermilab accelerator complex is being upgraded with a new superconducting linear accelerator at the start of the accelerator chain to produce an even more powerful proton beam for DUNE—and that means engineers need a more robust target that can stand up to the intense onslaught of particles. Current neutrino beamlines at Fermilab use different targets—one with meter-long rows of water-cooled graphite tiles called fins, another with air-cooled beryllium. But engineers are working on a new helium-gas-cooled cylindrical rod target to meet the higher intensity. How intense is it? The new accelerator chain’s beam power will be delivered in short pulses with an instantaneous power of about 150 gigawatts, equivalent to powering 1.5 billion 100-watt lightbulbs at the same time for a fraction of a second.

3. A single DUNE test detector component requires almost 15 miles of wire.
Before scientists start building the liquid-argon neutrino detectors a mile under the surface in South Dakota, they want to be sure their technology is going to work as expected. In a ProtoDUNE test detector being constructed at CERN, they are testing pieces called “anode plane assemblies.” Each of these panels is made of almost 15 miles (24 kilometers) of precisely tensioned wire that has to lie flat—within a few millimeters. The wire is a mere 150 microns thick—about the width of two hairs. This panel of wires will attract and detect particles produced when neutrinos interact with the liquid argon in the detector—and hundreds will be needed for DUNE.

4. DUNE will be the highest-voltage liquid-argon experiment in the world.
The four DUNE far detector modules, which will sit a mile underground at the Sanford Underground Research Facility in South Dakota, will use electrical components called field cages. These will capture particle tracks set in motion by a neutrino interaction. The different modules will feature different field cage designs, one of which has a target voltage of around 180,000 volts—about 1500 times as much voltage as you’d find in your kitchen toaster—while the other design is planning for 600,000 volts. This is much more than was produced by previous liquid-argon experiments like MicroBooNE and ICARUS (now both part of Fermilab’s short-baseline neutrino program), which typically operate between 70,000 and 80,000 volts. Building such a high-voltage experiment requires design creativity. Even “simple” things, from protecting against power surges to designing feedthroughs—the fancy plugs that bring this high voltage from the power supply to the detector—have to be carefully considered and, in some cases, built from scratch.

5. Researchers expect DUNE’s data system to catch about 10 neutrinos per day—but must be able to catch thousands in seconds if a star goes supernova nearby.
A supernova is a giant explosion that occurs when a star collapses in on itself. Most people imagine the dramatic burst of light and heat, but much of the energy (around 99 percent) is carried away by neutrinos that can then be recorded here on Earth in neutrino detectors. On an average day, DUNE will typically see a handful of neutrinos coming from the world’s most intense high-energy neutrino beam—around 10 per day at the start of the experiment. Because neutrinos interact very rarely with other matter, scientists must send trillions to their distant detectors to catch even a few. But so many neutrinos are released by a supernova that the detector could see several thousand neutrinos within seconds if a star explodes in our Milky Way galaxy. A dedicated group within DUNE is working on how best to rapidly record the enormous amount of data from a supernova, which will be about 50 terabytes in ten seconds.
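For a sense of scale, here is the arithmetic behind those rates, sketched in Python (the 3,000-event burst count is an assumed stand-in for “several thousand”; the other figures are quoted above):

```python
# Steady state: roughly 10 beam neutrinos per day at the start of the experiment.
beam_per_second = 10 / 86_400             # about 1e-4 interactions per second

# Supernova burst: several thousand neutrinos in about ten seconds,
# with roughly 50 terabytes of data recorded in that window.
burst_events = 3_000                      # assumed value for "several thousand"
burst_seconds = 10
burst_rate = burst_events / burst_seconds # 300 events per second
data_rate_tb = 50 / burst_seconds         # 5 terabytes per second

print(f"steady rate ~ {beam_per_second:.1e} events/s")
print(f"burst rate  ~ {burst_rate:.0f} events/s, ~ {data_rate_tb:.0f} TB/s of data")
```

The burst rate is millions of times the steady rate, which is why the data-acquisition system needs a dedicated fast path for supernova candidates.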
In case you missed it, here are the first “Five fascinating facts about DUNE.”
If two protons collide at 99.9999991 percent the speed of light, do they make a sound?
What is it like inside the LHC? Symmetry tackles some unconventional questions about the world’s highest energy particle accelerator.

The LHC accelerates beams of particles, usually protons, around and around a 17-mile ring until they reach 99.9999991 percent the speed of light. If you could watch this happening, what would you see?

A:
The LHC ring is actually made up of both straight and curved sections. If you were watching protons fly through one of the straight sections, it would be totally dark. But as the protons pass through the LHC’s curved sections, the particles emit synchrotron radiation in the form of photons.
At low energies, the photons are generally in the infrared, but at a couple of particular points in the ring, special magnets called undulators cause visible light to be emitted.
During the acceleration process (the so-called ramp), the energy of protons increases, and the energy of the photons they emit also increases. Once the protons reach their maximum energy, most of the photons are in the ultraviolet range. If you looked in the beam pipe at that point, you wouldn’t be able to see anything, but you would get a pretty good sunburn.

What are space and time like for an LHC proton traveling at 99.9999991 percent the speed of light?

A:
Two strange but well-known effects of moving at speeds that are a significant fraction of the speed of light are time dilation (moving clocks tick slowly) and length contraction.
Time dilation tells us that the time experienced by a moving observer is shorter than time experienced by a stationary observer. Length contraction tells us that a stationary observer will observe a moving object to be shorter in length than it would be if it were at rest.
To a proton travelling very close to the speed of light, time would appear to be passing normally. Proton time would seem strange only to an observer outside the LHC, for whom 1 second for the proton would appear to last about 2 hours.
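The two-hour figure follows from the Lorentz factor at the quoted speed. A quick back-of-the-envelope check:

```python
import math

beta = 0.999999991                       # proton speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor, roughly 7,000

# Time dilation: one second of proton time, as seen from the lab.
lab_hours = gamma / 3600
print(f"gamma ~ {gamma:.0f}")
print(f"1 proton-second ~ {lab_hours:.1f} lab-hours")
```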
What would seem strange from the proton’s point of view would be length. To the proton screaming around the LHC, the 17-mile circumference of the accelerator would appear to take up just about 13 feet.

Speaking of screaming, do the particles going around the LHC generate any sound? If you stuck your ear up against the beam pipe and listened to the protons colliding, what would you hear?

A:
The particles in the LHC are travelling in a very good vacuum, and there’s no sound in a vacuum. But there is a recording of the proton beam smashing into the graphite core of the beam dump, where particles are sent when scientists want to stop circulating them in the accelerator, and they do land with a bang.
How powerful are the collisions in the LHC?

A:
The LHC collides two beams of protons at a combined energy of 13 TeV, or 13 trillion electronvolts. An electronvolt is a unit of energy, like a calorie or a joule. Electronvolts are used to talk about the energy of motion of really small things such as particles and atoms.
One photon of infrared light has about 1 electronvolt of energy. A flying mosquito has about 4 trillion electronvolts of energy.
Knowing that, you might think 13 trillion electronvolts isn’t much. But what’s impressive is not so much the energy as the energy density: The energy of about 3 flying mosquitos is crammed into a space about 1 trillion times smaller across than one annoying insect. Nowhere else on Earth can we concentrate energy that much.

What if, instead of colliding protons at 13 TeV, you could collide apples at the same speed?

A:
If you could do that, you’d get some real specialty apple juice—and a huge amount of energy: close to 1 x 10^20 joules. That’s about the same order of magnitude as the energy that was released when a meteor hit Canada 39 million years ago. The impact of that collision resulted in the Haughton Crater, which is about 14 miles (23 kilometers) across.
The LHC can’t accelerate an apple, though. Right now, it can accelerate about 600 trillion protons at a time. That may sound like a lot, but altogether, it adds up to about 1 nanogram of matter—roughly the same mass as a single human cell.
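All of these comparisons can be checked with a few lines of arithmetic. A sketch of the numbers (the 100-gram apple mass is an assumption; the 4 TeV mosquito is the article’s figure; physical constants are standard values):

```python
import math

EV_TO_J = 1.602176634e-19    # one electronvolt, in joules
C = 2.99792458e8             # speed of light, in m/s

# 13 TeV in everyday units, and in flying mosquitos (4 TeV each).
collision_j = 13e12 * EV_TO_J
mosquitos = 13e12 / 4e12
print(f"13 TeV ~ {collision_j:.1e} J ~ {mosquitos:.1f} flying mosquitos")

# An apple (assumed ~0.1 kg) at 99.9999991 percent of the speed of light.
beta = 0.999999991
gamma = 1.0 / math.sqrt(1.0 - beta**2)
apple_ke_j = (gamma - 1.0) * 0.1 * C**2
print(f"apple kinetic energy ~ {apple_ke_j:.1e} J")   # order of 1e20 J

# 600 trillion protons, expressed as a mass.
proton_kg = 1.67262192369e-27
beam_nanograms = 600e12 * proton_kg * 1e12            # 1 kg = 1e12 nanograms
print(f"600 trillion protons ~ {beam_nanograms:.1f} nanograms")
```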
The Conferences for Undergraduate Women in Physics aim to encourage more women and gender minorities to pursue careers in physics and improve diversity in the field.
Nicole Pfiester, an engineering grad student at Tufts University, says she has been interested in physics since she was a child. She says she loves learning how things work, and physics provides a foundation for doing just that.
But when Pfiester began pursuing a degree in physics as an undergraduate at Purdue University in 2006, she had a hard time feeling like she belonged in the male-dominated field.
“In a class of about 30 physics students,” she says, “I think two of us were women. I just always stood out. I was kind of shy back then and much more inclined to open up to other women than I was to men, especially in study groups. Not being around people I could relate to, while it didn't make things impossible, definitely made things more difficult.”
In 2008, two years into her undergraduate career, Pfiester attended a conference at the University of Michigan that was designed to address this very issue. The meeting was part of the Conferences for Undergraduate Women in Physics, or CUWiP, a collection of annual three-day regional conferences to give undergraduate women a sense of belonging and motivate them to continue in the field.
Pfiester says it was amazing to see so many female physicists in the same room and to learn that they had all gone through similar experiences. It inspired her and the other students she was with to start their own Women in Physics chapter at Purdue. Since then, the school has hosted two separate CUWiP events, in 2011 and 2015.
“Just seeing that there are other people like you doing what it is you want to do is really powerful,” Pfiester says. “It can really help you get through some difficult moments where it’s really easy, especially in college, to feel like you don’t belong. When you see other people experiencing the same struggles and, even more importantly, you see role models who look and talk like you, you realize that this is something you can do, too. I always left those conferences really energized and ready to get back into it.”
CUWiP was founded in 2006 when two graduate students at the University of Southern California realized that only 21 percent of US undergraduates in physics were women, a percentage that dropped even further with career progression. In the 12 years since then, the percentage of undergraduate physics degrees going to women in the US has not grown, but CUWiP has. What began as one conference with 27 attendees has developed into a string of conferences held at sites across the country, as well as in Canada and the UK, with more than 1500 attendees per year. Since the American Physical Society took the conference under its umbrella in 2012, the number of participants has continued to grow every year.
The conferences are supported by the National Science Foundation, the Department of Energy and the host institutions. Student transportation to the conferences is mostly covered by the students’ home institutions, and APS provides extensive administrative support. In addition, local organizing committees contribute a significant volunteer effort.
“We want to provide women, gender minorities and anyone who attends the conference access to information and resources that are going to help them continue in science careers,” says Pearl Sandick, a dark matter physicist at the University of Utah and chair of the National Organizing Committee for CUWiP.
Some of the goals of the conference, Sandick says, are to make sure people leave with a greater sense of community, identify themselves more as physicists, become more aware of gender issues in physics, and feel valued and respected in their field. They accomplish this through workshops and panels featuring accomplished female physicists in a broad range of professions.
Before the beginning of the shared video keynote talk, attendees at each CUWiP site cheer and wave on video. This gives a sense of the national scale of the conference and the huge number of people involved. Courtesy of Columbia University
Students attending the conference have the opportunity to meet and network with women with successful careers in physics. Courtesy of Columbia University
Many CUWiP programs include a poster session where students have the opportunity to describe research in which they have been engaged, often through summer research programs. Photo by Eleanor Starkman
Ava Ghadimi, a math and physics graduate student from CUNY Baccalaureate for Unique and Interdisciplinary Studies, presents her research on “Searching for sources of astrophysical neutrinos: a multi-messenger approach with VERITAS” at the Princeton poster session. Photo by Eleanor Starkman
Jazlin McKinney of Texas Southern University discusses her research topic, “African American, Hispanic and Native American Women Students in STEM: Recommendations for Increasing the Bachelors, Masters and PhD Graduates,” with another participant at the CUWiP at the University of Kansas. Photo by Matt Rennells, Shedluv Photography
Zoe de Beurs of the University of Texas at Austin describes her research project, “Neutral Atom Focusing Using a Pulsed Electromagnetic Lens.” Zoe was one of three students awarded the top poster presentation prize at the CUWiP at the University of Kansas. Photo by Matt Rennells, Shedluv Photography
University of Wisconsin, Madison physics and applied math major Arianna Ranabhat presents her poster on “Geocoronal Hydrogen Observations” at the Iowa State University CUWiP. Photo by Massimo Marengo/Iowa State University
Alynie Walter, an applied physics and mathematics major at St. Catherine University in Minnesota, presents her research on “Calibration of Temperature Sensors in Preparation for the 2017 Total Solar Eclipse” during the CUWiP at Iowa State poster session. Photo by Massimo Marengo/Iowa State University
Alyssa Miller, Iowa State University alumna and a member of Fermilab staff in the Beam Division, brainstorms about careers that use a physics degree. Photo by Massimo Marengo/Iowa State University
At the 2017 CUWiP at Princeton, attendees had the opportunity to touch a Van de Graaff generator, which produces static electricity. Photo by Eleanor Starkman
At Princeton, attendees had the opportunity to participate in CUWiP+ workshops, in which they could take part in hands-on demonstrations and perform introductory laboratories. In one of the workshops, students constructed a simple plasma apparatus. Photo by Eleanor Starkman
The conferences include workshops and panels featuring accomplished female physicists in a broad range of professions. Photo by Eleanor Starkman
“Often students come to the conference and are very discouraged,” says past chair Daniela Bortoletto, a high-energy physicist at the University of Oxford who organizes CUWiP in the UK. “But then they meet these extremely accomplished scientists who tell the stories of their lives, and they learn that everybody struggles at different steps, everybody gets discouraged at some point, and there are ups and downs in everyone’s careers. I think it’s valuable to see that. The students walk out of the conference with a lot more confidence.”
Through CUWiP, the organizers hope to equip students to make informed decisions about their education and expose them to the kinds of career opportunities that are open to them as physics majors, whether it means going to grad school or going into industry or science policy.
“Not every student in physics is aware that physicists do all kinds of things,” says Kate Scholberg, a neutrino physicist at Duke and past chair. “Everybody who has been a physics undergrad gets the question, ‘What are you going to do with that?’ We want to show students there’s a lot more out there than grad school and help them expand their professional networks.”
They also reach back to try to make conditions better for the next generations of physicists.
At the 2018 conference, Hope Marks, now a senior at Utah State University majoring in physics, participated in a workshop in which she wrote a letter to her high school physics teacher, who she says really sparked her interest in the field.
“I really liked the experiments we did and talking about some of the modern discoveries of physics,” she says. “I loved how physics allows us to explore the world from particles even smaller than atoms to literally the entire universe.”
The workshop was meant to encourage high school science and math teachers to support women in their classes.
One of the challenges to organizing the conferences, says Pat Burchat, an observational cosmologist at Stanford and past chair, is to build experiences that are engaging and accessible for undergraduate women.
“The tendency of organizers is naturally to think about the kinds of conferences they go to,” says Burchat, “which usually consist of a bunch of research talks, often full of people sitting passively listening to someone talk. We want to make sure CUWiP consists of a lot of interactive sessions and workshops to keep the students engaged.”
Candace Bryan, a physics major at the University of Utah who has wanted to be an astronomer since elementary school, says one of the most encouraging parts of the conference was learning about imposter syndrome, which occurs when someone believes that they have made it to where they are only by chance and don’t feel deserving of their achievements.
“Science can be such an intimidating field,” she says. “It was the first time I’d ever heard that phrase, and it was really freeing to hear about it and know that so many others felt the same way. Every single person in that room raised their hand when they asked, ‘Who here has experienced imposter syndrome?’ That was really powerful. It helped me to try to move past that and improve awareness.”
Women feeling imposter syndrome sometimes interpret their struggles as a sign that they don’t belong in physics, Bryan says.
“It’s important to support women in physics and make sure they know there are other women out there who are struggling with the same things,” she says.
“It was really inspirational for everyone to see how far they had come and receive encouragement to keep going. It was really nice to have that feeling after conference of ‘I can go to that class and kill it,’ or ‘I can take that test and not feel like I’m going to fail.’ And if you do fail, it’s OK, because everyone else has at some point. The important thing is to keep going.”
The SuperCDMS SNOLAB project is expanding the hunt for dark matter to particles with properties not accessible to any other experiment.
The US Department of Energy has approved funding and start of construction for the SuperCDMS SNOLAB experiment, which will begin operations in the early 2020s to hunt for hypothetical dark matter particles called weakly interacting massive particles, or WIMPs. The experiment will be at least 50 times more sensitive than its predecessor, exploring WIMP properties that can’t be probed by other experiments and giving researchers a powerful new tool to understand one of the biggest mysteries of modern physics.
SLAC National Accelerator Laboratory is managing the construction project for the international SuperCDMS collaboration of 111 members from 26 institutions, which is preparing to do research with the experiment.
"Understanding dark matter is one of the hottest research topics—at SLAC and around the world," says JoAnne Hewett, head of SLAC’s Fundamental Physics Directorate and the lab’s chief research officer. "We're excited to lead the project and work with our partners to build this next-generation dark matter experiment."
With the DOE approvals known as Critical Decisions 2 and 3, the researchers can now build the experiment. The DOE Office of Science will contribute $19 million to the effort, joining forces with the National Science Foundation, which will contribute $12 million, and the Canada Foundation for Innovation, which will contribute $3 million.
“Our experiment will be the world’s most sensitive for relatively light WIMPs—in a mass range from a fraction of the proton mass to about 10 proton masses,” says Richard Partridge, head of the SuperCDMS group at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of SLAC and Stanford University. “This unparalleled sensitivity will create exciting opportunities to explore new territory in dark matter research.”

An ultracold search 6,800 feet underground
Scientists know that visible matter in the universe accounts for only 15 percent of all matter. The rest is a mysterious substance called dark matter. Due to its gravitational pull on regular matter, dark matter is a key driver for the evolution of the universe, affecting the formation of galaxies like our Milky Way. It therefore is fundamental to our very own existence.
But scientists have yet to find out what dark matter is made of. They believe it could be composed of dark matter particles, and WIMPs are top contenders. If these particles exist, they would barely interact with their environment and fly right through regular matter untouched. However, every so often, they could collide with an atom of our visible world, and dark matter researchers are looking for these rare interactions.
In the SuperCDMS SNOLAB experiment, the search will be done using silicon and germanium crystals, in which the collisions would trigger tiny vibrations. However, to measure the atomic jiggles, the crystals need to be cooled to less than minus 459.6 degrees Fahrenheit—a fraction of a degree above absolute zero temperature. These ultracold conditions give the experiment its name: Cryogenic Dark Matter Search, or CDMS. The prefix “Super” indicates an increased sensitivity compared to previous versions of the experiment.
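For reference, that Fahrenheit figure really does sit only a few hundredths of a degree above absolute zero; a quick conversion sketch:

```python
def fahrenheit_to_kelvin(t_f: float) -> float:
    """Convert degrees Fahrenheit to kelvins."""
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

t_k = fahrenheit_to_kelvin(-459.6)
print(f"-459.6 F ~ {t_k * 1000:.0f} millikelvins above absolute zero")  # about 39 mK
```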
The collisions would also produce pairs of electrons and electron deficiencies that move through the crystals, triggering additional atomic vibrations that amplify the signal from the dark matter collision. The experiment will be able to measure these “fingerprints” left by dark matter with sophisticated superconducting electronics.
The experiment will be assembled and operated at the Canadian laboratory SNOLAB—6,800 feet underground inside a nickel mine near the city of Sudbury. It’s the deepest underground laboratory in North America. There it will be protected from high-energy particles called cosmic radiation, which can create unwanted background signals.
“SNOLAB is excited to welcome the SuperCDMS SNOLAB collaboration to the underground lab,” says Kerry Loken, SNOLAB project manager. “We look forward to a great partnership and to supporting this world-leading science.”
Over the past months, a detector prototype has been successfully tested at SLAC.
“These tests were an important demonstration that we’re able to build the actual detector with high enough energy resolution, as well as detector electronics with low enough noise to accomplish our research goals,” says KIPAC’s Paul Brink, who oversees the detector fabrication at Stanford.
Together with seven other collaborating institutions, SLAC will provide the experiment’s centerpiece of four detector towers, each containing six crystals in the shape of oversized hockey pucks. The first tower could be sent to SNOLAB by the end of 2018.
“The detector towers are the most technologically challenging part of the experiment, pushing the frontiers of our understanding of low-temperature devices and superconducting readout,” says Bernard Sadoulet, a collaborator from the University of California, Berkeley.

A strong collaboration for extraordinary science
In addition to SLAC, two other national labs are involved in the project. Fermi National Accelerator Laboratory is working on the experiment’s intricate shielding and cryogenics infrastructure, and Pacific Northwest National Laboratory is helping understand background signals in the experiment, a major challenge for the detection of faint WIMP signals.
A number of US and Canadian universities also play key roles in the experiment, working on tasks ranging from detector fabrication and testing to data analysis and simulation. The largest international contribution comes from Canada and includes the research infrastructure at SNOLAB.
“We’re fortunate to have a close-knit network of strong collaboration partners, which is crucial for our success,” says KIPAC’s Blas Cabrera, who directed the project through the CD-2/3 approval milestone. “The same is true for the outstanding support we’re receiving from the funding agencies in the US and Canada.”
Fermilab’s Dan Bauer, spokesperson of the SuperCDMS collaboration, says, “Together we’re now ready to build an experiment that will search for dark matter particles that interact with normal matter in an entirely new region.”
SuperCDMS SNOLAB will be the latest in a series of increasingly sensitive dark matter experiments. The most recent version, located at the Soudan Mine in Minnesota, completed operations in 2015.
”The project has incorporated lessons learned from previous CDMS experiments to significantly improve the experimental infrastructure and detector designs for the experiment,” says SLAC’s Ken Fouts, project manager for SuperCDMS SNOLAB. “The combination of design improvements, the deep location and the infrastructure support provided by SNOLAB will allow the experiment to reach its full potential in the search for low-mass dark matter.”
Editor's note: A version of this article was originally published as a SLAC press release.
Approaching retirement, Jean Deken describes what it’s like to preserve decades of collective scientific memory at a national lab.
Jean Deken was hired at SLAC National Accelerator Laboratory for a daunting task—to chronicle the history and culture of the decades-old lab and its researchers as the fast pace of its science continued. She became SLAC’s archivist on April 15, 1996.
Deken is retiring after more than 20 years at the lab. In this Q&A, she discusses big changes in physics, the challenges that archivists face, and her most surprising finds.

What was it like when you first arrived at the lab?

JD:
At the time, I remember feeling overwhelmed because the archives had been unstaffed for more than a year. When I arrived, I couldn’t fully open the door to my office because there were so many boxes stacked there. Gradually, I unearthed the desk, chair, computer and phone.
BaBar, the big experiment at the time, was ramping up; it was exploring antimatter, the interactions of quarks and leptons, and new physics. The physicists wanted to know what to do with their records, because they knew they were making history.
The Superconducting Super Collider in Texas had recently been canceled, and the contents of its library were distributed to other labs. SLAC received pallets and pallets of microfilmed physics journals. I worked with the library to figure out what to do with them all.
There was a pent-up need to get information into the archives. Because I was so busy, I sometimes didn’t have time to eat until the evenings.

How did you first get involved with archiving science?

JD:
I was looking for a part-time job between undergraduate and graduate school, and I began working at the Missouri Botanical Garden as a cataloguing assistant. There was a stack of stuff in the corner of the cataloguing department that no one wanted to go near. I started digging into it and found manuscripts from the early days of the botanical garden by the founder and his scientific advisor.
I became fascinated by these documents, and the director of the library told me, “What you’re interested in, that’s called archiving.”
So I acquired some archival procedure manuals and started working on arranging these papers. Soon, I began fielding all the questions the library got about the history of the garden.

How did you make your way to SLAC?

JD:
For many years, I worked at the National Archives in St. Louis, Missouri. While I was there, the Archives decided to celebrate the 50th anniversary of World War II in a really big way. In St. Louis we made a traveling exhibit that focused on war efforts of civilian and military personnel. I took the lead on looking into the civilian war effort, which included Women Air Service Pilots (WASPs) and scientists working in research and development, including those whose work contributed to the Manhattan Project.
Working on the exhibit, I became increasingly aware of the importance of preserving scientific perspectives as we uncovered stories hidden in personnel records. I thought, “Why did I never hear about this before?” It’s partly because the records of these efforts were scattered. That got me interested in learning more about archiving the records of government science.
At the same time, contemporary records were going electronic, in a big way. I remember thinking, “This changes everything.” I decided that the best solution for an archivist would be to be as close as possible to the records as they’re being created, to be embedded in an organization while working on how to preserve this information. Wanting to be an embedded archivist, and wanting to work with the records of government science, I applied for the archivist job at SLAC, and they offered it to me the day of my interview.

What does it mean to process an archival collection, exactly?

JD:
For paper collections, you process the documents to try to maintain the original order. The contextual information gives insight into the personality and intellect of the records’ creator. But collections are often stored in disorder, which makes reconstructing the original order a challenge.
The first stage is to create an inventory of every box and folder, tagging each item to reveal its connections with institutions and topics. That inventory makes it possible to arrange the contents roughly chronologically and by topic.
Next I would make sure the documents were stored in acid-free boxes and file folders. At this point, I would also look for contaminants, such as acidic paper, insects, old tape and rusty staples. For these damaged items, I would sometimes simply remove the contaminants, and other times [for more damaged items] photocopy the documents on acid-free paper and store the original in a protective sleeve.
In one collection, I found an envelope full of cash. I went back to the scientist and said, “I’ve never gotten a tip before.” He had been collecting meal money for a conference and had lost track of the envelope.
After this physical work is done, I would create an electronic guide to the contents. We have also digitized some of the hardcopy archival materials when requested, and those copies are kept in a digital repository. We have just begun to dip our toes into archiving the lab’s digital materials, starting with photographs. The type of digital storage we are using is really an interim fix.

Speaking of the discipline, what are some of the challenges archivists face?

JD:
I’ve been concerned about electronic records for decades now. The problem with digital records is that no one’s figured out how to make them last. This is still true, and it’s something archival science needs to address as a field. There are quite a few questions we’re asking ourselves: What data and records are worth preserving? How long should they be saved? Who will save them? And who gets access?
One of my own future efforts in the field—I’ll keep busy during retirement—has to do with data archiving. With data, there’s such a vast amount of information, and each scientific discipline has different protocols. At international and national labs such as SLAC, many of the scientists come from elsewhere, and there are various agreements and regulations about responsibilities towards data and records. I’m working on proposing policies for these varied situations using SLAC datasets as a test case.

Was it challenging to learn enough about the science to preserve it well?

JD:
During the interview for the job, I asked, “You know I don’t have a physics background, why are you interested in me?” The interviewers told me, “We can teach you the physics that you need to know, and we also consider it part of our job to be able to explain physics.” But they told me they needed me to figure out the government regulations that relate to archives.
When I started, I bought children’s books about physics, listened, and asked a lot of questions.

What have you learned about scientists themselves?
It surprised me that these absolutely brilliant scientists were actually down-to-earth and approachable. The experimentalists, for example, would test you to make sure you knew your stuff, but then they considered you a member of their team. The researchers are used to multidisciplinary teams and needed to know that you could pull your own weight.
I was also accustomed to a corporate government setting, and the environment at the lab was totally different. At first, I could not dress down enough to fit in. It was a funny, unexpected cultural shift.

How has the lab changed, from your perspective?
The place has changed completely. When I started, SLAC was a single-purpose lab—focusing on high-energy physics. Later, it became a multipurpose laboratory and expanded into many other research areas.
In the 1990s, SLAC was mature in the field of high-energy physics. The leaders of the lab had a sense that we had a history that needed to be preserved.
That generation has moved on, and with the shift in scientific focus, everything is new enough that there’s a different sense of history. Right now, we are running full tilt to get research programs set up, and that’s where a lot of the attention is aimed. I often have to say to the scientists, “Remember, you’re doing something that’s historic.”

What are some of the projects you’re most proud of?
During my interview, several people mentioned SLAC’s involvement with the early web.
SLAC has the oldest web pages still in existence. Even though Tim Berners-Lee at CERN created the first website, the original code wasn’t preserved. It has to do with a quirk of HTML—when you overwrite the code, it disappears. At SLAC, Louise Addis and Joan Winters had the foresight to understand this from almost the beginning, and they saved the original HTML pages from the first North American website. So, I was able to deposit those pages into the Stanford Web Archives when it was established a few years ago.
I was also a co-author and editor of [SLAC Founding Director] Pief Panofsky’s memoir. I like to tell people that his first language wasn’t German; it was physics. I really had to pull the story out of him to get the full flavor of what he wanted to say, but it was a lot of fun.
Overall, I’m really proud of the SLAC archives. It’s a robust and well-respected program with minimal resources. And it’s been a whole lot of fun. There’s nothing I’d rather have done.
The Japan-based experiment is one step closer to answering mystifying questions about antimatter.
For the first time, the SuperKEKB collider at the KEK laboratory in Tsukuba, Japan, is smashing together particles at the heart of a giant detector called Belle II.
“These first collisions represent a moment that all of us at Belle II have been looking forward to for a long time,” says Elisabetta Prencipe, a scientist at the German research center Forschungszentrum Juelich who works on particle tracking software and statistical analyses for Belle II. “It’s a step forward to opening a new door to the universe and our understanding of it.”
The project looks for potential differences between matter and its mirror-world twin, antimatter, to figure out why our universe is dominated by just one of the pair. The experiment has been seven years in the making.
During construction of the Belle II detector, the SuperKEKB accelerator was recommissioned to increase the number of particle collisions, a measure called its luminosity. Even now, the accelerator is preparing for the second part of this upgrade, which will take place in stages over the next 10 years. The upgrade will more tightly focus the beams and solidify SuperKEKB’s position as the highest-luminosity accelerator in the world.
On March 21, SuperKEKB successfully stored an electron beam in the main ring, and on March 31 it stored a beam of positrons, the electron’s antimatter counterparts. With the two colliding beams in place, Belle II saw its first successful collisions today.

The beauty of quarks
Scientists predict that antimatter and matter should have been created in equal amounts during the hot early stages of the big bang that formed our universe. When matter and antimatter meet, they annihilate in a burst of energy. Yet despite their presumed equal ratio, matter has clearly won the fight, and now makes up everything we see around us. It is this confounding mystery that Belle II seeks to unravel.
Belle II’s beauty lies in its ability to detect unimaginably minute debris from high-energy collisions between electrons and positrons—particles so small they aren’t made up of anything else. In this debris, scientists look for physics beyond what they currently know by comparing particles’ properties to their predictions. The detector is especially sensitive to how other fundamental particles called quarks decay. It can closely study both quark properties and the structure of hadrons: particles made of multiple quarks bound together tightly.
At Belle II’s core, electrons and positrons collide at a high enough energy to create B-mesons, particles made of one matter and one antimatter quark. Scientists are particularly interested in bottom quarks, also known as beauty quarks.
Bottom quarks are produced along with charm quarks at the center of Belle II. Both are heftier cousins of up and down quarks, which make up all ordinary matter, including you and whatever device you’re using to read this article. The collisions also produce tau leptons, which are like massive electrons. All of these particles are seldom found in nature and observing them can reveal new physics.
Since B-mesons contain bottom quarks, which have diverse kinds of decays, scientists will use Belle II to observe the different meson decays. If a meson containing regular quarks decays differently than one containing their antimatter twins, this could help explain why the universe is full of matter.

Bolstering Belle
Belle II is the successor of earlier experiments used to produce B-mesons, Belle and BABAR. It will record about 40 times as many collisions as the original Belle. It’s also a tremendous collaboration between 25 countries, with 750 national and international physicists.
“Every measurement we’ve made until this point and every hint of new physics is limited by statistics and by the amount of data we have,” says Tom Browder, professor at the University of Hawaii and spokesperson for Belle II. “It’s very clear that to find any new physics we need much more data.”
With more collisions at the center of Belle II, scientists have more opportunities for an uncommon or unheard-of decay event to take place, giving them better insight into quarks’ behavior and how it factors into the universe’s creation.
“With 40 times more collisions per second than the previous Belle experiment, we’ll be able to search for rare decays, possibly observe new particles, and try to answer still unsolved questions about the origin of the universe,” Prencipe says. “Many of us are quite excited because this could mean the start of a new era, where lots of data are expected, new detectors will be tested, and we have great possibilities to perform unique physics.”
Breakthroughs in physics sometimes require an assist from the field of mathematics—and vice versa.
In 1912, Albert Einstein, then a 33-year-old theoretical physicist at the Eidgenössische Technische Hochschule in Zürich, was in the midst of developing an extension to his theory of special relativity.
With special relativity, he had codified the relationship between the dimensions of space and time. Now, seven years later, he was trying to incorporate into his theory the effects of gravity. This feat—a revolution in physics that would supplant Isaac Newton’s law of universal gravitation and result in Einstein’s theory of general relativity—would require some new ideas.
Fortunately, Einstein’s friend and collaborator Marcel Grossmann swooped in like a waiter bearing an exotic, appetizing delight (at least in a mathematician’s overactive imagination): Riemannian geometry.
This mathematical framework, developed in the mid-19th century by German mathematician Bernhard Riemann, was something of a revolution itself. It represented a shift in mathematical thinking from viewing mathematical shapes as subsets of the three-dimensional space they lived in to thinking about their properties intrinsically. For example, a sphere can be described as the set of points in 3-dimensional space that lie exactly 1 unit away from a central point. But it can also be described as a 2-dimensional object that has particular curvature properties at every single point. This alternative definition isn’t terribly important for understanding the sphere itself but ends up being very useful with more complicated manifolds or higher-dimensional spaces.
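The sphere's two descriptions can be written out explicitly (a standard formulation, added here for illustration; it does not appear in the original article):

```latex
% Extrinsic view: the sphere as a subset of 3-dimensional space
S^2 = \left\{ (x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1 \right\}

% Intrinsic view: a 2-dimensional surface described by its own metric
% (in spherical coordinates), with constant Gaussian curvature K = 1
ds^2 = d\theta^2 + \sin^2\theta \, d\phi^2, \qquad K = 1
```

It is the second, intrinsic description that generalizes to curved higher-dimensional spaces, including the four-dimensional spacetime of general relativity.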
By Einstein’s time, the theory was still new enough that it hadn’t completely permeated through mathematics, but it happened to be exactly what Einstein needed. Riemannian geometry gave him the foundation he needed to formulate the precise equations of general relativity. Einstein and Grossmann were able to publish their work later that year.
“It’s hard to imagine how he would have come up with relativity without help from mathematicians,” says Peter Woit, a theoretical physicist in the Mathematics Department at Columbia University.
The story of general relativity could go to mathematicians’ heads. Here mathematics seems to be a benevolent patron, blessing the benighted world of physics with just the right equations at the right time.
But of course the interplay between mathematics and physics is much more complicated than that. They weren’t even separate disciplines for most of recorded history. Ancient Greek, Egyptian and Babylonian mathematics took as an assumption the fact that we live in a world in which distance, time and gravity behave in a certain way.
“Newton was the first physicist,” says Sylvester James Gates, a physicist at Brown University. “In order to reach the pinnacle, he had to invent a new piece of mathematics; it’s called calculus.”
Calculus made some classical geometry problems easier to solve, but its foremost purpose to Newton was to give him a way to analyze the motion and change he observed in physics. In that story, mathematics is perhaps more of a butler, hired to help keep the affairs in order, than a savior.
Even after physics and mathematics began their separate evolutionary paths, the disciplines were closely linked. “When you go far enough back, you really can’t tell who’s a physicist and who’s a mathematician,” Woit says. (As a mathematician, I was a bit scandalized the first time I saw Emmy Noether’s name attached to physics! I knew her primarily through abstract algebra.)
Throughout the history of the two fields, mathematics and physics have each contributed important ideas to the other. Mathematician Hermann Weyl’s work on mathematical objects called Lie groups provided an important basis for understanding symmetry in quantum mechanics. In his 1930 book The Principles of Quantum Mechanics, theoretical physicist Paul Dirac introduced the Dirac delta function to help describe the concept in particle physics of a pointlike particle—anything so small that it would be modeled by a point in an idealized situation. A picture of the Dirac delta function looks like a horizontal line lying along the bottom of the x axis of a graph, at x=0, except at the place where it intersects with the y axis, where it explodes into a line pointing up to infinity. Dirac declared that the integral of this function, the measure of the area underneath it, was equal to 1. Strictly speaking, no such function exists, but Dirac’s use of the Dirac delta eventually spurred mathematician Laurent Schwartz to develop the theory of distributions in a mathematically rigorous way. Today distributions are extraordinarily useful in the mathematical fields of ordinary and partial differential equations.
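Dirac's defining properties can be stated compactly (the standard textbook formulation, added here for illustration):

```latex
\delta(x) = 0 \;\; \text{for } x \neq 0,
\qquad
\int_{-\infty}^{\infty} \delta(x) \, dx = 1
```

The property that makes the delta so useful in practice is the sifting identity, \(\int_{-\infty}^{\infty} f(x)\,\delta(x)\,dx = f(0)\): it picks out a function's value at a single point, which is exactly what is needed to model a pointlike particle.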
Though modern researchers focus their work more and more tightly, the line between physics and mathematics is still a blurry one. A physicist has won the Fields Medal, one of the most prestigious accolades in mathematics. And a mathematician, Maxim Kontsevich, has won the new Breakthrough Prizes in both mathematics and physics. One can attend seminar talks about quantum field theory, black holes, and string theory in both math and physics departments. Since 2011, the annual String Math conference has brought mathematicians and physicists together to work on the intersection of their fields in string theory and quantum field theory.
String theory is perhaps the best recent example of the interplay between mathematics and physics, for reasons that eventually bring us back to Einstein and the question of gravity.
String theory is a theoretical framework in which those pointlike particles Dirac was describing become one-dimensional objects called strings. Part of the theoretical model for those strings corresponds to gravitons, theoretical particles that carry the force of gravity.
Most humans will tell you that we perceive the universe as having three spatial dimensions and one dimension of time. But string theory naturally lives in 10 dimensions. In 1984, as the number of physicists working on string theory ballooned, a group of researchers including Edward Witten, the physicist who was later awarded a Fields Medal, discovered that the extra six dimensions of string theory needed to be part of a space known as a Calabi-Yau manifold.
When mathematicians joined the fray to try to figure out what structures these manifolds could have, physicists were hoping for just a few candidates. Instead, they found boatloads of Calabi-Yaus. Mathematicians still have not finished classifying them. They haven’t even determined whether their classification has a finite number of pieces.
As mathematicians and physicists studied these spaces, they discovered an interesting duality between Calabi-Yau manifolds: two manifolds that seem completely different can end up describing the same physics. This idea, called mirror symmetry, has blossomed in mathematics, opening entire new research avenues. The framework of string theory has almost become a playground for mathematicians.
Mina Aganagic, a theoretical physicist at the University of California, Berkeley, believes string theory and related topics will continue to provide these connections between physics and math.
“In some sense, we’ve explored a very small part of string theory and a very small number of its predictions,” she says. Mathematicians and their focus on detailed rigorous proofs bring one point of view to the field, and physicists, with their tendency to prioritize intuitive understanding, bring another. “That’s what makes the relationship so satisfying.”
The relationship between physics and mathematics goes back to the beginning of both subjects; as the fields have advanced, this relationship has gotten more and more tangled, a complicated tapestry. There is seemingly no end to the places where a well-placed set of tools for making calculations could help physicists, or where a probing question from physics could inspire mathematicians to create entirely new mathematical objects or theories.
The Large Synoptic Survey Telescope will track billions of objects for 10 years, creating unprecedented opportunities for studies of cosmic mysteries.
When the Large Synoptic Survey Telescope begins to survey the night sky in the early 2020s, it’ll collect a treasure trove of data. The information will benefit a wide range of groundbreaking astronomical and astrophysical research, addressing topics such as dark matter, dark energy, the formation of galaxies and detailed studies of objects in our very own cosmic neighborhood, the Milky Way.
LSST’s centerpiece will be its 3.2-gigapixel camera, which is being assembled at the US Department of Energy’s SLAC National Accelerator Laboratory. Every few days, the largest digital camera ever built for astronomy will compile a complete image of the Southern sky. Moreover, it’ll do so over and over again for a period of 10 years. It’ll track the motions and changes of tens of billions of stars, galaxies and other objects in what will be the world’s largest stop-motion movie of the universe.
Fulfilling this extraordinary task requires extraordinary technology. The camera will be the size of a small SUV, weigh in at a whopping 3 tons, and use state-of-the-art optics, imaging technology and data management tools. But how exactly will it work?
It all starts with choosing the right location for the telescope. Astronomers want the sharpest images of the dimmest objects for their analyses, and they also want to maximize their observation time. They need the nights to be dark and the air to be dry and stable.
It turns out that the Atacama Desert, a plateau in the foothills of the Andes Mountains, scores very high for these criteria. That’s where LSST will be located—at nearly 8700 feet altitude on the Cerro Pachón ridge in Chile, 60 miles from the coastal town of La Serena.
The next challenge is that most objects LSST researchers want to study are so far away that their light has been traveling through space for millions to billions of years. It arrives on Earth merely as a faint glow, and astronomers need to collect as much of that glow as possible. For this purpose, LSST will have a large primary mirror with a diameter close to 28 feet.
The mirror will be part of a sophisticated three-mirror system that will reflect and focus the cosmic light into the camera.
The unique optical design is crucial for the telescope’s extraordinary field of view—a measure of the area of sky captured with every snapshot. At 9.6 square degrees, corresponding to 40 times the area of the full moon, the large field of view will allow astronomers to put together a complete map of the Southern night sky every few days.
After bouncing off the mirrors, the ancient cosmic light will enter the camera through a set of three large lenses. The largest one will have a diameter of more than 5 feet.
Together with the mirrors, the lenses’ job is to focus the light as sharply as possible onto the focal plane—a grid of light-sensitive sensors at the back of the camera where the light from the sky will be detected.
A filter changer will insert filters in front of the third lens, allowing astronomers to take images with different kinds of cosmic light that range from the ultraviolet to the near-infrared. This flexibility enhances the range of possible observations with LSST. For example, with an infrared filter researchers can look right through dust and get a better view of objects obscured by it. By comparing how bright an object is when seen through different filters, astronomers also learn how its emitted light varies with the wavelength, which reveals details about how the light is produced.
The heart of LSST’s camera is its 25-inch-wide focal plane. That’s where the light of stars and galaxies will be turned into electrical signals, which will then be used to reconstruct images of the sky. The focal plane will hold 189 imaging sensors, called charge-coupled devices, that perform this transformation.
Each CCD is 4096 pixels wide and long, and together they’ll add up to the camera’s 3.2 gigapixels. A “good” star will be the size of only a handful of pixels, whereas distant galaxies might appear as somewhat larger fuzzballs.
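The quoted pixel count follows directly from those figures (a quick arithmetic sketch, not code from the project):

```python
# Each CCD is 4096 x 4096 pixels; the focal plane holds 189 of them.
pixels_per_ccd = 4096 * 4096           # 16,777,216 pixels per sensor
total_pixels = 189 * pixels_per_ccd

print(total_pixels)                    # 3,170,893,824
print(round(total_pixels / 1e9, 2))    # 3.17, i.e. roughly 3.2 gigapixels
```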
The focal plane will consist of 21 smaller square arrays, called rafts, with nine CCDs each. This modular structure will make it easier and less costly to replace imaging sensors if needed in the future.
To the delight of astronomers interested in extremely dim objects, the camera will have a large aperture (f/1.2, for the photographers among us), meaning that it’ll let a lot of light onto the imaging sensors. However, the large aperture will also make the depth of field very shallow, which means that objects will become blurry very quickly if they are not precisely projected onto the focal plane. That’s why the focal plane will need to be extremely flat, demanding that individual CCDs don’t stick out or recess by more than 0.0004 inches.
To eliminate unwanted background signals, known as dark currents, the sensors will also need to be cooled to minus 150 degrees Fahrenheit. The temperature will need to be kept stable to half a degree. Because water vapor inside the camera housing would form ice on the sensors at this chilly temperature, the focal plane must also be kept in a vacuum.
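For readers who prefer metric units, the two tolerances convert as follows (a small sketch; only the conversion formulas are added, the figures come from the text):

```python
# Focal-plane flatness: CCDs may deviate by at most 0.0004 inches.
flatness_um = 0.0004 * 25.4 * 1000     # inches -> micrometers
print(round(flatness_um, 1))           # about 10.2 micrometers

# Sensor operating temperature: minus 150 degrees Fahrenheit, in Celsius.
temp_c = (-150 - 32) * 5 / 9
print(round(temp_c, 1))                # about -101.1 degrees Celsius
```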
In addition to the 189 “science” sensors that will capture images of the sky, the focal plane will have three specialty sensors in each of its four corners. Two so-called guiders will frequently monitor the position of a reference star and help LSST stay in sync with the Earth’s rotation. The third sensor, called a wavefront sensor, will be split into two halves that will be positioned six-hundredths of an inch above and below the focal plane. It’ll see objects as blurry “donuts” and provide information that will be used to adjust the telescope’s focus.

Cinematography of astronomical dimension
Once the camera has collected enough data from one patch of sky, which takes about 36 seconds, the telescope will be repositioned to look at the next spot. A computer algorithm will determine the patches in the sky that will be surveyed by LSST on any given night.
While the telescope is moving, a shutter between the filter and the camera’s third lens will close to prevent more light from falling onto the imaging sensors. At the same time, the CCDs will be read out and their information digitized.
The data will be sent into the processing and analysis pipeline that will handle LSST’s enormous flood of information (about 20 terabytes of data every single night). There, it will be turned into usable images. The system will also flag potentially interesting events and send out alerts to astronomers within a minute.
This way—patch by patch—a complete image of the entire Southern sky will be stitched together every few days. Then the imaging process will start over and repeat for the 10-year duration of the survey, ultimately creating the largest time-lapse movie of the universe ever made and providing researchers with unprecedented research opportunities.
These hardy physics components live at the center of particle production.
For some, a target is part of a game of darts. For others, it’s a retail chain. In particle physics, it’s the site of an intense, complex environment that plays a crucial role in generating the universe’s smallest components for scientists to study.
The target is an unsung player in particle physics experiments, often taking a back seat to scene-stealing light-speed particle beams and giant particle detectors. Yet many experiments wouldn’t exist without a target. And, make no mistake, a target that holds its own is a valuable player.
Scientists and engineers at Fermilab are currently investigating targets for the study of neutrinos—mysterious particles that could hold the key to the universe’s evolution.

Intense interactions
The typical particle physics experiment is set up in one of two ways. In the first, two energetic particle beams collide into each other, generating a shower of other particles for scientists to study.
In the second, the particle beam strikes a stationary, solid material—the target. In this fixed-target setup, the powerful meeting produces the particle shower.
As the crash pad for intense beams, a target requires a hardy constitution. It has to withstand repeated onslaughts of high-power beams and hold up under hot temperatures.
You might think that, as stalwart players in the play of particle production, targets would look like a fortress wall (or maybe you imagined a dartboard). But targets take different shapes—long and thin, bulky and wide. They’re also made of different materials, depending on the kind of particle one wants to make. They can be made of metal, water or even specially designed nanofibers.
In a fixed-target experiment, the beam—say, a proton beam—races toward the target, striking it. Protons in the beam interact with the target material’s nuclei, and the resulting particles shoot away from the target in all directions. Magnets then funnel and corral some of these newly born particles to a detector, where scientists measure their fundamental properties.

The particle birthplace
The particles that emerge from the beam-target interaction depend in large part on the target material. Consider Fermilab neutrino experiments.
In these experiments, after the protons strike the target, some of the particles in the subsequent particle shower decay—or transform—into neutrinos.
The target has to be made of just the right stuff.
“Targets are crucial for particle physics research,” says Fermilab scientist Bob Zwaska. “They allow us to create all of these new particles, such as neutrinos, that we want to study.”
Graphite is a goldilocks material for neutrino targets. If kept at the right temperature while in the proton beam, the graphite generates particles of just the right energy to be able to decay into neutrinos.
For neutron targets, such as that at the Spallation Neutron Source at Oak Ridge National Laboratory, heavier metals such as mercury are used instead.
Maximum interaction is the goal of a target’s design. The target for Fermilab’s NOvA neutrino experiment, for example, is a straight row—about the length of your leg—of graphite fins that resemble tall dominoes. The proton beam barrels down its axis, and every encounter with a fin produces an interaction. The thin shape of the target ensures that few of the particles shooting off after collision are reabsorbed into the target.

Robust targets
“As long as the scientists have the particles they need to study, they’re happy. But down the line, sometimes the targets become damaged,” says Fermilab engineer Patrick Hurh. In such cases, engineers have to turn down—or occasionally turn off—the beam power. “If the beam isn’t at full capacity or is turned off, we’re not producing as many particles as we can for science.”
The more protons that are packed into the beam, the more interactions they have with the target, and the more particles that are produced for research. So targets need to be in tip-top shape as much as possible. This usually means replacing targets as they wear down, but engineers are always exploring ways of improving target resistance, whether it’s through design or material.
Consider what targets are up against. It isn’t only high-energy collisions—the kinds of interactions that produce particles for study—that targets endure.
Lower-energy interactions can have long-term, negative impacts on a target, building up heat energy inside it. As the target material rises in temperature, it becomes more vulnerable to cracking. Expanding warm areas hammer against cool areas, creating waves of energy that destabilize its structure.
Some of the collisions in a high-energy beam can also create lightweight elements such as hydrogen or helium. These gases build up over time, creating bubbles and making the target less resistant to damage.
A proton from the beam can even knock off an entire atom, disrupting the target’s crystal structure and causing it to lose durability.
Clearly, being a target is no picnic, so scientists and engineers are always improving targets to better roll with the punches.
For example, graphite, used in Fermilab’s neutrino experiments, is resistant to thermal strain. And, since it is porous, built-up gases that might normally wedge themselves between atoms and disrupt their arrangement may instead migrate to open areas in the atomic structure. The graphite is able to remain stable and withstand the waves of energy from the proton beam.
Engineers also find ways to maintain a constant target temperature. They design it so that it’s easy to keep cool, integrating additional cooling instruments into the target design. For example, external water tubes help cool the target for Fermilab’s NOvA neutrino experiment.

Targets for intense neutrino beams
At Fermilab, scientists and engineers are also testing new designs for what will be the lab’s most powerful proton beam—the beam for the laboratory’s flagship Long-Baseline Neutrino Facility and Deep Underground Neutrino Experiment, known as LBNF/DUNE.
LBNF/DUNE is scheduled to begin operation in the 2020s. The experiment requires an intense beam of high-energy neutrinos—the most intense in the world. Only the most powerful proton beam can give rise to the quantities of neutrinos LBNF/DUNE needs.
Scientists are currently in the early testing stages for LBNF/DUNE targets, investigating materials that can withstand the high-power protons. Currently in the running are beryllium and graphite, which they’re stretching to their limits. Once they conclusively determine which material comes out on top, they’ll move to the design prototyping phase. So far, most of their tests are pointing to graphite as the best choice.
Targets will continue to evolve and adapt. LBNF/DUNE provides just one example of next-generation targets.
“Our research isn’t just guiding the design for LBNF/DUNE,” Hurh says. “It’s for the science itself. There will always be different and more powerful particle beams, and targets will evolve to meet the challenge.”
Editor's note: A version of this article was originally published by Fermilab.
It doesn’t seem like collisions of particles with no mass should be able to produce the “mass-giving” boson, the Higgs. But every other second at the LHC, they do.
Einstein’s most famous theory, often written as E=mc², tells us that energy and matter are two sides of the same coin.
The Large Hadron Collider uses this principle to convert the energy contained within ordinary particles into new particles that are difficult to find in nature—particles like the Higgs boson, which is so massive that it almost immediately decays into pairs of lighter, more stable particles.
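A rough sense of the energy bookkeeping (a sketch assuming the nominal LHC Run 2 beam energy of 6.5 TeV per proton, a figure not given in the article; in natural units, energy and mass can be compared directly via E=mc²):

```python
# All energies in GeV (1 TeV = 1000 GeV); with c = 1, mass and
# energy carry the same units and can be compared directly.
beam_energy = 6500            # per proton beam (assumed Run 2 value)
collision_energy = 2 * beam_energy
higgs_mass = 125              # Higgs boson mass, ~125 GeV/c^2

# The total collision energy dwarfs the Higgs mass-energy "price"...
print(collision_energy / higgs_mass)   # 104.0 times the Higgs mass

# ...though in practice only the fraction of each proton's energy
# carried by the colliding constituents is available to make new particles.
```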
But not just any collision can create a Higgs boson.
“The Higgs is not just created from a ‘poof’ of energy,” says Laura Dodd, a researcher at the University of Wisconsin, Madison. “Particles follow a strict set of laws that dictate how they can form, decay and interact.”
One of these laws states that Higgs bosons can be produced only by particles that interact with the Higgs field—in other words, particles with mass.
The Higgs field is like an invisible spider’s web that permeates all of space. As particles travel through it, some get tangled in the sticky tendrils, a process that makes them gain mass and slow down. But for other particles—such as photons and gluons—this web is completely transparent, and they glide through unhindered.
Given enough energy, the particles wrapped in the Higgs field can transfer their energy into it and kick out a Higgs boson. Because massless particles do not interact with the Higgs field, it would make sense to say that they can’t create a Higgs. But scientists at the LHC would beg to differ.
The LHC accelerates protons around its 17-mile circumference to just under the speed of light and then brings them into head-on collisions at four intersections along its ring. Protons are not fundamental particles (particles that cannot be broken down into any smaller constituent pieces); rather, they are made up of quarks and gluons.
As two pepped-up protons pass through each other, it’s usually pairs of massless gluons that infuse invisible fields with their combined energy and excite other particles into existence—and that includes Higgs bosons.
We know that particles follow strict rules about who can talk to whom. So how do massless gluons manage it? They have found a way to cheat.
“It would be impossible to generate Higgs bosons with gluons if the collisions in the LHC were a simple, one-step process,” says Richard Ruiz, a theorist at Durham University’s Institute for Particle Physics Phenomenology.
Luckily, they aren’t.
Gluons can momentarily “launder” their energy to a virtual particle, which converts the gluon’s energy into mass. If two gluons produce a pair of virtual top quarks, the tops can recombine and annihilate into a Higgs boson.
To be clear, virtual particles are not stable particles at all, but rather irregular disturbances in quantum mechanical fields that exist in a half-baked state for an incredibly short period of time. If a real particle were a thriving business, then a virtual particle would be a shell company.
Theorists predict that about 90 percent of Higgs bosons are created through gluon fusion. The probability of two gluons colliding, creating a top quark-antitop pair and fortuitously producing a Higgs is roughly one in 2 billion. However, because the LHC generates roughly a billion proton collisions every second, the odds are in scientists’ favor, and the production rate for Higgs bosons is roughly one every two seconds.
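Those two numbers fit together in a one-line sanity check. Assuming a per-collision probability of one in 2 billion and an order-of-magnitude interaction rate of a billion collisions per second (a rough public figure for the LHC, not a precise value from this article), one Higgs every two seconds is exactly what comes out:

```python
# Sanity check of the quoted Higgs production rate.
# Both inputs are rough, assumed figures, not a precision calculation.
p_higgs_per_collision = 1 / 2e9   # gluon fusion via a virtual top loop
collisions_per_second = 1e9       # order-of-magnitude LHC interaction rate

higgs_per_second = p_higgs_per_collision * collisions_per_second
print(1 / higgs_per_second)       # seconds per Higgs ≈ 2
```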
Shortly after the Higgs discovery, scientists were mostly focused on what happens to Higgs bosons after they decay, according to Dodd.
“But now that we have more data and a better understanding of the Higgs, we’re starting to look closer at the collision byproducts to better understand how frequently the Higgs is produced through the different mechanisms,” she says.
The Standard Model of particle physics predicts that almost all Higgs bosons are produced through one of four possible processes. What scientists would love to see are Higgs bosons being created in a way that the Standard Model of particle physics does not predict, such as in the decay of a new particle. Breaking the known rules would show that there is more going on than physicists previously understood.
“We know that particles follow strict rules about who can talk to whom because we’ve seen this time and time again during our experiments,” Ruiz says. “So now the question is, what if there is a whole sector of undiscovered particles that cannot communicate with our standard particles but can interact with the Higgs boson?”
Scientists are keeping an eye out for anything unexpected, such as an excess of certain particles radiating from a collision or decay paths that occur more or less frequently than scientists predicted. These indicators could point to undiscovered heavy particles morphing into Higgs bosons.
At the same time, to find hints of unexpected ingredients in the chain reactions that sometimes make Higgs bosons, scientists must know very precisely what they should expect.
“We have fantastic mathematical models that predict all this, and we know what both sides of the equations are,” Ruiz says. “Now we need to experimentally test these predictions to see if everything adds up, and if not, figure out what those extra missing variables might be.”
Scientists on the Axion Dark Matter Experiment have demonstrated technology that could lead to the discovery of theoretical light dark matter particles called axions.
Forty years ago, scientists theorized a new kind of low-mass particle that could solve one of the enduring mysteries of nature: what dark matter is made of. Now a new chapter in the search for that particle, the axion, has begun.
This week, the Axion Dark Matter Experiment (ADMX) unveiled a new result (published in Physical Review Letters) that places it in a category of one: It is the world’s first and only experiment to have achieved the necessary sensitivity to “hear” the telltale signs of these theoretical particles. This technological breakthrough is the result of more than 30 years of research and development, with the latest piece of the puzzle coming in the form of a quantum-enabled device that allows ADMX to listen for axions more closely than any experiment ever built.
ADMX is managed by the US Department of Energy’s Fermi National Accelerator Laboratory and located at the University of Washington. This new result, the first from the second-generation run of ADMX, sets limits on a small range of frequencies where axions may be hiding, and sets the stage for a wider search in the coming years.
“This result signals the start of the true hunt for axions,” says Fermilab’s Andrew Sonnenschein, the operations manager for ADMX. “If dark matter axions exist within the frequency band we will be probing for the next few years, then it’s only a matter of time before we find them.”
One theory suggests that galaxies are held together by a vast number of axions, low-mass particles that are almost invisible to detection as they stream through the cosmos. Efforts in the 1980s to find these particles, named by theorist Frank Wilczek, currently of the Massachusetts Institute of Technology, were unsuccessful, showing that their detection would be extremely challenging.
ADMX is an axion haloscope—essentially a large, low-noise, radio receiver, which scientists tune to different frequencies and listen to find the axion signal frequency. Axions almost never interact with matter, but with the aid of a strong magnetic field and a cold, dark, properly tuned, reflective box, ADMX can “hear” photons created when axions convert into electromagnetic waves inside the detector.
“If you think of an AM radio, it’s exactly like that,” says Gray Rybka, co-spokesperson for ADMX and assistant professor at the University of Washington. “We’ve built a radio that looks for a radio station, but we don't know its frequency. We turn the knob slowly while listening. Ideally we will hear a tone when the frequency is right.”
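The radio-dial analogy can be made concrete: an axion of mass m that converts inside the detector produces a photon of frequency f = mc²/h, so each spot on the dial corresponds to a candidate axion mass. The masses below are illustrative micro-electronvolt values chosen to land in the hundreds-of-megahertz range ADMX probes; they are assumptions, not figures from this article.

```python
# Convert a hypothetical axion mass to the photon frequency ADMX would hear.
# f = m c^2 / h; with mass expressed in eV, only Planck's constant is needed.
PLANCK_EV_S = 4.135667696e-15   # Planck constant in eV*s (CODATA value)

def axion_frequency_mhz(mass_ev):
    """Photon frequency in MHz for an axion of the given mass (eV/c^2)."""
    return mass_ev / PLANCK_EV_S / 1e6

# Illustrative micro-eV-scale masses (assumed, not from the article):
for mass in (2.66e-6, 2.81e-6):
    print(f"{mass:.2e} eV -> {axion_frequency_mhz(mass):.0f} MHz")
```

A micro-electronvolt of mass corresponds to roughly 240 MHz, which is why the search plays out across radio frequencies.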
This detection method, which might make the "invisible axion" visible, was invented by Pierre Sikivie of the University of Florida in 1983. Pioneering experiments and analyses by a collaboration of Fermilab, the University of Rochester and Brookhaven National Laboratory, as well as scientists at the University of Florida, demonstrated the practicality of the experiment. This led to the construction in the late 1990s of a large-scale detector at Lawrence Livermore National Laboratory that is the basis of the current ADMX.
Only recently, however, has the ADMX team been able to deploy superconducting quantum amplifiers to their full potential, enabling the experiment to reach unprecedented sensitivity. Previous runs of ADMX were stymied by background noise generated by thermal radiation and the machine’s own electronics.
Fixing thermal radiation noise is easy: A refrigeration system cools the detector down to 0.1 Kelvin (roughly -460 degrees Fahrenheit). But eliminating the noise from electronics proved more difficult. The first runs of ADMX used standard transistor amplifiers, but then ADMX scientists connected with John Clarke, a professor at the University of California Berkeley, who developed a quantum-limited amplifier for the experiment. This much quieter technology, combined with the refrigeration unit, reduces the noise by a significant enough level that the signal, should ADMX discover one, will come through loud and clear.
“The initial versions of this experiment, with transistor-based amplifiers, would have taken hundreds of years to scan the most likely range of axion masses. With the new superconducting detectors, we can search the same range on timescales of only a few years,” says Gianpaolo Carosi, co-spokesperson for ADMX and scientist at Lawrence Livermore National Laboratory.
“This result plants a flag,” says Leslie Rosenberg, professor at the University of Washington and chief scientist for ADMX. “It tells the world that we have the sensitivity, and have a very good shot at finding the axion. No new technology is needed. We don’t need a miracle anymore, we just need the time.”
ADMX will now test millions of frequencies at this level of sensitivity. If axions were found, it would be a major discovery that could explain not only dark matter, but other lingering mysteries of the universe. If ADMX does not find axions, that may force theorists to devise new solutions to those riddles.
“A discovery could come at any time over the next few years,” says scientist Aaron Chou of Fermilab. “It’s been a long road getting to this point, but we’re about to begin the most exciting time in this ongoing search for axions.”
Editor’s note: This article is based on a Fermilab press release.