Genealogy databases could reveal the identity of most Americans

Protecting the anonymity of publicly available genetic data, including DNA donated to research projects, may be impossible.

About 60 percent of people of European descent who search genetic genealogy databases will find a match with a relative who is a third cousin or closer, a new study finds. The result suggests that with a database of about 3 million people, police or anyone else with access to DNA data can figure out the identity of virtually any American of European descent, Yaniv Erlich and colleagues report online October 11 in Science.
Erlich, the chief science officer of the consumer genetic testing company MyHeritage, and colleagues examined his company’s database and that of the public genealogy site GEDMatch, each containing data from about 1.2 million people. Using DNA matches to relatives, along with family tree information and some basic demographic data, scientists estimate that they could narrow the identity of an anonymous DNA owner to just one or two people.

Recent cases identifying suspects in violent crimes through DNA searches of GEDMatch, such as the Golden State Killer case (SN Online: 4/29/18), have raised privacy concerns (SN Online: 6/7/18). And the same process used to find rape and murder suspects can also identify people who have donated anonymous DNA for genetic and medical research studies, the scientists say.

Genetic data used in research is stripped of information like names, ages and addresses, and can’t be used to identify individuals, government officials have said. But “that’s clearly untrue,” as Erlich and colleagues have demonstrated, says Rori Rohlfs, a statistical geneticist at San Francisco State University, who was not involved in the study.

Using genetic genealogy techniques that mirror searches for the Golden State Killer and suspects in at least 15 other criminal cases, Erlich’s team identified a woman who participated anonymously in the 1000 Genomes project. That project cataloged genetic variants in about 2,500 people from around the world.
Erlich’s team pulled the woman’s anonymous data from the publicly available 1000 Genomes database. The researchers then created a DNA profile similar to the ones generated by consumer genetic testing companies such as 23andMe and AncestryDNA (SN: 6/23/18, p.14) and uploaded that profile to GEDMatch.

A search turned up matches with two distant cousins, one from North Dakota and one from Wyoming. The cousins also shared DNA indicating that they had a common set of ancestors four to six generations ago. Building on some family tree information already collected by those cousins, researchers identified the ancestral couple and filled in hundreds of their descendants, looking for a woman who matched the age and other publicly available demographic data of the 1000 Genomes participant.

It took a day to find the right person.
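In outline, that last narrowing step is a filter over a reconstructed family tree: keep only the descendants whose public details fit the participant's known profile. The sketch below is a minimal, hypothetical illustration in Python; the names, fields and demographic profile are invented for illustration, not data from the study.

```python
# Hypothetical sketch of the final narrowing step: filter descendants of the
# shared ancestral couple against the participant's known demographic profile.
descendants = [
    {"name": "Candidate A", "sex": "F", "birth_year": 1952, "state": "UT"},
    {"name": "Candidate B", "sex": "F", "birth_year": 1978, "state": "ID"},
    {"name": "Candidate C", "sex": "M", "birth_year": 1950, "state": "UT"},
]

def matches(person, sex, birth_years, state):
    """True if a candidate fits the known sex, birth-year range and state."""
    low, high = birth_years
    return (person["sex"] == sex
            and low <= person["birth_year"] <= high
            and person["state"] == state)

# Invented profile standing in for the participant's public demographic data.
shortlist = [p for p in descendants if matches(p, "F", (1945, 1960), "UT")]
print(shortlist)  # ideally only one or two names survive the filter
```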

That example suggests scientists need to reconsider whether they can guarantee research participants’ anonymity if genetic data are publicly shared, Rohlfs says.

In reality, though, identifying a person from a DNA match with a distant relative is much harder than it appears, and requires a lot of expertise and gumshoe work, Ellen Greytak says. She is the director of bioinformatics at Parabon NanoLabs, a company in Reston, Va., that has helped close at least a dozen criminal cases since May using genetic genealogy searches. “The gulf between a match and identification is absolutely massive,” she says.

The company has also found that people of European descent often have DNA matches to relatives in GEDMatch. But tracking down a single suspect from those matches is often confounded by intermarriages, adoptions, aliases, cases of misidentified or unknown parentage and other factors, says CeCe Moore, a genealogist who spearheads Parabon’s genetic genealogy service.

“The study demonstrates the power of genetic genealogy in a theoretical way,” Moore says, “but doesn’t fully capture the challenges of the work in practice.” For instance, Erlich and colleagues already had some family tree information from the 1000 Genomes woman’s relatives, “so they had a significant head start.”

Erlich’s example might be an oversimplification, Rohlfs says. The researchers made rough estimates and assumptions that are not perfect, but the conclusion is solid, she says. “Their work is approximate, but totally reasonable.” And the conclusion, that almost anyone can be identified from DNA, should spark public discussion about how DNA data should be used for law enforcement and research, she says.

‘End of the Megafauna’ examines why so many giant Ice Age animals went extinct

Ross D.E. MacPhee and Peter Schouten (illustrator)
W.W. Norton & Co., $35

Today’s land animals are a bunch of runts compared with creatures from the not-too-distant past. Beasts as big as elephants, gorillas and bears were once much more common around the world. Then, seemingly suddenly, hundreds of big species, including the woolly mammoth, the giant ground sloth and a lizard weighing as much as half a ton, disappeared. In End of the Megafauna, paleomammalogist Ross MacPhee makes one thing clear: The science on what caused the extinctions of these megafauna — animals larger than 44 kilograms, or about 100 pounds — is far from settled.
MacPhee dissects the evidence behind two main ideas: that as humans moved into new parts of the world over the last 50,000 years, people hunted the critters into oblivion, or that changes in climate left the animals too vulnerable to survive. As MacPhee shows, neither scenario matches all of the available data.

Throughout, Peter Schouten’s illustrations, reminiscent of paintings that enliven natural history museums, bring the behemoths back to life. At times, MacPhee slips in too many technical terms. But overall, he offers readers an informative, up-to-date overview of a fascinating period in Earth’s history.


Engineers are plugging holes in drinking water treatment

Off a gravel road at the edge of a college campus — next door to the town’s holding pen for stray dogs — is a busy test site for the newest technologies in drinking water treatment.

In the large shed-turned-laboratory, University of Massachusetts Amherst engineer David Reckhow has started a movement. More people want to use his lab to test new water treatment technologies than the building has space for.

The lab is a revitalization success story. In the 1970s, when the Clean Water Act put new restrictions on water pollution, the diminutive gray building in Amherst, Mass., was a place to test those pollution-control measures. But funding was fickle, and over the years, the building fell into disrepair. In 2015, Reckhow brought the site back to life. He and a team of researchers cleaned out the junk, whacked the weeds that engulfed the building and installed hundreds of thousands of dollars’ worth of monitoring equipment, much of it donated or bought secondhand.

“We recognized that there’s a lot of need for drinking water technology,” Reckhow says. Researchers, students and start-up companies all want access to test ways to disinfect drinking water, filter out contaminants or detect water-quality slipups. On a Monday afternoon in October, the lab is busy. Students crunch data around a big table in the main room. Small-scale tests of technology that uses electrochemistry to clean water chug along, hooked up to monitors that track water quality. On a lab bench sits a graduate student’s low-cost replica of an expensive piece of monitoring equipment. The device alerts water treatment plants when the by-products of disinfection chemicals in a water supply are reaching dangerous levels. In an attached garage, two startup companies are running larger-scale tests of new kinds of membranes that filter out contaminants.
Parked behind the shed is the almost-ready-to-roll newcomer. Starting in 2019, the Mobile Water Innovation Laboratory will take promising new and affordable technologies to local communities for testing. That’s important, says Reckhow, because there’s so much variety in the quality of water that comes into drinking water treatment plants. On-site testing is the only way to know whether a new approach is effective, he says, especially for newer technologies without long-term track records.

The facility’s popularity reflects a persistent concern in the United States: how to ensure affordable access to clean, safe drinking water. Although U.S. drinking water is heavily regulated and pretty clean overall, recent high-profile contamination cases, such as the 2014 lead crisis in Flint, Mich. (SN: 3/19/16, p. 8), have exposed weaknesses in the system and shaken people’s trust in their tap water.
Tapped out
In 2013 and 2014, 42 drinking water–associated outbreaks resulted in more than 1,000 illnesses and 13 deaths, based on reports to the U.S. Centers for Disease Control and Prevention. The top culprits were Legionella bacteria and some form of chemical, toxin or parasite, according to data published in November 2017.

Those numbers tell only part of the story, however. Many of the contaminants that the U.S. Environmental Protection Agency regulates through the 1974 Safe Drinking Water Act cause problems only when exposure happens over time; the effects of contaminants like lead don’t appear immediately after exposure. Records of EPA rule violations note that in 2015, 21 million people were served by drinking water systems that didn’t meet standards, researchers reported in a February study in the Proceedings of the National Academy of Sciences. That report tracked trends in drinking water violations from 1982 to 2015.
Current technology can remove most contaminants, says David Sedlak, an environmental engineer at the University of California, Berkeley. Those include microbes, arsenic, nitrates and lead. “And then there are some that are very difficult to degrade or transform,” such as industrial chemicals called PFAS.

Smaller communities, especially, can’t always afford top-of-the-line equipment or infrastructure overhauls to, for example, replace lead pipes. So Reckhow’s facility is testing approaches to help communities address water-quality issues in affordable ways.
Some researchers are adding technologies to deal with new, potentially harmful contaminants. Others are designing approaches that work with existing water infrastructure or clean up contaminants at their source.

How is your water treated?
A typical drinking water treatment plant sends water through a series of steps.

First, coagulants are added to the water. These chemicals clump together sediments, which can cloud water or make it taste funny, so they are bigger and easier to remove. A gentle shaking or spinning of the water, called flocculation, helps those clumps form (1). Next, the water flows into big tanks to sit for a while so the sediments can fall to the bottom (2). The cleaner water then moves through membranes that filter out smaller contaminants (3). Disinfection, via chemicals or ultraviolet light, kills harmful bacteria and viruses (4). Then the water is ready for distribution (5).
There’s a lot of room for variation within that basic water treatment process. Chemicals added at different stages can trigger reactions that break down chunky, toxic organic molecules into less harmful bits. Ion-exchange systems that separate contaminants by their electric charge can remove ions like magnesium or calcium that make water “hard,” as well as heavy metals, such as lead and arsenic, and nitrates from fertilizer runoff. Cities mix and match these strategies, adjusting chemicals and prioritizing treatment components, based on the precise chemical qualities of the local water supply.

Some water utilities are streamlining the treatment process by installing technologies like reverse osmosis, which removes nearly everything from the water by forcing the water molecules through a selectively permeable membrane with extremely tiny holes. Reverse osmosis can replace a number of steps in the water treatment process or reduce the number of chemicals added to water. But it’s expensive to install and operate, keeping it out of reach for many cities.

Fourteen percent of U.S. residents get water from wells and other private sources that aren’t regulated by the Safe Drinking Water Act. These people face the same contamination challenges as municipal water systems, but without the regulatory oversight, community support or funding.

“When it comes to lead in private wells … you’re on your own. Nobody is going to help you,” says Marc Edwards, the Virginia Tech engineer who helped uncover the Flint water crisis. Edwards and Virginia Tech colleague Kelsey Pieper collected water-quality data from over 2,000 wells across Virginia in 2012 and 2013. Some were fine, but others had lead levels of more than 100 parts per billion. When levels exceed the EPA’s 15 ppb threshold, the agency mandates that cities take steps to control corrosion and notify the public about the contamination. The researchers reported those findings in 2015 in the Journal of Water and Health.

To remove lead and other contaminants, well users often rely on point-of-use treatments. A filter on the tap removes most, but not all, contaminants. Some people spring for costly reverse osmosis systems.
New tech solutions
These three new water-cleaning approaches wouldn’t require costly infrastructure overhauls.

Ferrate to cover many bases
Reckhow’s team at UMass Amherst is testing ferrate, an ion of iron, as a replacement for several water treatment steps. First, ferrate kills bacteria in the water. Next, it breaks down carbon-based chemical contaminants into smaller, less harmful molecules. Finally, it makes ions like manganese less soluble in water so they are easier to filter out, Reckhow and colleagues reported in 2016 in Journal–American Water Works Association. With its multifaceted effects, ferrate could potentially streamline the drinking water treatment process or reduce the use of chemicals, such as chlorine, that can yield dangerous by-products, says Joseph Goodwill, an environmental engineer at the University of Rhode Island in Kingston.

Ferrate could be a useful disinfectant for smaller drinking water systems that don’t have the infrastructure, expertise or money to implement something like ozone treatment, an approach that uses ozone gas to break down contaminants, Reckhow says.

Early next year, in the maiden voyage of his mobile water treatment lab, Reckhow plans to test the ferrate approach in the small Massachusetts town of Gloucester.
In the 36-foot trailer is a squeaky-clean array of plastic pipes and holding tanks. The setup routes incoming water through the same series of steps — purifying, filtering and disinfecting — that one would find in a standard drinking water treatment plant. With two sets of everything, scientists can run side-by-side experiments, comparing a new technology’s performance against the standard approach. That way researchers can see whether a new technology works better than existing options, says Patrick Wittbold, the UMass Amherst research engineer who headed up the trailer’s design.

Charged membranes
Filtering membranes tend to get clogged with small particles. “That’s been the Achilles’ heel of membrane treatment,” says Brian Chaplin, an engineer at the University of Illinois at Chicago. Unclogging the filter wastes energy and increases costs. Electricity might solve that problem and offer some side benefits, Chaplin suggests.

His team tested an electrochemical membrane made of titanium oxide or titanium dioxide that both filters water and acts as an electrode. Chemical reactions happening on the electrically charged membranes can turn nitrates into nitrogen gas or split water molecules, generating reactive ions that can oxidize contaminants in the water. The reactions also prevent particles from sticking to the membrane. Large carbon-based molecules like benzene become smaller and less harmful.
In lab tests, the membranes effectively filtered and destroyed contaminants, Chaplin says. In one test, a membrane transformed 67 percent of the nitrates in a solution into other molecules. The finished water was below the EPA’s regulatory nitrate limit of 10 parts per million, he and colleagues reported in July in Environmental Science and Technology. Chaplin expects to move the membrane into pilot tests within the next two years.
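As a quick back-of-envelope check of how those two numbers fit together (the feed concentration below is inferred for illustration, not reported in the study):

```python
# If 67 percent of the nitrate is transformed and the finished water still comes
# in under the 10 ppm limit, the feed water could have held at most ~30 ppm.
removal = 0.67        # fraction of nitrate transformed (from the study)
limit_ppm = 10.0      # EPA nitrate limit
max_feed_ppm = limit_ppm / (1 - removal)
print(f"maximum feed nitrate ~{max_feed_ppm:.0f} ppm")  # ~30 ppm
```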

Obliterate the PFAS
The industrial chemicals known as PFAS present two challenges. Only the larger ones are effectively removed by granular activated carbon, the active material in many household water filters. The smaller PFAS remain in the water, says Christopher Higgins, an environmental engineer at the Colorado School of Mines in Golden. Plus, filtering isn’t enough because the chunky chemicals are hard to break down for safe disposal.

Higgins and colleague Timothy Strathmann, also at the Colorado School of Mines, are working on a process to destroy PFAS. First, a specialized filter with tiny holes grabs the molecules out of the water. Then, sulfite is added to the concentrated mixture of contaminants. When hit with ultraviolet light, the sulfite generates reactive electrons that break down the tough carbon-fluorine bonds in the PFAS molecules. Within 30 minutes, the combination of UV radiation and sulfites almost completely destroyed one type of PFAS, other researchers reported in 2016 in Environmental Science and Technology.

Soon, Higgins and Strathmann will test the process at Peterson Air Force Base in Colorado, one of nearly 200 U.S. sites known to have groundwater contaminated by PFAS. Cleaning up those sites would remove the pollutants from groundwater that may also feed wells or city water systems.

NASA’s OSIRIS-REx finds signs of water on the asteroid Bennu

As the asteroid Bennu comes into sharper focus, planetary scientists are seeing signs of water locked up in the asteroid’s rocks, NASA team members announced December 10.

“It’s one of the things we were hoping to find,” team member Amy Simon of NASA’s Goddard Space Flight Center in Greenbelt, Md., said in a news conference at the American Geophysical Union meeting in Washington, D.C. “This is evidence of liquid water in Bennu’s past. This is really big news.”
NASA’s OSIRIS-REx spacecraft just arrived at Bennu on December 3 (SN Online: 12/3/18). Over the next year, the team will search for the perfect spot on the asteroid to grab a handful of dust and return it to Earth. “Very early in the mission, we’ve found out Bennu is going to provide the type of material we want to return,” said principal investigator Dante Lauretta of the University of Arizona in Tucson. “It definitely looks like we’ve gone to the right place.”

OSIRIS-REx’s onboard spectrometers measure the chemical signatures of various minerals based on the wavelengths of light they emit and absorb. The instruments picked up signs of hydrated minerals on Bennu’s surface about a month before the spacecraft arrived at the asteroid, and the signal remained strong across the asteroid’s surface as the spacecraft approached, Simon said. Those minerals can form only in the presence of liquid water, suggesting that Bennu had a hydrothermal system in its past.

Bennu’s surface is also covered in more boulders and craters than the team had expected based on observations of the asteroid taken from Earth. Remote observations led the team to expect a few large boulders, about 10 meters wide. Instead they see hundreds, some of them up to 50 meters wide.

“It’s a little more rugged of an environment,” Lauretta said. But that rough surface can reveal details of Bennu’s internal structure and history.
If Bennu were one solid mass, for instance, a major impact could crack or shatter its entire surface. The fact that it bears large craters but hasn’t broken apart suggests the asteroid absorbed those blows. It may be more of a rubble pile loosely held together by its own gravity.
The asteroid’s density supports the rubble pile idea. OSIRIS-REx’s first estimate of Bennu’s density shows it is about 1,200 kilograms per cubic meter, Lauretta said. The average rock is about 3,000 kilograms per cubic meter. The hydrated minerals go some way towards lowering the asteroid’s density, since water is less dense than rock. But up to 40 percent of the asteroid may be full of caves and voids as well, Lauretta said.
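A rough way to see how those figures hang together: bulk density equals grain density times one minus the void fraction. The 2,000 kilograms per cubic meter grain density below is an assumed value for water-rich rock, used only for illustration; it is not a mission measurement.

```python
# Back-of-envelope porosity estimate: void_fraction = 1 - bulk_density / grain_density.
bulk = 1200.0                    # kg/m^3, OSIRIS-REx's first estimate for Bennu
for grain in (3000.0, 2000.0):   # ordinary rock vs. an assumed hydrated, water-rich rock
    void_fraction = 1 - bulk / grain
    print(f"grain density {grain:.0f} kg/m^3 -> void fraction ~{void_fraction:.0%}")
# ~60% voids if the grains were ordinary rock; ~40% if hydrated minerals
# lower the grain density, consistent with the figures quoted above.
```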

Some of the rocks on the surface appear to be fractured in a spindly pattern. “If you drop a dinner plate on the ground, you get a spider web of fractures,” says team member Kevin Walsh of the Southwest Research Institute in Boulder, Colo. “We’re seeing this in some boulders.”

The boulders may have cracked in response to the drastic change in temperatures they experience as the asteroid spins. Studying those fracture patterns in more detail will reveal the properties of the rocks.

The OSIRIS-REx team also needs to know how many boulders of various sizes are strewn across the asteroid’s surface. Any rock larger than about 20 centimeters across would pose a hazard to the spacecraft’s sampling arm, says Keara Burke of the University of Arizona. Burke, an undergraduate engineering student, is heading up a boulder mapping project.
“My primary goal is safety,” she says. “If it looks like a boulder to me, within reasonable guidelines, then I mark it as a boulder. We can’t sample anything if we’re going to crash.”

The team also needs to know where the smallest grains of rock and dust are, as OSIRIS-REx’s sampling arm can pick up grains only about 2 centimeters across. One way to find the small rocks is to measure how well the asteroid’s surface retains heat. Bigger rocks are slower to heat up and slower to cool down, so they’ll radiate heat out into space even on the asteroid’s night side. Smaller grains of dust heat up and cool down much more quickly.

“It’s exactly like a beach,” Walsh says. “During the day it’s scalding hot, but then it’s instantly cold when the sun sets.”
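The physical quantity behind the beach analogy is thermal inertia, the square root of conductivity times density times heat capacity. The sketch below uses generic laboratory-style values, not OSIRIS-REx measurements, just to show how sharply fine grains differ from solid rock.

```python
# Illustrative thermal inertia comparison (generic values, not mission data).
# Low thermal inertia = heats up and cools down fast, the signature of fine grains.
import math

materials = {
    "solid rock":    {"k": 2.0,   "rho": 3000.0, "c": 800.0},  # W/m/K, kg/m^3, J/kg/K
    "fine regolith": {"k": 0.003, "rho": 1500.0, "c": 600.0},
}
for name, m in materials.items():
    gamma = math.sqrt(m["k"] * m["rho"] * m["c"])
    print(f"{name}: thermal inertia ~ {gamma:.0f} J m^-2 K^-1 s^-0.5")
```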

Measurements of the asteroid’s heat storage so far suggest that there are regions with grains as small as 1 or 2 centimeters across, Lauretta said, though it is still too early to be certain.

“I am confident that we’ll find some fine-grained regions,” Lauretta said. Some may be located inside craters. The challenge will be finding an area wide enough that the spacecraft’s navigation system can steer to it accurately.

New Horizons shows Ultima Thule looks like a snowman, or maybe BB-8

The results are in: Ultima Thule, the distant Kuiper Belt object that got a close visit from the New Horizons spacecraft on New Year’s Day, looks like two balls stuck together.

“What you are seeing is the first contact binary ever explored by a spacecraft, two separate objects that are now joined together,” principal investigator Alan Stern of the Southwest Research Institute in Boulder, Colo., said January 2 in a news conference held at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md.

“It’s a snowman, if it’s anything at all,” Stern said. (Twitter was quick to supply another analogy: the rolling BB-8 droid from Star Wars.)

That shape is enough to lend credence to the idea that planetary bodies grow up by the slow clumping of small rocks. Ultima Thule, whose official name is 2014 MU69, is thought to be among the oldest and least-altered objects in the solar system, so knowing how it formed can reveal how planets formed in general (SN Online: 12/18/18).
“Think of New Horizons as a time machine … that has brought us back to the very beginning of solar system history, to a place where we can observe the most primordial building blocks of the planets,” said Jeff Moore of NASA’s Ames Research Center in Moffett Field, Calif., who leads New Horizons’ geology team. “It’s gratifying to see these perfectly formed contact binaries in their native habitat. Our ideas of how these things form seem to be somewhat vindicated by these observations.”

The view from about 28,000 kilometers away shows that MU69 is about 33 kilometers long and has two spherical lobes, one about three times the size of the other. The spheres are connected by a narrow “neck” that appears brighter than much of the rest of the surface.
That could be explained by small grains of surface material rolling downhill to settle in the neck, because small grains tend to reflect more light than large ones, said New Horizons deputy project scientist Cathy Olkin of the Southwest Research Institute. Even the brightest areas reflected only about 13 percent of the sunlight that hit them, though. The darkest reflected just 6 percent, about the same brightness as potting soil.

Measurements also show that MU69 rotates once every 15 hours, give or take one hour. That’s a Goldilocks rotation speed, Olkin said. If it spun too fast, MU69 would break apart; too slow would be hard to explain for such a small body. Fifteen hours is just right.
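A standard back-of-envelope way to see why spinning too fast matters: a strengthless rubble pile flies apart if its rotation period drops below roughly the square root of 3π divided by G times its density. The density below is an assumed, illustrative value; the article does not give one for MU69.

```python
# Rough spin-barrier estimate for a strengthless rubble pile (illustrative only).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
rho = 1000.0       # assumed bulk density in kg/m^3 (not a measured value for MU69)

p_crit_hours = math.sqrt(3 * math.pi / (G * rho)) / 3600
print(f"critical period ~ {p_crit_hours:.1f} hours")  # ~3.3 hours
# A 15-hour day is comfortably slower than this limit, consistent with a small,
# loosely bound body that has not spun itself apart.
```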

The lobes’ spherical shape is best explained by collections of small rocks glomming together to form larger rocks, Moore said. The collisions between the rocks happened at extremely slow speeds, so the rocks accreted rather than breaking each other apart. The final collision was between the two spheres, which the team dubbed “Ultima” (the bigger one) and “Thule” (the smaller one).
That collision probably happened at no more than a few kilometers per hour, “the speed at which you might park your car in a parking space,” Moore said. “If you had a collision with another car at those speeds, you may not even bother to fill out the insurance forms.”

New Horizons also picked up MU69’s reddish color. The science team thinks the rusty hue comes from radiation altering exotic ice, frozen material like methane or nitrogen rather than water, although they don’t know exactly what that ice is made of yet.

The spacecraft is still sending data back to Earth, and will continue transmitting details of the flyby for the next 18 months. Even as the New Horizons team members shared the first pictures from the spacecraft’s flyby, data was arriving that will reveal details of MU69’s surface composition.

“The real excitement today is going to be in the composition team room,” Olkin said. “There’s no way to make anything like this type of observation without having a spacecraft there.”

One Antarctic ice shelf gets half its annual snowfall in just 10 days

Just a few powerful storms in Antarctica can have an outsized effect on how much snow parts of the southernmost continent get. Those ephemeral storms, preserved in ice cores, might give a skewed view of how quickly the continent’s ice sheet has grown or shrunk over time.

Relatively rare extreme precipitation events are responsible for more than 40 percent of the total annual snowfall across most of the continent — and in some places, as much as 60 percent, researchers report March 22 in Geophysical Research Letters.
Climatologist John Turner of the British Antarctic Survey in Cambridge and his colleagues used regional climate simulations to estimate daily precipitation across the continent from 1979 to 2016. Then, the team zoomed in on 10 locations — representing different climates from the dry interior desert to the often snowy coasts and the open ocean — to determine regional differences in snowfall.

While snowfall amounts vary greatly by location, extreme events packed the biggest wallop along Antarctica’s coasts, especially on the floating ice shelves, the researchers found. For instance, the Amery ice shelf in East Antarctica gets roughly half of its annual precipitation — which typically totals about half a meter of snow — in just 10 days, on average. In 1994, the ice shelf got 44 percent of its entire annual precipitation on a single day in September.
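The bookkeeping behind numbers like these is simple ranking arithmetic: sort a year of daily totals and ask what share the wettest days deliver. The sketch below uses synthetic random data purely for illustration, not the study's climate-model output.

```python
# What fraction of a year's precipitation falls on the N wettest days?
import random

random.seed(0)
# A sparse, skewed fake daily series: ~30 percent of days get any precipitation.
daily_mm = [random.expovariate(1 / 1.5) if random.random() < 0.3 else 0.0
            for _ in range(365)]

def top_n_fraction(series, n=10):
    """Share of the annual total delivered by the n largest daily events."""
    return sum(sorted(series, reverse=True)[:n]) / sum(series)

print(f"top 10 days supply {top_n_fraction(daily_mm):.0%} of the annual total")
```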

Ice cores aren’t just a window into the past; they are also used to predict the continent’s future in a warming world. So characterizing these coastal regions is crucial for understanding Antarctica’s ice sheet — and its potential future contribution to sea level rise.
Editor’s note: This story was updated April 5, 2019, to correct that the results were reported March 22 (not March 25).

‘Ghost Particle’ chronicles the neutrino’s discovery and what’s left to learn

We live in a sea of neutrinos. Every second, trillions of them pass through our bodies. They come from the sun, nuclear reactors, cosmic rays colliding with Earth’s atmosphere, even the Big Bang. Among fundamental particles, only photons are more numerous. Yet because neutrinos barely interact with matter, they are notoriously difficult to detect.

The existence of the neutrino was first proposed in the 1930s and then verified in the 1950s (SN: 2/13/54). Decades later, much about the neutrino — named in part because it has no electric charge — remains a mystery, including how many varieties of neutrinos exist, how much mass they have, where that mass comes from and whether they have any magnetic properties.
These mysteries are at the heart of Ghost Particle by physicist Alan Chodos and science journalist James Riordon. The book is an informative, easy-to-follow introduction to the perplexing particle. Chodos and Riordon guide readers through how the neutrino was discovered, what we know — and don’t know — about it, and the ongoing and future experiments that (fingers crossed) will provide the answers.

It’s not just neutrino physicists who await those answers. Neutrinos, Riordon says, “are incredibly important both for understanding the universe and our existence in it.” Unmasking the neutrino could be key to unlocking the nature of dark matter, for instance. Or it could clear up the universe’s matter conundrum: The Big Bang should have produced equal amounts of matter and antimatter, the oppositely charged counterparts of electrons, protons and so on. When matter and antimatter come into contact, they annihilate each other. So in theory, the universe today should be empty — yet it’s not (SN: 9/22/22). It’s filled with matter and, for some reason, very little antimatter.

Science News spoke with Riordon, a frequent contributor to the magazine, about these puzzles and how neutrinos could act as a tool to observe the cosmos or even see into our own planet. The following conversation has been edited for length and clarity.

SN: In the first chapter, you list eight unanswered questions about neutrinos. Which is the most pressing to answer?

Riordon: Whether they’re their own antiparticles is probably one of the grandest. The proposal that neutrinos are their own antiparticles is an elegant solution to all sorts of problems, including the existence of this residue of matter we live in. Another one is figuring out how neutrinos fit in the standard model [of particle physics]. It’s one of the most successful theories there is, but it can’t explain the fact that neutrinos have mass.
SN: Why is now a good time to write a book about neutrinos?

Riordon: All of these questions about neutrinos are sort of coming to a head right now — the hints that neutrinos may be their own antiparticles, the issues of neutrinos not quite fitting the standard model, whether there are sterile neutrinos [a hypothetical neutrino that is a candidate for dark matter]. In the next few years, a decade or so, there will be a lot of experiments that will [help answer these questions,] and the resolution either way will be exciting.

SN: Neutrinos could also be used to help scientists observe a range of phenomena. What are some of the most interesting questions neutrinos could help with?

Riordon: There are some observations that simply have to be done with neutrinos, that there are no other technological alternatives for. There’s a problem with using light-based telescopes to look back in history. We have this really amazing James Webb Space Telescope that can see really far back in history. But at some point, when you go far enough back, the universe is basically opaque to light; you can’t see into it. Once we narrow down how to detect and how to measure the cosmic neutrino background [neutrinos that formed less than a second after the Big Bang], it will be a way to look back at the very beginning. Other than with gravitational waves, you can’t see back that far with anything else. So it’ll give us sort of a telescope back to the beginning of the universe.

The other thing is, when a supernova happens, all kinds of really cool stuff happens inside, and you can see it with neutrinos because neutrinos come out immediately in a burst. We call it the “cosmic neutrino bomb,” but you can track the supernova as it’s going along. With light, it takes a while for it to get out [of the stellar explosion]. We’re due for a [nearby] supernova. We haven’t had one since 1987. It was the last visible supernova in the sky and was a boon for research. Now that we have neutrino detectors around the world, this next one is going to be even better [for research], even more exciting.

And if we develop better instrumentation, we could use neutrinos to understand what’s going on in the center of the Earth. There’s no other way that you could probe the center of the Earth. We use seismic waves, but the resolution is really low. So we could resolve a lot of questions about what the planet is made of with neutrinos.

SN: Do you have a favorite “character” in the story of neutrinos?

Riordon: I’m certainly very fond of my grandfather Clyde Cowan [he and Frederick Reines were the first physicists to detect neutrinos]. But Reines is a riveting character. He was poetic. He was a singer. He really was this creative force. I mentioned [in the book] that they put this “SNEWS” sign on their detector for “supernova early warning system,” which sort of echoed the ballistic missile early warning systems at the time [during the Cold War]. That’s so ripe.

Astronomers spotted shock waves shaking the web of the universe for the first time

For the first time, astronomers have caught a glimpse of shock waves rippling along strands of the cosmic web — the enormous tangle of galaxies, gas and dark matter that fills the observable universe.

Combining hundreds of thousands of radio telescope images revealed the faint glow cast as shock waves send charged particles flying through the magnetic fields that run along the cosmic web. Spotting these shock waves could give astronomers a better look at these large-scale magnetic fields, whose properties and origins are largely mysterious, researchers report in the Feb. 17 Science Advances.
Finally, astronomers “can confirm what so far has only been predicted by simulations — that these shock waves exist,” says astrophysicist Marcus Brüggen of the University of Hamburg in Germany, who was not involved in the new study.

At its grandest scale, our universe looks something like Swiss cheese. Galaxies aren’t distributed evenly through space but rather are clumped together in enormous clusters connected by ropy filaments of dilute gas, galaxies and dark matter and separated by not-quite-empty voids (SN: 10/3/19).

Tugged by gravity, galaxy clusters merge, filaments collide, and gas from the voids falls onto filaments and clusters. In simulations of the cosmic web, all that action consistently sets off enormous shock waves in and along filaments.

Filaments make up most of the cosmic web but are much harder to spot than galaxies (SN: 1/20/14). While scientists have observed shock waves around galaxy clusters before, shocks in filaments “have never been really seen,” says astronomer Reinout van Weeren of Leiden University in the Netherlands, who was not involved in the study. “But they should be basically all around the cosmic web.”

Shock waves around filaments would accelerate charged particles through the magnetic fields that suffuse the cosmic web (SN: 6/6/19). When that happens, the particles emit light at wavelengths that radio telescopes can detect — though the signals are very weak.
A single shock wave in a filament “would look like nothing, it’d look like noise,” says radio astronomer Tessa Vernstrom of the International Centre for Radio Astronomy Research in Crawley, Australia.

Instead of looking for individual shock waves, Vernstrom and her colleagues combined radio images of more than 600,000 pairs of galaxy clusters close enough to be connected by filaments to create a single “stacked” image. This amplified weak signals and revealed that, on average, there is a faint radio glow from the filaments between clusters.
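A hedged toy calculation shows why stacking pays off: averaging N independent images leaves a shared signal in place while the noise shrinks by roughly the square root of N. The brightness and noise values below are invented stand-ins, not numbers from the paper.

```python
# Stacking arithmetic: signal-to-noise improves ~sqrt(N) when noise is independent.
import numpy as np

signal = 0.01        # assumed filament brightness, in units of the per-image noise
n_images = 600_000   # order of the number of cluster pairs stacked in the study

print(f"single-image SNR ~{signal:.2f}, stacked SNR ~{signal * np.sqrt(n_images):.1f}")

# Quick numerical check with a smaller stack of one noisy pixel:
rng = np.random.default_rng(1)
n = 50_000
stacked = (signal + rng.normal(0.0, 1.0, n)).mean()
print(f"stack of {n}: measured {stacked:.4f}, expected noise ~{1 / np.sqrt(n):.4f}")
```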

“When you can dig below the noise and still actually get a result — to me, that’s personally exciting,” Vernstrom says.

The faint signal is highly polarized, meaning that the radio waves are mostly aligned with one another. Highly polarized light is unusual in the cosmos, but it is expected from radio light cast by shock waves, van Weeren says. “So that’s really, I think, very good evidence for the fact that the shocks are likely indeed present.”
The discovery goes beyond confirming the predictions of cosmic web simulations. The polarized radio emissions also offer a rare peek at the magnetic fields that permeate the cosmic web, if only indirectly.

“These shocks,” Brüggen says, “are really able to show that there are large-scale magnetic fields that form [something] like a sheath around these filaments.”

He, van Weeren and Vernstrom all note that it’s still an open question how cosmic magnetic fields arose in the first place. The role these fields play in shaping the cosmic web is equally mysterious.

“It’s one of the four fundamental forces of nature, right? Magnetism,” Vernstrom says. “But at least on these large scales, we don’t really know how important it is.”

What the first look at the genetics of Chernobyl’s dogs revealed

For generations of dogs, home is the radioactive remains of the Chernobyl Nuclear Power Plant.

In the first genetic analysis of these animals, scientists have discovered that dogs living in the power plant industrial area are genetically distinct from dogs living farther away.

Though the team could distinguish between dog populations, the researchers did not pinpoint radiation as the reason for any genetic differences. But future studies that build on the findings, reported March 3 in Science Advances, may help uncover how radioactive environments leave their mark on animal genomes.
That could have implications for other nuclear disasters and even human space travel, says Timothy Mousseau, an evolutionary ecologist at the University of South Carolina in Columbia. “We have high hopes that what we learn from these dogs … will be of use for understanding human exposures in the future,” he says.

Since his first trip in 1999, Mousseau has stopped counting how many times he’s been to Chernobyl. “I lost track after we hit about 50 visits.”

He first encountered Chernobyl’s semi-feral dogs in 2017, on a trip with the Clean Futures Fund+, an organization that provides veterinary care to the animals. Not much is known about how local dogs survived after the nuclear accident. In 1986, an explosion at one of the power plant’s reactors kicked off a disaster that lofted vast amounts of radioactive isotopes into the air. Contamination from the plant’s radioactive cloud largely settled nearby, in a region now called the Chernobyl Exclusion Zone.

Dogs have lived in the area since the disaster, fed by Chernobyl cleanup workers and tourists. Some 250 strays were living in and around the power plant, among spent fuel-processing facilities and in the shadow of the ruined reactor. Hundreds more roam farther out in the exclusion zone, an area about the size of Yosemite National Park.
During Mousseau’s visits, his team collected blood samples from these dogs for DNA analysis, which let the researchers map out the dogs’ complex family structures. “We know who’s related to who,” says Elaine Ostrander, a geneticist at the National Human Genome Research Institute in Bethesda, Md. “We know their heritage.”

The canine packs are not just a hodgepodge of wild feral dogs, she says. “There are actually families of dogs breeding, living, existing in the power plant,” she says. “Who would have imagined?”

Dogs within the exclusion zone share ancestry with German shepherds and other shepherd breeds, like many other free-breeding dogs from Eastern Europe, the team reports. And though their work revealed that dogs in the power plant area look genetically different from dogs in Chernobyl City, about 15 kilometers away, the team does not know whether radiation caused these differences or not, Ostrander says. The dogs may be genetically distinct simply because they’re living in a relatively isolated area.

The new finding is not so surprising, says Jim Smith, an environmental scientist at the University of Portsmouth in England. He was not part of the new study but has worked in this field for decades. He’s concerned that people might assume “that the radiation has something to do with it,” he says. But “there’s no evidence of that.”

Scientists have been trying to pin down how radiation exposure at Chernobyl has affected wildlife for decades (SN: 5/2/14). “We’ve been looking at the consequences for birds and rodents and bacteria and plants,” Mousseau says. His team has found animals with elevated mutation rates, shortened life spans and early-onset cataracts.

It’s not easy to tease out the effects of low-dose radiation from other factors, Smith says. “[These studies] are so hard … there’s lots of other stuff going on in the natural environment.” What’s more, animals can reap some benefits when humans leave contaminated zones, he says.

How, or if, radiation damage is piling up in dogs’ genomes is something the team is looking into now, Ostrander says. Knowing the dogs’ genetic backgrounds will make it easier to spot any radiation red flags, says Bridgett vonHoldt, an evolutionary geneticist at Princeton University, who was not involved in the work.

“I feel like it’s a cliffhanger,” she says. “I want to know more.”

Google’s quantum computer reached an error-correcting milestone

To shrink error rates in quantum computers, sometimes more is better. More qubits, that is.

The quantum bits, or qubits, that make up a quantum computer are prone to mistakes that could render a calculation useless if not corrected. To reduce that error rate, scientists aim to build a computer that can correct its own errors. Such a machine would combine the powers of multiple fallible qubits into one improved qubit, called a “logical qubit,” that can be used to make calculations (SN: 6/22/20).

Scientists now have demonstrated a key milestone in quantum error correction. Scaling up the number of qubits in a logical qubit can make it less error-prone, researchers at Google report February 22 in Nature.
Future quantum computers could solve problems impossible for even the most powerful traditional computers (SN: 6/29/17). To build those mighty quantum machines, researchers agree that they’ll need to use error correction to dramatically shrink error rates. While scientists have previously demonstrated that they can detect and correct simple errors in small-scale quantum computers, error correction is still in its early stages (SN: 10/4/21).

The new advance doesn’t mean researchers are ready to build a fully error-corrected quantum computer, “however, it does demonstrate that it is indeed possible, that error correction fundamentally works,” physicist Julian Kelly of Google Quantum AI said in a news briefing February 21.
Logical qubits store information redundantly in multiple physical qubits. That redundancy allows a quantum computer to check if any mistakes have cropped up and fix them on the fly. Ideally, the larger the logical qubit, the smaller the error rate should be. But if the original qubits are too faulty, adding in more of them will cause more problems than it solves.
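A toy model makes that threshold behavior concrete. The sketch below uses a plain classical repetition code with majority voting, far simpler than the surface code Google actually runs, but it shows the same tradeoff: growing the code helps only when the underlying error rate is low enough.

```python
# Majority-vote repetition code: logical error rate vs. code size (illustrative only).
import random

def logical_error_rate(p, n_qubits, trials=50_000):
    """Probability that a majority vote over n_qubits noisy copies gets the bit wrong."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n_qubits))
        failures += flips > n_qubits // 2
    return failures / trials

for p in (0.05, 0.45):   # far below vs. near this code's threshold of 0.5
    rates = [logical_error_rate(p, n) for n in (3, 9, 17)]
    print(f"physical error {p}: logical error for sizes 3, 9, 17 -> {rates}")
# At p = 0.05 the logical error rate plummets as the code grows;
# near threshold, adding qubits barely helps, and above it, it hurts.
```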

Using Google’s Sycamore quantum chip, the researchers studied two different sizes of logical qubits, one consisting of 17 qubits and the other of 49 qubits. After making steady improvements to the performance of the original physical qubits that make up the device, the researchers tallied up the errors that still slipped through. The larger logical qubit had a lower error rate, about 2.9 percent per round of error correction, compared to the smaller logical qubit’s rate of about 3.0 percent, the researchers found.
That small improvement suggests scientists are finally tiptoeing into the regime where error correction can begin to squelch errors by scaling up. “It’s a major goal to achieve,” says physicist Andreas Wallraff of ETH Zurich, who was not involved with the research.

However, the result is only on the cusp of showing that error correction improves as scientists scale up. A computer simulation of the quantum computer’s performance suggests that, if the logical qubit’s size were increased even more, its error rate would actually get worse. Additional improvement to the original faulty qubits will be needed to enable scientists to really capitalize on the benefits of error correction.

Still, milestones in quantum computation are so difficult to achieve that they’re treated like pole vaulting, Wallraff says. You just aim to barely clear the bar.