Genealogy databases could reveal the identity of most Americans

Protecting the anonymity of publicly available genetic data, including DNA donated to research projects, may be impossible.

About 60 percent of people of European descent who search genetic genealogy databases will find a match with a relative who is a third cousin or closer, a new study finds. The result suggests that with a database of about 3 million people, police or anyone else with access to DNA data can figure out the identity of virtually any American of European descent, Yaniv Erlich and colleagues report online October 11 in Science.
Erlich, the chief science officer of the consumer genetic testing company MyHeritage, and colleagues examined his company’s database and that of the public genealogy site GEDMatch, each containing data from about 1.2 million people. Using DNA matches to relatives, along with family tree information and some basic demographic data, the researchers estimate that they could narrow the identity of an anonymous DNA owner to just one or two people.
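A back-of-the-envelope calculation shows why a database covering less than 1 percent of a population can still reach most of it. The sketch below is not the study’s model; the population figure and average cousin counts are rough, illustrative assumptions, though they land close to the study’s headline numbers.

```python
# Rough sketch (not the study's model): the chance that a database
# covering some fraction of a population holds at least one of a
# person's cousins. Population and cousin counts are assumptions.

POPULATION = 250e6                                   # assumed pool
AVG_COUSINS = {"2nd": 30, "3rd": 190, "4th": 940}    # assumed averages

for db_size in (1.2e6, 3e6):
    coverage = db_size / POPULATION
    for level, n in AVG_COUSINS.items():
        p_match = 1 - (1 - coverage) ** n            # at least one hit
        print(f"db={db_size:,.0f}  {level} cousins: {p_match:.0%}")
```

With these assumptions, a 1.2-million-person database turns up a third-cousin match for roughly 60 percent of searches, and a 3-million-person database pushes that figure toward 90 percent.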

Recent cases identifying suspects in violent crimes through DNA searches of GEDMatch, such as the Golden State Killer case (SN Online: 4/29/18), have raised privacy concerns (SN Online: 6/7/18). And the same process used to find rape and murder suspects can also identify people who have donated anonymous DNA for genetic and medical research studies, the scientists say.

Genetic data used in research is stripped of information like names, ages and addresses, and can’t be used to identify individuals, government officials have said. But “that’s clearly untrue,” as Erlich and colleagues have demonstrated, says Rori Rohlfs, a statistical geneticist at San Francisco State University, who was not involved in the study.

Using genetic genealogy techniques that mirror searches for the Golden State Killer and suspects in at least 15 other criminal cases, Erlich’s team identified a woman who participated anonymously in the 1000 Genomes project. That project cataloged genetic variants in about 2,500 people from around the world.
Erlich’s team pulled the woman’s anonymous data from the publicly available 1000 Genomes database. The researchers then created a DNA profile similar to the ones generated by consumer genetic testing companies such as 23andMe and AncestryDNA (SN: 6/23/18, p.14) and uploaded that profile to GEDMatch.

A search turned up matches with two distant cousins, one from North Dakota and one from Wyoming. The cousins also shared DNA indicating that they had a common set of ancestors four to six generations ago. Building on some family tree information already collected by those cousins, researchers identified the ancestral couple and filled in hundreds of their descendants, looking for a woman who matched the age and other publicly available demographic data of the 1000 Genomes participant.

It took a day to find the right person.

That example suggests that scientists need to reconsider whether they can guarantee research participants’ anonymity if genetic data are publicly shared, Rohlfs says.

In reality, though, identifying a person from a DNA match with a distant relative is much harder than it appears, and requires a lot of expertise and gumshoe work, Ellen Greytak says. She is the director of bioinformatics at Parabon NanoLabs, a company in Reston, Va., that has helped close at least a dozen criminal cases since May using genetic genealogy searches. “The gulf between a match and identification is absolutely massive,” she says.

The company has also found that people of European descent often have DNA matches to relatives in GEDMatch. But tracking down a single suspect from those matches is often confounded by intermarriages, adoptions, aliases, cases of misidentified or unknown parentage and other factors, says CeCe Moore, a genealogist who spearheads Parabon’s genetic genealogy service.

“The study demonstrates the power of genetic genealogy in a theoretical way,” Moore says, “but doesn’t fully capture the challenges of the work in practice.” For instance, Erlich and colleagues already had some family tree information from the 1000 Genome woman’s relatives, “so they had a significant head start.”

Erlich’s example might be an oversimplification, Rohlfs says. The researchers made rough estimates and assumptions that are not perfect, but the conclusion is solid, she says. “Their work is approximate, but totally reasonable.” And the conclusion that almost anyone can be identified from DNA should spark public discussion about how DNA data should be used for law enforcement and research, she says.

We’re probably undervaluing healthy lakes and rivers

For sale: Pristine lake. Price negotiable.

Most U.S. government attempts to quantify the costs and benefits of protecting the country’s bodies of water are likely undervaluing healthy lakes and rivers, researchers argue in a new study. That’s because some clean water benefits get left out of the analyses, sometimes because these benefits are difficult to pin numbers on. As a result, the apparent value of many environmental regulations is probably understated.

The study, published online October 8 in the Proceedings of the National Academy of Sciences, surveyed 20 government reports analyzing the economic impacts of U.S. water pollution laws. Most of these laws have been enacted since 2000, when cost-benefit analyses became a requirement. Analysis of a measure for restricting river pollution, for example, might find that it increases costs for factories using that river for wastewater disposal, but boosts tourism revenues by drawing more kayakers and swimmers.
Only two studies out of 20 showed the economic benefits of these laws exceeding the costs. That’s uncommon among analyses of environmental regulations, says study coauthor David Keiser, an environmental economist at Iowa State University in Ames. Usually, the benefits exceed the costs.

So why does water pollution regulation seem, on paper at least, like such a losing proposition?

Keiser has an explanation: Summing up the monetary benefits of environmental policies is really hard. Many of these benefits are intangible and don’t have clear market values. So deciding which benefits to count, and how to count them, can make a big difference in the results.
Many analyses assume water will be filtered for drinking, Keiser says, so they don’t count the human health benefits of clean lakes and rivers (SN: 8/18/18, p. 14). That’s different from air pollution cost-benefit studies, which generally do include the health benefits of cleaner air by factoring in data tracking things like doctor’s visits or drug prescriptions. That could explain why Clean Air Act rules tend to get more favorable reviews, Keiser says — human health accounts for about 95 percent of the measured benefits of air quality regulations.

“You can avoid a lake with heavy, thick, toxic algal blooms,” Keiser says. “If you walk outside and have very polluted air, it’s harder to avoid.”

But even if people can avoid an algae-choked lake, they still pay a price for that pollution, says environmental scientist Thomas Bridgeman, director of the Lake Erie Center at the University of Toledo in Ohio.
Communities that pull drinking water from a lake filled with toxic blooms of algae or cyanobacteria spend more to make the water safe to drink. Bridgeman’s seen it firsthand: In 2014, Lake Erie’s cyanobacteria blooms from phosphorus runoff shut down Toledo’s water supply for two days and forced the city to spend $500 million on water treatment upgrades.

Most of the studies surveyed by Keiser and his team were missing other kinds of benefits, too. The reports usually left out the value of eliminating certain toxic and nonconventional pollutants — molecules such as bisphenol A, or BPA, and perfluorooctanoic acid, or PFOA (SN: 10/3/15, p. 12). In high quantities, these compounds, which are used to make some plastics and nonstick coatings, can cause harm to humans and wildlife. Many studies also didn’t include discussion of how the quality of surface waters can affect groundwater, which is a major source of drinking water for many people.

A lack of data on water quality may also limit studies, Keiser’s team suggests. While there’s a national database tracking daily local air pollution levels, the data from various water quality monitoring programs aren’t centralized. That makes gathering and evaluating trends in water quality harder.

Plus, there are the intangibles — the value of aquatic species that are essential to the food chain, for example.
“Some things are just inherently difficult to put a dollar [value] on,” says Robin Craig, an environmental law professor at the University of Utah in Salt Lake City. “What is it worth to have a healthy native ecosystem?… That’s where it can get very subjective very fast.”

That subjectivity can allow agencies to analyze policies in ways that suit their own political agendas, says Matthew Kotchen, an environmental economist at Yale University. An example: the wildly different assessments by the Obama and Trump administrations of the value gained from the 2015 Clean Water Rule, also known as the Waters of the United States rule.

The rule, passed under President Barack Obama, clarified the definition of waters protected under the 1972 Clean Water Act to include tributaries and wetlands connected to larger bodies of water. The Environmental Protection Agency estimated in 2015 that the rule would result in yearly economic benefits ranging from $300 million to $600 million, edging out the predicted annual costs of $200 million to $500 million. But in 2017, Trump’s EPA reanalyzed the rule and proposed rolling it back, saying that the agency had now calculated just $30 million to $70 million in annual benefits.

The difference in the conclusions came down to the consideration of wetlands: The 2015 analysis found that protecting wetlands, such as marshes and bogs that purify water, tallied up to $500 million in annual benefits. The Trump administration’s EPA, however, left wetlands out of the calculation entirely, says Kotchen, who analyzed the policy swing in Science in 2017.
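The arithmetic behind that swing is simple enough to check. Here is a minimal sketch using the figures quoted above, with the simplifying assumption that the cost estimates were roughly unchanged between the two analyses:

```python
# Net-benefit ranges implied by the quoted figures (annual, in millions
# of dollars). Ranges are combined naively; the actual analyses are far
# more involved.

costs = (200, 500)
benefits_2015 = (300, 600)   # wetlands benefits included
benefits_2017 = (30, 70)     # wetlands benefits (~$500 million) excluded

def net_range(benefits, costs):
    # worst case: lowest benefits minus highest costs, and vice versa
    return benefits[0] - costs[1], benefits[1] - costs[0]

print("2015 net:", net_range(benefits_2015, costs))  # (-200, 400)
print("2017 net:", net_range(benefits_2017, costs))  # (-470, -130)
```

Removing the wetlands line item alone is enough to move the rule from plausibly net-positive to net-negative across the entire range.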

Currently, the rule has gone into effect in 26 states, but is still tied up in legal challenges.

It’s an example of how methodology — and what counts as a benefit — can have a huge impact on the apparent value of environmental policies and laws.

The squishiness in analyzing environmental benefits underlies many of the Trump administration’s proposed rollbacks of Obama-era environmental regulations, not just ones about water pollution, Kotchen says. There are guidelines for how such cost-benefit analyses should be carried out, he says, but there’s still room for researchers or government agencies to choose what to include or exclude.

In June, the EPA, then under the leadership of Scott Pruitt, proposed revising the way the agency does cost-benefit analyses to no longer include so-called indirect benefits. For example, in evaluating policies to reduce carbon dioxide emissions, the agency would ignore the fact that those measures also reduce other harmful air pollutants. The move would, overall, make environmental policies look less beneficial.

These sharp contrasts in how presidential administrations approach environmental impact studies are not unprecedented, says Craig, the environmental law professor. “Pretty much every time we change presidents, the priorities for how to weigh those different elements change.”

How nectar bats fly nowhere

Flying forward is hard enough, but flying nowhere, just hovering, is so much harder. Most bats and birds can manage the feat for only a few frantic seconds.

Hovering means losing a useful aerodynamic shortcut, says aerospace engineer and biologist David Lentink of Stanford University. As a bat or bird flies forward, its body movement sends air flowing over the wings, providing some cheap lift. For animals on the scale of bats and birds, that’s a big help. Without that boost, “you’re going to have to move all the air over your wings by moving it with your wings,” he says. The energy per second you’re consuming to stay in place by flapping your wings back and forth like a hummingbird “is gigantic.”
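Classical momentum theory puts a number on that cost: with no oncoming airflow, the minimum power to hover grows with weight raised to the 3/2 power and shrinks with the area the wings sweep. A minimal sketch with rough, illustrative hummingbird numbers:

```python
import math

def induced_hover_power(mass_kg, disk_area_m2, rho=1.2):
    """Ideal minimum hovering power in watts, P = sqrt(W^3 / (2*rho*A)),
    from actuator-disk theory. Real animals pay several times this,
    since muscles and flapping wings are far from perfectly efficient."""
    weight_n = mass_kg * 9.81
    return math.sqrt(weight_n**3 / (2 * rho * disk_area_m2))

# Assumed: a 4-gram hummingbird whose wings sweep a disk ~11 cm across
p = induced_hover_power(0.004, math.pi * 0.055**2)
print(f"{p * 1000:.0f} mW ideal, about {p / 0.004:.0f} W per kg of body mass")
```

Even this idealized floor, about 13 watts per kilogram of body mass, is demanding to sustain, and actual metabolic costs run several times higher.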
So how do vertebrates in search of nectar, for whom a lot of energy-sucking hovering is part of life, manage the job? For the first direct measurements of the wingbeat forces that make hovering possible, Lentink’s Ph.D. student Rivers Ingersoll spent three years creating a flight chamber with exquisitely responsive sensors in the floor and ceiling. As a bird or bat hovers inside, the sensors can measure — every 200th of a second — tremors even smaller than a nanometer caused by air from fluttering wings. Once the delicate techno-marvel of an instrument was perfected, the researchers packed it into 11 shipping cases and sent it more than 6,000 kilometers to the wilds of Costa Rica.
“Very difficult,” Ingersoll acknowledges. The Las Cruces Research Station is great for field biology, but it’s nothing like a Stanford engineering lab. Every car turning into the station’s driveway set off the wingbeat sensors. And even the special thick-walled room that became the machine’s second home warmed up enough every day to give the instrument a fever.
Babying the instrument as best he could, Ingersoll made direct measurements for 17 hovering species of hummingbirds and three bats, including Pallas’s long-tongued bats (Glossophaga soricina). “Their up-pointy noses made me think of rhino faces,” he says.
Pallas’s bats specialize in nectar sipping much as hummingbirds do. Comparing wingbeats, bat vs. bird, revealed differences, though. Hummers coupled powerful downstrokes and recovery upstrokes that twist part of the wings almost backward. The twist supplied about a quarter of the energy it takes to keep a bird aloft, the researchers report in the September 26 Science Advances. The two kinds of nectar bats got a little more lift from the upstroke than did a bat that eats fruit instead of strenuously hovering for nectar. Yet even the specialist nectar bats relied mostly on downstrokes: powerful, deeply angled downstrokes of really big wings.

Those bat wings span proportionally more area than hummer wings. So the bats get about the same hovering power per gram of body weight that hummingbirds do. Supersizing can have its own kind of high-tech design elegance.

‘End of the Megafauna’ examines why so many giant Ice Age animals went extinct

Ross D.E. MacPhee and Peter Schouten (illustrator)
W.W. Norton & Co., $35

Today’s land animals are a bunch of runts compared with creatures from the not-too-distant past. Beasts as big as elephants, gorillas and bears were once much more common around the world. Then, seemingly suddenly, hundreds of big species, including the woolly mammoth, the giant ground sloth and a lizard weighing as much as half a ton, disappeared. In End of the Megafauna, paleomammalogist Ross MacPhee makes one thing clear: The science on what caused the extinctions of these megafauna — animals larger than 44 kilograms, or about 100 pounds — is far from settled.
MacPhee dissects the evidence behind two main ideas: that as humans moved into new parts of the world over the last 50,000 years, people hunted the critters into oblivion, or that changes in climate left the animals too vulnerable to survive. As MacPhee shows, neither scenario matches all of the available data.

Throughout, Peter Schouten’s illustrations, reminiscent of paintings that enliven natural history museums, bring the behemoths back to life. At times, MacPhee slips in too many technical terms. But overall, he offers readers an informative, up-to-date overview of a fascinating period in Earth’s history.


A lack of sleep can induce anxiety

SAN DIEGO — A sleepless night can leave the brain spinning with anxiety the next day.

In healthy adults, overnight sleep deprivation triggered anxiety the next morning, along with altered brain activity patterns, scientists reported November 4 at the annual meeting of the Society for Neuroscience.

People with anxiety disorders often have trouble sleeping. The new results uncover the reverse effect — that poor sleep can induce anxiety. The study shows that “this is a two-way interaction,” says Clifford Saper, a sleep researcher at Harvard Medical School and Beth Israel Deaconess Medical Center in Boston who wasn’t involved in the study. “The sleep loss makes the anxiety worse, which in turn makes it harder to sleep.”
Sleep researchers Eti Ben Simon and Matthew Walker, both of the University of California, Berkeley, studied the anxiety levels of 18 healthy people. Following either a night of sleep or a night of staying awake, these people took anxiety tests the next morning. After sleep deprivation, anxiety levels in these healthy people were 30 percent higher than when they had slept. On average, the anxiety scores reached levels seen in people with anxiety disorders, Ben Simon said November 5 in a news briefing.

What’s more, sleep-deprived people’s brain activity changed. In response to emotional videos, brain areas involved in emotions were more active, and the prefrontal cortex, an area that can put the brakes on anxiety, was less active, functional MRI scans showed.

The results suggest that poor sleep “is more than just a symptom” of anxiety and, in some cases, may be a cause, Ben Simon said.

Sound-absorbent wings and fur help some moths evade bats

Some moths aren’t so easy for bats to detect.

The cabbage tree emperor moth has wings with tiny scales that absorb sound waves sent out by bats searching for food. That absorption reduces the echoes that bounce back to bats, allowing Bunaea alcinoe to avoid being so noticeable to the nocturnal predators, researchers report online November 12 in the Proceedings of the National Academy of Sciences.

“They have this stealth coating on their body surfaces which absorbs the sound,” says study coauthor Marc Holderied, a bioacoustician at the University of Bristol in England. “We now understand the mechanism behind it.”

Bats sense their surroundings using echolocation, sending out sound waves that bounce off objects and return as echoes picked up by the bats’ supersensitive ears (SN: 9/30/17, p. 22). These moths, without ears that might alert them to an approaching predator, have instead developed scales of a size, shape and thickness suited to absorbing ultrasonic sound frequencies used by bats, the researchers found.
The team shot ultrasonic sound waves at a single, microscopic scale and observed it transferring sound wave energy into movement. The scientists then simulated the process with a 3-D computer model that showed the scale absorbing up to 50 percent of the energy from sound waves.

What’s more, it isn’t just wings that help such earless moths evade bats. Other moths in the same family as B. alcinoe also have sound-absorbing fur, the same researchers report online October 18 in the Journal of the Acoustical Society of America.
Holderied and his colleagues studied the fluffy thoraxes of the Madagascan bullseye moth and the promethea silk moth, and found that the fur also absorbs sound waves through a different process called porous absorption. In lab tests, the furry-bellied moths absorbed as much as 85 percent of the sound waves encountered. Researchers suspect that the equally fluffy cabbage tree emperor moth also has this ability.
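Those absorption figures translate directly into how much fainter the echo returning to a bat would be. A quick sketch, treating absorption as the only loss:

```python
import math

# Convert absorbed fractions of incident sound energy into the drop in
# echo strength, in decibels. Real echo levels also depend on geometry
# and distance; this isolates the absorption term alone.

for surface, absorbed in (("wing scales", 0.50), ("thorax fur", 0.85)):
    reflected = 1 - absorbed
    drop_db = -10 * math.log10(reflected)
    print(f"{surface}: {absorbed:.0%} absorbed -> echo down {drop_db:.1f} dB")
```

A 3-decibel drop means half the echo energy reaches the bat; 8 decibels means less than a sixth of it does.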

Other moths that have ears can hear bats coming, and can quickly swerve out of the way of their predators, dipping and diving in dizzying directions (SN: 5/26/18, p. 11). Some moths also have long tails on their wings that researchers suspect can be twirled to disrupt bats’ sound waves (SN: 3/21/15, p. 17). Still other moths produce toxins to fend off foes.

Having sound-absorbent fur and scales “might require a lot less energy in terms of protection from the moth’s side,” says Akito Kawahara, an evolutionary biologist at the Florida Museum of Natural History in Gainesville who was not involved with the study. “It’s a very different kind of passive defense system.”

Holderied and his colleagues hope next to study how multiple scales, locked together, respond to ultrasonic sound waves. The findings could one day help in developing better soundproofing technology for sound engineers and acousticians.

50 years ago, screwworm flies inspired a new approach to insect control

Screwworm fly upsurge

Screwworms, the first pest to be eliminated on a large scale by the use of the sterile male technique, have shown an alarming increase, according to U.S. and Mexican officials…. The screwworm fly lays its eggs in open wounds on cattle. The maggots live on the flesh of their host, causing damage and death, and economic losses of many millions of dollars.
— Science News, November 23, 1968

Update
Though eradicated in the United States in 1966, screwworms reemerged two years later, probably coming up from Mexico. Outbreaks in southern U.S. states in 1972 and in Florida in 2016 were both handled with the sterile male technique, considered one of the most successful approaches for pest control. Males are sterilized with radiation, then released into a population to breed with wild counterparts; no offspring result. The method has been used with other pests, such as mosquitoes, which were dropped by drones over Brazil this year as a test before the technology is deployed against outbreaks of mosquito-borne diseases such as Zika.
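The population logic behind the technique is captured by a simple model that goes back to its pioneer, the USDA entomologist Edward Knipling. A minimal sketch, assuming females mate once at random and that the population would otherwise just replace itself each generation:

```python
# Knipling-style illustration of the sterile male technique: with S
# sterile males released each generation into a wild population of N
# males, only a fraction N / (N + S) of matings produce offspring.
# The numbers echo Knipling's classic example and are illustrative.

N = 1_000_000.0   # wild insects
S = 2_000_000.0   # sterile males released every generation

for generation in range(1, 5):
    N = N * N / (N + S)
    print(f"generation {generation}: {N:,.1f} wild insects")
# 333,333.3 -> 47,619.0 -> 1,107.4 -> 0.6: effectively eradicated
```

Because each generation’s collapse makes the sterile-to-wild ratio ever more lopsided, the decline accelerates rather than leveling off.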

Engineers are plugging holes in drinking water treatment

Off a gravel road at the edge of a college campus — next door to the town’s holding pen for stray dogs — is a busy test site for the newest technologies in drinking water treatment.

In the large shed-turned-laboratory, University of Massachusetts Amherst engineer David Reckhow has started a movement. More people want to use his lab to test new water treatment technologies than the building has space for.

The lab is a revitalization success story. In the 1970s, when the Clean Water Act put new restrictions on water pollution, the diminutive gray building in Amherst, Mass., was a place to test those pollution-control measures. But funding was fickle, and over the years, the building fell into disrepair. In 2015, Reckhow brought the site back to life. He and a team of researchers cleaned out the junk, whacked the weeds that engulfed the building and installed hundreds of thousands of dollars’ worth of monitoring equipment, much of it donated or bought secondhand.

“We recognized that there’s a lot of need for drinking water technology,” Reckhow says. Researchers, students and start-up companies all want access to test ways to disinfect drinking water, filter out contaminants or detect water-quality slipups. On a Monday afternoon in October, the lab is busy. Students crunch data around a big table in the main room. Small-scale tests of technology that uses electrochemistry to clean water chug along, hooked up to monitors that track water quality. On a lab bench sits a graduate student’s low-cost replica of an expensive piece of monitoring equipment. The device alerts water treatment plants when the by-products of disinfection chemicals in a water supply are reaching dangerous levels. In an attached garage, two startup companies are running larger-scale tests of new kinds of membranes that filter out contaminants.
Parked behind the shed is the almost-ready-to-roll newcomer. Starting in 2019, the Mobile Water Innovation Laboratory will take promising new and affordable technologies to local communities for testing. That’s important, says Reckhow, because there’s so much variety in the quality of water that comes into drinking water treatment plants. On-site testing is the only way to know whether a new approach is effective, he says, especially for newer technologies without long-term track records.

The facility’s popularity reflects a persistent concern in the United States: how to ensure affordable access to clean, safe drinking water. Although U.S. drinking water is heavily regulated and pretty clean overall, recent high-profile contamination cases, such as the 2014 lead crisis in Flint, Mich. (SN: 3/19/16, p. 8), have exposed weaknesses in the system and shaken people’s trust in their tap water.
Tapped out
In 2013 and 2014, 42 drinking water–associated outbreaks resulted in more than 1,000 illnesses and 13 deaths, based on reports to the U.S. Centers for Disease Control and Prevention. The top culprits were Legionella bacteria and some form of chemical, toxin or parasite, according to data published in November 2017.

Those numbers tell only part of the story, however. Many of the contaminants that the U.S. Environmental Protection Agency regulates through the 1974 Safe Drinking Water Act cause problems only when exposure happens over time; the effects of contaminants like lead don’t appear immediately after exposure. Records of EPA rule violations note that in 2015, 21 million people were served by drinking water systems that didn’t meet standards, researchers reported in a February study in the Proceedings of the National Academy of Sciences. That report tracked trends in drinking water violations from 1982 to 2015.
Current technology can remove most contaminants, says David Sedlak, an environmental engineer at the University of California, Berkeley. Those include microbes, arsenic, nitrates and lead. “And then there are some that are very difficult to degrade or transform,” such as industrial chemicals called PFAS.

Smaller communities, especially, can’t always afford top-of-the-line equipment or infrastructure overhauls to, for example, replace lead pipes. So Reckhow’s facility is testing approaches to help communities address water-quality issues in affordable ways.
Some researchers are adding technologies to deal with new, potentially harmful contaminants. Others are designing approaches that work with existing water infrastructure or clean up contaminants at their source.

How is your water treated?
A typical drinking water treatment plant sends water through a series of steps.

First, coagulants are added to the water. These chemicals clump together sediments, which can cloud water or make it taste funny, so they are bigger and easier to remove. A gentle shaking or spinning of the water, called flocculation, helps those clumps form (1). Next, the water flows into big tanks to sit for a while so the sediments can fall to the bottom (2). The cleaner water then moves through membranes that filter out smaller contaminants (3). Disinfection, via chemicals or ultraviolet light, kills harmful bacteria and viruses (4). Then the water is ready for distribution (5).
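For readers who like to see a process as code, here is a toy sketch of that treatment train. The stage order follows the numbered steps above; the contaminant names and removal fractions are invented placeholders, not engineering figures:

```python
# Toy model of the five-step treatment train described above.

def coagulate_and_flocculate(water):   # (1) clump sediments together
    water["sediment"] *= 0.3
    return water

def settle(water):                     # (2) let clumps sink in big tanks
    water["sediment"] *= 0.1
    return water

def filter_membranes(water):           # (3) strain out smaller contaminants
    water["sediment"] = 0.0
    water["microbes"] *= 0.05
    return water

def disinfect(water):                  # (4) chemicals or UV kill the rest
    water["microbes"] = 0.0
    return water

water = {"sediment": 100.0, "microbes": 100.0}    # arbitrary units
for stage in (coagulate_and_flocculate, settle, filter_membranes, disinfect):
    water = stage(water)
print("ready for distribution (5):", water)
```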
There’s a lot of room for variation within that basic water treatment process. Chemicals added at different stages can trigger reactions that break down chunky, toxic organic molecules into less harmful bits. Ion-exchange systems that separate contaminants by their electric charge can remove ions like magnesium or calcium that make water “hard,” as well as heavy metals, such as lead and arsenic, and nitrates from fertilizer runoff. Cities mix and match these strategies, adjusting chemicals and prioritizing treatment components, based on the precise chemical qualities of the local water supply.

Some water utilities are streamlining the treatment process by installing technologies like reverse osmosis, which removes nearly everything from the water by forcing the water molecules through a selectively permeable membrane with extremely tiny holes. Reverse osmosis can replace a number of steps in the water treatment process or reduce the number of chemicals added to water. But it’s expensive to install and operate, keeping it out of reach for many cities.

Fourteen percent of U.S. residents get water from wells and other private sources that aren’t regulated by the Safe Drinking Water Act. These people face the same contamination challenges as municipal water systems, but without the regulatory oversight, community support or funding.

“When it comes to lead in private wells … you’re on your own. Nobody is going to help you,” says Marc Edwards, the Virginia Tech engineer who helped uncover the Flint water crisis. Edwards and Virginia Tech colleague Kelsey Pieper collected water-quality data from over 2,000 wells across Virginia in 2012 and 2013. Some were fine, but others had lead levels of more than 100 parts per billion. When levels top the EPA’s 15 ppb threshold, the agency requires cities to take steps to control corrosion and notify the public about the contamination. The researchers reported those findings in 2015 in the Journal of Water and Health.

To remove lead and other contaminants, well users often rely on point-of-use treatments. A filter on the tap removes most, but not all, contaminants. Some people spring for costly reverse osmosis systems.
New tech solutions
These three new water-cleaning approaches wouldn’t require costly infrastructure overhauls.

Ferrate to cover many bases
Reckhow’s team at UMass Amherst is testing ferrate, an ion of iron, as a replacement for several water treatment steps. First, ferrate kills bacteria in the water. Next, it breaks down carbon-based chemical contaminants into smaller, less harmful molecules. Finally, it makes ions like manganese less soluble in water so they are easier to filter out, Reckhow and colleagues reported in 2016 in Journal–American Water Works Association. With its multifaceted effects, ferrate could potentially streamline the drinking water treatment process or reduce the use of chemicals, such as chlorine, that can yield dangerous by-products, says Joseph Goodwill, an environmental engineer at the University of Rhode Island in Kingston.

Ferrate could be a useful disinfectant for smaller drinking water systems that don’t have the infrastructure, expertise or money to implement something like ozone treatment, an approach that uses ozone gas to break down contaminants, Reckhow says.

Early next year, in the maiden voyage of his mobile water treatment lab, Reckhow plans to test the ferrate approach in the small Massachusetts town of Gloucester.
In the 36-foot trailer is a squeaky-clean array of plastic pipes and holding tanks. The setup routes incoming water through the same series of steps — purifying, filtering and disinfecting — that one would find in a standard drinking water treatment plant. With two sets of everything, scientists can run side-by-side experiments, comparing a new technology’s performance against the standard approach. That way researchers can see whether a new technology works better than existing options, says Patrick Wittbold, the UMass Amherst research engineer who headed up the trailer’s design.

Charged membranes
Filtering membranes tend to get clogged with small particles. “That’s been the Achilles’ heel of membrane treatment,” says Brian Chaplin, an engineer at the University of Illinois at Chicago. Unclogging the filter wastes energy and increases costs. Electricity might solve that problem and offer some side benefits, Chaplin suggests.

His team tested an electrochemical membrane made of titanium oxide or titanium dioxide that both filters water and acts as an electrode. Chemical reactions happening on the electrically charged membranes can turn nitrates into nitrogen gas or split water molecules, generating reactive ions that can oxidize contaminants in the water. The reactions also prevent particles from sticking to the membrane. Large carbon-based molecules like benzene become smaller and less harmful.
In lab tests, the membranes effectively filtered and destroyed contaminants, Chaplin says. In one test, a membrane transformed 67 percent of the nitrates in a solution into other molecules. The finished water was below the EPA’s regulatory nitrate limit of 10 parts per million, he and colleagues reported in July in Environmental Science and Technology. Chaplin expects to move the membrane into pilot tests within the next two years.
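Whether 67 percent conversion is enough depends on how much nitrate comes in. A quick arithmetic check, with illustrative influent concentrations:

```python
# The EPA's regulatory nitrate limit is 10 parts per million. At a 67
# percent conversion rate, compliance depends on the influent level.
# These influent values are illustrative assumptions.

EPA_LIMIT_PPM = 10.0
CONVERSION = 0.67

for influent_ppm in (15, 30, 40):
    effluent_ppm = influent_ppm * (1 - CONVERSION)
    verdict = "below" if effluent_ppm < EPA_LIMIT_PPM else "above"
    print(f"{influent_ppm} ppm in -> {effluent_ppm:.1f} ppm out, {verdict} limit")
```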

Obliterate the PFAS
The industrial chemicals known as PFAS present two challenges. Only the larger ones are effectively removed by granular activated carbon, the active material in many household water filters. The smaller PFAS remain in the water, says Christopher Higgins, an environmental engineer at the Colorado School of Mines in Golden. Plus, filtering isn’t enough because the chunky chemicals are hard to break down for safe disposal.

Higgins and colleague Timothy Strathmann, also at the Colorado School of Mines, are working on a process to destroy PFAS. First, a specialized filter with tiny holes grabs the molecules out of the water. Then, sulfite is added to the concentrated mixture of contaminants. When hit with ultraviolet light, the sulfite generates reactive electrons that break down the tough carbon-fluorine bonds in the PFAS molecules. Within 30 minutes, the combination of UV radiation and sulfites almost completely destroyed one type of PFAS, other researchers reported in 2016 in Environmental Science and Technology.
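To get a feel for the speed implied, one can model the destruction as simple first-order decay. This is only a sketch: reading “almost completely destroyed” as 99 percent is an assumption, and the true kinetics may not be first order.

```python
import math

# Assumed: 99 percent destroyed in 30 minutes, first-order kinetics.
t_min = 30.0
fraction_remaining = 0.01
k = -math.log(fraction_remaining) / t_min        # rate constant, per minute
print(f"k = {k:.3f} per minute, half-life = {math.log(2) / k:.1f} minutes")
```

Under those assumptions, about half of the remaining PFAS would be destroyed every four and a half minutes.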

Soon, Higgins and Strathmann will test the process at Peterson Air Force Base in Colorado, one of nearly 200 U.S. sites known to have groundwater contaminated by PFAS. Cleaning up those sites would remove the pollutants from groundwater that may also feed wells or city water systems.

Why a chemistry teacher started a science board game company

A physicist, a gamer and two editors walk into a bar. No, this isn’t the setup for some joke. After work one night, a few Science News staffers tried out a new board game, Subatomic. This deck-building game combines chemistry and particle physics for an enjoyable — and educational — time.

Subatomic is simple to grasp: Players use quark and photon cards to build protons, neutrons and electrons. With those three particles, players then construct chemical elements to score points. Scientists are the wild cards: Joseph J. Thomson, Maria Goeppert-Mayer, Marie Curie and other Nobel laureates who discovered important things related to the atom provide special abilities or help thwart other players.
The game doesn’t shy away from difficult or unfamiliar concepts. Many players might be unfamiliar with quarks, a group of elementary particles. But after a few rounds, it’s ingrained in your brain that, for example, two up quarks and one down quark create a proton. And Subatomic includes a handy booklet that explains in easy-to-understand terms the science behind the game. The physicist in our group vouched for the game’s accuracy but had one qualm: Subatomic claims that two photons, or particles of light, can create an electron. That’s theoretically possible, but scientists have yet to confirm it in the lab.
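The quark bookkeeping the game drills is easy to verify with standard particle physics charges. A quick illustrative check:

```python
from fractions import Fraction

# Up quarks carry charge +2/3 and down quarks -1/3, so two ups and a
# down sum to +1 (a proton) while one up and two downs sum to 0 (a
# neutron).

CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

for name, combo in (("proton", ("up", "up", "down")),
                    ("neutron", ("up", "down", "down"))):
    total = sum(CHARGE[q] for q in combo)
    print(f"{name}: {' + '.join(combo)} -> charge {total}")
```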

The mastermind behind Subatomic is John Coveyou, who has a master’s degree in energy, environmental and chemical engineering. As the founder and CEO of Genius Games, he has created six other games, including Ion (SN: 5/30/15, p. 29) and Linkage (SN: 12/27/14, p. 32). Next year, he’ll add a periodic table game to the list. Because Science News has reviewed several of his games, we decided to talk with Coveyou about where he gets his inspiration and how he includes real science in his products. The following discussion has been edited for length and clarity.
SN: When did you get interested in science?

Coveyou: My mom was mentally and physically disabled, and my dad was in and out of prison and mental institutions. So early on, things were very different for me. I ended up leaving home when I was in high school, hopscotching around from 12 different homes throughout my junior and senior year. I almost dropped out, but I had a lot of teachers who were amazing mentors. I didn’t know what else to do, so I joined the army. While I was in Iraq, I had a bunch of science textbooks shipped to me, and I read them in my free time. They took me out of the environments I was in and became extremely therapeutic. A lot of the issues we face as a society can be worked on by the next generation having a command of the sciences. So I’m very passionate about teaching people the sciences and helping people find joy in them.

SN: Why did you start creating science games?

Coveyou: I was teaching chemistry at a community college, and I noticed that my students were really intimidated by the chemistry concepts before they even came into the classroom. They really struggled with a lot of the basic terminology. At the same time, I’ve been a board gamer pretty much my whole life. And it kind of hit me like, “Whoa, wait a second. What if I made some games that taught some of the concepts that I’m trying to teach my chemistry students?” So I just took a shot at it. The first couple of games were terrible. I didn’t really know what I was doing, but I kept at it.

SN: How do you test the games?

Coveyou: We first test with other gamers. Once we’re ready to get feedback from the general public, we go to middle school or high school students. Once we test a game with people face-to-face, we will send it across the world to about 100 to 200 different play testers, and those vary from your hard-core gamers to homeschool families to science teachers, who try it in the classroom.

SN: How do you incorporate real science into your games?

Coveyou: I pretty much always start with a science concept in mind and think about how we can create a game that best reflects the science we want to communicate. For all of our upcoming games, we include a booklet about the science. That document is not created by Genius Games. We have about 20 to 30 Ph.D.s and doctors across the globe who write the content and edit each other. That’s been a real treat to actually show players how the game is accurate. We’ve had so many scientists and teachers who are just astonished that we created something like this that was accurate, but also fun to play.

Voyager 2 spacecraft enters interstellar space

Voyager 2 has entered interstellar space. The spacecraft slipped out of the huge bubble of particles that encircles the solar system on November 5, becoming the second ever human-made craft to cross the heliopause, the boundary between the sun and the stars.

Coming in second place is no mean achievement. Voyager 1 became the first spacecraft to exit the solar system in 2012. But that craft’s plasma instrument stopped working in 1980, leaving scientists without a direct view of the solar wind, hot charged particles constantly streaming from the sun (SN Online: 9/12/13). Voyager 2’s plasma sensors are still working, providing unprecedented views of the space between stars.

“We’ve been waiting with bated breath for the last couple of months for us to be able to see this,” NASA solar physicist Nicola Fox said at a Dec. 10 news conference at the American Geophysical Union meeting in Washington, D.C.

NASA launched the twin Voyager spacecraft in 1977 on a grand tour of the solar system’s planets (SN: 8/19/17, p. 26). After that initial tour was over, both spacecraft continued traveling through the bubble of plasma that originates at the sun.
“When Voyager was launched, we didn’t know how large the bubble was, how long it would take to get [to its edge] and whether the spacecraft could last long enough to get there,” said Voyager project scientist Edward Stone of Caltech.

For most of Voyager 2’s journey, the spacecraft’s Plasma Science Experiment measured the speed, density, temperature, pressure and other properties of the solar wind. But on November 5, the experiment saw a sharp drop in the speed and the number of solar wind particles that hit the detector each second. At the same time, another detector started picking up more high-energy particles called cosmic rays that originate elsewhere in the galaxy.
Those measurements suggest that Voyager 2 has reached the region where the solar wind slams into the colder, denser population of particles that fill the space between stars. Voyager 2 is now a little more than 18 billion kilometers from the sun.
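That distance shapes day-to-day operations: even at light speed, commands and data spend most of a day in transit. A quick calculation of the one-way radio delay:

```python
# One-way light travel time from a little more than 18 billion km out.
distance_km = 18e9
c_km_per_s = 299_792.458
hours = distance_km / c_km_per_s / 3600
print(f"one-way signal time: about {hours:.1f} hours")   # ~16.7 hours
```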

Intriguingly, Voyager 2’s measurements of cosmic rays and magnetic fields — which Voyager 1 could still make when it crossed the boundary — did not exactly match up with Voyager 1’s observations.
“That’s what makes it interesting,” Stone said. The variations are probably from the fact that the two spacecraft exited the heliosphere in different places, and that the sun is at a different part of its 11-year activity cycle than it was in 2012. “We would have been amazed if they had looked the same.”

The Voyagers probably have between five and 10 years left to continue exploring interstellar space, said Voyager project manager Suzanne Dodd from NASA’s Jet Propulsion Laboratory in Pasadena, Calif.

“Both spacecraft are very healthy if you consider them senior citizens,” Dodd said. The biggest concern is how much power they have left and how cold they are — Voyager 2 is currently about 3.6° Celsius, close to the freezing point of its hydrazine fuel. In the near future, the team will have to turn off some of the spacecraft’s instruments to keep the craft operating and sending data back to Earth.

“We do have difficult decisions ahead,” Dodd said. She added that her personal goal is to see the spacecraft last until 2027, for a total of 50 years in space. “That would be fantastic.”