Saturn has two hexagons, not one, swirling around its north pole

A new hexagon has emerged high in the skies over Saturn’s north pole.

As spring turned to summer in the planet’s northern hemisphere, a six-sided vortex appeared in the stratosphere. Surprisingly, the polar polygon seems to mirror the famous hexagonal cyclone that swirls in the clouds hundreds of kilometers below, researchers report online September 3 in Nature Communications.

When NASA’s Cassini spacecraft arrived at Saturn in 2004 — during summer in the southern hemisphere — the probe spied a similar vortex in the stratosphere over the south pole, though that one was shaped more like a plain old circle. As summer gradually turned to autumn, that vortex vanished.
Now, planetary scientist Leigh Fletcher at the University of Leicester in England and colleagues report that Cassini caught a new vortex growing in the north during the spacecraft’s final years. Relying on infrared maps of the atmosphere, the team found that from 2014 to 2017 a warm, swirling mass of air started developing over the north pole. That wasn’t surprising — but the six-sided shape came as a bit of a shock.

The shape suggests that the underlying hexagon somehow controls what happens in the stratosphere. These sorts of insights could help researchers understand how energy moves around in other planetary atmospheres.

Unfortunately, Cassini is no longer around — it dove into Saturn last year (SN: 9/2/17, p. 16). But Earth-based telescopes will keep an eye on the storm to see how it changes along with Saturn’s seasons.

A neutron star collision may have emitted a fast radio burst

A neutron star pileup may have emitted two different kinds of cosmic signals: ripples in spacetime known as gravitational waves and a brief blip of energy called a fast radio burst.

One of the three detectors that make up the gravitational wave observatory LIGO picked up a signal from a cosmic collision on April 25, 2019. About 2.5 hours later, a fast radio burst detector picked up a signal from the same region of sky, researchers report March 27 in Nature Astronomy.
If strengthened by further observations, the finding could bolster the theory that mysterious fast radio bursts have multiple origins — and neutron star mergers are one of them.

“We’re 99.5 percent sure” the two signals came from the same event, says astrophysicist Alexandra Moroianu, who spotted the merger and its aftermath while at the University of Western Australia in Perth. “We want to be 99.999 percent sure.”

Unfortunately, LIGO’s two other detectors didn’t catch the signal, so it’s impossible to precisely triangulate its location. “Even though it’s not a concrete, bang-on observation for something that’s been theorized for a decade, it’s the first evidence we’ve got,” Moroianu says. “If this is true … it’s going to be a big boom in fast radio burst science.”

Mysterious radio bursts
Astronomers have spotted more than 600 fast radio bursts, or FRBs, since 2007. Despite their frequency, the causes remain a mystery. One leading candidate is a highly magnetized neutron star called a magnetar, which could be left behind after a massive star explodes (SN: 6/4/20). But some FRBs appear to repeat, while others are apparent one-off events, suggesting that there’s more than one way to produce them (SN: 2/7/20).

Theorists have wondered if a collision between two neutron stars could spark a singular FRB, before the wreckage from the collision produces a black hole. Such a smashup should emit gravitational waves, too (SN: 10/16/17).

Moroianu and colleagues searched archived data from LIGO and the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, a fast radio burst detector in British Columbia, to see if any of their signals lined up. The team found one candidate pairing: GW190425 and FRB20190425A.
Even though the gravitational wave was picked up only by the LIGO detector in Livingston, La., the team spotted other suggestive signs that the signals were related. The FRB and the gravitational waves came from the same distance, about 370 million light-years from Earth. The gravitational waves were from the only neutron star merger LIGO spotted in that observing run, and the FRB was particularly bright. There may even have been a burst of gamma rays at the same time, according to satellite data — another aftereffect of a neutron star merger.

“Everything points at this being a very interesting combination of signals,” Moroianu says. She says it’s like watching a crime drama on TV: “You have so much evidence that anyone watching the TV show would be like, ‘Oh, I think he did it.’ But it’s not enough to convince the court.”

Neutron star secrets
Despite the uncertainty, the finding has exciting implications, says astrophysicist Alessandra Corsi of Texas Tech University in Lubbock. One is the possibility that two neutron stars could merge into a single, extra-massive neutron star without immediately collapsing into a black hole. “There’s this fuzzy dividing line between what’s a neutron star and what’s a black hole,” says Corsi, who was not involved in the new work.

In 2013, astrophysicist Bing Zhang of the University of Nevada, Las Vegas suggested that a neutron star smashup could create an extra-massive neutron star that wobbles on the edge of stability for a few hours before collapsing into a black hole. In that case, the resulting FRB would be delayed — just like in the 2019 case.

The most massive neutron star yet observed is about 2.35 times the mass of the sun, but theorists think they could grow to be around three times the mass of the sun without collapsing (SN: 7/22/22). The neutron star that could have resulted from the collision in 2019 would have been 3.4 solar masses, Moroianu and colleagues calculate.

“Something like this, especially if it’s confirmed with more observations, it would definitely tell us something about how neutron matter behaves,” Corsi says. “The nice thing about this is we have hopes of testing this in the future.”

The next LIGO run is expected to start in May. Corsi is optimistic that more coincidences between gravitational waves and FRBs will show up, now that researchers know to look for them. “There should be a bright future ahead of us,” she says.

The biggest planet orbiting TRAPPIST-1 doesn’t appear to have an atmosphere

A rocky planet that circles a small star nearly 40 light-years from Earth is hot and has little or no atmosphere, a new study suggests. The finding raises questions about the possibility of atmospheres on the other orbs in the planetary system.

At the center of the system is the red dwarf star dubbed TRAPPIST-1; it hosts seven known planets with masses ranging from 0.3 to 1.4 times Earth’s, a few of which could hold liquid water (SN: 2/22/17; 3/19/18). The largest, TRAPPIST-1b, is the closest to its parent star and receives about four times the radiation Earth receives from the sun, says Thomas Greene, an astrobiologist at NASA’s Ames Research Center at Moffett Field, Calif.
Like all other planets in the system, TRAPPIST-1b is tidally locked, meaning that one side of the planet always faces the star, and one side looks away. Calculations suggest that if the stellar energy falling on TRAPPIST-1b were distributed around the planet — by an atmosphere, for example — and then reradiated equally in all directions, the planet’s surface temperature would be around 120° Celsius.

But the dayside temperature of the planet is actually around 230° C, Greene and colleagues report online March 27 in Nature. That, in turn, suggests that there’s little or no atmosphere to carry heat from the perpetually sunlit side of the planet to the dark side, the team argues.
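The two temperatures in the article fall out of a standard back-of-envelope equilibrium calculation. The sketch below is not from the study; it assumes zero albedo, Earth's solar constant of about 1,361 watts per square meter, and the article's figure of four times Earth's irradiation.

```python
# Back-of-envelope equilibrium temperatures for TRAPPIST-1b (illustrative only):
# T = (absorbed flux / emitting area / sigma)^(1/4), assuming zero albedo.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S = 4 * 1361.0            # stellar flux at TRAPPIST-1b: ~4x Earth's, W/m^2

# Heat spread over the whole sphere, as an atmosphere would do:
t_redistributed = (S / (4 * SIGMA)) ** 0.25          # kelvins
# Heat re-radiated only by the dayside hemisphere, with no redistribution:
t_dayside_only = (S / (2 * SIGMA)) ** 0.25

print(round(t_redistributed - 273.15))   # ~120 C, the "with atmosphere" figure
print(round(t_dayside_only - 273.15))    # ~195 C; the measured ~230 C is hotter still
```

The measured 230° C sits above even the no-redistribution hemisphere estimate, which is why the data point toward a bare, airless dayside.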

To take TRAPPIST-1b’s temperature, Greene and his colleagues used the James Webb Space Telescope to observe the planet in a narrow band of infrared wavelengths five times in 2022. Because the observations were made just before and after the planet slipped behind its parent star, astronomers could see the fully lit face of the planet, Greene says.

The team’s results are “the first ‘deep dive’ look at this planet,” says Knicole Colon, an astrophysicist at NASA’s Goddard Space Flight Center in Greenbelt, Md., who was not involved with the study. “With every observation, we expect to learn something new,” she adds.

Astronomers have long suggested that planets around red dwarf stars might not be able to hold onto their atmospheres, largely because such stars’ frequent and high-energy flares would blast away any gaseous shroud they might have during their early years (SN: 12/20/22). Yet there are some scenarios in which such flares could heat up a planet’s surface and drive volcanism that, in turn, yields gases that could help form a new atmosphere.

“To be totally sure that this planet has no atmosphere, we need many more measurements,” says Michaël Gillon, an astrophysicist at the University of Liège in Belgium who was not part of the new study. It’s possible that when observed at a wider variety of wavelengths and from other angles, the planet could show signs of a gaseous shroud and thus possibly hints of volcanism.

Either way, says Laura Kreidberg, an astronomer at the Max Planck Institute for Astronomy in Heidelberg, Germany, who also did not participate in the study, the new result “definitely motivates detailed study of the cooler planets in the system, to see if the same is true of them.”

We’re probably undervaluing healthy lakes and rivers

For sale: Pristine lake. Price negotiable.

Most U.S. government attempts to quantify the costs and benefits of protecting the country’s bodies of water are likely undervaluing healthy lakes and rivers, researchers argue in a new study. That’s because some clean water benefits get left out of the analyses, sometimes because these benefits are difficult to pin numbers on. As a result, the apparent value of many environmental regulations is probably discounted.

The study, published online October 8 in the Proceedings of the National Academy of Sciences, surveyed 20 government reports analyzing the economic impacts of U.S. water pollution laws. Most of these laws have been enacted since 2000, when cost-benefit analyses became a requirement. Analysis of a measure for restricting river pollution, for example, might find that it increases costs for factories using that river for wastewater disposal, but boosts tourism revenues by drawing more kayakers and swimmers.
Only two studies out of 20 showed the economic benefits of these laws exceeding the costs. That’s uncommon among analyses of environmental regulations, says study coauthor David Keiser, an environmental economist at Iowa State University in Ames. Usually, the benefits exceed the costs.

So why does water pollution regulation seem, on paper at least, like such a losing proposition?

Keiser has an explanation: Summing up the monetary benefits of environmental policies is really hard. Many of these benefits are intangible and don’t have clear market values. So deciding which benefits to count, and how to count them, can make a big difference in the results.
Many analyses assume water will be filtered for drinking, Keiser says, so they don’t count the human health benefits of clean lakes and rivers (SN: 8/18/18, p. 14). That’s different from air pollution cost-benefit studies, which generally do include the health benefits of cleaner air by factoring in data tracking things like doctor’s visits or drug prescriptions. That could explain why Clean Air Act rules tend to get more favorable reviews, Keiser says — human health accounts for about 95 percent of the measured benefits of air quality regulations.

“You can avoid a lake with heavy, thick, toxic algal blooms,” Keiser says. “If you walk outside and have very polluted air, it’s harder to avoid.”

But even if people can avoid an algae-choked lake, they still pay a price for that pollution, says environmental scientist Thomas Bridgeman, director of the Lake Erie Center at the University of Toledo in Ohio.
Communities that pull drinking water from a lake filled with toxic blooms of algae or cyanobacteria spend more to make the water safe to drink. Bridgeman’s seen it firsthand: In 2014, Lake Erie’s cyanobacteria blooms from phosphorus runoff shut down Toledo’s water supply for two days and forced the city to spend $500 million on water treatment upgrades.

Most of the studies surveyed by Keiser and his team were missing other kinds of benefits, too. The reports usually left out the value of eliminating certain toxic and nonconventional pollutants — molecules such as bisphenol A, or BPA, and perfluorooctanoic acid, or PFOA (SN: 10/3/15, p. 12). In high quantities, these compounds, which are used to make some plastics and nonstick coatings, can cause harm to humans and wildlife. Many studies also didn’t include discussion of how the quality of surface waters can affect groundwater, which is a major source of drinking water for many people.

A lack of data on water quality may also limit studies, Keiser’s team suggests. While there’s a national database tracking daily local air pollution levels, the data from various water quality monitoring programs aren’t centralized. That makes gathering and evaluating trends in water quality harder.

Plus, there are the intangibles — the value of aquatic species that are essential to the food chain, for example.
“Some things are just inherently difficult to put a dollar [value] on,” says Robin Craig, an environmental law professor at the University of Utah in Salt Lake City. “What is it worth to have a healthy native ecosystem?… That’s where it can get very subjective very fast.”

That subjectivity can allow agencies to analyze policies in ways that suit their own political agendas, says Matthew Kotchen, an environmental economist at Yale University. An example: the wildly different assessments by the Obama and Trump administrations of the value gained from the 2015 Clean Water Rule, also known as the Waters of the United States rule.

The rule, passed under President Barack Obama, clarified the definition of waters protected under the 1972 Clean Water Act to include tributaries and wetlands connected to larger bodies of water. The Environmental Protection Agency estimated in 2015 that the rule would result in yearly economic benefits ranging from $300 million to $600 million, edging out the predicted annual costs of $200 million to $500 million. But in 2017, Trump’s EPA reanalyzed the rule and proposed rolling it back, saying that the agency had now calculated just $30 million to $70 million in annual benefits.

The difference in the conclusions came down to the consideration of wetlands: The 2015 analysis found that protecting wetlands, such as marshes and bogs that purify water, tallied up to $500 million in annual benefits. The Trump administration’s EPA, however, left wetlands out of the calculation entirely, says Kotchen, who analyzed the policy swing in Science in 2017.

Currently, the rule has gone into effect in 26 states, but is still tied up in legal challenges.

It’s an example of how methodology — and what counts as a benefit — can have a huge impact on the apparent value of environmental policies and laws.

The squishiness in analyzing environmental benefits underlies many of the Trump administration’s proposed rollbacks of Obama-era environmental legislation, not just ones about water pollution, Kotchen says. There are guidelines for how such cost-benefit analyses should be carried out, he says, but there’s still room for researchers or government agencies to choose what to include or exclude.

In June, the EPA, then under the leadership of Scott Pruitt, proposed revising the way the agency does cost-benefit analyses to no longer include so-called indirect benefits. For example, in evaluating policies to reduce carbon dioxide emissions, the agency would ignore the fact that those measures also reduce other harmful air pollutants. The move would, overall, make environmental policies look less beneficial.

These sharp contrasts in how presidential administrations approach environmental impact studies are not unprecedented, says Craig, the environmental law professor. “Pretty much every time we change presidents, the priorities for how to weigh those different elements change.”

A lack of sleep can induce anxiety

SAN DIEGO — A sleepless night can leave the brain spinning with anxiety the next day.

In healthy adults, overnight sleep deprivation triggered anxiety the next morning, along with altered brain activity patterns, scientists reported November 4 at the annual meeting of the Society for Neuroscience.

People with anxiety disorders often have trouble sleeping. The new results uncover the reverse effect — that poor sleep can induce anxiety. The study shows that “this is a two-way interaction,” says Clifford Saper, a sleep researcher at Harvard Medical School and Beth Israel Deaconess Medical Center in Boston who wasn’t involved in the study. “The sleep loss makes the anxiety worse, which in turn makes it harder to sleep.”
Sleep researchers Eti Ben Simon and Matthew Walker, both of the University of California, Berkeley, studied the anxiety levels of 18 healthy people. Following either a night of sleep or a night of staying awake, these people took anxiety tests the next morning. After sleep deprivation, anxiety levels in these healthy people were 30 percent higher than when they had slept. On average, the anxiety scores reached levels seen in people with anxiety disorders, Ben Simon said November 5 in a news briefing.

What’s more, sleep-deprived people’s brain activity changed. In response to emotional videos, brain areas involved in emotions were more active, and the prefrontal cortex, an area that can put the brakes on anxiety, was less active, functional MRI scans showed.

The results suggest that poor sleep “is more than just a symptom” of anxiety, but in some cases, may be a cause, Ben Simon said.

Why a chemistry teacher started a science board game company

A physicist, a gamer and two editors walk into a bar. No, this isn’t the setup for some joke. After work one night, a few Science News staffers tried out a new board game, Subatomic. This deck-building game combines chemistry and particle physics for an enjoyable — and educational — time.

Subatomic is simple to grasp: Players use quark and photon cards to build protons, neutrons and electrons. With those three particles, players then construct chemical elements to score points. Scientists are the wild cards: Joseph J. Thomson, Maria Goeppert-Mayer, Marie Curie and other Nobel laureates who discovered important things related to the atom provide special abilities or help thwart other players.
The game doesn’t shy away from difficult or unfamiliar concepts. Many players might be unfamiliar with quarks, a group of elementary particles. But after a few rounds, it’s ingrained in your brain that, for example, two up quarks and one down quark create a proton. And Subatomic includes a handy booklet that explains in easy-to-understand terms the science behind the game. The physicist in our group vouched for the game’s accuracy but had one qualm: Subatomic claims that two photons, or particles of light, can create an electron. That’s theoretically possible, but scientists have yet to confirm it in the lab.
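The quark bookkeeping the game drills into players can be checked with simple charge arithmetic. This sketch is illustrative, not part of the game's rules: up and down quarks carry fractional electric charges, and three of them sum to a whole-number charge.

```python
# Illustrative check of the quark combinations Subatomic has players assemble:
# electric charges in units of the proton charge, summed exactly with Fraction.
from fractions import Fraction

CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Sum the fractional electric charges of three quarks."""
    return sum(CHARGE[q] for q in quarks)

print(baryon_charge(["up", "up", "down"]))    # proton:  1
print(baryon_charge(["up", "down", "down"]))  # neutron: 0
```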

The mastermind behind Subatomic is John Coveyou, who has a master’s degree in energy, environmental and chemical engineering. As the founder and CEO of Genius Games, he has created six other games, including Ion (SN: 5/30/15, p. 29) and Linkage (SN: 12/27/14, p. 32). Next year, he’ll add a periodic table game to the list. Because Science News has reviewed several of his games, we decided to talk with Coveyou about where he gets his inspiration and how he includes real science in his products. The following discussion has been edited for length and clarity.
SN: When did you get interested in science?

Coveyou: My mom was mentally and physically disabled, and my dad was in and out of prison and mental institutions. So early on, things were very different for me. I ended up leaving home when I was in high school, hopscotching around from 12 different homes throughout my junior and senior year. I almost dropped out, but I had a lot of teachers who were amazing mentors. I didn’t know what else to do, so I joined the army. While I was in Iraq, I had a bunch of science textbooks shipped to me, and I read them in my free time. They took me out of the environments I was in and became extremely therapeutic. A lot of the issues we face as a society can be worked on by the next generation having a command of the sciences. So I’m very passionate about teaching people the sciences and helping people find joy in them.

SN: Why did you start creating science games?

Coveyou: I was teaching chemistry at a community college, and I noticed that my students were really intimidated by the chemistry concepts before they even came into the classroom. They really struggled with a lot of the basic terminology. At the same time, I’ve been a board gamer pretty much my whole life. And it kind of hit me like, “Whoa, wait a second. What if I made some games that taught some of the concepts that I’m trying to teach my chemistry students?” So I just took a shot at it. The first couple of games were terrible. I didn’t really know what I was doing, but I kept at it.

SN: How do you test the games?

Coveyou: We first test with other gamers. Once we’re ready to get feedback from the general public, we go to middle school or high school students. Once we test a game with people face-to-face, we will send it across the world to about 100 to 200 different play testers, and those vary from your hard-core gamers to homeschool families to science teachers, who try it in the classroom.

SN: How do you incorporate real science into your games?

Coveyou: I pretty much always start with a science concept in mind and think about how we can create a game that best reflects the science that we want to communicate. For all of our upcoming games, we include a booklet about the science. That document is not created by Genius Games. We have about 20 to 30 Ph.D.s and doctors across the globe who write the content and edit each other. That’s been a real treat to actually show players how the game is accurate. We’ve had so many scientists and teachers who are just astonished that we created something like this that was accurate, but also fun to play.

Voyager 2 spacecraft enters interstellar space

Voyager 2 has entered interstellar space. The spacecraft slipped out of the huge bubble of particles that encircles the solar system on November 5, becoming the second ever human-made craft to cross the heliopause, the boundary between the sun and the stars.

Coming in second place is no mean achievement. Voyager 1 became the first spacecraft to exit the solar system in 2012. But that craft’s plasma instrument stopped working in 1980, leaving scientists without a direct view of the solar wind, hot charged particles constantly streaming from the sun (SN Online: 9/12/13). Voyager 2’s plasma sensors are still working, providing unprecedented views of the space between stars.

“We’ve been waiting with bated breath for the last couple of months for us to be able to see this,” NASA solar physicist Nicola Fox said at a Dec. 10 news conference at the American Geophysical Union meeting in Washington, D.C.

NASA launched the twin Voyager spacecraft in 1977 on a grand tour of the solar system’s planets (SN: 8/19/17, p. 26). After that initial tour was over, both spacecraft continued traveling through the bubble of plasma that originates at the sun.
“When Voyager was launched, we didn’t know how large the bubble was, how long it would take to get [to its edge] and whether the spacecraft could last long enough to get there,” said Voyager project scientist Edward Stone of Caltech.

For most of Voyager 2’s journey, the spacecraft’s Plasma Science Experiment measured the speed, density, temperature, pressure and other properties of the solar wind. But on November 5, the experiment saw a sharp drop in the speed and the number of solar wind particles that hit the detector each second. At the same time, another detector started picking up more high-energy particles called cosmic rays that originate elsewhere in the galaxy.
Those measurements suggest that Voyager 2 has reached the region where the solar wind slams into the colder, denser population of particles that fill the space between stars. Voyager 2 is now a little more than 18 billion kilometers from the sun.
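To put that distance in perspective, a quick calculation (my arithmetic, not from the article) shows how long the spacecraft's radio signals now take to reach Earth.

```python
# Quick scale check: one-way light travel time from Voyager 2's position,
# roughly 18 billion kilometers from the sun.
C_KM_S = 299_792.458          # speed of light, km/s
distance_km = 18e9

one_way_s = distance_km / C_KM_S
print(f"{one_way_s / 3600:.1f} hours")   # roughly 16.7 hours each way
```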

Intriguingly, Voyager 2’s measurements of cosmic rays and magnetic fields — which Voyager 1 could still make when it crossed the boundary — did not exactly match up with Voyager 1’s observations.
“That’s what makes it interesting,” Stone said. The variations probably arise because the two spacecraft exited the heliosphere in different places, and because the sun is at a different part of its 11-year activity cycle than it was in 2012. “We would have been amazed if they had looked the same.”

The Voyagers probably have between five and 10 years left to continue exploring interstellar space, said Voyager project manager Suzanne Dodd from NASA’s Jet Propulsion Laboratory in Pasadena, Calif.

“Both spacecraft are very healthy if you consider them senior citizens,” Dodd said. The biggest concern is how much power they have left and how cold they are — Voyager 2 is currently about 3.6° Celsius, close to the freezing point of its hydrazine fuel. In the near future, the team will have to turn off some of the spacecraft’s instruments to keep the craft operating and sending data back to Earth.

“We do have difficult decisions ahead,” Dodd said. She added that her personal goal is to see the spacecraft last until 2027, for a total of 50 years in space. “That would be fantastic.”

A new implant uses light to control overactive bladders

A new soft, wireless implant may someday help people who suffer from overactive bladder get through the day with fewer bathroom breaks.

The implant harnesses a technique for controlling cells with light, known as optogenetics, to regulate nerve cells in the bladder. In experiments in rats with medication-induced overactive bladders, the device alleviated animals’ frequent need to pee, researchers report online January 2 in Nature.

Although optogenetics has traditionally been used for manipulating brain cells to study how the mind works, the new implant is part of a recent push to use the technique to tame nerve cells throughout the body (SN: 1/30/10, p. 18). Similar optogenetic implants could help treat disease and dysfunction in other organs, too.
“I was very happy to see this,” says Bozhi Tian, a materials scientist at the University of Chicago not involved in the work. An estimated 33 million people in the United States have overactive bladders. One available treatment is an implant that uses electric currents to regulate bladder nerve cells. But those implants “will stimulate a lot of nerves, not just the nerves that control the bladder,” Tian says. That can interfere with the function of neighboring organs, and continuous electrical stimulation can be uncomfortable.

The new optogenetic approach, however, targets specific nerves in only one organ and only when necessary. To control nerve cells with light, researchers injected a harmless virus carrying genetic instructions for bladder nerve cells to produce a light-activated protein called archaerhodopsin 3.0, or Arch. A stretchy sensor wrapped around the bladder tracks the wearer’s urination habits, and the implant wirelessly sends that information to a program on a tablet computer.
If the program detects the user heeding nature’s call at least three times per hour, it tells the implant to turn on a pair of tiny LEDs. The green glow of these micro light-emitting diodes activates the light-sensitive Arch proteins in the bladder’s nerve cells, preventing the cells from sending so many full-bladder alerts to the brain.
John Rogers, a materials scientist and bioengineer at Northwestern University in Evanston, Ill., and colleagues tested their implants by injecting rats with the overactive bladder–causing drug cyclophosphamide. Over the next several hours, the implants successfully detected when rats were passing water too frequently, and lit up green to bring the animals’ urination patterns back to normal.
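The detection logic the article describes amounts to a sliding-window event counter. The sketch below is a hypothetical reconstruction: the three-voids-per-hour threshold comes from the article, but the class, names and windowing details are invented for illustration.

```python
# Hypothetical sketch of the implant's closed-loop logic: count urination
# events in a sliding one-hour window and switch the LEDs on when the rate
# crosses the article's threshold of three per hour.
from collections import deque

WINDOW_S = 3600          # one hour, in seconds
THRESHOLD = 3            # voids per hour treated as "overactive"

class BladderController:
    def __init__(self):
        self.void_times = deque()
        self.leds_on = False

    def record_void(self, t):
        """Log a urination event at time t (seconds) and update the LEDs."""
        self.void_times.append(t)
        # drop events that have aged out of the one-hour window
        while self.void_times and t - self.void_times[0] > WINDOW_S:
            self.void_times.popleft()
        self.leds_on = len(self.void_times) >= THRESHOLD

ctrl = BladderController()
for t in [0, 600, 1200]:          # three voids in 20 minutes
    ctrl.record_void(t)
print(ctrl.leds_on)               # True: activate Arch, quiet the nerve signals
```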

Shriya Srinivasan, a medical engineer at MIT not involved in the work, is impressed with the short-term effectiveness of the implant. But, she says, longer-term studies may reveal complications with the treatment.

For instance, a patient might develop an immune reaction to the foreign Arch protein, which would cripple the protein’s ability to block signals from bladder nerves to the brain. But if proven safe and effective in the long term, similar optogenetic implants that sense and respond to organ motion may also help treat heart, lung or muscle tissue problems, she says.

Optogenetic implants could also monitor other bodily goings-on, says study coauthor Robert Gereau, a neuroscientist at Washington University in St. Louis. Hormone levels and tissue oxygenation or hydration, for example, could be tracked and used to trigger nerve-altering LEDs for medical treatment, he says.

4 things we’ll learn from the first closeup image of a black hole

Editor’s note: On April 10, the Event Horizon Telescope collaboration released a picture of the supermassive black hole at the center of galaxy M87. Read the full story here.

We’re about to see the first close-up of a black hole.

The Event Horizon Telescope, a network of eight radio observatories spanning the globe, has set its sights on a pair of behemoths: Sagittarius A*, the supermassive black hole at the Milky Way’s center, and an even more massive black hole 53.5 million light-years away in galaxy M87 (SN Online: 4/5/17).
In April 2017, the observatories teamed up to observe the black holes’ event horizons, the boundary beyond which gravity is so extreme that even light can’t escape (SN: 5/31/14, p. 16). After almost two years of crunching the data, scientists are gearing up to release the first images in April.

Here’s what scientists hope those images can tell us.

What does a black hole really look like?
Black holes live up to their names: The great gravitational beasts emit no light in any part of the electromagnetic spectrum, so they themselves don’t look like much.

But astronomers know the objects are there because of a black hole’s entourage. As a black hole’s gravity pulls in gas and dust, matter settles into an orbiting disk, with atoms jostling one another at extreme speeds. All that activity heats the matter white-hot, so it emits X-rays and other high-energy radiation. The most voraciously feeding black holes in the universe have disks that outshine all the stars in their galaxies (SN Online: 3/16/18).
The EHT’s image of the Milky Way’s Sagittarius A*, also called Sgr A*, is expected to capture the black hole’s shadow on its accompanying disk of bright material. Computer simulations and the laws of gravitational physics give astronomers a pretty good idea of what to expect. Because of the intense gravity near a black hole, the disk’s light will be warped around the event horizon in a ring, so even the material behind the black hole will be visible.
And the image will probably look asymmetrical: Gravity will bend light from the inner part of the disk toward Earth more strongly than the outer part, making one side appear brighter in a lopsided ring.

Does general relativity hold up close to a black hole?
The exact shape of the ring may help break one of the most frustrating stalemates in theoretical physics.

The twin pillars of physics are Einstein’s theory of general relativity, which governs massive and gravitationally rich things like black holes, and quantum mechanics, which governs the weird world of subatomic particles. Each works precisely in its own domain. But they can’t work together.

“General relativity as it is and quantum mechanics as it is are incompatible with each other,” says physicist Lia Medeiros of the University of Arizona in Tucson. “Rock, hard place. Something has to give.” If general relativity buckles at a black hole’s boundary, it may point the way forward for theorists.

Since black holes are the most extreme gravitational environments in the universe, they’re the best places to crash-test theories of gravity. It’s like throwing theories at a wall and seeing whether — or how — they break. If general relativity does hold up, scientists expect the black hole’s shadow, and thus the ring, to have a particular shape; if Einstein’s theory of gravity breaks down, the shadow will look different.

Medeiros and her colleagues ran computer simulations of 12,000 different black hole shadows that could differ from Einstein’s predictions. “If it’s anything different, [alternative theories of gravity] just got a Christmas present,” says Medeiros, who presented the simulation results in January in Seattle at the American Astronomical Society meeting. Even slight deviations from general relativity could create shadows different enough for EHT to probe, letting astronomers quantify how far what they see departs from what they expect.

Do stellar corpses called pulsars surround the Milky Way’s black hole?
Another way to test general relativity around black holes is to watch how stars careen around them. As light flees the extreme gravity in a black hole’s vicinity, its waves get stretched out, making the light appear redder. This process, called gravitational redshift, is predicted by general relativity and was observed near SgrA* last year (SN: 8/18/18, p. 12). So far, so good for Einstein.
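For a non-spinning black hole, this stretching of light can be written down compactly (a standard general-relativity result, not given in the article): light emitted at radius $r$ from a mass $M$ arrives redshifted by

```latex
1 + z = \left(1 - \frac{2GM}{rc^{2}}\right)^{-1/2}
```

The redshift $z$ grows without bound as $r$ approaches the Schwarzschild radius $2GM/c^{2}$, which is why light from just outside a black hole’s event horizon is so dramatically reddened.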

An even better way to do the same test would be with a pulsar, a rapidly spinning stellar corpse that sweeps the sky with a beam of radiation in a regular cadence that makes it appear to pulse (SN: 3/17/18, p. 4). Gravitational redshift would mess up the pulsar’s metronomic pacing, potentially giving a far more precise test of general relativity.

“The dream for most people who are trying to do SgrA* science, in general, is to try to find a pulsar or pulsars orbiting” the black hole, says astronomer Scott Ransom of the National Radio Astronomy Observatory in Charlottesville, Va. “There are a lot of quite interesting and quite deep tests of [general relativity] that pulsars can provide, that EHT [alone] won’t.”

Despite careful searches, no pulsars have been found near enough to SgrA* yet, partly because gas and dust in the galactic center scatters their beams and makes them difficult to spot. But EHT is taking the best look yet at that center in radio wavelengths, so Ransom and colleagues hope it might be able to spot some.

“It’s a fishing expedition, and the chances of catching a whopper are really small,” Ransom says. “But if we do, it’s totally worth it.”

How do some black holes make jets?
Some black holes are ravenous gluttons, pulling in massive amounts of gas and dust, while others are picky eaters. No one knows why. SgrA* seems to be one of the fussy ones, with a surprisingly dim accretion disk despite its 4 million solar mass heft. EHT’s other target, the black hole in galaxy M87, is a voracious eater, weighing in at between about 3.5 billion and 7.22 billion solar masses. And it doesn’t just amass a bright accretion disk. It also launches a bright, fast jet of charged subatomic particles that stretches for about 5,000 light-years.

“It’s a little bit counterintuitive to think a black hole spills out something,” says astrophysicist Thomas Krichbaum of the Max Planck Institute for Radio Astronomy in Bonn, Germany. “Usually people think it only swallows something.”

Many other black holes produce jets that are longer and wider than entire galaxies and can extend billions of light-years from the black hole. “The natural question arises: What is so powerful to launch these jets to such large distances?” Krichbaum says. “Now with the EHT, we can for the first time trace what is happening.”

EHT’s measurements of M87’s black hole will help estimate the strength of its magnetic field, which astronomers think is related to the jet-launching mechanism. And measurements of the jet’s properties when it’s close to the black hole will help determine where the jet originates — in the innermost part of the accretion disk, farther out in the disk or from the black hole itself. Those observations might also reveal whether the jet is launched by something about the black hole itself or by the fast-flowing material in the accretion disk.

Since jets can carry material out of the galactic center and into the regions between galaxies, they can influence how galaxies grow and evolve, and even where stars and planets form (SN: 7/21/18, p. 16).

“It is important to understanding the evolution of galaxies, from the early formation of black holes to the formation of stars and later to the formation of life,” Krichbaum says. “This is a big, big story. We are just contributing with our studies of black hole jets a little bit to the bigger puzzle.”

Editor’s note: This story was updated April 1, 2019, to correct the mass of M87’s black hole; the entire galaxy’s mass is 2.4 trillion solar masses, but the black hole itself weighs in at several billion solar masses. In addition, the black hole simulation is an example of one that upholds Einstein’s theory of general relativity, not one that deviates from it.

In mice, anxiety isn’t all in the head. It can start in the heart

When you’re stressed and anxious, you might feel your heart race. Is your heart racing because you’re afraid? Or does your speeding heart itself contribute to your anxiety? Both could be true, a new study in mice suggests.

By artificially increasing the heart rates of mice, scientists were able to increase anxiety-like behaviors — ones that the team then calmed by turning off a particular part of the brain. The study, published in the March 9 Nature, shows that in high-risk contexts, a racing heart could go to your head and increase anxiety. The findings could offer a new angle for studying and, potentially, treating anxiety disorders.
The idea that body sensations might contribute to emotions in the brain goes back at least to one of the founders of psychology, William James, says Karl Deisseroth, a neuroscientist at Stanford University. In James’ 1890 book The Principles of Psychology, he put forward the idea that emotion follows what the body experiences. “We feel sorry because we cry, angry because we strike, afraid because we tremble,” James wrote.

The brain certainly can sense internal body signals, a phenomenon called interoception. But whether those sensations — like a racing heart — can contribute to emotion is difficult to prove, says Anna Beyeler, a neuroscientist at the French National Institute of Health and Medical Research in Bordeaux. She studies brain circuitry related to emotion and wrote a commentary on the new study but was not involved in the research. “I’m sure a lot of people have thought of doing these experiments, but no one really had the tools,” she says.

Deisseroth has spent his career developing those tools. He is one of the scientists who developed optogenetics — a technique that uses viruses to modify the genes of specific cells to respond to bursts of light (SN: 6/18/21; SN: 1/15/10). Scientists can use the flip of a light switch to activate or suppress the activity of those cells.
In the new study, Deisseroth and his colleagues used a light attached to a tiny vest over a mouse’s genetically engineered heart to change the animal’s heart rate. When the light was off, a mouse’s heart pumped at about 600 beats per minute. But when the team turned on a light that flashed at 900 beats per minute, the mouse’s heartbeat followed suit. “It’s a nice reasonable acceleration, [one a mouse] would encounter in a time of stress or fear,” Deisseroth explains.

When the mice felt their hearts racing, they showed anxiety-like behavior. In risky scenarios — like open areas where a little mouse might be someone’s lunch — the rodents slunk along the walls and lurked in darker corners. When pressing a lever for water that could sometimes be coupled with a mild shock, mice with normal heart rates still pressed without hesitation. But mice with racing hearts decided they’d rather go thirsty.

“Everybody was expecting that, but it’s the first time that it has been clearly demonstrated,” Beyeler says.
The researchers also scanned the animals’ brains to find areas that might be processing the increased heart rate. One of the biggest signals, Deisseroth says, came from the posterior insula (SN: 4/25/16). “The insula was interesting because it’s highly connected with interoceptive circuitry,” he explains. “When we saw that signal, [our] interest was definitely piqued.”

Using more optogenetics, the team reduced activity in the posterior insula, which decreased the mice’s anxiety-like behaviors. The animals’ hearts still raced, but they behaved more normally, spending some time in open areas of mazes and pressing levers for water without fear.
A lot of people are very excited about the work, says Wen Chen, the branch chief of basic medicine research for complementary and integrative health at the National Center for Complementary and Integrative Health in Bethesda, Md. “No matter what kind of meetings I go into, in the last two days, everybody brought up this paper,” says Chen, who wasn’t involved in the research.

The next step, Deisseroth says, is to look at other parts of the body that might affect anxiety. “We can feel it in our gut sometimes, or we can feel it in our neck or shoulders,” he says. Using optogenetics to tense a mouse’s muscles, or give them tummy butterflies, might reveal other pathways that produce fearful or anxiety-like behaviors.

Understanding the link between heart and head could eventually factor into how doctors treat panic and anxiety, Beyeler says. But the path between the lab and the clinic, she notes, is much more convoluted than that of the heart to the head.