How fingerprints form was a mystery — until now

Scientists have finally figured out how those arches, loops and whorls formed on your fingertips.

While in the womb, fingerprint-defining ridges expand outward in waves starting from three different points on each fingertip. The raised skin arises in a striped pattern thanks to interactions between three molecules that follow what’s known as a Turing pattern, researchers report February 9 in Cell. How those ridges spread from their starting sites — and merge — determines the overarching fingerprint shape.

Fingerprints are unique and last for a lifetime. They’ve been used to identify individuals since the 1800s. Several theories have been put forth to explain how fingerprints form, including spontaneous skin folding, molecular signaling and the idea that ridge pattern may follow blood vessel arrangements.

Scientists knew that the ridges that characterize fingerprints begin to form as downward growths into the skin, like trenches. Over the few weeks that follow, the quickly multiplying cells in the trenches start growing upward, resulting in thickened bands of skin.

Since budding fingerprint ridges and developing hair follicles have similar downward structures, researchers in the new study compared cells from the two locations. The team found that both sites share some types of signaling molecules — messengers that transfer information between cells — including three known as WNT, EDAR and BMP. Further experiments revealed that WNT tells cells to multiply, forming ridges in the skin, and to produce EDAR, which in turn further boosts WNT activity. BMP thwarts these actions.

To examine how these signaling molecules might interact to form patterns, the team adjusted the molecules’ levels in mice. Mice don’t have fingerprints, but their toes have striped ridges in the skin comparable to human prints. “We turn a dial — or molecule — up and down, and we see the way the pattern changes,” says developmental biologist Denis Headon of the University of Edinburgh.

Increasing EDAR resulted in thicker, more spaced-out ridges, while decreasing it led to spots rather than stripes. The opposite occurred with BMP, since it hinders EDAR production.

That switch between stripes and spots is a signature change seen in systems governed by Turing reaction-diffusion, Headon says. This mathematical theory, proposed in the 1950s by British mathematician Alan Turing, describes how chemicals interact and spread to create patterns seen in nature (SN: 7/2/10). When tested, though, it explains only some patterns (SN: 1/21/14).
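The stripes-versus-spots switch Headon describes can be reproduced with a generic two-chemical reaction-diffusion system. The sketch below uses the classic Gray-Scott model, an illustrative stand-in rather than the WNT, EDAR and BMP network from the study; nudging the feed and kill parameters shifts the system between regimes commonly associated with stripe-like and spot-like patterns, much like dialing a molecule up or down.

```python
import numpy as np

def laplacian(a):
    # 5-point stencil with periodic boundaries
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

def gray_scott(f, k, steps=3500, n=64, seed=0):
    """Evolve a generic activator-inhibitor (Gray-Scott) system.
    Different feed/kill rates (f, k) favor stripes or spots."""
    rng = np.random.default_rng(seed)
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # seed a noisy square in the middle so patterns can nucleate
    s = n // 4
    v[s:3*s, s:3*s] = 0.25 + 0.05 * rng.random((2 * s, 2 * s))
    u[s:3*s, s:3*s] = 0.5
    du, dv = 0.16, 0.08  # diffusion rates (dimensionless, explicit-Euler stable)
    for _ in range(steps):
        uvv = u * v * v
        u += du * laplacian(u) - uvv + f * (1 - u)
        v += dv * laplacian(v) + uvv - (f + k) * v
    return v

# Parameter pairs often cited for maze/stripe-forming and spot-forming regimes
stripes = gray_scott(f=0.0545, k=0.062)
spots = gray_scott(f=0.0367, k=0.0649)
```

Rendering `stripes` and `spots` as images (e.g., with matplotlib) shows the qualitative regime change; the specific parameter values are conventional demo choices, not measurements from the paper.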

Mouse digits, however, are too tiny to give rise to the elaborate shapes seen in human fingerprints. So, the researchers used computer models to simulate a Turing pattern spreading from the three previously known ridge initiation sites on the fingertip: the center of the finger pad, under the nail and at the joint’s crease nearest the fingertip.

By altering the relative timing, location and angle of these starting points, the team could create each of the three most common fingerprint patterns — arches, loops and whorls — and even rarer ones. Arches, for instance, can form when finger pad ridges get a slow start, allowing ridges originating from the crease and under the nail to occupy more space.

“It’s a very well-done study,” says developmental and stem cell biologist Sarah Millar, director of the Black Family Stem Cell Institute at the Icahn School of Medicine at Mount Sinai in New York City.

Controlled competition between molecules also determines hair follicle distribution, says Millar, who was not involved in the work. The new study, she says, “shows that the formation of fingerprints follows along some basic themes that have already been worked out for other types of patterns that we see in the skin.”

Millar notes that people with gene mutations that affect WNT and EDAR have skin abnormalities. “The idea that those molecules might be involved in fingerprint formation was floating around,” she says.

Overall, Headon says, the team aims to aid formation of skin structures, like sweat glands, when they’re not developing properly in the womb, and maybe even after birth.

“What we want to do, in broader terms, is understand how the skin matures.”

The deadly VEXAS syndrome is more common than doctors thought

A mysterious new disease may be to blame for severe, unexplained inflammation in older men. Now, researchers have their first good look at who the disease strikes, and how often.

VEXAS syndrome, an illness discovered just two years ago, affects nearly 1 in 4,000 men over 50 years old, scientists estimate January 24 in JAMA. The disease also occurs in older women, though less frequently. Altogether, more than 15,000 people in the United States may be suffering from the syndrome, says study coauthor David Beck, a clinical geneticist at NYU Langone Health in New York City. Those numbers indicate that physicians should be on the lookout for VEXAS, Beck says. “It’s underrecognized and underdiagnosed. A lot of physicians aren’t yet aware of it.”

Beck’s team reported discovering VEXAS syndrome in 2020, linking mutations in a gene called UBA1 to a suite of symptoms including fever, low blood cell count and inflammation. His team’s new study is the first to estimate how often VEXAS occurs in the general population — and the results are surprising. “It’s more prevalent than we suspected,” says Emma Groarke, a hematologist at the National Institutes of Health in Bethesda, Md., who was not involved with the study.

VEXAS tends to show up later in life — after people somehow acquire UBA1 mutations in their blood cells. Patients may feel overwhelming fatigue and lethargy and may have skin rashes, Beck says. “The disease is progressive, and it’s severe.” VEXAS can also be deadly. Once a person’s symptoms begin, the median survival time is about 10 years, his team has found.

Until late 2020, no one knew that there was a genetic thread connecting VEXAS syndrome’s otherwise unexplained symptoms. In fact, individuals may be diagnosed with other conditions, including polyarteritis nodosa, an inflammatory blood disease, and relapsing polychondritis, a connective tissue disorder, before being diagnosed with VEXAS.

To ballpark the number of VEXAS-affected individuals, Beck’s team combed through electronic health records of more than 160,000 people in Pennsylvania, in a collaboration with the NIH and Geisinger Health. In people over 50, the disease-causing UBA1 mutations showed up in roughly 1 in 4,000 men. Among women in that age bracket, about 1 in 26,000 had the mutations.
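Those rates translate into the study's national ballpark with simple arithmetic. The population figures below are rough census-style estimates of my own, not numbers from the study, but they show how the "more than 15,000" total arises.

```python
# Back-of-envelope translation of the reported rates into national counts.
# Population sizes are rough illustrative estimates (an assumption,
# not figures from the study).
men_over_50 = 55_000_000
women_over_50 = 62_000_000

expected_cases = men_over_50 / 4_000 + women_over_50 / 26_000  # ~1 in 4,000 men; ~1 in 26,000 women
print(round(expected_cases))  # roughly 16,000, consistent with "more than 15,000"
```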

A genetic test of the blood can help doctors diagnose VEXAS, and treatments like steroids and other immunosuppressive drugs, which tamp down inflammation, can ease symptoms. Groarke and her NIH colleagues have also started a small phase II clinical trial testing bone marrow transplants as a way to swap patients’ diseased blood cells for healthy ones.

Beck says he hopes to raise awareness about the disease, though he recognizes that there’s much more work to do. In his team’s study, for instance, the vast majority of participants were white Pennsylvanians, so scientists don’t know how the disease affects other populations. Researchers also don’t know what spurs the blood cell mutations, nor how they spark an inflammatory frenzy in the body.

“The more patients that are diagnosed, the more we’ll learn about the disease,” Beck says. “This is just one step in the process of finding more effective therapies.”

Too much of this bacteria in the nose may worsen allergy symptoms

A type of bacteria that’s overabundant in the nasal passages of people with hay fever may worsen symptoms. Targeting that bacteria may provide a way to rein in ever-running noses.

Hay fever occurs when allergens, such as pollen or mold, trigger an inflammatory reaction in the nasal passages, leading to itchiness, sneezing and overflowing mucus. Researchers analyzed the composition of the microbial population in the noses of 55 people who have hay fever and those of 105 people who don’t. There was less diversity in the nasal microbiome of people who have hay fever and a whole lot more of a bacterial species called Streptococcus salivarius, the team reports online January 12 in Nature Microbiology.

S. salivarius was 17 times more abundant in the noses of allergy sufferers than in the noses of those without allergies, says Michael Otto, a molecular microbiologist at the National Institute of Allergy and Infectious Diseases in Bethesda, Md. That imbalance appears to play a part in further provoking allergy symptoms. In laboratory experiments with allergen-exposed cells that line the airways, S. salivarius boosted the cells’ production of proteins that promote inflammation.
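"Less diversity" in a microbiome is commonly quantified with something like the Shannon index, and relative abundances give the fold change. The read counts below are invented for illustration, with the dominant taxon chosen so the fold change matches the article's 17-fold figure.

```python
import math

def shannon_diversity(counts):
    # Shannon index H = -sum(p * ln p) over nonzero relative abundances
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Invented read counts for six nasal taxa; the first entry stands in for
# S. salivarius. Both samples total 1,000 reads.
healthy = [50, 300, 250, 200, 150, 50]
hay_fever = [850, 60, 40, 30, 10, 10]

# A community dominated by one species scores lower diversity
low = shannon_diversity(hay_fever)
high = shannon_diversity(healthy)

# Relative abundance 0.85 vs. 0.05: a 17-fold difference, as in the study
fold_change = (850 / sum(hay_fever)) / (50 / sum(healthy))
```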

And it turns out that S. salivarius really likes runny noses. One prominent, unpleasant symptom of hay fever is the overproduction of nasal discharge. The researchers found that S. salivarius binds very well to airway-lining cells exposed to an allergen and slathered in mucus — better than a comparison bacterium that also resides in the nose.

The close contact appears to be what makes the difference. It means that substances on S. salivarius’ surface that can drive inflammation — common among many bacteria — are close enough to exert their effect on cells, Otto says.

Hay fever, which disrupts daily activities and disturbs sleep, is estimated to affect as many as 30 percent of adults in the United States. The new research opens the door “to future studies targeting this bacteria” as a potential treatment for hay fever, says Mahboobeh Mahdavinia, a physician scientist who studies immunology and allergies at Rush University Medical Center in Chicago.

But any treatment would need to avoid harming the “good” bacteria that live in the nose, says Mahdavinia, who was not involved in the research.

The proteins on S. salivarius’ surface that are important to its ability to attach to mucus-covered cells might provide a target, says Otto. The bacteria bind to proteins called mucins found in the slimy, runny mucus. By learning more about S. salivarius’ surface proteins, Otto says, it may be possible to come up with “specific methods to block that adhesion.”

Lots of Tatooine-like planets around binary stars may be habitable

SEATTLE — Luke Skywalker’s home planet in Star Wars is the stuff of science fiction. But Tatooine-like planets in orbit around pairs of stars might be our best bet in the search for habitable planets beyond our solar system.

Many stars in the universe come in pairs. And lots of those should have planets orbiting them (SN: 10/25/21). That means there could be many more planets orbiting around binaries than around solitary stars like ours. But until now, no one had a clear idea about whether those planets’ environments could be conducive to life. New computer simulations suggest that, in many cases, life could imitate art.

Earthlike planets orbiting some configurations of binary stars can stay in stable orbits for at least a billion years, researchers reported January 11 at the American Astronomical Society meeting. That sort of stability, the researchers propose, would be enough to potentially allow life to develop, provided the planets aren’t too hot or cold.

Of the planets that stuck around, about 15 percent stayed in their habitable zone — a temperate region around their stars where water could stay liquid — most or even all of the time.

The researchers ran simulations of 4,000 configurations of binary stars, each with an Earthlike planet in orbit around them. The team varied things like the relative masses of the stars, the sizes and shapes of the stars’ orbits around each other, and the size of the planet’s orbit around the binary pair.

The scientists then tracked the motion of the planets for up to a billion years of simulated time to see if the planets would stay in orbit over the sorts of timescales that might allow life to emerge.

A planet orbiting binary stars can get kicked out of the star system due to complicated interactions between the planet and stars. In the new study, the researchers found that, for planets with large orbits around star pairs, only about 1 out of 8 were kicked out of the system. The rest were stable enough to continue to orbit for the full billion years. About 1 in 10 settled in their habitable zones and stayed there.

Of the 4,000 planets that the team simulated, roughly 500 maintained stable orbits that kept them in their habitable zones at least 80 percent of the time.
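The study's own approach was to integrate each orbit for a billion simulated years, but there is a widely used empirical shortcut for circumbinary stability: the Holman-Wiegert (1999) fit for the smallest planet orbit that survives around a binary. The sketch below implements that published fit as a rough plausibility check; it is not the method the researchers used.

```python
def critical_semimajor_axis(e, mu, a_binary=1.0):
    """Smallest long-term-stable circumbinary orbit, in units of the binary
    separation, from the Holman & Wiegert (1999) empirical fit.
    e: binary eccentricity; mu = m2 / (m1 + m2), the binary mass ratio."""
    ratio = (1.60 + 5.10 * e - 2.22 * e**2
             + 4.12 * mu - 4.27 * e * mu
             - 5.09 * mu**2 + 4.61 * e**2 * mu**2)
    return ratio * a_binary

# Equal-mass binary on a circular orbit: a planet must orbit at least
# ~2.4 binary separations out to be long-term stable
print(critical_semimajor_axis(e=0.0, mu=0.5))

# A more eccentric binary pushes the stability boundary farther out
print(critical_semimajor_axis(e=0.5, mu=0.5))
```

This is why varying the stars' orbital shapes and the planet's orbital size, as the team did, changes which simulated planets get ejected.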

“The habitable zone . . . as I’ve characterized it so far, spans from freezing to boiling,” said Michael Pedowitz, an undergraduate student at the College of New Jersey in Ewing who presented the research. Their definition is overly strict, he said, because they chose to model Earthlike planets without atmospheres or oceans. That’s simpler to simulate, but it also allows temperatures to fluctuate wildly on a planet as it orbits.
“An atmosphere and oceans would smooth over temperature variations fairly well,” says study coauthor Mariah MacDonald, an astrobiologist also at the College of New Jersey. An abundance of air and water would potentially allow a planet to maintain habitable conditions, even if it spent more of its time outside of the nominal habitable zone around a binary star system.

The number of potentially habitable planets “will increase once we add atmospheres,” MacDonald says, “but I can’t yet say by how much.”

She and Pedowitz hope to build more sophisticated models in the coming months, as well as extend their simulations beyond a billion years and include changes in the stars that can affect conditions in a solar system as it ages.

The possibility of stable and habitable planets in binary star systems is a timely issue, says Penn State astrophysicist Jason Wright, who was not involved in the study.

“At the time Star Wars came out,” he says, “we didn’t know of any planets outside the solar system, and wouldn’t for 15 years. Now we know that there are many and that they orbit these binary stars.”

These simulations of planets orbiting binaries could serve as a guide for future experiments, Wright says. “This is an under-explored population of planets. There’s no reason we can’t go after them, and studies like this are presumably showing us that it’s worthwhile to try.”

Why pandemic fatigue and COVID-19 burnout took over in 2022

2022 was the year many people decided the coronavirus pandemic had ended.

President Joe Biden said as much in an interview with 60 Minutes in September. “The pandemic is over,” he said while strolling around the Detroit Auto Show. “We still have a problem with COVID. We’re still doing a lot of work on it. But the pandemic is over.”

His evidence? “No one’s wearing masks. Everybody seems to be in pretty good shape.”

But the week Biden’s remarks aired, about 360 people were still dying each day from COVID-19 in the United States. Globally, about 10,000 deaths were recorded every week. That’s “10,000 too many, when most of these deaths could be prevented,” the World Health Organization Director-General Tedros Adhanom Ghebreyesus said in a news briefing at the time. Then, of course, there are the millions who are still dealing with lingering symptoms long after an infection.

Those staggering numbers have stopped alarming people, maybe because those stats came on the heels of two years of mind-boggling death counts (SN Online: 5/18/22). Indifference to the mounting death toll may reflect pandemic fatigue that settled deep within the public psyche, leaving many feeling over and done with safety precautions.

“We didn’t warn people about fatigue,” says Theresa Chapple-McGruder, an epidemiologist in the Chicago area. “We didn’t warn people about the fact that pandemics can last long and that we still need people to be willing to care about yourselves, your neighbors, your community.”

Public health agencies around the world, including in Singapore and the United Kingdom, reinforced the idea that we could “return to normal” by learning to “live with COVID.” The U.S. Centers for Disease Control and Prevention’s guidelines raised the threshold for case counts that would trigger masking (SN Online: 3/3/22). The agency also shortened suggested isolation times for infected people to five days, even though most people still test positive for the virus and are potentially infectious to others for several days longer (SN Online: 8/19/22).

The shifting guidelines bred confusion and put the onus for deciding when to mask, test and stay home on individuals. In essence, the strategy shifted from public health — protecting your community — to individual health — protecting yourself.

Doing your part can be exhausting, says Eric Kennedy, a sociologist specializing in disaster management at York University in Toronto. “Public health is saying, ‘Hey, you have to make the right choices every single moment of your life.’ Of course, people are going to get tired with that.”

Doing the right thing — from getting vaccinated to wearing masks indoors — didn’t always feel like it paid off on a personal level. As good as the vaccines are at keeping people from becoming severely ill or dying of COVID-19, they were not as effective at protecting against infection. This year, many people who tried hard to make safe choices and had avoided COVID-19 got infected by wily omicron variants (SN Online: 4/22/22). People sometimes got reinfected — some more than once (SN: 7/16/22 & 7/30/22, p. 8).

Those infections may have contributed to a sense of futility. “Like, ‘I did my best. And even with all of that work, I still got it. So why should I try?’ ” says Kennedy, head of a Canadian project monitoring the sociological effects of the COVID-19 pandemic.

Getting vaccinated, masking and getting drugs or antibody treatments can reduce the severity of infection and may cut the chances of infecting others. “We should have been talking about this as a community health issue and not a personal health issue,” Chapple-McGruder says. “We also don’t talk about the fact that our uptake [of these tools] is nowhere near what we need” to avoid the hundreds of daily deaths.

A lack of data about how widely the coronavirus is still circulating makes it difficult to say whether the pandemic is ending. In the United States, the influx of home tests was “a blessing and a curse,” says Beth Blauer, data lead for the Johns Hopkins University Coronavirus Resource Center. The tests gave an instant readout that told people whether they were infected and should isolate. But because those results were rarely reported to public health officials, true numbers of cases became difficult to gauge, creating a big data gap (SN Online: 5/27/22).

The flow of COVID-19 data from many state and local agencies also slowed to a trickle. In October, even the CDC began reporting cases and deaths weekly instead of daily. Altogether, undercounting of the coronavirus’s reach became worse than ever.

“We’re being told, ‘it’s up to you now to decide what to do,’ ” Blauer says, “but the data is not in place to be able to inform real-time decision making.”

With COVID-19 fatigue so widespread, businesses, governments and other institutions have to find ways to step up and do their part, Kennedy says. For instance, requiring better ventilation and filtration in public buildings could clean up indoor air and reduce the chance of spreading many respiratory infections, along with COVID-19. That’s a behind-the-scenes intervention that individuals don’t have to waste mental energy worrying about, he says.

The bottom line: People may have stopped worrying about COVID-19, but the virus isn’t done with us yet. “We have spent two-and-a-half years in a long, dark tunnel, and we are just beginning to glimpse the light at the end of that tunnel. But it is still a long way off,” WHO’s Tedros said. “The tunnel is still dark, with many obstacles that could trip us up if we don’t take care.” If the virus makes a resurgence, will we see it coming and will we have the energy to combat it again?

Space rocks may have bounced off baby Earth, but slammed into Venus

Squabbling sibling planets may have hurled space rocks when they were young.

Simulations suggest that space rocks the size of baby planets struck both the newborn Earth and Venus, but many of the rocks that only grazed Earth went on to hit — and stick to — Venus. That difference in early impacts could help explain why Earth and Venus are such different worlds today, researchers report September 23 in the Planetary Science Journal.

“The pronounced differences between Earth and Venus, in spite of their similar orbits and masses, have been one of the biggest puzzles in our solar system,” says planetary scientist Shigeru Ida of the Tokyo Institute of Technology, who was not involved in the new work. This study introduces “a new point that has not been raised before.”

Scientists have typically thought that there are two ways that collisions between baby planets can go. The objects could graze each other and each continue on its way, in a hit-and-run collision. Or two protoplanets could stick together, or accrete, making one larger planet. Planetary scientists often assume that every hit-and-run collision eventually leads to accretion. Objects that collide must have orbits that cross each other’s, so they’re bound to collide again and again, and eventually should stick.

But previous work from planetary scientist Erik Asphaug of the University of Arizona in Tucson and others suggests that isn’t so. It takes special conditions for two planets to merge, Asphaug says, like relatively slow impact speeds, so hit-and-runs were probably much more common in the young solar system.

Asphaug and colleagues wondered what that might have meant for Earth and Venus, two apparently similar planets with vastly different climates. Both worlds are about the same size and mass, but Earth is wet and clement while Venus is a searing, acidic hellscape (SN: 2/13/18).

“If they started out on similar pathways, somehow Venus took a wrong turn,” Asphaug says.

The team ran about 4,000 computer simulations in which Mars-sized protoplanets crashed into a young Earth or Venus, assuming the two planets were at their current distances from the sun. The researchers found that about half of the time, incoming protoplanets grazed Earth without directly colliding. Of those, about half went on to collide with Venus.
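Those reported fractions chain together as simple conditional arithmetic. The numbers below are the article's approximate figures, not exact counts from the paper.

```python
n_impactors = 4000         # simulated Mars-sized protoplanets
frac_graze_earth = 0.5     # "about half" grazed Earth without sticking
frac_then_hit_venus = 0.5  # "about half" of those went on to strike Venus

routed_to_venus = n_impactors * frac_graze_earth * frac_then_hit_venus
# Roughly a quarter of all incoming bodies bounce off Earth and end up at Venus
print(int(routed_to_venus))
```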

Unlike Earth, Venus ended up accreting most of the objects that hit it in the simulations. Hitting Earth first slowed incoming objects down enough to let them stick to Venus later, the study suggests. “You have this imbalance where things that hit the Earth, but don’t stick, tend to end up on Venus,” Asphaug says. “We have a fundamental explanation for why Venus ended up accreting differently from the Earth.”

If that’s really what happened, it would have had a significant effect on the composition of the two worlds. Earth would have ended up with more of the outer mantle and crust material from the incoming protoplanets, while Venus would have gotten more of their iron-rich cores.

The imbalance in impacts could even explain some major Venusian mysteries, like why the planet doesn’t have a moon, why it spins so slowly and why it lacks a magnetic field — though “these are hand-waving kind of conjectures,” Asphaug says.

Ida says he hopes that future work will look into those questions more deeply. “I’m looking forward to follow-up studies to examine if the new result actually explains the Earth-Venus difference,” he says.

The idea fits into a growing debate among planetary scientists about how the solar system grew up, says planetary scientist Seth Jacobson of Michigan State University in East Lansing. Was it built violently, with lots of giant collisions, or calmly, with planets growing smoothly via pebbles sticking together?

“This paper falls on the end of lots of giant impacts,” Jacobson says.

Each rocky planet in the solar system should have very different chemistry and structure depending on which scenario is true. But scientists know the chemistry and structure of only one planet with any confidence: Earth. And Earth’s early history has been overwritten by plate tectonics and other geologic activity. “Venus is the missing link,” Jacobson says. “Learning more about Venus’ chemistry and interior structure is going to tell us more about whether it had a giant impact or not.”

Three missions to Venus are expected to launch in the late 2020s and 2030s (SN: 6/2/21). Those should help, but none are expected to take the kind of detailed composition measurements that could definitively solve the mystery. That would take a long-lived lander, or a sample return mission, both of which would be extremely difficult on hot, hostile Venus.

“I wish there was an easier way to test it,” Jacobson says. “I think that’s where we should concentrate our energy as terrestrial planet formation scientists going forward.”

Satellite swarms may outshine the night sky’s natural constellations

Fleets of private satellites orbiting Earth will be visible to the naked eye in the next few years, sometimes all night long.

Companies like SpaceX and Amazon have launched hundreds of satellites into low orbits since 2019, with plans to launch thousands more in the works — a trend that’s alarming astronomers. The goal of these satellite “mega-constellations” is to bring high-speed internet around the globe, but these bright objects threaten to disrupt astronomers’ ability to observe the cosmos (SN: 3/12/20). “For astronomers, this is kind of a pants-on-fire situation,” says radio astronomer Harvey Liszt of the National Radio Astronomy Observatory in Charlottesville, Va.

Now, a new simulation of the potential positions and brightness of these satellites shows that, contrary to earlier predictions, casual sky watchers will have their view disrupted, too. And parts of the world will be affected more than others, astronomer Samantha Lawler of the University of Regina in Canada and her colleagues report in a paper posted September 9 at arXiv.org.

“How will this affect the way the sky looks to your eyeballs?” Lawler asks. “We humans have been looking up at the night sky and analyzing patterns there for as long as we’ve been human. It’s part of what makes us human.” These mega-constellations could mean “we’ll see a human-made pattern more than we can see the stars, for the first time in human history.”

Flat, smooth surfaces on satellites can reflect sunlight depending on their position in the sky. Earlier research had suggested that most of the new satellites would not be visible with the naked eye.

Lawler, along with Aaron Boley of the University of British Columbia and Hanno Rein of the University of Toronto at Scarborough in Canada, started building their simulation with public data about the launch plans of four companies — SpaceX’s Starlink, Amazon’s Kuiper, OneWeb and StarNet/GW — that had been filed with the U.S. Federal Communications Commission and the International Telecommunication Union. The filings detailed the expected orbital heights and angles of 65,000 satellites that could be launched over the next few years.

“It’s impossible to predict the future, but this is realistic,” says astronomer Meredith Rawls of the University of Washington in Seattle, who was not involved in the new study. “A lot of times when people make these simulations, they pick a number out of a hat. This really justifies the numbers that they pick.”

There are currently about 7,890 objects in Earth orbit, about half of which are operational satellites, according to the U.N. Office for Outer Space Affairs. But that number is increasing fast as companies launch more and more satellites (SN: 12/28/20). In August 2020, there were only about 2,890 operational satellites.

Next, the researchers computed how many satellites will be in the sky at different times of year, at different hours of the night and from different positions on Earth’s surface. They also estimated how bright the satellites were likely to be at different hours of the day and times of the year.

That calculation required a lot of assumptions because companies aren’t required to publish details about their satellites like the materials they’re made of or their precise shapes, both of which can affect reflectivity. But there are enough satellites in orbit that Lawler and colleagues could compare their simulated satellites to the light reflected down to Earth by the real ones.
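The core geometric test in a simulation like this is whether a given satellite is inside Earth's shadow or still catching sunlight. A minimal sketch of the standard cylindrical-shadow approximation is below; the names and the 550-kilometer altitude are illustrative choices, not the team's code.

```python
import numpy as np

R_EARTH_KM = 6378.0  # Earth's equatorial radius

def is_sunlit(sat_pos_km, sun_dir):
    """Cylindrical Earth-shadow test. A satellite is in shadow only if it is
    on the night side (negative projection onto the Sun direction) AND its
    perpendicular distance from the shadow axis is less than Earth's radius."""
    sun_dir = np.asarray(sun_dir, dtype=float)
    sun_dir = sun_dir / np.linalg.norm(sun_dir)
    along = np.dot(sat_pos_km, sun_dir)               # component toward the Sun
    perp = np.linalg.norm(sat_pos_km - along * sun_dir)
    return along > 0 or perp > R_EARTH_KM

sun = np.array([1.0, 0.0, 0.0])  # unit vector pointing at the Sun

# A satellite 550 km up, directly on the night side, sits in shadow;
# the same satellite a quarter-orbit around is still in sunlight.
print(is_sunlit(np.array([-(R_EARTH_KM + 550), 0.0, 0.0]), sun))
print(is_sunlit(np.array([0.0, R_EARTH_KM + 550, 0.0]), sun))
```

Because higher satellites poke out of the shadow cylinder for more of the night, this geometry is what drives the latitude and season effects the team reports.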

The simulations showed that “the way the night sky is going to change will not affect all places equally,” Lawler says. The places where naked-eye stargazing will be most affected are at latitudes 50° N and 50° S, regions that cross lower Canada, much of Europe, Kazakhstan and Mongolia, and the southern tips of Chile and Argentina, the researchers found.

“The geometry of sunlight in the summer means there will be hundreds of visible satellites all night long,” Lawler says. “It’s bad everywhere, but it’s worse there.” For her, this is personal: She lives at 50° N.

Closer to the equator, where many research observatories are located, there is a period of about three hours in the winter and near the time of the spring and fall equinoxes with few or no sunlit satellites visible. But there are still hundreds of sunlit satellites all night at these locations in the summer.

A few visible satellites can be a fun spectacle, Lawler concedes. “I think we really are at a transition point here where right now, seeing a satellite, or even a Starlink train, is cool and different and wow, that’s amazing,” she says. “I used to look up when the [International Space Station] was overhead.” But she compares the coming change to watching one car go down the road 100 years ago, versus living next to a busy freeway now.

“Every sixteenth star will actually be moving,” she says. “I hope I’m wrong. I’ve never wanted to be wrong about a simulation more than this. But without mitigation, this is what the sky will look like in a few years.”

Astronomers have been meeting with representatives from private companies, as well as space lawyers and government officials, to work out compromises and mitigation strategies. Companies have been testing ways to reduce reflectivity, like shading the satellites with a “visor.” Other proposed strategies include limiting the satellites to lower orbits, where they move faster across the sky and leave a fainter streak in telescope images. Counterintuitively, lower satellites may be better for some astronomy research, Rawls says. “They move out of the way quick.”

But that lower altitude strategy will mean more visible satellites for other parts of the world, and more that are visible to the naked eye. “There’s not some magical orbital altitude that solves all our problems,” Rawls says. “There are some latitudes on Earth where no matter what altitude you put your satellites at, they’re going to be all over the darn place. The only way out of this is fewer satellites.”

There are currently no regulations concerning how bright a satellite can be or how many satellites a private company can launch. Scientists are grateful that companies are willing to work with them, but nervous that their cooperation is voluntary.

“A lot of the people who work on satellites care about space. They’re in this industry because they think space is awesome,” Rawls says. “We share that, which helps. But it doesn’t fix it. I think we need to get some kind of regulation as soon as possible.” (Representatives from Starlink, Kuiper and OneWeb did not respond to requests for comment.)

Efforts are under way to bring the issue to the attention of the United Nations and to try to use existing environmental regulations to place limits on satellite launches, says study coauthor Boley (who also lives near 50° N).

Analogies to other global pollution problems, like space junk, can provide inspiration and precedents, he says. “There are a number of ways forward. We shouldn’t just lose hope. We can do things about this.”

NASA’s Perseverance rover snagged its first Martian rock samples

The Perseverance rover has captured its first two slices of Mars.

NASA’s latest Mars rover drilled into a flat rock nicknamed Rochette on September 1 and filled a roughly finger-sized tube with stone. The sample is the first ever destined to be sent back to Earth for further study. On September 7, the rover snagged a second sample from the same rock. Both are now stored in airtight tubes inside the rover’s body.

Getting pairs of samples from every rock it drills is “a little bit of an insurance policy,” says deputy project scientist Katie Stack Morgan of NASA’s Jet Propulsion Laboratory in Pasadena, Calif. It means the rover can drop identical stores of samples in two different places, boosting chances that a future mission will be able to pick up at least one set.

The successful drilling is a comeback story for Perseverance. The rover’s first attempt to take a bit of Mars ended with the sample crumbling to dust, leaving an empty tube (SN: 8/19/21). Scientists think that rock was too soft to hold up to the drill.
Nevertheless, the rover persevered.

“Even though some of its rocks are not, Mars is hard,” said Lori Glaze, director of NASA’s planetary science division, in a September 10 news briefing.

Rochette is a hard rock that appears to have been less severely eroded by millennia of Martian weather (SN: 7/14/20). (Fun fact: All the rocks Perseverance drills into will get names related to national parks; the region on Mars the rover is now exploring is called Mercantour, so the name Rochette — or “Little Rock” — comes from a village in France near Mercantour National Park.)

Rover measurements of the rock’s texture and chemistry suggest that it’s made of basalt and may have been part of an ancient lava flow. That’s useful because volcanic rocks preserve their ages well, Stack Morgan says. When scientists on Earth get their hands on the sample, they’ll be able to use the concentrations of certain elements and isotopes to figure out exactly how old the rock is — something that’s never been done for a pristine Martian rock.

Rochette also contains salt minerals that probably formed when the rock interacted with water over long time periods. That could suggest groundwater moving through the Martian subsurface, maybe creating habitable environments within the rocks, Stack Morgan says.

“It really feels like this rich treasure trove of information for when we get this sample back,” Stack Morgan says.

Once a future mission brings the rocks back to Earth, scientists can search inside those salts for tiny fluid bubbles that might be trapped there. “That would give us a glimpse of Jezero crater at the time when it was wet and was able to sustain ancient Martian life,” said planetary scientist Yulia Goreva of JPL at the news briefing.

Scientists will have to be patient, though — the earliest any samples will make it back to Earth is 2031. But it’s still a historic milestone, says planetary scientist Meenakshi Wadhwa of Arizona State University in Tempe.

“These represent the beginning of Mars sample return,” Wadhwa said at the news briefing. “I’ve dreamed of having samples back from Mars to analyze in my lab since I was a graduate student. We’ve talked about Mars sample return for decades. Now it’s starting to actually feel real.”

Astronomers may have seen a star gulp down a black hole and explode

For the first time, astronomers have captured solid evidence of a rare double cosmic cannibalism — a star swallowing a compact object such as a black hole or neutron star. In turn, that object gobbled the star’s core, causing it to explode and leave behind only a black hole.

The first hints of the gruesome event, described in the Sept. 3 Science, came from the Very Large Array (VLA), a radio telescope consisting of 27 enormous dishes in the New Mexican desert near Socorro. During the observatory’s scans of the night sky in 2017, a burst of radio energy as bright as the brightest exploding star — or supernova — as seen from Earth appeared in a dwarf star–forming galaxy approximately 500 million light-years away.

“We thought, ‘Whoa, this is interesting,’” says Dillon Dong, an astronomer at Caltech.

He and his colleagues made follow-up observations of the galaxy using the VLA and one of the telescopes at the W.M. Keck Observatory in Hawaii, which sees in the same optical light as our eyes. The Keck telescope caught a luminous outflow of material spewing in all directions at 3.2 million kilometers per hour from a central location, suggesting that an energetic explosion had occurred there in the past.
The team then found an extremely bright X-ray source in archival data from the Monitor of All Sky X-ray Image (MAXI) telescope, a Japanese instrument that sits on the International Space Station. This X-ray burst was in the same place as the radio one but had been observed back in 2014.

Piecing the data together, Dong and his colleagues think this is what happened: Long ago, a binary pair of stars were born orbiting each other; one died in a spectacular supernova and became either a neutron star or a black hole. As gravity brought the two objects closer together, the dead star actually entered the outer layers of its larger stellar sibling.

The compact object spiraled inside the still-living star for hundreds of years, eventually reaching, and then devouring, its partner’s core. During this time, the larger star shed huge amounts of gas and dust, forming a shell of material around the duo.

In the living star’s center, gravitational forces and complex magnetic interactions from the dead star’s munching launched enormous jets of energy, picked up as an X-ray flash in 2014, and caused the larger star to explode. Debris from the detonation smashed with colossal speed into the surrounding shell of material, generating the optical and radio light.

While theorists have previously envisioned such a scenario, dubbed a merger-triggered core collapse supernova, this appears to represent the first direct observation of this phenomenon, Dong says.

“They’ve done some pretty good detective work using these observations,” says Adam Burrows, an astrophysicist at Princeton University who was not involved in the new study. He says the findings should help constrain the timing of a process called common envelope evolution, in which one star becomes immersed inside another. Such stages in stars’ lives are relatively short-lived in cosmic time and difficult to both observe and simulate. Most of the time, the engulfing partner dies before its core is consumed, leading to two compact objects like white dwarfs, neutron stars or black holes orbiting one another.

The final stages of these systems are exactly what observatories like the Advanced Laser Interferometer Gravitational-Wave Observatory, or LIGO, detect when capturing spacetime’s ripples, Dong says (SN: 8/4/21). Now that astronomers know to look for these multiple lines of evidence, he expects them to find more examples of this strange phenomenon.

'Fire Nagy' chants take over Chicago after Bears' latest loss, including at Matt Nagy's son's football game

The "Fire Nagy" chants can be heard all over Chicago, from Soldier Field all the way to Matt Nagy's son's football games.

The Bears have now lost five straight games, and fans are frustrated, to say the least. They have aimed their anger at the 43-year-old head coach, who has found himself in the middle of the majority of Bears-related drama this season.
After Sunday's 16-13 loss to the Ravens, the Soldier Field crowd broke out into "Fire Nagy" chants. On Monday night, the chants could be heard at the Bulls vs. Pacers game at the United Center. And on Tuesday, video surfaced showing that "Fire Nagy" chants had taken over his own son's football game on Saturday night.
Nagy's son plays football for Lake Forest High School in a Chicago suburb. Lake Forest played Cary-Grove High School on Saturday, and Cary-Grove captured a blowout victory, which prompted the Cary-Grove student section to start yelling "Fire Nagy."
Cary-Grove principal Neil Lesinski posted an apology on Twitter on Tuesday morning.
It doesn't seem like the "Fire Nagy" chants will be going anywhere if the Bears continue to lose. They have a chance to get their fourth win of the season on Thanksgiving when they play the 0-9-1 Lions.

On Tuesday, there was a rumor circulating that Nagy's last game as coach would be against the Lions on Thursday. Nagy addressed these rumors with the media on Tuesday, claiming that they are "not accurate."