Ancient DNA reveals who is in Spain’s ‘pit of bones’ cave

Neandertals hung out in what’s now northern Spain around 430,000 years ago, an analysis of ancient DNA suggests. That’s an earlier Neandertal presence in Europe, by at least 30,000 years, than many researchers had assumed.

Fragments of nuclear DNA from a tooth and partial leg bone discovered at Sima de los Huesos, a chamber deep inside a Spanish cave, resemble corresponding parts of a previously reassembled Neandertal genome, researchers say in a study published online March 14 in Nature.

Not much nuclear DNA survives in such ancient fossils, say paleogeneticist Matthias Meyer of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and his colleagues. Meyer’s group recovered DNA fragments covering a fraction of 1 percent of the genomes from the Neandertal tooth and leg bone. Just enough DNA remained to enable comparisons with DNA of a Neandertal woman (SN: 1/25/14, p. 17) and a Denisovan woman (SN: 9/22/12, p. 5). Denisovans are considered close genetic cousins of Neandertals.

The early age for the new genetic finds challenges the idea that fossils from Sima de los Huesos, or pit of bones, come from a species called Homo heidelbergensis. Some researchers have suspected that by around 400,000 years ago, H. heidelbergensis gave rise to evolutionary precursors of both Neandertals and Homo sapiens.

An ancient genetic puzzle has also emerged at Sima de los Huesos. On one hand, nuclear DNA — which passes from both parents to their children — pegs the Spanish hominids as Neandertals. But mitochondrial DNA — typically inherited only from the mother — already extracted from one Sima de los Huesos fossil (SN: 12/28/13, p. 8) and described for a second fossil in the new study has more in common with Denisovans.

Denisovans lived in East Asia at least 44,000 years ago, but their evolutionary history is unknown.

If early Neandertals lived in northern Spain roughly 430,000 years ago, “we have to go back further in time to reach the common ancestor of Neandertals and Denisovans,” Meyer says.

The new genetic data from Sima de los Huesos now suggest that Denisovans split from Neandertals perhaps 450,000 years ago, says paleoanthropologist Chris Stringer of the Natural History Museum in London. Genetic and fossil evidence point to Neandertals and H. sapiens diverging from a common ancestor around 650,000 years ago, he proposes.

But it’s hard to say whether that common ancestor was H. heidelbergensis, Stringer adds. “Research must refocus on fossils from 400,000 to 800,000 years ago to determine which ones might lie on ancestral lineages of Neandertals, Denisovans and modern humans.”

Hominids throughout Eurasia during that time may have shared a mitochondrial DNA pattern observed in Sima de los Huesos Neandertals and Asian Denisovans, Meyer suggests. If that was the case, Neandertals acquired a new form of mitochondrial DNA by interbreeding with modern humans or their direct ancestors from Africa sometime between 430,000 and 100,000 years ago (SN: 3/19/16, p. 6).

Another possibility is that Neandertals traveled to Europe from Asia more than 430,000 years ago, carrying Denisovan mitochondrial DNA with them, says paleogeneticist Carles Lalueza-Fox of the Institute of Evolutionary Biology in Barcelona. Or hybrid descendants of early Neandertals and early Denisovans may have lived at Sima de los Huesos, carrying Denisovan mitochondrial DNA, he speculates.

“We really need more genetic data from Sima de los Huesos, and other sites of that age, to narrow down these scenarios,” Meyer says.

Cause of mass starfish die-offs is still a mystery

In the summer of 2013, an epidemic began sweeping through the intertidal zone off the west coast of North America. The victims were several species of sea star, including Pisaster ochraceus, a species that comes in orange and purple variants. (It’s also notable because it’s the starfish that provided ecology with the fundamental concept of a keystone species.) Affected individuals appeared to “melt,” losing grip with the rocks to which they were attached — and then losing their arms. This sea star wasting disease, as it is known, soon killed sea stars from Baja California to Alaska.

This wasn’t the first outbreak of sea star wasting disease. A 1978 outbreak in the Gulf of California, for instance, killed so many Heliaster kubiniji sun stars that the once ubiquitous species is now incredibly rare.

These past incidents, though, happened fast and within smaller regions, so scientists had struggled to figure out what had happened. With the latest outbreak happening over such a large — and well-studied — region and period of time, marine biologists have been able to gather more data on the disease than ever before. And they’re getting closer to figuring out just what happened in this latest incident.

One likely factor is the sea star-associated densovirus, which, in 2014, scientists reported finding in greater abundance in starfish with sea star wasting disease than in healthy sea stars. But the virus can’t be the only cause of the disease; it’s found in both healthy and sick sea stars, and it has been around since at least 1942, the earliest year it has been found in museum specimens. So there must be some other factor at play.

Earlier this year, scientists studying the outbreak in Washington state reported in the Proceedings of the Royal Society B that warm waters may increase disease progression and rates of death. Studies of California starfish came to a similar conclusion. But a new study, appearing May 4 in PLOS ONE, finds that may not be true for sea stars in Oregon. Bruce Menge and colleagues at Oregon State University took advantage of their long-term study of Oregon starfish to evaluate what happened to sea stars during the recent epidemic and found that wasting disease increased with cooler, not warmer, temperatures. “Given conflicting results on the role of temperature as a trigger of [sea star wasting disease], it seems most likely that multiple factors interacted in complex ways to cause the outbreak,” they conclude.

What those factors are, though, is still a mystery.

Also unclear is what long-term effects this outbreak will have on Pacific intertidal communities.

In the 1960s, Robert Paine of the University of Washington performed what is now considered a classic experiment. For years, he removed starfish from one area of rock in Makah Bay at the northwestern tip of Washington and left another bit of rock alone as a control. Without the starfish to prey on them, mussels were able to take over. The sea stars, Paine concluded, were a “keystone species” that kept the local food web in control.

If sea star wasting disease has similar effects on the Pacific intertidal food web, Menge and his colleagues write, “it would result in losses or large reductions of many species of macrophytes, anemones, limpets, chitons, sea urchins and other organisms from the low intertidal zone.”

What happens, the group says, may depend on how quickly the disease disappears from the region and how many young sea stars can grow up and start munching on mussels.

Stephen Hawking finds the inner genius in ordinary people

It’s hard to believe that it took reality television this long to get around to dealing with space, time and our place in the cosmos.

In PBS’ Genius by Stephen Hawking, the physicist sets out to prove that anyone can tackle humankind’s big questions for themselves. Each of the series’ six installments focuses on a different problem, such as the possibility of time travel or the likelihood that there is life elsewhere in the universe. With Hawking as a guide, three ordinary folks must solve a series of puzzles that guide them toward enlightenment about that episode’s theme. Rather than line up scientists to talk at viewers, the show invites us to follow each episode’s trio on a journey of discovery.

By putting the focus on nonexperts, Genius emphasizes that science is not a tome of facts handed down from above but a process driven by curiosity. After working through a demonstration of how time slows down near a black hole, one participant reflects: “It’s amazing to see it play out like this.”

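For readers curious about the physics behind that black hole demonstration, the slowdown can be sketched with the textbook time-dilation formula for a non-rotating black hole; the radii below are illustrative choices, not values from the show.

```python
import math

# A clock hovering at radius r near a non-rotating black hole ticks
# slower by sqrt(1 - r_s / r), where r_s is the Schwarzschild radius.
# The sample radii here are illustrative, not taken from the episode.
def time_dilation_factor(r_over_rs: float) -> float:
    """Clock rate at radius r (in units of r_s) relative to far away."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for r in (1.1, 2.0, 10.0):
    print(f"r = {r} r_s: clocks run at {time_dilation_factor(r):.3f}x")
```

Close to the horizon the factor plunges toward zero, which is the effect the participants watched play out.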
The show is a fun approach to big ideas in science and philosophy, and the enthusiasm of the guests is infectious. Without knowing what was edited out, though, it’s difficult to say whether the show proves Hawking’s belief that anyone can tackle these heady questions. Each situation is carefully designed to lead the participants to specific conclusions, and there seems to be some off-camera prompting.

But the bigger message is a noble one: A simple and often surprising chain of reasoning can lead to powerful insights about the universe, and reading about the cosmos pales next to interacting with stand-ins for its grandeur. It’s one thing, for example, to hear that there are roughly 300 billion stars in the Milky Way. But to stand next to a mountain of sand where each grain represents one of those stars is quite another. “I never would have got it until I saw it,” says one of the guests, gesturing to the galaxy of sand grains. “This I get.”
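The sand analogy is easy to check with back-of-envelope arithmetic. The star count comes from the article; the grain size and packing fraction below are assumptions made for illustration.

```python
import math

# How much sand is one grain per Milky Way star?
# Assumptions (not from the article): 0.5 mm spherical grains,
# random close packing of about 64 percent.
n_stars = 300e9                                      # ~300 billion stars
grain_diameter = 0.5e-3                              # meters
grain_volume = (math.pi / 6) * grain_diameter ** 3   # sphere volume
packing = 0.64                                       # packing fraction
pile_volume = n_stars * grain_volume / packing       # cubic meters
print(f"{pile_volume:.0f} cubic meters of sand")     # about 31 m^3
```

A heap of a few tens of cubic meters: mountain-sized next to a person, which is the point of the stand-in.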

Snot could be crucial to dolphin echolocation

In hunting down delicious fish, Flipper may have a secret weapon: snot.

Dolphins emit a series of quick, high-frequency sounds — probably by forcing air over tissues in the nasal passage — to find and track potential prey. “It’s kind of like making a raspberry,” says Aaron Thode of the Scripps Institution of Oceanography in San Diego. Thode and colleagues tweaked a human speech modeling technique to reproduce dolphin sounds and discern the intricacies of their unique style of sound production. He presented the results on May 24 in Salt Lake City at the annual meeting of the Acoustical Society of America.

Dolphin chirps have two parts: a thump and a ring. Their model worked on the assumption that lumps of tissue bumping together produce the thump, and those tissues pulling apart produce the ring. But to match the high frequencies of live bottlenose dolphins, the researchers had to make the surfaces of those tissues sticky. That suggests that mucus lining the nasal passage tissue is crucial to dolphin sonar.
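The thump-plus-ring structure can be caricatured in a few lines of code. The sample rate, frequency and decay times below are placeholders for illustration, not parameters from Thode's actual model.

```python
import numpy as np

# Caricature of the two-part click described above: a brief broadband
# "thump" as the tissue lumps collide, then a high-frequency "ring"
# as the sticky surfaces pull apart. All numbers are placeholders.
fs = 192_000                          # samples per second
t = np.arange(int(0.001 * fs)) / fs   # 1 millisecond of signal
thump = np.exp(-t / 20e-6)            # sharp, fast-decaying impulse
ring = 0.5 * np.sin(2 * np.pi * 120e3 * t) * np.exp(-t / 150e-6)
click = thump + ring                  # the modeled click waveform
print(len(click))                     # 192 samples
```

In the researchers' model, making the ring match real recordings required the sticky, mucus-coated surfaces described above.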

The vocal model also successfully mimicked whistling noises used to communicate with other dolphins and faulty clicks that probably result from inadequate snot. Such techniques could be adapted to study sound production or echolocation in sperm whales and other dolphin relatives.

Researchers modified a human speech model developed in the 1970s to study dolphin echolocation. The animation above mimics the vibration of lumps of tissue (green) in the dolphin’s nasal passage (black) that are drenched in mucus. Snot-covered tissues (blue) stick together (red) and pull apart to create the click sound.

Jupiter’s stormy weather no tempest in teapot

Jupiter’s turbulence is not just skin deep. The giant planet’s visible storms and blemishes have roots far below the clouds, researchers report in the June 3 Science. The new observations offer a preview of what NASA’s Juno spacecraft will see when it sidles up to Jupiter later this year.

A chain of rising plumes, each reaching nearly 100 kilometers into Jupiter, dredges up ammonia to form ice clouds. Between the plumes, dry air sinks back into the Jovian depths. And the famous Great Red Spot, a storm more than twice as wide as Earth that has churned for several hundred years, extends at least dozens of kilometers below the clouds as well.

Jupiter’s dynamic atmosphere provides a possible window into how the planet works inside. “One of the big questions is what is driving that change,” says Leigh Fletcher, a planetary scientist at the University of Leicester in England. “Why does it change so rapidly, and what are the environmental and climate-related factors that result from those changes?”

To address some of those questions, Imke de Pater, a planetary scientist at the University of California, Berkeley, and colleagues observed Jupiter with the Very Large Array radio observatory in New Mexico. Jupiter emits radio waves generated by heat left over from its formation about 4.6 billion years ago. Ammonia gas within Jupiter’s atmosphere intercepts certain radio frequencies. By mapping how and where those frequencies are absorbed, the researchers created a three-dimensional map of the ammonia that lurks beneath Jupiter’s clouds. Those plumes and downdrafts appear to be powered by a narrow wave of gas that wraps around much of the planet.

The depth of Jupiter’s atmospheric choppiness isn’t too surprising, says Scott Bolton, a planetary scientist at the Southwest Research Institute in San Antonio. “Almost everyone I know would have guessed that,” he says. But the observations do provide a teaser for what to expect from the Juno mission, led by Bolton. The spacecraft arrives at Jupiter on July 4 to begin a 20-month investigation of what’s going on beneath Jupiter’s clouds using tools similar to those used in this study.

The new observations confirm that Juno should work as planned, Bolton says.

By getting close to the planet — just 5,000 kilometers from the cloud tops — Juno will break through the fog of radio waves from Jupiter’s radiation belts that obscures observations made from Earth and limits what telescopes like the Very Large Array can see. But the spacecraft will see only a narrow swath of Jupiter’s bulk at a time. “That’s where ground-based work like the research de Pater has been doing is really essential,” Fletcher says. Observations such as these will let Juno scientists know what’s going on throughout the atmosphere so they can better understand what Jupiter is telling them.

Doctors need better ways to figure out fevers in newborns

Two days after my first daughter was born, her pediatrician paid a house call to examine her newest patient. After packing up her gear, she told me something alarming: “For the next few months, a fever is an emergency.” If we measured a rectal temperature at or above 100.4° Fahrenheit, we should go to the hospital, she said. Call her on the way, but don’t wait.

I, of course, had no idea that a fever constituted an emergency. But our pediatrician explained that a fever in a very young infant can signal a fast-moving and dangerous bacterial infection. These infections are rare (and fortunately becoming even rarer thanks to newly created vaccines). But they’re serious, and newborns are particularly susceptible.

I’ve since heard from friends who have been through this emergency. Their newborns were poked, prodded and monitored by anxious doctors, in the hopes of quickly ruling out a serious bacterial infection. For infants younger than two months, it’s “enormously difficult to tell if an infant is seriously ill and requires antibiotics and/or hospitalization,” says Howard Bauchner, a pediatrician formerly at Boston University School of Medicine and now editor in chief of the Journal of the American Medical Association.

A new research approach, described in two JAMA papers published in August, may ultimately lead to better ways to find the cause of a fever.

These days, for most (but not all) very young infants, their arrival at a hospital will trigger a workup that includes a urine culture and a blood draw. Often doctors will perform a lumbar puncture, more commonly known as a spinal tap, to draw a sample of cerebrospinal fluid from the area around the spinal cord.

Doctors collect these fluids to look for bacteria. Blood, urine and cerebrospinal fluid are smeared onto culture dishes, and doctors wait and see if any bacteria grow. In the meantime, the feverish infant may be started on antibiotics, just in case. But this approach has its limitations. Bacterial cultivation can take several days. The antibiotics may not be necessary. And needless to say, it’s not easy to get those fluids, particularly from a newborn.

Some scientists believe that instead of looking for bacteria or viruses directly, we ought to be looking at how our body responds to them. Unfortunately, the symptoms of a bacterial and viral infection are frustratingly similar. “You get a fever. You feel sick,” says computational immunologist Purvesh Khatri of Stanford University. Sadly, there are no obvious telltale symptoms of one or the other, not even green snot. In very young infants, a fever might be the only sign that something is amiss.

But more subtle clues could betray the cause of the fever. When confronted with an infection, our immune systems ramp up in specific ways. Depending on whether we are fighting a viral or bacterial foe, different genes turn up their activity. “The immune system knows what’s going on,” Khatri says. That means that if we could identify the genes that reliably get ramped up by viruses and those that get ramped up by bacteria, then we could categorize the infection based on our genetic response.

That’s the approach used by two groups of researchers, whose study results both appear in the August 23/30 JAMA. One group found that in children younger than 2, two specific genes could help make the call on infection type. Using blood samples, the scientists found that one of the genes ramped up its activity in response to a viral infection, and the other responded to a bacterial infection.
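The two-gene logic can be sketched as a simple ratio test. The gene roles and the twofold cutoff in this sketch are hypothetical stand-ins; the published work used specific measured genes and clinically validated thresholds.

```python
import math

# Toy version of the two-gene idea described above: compare the
# activity of a "viral-response" gene against a "bacterial-response"
# gene and call the infection type from their log-ratio.
# Both the gene roles and the 2-fold threshold are hypothetical.
def classify_infection(viral_level: float, bacterial_level: float,
                       fold_threshold: float = 2.0) -> str:
    """Crude call based on the log-ratio of two expression levels."""
    ratio = math.log2(viral_level / bacterial_level)
    cutoff = math.log2(fold_threshold)
    if ratio > cutoff:
        return "likely viral"
    if ratio < -cutoff:
        return "likely bacterial"
    return "indeterminate"

print(classify_infection(8.0, 1.0))  # likely viral
print(classify_infection(1.0, 8.0))  # likely bacterial
```

A real test must also handle the gray zone in the middle, which is why accuracy in very young infants is the sticking point Bauchner raises below.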

The other study looked at immune responses in even younger children. In infants younger than 60 days, the activity of 66 genes measured in blood samples did a pretty good job of distinguishing between bacterial and viral infections. “These are really exciting preliminary results,” says Khatri, who has used a similar method for adults. “We need to do more work.”

Bauchner points out that in order to be useful, “the test would have to be very, very accurate in very young infants.” There’s very little room for error. “Only time will tell how good these tests will be,” he says. In an editorial that accompanied the two studies, he evoked the promise of these methods. If other experiments replicate and refine the results of these studies, he could envision a day in which the parents of a feverish newborn could do a test at home, call their doctor and together decide if the child needs more care.

That kind of test isn’t here yet, but scientists are working on it. The technology couldn’t come soon enough for doctors and parents desperate to figure out a fever.

Endurance training leaves no memory in muscles

Use it or lose it, triathletes.

Muscles don’t have long-term memory for exercises like running, biking and swimming, a new study suggests. The old adage that once you’ve been in shape, it’s easier to get fit again could be a myth, at least for endurance athletes, researchers in Sweden report September 22 in PLOS Genetics.

“We really challenged the statement that your muscles can remember previous training,” says Maléne Lindholm of the Karolinska Institute in Stockholm. But even if muscles forget endurance exercise, the researchers say, other parts of the body may remember, and that could make retraining easier for people who’ve been in shape before.

Endurance training is amazingly good for the body. Weak muscle contractions, sustained over a long period of time — as during a bike ride — change proteins, mainly ones involved in metabolism. This translates into more energy-efficient muscle that can help stave off illnesses like diabetes, cardiovascular disease and some cancers. The question is, how long do those improvements last?

Previous work in mice has shown that muscles “remember” strength training (SN: 9/11/10, p. 15). But rather than making muscles more efficient, strength-training moves like squats and push-ups make muscles bigger and stronger. The muscles bulk up as they develop more nuclei. More nuclei lead to more production of proteins that build muscle fibers. Cells keep their extra nuclei even after regular exercise stops, to make protein easily once strength training restarts, says physiologist Kristian Gundersen at the University of Oslo in Norway. Since endurance training has a different effect on muscles, scientists weren’t sure if the cells would remember it or not.

To answer that question, Lindholm’s team ran volunteers through a 15-month endurance training experiment. In the first three months, 23 volunteers trained four times a week, kicking one leg 60 times per minute for 45 minutes. Volunteers rested their other leg. Lindholm’s team took muscle biopsies before and after the three-month period to see how gene activity changed with training. Specifically, the scientists looked for changes in the number of mRNAs (the blueprints for proteins) that each gene was making. Genes associated with energy production showed the greatest degree of change in activity with training.

At a follow-up, after participants had stopped training for nine months, scientists again biopsied muscle from the thighs of 12 volunteers, but didn’t find any major differences in patterns of gene activity between the previously trained legs and the untrained legs. “The training effects were presumed to have been lost,” says Lindholm. After another three-month bout of training, this time in both legs, the researchers saw no differences between the previously trained and untrained legs.

While this study didn’t find muscle memory for endurance — most existing evidence is anecdotal — it still might be easier for former athletes to get triathlon-ready, researchers say. The new result has “no bearing on the possible memory in other organ systems,” Gundersen says. The heart and cardiovascular system could remember and more easily regain previous fitness levels, for example, he says.

Even within muscle tissue, immune cells or stem cells could also have some memory not found in this study, says molecular exercise physiologist Monica Hubal of George Washington University in Washington, D.C. Lindholm adds that well-trained connections between nerves and muscles could also help lapsed athletes get in shape faster than people who have never exercised before. “They know how to exercise, how it’s supposed to feel,” Lindholm says. “Your brain knows exactly how to activate your muscles, you don’t forget how to do that.”

Primitive signs of emotions spotted in sugar-buzzed bumblebees

To human observers, bumblebees sipping nectar from flowers appear cheerful. It turns out that the insects may actually enjoy their work. A new study suggests that bees experience a “happy” buzz after receiving a sugary snack, although it’s probably not the same joy that humans experience chomping on a candy bar.

Scientists can’t ask bees or other animals how they feel. Instead, researchers must look for signs of positive or negative emotions in an animal’s decision making or behavior, says Clint Perry, a neuroethologist at Queen Mary University of London. In one such study, for example, scientists shook bees vigorously in a machine for 60 seconds — hard enough to annoy, but not hard enough to cause injury — and found that stressed bees made more pessimistic decisions while foraging for food.

The new study, published in the Sept. 30 Science, is the first to look for signs of positive bias in bee decision making, Perry says. His team trained 24 bees to navigate a small arena connected to a plastic tunnel. When the tunnel was marked with a blue “flower” (a placard), the bees learned that a tasty vial of sugar water awaited them at its end. When a green “flower” was present, there was no reward. Once the bees learned the difference, the scientists threw the bees a curveball: Rather than being blue or green, the “flower” had a confusing blue-green hue.

Faced with the ambiguous color, the bees appeared to dither, meandering around for roughly 100 seconds before deciding whether to enter the tunnel. Some didn’t enter at all. But when the scientists gave half the bees a treat — a drop of concentrated sugar water — that group spent just 50 seconds circling the entrance before deciding to check it out. Overall, the two groups flew roughly the same distances at the same speeds, suggesting that the group that had gotten a treat first had not simply experienced a boost in energy from the sugar, but was in a more positive, optimistic state, Perry says.

In a separate experiment, Perry and colleagues simulated a spider attack on the bees by engineering a tiny arm that darted out and immobilized them with a sponge. Sugar-free bees took about 50 seconds longer than treated bees to resume foraging after the harrowing encounter.

The researchers then applied a solution to the bees’ thoraxes that blocked the action of dopamine, one of several chemicals that transmit rewarding signals in the insect brain. With dopamine blocked, the effects of the sugar treat disappeared, further suggesting that a change in mood, and not just increased energy, was responsible for the bees’ behavior.

The results provide the first evidence for positive, emotion-like states in bees, says Ralph Adolphs, a neuroscientist at Caltech. Yet he suspects that the metabolic effects of sugar did influence the bees’ behavior.

Geraldine Wright, a neuroethologist at Newcastle University in England, shares that concern. “The data reported in the paper doesn’t quite convince me that eating sucrose didn’t change how they behaved, even though they say it didn’t affect flight time or speed of flight,” she says. “I would be very cautious in interpreting the responses of bees in this assay as a positive emotional state.”

‘Time crystal’ created in lab

It may sound like science fiction, but it’s not: Scientists have created the first time crystal, using a chain of ions. Just as a standard crystal repeats in a regular spatial pattern, a time crystal repeats in time, returning to a similar configuration at regular intervals.

“This is a remarkable experiment,” says physicist Chetan Nayak of Microsoft Station Q at the University of California, Santa Barbara. “There is a ‘wow factor.’”

Scientists at the University of Maryland and the University of California, Berkeley created a chain of 10 ytterbium ions. These ions behave like particles with spin, a sort of quantum mechanical version of angular momentum, which can point either up or down. Using a laser, the physicists flipped the spins in a chain of ions halfway around, from up to down, and allowed the ions to interact so that the spin of each ion would influence the others. The researchers repeated this sequence at regular intervals, flipping the ions halfway each time and letting them interact. When scientists measured the ions’ spins, on average the ions went full circle, returning to their original states, in twice the time interval at which they were flipped halfway.

This behavior is sensible — if each flip turns something halfway around, it takes two flips to return to its original position. But scientists found that the ions’ spins would return to their original orientation at that same rate even if they were not flipped perfectly halfway. This result indicates that the system of ions prefers to respond at a certain regular period — the hallmark of a time crystal — just as atoms in a crystal prefer a perfectly spaced lattice. Such time crystals are “one of the first examples of a new phase of matter,” says physicist Norman Yao of UC Berkeley, a coauthor of the new result, posted online September 27.
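The ideal, non-interacting case of that period doubling can be illustrated in a few lines. This toy deliberately omits the ion-ion interactions that make the real time crystal's doubled period robust to imperfect flips.

```python
import numpy as np

# Each drive pulse rotates a spin halfway (180 degrees about the
# x-axis), so the spin's z-component returns to its starting value
# only every TWO pulses: the response period is twice the drive
# period. Interactions, which stabilize this in the real experiment,
# are left out of this sketch.
spin = np.array([0.0, 0.0, 1.0])      # spin pointing "up"
flip = np.diag([1.0, -1.0, -1.0])     # 180-degree rotation about x

states = [spin.copy()]
for _ in range(4):                    # four drive periods
    spin = flip @ spin
    states.append(spin.copy())

print([int(round(s[2])) for s in states])  # [1, -1, 1, -1, 1]
```

The surprise in the experiment is that the real system locks to this two-period rhythm even when each pulse rotates the spins by somewhat less than 180 degrees.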

Time crystals take an important unifying concept in physics — the idea of symmetry breaking — and extend it to time. Physical laws typically treat all points in space equally — no one location is different from any other. In a liquid, for example, atoms are equally likely to be found at any point in space. This is a continuous symmetry, as the conditions are the same at any point along the spatial continuum. If the liquid solidifies into a crystal, that symmetry is broken: Atoms are found only at certain regularly spaced positions, with voids in between. Likewise, if you rotate a crystal, on a microscopic level it would look different from different angles, but liquid will look the same however it’s rotated. In physics, such broken symmetries underlie topics ranging from magnets to superconductors to the Higgs mechanism, which imbues elementary particles with mass and gives rise to the Higgs boson.

In 2012, theoretical physicist Frank Wilczek of MIT proposed that symmetry breaking in time might produce time crystals (SN: 3/24/12, p. 8). But follow-up work indicated that time crystals couldn’t emerge in a system in a state of equilibrium, which is settled into a stable configuration. Instead, physicists realized, driven systems, which are periodically perturbed by an external force — like the laser flipping the ions — could create such crystals. “The original examples were either flawed or too simple,” says Wilczek. “This is much more interesting.”

Unlike the continuous symmetry that is broken in the transition from a liquid to a solid crystal, in the driven systems that the scientists used to create time crystals, the symmetry is discrete, appearing at time intervals corresponding to the time between perturbations. If the system repeats itself at a longer time interval than the one it’s driven at — as the scientists’ time crystal does — that symmetry is broken.

Time crystals are too new for scientists to have a handle on their potential practical applications. “It’s like a baby, you don’t know what it’s going to grow up to be,” Wilczek says. But, he says, “I don’t think we’ve heard the last of this by a long shot.”

There probably are related systems yet to be uncovered, says Nayak. “We’re just kind of scratching the surface of the kinds of amazing phenomena — such as time crystals — that we can have in nonequilibrium quantum systems. So I think it’s the first window into a whole new arena for us to explore.”

Interactive map reveals hidden details of the Milky Way

There’s much more to the universe than meets the eye, and a new web-based app lets you explore just how much our eyes are missing. Gleamoscope presents the night sky across a range of electromagnetic frequencies. Spots of gamma rays pinpoint distant feeding black holes. Tendrils of dust glow with infrared light throughout the Milky Way. A supernova remnant — the site of a star that exploded roughly 11,000 years ago — blasts out X-rays and radio waves.

Many of these phenomena are nearly imperceptible in visible light. So astronomers use equipment, such as specialized cameras and antennas, that can detect other frequencies of electromagnetic radiation. Computers turn the data into images, often assigning colors to certain frequencies to highlight specific details or physical processes.
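The false-color step described above amounts to mapping each frequency band onto a display channel. In this sketch the arrays are random stand-ins for real sky maps, and the frequency-to-channel assignment is one arbitrary convention.

```python
import numpy as np

# Map sky-brightness measurements at three frequencies onto the red,
# green and blue channels of a single image. The data are random
# placeholders, not real observations.
rng = np.random.default_rng(0)
low, mid, high = (rng.random((4, 4)) for _ in range(3))

def normalize(band):
    """Scale one band's intensities into the 0..1 display range."""
    return (band - band.min()) / (band.max() - band.min())

# Convention chosen here: highest frequency drives the red channel.
rgb = np.dstack([normalize(high), normalize(mid), normalize(low)])
print(rgb.shape)  # (4, 4, 3)
```

Real pipelines also calibrate and stretch the data, but the core idea is the same: invisible frequencies become visible colors.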

In Gleamoscope, a slider smoothly transitions the scene from one frequency of light to another, turning the familiar star-filled night sky into a variety of psychedelic landscapes. Pan and magnification controls allow you to scan all around the night sky and zoom in for a closer look. The interactive map combines images from many observatories and includes new data from the Murchison Widefield Array, a network of radio antennas in Australia. Over 300,000 galaxies appear as dots in images of the new radio data, described in an upcoming issue of Monthly Notices of the Royal Astronomical Society. The radio map by itself can also be explored on mobile devices in a separate app called GLEAM, available on Google Play.