A total of 754 million domestic passenger trips were made in China in the first seven days of the 8-day National Day and Mid-Autumn Festival holidays, a year-on-year increase of 78.9%. Holiday tourism generated 668.1 billion yuan ($92.80 billion), up 132.6% year-on-year, official data showed on Friday.
China's first "zero-altitude" astronomical observation station, a high-altitude pressurized livable building independently developed and built by China Construction Third Engineering Bureau Co. Ltd, was recently handed over for demonstration use at a ceremony in the Kizilsu Kirghiz Autonomous Prefecture, Northwest China's Xinjiang Uygur Autonomous Region, authorities confirmed.
"Zero-altitude building" refers to the use of pressurization and other technologies to bring key indoor living-environment indicators to the levels of low-altitude plains, making occupants feel more comfortable. In operation, the effective altitude inside the building can be set according to occupants' needs, reducing the impact of altitude sickness.
The observatory is located near Muztagh Ata, the third highest peak of the West Kunlun Mountains, in Akto county. It sits at an altitude of 4,526 meters and has a total construction area of approximately 150 square meters, supporting scientific research, residential and office functions.
The observation station is a pressurized scientific research building designed specifically for plateau astronomical work, built to meet the needs of its partner institution, Beijing Normal University. It can raise the overall atmospheric pressure inside the building to one standard atmosphere, solving the problems of low pressure and oxygen deficiency on the plateau.
The facility provides a comfortable and healthy environment for astronomers to work and live in high-altitude areas, lowering the risk of physiological damage caused by the plateau's extreme environment.
It is expected to support a new phase of rapid development in China's astronomical scientific research work, according to media reports.
In the 1967 animated Disney film The Jungle Book, the feral boy Mowgli encounters a jazz-singing orangutan named King Louie, who implores Mowgli to teach him the secret of fire. King Louie presented a challenge for the producers of Disney’s live-action, CGI-enhanced remake of the film, opening April 15. “We had this notion that we would be as authentic as we could be to the region,” says producer Brigham Taylor. The problem: Orangutans are not native to India.
In fact, King Louie himself is not native to Rudyard Kipling’s original stories. But instead of scrapping the character, the filmmakers got creative. While researching India’s wildlife, the film’s art department learned that a colossal ape named Gigantopithecus once roamed the region. Various species of Gigantopithecus lived in India, China and Southeast Asia from about 6.5 million years ago until as recently as a few hundred thousand years ago. The ape was truly gigantic — by some estimates, twice as big as a gorilla.
So King Louie morphed from orangutan to Gigantopithecus. The switch was a “fun justification,” Taylor says, to keep the character and play up his size while still staying true to India’s fauna. (Yes, the ape is extinct, but this is a movie about talking animals. And fossil evidence does suggest that the ape at least mingled with the human ancestor Homo erectus.)
Using the scientific information they could find on the Internet, visual effects artists imagined how the animal would look and move, Taylor says. The result: an ape that resembles an overgrown orangutan, Gigantopithecus’ closest living relative. The movie ape has shaggy hair, flaring cheeks and a saggy pouch that hangs from the throat like a double chin — and towers about 12 feet tall.
It’s difficult to judge how accurate Disney’s rendering is. Despite possibly having been the largest primate ever to have lived, Gigantopithecus left behind few fossils. Scientists have just four lower jaws and over a thousand teeth, says biological anthropologist Russell Ciochon of the University of Iowa. That’s not much to go on, but Ciochon and colleagues made their own reconstruction a couple of decades ago.
The researchers took a jaw from China and made an outline of a skull that could fit such a jaw. Because most primate skulls scale to body size, Ciochon says, his group could estimate Gigantopithecus’ weight, 800 to 900 pounds, and height, about 9 feet from head to toe. (The species that lived in India was actually probably smaller.) Adding other details like hair to the animal is a matter of conjecture, Ciochon says.
But the teeth do offer some solid details about the ape’s lifestyle. Wear patterns and microscopic debris stuck to the teeth indicate Gigantopithecus dined on fruits, leaves, shoots, roots and perhaps even bamboo. Last year, researchers confirmed those details after analyzing the ratios of carbon isotopes in teeth found in Southeast Asia. The analysis also determined that Gigantopithecus was a strict forest dweller, even though it also lived near grasslands in some areas. In fact, the researchers contend, Gigantopithecus’ reliance on forests and its big size — and therefore big appetite — may have been the animal’s undoing. As Southeast Asia’s jungles gave way to expanding grasslands during the last glacial period, Gigantopithecus may have been unable to cope.
Perhaps if our ancestors had shared the secret of fire with Gigantopithecus, the giant ape would still be around today.
There’s an osprey nest just outside Jeffrey Brodeur’s office at the Woods Hole Oceanographic Institution in Massachusetts. “I literally turn to my left and they’re right there,” says Brodeur, the organization’s communications and outreach specialist. WHOI started live-streaming the osprey nest in 2005.
For the first few years, few people really noticed. All that changed in 2014. An osprey pair had taken up residence and produced two chicks. But the mother began to attack her own offspring. Brodeur began getting e-mails complaining about “momzilla.” And that was just the beginning.
“We became this trainwreck of an osprey nest,” he says. In the summer of 2015, the osprey family tried again. This time, they suffered food shortages. The camera received an avalanche of attention, complaints and e-mails protesting the institute’s lack of intervention. One scolded, “it is absolutely disgusting that you will not take those chicks away from that demented witch of a parent!!!!! Instead you let them be constantly abused and go without [sic] food. Yes this is nature but you have a choice to help or not. This is totally unacceptable. She should be done away with so not to abuse again.” By mid-2015, Brodeur began to receive threats. “People were saying ‘we’re gonna come help them if you don’t,’” he recalls.
The osprey cam was turned off, and remains off to this day. Brodeur says he’s always wondered why people had such strong feelings about a bird’s parenting skills.
Why do people spend so much time and emotion attempting to apply their own moral sense to an animal’s actions? The answer lies in the human capacity for empathy — one of the qualities that helps us along as a social species.
When we are confronted with another person — say, someone in pain — our brains respond not just by observing, but by copying the experience. “Empathy results in emotion sharing,” explains Claus Lamm, a social cognitive neuroscientist at the University of Vienna in Austria. “I don’t just know what you are feeling, I create an emotion in myself. This emotion makes connections to situations when I was in that emotional state myself.”
Lamm and his colleagues showed that viewing someone in pain activates certain brain areas such as the insula, anterior cingulate cortex and medial cingulate cortex, regions that are active when we ourselves are in pain. “They allow us to have this first person experience of the pain of the other person,” Lamm explains.
When participants viewed someone reacting as though in pain to a stimulus that the viewer knew wasn’t painful, the participants showed activity in areas of the frontal cortex important for distinguishing between “self” and “other.” We can still sympathize with someone else’s pain, even if we don’t know what it feels like, Lamm and his colleagues reported in 2010 in the Journal of Cognitive Neuroscience.
This works for animals, too: We ascribe certain emotions or feelings to animals based on their actions. “You know you have a mind, thoughts and feelings,” says Kurt Gray, a psychologist at the University of North Carolina in Chapel Hill. “You take it for granted that other people do too, but you can never really know. With animals, you can’t know for sure, so your best guess is what you would do in that situation.”
When people see an animal suffering — such as, say, a suffering osprey chick — they feel empathy. They then categorize that sufferer into a “feeler,” or a victim. But that suffering chick can’t exist in a vacuum. “When there’s a starving chick, we think, ‘oh, it’s terrible!’” Gray says. “It’s not enough for us to say nature is red in tooth and claw. There must be someone to blame for this.”
In a theory he calls dyadic completion, he explains that we think of moral situations — situations in which there is suffering — as dyads or pairs. Every victim needs a perpetrator. A sufferer with no one responsible is psychologically incomplete, and viewers will fill in a perpetrator in response. In the case of suffering osprey chicks, he notes, that perpetrator might be an uncaring osprey mom, or the camera operator who refuses to intervene in a natural process. Gray and his colleagues published their ideas on dyadic completion in 2014 in the Journal of Experimental Psychology.
Anthropomorphizing animals — whether or not it is logical or realistic — is usually pretty harmless. “It’s probably OK to say a cat is content,” says John Hadley, an ethicist at Western Sydney University in Australia. Similarly, it’s OK to say that a mother osprey is being violent when she attacks her own young. People are describing what they see in emotional terms they recognize. But this doesn’t mean that these animals should be held responsible for their actions, he says. When we judge an animal for its parenting skills, “in one sense it implies we want to hold these animals up as objects of praise or blame.” The natural tendency to ascribe emotions to animals, he says, is “only really problematic if [the emotions] are inaccurate or if they lead to some kind of ethical problem.”
People can’t put an osprey on trial for being a bad parent. But as in the case of an abandoned bison calf in Yellowstone, people do sometimes intervene — even though their actions might not be helpful. “That’s a question of ethical systems coming in to conflict,” Hadley says. “National parks apply a holistic ethic, try to let nature run its course…. But a more common-sense approach would be that you can intervene, there’s suffering you can stop and you should try and stop it.”
The feelings of pity and the desire to intervene are really all about us. “When we look at nonhuman animals and we read them as if they are humans … that might just be our being narrow and unable to imagine any creature that is not somehow a reflection of us,” says Janet Stemwedel, a philosopher at San Jose State University in California. “There’s a way in which looking at animals and reading them as human and imagining them as having emotions and inner lives is maybe a gateway to caring,” Stemwedel says. This caring might be erring on the side of caution, she explains, “acknowledging the limits of what we can know about how [animals] experience the world.” If we fail to imagine what animals might be feeling, “we could do a great deal of harm, [and] put suffering in the world that doesn’t need to be there,” she notes.
Our caring for the suffering and the lonely is part of what makes us a social species. “Evolution endowed us with a moral sense because it was useful for living in groups,” Gray notes. “It’s not crazy. It’s the same impulse that leads us to protect children from child abuse, and it so happens that we extend that to osprey children.” Those anthropomorphizing impulses aren’t stupid or useless. Instead, they tell us something, not about animals, but about ourselves.
ORLANDO, Fla. — Organisms as different as plants, bacteria, yeast and humans could hold genetic swap meets and come away with fully functional genes, new research suggests.
Researchers have known for decades that organisms on all parts of the evolutionary tree have many of the same genes. “How many of these shared genes are truly functionally the same thing?” wondered Aashiq Kachroo, a geneticist at the University of Texas at Austin, and colleagues. The answer, Kachroo revealed July 15 at the Allied Genetics Conference, is that about half of shared genes are interchangeable across species.
Last year, Kachroo and colleagues reported that human genes could substitute for 47 percent of yeast genes that the two species have in common (SN: 6/27/15, p. 5). Now, in unpublished experiments, the researchers have swapped yeast genes with analogous ones from Escherichia coli bacteria or with those from the plant Arabidopsis thaliana. About 60 percent of E. coli genes could stand in for their yeast counterparts, Kachroo reported. Plant swaps are ongoing, but the researchers already have evidence that plant genes can substitute for yeast genes involved in some important biological processes.
In particular, many organisms share the eight-step biochemical chain reaction that makes the molecule heme. The researchers found that all but one of yeast’s heme-producing genes could be swapped with one from E. coli or plants.
A new genetic discovery could equip researchers to fight a superbug by stripping it of its power rather than killing it outright.
Scientists have identified a set of genes in Clostridium difficile that turns on its production of toxins. Those toxins can damage intestinal cells, leading to diarrhea, abdominal pain and potentially life-threatening disease. Unlocking the bug’s genetic weapon-making secret could pave the way for new nonantibiotic therapies to disarm the superbug while avoiding collateral damage to other “good” gut bacteria, researchers report August 16 in mBio.
Identifying a specific set of genes that control toxin production is a big step forward, says Matthew Bogyo. Bogyo, a chemical biologist at Stanford University, also studies ways to defuse C. difficile’s toxin-making.
C. difficile bacteria infect a half million people and kill about 29,000 each year in the United States. In some individuals, though, the microbe hangs out in the gut for years without causing trouble. That’s because human intestines normally have plenty of good bacteria to keep disease-causing ones in check. However, a round of antibiotics can throw the system off balance, and if enough good bugs die off, “C. difficile takes over,” says lead author Charles Darkoh, a molecular microbiologist at the University of Texas Health Science Center at Houston. As infection rages, C. difficile can develop resistance to antibiotic drugs, turning it into an intractable superbug.
Darkoh’s team reported last year that C. difficile regulates toxin production with quorum sensing — a system that lets bacteria conserve resources and launch an attack only if their numbers reach a critical threshold. That study identified two sets of quorum-signaling genes, agr1 and agr2, that could potentially activate toxin production.
In the new analysis, Darkoh and colleagues tested the ability of a series of C. difficile strains to make toxins when incubated with human skin cells. Some C. difficile strains had either agr1 or agr2 deleted; others had all their quorum-sensing genes or lacked both gene sets. Agr1 is responsible for packing the superbug’s punch, the researchers found. C. difficile mutants without that set of genes made no detectable toxins, and skin cells growing in close quarters stayed healthy. Feeding those mutant bugs to mice caused no harm, whereas mice that swallowed normal C. difficile lost weight and developed diarrhea within days. In the skin cell cultures, agr2-deficient strains were just as lethal as normal C. difficile, showing that only agr1 is essential for toxin production.
Based on their new findings, Darkoh and colleagues have identified several compounds that inactivate C. difficile toxins or block key steps in the molecular pathway controlling their production. The researchers are testing these agents in mice.
In a mouse study published in Science Translational Medicine last year, Bogyo and colleagues found a different compound that could disarm C. difficile by targeting its toxins. And several companies are trying to fight C. difficile with probiotics — cocktails of good bacteria. Results have been mixed.
Flashes of radio waves from deep space keep coming, but their origin remains as mysterious as ever.
Gamma rays might have accompanied one of these eruptions, researchers report in the Nov. 20 Astrophysical Journal Letters. This is the first time high-energy photons have been associated with these blasts of radio energy, known as fast radio bursts. If the gamma rays did come from the same place as the radio waves, then the underlying source could be roughly 1 billion times as energetic as thought.
Another burst, meanwhile, takes the record for brightest blast. The signal was bright enough to reveal details about the magnetic field between galaxies, astronomers report online November 17 in Science.
Fast radio bursts, or FRBs, have intrigued astronomers since the first one was reported in 2007 (SN: 8/9/14, p. 22). Since then, astronomers have discovered 18 in total. In most cases, a blip of radio waves lasting just a few milliseconds appears in the sky and is never seen again. Only one so far is known to repeat (SN: 4/2/16, p. 12). Most seem to originate in remote galaxies, possibly billions of light-years away. Until now, no one has detected any other frequency of electromagnetic radiation besides radio waves coming from these cosmic beacons.
A flash of gamma rays appeared at about the same time and from the same direction as a radio burst detected in 2013, James DeLaunay, a physics graduate student at Penn State, and colleagues report. They pored over old data from the Swift observatory, a NASA satellite launched in 2004, to see if it recorded any surges of gamma rays that might coincide with known radio bursts.
“Gamma rays associated with an FRB would be an incredibly important thing to find,” says Sarah Burke Spolaor, an astrophysicist at the National Radio Astronomy Observatory in Socorro, N.M. But she urges caution. “We don’t have a good inkling of where a specific burst comes from.” That leaves room for other types of eruptions to occur in the vicinity just by chance. DeLaunay and collaborators calculate that the odds of that are low, about one in 800. But several researchers are taking a wait-and-see attitude before feeling more confident that the gamma rays and FRB are linked.
“It’s tantalizing, but a lot more would need to be found to be convincing,” says Jason Hessels, an astrophysicist at the Netherlands Institute for Radio Astronomy in Dwingeloo.
If the same source emits both the radio waves and gamma rays, that could rule out a couple of proposals for the causes of the eruptions. Powerful radio hiccups from pulsars, the rapidly spinning cores of dead stars, are one candidate that wouldn’t make the cut, because they aren’t known to generate gamma rays.
Collisions between two neutron stars, or between a neutron star and a black hole, look promising, says Derek Fox, an astrophysicist at Penn State and a coauthor of the study. The energy output and duration of the gamma-ray burst are a good match with what’s expected for these smashups, he says, though it’s not clear whether they happen often enough to account for the thousands of FRBs that astronomers suspect go off every day.
No one story neatly fits all the data. “I think there are at least two populations,” says Fox. Perhaps some FRBs repeat, while others do not; some belch out gamma rays, others do not. There might be no one type of event that creates all FRBs, but rather a multitude.
That idea is tentative as well. “It’s way too early to say if there are multiple populations,” says Laura Spitler, an astrophysicist at the Max Planck Institute for Radio Astronomy in Bonn, Germany. A grab bag of cosmic calamities is plausible. But there are other astronomical events that exhibit enormous diversity, enough that all FRBs could also have just one type of trigger. “The data we have now isn’t sufficient to land on one side or the other,” Spitler says.
A more recent FRB, detected in 2015 at the Parkes radio telescope in Australia, shows off some of that diversity — and demonstrates how FRBs can be used as cosmological tools. A brief blast of radio waves from at least 1.6 billion light-years away is about four times as intense as the previous record holder. The signal’s vigor could be an intrinsic quirk of the underlying outburst, or could mean that this burst was unusually close to our galaxy — or both.
“What’s really exciting most about it is not just that it’s bright,” says Vikram Ravi, a Caltech astronomer and lead author of the study, “but really because of what we hope to use FRBs for.” This FRB was bright enough for Ravi and colleagues to deduce the magnetic field between galaxies. To do that, they measured the signal’s polarization, the alignment of radio waves imprinted by magnetized plasmas encountered en route to Earth. They found that, on average, the magnetic field is feeble, less than 21 nanogauss (or about one 10-millionth as strong as Earth’s magnetic field). That’s in line with astronomers’ theories about the strength of intergalactic magnetism.
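The field estimate described above comes from Faraday rotation: magnetized plasma along the signal’s path rotates the plane of polarization by an angle proportional to wavelength squared, and the proportionality constant (the rotation measure, RM) combined with the dispersion measure (DM) yields an electron-density-weighted average of the line-of-sight magnetic field. A minimal sketch of that standard pulsar/FRB relation, using illustrative placeholder numbers rather than the values from the paper:

```python
# Standard relation used in pulsar and FRB work: the electron-density-
# weighted mean line-of-sight magnetic field, in microgauss, is
#   <B_parallel> = 1.232 * RM / DM
# with RM in rad/m^2 and DM in pc/cm^3.

def mean_parallel_field_microgauss(rm_rad_m2, dm_pc_cm3):
    """Mean line-of-sight magnetic field (microgauss) from RM and DM."""
    return 1.232 * rm_rad_m2 / dm_pc_cm3

# Illustrative placeholder values, NOT measurements from the study.
rm = 12.0    # rotation measure, rad/m^2
dm = 266.0   # dispersion measure, pc/cm^3

b_uG = mean_parallel_field_microgauss(rm, dm)
b_nG = b_uG * 1000.0  # 1 microgauss = 1,000 nanogauss
print(f"<B_parallel> ~ {b_uG:.3f} microgauss ({b_nG:.0f} nanogauss)")
```

Note that this average lumps together everything along the path, including the host galaxy and the Milky Way; isolating the intergalactic contribution, as the researchers did, requires subtracting those foregrounds.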
“It’s not telling us anything that’s unexpected,” says Duncan Lorimer, an astrophysicist at West Virginia University in Morgantown who reported the first FRB in 2007. But it shows that FRBs can be used to learn more about intergalactic space, a region that is notoriously difficult to study. “It’s one thing to say we expect the magnetic field to be weak, but it’s another thing to actually measure it,” he adds. “It’s a signpost of things to come.”
This burst encountered different environments than a burst reported last year in Nature, which suggested an FRB origin in a highly magnetized environment, possibly near young stars in a remote galaxy (SN Online: 12/2/15). There’s no hint that the latest burst originated in a similar locale.
“I don’t think we contradict each other at all,” Ravi says. “Some FRBs originate in very magnetic environments and some don’t. Given that these are the only two FRBs where these measurements have been made, it’s hard to tell.”
Lucy didn’t let an upright stance ground her. This 3.2-million-year-old Australopithecus afarensis, hominid evolution’s best-known fossil individual, strong-armed her way up trees, a new study finds.
Her lower body was built for walking. But exceptional upper-body strength, approaching that of chimpanzees, enabled Lucy to hoist herself into trees or onto tree branches, paleoanthropologist Christopher Ruff of Johns Hopkins University School of Medicine and his colleagues report November 30 in PLOS ONE.
Lucy, and presumably other members of her species, “combined walking on two legs with a significant amount of tree climbing,” says coauthor John Kappelman, a paleoanthropologist at the University of Texas at Austin. A Kappelman-led team concluded earlier this year that, based on numerous bone breaks, Lucy fell to her death from a tree, either while climbing or sleeping (SN: 9/17/16, p. 16). That’s a controversial claim, dismissed by some researchers as a misreading of bone damage caused by the fossilization process.
Debate about whether A. afarensis spent much time in trees goes back to shortly after the discovery of Lucy’s partial skeleton in 1974. Additional discoveries of A. afarensis fossils have only intensified disputes between those who regard the ancient species as primarily designed for walking and others convinced that Lucy’s crowd split time between walking and tree climbing (SN: 12/1/12, p. 16; SN: 7/17/10, p. 5).
Ruff’s team measured the internal structure and strength of Lucy’s two surviving upper arm bones and one upper leg bone, including the knob at the top of the upper leg that forms the hip joint. Data came from high-resolution X-ray CT scans taken in 2008 while her remains were in the United States for a museum tour.
These scans were compared with those of present-day people, chimps and bonobos, as well as 26 fossil hominids. These hominids — including both australopithecines like Lucy, as well as early members of the human genus, Homo — date to between 2.6 million and 600,000 years ago.
Lucy’s long, weight-bearing upper arms most closely resemble the anatomy of chimps, the scientists say. Studies of various living animals, including humans and chimps, indicate that daily behaviors during growth influence the development of limb bones. Thus, it’s plausible that Lucy pulled herself into trees from an early age, adding to the strength and length of her upper arms, the team proposes.
Although Lucy walked upright, she had a less efficient gait than that of people today and Homo erectus individuals dating to between 1.6 million and 700,000 years ago, the researchers say. The stress of supporting a robust upper body with a slighter lower body would have interfered with Lucy’s two-legged stride, they hold. Traits such as a relatively small hip joint and short legs limited Lucy’s ability to walk long distances, the investigators add.
Ruff’s study supports proposals over the last few decades that, for her size, Lucy had longer, stronger arms and smaller hip joints than people now do, says paleoanthropologist Carol Ward of the University of Missouri School of Medicine in Columbia. It’s plausible that a small hip joint slightly undermined Lucy’s stride, but that hasn’t been conclusively demonstrated, Ward adds.
Biological anthropologist Philip Reno of Penn State takes a harder line. “This new analysis does not resolve any of the debates regarding the use of tree climbing or the effectiveness of upright walking in australopithecines.” Radical, humanlike changes to the pelvis and foot in Lucy’s species suggest that her large upper arms were simply evolutionary holdovers from hominids’ tree-dwelling ancestors, not the consequences of extensive tree climbing, Reno argues. It’s hard to say how Lucy’s relatively small hip joint interacted with many other skeletal and muscular forces affecting an upright gait, he adds.
The big question, in Ward’s view, is whether skeletal changes in early Homo conducive to walking and running arose as a result of largely abandoning tree climbing or for other reasons, such as an increasing emphasis on arms and hands capable of manipulating objects in precise ways.
Forget honking Vs of geese or gathering herds of wildebeests. The biggest yearly mass movements of land animals may be the largely overlooked flights of aphids, moths, beetles, flies, spiders and their kin.
About 3.5 trillion arthropods fly or windsurf over the southern United Kingdom annually, researchers say after analyzing a decade of data from special entomological radar and net sweeps. The larger species in the study tended to flow in a consistent direction, suggesting that more species may have specialized biology for seasonal migrations than scientists realized, says study coauthor Jason Chapman, now at the University of Exeter in Penryn, England.
The creatures detected in the study may be little, but they add up to roughly 3,200 metric tons of animal weight, Chapman and colleagues report in the Dec. 23 Science. That’s 7.7 times the tonnage of U.K. songbirds migrating to Africa and equivalent to about 20,000 (flying) reindeer.
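The comparisons in that paragraph are easy to sanity-check with back-of-the-envelope arithmetic; the per-reindeer and per-arthropod masses below are inferences from the article's figures, not numbers from the study:

```python
# Sanity-check the biomass comparisons above.
arthropod_tonnes = 3200.0  # ~3,200 metric tons of migrating arthropods

# "7.7 times the tonnage of U.K. songbirds migrating to Africa"
songbird_tonnes = arthropod_tonnes / 7.7
print(f"Implied songbird migration biomass: ~{songbird_tonnes:.0f} tonnes")

# "equivalent to about 20,000 reindeer" -> implied mass per reindeer
kg_per_reindeer = arthropod_tonnes * 1000.0 / 20000.0
print(f"Implied mass per reindeer: {kg_per_reindeer:.0f} kg")

# 3.5 trillion individuals sharing 3,200 tonnes -> average body mass
avg_mg = arthropod_tonnes * 1e9 / 3.5e12  # 1 tonne = 1e9 milligrams
print(f"Average arthropod mass: ~{avg_mg:.2f} mg")
```

The sub-milligram average individual mass is consistent with the article's later point that the great majority of these migrants weigh less than 10 milligrams.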
These are “huge flows of biomass and nutrients,” Chapman says. “One of the things we hope to achieve in this work is to convince people who are studying terrestrial ecosystems that they cannot ignore what’s happening in the skies above them.”
Biologist Martin Wikelski of the Max Planck Institute for Ornithology in Radolfzell, Germany, who wasn’t part of the study, calls these migrants “aerial plankton.” It’s a reference to the much-studied tiny sea creatures whose movements and blooms power oceanic food webs. Understanding insect migrations and abundances is crucial for figuring out food webs on land, including those that link insects and birds. That’s “particularly important nowadays as we are starting to lose many of our songbirds,” he says.
The word migration applied to arthropod movements doesn’t mean one animal’s roundtrip, Chapman says. Instead, the term describes leaving the home range and undertaking a sustained journey, maybe cued by seasons changing or food dwindling. A return trip, if there is one, could be the job of a future generation.
The migrants he studied, traveling at least 150 meters aboveground, aren’t just accidentally blowing in the wind, he says. Many of the tiniest — aphids and such that weigh less than 10 milligrams — take specific measures to start their journey, such as trekking to the top of a plant to catch a gust. Juvenile spiders stand on tiptoe reeling out silk until a breeze tugs a strand, and them, into the air. “They only do this when wind conditions will enable them to be caught and taken up; otherwise, it’s a terrible waste of silk,” Chapman says. Some caterpillars also spin silk to travel, and mites, with neither wings nor silk, can surf themselves into a good breeze.
The basic idea that a lot of arthropods migrate overhead is “absolutely not” a surprise to behavioral and evolutionary biologist Hugh Dingle of the University of California, Davis. He says so not dismissively, but joyously: “Now we have really good data.”
This smallest class of migrants, sampled with nets suspended from a big balloon, makes up more than 99 percent of the individual arthropods and about 80 percent of the total mass. They didn’t show an overall trend in flight direction. But radar techniques refined at Rothamsted Research in Harpenden, England, showed distinct seasonal patterns in direction for medium-sized and larger insects.
“That’s the big surprise for us,” Chapman says. “We assumed that those flows would just be determined by the wind.” But medium-sized and large insects such as lacewings and moths overall tended to head northward from May through June regardless of typical wind direction. And in August and September, they tended southward. “Lots of insects we didn’t think capable of this are clearly doing it,” he says.
Managing such a feat takes specialized biology for directed, seasonal migrations. Many of these arthropods must have some form of built-in compass plus a preferred direction and the genetics that change that preference as they or their offspring make the return migration. Entomologists have known some migratory details of monarch butterflies in North America and a handful of other such insects, many of them pest moths. But speculating about specialized migrants, Chapman says, “there must be thousands of these.”
The moon formed at least 4.51 billion years ago, no more than 60 million years after the formation of the solar system, researchers report online January 11 in Science Advances. This update to the moon’s age is in line with some previous estimates (SN Online: 4/17/15), although some argue the moon formed 150 million to 200 million years after the solar system’s birth.
A precise age is important for understanding how Earth evolved and how the solar system behaved in its formative years, says study coauthor Melanie Barboni, a geologist at UCLA. “If we want to understand other solar systems,” she says, “the first thing we have to do is understand ours.”
A run-in between Earth and something roughly the size of Mars is thought to be responsible for the creation of the moon. To nail down when this happened, Barboni and colleagues examined fragments of the mineral zircon brought back from the moon by the Apollo 14 astronauts. Relative amounts of uranium and lead as well as abundances of hafnium isotopes and the element lutetium provided radioactive decay clocks that record when the early moon’s global ocean of magma solidified. Hafnium and lutetium help determine when a crust formed over the moon’s liquid mantle while the radioactive decay of uranium to lead pinpoints when the zircon crystallized.
Previous analysis of the same zircon fragments revealed a similar age (within 68 million years after the formation of the solar system), but came with larger uncertainties. New techniques for uranium-lead dating and for understanding how the bombardment of the lunar surface by cosmic rays alters hafnium led to the improved age estimate.
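The uranium–lead "decay clock" mentioned above follows from first-order radioactive decay: if a zircon crystallizes with uranium but essentially no lead, its age t satisfies Pb/U = e^(λt) − 1, where λ is the decay constant of uranium-238. A minimal sketch, using an illustrative ratio rather than a measured value from the study:

```python
import math

# Decay constant of uranium-238 (per year); half-life ~4.468 billion years.
LAMBDA_U238 = math.log(2) / 4.468e9

def u_pb_age_years(pb206_u238_ratio):
    """Age implied by a measured Pb-206/U-238 ratio, assuming the zircon
    crystallized with no initial lead (the standard U-Pb clock)."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238

# Illustrative ratio chosen to land near the reported ~4.5-billion-year
# age; it is NOT a measured value from the paper.
ratio = 1.01
age = u_pb_age_years(ratio)
print(f"Implied age: {age / 1e9:.2f} billion years")
```

A quick consistency check: a ratio of exactly 1.0 means half the original uranium-238 has decayed, so the clock returns one half-life, about 4.47 billion years.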