To a bee, no two flowers smell quite the same. When honeybees forage for flowers, they search for, learn, and memorize distinctive floral scents and return to the hive to tell other bees what they’ve found through their famous waggle dance.
It is an important ritual that is being disrupted by one of the most pervasive forms of air pollution—diesel exhaust—according to a new study published Thursday in Scientific Reports. The research pinpoints the mechanism by which the fuel-combustion pollutants degrade certain chemicals in floral odors. The absence of those chemicals affects honeybees’ ability to recognize the scent.
Engine exhaust is hardly the only threat facing the honeybee. It is well recognized that exposure to multiple pesticides can impair bees’ olfactory skills, while ground-level ozone, or smog, and ultraviolet (UV) radiation can also degrade floral odor compounds that bees pick up on. Authorities around the globe are grappling with how to address the little-understood cyclical diseases that are causing colonies to dwindle.
The new study offers insight into the specific hazard for pollinators from the fumes from cars, trucks, trains, ships, and heavy machinery. Significantly, the study indicates that honeybees haven’t been helped by the “cleaner” diesel now used in Europe and the United States due to regulations that over the past decade removed sulfur from the fuel. The researchers used ultra-low-sulfur diesel fuel in their experiment…
This article was originally published on NPR’s Shots, 23 November 2012.
Albert Einstein was a smart guy. Everybody knows that. But was there something about the structure of his brain that made it special?
Scientists have been trying to answer that question ever since his death. Previously unpublished photographs of Einstein’s brain taken soon after he died were analyzed last week in the journal Brain. The images and the paper provide a more complete anatomical picture and may help shed light on his genius.
Every brain has unique nooks and crannies. Aside from sheer curiosity, examining Einstein’s brain could yield scientifically valuable insights. “There are strong links between variation in brain anatomy and variations in intellectual ability, period,” says Sandra Witelson, a neuroscientist at the Michael G. DeGroote School of Medicine at McMaster University in Ontario.
The story of how the photographs turned up is interesting in itself. In the hours after Einstein’s death in 1955, the autopsy pathologist, Thomas Harvey, took dozens of photographs and dissected the physicist’s brain into 240 parts for preservation. Harvey’s lab made slides for future study.
This article was originally published on NPR’s The Salt, 28 December 2012.
Got milk? Ancient European farmers who made cheese thousands of years ago certainly had it. But at that time, they lacked a genetic mutation that would have allowed them to digest raw milk’s dominant sugar, lactose, after childhood.
Today, however, 35 percent of the global population — mostly people with European ancestry — can digest lactose in adulthood without a hitch.
So, how did we transition from milk-a-phobics to milkaholics? “The first and most correct answer is, we don’t know,” says Mark Thomas, an evolutionary geneticist at University College London in the U.K.
Most babies can digest milk without getting an upset stomach thanks to an enzyme called lactase. Up until several thousand years ago, that enzyme turned off once a person grew into adulthood — meaning most adults were lactose intolerant (or “lactase nonpersistent,” as scientists call it).
But now that doesn’t happen for most people of Northern and Central European descent and in certain African and Middle Eastern populations. This development of lactose tolerance took only about 20,000 years — the evolutionary equivalent of a hot minute — but it would have required extremely strong selective pressure.
“Something happened when we started drinking milk that reduced mortality,” says Loren Cordain, an exercise physiologist at Colorado State University and an expert on Paleolithic nutrition. That something, though, is a bit of a mystery.
This article was originally published in Nature, 4 October 2012.
“They’re hard to breed and easy to kill,” says plant pathologist Fred Hebard as he attacks a 2-metre-tall chestnut tree in southwest Virginia. Hebard bores a hole in the bark and squeezes a mash of orange fungus into the wood. The tree is a hybrid of the Chinese and American chestnut species, and Hebard hopes that it has enough resistance genes to keep the fungus — called chestnut blight — at bay. If so, the hybrid could help to resurrect a long-gone icon.
Until a century ago, the American chestnut (Castanea dentata) was the cornerstone tree species of eastern North America. With long, straight trunks and bushy crowns, it carpeted the forest floor each autumn with prickly brown nuts. But the arrival of chestnut blight (Cryphonectria parasitica) from Asia wiped out almost all the stately trees, leaving only a few, isolated stands. Since then, a faithful fan club of scientists and citizens has sought to tame the blight.
As chief scientist of the American Chestnut Foundation (ACF), a group of chestnut enthusiasts and scientists, Hebard has bred thousands of hybrids at the organization’s research farm in Meadowview, Virginia. He crosses descendants of the original American chestnut with the much smaller Chinese variety (Castanea mollissima), which has some natural immunity to the Asian fungus. And after decades of work, he is within reach of his goal, a tall American tree with enough Chinese traits to keep it healthy.
Other researchers are trying to attack the blight with viruses or are creating trees genetically modified (GM) to resist the fungus, which could become the first GM forest trees released into the wild in the United States. Progress with all three approaches is raising hopes that chestnuts will soon start to flourish again in the forests of the American east. “We’re starting to pull the American chestnut tree back from the brink of extinction,” says Hebard.
The work is also offering lessons that could help to save other trees, such as elm and ash, which face similar threats in America and abroad.
Once upon a time in Borneo, everybody was dying of malaria, so they sprayed a lot of the insecticide DDT (dichlorodiphenyltrichloroethane), which killed the mosquitoes that transmitted the disease. Cases dropped, but inexplicably, people’s roofs started caving in. DDT had also killed wasps that kept the caterpillar population in check, so the caterpillars ate the roof thatch. Geckos ate the wasps, and cats ate the geckos. Cats started dropping dead, and the rat population flourished, which led to an outbreak of the plague. So, to solve the problem, health officials parachuted cats into Borneo.
Or, at least that’s how the story goes.
I first heard of this ecological fable — nicknamed “Operation Cat Drop” — from a friend who liked to break it out at dinner parties. Frankly, it sounded a bit ridiculous. So ridiculous, in fact, that somebody could very well have made it up, and some have argued that the tale is just that: fiction. The cat story started popping up in print in the 1960s, making appearances in The New York Times, Time, and Natural History magazine. In the late 1960s and early ’70s, biomagnification and the ecological impacts on avian species took center stage in the public debate over the safety of DDT.

But I’ve always wondered whether there was any truth to the cat story, which did come up in congressional hearings on DDT use. Turns out, there’s more than you’d think. Patrick O’Shaughnessy, an environmental engineer at the University of Iowa, did some digging and found kernels of truth from which the cat drop myth probably grew. His work was published in the American Journal of Public Health. Here’s what we know…
In the 1950s, the World Health Organization (WHO) launched a global effort to eradicate malaria, following successful campaigns in the United States, Europe, and Venezuela. Resistance to the insecticide had already popped up in some mosquitoes, but health officials were optimistic. Perhaps a little too optimistic. From 1952 to 1955, in the Sarawak region of Borneo, malaria control teams sprayed DDT, benzene hexachloride, and briefly dieldrin twice a year inside local longhouses with thatch roofs. At first, the program enjoyed some success. From 1953 to 1955, the fraction of local mosquitoes carrying the disease fell from 35.6% to 1.6%.
I love water parks. The slides, the wave pools, the smell of sunscreen, the funnel cakes, and above all the lazy rivers. A good water park will always have a soft spot in my heart.
That’s why when I first learned of a water park in South Africa called Sun City that seemed to be propagating a myth that an ancient civilization of European origin had built a great lost city in the region, I was a little shocked. In my 10th grade world history class, we watched a documentary about the real ancient ruins of Great Zimbabwe, an archaeological site with a controversial past, and its final images were of children frolicking at Sun City. The innocence of my childhood spent in inner-tubes clashed with this bizarre message that frankly sounded…kind of racist.
The myth begins with bad archaeology. Built between 1200 and 1400 CE, Great Zimbabwe once served as the capital of a complex local civilization in southeastern modern Zimbabwe. Its stone enclosures with walls up to 30 feet high were built without mortar and adorned with soapstone bird sculptures. The hilltop palace and surrounding city covered about 1800 acres, and could have housed more than 10,000 people. Untouched by European influence, Great Zimbabwe probably looked pretty impressive, rising above the grassland.
These people weren’t isolated. Excavations of the site turned up gold working equipment, ironware, pottery shards, Arab coins, Chinese porcelain, and Persian beads. Secondhand stories from Arab traders eventually made their way to the ears of Portuguese explorers. Some thought it was Ophir, a biblical city built by the Queen of Sheba, or the seat of a mythological Christian ruler called Prester John…or even the location of King Solomon’s legendary mines.
By the time Karl Mauch, a German Indiana Jones type, showed up in 1871, the city had been abandoned, likely due to famine around 1500. Mauch came in with a preconceived notion that these were the ruins of a civilized society. And let’s not beat around the bush here: by civilized, he meant white. Since fragments from a crossbeam at the site matched the red hue of his pencil, he concluded they were cedar from Lebanon. Following this line of logic, he ultimately hypothesized that the Queen of Sheba had called Great Zimbabwe home.
Mauch’s theory — the first of several archaeological blunders — caught on with Cecil Rhodes, either a hero or a villain depending on whom you ask today. Back then, Rhodes headed up the British South Africa Company (BSA). Rhodes financed another investigation of the site, led by James T. Bent, whose archaeological résumé was shorter than mine. Bent attributed the artifacts they discovered to “a prehistoric race … a northern race coming from Arabia … closely akin to the Phoenician and Egyptian … and eventually developing into the more civilized races of the ancient world.”
Bent’s successor was even worse. Instead of preserving the site, local journalist Richard Nicklin Hall dug up a 3- to 12-foot layer of archaeological evidence across the site, under the pretense that he was removing “the filth and decadence of the Kaffir occupation”. He disregarded stratification and even disposed of artifacts. Hall’s efforts and local looting left little behind for future researchers to analyze.

David Randall-MacIver, the first to scientifically excavate the site, from 1905 to 1906, concluded that the mud dwellings within the stone enclosures were definitely African in origin and dated to the Middle Ages. In 1929, data from archaeologist Gertrude Caton-Thompson (no relation) backed up Randall-MacIver’s claims and pegged the site’s settlement at 1100 CE.
You’d think that would have been the end of it. Not so. Scholars went on to propose evidence of Indian or Malay inhabitants. In the 1960s, the Rhodesian government suppressed archaeological theories of indigenous black settlement: any papers that suggested an African origin had to give equal weight to the possibility of Phoenician or Semitic builders. Around the same time, liberationists saw the site as a symbol of African nationalism and later chose to rename Rhodesia in its honor. In 1973, archaeologist Peter Garlake finally put the myth to rest when he wrote the definitive book on Great Zimbabwe, which was founded not by whites but by black Africans.
So, by now you’re asking: where does the water park come in? Sun City is a footnote to the larger saga of Great Zimbabwe. Built in 1979, before the fall of Apartheid, the water park is part of a larger resort and casino complex outside of Johannesburg. Though I’ve never been, Sun City popped up in my life again last spring, this time in the security line at the Johannesburg airport: the biologist behind me happened to be heading there for a scientific conference. Today, the hokey premise of the resort still centers on the “fairytale” of a lost tribe that built a great city in the region — presented as fantasy, not reality.
So, in this weird way, the myth persists. Archaeologists do still debate Great Zimbabwe’s origins. But, today, it’s a matter of which African tribe actually settled the area and why.
Most people who visit New Zealand never see a kiwi in the wild. You might catch a glimpse of one in a dimly lit indoor cage, but in the wild the country’s national bird is a rare sight. Just to clarify, I’m talking about the awkward-looking bird, not the odd-looking fruit. New Zealand has five species of kiwi birds, native inhabitants of the “land of the long white cloud”. Thanks to introduced predators and hunting, four are on the IUCN Red List. The little spotted kiwi (Apteryx owenii), on the other hand, has been a conservation success story. That is, until now.
Even though the little spotted kiwi is very rare, it’s the only kiwi bird without “endangered” status. Its success story is an interesting one. Around a century ago, things were looking decidedly not good for the little spotted kiwi. It had disappeared from New Zealand’s North Island altogether, and in 1912, conservationists took five remaining birds from Jackson Bay on the South Island and moved them to Kapiti, a small island 5 kilometers off the North Island’s coast. Whether or not a native population already inhabited the island is still up for debate today, but the birds were spotted there in 1929, well after relocation.
By the 1980s, the original South Island population was gone, but the birds on Kapiti were actually doing pretty well. Meanwhile, another population on D’Urville Island wasn’t doing so great. Conservationists did the same thing again, moving the last remaining male and female birds on D’Urville to nearby Long Island, along with three birds from Kapiti. Around the same time, individuals from Kapiti also founded populations on some of New Zealand’s other coastal isles.
Taking the last individuals from a species in danger of becoming extinct and focusing all of the energy on protecting them — either in a small area of their native range in the wild or in captivity — is a go-to worst-case-scenario tactic in conservation. It worked for cheetahs, the Mexican wolf, and another of New Zealand’s avian residents, the takahē. So the Kapiti Island population flourished, exceeding 1,600 individuals today. Based on numbers alone, little spotted kiwis are doing great.
But a recent study published in Proceedings of the Royal Society B gives conservationists pause. Its results suggest that these populations have inbred themselves into a genetic bottleneck: when a population crashes, its genetic variation gets slashed along with it. Basically, the birds lack genetic variety. A new disease could swoop in and easily wipe them out; the same goes for other challenges, like sudden changes in climate (something that’s not out of the realm of possibility in the next million years).
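The mechanics of a bottleneck are easy to see in a toy simulation (a purely illustrative sketch, not the study’s method): track one gene with two variants in a Wright-Fisher population, and watch how quickly random mating alone erases the variation when the population is small.

```python
import random

def allele_loss_rate(pop_size, generations=50, start_freq=0.5,
                     trials=200, seed=1):
    """Toy Wright-Fisher simulation: fraction of runs in which a
    two-variant gene drifts all the way to fixation or loss (i.e.
    the population loses that bit of genetic variety) within the
    given number of generations."""
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(trials):
        freq = start_freq
        for _ in range(generations):
            # Each generation resamples 2N gene copies at random
            # from the previous generation's allele frequency.
            copies = sum(rng.random() < freq for _ in range(2 * pop_size))
            freq = copies / (2 * pop_size)
            if freq in (0.0, 1.0):
                absorbed += 1
                break
    return absorbed / trials

# A handful of founders (like the five birds moved to Kapiti in 1912)
# sheds variation far faster than a larger population does.
small = allele_loss_rate(pop_size=5)
large = allele_loss_rate(pop_size=100)
print(small, large)
```

In runs like this, nearly every trial with five founders loses its variation within a few dozen generations, while the larger population keeps almost all of it — which is roughly the predicament the kiwi populations are in, writ large across the genome.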
“Yes, we have eight populations, and yes, they are all growing in size in terms of number of birds,” Kristina Ramstad, a co-author and a biologist at Victoria University of Wellington, New Zealand, told Science. “But they are all incredibly low in genetic diversity. … If the right disease comes along, it could wipe all of them out.” Science’s Traci Watson outlines another species with a similar problem, the Bengal tiger: a companion study in Proceedings of the Royal Society B found that the tigers retain only 7% of their ancestors’ genetic variation.
As for the kiwi study, Ramstad and her colleagues compared genetic data at 15 spots in the species’ genome across the populations on Kapiti, Red Mercury, Tiritiri Matangi, and Long Island. All four populations were almost genetically identical and showed telltale signs of recent genetic bottlenecks. Mysteriously, Kapiti birds are losing some measure of genetic diversity each year, and the other groups are too. In fact, little spotted kiwis have the lowest diversity of all kiwi species. On Long Island, a single mating pair from Kapiti founded the population that persists today, and the birds from D’Urville haven’t contributed at all to the overall genetic diversity of the species. For whatever reason, they never mated, and their line disappeared.
“We don’t know why [the D’Urville birds didn’t breed],” Ramstad told Scientific American. “We don’t know how long little spotted kiwi live, and we don’t know what’s their oldest age of reproduction. It’s still a bit of a guess; they keep outliving the scientists following them. So the birds [from D’Urville Island] could have been too old, or one of them could have been infertile. It could simply be a case that they didn’t fancy each other.”
So, what’s the takeaway message here: should conservationists be doing something differently, or is the game just stacked against them? Keeping up connections between surviving populations — so that they mate and pop out genetically diverse kids — seems to be as important as making sure populations in protected areas have the best shot at survival. The sentiment seems to be that things are just a heck of a lot more complicated than originally thought. And that’s no reason to throw in the towel. New Zealand’s Department of Conservation plans to relocate Kapiti birds (which have the most genetic diversity of the bunch) to smaller islands to boost their DNA. “Don’t keep all of your kiwis on one island” remains the best tactic at the moment.
For more info, check out Science and Scientific American‘s excellent pieces on the study: here and here.
I should preface this with the fact that my experience with jellyfish is limited. To me, they’re just an occasional nuisance that looks cool at the aquarium. I’ve only been stung once, but I did have to wear a fashionable blue unitard while snorkeling on the Great Barrier Reef to avoid the seasonal danger of getting stung by a tiny (and sometimes lethal) box jellyfish.
Jellyfish have been around for millions of years, and jellyfish blooms, in Australia and elsewhere, are just another facet of a healthy ocean environment. But blooms are becoming more and more common. In the Sea of Japan, a now almost yearly swarm of the gigantic Nomura’s jellyfish (Nemopilema nomurai) famously capsized a boat. Large blooms disrupt fishing industries and have even shut down nuclear power plants.
Jellyfish blooms are cyclic and seasonal — lots of factors come into play, like sun exposure, temperature, nutrient levels, changes in ocean currents, and the balance between predators and prey in the ecosystem. Some scientists think human influence might be the underlying issue. Agricultural runoff can add nutrients to the system, providing more food to the zooplankton that jellies eat.
Fishing can also take competitors — small pelagic fish (fish that live in the water column, near the ocean surface) such as sardines, herrings, and anchovies — out of the ecological equation. Jellies also feed on these fishes’ eggs and larvae, and with nothing left to keep them in check, their populations can easily explode and invade new territories. Thus, over-fishing is another suspect that some scientists and conservationists point to, and a recent paper in the Bulletin of Marine Science adds evidence to the pile.
Researchers at France’s Institut de Recherche pour le Développement (IRD) present two contrasting case studies from the Benguela Current, which flows north along the southwestern coast of Africa. In the first, just off the coast of Namibia in an area with lax fishing regulations, pelagic fish populations barely have time to recover before fishing starts up again, and jellies are already colonizing the area. If the current trajectory plays out, sardines and the like might one day be absent from the local food chain, with negative implications for the ecosystem’s other inhabitants. In the second ecosystem, off the coast of South Africa, where strict fishing regulations have been in place for nearly half a century, the small fish still dominate.
These scientists aren’t the first to point to over-fishing as the problem. A 2009 review paper published in Trends in Ecology and Evolution pin-pointed over-fishing, in addition to the effects of climate change and excess nutrients from fertilizers and sewage run-off, as a compelling culprit for jellyfish population growth.
“Mounting evidence suggests that open-ocean ecosystems can flip from being dominated by fish to being dominated by jellyfish,” Anthony Richardson, a marine ecologist and a co-author, said in a statement at the time. “This would have lasting ecological, economic, and social consequences. We need to start managing the marine environment in a holistic and precautionary way to prevent more examples of what could be termed a ‘jellyfish joyride’.”
Other researchers suggest that jellyfish are doing the same thing in Antarctica, out-competing local penguin species.
Though some scientists think global jellyfish populations are booming, others aren’t convinced. The data is iffy (drawn from anecdotes and case studies), and historic jelly population data is even worse. These pesky blobs are ridiculously hard to study, especially when they rival sumo wrestlers in weight. However, one group of marine biologists is tackling the daunting task of crunching jellyfish numbers. Their results, published earlier this year in PNAS, say that the evidence for a jellyfish population explosion just isn’t there (…yet). It turns out that jellyfish populations oscillate over a natural, roughly 20-year period. The researchers also detected a small linear uptick since the 1970s, but only further monitoring will tell if that trend is a serious problem or a minor blip.
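Why does separating a real trend from a natural oscillation require so much monitoring? A quick sketch with made-up numbers (not the study’s actual data) shows the problem: fit a straight line through a series that mixes a 20-year cycle with a small linear rise, once using the whole record and once using only a short window.

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic "jellyfish index": a small linear rise (0.05 per year)
# on top of a 20-year oscillation. The numbers are invented purely
# to illustrate the shape of the problem.
years = list(range(1940, 2011))
index = [0.05 * (y - 1940) + 2.0 * math.sin(math.pi * (y - 1940) / 10)
         for y in years]

full_slope = ols_slope(years, index)               # several full cycles
short_slope = ols_slope(years[-15:], index[-15:])  # under one cycle

print(round(full_slope, 3), round(short_slope, 3))
```

Over several full cycles the oscillation averages out and the fit recovers the true 0.05-per-year trend, but over a window shorter than one cycle the fitted slope comes out several times too steep — one reason short, patchy jellyfish records are so hard to interpret.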
So, if we are indeed on a ‘jellyfish joyride’, how do we get off? One solution, or consequence (depending on how you look at it): eat the jellies. In fact, jellyfish ending up on dinner plates seems to be the go-to example of the hardships we’ll face with over-fishing and other global food-related crises.
Upon reading this, I could not help but google “jellyfish edible”. It turns out that some species are harvested for food: their tentacle toxins aren’t harmful to humans, their bodies are more rigid, or both. Rhopilema esculentum is popular in China, while cannonball jellies (Stomolophus meleagris) off the US east coast are growing in popularity as well.
In fact, jellyfish has been a staple of Chinese cuisine (read: not Panda Express) for centuries. “Jellyfish masters” (I kid you not, it’s an actual job title) soak jellyfish strips in a salty mix, dry them, and ship them off to restaurants, where chefs rehydrate the strips and serve them raw or cooked. Smithsonian describes “cold shredded jellyfish” purchased at Jackey Cafe in DC’s Chinatown as “wetly crunchy” in a seaweed-salad sort of way. Yum?
You’ve probably never heard of the greater wax moth. But, according to a paper published in last week’s issue of Biology Letters, these moths hear just about everything — even sounds that don’t exist…as far as we know. These rather drab-looking insects have the highest hearing frequency range in the animal kingdom: 30 kilohertz (kHz) to 300 kHz.
Greater wax moths (Galleria mellonella; believe it or not, there’s actually a lesser wax moth out there, too) commonly mooch off beehives, much to the chagrin of apiarists. Humans spread them around the globe by moving beehives from one place to another, so while their native range probably encompassed Europe and parts of Asia, they’re invasive pests in Australia and North America. Adult wax moths leave hives for one reason: to mate. At night, the males emit mating calls — at 90 to 95 kHz — to attract females. Their larvae — yellowish-green caterpillars called waxworms — inhabit hives around the world and feed off the beeswax, usually from the combs where bee larvae live. Obviously, this doesn’t end well for the bees, and hives can quickly die off once the moths settle in.
Greater wax moths may be pests, but their mating calls make them a target and a scrumptious food source for bats. While stalking their prey via echolocation, bats also make higher-pitched calls to communicate without drawing the attention of an unsuspecting moth. Up until now, the North American gypsy moth (Lymantria dispar, another pest) had the highest known hearing frequency limit of any insect: 150 kHz. In contrast, bat echolocation reaches up to 212 kHz. Scientists had always supposed moths’ limits to be much lower, and a group of researchers at the University of Strathclyde in Glasgow, Scotland, wondered: are any moths keeping up in the evolutionary arms race?
Because greater wax moths live all over the world, they hear lots of different bat calls, making them an ideal test subject. Greater wax moths hear through tiny tympanal membranes (and by tiny, I mean smaller than a millimeter), which vibrate when they pick up sounds and activate the four nearby receptor cells that transmit a nerve signal to the brain. The setup is simple, but comparable to how a human eardrum works. The research team ordered greater wax moth larvae from a website that specializes in “live” food for exotic animals and waited for the larvae to cocoon, pupate, and emerge as adults from their lab incubator. Suspending 20 adult moths so they couldn’t fly off, the researchers measured how much their tympanal membranes vibrated and whether they transmitted a signal when they heard sounds from 30 kHz to 300 kHz. At 300 kHz, all the moths’ “eardrums” vibrated, and 15 of the 20 specimens showed neural activity.
“Such extreme auditory frequency sensitivity is unmatched in the animal kingdom,” the researchers say. By comparison, humans hear between 0.02 and 20 kHz, and dolphins, another animal that relies on echolocation, can hear up to 160 kHz. It’s safe to say greater wax moths are worthy opponents for the bats trying to swoop in and eat them for dinner.
No animal known to scientists can make a sound as high as 300 kHz. So, why did these moths evolve such ultrasonic hearing?
In New Scientist’s write-up of the study, Hannah Moir, a member of the research team who specializes in bioacoustics, posits two possible explanations. One: bats are actually making higher-frequency sounds than we can record (microphones have trouble detecting sounds over 150 kHz). Two: it’s an accident; the moths needed to pick up high-frequency bat calls, and such extreme sensitivity is just a side perk of that adaptation. Either way, if bats feel selection pressure to expand their range, these moths are already one step ahead.
Evolutionary mysteries aside, the research actually has some practical implications. Moir’s collaborator James Windmill hopes to use what they learned from the greater wax moth to develop new ultrasonic tech — think itty bitty microphones. I personally see inspiration for a Saturday morning cartoon here. If comics can feature ticks and cockroaches saving the world, a moth with ultrasonic hearing at least seems more plausible. Perhaps as Spiderman’s slightly less exciting cousin?