Beautiful buds beset bumblebees with bad bugs

Sexual liaisons can be difficult to achieve without some type of purposeful motion.  Flowering plants, which are rooted to the ground, are particularly challenged to bring the male close enough to the female to have sex.  One awesome adaptation is pollen, technically the male gametophyte –  or gamete (sperm)-generating plant. These tiny males get to females either by floating through the air, or by being transferred by animal pollinators such as bees. Plants can lure bees to their flowers by producing nectar – a sugar rich fluid – which bees lap up and use as a carbohydrate source.  While nectaring, bees also collect pollen, either intentionally or inadvertently, which provides them with essential proteins. When bees travel to the next flower, they may inadvertently drop some of their pollen load near the female gametophyte – in this case a tiny egg-generating plant (though tiny, the female gametophyte is considerably larger than is the male gametophyte).  We call this process of “tiny boy meets tiny girl” pollination. Once the two gametophytes meet, the pollen produces one or more sperm, which it uses to fertilize an egg within the female gametophyte.  There is more to it, but this will hopefully clarify the difference between pollination and fertilization.

monardadidyma.jpg

Bumblebee forages on beebalm, Monarda didyma. Credit: Jonathan Giacomini.

All of this business takes place within the friendly confines of the flower.  The same flower may be visited by many different bees of many different species. While feeding, bees carry on other bodily functions, including defecation.  They are not careful about where they defecate; consequently, a bee’s breakfast might also include feces from a previous bee visitor. Bumblebee (Bombus impatiens) feces carry many disease organisms, including the gut parasite Crithidia bombi, which can reduce learning, decrease colony reproduction and impair a queen’s ability to found new colonies. Because pollinators are so critical in ecosystems, Lynn Adler and her colleagues wondered whether certain types of flowers were better vectors for harboring and transmitting Crithidia bombi to other bumblebees.

Antirrhinummajus

Bumblebee forages on the snapdragon, Antirrhinum majus. Credit: Jonathan Giacomini.

The researchers chose 14 different flowering plant species, allowing uninfected bumblebees to forage on inflorescences (clusters of flowers) inoculated with a measured amount of Crithidia bombi parasites.  The bees were reared for seven days after exposure, and then were assessed for whether they had picked up the infection from their foraging experience, and if so, how intense the infection was. The researchers dissected each tested bee and counted the number of Crithidia cells within the gut.

researcher-photo.jpg

Researcher conducts foraging trial with Lobelia siphilitica inflorescence. Credit: Jonathan Giacomini.

Adler and her colleagues discovered that some plant species caused a much higher pathogen count (mean number of Crithidia cells in the bee gut) than did other plant species.  For example, bees that foraged on Asclepias incarnata (ASC) had four times as many pathogens, on average, as did bees that foraged on Digitalis purpurea (DIG) (top graph below). Bees foraging on Asclepias were much more likely to get infected (had greater susceptibility) than bees that foraged on several other species, most notably Linaria vulgaris (LIN) and Eupatorium perfoliatum (EUP) (middle graph). Lastly, if we limit our consideration to infected bees, the mean intensity of the infection was much greater for bees foraging on some species, such as Asclepias and Monarda didyma (MON), than on others, such as Digitalis and Antirrhinum majus (ANT) (bottom graph).

AdlerFig1

(Top graph) Mean number of Crithidia (2 microliter gut sample) hosted by bees after foraging on one of 14 different flowering plant species. This graph includes both infected and uninfected bees. (Middle graph) Susceptibility – the proportion of bees infected – after foraging trials on different plant species. (Bottom graph) Intensity of infection – Mean number of Crithidia for infected bees only. The capital letters below the graph are the first three letters of the plant genus. Numbers in bars are sample size.  Error bars indicate 1 standard error.
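Pathogen count, susceptibility and intensity are easy to conflate. As a quick sketch (using invented counts, not the study’s data), here is how the three measures in the figure relate to one another:

```python
# Hypothetical Crithidia counts (cells per 2-microliter gut sample) for
# seven bees that foraged on one plant species; zeros are uninfected bees.
counts = [0, 12, 0, 40, 25, 0, 8]

# Pathogen count (top graph): mean over ALL bees, infected or not.
pathogen_count = sum(counts) / len(counts)

# Susceptibility (middle graph): proportion of bees that became infected.
infected = [c for c in counts if c > 0]
susceptibility = len(infected) / len(counts)

# Intensity (bottom graph): mean count among INFECTED bees only.
intensity = sum(infected) / len(infected)

print(pathogen_count, susceptibility, intensity)
```

Note that a plant species can rank high on intensity but lower on susceptibility (or vice versa), which is why the three graphs do not tell identical stories.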

It would be impossible to repeat this experiment on the 369,000 known species of flowering plants (with many more still to be identified).  So Adler and her colleagues really wanted to know whether there were some flower characteristics or traits associated with plant species that served as the best vectors of disease.  The researchers measured and counted variables associated with the flowers, such as the size and shape of the corolla, the number of open flowers and the number of reproductive structures (flowers, flower buds and fruits) per inflorescence.

bluelobelia.png

Flower traits measured by Adler and colleagues (example for blue lobelia, Lobelia siphilitica). CL is corolla length. CW is corolla width. PL is petal length. PW is petal width. Credit: Melissa Ha.

The researchers also wanted to know whether any variables associated with the bees, such as bee size and bee behavior, would predict how likely it was that a bee would get infected.  Surprisingly, the number of reproductive structures per inflorescence stood out as the most important variable. In addition, smaller bees were somewhat more likely to get infected than larger bees, and bees that foraged for a longer time period were more prone to infection.

AdlerFig2

Mean susceptibility of bees to Crithidia infection after foraging on 14 different flowering plant species, in relation to the number of reproductive structures (flowers, buds and fruits) per inflorescence.

These findings are both surprising and exciting. Adler and her colleagues were surprised to find such big differences in the ability of plant species to transmit disease.  In addition, they were puzzled by the importance of the number of reproductive structures per inflorescence.  At this point, they don’t have a favorite hypothesis for its overriding importance, speculating that some unmeasured aspect of floral architecture that influences disease transmission may be correlated with the number of reproductive structures per inflorescence.

Penstemondigitalis

Bumblebee forages on Penstemon digitalis. In addition to the open flowers, note the large number of unopened buds.  Each of these counted as a reproductive structure for the graph above. Credit: Jonathan Giacomini.

The world is losing pollinators at a rapid rate, and there are concerns that if present trends continue, there may not be enough pollinators to pollinate flowers of some of our most important food crops. Disease is implicated in many of these declines, so it behooves us to understand how plants can serve as vectors of diseases that affect pollinators. Identifying floral traits that influence disease transmission could guide the creation of pollinator-friendly habitats within plant communities, and help to maintain diverse pollinator communities within the world’s ecosystems.

note: the paper that describes this research is from the journal Ecology. The reference is Adler, L. S., Michaud, K. M., Ellner, S. P., McArt, S. H., Stevenson, P. C. and Irwin, R. E. (2018), Disease where you dine: plant species and floral traits associated with pathogen transmission in bumble bees. Ecology, 99: 2535-2545. doi:10.1002/ecy.2503. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Dinoflagellates deter copepod consumption

Those of us who enjoy eating seafood are dismayed by the dreaded red tide, which renders some of our favorite prey toxic to us.  A red tide occurs when dinoflagellates and other algae increase sharply in abundance, often in response to upwelling of nutrients from the ocean floor.  Many of these dinoflagellates are red or brownish-red in color, so large numbers of them floating on or near the surface give the ocean its characteristic red color. These dinoflagellates produce toxic compounds (in particular neurotoxins) that pass through the food web, ultimately contaminating fish, molluscs and many other groups of species.

redtideCreditMarufish:FlickrIsahayaBay

Red tide at Isahaya Bay, Japan.  Credit: Marufish/Flickr.

Did toxicity arise in dinoflagellates to protect them from being eaten by predators – in particular by voracious copepods?  The problem with this hypothesis is that copepods eat an entire dinoflagellate.  Let’s imagine a dinoflagellate with a mutation that produces a toxic substance. At some point the dinoflagellate gets eaten, and the poor copepod consumer is exposed to the toxin.  Maybe it dies and maybe it lives, but the important result is that the dinoflagellate dies, and its mutant genes are gone forever, along with the toxic trait. The only way toxicity will benefit the dinoflagellate individual, and thus spread throughout the dinoflagellate population, is if it increases the survival/reproductive success of individuals with the toxic trait. This can occur if copepods have some mechanism for detecting toxic dinoflagellates, and are therefore less likely to eat them.
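The verbal argument above can be sketched as a toy simulation. Everything here is invented for illustration: two cell types, fixed per-generation predation risks, and the assumption that detection lowers the risk for toxic cells.

```python
def final_toxic_fraction(eat_toxic, eat_plain, gens=200, p0=0.01):
    """Track the frequency of a toxic mutant across generations, where
    each type's fitness is simply its chance of NOT being eaten."""
    p = p0
    for _ in range(gens):
        w_toxic = 1.0 - eat_toxic    # survival of toxic cells
        w_plain = 1.0 - eat_plain    # survival of non-toxic cells
        p = p * w_toxic / (p * w_toxic + (1 - p) * w_plain)
    return p

# No detection: toxic and plain cells are eaten equally often, and the
# mutation stays rare. With detection (toxic cells eaten less often),
# the toxic trait sweeps through the population.
print(final_toxic_fraction(0.5, 0.5))   # stays near its starting 1%
print(final_toxic_fraction(0.3, 0.5))   # approaches 100%
```

This is the crux of the paragraph above: without some avoidance mechanism in the predator, toxicity confers no advantage and cannot spread.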

Jiayi Xu and Thomas Kiørboe went looking for such a mechanism using 13 different species or strains of dinoflagellates that were presented to the copepod Temora longicornis. This copepod beats its legs to create an ocean current that moves water, and presumably dinoflagellates, in its direction, which it then eats.  For their experiment, the researchers glued a hair to the dorsal surface of an individual copepod (very carefully), and they then attached the other side of the hair to a capillary tube, which was controlled by a micromanipulator. They placed these copepods into small aquaria, where the copepods continued to beat their legs, eat and engage in other bodily functions.

Photo 3

Aquarium with tethered copepod and recording equipment: Credit: J. Xu.

The researchers then added a measured amount of one type of dinoflagellate into the aquarium, and using high resolution videography, watched the copepods feed over the next 24 hours.

Picture1

Tethered copepod beats its legs to attract a dinoflagellate (round blue circular cell). Credit: J. Xu.

Twelve of the dinoflagellate strains were known to be toxic, though they produced several different types of toxin. Protoceratium reticulatum was a nontoxic control species of dinoflagellate.  As you can see below, on average, copepods ate more of the nontoxic P. reticulatum than they did of any of the toxic species.

XuFig1

Average dinoflagellate biomass ingested by the tethered copepods.  P. reticulatum  is the nontoxic control.  Error bars are 1 SE.

Xu and Kiørboe identified two major mechanisms that underlie selectivity by the copepod predator.  In many cases, the copepod successfully captured the prey, but then rejected it (top graph below). For one strain of A. tamarense prey, and to a lesser extent for K. brevis prey, the predator simply fed less as a consequence of reducing the proportion of time that it beat its feeding legs (bottom graph below).

XuFig3bd

Copepod feeding behavior on 13 dinoflagellate prey species and strains.  Top graph is the fraction of dinoflagellates rejected, while bottom graph is the proportion of time the copepod beats its feeding legs in the presence of a particular species/strain of dinoflagellate.

If you look at the very first graph in this post, which shows the average dinoflagellate biomass consumed, you will note that both strains of K. brevis (K8 and K9) are eaten very sparingly.  The graphs just above show that the copepod rejects some K. brevis that it captures, and beats its legs a bit less often when presented with K. brevis. However, the increase in rejections and the decrease in leg beating are not sufficient to account for the tremendous reduction in consumption. So something else must be going on.  The researchers suspect that the copepod can identify K. brevis cells from a distance, presumably through olfaction, and decide not to capture them. This mechanism warrants further exploration.

One surprising finding of this study is that the copepod responds differently to one strain of A. tamarense than it does to other strains of the same species.  Xu and Kiørboe point out that previous studies of copepod/dinoflagellate interactions have identified other surprises.  For example, there are cases where a dinoflagellate strain is toxic to one strain of copepod, but harmless to another copepod strain of the same species. Also, within a dinoflagellate species, one strain may have a very different distribution of toxins than does a second strain.  So why does this degree of variation exist in this system?

The researchers argue that there may be an evolutionary arms race between copepods and dinoflagellates.  The copepod adapts to the toxin of co-occurring dinoflagellates, becoming resistant to the toxin. This selects for dinoflagellates that produce a novel toxin that the copepod is sensitive to. Over time, the copepod evolves resistance to the second toxin as well, and so on… Because masses of ocean water and populations of both groups are constantly mixing, different species and strains are exposed to novel environments with high frequency. Evolution happens.

note: the paper that describes this research is from the journal Ecology. The reference is Xu, J. and Kiørboe, T. (2018), Toxic dinoflagellates produce true grazer deterrents. Ecology, 99: 2240-2249. doi:10.1002/ecy.2479. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Decomposition: it’s who you are and where you are

“Follow the carbon” is a growing pastime of ecologists and environmental researchers worldwide. In the process of cellular respiration, organisms use carbon compounds to fuel their metabolic pathways, so having carbon around makes life possible.  Within ecosystems, following the carbon is equivalent to following how energy flows among the producers, consumers, detritivores and decomposers. In soils, decomposers play a central role in energy flow, but we might not appreciate their importance because many decomposers are tiny, and decomposition is very slow.  We are thrilled by a hawk subduing a rodent, but are less appreciative of a bacterium breaking down a lignin molecule, even though at their molecular heart, both processes are the same, in that complex carbon enters the organism and fuels cellular respiration.  However, from a global perspective, cellular respiration produces carbon dioxide as a waste product, which if allowed to escape the ecosystem, will increase the pool of atmospheric carbon dioxide, thereby increasing the rate of global warming. So following the carbon is an ecological imperative.

As the world warms, trees and shrubs are colonizing regions that previously were inaccessible to them. In northern Sweden, mountain birch forests (Betula pubescens) and birch shrubs (Betula nana) are advancing into the tundra, replacing the heath that is dominated by the crowberry, Empetrum nigrum. As he began his PhD studies, Thomas Parker became interested in the general question of how decomposition changes as trees and shrubs expand further north in the Arctic. On his first trip to a field site in northern Sweden he noticed that the areas of forest and shrubs produced a lot of leaf litter in autumn yet there was no significant accumulation of this litter the following year. He wondered how the litter decomposed, and how this process might change as birch overtook the crowberry.

ParkerView

One of the study sites in autumn: mountain birch forest (yellow) in the background, dwarf birch (red) on the left and crowberry on the right. Credit: Tom Parker.

Several factors can affect leaf litter decomposition in northern climes.  First, depending on what they are made of, different species of leaves will decompose at different rates.  Second, different types of microorganisms present will target different types of leaves with varying degrees of efficiency.  Lastly, the abiotic environment may play a role; for example, due to shade and creation of discrete microenvironments, forests have deeper snowpack, keeping soils warmer in winter and potentially elevating decomposer cellular respiration rates. Working with several other researchers, Parker tested the following three hypotheses: (1) litter from the more productive vegetation types will decompose more quickly, (2) all types of litter decompose more quickly in forest and shrub environments, and (3) deep winter snow (in forest and shrub environments) increases litter decomposition compared to heath environments.

To test these hypotheses, Parker and his colleagues established 12 transects that transitioned from forest to shrub to heath. Along each transect, they set up three 2 m2 plots – one each in the forest, shrub, and heath – 36 plots in all. In September of 2012, the researchers collected fresh leaf litter from mountain birch, shrub birch and crowberry, which they sorted, dried and placed into 7 x 7 cm polyester mesh bags.  They placed six litter bags of each species at each of the 36 plots, and then harvested these bags periodically over the next three years. Bags were securely attached to the ground so that small decomposers could get in, but the researchers had to choose a relatively small mesh diameter to make sure they successfully enclosed the tiny crowberry leaves. This restricted access to some of the larger decomposers.

ParkerLitterBags

Some litter bags attached to the soil surface at the beginning of the experiment. Credit: Tom Parker.

To test for the effect of snow depth, the researchers also set up snow fences on nearby heath sites.  These fences accumulated blowing and drifting snow, creating a snowpack comparable to that in nearby forest and shrub plots.

Parker and his colleagues found that B. pubescens leaves decomposed most rapidly and E. nigrum leaves decomposed most slowly.  In addition, leaf litter decomposed fastest in the forest and most slowly in the heath.  Lastly, snow depth did not influence decomposition rate.

ParkerEcologyFig1

(Left graph) Decomposition rates of E. nigrum, B. nana and B. pubescens in heath, shrub and forest. (Right graph) Decomposition rates of E. nigrum, B. nana and B. pubescens in heath under three different snow depths simulating snow accumulation at different vegetation types: Heath (control), + Snow (Shrub) and ++ Snow (Forest). Error bars are 1 SE.

B. pubescens in forest and shrub lost the greatest amount (almost 50%) of mass over the three years of the study, while E. nigrum in heath lost the least (less than 30%).  However, B. pubescens decomposed much more rapidly in the forest than in the shrub between days 365 and 641. The bottom graphs below show that snow fences had no significant effect on decomposition.

ParkerEcologyFig2

Percentage of litter mass remaining for (a, d) E. nigrum, (b, e) B. nana and (c, f) B. pubescens in heath, shrub, or forest. Top graphs (a, b, c) are natural transects, while the bottom graphs (d, e, f) represent heath tundra under three different snow depths simulating snow accumulation at different vegetation types: Heath (control), + Snow (Shrub) and ++ Snow (Forest). Error bars represent 1 SE. Shaded areas on the x-axis indicate the snow-covered season in the first two years of the study.
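Litter-bag results like these are commonly summarized with a single-exponential decay constant k, where the fraction of mass remaining at time t is e^(-kt). The sketch below uses that standard model with invented round numbers echoing the text; the paper’s actual fitting procedure may differ.

```python
import math

def decay_constant(fraction_remaining, years):
    """Single-exponential litter decay: m(t)/m(0) = exp(-k * t)."""
    return -math.log(fraction_remaining) / years

# Roughly 50% of B. pubescens mass lost over 3 years in forest, versus
# less than 30% lost for E. nigrum in heath (illustrative values only).
k_birch = decay_constant(0.50, 3)
k_crowberry = decay_constant(0.72, 3)
print(round(k_birch, 3), round(k_crowberry, 3))
```

A larger k means faster decomposition, so a single number lets you compare litter types and environments at a glance.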

Why do mountain birch leaves decompose so much more than do crowberry leaves?  The researchers chemically analyzed both species and discovered that birch leaves had 1.7 times more carbohydrate than did crowberry, while crowberry had 4.9 times more lipids than did birch. Their chemical analysis showed much of birch’s rapid early decomposition was a result of rapid carbohydrate breakdown. In contrast, crowberry’s slow decomposition resulted from its high lipid content being relatively resistant to the actions of decomposers.

ParkerResearchers

Researchers (Parker right, Subke left) harvesting soils and litter in the tundra. Credit: Jens-Arne Subke.

Parker and his colleagues did discover that decomposition was fastest in the forest independent of litter type. Forest soils are rich in brown-rot fungi, which are known to target the carbohydrates (primarily cellulose) that are so abundant in mountain birch leaves.  The researchers propose that a history of high cellulose litter content has selected for a biochemical environment that efficiently breaks down cellulose-rich leaves. Once the brown-rot fungi and their allies have done much of the initial breakdown, another class of fungi (ectomycorrhizal fungi) kicks into action and metabolizes (and decomposes) the more complex organic molecules.

The result of all this decomposition in the forest, but not the heath, is that tundra heath stores much more organic carbon than does the adjacent forest (which loses stored organic compounds to decomposers).  As forests continue their relentless march northward, replacing the heath, it is very likely that they will introduce their efficient army of decomposers to the former heathlands.  These decomposers will feast on the vast supply of stored organic carbon compounds, releasing large quantities of carbon dioxide into the atmosphere and further exacerbating global warming. This is one of several positive feedback loops expected to destabilize global climate systems in the coming years.

note: the paper that describes this research is from the journal Ecology. The reference is Parker, T. C., Sanderman, J., Holden, R. D., Blume‐Werry, G., Sjögersten, S., Large, D., Castro‐Díaz, M., Street, L. E., Subke, J. and Wookey, P. A. (2018), Exploring drivers of litter decomposition in a greening Arctic: results from a transplant experiment across a treeline. Ecology, 99: 2284-2294. doi:10.1002/ecy.2442. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Rice fields foster biodiversity

Restoration ecologists want to restore ecosystems that have been damaged or destroyed by human activity.  One approach they use is “rewilding” – which can mean different things to different people.  To some, rewilding involves returning large predators to an ecosystem, thereby reestablishing important ecological linkages.  To others, rewilding requires corridors that link different wild areas, so animals can migrate from one area to another.  One common thread in most concepts of rewilding is that once established, restored ecosystems should be self-sustaining, so that if ecosystems are left to their own devices, ecological linkages and biological diversity can return to pre-human-intervention levels, and remain at those levels in the future.

ardea intermedia (intermediate egret). photo by n. katayama

The intermediate egret, Ardea intermedia, plucks a fish from a flooded rice field. Credit: N. Katayama.

Chieko Koshida and Naoki Katayama argue that rewilding may not always increase biological diversity.  In some cases, allowing ecosystems to return to their pre-human-intervention state can actually cause biological diversity to decline. Koshida and Katayama were surveying bird diversity in abandoned rice fields, and noticed that bird species distributions were different in long-abandoned rice fields in comparison to still-functioning rice fields.  To follow up on their observations, they surveyed the literature, and found 172 studies that addressed how rice field abandonment in Japan affected species richness (number of species) or abundance.  For the meta-analysis we will be discussing today, an eligible study needed to compare richness and/or abundance for at least two of three management states: (1) cultivated (tilled, flood irrigated, rice planted, and harvested every year), (2) fallow (tilled or mowed once every 1-3 years), and (3) long-abandoned (unmanaged for at least three years).

koshidafig1

Three different rice field management states – cultivated, fallow and long-abandoned – showing differences in vegetation and water conditions. Credit: C. Koshida.

Meta-analyses are always challenging, because the data are collected by many researchers, and for a variety of purposes.  For example, some researchers may only be interested in whether invasive species were present, or they may not be interested in how many individuals of a particular species were present. Ultimately 35 studies met Koshida and Katayama’s criteria for their meta-analysis (29 in Japanese and six in English).

Overall, abandoning or fallowing rice fields decreased species richness or abundance to 72% of the value of cultivated rice fields. As you might suspect, these effects were not uniform for different variables or comparisons. Not surprisingly, fish and amphibians declined sharply in abandoned rice fields – much more than other groups of organisms. Abundance declined more sharply in abandoned fields than did species richness.  Several other trends also emerged.  For example, complex landscapes such as yatsuda (forested valleys) and tanada (hilly terraces) were more affected than were simple landscapes.  In addition, wetter abandoned fields were able to maintain biological diversity, while drier abandoned fields declined in richness and abundance.

koshidafig2

The effects of rice field abandonment or fallowing for eight different variables.  Effect size is ln (Mt/Mc), where Mt = mean species richness or abundance for the treatment, and Mc = mean species richness or abundance for the control.  The treated field in all comparisons was the one that was abandoned for the longer time.  A positive effect size means that species richness or abundance increased in the treated (longer abandoned) field, while a negative effect size means that species richness or abundance declined in the treated field. Numbers in parentheses are number of data sets used for comparisons.
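The effect size here is a log response ratio, which is worth seeing with numbers. The values below are invented, except for the overall 72% figure quoted in the text:

```python
import math

def effect_size(mean_treatment, mean_control):
    """Log response ratio ln(Mt / Mc) used in the meta-analysis figure."""
    return math.log(mean_treatment / mean_control)

# Overall result: abandonment/fallowing reduced richness or abundance
# to 72% of cultivated values, i.e. Mt / Mc = 0.72.
print(round(effect_size(0.72, 1.0), 2))   # ln(0.72) is about -0.33
print(effect_size(1.0, 1.0))              # no change gives exactly 0.0
```

The log makes the scale symmetric: a halving (ln 0.5 ≈ -0.69) and a doubling (ln 2 ≈ +0.69) sit the same distance from zero, which is why meta-analyses favor this measure.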

When numerous variables are considered, researchers need to figure out which are most important.  Koshida and Katayama used a statistical approach known as “random forest” to model the impact of different variables on the reduction in biological diversity following abandonment.  This approach generates a variable – the percentage increase in mean square error (%increaseMSE) – which indicates the importance of each variable for the model (we won’t go into how this is done!).  As the graph below shows, soil moisture was the most important variable, which tells us (along with the previous figure above) that abandoned fields that maintained high moisture levels also kept their biological diversity, while those that dried out lost out considerably.  Management state was the second most important variable, as long-abandoned fields lost considerably more biological diversity than did fallow fields.

koshidafig4

Importance estimates of each variable (as measured by %increase MSE).  Higher values indicate greater importance.
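For readers curious about %increaseMSE: it is a permutation-style importance score from R’s randomForest package. The sketch below hand-rolls the same idea in Python, with invented data and a simple linear "model" standing in for the random forest; only the logic, not the numbers, reflects the study.

```python
# Permutation importance: shuffle one predictor, see how much worse the
# model's predictions get. A big rise in MSE = an important variable.
import random
random.seed(0)

n = 300
moisture = [random.random() for _ in range(n)]                # soil moisture, 0-1
management = [random.choice([0.0, 1.0, 2.0]) for _ in range(n)]  # 3 states
# Response built so moisture matters more than management state:
y = [2.0 * m - 0.5 * g + random.gauss(0, 0.2)
     for m, g in zip(moisture, management)]

# "Model": the true coefficients, standing in for a fitted random forest.
def predict(m, g):
    return 2.0 * m - 0.5 * g

def mse(ms, gs):
    return sum((predict(m, g) - yi) ** 2 for m, g, yi in zip(ms, gs, y)) / n

def importance(shuffle_moisture):
    """Mean rise in MSE after shuffling one predictor (cf. %increaseMSE)."""
    base = mse(moisture, management)
    rises = []
    for _ in range(20):
        if shuffle_moisture:
            rises.append(mse(random.sample(moisture, n), management) - base)
        else:
            rises.append(mse(moisture, random.sample(management, n)) - base)
    return sum(rises) / len(rises)

print("moisture importance:  ", round(importance(True), 3))
print("management importance:", round(importance(False), 3))
```

Because moisture drives the simulated response more strongly, shuffling it degrades predictions more, mirroring how soil moisture topped the importance ranking in the figure.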

Unfortunately, only three studies had data on changes in biological diversity over the long term.  All three of these studies surveyed plant species richness over a 6 – 15 year period, so Koshida and Katayama combined them to explore whether plant species richness recovers following long-term rice field abandonment. Based on these studies, species richness continued to decline over the entire time period.

koshidafig6

Plant species richness in relation to time since rice fields were abandoned (based on three studies).

Koshida and Katayama conclude that left to their own devices, some ecosystems, like rice fields, will actually decrease, rather than increase, in biological diversity.  Rice fields are, however, special cases, because they provide alternatives to natural wetlands for many organisms dependent on aquatic/wetland environments (such as the frog below). In this sense, rice fields should be viewed as ecological refuges for these groups of organisms.

rana-porosa-porosa-tokyo-daruma-pond-frog.-photo-by-y.g.-baba.jpg

Rana porosa porosa (Tokyo Daruma Pond Frog). Credit: Y. G. Baba

These findings also have important management implications.  For example, conservation ecologists can promote biological diversity in abandoned rice fields by mowing and flooding. In addition, managers should pay particular attention to abandoned rice fields with complex structure, as they are particularly good reservoirs of biological diversity, and are likely to lose species if allowed to dry out. Failure to attend to these issues could lead to local extinctions of specialist wetland species and of terrestrial species that live in grasslands surrounding rice fields. Lastly, restoration ecologists working on other types of ecosystems need to carefully consider the effects on biological diversity of allowing those ecosystems to return to their natural state without any human intervention.

note: the paper that describes this research is from the journal Conservation Biology. The reference is Koshida, C. and Katayama, N. (2018), Meta‐analysis of the effects of rice‐field abandonment on biodiversity in Japan. Conservation Biology, 32: 1392-1402. doi:10.1111/cobi.13156. Thanks to the Society for Conservation Biology for allowing me to use figures from the paper. Copyright © 2018 by the Society for Conservation Biology. All rights reserved.

Sweltering ants seek salt

Like humans, ants need salt and sugar.  Salt is critical for a functioning nervous system and for maintaining muscle activity, while sugar is a ready energy source. In ectotherms such as ants, body temperature is influenced primarily by the external environment, with higher environmental temperatures leading to higher body temperatures.  When ants get hot, their metabolic rates rise, so they can go out and do energetically demanding activities such as foraging for essential resources like salt and sugar. On the down side, hot ants excrete more salt and burn up more sugar.  In addition, as in humans, very high body temperatures can be lethal, so ants are forced to seek shelter during extreme heat.  As a beginning graduate student, Rebecca Prather wanted to know whether ants adjust their foraging rates on salt and sugar in response to the conflicting demands of elevated temperatures on ants’ physiological systems.

Prather at field site

Rebecca Prather at her field site in Oklahoma, USA. Credit: Rebecca Prather.

Prather and her colleagues studied two different field sites: Centennial Prairie is home to 16 ant species, while Pigtail Alley Prairie has nine species.  For their first experiment, the researchers established three transects with 100 stations baited with vials containing cotton balls and either 0.5% salt (NaCl) or 1% sucrose.  The bait stations were 1 meter apart.  After 1 hour, they collected the vials (with or without ants), and counted and identified each ant in each vial.  The researchers measured soil temperature at the surface and at a depth of 10 cm, and repeated these experiments at 9 AM, 1 PM and 5 PM, four times each month from April through October.

AntsinVial.jpg

Ants recruited to vials with 0.5% salt solution.  Credit: Rebecca Prather.

Sugar is easily stored in the body, so while sugar consumption increases with temperature, due to increased ant metabolic rate, sugar excretion is relatively stable with temperature.  In contrast, salt cannot be stored effectively, so salt excretion increases at high body temperature.  Consequently, Prather and her colleagues expected that ant salt-demand would increase with temperature more rapidly than would ant sugar-demand.

PratherFig1

Ant behavior in response to vials with 0.5% salt (dark circles) and 1% sucrose (white circles) at varying soil temperatures at 9 AM, 1 PM (13:00) and 5 PM (17:00). The three left graphs show the number of vials discovered (containing at least one ant), while the three right graphs show the number of ants recruited per vial.  The Q10 value = the rate of discovery or recruitment at 30 deg. C divided by the rate of discovery or recruitment at 20 deg. C. * indicates that the two curves have statistically significantly different slopes.
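The Q10 in the figure is a standard physiological shorthand: the factor by which a rate changes for each 10 °C rise in temperature. A minimal sketch (the rates below are invented):

```python
def q10(rate_hot, rate_cold, t_hot=30.0, t_cold=20.0):
    """Q10: factor by which a rate changes per 10 deg C rise in temperature."""
    return (rate_hot / rate_cold) ** (10.0 / (t_hot - t_cold))

# With the figure's 30 vs 20 deg C comparison, Q10 is simply the ratio:
# e.g., a hypothetical discovery rate doubling from 5 to 10 vials/hour.
print(q10(10, 5))          # 2.0
# Over a 20-degree span the ratio must be square-rooted:
print(q10(8, 2, 30, 10))   # 2.0, since 4 ** (10 / 20) = 2
```

A Q10 of 1 means temperature has no effect; the steeper salt curves in the figure correspond to larger Q10 values than the sugar curves.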

The researchers discovered that ants foraged more at high temperatures. However, when surface temperatures were too high (most commonly at 1 PM during summer months), ants could not forage and remained in their nests.  At all three times of day, ants discovered more salt vials at higher soil temperatures. Ants also discovered more sugar vials at higher temperatures in the morning and evening, but not during the 1 PM surveys. Most interestingly, the curve was much steeper for salt discovery than for sugar discovery, indicating that higher temperature increased salt discovery rate more than it increased sugar discovery rate (three graphs on left).

When ants discover a high-quality resource, they recruit nestmates to the resource to help with the harvest.  Ant recruitment to salt, but not to sugar, increased with temperature, indicating that ant demand for 0.5% salt increased more rapidly than ant demand for 1% sugar (three graphs above on right).

The researchers were concerned that the sugar concentrations were too low to excite much recruitment, so they replicated the experiments the following year using four different sugar concentrations.  Ant recruitment was substantially greater to higher sugar concentrations, but was still only one-half to one-third of recruitment to 0.5% salt.

PratherFig2

Ant recruitment (y-axis) to different sugar concentrations at a range of soil temperatures (x-axis). Q10 values are to the left of each line of best fit.

Three of the four most common ant species showed the salt and sugar preferences described above, but the fourth common species, Formica pallidefulva, actually decreased foraging at higher temperatures.  The researchers suggest that this species is outcompeted by the more dominant species at high temperatures, and is forced to forage at lower temperatures when fewer competitors are present.

In a warming world, ant performance will increase with temperature up to the ants’ thermal maximum, at which point it will crash.  Ants are critical to ecosystems, playing important roles as consumers and as seed dispersers. Thus many ecosystems in which ants are common (and there are many such ecosystems!) may function more or less efficiently depending on how changing temperatures influence ants’ abilities to consume and conserve essential nutrients such as salt.

note: the paper that describes this research is from the journal Ecology. The reference is Prather, R. M., Roeder, K. A., Sanders, N. J. and Kaspari, M. (2018), Using metabolic and thermal ecology to predict temperature dependent ecosystem activity: a test with prairie ants. Ecology, 99: 2113-2121. doi:10.1002/ecy.2445. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

What grows up must go down: plant species richness and soils below.

Almost 20 years ago, Dorota Porazinska was a postdoctoral researcher investigating whether plant diversity influenced the diversity of organisms living in the soil below, including bacteria, protists, fungi and nematodes (collectively known as soil biota).  Surprisingly, she and her colleagues discovered no linkages between aboveground and belowground species diversity.  She suspected that two issues were responsible for this lack of linkage. First, the early study lumped related species into functional groups – for example, nematodes that eat bacteria, or nematodes that eat fungi.  Lumping simplifies data collection but discards a lot of information because individual species are not distinguished.  Back in those days, identifying species with DNA analysis was time-consuming, expensive, and often impractical. The second issue was that even if aboveground and belowground diversity were linked, the linkage might be difficult to detect.  Ecosystems are very complex, and many belowground species make a living off legacies of carbon and other nutrients – the remains of organisms that lived many generations ago.   These legacy organic nutrient pools allow for indirect (and thus harder to detect) linkages between aboveground and belowground species.

Porazinska and her colleagues reasoned that if aboveground/belowground relationships existed, they would be easiest to detect in the simplest ecosystems, which lack significant pools of legacy nutrients. They also used molecular techniques that were not readily available for earlier studies to identify distinct species based on DNA analysis. The researchers established 98 circular plots of 1-m radius at the Niwot Ridge Long Term Ecological Research Site in the Rocky Mountains of Colorado, USA. At each plot, they identified and counted every vascular plant, and recorded the presence of moss and lichen.  They also censused soil biota using a variety of DNA amplification and isolation techniques that allowed them to identify bacteria, archaea, protists, fungi and nematodes to species.

PorazinskaOpening9256 Photo

Field assistant Jarred Huxley surveys plants in a high species richness plot. Credit Dorota L. Porazinska.

As expected in this alpine environment, plant species richness was quite low, averaging only 8 species per plot (range = 0 – 27).  In contrast to what had been found in other ecosystems, high plant diversity was associated with high diversity of soil biota.

PorazinskaEcologyFig1

Relationship between plant richness (x-axis) and soil biota richness (y-axis) for (A) bacteria, (B) eukaryotes (excluding fungi and nematodes), (C) fungi, and (D) nematodes.  OTUs are operational taxonomic units, which represent organisms with very similar or identical DNA sequences on a marker gene.  For our purposes, they represent distinct species.
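To make the OTU idea in the caption concrete, here is a toy sketch of greedy sequence clustering. Real marker-gene pipelines use dedicated tools and typically a 97% identity threshold on much longer sequences; the short sequences and the 90% threshold below are invented purely for illustration:

```python
def identity(a, b):
    """Fraction of positions at which two aligned sequences match."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def count_otus(seqs, threshold=0.97):
    """Greedy OTU picking: a sequence joins the first OTU whose
    representative it matches at or above the identity threshold;
    otherwise it founds a new OTU."""
    reps = []  # one representative sequence per OTU
    for s in seqs:
        if not any(identity(s, r) >= threshold for r in reps):
            reps.append(s)
    return len(reps)

# Two near-identical sequences collapse into one OTU; the third is distinct:
count_otus(["ACGTACGTAC", "ACGTACGTAT", "TTTTGGGGCC"], threshold=0.9)  # -> 2
```

The OTU count plays the role that species counts play aboveground: it is the belowground "richness" plotted on the y-axes of the figure.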

Looking at the graphs above, you can see that different groups responded to different degrees; nematodes had the strongest response to increases in plant richness, while fungi had the weakest.  When viewed at a finer level, some groups of soil organisms, including photosynthetic microorganisms such as cyanobacteria and green algae, actually decreased in richness, presumably in response to competition with aboveground plants for light and possibly nutrients.

Given the strong relationship between plant species richness and soil biota richness, Porazinska and her colleagues next explored whether high plant richness was associated with soil nutrient levels (nutrient pools).  In general, there was a strong correlation between plant species richness and nutrient pools (see graphs below).  But soil moisture and the soil’s ability to hold moisture were the two most important factors associated with nutrient pools.

PorazinskaEcologyFig2

Amount (micrograms per gram of soil) of carbon (left graph) and nitrogen (right graph) in relation to plant species richness.

Ecologists studying soil processes can measure the rates at which microorganisms are metabolizing nutrients such as carbon, phosphorus and nitrogen.  The expectation was that if high plant species richness was associated with higher soil biota richness, and larger soil nutrient pools, then the activity of enzymes that metabolize soil nutrients should proportionally increase with these factors.  The researchers found that enzyme activity was very low where plants were absent or rare, and greatest in complex plant communities.  But the most important factors influencing enzyme activity were the amount of organic carbon present within the soil, and the ability of the soil to hold water.

PorazinskaClosing4427

Patchy vegetation at the field site. Credit: Cliffton P. Bueno de Mesquita.

Porazinska and her colleagues hypothesize that the relationships between plant species richness, soil biota richness, nutrient pools, and soil processes such as enzyme activity exist in most ecosystems, but are obscured by indirect linkages between these different levels.  In more complex ecosystems such as grasslands and forests, these relationships are difficult to observe because carbon inputs into the soil form large legacy carbon pools. These carbon pools, and the ability of the soil to hold nutrient pools, fundamentally influence the abundance and richness of soil biota. In contrast, in nutrient-poor soils, such as high Rocky Mountain alpine meadows, legacy carbon pools are rare and small. Consequently, plants and soil biota interact more directly, and correlations between plant species diversity and soil biota diversity are much easier to detect.

note: the paper that describes this research is from the journal Ecology. The reference is Porazinska, D. L., Farrer, E. C., Spasojevic, M. J., Bueno de Mesquita, C. P., Sartwell, S. A., Smith, J. G., White, C. T., King, A. J., Suding, K. N. and Schmidt, S. K. (2018), Plant diversity and density predict belowground diversity and function in an early successional alpine ecosystem. Ecology, 99: 1942-1952. doi:10.1002/ecy.2420. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

 

Meandering meerkats

Dispersal – the movement of individuals to a new location – is a complex process that ecologists divide into three stages: emigration (leaving the group), transience through an unfamiliar landscape, and settlement in a suitable habitat. Dispersal is fraught with danger, as dispersers usually have a higher chance of starving or of being eaten by predators, and may suffer a lower reproductive rate.  So why move?

The problem is that there are major issues with not moving.  First, if nobody disperses, population densities could increase alarmingly, straining resources and increasing the incidence of disease transmission.  Second, if nobody disperses, close relatives would tend to live near each other.  If these relatives mate, there would be a high probability of bad combinations of genes being expressed, leading to developmental abnormalities or high offspring mortality (geneticists call this inbreeding depression). In social species, such as meerkats, Suricata suricatta, the issues are even more complex, as dispersal can break up social groups that work well together to detect predators or find resources.  Nino Maag and his colleagues explored what factors influence meerkat dispersal decisions, survival and reproduction, and how those factors affect overall population dynamics in the Kuruman River Reserve in South Africa.

5_Arpat_Ozgul

A group of vigilant meerkats. Credit: Arpat Ozgul.

Meerkats live in groups of 2-50 individuals, with a dominant pair that monopolizes reproduction.  While pregnant, the dominant female usually evicts some subordinate females from the group; this coalition of evictees will either remain apart from the group (but within the confines of the territory) and eventually be allowed back in, or else emigrate to a new territory. By attaching radio collars to subordinate females, the researchers were able to follow emigrants to determine their fates.

3_Gabriele_Cozzi

Nino Maag collects data in the Kalahari Desert while a meerkat, wearing a radio collar, strolls by. Credit: Gabriele Cozzi.

How does population density affect emigration rates of evicted females?  You might think that meerkats would be most likely to emigrate at high population density, as a way of avoiding resource competition.  As it turns out, the story is more complicated.  First, individual females (solid lines in the graph below) are more likely to remain with the group (not emigrate) than are coalitions of two or more females (dashed lines). Second, emigration rates were highest at low population density, intermediate at high population density and lowest at intermediate population density. This nonlinear effect can be explained as follows: in a very small group, the benefits of remaining are low, so evictees are more likely to emigrate.  As population density (and group size) increases, meerkats enjoy higher success through cooperation between individuals (in particular, detecting and avoiding predators), so evictees tend to stay.  But when population densities get too high, there are not enough resources to go around, and evictees are again more likely to emigrate.
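The nonlinear pattern described above can be caricatured with a toy asymmetric-U function. This is not the authors' model; every parameter and number here is invented solely to illustrate the qualitative shape (emigration highest at low density, lowest at intermediate density, in between at high density):

```python
def emigration_prob(density, d_opt=0.5, base=0.2, left_slope=1.0, right_slope=0.6):
    """Toy model of density-dependent emigration (all parameters invented):
    probability is lowest at an intermediate density d_opt, rises steeply
    toward low densities (few cooperators) and more gently toward high
    densities (resource competition)."""
    slope = left_slope if density < d_opt else right_slope
    return min(base + slope * abs(density - d_opt), 1.0)

# Highest at low density, lowest at intermediate, in between at high:
[round(emigration_prob(d), 2) for d in (0.0, 0.5, 1.0)]  # -> [0.7, 0.2, 0.5]
```

The asymmetry (a steeper rise toward low density) is what distinguishes this pattern from the simple monotonic density dependence expected in non-social species.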

MaagFig2A

Proportion of evicted female meerkats that had not yet emigrated in relation to time since eviction at low (red), medium (light blue) and high (dark blue) population density.  Solid lines represent individual females, while dashed lines are coalitions of two or more females.

In addition to the density effects we just discussed, association with unrelated males from other groups soon after eviction increased the probability that females would emigrate – presumably because it increased the probability that females would quickly produce offspring in their new territory. Females also dispersed longer distances if unrelated males did not meet up with them, possibly to avoid inbreeding with closely related males from neighboring groups.

Coalitions were more likely to return to the group if females were not pregnant – in fact 62% of pregnant evictees aborted their litters before being allowed back into the group.  Of the ones that did not abort before returning, only 42% of their litters survived to the first month.

The period of transience, when emigrants are seeking new territories, can be prolonged and dangerous.  The mean dispersal distance was 2.24 km, and the transient period averaged about 46 days.  Larger coalitions with males present tended to disperse the shortest distances (left graph below). Dispersers took longest to settle at high population density – perhaps there were fewer available territories under those conditions (right graph below).

MaagFig4

A. Effect of coalition size and presence of unrelated males on dispersal distance. B. Effect of population density on transience time (interval between emigration and settling).

Large coalitions settled more quickly than did small coalitions, particularly if accompanied by unrelated males.  Once settled, females successfully carried through 89% of their pregnancies (compare that to the 62% abortion rate of females that returned to their original group).  These females had a litter survival rate (to the first month) of 65%.

Social and non-social species are influenced by population density in different ways.  The situation is relatively simple for non-social species; as population size increases, competition between individuals increases, so dispersal is more likely.  However, even for non-social species, we might expect dispersal at very low population levels, if there are no mates available. For social species such as meerkats, the situation is more complex.  Cooperation enhances survival and reproduction, so it is better to be in a larger group (with more cooperators). At the same time, if the group is too large, then resource competition starts being an increasingly disruptive factor. As ecologists collect more dispersal data from other social species, they will be able to test the hypothesis that population density in many species influences dispersal in a non-linear way.

note: the paper that describes this research is from the journal Ecology. The reference is Maag, N. , Cozzi, G. , Clutton‐Brock, T. and Ozgul, A. (2018), Density‐dependent dispersal strategies in a cooperative breeder. Ecology, 99: 1932-1941. doi:10.1002/ecy.2433. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.