Dinoflagellates deter copepod consumption

Those of us who enjoy eating seafood are dismayed by the dreaded red tide, which renders some of our favorite prey toxic to us.  A red tide occurs when dinoflagellates and other algae increase sharply in abundance, often in response to upwelling of nutrients from the ocean floor.  Many of these dinoflagellates are red or brownish-red in color, so large numbers of them floating on or near the surface give the ocean its characteristic red color. These dinoflagellates produce toxic compounds (in particular neurotoxins) that pass through the food web, ultimately contaminating fish, molluscs and many other groups of species.


Red tide at Isahaya Bay, Japan.  Credit: Marufish/Flickr.

Did toxicity arise in dinoflagellates to protect them from being eaten by predators – in particular by voracious copepods?  The problem with this hypothesis is that copepods eat an entire dinoflagellate.  Let’s imagine a dinoflagellate with a mutation that produces a toxic substance. At some point the dinoflagellate gets eaten, and the poor copepod consumer is exposed to the toxin.  Maybe it dies and maybe it lives, but the important result is that the dinoflagellate dies, and its mutant genes are gone forever, along with the toxic trait. The only way toxicity will benefit the dinoflagellate individual, and thus spread throughout the dinoflagellate population, is if it increases the survival/reproductive success of individuals with the toxic trait. This can occur if copepods have some mechanism for detecting toxic dinoflagellates, and are therefore less likely to eat them.
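To see this logic in action, here is a tiny selection-model sketch in Python (my own illustration, not from the paper – the starting allele frequency, predation risk, detection probability and toxin cost are all invented). A costly toxin allele is lost when copepods cannot detect it, but sweeps through the population once detection lets toxic cells escape being eaten.

```python
def toxin_allele_frequency(generations=200, start_freq=0.05, detect_prob=0.0,
                           predation_risk=0.3, toxin_cost=0.02):
    """Deterministic selection on a hypothetical toxin allele in an
    asexual dinoflagellate population (illustrative parameters only)."""
    freq = start_freq
    for _ in range(generations):
        # Toxic cells escape attack only if the copepod detects and rejects them
        w_toxic = (1 - predation_risk * (1 - detect_prob)) * (1 - toxin_cost)
        w_nontoxic = 1 - predation_risk
        mean_fitness = freq * w_toxic + (1 - freq) * w_nontoxic
        freq = freq * w_toxic / mean_fitness
    return freq

print(toxin_allele_frequency(detect_prob=0.0))  # ~0: the costly toxin is lost
print(toxin_allele_frequency(detect_prob=0.8))  # ~1: the toxin allele sweeps
```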

Jiayi Xu and Thomas Kiørboe went looking for such a mechanism, presenting 13 different species or strains of dinoflagellates to the copepod Temora longicornis. This copepod beats its legs to create a feeding current that draws water, and presumably any dinoflagellates suspended in it, toward its mouth.  For their experiment, the researchers glued a hair to the dorsal surface of an individual copepod (very carefully), and then attached the other end of the hair to a capillary tube controlled by a micromanipulator. They placed these tethered copepods into small aquaria, where the copepods continued to beat their legs, feed and carry on their other bodily functions.


Aquarium with tethered copepod and recording equipment. Credit: J. Xu.

The researchers then added a measured amount of one type of dinoflagellate into the aquarium, and using high resolution videography, watched the copepods feed over the next 24 hours.


Tethered copepod beats its legs to draw in a dinoflagellate (the round blue cell). Credit: J. Xu.

Twelve of the dinoflagellate strains were known to be toxic, though they produced several different types of toxin. Protoceratium reticulatum was a nontoxic control species of dinoflagellate.  As you can see below, on average, copepods ate more of the nontoxic P. reticulatum than they did of any of the toxic species.


Average dinoflagellate biomass ingested by the tethered copepods.  P. reticulatum  is the nontoxic control.  Error bars are 1 SE.

Xu and Kiørboe identified two major mechanisms that underlie selectivity by the copepod predator.  In many cases, the copepod successfully captured the prey, but then rejected it (top graph below). For one strain of A. tamarense prey, and to a lesser extent for K. brevis prey, the predator simply fed less as a consequence of reducing the proportion of time that it beat its feeding legs (bottom graph below).


Copepod feeding behavior on 13 dinoflagellate prey species.  Top graph is the fraction of captured dinoflagellates rejected, while bottom graph is the proportion of time the copepod beats its feeding legs in the presence of a particular species/strain of dinoflagellate.

If you look at the very first graph in this post, which shows the average dinoflagellate biomass consumed, you will note that both strains of K. brevis (K8 and K9) are eaten very sparingly.  The graphs just above show that the copepod rejects some K. brevis that it captures, and beats its legs a bit less often when presented with K. brevis. However, the increase in rejection and the decrease in leg beating are not sufficient to account for the tremendous reduction in consumption. So something else must be going on.  The researchers suspect that the copepod can identify K. brevis cells from a distance, presumably through olfaction, and decide not to capture them. This mechanism warrants further exploration.

One surprising finding of this study is that the copepod responds differently to one strain of the same species (A. tamarense) than it does to the other strains.  Xu and Kiørboe point out that previous studies of copepod/dinoflagellate interactions have identified other surprises.  For example, there are cases where a dinoflagellate strain is toxic to one strain of copepod, but harmless to another copepod strain of the same species. Also, within a dinoflagellate species, one strain may have a very different distribution of toxins than does a second strain.  So why does this degree of variation exist in this system?

The researchers argue that there may be an evolutionary arms race between copepods and dinoflagellates.  The copepod adapts to the toxin of co-occurring dinoflagellates, becoming resistant to the toxin. This selects for dinoflagellates that produce a novel toxin that the copepod is sensitive to. Over time, the copepod evolves resistance to the second toxin as well, and so on… Because masses of ocean water and populations of both groups are constantly mixing, different species and strains are exposed to novel environments with high frequency. Evolution happens.

note: the paper that describes this research is from the journal Ecology. The reference is Xu, J. and Kiørboe, T. (2018), Toxic dinoflagellates produce true grazer deterrents. Ecology, 99: 2240-2249. doi:10.1002/ecy.2479. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Decomposition: it’s who you are and where you are

“Follow the carbon” is a growing pastime of ecologists and environmental researchers worldwide. In the process of cellular respiration, organisms use carbon compounds to fuel their metabolic pathways, so having carbon around makes life possible.  Within ecosystems, following the carbon is equivalent to following how energy flows among the producers, consumers, detritivores and decomposers. In soils, decomposers play a central role in energy flow, but we might not appreciate their importance because many decomposers are tiny, and decomposition is very slow.  We are thrilled by a hawk subduing a rodent, but are less appreciative of a bacterium breaking down a lignin molecule, even though at their molecular heart, both processes are the same, in that complex carbon enters the organism and fuels cellular respiration.  However, from a global perspective, cellular respiration produces carbon dioxide as a waste product, which, if allowed to escape the ecosystem, will increase the pool of atmospheric carbon dioxide, thereby increasing the rate of global warming. So following the carbon is an ecological imperative.

As the world warms, trees and shrubs are colonizing regions that previously were inaccessible to them. In northern Sweden, mountain birch forests (Betula pubescens) and birch shrubs (Betula nana) are advancing into the tundra, replacing the heath that is dominated by the crowberry, Empetrum nigrum. As he began his PhD studies, Thomas Parker became interested in the general question of how decomposition changes as trees and shrubs expand further north in the Arctic. On his first trip to a field site in northern Sweden he noticed that the areas of forest and shrubs produced a lot of leaf litter in autumn, yet there was no significant accumulation of this litter the following year. He wondered how the litter decomposed, and how this process might change as birch overtook the crowberry.


One of the study sites in autumn: mountain birch forest (yellow) in the background, dwarf birch (red) on the left and crowberry on the right. Credit: Tom Parker.

Several factors can affect leaf litter decomposition in northern climes.  First, depending on what they are made of, different species of leaves will decompose at different rates.  Second, different types of microorganisms present will target different types of leaves with varying degrees of efficiency.  Lastly, the abiotic environment may play a role; for example, due to shade and creation of discrete microenvironments, forests have deeper snowpack, keeping soils warmer in winter and potentially elevating decomposer cellular respiration rates. Working with several other researchers, Parker tested the following three hypotheses: (1) litter from the more productive vegetation types will decompose more quickly, (2) all types of litter decompose more quickly in forest and shrub environments, and (3) deep winter snow (in forest and shrub environments) increases litter decomposition compared to heath environments.

To test these hypotheses, Parker and his colleagues established 12 transects that transitioned from forest to shrub to heath. Along each transect, they set up three 2 m² plots – one each in the forest, shrub, and heath – 36 plots in all. In September of 2012, the researchers collected fresh leaf litter from mountain birch, shrub birch and crowberry, which they sorted, dried and placed into 7 × 7 cm polyester mesh bags.  They placed six litter bags of each species at each of the 36 plots, and then harvested these bags periodically over the next three years. Bags were securely attached to the ground so that small decomposers could get in, but the researchers had to choose a relatively small mesh diameter to make sure they successfully enclosed the tiny crowberry leaves. This restricted access for some of the larger decomposers.


Some litter bags attached to the soil surface at the beginning of the experiment. Credit: Tom Parker.

To test for the effect of snow depth, the researchers also set up snow fences on nearby heath sites.  These fences accumulated blowing and drifting snow, creating a snowpack comparable to that in nearby forest and shrub plots.

Parker and his colleagues found that B. pubescens leaves decomposed most rapidly and E. nigrum leaves decomposed most slowly.  In addition, leaf litter decomposed fastest in the forest and most slowly in the heath.  Lastly, snow depth did not influence decomposition rate.


(Left graph) Decomposition rates of E. nigrum, B. nana and B. pubescens in heath, shrub and forest. (Right graph) Decomposition rates of E. nigrum, B. nana and B. pubescens in heath under three different snow depths simulating snow accumulation at different vegetation types: Heath (control), + Snow (Shrub) and ++ Snow (Forest). Error bars are 1 SE.

B. pubescens in forest and shrub lost the greatest amount (almost 50%) of mass over the three years of the study, while E. nigrum in heath lost the least (less than 30%).  However, B. pubescens decomposed much more rapidly in the forest than in the shrub between days 365 and 641. The bottom graphs below show that snow fences had no significant effect on decomposition.


Percentage of litter mass remaining for (a, d) E. nigrum, (b, e) B. nana and (c, f) B. pubescens in heath, shrub, or forest. Top graphs (a, b, c) are natural transects, while the bottom graphs (d, e, f) represent heath tundra under three different snow depths simulating snow accumulation at different vegetation types: Heath (control), + Snow (Shrub) and ++ Snow (Forest). Error bars are 1 SE. Shaded areas on the x-axis indicate the snow-covered season in the first two years of the study.
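As an aside, litter decomposition is often summarized with a single-pool exponential model, M(t) = M0·e^(−kt). If we assume that model (the post does not describe how the authors calculated their rates, so this is only a back-of-the-envelope sketch), the mass-loss figures quoted above imply roughly the following decay constants:

```python
import math

def decay_constant(fraction_remaining, years):
    """Back out k from the single-pool model M(t) = M0 * exp(-k * t)."""
    return -math.log(fraction_remaining) / years

# Approximate values read from the text: B. pubescens lost almost 50% of its
# mass over 3 years, while E. nigrum in heath lost less than 30%.
k_birch = decay_constant(0.50, 3)
k_crowberry = decay_constant(0.70, 3)

print(f"B. pubescens: k ≈ {k_birch:.2f} per year")      # ≈ 0.23
print(f"E. nigrum:    k ≈ {k_crowberry:.2f} per year")  # ≈ 0.12
```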

Why do mountain birch leaves decompose so much more rapidly than crowberry leaves?  The researchers chemically analyzed both species and discovered that birch leaves had 1.7 times more carbohydrate than did crowberry, while crowberry had 4.9 times more lipids than did birch. Their chemical analysis showed much of birch’s rapid early decomposition was a result of rapid carbohydrate breakdown. In contrast, crowberry’s slow decomposition resulted from its high lipid content being relatively resistant to the actions of decomposers.


Researchers (Parker right, Subke left) harvesting soils and litter in the tundra. Credit: Jens-Arne Subke.

Parker and his colleagues did discover that decomposition was fastest in the forest independent of litter type. Forest soils are rich in brown-rot fungi, which are known to target the carbohydrates (primarily cellulose) that are so abundant in mountain birch leaves.  The researchers propose that a history of high cellulose litter content has selected for a biochemical environment that efficiently breaks down cellulose-rich leaves. Once the brown-rot fungi and their allies have done much of the initial breakdown, another class of fungi (ectomycorrhizal fungi) kicks into action and metabolizes (and decomposes) the more complex organic molecules.

The result of all this decomposition in the forest, but not the heath, is that tundra heath stores much more organic carbon than does the adjacent forest (which loses stored organic compounds to decomposers).  As forests continue their relentless march northward, replacing the heath, it is very likely that they will introduce their efficient army of decomposers to the former heathlands.  These decomposers will feast on the vast supply of stored organic carbon compounds, releasing large quantities of carbon dioxide into the atmosphere and further exacerbating global warming. This is one of several positive feedback loops expected to destabilize global climate systems in the coming years.

note: the paper that describes this research is from the journal Ecology. The reference is Parker, T. C., Sanderman, J., Holden, R. D., Blume‐Werry, G., Sjögersten, S., Large, D., Castro‐Díaz, M., Street, L. E., Subke, J. and Wookey, P. A. (2018), Exploring drivers of litter decomposition in a greening Arctic: results from a transplant experiment across a treeline. Ecology, 99: 2284-2294. doi:10.1002/ecy.2442. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Recovering soils suffer carbon loss

When dinosaurs roamed the Earth, and I was in high school, acid rain became big news.  Even my dad, who as an industrial chemist believed that industry seldom sinned, acknowledged that he could see how coal plants could release sulfur (and other) compounds, which would be converted to strong acids, borne by prevailing winds to distant destinations, and deposited by rain and snow into soils. Forest ecosystems in North America and Europe are happily, albeit slowly, recovering from the adverse effects of acid deposition, but there are some causes for concern.  At the Hubbard Brook Experimental Forest in New Hampshire, USA, researchers experimentally remediated some of the impacts of acid deposition by adding calcium silicate to a watershed (via helicopter!). A decade later, this treatment had caused a 35% decline in the total carbon stored in the soil. This result was unexpected and alarming, as it could mean that acid-impacted temperate forests will become major carbon sources as the effects of acid rain wane, with more carbon running off into streams and some entering the atmosphere as CO2. Richard Marinos and Emily Bernhardt wanted to determine exactly what caused this carbon loss, to better understand how forests will behave in the future as they recover from acidification.


The forest at Hubbard Brook in autumn. Credit: Hubbard Brook Ecosystem Study at hubbardbrook.org

The problem is that calcium and acidity (lower pH is more acidic; higher pH is more alkaline) have different and complex effects on plants, soil microorganisms and the soils in which they live. Several previous studies demonstrated that higher soil pH (becoming more alkaline) caused an increase in carbon solubility, while higher calcium levels caused carbon to become less soluble. Soluble organic carbon forms a tiny fraction of total soil carbon, but is very important because it can be used by microorganisms for cellular respiration, and also can be leached from ecosystems as runoff. In general, soil microorganisms benefit as acidic soils recover because heavy metal toxicity is reduced, enzymes work better, and mycorrhizal associations are more robust.  Complicating the picture even more, both elevated calcium and increased pH have been associated with increased plant growth, but increased calcium is also associated with reduced fine root growth.

To help unravel this complexity, Marinos and Bernhardt experimentally tested the effects of increasing pH and increased calcium on soil organic carbon (SOC) solubility, microbial activity and plant growth.  They collected acidic soil from Hubbard Brook Experimental Forest, which formed three distinct layers: leaf litter on top, organic horizon below the leaf litter, and mineral soil below the organic horizon.


Soil excavation site at Hubbard Brook. Credit: Richard Marinos.

The researchers then filled 100 2.5-liter pots with these three soil layers (in correct sequence) and planted 50 pots with sugar maple saplings, leaving 50 pots unplanted.  Pots were moved to a greenhouse, and that November given one of five treatments: calcium chloride addition (Ca treatment), potassium hydroxide addition (alkalinity treatment), Ca + alkalinity treatment combined, a deionized water control, and a potassium chloride control. The potassium chloride control had no effect, so we won’t discuss it further.


Potted sugar maple saplings used for the experiments. Credit: Richard Marinos.

The following July, Marinos and Bernhardt harvested all of the pots, carefully separating plant roots from the soil, and analyzing the organic horizon and mineral soil levels separately (there wasn’t enough leaf litter remaining for analysis). The researchers measured SOC by mixing soil from each pot with deionized water, centrifuging at high speed to extract the water-soluble material, combusting the material at high temperature and measuring how much CO2 was generated. The result is termed water extractable organic carbon (WEOC).

Remember that previous studies had shown that higher calcium levels decreased carbon solubility, while higher alkalinity increased carbon solubility. Surprisingly, Marinos and Bernhardt found that in unplanted pots, the Ca treatment reduced WEOC in both soil layers, while the alkalinity treatment decreased WEOC in the organic horizon, but not in mineral soil. In pots planted with maple saplings, the Ca treatment had no effect on WEOC, while the alkalinity treatment, and the Ca + alkalinity treatment, increased WEOC markedly.


Water-Extractable Organic Carbon in soil without plants (left column) and with plants (right column). Top graphs are organic horizon soils and bottom graphs are mineral horizon soils. Error bars are 1 standard error.

The next question was how might soil microorganisms fit into the plant-soil dynamics?


Soil respiration rates (top) over the short term (days 1-7 post-harvest) and (bottom) the long term (days 8-75 post-harvest). Error bars are 1 standard error.

Soil microorganisms use carbon products for cellular respiration, so the researchers expected that soils with more SOC would have higher respiration rates.  They measured soil respiration 1, 2, 4, 8, 16, 35 and 72 days after the harvest, so they could evaluate both short-term and long-term effects. In unplanted pots, soil respiration rates were unaffected by treatment.  But in planted pots, the alkalinity treatment increased soil respiration rates considerably in the short term (top graphs), but much less so in the long term (bottom graphs). Putting the WEOC data together with the respiration data in the figure just above, you can see that in pots with plants, increased alkalinity was associated with more water-extractable SOC and higher respiration rates.

The researchers weighed the saplings after harvest and discovered that the sugar maples grew best in soils treated with calcium. Two previous studies had treated fields with calcium silicate and found better sugar maple growth in the treated fields.  Marinos and Bernhardt argue that their study provides evidence that it was the Ca enrichment, and not the increased pH, that caused the increased growth in both of those studies.

Perhaps the most surprising finding is that higher alkalinity increased soil microbial activity only in pots with plants, and had no effect on soil microbial activity in pots without plants. Somehow, the plants in an alkaline environment are increasing the rate of microbial respiration, perhaps by releasing carbohydrates produced by photosynthesis into the soil, which could then stimulate decomposition of SOC by the microorganisms. The finding that this effect largely disappeared a few days after harvest (bottom graph above) supports the idea that the plants are releasing a substance that helps microorganisms carry on cellular respiration. But this idea awaits further study. In the meantime, we have a better understanding of how forest recovery from acid rain affects one aspect of the carbon cycle, though many other human inputs may interact with this recovery process.

note: the paper that describes this research is from the journal Ecology. The reference is Marinos, R. E. and Bernhardt, E. S. (2018), Soil carbon losses due to higher pH offset vegetation gains due to calcium enrichment in an acid mitigation experiment. Ecology, 99: 2363-2373. doi:10.1002/ecy.2478. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Rice fields foster biodiversity

Restoration ecologists want to restore ecosystems that have been damaged or destroyed by human activity.  One approach they use is “rewilding” – which can mean different things to different people.  To some, rewilding involves returning large predators to an ecosystem, thereby reestablishing important ecological linkages.  To others, rewilding requires corridors that link different wild areas, so animals can migrate from one area to another.  One common thread in most concepts of rewilding is that once established, restored ecosystems should be self-sustaining, so that if ecosystems are left to their own devices, ecological linkages and biological diversity can return to pre-human-intervention levels, and remain at those levels in the future.


The intermediate egret, Ardea intermedia, plucks a fish from a flooded rice field. Credit: N. Katayama.

Chieko Koshida and Naoki Katayama argue that rewilding may not always increase biological diversity.  In some cases, allowing ecosystems to return to their pre-human-intervention state can actually cause biological diversity to decline. Koshida and Katayama were surveying bird diversity in abandoned rice fields, and noticed that bird species distributions were different in long-abandoned rice fields in comparison to still-functioning rice fields.  To follow up on their observations, they surveyed the literature, and found 172 studies that addressed how rice field abandonment in Japan affected species richness (number of species) or abundance.  For the meta-analysis we will be discussing today, an eligible study needed to compare richness and/or abundance for at least two of three management states: (1) cultivated (tilled, flood irrigated, rice planted, and harvested every year), (2) fallow (tilled or mowed once every 1-3 years), and (3) long-abandoned (unmanaged for at least three years).


Three different rice field management states – cultivated, fallow and long-abandoned – showing differences in vegetation and water conditions. Credit: C. Koshida.

Meta-analyses are always challenging, because the data are collected by many researchers, and for a variety of purposes.  For example, some researchers may only be interested in whether invasive species were present, or they may not be interested in how many individuals of a particular species were present. Ultimately 35 studies met Koshida and Katayama’s criteria for their meta-analysis (29 in Japanese and six in English).

Overall, abandoning or fallowing rice fields decreased species richness or abundance to 72% of the value of cultivated rice fields. As you might suspect, these effects were not uniform for different variables or comparisons. Not surprisingly, fish and amphibians declined sharply in abandoned rice fields – much more than other groups of organisms. Abundance declined more sharply in abandoned fields than did species richness.  Several other trends also emerged.  For example, complex landscapes such as yatsuda (forested valleys) and tanada (hilly terraces) were more affected than were simple landscapes.  In addition, wetter abandoned fields were able to maintain biological diversity, while drier abandoned fields declined in richness and abundance.


The effects of rice field abandonment or fallowing for eight different variables.  Effect size is ln(Mt/Mc), where Mt = mean species richness or abundance for the treatment, and Mc = mean species richness or abundance for the control.  The treated field in all comparisons was the one that was abandoned for the longer time.  A positive effect size means that species richness or abundance increased in the treated (longer abandoned) field, while a negative effect size means that species richness or abundance declined in the treated field. Numbers in parentheses are the number of data sets used for comparisons.
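Here is a quick illustration of the effect-size calculation, with made-up numbers (mine, not from the meta-analysis). Note that the overall result quoted earlier – richness or abundance falling to 72% of cultivated values – corresponds to an effect size of ln(0.72) ≈ -0.33.

```python
import math

def effect_size(mean_treatment, mean_control):
    """Log response ratio used in the meta-analysis: ln(Mt / Mc)."""
    return math.log(mean_treatment / mean_control)

# Hypothetical comparison: 18 species in a long-abandoned field
# versus 25 species in a cultivated one
print(effect_size(18, 25))  # ≈ -0.33 (a decline; 18/25 = 0.72)

# A positive effect size would mean richness or abundance increased:
print(effect_size(30, 25))  # ≈ +0.18
```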

When numerous variables are considered, researchers need to figure out which are most important.  Koshida and Katayama used a statistical approach known as “random forest” to model the impact of different variables on the reduction in biological diversity following abandonment.  This approach generates an importance measure – the percentage increase in mean squared error (%increaseMSE) – for each variable in the model (we won’t go into the details here, but see the sketch below the next figure).  As the graph below shows, soil moisture was the most important variable, which tells us (along with the previous figure above) that abandoned fields that maintained high moisture levels also kept their biological diversity, while those that dried out lost out considerably.  Management state was the second most important variable, as long-abandoned fields lost considerably more biological diversity than did fallow fields.


Importance estimates of each variable (as measured by %increase MSE).  Higher values indicate greater importance.
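For the curious, %increaseMSE is a permutation importance: shuffle one predictor’s values and measure how much worse the model’s predictions become. Here is a rough Python analogue using scikit-learn on fabricated data (this is not the authors’ code or data; the predictors, response and parameter values are invented purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Fabricated stand-in data: 200 fields described by two predictors
n = 200
soil_moisture = rng.uniform(0, 1, n)        # 0 = dry, 1 = waterlogged
years_abandoned = rng.uniform(0, 15, n)
# Invented response: diversity change driven mostly by soil moisture
diversity_change = (0.8 * soil_moisture - 0.02 * years_abandoned
                    + rng.normal(0, 0.1, n))

X = np.column_stack([soil_moisture, years_abandoned])
forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(X, diversity_change)

# Permutation importance: drop in model score when a predictor is shuffled
result = permutation_importance(forest, X, diversity_change,
                                n_repeats=30, random_state=0)
for name, imp in zip(["soil moisture", "years abandoned"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")  # soil moisture should dominate
```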

Unfortunately, only three studies had data on changes in biological diversity over the long-term.  All three of these studies surveyed plant species richness over a 6 – 15 year period, so Koshida and Katayama combined them to explore whether plant species richness recovers following long-term rice field abandonment. Based on these studies, species richness continues to decline over the entire time period.


Plant species richness in relation to time since rice fields were abandoned (based on three studies).

Koshida and Katayama conclude that left to their own devices, some ecosystems, like rice fields, will actually decrease, rather than increase, in biological diversity.  Rice fields are, however, special cases, because they provide alternatives to natural wetlands for many organisms dependent on aquatic/wetland environments (such as the frog below). In this sense, rice fields should be viewed as ecological refuges for these groups of organisms.


Rana porosa porosa (Tokyo Daruma Pond Frog). Credit: Y. G. Baba

These findings also have important management implications.  For example, conservation ecologists can promote biological diversity in abandoned rice fields by mowing and flooding. In addition, managers should pay particular attention to abandoned rice fields with complex structure, as they are particularly good reservoirs of biological diversity, and are likely to lose species if allowed to dry out. Failure to attend to these issues could lead to local extinctions of specialist wetland species and of terrestrial species that live in grasslands surrounding rice fields. Lastly, restoration ecologists working on other types of ecosystems need to carefully consider the effects on biological diversity of allowing those ecosystems to return to their natural state without any human intervention.

note: the paper that describes this research is from the journal Conservation Biology. The reference is Koshida, C. and Katayama, N. (2018), Meta‐analysis of the effects of rice‐field abandonment on biodiversity in Japan. Conservation Biology, 32: 1392-1402. doi:10.1111/cobi.13156. Thanks to the Society for Conservation Biology for allowing me to use figures from the paper. Copyright © 2018 by the Society for Conservation Biology. All rights reserved.

Sweltering ants seek salt

Like humans, ants need salt and sugar.  Salt is critical for a functioning nervous system and for maintaining muscle activity, while sugar is a ready energy source. In ectotherms such as ants, body temperature is influenced primarily by the external environment, with higher environmental temperatures leading to higher body temperatures.  When ants get hot, their metabolic rates rise, so they can go out and do energetically demanding activities such as foraging for essential resources like salt and sugar. On the down side, hot ants excrete more salt and burn up more sugar.  In addition, as in humans, very high body temperature can be lethal, so ants are forced to seek shelter during extreme heat.  As a beginning graduate student, Rebecca Prather wanted to know whether ants adjust their foraging rates on salt and sugar in response to the conflicting demands of elevated temperatures on ants’ physiological systems.


Rebecca Prather at her field site in Oklahoma, USA. Credit: Rebecca Prather.

Prather and her colleagues studied two different field sites: Centennial Prairie is home to 16 ant species, while Pigtail Alley Prairie has nine species.  For their first experiment, the researchers established three transects with 100 stations baited with vials containing cotton balls and either 0.5% salt (NaCl) or 1% sucrose.  The bait stations were 1 meter apart.  After 1 hour, they collected the vials (with or without ants), and counted and identified each ant in each vial.  The researchers measured soil temperature at the surface and at a depth of 10 cm. The researchers repeated these experiments at 9 AM, 1 PM and 5 PM, April – October, 4 times each month.


Ants recruited to vials with 0.5% salt solution.  Credit: Rebecca Prather.

Sugar is easily stored in the body, so while sugar consumption increases with temperature, due to increased ant metabolic rate, sugar excretion is relatively stable with temperature.  In contrast, salt cannot be stored effectively, so salt excretion increases at high body temperature.  Consequently, Prather and her colleagues expected that ant salt-demand would increase with temperature more rapidly than would ant sugar-demand.


Ant behavior in response to vials with 0.5% salt (dark circles) and 1% sucrose (white circles) at varying soil temperatures at 9 AM, 1 PM (13:00) and 5 PM (17:00). The three left graphs show the number of vials discovered (containing at least one ant), while the three right graphs show the number of ants recruited per vial.  The Q10 value is the rate of discovery or recruitment at 30°C divided by the rate of discovery or recruitment at 20°C. An asterisk (*) indicates that the two curves have statistically significantly different slopes.

The researchers discovered that ants foraged more at high temperatures. However, when surface temperatures were too high (most commonly at 1 PM during summer months), ants could not forage and remained in their nests.  At all three times of day, ants discovered more salt vials at higher soil temperatures. Ants also discovered more sugar vials at higher temperatures in the morning and evening, but not during the 1 PM surveys. Most interesting, the slope of the curve was much steeper for salt discovery than it was for sugar discovery, indicating that higher temperature increased salt discovery rate more than it increased sugar discovery rate (three graphs on left).

When ants discover a high quality resource, they will recruit other nestmates to the resource to help with the harvest.  Ant recruitment rates increased with temperature to salt, but not sugar, indicating that ant demand for 0.5% salt increased more rapidly than ant demand for 1% sugar (three graphs above on right).
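A quick note on the Q10 values reported in the figure captions: Q10 is a standard measure of thermal sensitivity – the factor by which a rate increases for every 10°C rise in temperature. When the two measurements are taken 10°C apart, as here, it reduces to the simple ratio given in the caption above. A minimal sketch with invented rates:

```python
def q10(rate_cold, rate_warm, t_cold=20.0, t_warm=30.0):
    """General Q10: factor by which a rate rises per 10 deg C increase."""
    return (rate_warm / rate_cold) ** (10.0 / (t_warm - t_cold))

# Hypothetical discovery rates (vials found per hour) at 20 and 30 deg C
print(q10(2.0, 6.0))  # 3.0 -> rate triples per 10 deg C (strong response)
print(q10(2.0, 2.6))  # 1.3 -> weak temperature response
```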

The researchers were concerned that the sugar concentrations were too low to excite much recruitment, so they replicated the experiments the following year using four different sugar concentrations.  Ant recruitment was substantially greater to higher sugar concentrations, but was still two to three times lower than it was to 0.5% salt.


Ant recruitment (y-axis) to different sugar concentrations at a range of soil temperatures (x-axis). Q10 values are to the left of each line of best fit.

Three of the four most common ant species showed the salt and sugar preferences that we described above, but the other common species, Formica pallidefulva, actually decreased foraging at higher temperatures.  The researchers suggest that this species is outcompeted by other more dominant species at high temperatures, and is forced to forage at lower temperatures when fewer competitors are present.

In a warming world, ant performance will increase as temperatures increase up to ants’ thermal maximum, at which point ant performance will crash.  Ants are critical to ecosystems, playing important roles as consumers and as seed dispersers. Thus many ecosystems in which ants are common (and there are many such ecosystems!) may function more or less efficiently depending on how changing temperatures influence ants’ abilities to consume and conserve essential nutrients such as salt.

note: the paper that describes this research is from the journal Ecology. The reference is Prather, R. M., Roeder, K. A., Sanders, N. J. and Kaspari, M. (2018), Using metabolic and thermal ecology to predict temperature dependent ecosystem activity: a test with prairie ants. Ecology, 99: 2113-2121. doi:10.1002/ecy.2445. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

What grows up must go down: plant species richness and soils below.

Almost 20 years ago, Dorota Porazinska was a postdoctoral researcher investigating whether plant diversity influenced the diversity of organisms that lived in the soil below these plants, including bacteria, protists, fungi and nematodes (collectively known as soil biota).  Surprisingly, she and her colleagues discovered no linkages between aboveground and belowground species diversity.  She suspected that two issues were responsible for this lack of linkage. First, the early study lumped related species into functional groups – for example nematodes that eat bacteria, or nematodes that eat fungi.  Lumping simplifies data collection but loses a lot of information because individual species are not distinguished.  Back in those days, identifying species with DNA analysis was time-consuming, expensive, and often impractical. The second issue was that even if aboveground and belowground diversity were linked, the linkage might be difficult to detect.  Ecosystems are very complex, and many belowground species make a living off of legacies of carbon or other nutrients that are the remains of organisms that lived many generations ago.  These legacy organic nutrient pools allow for indirect (and thus more difficult to detect) linkages between aboveground and belowground species.

Porazinska and her colleagues reasoned that if there were aboveground/belowground relationships, they would be easiest to detect in the simplest ecosystems that lacked significant pools of legacy nutrients. They also used molecular techniques that were not readily available for earlier studies to identify distinct species based on DNA analysis. The researchers established 98 1-m radius circular plots at the Niwot Ridge Long Term Ecological Research Site in the Rocky Mountains of Colorado, USA. At each plot, they identified and counted each vascular plant, and recorded the presence of moss and lichen.  They also censused soil biota by using a variety of DNA amplification and isolation techniques that allowed them to identify bacteria, archaea, protists, fungi and nematodes to species.


Field assistant Jarred Huxley surveys plants in a high species richness plot. Credit: Dorota L. Porazinska.

As expected in this alpine environment, plant species richness was quite low, averaging only 8 species per plot (range = 0 – 27).  In contrast to what had been found in other ecosystems, high plant diversity was associated with high diversity of soil biota.


Relationship between plant richness (x-axis) and soil biota richness (y-axis) for (A) bacteria, (B) eukaryotes (excluding fungi and nematodes), (C) fungi, and (D) nematodes.  OTUs are operational taxonomic units, which represent organisms with very similar or identical DNA sequences on a marker gene.  For our purposes, they represent distinct species.

Looking at the graphs above, you can see that different groups responded to different degrees; nematodes had the strongest response to increases in plant richness while fungi had the weakest response.  When viewed at a finer level, some groups of soil organisms, including photosynthetic microorganisms such as cyanobacteria and green algae, actually decreased in richness, presumably in response to competition with aboveground plants for light and possibly nutrients.

Given the strong relationship between plant species richness and soil biota richness, Porazinska and her colleagues next explored whether high plant richness was associated with soil nutrient levels (nutrient pools).  In general, there was a strong correlation between plant species richness and nutrient pools (see graphs below).  But soil moisture and the soil’s ability to hold moisture were the two most important factors associated with nutrient pools.


Amount (micrograms per gram of soil) of carbon (left graph) and nitrogen (right graph) in relation to plant species richness.

Ecologists studying soil processes can measure the rates at which microorganisms are metabolizing nutrients such as carbon, phosphorus and nitrogen.  The expectation was that if high plant species richness was associated with higher soil biota richness and larger soil nutrient pools, then the activity of enzymes that metabolize soil nutrients should increase proportionally with these factors.  The researchers found that enzyme activity was very low where plants were absent or rare, and greatest in complex plant communities.  But the most important factors influencing enzyme activity were the amount of organic carbon present within the soil, and the ability of the soil to hold water.


Patchy vegetation at the field site. Credit: Cliffton P. Bueno de Mesquita.

Porazinska and her colleagues hypothesize that the relationships between plant species richness, soil biota richness, nutrient pools, and soil processes such as enzyme activity exist in most ecosystems, but are obscured by indirect linkages between these different levels, which makes them difficult to observe in ecosystems such as grasslands and forests. In these more complex ecosystems, carbon inputs into the soil form large legacy carbon pools. These carbon pools, and the ability of the soil to hold nutrient pools, fundamentally influence the abundance and richness of soil biota. In contrast, in nutrient-poor soils, such as high Rocky Mountain alpine meadows, legacy carbon pools are rare and small. Consequently, plants and soil biota interact more directly, and correlations between plant species diversity and soil biota diversity are much easier to detect.

note: the paper that describes this research is from the journal Ecology. The reference is Porazinska, D. L., Farrer, E. C., Spasojevic, M. J., Bueno de Mesquita, C. P., Sartwell, S. A., Smith, J. G., White, C. T., King, A. J., Suding, K. N. and Schmidt, S. K. (2018), Plant diversity and density predict belowground diversity and function in an early successional alpine ecosystem. Ecology, 99: 1942-1952. doi:10.1002/ecy.2420. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.


Meandering meerkats

Dispersal – the movement of individuals to a new location – is a complex process that ecologists divide into three stages: emigration (leaving the group), transience through an unfamiliar landscape, and settlement in a suitable habitat. Dispersal is fraught with danger, as dispersers usually have a higher chance of starving or of being eaten by predators, and may suffer lower reproductive success.  So why move?

The problem is that there are major issues with not moving.  First, if nobody disperses, population densities could increase alarmingly, putting strains on resources and increasing the incidence of disease transmission.  Second, if nobody disperses, close relatives would tend to live near each other.  If these relatives mate, there would be a high probability of bad combinations of genes being expressed, leading to developmental abnormalities or high offspring mortality (geneticists call this inbreeding depression). In social species, such as meerkats, Suricata suricatta, the issues are even more complex, as dispersal could break up social groups that work well together to detect predators or find resources.  Nino Maag and his colleagues explored what factors influence meerkat dispersal decisions, their survival and reproduction, and how those factors affect overall population dynamics in the Kuruman River Reserve in South Africa.


A group of vigilant meerkats. Credit: Arpat Ozgul

Meerkats live in groups of 2-50 individuals, with a dominant pair that monopolizes reproduction.  While pregnant, the dominant female usually evicts some subordinate females from the group; this coalition of evictees will either remain apart from the group (but within the confines of the territory) and eventually be allowed back in, or else emigrate to a new territory. By attaching radio collars to subordinate females, the researchers were able to follow emigrants to determine their fates.


Nino Maag collects data in the Kalahari Desert while a meerkat, wearing a radio collar, strolls by. Credit: Gabriele Cozzi.

How does population density affect emigration rates of evicted females?  You might think that meerkats would be most likely to emigrate at high population density, as a way of avoiding resource competition.  As it turns out, the story is more complicated.  First, individual females (solid lines in the graph below) are more likely to remain with the group (not emigrate) than are groups of two or more females (dashed lines). Second, emigration rates were highest at low population density, intermediate at high population density and lowest at intermediate population density. This nonlinear effect can be explained by the low benefits of remaining in a very small group, so evictees are more likely to emigrate.  But as population density (and group size) increase, the meerkats enjoy higher success as a result of cooperation between individuals (in particular, detecting and avoiding predators).  But when population densities get too high, there are not enough resources to go around, and evictees are more likely to emigrate.


Proportion of evicted female meerkats that had not yet emigrated in relation to time since eviction at low (red), medium (light blue) and high (dark blue) population density.  Solid lines represent individual females, while dashed lines are coalitions of two or more females.

In addition to the density effects we just discussed, association with unrelated males from other groups early after eviction increased the probability that females would emigrate – presumably this increased the probability that females would quickly produce offspring in their new territory. Females also dispersed longer distances if unrelated males did not meet up with them, possibly to avoid inbreeding with closely related males from neighboring groups.

Coalitions were more likely to return to the group if females were not pregnant – in fact 62% of pregnant evictees aborted their litters before being allowed back into the group.  Of the ones that did not abort before returning, only 42% of their litters survived to the first month.

The period of transience, when emigrants are seeking new territories, can be prolonged and dangerous.  The mean dispersal distance was 2.24 km, and the transience period averaged about 46 days.  Larger coalitions with males present tended to disperse the shortest distances (left graph below). Dispersers took longest to settle at high population density – perhaps there were fewer available territories under those conditions (right graph below).


A. Effect of coalition size and presence of unrelated males on dispersal distance. B. Effect of population density on transience time (interval between emigration and settling).

Large coalitions settled more quickly than did small coalitions, particularly if accompanied by unrelated males.  Once settled, females successfully carried through 89% of their pregnancies (compare that to the 62% abortion rate of females that returned to their original group).  These females had a litter survival rate (to the first month) of 65%.

Social and non-social species are influenced by population density in different ways.  The situation is relatively simple for non-social species; as population size increases, competition between individuals increases, so dispersal is more likely.  However, even for non-social species, we might expect dispersal at very low population levels, if there are no mates available. For social species such as meerkats, the situation is more complex.  Cooperation enhances survival and reproduction, so it is better to be in a larger group (with more cooperators). At the same time, if the group is too large, then resource competition starts being an increasingly disruptive factor. As ecologists collect more dispersal data from other social species, they will be able to test the hypothesis that population density in many species influences dispersal in a non-linear way.

note: the paper that describes this research is from the journal Ecology. The reference is Maag, N. , Cozzi, G. , Clutton‐Brock, T. and Ozgul, A. (2018), Density‐dependent dispersal strategies in a cooperative breeder. Ecology, 99: 1932-1941. doi:10.1002/ecy.2433. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.