Coral can recover (occasionally)

Coral reefs have amazing species diversity, which depends, in part, on a mutualism between the coral animal and a group of symbiotic algae that live inside the coral. The algae provide the coral host with approximately 90% of the energy it needs (from photosynthetic products). In return, the algae are rewarded with a place to live and a generous allotment of nitrogen (mostly fecal matter) from the coral. Unfortunately, coral are under attack from a variety of sources. Most problematic, humans are releasing massive amounts of carbon dioxide into the atmosphere, which is increasing ocean temperatures and also making the ocean more acidic. Both processes can kill coral by causing them to eject their symbiotic algae, leaving the coral unable to get enough nutrients.


The coral reef at Mo’orea. Credit: Moorea Coral Reef LTER site.

But other factors threaten coral ecosystems as well. For example, between 2006 and 2010 the reefs of Mo’orea, French Polynesia (pictured above) were attacked by the voracious seastar Acanthaster planci, which reduced coral cover (the percentage of the ocean floor covered with coral when viewed from above) from 45.8% in 2006 to 6.4% in 2009. Then, in February 2010, Cyclone Oli hit, and by April mean coral cover had plummeted to 0.4%.


Researchers survey the reef at Mo’orea. Credit: Peter Edmunds.

Peter Edmunds has been studying the coral reef ecosystem at Mo’orea for 14 years, and has observed firsthand the sequence of reef death and subsequent recovery. Working with Hannah Nelson and Lorenzo Bramanti, he wanted to document the recovery process and to identify the underlying mechanisms. Fortunately, Mo’orea is a Long Term Ecological Research (LTER) site, one of 28 such sites funded by the United States National Science Foundation. Consequently, the researchers had long-term data available to them, so they could document how coral abundance had changed since 2005. Their analysis showed the decline in coral cover from 2007 to 2010, but a remarkably rapid recovery beginning in 2012 and continuing through 2017.


Percent cover (+ SE) of all coral, Pocillopora coral (the species group that the research team focused on), and macroalgae at Mo’orea over a 13-year period, based on LTER data. The horizontal bar with COTs above it represents the time period of maximum seastar (crown-of-thorns) predation.

What factors caused this sharp recovery? One general process that could be part of the answer is density dependence, whereby populations have high growth rates when densities are low and there is very little competition, and low growth rates when densities are high and there is a great deal of competition between individuals, or in this case, between colonies. The problem is that though density dependence makes intuitive sense, it is difficult to demonstrate, as other factors could underlie the coral recovery.  Perhaps after 2011 there was more food available, or fewer predators, or maybe the weather was better for coral growth.
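To make the idea concrete, here is a minimal sketch in Python of per-capita recruitment that declines as coral cover rises. The linear form and all parameter values are invented for illustration; they are not estimates from the Mo’orea study.

```python
# Minimal sketch of density-dependent recruitment (illustration only:
# the linear form and the parameters r_max and K are invented, not
# values from the Mo'orea study). Per-capita recruitment declines as
# coral cover approaches a hypothetical maximum cover K, so sparse
# reefs recover fastest.

def recruitment_rate(cover, r_max=0.8, K=50.0):
    """Per-capita recruitment per year as a function of % coral cover."""
    return max(r_max * (1.0 - cover / K), 0.0)

# Cover values mentioned in this post, from post-cyclone lows to controls:
for cover in [0.4, 6.4, 19.1, 32.5, 45.8]:
    print(f"cover {cover:5.1f}% -> per-capita recruitment {recruitment_rate(cover):.2f}")
```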


High density (top) and low density (bottom) quadrats of Pocillopora coral established by the research group.

To more convincingly test for density dependence, Edmunds and his colleagues set up an experiment, establishing eighteen 1-m2 quadrats in April 2016. The researchers reduced coral cover in nine quadrats to 19.1% by removing seven or eight colonies from each experimental quadrat (low-density quadrats), and left the other nine quadrats as unmanipulated controls, with coral cover averaging 32.5% (high-density quadrats). They then asked whether, over the course of the next year, more recruits (new colonies < 4 cm diameter) became established in the low-density quadrats.

Returning in 2017, the researchers discovered substantially greater recruitment in the low-density quadrats than in the high-density quadrats. This experiment provides strong evidence that the rapid recovery after devastation by seastars and Cyclone Oli was helped by a density-dependent response of the coral population – high recruitment at low population density.


Density of recruits just after (left) and one year after (right) the quadrats were established. Solid bars are means (+ SE) for high-density quadrats, while clear bars are means (+ SE) for low-density quadrats.
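To illustrate how such a comparison might be analyzed, here is a hedged sketch: the recruit counts below are invented stand-ins for the nine low-density and nine high-density quadrats (the paper reports the actual data and statistics), compared with a one-sided nonparametric test.

```python
# Hypothetical analysis sketch; the recruit counts are invented, not the
# study's data. A one-sided Mann-Whitney U test asks whether low-density
# quadrats gained more recruits than high-density quadrats.
from scipy import stats

low_density  = [14, 11, 9, 16, 12, 10, 13, 15, 11]  # recruits per quadrat (made up)
high_density = [6, 4, 7, 5, 8, 3, 6, 5, 7]

u, p = stats.mannwhitneyu(low_density, high_density, alternative="greater")
print(f"Mann-Whitney U = {u}, one-sided p = {p:.4f}")
```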

In recent years, many coral reef systems around the world have experienced declining coral cover, a loss of fish and invertebrate diversity and abundance, and an increase in the abundance of macroalgae. While many of these reefs continue to decline, others, such as the reefs at the Mo’orea LTER site, are more resilient and are able to recover from disturbance. The researchers argue that we need to fully understand the mechanisms underlying recovery – in other words, what is causing the density-dependent response? Is it simply competition between coral colonies that causes high recruitment at low density, or might interactions between coral and algae be important? And what types of interactions influence recruitment rates at different densities? One possibility is that at high densities, coral eat most of the tiny coral larvae as the larvae descend from the surface after a mass spawning event. This also raises the important question of why many reefs around the world do not show this density-dependent response. Clearly there is much work remaining to be done if we want to preserve this critically endangered marine biome.

note: the paper that describes this research is from the journal Ecology. The reference is Edmunds, P. J., Nelson, H. R. and Bramanti, L. (2018), Density‐dependence mediates coral assemblage structure. Ecology, 99: 2605-2613. doi:10.1002/ecy.2511. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Beautiful buds beset bumblebees with bad bugs

Sexual liaisons can be difficult to achieve without some type of purposeful motion. Flowering plants, which are rooted to the ground, are particularly challenged to bring the male close enough to the female to have sex. One awesome adaptation is pollen, technically the male gametophyte – the gamete (sperm)-generating plant. These tiny males get to females either by floating through the air or by being transferred by animal pollinators such as bees. Plants can lure bees to their flowers by producing nectar – a sugar-rich fluid – which bees lap up and use as a carbohydrate source. While nectaring, bees also collect pollen, either intentionally or inadvertently, which provides them with essential proteins. When bees travel to the next flower, they may inadvertently drop some of their pollen load near the female gametophyte – in this case a tiny egg-generating plant (though tiny, the female gametophyte is considerably larger than the male gametophyte). We call this process of “tiny boy meets tiny girl” pollination. Once the two gametophytes meet, the pollen produces one or more sperm, which it uses to fertilize an egg within the female gametophyte. There is more to it, but this will hopefully clarify the difference between pollination and fertilization.


Bumblebee forages on beebalm, Monarda didyma. Credit: Jonathan Giacomini.

All of this business takes place within the friendly confines of the flower. The same flower may be visited by many different bees of many different species. While feeding, bees carry on other bodily functions, including defecation. They are not careful about where they defecate; consequently, a bee’s breakfast might also include feces from a previous bee visitor. Bumblebee (Bombus impatiens) feces carry many disease organisms, including the gut parasite Crithidia bombi, which can reduce learning, decrease colony reproduction and impair a queen’s ability to found new colonies. Because pollinators are so critical in ecosystems, Lynn Adler and her colleagues wondered whether certain types of flowers were better vectors for harboring and transmitting Crithidia bombi to other bumblebees.


Bumblebee forages on the snapdragon, Antirrhinum majus. Credit: Jonathan Giacomini.

The researchers chose 14 different flowering plant species, allowing uninfected bumblebees to forage on inflorescences (clusters of flowers) inoculated with a measured amount of Crithidia bombi parasites. The bees were reared for seven days after exposure, and then assessed for whether they had picked up an infection from their foraging experience, and if so, how intense the infection was. To do this, the researchers dissected each tested bee and counted the number of Crithidia cells within its gut.


Researcher conducts foraging trial with Lobelia siphilitica inflorescence. Credit: Jonathan Giacomini.

Adler and her colleagues discovered that some plant species caused a much higher pathogen count (mean number of Crithidia cells in the bee gut) than did other plant species. For example, bees that foraged on Asclepias incarnata (ASC) had four times as many pathogens, on average, as did bees that foraged on Digitalis purpurea (DIG) (top graph below). Bees foraging on Asclepias were also much more likely to get infected (had greater susceptibility) than bees that foraged on several other species, most notably Linaria vulgaris (LIN) and Eupatorium perfoliatum (EUP) (middle graph). Lastly, if we limit our consideration to infected bees, the mean intensity of infection was much greater for bees foraging on some species, such as Asclepias and Monarda didyma (MON), than on others, such as Digitalis and Antirrhinum majus (ANT) (bottom graph).


(Top graph) Mean number of Crithidia (per 2-microliter gut sample) hosted by bees after foraging on one of 14 different flowering plant species. This graph includes both infected and uninfected bees. (Middle graph) Susceptibility – the proportion of bees infected – after foraging trials on different plant species. (Bottom graph) Intensity of infection – mean number of Crithidia for infected bees only. The capital letters below the graphs are the first three letters of the plant genus. Numbers in bars are sample sizes. Error bars indicate 1 standard error.
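The three panels differ only in which bees enter the average. A small sketch with invented Crithidia counts shows how pathogen count, susceptibility and intensity are computed from the same raw data:

```python
# Invented Crithidia counts (cells per 2-microliter gut sample) for bees
# that foraged on one plant species; 0 means the bee never got infected.
counts = [0, 12, 0, 30, 22, 0, 18, 0, 40, 25]

infected = [c for c in counts if c > 0]
pathogen_count = sum(counts) / len(counts)    # mean over ALL bees (top graph)
susceptibility = len(infected) / len(counts)  # proportion infected (middle graph)
intensity = sum(infected) / len(infected)     # mean over infected bees only (bottom graph)

print(f"pathogen count: {pathogen_count:.1f}")
print(f"susceptibility: {susceptibility:.2f}")
print(f"intensity:      {intensity:.1f}")
```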

It would be impossible to repeat this experiment on the 369,000 known species of flowering plants (with many more still to be identified). So Adler and her colleagues wanted to know whether particular floral traits distinguished the plant species that served as the best vectors of disease. The researchers measured and counted variables associated with the flowers, such as the size and shape of the corolla, the number of open flowers, and the number of reproductive structures (flowers, flower buds and fruits) per inflorescence.


Flower traits measured by Adler and colleagues (example for blue lobelia, Lobelia siphilitica). CL is corolla length. CW is corolla width. PL is petal length. PW is petal width. Credit: Melissa Ha.

The researchers also wanted to know whether any variables associated with the bees, such as bee size and bee behavior, would predict how likely it was that a bee would get infected.  Surprisingly, the number of reproductive structures per inflorescence stood out as the most important variable. In addition, smaller bees were somewhat more likely to get infected than larger bees, and bees that foraged for a longer time period were more prone to infection.


Mean susceptibility of bees to Crithidia infection after foraging on 14 different flowering plant species, in relation to the number of reproductive structures (flowers, buds and fruits) per inflorescence.

These findings are both surprising and exciting. Adler and her colleagues were surprised to find such large differences among plant species in the ability to transmit disease. In addition, they were puzzled by the importance of the number of reproductive structures per inflorescence. At this point they don’t have a favorite hypothesis for its overriding importance, speculating that some unmeasured aspect of floral architecture that influences disease transmission might be correlated with the number of reproductive structures per inflorescence.


Bumblebee forages on Penstemon digitalis. In addition to the open flowers, note the large number of unopened buds.  Each of these counted as a reproductive structure for the graph above. Credit: Jonathan Giacomini.

The world is losing pollinators at a rapid rate, and there are concerns that if present trends continue, there may not be enough pollinators to pollinate flowers of some of our most important food crops. Disease is implicated in many of these declines, so it behooves us to understand how plants can serve as vectors of diseases that affect pollinators. Identifying floral traits that influence disease transmission could guide the creation of pollinator-friendly habitats within plant communities, and help to maintain diverse pollinator communities within the world’s ecosystems.

note: the paper that describes this research is from the journal Ecology. The reference is Adler, L. S., Michaud, K. M., Ellner, S. P., McArt, S. H., Stevenson, P. C. and Irwin, R. E. (2018), Disease where you dine: plant species and floral traits associated with pathogen transmission in bumble bees. Ecology, 99: 2535-2545. doi:10.1002/ecy.2503. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Dinoflagellates deter copepod consumption

Those of us who enjoy eating seafood are dismayed by the dreaded red tide, which renders some of our favorite prey toxic to us. A red tide occurs when dinoflagellates and other algae increase sharply in abundance, often in response to an upwelling of nutrients from the ocean floor. Many of these dinoflagellates are red or brownish-red, so large numbers of them floating on or near the surface give the ocean its characteristic red color. These dinoflagellates produce toxic compounds (in particular neurotoxins) that pass through the food web, ultimately contaminating fish, molluscs and many other groups of species.


Red tide at Isahaya Bay, Japan.  Credit: Marufish/Flickr.

Did toxicity arise in dinoflagellates to protect them from being eaten by predators – in particular by voracious copepods? The problem with this hypothesis is that copepods eat dinoflagellates whole. Let’s imagine a dinoflagellate with a mutation that produces a toxic substance. At some point the dinoflagellate gets eaten, and the poor copepod consumer is exposed to the toxin. Maybe it dies and maybe it lives, but the important result is that the dinoflagellate dies, and its mutant genes are gone forever, along with the toxic trait. The only way toxicity can benefit the toxic individual, and thus spread throughout the dinoflagellate population, is if it increases the survival or reproductive success of individuals carrying the trait. This can occur if copepods have some mechanism for detecting toxic dinoflagellates, and are therefore less likely to eat them.
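This verbal argument can be made concrete with a toy simulation (all probabilities below are invented for illustration): a toxin variant spreads only when predators can detect and avoid toxic cells, because a toxin that acts only after the cell is eaten confers no individual advantage.

```python
# Toy simulation of the selection argument above. Probabilities are
# arbitrary illustration values, not measurements. Toxic cells gain an
# advantage only if predation on them is reduced by predator avoidance.
import random

def final_toxic_fraction(avoidance, generations=200, n=1000, seed=1):
    random.seed(seed)
    pop = [True] * (n // 10) + [False] * (n - n // 10)  # start 10% toxic
    for _ in range(generations):
        # Each cell is eaten with probability 0.3, reduced for toxic
        # cells in proportion to the predator's avoidance of them.
        survivors = [toxic for toxic in pop
                     if random.random() > 0.3 * (1 - avoidance if toxic else 1)]
        pop = [random.choice(survivors) for _ in range(n)]  # survivors reproduce
    return sum(pop) / len(pop)

print(final_toxic_fraction(avoidance=0.0))  # no detection: toxicity just drifts
print(final_toxic_fraction(avoidance=0.5))  # detection: toxic cells take over
```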

Jiayi Xu and Thomas Kiørboe went looking for such a mechanism using 13 different species or strains of dinoflagellates, which they presented to the copepod Temora longicornis. This copepod beats its legs to create a feeding current that moves water – and presumably dinoflagellates – toward it; it then eats the dinoflagellates it captures. For their experiment, the researchers glued a hair (very carefully) to the dorsal surface of an individual copepod, and then attached the other end of the hair to a capillary tube controlled by a micromanipulator. They placed these tethered copepods into small aquaria, where the copepods continued to beat their legs, eat and carry on other bodily functions.


Aquarium with tethered copepod and recording equipment. Credit: J. Xu.

The researchers then added a measured amount of one type of dinoflagellate into the aquarium, and using high resolution videography, watched the copepods feed over the next 24 hours.


Tethered copepod beats its legs to attract a dinoflagellate (round blue cell). Credit: J. Xu.

Twelve of the dinoflagellate strains were known to be toxic, though they produced several different types of toxin. Protoceratium reticulatum was a nontoxic control species of dinoflagellate. As you can see below, on average, copepods ate more of the nontoxic P. reticulatum than they did of any of the toxic species.


Average dinoflagellate biomass ingested by the tethered copepods.  P. reticulatum  is the nontoxic control.  Error bars are 1 SE.

Xu and Kiørboe identified two major mechanisms that underlie selectivity by the copepod predator. In many cases, the copepod successfully captured the prey, but then rejected it (top graph below). For one strain of A. tamarense prey, and to a lesser extent for K. brevis prey, the predator simply fed less by reducing the proportion of time that it beat its feeding legs (bottom graph below).


Copepod feeding behavior on 13 dinoflagellate prey species/strains. The top graph shows the fraction of captured dinoflagellates rejected, while the bottom graph shows the proportion of time the copepod beats its feeding legs in the presence of a particular species/strain of dinoflagellate.

If you look at the very first graph in this post, which shows the average dinoflagellate biomass consumed, you will note that both strains of K. brevis (K8 and K9) are eaten very sparingly. The graphs just above show that the copepod rejects some of the K. brevis it captures, and beats its legs a bit less often when presented with K. brevis. However, the increase in rejection and the decrease in leg beating are not sufficient to account for the tremendous reduction in consumption, so something else must be going on. The researchers suspect that the copepod can identify K. brevis cells from a distance, presumably through olfaction, and decide not to capture them. This mechanism warrants further exploration.

One surprising finding of this study is that the copepod responds differently to one strain of a species (A. tamarense) than it does to the other strains. Xu and Kiørboe point out that previous studies of copepod/dinoflagellate interactions have identified other surprises. For example, there are cases where a dinoflagellate strain is toxic to one strain of copepod, but harmless to another copepod strain of the same species. Also, within a dinoflagellate species, one strain may have a very different distribution of toxins than does a second strain. So why does this degree of variation exist in this system?

The researchers argue that there may be an evolutionary arms race between copepods and dinoflagellates.  The copepod adapts to the toxin of co-occurring dinoflagellates, becoming resistant to the toxin. This selects for dinoflagellates that produce a novel toxin that the copepod is sensitive to. Over time, the copepod evolves resistance to the second toxin as well, and so on… Because masses of ocean water and populations of both groups are constantly mixing, different species and strains are exposed to novel environments with high frequency. Evolution happens.

note: the paper that describes this research is from the journal Ecology. The reference is Xu, J. and Kiørboe, T. (2018), Toxic dinoflagellates produce true grazer deterrents. Ecology, 99: 2240-2249. doi:10.1002/ecy.2479. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Decomposition: it’s who you are and where you are

“Follow the carbon” is a growing pastime of ecologists and environmental researchers worldwide. In the process of cellular respiration, organisms use carbon compounds to fuel their metabolic pathways, so having carbon around makes life possible. Within ecosystems, following the carbon is equivalent to following how energy flows among the producers, consumers, detritivores and decomposers. In soils, decomposers play a central role in energy flow, but we might not appreciate their importance because many decomposers are tiny, and decomposition is very slow. We are thrilled by a hawk subduing a rodent, but are less appreciative of a bacterium breaking down a lignin molecule, even though at their molecular heart both processes are the same, in that complex carbon enters the organism and fuels cellular respiration. However, from a global perspective, cellular respiration produces carbon dioxide as a waste product, which, if allowed to escape the ecosystem, will increase the pool of atmospheric carbon dioxide, thereby increasing the rate of global warming. So following the carbon is an ecological imperative.

As the world warms, trees and shrubs are colonizing regions that previously were inaccessible to them. In northern Sweden, mountain birch forests (Betula pubescens) and birch shrubs (Betula nana) are advancing into the tundra, replacing the heath that is dominated by the crowberry, Empetrum nigrum. As he began his PhD studies, Thomas Parker became interested in the general question of how decomposition changes as trees and shrubs expand further north in the Arctic. On his first trip to a field site in northern Sweden, he noticed that the areas of forest and shrub produced a lot of leaf litter in autumn, yet there was no significant accumulation of this litter the following year. He wondered how the litter decomposed, and how this process might change as birch overtook the crowberry.


One of the study sites in autumn: mountain birch forest (yellow) in the background, dwarf birch (red) on the left and crowberry on the right. Credit: Tom Parker.

Several factors can affect leaf litter decomposition in northern climes. First, depending on what they are made of, different species of leaves will decompose at different rates. Second, the types of microorganisms present will target different types of leaves with varying degrees of efficiency. Lastly, the abiotic environment may play a role; for example, thanks to shading and the creation of discrete microenvironments, forests have deeper snowpack, which keeps soils warmer in winter and potentially elevates decomposer cellular respiration rates. Working with several other researchers, Parker tested the following three hypotheses: (1) litter from the more productive vegetation types will decompose more quickly, (2) all types of litter decompose more quickly in forest and shrub environments, and (3) deep winter snow (as in forest and shrub environments) increases litter decomposition compared to heath environments.

To test these hypotheses, Parker and his colleagues established 12 transects that transitioned from forest to shrub to heath. Along each transect, they set up three 2-m2 plots – one each in the forest, shrub and heath – 36 plots in all. In September of 2012, the researchers collected fresh leaf litter from mountain birch, shrub birch and crowberry, which they sorted, dried and placed into 7 x 7 cm polyester mesh bags. They placed six litter bags of each species at each of the 36 plots, and then harvested these bags periodically over the next three years. Bags were securely attached to the ground so that small decomposers could get in, but the researchers had to choose a relatively small mesh size to make sure they successfully enclosed the tiny crowberry leaves. This restricted access for some of the larger decomposers.


Some litter bags attached to the soil surface at the beginning of the experiment. Credit: Tom Parker.

To test for the effect of snow depth, the researchers also set up snow fences on nearby heath sites.  These fences accumulated blowing and drifting snow, creating a snowpack comparable to that in nearby forest and shrub plots.

Parker and his colleagues found that B. pubescens leaves decomposed most rapidly and E. nigrum leaves decomposed most slowly. In addition, leaf litter decomposed fastest in the forest and most slowly in the heath. Lastly, snow depth did not influence decomposition rate.


(Left graph) Decomposition rates of E. nigrum, B. nana and B. pubescens in heath, shrub and forest. (Right graph) Decomposition rates of E. nigrum, B. nana and B. pubescens in heath under three different snow depths simulating snow accumulation in different vegetation types: Heath (control), + Snow (Shrub) and ++ Snow (Forest). Error bars are 1 SE.

B. pubescens in forest and shrub lost the greatest amount (almost 50%) of mass over the three years of the study, while E. nigrum in heath lost the least (less than 30%).  However, B. pubescens decomposed much more rapidly in the forest than in the shrub between days 365 and 641. The bottom graphs below show that snow fences had no significant effect on decomposition.
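Litterbag studies like this one often summarize mass loss with a single-exponential decay model, M(t) = M0 e^(-kt), where k is the decomposition rate compared across litter types and habitats. As a rough sketch (the data points below are invented, not the study’s measurements), k can be estimated by curve fitting:

```python
# Fit the standard single-exponential litterbag model M(t) = M0*exp(-k*t).
# Harvest days and mass values below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([0.0, 180.0, 365.0, 641.0, 1000.0])
mass_remaining = np.array([100.0, 88.0, 74.0, 62.0, 52.0])  # % of initial mass

def decay(t, k):
    return 100.0 * np.exp(-k * t)

(k_fit,), _ = curve_fit(decay, days, mass_remaining, p0=[0.001])
print(f"estimated k = {k_fit:.5f} per day "
      f"(~{100 * (1 - np.exp(-k_fit * 365)):.0f}% mass lost in year one)")
```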


Percentage of litter mass remaining for (a, d) E. nigrum, (b, e) B. nana and (c, f) B. pubescens in heath, shrub or forest. Top graphs (a, b, c) are from the natural transects, while bottom graphs (d, e, f) represent heath tundra under three different snow depths simulating snow accumulation in different vegetation types: Heath (control), + Snow (Shrub) and ++ Snow (Forest). Error bars are 1 SE. Shaded areas on the x-axis indicate the snow-covered season in the first two years of the study.

Why do mountain birch leaves decompose so much faster than crowberry leaves? The researchers chemically analyzed both species and discovered that birch leaves had 1.7 times more carbohydrate than did crowberry, while crowberry had 4.9 times more lipids than did birch. Their chemical analysis showed that much of birch’s rapid early decomposition was a result of rapid carbohydrate breakdown. In contrast, crowberry’s slow decomposition resulted from its high lipid content, which is relatively resistant to the actions of decomposers.


Researchers (Parker right, Subke left) harvesting soils and litter in the tundra. Credit: Jens-Arne Subke.

Parker and his colleagues did discover that decomposition was fastest in the forest independent of litter type. Forest soils are rich in brown-rot fungi, which are known to target the carbohydrates (primarily cellulose) that are so abundant in mountain birch leaves.  The researchers propose that a history of high cellulose litter content has selected for a biochemical environment that efficiently breaks down cellulose-rich leaves. Once the brown-rot fungi and their allies have done much of the initial breakdown, another class of fungi (ectomycorrhizal fungi) kicks into action and metabolizes (and decomposes) the more complex organic molecules.

The result of all this decomposition in the forest, but not the heath, is that tundra heath stores much more organic carbon than does the adjacent forest (which loses stored organic compounds to decomposers). As forests continue their relentless march northward, replacing the heath, it is very likely that they will introduce their efficient army of decomposers to the former heathlands. These decomposers will feast on the vast supply of stored organic carbon compounds and release large quantities of carbon dioxide into the atmosphere, which will further exacerbate global warming. This is one of several positive feedback loops expected to destabilize global climate systems in the coming years.

note: the paper that describes this research is from the journal Ecology. The reference is Parker, T. C., Sanderman, J., Holden, R. D., Blume‐Werry, G., Sjögersten, S., Large, D., Castro‐Díaz, M., Street, L. E., Subke, J. and Wookey, P. A. (2018), Exploring drivers of litter decomposition in a greening Arctic: results from a transplant experiment across a treeline. Ecology, 99: 2284-2294. doi:10.1002/ecy.2442. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Recovering soils suffer carbon loss

When dinosaurs roamed the Earth, and I was in high school, acid rain became big news.  Even my dad, who as an industrial chemist believed that industry seldom sinned, acknowledged that he could see how coal plants could release sulfur (and other) compounds, which would be converted to strong acids, borne by prevailing winds to distant destinations, and deposited by rain and snow into soils. Forest ecosystems in North America and Europe are happily, albeit slowly, recovering from the adverse effects of acid deposition, but there are some causes for concern.  At the Hubbard Brook Experimental Forest in New Hampshire, USA, researchers experimentally remediated some of the impacts of acid deposition by adding calcium silicate to a watershed (via helicopter!). A decade later, this treatment had caused a 35% decline in the total carbon stored in the soil. This result was very unexpected and alarming, as this could mean that acid-impacted temperate forests may become major sources of CO2, with more carbon running off into streams, and some becoming atmospheric CO2, as the effects of acid rain wane. Richard Marinos and Emily Bernhardt wanted to determine exactly what caused this carbon loss to better understand how forests will behave in the future as they recover from acidification.


The forest at Hubbard Brook in the Autumn. Credit: Hubbard Brook Ecosystem Study at hubbardbrook.org

The problem is that calcium and acidity (lower pH is more acidic; higher pH is more alkaline) have different and complex effects on plants, soil microorganisms and the soils in which they live. Several previous studies demonstrated that higher soil pH (becoming more alkaline) caused an increase in carbon solubility, while higher calcium levels caused carbon to become less soluble. Soluble organic carbon forms a tiny fraction of total soil carbon, but is very important because it can be used by microorganisms for cellular respiration, and can also be leached from ecosystems as runoff. In general, soil microorganisms benefit as acidic soils recover because heavy metal toxicity is reduced, enzymes work better, and mycorrhizal associations are more robust. Complicating the picture even more, both elevated calcium and increased pH have been associated with increased plant growth, but increased calcium is also associated with reduced fine root growth.

To help unravel this complexity, Marinos and Bernhardt experimentally tested the effects of increased pH and increased calcium on soil organic carbon (SOC) solubility, microbial activity and plant growth. They collected acidic soil from Hubbard Brook Experimental Forest, which formed three distinct layers: leaf litter on top, organic horizon below the leaf litter, and mineral soil below the organic horizon.


Soil excavation site at Hubbard Brook. Credit: Richard Marinos.

The researchers then filled 100 2.5-liter pots with these three soil layers (in correct sequence) and planted 50 pots with sugar maple saplings, leaving 50 pots unplanted. Pots were moved to a greenhouse, and that November given one of five treatments: calcium chloride addition (Ca treatment), potassium hydroxide addition (alkalinity treatment), a combined Ca + alkalinity treatment, a deionized water control, and a potassium chloride control. The potassium chloride control had no effect, so we won’t discuss it further.


Potted sugar maple saplings used for the experiments. Credit: Richard Marinos.

The following July, Marinos and Bernhardt harvested all of the pots, carefully separating plant roots from the soil, and analyzing the organic horizon and mineral soil levels separately (there wasn’t enough leaf litter remaining for analysis). The researchers measured SOC by mixing soil from each pot with deionized water, centrifuging at high speed to extract the water-soluble material, combusting the material at high temperature and measuring how much CO2 was generated. The result is termed water extractable organic carbon (WEOC).

Remember that previous studies had shown that higher calcium levels decreased carbon solubility, while higher alkalinity increased carbon solubility. Surprisingly, Marinos and Bernhardt found that in unplanted pots, the Ca treatment reduced WEOC in both soil layers, while the alkalinity treatment decreased WEOC in the organic horizon, but not in mineral soil. In pots planted with maple saplings, the Ca treatment had no effect on WEOC, while the alkalinity treatment, and the Ca + alkalinity treatment, increased WEOC markedly.


Water-Extractable Organic Carbon in soil without plants (left column) and with plants (right column). Top graphs are organic horizon soils and bottom graphs are mineral horizon soils. Error bars are 1 standard error.

The next question was how might soil microorganisms fit into the plant-soil dynamics?


Soil respiration rates (top) over the short term (days 1-7 post-harvest) and (bottom) over the long term (days 8-75 post-harvest). Error bars are 1 standard error.

Soil microorganisms use carbon products for cellular respiration, so the researchers expected that soils with more SOC would have higher respiration rates. They measured soil respiration 1, 2, 4, 8, 16, 35 and 72 days after the harvest, so they could evaluate both short-term and long-term effects. In unplanted pots, soil respiration rates were unaffected by treatment. But in planted pots, the alkalinity treatment increased soil respiration rates considerably in the short term (top graphs), but much less so in the long term (bottom graphs). Putting the WEOC data together with the respiration data in the figures above, you can see that in pots with plants, increased alkalinity was associated with more soluble organic carbon and higher respiration rates.

The researchers weighed the saplings after harvest and discovered that the sugar maples grew best in soils treated with calcium. Two previous studies had treated fields with calcium silicate and found better sugar maple growth in the treated fields. Marinos and Bernhardt argue that their study provides evidence that it was the Ca enrichment, and not the increased pH, that caused the increased growth in both of those studies.

Perhaps the most surprising finding is that higher alkalinity increased soil microbial activity only in pots with plants, and had no effect on soil microbial activity in pots without plants. Somehow, the plants in an alkaline environment increase the rate of microbial respiration, perhaps by releasing photosynthetically produced carbohydrates into the soil, which could then stimulate microbial decomposition of SOC. The finding that this effect largely disappeared a few days after harvest (bottom graph above) supports the idea that the plants release a substance that helps microorganisms carry on cellular respiration, but this idea awaits further study. In the meantime, we have a better understanding of how forest recovery from acid rain affects one aspect of the carbon cycle, though many other human inputs may interact with this recovery process.

note: the paper that describes this research is from the journal Ecology. The reference is Marinos, R. E. and Bernhardt, E. S. (2018), Soil carbon losses due to higher pH offset vegetation gains due to calcium enrichment in an acid mitigation experiment. Ecology, 99: 2363-2373. doi:10.1002/ecy.2478. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Rice fields foster biodiversity

Restoration ecologists want to restore ecosystems that have been damaged or destroyed by human activity.  One approach they use is “rewilding” – which can mean different things to different people.  To some, rewilding involves returning large predators to an ecosystem, thereby reestablishing important ecological linkages.  To others, rewilding requires corridors that link different wild areas, so animals can migrate from one area to another.  One common thread in most concepts of rewilding is that once established, restored ecosystems should be self-sustaining, so that if ecosystems are left to their own devices, ecological linkages and biological diversity can return to pre-human-intervention levels, and remain at those levels in the future.


The intermediate egret, Ardea intermedia, plucks a fish from a flooded rice field. Credit: N. Katayama.

Chieko Koshida and Naoki Katayama argue that rewilding may not always increase biological diversity. In some cases, allowing ecosystems to return to their pre-human-intervention state can actually cause biological diversity to decline. Koshida and Katayama were surveying bird diversity in abandoned rice fields, and noticed that bird species distributions differed between long-abandoned rice fields and still-functioning rice fields. To follow up on their observations, they surveyed the literature, and found 172 studies that addressed how rice field abandonment in Japan affected species richness (number of species) or abundance. For the meta-analysis we will be discussing today, an eligible study needed to compare richness and/or abundance for at least two of three management states: (1) cultivated (tilled, flood irrigated, rice planted, and harvested every year), (2) fallow (tilled or mowed once every 1-3 years), and (3) long-abandoned (unmanaged for at least three years).


Three different rice field management states – cultivated, fallow and long-abandoned – showing differences in vegetation and water conditions. Credit: C. Koshida.

Meta-analyses are always challenging, because the data are collected by many researchers, and for a variety of purposes.  For example, some researchers may only be interested in whether invasive species were present, or they may not be interested in how many individuals of a particular species were present. Ultimately 35 studies met Koshida and Katayama’s criteria for their meta-analysis (29 in Japanese and six in English).

Overall, abandoning or fallowing rice fields decreased species richness or abundance to 72% of the values in cultivated rice fields. As you might suspect, these effects were not uniform across different variables or comparisons. Not surprisingly, fish and amphibians declined sharply in abandoned rice fields – much more than other groups of organisms. Abundance declined more sharply in abandoned fields than did species richness. Several other trends also emerged. For example, complex landscapes such as yatsuda (forested valleys) and tanada (hilly terraces) were more affected than were simple landscapes. In addition, wetter abandoned fields were able to maintain biological diversity, while drier abandoned fields declined in richness and abundance.


The effects of rice field abandonment or fallowing for eight different variables. Effect size is ln(Mt/Mc), where Mt = mean species richness or abundance for the treatment, and Mc = mean species richness or abundance for the control. The treated field in each comparison was the one that had been abandoned longer. A positive effect size means that species richness or abundance increased in the treated (longer-abandoned) field, while a negative effect size means that species richness or abundance declined in the treated field. Numbers in parentheses are the number of data sets used for each comparison.
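The effect size in this figure is a log response ratio, which is easy to compute and interpret. For instance, the overall finding that diversity fell to 72% of cultivated-field values corresponds to an effect size of ln(0.72) ≈ -0.33:

```python
# Log response ratio used as the effect size in the meta-analysis.
import math

def effect_size(mean_treatment, mean_control):
    """ln(Mt/Mc): negative when the longer-abandoned field has lower
    richness or abundance than the control."""
    return math.log(mean_treatment / mean_control)

print(effect_size(72, 100))   # -0.33: decline to 72% of the control value
print(effect_size(100, 100))  #  0.00: no change
```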

When numerous variables are considered, researchers need to figure out which are most important. Koshida and Katayama used a statistical approach known as “random forest” to model the impact of different variables on the reduction in biological diversity following abandonment. This approach generates a metric – the percentage increase in mean square error (%increaseMSE) – which indicates the importance of each variable to the model (we won’t go into how this is done!). As the graph below shows, soil moisture was the most important variable, which tells us (along with the previous figure) that abandoned fields that maintained high moisture levels also kept their biological diversity, while those that dried out lost diversity considerably. Management state was the second most important variable, as long-abandoned fields lost considerably more biological diversity than did fallow fields.


Importance estimates of each variable (as measured by %increaseMSE). Higher values indicate greater importance.
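For readers who do want a taste of how this is done, here is a rough sketch using scikit-learn on random stand-in data (not the study’s dataset). R’s randomForest reports %IncMSE by permuting each predictor and measuring how much prediction error grows; the permutation importance below is the analogous measure.

```python
# Sketch of permutation-based variable importance, the idea behind
# %increaseMSE. Data are random stand-ins, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))  # stand-ins for moisture, management, landscape
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=200)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["soil moisture", "management state", "landscape"],
                     result.importances_mean):
    print(f"{name:17s} importance {imp:.3f}")
```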

Unfortunately, only three studies had data on long-term changes in biological diversity. All three surveyed plant species richness over a 6-15 year period, so Koshida and Katayama combined them to explore whether plant species richness recovers following long-term rice field abandonment. Based on these studies, species richness continued to decline over the entire time period.


Plant species richness in relation to time since rice fields were abandoned (based on three studies).

Koshida and Katayama conclude that left to their own devices, some ecosystems, like rice fields, will actually decrease, rather than increase, in biological diversity.  Rice fields are, however, special cases, because they provide alternatives to natural wetlands for many organisms dependent on aquatic/wetland environments (such as the frog below). In this sense, rice fields should be viewed as ecological refuges for these groups of organisms.


Rana porosa porosa (Tokyo Daruma Pond Frog). Credit: Y. G. Baba

These findings also have important management implications.  For example, conservation ecologists can promote biological diversity in abandoned rice fields by mowing and flooding. In addition, managers should pay particular attention to abandoned rice fields with complex structure, as they are particularly good reservoirs of biological diversity, and are likely to lose species if allowed to dry out. Failure to attend to these issues could lead to local extinctions of specialist wetland species and of terrestrial species that live in grasslands surrounding rice fields. Lastly, restoration ecologists working on other types of ecosystems need to carefully consider the effects on biological diversity of allowing those ecosystems to return to their natural state without any human intervention.

note: the paper that describes this research is from the journal Conservation Biology. The reference is Koshida, C. and Katayama, N. (2018), Meta‐analysis of the effects of rice‐field abandonment on biodiversity in Japan. Conservation Biology, 32: 1392-1402. doi:10.1111/cobi.13156. Thanks to the Society for Conservation Biology for allowing me to use figures from the paper. Copyright © 2018 by the Society for Conservation Biology. All rights reserved.

Sweltering ants seek salt

Like humans, ants need salt and sugar. Salt is critical for a functioning nervous system and for maintaining muscle activity, while sugar is a ready energy source. In ectotherms such as ants, body temperature is influenced primarily by the external environment, with higher environmental temperatures leading to higher body temperatures. When ants get hot, their metabolic rates rise, enabling energetically demanding activities such as foraging for essential resources like salt and sugar. On the down side, hot ants excrete more salt and burn more sugar. In addition, as in humans, very high body temperatures can be lethal, so ants are forced to seek shelter during extreme heat. As a beginning graduate student, Rebecca Prather wanted to know whether ants adjust their foraging for salt and sugar in response to the conflicting demands that elevated temperatures place on their physiological systems.


Rebecca Prather at her field site in Oklahoma, USA. Credit: Rebecca Prather.

Prather and her colleagues studied two different field sites: Centennial Prairie is home to 16 ant species, while Pigtail Alley Prairie has nine. For their first experiment, the researchers established three transects with 100 stations, spaced 1 meter apart, baited with vials containing cotton balls and either a 0.5% salt (NaCl) or a 1% sucrose solution. After 1 hour, they collected the vials (with or without ants), and counted and identified each ant in each vial. They also measured soil temperature at the surface and at a depth of 10 cm, and repeated these experiments at 9 AM, 1 PM and 5 PM, four times each month from April through October.


Ants recruited to vials with 0.5% salt solution.  Credit: Rebecca Prather.

Sugar is easily stored in the body, so while sugar consumption increases with temperature (due to increased metabolic rate), sugar excretion remains relatively stable across temperatures. In contrast, salt cannot be stored effectively, so salt excretion increases at high body temperatures. Consequently, Prather and her colleagues expected ant demand for salt to increase with temperature more rapidly than ant demand for sugar.


Ant behavior in response to vials with 0.5% salt (dark circles) and 1% sucrose (white circles) at varying soil temperatures at 9 AM, 1 PM (13:00) and 5 PM (17:00). The three left graphs show the number of vials discovered (containing at least one ant), while the three right graphs show the number of ants recruited per vial. The Q10 value = the rate of discovery or recruitment at 30°C divided by the rate of discovery or recruitment at 20°C. * indicates that the two curves have statistically significantly different slopes.
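The Q10 in the caption is the special case of a general temperature coefficient, Q10 = (R2/R1)^(10/(T2-T1)); with rates measured at 20°C and 30°C it reduces to a simple ratio. A quick sketch with invented rates:

```python
# Q10 temperature coefficient; with T1 = 20 and T2 = 30 this is just the
# ratio of the two rates, as in the figure caption. Rates are invented.
def q10(rate_t1, rate_t2, t1=20.0, t2=30.0):
    return (rate_t2 / rate_t1) ** (10.0 / (t2 - t1))

print(q10(rate_t1=2.0, rate_t2=6.0))  # 3.0: rate triples over 10 deg C
print(q10(rate_t1=2.0, rate_t2=2.4))  # 1.2: weak temperature response
```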

The researchers discovered that ants foraged more at high temperatures. However, when surface temperatures were too high (most commonly at 1 PM during summer months), ants could not forage and remained in their nests. At all three times of day, ants discovered more salt vials at higher soil temperatures. Ants also discovered more sugar vials at higher temperatures in the morning and evening, but not during the 1 PM surveys. Most interestingly, the slope of the curve was much steeper for salt discovery than for sugar discovery, indicating that higher temperature increased the salt discovery rate more than it increased the sugar discovery rate (three graphs on left).

When ants discover a high-quality resource, they recruit nestmates to the resource to help with the harvest. Recruitment to salt, but not to sugar, increased with temperature, indicating that ant demand for 0.5% salt rose more rapidly than ant demand for 1% sugar (three graphs above on right).

The researchers were concerned that the sugar concentrations were too low to excite much recruitment, so they replicated the experiments the following year using four different sugar concentrations. Ant recruitment was substantially greater to higher sugar concentrations, but was still only one-third to one-half of recruitment to 0.5% salt.


Ant recruitment (y-axis) to different sugar concentrations at a range of soil temperatures (x-axis). Q10 values are to the left of each line of best fit.

Three of the four most common ant species showed the salt and sugar preferences described above, but the other common species, Formica pallidefulva, actually decreased foraging at higher temperatures. The researchers suggest that this species is outcompeted by other, more dominant species at high temperatures, and is forced to forage at lower temperatures when fewer competitors are present.

In a warming world, ant performance will improve as temperatures rise, up to the ants’ thermal maximum, at which point performance will crash. Ants are critical to ecosystems, playing important roles as consumers and as seed dispersers. Thus many ecosystems in which ants are common (and there are many such ecosystems!) may function more or less efficiently depending on how changing temperatures influence ants’ abilities to consume and conserve essential nutrients such as salt.

note: the paper that describes this research is from the journal Ecology. The reference is Prather, R. M., Roeder, K. A., Sanders, N. J. and Kaspari, M. (2018), Using metabolic and thermal ecology to predict temperature dependent ecosystem activity: a test with prairie ants. Ecology, 99: 2113-2121. doi:10.1002/ecy.2445. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.