Snails grow large to fight fear

In a recent post (Jan 12), I discussed research showing that song sparrow parents reduce provisioning to their offspring when threatened by predators, ultimately reducing offspring survival rates. But in a turnabout that highlights the natural world’s dazzling diversity, a recent study by Sarah Donelan and Geoffrey Trussell revealed a very different impact of fear on the development of snail offspring. Donelan had worked as Trussell’s laboratory technician for two years and became fascinated by the egg capsules laid by the carnivorous snail Nucella lapillus, an ecologically important species in rocky intertidal communities. Earlier work had shown that predator-induced fear reduced snail feeding and growth rates, so Donelan decided that for her PhD she would investigate how predator-induced fear influences offspring development.


Adult Nucella alongside ca. 100 egg capsules. Credit: Sarah Donelan.

The researchers recognized that the fear environment experienced by parents before or during reproduction, and by the embryos during early development, could influence the growth and development of those embryos. At their research site along the Massachusetts, USA coast, the predatory green crab, Carcinus maenas, can be a source of fear for these adult and embryonic snails. Donelan and Trussell exposed snails to fear by housing one male and one female snail separately in adjacent perforated containers (each stocked with six blue mussels for food) set within a large plastic bucket. Each bucket also held a somewhat larger perforated container (the risk chamber) containing the dreaded green crab (plus two snails to feed it). Control risk chambers held two snails, but no crab.


Experimental setup with buckets containing egg capsules in perforated cages experiencing different exposure to fear. Credit: Sarah Donelan.

In late spring of 2015 and 2016, field-collected female and male snails were matched to create a total of 80 parental pairs. Donelan and Trussell set up experiments to explore the effects of parental experience with predation risk, embryonic experience with predation risk, and duration of embryonic experience.

Parent snails were exposed to a risk chamber (with a crab in the experimental group, and without a crab in the control group) for three days, and then placed together for four days (without risk) to mate. If an egg capsule was laid, the researchers removed it, and immediately exposed it to an experimental or control risk chamber for a week. Embryonic risk duration was further manipulated by continuing to expose half of the egg capsules to risk for a total of six weeks. The table below summarizes the treatments received by parents and offspring.



Mean (+ standard error) shell length (top graph) and tissue mass (bottom graph) of snail embryos exposed to predation risk. Parents were either exposed (solid circles) or not exposed (open circles) to risk before mating.


When parents were not exposed to risk, but their offspring were exposed, these offspring had shorter shells and reduced tissue mass compared to all other groups. When both parents and offspring were exposed to risk, offspring shell length increased by 8% and offspring mass increased by a whopping 40% over risk-exposed offspring whose parents were not exposed to risk (left data points in figures a and b). If embryos were not exposed to risk, parental exposure had no significant impact on embryonic development (right data points on figures a and b). Embryonic risk duration had no impact on development.


In addition, risk-exposed offspring of risk-exposed parents emerged from their egg capsules an average of 4.1 days sooner than other offspring.


Mean (+ standard error) number of days until emergence of snail offspring that experienced the presence or absence of predation risk during early development. Their parents were exposed to risk (solid circle) or no risk (open circle) before mating.

What could be causing these differences in size and rate of development? Donelan and Trussell hypothesized that embryonic snails could grow larger and more quickly if they were somehow able to reduce their metabolic rate. With a reduction in metabolic rate, more energy could be diverted to growth and development, resulting in larger and faster-growing snails. The researchers used an oxygen meter to measure oxygen consumption rates of individual egg capsules (from the eight different treatments in the first experiment) six weeks after deposition, about a week before embryos would begin to emerge. They exposed some of the capsules to predation risk during the experiment (current risk graph below), and left other capsules unexposed. When tested under risky conditions, capsules from parents who were exposed to risk, and that experienced risk as embryos during early development, had 56% lower metabolic rates than the other three groups (left graph), and metabolic rates similar to those of capsules tested without risk (right graph).


Mean (+ standard error) respiration rate of egg capsules that were (left graph) or were not (right graph) exposed to current predation risk.  During early development, the embryos in these capsules experienced risk or no risk, and were produced by parents exposed to risk (solid circles) or no risk (open circles) before mating.

Overall, parental experience with predation risk enhances offspring growth and development in the presence of risk. If the parents lack this exposure, risk-exposed offspring suffer the costs associated with small size and slower development. Currently Donelan and Trussell are trying to figure out what these costs are. Smaller snails have smaller energy reserves, may feed on a less diverse group of prey, and are less likely to remain in safer habitats than are larger juveniles. But we still don’t know whether these effects on early stages of life have lasting impacts as a snail gets older and larger. More generally, we don’t know whether there are similar interactions between parental and embryonic experiences of other stressors, most notably the environmental stresses already being imposed by climate change.

note: the paper that describes this research is from the journal Ecology. The reference is Donelan, S. C. and Trussell, G. C. (2018), Synergistic effects of parental and embryonic exposure to predation risk on prey offspring size at emergence. Ecology, 99: 68–78. doi:10.1002/ecy.2067. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Field gentian – when it’s good to be eaten

We tend to think of plants as victims – after all, any interested herbivore can simply walk, fly or crawl over to its favorite plant and begin munching. But not so fast! In reality, plants have a variety of ways they can make life difficult for potential herbivores. Plants can escape herbivores by simply growing in places that are not easily accessible (such as in cracks, or high enough to be out of a herbivore’s reach) or by growing at a time of year when herbivores are away from the plant’s habitat. Plants also use mechanical defenses such as thorns, or a diverse array of chemical defenses, to thwart overzealous herbivores. A third approach – tolerance – can take many forms. For example, following attack by a herbivore, some plants can increase photosynthetic rates or reduce the time until seed production. Tommy Lennartsson and his colleagues were interested in a particular form of tolerance that ecologists call overcompensation, in which damaged plants produce more seeds than undamaged plants.


Herbivores in action. Notice the difference in vegetation height inside and outside the pasture. Credit: Tommy Lennartsson.

Overcompensation is an evolutionary puzzle, because undisturbed plants produce fewer offspring than partially eaten plants. That outcome seems to fly in the face of the scientific principle that natural selection favors individuals with traits that promote reproductive success. Lennartsson and his colleagues investigated this evolutionary puzzle by comparing two subspecies of the herbaceous field gentian Gentianella campestris. The first subspecies, Gentianella campestris campestris (which we’ll just call campestris), has relatively unbranched shoot architecture when intact, growing to about 20 cm tall, but produces multiple fruiting branches when the dominant apical meristem is eaten. The second subspecies, Gentianella campestris islandica (which we’ll call islandica), is much shorter (about 5-10 cm tall), and always has a multi-branched architecture.


Two subspecies of field gentian – campestris (left) and islandica (right).

Environmental conditions and soils can vary dramatically, even on a small spatial scale. The field site was a gently sloped grassland in Sweden that had coarser, drier soil on the ridge, and finer, wetter and richer soil in the valley. This created a productivity gradient, with taller vegetation in the valley. The average height of all the vegetation was 15 cm in the high-productivity valley, 10 cm on the medium-productivity slope and 5 cm on the low-productivity ridge.

The researchers used this natural variation to set up an experiment that would allow them to explore hypotheses about why an undisturbed campestris is less successful than one that is partially-eaten. One hypothesis (the overcompensation hypothesis) is that campestris restrains branching to conserve resources, so that when it is grazed it has plenty of resources in reserve to be used for regrowth and the production of prolific branches, flowers and seeds. Limited branching and limited seed production of ungrazed campestris are simply a cost of tolerance, while overcompensation after damage maximizes reproductive success. A second hypothesis (the competition hypothesis) is that restrained branching allows the plant to grow tall, so it can compete better in ungrazed pastures than can the much shorter islandica. These two hypotheses are not mutually exclusive.

To test these two hypotheses, the researchers set up 2 × 2 m experimental plots in the valley (18 plots), on the slope (12 plots) and on the ridge (6 plots). They planted 2000 seeds per subspecies in each plot, which ultimately yielded about 20 plants of each subspecies per plot. Of course there were many other neighboring plant species in these plots. In the high-productivity valley plots, the neighboring plants were clipped to a height of 12 cm in six plots, to 8 cm in six plots and to 4 cm in six plots. In the medium-productivity plots (which naturally only grew to 10 cm), the researchers cut neighboring plants to 8 cm in six plots and to 4 cm in six plots. Finally, in the low-productivity plots, the researchers cut neighboring plants to 4 cm in all six plots. In mid-July, half of the gentian plants in each plot were clipped to the same height as the surrounding vegetation, while the remainder were not clipped.


Experimental plots from the valley (left), slope (middle) and ridge (right).  Black squares represent plots where neighboring plants were clipped to 12 cm, grey squares to 8 cm, and clear squares to 4 cm. Squares with slashes through them (left)  represent plots that were used for a different purpose.

The beauty of this experimental design is that by counting seeds, the researchers could assess the reproductive success of both subspecies under conditions of high competition (when surrounded by tall neighbors) and low competition (when surrounded by shorter neighbors). At the same time, clipping the two subspecies allowed the researchers to simulate grazing in these different competitive environments. Lennartsson and his colleagues found that unclipped islandica did better than unclipped campestris when surrounded by short or medium-height neighbors, but that islandica success plummeted when the neighbors were very tall (see the left graph below). Campestris reproductive success also dropped when surrounded by tall competitors, but not as much as did islandica, so that campestris produced twice as many seeds as islandica in the high-competition environment (also the left graph).

When plants were clipped to simulate grazing, campestris outperformed islandica in all three competitive environments. Campestris actually produced more seeds when it was clipped than when it was not clipped in the low and medium competition environments. Thus campestris overcompensated for grazing under conditions of low and moderate competition (see the right graph below).


Mean (+ standard error) seed production for unclipped (left graph) and clipped (right graph) field gentian subspecies in relation to surrounding vegetation height.  Sample sizes are in bars.

The researchers collected data on growth rates, development, survival probabilities and reproductive success for both subspecies under clipped and unclipped conditions at different levels of competition. They then used these data to create a population growth model in relation to the percentage of grazing (damage risk) at different levels of productivity. In these graphs, a stochastic growth rate of 1.0 (on the y-axis) indicates a stable population; above 1.0 the population will increase, and below 1.0 it will decline.
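The paper’s actual model is more elaborate, but the core idea of a stochastic growth rate – the geometric mean of yearly growth rates when grazing strikes at random with probability equal to the damage risk – can be sketched in a few lines of Python. The λ values used here are invented for illustration, not taken from the paper:

```python
import math
import random

def stochastic_growth_rate(lam_damaged, lam_intact, damage_risk,
                           years=10_000, seed=1):
    """Estimate the stochastic (geometric mean) growth rate when, each
    year, the population is grazed with probability damage_risk.
    lam_damaged / lam_intact are the annual growth rates in grazed
    and ungrazed years (illustrative values, not from the paper)."""
    rng = random.Random(seed)
    log_sum = 0.0
    for _ in range(years):
        lam = lam_damaged if rng.random() < damage_risk else lam_intact
        log_sum += math.log(lam)
    return math.exp(log_sum / years)

# An overcompensator like campestris grows *better* when damaged,
# so its stochastic growth rate rises with damage risk:
for risk in (0.0, 0.5, 1.0):
    print(risk, round(stochastic_growth_rate(1.2, 0.9, risk), 3))
```

With these made-up values, the growth rate climbs from 0.9 (declining) at zero risk toward 1.2 (increasing) at certain grazing – the qualitative pattern the model predicts for campestris.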


Population growth rate of both subspecies in relation to damage risk at different levels of productivity.  These models predict that the population will increase at growth rates above the dotted line (growth rate = 1.0) and decline below the dotted line.

This model shows that in high-productivity environments, campestris always does better than islandica (top graph). However, the model predicts that islandica will decline at any damage level (note in the top graph that all islandica damage values yield a growth rate below 1.0), while campestris will also decline except at very high damage risk. In medium- and low-productivity populations (middle and bottom graphs), islandica does better than campestris when damage risk is low, but the reverse is true at high damage risk.

So how do these results relate to the two hypotheses for why an undisturbed campestris is less successful than one that is partially eaten? Campestris overcompensated for damage by producing more seeds and having positive population growth under most levels of productivity. In contrast, islandica undercompensated when damaged, but produced more seeds than campestris when ungrazed, except in the high-productivity environment. These differences in responses support the hypothesis that restrained branching is favored by natural selection in environments where damage from grazing is common (the overcompensation hypothesis). But the superior performance by campestris in productive ungrazed environments supports the competition hypothesis.

Can we generalize these findings to other plants? Lennartsson and his colleagues point out that many short-lived grassland plants can’t grow tall enough to be effective competitors for light. These plants are thus restricted to environments where the surrounding plants are not very tall. Two factors commonly create conditions where there are short neighboring plants: grazing and unproductive (low nutrient) soils. When grazing is widespread, tolerance mechanisms such as overcompensation are favored by natural selection. When soils are unproductive, unrestrained branching is favored. Therefore, Gentianella campestris provides us with a natural experiment for testing hypotheses about how natural selection acts on plants to promote their reproductive success in a variable environment.

note: the paper that describes this research is from the journal Ecology. The reference is Lennartsson, T., Ramula, S. and Tuomi, J. (2018), Growing competitive or tolerant? Significance of apical dominance in the overcompensating herb Gentianella campestris. Ecology, 99: 259–269. doi:10.1002/ecy.2101. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.


Homing in on the micro range

I’ve always been fascinated by geography. As a child, I memorized the heights of mountains, the populations of cities, and the areas encompassed by various states and countries. I can still recite from memory many of these numbers – at least based on the 1960 Rand McNally World Atlas. Part of my fondness for geography is no doubt based on my brain’s ability to recall numbers but very little else.

Most geographic ecologists are fond of numbers, exploring numerical questions such as how many organisms or species are there in a given area, or how large an area does a particular species occupy? They then look for factors that influence the distribution and abundance of species or groups of species. Given that biologists estimate there may be up to 100 million species, geographic ecologists have their work cut out for them.

As it turns out, most geographic ecologists have worked on plants, animals or fungi, while relatively few have worked on bacteria and archaeans (a very diverse group of microorganisms from which eukaryotes are thought to have arisen).


Two petri plates with pigmented Actinobacteria. Credit: Mallory Choudoir.

Until recently, bacteria and archaeans were challenging subjects because they were so small and difficult to tell apart. But now, molecular techniques allow us to distinguish between closely related bacteria based on the sequence of bases (adenine, cytosine, guanine, and uracil) in their ribosomal RNA. Bacteria that are identical in more than 97% of their base sequence are described as belonging to the same phylotype, which is roughly analogous to being in the same species.
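As a rough illustration of how sequences might be binned into phylotypes at a 97% identity threshold, here is a toy greedy clustering in Python. Real pipelines align sequences and use dedicated tools; this sketch simply assumes pre-aligned, equal-length sequences:

```python
def percent_identity(seq1, seq2):
    """Fraction of positions sharing the same base (assumes the two
    sequences are already aligned and of equal length)."""
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return matches / len(seq1)

def assign_phylotypes(seqs, threshold=0.97):
    """Greedy clustering: each sequence joins the first phylotype whose
    representative it matches at >= threshold, else founds a new one."""
    representatives = []   # one representative sequence per phylotype
    labels = []            # phylotype index assigned to each input
    for s in seqs:
        for i, rep in enumerate(representatives):
            if percent_identity(s, rep) >= threshold:
                labels.append(i)
                break
        else:
            representatives.append(s)
            labels.append(len(representatives) - 1)
    return labels

# Toy example: b differs from a at 2 of 100 positions (98% identical,
# same phylotype); c differs at 10 positions (90%, new phylotype).
a = "A" * 100
b = "C" * 2 + "A" * 98
c = "C" * 10 + "A" * 90
print(assign_phylotypes([a, b, c]))  # → [0, 0, 1]
```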

As a postdoctoral researcher in Noah Fierer’s laboratory, Mallory Choudoir wanted to understand the geographic ecology of microorganisms. To do so, she and her collaborators collected dust samples from the trim above an exterior door at 1,065 locations across the United States (USA).


Dr. Val McKenzie collects a dust sample from the top of a door sill. Credit: Dr. Noah Fierer.

The researchers sequenced the ribosomal RNA from each sample to determine the bacterial and archaeal diversity at each location. Overall they identified 74,134 gene sequence phylotypes in these samples – that took some work.

On average, each phylotype was found at 70 sites across the USA, but there was enormous variation. By mapping the phylotypes at each of the 1,065 locations, the researchers were able to estimate the range size of each phylotype. They discovered a highly skewed distribution of range sizes, with most phylotypes having relatively small ranges, while only a very few had large ranges (see the graph below). As it turns out, we observe this pattern when analyzing range sizes of plant and animal species as well.


Mean geographic range (area of occupancy, AOO) for each phylotype in the study. The y-axis (density) indicates the probability that a given phylotype occupies a range of a particular size; if you draw a straight line down from the peak to the x-axis, you will note that most phylotypes had an AOO of less than 3,000 km².

Taxonomists use the term phylum (plural phyla) to indicate a broad grouping of similar organisms. Just to give you a feel for how broad a phylum is, humans and fish belong to the same phylum. Some microbial phyla had much larger geographic ranges than others. Interestingly, it was not always the case that the phylum with the greatest phylotype diversity had the largest range. For example, phylum Crenarchaeota had the greatest median geographic range (see the graph below), but ranked only 19th (out of 50 phyla) in number of phylotypes (remember that a phylotype is kind of like a species in this study).


Box plots showing range size distribution for individual phyla. Middle black line within each box is the median value; box edges are the 25th and 75th percentile values (1st and 3rd quartiles).  Points are outlier phylotypes. Notice that the y-axis is logarithmic.

With this background, Choudoir and her colleagues were prepared to investigate whether there were any characteristics that might influence how large a range would be occupied by a particular phylotype. We could imagine, for example, that a phylotype able to withstand different types of environments would have a greater geographic range than a phylotype that was limited to living in thermal pools. Similarly, a phylotype that dispersed very effectively might have a greater geographic range than a poor disperser.

The researchers expected that aerobic microorganisms (which use oxygen for their metabolism) would have larger geographic ranges than anaerobic microorganisms, many of which are actually poisoned by oxygen. The data below support this prediction quite nicely.


Geographic range size in relation to oxygen tolerance.  In this graph, and the graphs below, the points have been jittered to the right and left of their bar for ease of viewing (otherwise even more of the points would be on top of each other).

Some bacterial species form spores that protect them against unfavorable environmental conditions. The researchers expected that spore-forming bacteria would have larger geographic ranges than non-spore-forming bacteria.


Geographic range in relation to spore formation (left graph) and pigmentation (right graph).

Choudoir and her colleagues were surprised to discover exactly the opposite; the spore-forming bacteria had, on average, slightly smaller geographic ranges. Choudoir and her colleagues also expected that phylotypes that are protected from harsh UV radiation by pigmentation would have larger geographic ranges than unpigmented phylotypes – this time the data confirmed their expectations.

The researchers identified several other factors associated with range size. For example, bacteria with more guanine and cytosine in their DNA or RNA tend to have larger geographic ranges. Some previous studies have shown that a higher proportion of guanine and cytosine is associated with greater thermal tolerance, which should translate to a greater geographic range. Choudoir and her colleagues also discovered that microorganisms with larger genomes (longer DNA or RNA sequences) also had larger ranges. They reason that larger genomes (thus more genes) should correspond to greater physiological versatility and the ability to survive variable environments.

This study opens up the door to further studies of microbial geographic ecology. Some patterns were expected, while others were surprising and beg for more research. Many of these microorganisms are important medically, ecologically or agriculturally, so there are very good reasons to figure out why they live where they do, and how they get from one place to another.

note: the paper that describes this research is from the journal Ecology. The reference is Choudoir, M. J., Barberán, A., Menninger, H. L., Dunn, R. R. and Fierer, N. (2018), Variation in range size and dispersal capabilities of microbial taxa. Ecology, 99: 322–334. doi:10.1002/ecy.2094. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2017 by the Ecological Society of America. All rights reserved.

“Notes from Underground” – cicadas as living rain gauges

Given recent discussions between Donald Trump and Kim Jong-un about whose button is bigger, many of us with entomological leanings have revisited the question of what insects are most likely to dominate a post-nuclear world. Cicadas have a developmental life history that predisposes them to survival in the long term because some species in the eastern United States spend many subterranean years as juveniles (nymphs), feeding on the xylem sap within plants’ root systems. Magicicada nymphs live underground for 13 or 17 years, depending on the species, before digging out en masse, undergoing one final molt, and then going about the adult business of reproduction. This life history of spending many years underground followed by a mass emergence has not evolved to avoid nuclear holocausts while underground, but rather to synchronize emergence of billions of animals. Mass emergence causes predator satiation, an anti-predator adaptation in which predators are gastronomically overwhelmed by the number of prey items, so even if they eat only cicadas and nothing else, they still are able to consume only a small fraction of the cicada population.


Mass Magicicada emergence picturing recently-emerged winged adults, and the smaller lighter-colored exuviae (exoskeletons) that are shed during emergence. Credit: Arthur D. Guilani.

Less well known are the protoperiodical cicadas (subfamily Tettigadinae) of the western United States, which are abundant in some years and may be entirely absent in others. Jeffrey Cole has studied cicada courtship songs for many years, and during his 2003 field season he noted that localities that had previously been devoid of cicadas now hosted huge numbers of six or seven different species. He returned to those sites every year, and high diversity and abundance reappeared in 2008 and 2014. This flexible periodicity contrasts with that of their eastern Magicicada cousins, and he wanted to know what stimulates mass emergence.



Protoperiodical cicadas studied by Chatfield-Taylor and Cole.  Okanagana cruentifera (top) and Clidophleps wrighti (bottom). Credit Jeffrey A. Cole.

Cole and his graduate student, Will Chatfield-Taylor, considered two hypotheses that might explain protoperiodicity in southern California (where they focused their efforts). The first hypothesis is that cicada emergence is triggered by heavy rains generated by the El Niño Southern Oscillation (ENSO), a large-scale atmospheric system characterized by high sea temperature and low barometric pressure over the eastern Pacific Ocean. ENSO has a variable periodicity of roughly 4.9 years, which approximately corresponds to the timing Cole observed while doing fieldwork. The second hypothesis recognized that nymphs must accumulate a set amount of xylem sap from their host plants to complete development. Sap availability depends on precipitation, and this accumulation takes several years in arid habitats. So while ENSO may hasten the process, the key to emergence is a threshold amount of precipitation over a several-year timespan.

Working together, the researchers were able to identify seven protoperiodical species by downloading museum specimen data (including where and when each individual was collected) from two databases (iDigBio and SCAN). They also used data from several large museum collections, which gave them evidence of protoperiodical cicada emergences back to 1909. Based on these data, Chatfield-Taylor and Cole constructed a map of where these protoperiodical cicadas emerge.


Maps of five emergence localities discussed in this study.

The researchers tested the hypothesis that protoperiodical cicada emergences follow heavy rains triggered by ENSO by going through their dataset to see if there was a correlation between ENSO years and mass cicada emergences. Of 20 mass cicada emergences since 1918, only five coincided with ENSO events, which is approximately what would be expected with a random association between mass emergences and ENSO. Scratch hypothesis 1.

Let’s look at the second hypothesis. The researchers needed reliable precipitation data between years for which they had good evidence that there were mass emergences of their seven species. Using a statistical model, they discovered that 1181 mm was a threshold for mass emergences, and that three years was the minimum emergence interval regardless of precipitation. Only after 1181 mm of rain fell since the last mass emergence, summed over at least three years, would a new mass emergence be triggered.


Cumulative precipitation over seven time periods preceding cicada emergence.

The nice feature of this model is that it makes predictions about the future. For example, the last emergence in the Devil’s Punchbowl vicinity occurred in 2014. Since then, that area has averaged 182.2 mm of precipitation per year. If those drought conditions continue, the next mass emergence at that locality will occur in 2021, a longer interval than its historical average. Only time will tell. Hopefully Mr. Trump and Mr. Kim will be able to keep their fingers off of their respective buttons until then.
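The rule described in the paper – accumulate 1181 mm of rain since the last emergence, over at least three years – and the 2021 Devil’s Punchbowl prediction can be reproduced with a few lines of Python. As a simplification, this sketch assumes a constant annual rainfall rather than summing observed yearly totals:

```python
import math

RAIN_THRESHOLD_MM = 1181   # cumulative precipitation threshold from the paper
MIN_INTERVAL_YEARS = 3     # minimum interval between mass emergences

def predict_emergence_year(last_emergence, annual_rain_mm):
    """Predict the next mass-emergence year, assuming (as a
    simplification) the same rainfall total every year."""
    years_for_rain = math.ceil(RAIN_THRESHOLD_MM / annual_rain_mm)
    return last_emergence + max(years_for_rain, MIN_INTERVAL_YEARS)

# Devil's Punchbowl: last emergence 2014, 182.2 mm/year since then.
print(predict_emergence_year(2014, 182.2))  # → 2021
```

At 182.2 mm per year, seven years are needed to pass 1181 mm (6 × 182.2 = 1093.2 mm falls short), which is how the 2021 prediction arises. Note that even in very wet conditions the three-year minimum interval still applies.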

note: the paper that describes this research is from the journal Ecology. The reference is Chatfield-Taylor, W. and Cole, J. A. (2017), Living rain gauges: cumulative precipitation explains the emergence schedules of California protoperiodical cicadas. Ecology, 98: 2521–2527. doi:10.1002/ecy.1980. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2017 by the Ecological Society of America. All rights reserved.


Predators and livestock – “stayin’ alive.”

President Donald Trump was elected on a platform that included building a great wall whose purpose was to keep out unwanted intruders from the south, and that would be paid for (apparently magically) by these same intruders. The idea of building a great wall has been around for a long time; the Great Wall of China was constructed over a period of almost two thousand years to keep out unwanted intruders (this time from the north). Not surprisingly, the cost of that Great Wall was not borne by the unwanted intruders. More recently, in the 1880s, the government of Australia constructed a 5500 km fence designed to keep unwanted dingoes away from sheep that pasture in southeastern Australia. As Lily van Eeden describes, the Australian government spends about $10 million per year to maintain the fence, but there are almost no data comparing livestock losses on either side of it. Thus she and her colleagues decided to look at what is being done globally to evaluate the effectiveness of different methods of protecting livestock.


The Dingo fence across southeastern Australia. Credit Peter Woodard.

The researchers grouped livestock protection approaches into five categories: lethal control; livestock guardian animals such as dogs, llamas and alpacas; fencing; shepherding; and deterrents. Lethal control includes using poison baits and systematically culling populations of top predators. Deterrents include aversive conditioning of problem predators; chemical, auditory or visual repellents; and protection devices such as livestock protection collars.


A guardian dog emerges from the midst of its flock in Bulgaria. Credit: Sider Sedefchev.

Van Eeden and her colleagues then did a meta-analysis to see which approach worked best. You can check out my blog from Aug. 2, 2017 (“Meta-analysis measures multiple mycorrhizal benefits to plants”) for a more detailed discussion of meta-analyses. Very briefly, a meta-analysis is a systematic analysis of data collected by many other researchers. This is challenging because each study uses slightly different techniques and has different levels of rigor. For this meta-analysis, van Eeden and her colleagues used only two types of studies. One type is a before/after design, in which researchers kept data on livestock loss both before and after the mitigation treatment. The second type is a control-impact design, in which a control group was set aside that did not receive the mitigation treatment. To be included in the meta-analysis, each study also needed sample sizes (number of herds and/or number of years), means and standard deviations, and had to run for at least two months.

The researchers searched several databases (Web of Science, SCOPUS and European Commission LIFE project), Google Scholar, and also used more informal sources, to collect a total of more than 3300 records. However, after imposing the requirements for types of experimental design and data output, only 40 studies remained for the meta-analysis. Based on these data, all five mitigation approaches reduced predation on livestock. The effect size in the figure below compares livestock loss with the treatment to livestock loss without the treatment, so that a negative value indicates that the treatment is associated with reduced livestock loss. The researchers conclude that all five approaches are somewhat effective, but the large confidence intervals (the whiskers in the graph) make it difficult to unequivocally recommend one approach over another. The effectiveness of lethal control was particularly variable (hence the huge confidence interval), as three studies showed an increase in livestock loss associated with lethal control.
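The effect size used in this kind of meta-analysis is Hedges’ d, a standardized mean difference with a small-sample bias correction. A minimal sketch of the calculation for a single control-impact study follows; the example numbers are invented, not taken from the paper:

```python
import math

def hedges_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Hedges' d: standardized mean difference with small-sample
    correction. For livestock loss, a negative d means less loss
    with the mitigation treatment than without it."""
    df = n_treat + n_ctrl - 2
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n_treat - 1) * sd_treat**2 +
                          (n_ctrl - 1) * sd_ctrl**2) / df)
    j = 1 - 3 / (4 * df - 1)   # small-sample bias correction factor
    return j * (mean_treat - mean_ctrl) / s_pooled

# Hypothetical study: herds with guardian dogs lose 2 animals/year on
# average vs. 10 without, SD 4 in both groups, 10 herds per group.
d = hedges_d(2, 10, 4, 4, 10, 10)
print(round(d, 2))  # → -1.92 (treatment strongly reduces loss)
```

A full meta-analysis then combines one such d per study, weighting each by the inverse of its variance, which is how the pooled effect sizes and confidence intervals in the figure below are produced.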


Mean effect size (Hedges’ d) and confidence intervals for five methods used to mitigate conflict between predators and livestock.  More negative effect size indicates a more effective treatment. Numbers in parentheses are number of studies used for calculating mean effect size.
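Hedges’ d is a standardized mean difference with a correction for small samples, which is why the meta-analysis required each study to report means, standard deviations, and sample sizes. As a rough illustration of how it is calculated (my own sketch, not code from the paper):

```python
import math

def hedges_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Hedges' d: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                         / (n_treat + n_ctrl - 2))
    # Cohen's d: difference in means, in units of pooled SD
    d = (mean_treat - mean_ctrl) / s_pooled
    # Small-sample bias correction (Hedges' J)
    j = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)
    return d * j

# Hypothetical study: mean livestock losses of 2 (with treatment) vs 5 (without),
# SD = 1 in each group, 10 herds per group -> strongly negative d (treatment helps)
print(hedges_d(2, 5, 1, 1, 10, 10))
```

A negative d here, as in the figure, means livestock losses were lower with the mitigation treatment than without it.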

Finding that non-lethal management is as effective (or possibly more effective) than lethal control tells us that we should probably be very careful about intentionally killing large carnivores, since, in addition to being cool animals that deserve a right to exist, they also perform some important ecosystem services. For example, in Australia, there are probably more dingoes northwest of the dingo fence than southeast of it, so exclusion may be working. However, there is some evidence that there are also more kangaroos and rabbits southeast of the fence, which could be an unintended consequence of fewer predatory dingoes. Kangaroos and rabbits eat lots of grass, so keeping dingoes away could ultimately be harming the sheep populations by reducing forage. Dingoes may also kill or compete with invasive foxes and feral cats, both of which have been shown to drive native species to extinction, so excluding dingoes may increase fox and cat numbers, threatening native species. Van Eeden and her colleagues argue that different mitigation approaches work in different contexts, but that we desperately need evidence in the form of standardized evaluative studies to understand which approach is most suitable in a particular context.


Context-specific approach to managing the co-existence of predators and livestock.

In all contexts, cultural and economic factors interact in mitigating conflict between humans and carnivores. The dingo is officially labeled as a wild dog, which invaded Australia relatively recently (about 4000 years ago), so the public perception is that this species has a limited historical role. Other cultures may have a different view of their predators. For example, the Lion Guardian project in Kenya, which trains and supports community members to protect lions, has successfully built tolerance for lions by incorporating Maasai community cultural values and belief systems.

To use a phrase that President Trump recently forbade the Centers for Disease Control to use in their reports, our decisions about predator mitigation should be “evidence-based.” We need more controlled studies that address the success of different mitigation approaches in particular contexts. We also must understand the costs of removing predators from an ecosystem, as predator removal can initiate a cascade of unintended consequences.

note: the paper that describes this research is from the journal Conservation Biology. The reference is van Eeden, L. M., Crowther, M. S., Dickman, C. R., Macdonald, D. W., Ripple, W. J., Ritchie, E. G. and Newsome, T. M. (2018), Managing conflict between large carnivores and livestock. Conservation Biology, 32: 26–34. doi:10.1111/cobi.12959. Thanks to the Society for Conservation Biology for allowing me to use figures from the paper. Copyright © 2018 by the Society for Conservation Biology. All rights reserved.

Prey populations: the only thing to fear is fear itself

In reference to the Great Depression, Franklin Delano Roosevelt famously declared in his 1933 inaugural address that “the only thing we have to fear is fear itself.” Roosevelt was no biologist, but his words could equally apply to a different type of depression: the decline of animal populations that can be caused by fear.


Roosevelt’s inauguration in 1933. Credit: Architect of the Capitol.

Ecologists have long known that predators can depress prey populations by killing substantial numbers of their prey. But only in the past two decades or so have they realized that predators can, simply by their presence, cause prey populations to go into decline. There are many different ways this can happen, but, in general, a predation threat sensed by a prey organism can interfere with its feeding behavior, causing it to grow more slowly, or to starve to death. As one example, elk populations declined after wolves were introduced to Yellowstone National Park. There are many factors associated with this decline, but one factor is that fear of predators causes elk to spend more time scanning and less time foraging. Elk also tend to stay away from wolf hotspots, which are often places with good forage.

Liana Zanette recognized that ecologists had not considered whether predator presence can cause bird or mammal parents to reduce provisioning of their dependent offspring, thereby reducing offspring growth and survival and slowing population growth. For many years, she and her colleagues have studied the Song Sparrow, Melospiza melodia, on several small Gulf Islands in British Columbia, Canada. In an early study, she showed that playbacks of predator calls reduced parental provisioning by 26%, resulting in a 40% reduction in the estimated number of nestlings that fledged (left the nest). But, as she points out, Song Sparrow parents continue to provision their offspring for many days after fledging; she wondered whether continued perception of a predation threat during this later period further decreased offspring survival and ultimately population growth.


The Song Sparrow, Melospiza melodia. Credit: Free Software Foundation.

Zanette’s student, Blair Dudeck, did much of the fieldwork for this study. The researchers captured nestlings six days after hatching, weighed and banded them, and fit them with tiny radio collars. They then recaptured and weighed the nestlings within a few hours of fledging (at about 12 days post-hatching) to assess nestling growth rates.


Banded sparrow nestling with radio antenna trailing from below its wing. Credit: Marek C. Allen.

Three days after the birds fledged, Dudeck radio-tracked each one and set up three speakers approximately 8 meters from where it perched. For one hour, each youngster listened to recordings of calls made by predators such as ravens or hawks, followed, after a brief rest period, by one hour of calls made by non-predators such as geese or woodpeckers (or vice versa). During the playbacks, Dudeck observed the birds to record how often the parents visited and fed their offspring, and whether offspring behavior changed in association with predator calls; this included recording all of the offspring’s begging calls.


Blair Dudeck simultaneously uses a tracking device to locate Song Sparrows and a recorder mounted to his head to record their begging calls. Credit: Marek C. Allen.

Fear had a major impact on parental behavior. Parents reduced food provisioning visits by 37% when predator calls were played in comparison to when non-predator calls were played. They also fed offspring fewer times per visit, which resulted in 44% fewer meals in association with predator calls.


Mean number of parental provisioning visits (in one hour) in relation to whether predator (red) or non-predator (blue) calls were played. Error bars are 1 SE.

Hearing predator calls had no effect on offspring behavior – they continued to beg for food at a high rate, and did not attempt to hide.

Some parents were much more scared than others – in fact, some parents were not scared at all. The researchers measured parental fearfulness by subtracting the number of provisioning visits by parents during predator calls from the number of visits during non-predator calls. A more positive number indicated a more fearful parent (a negative number represents a parent who fed more in the presence of predator calls). The researchers discovered that more fearful parents tended to have offspring that were in poorer condition at day 6 and at fledging.


Offspring weight on day 6 (open circles) and at fledging (solid circles) in relation to parental fearfulness.  Higher positive numbers on x-axis indicate increasingly fearful parents.
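The fearfulness index itself is a simple difference in visit counts. As a minimal sketch (my own illustration with made-up visit numbers, not data from the study):

```python
def fearfulness(visits_nonpredator, visits_predator):
    """Parental fearfulness: provisioning visits during non-predator playback
    minus visits during predator playback. More positive = more fearful."""
    return visits_nonpredator - visits_predator

print(fearfulness(8, 5))   # positive: this parent cut back feeding under predator calls
print(fearfulness(4, 6))   # negative: this parent actually fed more under predator calls
```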

Importantly, more fearful parents tended to have offspring that died at an earlier age. Based on this finding, the researchers created a statistical model that compared survival of offspring that heard predator playbacks throughout late development with survival of offspring that heard non-predator playbacks during the same period. They estimated a 24% reduction in survival. Combined with their previous study on playbacks during early development, the researchers estimate that hearing predator playbacks throughout early and late development would reduce offspring survival by an amazing 53%.
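To get a rough feel for how the two life stages combine, here is a back-of-the-envelope sketch that simply assumes the early and late effects act independently and multiplicatively; this crude calculation lands near (though not exactly at) the 53% the researchers obtained from their full model:

```python
# Reduction in offspring numbers from predator playbacks during early development
early_reduction = 0.40   # 40% fewer fledglings (earlier study)
# Reduction in juvenile survival from predator playbacks during late development
late_reduction = 0.24    # 24% lower survival (this study)

# If the two effects are independent, the surviving fractions multiply
combined_survival = (1 - early_reduction) * (1 - late_reduction)   # 0.456
combined_reduction = 1 - combined_survival                         # 0.544

print(f"combined reduction: {combined_reduction:.1%}")
```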

This “fear itself” phenomenon can extend to other trophic levels in a food web. For example, recent research by Zanette and a different group of researchers showed that playbacks of large carnivore vocalizations dramatically reduced foraging by raccoons on their major prey, red rock crabs. When these carnivore playbacks were continued for a month, red rock crab populations increased sharply. This increase in crab population size was followed by declines in the crab’s major competitor, the staghorn sculpin, and in the crab’s favorite food, a Littorina periwinkle. Thus “fear itself” can cascade through the food web, affecting multiple trophic levels in important ways that ecologists are now beginning to understand.

note: the paper that describes this research is from the journal Ecology. The reference is Dudeck, B. P., Clinchy, M., Allen, M. C. and Zanette, L. Y. (2018), Fear affects parental care, which predicts juvenile survival and exacerbates the total cost of fear on demography. Ecology, 99: 127–135. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2018 by the Ecological Society of America. All rights reserved.

Successful scavengers

Scavengers have a bad reputation. They reputedly eat foul, smelly stuff, and are too lazy or incompetent to track down prey on their own, depending on “noble” beasts such as lions to kill prey, then sneaking a few bites when the successful hunters are not looking (or after they’ve stuffed themselves). Of course, the reality is that scavenging is simply one way that animals make a living. Many different species, including lions, will scavenge if given the opportunity, and from a human perspective, scavengers provide several important ecosystem services. As one example described by Kelsey Turner and her colleagues, ranchers in parts of Asia gave diclofenac, a non-steroidal anti-inflammatory drug, to their cattle, which had the unintended consequence of killing much of the vulture community. Losing vultures from the scavenging community increased the prevalence of rotting carcasses, which caused feral dog and rat populations to skyrocket, resulting in a sharp increase in human rabies cases in India. The take-home message is that we need to understand what factors influence scavenging behavior and scavenging success.


Golden eagle overwintering in South Carolina scavenges a pig carcass in a clearcut. Credit: Kelsey Turner.

Turner and her colleagues were particularly interested in whether the size of a carcass, the habitat in which an animal dies, and the time of year influence scavenging dynamics. The researchers varied carcass size by using three different species: rats (small), rabbits (medium) and pigs (large). Habitats were clearcuts, mature hardwood, immature pine, and mature pine forest. Time of year was divided into two seasons: warm (May – September) and cool (December – March). I should point out that the cool season was mild by many standards, as the research was conducted at the Savannah River Site in South Carolina, with a mean winter temperature of about 10°C.


Map of Savannah River Site showing the study sites and diverse habitats.

The researchers collected data by laying down carcasses of varying size in each of the habitats in both summer and winter. Each carcass was observed by a remote sensing camera that captured the scavenging events, allowing the researchers to identify the species of each scavenger and how long it took for the carcass to be detected and consumed.


Two coyotes captured by a remote sensing camera scavenging a pig carcass on a rainy day. Credit: Kelsey Turner.

Scavengers discovered 88.5% of the carcasses placed during the cool season, but only 65.4% of carcasses placed during the warm season. Carcass size was also important, with only 53.9% of rats detected, in contrast to 78.5% of rabbits and 97.8% of pigs. But habitat interacted with these general findings: for example, scavengers consumed all 23 rabbits placed in clearcuts, but only about 70% of rabbits placed in the other three habitats.

Detection time also varied with carcass size; in general, scavengers found pigs more readily than rats or rabbits. As the graphs below show, this relationship was quite complex. Pigs were detected much more quickly than the smaller carcasses in clearcuts, and somewhat more quickly in mature pine. Additionally, this difference between pigs and the other species was stronger in the warm season (left graph) than in the cool season (right graph). In fact, there was no difference in detection time of pigs, rabbits and rats placed in mature pine during the cool season.


Natural log of mean detection time (in hours) of rat, rabbit and pig carcasses in warm season (left) and cool season (right) in different habitats.  CC = clearcut, HW = mature hardwood, IP = immature pine, MP = mature pine.

Not surprisingly, pigs tended to persist longer (before being totally consumed) than the other two species. More strikingly, persistence time for all three species was much greater in the cool season than in the warm season.


Natural log of mean carcass persistence time (in hours) of rat, rabbit and pig carcasses during the cool and warm seasons.

Turner and her colleagues identified 19 different scavenger species; turkey vultures, coyotes, black vultures, Virginia opossums, raccoons and wild pigs were the most frequent. The first scavengers to detect pig carcasses were usually turkey vultures (76.0%) or coyotes (17.3%). An average of 2.8 different species scavenged at pig carcasses, in contrast to 1.5 at rabbit carcasses and 1.04 at rat carcasses. As you might imagine, most scavengers made short work of rat carcasses, so there was not much opportunity for other individuals or species to move in. Carcasses that persisted longer generally had a greater diversity of scavengers; for example, carcasses scavenged by 1, 2 or 3 species persisted, on average, for 90.5 hours, while those scavenged by 4, 5 or 6 species persisted, on average, for 216.5 hours.


A flock of turkey vultures in a clearcut surround and scavenge a pig carcass. Credit: Kelsey Turner.

Early ecologists viewed feeding relationships within an ecological community as a linear process in which plants extract nutrients from soils and capture energy through photosynthesis, passing both on to herbivores and then to carnivores, with considerable energy being lost in each transfer. Now we use a food web perspective, which considers the essential contributions of scavengers and decomposers (among others) to these feeding relationships. Carcasses decompose much more quickly during the warm season, returning calories and nutrients to lower levels of the food web. Microbial decomposers are, in essence, competing with vertebrates for carcasses, and because they are metabolically more active in warm months, they can extract a greater portion of the resources from a carcass than they can in winter. Slow decomposition in winter allows longer carcass persistence, leading to a greater number and greater diversity of scavengers. As a bonus for those who believe in human primacy, these same scavengers help to create a cleaner and healthier world.

note: the paper that describes this research is from the journal Ecology. The reference is Turner, K. L., Abernethy, E. F., Conner, L. M., Rhodes, O. E. and Beasley, J. C. (2017), Abiotic and biotic factors modulate carrion fate and vertebrate scavenging communities. Ecology, 98: 2413–2424. doi:10.1002/ecy.1930. Thanks to the Ecological Society of America for allowing me to use figures from the paper. Copyright © 2017 by the Ecological Society of America. All rights reserved.