Monitoring Shifts in Ecosystems with Computer Vision


<h3 style="text-align: center;">By <a href="">Gaja Klaudel</a> and <a href="" target="_blank" rel="noopener noreferrer">Jędrzej Świeżewski, PhD</a></h3> <h2>Climate crisis at our doorstep</h2> The effects of the climate crisis are here. All around the world, flash floods, landslides, heat waves, and droughts can be witnessed. Central Europe is no different. In a recent study, scientists showed that large swaths of Polish forests are bound to undergo profound change as the climate makes them unsuitable for the most important tree species. Fortunately, the authors report that with immediate, large-scale efforts we could counteract those changes and possibly mitigate their negative effects. For those actions to gain support and to be applied in an informed way, it is vital that we are able to monitor and understand the changes in Polish forests’ ecosystems as they take place. In a recent effort, we partnered with Dr Radosław Puchałka (UMK) and Dr Marcin Dyderski (IDPAN) to establish a new method of assessing shifts in the natural cycles of the herbaceous plant species forming the forest understory. The idea is to use several widespread forest flowers and the annual dynamics of their development as proxies for the shifts experienced by the entire ecosystem. To draw relevant insights from the data, e.g., about the timing of a particular species’ blooming, we needed to build computer vision models able to identify the developmental stage of understory plants in pictures. <h2>Why is it important?</h2> In recent decades, human activity has led to unprecedented changes in the functioning of ecosystems on Earth. During this time, we’ve learned a lot about how plants react to climate change. However, most studies have concerned trees, as keystone elements of forests. Since the 1990s, we’ve known that coniferous tree species will retreat from Central Europe, and that this will change both forests and forestry-related economics.
Dr Dyderski’s study showed that Scots pine, Norway spruce, and silver birch could retreat from >50% of their contemporary range by 2070. Three years later, Dr Puchałka showed that most of the negative changes in tree species’ ranges expected by the 2070s will manifest as early as the 2050s, indicating an urgent need to counteract climate change. However, the effects of the changing climate are not limited to trees; they also affect herbaceous plants. Despite their small size, understory plants in forests are responsible for 20% of whole matter cycling and comprise a relevant part of forest biodiversity. Unfortunately, these species are insufficiently studied; therefore, the team led by Dr Puchałka and Dr Dyderski aimed to assess the responses of forest understory plants to climate change, both in terms of their geographic distributions and their phenology, i.e., the dates of flowering and fruiting. As forest herbs’ seasonal dynamics strongly depend on both tree leaf development and interactions with insects, shifts in phenology might decrease their chances of reproductive success. <h3>Changing of the Seasons</h3> The conditions for the occurrence of early spring plants are shaped by the meteorological conditions of early spring. In many locations within the temperate climate zone, earlier flowering of early spring understory plants and earlier leaf flushing of trees, resulting from climate warming, are observed. Scientific research aimed at estimating the effect of phenology shifts on the functioning of ecosystems has shown inconclusive or even contradictory results. Some studies have shown that co-occurring species have similar phenological responses to global warming. In such a scenario, the dates of phenophase occurrences would change without significantly affecting interspecies interactions. Other studies suggest that climate change will disrupt the synchrony in the life cycles of co-occurring species.
According to this more pessimistic scenario, climate change may result in a mismatch between flowering and pollinator availability. Early spring plants are often found in large numbers, making them a substantial food base for insects, e.g., honey bees. Disruption of the synchronization between their flowering and the appearance of pollinators, or flowering in weather conditions unfavorable for flower-visiting, may leave flowers unpollinated and decrease insect populations. Moreover, the earlier development of leaves on trees can significantly reduce the amount of light reaching the understory and, consequently, the photosynthesis rate. This, in turn, may adversely affect the quality of seeds and the amount of carbon storage necessary for growth and vegetative reproduction in the next growing season. <h3>Citizen Science</h3> The development of web-based citizen science in recent years makes it possible to collect information about spatio-temporal species occurrences on an unprecedented scale. Thanks to the photos contributed by citizens to databases such as iNaturalist, it is possible to observe what satellite observations cannot capture, such as the layer of herbaceous vegetation under a dense canopy. The limitation of such research is the time-consuming analysis of many thousands of photographs. Machine learning methods can help automate this analysis, enabling large-scale phenological studies of many species. Thus, they can help us understand the impact of climate change on individual species’ phenological responses and, consequently, on the functioning of ecosystems. From a practical point of view, such results can benefit the mitigation of the adverse effects of climate change on biodiversity and forest management. <h2>Dataset is key</h2> The dataset provided by Dr Puchałka and Dr Dyderski consisted of 17,329 images of the undergrowth. The images came from citizen-science platforms.
They were diverse in size, quality, lighting, and perspective, among other aspects. They focused on four species that are easy to recognize and widespread: <i>Anemone nemorosa</i>, <i>Anemone ranunculoides</i>, <i>Convallaria majalis,</i> and <i>Maianthemum bifolium</i>. <img class="size-full wp-image-11740" src="" alt="Anemone nemorosa, Anemone ranunculoides, Convallaria majalis, Maianthemum bifolium" width="1045" height="414" /> <em>Anemone nemorosa, Anemone ranunculoides, Convallaria majalis, Maianthemum bifolium</em> Although most of the images were of good quality and appropriately sized, this dataset turned out to be quite challenging. The main challenge was a very severe class imbalance, both per species and per the plant’s life cycle phase (we focused on blooming and fruit-bearing). The largest class imbalance reached a ratio of 43:1 (for <i>Anemone nemorosa</i>: fruit-bearing vs. no fruit). Moreover, the two phases were not disjoint: in some cases, an image presented a specimen bearing fruits while still displaying flowers. <h2>Modeling approaches</h2> We considered several approaches to designing models to capture the information in the data. <h3>One model to rule them all!</h3> One approach to the problem is creating one model with four classes, describing every possible state that can occur in an image. Since the entire dataset is not very large, we used transfer learning and various image augmentations from the <a href="" target="_blank" rel="noopener noreferrer">albumentations</a> library, as well as the <a href="" target="_blank" rel="noopener noreferrer">timm</a> library. See our usage example to help quickly test various architectures. Since the dataset was imbalanced, we experimented with techniques such as oversampling and focal loss. The first results reached 75% accuracy.
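The two imbalance-handling techniques mentioned above can be sketched in a few lines. Below is a minimal pure-Python illustration (the function names are ours, not from our training code): oversampling weights each sample inversely to its class frequency so minority-class images are drawn more often, while focal loss down-weights easy, already-well-classified examples so that rare, hard cases dominate the gradient.

```python
import math

def oversampling_weights(labels):
    """Per-sample weights for an oversampler: each sample is weighted
    inversely to its class frequency, so every class is drawn with
    equal probability on average."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return [1.0 / counts[y] for y in labels]

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction; p is the predicted probability
    of the positive class (assumed strictly in (0, 1)), y the true label.
    The (1 - p_t)**gamma factor shrinks the loss of easy examples."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With the 43:1 imbalance above, each majority-class sample would receive 1/43 of a minority-class sample’s weight; in practice such weights would feed something like PyTorch’s `WeightedRandomSampler`, and the focal term would replace plain cross-entropy in the training loop.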
<img class="size-full wp-image-11744 aligncenter" src="" alt="" width="811" height="651" /> &nbsp; While the overall accuracy seems like a promising first result, a quick look at the confusion matrix delivers a strong warning: don’t trust this result! Notice that the initial decision to select the validation set randomly resulted in a heavily imbalanced validation set - to the point that it contained no images with both fruit and flowers! The model did learn something about distinguishing whether a plant was blooming, but it learned little about fruit-bearing. The accuracy reported above was misleadingly inflated by this unfortunate way of selecting the validation set. We mention it here as it has been a recurrent theme in discussing the performance of machine learning models: <b>Until you understand how the evaluation method was constructed, don’t trust simple summary statistics such as accuracy</b>. In our case, we made sure to pick validation sets with equal counts of the considered classes. This way, we could better assess which classes the models struggle with and which they handle well. <h3>Model per lifecycle stage</h3> Taking into consideration the key foreseen application of the models we worked on, we experimented with training separate models for detecting the presence of flowers and the presence of fruits. In these cases, the two subsets became much less imbalanced: <img class="alignnone size-full wp-image-11752 aligncenter" src="" alt="" width="474" height="209" /> &nbsp; <h4>Presence of flowers</h4> Quickly testing a few robust yet simple architectures, utilizing transfer learning, a rather rich set of augmentations, and some oversampling, we reached a good performance at a balanced accuracy of 85%. <img class="size-full wp-image-11746" src="" alt="" width="596" height="405" /> Confusion matrix with results calculated on a balanced validation set, for a model predicting if the images include flowers.
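As an aside, the class-balanced validation split we settled on can be sketched in plain Python (the function name and signature are ours, for illustration): draw exactly the same number of examples from each class into the validation set and leave the rest for training.

```python
import random

def balanced_validation_split(samples, labels, n_per_class, seed=42):
    """Pick a validation set with exactly n_per_class examples of each
    class; everything else goes into the training set."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    val_idx = set()
    for y, idxs in by_class.items():
        if len(idxs) < n_per_class:
            raise ValueError(f"class {y!r} has only {len(idxs)} samples")
        val_idx.update(rng.sample(idxs, n_per_class))
    val = [samples[i] for i in sorted(val_idx)]
    train = [samples[i] for i in range(len(samples)) if i not in val_idx]
    return train, val
```

On such a validation set, plain accuracy and balanced accuracy coincide, and the confusion matrix shows every class’s failure modes instead of hiding the rare ones.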
While those results don’t seem outstanding at first glance, a closer look at the dataset revealed many unforeseen challenges related to the diversity of the images used. <h4>Presence of fruits</h4> This subtask was bound to be more challenging, as indicated both by the larger class imbalance and the smaller overall size of the minority class. Running quick experiments, we also reached an encouraging result with a balanced accuracy of 81%. <img class="size-full wp-image-11748" src="" alt="" width="590" height="407" /> Confusion matrix with results calculated on a balanced validation set, for a model predicting if the images include fruits. A closer study of the above models’ performance, the data itself, and the foreseen applications of the models pointed us in the direction of analyzing each of the four species individually. <h3>A model per species</h3> For each of the species, we built a separate model (using the same small robust architecture and the same set of other techniques) to study the key issues machine learning models face in understanding the studied dataset. <h4><i>Anemone nemorosa</i></h4> <img class="size-full wp-image-11738 aligncenter" src="" alt="" width="1174" height="619" /> &nbsp; The models are clearly not good enough to be useful. There are two key insights from this experiment:  <ol><li style="font-weight: 400;" aria-level="1"><i>Anemone nemorosa</i> has the highest class imbalance of the four, and </li><li style="font-weight: 400;" aria-level="1">its fruits seem hard to distinguish (which triggered a separate inquiry - see below).</li></ol> <h4><i>Anemone ranunculoides</i></h4> <img class="size-full wp-image-11732 aligncenter" src="" alt="" width="1206" height="637" /> &nbsp; Even though the models are slightly better than for <i>Anemone nemorosa</i>, the results are not good. The minority class in both cases was limited to a very small number of examples, which is especially problematic considering the diversity of the images themselves.
<h4><i>Convallaria majalis</i></h4> <img class="size-full wp-image-11734 aligncenter" src="" alt="" width="1187" height="648" /> &nbsp; Interestingly, the model focusing on fruits performs better than the one looking for flowers! There are several reasons for this behavior. The first is that the fruits of <i>Convallaria majalis</i> are rather easily distinguishable from the background and have a larger collective size than the fruits of both <i>Anemone</i> species. On the other hand, since both the fruits and flowers of <i>Convallaria majalis</i> are small, they often occupy a small portion of the images. Hence, after rescaling, many details signifying their presence can be lost, making it harder for models to identify them. <h4><i>Maianthemum bifolium</i></h4> &nbsp; <img class="size-full wp-image-11736 aligncenter" src="" alt="" width="947" height="519" /> For <i>Maianthemum bifolium</i>, both models obtained quite promising results, with an accuracy of around 80%. This was clearly the easiest species for the models to understand. To understand the discrepancy in accuracy when detecting the presence of fruits across the species, we formulated two hypotheses: <ol><li style="font-weight: 400;" aria-level="1">The number of samples in the minority class was simply too small for a machine learning model to pick up their distinctive features, considering the diversity of the dataset.</li><li style="font-weight: 400;" aria-level="1">The fruits of both <i>Anemone</i> species are systematically different (and harder to grasp) than those of <i>Convallaria</i> and <i>Maianthemum</i>.</li></ol> The first hypothesis can only be verified once we collect a larger dataset, which is within the scope of future studies.
To understand the justification for the second one, let’s have a look at sample images: <img class="size-full wp-image-11730" src="" alt="Anemone nemorosa (visible is both a flower and fruit forming inside it), Convallaria majalis, Maianthemum bifolium fruits" width="1144" height="315" /> <em>Anemone nemorosa</em> (visible are both a flower and a fruit forming inside it), <em>Convallaria majalis</em>, <em>Maianthemum bifolium</em> fruits <i>Anemone</i> fruits are single, tiny green objects, extremely hard to distinguish from green undergrowth. The <i>Convallaria</i> and <i>Maianthemum</i> fruits, while also small, appear in well-organized groups and often have a distinctive color. But is this the reason, or just intuition? To verify it, we created a collection of purposefully downsampled training sets for each species, so that each had exactly the same class balance as the others. After training models on these sets, we confirmed that the <i>Convallaria</i> and <i>Maianthemum</i> models have significantly higher accuracies (by 36% in each case!), showing that it is the content of the images that makes the difference. <h2>Summary - Computer Vision, Climate Change, and Biodiversity</h2> In this project, we partnered with ecologists from UMK and IDPAN to help develop a tool that will open the possibility of using citizen science data for monitoring changes in forest ecosystems, potentially with unprecedented spatial and temporal precision. This is important in the age of increasing data availability, as it opens a way to discover plant reactions to climate change that were previously impossible to assess. The results we obtained are promising, and we look forward to developing them further. Our contribution will substantially support the assessment of plant reactions to climate change, further supporting the development of novel solutions for nature conservation and for mitigating the effects of climate change.
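For completeness, the downsampling experiment described above - matching one species’ class balance to another’s before training, so that image content is the only remaining variable - can be sketched as follows (plain Python; the function name and signature are ours, for illustration):

```python
import random

def match_class_balance(pos_idx, neg_idx, target_ratio, seed=0):
    """Downsample the majority (negative) class so the negative:positive
    ratio equals target_ratio - e.g. the ratio observed for another
    species - making per-species training sets directly comparable."""
    rng = random.Random(seed)
    keep = min(len(neg_idx), round(len(pos_idx) * target_ratio))
    return list(pos_idx), rng.sample(list(neg_idx), keep)
```

Training each species’ model on sets prepared this way removes class balance as a confounder, which is what let us attribute the remaining accuracy gap to the appearance of the fruits themselves.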
Appsilon is involved in several projects applying machine learning techniques to enhance ecological analysis. Learn more about these and other projects on our blog. See how we use <a href="">AI for biodiversity conservation in Gabon</a>, and learn how to use <a href="">Computer Vision for genetic research and 'predicting' seed viability</a>.
