CNN For Image Classification: Does The Neural Network Really See The Seeds?

By:
Gaja Klaudel
October 17, 2021

Open a bag of garden variety seeds and take a close look. What do you see? Is it a healthy seed? A broken seed? Could you accurately predict if that seed will grow? That's what the Świeżewski lab sought to answer using Computer Vision models. And as it turns out, it can be done! By identifying morphological parameters of thale cress (<i>Arabidopsis thaliana</i>) seeds and applying a CNN to perform tasks like image segmentation and image classification, the Świeżewski lab was able to predict seed dormancy. In short, a CNN for image classification can predict whether a seed will germinate or stay dormant from just a photograph.

<img class="aligncenter wp-image-8460 size-medium" src="https://wordpress.appsilon.com/wp-content/uploads/2021/10/thale-cress-334x500.png" alt="artistic drawing of Arabidopsis thaliana, the plant species used in the CNN Computer Vision model" width="334" height="500" />

The predictions were made based on images of seeds taken before germination. The best-performing model, EfficientNet B3, achieved 70% accuracy on a dataset containing only around 3,000 low-quality images. The images and models were created by the Świeżewski lab, hosted at the Institute of Biochemistry and Biophysics at the Polish Academy of Sciences. The article below shares insights on improvements made to the model, the inclusion of a new dataset, and how both play a role in improving accuracy.

<ul><li><a href="#anchor-1" rel="noopener noreferrer">Convolutional Neural Networks (CNN) for image classification</a></li><li><a href="#anchor-2" rel="noopener noreferrer">Datasets</a></li><li><a href="#anchor-3" rel="noopener noreferrer">Architecture changes</a></li><li><a href="#anchor-4" rel="noopener noreferrer">Blurriness and sharpness experiments</a></li><li><a href="#anchor-5" rel="noopener noreferrer">What did the model really see?</a></li><li><a href="#anchor-6" rel="noopener noreferrer">Lessons learned</a></li></ul>

<h2 id="anchor-1">Convolutional Neural Networks (CNN) for image classification</h2>

Computer Vision is the artificial automation of information accumulation and interpretation tasks that are typically performed by biological visual systems. It's a complex feat of machine learning that requires training computers in contextualization, something most biological systems absorb through years of experience.

<blockquote><strong>Get started on your first Computer Vision project with Appsilon's <a href="https://appsilon.com/image-classification-tutorial/" target="_blank" rel="noopener noreferrer">image classification tutorial</a>.</strong></blockquote>

Neural networks allow computer programs to identify patterns and solve problems by mimicking a biological brain. They are often described as the soul of deep learning algorithms. Neural networks work in a similar fashion to your brain's neurons, communicating with one another through layers of context development. Typically, this starts with simple patterns and develops into greater complexity as it progresses. The artificial neurons in a neural network pass information along through layers of nodes, the digital equivalent of a soma. If a criterion is met, that piece of information is passed along to the next layer of nodes. If it fails to meet the set threshold, the information is not passed along and the process either terminates or adjusts accordingly.
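To make that layered flow of information more concrete, here is a minimal PyTorch sketch of a small CNN classifier. The layer sizes and the two output classes (dormant vs. germinating) are illustrative assumptions, not the architecture used by the Świeżewski lab.

<pre><code class="language-python">import torch
import torch.nn as nn

class TinySeedCNN(nn.Module):
    """Illustrative two-class CNN; not the lab's actual model."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns such as edges
            nn.ReLU(),                                     # activation acts as the "pass it along" criterion
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # combinations of simpler patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),                    # one score (logit) per class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A single 224x224 RGB image flows through the layers and comes out
# as two logits, one for "dormant" and one for "germinating".
logits = TinySeedCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
</code></pre>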
<a href="https://appsilon.com/convolutional-neural-networks/" target="_blank" rel="noopener noreferrer">Convolutional Neural Networks</a> are a class of neural networks that deal with data that have grid-like structures. Examples of grid-like data with variable dimensions include time-series graphs (1D), images (2D), and elevation models (3d). One of the most common applications for Computer Vision is image and video recognition, but there are many ways CNN can be applied to deep learning. <h2 id="anchor-2">Datasets</h2> <img class="size-full wp-image-8465" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ff8a8912184dd3aaae9_old-dataset-of-seed-images.webp" alt="old dataset of seed images" width="634" height="640" /> The old dataset containing images of thale cress seeds These images show two sets of data - the old (above) and the new (below). At first glance, there’s not much difference between the two. But when you take a closer look you’ll notice a few key differences. <img class="size-full wp-image-8464" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ff93558cfc06df17212_new-dataset-of-seed-images.webp" alt="seed images from the new dataset" width="555" height="535" /> The new dataset containing images of thale cress seeds For example, you might notice the difference in colors, brightness, and sharpness. The new dataset, although darker in hue, has a more homogenous background and slightly better resolution. The new dataset contains twice the pixel count as the old dataset, which can be seen in the figures below. <img class="aligncenter size-full wp-image-8455" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ffa6f892f05d57f5072_graph-comparisons-showing-pixel-count-density-of-each-dataset.webp" alt="graph comparisons showing pixel count density of each dataset" width="1484" height="669" /> <h2 id="anchor-3">Architecture changes</h2> Several standard architectures were tested, including different versions of VGG, SquizeNet, DenseNet, and ResNet. As it turns out, ResNet50 was able to achieve a significant accuracy improvement at 77%, over the old dataset using EfficientNet B3 with 70%. To make an accurate comparison of the new to the old, the new model was trained on ResNet50 using the old dataset. It obtained similar results, at around 70% accuracy - which suggests that the lowest quality data also brings about the worst results, even though the first dataset is somewhat larger (~3000 images). <img class="size-full wp-image-8458" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ffc4d7961f82670965c_new-model-results-obtained-on-new-dataset.webp" alt="new model results" width="1232" height="310" /> The new model results including losses, accuracy, and error rate. Results were obtained using the new dataset. <h2 id="anchor-4">Blurriness and sharpness experiments</h2> One of the key differences in the new dataset is the sharpness of the images. I was curious to investigate if it affected the training of the neural network. And if it did, how much would it matter? <blockquote><strong>Leverage your experience and existing high-performance deep learning models with an <a href="https://appsilon.com/transfer-learning-introduction/" target="_blank" rel="noopener noreferrer">introduction to Transfer Learning</a>. 
<h2 id="anchor-4">Blurriness and sharpness experiments</h2>

One of the key differences in the new dataset is the sharpness of the images. I was curious to investigate whether it affected the training of the neural network, and if it did, how much it would matter.

<blockquote><strong>Leverage your experience and existing high-performance deep learning models with an <a href="https://appsilon.com/transfer-learning-introduction/" target="_blank" rel="noopener noreferrer">introduction to Transfer Learning</a>.</strong></blockquote>

To test this on the new data, we chose to decrease the quality of the new set of images and observe the effect on the model’s performance. For the alterations, we chose to blur and sharpen the images. To modify the images, the Albumentations library was used, in particular its Gaussian blur and sharpen transforms. Below is an example of the implementation:

<script src="https://gist.github.com/MicahAppsilon/ae05b94eaa31ae600ba79177e186d075.js"></script>

Unfortunately, while the blurred images looked the way they were supposed to, the sharpening process failed to sharpen the images sufficiently.

<img class="aligncenter wp-image-8454 size-full" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ffda8912184dd3aabb2_examples-of-sharpned-blurred-and-original-images-respectively-from-the-new-dataset.webp" alt="examples of sharpened, blurred, and original images respectively from the new dataset used for the CNN image classification" width="833" height="835" />

Both transformations had a significant negative effect on the predictions, but the models still performed better than the “base” XGBoost model, which reached roughly 60% accuracy. The best model without dataset transformations achieved 77% accuracy; the blurred model achieved 67%, while the sharpened model reached 71%. These results show that resolution has an important impact on classification accuracy. They also indicate that although sharpened images don’t necessarily look good to the human eye, they are more useful for Computer Vision models than their blurred counterparts.

Such an outcome is not surprising, given that CNN models are strongly focused on capturing diverse edges in the input data. In this case, the heavy pixelation still allows the model to pick up small edges, as the images have low complexity and minimal detail. It shows that even imperfect data can be useful and that contemporary CNN models can still extract relevant information from it. It also emphasizes the impact resolution has on model training. However, even with low resolution, the model's predictions produced valuable output and should not be disregarded. It is worth the time and effort to experiment with neural networks even if a dataset is small and of low quality.
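Since the gist above is an external embed, here is a self-contained sketch of the kind of Albumentations blur/sharpen pipeline the experiment describes. The file name and the parameter ranges are illustrative assumptions and may not match the values used in the original gist.

<pre><code class="language-python">import numpy as np
import albumentations as A
from PIL import Image

# Hypothetical file name for a single seed image.
image = np.array(Image.open("seed.png").convert("RGB"))

# Degrade the image in two different ways; the parameter ranges are assumptions.
blur = A.Compose([A.GaussianBlur(blur_limit=(3, 7), p=1.0)])
sharpen = A.Compose([A.Sharpen(alpha=(0.4, 0.6), lightness=(0.8, 1.0), p=1.0)])

blurred = blur(image=image)["image"]
sharpened = sharpen(image=image)["image"]

# Save both versions for a quick visual check, as in the comparison figure above.
Image.fromarray(blurred).save("seed_blurred.png")
Image.fromarray(sharpened).save("seed_sharpened.png")
</code></pre>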
<h2 id="anchor-5">What did the model really see?</h2>

Computer Vision works similarly to biological sight recognition. A field of view is taken as input. Properties like color, edges, shape, and sharpness are registered by the ‘brain’ and interpreted within the context of experience or training. That being said, show one image to a group of people and what they see and how they interpret it will likely differ. An interesting aspect of Computer Vision modeling is investigating what the model is actually looking at in the images when making predictions.

<blockquote><strong><a href="https://appsilon.com/pp-yolo-object-detection/" target="_blank" rel="noopener noreferrer">PP-YOLO object detection</a> - is it really an improvement over YOLOv4?</strong></blockquote>

The Explainable AI field has been trying to answer this question for some time, aiming to make neural models’ results interpretable for humans. A higher level of interpretation is not always possible because of the complexity of the model, the problem being solved, or the decision-making process. Currently, GradCAM is one of the most popular methods for checking what vision models focus on during inference.

<h3>GradCAM</h3>

GradCAM (Gradient-weighted Class Activation Mapping) visualizes the gradients of a target class flowing into the final convolutional layer of a neural network. This produces a coarse map that highlights the regions of the image that contribute most to the predicted class score. For the updated model, the SmoothGradCAM++ method was selected from the <a href="https://github.com/frgfm/torch-cam">TorchCAM</a> implementation for PyTorch.

As it turned out, the updated model has a keen eye. It always looks at the focal point of the picture - the seed. It doesn’t focus on irrelevant areas of the image or develop a wandering eye. I suppose one could say Computer Vision syndrome doesn’t apply to our Computer Vision model.

In the images below you can see examples of the activations. The rightmost image is the original. The leftmost image shows a heatmap of the activations, with the middle image presenting an overlay mask of the heatmap on top of the original image.

<img class="aligncenter wp-image-8456 size-full" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ffe6bdc8f59ff878605_heat-activations.webp" alt="heatmap of activations indicating where the CNN model is looking" width="844" height="865" />

GradCAM and its variations can be a useful tool in the analysis of models with convolutions, not only for analyzing a single model’s behavior but also for model comparisons and debugging. Although GradCAM gives us interesting insights into our models, it should be noted that it is a heuristic tool for looking into model predictions and should be taken with a grain of salt.
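As a starting point for similar inspections, here is a minimal sketch of SmoothGradCAM++ applied to a classifier with TorchCAM, closely following the library's documented usage. The file name and the pretrained ResNet50 backbone are illustrative assumptions (a pretrained model stands in for the fine-tuned seed model), and the import paths can differ between TorchCAM versions.

<pre><code class="language-python">from torchvision.io import read_image
from torchvision.models import resnet50
from torchvision.transforms.functional import normalize, resize, to_pil_image
from torchcam.methods import SmoothGradCAMpp
from torchcam.utils import overlay_mask

# Illustrative: a pretrained ResNet50 stands in for the fine-tuned seed model.
model = resnet50(weights="IMAGENET1K_V2").eval()
cam_extractor = SmoothGradCAMpp(model)

# Hypothetical file name for a single seed image.
img = read_image("seed.png")
input_tensor = normalize(
    resize(img, (224, 224)) / 255.0,
    [0.485, 0.456, 0.406],
    [0.229, 0.224, 0.225],
)

# Forward pass, then retrieve the activation map for the predicted class.
out = model(input_tensor.unsqueeze(0))
activation_map = cam_extractor(out.squeeze(0).argmax().item(), out)

# Overlay the heatmap on the original image, as in the figure above.
result = overlay_mask(
    to_pil_image(img),
    to_pil_image(activation_map[0].squeeze(0), mode="F"),
    alpha=0.5,
)
result.save("seed_gradcam_overlay.png")
</code></pre>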
<h2 id="anchor-6">Lessons learned using CNN for image classification</h2><ol><li style="font-weight: 400;" aria-level="1">Armed with only a small dataset, even an imperfect one with low-resolution and blurred photos, you can still apply machine learning methods and make the most of your project.</li><li style="font-weight: 400;" aria-level="1">Highly blurred images in the dataset significantly degraded model training and lowered prediction accuracy to ~68%. As one might expect, fewer details mean less information for the model.</li><li style="font-weight: 400;" aria-level="1">Sharpening the pictures also worsened the predictions, but to a lesser extent, with an accuracy of ~71%.</li><li style="font-weight: 400;" aria-level="1">GradCAM is an interesting visualization tool when working with a CNN for image classification. It can be used to reveal the inner workings of models and provide valuable insight.</li></ol>