Ai.griculture — AI-supported weed- & pest control for your fields

This project was carried out as part of the TechLabs “Digital Shaper Program” in Münster (summer term 2021).

Abstract

The project Ai.griculture dealt with AI-supported early weed and pest control to improve the harvest and revenue of agricultural fields. Our final vision is that drone-generated pictures of a field can quickly be examined for weeds and pests with our software. To train the AI, we used a pre-categorized data set from Kaggle that contains pictures of common agricultural crops as well as different weed species. By analyzing the structure of the data set with R, we could find potential leakages that must be considered when building the AI. The training itself, done in Python, worked only to some extent: we figured out that the data set contained pictures that were too artificial and noisy, so that the AI found patterns it should not have. For better results, one should focus on higher quality and quantity within the data set.

Introduction

Heat waves, flooding, rising temperatures — climate change affects everyone around the globe. And it is not just we humans who notice it, but also the plants surrounding us. Without plants and agriculture we will not survive, and future generations will suffer from hunger. Because of the dramatic change in soil conditions, it is important to know how plants and fields need to be treated. Farmers are certainly experts at this, but as fewer and fewer people work in agriculture, there is great potential for agriculture 4.0 to make an impact and help make our world a better place for everyone.

This leads us to our idea of how tech can be used to help farmers do their work. We would like to use AI to analyze fields (e.g. of corn or wheat) with the help of a drone and have the AI automatically tell farmers what is needed, e.g. water or fertilizer, or even whether there is an insect plague and where. Thus, farmers can work more efficiently and are able to detect problems at an early stage. Subsequently, there will be less crop failure and less hunger for us and the coming generations.

Combining Python & R

Our group consists of 1 AI-Techie and 4 DS-Techies, of whom 3 are working on the R-track and 1 on the Python-track. As we are working with pictures and, as a final result, aim for an AI, we had to figure out what each group could contribute. We then decided that the people learning R should examine the general data structure in further detail, while our Python members focused on the AI itself.

As AI-Techies, we often have to take data as it is provided to us, and it is not seldom that a data set has some kind of unintended leakage introduced by the way it was collected. Unfortunately, this can mislead a model during learning, which results in unexpected “knowledge” in the model. In the following, we briefly discuss the journey we had with our data.

A research image data set, comprising 12 plant species common in Danish agriculture, both cultivated and wild, was used to develop a model that is able to classify the correct species given an image of a seedling.

Before setting up the AI, we need some first insights into the data. How many images per species do we have? A lot for the wild plants, but fewer seedlings belonging to common agricultural crops. In other words: we discovered a class imbalance. That being said, what do the species classes look like? The images differ in scale, pixel range and resolution because some have been taken zoomed in or out. Furthermore, additional material can be observed in some of the images.

Taking a first look at our data set with R, it turned out that we simply had different folders named after the plants, each containing several pictures of the corresponding species.

Contrary to our expectations, working with pictures in R is not as simple as we thought, which led us to our first problem: importing the images into the R console and generating a usable table. First, we achieved this by using readPNG together with dim and combining the result with rbindlist. Second, we collected the images in a data frame and used the name of each folder to generate another column holding the corresponding category. Subsequently, our data was in the right structure to start our analysis, for which we mainly used graphs.
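Our analysis was done in R (readPNG, dim, rbindlist), but the same kind of table can be sketched in Python with only the standard library. The folder layout (`root/<species>/<image>.png`) matches our data set; the function names and the trick of reading width and height straight from the PNG header are our own additions, not part of the project code.

```python
import os
import struct

def png_size(path):
    """Read width/height from a PNG's IHDR chunk (bytes 16-24 of the file)."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError(f"not a PNG file: {path}")
    width, height = struct.unpack(">II", header[16:24])
    return width, height

def build_image_table(root):
    """Walk root/<species>/<image>.png and return one record per image,
    taking the category from the folder name (like the extra column in R)."""
    rows = []
    for species in sorted(os.listdir(root)):
        folder = os.path.join(root, species)
        if not os.path.isdir(folder):
            continue
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith(".png"):
                w, h = png_size(os.path.join(folder, name))
                rows.append({"category": species, "file": name,
                             "width": w, "height": h})
    return rows
```

With one row per image and a category column, counting and size analyses become simple group-by operations, just as in the R data frame.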

We first simply counted the number of elements in each category and found that the “useful” plants, namely maize, common wheat and sugar beet, are underrepresented in the data set. This could lead to problems for our AI, as we have unbalanced groups.
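The class-balance check itself is a one-liner once every image has a category label. The labels below are made up for illustration (the real data set has 12 species and far more images); the point is the pattern: count per class, then compare each class to the largest one.

```python
from collections import Counter

# Hypothetical labels, one per image; the real set has 12 species.
labels = ["Loose Silky-bent", "Loose Silky-bent", "Loose Silky-bent",
          "Scentless Mayweed", "Scentless Mayweed",
          "Maize", "Common wheat", "Sugar beet"]

counts = Counter(labels)
for species, n in counts.most_common():
    print(f"{species:20s} {n}")

# Each class relative to the largest one; small ratios signal imbalance.
largest = max(counts.values())
imbalance = {species: n / largest for species, n in counts.items()}
```

In our data, exactly the crop classes (maize, common wheat, sugar beet) ended up with the small ratios, which is what made the imbalance a problem for training.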

In a next step, we looked at the size of our pictures to make sure that this factor would not influence the AI's decision about what can be seen in them.

Unfortunately, we found that the images are of different quality and size, as you can see in the graph above. We considered cropping the images to a consistent format, but first we had to check whether the different image sizes are independent of the plant categories. Therefore, we plotted the density on the y-axis and the image width on the x-axis.
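The density plot can be approximated numerically: group the image widths by category and compare means and spreads. The widths below are invented for illustration; in our data the per-category means differed by far more than their standard deviations, which is exactly the dependence the plot revealed.

```python
from statistics import mean, stdev

# Hypothetical (category -> image widths in pixels) extracted from the table.
widths = {
    "Maize": [196, 210, 188, 205],
    "Loose Silky-bent": [820, 790, 760, 845],
}

for species, ws in widths.items():
    print(f"{species:18s} mean={mean(ws):7.1f} sd={stdev(ws):6.1f}")
```

If size and category were independent, the width distributions would largely overlap; widely separated means like these say they do not.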

Unfortunately, image size and category are not independent of each other. Moreover, as the background of the pictures is not consistent and includes small stones of different sizes, this could bias our AI. Taking another look at the images, we also noticed objects that appear in some pictures but not in others, and in different sizes, which is another potential problem for the further steps with our AI. To sum up, we already realized that the given data set might not work out perfectly for realizing our idea.

After clustering the growth states, we found out that the image size is related to the growth state of the plants. This is a severe target leakage, since our CNN could decide to predict a certain species only from features characterizing the growth state, e.g. when the stones in the images look very big. This leakage becomes even more dramatic due to the class imbalance.

In a next step, we wanted to diagnose the weaknesses of models trained on the available data. The most important library we used is PyTorch, in combination with the pre-trained model resnet18. After setting up the model and the training loop, an 80% accuracy score on the validation data was achieved.

To explain why a machine learning model made its predictions as it did, LIME can be used. Generally speaking, it analyzes how predictions change when the inputs are perturbed. In this case, images are first divided into contiguous components and then perturbed by turning some of the components “off” (gray). The modified image is then passed through the model and the predictions are investigated. It turned out that the model uses image features located at the stones or material in the background to make its predictions. Hence, it is not sensitive to the actual plants.

This analysis shows very clearly that just applying a model is dangerous if one does not care about what the model has learnt. As already said before, due to data leakage, the final model may not make predictions as expected. As an outlook, one has to ask what went wrong during data collection, and what to do instead. Long story short: do not collect your data in overly artificial setups, but in real-world situations. Use as many cameras, distances and backgrounds as you can in order to let the model learn from the important, equal information in every image: the plant — and not the stuff in the background.
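LIME itself fits a local surrogate model over many random perturbations; the snippet below illustrates only the underlying idea with a simpler occlusion check. Everything here is a toy: the 8×8 “image” is random numbers, and `leaky_model` is a stand-in classifier that, like our real model, looks only at the background instead of the plant.

```python
import random

def leaky_model(img):
    """Toy stand-in for a leaky classifier: its score depends only on the
    'background' strip (the bottom two rows), like the stones in our data."""
    background = [v for row in img[6:] for v in row]
    return sum(background) / len(background)

def occlusion_importance(img, model, patch=4):
    """Gray out square patches one at a time and record how much the
    prediction moves for each patch (the core idea behind LIME's
    component-off perturbations, without the surrogate model)."""
    base = model(img)
    importance = {}
    n = len(img)
    for r0 in range(0, n, patch):
        for c0 in range(0, n, patch):
            perturbed = [row[:] for row in img]
            for r in range(r0, r0 + patch):
                for c in range(c0, c0 + patch):
                    perturbed[r][c] = 0.5  # "off" = gray
            importance[(r0, c0)] = abs(model(perturbed) - base)
    return importance

random.seed(0)
img = [[random.random() for _ in range(8)] for _ in range(8)]
scores = occlusion_importance(img, leaky_model)
top_patch = max(scores, key=scores.get)
```

Only the bottom patches, which cover the “background”, move the prediction at all; the patches covering the “plant” region have zero importance. That is precisely the pattern LIME exposed in our real model.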

Lessons learned

Some presumably easy things can take a lot longer than expected — especially if most of us are quite new to programming an AI. Our decision to work with pictures instead of classical tabular data also made us realize that images are sometimes cumbersome and might not be the ideal way to start. Data like revenues, time series or sales would have enabled an analysis closer to what we learned at university, such as regression or time-series analysis. Last but not least, we found that the requirements the images should meet, as mentioned in the Python part, are considerably high. This needs to be taken into account when approaching an AI that analyzes images and is, along with the lessons mentioned above, helpful for projects we conduct in the future.

Our GitHub Repository

The team

Johannes Bloms AI: Python

Nicolas Wenner Data Science: R

Sebastian Hantel Data Science: R

Frederike Biskupski Data Science: R

Leon Lepper Data Science: Python

Mentor

Marcus Cramer
