Are you in the Mood 4 Food? — Let leftovers never be left over again!

This project was carried out as part of the TechLabs “Digital Shaper Program” in Münster (summer term 2021).

Abstract

In a world where resources are becoming scarce, food waste is no longer acceptable. Furthermore, life under lockdown produced many hobby chefs. We combined these developments in our data science and AI project “Mood4Food”. On the basis of recipes from www.chefkoch.de, we analyzed existing recipes, developed filter options for finding recipes using ingredients you already have at home, and finally created an AI that writes recipes itself.

Introduction

12 million tons of food are wasted every year in Germany. Looking at private households alone, every person is responsible for 75 kilograms of food waste per year, which amounts to roughly 200 grams every day. Not only could this huge amount of food help feed hundreds of thousands of people, it also corresponds to 25 billion euros literally lost in the trash.

We wanted to focus our project on sustainability and creating a positive impact on our future. That led us to concentrate on one of humankind's main challenges: feeding and nurturing all of the world's inhabitants. Our project idea, we hoped, would be at least a starting point in this great field of opportunities to tackle this huge topic.

By focusing on our own habits and consumption, we think this project can be one step on a long way toward reducing our food waste and acting more responsibly in a world where privileged people like us seem to have and get everything.

Webscraping

First of all, the raw data had to be scraped from the website. From preceding work [https://github.com/Murgio/Food-Recipe-CNN], a CSV file containing the URLs of about 320,000 recipes was used to download the preparation text and other data such as the ingredient list. Some data cleaning was already done during scraping. The final file contains almost 310,000 different recipes.
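The parsing step for one recipe page could be sketched like this. Note that the markup below is an assumption for illustration, not the actual chefkoch.de HTML, and `extract_ingredients` is a hypothetical helper:

```python
import re

# Hypothetical sketch: assume each ingredient sits in a table cell like
# <td class="td-right"><span>...</span></td>. The real page structure
# may differ, and a proper scraper would use an HTML parser.
INGREDIENT_RE = re.compile(r'<td class="td-right">\s*<span>(.*?)</span>', re.S)

def extract_ingredients(html: str) -> list:
    """Pull the raw ingredient strings out of one recipe page."""
    return [m.strip() for m in INGREDIENT_RE.findall(html)]

sample = (
    '<td class="td-right"><span>500 g Mehl</span></td>'
    '<td class="td-right"><span>3 Eier</span></td>'
)
print(extract_ingredients(sample))  # ['500 g Mehl', '3 Eier']
```

Looping such a function over the 320,000 URLs from the CSV file, with some cleaning on the fly, yields the final recipe dataset.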

Analysis of chefkoch recipes: descriptive statistics

In a first step, the data (available as a pandas DataFrame) were cleaned by excluding missing and false values from the analysis. Then, various features of the dataset were analyzed. These features are defined by the authors of the recipes. For the feature “cooking time”, for example, all time data had to be converted into minutes instead of hours and minutes in order to calculate descriptive statistics. The results show that the cooking time (in minutes) varies a lot:

mean 29.50
std 42.52
min 1.00
25% 15.00
50% 20.00
75% 30.00
max 1200.00
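The conversion of the raw time strings into minutes could look roughly like this. This is a sketch assuming the source column uses German notation such as “1 Std. 30 Min.”; the actual column format may differ:

```python
import re

def to_minutes(time_str: str) -> int:
    """Convert a German duration string like '1 Std. 30 Min.' to minutes.
    (Sketch; assumes hours are marked 'Std' and minutes 'Min'.)"""
    hours = re.search(r'(\d+)\s*Std', time_str)
    mins = re.search(r'(\d+)\s*Min', time_str)
    total = 0
    if hours:
        total += int(hours.group(1)) * 60
    if mins:
        total += int(mins.group(1))
    return total

print(to_minutes("1 Std. 30 Min."))  # 90
print(to_minutes("20 Min."))         # 20
```

Applied to the whole column (e.g. via `df["cooking_time"].map(to_minutes)`), the statistics above then follow from pandas' `describe()`.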

In the next step, the matplotlib library was used to visualize the data. For instance, the feature “difficulty” is made up of three levels (“simpel” = simple, “normal”, “pfiffig” = tricky), whose shares were calculated and visualized (Figure 1).

Figure 1: Distribution of the feature “difficulty”

Next, the previously analyzed features were combined to take a closer look at the average cooking time of easy, medium, and difficult recipes (Figure 2). Finally, we examined the level of difficulty with regard to vegetarian, vegan, and other recipes (Figure 3).

Figure 2: Preparation time vs difficulty
Figure 3: Difficulty of vegetarian, vegan, and other recipes
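The aggregations behind Figures 1 and 2 can be sketched with a few pandas calls. The toy data and column names below are assumptions standing in for the cleaned chefkoch DataFrame:

```python
import pandas as pd

# Toy stand-in for the cleaned recipe DataFrame (column names assumed).
df = pd.DataFrame({
    "difficulty": ["simpel", "normal", "pfiffig", "simpel", "pfiffig"],
    "minutes":    [15, 30, 90, 20, 120],
})

# Share of each difficulty level (basis for Figure 1).
share = df["difficulty"].value_counts(normalize=True)

# Average preparation time per difficulty level (basis for Figure 2).
avg_time = df.groupby("difficulty")["minutes"].mean()
print(avg_time)
```

A `share.plot.pie()` or `avg_time.plot.bar()` then produces charts like the ones shown.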

As Figure 1 shows, the majority of chefkoch recipes are simple, whereas only 1.1% are categorized as difficult. For those difficult recipes, the preparation time significantly exceeds that of simple and normal recipes.

As Figure 3 shows, vegetarian and vegan dishes seem to be easier than meat dishes, or perhaps vegetarians and vegans rate them as easier because they are either better at cooking or want to convince more people to eat a meat-free diet. 😉

Filtering: Finding recipes based on individual input statements

After conducting some descriptive statistics to get basic information about the dataset, we implemented filtering and sorting functions with the help of the pandas library. Since a goal of this project is to avoid food waste, the user of our program is asked to enter the ingredients they want to cook with into an input box. That way, one can still make good use of ingredients that are close to spoiling but still edible.

Subsequently, the dataset of recipes is filtered based on this input. This does not mean that all recipes containing ingredients beyond the input are deleted. Instead, each recipe's ingredient list is compared to the input and a match ratio is calculated. The recipes are then sorted by this value, so that those with the highest match ratio appear at the top of the list.
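The match-ratio idea can be sketched as a set intersection. The function and sample data below are illustrative assumptions, not our exact implementation:

```python
def accordance(recipe_ingredients, pantry):
    """Share of a recipe's ingredients that the user already has at home."""
    recipe = {i.lower() for i in recipe_ingredients}
    have = {i.lower() for i in pantry}
    return len(recipe & have) / len(recipe) if recipe else 0.0

# Toy data: two recipes and the user's pantry input.
recipes = {
    "Pfannkuchen": ["Mehl", "Eier", "Milch"],
    "Rührei": ["Eier", "Butter"],
}
pantry = ["Eier", "Milch"]

# Sort recipes so the best matches come first.
ranked = sorted(recipes, key=lambda name: accordance(recipes[name], pantry),
                reverse=True)
print(ranked)  # ['Pfannkuchen', 'Rührei']
```

In the real program the same ratio is computed per row of the recipe DataFrame and used as the sort key.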

In further input statements, the user can specify how much time they want to spend in the kitchen, how difficult the recipe should be, and whether the recipe should be vegan or vegetarian.

In the end, the resulting DataFrame of recipes is exported to an Excel file for convenient post-processing.

Creating the AI

When creating an AI for a specific task, the usual workflow nowadays is to start from an existing model that was already pretrained on a higher-level task. In this case, the higher-level task is natural language generation and the specific task is generating the preparation text of a recipe. Hence, the German version of the GPT-2 model was fine-tuned on the recipe preparation texts.

In order to let the model know that the fine-tuning data come from a specific domain, a so-called control token was placed at the start of each recipe text. Next, each recipe text is converted to subword tokens, and either padding or truncation to a fixed total input length is applied, because the model has to receive inputs of a constant size and each input should correspond to one recipe.
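The padding/truncation step can be illustrated with toy token ids. In the real pipeline this is handled by the GPT-2 tokenizer from the Hugging Face transformers library; the ids and the control-token value below are made up:

```python
def pad_or_truncate(token_ids, max_len, pad_id=0):
    """Bring one tokenized recipe to a fixed length, since the model
    expects constant-size inputs (sketch with toy ids)."""
    if len(token_ids) >= max_len:
        return token_ids[:max_len]                          # truncation
    return token_ids + [pad_id] * (max_len - len(token_ids))  # padding

CONTROL_TOKEN_ID = 50000  # hypothetical id of the recipe control token
ids = [CONTROL_TOKEN_ID, 12, 7, 42]
print(pad_or_truncate(ids, 6))  # [50000, 12, 7, 42, 0, 0]
print(pad_or_truncate(ids, 3))  # [50000, 12, 7]
```

Every padded/truncated sequence then corresponds to exactly one recipe in a training batch.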

Since a lot of memory is “wasted” on padding, another option would have been to concatenate all recipes into one giant string and then split it into evenly sized pieces. However, the big advantage of GPT-2 over previous language models is its ability to preserve context over long distances. Chopping a recipe text into two different inputs would mean that the two parts could no longer affect each other during training, so their information would no longer be correlated.

The maximum input length of the model, and hence the largest distance over which context can be preserved, is 1024 tokens. After tokenization, most recipes consist of 100–300 tokens (Figure 4), so an input length of 256 was chosen, which means that only a small percentage of recipes have to be truncated. Training in this configuration took about one hour on the TPU runtime of Google Colab.

The gist of the preparation texts generated by this model is that they do indeed sound a lot like cooking instructions, but ingredients are often used and processed in rather uncommon ways, and instructions are repeated multiple times. Furthermore, sometimes only nutritional values are generated, probably because very short recipes (fewer than roughly 50 tokens) consist only of a nutrition table.

To combat those problems and give the model a little more context to work with, a second version was created. This time, the training input consists of the recipe control token, a list of the ingredient types, a control token for the preparation text, and lastly the preparation text itself. The quantity measurements for the ingredients were left out because the model is most likely incapable of using them sensibly at inference time.

This time, a larger input length of 512 was chosen to make room for the additional tokens of the ingredient list and to further reduce information loss due to truncation. Training for 8 epochs took about 9–10 hours with these settings. After four epochs, the validation loss already begins to rise again while the training loss still declines, which indicates that the model is overfitting on the training data.
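Assembling the second-version training input can be sketched as simple string concatenation before tokenization. The control-token spellings `<recipe>` and `<preptext>` are assumptions for illustration:

```python
def build_training_text(ingredients, preparation):
    """Assemble one training example for the second model version:
    recipe control token, ingredient names (no quantities),
    preparation control token, then the preparation text."""
    return (
        "<recipe> "
        + ", ".join(ingredients)
        + " <preptext> "
        + preparation
    )

text = build_training_text(["Mehl", "Eier", "Milch"],
                           "Alles verrühren und backen.")
print(text)  # <recipe> Mehl, Eier, Milch <preptext> Alles verrühren und backen.
```

Each such string is then tokenized and padded/truncated to the 512-token input length described above.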

With this model a recipe text can be generated in different ways by adjusting the prompt text:

1) Only the recipe token as prompt: usually the model generates an ingredient list followed by the preparation text

2) Recipe token + open-ended ingredient list: the model often makes up additional ingredients (mostly condiments, because they commonly appear at the end of ingredient lists) and then generates the preparation text

3) Recipe token + ingredient list + preptext token: the model adds the preparation text

4) Recipe token + ingredient list + preptext token + the start of the preparation text: this type of prompt has not been tested yet
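The four prompt types above can be built with a small helper before being passed to the model's `generate` call. The control-token spellings are again assumptions:

```python
def build_prompt(mode, ingredients=None, prep_start=""):
    """Construct the generation prompt for the four usage modes
    (control-token spellings are illustrative assumptions)."""
    if mode == 1:                       # recipe token only
        return "<recipe>"
    ing = ", ".join(ingredients or [])
    if mode == 2:                       # open-ended ingredient list
        return f"<recipe> {ing},"
    if mode == 3:                       # ingredient list + preptext token
        return f"<recipe> {ing} <preptext>"
    if mode == 4:                       # plus the start of the preparation text
        return f"<recipe> {ing} <preptext> {prep_start}"
    raise ValueError(f"unknown mode: {mode}")

print(build_prompt(3, ["Eier", "Butter"]))  # <recipe> Eier, Butter <preptext>
```

The resulting string is tokenized and fed to the fine-tuned model, which completes it into a full recipe.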

The preparation texts are surprisingly reasonable, and even with some inconvenient combinations of ingredients the model manages to make use of them. In other cases, though, ingredients that are part of the prompt remain unused in the preparation text. Sometimes, still, only nutritional values are generated.

Conclusion

As our examples show, we managed to generate sensible recipes using artificial intelligence. Helping other people use their leftovers and keeping them from throwing food away fulfills our project idea and is one step in a marathon that demands persistence and time.

Our Github Repository

The team

Alexander Milden AI: Python

Lisa Strauß Data Science: Python

Arne H. Data Science: Python

Mariella Schmidt Data Science: Python

Mentor

Christian Porschen
