Introduction to IncubAItor

Download this notebook

The IncubAItor aims to provide an introduction to artificial intelligence for start-ups. It begins with a short introduction to programming simple AI applications with various frameworks and then covers in detail how to deploy an AI application in the cloud. The University of Münster’s infrastructure is used, but this is not a prerequisite for the incubAItor. Additionally, many tools and their use are described in detail, making the introduction suitable for beginners from all areas. For more advanced AI applications, refer to the further notebooks of InterKI. The deployment part refers to AI applications but can also be helpful for other applications.

Content

Introduction An introduction to the programs and frameworks used

  1. Train your own network In this part, models are created and trained with different frameworks. This chapter can be skipped if AI knowledge is already present, but the model from 1.2 should be trained once so it can be used in the following chapters.
    1. PhotonAI With the help of PhotonAI, the first network is constructed, trained with different parameters, and evaluated.
    2. Tensorflow With the help of Tensorflow, the same network from 1.1 is constructed and trained. This model will be used in the following chapters.
    3. Torch With the help of Torch, the same network is constructed. Torch will also be used for PALMA later.
    4. Keras With the help of Keras, a more complex network for image-based emotion recognition is built.
    5. Huggingface With the help of Huggingface Transformers, language models are loaded.
  2. PALMA We provide an introduction to PALMA and run parts of the previous chapters on HPC resources. This chapter can be skipped if HPC resources are not needed.
  3. Web App With the help of Flask, a web interface (API) is created to access the predictions from a neural network via a web browser.
  4. Deployment We use Docker and Git to build Docker containers and deploy them on different infrastructures.
    1. Docker A Docker container of the program created in chapter 3 is built.
    2. OpenStack The Docker container can be built with the help of GitLab Pipelines and automatically deployed on a cloud service.
    3. Kubernetes With the help of Kubernetes, a container-based application can be deployed in a fail-safe and scalable manner.
  5. Outlook
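As a foretaste of chapter 3, the following sketch shows the kind of web interface Flask makes possible. The route name and the stand-in `predict` function are illustrative assumptions, not the actual chapter code, which serves predictions from the trained model of chapter 1.2.

```python
# Minimal sketch of a prediction API in the spirit of chapter 3.
# The /predict route and the averaging "model" are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Stand-in for a real neural network's prediction.
    return sum(features) / len(features)

@app.route("/predict", methods=["POST"])
def predict_route():
    features = request.get_json()["features"]
    return jsonify({"prediction": predict(features)})

if __name__ == "__main__":
    app.run(port=5000)
```

Sending a JSON body such as `{"features": [1.0, 2.0, 3.0]}` to `/predict` returns the placeholder prediction; in the actual chapter, the trained model takes the place of the stub.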

About the Organization of the Code

The complete incubAItor can be found in the University of Münster’s GitLab at https://zivgitlab.uni-muenster.de/reach-euregio/incubaitor/. It is divided into several chapters, where you can find the corresponding Jupyter notebooks with explanations in German and English, as well as the code belonging to the chapters. The notebooks should not only explain the functionality of the chapters but also, in particular, how to get the individual parts of the code running.

If the words in this explanation are all Greek to you, feel free to read on and start with this introduction!

By cloning the repository, the individual chapters can also be run directly. However, pay attention to the notebooks: for example, the data needs to be downloaded first and copied into the corresponding folders, and the model from chapter 1.2 needs to be trained before it can be used from chapter 3 onwards.

AI for Start-ups

The spread and further development of relevant AI frameworks now make it possible to implement both simple and complex AI applications with little effort. Python is used as the programming language to access these frameworks. Fundamental AI frameworks include Tensorflow and Torch (and recently JAX). The Keras framework, on the other hand, uses one of these frameworks as a backend from version 3 onwards and provides many ready-made configurations for AI networks in a simple way. Additionally, there is the transformers package from Huggingface, which also uses Tensorflow or Torch in the background and provides access to common complex open-source models as well as pre-trained networks.
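To give a flavour of how little code such a framework requires, here is a small sketch of a feed-forward network in Torch. The layer sizes and dummy input are illustrative assumptions and do not come from the chapters, where the actual networks are built step by step.

```python
import torch
from torch import nn

# A small feed-forward network; the layer sizes (4 -> 16 -> 3) are
# illustrative, not the architecture used in the chapters.
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 3),
)

# A forward pass on a batch of two dummy samples.
x = torch.randn(2, 4)
y = model(x)
print(y.shape)  # torch.Size([2, 3])
```

The same few lines would look almost identical in Keras or Tensorflow, which is precisely why the chapters can construct the same network in several frameworks and compare them.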