Projects


Eyeing the Big Play, Not Just the Moves

ML XAI Thesis
The Hierarchy of Time

With this work I seek to advance the interpretability of Reinforcement Learning (RL) agents through temporal abstraction. On complex problems, even purely logic-based agents can be hard to explain because of the large rule sets they learn. Here, I apply the options framework to split a single logic policy into a hierarchy of policies: the single topmost policy (the meta policy) consists of logic formulae and decides between lower-level, neural policies that are specialized in executing subtasks.

The idea is that the meta policy is reduced to determining which subtask is due to be solved. Instead of choosing an action at each time step, the meta policy selects the subpolicy that solves the current subtask at hand. This way, the meta policy (which typically encodes the information or strategy we are interested in) is much smaller in terms of rules and, hence, more interpretable.
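As a minimal sketch of this split (all names, rules, and the toy grid-world state are illustrative, not taken from the actual thesis code), the meta policy only picks which subpolicy runs, and the chosen subpolicy picks the primitive action:

```python
# Illustrative sketch: a logic meta policy selecting neural subpolicies.
# go_to_key and open_door are hypothetical, stubbed subpolicies.

def go_to_key(state):
    """Hypothetical subpolicy (stubbed): move toward the key."""
    return "move_left" if state["key_x"] < state["x"] else "move_right"

def open_door(state):
    """Hypothetical subpolicy specialized in opening the door."""
    return "use_key"

class MetaPolicy:
    """Logic meta policy: a few rules that only decide *which*
    subtask to solve, never the primitive action itself."""
    def select_option(self, state):
        if not state["has_key"]:
            return go_to_key   # rule 1: no key yet -> fetch it
        return open_door       # rule 2: key in hand -> open the door

meta = MetaPolicy()
state = {"x": 3, "key_x": 1, "has_key": False}
option = meta.select_option(state)  # meta policy: choose the subtask
action = option(state)              # subpolicy: choose the action
```

Because the meta policy contains only the two rules, it stays small and readable even if the subpolicies themselves are opaque neural networks.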


Latent Space Optimization

ML Thesis
LSO with weighted retraining

My Maths Bachelor’s Thesis presents Latent Space Optimization (LSO), an emerging ML-based optimization method. It targets especially hard optimization problems, namely those that are discrete, high-dimensional, and expensive to evaluate, such as protein design or neural architecture search.

“The core idea of LSO is (1) to fit a Deep Generative Model (DGM) to the distribution of the solutions, yielding a low-dimensional, continuous representation space—the latent space—and (2) to perform Bayesian optimization in that latent space using a cheap-to-evaluate surrogate function replacing the original objective.”
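A toy numeric sketch of those two steps, where a hand-coded "decoder" stands in for the DGM and a plain grid search stands in for Bayesian optimization (everything here is illustrative, not code from the thesis):

```python
# Toy LSO sketch: discrete solutions are 4-bit strings; the "latent
# space" is the interval [0, 1]; the surrogate is cheap to evaluate.

def decode(z):
    """Stand-in for the DGM: map a continuous latent z in [0, 1]
    to a discrete solution (a 4-bit string)."""
    k = min(15, max(0, round(z * 15)))
    return format(k, "04b")

def surrogate(z):
    """Cheap stand-in for the expensive objective: reward latents
    that decode to strings with many 1-bits."""
    return decode(z).count("1")

# Step (2): optimize in the *continuous* latent space. A real LSO
# loop would run Bayesian optimization here instead of grid search.
candidates = [i / 100 for i in range(101)]
best_z = max(candidates, key=surrogate)
best_solution = decode(best_z)  # back to the discrete solution space
```

The point of the construction is that the search happens in the continuous latent space, where standard continuous optimizers apply, while solutions are only materialized through the decoder.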


Telegram crawler to analyze German conspiracy theory chats

Misc

During the COVID-19 pandemic, many conspiracy theorists gathered on Telegram in various public groups and channels. I’m working on a crawler whose task is to scan these chats and analyze the publicly available information. At the time of writing (Jan 2022), the crawler has collected >20 GB worth of text messages, 1.75 million user accounts, and >5,300 German chats, the vast majority of which promote conspiracy theories.


pyAnt: RTS Serious Game

Misc
Game cover

pyAnt is a Serious Game (SG) prototype in which the player builds an ant colony whose ants are controlled exclusively via Python code. I developed the game together with my team as a project at the SG group. We implemented it with the Godot engine and released it on itch.io.


Learning Angry Birds with RL

ML

I built a TensorFlow framework for learning Angry Birds (and other games) using state-of-the-art Reinforcement Learning (RL) techniques. Both the AI model and the game environment can be exchanged easily. This is my largest project so far; it continues a bonus project I completed together with three other team members.
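The exchangeability can be sketched with a minimal agent/environment interface (the class and method names below are illustrative, not the framework's actual API):

```python
import random

class Environment:
    """Anything offering reset/step can serve as a game environment."""
    def reset(self): ...
    def step(self, action): ...

class RandomAgent:
    """Drop-in AI model: picks actions uniformly at random."""
    def __init__(self, n_actions, seed=0):
        self.rng = random.Random(seed)
        self.n_actions = n_actions
    def act(self, observation):
        return self.rng.randrange(self.n_actions)

class CoinFlipEnv(Environment):
    """Tiny stand-in for a game: one step, reward 1 iff action == 1."""
    def reset(self):
        return 0  # dummy observation
    def step(self, action):
        return 0, float(action == 1), True  # obs, reward, done

def run_episode(agent, env):
    """The training loop only talks to the two interfaces above."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(agent.act(obs))
        total += reward
    return total

score = run_episode(RandomAgent(n_actions=2), CoinFlipEnv())
```

Swapping the model or the game then means passing a different object into `run_episode`, with no change to the loop itself.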


Explainable AI: A systematic review to find a unified definition of explainability and interpretability

ML XAI

A scientific literature analysis I did with a colleague for a seminar. Commonly used AI models, especially neural networks, are hard to explain and to understand. To foster trust, explainability of these models is a desired property. However, there is no unique definition of explainability and interpretability. We examined the different definitions stated in the literature and, based on them, proposed our own.


SinGAN: Train a GAN with only a single image

This is an implementation of the SinGAN presented in this paper. It is a bonus project I coded with a colleague in PyTorch. It can be used to train a GAN on just a single image; that GAN can then perform a variety of tasks, e.g., super-resolution or image inpainting. The project ships with a Django frontend ready to be deployed as an interactive webpage.


Fooling an image classifier with adversarial inputs

Saliency map

This little experiment showcases how easily neural network image classifiers can be fooled by a slight perturbation of the image. It also explains and demonstrates saliency maps (heatmaps indicating where in the image the classifier’s attention lies when classifying it).
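The core mechanics can be shown on a toy linear "classifier", whose input gradient is simply the weight vector: it yields both the saliency map and an FGSM-style perturbation direction. All numbers here are made up for illustration; real attacks use a neural network's input gradient in the same way.

```python
# Toy model: score(x) = w . x, predicted class = sign(score).
w = [2.0, -1.0, 0.5]   # classifier weights
x = [1.0, 1.0, 1.0]    # input "image" (3 pixels)

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

label = sign(score(x))                  # original prediction: +1

# Saliency map: per-pixel |d score / d pixel|, which is |w_i| here.
saliency = [abs(wi) for wi in w]

# FGSM-style attack: step each pixel against the gradient's sign.
eps = 0.8
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
adv_label = sign(score(x_adv))          # prediction flips to -1
```

A per-pixel budget of 0.8 is enough to flip the toy prediction, and the saliency map correctly marks the first pixel (weight 2.0) as the one the classifier attends to most.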


Interpreting Sum-Product Networks via Influence Functions

ML XAI Thesis
Example influence plots

As my Computer Science Bachelor’s Thesis, I investigated Sum-Product Networks (SPNs) using so-called influence functions. My work contributes to the understanding of SPNs by showing, e.g., how individual training instances affect the SPN’s predictions on other instances.
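To give an idea of the general influence-function recipe (not the thesis's SPN-specific computation), here is a toy example where the model is just the mean of the data. With per-sample loss L(z, theta) = 0.5 * (theta - z)^2, the Hessian is 1 and the influence of up-weighting a training point z on the test loss at z_test reduces to I(z, z_test) = -(theta - z_test) * (theta - z); positive influence means up-weighting z would raise the test loss.

```python
# Hedged toy example: the "model" is the mean of the training data.
train = [1.0, 2.0, 3.0, 10.0]
theta = sum(train) / len(train)   # fitted parameter: 4.0

def influence(z, z_test):
    """I(z, z_test) = -grad_test * H^{-1} * grad_train, with H = 1
    for the squared loss of the mean estimator."""
    grad_test = theta - z_test    # d L(z_test) / d theta
    grad_train = theta - z        # d L(z) / d theta
    hessian = 1.0                 # d^2 L / d theta^2
    return -grad_test * (1.0 / hessian) * grad_train

# Up-weighting the outlier 10.0 pulls theta away from a small test
# point, raising its loss -> positive (harmful) influence:
harmful = influence(10.0, z_test=1.0)
# Up-weighting 1.0 pulls theta toward the test point -> negative
# (helpful) influence:
helpful = influence(1.0, z_test=1.0)
```

The thesis applies the same idea to SPNs, where the gradients and Hessian are taken with respect to the network's parameters instead of a single scalar.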