Fail Engine


What if computers could learn one of our strangest emotions: the tendency to laugh at others' misfortune?

Abstract

Fail Engine is an aesthetic experiment that explores the intersection of emotion and artificial intelligence, particularly the complicated feeling of schadenfreude. Can we teach a computational system to laugh at our misfortune by showing it how we react to videos of potentially hilarious human foibles?

Media

Overview

Human nature is sometimes inexplicable. Why do we tend to laugh at someone else’s misfortune? Perhaps it is best explained by Nietzsche’s remark that “To see others suffer does one good” (Nietzsche, 1989, trans. Kaufmann & Hollingdale). When push comes to shove, we are wired to avoid what we fear and to seek what gives us pleasure (Aschwanden, 2018). Juxtapose this impulse with the precision of computation now being applied to emotion detection and sentiment analysis, and several questions follow: Can schadenfreude be taught to a computer? Are there situations where its understanding could provide us value? Or will that understanding be misused in malicious ways?

Fail Engine is a poetic thought experiment turned installation that explores the confusing nature of schadenfreude and draws attention to the complexity of machine intelligence in an era when we may have barely scratched the surface of the human psyche. Through affective-computing services, an online experience invites participants to watch popular fail videos and GIFs. In the process, a real-time facial emotion detection service monitors participants’ expressions to pinpoint specific moments of cringe-worthy humor. These reactions serve as training data for a computational system that attempts to learn when and where schadenfreude occurs.
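
As a rough illustration of the browser side of such a system, the sketch below samples facial expressions while a fail video plays, using the open-source face-api.js models. Only timestamped expression scores, never camera frames, would be passed on as training data. The model path, sampling interval, and ReactionSample shape are illustrative assumptions, not the project's actual implementation.

```ts
import * as faceapi from 'face-api.js';

// Shape of one crowdsourced data point: an offset into the fail video plus
// the detected expression scores. No camera imagery is retained.
interface ReactionSample {
  videoId: string;
  videoTime: number;                   // seconds into playback
  expressions: Record<string, number>; // e.g. { happy: 0.91, surprised: 0.05, ... }
}

async function watchReactions(
  webcam: HTMLVideoElement,
  failVideo: HTMLVideoElement,
  videoId: string,
  onSample: (s: ReactionSample) => void,
): Promise<void> {
  // Load a lightweight face detector and the expression classifier
  // (the '/models' path is a placeholder for wherever the weights are hosted).
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  // Sample a few times per second while the fail video is playing.
  const timer = window.setInterval(async () => {
    if (failVideo.paused || failVideo.ended) return;
    const result = await faceapi
      .detectSingleFace(webcam, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions();
    if (!result) return; // no face in frame on this tick
    onSample({
      videoId,
      videoTime: failVideo.currentTime,
      // Only a few schadenfreude-relevant scores are kept here for brevity.
      expressions: {
        happy: result.expressions.happy,
        surprised: result.expressions.surprised,
        neutral: result.expressions.neutral,
      },
    });
  }, 250);

  failVideo.addEventListener('ended', () => window.clearInterval(timer), { once: true });
}
```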

Participants can contribute their reactions to fail videos via a web application. Once access to the webcam is granted, the system monitors facial expressions in sync with video playback. The underlying data, containing real-time emotional responses but no images of participants, fuels a computational system that attempts to learn when and where to laugh at these videos based on crowdsourced reactions. Then, in a video installation setting, a playlist of trained fail videos screens back to participants alongside the computer’s reaction, completing the training loop.
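
To make the learning step concrete, the sketch below shows one simple way crowdsourced reactions could become the computer's own laugh track: happiness scores are aggregated into one-second bins per video, and the bins where viewers consistently cracked up become the timestamps at which the installation laughs back. The bin size, threshold, and function names are illustrative assumptions rather than the installation's actual training code.

```ts
// One second of aggregated crowd reaction for a given fail video.
interface LaughBin {
  second: number;        // offset into the video
  meanHappiness: number; // average detected "happy" score across all samples
  samples: number;       // how many reactions landed in this bin
}

// Collapse raw crowdsourced samples into per-second bins, then keep the
// moments where average amusement clears a threshold. These become the
// points at which the installation's "computer" reacts during playback.
function learnLaughMoments(
  samples: { videoTime: number; happy: number }[],
  threshold = 0.6,
): LaughBin[] {
  const bins = new Map<number, { total: number; count: number }>();

  for (const s of samples) {
    const second = Math.floor(s.videoTime);
    const bin = bins.get(second) ?? { total: 0, count: 0 };
    bin.total += s.happy;
    bin.count += 1;
    bins.set(second, bin);
  }

  return Array.from(bins.entries())
    .map(([second, { total, count }]) => ({
      second,
      meanHappiness: total / count,
      samples: count,
    }))
    .filter((b) => b.meanHappiness >= threshold)
    .sort((a, b) => a.second - b.second);
}
```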

References

How to contribute