Machine Learning & Arts: The Smart Photo Booth

In this project we develop a playful and interactive intelligent machine – the Smart Photo Booth – that lets users experiment with AI and learn how intelligent machines are trained. Our goals are:

  • To motivate teenagers and younger audiences to engage with computer science and, eventually, to encourage them to consider studies and/or a career in this domain,
  • To explain and demystify concepts of AI and machine learning in an interactive and entertaining manner for a wider audience (ages 7 and up).

In particular we want to:

  • Introduce the main concepts of machine learning and deep neural networks through one of their applications: style transfer. Moreover, we provide hands-on experience with the implementation of a style transfer system, which transforms a user portrait (“selfie”) into a portrait in a specific style from art history (e.g. impressionism, post-impressionism, cubism).
  • Offer the participants an interactive and personalized introduction to the evolution of portrait painting through time.
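At the heart of neural style transfer (in the classic Gatys et al. formulation) is the Gram matrix of a CNN's feature maps, which summarizes the "style" statistics that the generated image is optimized to match. A minimal NumPy sketch of that style loss – function names and the toy feature maps are illustrative, not the booth's actual implementation:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    In Gatys-style neural style transfer, the Gram matrix captures
    which feature channels co-activate, i.e. the image's "style"
    statistics, independent of where in the image they occur.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.mean((g_gen - g_style) ** 2))

# Toy feature maps standing in for CNN activations (e.g. from VGG).
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 16))
print(style_loss(a, a))  # identical styles -> 0.0
```

In a full system, this loss (summed over several CNN layers, plus a content loss) is minimized by gradient descent on the pixels of the generated portrait.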


This project is a collaboration with:


Within the context of Esch2022, European Capital of Culture, our project proposal for an AI & Art pavilion has been accepted.

The project is co-funded by Esch2022 and the University of Luxembourg (PI: Prof. Leon van der Torre). The AI Robolab will therefore be responsible for organizing an AI & Art pavilion included in the activities of Esch2022, European Capital of Culture.

The pavilion comprises several corners and projects developed by artists and researchers interested in creative technologies, as well as by AI Robolab staff. The kickoff meeting of this project was organized by the AI Robolab and held on 25 September 2020.

Project EXPECTATION (2021-2024) Accepted.

EXPECTATION is a CHIST-ERA project (supported by ERA-NET and FET) on eXplainable AI (XAI), entitled: Personalized Explainable Artificial Intelligence for decentralized agents with heterogeneous knowledge.


The project involves four partners:

  • University of Luxembourg (PI: Prof. Leon van der Torre),
  • HES-SO, University of Applied Sciences and Arts Western Switzerland,
  • Alma Mater Studiorum Università di Bologna, Italy and
  • Özyeğin University, Turkey.

Project description

Explainable AI (XAI) has recently emerged, proposing a set of techniques that attempt to explain machine learning (ML) models. The recipients (explainees) are intended to be humans or other intelligent virtual entities. Transparency, trust, and debugging are the underlying features calling for XAI. However, in real-world settings, systems are distributed, data are heterogeneous, the “system” knowledge is bounded, and privacy concerns are subject to variable constraints. Current XAI approaches cannot cope with such requirements; hence the need for personalized explainable artificial intelligence. We plan to develop models and mechanisms to reconcile sub-symbolic, symbolic, and semantic representations, leveraging the agent-based paradigm. In particular, the proposed approach combines inter-agent, intra-agent, and human-agent interactions to benefit both from the specialization of ML agents and from agent collaboration mechanisms that integrate heterogeneous knowledge/explanations extracted from efficient black-box AI agents. The project includes the validation of the personalization and heterogeneous-knowledge-integration approach through a prototype application in the domain of food and nutrition monitoring and recommendation, including the evaluation of agent-human explainability and the performance of the employed techniques in a collaborative AI environment.
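To make the idea of explaining a black-box ML model concrete, here is a minimal sketch of one well-known XAI technique: a local linear surrogate in the spirit of LIME. This is an illustrative example of the kind of technique the field offers, not the project's actual method; all names (`black_box`, `explain_locally`) are hypothetical.

```python
import numpy as np

def black_box(x):
    """Stand-in for an opaque ML model: a nonlinear scoring function
    taking a batch of 2-feature inputs, shape (n, 2)."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

def explain_locally(model, point, scale=0.1, n=500, seed=0):
    """Fit a linear surrogate to `model` in a small neighbourhood of
    `point` (a LIME-style local explanation).

    The surrogate's coefficients serve as local feature importances:
    a human-readable 'explanation' of the black box near that input.
    """
    rng = np.random.default_rng(seed)
    # Perturb the input and query the black box.
    samples = point + scale * rng.standard_normal((n, point.size))
    y = model(samples)
    # Least-squares linear fit (with intercept) around `point`.
    X = np.hstack([samples - point, np.ones((n, 1))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[:-1]  # feature weights = local gradient estimate

weights = explain_locally(black_box, np.array([0.0, 1.0]))
print(weights)  # ≈ [1.0, 1.0] near (0, 1), matching the true gradient
```

The weights approximate the black box's gradient at the queried point, which is exactly what makes the surrogate an explanation: it says which features, moved locally, change the model's output and by how much.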