AI & Art Seminar – AI Robolab

Artificial intelligence (AI) is broadly concerned with making computers and robots capable of doing things that, until now, only humans could do, and it has already transformed many academic fields and industries. In artistic creation, AI has demonstrated great potential and gained wide popularity. AI art refers to art generated with the help of AI, and it can range from AI paintings to AI-composed music to AI poetry. An AI painting system can generate desired content in the style of a specific famous painter, and tunes composed by AI have attracted large audiences, some of whom believed the pieces came from masters of classical music.
From April 28 to June 9, 2021, AI Robolab will host a first-of-its-kind seminar series on AI & Art, held on Wednesdays. The seminar aims to present cutting-edge technologies and the most recent advances in AI art, delivered by leading experts spanning fields such as machine learning, computer vision, art history, art conservation, and cultural heritage. Through it, this research interest group aims to explore how emerging AI techniques can shape the broader area of art.

Each session will be presented by a notable specialist, who will deliver valuable insights through a 45-minute talk followed by a 15-minute open discussion with the audience. The seminar is open to all and will particularly benefit scientists with an interest in art, artists seeking to explore AI technology to enhance their artwork, and individuals looking to gain a professional understanding of the domain.

Organizer:

Dr. Nouzri Sana
Postdoctoral Researcher at AI Robolab
Department of Computer Science

Seminar Schedule

Talk 1

Wednesday, April 28, 2021
Speaker:
Dr. Mohamed Elhoseiny
Time:
14h-15h
Presentation Recording 
Slides
ArtEmis: Affective Language for Art
Abstract:

We present a novel large-scale dataset and accompanying machine learning models aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language. In contrast to most existing annotation datasets in computer vision, we focus on the affective experience triggered by visual artworks and ask the annotators to indicate the dominant emotion they feel for a given image and, crucially, to also provide a grounded verbal explanation for their emotion choice. As we demonstrate below, this leads to a rich set of signals for both the objective content and the affective impact of an image, creating associations with abstract concepts (e.g., “freedom” or “love”), or references that go beyond what is directly visible, including visual similes and metaphors, or subjective references to personal experiences. We focus on visual art (e.g., paintings, artistic photographs) as it is a prime example of imagery created to elicit emotional responses from its viewers. Our dataset, termed ArtEmis, contains 455K emotion attributions and explanations from humans, on 81K artworks from WikiArt. Building on this data, we train and demonstrate a series of captioning systems capable of expressing and explaining emotions from visual stimuli. Remarkably, the captions produced by these systems often succeed in reflecting the semantic and abstract content of the image, going well beyond systems trained on existing datasets.
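
For readers who want a concrete picture of what such annotations look like, here is a minimal Python sketch that aggregates per-artwork emotion labels into a distribution. The record fields and example values are hypothetical illustrations, not the actual ArtEmis schema.

```python
from collections import Counter

# Hypothetical ArtEmis-style records: each annotator labels an artwork
# with a dominant emotion and a grounded verbal explanation.
# (Field names and values are illustrative, not the real dataset schema.)
annotations = [
    {"artwork": "wikiart/starry-night", "emotion": "awe",
     "explanation": "the swirling sky feels vast and overwhelming"},
    {"artwork": "wikiart/starry-night", "emotion": "awe",
     "explanation": "it reminds me of standing alone under the night sky"},
    {"artwork": "wikiart/starry-night", "emotion": "sadness",
     "explanation": "the lonely village makes me think of isolation"},
]

def emotion_distribution(records, artwork):
    """Count the emotion labels given to one artwork and normalise them."""
    counts = Counter(r["emotion"] for r in records if r["artwork"] == artwork)
    total = sum(counts.values())
    return {emotion: n / total for emotion, n in counts.items()}

print(emotion_distribution(annotations, "wikiart/starry-night"))
# {'awe': 0.667, 'sadness': 0.333} (approximately)
```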

Biography of Speaker:

Mohamed Elhoseiny is an Assistant Professor of Computer Science at KAUST. Previously, he was Visiting Faculty at the Stanford Computer Science department (2019-2020), Visiting Faculty at Baidu Research (2019), and a Postdoctoral Researcher at Facebook AI Research (2016-2019). Dr. Elhoseiny completed his Ph.D. in 2016 at Rutgers University, during which he also spent time at SRI International in 2014 (best intern award) and at Adobe Research (2015-2016). His primary research interest is in computer vision, especially efficient multimodal learning with limited data in zero-/few-shot learning and Vision & Language. He is also interested in Affective AI, especially in understanding and generating novel art and fashion. He received an NSF Fellowship in 2014 and the Doctoral Consortium award at CVPR’16. His zero-shot learning work was featured at the United Nations, and his creative AI work was featured in MIT Tech Review, New Scientist Magazine, and HBO’s Silicon Valley. At the first AI Artathon 2020 (part of the Global AI Summit), Dr. Elhoseiny served alongside Luba Elliott and Gene Kogan as a keynote speaker, panelist, and judge, helping qualify 20 of the 50 participating teams to the next stage; the 50 teams were formed from 300 artists and AI engineers selected from 2,000 applicants, a 6% pre-selection rate. He serves as an Area Chair at CVPR’21 and ICCV’21 and organized the CLVL workshops at ICCV’15, ICCV’17, and ICCV’19.

Talk 2
Wednesday, May 5, 2021
Speaker:
Dr. Noemi Mauro
Time:
14h-15h
Presentation Recording
Slides
AI and HCI Methods for Cultural Heritage Exploration
Abstract:

The richness of tangible and intangible Cultural Heritage (CH) poses great opportunities and challenges to the development of successful ICT tools for its curation, exploration and fruition. Since CH sites are extremely rich in objects and data, they expose people to far more information than can realistically be experienced; therefore, efficient methods for information search, filtering and presentation are needed to help users find the items they are most interested in and to explore them online and onsite, focusing on personal viewpoints and information needs.
Moreover, the user should be seen not only as the information consumer but also as the producer. For example, citizens can flag points of historical-artistic interest, their state of repair and any problems to provide tourists with promotional information and Public Administrations with monitoring information. In the last few years, methodologies and technological utilities for information sharing and participatory decision-making related to CH have emerged.
In this talk we will explore AI and HCI methods and tools that support users in CH exploration and involve them in the participatory decision-making processes that enrich and improve existing knowledge related to CH.
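
As a minimal, self-contained sketch of the information-filtering side of such work, the snippet below ranks cultural heritage points of interest against a user profile using cosine similarity over tag weights. The PoIs, tags, and profile are invented for illustration and do not come from the speaker's systems.

```python
import math

# Hypothetical points of interest described by weighted tags (illustrative only).
pois = {
    "Palazzo Madama": {"baroque": 1.0, "museum": 1.0, "architecture": 0.8},
    "Mole Antonelliana": {"landmark": 1.0, "cinema": 0.9, "architecture": 0.7},
    "Museo Egizio": {"museum": 1.0, "archaeology": 1.0},
}

def cosine(a, b):
    """Cosine similarity between two sparse tag-weight dictionaries."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A user profile, e.g. learned from past online and onsite interactions.
user = {"museum": 1.0, "architecture": 0.6}

ranking = sorted(pois, key=lambda p: cosine(user, pois[p]), reverse=True)
print(ranking)  # PoIs ordered by how well they match the user's interests
```

Real systems layer much more on top of this, such as session-based models and participatory signals like citizen-flagged points of interest, but the filtering core is the same idea.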

Biography of Speaker:

Noemi Mauro is a Postdoctoral Researcher at the Computer Science Department of the University of Torino, where she obtained a PhD in Computer Science with Honors. Her research interests concern user modeling, recommender systems, cultural heritage, information filtering and information visualization. She recently won the best paper award at UMAP 2020 with the paper "Personalized Recommendation of PoIs to People with Autism" and the outstanding program committee member award at HT 2020. During her PhD, she visited a research group working on Session-Based Recommender Systems at Alpen-Adria-Universität Klagenfurt, and she collaborated with University College Dublin on topics related to Geographic Information Retrieval. She obtained her BSc and MSc in Computer Science at the University of Torino, where she won the best Artificial Intelligence master's thesis award. She is a program committee member of the top conferences in her research areas and a reviewer for several related journals.
She has been co-chair of three editions of the Workshop on Personalized Access to Cultural Heritage (PATCH) and she is a co-guest editor of the special issue "AI and HCI Methods and Techniques for Cultural Heritage Curation, Exploration and Fruition" in the Applied Sciences journal.

Talk 3
Wednesday, May 19, 2021
Speaker:
Dr. Ahmed Elgammal
Time:
14h-15h
Presentation Recording
The Shape of Art History in the Eyes of the Machine 
Abstract:

In this talk, I will argue that teaching the machine how to look at art is not only essential for advancing artificial intelligence, but also has the potential to help address the division between the arts and sciences. I will present results of recent research activities at the Art and Artificial Intelligence Laboratory at Rutgers University. We investigate perceptual and cognitive tasks related to human creativity in visual art. In particular, we study problems related to art styles, influence, and the quantification of creativity. We develop computational models that aim at providing answers to questions about what characterizes the sequence and evolution of changes in style over time. The talk will cover advances in automated prediction of style, how that relates to art history methodology, and what that tells us about how the machine sees art history. The talk will also delve into our recent research on quantifying creativity in art in regard to its novelty and influence, as well as computational models that simulate the art-producing system.
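
To make "automated prediction of style" concrete, here is a generic transfer-learning sketch: an ImageNet-pretrained CNN whose final layer is replaced to predict a handful of style labels. This is a textbook illustration under invented style names, not the Rutgers lab's actual model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical style vocabulary; real art-historical taxonomies are far richer.
STYLES = ["Impressionism", "Cubism", "Baroque", "Abstract Expressionism"]

# Start from an ImageNet-pretrained backbone and swap in a new classifier head.
# The head is untrained here; in practice it is fine-tuned on style-labelled
# artworks (e.g., paintings annotated with their art movement).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(STYLES))

def predict_style(image_tensor):
    """Return the predicted style for a normalised 3x224x224 image tensor."""
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))  # add a batch dimension
    return STYLES[logits.argmax(dim=1).item()]

# Example call with a random tensor standing in for a preprocessed painting.
print(predict_style(torch.randn(3, 224, 224)))
```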

Biography of Speaker:

Dr. Ahmed Elgammal is a professor at the Department of Computer Science, Rutgers University. He is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers, which focuses on data science in the domain of digital humanities. He is also an Executive Council Faculty member at the Rutgers University Center for Cognitive Science. Prof. Elgammal has published over 180 peer-reviewed papers, book chapters, and books in the fields of computer vision, machine learning and digital humanities. He is a senior member of the Institute of Electrical and Electronics Engineers (IEEE). He has received several National Science Foundation research grants, including the CAREER Award in 2006. Dr. Elgammal's research on knowledge discovery in art history has received worldwide media attention, including reports in the Washington Post, the New York Times, the Boston Globe, NBC News, the Daily Telegraph, Science News, and others. In 2016, a short TV documentary produced for PBS about his research received an Emmy Award. Dr. Elgammal received his M.Sc. and Ph.D. degrees in computer science from the University of Maryland, College Park, in 2000 and 2002, respectively.

Talk 4
Wednesday, May 26, 2021
Speaker:
Dr. Yezhou Yang
Time:
16h-17h
Presentation Recording
Slides
Perceiving beyond Visual Appearances: from Artistic techniques towards Robust AI
Abstract:

The goal of Computer Vision, as coined by Marr, is to develop algorithms that answer What are Where at When from visual appearance. The speaker, among others, recognizes the importance of studying underlying entities and relations beyond visual appearance, following an Active Perception paradigm. This talk will present the speaker's efforts over the last decade, ranging from 1) reasoning beyond appearance for visual question answering, image understanding, and video captioning tasks, through 2) temporal and self-supervised knowledge distillation with incremental knowledge transfer, to 3) their roles in a robotic visual learning framework via a Robotic Indoor Object Search task. The talk will also feature the speaker's efforts to draw inspiration from artistic techniques to develop robust AI systems, especially in the Vision and Language research field.
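
As background to point 2), the snippet below sketches the standard temperature-scaled knowledge distillation loss (Hinton et al., 2015), in which a student network matches a teacher's softened output distribution; it is offered as generic context, not as the speaker's specific method.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend a soft loss against the teacher with a hard label loss.

    The temperature T softens both distributions, and the T*T factor keeps
    the gradient scale comparable across temperatures."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy example: a batch of 8 samples over 10 classes.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels).item())
```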

Biography of Speaker:

Yezhou Yang is an Assistant Professor at the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, where he directs the ASU Active Perception Group. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and high-level reasoning over the primitives for intelligent robots. Before joining ASU, Dr. Yang was a Postdoctoral Research Associate at the Computer Vision Lab and the Perception and Robotics Lab of the University of Maryland Institute for Advanced Computer Studies. He is a recipient of the Qualcomm Innovation Fellowship (2011), the NSF CAREER Award (2018), and the Amazon AWS Machine Learning Research Award (2019). He received his Ph.D. from the University of Maryland at College Park and his B.E. from Zhejiang University, China.

Talk 5
Wednesday, June 2, 2021
Speaker:
Deniz Kurt
Time:
14h-15h
Presentation Recording
Slides
The Work of Art in the Age of Artificial Intelligence
Abstract:

The artness of machine-produced artworks has become a broad discussion topic that concerns fields from cognitive science and neuroscience to art history and cultural studies. It raises much-debated questions: who is the artist of an AI-generated artwork? Is an AI algorithm a tool or a maker? And if an emotionless entity produces art without the intention of making art, can we still call it an artist, or the result an artwork? This talk will discuss these questions alongside the concept of creativity as a cognitive ability and its aspects that are now being modelled in artificial intelligence. While the reproduction of artistic creativity in machine intelligence is a subject of cognitive studies, the artistic 'aura' of the generated output, a term coined by Walter Benjamin, will be discussed in terms of art philosophy. The talk will also canvass case studies of algorithmic artworks and the speaker's own project.

Biography of Speaker:

Deniz Kurt is a generative artist and multimedia designer from Amsterdam. Having completed her research on Artificial Intelligence at Radboud University, she has been giving lectures and conference talks on computational creativity. Her study Artistic Creativity in Artificial Intelligence focuses on the concept of creativity as a cognitive ability and its applications in artificial intelligence systems, with case studies from music generation, visual arts and literature. Building on her academic background in Linguistics, she completed an M.Sc. in Culture & Media Studies at METU and Tilburg University, and an M.A. in Creative Industries at Radboud University. She now combines computing and design as a programmer and artist, while also working in the digital media fields of Virtual Reality, game development, graphic design, ML data art and algorithmic artworks.

Talk 6
Wednesday, June 9, 2021
Speaker:
Dr. Siham Tabik
Time:
14h-15h
Presentation Recording
Slides
MonuMAI: explainable monuments architectural style classification with AI and citizen science
Abstract:

An important part of art history can be discovered through the visual information in monument facades. However, analyzing this visual information, i.e., morphology and architectural elements, requires deep expert knowledge. An automatic system for identifying the architectural style or detecting the architectural elements of a monument from a single image would certainly help improve our knowledge of art and history. Building such a tool is challenging: some styles share architectural elements, some monuments are in a poor state of conservation, and the images themselves contain noise. In this seminar I will present the MonuMAI (Monument with Mathematics and Artificial Intelligence) framework. This framework includes an explainable deep learning model, called MonuMAI, that (1) analyzes a monument image to determine the monument's architectural style and (2) explains the model's decision by highlighting the components that led it to that decision. MonuMAI also provides a free mobile app for use in real-life conditions.
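
The sketch below illustrates the general "classify, then explain by parts" pattern the abstract describes: detected architectural elements vote for a style, and the top-contributing elements double as the explanation. Element names, weights, and detector scores are invented; this is not the actual MonuMAI pipeline.

```python
# Hypothetical element-to-style evidence weights (illustrative only).
STYLE_WEIGHTS = {
    "Gothic":      {"pointed_arch": 1.0, "rose_window": 0.9},
    "Renaissance": {"round_arch": 0.9, "triangular_pediment": 0.8},
    "Baroque":     {"solomonic_column": 1.0, "broken_pediment": 0.9},
}

def classify_and_explain(detections):
    """detections maps element names to detector confidences in [0, 1].

    Returns the best-scoring style together with the elements that drove
    the decision, mimicking explanation by highlighted components."""
    scores = {
        style: sum(w * detections.get(elem, 0.0) for elem, w in weights.items())
        for style, weights in STYLE_WEIGHTS.items()
    }
    best = max(scores, key=scores.get)
    evidence = sorted(
        ((e, w * detections.get(e, 0.0)) for e, w in STYLE_WEIGHTS[best].items()),
        key=lambda kv: kv[1],
        reverse=True,
    )
    return best, [elem for elem, score in evidence if score > 0]

# A facade where the detector found a pointed arch and a rose window.
print(classify_and_explain({"pointed_arch": 0.92, "rose_window": 0.71}))
# ('Gothic', ['pointed_arch', 'rose_window'])
```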

Biography of Speaker:

Siham Tabik received the B.Sc. degree in physics from University Mohammed V, Rabat, Morocco, in 1998 and the Ph.D. degree in Computer Science from the University of Almería, Almería, Spain. She is currently a Ramón y Cajal researcher (tenure-track position) at the University of Granada. Her research interests include deep learning models for art, remote sensing and video surveillance. A complete list of her publications is available online.