ARS ELECTRONICA FESTIVAL _IMAGE GALLERY
03-07 September 2025 HUMAN OVERS[A]IGHT: THE OPS ROOM at the Ars Electronica Festival 2025
18 August 2025
RESEARCH ARTICLE PUBLISHED IN ACM DIGITAL LIBRARY:
COMPUTING HUMAN OVERS[A]IGHT:LAW/APPARATUS\VISION/AGENCY /
Proceedings of the sixth decennial Aarhus Conference: Computing (X) Crisis
19-22 August 2025
COMPUTING HUMAN OVERS[A]IGHT:LAW/APPARATUS\VISION/AGENCY research paper
presented at The sixth decennial Aarhus Conference: Computing (X) Crisis
for daily updates follow us on @human.oversaight
_SHORT DOCUMENTARY PT.2 / A DAY AT THE FESTIVAL
video by Reinhard Zach filmed at JKU on 07 September 2025
_SHORT DOCUMENTARY PT.1 / PREPARATIONS
video by Reinhard Zach filmed at JKU on 16 June 2025
HUMAN OVERS[A]IGHT: THE OPS ROOM / JKU x Ars Electronica Festival
Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
[…]
4.(e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
EUROPEAN ARTIFICIAL INTELLIGENCE ACT. ARTICLE 14.
Source: https://artificialintelligenceact.eu/article/14/
_ABOUT THE PROJECT
HUMAN OVERS[A]IGHT: THE OPS ROOM is a real-time generative, interactive audio-visual installation that questions the role of human oversight of high-risk AI systems as established by Article 14 of the European Artificial Intelligence Act. The Article requires that a high-risk AI system be effectively overseen by a natural person, observing the automated processes inside the AI system, and that this person be able to intervene in the system or ‘bring the system to a halt’ via a human-machine interface such as a stop button.
We envision the actual setting proposed by the law, taking it from an abstraction into a real-world environment. Leveraging science-fiction imaginaries, as the legislator did, the installation proposes an embodied participation for visitors in redefining what this abstract and imaginative legal scenario might look like, and how we might understand and control an automated algorithmic decision-making process in real time.
Can the system be stopped by a button, and if not, why not?
We investigate how a human operator can oversee and evaluate these processes by visualising the internal operations of an automated system. Visitors are invited to take on the role of human oversight of a high-risk AI system and are encouraged to engage with a human-machine interface to operate or oversee the system's processes. We envision possible outcomes of attempts to stop or affect the system through such means of interaction.
In its current state, the artwork is a four-channel video and audio installation with a real-time computer-vision application, consisting of custom-made models for object detection, video saliency mapping and video inpainting, and a physical interface* – a custom-made object with thirty-four buttons as modalities of interaction. Visitors are invited to use this interface to interact with the visual output, which is distributed across thirty-three square LED modules stacked into a semi-arc of five self-standing pillars up to three meters high, in an area of eight square meters.
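As a purely illustrative sketch (the mode names, button layout and mapping below are our assumptions, not the installation's actual code), the thirty-four-button interface can be thought of as a dispatcher that maps each physical button to one of the computer-vision operations applied to the live stream, with one button reserved as the Article 14 ‘stop’ button:

```python
# Hypothetical sketch only: maps button presses to CV modes.
# The mode set and layout are assumptions, not the installation's code.
from enum import Enum

class Mode(Enum):
    DETECT = "object_detection"    # highlight detected high-risk objects
    SALIENCY = "saliency_mapping"  # overlay a video saliency map
    INPAINT = "video_inpainting"   # remove detected regions from the frame
    HALT = "halt"                  # the Article 14 'stop' button

def mode_for_button(button: int) -> Mode:
    """Assumed layout: button 0 is the stop button; the remaining
    thirty-three buttons cycle through the three CV operations."""
    if not 0 <= button < 34:
        raise ValueError("interface has thirty-four buttons (0-33)")
    if button == 0:
        return Mode.HALT
    return [Mode.DETECT, Mode.SALIENCY, Mode.INPAINT][(button - 1) % 3]
```

Such a mapping makes the legal abstraction concrete: pressing any button changes what the operator sees, but only one input is a ‘stop’ in the Article 14 sense.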
The real-time video processing runs on a curated dataset of more than a hundred hours of video footage: publicly available recordings of police enforcement, military operations, civil disobedience, protests, mass gatherings and cityscapes. The footage is shown in a monochromatic colour scale, while the computer-vision operations performed on it are highlighted predominantly in red tones, such as red frames or gradient-saturated overlays. The computer-vision model is trained on a selection of high-risk real-life operations, detecting armed police, demonstrations, military vehicles, weapons, attacked areas, demolitions, and environmental catastrophes such as wildfires. The model has been iteratively improved and fine-tuned, and the set of training labels is expected to grow over a longer research period.
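The monochrome-with-red-overlay look described above can be sketched as a simple per-pixel compositing rule; this is a minimal illustration under our own assumptions, not the installation's actual rendering, which is custom:

```python
# Illustrative sketch of the monochrome-plus-red compositing described above.
# The function and its parameters are hypothetical, not the artwork's code.

def composite_pixel(gray: int, detected: bool, strength: float = 0.6) -> tuple:
    """Return an (R, G, B) pixel: grayscale base everywhere, with the red
    channel pushed toward saturation where a CV detection overlaps."""
    if not detected:
        return (gray, gray, gray)
    r = round(gray + (255 - gray) * strength)   # boost red toward 255
    gb = round(gray * (1 - strength))           # attenuate green and blue
    return (r, gb, gb)
```

Applied per frame, undetected regions stay neutral grey while detected regions read as the red frames and saturated overlays the text describes.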