HUMAN OVERSAIGHT
THE OPS ROOM

KRISTINA TICA & JOAQUIN SANTUBER 
METAVERSE LAB, JKU 2025



_NEWS/ANNOUNCEMENTS




18 August 2025
RESEARCH ARTICLE PUBLISHED IN ACM DIGITAL LIBRARY:
COMPUTING HUMAN OVERS[A]IGHT:LAW/APPARATUS\VISION/AGENCY / 
Proceedings of the sixth decennial Aarhus Conference: Computing (X) Crisis

19-22 August 2025 COMPUTING HUMAN OVERS[A]IGHT:LAW/APPARATUS\VISION/AGENCY research paper 
presented at the sixth decennial Aarhus Conference: Computing (X) Crisis


For daily updates, follow us on @human.oversaight


_SHORT DOCUMENTARY
video by Reinhard Zach




HUMAN OVERSAIGHT: THE PROJECT

“Article 14”
Human oversight





1.  High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.


[…]

4.(e)  to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.





EUROPEAN ARTIFICIAL INTELLIGENCE ACT. ARTICLE 14.
Source: https://artificialintelligenceact.eu/article/14/

_ABOUT THE PROJECT


HUMAN OVERSAIGHT: THE OPS ROOM is a real-time generative, interactive audio-visual installation that questions the role of human oversight of high-risk AI systems, as established by Article 14 of the European Artificial Intelligence Act. The Article requires that a person perform human oversight of high-risk AI systems – observing the automated processes inside the AI system – and that this person be able to intervene in the system or ‘bring the system to a halt’ via a human-machine interface, such as a stop button.

We envision the actual setting proposed by the law, taking it from an abstraction into a real-world surrounding. Drawing on science-fiction imaginaries –like the legislator did– this installation proposes an embodied form of participation: people redefine what this abstract and imaginative legal scenario might look like, and how we might understand and control an automated algorithmic decision-making process in real time.

Can the system be stopped by a button – and if not, why not?

We investigate how a human operator can oversee and evaluate these processes by visualising the internal operations of an automated system. Visitors are invited to take on the role of human overseer of a high-risk AI system and are encouraged to engage with a human-machine interface to operate or oversee the system's processes. We envision the possible outcomes of attempts to stop or affect the system by such means of interaction.


In its current state, the artwork is developed as a four-channel video and audio installation with a real-time computer vision application – custom-made models for object detection, video saliency mapping and video inpainting – and a physical interface*: a custom-made object with thirty-four buttons as modalities of interaction. Visitors are invited to use this interface to interact with the visual output, which is distributed across thirty-three square LED modules stacked into a semi-arc formation of five self-standing pillars up to three meters high, in an area of eight square meters.

The real-time video processing runs on a curated dataset of more than a hundred hours of video footage: publicly available video recordings of police enforcement, military operations, civil disobedience, protests, mass gatherings and cityscapes. The footage is shown in a monochromatic colour scale, while the computer vision operations performed on it are highlighted predominantly in red tones, such as red frames or saturated gradient overlays. The computer vision model is trained on a selection of real-life high-risk operations, detecting armed police officers, demonstrations, military vehicles, weapons, attacked areas, demolitions, and environmental catastrophes such as wildfires. The model has been iteratively improved and fine-tuned, and the set of training labels is expected to grow over a longer research period.
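As a minimal sketch of how such a red-on-monochrome rendering could be composed per frame, the following function is a hypothetical illustration only: the detection boxes, `overlay_detections` name and alpha parameter are assumptions, not the project's actual pipeline.

```python
import numpy as np

def overlay_detections(frame_rgb, boxes, alpha=0.4):
    """Render a frame in monochrome, with detected regions
    highlighted by a red gradient-style overlay and a red frame.

    frame_rgb: uint8 array of shape (H, W, 3)
    boxes: list of (x, y, w, h) detections flagged as high-risk
    """
    # Collapse to luminance, then back to three identical channels.
    gray = frame_rgb.mean(axis=2).astype(np.uint8)
    mono = np.stack([gray, gray, gray], axis=2)
    for (x, y, w, h) in boxes:
        region = mono[y:y + h, x:x + w].astype(np.float32)
        red = np.zeros_like(region)
        red[..., 0] = 255  # saturated red (RGB channel order)
        # Blend the detected region toward red.
        mono[y:y + h, x:x + w] = (
            (1 - alpha) * region + alpha * red
        ).astype(np.uint8)
        # Draw a two-pixel red frame around the detection.
        mono[y:y + 2, x:x + w] = (255, 0, 0)
        mono[y + h - 2:y + h, x:x + w] = (255, 0, 0)
        mono[y:y + h, x:x + 2] = (255, 0, 0)
        mono[y:y + h, x + w - 2:x + w] = (255, 0, 0)
    return mono
```

Everything outside a detection stays monochromatic; only the regions the model flags acquire the red frames and overlays described above.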

While navigating this content, participants directly affect the algorithm, changing the content on the screen in real time. The plan for the software development of the installation is to log each activity, in its capacity of [irreversibly] changing the system and its content. In such an iterative feedback loop, we explore how quickly human input can affect a seemingly organised system and change its rules. Besides the real-time visual output, future iterations of the installation will present the anonymously stored logs.
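One way such an anonymous, append-only activity log could be sketched is with a hash chain, so that each recorded button press is tied to everything that came before it and past activity cannot be silently rewritten. This is a hypothetical illustration; the class and field names are assumptions, not the installation's actual software.

```python
import hashlib
import json
import time

class InteractionLog:
    """Append-only, anonymous log of button interactions.

    Each entry stores only the button id, a timestamp, and a
    SHA-256 hash chaining it to the previous entry, echoing the
    idea that every input [irreversibly] alters the system.
    """

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, button_id: int) -> dict:
        entry = {
            "button": button_id,
            "t": time.time(),
            "prev": self._prev_hash,
        }
        # Hash the entry (including the previous hash) to chain it.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return entry
```

No visitor identity is stored: the log captures only which of the buttons was pressed and when, which is enough to replay how human input reshaped the system over time.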


*This installation claims the first publicly available human-computer interface for AI oversight with a stop button: a tool for oversight, simulation, and agency.








_INSTALLATION SETUP


The video installation is screened on a composition of (1) thirty-three square LED modules, arranged into a semi-arc formation of five self-standing pillars up to three meters high, in an area of eight square meters, and (2) the button-object as the centrepiece. The videos are manipulated in speed [slowed down] and in zoomed-in areas, triggering adjustments in the visitor's perception. On one hand, the glowing and blinking red buttons, attached to a metal structure 1.5 m wide and 1.2 m tall, invite a fast reaction; their multitude, together with gamified sound triggers, creates a level of playfulness in the interaction. The videos, by contrast, have a slow atmosphere, demanding patience in anticipation and observation.

The goal of this project is rather to create alarms –both false and real– to open a space for self-evaluation and for the embodiment of a human-machinic process of negotiation, providing feedback interactions whose consequences are decision-making processes. Its purpose is to give the human performing oversight just enough space to feel unsure about the scope of a risk or concern, or to try to understand what or who has been set as a ‘target’ by the algorithmic evaluation. The artwork remains open-ended, leaving space for visitors to develop their own point of self-reflection and to make sense of which AI operations and patterns are at work.

While interacting, participants directly affect the algorithm, changing the content on the screen in real time. By pressing the buttons, different layers of AI operations become visible to the visitor, modifying and conditioning what is overseen.



Installation Render 001 / Ahmed Jamal. Courtesy of the Authors.