Kristina Tica and Joaquin Santuber. 2025. COMPUTING HUMAN OVERS[A]IGHT: LAW/APPARATUS/VISION/AGENCY.
In Proceedings of the sixth decennial Aarhus conference: Computing X Crisis (AAR '25).
Association for Computing Machinery, New York, NY, USA, 244–251.
https://doi.org/10.1145/3744169.3744187
INTRODUCTION
The point of departure of the Computing Human Overs(a)ight critique is the notion of human oversight enacted by the EU law regulating AI, the European Artificial Intelligence Act, Article 14. We argue that computer vision-based systems, understood as automated processing in real time, render the human oversight declared by the EU AI Act for high-risk contexts useless. This legal provision refers to “human-machine interfaces”, “real-time control”, a “natural person” and even a “stop button” to bring the system to a halt. Human oversight is the currency that the legislator asks society to pay in exchange for the possibility of having high-risk AI systems. As such, it lies at the heart of the question of control, of the subjugation of machines to humans, or, put differently: what is left for us humans now that AI is there and everywhere?
In the promise of a world computed by machinic perception, Computing Human Overs(a)ight investigates the notion of human oversight in applications of high-risk AI-based systems, as declared in the EU AI Act, Article 14, which has already been set into action. In the expansion of global infrastructure, as Tech accelerates from Big to Bigger, the decade behind us has been marked by growing pressure on our societal capacities to adopt and adapt to all emergent technological hopes and hypes. Guided by the law as a framework for our critique, we argue that law reveals the skeleton of a society: its commitments, preferences and predilections, and also its vices. The law is formulated and placed into action once a certain set of properties, behaviours and regulations is enacted in the functioning of society. It can therefore help us navigate the current understanding of human-machine operations, mainly in the scope of computer vision, not only on a technical but also on a social, ethical and legal level, questioning human agency in the automated apparatus.
From a legal perspective, we take inspiration from the opening of Robert Cover's Nomos and Narrative, “We inhabit a nomos — a normative universe. We constantly create and maintain a world of right and wrong, of lawful and unlawful, of valid and void” [1][1983], in conjunction with Henri Bergson's “this aggregate [ensemble] of images is what I call the universe” [2][1896 (1991)]. As a result, our view of legality can be summarized in the following sentence: we construct a normative universe, which is an aggregate of images. This is in line with the idea that the legal system is “first a system of images then a system of rules”, put forward by Peter Goodrich [3][1991]. Thus, the question is: how do we construct a legality when there is no image to see? How do we exercise human oversight and create and maintain a world of lawful and unlawful in AI systems from computational images that do not let themselves be seen by human eyes?
Building on a tradition of image studies from media theory and the arts, and on critical legal theory ([4][Vismann 2013], [5][Philippopoulos-Mihalopoulos 2015], [6][Goodrich 1991]), we work with the concepts of the technical image [7][Flusser 1985], invisuality [8][MacKenzie and Munster 2019], and the operative image [9][Parikka 2023]. Through this critique of human oversight we address the imaginaries of the EU legislator, exposing the [im]possibility of human oversight. At the bottom of the issue lies the collision between the will of the law and the operations of AI systems, the constitutive tension between their ideologies and materialities. More broadly, we address the social impact of computation under the notion of AI, marking the current state of human agency in the automation of decision-making, labour, law, and governance, and providing insight into the computational, legal and artistic examination of human oversight in high-risk AI systems. After introducing the frameworks of the law in action, we discuss the ontology and teleology of an apparatus for human oversight. The formation of a fixed place is an illusion of control, its function and aesthetic mainly symbolic rather than operative.
Through such a framework, we decode and deconstruct human oversight under notions of contemporary algorithmic culture ([10][Pasquinelli 2019]; [11][2023]), and relate it to psychological, perceptual and cognitive shifts in visual culture [12][Berardi 2015], to artistic practices of representation ([13][Crawford and Joler 2018]; [14][Tica 2023]), and to socio-demographic concerns and consequences of automation [15][McQuillan 2017] that have accelerated over the last decade, due to the global distribution and adoption of the tools, models and products under the roof-term of Artificial Intelligence (AI). Against a background of public branding, as mystification or commodification of these tools, and of the actual technological developments and systemic implementation on a global scale, in confusion and dissonance, the understanding of how, why, and for whom these systems work comes into urgent question.
Political determination and social challenges are conjoined with the technological environment, and in such a constellation, it is necessary to disambiguate the distribution of responsibilities, and the notion of agency between the human [cognition] and the automated [systems]. In the public discourse, there are frequent instances of praising automated systems and algorithmic data processing as a form of intelligence, which obfuscates the purpose and the limits of the implementation of these algorithms and tools for different systems and industries. This perspective is one of the key motifs in this critique - to differentiate the operations of AI as a technological cluster of tools and models, hardware and software components, and AI as a discourse, concept and, nevertheless, ideology. Such understanding of AI is central for us to be able to [critically] compute human oversight in all its [statistical] probabilities.
PART I LAW
1. Human Oversight by the European Artificial Intelligence Act: Article 14
“Article 14”
Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
[…]
4.(e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
[EU AI Act: Art. 14 (1)(4)]
Source: https://artificialintelligenceact.eu/article/14/
With the increase in availability of AI-based systems, the EU took the lead in regulating the implementation of these systems through the EU AI Act, prohibiting certain practices and creating new control mechanisms. The EU AI Act proposes new concepts that leave many questions open, for example those related to the human oversight of high-risk AI systems [16]. Instead of leaving these questions to the experts in computer science, software engineering, data science or statistics, the idea is to find embodied and situated ways of social participation in the definition of AI legalities (what is legal-illegal, lawful-unlawful, right or wrong [17][Cohen 2012]), exploring how EU legislators, through the AI Act, address the polyvalent blend of scientific, commercial, and political interests that AI entails.
As an emergent socio-technical phenomenon, AI poses new questions to our society, and as such, AI legislation resorts to futurist imaginaries grounded in science fiction films and literature, as well as in nuclear war scenarios [18][Green et al., 2024]. This is especially telling in how the law, the EU AI Act, imagines human oversight of high-risk AI systems, via “human-machine interfaces to control in real-time the systems – including a stop button” [19][EU AI Act. Art. 14 (4)].
These abstract placeholders of human oversight are left by the law to be filled by someone. These are not mere technical decisions of computational efficiency, stability and capacity, but rather of a political character. The proposal being put forward is that people, situated in their communities, are the ones who should give meaning to the questions of oversight — by whom? What content is in? What is out? When does it become censorship? What are the limits? How do we envision participatory forms of decision-making regarding human oversight?
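To make the thinness of these placeholders tangible, consider a deliberately bare-bones sketch, in Python, of how the ‘stop button’ of Article 14(4)(e) could be literally fulfilled in code. Everything in it (names, structure, behaviour) is our own assumption rather than an implementation prescribed or endorsed by the Act; the point is precisely how much the provision leaves undefined about what the overseeing person can actually see, and on what grounds they would ever intervene.

```python
# Hypothetical, minimal sketch of a "human-machine interface" with a stop
# procedure, as the legal text imagines it. Not a real or required design.
import threading
import time

stop_event = threading.Event()  # the "'stop' button or a similar procedure"

def high_risk_system() -> None:
    """Stand-in for an AI system processing a stream of inputs in real time."""
    while not stop_event.is_set():
        # ... automated decision-making happens here, opaque to the overseer ...
        time.sleep(0.1)
    print("System brought to a halt in a 'safe state'.")

worker = threading.Thread(target=high_risk_system)
worker.start()

# The "natural person" overseeing the system: a single blocking prompt.
input("Natural person overseeing: press Enter to stop the system... ")
stop_event.set()
worker.join()
```

The triviality of such a sketch is the argument: the code satisfies the letter of the provision while answering none of the political questions above.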
From that point, this critique establishes a dialogue on feasibility, accessibility and flexibility in the development and understanding of both systems: the law, as well as the automated decision-making processes under the term of AI. Both systems function on abstract rules, aiming to be universally applicable, yet changing specifically and fragmentarily as they live and transform with society. New rules must be invented, and the old ones revisited or readjusted, at the pace of soci[et]al changes.
2. Human Oversight for high-risk AI systems
The European Artificial Intelligence Act introduces the notion of human oversight for the purpose of allowing AI-based applications that may affect fundamental rights, security and health, categorized as high-risk AI systems. In a way, human oversight is there as a condition, a license to take the risk.
In the EU AI Act, Annex III: High-Risk AI Systems Referred to in Article 6(2) [20], there is a list of proposed high-risk AI systems, which in summary includes the following:
“These use cases include AI used in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. However, some exceptions apply for AI used for biometric identity verification, detecting financial fraud, or organising political campaigns.”
[EU AI Act: Annex III, ref. Art.6(2)]
This Annex of high-risk AI systems brings us to a question: who is at risk in this view, the system or the civil body? Is human oversight there to protect the system, keeping it functional by guarding its lawful borders, or to protect human dignity and safety at the margin of it? As Benjamin Bratton puts it:
In order to govern geopolitical flows, emergent geopolitics requires good and sufficient information about what it governs so as to identify and enforce the broad outlines of any plan. [Bratton 2019][21]
In Computing Human Overs(a)ight we explore the manipulative capacities of a data image, and the limitations of human decision-making over what is seen, what is considered a risk, and what content is being regulated under the guise of a safety protocol for post-human property and agency [22][Käll 2023]. To understand the framework for human oversight, we need to understand what the properties of the systems-in-the-making are, as well as what the priorities of their makers/owners are, or rather, how they affect the understanding of the external world.
To operate on an image, the image already must be set in a frame and reduced to a fixed spot, from which data is translated and mass-absorbed into the system. Such image-data is an object of computation; it is inscribed in a larger modality of governance characterized by pure functionality without meaning and by the automation of thought and will [23][Berardi 2012].
PART II APPARATUS
Taking contemporary technical images as a starting point, we find two divergent trends. One moves toward a centrally programmed, totalitarian society of image receivers and image administrators, the other toward a dialogic, telematic society of image producers and image collectors.
Vilém Flusser - Into the Universe of Technical Images [24][1985]
In the essay “What is an Apparatus?”, Giorgio Agamben [25][2009] traces the philosophical genealogy of the concept of an apparatus (dispositif) back to Hegel, via Foucault (dispositif/positivité) and his teacher Hyppolite. For Hegel, a positive religion (Positivität), in opposition to a natural religion, refers to the historical element, a set of beliefs, rules and rites in a certain time and society, an obstacle to the freedom of man, a constraint on the soul externally imposed on the individual [26][Agamben 2009, 4-5]. Following this thread, the parallel to the discussion in legal theory between naturalism and positivism is clear: apparatuses (dispositifs) are the core of positivist law. In line with Foucault, our interest in this paper is not to reconcile the natural and the positive (Hegel), or to highlight their tensions, but rather the “investigation of concrete modes in which the [legal] positivities [or the apparatuses] act within the relations, mechanisms, and ‘plays’ of power” [27][Agamben 2009, 6]. Or, put differently, how these legal apparatuses of ordering come to matter in the relationship between humans and AI.
Dispositif, in French, or dispositivo in Spanish, refers to mechanisms and practices that can affect the order of things. The word can be divided into the roots dispos- and -tivo (think of the suffix -tivity): the first part meaning to put or to place, and -tivo marking relation. Thus, we can also think of apparatuses as specific ways, practices and mechanisms, of putting in relation. Moreover, the dispositive, or dispositivo, refers to the part of a judicial decision in which the court attests the application of the law, or of a specific legal provision, to the case in dispute. Laws are general and abstract, and through this apparatus, the dispositive part of the judgement, they are applied to a specific and concrete situation.
A byproduct of the apparatus is the creation of a subject: human oversight creates a subject called the natural person, who is awarded the capacity of overseeing. The trick here is that while the natural person, as de jure subject, oversees an object, an AI system, the natural person is in turn being seen by the AI system as a de facto subject [28][Vismann 2013]. This also raises the question of boundary-making: of the subject first, but then within the de jure subject, the natural person (natural as opposed to positive, the Hegelian apparatus?). Who is awarded the capacity to oversee?
Braidotti [29][2020] warns us against this universalism that erases the structural conditions of exclusion of some: while we are in this together, we are not all the same. Then, in the face of the European legislator, who can see? Rephrasing Spivak's key framing question in colonial studies, “can the subaltern speak?”, the EU AI Act leaves us with the question: can the subaltern see? And if yes, from where? Does the subaltern still see/over, or see from below, or from within? This is not trivial: at the centre of the question of human oversight is the answer to what is left to us now that AI is there. The role assigned to people, now that AI is there, is overseeing; and while we know AI is not there, the ideological blindness of the legislator assures us that it is.
The first excess that the EU AI Act removes is the humane excess, by defining AI systems as machine-based, making them discrete, manageable, governable [30][EU AI Act, Art. 3(1)]. As such:
The apparatus manages and administers order and justice, but to do so, those qualities resistant to systematization must be stricken from the record. This subtraction process, this shaving off the “excess,” is necessary for the apparatus to function. [31] [Kahanoff 2009]
There have been extensive efforts to make visible the human entanglement with AI systems: through physical and intellectual labour [32][Crawford and Joler 2018], through consumption, and, most importantly, through being the [crowd]source of the collective intelligence that AI attempts to mimic [33][Pasquinelli 2019]. However, for the apparatus to work, that excess needs to be left out, excluded, invisibilized.
At the centre of an apparatus there is the constitutive tension between what is revealed and what is concealed, and so it is for the law. “Beyond this revelation—this visibility—she also argues that the founding of law requires an act of concealment in which, once again, a performance mode is called upon. [...] a ‘counter-performance within performance’.” [34][Kahanoff 2009]. For both Agamben and Kahanoff, this is the distinction between being and acting that comes with an apparatus. The distinction is also a split, a cut, which “divides and, at the same time, articulates in God being and praxis, the nature or essence, on the one hand, and the operation through which He administers and governs the created world, on the other” [35][Kahanoff 2009].
As such, the apparatus is born out of the tension between ideology and materiality. The trick of the apparatus is then to renounce its ideological kinship and relate itself to technique, to the factual and machinic operations of science.
Far from God, close to the server.
1. Project Cybersyn
Cybersyn was a successfully established national project, a cybernetic revolution that almost happened in Chile in the 1970s. Moving away from the Silicon Valley discourses that place it as the origin of everything, we ground our exploration of an apparatus for automated, computational systems and human oversight in a different, rather unexpected, socialist genealogy. Concepts of a synchronized, networked administrative system, envisioned on the premises of cybernetics, had already been developed in the mid-20th century, such as Cybersyn, in Chile during the presidency of Salvador Allende. Chile was connected to the international cybernetics community almost from the outset: the archive of Norbert Wiener's papers contains a 1949 letter that Wiener received from Chile, a mere three months after the first printing of his book Cybernetics, the book widely credited with bringing the new interdisciplinary science to the attention of the public [36][Medina 2014, 9].
This project can be understood as a national-level AI for a socio-democratic economy, one that was used and effectively operated on real-time data. Cybersyn's telex information system was put to effective use in October 1972. The telex network “enabled communication across regions and the maintenance of distribution of essential goods across the country” [37][Medina 2014, 141]. Through this real-time data exchange, the government managed to use the information system to navigate and respond, for example, to a large-scale truck drivers' strike. The network helped the Chilean government “assess the rapidly changing strike environment as well as adapt and survive” [38][Medina 2014, 151].
Fig. 1: Project Cybersyn (source: Wikipedia)
The cybernetic premises of interconnectedness, of a networked system between human and non-human agents, ideally promised to enable an equilibrium: optimal balances between inter-species agents, human-machine communication, real-time information processing, and operation in a distributed network of different actors. Unfortunately, as the destabilisation of the government took place, the Cybersyn project suffered a heavily negative media campaign, portrayed as a totalitarian project of mass surveillance and control [39][Medina 2014, 174-175]. The project came to an end as the government was overthrown.
Fig. 2: Project Cybersyn (source: Wikipedia)
After the coup d'état, funded and supported by the CIA within the Cold War economic warfare of the United States, the project Cybersyn was completely destroyed. The looming fear of the machine watching over us all is present to date in public discourse, especially when branded as a totalitarian-state-control or technocratic threat. In this state-run, national-level AI, the networks and systems were created for the benefit of the economy, and had the technological system not been destroyed along with the political one, it would have been possible to see how the social negotiations regarding the development of the system would progress over time. As that chance was taken away, we can only speculate. The entrepreneurial and libertarian stream found a way to rebrand a technocratic image into a techno-evangelistic or techno-solutionist one, promising solutions and services of proprietary technologies for [private] profit.
Aside from the fear of dismal futures, those who fear less found fortune in these ideas: that same ‘totalitarian’ Cybersyn, in the words of Evgeny Morozov, helped pave the way for big data and anticipated how Big Tech would operate, for example in “Uber's use of data and algorithms to monitor supply and demand for their services in real time” [40][Morozov 2014]. U.S. imperialism did not only fund and sponsor the coup d'état that ended Allende's presidency and the Cybersyn project; it also stayed faithful to its colonial ambitions, extracting and reappropriating the ideas so as to reform and reframe them into a new ideological and practical agenda.
From a current standpoint, it is hard to imagine a sovereign techno-social system on a national or international level that would benefit the economy, the regulative system and society, as another idea has been historically proven: that the alternative is not allowed. The global-scale framework of resource extractivism [41][Crawford 2021] and data accumulation holds its competences and competitiveness in the persisting Cold War narrative. What was once the space race is nowadays the race for Big Tech's biggest, fastest AI model. An accelerationist pressure runs at all costs [physical, material, cognitive, intellectual], and instead of solving or integrating systems to directly improve the functioning of society as an organism, it further destabilises economies and politics and amplifies social divisions on a global scale.
2. The Operations Room
The people enrolled in this apparatus risk an abstraction of accountability and the production of ‘thoughtlessness’.
Dan McQuillan - Data Science as Machinic Neoplatonism [2017] [42]
The Operations Room is a regulatory chamber, a fixed space for surveillance and control. It provides a gaze from many sides, yet the view from above does not belong to the human agent. The human agent who observes is also being observed. The question of human oversight is a question of human and computational perception of legality in digital environments: how they make sense of it and how their perception shapes their possibilities for action. Taking a (post)phenomenological approach to law [43][Hildebrandt 2015], we question how people could perceive and make sense of the legal provisions of the EU AI Act, Art. 14, and its obligation to design human-machine interfaces to perform human oversight of high-risk AI systems.
The legal dispositions of the EU AI Act, quoted in the previous section, give us the opportunity to explore different scenarios of how this human oversight obligation becomes a reality, and to imagine decentralized alternatives. These scenarios take effect not only in a technical reality but in a social one, too.
The conceptualisation of the Operations Room for Human Oversight comes with an inquiry into, and a deconstruction of, the premise that an automated computational system can be assigned to run a high-risk decision-making process, and that a human operator-oversighter has adequate access to understand the internal processes of such a system. It also challenges the ideas and concepts on which AI-based systems are being developed. Exposing the human condition and responsibility in the oversight process, its guidelines or ethical concerns, can also expose the logic on which these systems are built: the mistakes and biases, the possibilities to manipulate the data or the desired outcomes [44][Tica 2023].
This apparatus for oversight enacts the world as visualised or framed by legal regulations, the law, and computational statistical operations. The focus is on the process where, by deploying oversight, an insight into the system is enabled: into its materiality [and fragility], seeing beyond what was seen before on an operative scale, but also into the scope of social negotiations that such a system demands.
Before the task of understanding what we see, we need to understand from where we look. The space for oversight is fixed; it is an ocular-centrist control room. It is an embodied experience of the duty and responsibility of control. Such a fixed place is a symbolic centre: the operations room is a representation of control, a theatrical place for filling a bureaucratic role of observation, and not a place from which a decision can have an immediate effect or impact on the real-life situation in negotiation, beyond the intervention of reporting a system malfunction.
The operations room itself holds a cinematic power, a fixed yet distributed gaze, the visual exaggeration of oversight, stacked up with a multitude of sources, information, data-images. In the Cybersyn Project, the Operations Room was a social environment. Characteristically machistic for the cultural moment in which it was made, it had spaces and places for the people in charge to debate or negotiate the processed information, holding the power to make decisions based on discussion and expertise. Truth be told, they all met their demise, as their power over the system was visible, exposed, and therefore traceable and vulnerable.
That is why, by the current [progressive-]proxy standards, the Operations Room for human oversight is a role-play control centre, with a symbolic human worker holding responsibility for the system's calculated risks, enduring a mundane job procedurally approximate to that of a warehouse security guard, while the actual owners observe from afar, outside of any government's territory. While the human-oversighter pursues the work of empty surveillance over a data-reenactment of reality, the machine deterritorializes the action at a distance, translating it into data patterns, into a web of probabilities unseen to human senses, yet delivering a political action.
PART III VISION
Yet what if it is not a human eye, but the inhuman, digital and rhizomatic eye of the web that contemplates images?
Pavoni et al. - SEE [2018, 32][45]
1. The Invisual
Place comes first, then the apparatus, and then the human. Next comes the object of the machinic gaze, and the question of what the display of oversight is for the human worker. The invisual, in the work of MacKenzie and Munster, exists as a nonrepresentational observation that operates in and through the image and, as such, achieves a new modality in the scope of AI-based [or computational] layers of image processing [46][MacKenzie and Munster, 7]. We provide an insight into machinic vision, mainly computer vision, as the set of tools and models where the main political action takes place. By doing so, we analyse the problems of reduction, translation and expansion of an image, from input to output. What makes the computational image invisual is:
[...]the formatting of operations, as various visual processes and materials pass transversally through platforms, cuts off the ability to see across, look at, or step back and observe the vast array of contemporary distributed imaging operations. The platform itself clears visuality of such ‘oversight’. Today, devices themselves perform many of the operations through which observation becomes (a) distributed event. Indeed, some devices specifically integrate distributed observation events by embedding the platform as their design matrix. [47][MacKenzie and Munster, 9]
Computed images go far beyond cinematic or photographic ‘framing’. Following Galloway's remark on digital information as “nothing but an undifferentiated soup of ones and zeros, data objects are nothing but the arbitrary drawing of boundaries that appear at the threshold of two articulated protocols” [48][Galloway 2004, 52], the computed image transposes real-life, real-time events or subject information into an algorithmic processing system, where a pixel matrix of values, predictions and metadata forms the new set of values of the image file. When we talk about computer vision we talk about the dataset, training, model, human labour, and intention, the goals set for the algorithmic task in the code [49][Crawford and Paglen 2019]. We do not talk about representation; we talk about calculation: the detection of data [numeric] patterns, approximations, and recurrences as mathematical correlations and probabilities. The system predicts the result; we decide what we make of it. The display of oversight is the digital, computational image, a re-rendered and re-enacted reality on a screen, an interface of [apparent] control.
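To make this distinction between representation and calculation concrete, consider a minimal, purely illustrative sketch in Python. Nothing in it corresponds to a deployed system; the frame size, class labels and untrained weights are our own assumptions. What matters is the shape of the operation: the input is only a matrix of pixel values, and the output is only a vector of probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# A grayscale frame reduced to a 64x64 matrix of brightness values in [0, 1]:
# the "image" as the system holds it, a grid of numbers rather than a scene.
frame = rng.random((64, 64))

# A toy linear classifier with arbitrary, untrained weights. In practice the
# weights would come from training on a labelled dataset chosen by the
# system's makers, which is where intention and labour enter the calculation.
classes = ["pedestrian", "vehicle", "other"]  # illustrative labels only
weights = rng.normal(size=(len(classes), frame.size))
logits = weights @ frame.ravel()

# Softmax turns raw scores into probabilities: correlations, not representation.
probabilities = np.exp(logits - logits.max())
probabilities /= probabilities.sum()

for label, p in zip(classes, probabilities):
    print(f"{label}: {p:.2f}")
```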
An all-seeing top-down view has blind spots for the plateau of possibilities, for horizontally distributed social relations and frictions. Is decision-making more objective or neutral if it is more distant from the actual place of action/observation? Computational imaging and the scope of its mediations still draw on the human notion of representation, or, in the words of Robin Mackay:
“They draw on the advanced resources, of scientific and technological abstraction (statistical analysis, mathematical modelling, neuropsychology, big data, etc.); but they are deployed largely in fortifying the comfort (and profitability) of what, following Wilfrid Sellars, we can call the ‘manifest image’, the inherited, traditional human self-conception.” [50][Mackay 2015, 5]
An AI system, an algorithmic model based on (in this particular critique) computer vision, can be programmed and produced to trace a certain set of values, arbitrarily chosen by the stakeholder's objectives, whether it is biometric data collection, emotion detection, civil behaviour evaluation, traffic control, law enforcement, or border control [EU AI Act: Annex III ref. Art 6(2)]. What the system is instructed to see is a specific set of values, a controllable behaviour, while anything outside of the predicted scope registers as irregular or even goes undetected. Is the system there to process the ‘normal’, predicted behaviour, or to warn about anomalies? How much work can be imposed upon a human-oversighter, to discriminate all possible data-behaviour anomalies that the system can recognise? And how many will stay overlooked?
The binary classification of behaviour is essentially divided into the categories of: (1) usual behaviour inside the system [or normal, as per the model of the AI system in use]; and (2) unusual behaviour, either (a) inside the system [suspicious or alarming, as the system's model is trained] or (b) outside of the training data and parameters of the system, which stands for any new event or circumstance, a possible false positive or false negative. On the notion of false positives, as computer and human bias enter the system-making, another responsibility in human oversight is to make sure the system does not misinterpret a subject [civilian/person] as a criminal or a threat. Anything that does not fit into the frame of normal, usual behaviour is turned into a suspicious action. In the eye of the algorithm, we are all possible suspects.
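The following sketch restates this classification in code, as a purely hypothetical illustration: the threshold, scores and event names are invented for the example and do not come from any real system. It shows how the cut between (1), (2a) and (2b) reduces to an arbitrary cut-off, and where false positives are produced for the human-oversighter to sort out.

```python
ALARM_THRESHOLD = 0.5  # an arbitrary cut-off, chosen by the system's makers

def classify(anomaly_score: float, seen_in_training: bool) -> str:
    """Map a model's anomaly score onto the categories discussed above."""
    if not seen_in_training:
        # (2b) outside the training data and parameters: any new event or
        # circumstance, a likely source of false positives or false negatives.
        return "unusual (outside training data)"
    if anomaly_score < ALARM_THRESHOLD:
        # (1) usual behaviour inside the system, "normal" as per the model in use.
        return "usual (normal, as per the model in use)"
    # (2a) unusual behaviour inside the system, flagged as suspicious or alarming.
    return "unusual (suspicious or alarming)"

# A person running for a bus may score high and become "suspicious": a false
# positive that only human review could, in principle, correct.
events = [
    ("commuter walking", 0.12, True),
    ("person running for a bus", 0.81, True),
    ("street carnival", 0.40, False),
]
for name, score, seen in events:
    print(f"{name}: {classify(score, seen)}")
```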
2. The Algorithm
The algorithmic categorisation of behaviour is mere pattern detection, unable to detect or understand the nuances of the law. In the journey from a real-life event to a data visualisation or a computer vision-processed image, AI systems are recreating an event, and evidence, assessing the truthfulness or [f]actuality of the event. Transposing the witness and the decision-maker into a system and an automated algorithmic protocol amplifies the risks of misinterpretation and of a biased understanding of the event/case under observation. A corollary effect of the digitalization of society and organizations is the need to create new regulations and extend existing ones, but recently also the “displacement of traditional sources of regulation and coordination by technological regimes” [51][Scott and Orlikowski 2022].
This decades-long process of deterritorialization [52][Deleuze and Guattari 1987] by technological regimes can be articulated under two logics: digital omniscience, in which all aspects of reality can be captured in the form of digital data, and digital omnipotence, in which all activity is controlled by information systems [53][Schildt 2022].
In the transmission/translation of the input, the algorithm does not mediate the event; it can manipulate the [f]act, by reduction, visual amplification and extrapolation of certain patterns. Therefore, there is always something left unseen in human oversight. Reliance on computer vision in high-risk and vulnerable cases imposes another risk: the invisuality of both the social behaviours and the pattern-tracing in the recreation of ‘evidence’ of an event. As noted above, the apparatus for oversight enacts the world as visualised or framed by legal regulations and computational [statistical] operations, an algorithmic probability of legally approved behaviour.
Law and its regulations are a framework, a system for navigating context-specific environments. If the law operates on a binary, deterministic categorisation, it becomes a projection of the totality of normativity/normalisation, a systemic punishment for everyone who falls outside the literal protocol of predicted behaviour. The legal understanding of high-risk AI systems overlooks the need for social negotiations, while humanising the algorithmic protocol by placing a human-oversighter as a mere witness of a computational process, and not of the event itself.
PART IV AGENCY
Algorithmic productive force avoids causality, evades accountability, and restricts agency to participation and adaptation.
- McQuillan 2015 [54]
In The Eye of the Master, Matteo Pasquinelli [55][2023] reminds us that, similar to the theory of the division of labour in the process of industrialization, AI is a hypermimicry of collective intelligence: like robots in factories that do not reinvent the arts of the chain, but just become metallic versions of the arms of the labourers. The idea of human oversight is the last death rattle of reason on its way out, a last attempt to keep the show going, the reasonable man at the cusp of an order falling apart into automated thoughtlessness (as per McQuillan).
And it mimics not only labour, but all sorts of practices and refined ways of doing things, of achieving something, of transforming the environment, of relating to each other, which have been under a process of sophistication for thousands of years. Not only that, but it also removes that collective knowledge from the public domain, privatizing it by turning it into algorithms, scripts, and codes that can be controlled, while they control the human co-labourer, those who remain part of the contemporary division of labour.
While the human/over/sight apparatus points at AI systems, specifically high-risk AI systems, following Pasquinelli, what we are seeing at the end of the chain is an entangled human-machine labour. In a way, it works as a remediation of the control of labour, but from a distance and at scale. In these terms, seeing from afar can be understood as the fantasy of remote-sensing labour, of remote-sensing the other. This brings the owner of capital the possibility to continue to extract the surplus without having to be close to those producing it; perhaps they can do it from Mars (but that is another fetishism). In understanding such attempts, Yarden Katz states that:
[T]he confusion over AI’s capabilities serves to dilute critiques of institutional power. If AI runs society, then grievances with society’s institutions can get reframed as questions of ‘algorithmic accountability.’ This move paves the way for AI experts and entrepreneurs to present themselves as the architects of society. -[56][Katz 2017, 2]
The algorithmic protocol is a seemingly decentralised, zero-agent, depersonalized power structure, imposing an extractivist method that renders global-scale data for profit, whereas intelligence comes as a collective effort reclaimed and appropriated by the AI entrepreneurs. It is not erasing but displacing human labour and agency, while the agency and the power are allocated to the algorithm and its proprietors.
All the points of contact between the various networks of information transfer, translation, and transmission that are also points of potential transformation [...] that allow difference and thus politics to enter. Politics always operates in the gaps – between coding and recoding – whereas revolution disrupts the fantasy of specular wholeness brought about by algorithmic correctives.
-[57][Schuppli 2013, 20]
In the context of AI ideology and politics, we create the future by statistical determinism. McQuillan argues that a no-alternative-futures discourse is embedded in such a system, where “AI's solutionism selects some futures while making others impossible to even imagine” [58][McQuillan 2023, 45]. As such, we have left the realm of history driven by political will and trapped ourselves in a statistical evolution, attuning to the techno-material conditions of the digital [59][Berardi 2015].
The goal of AI is to intervene on the basis of predicted risk, so applied AI becomes an anticipatory system that, seeing a particular future, pre-empts it. It’s one thing if this is being applied to the movements of a robot arm where the risk is of dropping an object, but another when the AI is making a determination about the sharing out of life chances. - [59][McQuillan 2023]
The intertwinement of technology with the political agenda becomes a totality of technocratic rule under a neoliberal disguise: it is reshaping social relations, governance and surveillance, dismantling institutional power, and obfuscating human agency. In the preemptive politics of AI, the predictive system maintains its political legitimacy by promising to predict the future while at the same time creating it, while data science serves to find the patterns where we want to see them. In reference to McQuillan's understanding of the technology of anticipation and pre-emption, we can also note Massumi's remarks on preemptive politics:
Its [logical regress] receding from actual fact produces a logical disjunction between the threat and the observable present. A logical gap opens in the present through which the reality of threat slips to rejoin its deferral to the future. Through the logical hatch of the double conditional, threat makes a runaround through the present back toward its self-causing futurity. - [60][Massumi 2015, 192]
In the techno-solutionist worldview, in the promise of the future, we are recycling the past. The promise of understanding the future, and therefore preempting it, holds a political power that allows it to affect and conceive the future it is trying to preempt [62][Massumi 2015, 193]. The human factor is there to pick up the mess left behind (the accident of human consequence), while stumbling after the progressive, accelerated calculus. The human individual as worker is entitled to hold the responsibility for systemic overlooking. One of the most notable spins on accountability and agency in the displacement of power is the obfuscation of responsibility in the predictive, preemptive engineering of social dynamics and automated geopolitical control.
CONCLUSION