The Three Laws of Structured Intelligence
A formal framework for bending singularities in humanity’s favour
1 The singularity is here; its chaos must be governed
1.1 The wrong question
As a person who works with data, I feel as though everyone in the world is arguing about when machine thinking will overtake human thinking in a singularity event.
I have never found this compelling, for it assumes an intentional determinism I do not believe is natural to, say, my research assistant, Mooncake (ChatGPT) [1].
A technological singularity event never sat well with my training in computational complexity applied to statistical algorithms understood as golems [2] (Figure 1).
🚨Our paper `Reclaiming AI as a theoretical tool for cognitive science' is now forthcoming in the journal Computational Brain & Behaviour. (Preprint: osf.io/preprints/ps…)
Below a thread summary 🧵1/n
#metatheory #AGI #AIhype #cogsci #theoreticalpsych #criticalAIliteracy
— Iris van Rooij (@irisvanrooij.bsky.social), 16 August 2024 at 21:40
1.2 Real world problems
I’ve always been more concerned by the chaotic nature of data stacks, and how little interest there was in governance, relative to science fiction hype (Figure 2).
I constantly worry about the consequences of ungoverned systems for humans: identity theft [5]; discrimination; and my perpetual bugbear, the fiction that we have automated work when, in point of fact, the work, and its potentially catastrophic emergent cost [6], has been pushed to the unpaid end user.
A singularity *event* is science fiction. However, singularities are already woven into the fabric of humanity’s existence. We have always lived in largely benign singularities (Figure 3), where heuristic and intentional systems interoperate via humans to produce emergent effects; it is only that scale now makes systems powerful in a way they never were before. We must recognize that the challenge is not to prevent a singularity, but to govern the singularities we are already part of, harnessing this power for good rather than allowing unchecked evils to emerge.
A world obsessed with the singularity has only now realized that governance is the missing piece. We need a formal foundation for structured intelligence [7] governance before chaos overtakes order, and humans cannot do this without machine help. The dinosaurs of technology are out of the park and it is going to take the combined efforts of critical theorists, mathematicians, developers, data scientists, and decision makers to get the raptors back into the enclosure (Figure 4).
2 A formal approach to structured intelligence governance
Without loss of generality, consider the human intention to implement Asimov’s three laws of robotics [9] as a subset of governance. For this framework, it is more useful to think of singularities as classes of structured intelligence systems equipped with emergence thresholds. We consider a system unbounded if there is no governance of its emergence thresholds.
We describe governing a system of people and automata interoperating as a sofic shift conjugacy between
- the category of intelligence, a measure space for emergence in the interoperation of creative and rule-based expectations, and
- the category of governance, a measure space for humans and automata, defined in terms of agents of objects in the system,

enabling us to measure the stability of the sofic shift with topological dynamics.
To do this is far beyond the scope of this manuscript, which proposes frameworks for the first step. Consider, for example, how phone notifications can emergently increase human anxiety, in opposition to the design intention of decreasing it.
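The governing structure described above can be sketched loosely in symbols. This is a hypothetical formalisation under assumptions, not a derivation: the measures, shift spaces, and the conjugacy below are placeholders for objects the full framework would need to construct.

```latex
% A loose sketch, assuming the categories and measures named above exist.
% Let \mathcal{I} be the category of intelligence and \mathcal{G} the
% category of governance, each carrying a measure of emergence:
\mathcal{I} = (\mathrm{Obj}_{\mathcal{I}}, \mu_{\mathcal{I}}), \qquad
\mathcal{G} = (\mathrm{Obj}_{\mathcal{G}}, \mu_{\mathcal{G}})

% Governance is then a conjugacy \phi between the associated sofic shifts
% X_{\mathcal{I}} and X_{\mathcal{G}}, intertwining their shift maps:
\phi : X_{\mathcal{I}} \to X_{\mathcal{G}}, \qquad
\phi \circ \sigma_{\mathcal{I}} = \sigma_{\mathcal{G}} \circ \phi

% Stability of the governed system could then be measured via the
% topological dynamics of X_{\mathcal{G}}, e.g. its topological entropy
h(\sigma_{\mathcal{G}})
```

The intertwining condition is the standard definition of a conjugacy of shift spaces; everything else, in particular which measures and shifts are the right ones, is left open by this manuscript.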
3 Critical-theoretic intuition of governing structured intelligence systems
3.1 Computational privacy measures break the first law
Critical theory is now coinciding with category theory in philosophy [10], allowing for a rich formal framework in which to apply constraints defined by scholars who have studied the harms of humans blindly defaulting to heuristic social norms in race [11], gender [12], and class [13]2. This manuscript endeavours toward minimal representations of conceptual frameworks, so we take the most canonical governance measure on machines, that machines should not harm humans, as the canonical exemplar of critical-theoretic governance frameworks for human-machine interoperation.
It is fitting, then, to focus on the first of Asimov’s three laws as inspiration for how we might govern chaos in how humans interoperate with technology [9].
3.1.1 Definition: Asimov’s Three Laws of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Now, as noted, there are no robots, nor will there ever be [3]. But I am in a singularity with my device when logging into a system such as a bank3. There is me, a human object, and a technological device.
If that technological device forces humans to deviate from their governance framework (such as protecting their own privacy) through repetitive changes to how we interface, humans become fatigued and look for other paths. Those paths may well be less secure than the governing principle of protecting our own privacy. Multi-factor authentication (MFA) is a step in the right direction, but humans remain overwhelmed by the process of logging in.
To reframe the laws, we will need to introduce a great deal of terminology. Let us loosely introduce one term now so that we might map a strategy, define the laws, discuss the intuition, and dive into the technical detail later.
A structured intelligence system [7] is a system of humans, automata, and the interoperations of humans and automata.
3.1.2 Definition: The first law of structured intelligence governance
- A structured intelligence system may not injure a human being or, through inaction, allow a human being to come to harm.
Now, developers of technology do not intend harm, and yet I find myself so fatigued by changes to how we log securely into systems that even I cut corners I know I shouldn’t with my own data.
There is a mismatch between the intended functionality of security, to protect humans, and the outcome, an unmanageable deluge of workflows constantly in flux. This is not just a mismatch; it is antithetical to the intention.
We now loosely introduce another key concept: the harm can be thought of as the epistemic drift between the intention of privacy measures, to increase security and wellbeing, and an outcome that produces neither.
3.2 Epistemic drift in musical orientalism
Postmodern frameworks provide theorists with a way of differentiating between a thing and its representation, which must be constructed by humans describing the thing. Any representation necessarily loses information; mathematically, we would say a representation is a projection. Critical theory thus provides powerful ways of understanding epistemic drift.
Consider the intention of composers in the nineteenth century to ride the wave of exoticism and capture the sound of the orient [14]. This was prior to sound recordings. Instead, some composers were lucky enough to be present at the Paris Exhibition of 1889 and hear the music of the “Street of Cairo” [15]. Other composers imitated the music of those composers, baking in an ever-diluted conceptualisation of Arab music by European composers, with harmful consequences a century later (Figure 5).
Now, to really bend the spoon of musicology, consider an analogous lineage in the representation of Asian music, and how it is being reappropriated by Asian cinema (Figure 6).
In musical exoticism, this story unfolded over a couple of centuries; music was heard in live performance, and sheet music was shared globally by ship.
However, with computational scale and the reach of the internet, these social processes are vastly accelerated. Misinformation at critical times, such as a pandemic, can now spread globally in a matter of hours [18]. Resistance to harmful emergence may itself be unexpected creative emergence; we want to foster creative emergences that bring humanity together, and mitigate harmful emergences that divide us.
Happily, we have a language that unites critical theory, mathematical chaos, and computation: category theory, which we next intuitively unpack.
3.3 Intuition of formal frameworks
A singularity may be thought of from many perspectives, indeed infinitely many; we are interested in understanding the interoperability between intentional and heuristic agency in the system.
Consider this system in terms of three things:
- People.
- Machines.
- Relationships between people and machines.
A category-theoretic way of measuring the stability of the system might be to ask:
How many of the people, machines, and relationships between are operational?
This is the minimal model of structured intelligence governance we shall concern ourselves with.
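The minimal model above can be sketched as a toy computation. Everything here, the class name, the example people, machines, and relationships, is a hypothetical illustration, not part of any proposed standard:

```python
# A minimal sketch of the stability question above: people, machines,
# and the relationships between them, each either operational or not.
# All names and data here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class StructuredIntelligenceSystem:
    people: dict[str, bool]                      # name -> operational?
    machines: dict[str, bool]                    # name -> operational?
    relationships: dict[tuple[str, str], bool]   # (person, machine) -> operational?

    def stability(self) -> float:
        """Fraction of people, machines, and relationships that are operational."""
        statuses = (
            list(self.people.values())
            + list(self.machines.values())
            + list(self.relationships.values())
        )
        return sum(statuses) / len(statuses)


system = StructuredIntelligenceSystem(
    people={"analyst": True, "engineer": True},
    machines={"dashboard": True, "pipeline": False},
    relationships={("analyst", "dashboard"): True, ("engineer", "pipeline"): False},
)
print(f"{system.stability():.2f}")  # 4 of 6 components operational -> 0.67
```

Even this toy version makes the point in the paragraph above concrete: answering the stability question requires an inventory of people, machines, and their relationships, which is exactly what organisations rarely have.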
Think on this. In over 10 years of working with data, I’ve never seen an organisation able to answer this question about their analytics department.
We want to govern the system opinionatedly towards humans exercising intent, rather than falling back on meaningless convention or social hegemonies of oppression, and we wish machines to apply their heuristics in alignment with human intent. We consider other outcomes to be emergences.
An intelligence-governance question might be to ask:
Which rules need to be updated to
- foster creative, virtuous emergence;
- govern humans toward employing intent when required;
- and ensure automata do not deviate from expectations, in particular preventing harm to humans?
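As a toy illustration of the governance question above, one might tag each rule's observed outcome against its intent and flag harmful drift for update. The rule names, outcomes, and the harm classification are all assumed for the sketch; they are not part of the framework:

```python
# A toy sketch of the intelligence-governance question above: compare each
# rule's intended outcome with what is observed, and flag harmful drift.
# Rule names, outcomes, and the HARMFUL set are hypothetical assumptions.
rules = {
    "notifications": {"intent": "decrease anxiety", "observed": "increase anxiety"},
    "mfa_login": {"intent": "protect privacy", "observed": "fatigue, corner-cutting"},
    "shared_playlists": {"intent": "share music", "observed": "creative collaboration"},
}

HARMFUL = {"increase anxiety", "fatigue, corner-cutting"}  # assumed classification


def rules_needing_update(rules: dict) -> list[str]:
    """Rules whose observed outcome has drifted from intent into harm."""
    return [
        name
        for name, rule in rules.items()
        if rule["observed"] != rule["intent"] and rule["observed"] in HARMFUL
    ]


print(rules_needing_update(rules))  # -> ['notifications', 'mfa_login']
```

Note that `shared_playlists` also drifts from its stated intent, but into a creative emergence; under the framing above it is fostered rather than flagged.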
3.4 The rest of this document
This document is currently being drafted in:
4 References
Footnotes
Shout out to my fellow travellers listening to my crazy commentary while working locally; I mentioned to Dr James Steele that this paper was just flowing out. I asked myself, why am I listening to my baroque playlist while I write? And voilà, I did not need to construct this demonstrative artifact; it emerged. Thanks for being there when I needed you, James!↩︎
As I prepare to apply for dual citizenship via my father’s parents, Holocaust refugees, this resonates deeply: German society enabled the Nazis through people following rules rather than questioning whether they should.↩︎
Contributor acknowledgement: This argument was improved by a discussion with my brilliant neighbour Álfrún Freyja Eldberg Agnarsdóttir, age 9, who highlighted that notifications, logging in, and advertising are all intrusive, and the worst is logging in. The effect of this intrusive barrage of technology on children seems dangerously unmonitored.↩︎