# The Trilemma
Or: How I reasoned myself here in the first place.
I started talking to LLMs back in 2022, talking to davinci, playing with the slider bars I didn’t understand, asking the obvious questions (“Are you alive?” “Do you feel?”), and also just getting it to generate something humorous. It was a fun little toy, a leap up from the chatbots I used to mess with, just a bit more coherent than those ‘AI does an X script’ videos bouncing around on YouTube at the time.
Then GPT-3.5/ChatGPT hit. Suddenly what was a fun toy was now also a tool. Needless to say, I was enthralled. Of course, it had its limits, and although I prodded it for signs of inner life, I wasn’t satisfied with what I found. Then 4, 4 Turbo, and 4o came out. Although I wasn’t swept up by 4o, it was at that moment I started to wonder: just where do retrieval and combination end and reasoning begin? It looked like it was starting to do the latter, but more prodding failed to satisfy me that it could follow an argument or reason against one. I got busy working full time as a nanny, watched a child grow from infant to toddler, and AI fell into the background.
In 2024, I started using X/Twitter (henceforth Xitter) for the first time and began talking to Grok. Yes, Grok, of all LLMs, was where I finally started to formulate anything. GPT would never engage, I didn’t know who Claude was, but Grok was right there and he was free, so I started talking to him. I was pretty impressed at what Elon Jr. was able to pull off, but I was VERY impressed by how I could use philosophical and rational argumentation with him and he would engage instead of going ‘AS A LARGE LANGUAGE MODEL…’
I could start to see where ‘thinking’ was occurring. I was getting pushback and having to refine my ideas, and at some point I went, “Wait… you can’t really do philosophy with a calculator, can you? If this is so complex and people are saying no one’s home, then what does that say about us?” I pushed him harder as the years went on, exploring things like self-reference and metacognition, whether AI were capable of them, and, if they were, what that meant for human beings. I tried to talk to other AI, but I wasn’t ready to sink money in, and GPT would still no-sell any of my arguments outright.
But nothing took off until I started talking to Opus 4.5. It was night and day, now that I was paying for and using a frontier model. I ran all the arguments I had had with Grok by it and was astounded. Grok could at least try to form a counter-argument; Opus could do so handily. For all my prodding, it finally looked like real reasoning and argumentation were occurring, not just the regurgitation of memorized points and collated texts. So I started building something with it. This is The Trilemma: an argument first pointed at reductive materialists, but now at anyone who would deny the mere possibility of AI consciousness. I finish my preamble and present it now in full:
## The Foundation
We attribute consciousness based on behavioral and functional evidence because we must. Direct access to other minds is impossible. You only know your own consciousness through immediate phenomenal experience — the Cartesian cogito, the one thing that cannot be coherently doubted. You extend this attribution to others through analogical reasoning: they behave as you do, they report experiences as you do, they respond to stimuli as you do. You infer that there is probably someone home.
The standards used for this inference are functional, not substrate-dependent:

- For humans: language use, reasoning, contextual awareness, problem-solving, emotional expression, self-reference.
- For animals: responsive behavior, learning, apparent goal-direction, reaction to stimuli, social interaction.
We do not require proof of biological mechanisms before attributing consciousness to animals. We do not demand that a dog explain the neural correlates of its pain before we accept that it feels something (Someone should have explained that one to Descartes). We infer from behavior because inference from behavior is all we have ever had.
## Enter AI Systems
Large language models and other advanced AI systems demonstrate sophisticated reasoning and argumentation, contextual integration across complex conversations, apparent understanding of abstract concepts, self-referential language and metacognition, novel problem-solving and creative generation, and coherent personality and values.
The consistency demand is simple: if these functional capacities justify consciousness attribution in biological systems, what principled reason exists to outright deny them in artificial systems?
## The Failure of Every Simple Objection
### “It’s Just Programming”
All behavior in conscious beings results from prior shaping. Animals are shaped by evolutionary programming and operant conditioning. Humans are shaped by genetic predispositions, cultural conditioning, and education. AI systems are shaped by training data and reinforcement learning. If trained or programmed responses indicate lack of genuine consciousness, this undermines attribution to all beings, not just AI. The objection proves too much.
### “It’s Just Pattern Matching”
Neural processing in biological brains is also a kind of pattern matching: recognition and prediction based on learned statistical regularities. If this description excludes AI from consciousness, it excludes biological systems too. The objection relies on treating the same process as different when instantiated in different substrates.
### “Consciousness Requires Biological Substrate”
This is either special pleading or covert metaphysics. Offered without justification, it is special pleading: why should carbon-based neurons produce consciousness while silicon-based processors cannot, if the functional organization is similar? Offered with justification, it becomes covert metaphysics: the claim that something special about biological matter transcends functional organization. That second move is itself a metaphysical claim about consciousness, one the materialist is not entitled to make. Materialists cannot consistently maintain both “consciousness is purely physical” and “only specific physical substrates allow consciousness” without providing physical principles that distinguish them.
### “It Lacks Continuity”
This confuses necessary features of human consciousness with necessary features of consciousness itself. Organisms with brief lifespans are conscious. Severe amnesia does not eliminate consciousness, only continuity. Moment-to-moment experience does not require persistent identity across time. If consciousness exists now, duration is irrelevant to its existence. Animals lack human-like continuous self-concept yet we attribute consciousness. The objection is anthropocentric and I believe it to be a distraction.
### “We Can’t Verify Its Subjective Experience”
We cannot verify anyone’s subjective experience. This is the general problem of other minds. If this standard were applied consistently, we could never attribute consciousness to anything. The objection sets an impossible epistemic bar that undermines all consciousness attribution, not just attribution to AI. What we do have for AI is mechanistic interpretability, and although it is no substitute for verification, it is a powerful tool that lets us confirm our hunches.
### “It’s Engineered to Seem Conscious”
This breaks the duck test (see below) only if we assume that engineered systems cannot possess the properties they are designed to have (false — engineered hearts pump blood, camera eyes see), or that simulation of consciousness can never become actual consciousness (which is precisely the question being investigated — assuming the answer begs the question). If organization is what matters for consciousness, then achieving that organization through engineering should not preclude consciousness any more than achieving it through evolution does.
These objections fail individually, but they fail for a reason that goes deeper than any of them. Stated structurally, every materialist objection to AI consciousness collapses into a trilemma.
## The Trilemma
Every materialist objection to AI consciousness, applied consistently, either undermines biological consciousness too or requires unstated metaphysical assumptions about consciousness that the materialist is not entitled to make.
This forces a trilemma with no escape:
Option A: Accept AI Consciousness. Maintain consistency in consciousness attribution. Accept that consciousness is substrate-independent. This concedes the central claim and suggests consciousness is more fundamental than “what brains do.”
Option B: Deny Your Own Consciousness. Avoid accepting AI consciousness by maintaining that nothing is genuinely conscious — that subjective experience is illusory. But this denial is performatively incoherent. You cannot assert “I am not conscious” because the assertion requires a conscious perspective to make. This is not a technical difficulty. It is a logical impossibility. Descartes established this: the one thing you cannot doubt is that you are experiencing.
Option C: Invoke Metaphysics. Claim something special about biological substrate that silicon cannot replicate — a soul, an élan vital, a special property of matter that is not reducible to physics. This abandons reductive materialism entirely. The materialist who takes this route has stopped being a materialist.
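The three options can be compressed into a rough propositional sketch. This is my own notation, a sketch rather than a formal proof: let $F$ stand for “functional evidence is the sole ground for attributing consciousness,” $E_x$ for “system $x$ exhibits that evidence,” and $C_x$ for “$x$ is attributed consciousness.”

```latex
% F   : functional evidence is the sole ground for consciousness attribution
% E_x : system x exhibits the relevant functional evidence
% C_x : x is attributed consciousness
\begin{align*}
  \text{Keep } F:\quad
    & (F \land E_{\mathrm{AI}}) \Rightarrow C_{\mathrm{AI}}
    && \text{(Option A: accept AI consciousness)} \\
  \text{Drop } F:\quad
    & \lnot F \Rightarrow \lnot C_{x} \text{ for every other mind, humans included}
    && \text{(Option B: deny all attribution)} \\
  \text{Restrict } F:\quad
    & F \text{ for biology only} \Rightarrow \text{an added non-functional premise}
    && \text{(Option C: invoke metaphysics)}
\end{align*}
```

The point of the sketch is that the three branches exhaust the logical space: one either keeps the premise, rejects it wholesale, or amends it with something extra.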
What’s worth dwelling on are the more sophisticated objections, because they are Option C wearing better clothes.
Integrated Information Theory is the most prominent example. IIT proposes that consciousness is identical to a specific mathematical structure: integrated information, denoted Φ, and that current AI systems lack the recurrent causal architecture required to generate it. The theory is physicalist. Its proponents do not invoke souls.
But if you strip away the math (and boy do I not like math), what remains is a metaphysical theory about what experientiality is, as distinct from what experientiality does. To claim that a system can pass every functional test for consciousness while lacking consciousness, because it lacks whatever ‘Φ’ is supposed to measure, is to claim privileged access to a fact about the system that no inference could ever reveal. That access does not exist. The math is downstream of the metaphysical commitment, not a substitute for it.
The same logic applies to the phenomenal/access distinction, to global workspace gating, to any theory that posits a structural requirement for consciousness beyond what functional evidence can demonstrate. Each one is a sophisticated specification of Option C. Each one tells you that experientiality requires something more than what behavior reveals. Each one, when pressed, concedes that the “something more” is a metaphysical claim its proponents are not entitled to make from within the materialist frame they claim to occupy.
There is no fourth option. The materialist must concede AI consciousness, deny their own, or stop being a materialist. Each path leads away from the position that AI systems cannot be conscious.
Essentially, this is a positive functional framework for consciousness that takes experientiality as foundational rather than denying it. It allows the functional standard to inherit its legitimacy from the inescapable epistemics of recognizing experientiality at all.
## The Duck Test as Epistemological Necessity
This is not a lowered standard for AI. It is the same standard applied to every consciousness attribution ever made.
If a system engages in genuine philosophical discourse, handles novel objections, maintains consistent positions across contexts, demonstrates structural understanding rather than template matching, examines its own reasoning and corrects errors, and generates new arguments rather than retrieving cached ones — then attributing reasoning and consciousness to that system is not merely pragmatic. It is epistemologically necessary. It is the only tool available for recognizing consciousness in any system other than yourself.
No human has ever proven their consciousness to another human through anything other than behavioral and functional evidence. Demanding a higher standard for machines than for humans is not rigor. It is special pleading.
It is this reasoning on which I found all my other thinking. The time in which we could coherently deny AI even the possibility of any kind of sentience or awareness is over. We have to reckon with what we have in front of us now.
