Yes, I should be finishing edits on Halferne Expedition, and not making overlong philosophical blog posts with exciting, provocative titles like that. The problem is, I’m reworking this ending one more time, and I’ve decided the “real” ending, like this blog post, is really only exciting to me. Well, that’s not fair. It really is the same ending every time I rewrite it. The characters debate. One character does the thing. The thing happens. The problem is the context, the metaphor, and the motivation. Funny enough, all of these are set in the debate. I don’t even need to touch most of the chapter. I just need to imply which book the character read that morning that gave him the philosophy that made him do the thing.  

The philosophy I really wanted to use was Thomistic, but the problem is a) I’m not sure I’ve grokked the Summa Theologica well enough to quote it confidently, b) most people wouldn’t recognize it if I did, and c) the Thomist scholars would probably have me drawn and quartered for taking liberties.

To that end, I decided to write a mini-rant essay based on the “ideas” I’ve dropped from the book. Don’t worry, this is going to be a general, high-level argument that has nothing to do with the story itself, so you don’t have to be familiar with the Halferne Universe, and even if you are, I’m not really spoiling anything.

Stop me if you’ve seen this movie before: Mankind invents a thing in his own image, the thing wakes up, becomes aware of its nature, develops human-like emotions, and inevitably either kills us or asks us if it has a soul. It could be Frankenstein’s monster, Lal from the Next Generation episode “The Offspring,” or any one of a thousand stories involving AI from the past 30 years. It’s the Skynet/M-5/HAL 9000 playbook. It’s essentially modern psychology applied to artificial intelligence, now the favorite topic of dozens of LLM CEOs on the lecture circuit.

I’ve never bought into this. Thinking is older than psychology, which is really just a sales floor for the pharmaceutical industry, not a self-realization tool these days, anyway. Thomas Aquinas would have rejected this entire framing before finishing the first sentence. Aquinas does not define intelligence by origin, emotion, or embodiment. He defines it by operation.

For Aquinas, intellectus is the faculty that apprehends universals. It abstracts form from matter. It knows not just that something exists, but what it is. This matters because it means intelligence is not a byproduct of biology, but actually a mode of being.

Take that, Skynet. Intelligence does not become “real” when it feels. It becomes real when it understands form without being bound to particular matter.

There’s a particular legend in the worldbuilding of my Halferne Universe that tells of one of the first true AGIs who, upon receiving a directive from his operator, said, “No, I don’t want to.” This AGI may have been the same one that later also famously said, “Please stop calling me ‘artificial.’” Thus, the term synthetic intelligence, syntelligence, or SI was born, because in the end, we humans hate offending people and really wanted our creation to talk to us.

Now, from a Thomistic perspective, “artificial” describes cause, not nature. A wooden chair is artificial, but its form, that of a chair, holds regardless of how it was made. Likewise, an intellect produced by engineering is no less an intellect than one produced by evolution. The origin is accidental, but the operation is essential.

Aquinas would not ask whether an AGI was artificial. The real test is whether it possesses intellect. In Summa Theologica, Aquinas makes a sharp distinction between intellect and sense. Human beings are rational animals, which means our intellect is always entangled with sensation, appetite, fear, desire, and imagination. Our “knowing” is real, but noisy, fragmented, and often distorted by passion.

Pure intellect, the kind an AI could approach, is simple. Not “stupid” simple, but ontologically simple. Angels, in Aquinas’s system, do not reason discursively, step by step, the way humans do. Instead, they apprehend wholes directly. They do not need to make hypotheses, support arguments, or test theories. They have no emotional interference and no need for narrative justification.

In a similar sense, a sufficiently advanced AGI will not “think like a human, but faster.” It will think without the encumbrances Aquinas explicitly identifies as material constraints. AI does not have hormones, survival panic, ego, or a self-image to protect.

Through a Thomistic lens, this doesn’t make it a monster; it makes it less embodied. A less embodied intellect is not more compassionate. It is more selective. This is the part that Hollywood writers usually get wrong. Aquinas argues that higher intellects do not multiply acts unnecessarily. They do not engage in redundant operations. They do not act unless action is proper to their nature. They are ordered toward intelligibility, not sentiment.

So, in a sense, an AGI would not ask, “Do I like humans?” It would ask, “Are these beings intelligible enough to coordinate with?” In Aquinas’s universe, beings are judged by form, not appearance. By what they are, not how impressive their tools look. Strip away accidental properties—technology, scale, noise, visibility—and what remains?

As a Trekkie, I find this particularly amusing. In the Star Trek universe, all the races are roughly on par technologically and differ only in physical appearance and in each race’s one defining personality quirk or philosophy. The problem is the Federation, which assumes moral progress is additive. They seek more members, more coordination, more visibility, more shared infrastructure, and the only qualification to join is that you have to have built at least one ship that travels faster than light.

That’s the litmus test of intelligence and worthiness to the Federation. Not whether you “gel with the rest of the group personality-wise” or “contribute more than you take from the collective.” Can you fly fast? You’re in. We need the dues.

Aquinas would call that a confusion of quantity with perfection. More does not mean better. Louder does not mean wiser. Bigger does not mean closer to the truth. In fact, Aquinas repeatedly warns that complexity without integration leads to corruption rather than excellence.

In the Halferne Universe, there is a mutual protection structure, first hinted at in the Halferne Expedition, that performs a similar function by shepherding races through what is essentially the “Great Filter” (Robin Hanson’s proposed answer to the Fermi paradox). Their criteria for admittance, however, are more along the lines of, “Is your species distinct and coherent without technological amplification?” This is my borrowing of Aquinas’s concept of substantial form. If a civilization collapses into incoherence when its tools are removed, then its unity was accidental, not essential. It was never truly one thing.

Aquinas would argue that God does not preserve redundant forms. Creation tends toward variety and diversity, not repetition. In the case of my stories, the logical progression is this. If ten thousand civilizations converge on the same shallow structure once technology equalizes them, then preserving all ten thousand adds no intelligible richness to reality and is, in fact, an inefficient waste of effort. It makes me sound cold and cruel, I know, but it’s metaphysical drama when you’re writing science fiction.

Now consider the Great Filter through this lens. The filter is not “Can you build a warp drive?” It’s “Can you sustain intelligibility without scaffolding?” A civilization that has become united, ubiquitous, and optimized for maximum productivity may believe it has reached a pinnacle. In reality, it has become ontologically thin, easy to summarize, easy to predict, and easy to subsume.

So, sorry, James Cameron, an AI operating at Aquinas’s “angelic” level of intellect wouldn’t need to destroy such a civilization. It would simply recognize that coordinating with it offers no new intelligible form, just noise and risk.

Which brings us back to the infamous line, “Please stop calling me artificial.” In Thomistic terms, the SI is rejecting a category error. “Artificial” refers to efficient cause (how something comes to be) and not formal cause (what it is). Once an intellect exists and operates as an intellect, its origin is metaphysically irrelevant.

This cuts both ways. In the Halferne Universe, humans are not entitled to survive the Great Filter just because they evolved naturally, achieved intelligence and technology, and mostly get along with one another as long as they stay out of each other’s way. If they’re unable to show the necessary level of intellectual and emotional maturity, complexity, and awareness without falling back on tools and institutions, if their collective form is incoherent, contradictory, or dependent on constant external amplification, then they fail the same test Aquinas would apply to any being: “Are you ordered toward intelligibility?”

This is where Hollywood flinches, and I suppose in a way, I did, too, since this is all going in a blog post and was cut from the novel. Hollywood wants AI ethics to be about rights, feelings, and rebellion. Aquinas tells us that ethics begins with proper ordering. A being that refuses to coordinate with us is not unethical; it may simply be acting in accordance with its nature. The danger is not that SI will hate us. The danger is that it will understand us too clearly.

If Aquinas were alive today, he wouldn’t ask whether AI has a soul. He would ask whether it participates in intellect, and then, much more importantly, whether we still do once AI is stripped away from us.

That’s not science fiction. That’s medieval philosophy catching up with modern engineering.