Fair warning: I’m going to kvetch about professional philosophy and its current takes on AI. I promise, though, there is a payoff. To cut to the chase: too often philosophical analysis resembles Monday-night recliner quarterbacking. Watch the game, yell at the refs, all the while feeling certain that we, without ever stepping onto the field, could win it all.
And so it seems to me that, like the fantasy footballers, we philosophers wring our hands and shout at the game of Generative AI while nevertheless remaining planted on our couch – content on the bench or, more classically, in our armchair.
The trouble is not that philosophers lack initiative or insight into AI. It’s that we treat it as a hegemonic, concrete entity instead of what it is: nebulous, diffuse, a complex ethos shaped by late capitalism. To borrow from Addams and Dewey, the philosopher’s vocation means much more than simply pontificating about “the problems of men.” Rather, we must learn to engage those problems as lived realities: complex, messy, and embedded in our daily lives. Only by that process can we see our problems properly and become more effective at addressing them.
Missing the Forest for the Trees
Again, online philosophy is a hotbed of contentious AI debates. While philosophers cite scholarly (re)sources, let’s be honest: many of us lack the deep engineering, coding, and computer science background to really navigate this terrain.
Professional philosophy, still haunted by its German idealist heritage, prefers to gesture toward the productive sciences in passing. Immersed in theories, some passively think: “Others should handle the dirty work of implementation. I provide the critical feedback; they are too close to the ground to see.” This attitude leaves us with little to no phronesis – no hands-on, practical know-how about the very realities we attempt to systematize.
Yet, as Socrates might want us to wonder: without such phronesis, do we really know what we are talking about?
Do we chatter about AI the way young Athenian nepobabies threw around words like justice and virtue, debated the merits or dangers of comedy and tragedy, or, still more relevant for us, argued over the newfangled tech of writing? Are we doubly ignorant: blind to our own ignorance yet smugly self-assured in our sophistry?
The symptoms of double ignorance are obvious in the two main Chicken Little stances dominating online philosophical discourse. Either:
The Apocalyptic Preachers of Doom, preparing for a Butlerian Jihad against the machines, or
The Reveling New Age Hippies Turned Tech Bros, celebrating transhumanist transcendence, happy to wave goodbye to humanity.
Both lack the humility of Socratic self-examination. Both miss the forest for the trees.
AI Is Not What We Theorize It to Be
We have a scholastic tendency to treat AI as a finished system, deducible from a few axioms. But AI is not an a priori first principle. We end up committing the fallacy of misplaced concreteness: mistaking tidy abstractions for reality. We imagine AI as a reified essence, when in fact it is an ongoing event, a complex set of human practices, values, and incentives. In short, we confuse a rhetorical mirage for the field itself, the shrill cries of a blind ref for the actual plays of the game.
Understanding this complexity is essential. Many AI products fundamentally threaten to replace dignified work, and for academics, they appear to undermine the very project of university teaching. Armed with endless supplies of money and a unified discourse of the “inevitable,” AI is the new Skankees we all hate and love.
However, to be clear, the challenge is not failing to see the monopolizing, steamrolling way AI companies have infiltrated the academy, but our own armchair problem-solving. It is what the Italian humanist philosopher Giambattista Vico called the “barbarism of reflection”: an intellect, like a Chad, flexing too hard, zealously inventing solutions that no one asked for and systems of thought that obscure rather than clarify. This is the fire ravaging the Ivory Tower from within.
The Problem with the Problem of AI
Meanwhile, in the world of machine learning, engineers, developers, architects, and UI designers often grind under pressure in VC-backed startups that are expected to scale rapidly and turn significant profits. Their CEOs/leaders/visionaries are salesmen (Sam Altman being very much in this vein), making grand promises about the future (e.g., Artificial General Intelligence) to investors and to the public, while their teams scramble to cobble together something functional, something in the ballpark. They stitch pieces here and there, retrofit tools, desperately call a friend for advice or find someone-who-knows-someone, and kick spaghetti code into place. If they are lucky, over iterative generations they pull together something “workable” – an untidy, ad hoc, unpredictable monster. Of course, the next step is attempting to graft on coherence – but such retroactive coherence is not the kind an armchair philosopher could approve of.
Anyone who has worked in the service industry knows the chaos behind the curtain. What the hostesses and waiters present “front of house” is a performance all too often on the verge of collapse: the host smiles, the server pours steadily, but in the kitchen cooks juggle orders, waiters forget drinks, busboys dodge collisions. It’s frantic improvisation that somehow results in plated meals. In the same vein, philosophers, like oblivious customers, mistake AI’s polished surface for calm systematicity, while its production resembles a slammed kitchen. And strangely, we somehow assume that our Denny’s Grand Slam tells us axiomatically all that a breakfast meal could be, and what the chefs are capable of. Like most uncritical customers, we philosophers miss this reality when it comes to the production of AI, arrogantly assuming that developers’ and coders’ potential is defined by the “franchise diner” they all too often work at.
All this is to say: the real challenge between developers and philosophers is that the latter all too often misjudge the processes of the former. We imagine, as heady intellectuals, that they need our criticism, or a tightly coherent system worked out at the front end. In reality, engineers wrestle with a diverse ecology of systems under the relentless pressures of late capitalism, always working through problems iteratively.
We want to engage this situation differently. Palinode is trying to give its team something altogether new: an experimental kitchen where technologists and philosophers collaborate, without over-promising, deceptive CEOs or philosophers constantly throwing “Well, actually” flags. Instead of talking over the cooks, it’s time we apprenticed ourselves to the practice, theorizing only after we’ve cultivated that crucial ingredient of phronesis. This means learning some LLM, SLM, LCM, MOE, VLN, MLM, SAM, and other modeling 101, and really working with others to draw on the innovations in math and logic that have been created in the doing.
AI as an Ethos
So what is AI? Not a thing, but an ethos: a set of late-capitalist habits and incentives that shape what gets built, how it’s used, and how it’s sold. AI itself is a placeholder term for a series of innovations and tools in machine learning, and a symptom of cultural priorities. From this angle, we see the situation more fully, in all its complexity, and more clearly.
To quote at length from Tibor Solymosi, an early critic of the dangers of a dopamine-addicted, social-media-shaped democracy and a philosopher who has argued that AI is a vague, misleading term best used pejoratively:
“We should resist using the term artificial intelligence or AI whenever possible, when we do use it, do so as a pejorative, and when we want to say something good—that is, anything growth promoting in Dewey’s sense—then call it something else. If we’re successful in shifting the discourse, then we will have something like Norbert Wiener’s cybernetics, which is a direct outcome of Dewey’s pragmatism. Cybernetics is the science of communication and control in the animal and the machine. The term AI, despite its ambiguities, remains committed to a vision of experience best characterized as a Cartesian spectator, whereas cybernetics commits to a Deweyan view of participatory experience. So understood, cybernetics is inherently ecological, where ecology not only is the study of systems, but, like cybernetics, has a telling etymology we should not forget.”
Seen this way, what we find in chatbots and lame personified AI assistants is not some ultimate truth about the potential of these technologies, but a reflection of ourselves.
We typically prefer chatbots with instant-messenger-like interfaces, which regurgitate summaries at us to copy and paste for convenience, or personified personalities that respond to our desires and make everything smooth-jazz-level reassuring (OK, except when they spout hate or Nazi ideology – I’m side-eyeing you, Grok). They are, as Daniele Procida argues, designed to please: eager to mirror.
“They will immediately remould themselves to me if I ask for something a little different, or resist them: non-reproducible answers to specific problems, each one a blob that exists on its own for that moment.
This is over-fitted information, too servile to resist me, too weak to demand that I meet it on its terms, or to stand against me. It’s a false friend, ready to follow my shape as closely as I want. Information that changes its shape before our eyes, a slightly different version each time, advertises its own treachery. It’s non-deterministic. This is no way for documentation, that should command authority, to behave.”
Such “information blobs” are servile, reflecting our own desire for convenience and, more dangerously, for power, all in the name of frictionless ease. As a consequence, we have severely limited our collaborative potential.
Our obsession with AGI seems no less misguided: a mix of marketing gimmick and techno-fascist fantasy about a dystopian transhumanist order. Implants promising “super-intelligence” expose more about our society’s own crises of meaning than about the advances made in machine learning.
Toward a Humane (but Not Human-Centered) Ethos
Yes, we should be worried about the cognitive offloading that chatbots enable – thoughtfulness and authenticity are beginning to feel like antiquated specters in an age where machines write our emails, respond to our chats, and craft our term papers. What’s human about this desire for shallow, lazy thought?
The tech is not anti-human(e) or reductive of the value of the human being; our society’s ethos – profit-driven, addicted, afraid of change – is. History reminds us that cognitive shortcuts and hallucinations are nothing new. Even medieval Zen masters, for example, had to contend with kirigami, the practice (amongst other related affairs) of students writing approved answers to difficult koans on little sheets of paper they passed on to others. So the danger is not so much new as very old (though, OK, in a newly nefarious way): our desire to cognitively offload what is exhausting in thinking gives us an excuse to let the man behind the screen tell the story of what comes next. The new AGI myth of the technofeudalists is calculated to maximize power, but it is itself still ever so human and all too delusional.
To be sure, there are glimpses of this other ethos: LLMs have already helped doctors develop new medications and aided biologists in understanding ancient DNA structures. Archaeologists can now read carbonized scrolls long thought to be lost causes, while philologists can track the development of language itself in ways that remind us of its symbolic coding power even in antiquity. All of this suggests that another, emergent ethos is on the horizon.
Conclusion: An Indictment in the Mirror
Our current technologies are not anti-human; they mirror what our culture prizes. We have chosen infinite growth and endless optimization without purpose. Why? Ease means more than deep thought; efficiency means more than creative and empowered vocations; profit means more than care and attention to the environment (the real forces of our Judgement Day reckoning).
Tech oligarchs feign impotence (a sentence I never thought I would write), lamenting that “the cat is out of the bag,” even as they refuse to let their own children play with the very tools they peddle, knowing – on just how many levels, from the backend code to the frontend user interface – that their wares were designed to encourage addictive engagement and confirmation bias.
I’m afraid that what lies behind the “veil of Isis” needs no brilliant master of the occult philosophical sciences to decipher and pronounce it like a new oracle of Delphi. For it is not some monster or hero, but just us. And if we remember what we are capable of beyond the confines of the current ethos and the comfort of our armchairs, well, maybe our tools won’t just mirror useless generalities of who we think we are. Rather, they just might cultivate the practical know-how, the philosophical virtue, that demands we stop sitting on the sidelines and start doing the real work of having skin in this AI game.
Khorafest, August 2025: core team of technicians and philosophers making a little art together.