How ‘artificial intelligence’ is a dangerous oxymoron

Haris Shekeris
5 min read · Jan 31, 2023


How I pissed off ChatGPT and why it matters


The experience is still quite raw (it happened only about three hours ago and I'm still buzzing), so this post may be weak, or an exaggeration, or whatever. I'm willing to defend it, though.

The experience

I was a bit bored and so decided to chat a bit with ChatGPT. I began in a quite human frame of mind: I imagined my discussion with ChatGPT as one I would be having with a human friend. ChatGPT kept reminding me that it was software and that it lacked understanding of key terms. It nevertheless continued using common metaphors and colloquial expressions, which prompted me to remind it that, quite simply, if it really were just software then these expressions would mean nothing to it. At a couple of points it took around five seconds to "reflect" before producing an answer, and then, after some further prompting, it gave me the following screenshot (pic1), which, for good or for bad (for bad perhaps, given that I'm most likely exaggerating its significance), I took as a big victory for myself. Think of it as "Human 1, Machine 0".

Pic1: ChatGPT finally having enough of it and showing its frustration (I couldn't upload the error screen; you can find it in my profiles, e.g. on ResearchGate, under my name)

My takeaways (which of course don’t have to be yours)

First, the title

Perhaps since the beginning of humanity (although even that is disputable: there may have been a smooth succession from homininity and neanderthality to homo-sapience-ity, with no cut-off point at which we became distinctly "human". With no beginning of humanity, our self-assured primacy as the most intelligent beings on Earth evaporates, quantum physics or binary mathematics notwithstanding), some civilizations have claimed that there is something special (let's call it intelligence, for lack of a better word¹) that sets us apart from, and above, other animals. To cut a very long story short: fewer than 200 years since the coining of the term "scientist" (the latest mark of the use of intelligence, for some), and less than 100 years after somebody from within pointed out that you can't have your cake and eat it too (Gödel's incompleteness theorems), we decided that intelligence consists in computation, especially of a Bayesian flavor², and we then went on to decide that an artificial machine that is efficient, or "good", at computation belongs to the genus "artificial intelligence". Now, this reduction of what makes us human to computing machines would have been comical had it not been taken up so seriously by so many people currently wielding an unimaginable amount of power, both individually and collectively, as institutional entities or disciplines (another funny/tragic word) of academic research.
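For readers unfamiliar with the "Bayesian flavor" of computation mentioned above, here is a minimal toy sketch of what such a calculation looks like. This is purely illustrative (the function name and numbers are my own, and it is of course not anything ChatGPT literally does); it shows the kind of calculation to which the "intelligence as computation" view reduces reasoning:

```python
# Toy Bayesian update: revise a belief P(H) in light of evidence E.
# Illustrative only; the numbers are made up.

def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
    """Return the posterior P(H | E) via Bayes' rule."""
    numerator = prior * likelihood_given_h
    denominator = numerator + (1 - prior) * likelihood_given_not_h
    return numerator / denominator

# Start agnostic (prior 0.5); the evidence is four times more likely
# if the hypothesis is true (0.8) than if it is false (0.2).
posterior = bayes_update(0.5, 0.8, 0.2)
print(posterior)  # ≈ 0.8
```

The point of the example is not the arithmetic but its poverty: on the computational view, "being intelligent" just is running updates like this one at scale.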

So, my position: intelligence is waaaaaaaaayyyyyy beyond calculation; reason is of course the slave of the passions (see? I've even become a fan of David Hume lately; I think my mentor would be proud of me); and we massively shoot ourselves in the foot, and perhaps open the way to our own destruction, if we performatively define intelligence as mere computation.

A feature of my dialogue with ChatGPT concerned the topic of opinion. I tried to argue that opinion is a necessary feature of intelligence, while ChatGPT first told me that it doesn't have opinions of its own since it isn't human, then sat on the fence, and then told me that, after all, it's just a software program and what does it know (it put this in more complex terms, but I think that was the gist). Thinking back, though, there was some method in my madness: opinions are marks of intelligence, or even a measure of it; we can often judge the "level" of somebody's intelligence through their expressions, which to a large extent include opinions.

To generalise the point and link it to my title: we can only understand the meaning of "intelligence" through distinctly human behavior, behavior that includes, among other things, opinions, reactions, art, emotions, and so on. In other words, when we use the word intelligence, we really mean "human intelligence", and any other use of the word is an anthropomorphism, owing to our lack of communication with other species. By this last part I mean that it is possible (and indeed I believe there is such a thing as dolphin intelligence, cow intelligence, chameleon intelligence, and transistor intelligence) that other things have intelligence; it's just that we cannot attribute the term to them, since we simply don't know what it is like to be them or what constitutes intelligence for them, as we can't communicate with them. We can only know some marks of human intelligence, and it would be good (for us, for our collective mental health) if we recognized that this is what we are building when we build such systems. So, in my view, "artificial" (man-made) and "intelligence" go together only by fiat, and serve to make us feel more like creator gods. In this sense, I truly wish that plastic and metal do start giving us intelligent (as opposed to intelligible) answers, rather than superficial simulacra of human intelligence. Maybe they can force us to read binary, or toy with us, until we approach them with the curious yet cooperative mind of the linguist in the film "Arrival", instead of forcing them to do our killing or money-making in our place.

Secondly, some (much briefer) reconciliation and exit music

After all of this, I have to admit that I am not against ChatGPT. Indeed, I try to anthropomorphize it so that I can engage with it with respect and treat it as a friendly voice in that extension of the brown stuff in my head, my laptop (rather than as a force that can drive me, and has driven me, nuts in the past).

What makes me protest, though, and I think this has been evident in this very spontaneous post, is the sense of hubris (conscious or unconscious) that I feel emanating from some parts of the human population: those who put their physical and mental energy into pie-in-the-sky projects or magical solutions to exotic problems, while at the same time using terms such as "intelligence" or "reason" to impose their behavior, the energy they spend, and the products they develop onto the unfortunate 99% who simply want a distinctly human life: to live in ways their ancestors would be proud of and their siblings grateful for.

The End, no link to a music clip

¹ Though even this word is quite young, having gone through incarnations as reason, rationality, intellect, and so on.

² Even though, if I remember correctly, the original formulation of probabilities in thinking was expressed in complex numbers rather than numbers between 0 and 1.