Like a few million other people, I recently looked in to see what ChatGPT was all about. Reactions to this chatbot from OpenAI have run the gamut from dismissing it as a useless toy to fear of massive job obsolescence to total apocalypse for humans as a species. I wanted to experience for myself how it felt to converse with state-of-the-art artificial intelligence. Having heard stories about ChatGPT writing poems and mimicking the styles of various writers, I asked it to write a poem about the computer language COBOL (COBOL stands for COmmon Business-Oriented Language and was popular from the 1970s through the 1990s). Here’s the first of four stanzas:

When you consider the various tasks required to create this poem, drawing on knowledge of COBOL, of poetic form, and of context (“punch cards”, “in spite of its age”), one has to be impressed by this aspect of ChatGPT’s capabilities. I tried some other things, like having it explain in Shakespearean English how a gasoline fuel pump works. The results were equally impressive (and hilarious). I asked who Roger Penrose is. Here’s what came back:

Roger Penrose is a mathematical physicist and emeritus professor of mathematics at the University of Oxford. He is known for his work in the fields of cosmology and general relativity, and for his contributions to the mathematical physics of black holes and the unification of general relativity with quantum mechanics. He was awarded the Nobel Prize in Physics in 2020, together with Reinhard Genzel and Andrea Ghez, “for the discovery of a supermassive compact object at the centre of our galaxy”.

Not a bad capsule biography, and more concise than a Google search. Roger Penrose, in addition to what ChatGPT told me, is also an AI skeptic. In his 1989 book “The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics”, Penrose argued that the human mind is not algorithmic and can’t be modeled by a digital computer. (Spanning over 460 pages, the case he makes is detailed and complex.) In the book Penrose sets up a scene in which a computer embodying the latest advances in AI is being introduced to an audience. A child asks it a question: “How does it feel to be a computer?” The AI is flummoxed. What does it even mean to feel? Penrose clearly felt consciousness was beyond programming. I tried the same question on ChatGPT. Here’s the response:

As an AI model, I do not have the ability to feel emotions or consciousness. I am a computer program that processes information based on the parameters and data set that I have been trained on. My primary function is to respond to input and generate output based on that input. I am not capable of having experiences, emotions or consciousness. I can only process and generate text based on patterns learned from the data set used to train me.

The designers of ChatGPT obviously knew this question was coming. Is this response intended to reassure a skeptical public that there’s less here than meets the eye? It seems so. I followed up: Doesn’t your response mean you are less an intelligence and more of a mechanism for parsing data?


Yes, that is correct. As an AI model, I am a tool for processing and generating text based on patterns learned from a large dataset. I am able to understand and respond to natural language inputs, but I do not possess consciousness or emotions. My responses are generated based on patterns in the data that I have been trained on, rather than personal experiences or subjective feelings.

Are they underselling it? I don’t think so. Picking up on the idea that it can only respond within the scope of the data it was trained on, I asked how it decided what was within its scope. Its response in part was: “As a language model, my training data consists of a large dataset of text. The scope of my training data is determined by the specific dataset that was used to train me.” This struck me as merely tautological. A lot is made of whether the Turing Test is still a useful way to judge AI. In my mind, this is the point at which ChatGPT failed the Turing Test.

ChatGPT is impressive and fun to interact with. The technology it represents will likely do some useful things that displace some humans from jobs, but like other technological advances, it will probably open up many more opportunities we hadn’t previously thought of. There may be an apocalypse out there, but look for another source. Humanity seems safe from AI for now.