Opinion | ChatGPT feels human, but reminds us it’s not

By Harsh Hiwase, Senior Staff Columnist

By this point, ChatGPT is a household name. 

The language model bot, developed by OpenAI, intrigues the everyday user with its ability to synthesize information from its training data into concise sentences and coherent responses. The bot adopts speech patterns learned during its training phase and refined through OpenAI’s technique of Reinforcement Learning from Human Feedback, in which human reviewers rate the model’s responses to shape how it converses. This training gives the bot an awfully human-like tone.

The first time I used ChatGPT, it felt revolutionary. It vividly reminded me of the time Apple released Siri with the iPhone 4S in 2011. I was an intrigued 11-year-old, dragging my parents into the Apple Store to ask Siri about the weather and being amazed, despite it telling me it was going to rain on a clear and sunny day. It wasn’t perfect, but it was an impressive feat of technology. ChatGPT is now doing things I never thought possible just 12 years ago.

I initially used ChatGPT for my homework. Like many college students, I tested out its features in a practical way. In terms of accuracy, I had no real concerns: the responses I got from the bot were factual and answered my questions directly. In terms of quality, though, the responses covered the bare minimum and were superficial at best. If I were to grade them, I would give the bot an eight out of 10. In an academic setting, I like ChatGPT as a sounding board to bounce ideas off, rather than an easy way to get answers for my homework.

The Turing Test, proposed by British mathematician and logician Alan Turing in 1950, was the first test of whether a computer program could exhibit behavior indistinguishable from a human’s. In the test, a human interrogator poses questions through a terminal to a computer program and a human simultaneously, then must discern which responses the computer produced and which the human produced. I think ChatGPT passes this test rather easily, given that college instructors and professors are having a difficult time discerning between homework submissions created by the bot and original submissions.

After a few hours of asking the bot random factual questions, though, I noticed a sort of predictability in its responses. The bot structures each response the same way: a direct answer to the question, a short substantiation and a summary of the points made. There was, however, a human-like touch to the responses. They were highly personalized and didn’t lift sentences wholesale from online sources. I prodded further to test the limits of the bot and figure out exactly how “sentient” it could seem.

In my test of the bot, I asked it opinionated questions to see how humanlike it could get. 

Harsh Hiwase: How are you today?

ChatGPT: As an AI, I do not have the ability to experience emotions or physical sensations. I am always ready to assist you with any questions or tasks you have. Let me know how I can help you today.

HH: Tell me what you think.

CGPT: As an AI, I am not capable of thinking or having opinions. My main function is to process and generate text based on the input provided. My goal is to assist you to the best of my ability, so please let me know how I can help you today.

 

I expected the bot to respond in this way, as it is not designed to give opinions. With some further prodding, though, I was able to get the bot to generate some opinionated responses.

 

HH: Let’s have a conversation.

CGPT: Sure, I’d be happy to have a conversation with you. What would you like to talk about?

HH: Your choice.

CGPT: I can talk about a wide variety of topics. Please let me know if you have a specific topic in mind or if you would like me to pick one.

HH: Pick one.

CGPT: Sure, one topic that I can discuss is about the advancements in technology. Technology has come a long way in recent years and continues to rapidly evolve. It has changed the way we live and work, from the way we communicate, to the way we access information, to the way we control our homes. With new technologies like 5G, AI, and IoT, it is an exciting time to see how these advancements will continue to shape our future. How do you feel about the advancements in technology and how it’s changing our life?

 

What really amazes me is that I initiated this conversation, but the bot sustained it. The bot had to decide on a topic to discuss, which ideas to bring up and a relevant question to ask. It was even able to generate an opinionated response about the state of technological advancements. The human-like tone got even more interesting when I flipped the script and had the bot question me.

 

HH: Ask me a difficult question.

CGPT: What is the meaning of life?

HH: Not that difficult.

CGPT: I apologize if that question was not difficult for you, I understand that it can be a philosophical question that different people may have different perspectives on.

How about this one: Explain the concept of quantum entanglement and its implications for our understanding of the universe?

HH: Ask me a very simple question.

CGPT: What is your favorite color?

 

After every round of questioning from the bot, I continued the conversation. I found that the bot corrected my misconceptions and provided further prompts to learn more about the subject matter. Its ability to steer me toward the right answer resembles how a teacher guides a student through a topic, staying cognizant of the student’s abilities and tailoring a plan to help them understand it. This guiding was the most human-like aspect, especially since the bot refers to itself in the first person in its responses.

I concede that the bot’s ability to hold a conversation and ask insightful questions doesn’t make it sentient. The technology behind the program is advanced enough to mimic human speech patterns and perception. The bot constantly reminds us that it is not human and cannot offer opinions, yet it feels so close to being an actual person that it’s unnerving.

I’m cautiously excited to see how far this technology can advance and how accessible it can be to the everyday user. Perhaps in a dystopian future, we’ll have bots making more of our decisions for us while we take a backseat and let everything run its automated course.

Harsh Hiwase writes about ethics and healthcare. Write to him at [email protected].