Hello everyone and welcome back to the Cognixia podcast. Every week, we bring you fresh new content from the exciting world of emerging digital technologies – news, current affairs, best practices, Q&A, and so much more.
If you were here last week or you looked through the list of episodes before tuning in today, you would know that we are currently doing a special five-part series on one of the most talked-about tools of our times – ChatGPT. For five weeks, beginning last week, we are bringing you new dimensions, information, and insights about this amazing new tool that is taking the world by storm. In this series, we talk about the good and the bad, give you some useful tips, and bring you a lot of interesting content about ChatGPT. Who knows, we might even get ChatGPT to write an episode for us, hahaha!
So DO NOT miss this five-part series on the Cognixia podcast! Make sure to hit ‘Follow’ and enable the notifications to be notified each time we post a new episode in this series.
Today’s episode is the second in this five-part series. We have heard a lot about what amazing things ChatGPT can do, and the wide variety of tasks it can help accomplish. From writing emails to doing calculations, we have seen ChatGPT in action, doing it all and acing it like a boss. But what if we told you that this all-rounder of a tool actually has some things it cannot do or cannot do well?
Now, that would be interesting, Sachi, wouldn't it?
Absolutely, Murtaza! When our writers told us this, even we refused to believe it. But they came packed with so much evidence and logic that it was hard to disagree. We knew ChatGPT could answer a lot of questions, write a lot of things for us, and even entertain you if you were getting bored, but a list of things that ChatGPT couldn't do took some time for us to wrap our heads around, didn't it, Murtaza?
It did, I am still reeling from it. But this also makes it important for us to tell our audiences about it, right?
It does, it totally does. I couldn’t agree more.
So, let us begin today’s episode then. Let’s tell our audiences about what ChatGPT CANNOT do.
Before we begin, I would like to do a quick refresher on what ChatGPT is and how it functions.
Yeah, yeah, let’s do that quickly.
ChatGPT is a state-of-the-art language model developed by OpenAI, a leading artificial intelligence organization. ChatGPT has been designed to understand and generate human-like text, making it a powerful tool for many applications. The tool is based on the Generative Pre-trained Transformer, or GPT, family of pre-trained models. It is trained so widely and effectively that it can generate content that is almost indistinguishable from human-generated content in both substance and style.
To get a bit more technical, ChatGPT is a transformer-based language model that uses deep neural networks to process and understand text. Thanks to this architecture, ChatGPT can generate more fluent and natural-sounding text than previous models have been able to. Additionally, the ChatGPT model has been trained on a very wide range of content from the internet; we could even say that ChatGPT has been trained on just about all the content available on the internet. Due to this, ChatGPT is better equipped to understand and respond to various topics, be it everyday conversations or technical discussions.
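For listeners following along with the transcript, here is a minimal sketch of what "a transformer-based language model generating text" looks like in practice. It is only an illustration under assumptions: it uses the small, openly available GPT-2 model through the Hugging Face transformers library as a stand-in, since ChatGPT itself is far larger and is accessed through OpenAI's service rather than run locally.

```python
# Minimal, illustrative sketch: generate text with a small open transformer
# model (GPT-2). This is a stand-in for the idea behind ChatGPT, not ChatGPT.
from transformers import pipeline

# Load a pre-trained text-generation pipeline (downloads the GPT-2 weights).
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a prompt; it predicts likely next tokens,
# which is the core mechanism behind transformer-based chatbots.
prompt = "ChatGPT is a language model that"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

GPT-2 is used here only because it is small enough to run on a laptop; the underlying principle of predicting the next token is the same.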
Alrighty. So, now that we have touched upon what ChatGPT is and how it functions, let us proceed to what we want to focus on in this episode – what ChatGPT cannot do.
If you have ever tried asking ChatGPT malicious or hateful questions, even just for fun, you would have seen that it does not respond the way you expected. Instead, you might have seen it deny your request and tell you that it is trying to keep the environment safe and unbiased. However, users have sometimes succeeded in getting past the bot's guardrails.
The problem is that while ChatGPT is super smart and super intelligent, it still cannot really intuit what users want from it. This causes ChatGPT's responses to vary quite a lot even when there are only very small differences in the questions asked. So, while one question may lead ChatGPT to deny the user's request, a slight rephrase might trigger a full response from ChatGPT.
We must mention, though, that OpenAI, the company that created ChatGPT, is constantly working to improve ChatGPT's accuracy and reduce its bias. It is technology, after all, and it can only be so smart.
Interesting. You know, I was talking to ChatGPT one time, and I feel it occasionally lapses into a kind of senility, I would say. Like, it will suddenly forget its own name or what we were talking about, as if it zoned out.
Does it, now? This sounds like some people I know actually, hahaha!
Probably an occasional algorithm malfunction or something, I guess.
Yeah, looks like it.
Another thing I have heard is that the programming Q&A site Stack Overflow has banned ChatGPT-generated answers for the foreseeable future.
Oh yes, I heard that one too. Apparently, the average rate of getting correct answers from ChatGPT is quite low, so it can be substantially harmful to use the answers you get as-is, without verifying them. Some software programmers did try to use ChatGPT to do their coding jobs for them, but ChatGPT seems to have a bit of a problem: it inserts gibberish into codebases, which then becomes quite a challenge to discover and clean up, requiring a check of every line of code to correct the errors and bugs.
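Since we are on the subject of AI-generated code, here is a small, purely hypothetical sketch of the kind of sanity check worth running before trusting such code. The is_prime function below and its bug are invented for illustration; they do not come from Stack Overflow or ChatGPT.

```python
# A minimal sketch of checking AI-suggested code against known answers before
# using it. The function below is a hypothetical example of what a chatbot
# might produce: it looks reasonable but wrongly treats 1 as a prime number.

def is_prime(n: int) -> bool:
    """Hypothetical AI-suggested implementation (contains a deliberate bug)."""
    if n < 1:
        return False
    for divisor in range(2, int(n ** 0.5) + 1):
        if n % divisor == 0:
            return False
    return True  # Bug: returns True for n == 1, which is not prime.

def run_checks() -> None:
    # Known inputs and expected outputs act as a quick regression test.
    expected = {1: False, 2: True, 3: True, 4: False, 9: False, 17: True}
    for value, want in expected.items():
        got = is_prime(value)
        status = "OK  " if got == want else "FAIL"
        print(f"{status} is_prime({value}) = {got}, expected {want}")

if __name__ == "__main__":
    run_checks()
```

The point is simply that a handful of known inputs and expected outputs can surface the subtle, plausible-looking mistakes described above before they reach a codebase.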
So, now we know that ChatGPT cannot be intuitive about what you are saying, it cannot stay coherent about its identity or maintain a proper memory of an ongoing conversation, and it cannot code well.
So far, yes. What else do you know ChatGPT cannot do?
I believe ChatGPT can write a lot of things for you, say an article or a blog, but its accuracy is very often quite questionable, especially if your request falls outside the knowledge it currently has. It is highly likely you will end up with a smart, well-written article that is nonetheless fabricated or inaccurate.
Ahhh, see, now this is interesting. How many of us believe that ChatGPT is accurate and go by what it says? This is exactly like Googling your symptoms and discovering you have terminal-stage cancer, and you will likely die in the next six seconds or something, right?
It's funny you mention that. But in all seriousness, people, what we mean is that ChatGPT is superb at writing really smart-sounding stuff, but it does not do well on the accuracy front, we believe. ChatGPT sometimes hallucinates its way through conversations, making accuracy, or sometimes even relevance, go for a complete toss. For instance, when we asked ChatGPT what was heavier, 10 kg of iron or 10 kg of cotton, it said 10 kg of iron, even though they weigh exactly the same! A lot of users have reported such anomalies or wrong information that they have bumped into when conversing with ChatGPT.
Actually, Sachi, I remember reading about the renowned AI researcher Timnit Gebru, who was working at Google when she co-wrote an influential research paper calling AI models like ChatGPT "stochastic parrots" that can repeat words without understanding them. In a way, ChatGPT does not understand anything it says, and it also kinda doesn't care. A lot of scholars and academics like Gebru have been warning us about the dangers and limitations of generative AI models, but the sheer excitement generated by the release and subsequent performance of ChatGPT has drowned out all those warnings. In fact, it is said that Gebru was fired from Google after she wrote that seminal paper.
Ooooh, so she was hushed up, is it?
Sounds like it, doesn’t it?
Well, we would need to dig further to confirm it, and I don’t think we would find a lot of concrete information there to verify either, would we?
Doesn’t look like it, I feel, Sachi.
Yeah… The times we live in…
Well, at least through this episode, we have been able to give our listeners a fair idea about what ChatGPT cannot do. We are sure our listeners have already found plenty of material about what ChatGPT can do, and many of them have probably signed up and tried ChatGPT themselves. But we hope today's episode helped our listeners understand the limitations of ChatGPT.
Exactly, today's episode was all about what ChatGPT cannot do. It is good to know what you are dealing with and what limitations you are likely to encounter when using a chatbot tool.
Totally. I enjoyed talking to you, it was fun to do today’s episode.
I agree. I loved talking to you about this too.
So, with that, we come to the end of today's podcast episode. As we mentioned before, this was the second episode in our five-part series on ChatGPT. We are thoroughly enjoying producing this special series for our listeners, so please follow us to make sure you don't miss an episode.
Should we give a hint about the next episode in the series?
Maybe, just a little.
So, in the next episode, we will talk more about the threats posed by ChatGPT and the risks of using it, mostly from a data privacy perspective, including just how privacy-friendly a tool ChatGPT really is, and things along those lines.
Ahh, that sounds really exciting, but I think that was more than just a hint, no? Now, I can’t wait for next week’s episode!
Hahahaha!
And for you, our listeners, we are running some super promotions this month, the last month before the appraisal season begins everywhere. So, check out our website www.cognixia.com and connect with us on the chat window there.
We promise we will have humans responding to you, and we will not make you talk to ChatGPT!
Hahahaha!
Until next week then! May the learning never stop!
Happy learning!