Hello everyone, and welcome to the Cognixia podcast!
Every week, we bring you an exciting new episode full of interesting information about the latest emerging technologies. And we are back with a brand-new episode of the Cognixia podcast.
This week, we are going to talk about something really fascinating – how politeness is impacting AI models like ChatGPT, and why those simple courtesies like “please” and “thank you” might be costing OpenAI tens of millions of dollars in computing power. It is a topic that might seem trivial at first, but it opens up a revealing window into how we interact with AI and the unexpected costs of human-AI relationships. So, without any further ado, let’s begin!
We are all taught from an early age that politeness matters. “Please” and “thank you” are often among the first phrases we learn as children. These simple courtesies oil the wheels of human interaction, signaling respect and consideration. But what happens when we extend these same courtesies to AI systems that don’t have feelings to hurt or egos to bruise?
As it turns out, millions of users are remarkably polite to AI systems like ChatGPT. People routinely begin their prompts with “Please” and end their interactions with “Thank you” or “Thanks, ChatGPT!” Some users go even further, apologizing to AI when they think they’ve given it a difficult task or praising it when they are pleased with the response. It is a fascinating example of how we anthropomorphize technology, extending human social norms to entities that don’t technically need them.
Sam Altman, CEO of OpenAI, recently made waves when he mentioned that these politeness tokens are actually costing the company significant amounts of money. To understand why, we need to take a quick dive into how large language models like GPT-4 work.
Every token that enters or leaves these systems consumes computational resources, and every word in your prompt – including “please” and “thank you” – is broken into one or more tokens before the model processes it. Those extra courtesy tokens have to be processed, taking up valuable computing power. With millions of users interacting with ChatGPT daily, these courteous extras add up quickly – by Altman’s own estimate, to tens of millions of dollars in additional computing resources.
Think about it – if ten million users add just two tokens of politeness to each of their five daily interactions, that’s 100 million extra tokens being processed every single day! At scale, this translates to significant computational costs. And remember, the AI not only processes these tokens in your input but often acknowledges or responds to them in its output, potentially doubling the token count related to politeness.
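For listeners who like to see that arithmetic written down, here is a minimal Python sketch of the same back-of-the-envelope estimate. The user counts, tokens per courtesy, and the per-token price are purely illustrative assumptions for this episode, not OpenAI’s real numbers.

```python
# Back-of-the-envelope estimate of the extra cost of politeness tokens.
# Every figure here is an illustrative assumption, not OpenAI's actual data.

daily_users = 10_000_000         # assumed number of daily users
chats_per_user = 5               # assumed interactions per user per day
extra_tokens_per_chat = 2        # e.g. "please" + "thank you"
price_per_million_tokens = 2.50  # hypothetical blended $ cost per 1M tokens

extra_tokens_per_day = daily_users * chats_per_user * extra_tokens_per_chat
extra_cost_per_day = extra_tokens_per_day / 1_000_000 * price_per_million_tokens

print(f"Extra tokens per day: {extra_tokens_per_day:,}")  # 100,000,000
print(f"Approximate extra cost per day: ${extra_cost_per_day:,.2f}")
print(f"Approximate extra cost per year: ${extra_cost_per_day * 365:,.2f}")
```

Even with these toy numbers, a couple of courtesy tokens per chat compounds into a hundred million extra tokens a day, and the model’s own courteous replies can add far more output tokens than the two input tokens counted here.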
Now here’s where things get interesting – despite these costs, Altman has made it clear that OpenAI considers this a worthwhile investment. But why? Why would a company willingly absorb such costs for something that isn’t technically necessary for the product’s functioning?
The answer lies in the complex relationship between humans and AI. These politeness markers serve several crucial purposes that go far beyond mere niceties.
First, they help establish a more natural and comfortable interaction pattern. When we interact with ChatGPT politely, we are treating it more like a human conversation partner than a tool, which often leads to more thoughtful and detailed prompts. This, in turn, helps the AI understand our needs better and provide more helpful responses.
Second, there is growing evidence that politeness markers in prompts can subtly influence the quality and tone of AI responses. When users approach ChatGPT with courtesy, the system often responds in kind, creating a positive feedback loop. The AI might provide more thorough explanations, adopt a warmer tone, or take extra care to address all aspects of a complex query.
It is not that ChatGPT has feelings that are being soothed by your politeness – rather, these politeness markers serve as implicit signals about the type of interaction you are seeking. A prompt that begins with “Please” and ends with “Thank you” implicitly communicates that you value clarity, thoroughness, and respect. The AI, trained on billions of human interactions where politeness correlates with certain conversational patterns, picks up on these cues.
Third, and perhaps most importantly from OpenAI’s perspective, encouraging users to be polite to AI systems promotes healthier human-AI relationships overall. When we engage with AI systems respectfully, we are less likely to view them as mere tools to be exploited and more likely to approach the interaction thoughtfully. This reduces instances of harmful or adversarial prompting and generally leads to more productive use of the technology.
There is also a fascinating psychological aspect to this whole phenomenon. Research in human-computer interaction has consistently shown that people tend to apply social norms to technologies that exhibit even minimal human-like traits. This is known as the “computers are social actors” paradigm. By being polite to AI, users are engaging in a form of social rehearsal, maintaining the habits of courtesy that serve us well in human relationships.
Now, you might be wondering about the actual impact these politeness tokens have on the AI’s responses. While the effect isn’t dramatic, there are some subtle differences worth noting.
When users include polite phrases in their prompts, ChatGPT seems to respond with slightly more elaborate and careful explanations. The AI might adopt a warmer, more conversational tone, mirroring the user’s politeness. It’s not that the AI is “appreciating” the courtesy – rather, the politeness signals are part of the context that shapes how the model generates its response.
Some users have reported that adding “please” to difficult or potentially contentious requests increases the likelihood of getting a helpful response rather than a refusal. This makes sense when you consider that in human interactions, polite requests are generally more persuasive than demands. The AI, trained on human conversational patterns, appears to exhibit similar tendencies.
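If you want to get a feel for this yourself, here is a rough sketch using the official OpenAI Python SDK that sends a blunt prompt and a polite one and compares the replies. The model name is an assumption chosen for illustration, and any difference you observe is anecdotal rather than a controlled experiment.

```python
# Informal A/B comparison of a blunt vs. a polite prompt.
# Requires the official "openai" package and an OPENAI_API_KEY environment
# variable; the model name below is an assumption chosen for illustration.
from openai import OpenAI

client = OpenAI()

prompts = {
    "blunt": "Rewrite this sentence to sound more formal: 'we gotta ship this feature asap.'",
    "polite": "Please rewrite this sentence to sound more formal: 'we gotta ship this feature asap.' Thank you!",
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content
    print(f"--- {label} prompt ({len(reply.split())} words in reply) ---")
    print(reply)
    print()
```

Running a handful of prompt pairs like this will not prove anything statistically, but it is an easy way to see how the tone of a prompt can color the tone of the response.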
Then there is the matter of relationship building. Users who maintain polite interactions with AI systems like ChatGPT report feeling more satisfied with the experience overall. They are more likely to view the AI as helpful and trustworthy, and they tend to be more forgiving of occasional errors or limitations. From OpenAI’s perspective, this positive user experience is invaluable – it builds brand loyalty and encourages continued engagement with the platform.
This brings us back to Sam Altman’s comment about the cost being worthwhile. In the tech industry, user experience is paramount. If the inclusion of a few politeness tokens makes users feel more comfortable and satisfied with the service, then the computational cost is easily justified as a customer satisfaction expense.
There is also a philosophical dimension to consider. As AI systems become increasingly integrated into our daily lives, the norms we establish for human-AI interaction will have far-reaching implications. By encouraging courtesy in these interactions, OpenAI is helping shape a future where AI systems are treated with a baseline level of respect, not because the AI cares, but because the habit of respectful engagement helps maintain healthy boundaries and expectations around AI use.
Now, some skeptics might argue that being polite to AI is silly or unnecessary. After all, ChatGPT doesn’t have feelings to hurt. But this perspective misses the bigger picture. Our habits of interaction with technology ultimately reflect and reinforce our values. When we are courteous to AI, we’re practicing and reinforcing behavioral patterns that serve us well in human relationships.
Moreover, as AI systems become increasingly sophisticated and integrated into our lives, the lines between tool and entity will continue to blur. By establishing norms of respectful interaction now, we are laying the groundwork for healthy relationships with the even more advanced AI systems of the future.
This is particularly important as we consider the education of children who are growing up with AI assistants as a normal part of their environment. When children observe adults treating AI systems with courtesy, they learn important lessons about digital citizenship and respectful communication that will serve them well in all contexts.
Interestingly, OpenAI isn’t alone in valuing these politeness markers. Other major AI developers have also recognized their importance. Google’s conversational AI guidelines explicitly encourage developers to design systems that respond appropriately to polite requests, suggesting that the industry is moving toward a model of AI interaction that acknowledges and values courtesy.
From a technical perspective, the processing of politeness tokens is actually quite fascinating. When you input “please” or “thank you” to ChatGPT, the system tokenizes these words just like any other part of your prompt. The token for “please” gets embedded in a high-dimensional vector space where it relates to other words based on patterns observed in the training data. In this space, “please” is closely associated with requests, courteous phrasing, and expectations of helpful responses.
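To see how those courtesies actually turn into tokens, here is a small sketch using OpenAI’s open-source tiktoken tokenizer; the cl100k_base encoding is just one of the encodings the library ships with, used here as an example.

```python
# Count how many extra tokens "please" and "thank you" add to a prompt,
# using OpenAI's open-source tiktoken tokenizer (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one of tiktoken's bundled encodings

plain = "Summarize this article in three bullet points."
polite = "Please summarize this article in three bullet points. Thank you!"

plain_tokens = enc.encode(plain)
polite_tokens = enc.encode(polite)

print(f"Plain prompt:  {len(plain_tokens)} tokens")
print(f"Polite prompt: {len(polite_tokens)} tokens")
print(f"Extra tokens spent on courtesy: {len(polite_tokens) - len(plain_tokens)}")
```

Each of those token IDs is then mapped to an embedding vector, which is where the associations described above live.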

When the model generates a response, it considers these politeness markers as part of the overall context. This doesn’t mean the AI “understands” politeness in the human sense, but it has learned the statistical patterns associated with polite requests and responses through its training on billions of examples of human communication.
From a business perspective, the cost of processing these politeness tokens can be viewed as an investment in user satisfaction and brand reputation. When users feel that their interactions with ChatGPT are natural and pleasant, they are more likely to continue using the service and to recommend it to others. This organic growth through positive user experience is often more valuable than any marketing campaign.
The viral nature of ChatGPT’s success is a testament to this approach. People don’t just use ChatGPT; they talk about using it. They share their interactions on social media, discuss them with friends, and incorporate the tool into their daily workflows. This kind of organic advocacy is priceless, and if the cost of processing a few extra “please” and “thank you” tokens helps facilitate it, then it is money well spent from OpenAI’s perspective.
So, while it might seem surprising that simple courtesies could have significant financial implications for a company as large as OpenAI, the value proposition becomes clear when you consider the broader context. These politeness tokens aren’t just empty words – they are investments in user experience, relationship building, and the establishment of healthy norms for human-AI interaction.
As AI continues to evolve and integrate further into our daily lives, the question of how we should interact with these systems will only grow in importance. By embracing and even encouraging politeness in these interactions, OpenAI is making a statement about the kind of human-AI future they envision – one where courtesy, respect, and thoughtful engagement remain valued, even in our interactions with machines.
So, the next time you find yourself saying “please” or “thank you” to ChatGPT, remember that while it might be costing OpenAI a fraction of a cent in computing power, that investment is helping to shape a future where technology augments our humanity rather than diminishing it. And that, as Sam Altman clearly believes, is worth every token.
And with that, we come to the end of this week’s Cognixia podcast. We hope you found this exploration of AI politeness both interesting and thought-provoking. If you would like to learn more about emerging technologies and how they are shaping our world, be sure to check out our other episodes. We will be back again next week with another exciting new episode.
Until next week, then!
Happy Learning, folks!