Recently, OpenAI CEO Sam Altman shared a surprising statistic: people saying “please” and “thank you” to ChatGPT costs the company tens of millions of dollars a year in computing power.
Every extra word requires additional processing, and when you multiply that across hundreds of millions of users, basic politeness suddenly becomes expensive.
And yet people keep doing it. Surveys suggest that 67% of Americans say “please” or “thank you” when interacting with AI, even though everyone knows the chatbot doesn’t actually have feelings.
From a Jewish perspective, that instinct might make more sense than it first appears.
Jewish tradition, after all, encourages us to treat even inanimate objects with a surprising degree of thoughtfulness.
Take the challah on Shabbos. Normally, the blessing over bread comes before the blessing over wine. But on Shabbos we make Kiddush first, and the challah is covered during the blessing over the wine so that it won’t be “embarrassed.” Of course, the challah itself has no feelings.
The Midrash offers another example. When it came time to turn the Nile into blood, Moshe wasn’t permitted to strike the river himself. Why? Because the Nile had protected him as a baby when he was placed there in a basket. Out of gratitude, Moshe could not be the one to harm it.
The river, needless to say, didn’t have feelings either, and yet the Torah insists on the lesson.
Judaism understands something subtle but powerful about human behavior: our actions don’t just affect the people (or objects) around us — they shape who we become. When we train ourselves to show gratitude even toward a river, or to avoid “embarrassing” a loaf of bread, those habits are quietly building our moral reflexes. The point isn’t really the challah. It’s the character we develop by treating the world with care.
Interestingly, recent research suggests that politeness may even help when interacting with AI itself.
A 2024 study from Waseda University looked at how large language models responded to prompts written with different levels of politeness in English, Chinese, and Japanese. The researchers found that blunt or rude prompts were more likely to produce weaker responses, including more mistakes or refusals, while moderately polite requests tended to generate the most reliable answers.
The reason is fairly intuitive once you think about it. These models learn patterns from the text they were trained on. When someone writes something like, “Could you help me structure this analysis?” the phrasing resembles thoughtful, professional writing. A command like “give me the answer,” on the other hand, is more likely to resemble casual internet language.
Google DeepMind researcher Murray Shanahan put it nicely: interacting with a chatbot is a bit like working with a very capable intern. Treat the intern like a colleague, and you’re more likely to get colleague-level work; bark orders, and you may get the bare minimum. In short, speaking politely to your AI chatbot may actually yield better answers.
Seen through that lens, saying “please” to ChatGPT might not be as irrational as it sounds. The habit of kindness still shapes us, even if the chatbot doesn’t care. In Jewish thought, that is always worth the effort.
If you found this content meaningful and want to help further our mission through our Keter, Makom, and Tikun branches, please consider becoming a Change Maker today.