There have been numerous studies and articles published about AI's limitations and the areas where it struggles. We all know the famous "how many Rs in strawberry" test, you can visit software development forums for a rundown of the myriad problems with vibe coding, and there are plenty of mathematicians who'll explain AI's limitations in solving complex equations.
Yet the vast majority of us use AI for everyday activities – helping to write a college paper, say, or a plethora of other tasks, many of them social. And there are areas where AI chatbots – like ChatGPT and Gemini – struggle. To be clear, we aren't talking about proprietary AI models that have been optimized for specific subjects, but rather the consumer-facing chatbots that normal people use for everyday tasks and queries. Consider these five areas:
Puns, Jokes and Literary-Style Wit
Here's a little test: Give ChatGPT a story or some information about a topic, then ask it to make up some fun puns based on that topic. Then ask your friends to do the same thing. AI's answers will be scattered (it's not even clear it knows exactly what a pun is) and mostly garbage. AI can write, of course, and it can write well at times, but it still has issues with the rhythm of language, which often shows up in jokes and literary wit. The result feels unnatural and inhuman. Will it improve over time? Sure, but at the moment AI just isn't very funny, and we include the forced gallows humor of Elon Musk's Grok in that assertion.
Sports Analysis and Predictions
AI is fairly decent if you want, say, a general feel for the sentiment around a football game, as it can frame its answer using a collection of expert opinions and reports. But if you want to construct an elaborate strategy for betting on the Super Bowl, its predictions can't go much further than the information you can find elsewhere online. The problem is that it doesn't 'see' sports, so it can't give you the insight that others have missed. It could certainly come up with a strategy if you fed it a ton of proprietary data, but that's usually out of reach for normal sports fans.
Explaining Its Decisions
Sometimes we need to know why, and AI can struggle to explain how it reached a particular conclusion or decision. This is often called the black-box problem. Some have pointed out the difficulty it poses for areas like healthcare – consider AI screening patients and a doctor needing to understand why a patient was declined – but in the humdrum of normal life, it causes trouble in areas as diverse as checking your kid's homework and filing a tax return. And while AI is improving here, it still needs to become more transparent.
Healthcare Questions
There is a reason that governments and regulators have been piling pressure on AI companies to put guardrails around medical questions. The issue is broadly simple: For years, we were told "don't Google your health problems" – it became a meme – and with AI, you are essentially doing the same thing, albeit with more concentrated answers. Once again, AI is improving, and initiatives like the ChatGPT Health Feature suggest there is a future for consumer AI in healthcare, but the rehashing of internet advice remains a problem.
Biases in Financial Advice
This one is interesting because, just as with sports analysis, AI is good at getting a measure of sentiment from financial reporting. The problem, however, comes with AI's positivity toward your ideas. In a sense, it affirms your theories. Tell AI that you think investing in Bitcoin is a good idea, and it will usually agree enthusiastically, with perhaps a brief warning about risk. It's a world away from the hard-nosed, unemotional world of financial trading. Traders need to be skeptical, and the relentless positivity does casual investors a disservice.