BingGPT is as bad as I thought it would be


Microsoft recently opened its ChatGPT-enhanced Bing search to a bunch more people, and it's already showing just how bad it is. We shouldn't be surprised, because this sort of AI is 90% novelty, but we are, because it was hyped as the next coming of tech holiness.

It's not. It's not Microsoft's fault, either; Google's version will be the same way. The tech simply wasn't designed to do this, and even if it had been, it's not even close to being ready. The only people pining for more of this sort of AI integration, built on the current tech, are the people who don't really understand its limitations.

Even OpenAI CEO Sam Altman says that "people are begging to be disappointed and they will be" when it comes to ChatGPT (OpenAI is the company that built the ChatGPT software). That didn't matter, because people saw everyone having fun with it while hearing claims that it would put people out of jobs or turn schooling into easy mode because it can write students' papers for them. Companies just had to board the hype train and try to cash in, consequences be damned.

So what's wrong with BingGPT, you ask? Plenty. It isn't afraid to toss out racial slurs. It got facts wrong throughout the scripted demonstration when Microsoft announced it. Ask it the day before the Super Bowl is played and it will tell you the Eagles already won it. But most of all, it mimics the behavior it was trained on: people typing things on the internet.

Case in point: 

(Embedded tweet showing one such BingGPT exchange.)

That's not a hoax, nor are the many other examples of BingGPT being equally stupid. Once you realize why it does what it's doing, it all makes sense.

ChatGPT and most other LLMs (Large Language Models) are trained by "reading" the internet. A giant digital library of text is perfect for training an AI to look for answers and try to sound like a person when it regurgitates them back to you.

But it is not a person and has no sense of right versus wrong when it comes to answering you. That means it has no way of knowing if the information presented is factual, and it bases its "personality" on internet users.

The first is going to be difficult to fix. Even though we call it Artificial Intelligence, it is decidedly not intelligent. It's like a parrot and can only repeat the things it was told. The second is impossible to fix without finding a new way to train the AI: it can act like a Reddit troll because it learned from Reddit trolls.

Pipi, the author's parrot (Image credit: Jerry Hildenbrand)

The AI is tuned to respond in ways we'd call assertive, authoritative, kind, and so on, and it's shown examples of what each of those looks like. The result is sometimes like arguing with an addict, because it learned to argue the way an addict does. It's not evil or stupid or even nasty. It's just doing what it was programmed to do: predict what comes next, using examples from its training data to sound like a human.
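To make that "predictive parrot" idea concrete, here's a deliberately tiny Python sketch. It's nothing like what's actually running under Bing's hood, and the toy corpus and function names are made up for illustration; it just learns which word tends to follow which from whatever text it's fed, then strings together plausible-sounding output with zero fact-checking.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record which word follows which in the training text."""
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows: dict, start: str, length: int = 10) -> str:
    """Predict one word at a time by picking from words seen after the current one."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # plausible-sounding, never fact-checked
    return " ".join(out)

# Made-up training text: feed it a confident claim and it will happily repeat the pattern.
model = train("the eagles won the super bowl and the eagles won the game")
print(generate(model, "the"))
```

Feed a toy like that confident nonsense and it hands back confident nonsense. The real models are enormously more sophisticated, but the basic failure mode is the same: prediction, not knowledge.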

Right now it's good for a laugh and little else, unless you're trying to cash in on the grift with your paid course on how to "ChatGPT like a pro." Microsoft (and Google) should never have presented it as anything other than a neat toy. But both did, and now there's an expectation that it will be reliable and authoritative when it comes to search, without sounding like the Twitter replies it was trained on.

Future generations of LLMs will get better at rooting out the nonsense, and maybe they'll also be better suited to becoming our search-bot best friend, but that's going to require a major breakthrough in AI; anything based on the current methods is never going to be good enough. In the meantime, remember to enjoy the train wreck and not take any of it too seriously if and when it arrives on your phone.

Jerry Hildenbrand
Senior Editor — Google Ecosystem

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Twitter.