Artificial Intelligence (AI) gained a lot of ground in 2023 and late 2022. Unless you’re coming from another planet, you’ve surely heard dozens of times about the hype surrounding OpenAI’s ChatGPT AI chatbot. It rapidly became clear that the dominance of Google’s search engine was in danger. As a result, the famous Mountain View-based tech behemoth brought out its own version of an ‘all-knowing’ AI chatbot: the one called Bard.
But are apps similar to ChatGPT really that powerful? Well, the truth is that we humans tend to exaggerate tech inventions a lot. AI chatbots are far from perfect and nowhere near as intelligent as skilled and educated human beings. They rely only on a huge database and algorithms to figure out how to respond to the user’s prompts. In fact, AI chatbots have no idea what the user is asking them to do. In other words, these chatbots don’t have any sort of consciousness, and they don’t “think” in the way we are tempted to expect from a superintelligent computer. So there’s no use fearing that AI chatbots will decide that humans are useless and set out to exterminate us, as we’ve seen so many times already in sci-fi movies.
Therefore, don’t be surprised if ChatGPT provides you with totally fallacious answers to pertinent general-knowledge questions from fields such as tech, history, entertainment, and so on. That’s one of the best pieces of evidence that ChatGPT doesn’t actually have any idea what the user wants it to do, even when the prompts are written in a very coherent way. Those are the scenarios that NVIDIA hopes to address in the near future, and luckily, it has an idea of how to do it.
NeMo Guardrails is NVIDIA’s new software for overcoming AI ‘hallucination’
According to CNBC, NVIDIA unveiled new software known as NeMo Guardrails, which aims to curb the wrong answers that AI chatbots can give users. These answers include fallacious information, mentions of harmful things, and responses that open up security holes.
NVIDIA’s new software can rein in AI ‘hallucination’ by relying on guardrails: rules that prevent the chatbot from addressing specific topics.
Jonathan Cohen, vice president of applied research at NVIDIA, explained, as quoted by CNBC:
“You can write a script that says, if someone talks about this topic, no matter what, respond this way,”
“You don’t have to trust that a language model will follow a prompt or follow your instructions. It’s actually hard coded in the execution logic of the guardrail system what will happen.”
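The idea Cohen describes — hard-coding what happens for certain topics in the execution logic, rather than trusting the model to follow instructions — can be sketched in a few lines of Python. Note that this is a simplified illustration of the concept, not NVIDIA’s actual API; the topic keywords, canned replies, and the stand-in model call are all hypothetical:

```python
# Illustrative sketch of a guardrail layer: topic rules are hard-coded in the
# execution logic, so flagged prompts never reach the language model at all.
# All names and rules below are made up for illustration.

BLOCKED_TOPICS = {
    "politics": "Sorry, I can't discuss politics.",
    "medical advice": "I can't give medical advice. Please consult a professional.",
}

def call_language_model(prompt: str) -> str:
    """Stand-in for a real (hypothetical) LLM call."""
    return f"Model answer for: {prompt}"

def guarded_respond(prompt: str) -> str:
    """Check the prompt against hard-coded rules before calling the model."""
    lowered = prompt.lower()
    for topic, canned_reply in BLOCKED_TOPICS.items():
        if topic in lowered:
            # The rule fires deterministically, no matter what the
            # language model would have generated.
            return canned_reply
    return call_language_model(prompt)

print(guarded_respond("Tell me about politics in 2024"))
print(guarded_respond("What is the capital of France?"))
```

Because the check happens outside the model, the canned reply is guaranteed: the model is never even invoked for a blocked topic, which is what "hard coded in the execution logic" means in practice.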
Let’s face it: abandoning AI chatbots is not a feasible scenario, regardless of how much some groups of people hate the technology. It’s a revolutionary technology that can make the world a better place, but only if it’s used responsibly and in the right way.