
AI is getting really good at making things like text, pictures, and even computer code. But there’s a big problem: sometimes AI makes stuff up that isn’t true. This is called AI hallucination. Let’s talk about what this means and why it matters.

What is AI Hallucination?

AI hallucination happens when an AI system creates something that looks real but isn’t based on facts. It’s different from when AI is simply asked to invent something new. With hallucination, the AI output seems like it’s based on real info, but it’s actually not. Emily M. Bender, who studies language at the University of Washington, explained it like this to Built In:

 

“If you see the word ‘cat,’ that immediately evokes experiences of cats and things about cats. For the large language model, it is a sequence of characters C-A-T,”

“Then eventually it has information about what other words and what other sequences of characters it co-occurs with.”

 This means AI doesn’t really understand words like we do. It just knows which words often go together.

 Shane Orlick, who works at the AI company Jasper, puts it even more simply:

 “[Generative AI] is not really intelligence, it’s pattern matching,”

he told Built In.

“It’s designed to have an answer, even if the answer is not factually correct.”

 So AI isn’t really thinking. It’s just putting words together based on patterns it’s seen before.
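
To make the “pattern matching” idea concrete, here is a deliberately oversimplified toy sketch in Python. This is not how ChatGPT or any real chatbot is built (real systems use neural networks trained on enormous amounts of text), and the three training sentences are invented for illustration. The point is only that a program which always picks the statistically most common next word produces fluent-sounding text without ever checking whether that text is true.

```python
# Toy sketch only: a tiny "predict the next word" generator. It is nowhere near
# how a real large language model works, but it shows the basic idea of pattern
# matching. The training sentences are invented for illustration.
from collections import Counter, defaultdict

training_sentences = [
    "the james webb telescope took the first picture of a planet outside our solar system",
    "the james webb telescope took the first picture of a distant galaxy",
    "the hubble telescope took the first picture of a deep field",
]

# Count which word tends to follow each pair of words in the training text.
follows = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        follows[(w1, w2)][w3] += 1

def generate(start, max_words=20):
    """Greedily pick the most common continuation, whether or not it is true."""
    words = list(start)
    for _ in range(max_words):
        candidates = follows.get((words[-2], words[-1]))
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate(("the", "james")))
# Prints a fluent, plausible-sounding sentence stitched together purely from
# word statistics; nothing in this process ever checks whether it is true.
```

Notice that the generated sentence sounds confident and grammatical, yet whether it is factually accurate never enters the process at all. That, in essence, is the failure mode behind the real examples below.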

 Why Should We Worry About AI Hallucination?

 AI hallucination can be dangerous because it can spread false information. As we see more AI-created content, it might get harder to know what’s true and what’s not. This could make people believe things that aren’t real or get tricked by fake news. Christopher Riesbeck, who teaches computer science at Northwestern University, explains:

 “They’re always generating something that’s statistically plausible,”

he told Built In.

“It’s only when you look closely that you may say, ‘Wait a minute, that doesn’t make any sense.’”

 This means AI can make stuff that looks right at first, but isn’t actually true when you look closer.

 Real Examples of AI Getting Things Wrong

 Let’s look at some real times when AI messed up:

 1. Google’s AI and Space News

 In February 2023, Google showed off its AI called Bard (now called Gemini). Bard made a big mistake. It said the James Webb Space Telescope took the first picture of a planet outside our solar system. But that’s not true. NASA says the first pictures of planets outside our solar system were taken in 2004. The James Webb telescope wasn’t even launched until late 2021. This shows how AI can confidently say things that are totally wrong. (Example from Built In.)

 2. Microsoft’s AI and Money Stuff

 When Microsoft showed people its new AI for Bing, it made mistakes about money. It got figures wrong when it summarized earnings reports from companies like Gap and Lululemon. This is scary because people might make important financial decisions based on wrong AI info.

 3. AI in Court

 In June 2023, a lawyer in New York got in big trouble for using ChatGPT to help write a legal brief. The AI made up fake court cases and legal opinions that didn’t exist. The lawyer said he didn’t know ChatGPT could make things up. He was fined and sanctioned for the mistake.

 Emily Bender explains why this happened:

 

“It has been developed to produce output that is plausible and pleasing to the user,” she told Built In.

“So when a lawyer comes in and says ‘Show me some case law that supports this point,’ the system is developed to come up with a sequence of words that looks like case law that supports the point.”

 Her words show how dangerous it can be to trust AI without checking its work, especially for important things like legal cases.

 4. AI Making Up Stories About Real People

 ChatGPT once made up a fake story about a real law professor. It said he sexually harassed students on a school trip that never happened. The AI probably mixed up real information about the professor’s work to stop sexual harassment with made-up details.

 In another case, ChatGPT falsely said a mayor in Australia was found guilty of bribery. In reality, he had helped catch the real criminals. This shows how AI can get things backwards and hurt real people’s reputations.

Personal experiences 

I’ve experienced a hallucination from ChatGPT myself recently. I asked ChatGPT what the short piece of music that plays over a movie studio’s logo is called. It told me it’s called a fanfare, which was wrong, because the correct answer was a jingle. I then asked whether a jingle was anything like the introductory music for Marvel Studios, and it said yes. I also asked whether a jingle was a bit like Paradox Studios too, and again it said yes, even though, as far as I can remember, there isn’t a film studio called Paradox Studios at all.

 How Often Does AI Make Stuff Up?

 Daniela Amodei, who helps run an AI company called Anthropic, says:

 

“I don’t think that there’s any model today that doesn’t suffer from some hallucination,”

she told AP News.

“They’re really just sort of designed to predict the next word. And so there will be some rate at which the model does that inaccurately.”

 

Some experts think AI hallucinates between 3% and 10% of the time. That might not sound like a lot, but it means you can’t trust AI to always tell the truth.
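
To see why even “only” 3 to 10 percent matters, here is a small back-of-the-envelope calculation. It assumes, purely as a simplification, that every answer has the same independent chance of containing a hallucination; real rates vary a lot by model and task, so treat the numbers as an illustration rather than a measurement.

```python
# Rough illustration: if each answer has a fixed, independent chance of
# containing a hallucination, how likely is a batch of answers to contain
# at least one? (A simplification; real rates vary by model and task.)
def chance_of_at_least_one_error(error_rate, num_answers):
    return 1 - (1 - error_rate) ** num_answers

for rate in (0.03, 0.10):
    for n in (1, 10, 50):
        p = chance_of_at_least_one_error(rate, n)
        print(f"error rate {rate:.0%}, {n:>2} answers -> "
              f"{p:.0%} chance at least one contains a hallucination")
```

Even at the low end of that range, someone who asks 50 questions ends up with roughly a four-in-five chance that at least one answer contains something made up, which is exactly why checking matters.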

 How to Spot and Stop AI Hallucination

 Here are some ways to deal with AI hallucination:

1. Check Multiple Sources

Always look at more than one trusted source to make sure information is correct. Don’t just trust what AI tells you.

 2. Use Fact-Checking Tools

 There are websites and tools built to help detect false or AI-generated information, such as Winston AI and Originality AI. Use them to check AI-generated content.

 3. Look for Things That Don’t Make Sense

If something in AI-generated content seems odd or doesn’t fit with what you know, it might be made up.

4. Understand AI’s Limits

 Knowing that AI can make mistakes helps us be more careful with the information it gives us.

 5. Be Extra Careful with Important Stuff

 

For things like legal documents, medical advice, or financial decisions, always have a human expert double-check AI-generated content.

Teaching People to Be Careful with AI Information

 It’s really important to teach people how to think carefully about information, especially when it comes from AI. This includes:

 1. Questioning what you read or see online

2. Checking who created the information and if they’re trustworthy

3. Learning how AI works and what it can and can’t do

4. Being aware that some content is made to make you feel strong emotions, which might be a sign it’s not true

5. Looking at different sources to get a full picture of a topic

 Making AI Better in the Future

People are working on ways to make AI more accurate and less likely to make stuff up. Some ideas include:

 

1. Creating AI that can explain its answers

2. Using better information to train AI

3. Having AI experts work with fact-checkers to find and fix mistakes

4. Making sure AI is developed in a way that cares about being truthful

 

Sam Altman, who runs OpenAI (the company that made ChatGPT), is hopeful about fixing this problem:

 “I think we will get the hallucination problem to a much, much better place,”

he told AP News.

“I think it will take us a year and a half, two years. Something like that. But at that point we won’t still talk about these. There’s a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other.”

 But Emily Bender warns that the problem might not go away completely:

 

“But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance,”

she told AP News.

“Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”

 

This means we might always need to be careful with AI-generated information.

What Some Smart People Think About This

 Bill Gates, who helped start Microsoft, is hopeful:

 

“I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction,”

he wrote in a blog post in July 2023, as reported by AP News.

Even Sam Altman, who runs one of the biggest AI companies, says we should be careful:

 

“I probably trust the answers that come out of ChatGPT the least of anybody on Earth,”

he said at a university event, according to AP News.

Some people think AI hallucination isn’t all bad. Shane Orlick from Jasper says:

 

“Hallucinations are actually an added bonus,”

he shared with Built In.

“We have customers all the time that tell us how it came up with ideas — how Jasper created takes on stories or angles that they would have never thought of themselves.”

 In other words, AI making stuff up can sometimes lead to new ideas. But it’s important to know when we want AI to be creative and when we need it to stick to facts.

What Can We Do About AI Hallucination?

 Here are some things we can all do to deal with AI hallucination:

 

1. Be skeptical: Don’t automatically believe everything AI tells you.

2. Check facts: Use trusted sources to verify important information.

3. Learn about AI: Understanding how AI works can help you spot when it might be wrong.

 4. Support good journalism: Real human reporters are important for finding and sharing true information.

5. Use AI responsibly: If you use AI tools, be clear about when you’re using them and check their work.

6. Think critically: Don’t let AI replace your own thinking and judgment.

Conclusion

AI hallucination is a big challenge as AI becomes more advanced. While AI can do amazing things, we need to be careful about the information it creates.

We all have a role to play in dealing with AI hallucination. We need to think critically about the information we see, especially if it comes from AI. People who make AI need to work on making it more accurate and honest.

 The future of AI and truth is uncertain, but by working on the problem of AI hallucination, we can try to make sure AI helps us get good information instead of spreading false ideas and fake news, which are a serious threat in the coming elections.

 Remember, even the smartest AI today can make mistakes. It’s up to us to use AI wisely and always think for ourselves.

 


