How to Spot AI Misinformation – Deseret News

One of Google’s latest technological developments is a generative artificial intelligence feature that shows users AI-generated summaries in response to their searches.

The project comes at a time when artificial intelligence is becoming more widespread in workplaces, schools and homes.

This new AI project is intended to take “the work out of searching.” According to Google, “With this powerful new technology, we can unlock entirely new types of questions you never thought Search could answer, and transform the way information is organized, to help you sort and make sense of what’s out there.”

Unfortunately for Google, AI-generated search answers may or may not be correct. According to NBC News, the artificial intelligence system is facing “mockery on social media.”

Is Google AI trustworthy?

According to NBC News, social media users have been posting AI answers that are simply incorrect. One user Googled “1000 km for Lego” and was told the distance was equivalent to “a 1000 km bike ride to deliver LEGO bricks to a hospital.”

When NBC News tested the AI system, searching for “how many legs does an elephant have,” Google’s AI responded: “Elephants have two feet, with five toes on the front feet and four on the back feet.”

One Reddit user pointed out that an 11-year-old Reddit comment was likely the source of Google AI’s answer to the search “cheese doesn’t stick to pizza.” The program suggested adding “about 1/8 cup of non-toxic glue to the sauce.”

Despite the problems, Google said in a written statement that “the vast majority of AI overviews provide high-quality information, with links to delve deeper into the web,” according to The Associated Press.

Google’s artificial intelligence tool was also tested by The Associated Press. When asked “what to do if you get a snake bite,” Google’s AI gave a detailed answer.

However, some concerns go beyond the cheese not sticking to the pizza. Linguistics expert Emily M. Bender expressed her concerns to The Associated Press. If a user asks Google an “emergency question,” an incorrect answer could be dangerous.

Additionally, Bender and her colleague Chirag Shah “warned that such AI systems could perpetuate the racism and sexism found in the enormous troves of written data they have been trained on.”

“The problem with that kind of misinformation is that we’re swimming in it,” Bender told The Associated Press. “And so people’s biases are likely to be confirmed. And it’s harder to spot misinformation when it confirms your biases.”

According to Euronews, the AI works by predicting “which words would best respond to the questions asked based on the data they have been trained with.” This means the systems can make up information, a problem known as hallucination.

How can I know if the AI information is correct?

Artificial intelligence is not inherently a bad thing, but falling for misinformation can be harmful. Julia Feerrar, a digital literacy educator at Virginia Tech, offered some suggestions.

  • First, fake news of any kind is often “designed to appeal to our emotions.” Feerrar proposed that users stop and think for a moment whether online information provokes an emotional reaction.
  • Check claims by verifying them against other sources. Look for reliable, legitimate news outlets.
  • AI-generated photographs and images often look subtly off. Be sure to examine them carefully for errors.