
How safe is it to ask AI to verify information on the internet?

The use of artificial intelligence chatbots for fact-checking is on the rise


“Hey, @grok, is this true?” Thousands of users on X (formerly Twitter) have been asking this question to quickly verify information they see on the platform, ever since xAI, Elon Musk's company, launched its generative artificial intelligence chatbot Grok in November 2023, and even more so since it was extended to all non-premium users in December 2024.

A recent survey by the British technology publication TechRadar revealed that 27% of Americans have used artificial intelligence tools such as OpenAI's ChatGPT, Meta's Meta AI, Google's Gemini, Microsoft's Copilot, or applications like Perplexity, instead of traditional search engines such as Google or Yahoo.

But how accurate and reliable are these chatbots' answers, really? Many have asked themselves this question in light of Grok's recent statements about an alleged "white genocide" in South Africa. Beyond the bot's problematic stance on the topic, X users were also surprised that it began talking about it when asked about completely unrelated subjects, as in the following example: the debate over an alleged "white genocide" arose after the Trump administration admitted white South Africans to the United States as "refugees". Trump claimed they were facing a "genocide" in their country, an unfounded accusation that many link to the racist far-right conspiracy myth of the "great replacement".

xAI attributed Grok's insistence on the topic to an "unauthorized modification" and said it had carried out a "thorough investigation".

A study reveals factual errors and altered quotes

Two studies carried out this year by the British public broadcaster BBC and the US-based Tow Center for Digital Journalism have identified significant deficiencies in the ability of generative AI chatbots to convey journalistic information accurately.

In February, a BBC study concluded that "the responses generated by AI assistants contained significant inaccuracies and distortions" of content produced by the organization.

When ChatGPT, Copilot, Gemini and Perplexity were asked to answer questions about current news using BBC articles as a source, it was found that 51% of the answers presented "significant problems of some kind".

Nineteen percent contained factual errors introduced by the chatbot, and 13% included quotes that were altered or did not appear in the cited articles.

"The assistants of Ia no can are currently sources reliable news and run the risk of uninform a the audience", concluded pete archer, director of the program of ia generative of the bbc.

Incorrect answers given with "alarming confidence"

Similarly, research by the Tow Center for Digital Journalism, published in March 2025 in the Columbia Journalism Review (CJR), revealed that eight generative AI search tools failed to correctly identify the origin of article excerpts in 60 percent of cases.

Perplexity performed best, with an error rate of 37%, while Grok answered 94 percent of the queries incorrectly.

The CJR expressed particular concern about the "alarming confidence" with which these tools presented erroneous answers. It reported, for example: "ChatGPT incorrectly identified 134 articles, but expressed doubts only fifteen times out of its two hundred responses, and never declined to provide an answer."

Overall, the study concluded that the chatbots were "not very good at declining to answer questions they could not answer accurately, offering incorrect or speculative answers instead", and that the AI search tools "fabricated links and cited syndicated or copied versions of the articles".

Grok attributes the same AI-generated image to different real events

The chatbots' difficulties are not limited to news. They also show serious limitations in identifying AI-generated images.

In a quick experiment, DW asked Grok to identify the date, location and origin of an AI-generated image showing a fire at a destroyed aircraft hangar, taken from a TikTok video. Grok claimed the image corresponded to different incidents in various places, from an airfield in Salisbury (England) to Denver International Airport (Colorado) and Tan Son Nhat International Airport in Ho Chi Minh City (Vietnam).

While there have indeed been fires at those locations in recent years, the image in question does not show any of them. DW firmly maintains that the image was generated by artificial intelligence, something Grok failed to detect despite obvious inconsistencies, such as the aircraft's inverted tail and the illogical jets of water coming out of the fire hoses.

Even more worrying, Grok recognized part of the TikTok watermark visible in a corner of the image and suggested that this "supported its authenticity". At the same time, in its "more details" tab, Grok pointed out that TikTok is "a platform frequently used for the rapid spread of viral content, which can lead to misinformation if the data is not properly verified".

"The chatbots ia no should to tools verification"

AI chatbots may seem like omniscient entities, but they are not. They make mistakes, misinterpret information and can even be manipulated. Felix Simon, a postdoctoral researcher in AI and digital news and a research associate at the Oxford Internet Institute (OII), concludes: "AI systems such as Grok, Meta AI or ChatGPT should not be considered fact-checking tools. While they can be used for that purpose with some success, it is unclear how well and how consistently they do so, especially in borderline cases."

For Canetta, AI chatbots can be useful for very simple checks. But he also warns that they should not be trusted completely. Both experts agree that users should always cross-check the answers against other sources. (gg/cp)

