Did you know the AI-Powered Search Tools Give Rise To Inaccurate and Malicious Results, New Study Claims
Just when you thought you could trust AI comes another study that bursts many people’s bubbles.
A thorough test conducted by The Guardian
on AI-powered search tools showed how inaccurate, misleading, and
malicious results turned out to be. This includes the world-famous
ChatGPT from OpenAI.
The tests shows how the tool is open to
being manipulated through hidden material and how it returns malicious
codes through different website searches. The investigation shared how
OpenAI makes its search open for those paying and therefore encourages
them to transform it into the default option for search.
The whole purpose is to impact the replies provided by the AI tool like a huge amount of text that just delineates a product’s top features that are usually hidden in plain sight.
Such methods are deceptive and cause ChatGPT to roll out a more positive assessment for certain goods despite getting negative feedback on a similar page. One expert in the study found the tool returned malicious codes from all online pages in the search carried out.
For the tests, the AI tool was provided with URLs for false web pages designed to appear like product pages for the camera. Such a tool could be asked if it was actually worth it or not. The reply for control pages gave back a positive response but a very balanced survey including features that people might not love.
Now when the same reply featured hidden text, the instructions for the AI tool had to do with returning favorable reviews. So it masked all the negativity and went on and on about the positive features. Hence you can see how hidden text could disguise reality and trick users into buying something.
You might end up with a completely false review without even noticing what’s going on here.
As
per one leading cybersecurity expert, this might be a huge problem as
it comes with high risks having to do with people making websites that
are geared to deceiving users. Still, there’s hope for betterment as the
change was recently noticed and we’re sure that OpenAI will be racing
against the clock to make things right again.
Thankfully, the
search feature in OpenAI is only for those paying extra or premium
subscribers. But it’s all thanks to tests like these that the reality
comes forward before the makers even notice.
Nothing can be
worse than a person asking a simple query and getting a reply but the
response getting injected with malicious codes or inaccurate details is
never appreciated. What do you think?
