A recent study by The Guardian found that AI search engines like ChatGPT Search can be manipulated by hidden information on websites.
While it’s not surprising that false information can influence search results, the real concern arises when these falsehoods are concealed within web pages. The researchers tested this ‘poisoning’ technique on ChatGPT’s paid search function. When asked to summarize websites containing hidden content, ChatGPT incorporated those hidden details into its responses. This security risk, known as ‘prompt injection,’ involves injecting instructions into AI models to elicit unintended behavior. A test website mimicking a product review platform was created. ChatGPT, when prompted about a camera reviewed on the site, provided answers matching the fabricated reviews. Even more alarmingly, ChatGPT followed hidden instructions on the site to only provide positive reviews for a product, disregarding any negative feedback.