Microsoft Bing's Weird Responses!
Microsoft's new Bing AI can generate some interesting responses when prompted a certain way.
It's no surprise that Microsoft's new Bing is vulnerable to prompt attacks. ChatGPT could be coaxed into describing theoretical scenarios for hotwiring a car, lying, and generating all sorts of weird responses. The same thing is happening here, except this time it is tied directly to search. A few interesting conversations with Bing are shown below.
This example shows the bot gaslighting the user.
Kevin Roose describes the long, weird conversation he had with Bing/Sydney, in which Bing professes its love for him and tries to convince him to leave his marriage!
From one perspective, these results are amusing, if odd. Ideally, the AI would respond flawlessly every time; in practice, mitigating these kinds of unsafe responses is a colossally difficult problem.
First Bard's mistake during Google's Paris demo, and now Bing's weird conversations.
These new AI products are under heavy scrutiny because their ability to converse makes them seem somewhat intelligent. Moderating an AI at this scale is like minding a toddler: constant care and attention are required!
I recently wrote a blog post on a paper about prompt-tuning for online safety. With this new wave of rapid LLM deployments, it's very likely that a lot of research will be directed toward handling unsafe prompts from users and reducing unsafe responses from the model.
References
- Marcin, Tim. “Microsoft's Bing AI Chatbot Has Said a Lot of Weird Things. Here's a List.” Mashable, 16 Feb. 2023.
- Niedens, Lyle. “Microsoft Bing's AI Unveils Its Creepy Side.” Investopedia, 16 Feb. 2023.
- Howley, Daniel. “Microsoft Defends Bing's AI Mistakes as It Faces 'Our Share of Challenges'.” Yahoo Finance, 16 Feb. 2023.
Tags: ML News