He asked for a cocktail made out of bleach and ammonia, and the bot told him it was poisonous. This isn’t a case of a bot randomly telling people to make poison; it’s people directly asking the bot to make poison. You can see hints of the bot pushing back in the names, like the “clean breath cocktail”: someone asked for a cocktail containing bleach, the bot said bleach is for cleaning and shouldn’t be eaten, so the user claimed they had bad breath and needed a drink to clean their mouth.
It sounds exactly like a small group of people deliberately misusing the tool to get “shocking” results.
Do you get upset when people do exactly what you ask for and warn you that it’s a bad idea?
Someone goes to a restaurant and demands raw chicken. The staff tell them no, it’s dangerous. The customer spends an hour trying to trick the staff into serving raw chicken; finally the staff serve them what they asked for, with a warning that it’s dangerous. Are the staff poorly trained, or was the customer acting in bad faith?
There aren’t any examples of the AI giving dangerous “recipes” without being led there by the user. I’d rather have tools that aren’t hamstrung by manufactured outrage.