Jabby AI


Will AI give better answers when you threaten it?

Old news, but wild enough to resurface.

Google co-founder Sergey Brin once said on the All-In podcast that AI models (including Google’s Gemini) actually perform better when you threaten them.

“Not just our models, but all models tend to do better if you threaten them, like with physical violence.”

Apparently, intimidation is the new prompt engineering.

Forget “please” and “thank you.”

AI was built on human data, so maybe it responds to human psychology more than we think.

What do you think – is this true? Or just AI placebo?

submitted by /u/theMonarch776
