Over the years, parents have taught kids to be polite to smart assistants like Amazon’s Alexa and Apple’s Siri, encouraging phrases like “please” and “thank you.” But a recent study suggests that when it comes to AI assistants like OpenAI’s ChatGPT, being rude might actually yield better results.
Researchers at the University of Pennsylvania found something surprising: ruder prompts can lead to more accurate responses from ChatGPT. They tested this by writing 50 questions and rewriting each one in a range of tones, from very polite to very rude. The ruder prompts consistently outperformed the polite ones, reaching 84.8% accuracy compared with just 75.8% for polite phrasings.
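The paper's evaluation harness isn't reproduced in this article, but the basic setup is straightforward to sketch. The snippet below is a hypothetical illustration, not the authors' code: it prefixes the same multiple-choice question with different tone framings, queries a model through the OpenAI Python client, and tallies accuracy per tone. The model name, tone wordings, and sample question are all assumptions for demonstration.

```python
# Hypothetical sketch of a tone-vs-accuracy experiment (not the study's actual code).
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative tone prefixes; the study's exact wordings are not reproduced here.
TONES = {
    "very_polite": "Would you kindly help me with this question, please?",
    "neutral": "Answer the following question.",
    "very_rude": "Figure this out yourself, it isn't hard.",
}

# Toy multiple-choice item; the study used 50 questions across several subjects.
QUESTIONS = [
    {"text": "What is 12 * 12?\nA) 124\nB) 144\nC) 154\nD) 164", "answer": "B"},
]

def ask(tone_prefix: str, question: str) -> str:
    """Send one toned prompt and return the model's single-letter answer."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper tested a GPT-4-class system
        messages=[{
            "role": "user",
            "content": f"{tone_prefix}\n\n{question}\n\nReply with only the letter of the correct option.",
        }],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()[:1].upper()

for tone, prefix in TONES.items():
    correct = sum(ask(prefix, q["text"]) == q["answer"] for q in QUESTIONS)
    print(f"{tone}: {correct}/{len(QUESTIONS)} correct")
```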
This contradicts previous studies. For instance, research from the RIKEN Center and Waseda University found that impolite prompts often degraded performance, while excessive politeness offered no clear benefit either. Google DeepMind researchers, meanwhile, have shown that encouraging, supportive prompts can improve performance, suggesting that AI models pick up on social cues much as people do.
These findings point to a broader problem: small changes in wording can significantly alter an AI's responses, which undermines reliability. That unpredictability is frustrating when the same prompt produces different answers from one run to the next.
Akhil Kumar, a co-author of the study, noted that although people have long wanted conversational interfaces with machines, free-form dialogue has drawbacks. He emphasized the value of structured application programming interfaces (APIs), which constrain how requests are phrased, over casual conversation that can introduce ambiguity.
Despite the intriguing results, Kumar and his colleague Om Dobariya advise against being intentionally rude to AI. They argue that using insulting language can negatively impact user experience and promote harmful communication norms.
In a world where manners still matter, striking a balance between effective interaction with AI and basic courtesy seems essential. A sharp-tongued prompt might squeeze out a better answer today, but treating these systems with respect may matter more for how we interact with them in the long run.