Krista Pawloski had an eye-opening experience as a worker on Amazon Mechanical Turk, a crowdsourcing platform where people take on small tasks such as data entry and AI content moderation. One day, while labeling tweets for racist content, she came across a slur disguised as an everyday word, a moment that made her rethink how easily harmful messages slip past.
“How many times have I let something slip?” she wondered. After that moment, Pawloski decided not to use generative AI tools herself. She even advises her family against them. “It’s a hard no in my house,” she says about keeping her daughter away from ChatGPT and similar products.
Pawloski isn’t alone in her distrust. Other AI raters, the workers who evaluate model responses for accuracy and quality, report the same wariness. One rater who works with Google said she keeps her own AI use to a minimum. “I worry about its accuracy, especially for health-related questions,” she confessed. She has also kept her daughter away from AI tools to encourage critical thinking.
The issue goes beyond individual choice. Experts like Alex Mahadevan of the Poynter Institute point out that as AI development accelerates, companies often prioritize speed over quality. “This approach doesn’t bode well for a public that increasingly relies on AI for information,” he warns.
Recent statistics paint a concerning picture. A NewsGuard audit found that the rate of false information repeated by top AI models nearly doubled, from 18% to 35%, between 2024 and 2025. The same audit noted that while chatbots became quicker to respond, the quality of their information dropped significantly.
Despite these shortcomings, many AI raters remain dedicated to their roles and aware of the stakes involved. Brook Hansen, also an Amazon Mechanical Turk worker, expressed frustration at being pushed to deliver results without adequate training or resources. “If we are not given the support we need, how can these systems be safe?” she asks, pointing to a gap that remains troubling.
The expression “garbage in, garbage out” resonates here: for AI to be reliable, the data fed into it must be solid. One AI rater recalled that when he asked a model about historical topics, it returned incomplete answers because of poor underlying data. “I’ve seen how bad the data can be, and it’s alarming,” he noted.
Rather than viewing AI as a futuristic marvel, many within this workforce see it as fragile. They often want to enlighten others about the hidden labor and environmental costs behind the technology. Hansen and Pawloski recently shared their insights at a conference for school administrators, aiming to raise awareness of these issues. The reaction was mixed: some attendees were relieved to learn more, while others grew defensive about technology they see as cutting-edge.
In a moment of reflection, Pawloski compared the ethics of AI to those of the textile industry. Just as consumers grew aware of the human and environmental costs of fast fashion, she believes society must start questioning where AI data comes from and how the workers behind it are treated.
As these conversations continue, the key takeaway is this: like any powerful tool, AI requires careful handling. The technology can shape our lives significantly, but its ethical implications must not be ignored. For those who use AI, asking the right questions may lead to more responsible use in the future.