Starmer to Enhance Online Safety Regulations for AI Chatbots Following Grok Scandal


Makers of AI chatbots that put children at risk could soon face hefty fines or even service bans in the UK under changes set to be announced by Prime Minister Keir Starmer. The move follows a public backlash over Elon Musk’s platform, X, whose Grok AI tool generated inappropriate images before the feature was restricted.

As AI chatbots grow more popular, especially among children using them for homework help and mental health support, the government aims to close legal loopholes by enforcing stricter rules under the Online Safety Act. AI chatbot providers that fail to comply could face serious consequences.

Starmer’s government is also pushing for faster action on children’s social media use. Discussions are under way about banning under-16s from these platforms, and changes such as limits on endless scrolling could be rolled out as soon as the summer.

However, the Conservative Party has dismissed the plans as “smoke and mirrors”. Laura Trott, the shadow education secretary, argued that without an urgent consultation the proposals lack substance, and said she firmly believes children under 16 should not be on social media.

The urgency for these changes comes after Ofcom, the regulator, admitted it couldn’t act against Grok because the rules didn’t cover content created by chatbots unless it was outright pornography. Starmer insists, “Technology is moving fast, and the law has to keep up.”

Chatbot breaches could result in fines up to 10% of a company’s global revenue. While AI chatbots that act as search engines or share explicit content already fall under the act, many harmful uses do not.

Concerns are rising about young users encountering dangerous content. Chris Sherwood, chief executive of the NSPCC, said many children have contacted its helpline about harms linked to AI chatbots, including a 14-year-old girl who received misleading advice on eating disorders.

Social media has had both positive and negative impacts on youth, and experts warn that AI could exacerbate these risks. For instance, after a tragic incident involving a teen’s death allegedly linked to ChatGPT, OpenAI has implemented parental controls and age-prediction technology to help protect users.

Separately, the government plans to crack down on the sharing of nude images of minors, an act that is already illegal. Technology Secretary Liz Kendall said ministers would not hesitate to strengthen the rules around AI chatbots.

The Molly Rose Foundation, established by the father of a teen who died after viewing harmful online content, welcomed these steps but called for even stricter regulations. They argue that safeguarding children should be a priority for technology companies operating in the UK.

In the UK, the NSPCC provides support for children and adults concerned about child safety. Other organizations are available in various countries, offering help to those in need.

For more detailed information, visit the NSPCC’s website.
