On Monday, the sheet music platform Soundslice made headlines for an unusual reason: it built a new feature after discovering that ChatGPT had been misleading users about its service. The AI was erroneously telling users that Soundslice could import ASCII tablature, a plain-text notation format popular among guitarists, which the platform had never supported.
Soundslice specializes in converting sheet music from images or PDFs into an interactive digital format that syncs with audio or video, letting musicians play along easily, slow down the music, or loop tricky sections for practice.
Adrian Holovaty, one of Soundslice’s co-founders, shared his experience in a blog post. A few months back, he noticed odd patterns in the company’s error reports. Instead of the usual sheet music uploads, users were sending screenshots of ChatGPT chats featuring ASCII tablature. This puzzled him.
“I couldn’t figure out why this was happening until I tried ChatGPT myself,” Holovaty wrote. He found that the AI was encouraging users to create Soundslice accounts in order to import ASCII tabs, presenting it as a supported feature. “ChatGPT was completely mistaken,” he explained, noting that the misinformation gave users a false impression of the service and left many of them confused.
This incident highlights a broader problem with AI models known as “hallucination”: when an AI confidently generates incorrect information, it creates challenges for companies and users alike. The phenomenon has dogged chatbots since ChatGPT’s popular release in late 2022, when many people began relying on it for factual information rather than treating it as a creative assistant.
A recent survey from Pew Research Center found that about 47% of respondents had encountered, or knew someone who had encountered, misinformation while using AI tools. Many users expressed frustration at receiving incorrect guidance that complicated their tasks.
As the technology advances, clarity and accuracy in AI-generated content are becoming increasingly important. Researchers and industry experts emphasize better training and oversight of AI systems as the way to minimize such errors.
In a rapidly changing digital landscape, Soundslice’s quick response to ChatGPT’s misinformation is a reminder of the importance of adaptability in tech. It highlights how companies can evolve based on user feedback, even if that feedback comes from unexpected sources.
For more on this topic, see Soundslice’s official blog post detailing the feature update.