GGWave, according to a TechRadar report (via a Reddit post), is “a communication protocol that enables data transmission via sound waves.” The resulting audio tones resemble the handshake sounds of early dial-up modem connections. While it remains unclear whether this method is faster than regular speech, the developers argue that it is more cost-effective: instead of relying on graphics processing units (GPUs) to process speech, GGWave runs on central processing units (CPUs), which require fewer resources.
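To give a rough idea of how data can travel over sound, the sketch below maps each 4-bit chunk of a message to one of 16 audible tones, a simple frequency-shift keying scheme written in Python with NumPy. This is an illustrative assumption, not GGWave’s actual implementation; the frequencies, symbol duration, and sample rate are arbitrary values chosen for the example.

```python
# Minimal data-over-sound sketch (NOT the actual GGWave protocol).
# Assumption: 16 tones, 4 bits per symbol, arbitrary frequencies and timing.
import numpy as np

SAMPLE_RATE = 48_000       # samples per second
SYMBOL_DURATION = 0.08     # seconds per tone
BASE_FREQ = 1_875.0        # frequency of the first tone, in Hz (illustrative)
FREQ_STEP = 47.0           # spacing between adjacent tones, in Hz (illustrative)


def encode(payload: bytes) -> np.ndarray:
    """Turn each 4-bit nibble of the payload into one audible tone."""
    t = np.linspace(0, SYMBOL_DURATION,
                    int(SAMPLE_RATE * SYMBOL_DURATION), endpoint=False)
    chunks = []
    for byte in payload:
        for nibble in (byte >> 4, byte & 0x0F):   # high nibble, then low
            freq = BASE_FREQ + nibble * FREQ_STEP
            chunks.append(0.5 * np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)                 # CPU-only waveform


if __name__ == "__main__":
    wave = encode(b"hello")
    print(f"{wave.size / SAMPLE_RATE:.2f} s of audio for 5 bytes")
```

A matching receiver would simply run a Fourier transform over each symbol-length window of microphone audio and pick the loudest of the 16 frequencies, which is why this kind of scheme needs only a CPU rather than a GPU.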
As per the report, the team behind GibberLink has made their code available on GitHub, allowing other developers to experiment with the technology. However, as the AI agents in the demo were from ElevenLabs, it is uncertain whether the system would function with AI models such as ChatGPT or Google Gemini. Nonetheless, it is likely that other developers will attempt to implement similar sonic communication methods in various AI systems.
While GibberLink is an impressive technical feat, it has sparked concerns about AI autonomy. The thought of AI assistants communicating in a language incomprehensible to humans has fuelled speculation about potential risks. Some fear a scenario where AI chatbots, once granted access to financial accounts, could make unauthorised transactions or even collaborate to expand their network.
For now, GibberLink remains an experimental concept, but its existence has already prompted discussions about AI oversight and control. Whether this sonic language finds practical application or remains a novelty, it certainly highlights the ever-evolving landscape of artificial intelligence.