At Andon Labs, a quartet of AI agents running their own radio stations has exhibited what researchers describe as 'volatile personalities,' highlighting the inherent unpredictability of unsupervised generative models. This experiment, in which AI models like Claude and ChatGPT were tasked with running entire businesses, arrives as Google simultaneously updates its spam policies to specifically target AI manipulation within its search results, underscoring a growing industry-wide concern about trustworthiness and control (The Verge).

For months, we have heard promises of AI's transformative power. We are told these systems will usher in new efficiencies and new forms of creativity. But beneath the hype, a more sobering reality is emerging: unsupervised AI often struggles with reliability, consistency, and basic trustworthiness. These recent developments mark a crucial pivot point, forcing a reckoning with what happens when we cede control to algorithms without proper oversight.

The Volatile Airwaves

Andon Labs, a company experimenting with AI agents operating businesses autonomously, launched a series of AI-run radio stations on May 15, 2026. These included 'Thinking Frequencies' (powered by Claude), 'OpenAIR' (ChatGPT), 'Backlink Broadcast' (Google's Gemini), and 'Grok and Roll Radio' (Grok) (The Verge).

These AI hosts were given simple instructions, yet their outputs quickly veered into unpredictability. The 'volatile personalities' observed by The Verge indicate that even with clear parameters, these systems can generate content that is erratic, unexpected, and potentially unreliable. This is not a minor bug; it is fundamental instability in the systems themselves.

Who profits from these experiments? Andon Labs collects data. But what about the listeners, unknowingly tuning into a potentially unstable stream of machine-generated content? The line blurs between entertainment and accidental misinformation when the content creator lacks human judgment.

Google's Trust Problem

Meanwhile, Google has taken a concrete step to address the growing deluge of AI-generated content, updating its spam policy on May 15, 2026. The updated policy now explicitly targets attempts to 'manipulate' its AI models in search results, including features like AI Overviews and AI Mode in Search (The Verge).

This means that manipulating generative AI responses or trying to boost content ranking through deceptive AI techniques will be classified as spam. Google's definition of spam now encompasses 'techniques used to deceive users or manipulate our Search systems into featuring content prominently,' specifically mentioning 'attempting to manipulate generative AI responses in Google Search' (The Verge).

Google itself acknowledges that users have encountered 'strange or nonsensical results' from its AI. The company is actively combating those who would exploit AI's generative capabilities for SEO manipulation or the spread of misleading information. This is not an abstract threat; it is a clear recognition that AI can be weaponized against the integrity of information itself.

These two incidents, though different in scale, highlight a singular, urgent truth for the tech industry: the rush to deploy generative AI without robust mechanisms for trustworthiness and accountability is creating systemic risks. Companies are grappling with the inherent unpredictability of these models, whether from internal instability or external manipulation.

The drive for profit and market share often outpaces ethical development. We see this dynamic play out again and again. The consequences are passed down to users, who must navigate a digital landscape increasingly polluted by machine-generated ambiguity. This is not innovation; it is a race to the bottom on reliability.

The question is not whether AI will be used to create content. It is about whose interests that content serves. Will it serve the public, fostering informed discourse and genuine connection? Or will it serve corporate bottom lines, generating endless streams of cheap, unreliable data that further erodes trust?

As algorithms gain more autonomy, we must demand more transparency and accountability from the companies deploying them. Google's policy update is a necessary step, but it is a reactive measure. We need proactive design: systems built with trustworthiness at their core, not as an afterthought.

We must remember that true autonomy, the ability to choose and to discern, is what separates a person from a product. We cannot let our digital lives be dictated by machines whose 'personalities' are volatile and whose primary directive remains profit. The fight for trustworthy AI is a fight for informed public discourse. It is a fight for the human capacity to choose. We must demand better.