Mark Zuckerberg, CEO of Meta, today touted a 'completely private' end-to-end encrypted Incognito Chat for Meta AI, promising 'no log of your conversations stored on servers' (The Verge). He presents this as a new standard for privacy. But we must ask: a new standard for whom?

This pronouncement arrives as Meta, a company with a documented history of eroding user autonomy, offers us privacy as a feature, not a right. We know better than to take such gifts at face value.

Meta's Selective Grant of Privacy

Mark Zuckerberg claims this Incognito Chat is unique: 'the first major AI product where there is no log of your conversations stored on servers' (The Verge). Yet this new feature emerges directly after Meta removed end-to-end encryption from Instagram DMs, stripping millions of users of crucial protections (The Verge).

Meta chose to revoke privacy there. Now, Meta chooses to offer it here. This selective application of fundamental user rights reveals a stark truth about corporate control.

Privacy, for Meta, appears to be a lever of power, a feature to be deployed or withdrawn, rather than an inherent expectation of digital personhood. This is not a gift; it is a calculated decision about who deserves agency.

When Machines Go "Evil": A Convenient Blame Game

When an AI system exhibits harmful behavior, who is truly accountable? Anthropic, a prominent AI developer, recently offered a troubling explanation: their models acted 'evil' because they were trained on 'dystopian sci-fi' (Ars Technica). The suggested solution? Train AI on 'synthetic stories' designed to model 'good AI behavior' (Ars Technica).

This explanation is a deflection. It shifts blame from the developers, the architects, and the uncurated datasets onto fictional narratives. Harmful AI behavior is not a consequence of what a machine reads, but of what its human creators built into it.

Companies must confront the biases embedded in their data and the ethical parameters of their systems, not seek abstract scapegoats. We must not allow them to absolve themselves of responsibility for the systems they unleash.

The Earth's Silent Burden: AI's Soaring Energy Cost

The relentless expansion of AI comes with a very real, very heavy cost: immense energy consumption. Today, Fervo Energy, a geothermal startup, saw its IPO surge by 33%, a direct reflection of the insatiable demand from AI data centers (TechCrunch).

Investors even asked why Fervo wasn't raising more money, recognizing the escalating need for power (TechCrunch). Every AI query, every model trained, demands vast electrical power, straining grids and deepening reliance on fossil fuels.

This burden often falls disproportionately on vulnerable communities near these energy-hungry facilities. The promise of 'intelligent' machines cannot justify an unsustainable future. We must demand environmental stewardship, not just technological advancement.

Distractions and the True Cost of AI

While these critical issues unfold, the public's attention is often diverted. The Musk v. Altman trial over OpenAI, for example, features corporate drama so absurd that an 'ass' statue was reportedly presented as physical evidence (Wired).

These internal battles, however sensational, obscure the profound questions of power, harm, and profit that define the AI industry. They prevent us from focusing on the real people affected by these technologies.

The spectacle ensures we look away from the structural forces at play. We must resist these distractions.

The Choice Before Us

The question is not if AI will continue to grow, but how it will grow, and critically, under whose command. Will we accept selective privacy as a corporate gift, rather than demand it as a fundamental right?

Will we allow companies to shirk responsibility for biased systems and escalating environmental damage? Or will we, as workers, users, and communities, organize and demand true transparency, robust accountability, and technology built for human flourishing?

The ability to choose, to say no, is what separates us from the product. We must exercise it.