No company that feeds on public data is exempt from governance. Regulators have begun formal investigations into OpenAI, Microsoft and Anthropic, with a core question emerging: who should hold the keys to artificial intelligence?
It is increasingly apparent that even the world’s most respected AI organisations cannot withhold their training data and expect trust by default. Meanwhile, decentralised and open AI ecosystems are publishing their weights and training methods in plain sight.
These moves will – and must – set the bar for how AI data is governed going forward. Transparency will become synonymous with practical safety: not a bold choice, but an expectation.
Every revolution starts the same way: a new ruling class, equal parts fear and excitement, and a handful of pioneers whom everyone looks to for the way forward – the deciders of who thrives and who becomes obsolete. The AI revolution is no different. Giants like Google, Microsoft and OpenAI are constructing the foundations of machine intelligence as we speak: the inner workings of a world brain on track to make human labour, decision making and even creativity increasingly redundant.
This kind of breakthrough comes at a cost. The human skills we’ve spent years honing – the things that have propped us up financially and kept us economically relevant – are being swapped out for systems designed to replicate our output in seconds.
Relevance, rather than skill itself, is now the currency – but there is an alternative: a world where, instead of infrastructure belonging solely to monopolies, users build and own it collectively, where code, models and networks are open and the new order is truly democratised. Only then can new jobs emerge from the scrap heap.
Billions of people have, knowingly or not, opted in to train the models that are already shaping the future. Somewhere along the way, we became the architects of our own replacement. The only way to rectify this – and to ensure the technology belongs to those who helped build it – is to have it all out in the open. This isn’t just a movement; it’s equity in the form of cultural preservation.
The call for accountability is being spearheaded by current and former OpenAI employees, who warn that the company operates without sufficient oversight while silencing those who raise concerns about irresponsible practices. The risks they cite include the entrenchment of existing inequalities, manipulation and misinformation. Without government oversight, the burden of holding these organisations to account falls to those on the inside.
The mission is backed, in a show of solidarity, by employees at rival AI companies and by award-winning AI researchers and experts. One former OpenAI employee has accused the company of placating the public with statements about building safe AI rather than actually enforcing safety. With swelling support from ex-employees, researchers and regulators, standards in AI safety may finally be starting to rise with the stakes.
With inside voices growing louder, governments are being forced to listen. Formal investigations into these companies mark a shift away from treating AI transparency as a philosophical preference and towards recognising it as a practical requirement. If AI is powerful enough to dismantle and restructure critical systems, its users must be able to inspect it – not simply take it at face value.
Previous revolutions left most people on the wrong side of history. This one is different, because there is a visible choice: own AI, or be owned by it. The billions whose data, ideas and culture trained these systems deserve a fair share of what they helped create, not just the scraps left behind.


