This op-ed from Aamir Qutub arrives at a pivotal moment for the tech industry. At Women Love Tech, we have always championed the transformative power of AI, but we also believe that innovation without a conscience is a dangerous path.
As we push for more women in STEM and greater diversity in leadership, we must also lead the conversation on accountability. When ethical guardrails are dismantled, it is often the most vulnerable communities who bear the brunt of the consequences.
Aamir was once one of OpenAI’s most vocal supporters, but the company’s recent shift on military involvement and data privacy has led him to draw a line in the sand. Here, he explains why he is pulling the plug on OpenAI and why the #QuitGPT movement is not merely a hashtag but a vote for the kind of future we want to build.
Not long ago, I was one of OpenAI’s biggest unpaid evangelists. I built an AI agent on their platform. I wrote a book, The CEO Who Mocked AI (Until It Made Him Millions), featuring ChatGPT prominently throughout. I told every founder I met that if they weren’t using it, they were already behind.
Today, I’m pulling the plug. Enterprise Monkey is done with OpenAI. And honestly? I should have done this sooner.
Why This Matters More Than You Think
Large language models are extraordinarily powerful. That power is precisely why the companies building them have put guardrails in place – ethical boundaries that prevent AI from being weaponised against the people it’s supposed to serve. Anthropic, the company behind Claude AI, drew two clear red lines from day one: no mass surveillance of citizens, and no fully autonomous weapons.
In February 2026, the Pentagon demanded Anthropic remove those guardrails entirely, granting unrestricted military access to Claude for “all lawful purposes.” Anthropic’s CEO Dario Amodei refused. His exact words: “We cannot in good conscience accede.”
Within 24 hours, the Trump administration blacklisted Anthropic – the first time an American company has been designated a “Supply Chain Risk to National Security”, a label typically reserved for Chinese tech firms. The same day, OpenAI signed a deal to supply AI to classified Pentagon networks.
The Dangers Are Not Hypothetical
Here’s what Anthropic was actually objecting to. Governments already purchase vast amounts of personal data from private companies – your location history, browsing habits, financial records, social connections. Before AI, that data sat in disconnected silos. Now, a large language model can stitch it all together into a comprehensive profile of any individual. Where you go, who you talk to, what you believe, what you’re vulnerable to. As Amodei put it: “Mass surveillance is a risk because things may become possible with AI that weren’t possible before, and the technology’s potential is getting ahead of the law.”
In the wrong hands, that capability doesn’t just invade privacy; it enables tracking, blackmail, and coercion at a scale we’ve never seen. And as AI agents become more autonomous, these aren’t just tools a person misuses. They become systems that can act on their own.
The second objection was autonomous weapons – AI systems that select targets and carry out strikes without human involvement. Amodei’s concern was blunt: “We don’t want to sell something that could get our own people killed or that could get innocent people killed.” The technology isn’t reliable enough, accountability is unclear, and once that Pandora’s box opens, there’s no closing it.
AI Ethics: Not Every AI Company Draws a Line
This is where the contrast becomes stark. Not all AI companies share this sense of responsibility. Elon Musk’s Grok AI offers “Sexy Mode” for 18+ voice interactions and “Spicy Mode” for generating provocative images with minimal guardrails. Researchers have flagged its potential for creating deepfakes and spreading disinformation. When AI is built without ethical boundaries, it doesn’t just allow harmful content; it actively facilitates it.
And OpenAI? The company was founded in 2015 as a non-profit with an explicit mission to develop AI safely for the benefit of humanity. Since then, it has quietly removed its ban on military use, disbanded its superalignment safety team after both of its leaders resigned (one stating that “safety has taken a backseat to shiny products”), deleted the word “safely” from its mission statement, and introduced advertising into ChatGPT. Every guardrail that once defined OpenAI has been systematically dismantled.
Why Women in Tech Should Care
When AI surveillance tools are built without ethical constraints, the people most vulnerable to their misuse aren’t the ones making the decisions. Women, minorities, activists, journalists – the same groups that have always borne the brunt of unchecked power. AI that can profile, track, and coerce individuals isn’t an abstract policy debate. It’s a safety issue.
Women in tech have always led the fight for accountability – from pay equity to ethical product design. AI ethics is the next frontier of that same fight. And right now, the most powerful thing any of us can do is choose carefully who we trust with our data, our ideas, and our businesses.
Every Dollar Is a Vote
My AI agency now runs entirely on Claude. Our AI agent handles email, CRM, content, project management, and reporting – all on Anthropic’s infrastructure. The migration wasn’t painless. But I sleep better knowing I’m not funding a company whose values I can no longer trust.
My fear isn’t just philosophical. Leaders I know have started to share their lives, their organisations’ data, and their clients’ information through OpenAI’s API. OpenAI’s privacy policy currently says they won’t use API data for training. But what happens when a government pressures them to hand it over? Given how quickly they’ve abandoned every other principle they were founded on, I’m not willing to bet my business on that promise holding.
The #QuitGPT movement now has 700,000 people making the same calculation. That’s not a hashtag; it’s a market signal. Investors notice. Boards notice. Product roadmaps shift.
If you make technology decisions at work, at home, anywhere – ask the question: who are you funding, what are they building, and are you okay with it? Because when it comes to AI, “I didn’t know” is no longer an excuse.
Your move.
Aamir Qutub is the author of The CEO Who Mocked AI (Until It Made Him Millions), host of The Dumb Monkey Show podcast, and CEO of Enterprise Monkey, a Melbourne-based AI agency helping businesses adopt AI responsibly.
🔗 enterprisemonkey.com.au/ai-agency-melbourne | enterprisemonkey.com.au | linkedin.com/in/aamirqutub







