Women Love Tech

I Wrote a Book Praising ChatGPT. Then I Told My Company to Stop Using It

by Robyn Foyster
5 March 2026

This op-ed from Aamir Qutub arrives at a pivotal moment for the tech industry. At Women Love Tech, we have always championed the transformative power of AI, but we also believe that innovation without a conscience is a dangerous path.

As we push for more women in STEM and greater diversity in leadership, we must also lead the conversation on accountability. When ethical guardrails are dismantled, it is often the most vulnerable communities who bear the brunt of the consequences.

Aamir was once one of OpenAI’s most vocal supporters, but recent shifts in the company’s stance on military involvement and data privacy have led him to draw a line in the sand. Here, he explains why he is pulling the plug on OpenAI, and why the #QuitGPT movement is not merely a hashtag: it’s a vote for the kind of future we want to build.

Not long ago, I was one of OpenAI’s biggest unpaid evangelists. I built an AI agent on their platform. I wrote a book, The CEO Who Mocked AI (Until It Made Him Millions), featuring ChatGPT prominently throughout. I told every founder I met that if they weren’t using it, they were already behind.

Today, I’m pulling the plug. Enterprise Monkey is done with OpenAI. And honestly? I should have done this sooner.

Why This Matters More Than You Think

Large language models are extraordinarily powerful. That power is precisely why the companies building them have put guardrails in place – ethical boundaries that prevent AI from being weaponised against the people it’s supposed to serve. Anthropic, the company behind Claude AI, drew two clear red lines from day one: no mass surveillance of citizens, and no fully autonomous weapons.

In February 2026, the Pentagon demanded Anthropic remove those guardrails entirely, granting unrestricted military access to Claude for “all lawful purposes.” Anthropic’s CEO Dario Amodei refused. His exact words: “We cannot in good conscience accede.”

Within 24 hours, the Trump administration blacklisted Anthropic. This is the first time an American company has been designated a “Supply Chain Risk to National Security.” That label is typically reserved for Chinese tech firms. The same day, OpenAI signed a deal to supply AI to classified Pentagon networks.

The Dangers Are Not Hypothetical

Here’s what Anthropic was actually objecting to. Governments already purchase vast amounts of personal data from private companies – your location history, browsing habits, financial records, social connections. Before AI, that data sat in disconnected silos. Now, a large language model can stitch it all together into a comprehensive profile of any individual. Where you go, who you talk to, what you believe, what you’re vulnerable to. As Amodei put it: “Mass surveillance is a risk because things may become possible with AI that weren’t possible before, and the technology’s potential is getting ahead of the law.”

In the wrong hands, that capability doesn’t just invade privacy; it enables tracking, blackmail, and coercion at a scale we’ve never seen. And as AI agents become more autonomous, these aren’t just tools a person misuses. They become systems that can act on their own.

The second objection was autonomous weapons – AI systems that select targets and carry out strikes without human involvement. Amodei’s concern was blunt: “We don’t want to sell something that could get our own people killed or that could get innocent people killed.” The technology isn’t reliable enough, accountability is unclear, and once that Pandora’s box opens, there’s no closing it.

AI ethics: Not Every AI Company Draws a Line

This is where the contrast becomes stark. Not all AI companies share this sense of responsibility. Elon Musk’s Grok AI offers “Sexy Mode” for 18+ voice interactions and “Spicy Mode” for generating provocative images with minimal guardrails. Researchers have flagged its potential for creating deepfakes and spreading disinformation. When AI is built without ethical boundaries, it doesn’t just allow harmful content; it actively facilitates it.

And OpenAI? The company was founded in 2015 as a non-profit with an explicit mission to develop AI safely for the benefit of humanity. Since then, it has quietly removed its ban on military use, disbanded its superalignment safety team after both of its leaders resigned (one stating that “safety has taken a backseat to shiny products”), deleted the word “safely” from its mission statement, and introduced advertising into ChatGPT. Every guardrail that once defined OpenAI has been systematically dismantled.

Why Women in Tech Should Care

When AI surveillance tools are built without ethical constraints, the people most vulnerable to their misuse aren’t the ones making the decisions. Women, minorities, activists, journalists – the same groups that have always borne the brunt of unchecked power. AI that can profile, track, and coerce individuals isn’t an abstract policy debate. It’s a safety issue.

Women in tech have always led the fight for accountability – from pay equity to ethical product design. AI ethics is the next frontier of that same fight. And right now, the most powerful thing any of us can do is choose carefully who we trust with our data, our ideas, and our businesses.

Every Dollar Is a Vote

My AI agency now runs entirely on Claude. Our AI agent handles email, CRM, content, project management, and reporting – all on Anthropic’s infrastructure. The migration wasn’t painless. But I sleep better knowing I’m not funding a company whose values I can no longer trust.

My fear isn’t just philosophical. Leaders I know have started to share their lives, their organisations’ data, and their clients’ information through OpenAI’s API. OpenAI’s privacy policy currently says it won’t use API data for training. But what happens when a government pressures the company to hand it over? Given how quickly OpenAI has abandoned every other principle it was founded on, I’m not willing to bet my business on that promise holding.

The #QuitGPT movement now has 700,000 people making the same calculation. That’s not a hashtag; it’s a market signal. Investors notice. Boards notice. Product roadmaps shift.

If you make technology decisions at work, at home, anywhere – ask the question: who are you funding, what are they building, and are you okay with it? Because when it comes to AI, “I didn’t know” is no longer an excuse.

Your move.


Aamir Qutub is the author of The CEO Who Mocked AI (Until It Made Him Millions), host of The Dumb Monkey Show podcast, and CEO of Enterprise Monkey, a Melbourne-based AI agency helping businesses adopt AI responsibly.

🔗 enterprisemonkey.com.au/ai-agency-melbourne | enterprisemonkey.com.au | linkedin.com/in/aamirqutub

Tags: Anthropic, Dario Amodei, ChatGPT, OpenAI
Robyn Foyster

A multi award-winning journalist and editor and experienced executive, Robyn Foyster has successfully led multiple companies including her own media and tech businesses. She is the editor and owner of Women Love Tech, Women Love Health, and Women Love Travel plus The Carousel and Game Changers. A passionate advocate for diversity, with a strong track record of supporting and mentoring young women, Robyn is a 2025 Winner of the Samsung IT Journalism Awards. She is also a 2023 Women Leading Tech Champion of Change finalist, 2024 finalist for the Samsung Lizzies IT Awards and 2024 Small Business Awards finalist. A regular speaker on TV, radio and podcasts, Robyn spoke on two panels for SXSW Sydney in 2023 and Intel's 2024 Sales Conference in Vietnam and AI Summit in Australia. She has been a judge for the Telstra Business Awards for 8 years. Voted one of B&T's 30 Most Powerful Women In Media, Robyn was Publisher and Editor of Australia's three biggest flagship magazines - The Weekly, Woman's Day and New Idea and a Seven Network Executive.


Foyster Media Pty Ltd Copyright 2026
