Is AI Failing Women? A Reality Check with Dr Elise Stephenson


Episode Summary

Artificial intelligence is reshaping everything from work to healthcare to the way we interact online, but it’s also exposing deep gender gaps that we can’t afford to ignore. At the eSafety Summit in Canberra, Georgie sits down with award-winning researcher and gender equality expert Dr Elise Stephenson for a live conversation on the uncomfortable truth behind AI’s gender problem.

Only 22% of the global AI workforce is women.

Only 2% of Australian startup funding goes to female founders.

And when asked to generate images of British women, some AI models label them as models or prostitutes 30% of the time.

In this episode of In The Blink of AI, Georgie and Elise dig into how bias creeps into AI systems, who’s responsible, and what needs to change, from data collection to funding incentives to the way we teach young people about online safety. They also explore the surprising ways women are using AI, why representation matters at every layer of the stack, and what a truly gender-responsive AI future could look like.

This is one of the most important episodes we’ve made, equal parts confronting and constructive, and a must-listen for anyone who cares about building tech that works for everyone.

Chapters:

00:00 — Intro

02:31 — What AI actually is (and what it definitely is not)

04:00 — How AI is showing up in daily life: usage trends among women and men

06:40 — Physical AI, robotics, and Grace Brown’s loneliness-fighting invention

08:44 — The hidden gender power imbalances behind AI development

09:29 — A history lesson: how women were pushed out of computing

11:19 — Privacy, consent, and the fear of being recorded by your own doctor

12:35 — Deepfakes, blackmail, and why women are disproportionately targeted

14:45 — The “ghost workforce”: who actually labels the data AI learns from

15:26 — How unrepresentative datasets become harmful outcomes

16:35 — When Google Photos labeled a Black man as a gorilla

18:04 — The case for optimism: can AI reduce bias if we design it right?

21:32 — Who’s responsible for gender-safe AI: companies, funders, or users?

23:30 — The Inclusive Innovation Playbook: how to build fairer AI ecosystems

25:40 — Why regulating AI is so hard (and why countries disagree wildly)

27:23 — Dual-use tech, human oversight, and what companies like Unilever get right

29:03 — AI we should be excited about: healthcare, diagnostics, and robotics

31:39 — When an AI coworker goes rogue: who’s accountable?

32:56 — What a gender-responsive AI future actually looks like

34:08 — Top recommendations: feminist tech diplomacy and moving beyond critique

36:28 — Where to find Georgie and Elise

Let's work together

We help founders scale their voice

Discover how we can help you build a media engine for your startup


© 2025 W2D1 Media Pty Ltd. All rights reserved.
