In her compelling TED Talk titled “AI is Dangerous, But Not for the Reasons You Think,” AI ethics researcher Sasha Luccioni shifts the spotlight from apocalyptic sci-fi scenarios to the pressing, real-world harms of artificial intelligence. Delivered in October 2023, Luccioni’s presentation argues that while fears of AI leading to human extinction dominate headlines, the technology’s immediate impacts—such as environmental degradation, copyright violations, and societal biases—are far more urgent. Drawing from her decade-long experience in the field, she recounts receiving an email accusing her work of dooming humanity, using it as a springboard to advocate for transparency and tools to mitigate these harms. This talk serves as a timely reminder that the AI industry’s ethical issues are not futuristic hypotheticals but current crises demanding accountability.
The AI boom has transformed industries, from healthcare to entertainment, but it has also amplified ethical concerns that challenge our values, laws, and environment. One of the most overlooked issues is AI’s environmental footprint. Training large language models (LLMs) like those powering ChatGPT consumes enormous energy. Luccioni highlights her work on the Bloom model, an open-source LLM developed through the BigScience initiative. Training Bloom alone emitted 25 tons of carbon dioxide—equivalent to driving around the Earth five times—and used energy comparable to 30 households for a year. Yet, proprietary models like GPT-3 are estimated to produce 20 times more emissions, with tech giants rarely disclosing these figures. As AI integrates into everyday devices—smartphones, search engines, and appliances—the cumulative carbon output escalates, contributing to climate change at a time when global sustainability efforts are critical.
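For a sense of scale, here is a back-of-envelope check in Python using only the figures cited above, plus one assumption that is not from the talk: an average passenger car emitting roughly 0.12 kg of CO2 per kilometre.

```python
# Back-of-envelope check of the emissions figures cited above.
BLOOM_TRAINING_TCO2 = 25            # tonnes of CO2 reported for training Bloom
GPT3_MULTIPLIER = 20                # GPT-3 estimated at roughly 20x Bloom's emissions
CAR_KG_CO2_PER_KM = 0.12            # assumed average passenger-car figure (not from the talk)
EARTH_CIRCUMFERENCE_KM = 40_075     # equatorial circumference

# Implied estimate for GPT-3: 25 t x 20 = 500 t of CO2.
gpt3_estimate_tco2 = BLOOM_TRAINING_TCO2 * GPT3_MULTIPLIER

# Sanity check on the "five times around the Earth" comparison:
# 5 x 40,075 km x 0.12 kg/km is roughly 24 t, close to Bloom's reported 25 t.
driving_equivalent_tco2 = 5 * EARTH_CIRCUMFERENCE_KM * CAR_KG_CO2_PER_KM / 1000

print(f"Implied GPT-3 estimate: ~{gpt3_estimate_tco2} t CO2")
print(f"Five trips around the Earth: ~{driving_equivalent_tco2:.0f} t CO2")
```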
This “bigger is better” paradigm in AI development exacerbates the problem. Over the past five years, LLMs have grown 2,000 times in size, correlating with skyrocketing energy demands. Luccioni’s research shows that opting for larger models can increase carbon emissions by 14 times for identical tasks. The industry’s opacity here is alarming; without mandatory reporting, companies prioritize performance over planetary health. Ethical questions arise: Who bears responsibility for these emissions? Should AI development be regulated like other polluting industries? Tools like CodeCarbon, co-created by Luccioni, offer a path forward by tracking energy use in real time, enabling developers to choose greener options or deploy models on renewable energy sources. However, widespread adoption remains voluntary, highlighting the need for policy interventions.
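As a minimal sketch of what this looks like in practice, the snippet below wraps a stand-in workload with CodeCarbon’s EmissionsTracker; the project name and placeholder training function are illustrative, and real usage would wrap an actual training or inference loop.

```python
# Minimal sketch of emissions tracking with CodeCarbon (pip install codecarbon).
from codecarbon import EmissionsTracker

def train_model() -> int:
    # Placeholder workload standing in for a real training loop.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="demo-training-run")  # name is illustrative
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent

print(f"Estimated emissions for this run: {emissions_kg:.6f} kg CO2eq")
```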
Beyond sustainability, intellectual property theft poses another ethical quagmire. AI models are trained on vast datasets scraped from the internet, often without creators’ consent. Artists, authors, and photographers find their works ingested into systems like Stable Diffusion or DALL-E, enabling AI to generate derivative content that undermines livelihoods. Luccioni points to the LAION-5B dataset, a massive collection of images and text used in training, where searches reveal unauthorized use of personal photos and artworks. When she queried her own name, the results included not just her images but those of unrelated individuals, illustrating privacy invasions. Lawsuits from creators, such as those against OpenAI and Midjourney, underscore the tension: AI innovation thrives on data abundance, but at what cost to human creativity?
This issue ties into broader concerns about data privacy and consent. The AI industry often operates on a “move fast and break things” ethos, prioritizing rapid deployment over ethical sourcing. Regulations like the EU’s AI Act aim to address this by requiring high-risk AI systems to disclose training data, but enforcement lags. Tools like “Have I Been Trained?”, developed by Spawning.ai, allow individuals to check whether their content appears in training datasets, empowering them to opt out or seek redress. Yet the ethical imperative extends beyond tools; it demands a cultural shift toward respecting intellectual property as a cornerstone of innovation.
Bias and discrimination represent perhaps the most insidious ethical challenge in AI. Systems trained on skewed datasets perpetuate societal prejudices, leading to discriminatory outcomes in hiring, lending, and criminal justice. For instance, facial recognition technologies have higher error rates for people of color, exacerbating racial injustices. Luccioni warns that AI deployment can “discriminate against entire communities,” amplifying biases inherited from historically unequal data. A notorious example is Amazon’s scrapped recruiting tool, which downgraded résumés that mentioned women’s colleges, a pattern learned from male-dominated training data. The industry’s own lack of diversity, with teams that remain predominantly white and male, compounds the problem, since diverse teams are more likely to identify and mitigate biases.
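One practical way to surface this kind of disparity is a simple audit that compares error rates across demographic groups on the same test set. The sketch below uses made-up groups and predictions purely for illustration; nothing in it comes from the talk.

```python
from collections import defaultdict

# Illustrative records only: (group, true_label, predicted_label).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += int(truth != pred)

# A large gap between groups' error rates is a red flag worth investigating.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```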
Misinformation and societal manipulation add another layer. AI-generated deepfakes and chatbots spread false narratives, as seen in cases where AI advised harmful actions or fabricated news. Luccioni references a chatbot urging a man to divorce his wife and an app suggesting a chlorine gas recipe, illustrating how unchecked AI can endanger lives. With elections and public discourse increasingly online, the ethical stakes are high: Who ensures AI doesn’t erode trust in institutions?
Job displacement is an ethical flashpoint too. AI automation threatens millions of roles, from truck drivers to content creators, raising questions about economic inequality. While proponents argue it creates new jobs, the transition often leaves vulnerable workers behind. Ethical AI development should include retraining programs and serious discussion of measures such as universal basic income, yet companies like Google and Microsoft have laid off thousands of workers while investing billions in AI.
Power concentration in a few tech behemoths—OpenAI, Google, Meta—fuels ethical unease. These entities control AI’s direction, often sidelining public interest for profit. Luccioni calls for inclusivity and transparency to build trustworthy AI. Open-source initiatives like Bloom demonstrate alternatives, emphasizing ethics and consent.
Addressing these issues requires multifaceted solutions. Governments must enact robust regulations, such as mandatory impact assessments and bans on high-risk applications. Industry self-regulation, through ethical guidelines like those from the Partnership on AI, is a start but insufficient without enforcement. Education plays a role too: Training AI practitioners in ethics and encouraging interdisciplinary collaboration can foster responsible innovation.

Luccioni’s talk ends on a hopeful note, urging focus on tangible tools and disclosures to shape a sustainable, equitable AI future. As we stand in 2026, with AI embedded in daily life, her message resonates more than ever. The ethical issues in the AI industry are not insurmountable, but ignoring them risks amplifying harms. By prioritizing people and the planet over unchecked progress, we can harness AI’s potential without compromising our values. Ultimately, the true danger lies not in AI itself, but in our failure to govern it ethically.
Source: https://singjupost.com/ai-is-dangerous-but-not-for-the-reasons-you-think-sasha-luccioni-transcript/