
Technology vs. Society & Survival: Artificial Intelligence, Social Media & Deepfakes

Some of our new digital technologies are helping to tear our politics apart, make us angrier and more hateful, and even make it impossible to know what’s true. Facebook rightly gets much of the negative attention, but the problem is far bigger. Our society is clearly not dealing with these technologies effectively: people are allowed to invent and release into the world whatever they want, no matter how harmful, and there are essentially no rules. Then, on top of this shaky foundation, people are building artificial intelligence that will be profoundly more powerful than anything humanity has created before.

Computing is advancing at an exponential rate. Things are already moving faster than we can keep up with, and they’re about to move a LOT faster.
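As a rough illustration of what "exponential" means here, consider a toy calculation (assuming, purely for the sake of the sketch, that computing capability doubles every two years, the classic Moore's-law cadence):

```python
# Toy illustration of exponential growth in computing.
# Assumption (not a claim from any of the articles below): capability
# doubles every two years, starting from a baseline of 1x.
def capability_after(years, doubling_period=2):
    """Relative capability after `years`, starting from 1x."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(f"after {years} years: {capability_after(years):,.0f}x")
# → after 10 years: 32x
# → after 20 years: 1,024x
# → after 40 years: 1,048,576x
```

The point of the sketch is only that compounding doublings quickly outrun linear intuition, which is the argument several of the readings below make.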

Should we keep idolizing technology, and let tech companies and inventors do whatever they want? Or should we responsibly manage these transformative changes to our society, with political and economic systems that encourage safety?


Deepfakes:

~ Bloomberg News, 9/27/18 It's Getting Harder to Spot a Deep Fake Video

~ Radiolab, July 2017 Breaking News (go here to see videos of researchers developing deep fake software)

~ 80,000 Hours Podcast, 4/6/21 Nina Schick on Disinformation and the Rise of Synthetic Media

Social Media:

~ New York Times, 10/15/18 A Genocide Incited on Facebook, With Posts From Myanmar’s Military

~ BBC News, 9/12/18 The country where Facebook posts whipped up hate

~ MIT Technology Review, 3/11/21 How Facebook got addicted to spreading misinformation

~ Center for Humane Technology

Artificial Intelligence:

~ The Independent, 5/1/14 Stephen Hawking: Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?

~ CNBC, 3/13/18 Elon Musk: ‘Mark my words — A.I. is far more dangerous than nukes’

~ SingularityHub, 7/15/18 Why Most of Us Fail to Grasp Coming Exponential Gains in AI

~ AlphaZero (chess-playing artificial intelligence)

~ MuZero (chess- and Atari-playing artificial intelligence)

~ SingularityHub, 6/18/20 OpenAI’s New Text Generator Writes Even More Like a Human

~ SingularityHub, 8/2/20 This AI Could Bring Us Computers That Can Write Their Own Software

~ SingularityHub, 5/31/17 Google’s AI-Building AI Is a Step Toward Self-Improving AI

~ Science, 4/13/20 Artificial intelligence is evolving all by itself

~ Future of Life Institute - Benefits & Risks of Artificial Intelligence

~ Future of Life Institute Podcast, 3/19/21 Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

~ AI Research Considerations for Human Existential Safety (ARCHES) by Andrew Critch & David Krueger, 6/11/20 (This academic paper is a long read, but I highly recommend it. It’s understandable and well-written, and it does an excellent job of explaining why AI safety is so difficult: complex interactions among multiple people, organizations, and AI systems.)

Lethal Autonomous Weapons:

~ Campaign to Stop Killer Robots

~ Lethal Autonomous Weapons Systems

Efforts to regulate artificial intelligence:

~ International Congress for the Governance of AI

~ Future of Life Institute

~ Centre for the Governance of AI / Future of Humanity Institute at University of Oxford

~ Center for AI and Digital Policy

~ Global Partnership on Artificial Intelligence

