A.I. Generated Deception & Manipulation
School of Media Studies
The New School
Spring 2026

Instructor: Prof. Peter Asaro asarop AT newschool.edu
NMDS: 5352 B CRN: 15432
Time: Mondays, 8:00 - 9:50 pm
Location: 6 East 16th Street, Room 910, In-Person

Course webpage is here: http://peterasaro.org/courses/2026MDisinfo.html

Course blog is here: http://disinfo2026.wordpress.com/

Course Description

This course will examine the ways in which AI is transforming media production and communications, along with its current and potential role in strategic misinformation, deception and manipulation. The course will cover classic work and emerging research on deception and manipulation. We will spend the first part of the semester examining how AI is being used to generate media, including the creation and use of deepfakes, and the role of AI in targeted marketing and surveillance capitalism more generally. The second half of the semester will examine why mis- and disinformation are successful, in terms of both cognitive biases and the socio-technical media infrastructure, as well as the potential for applying AI to increasingly sophisticated psychological micro-targeting, and large-scale media manipulation. This will include an examination of the socio-psychological dimensions of deception, coercion and manipulation among humans, and its potential automation and amplification with AI. Throughout the semester we will return to questions of how these technologies impact or advance social justice, economic and political equality, and what it means to engage with them through ethical practice.


COURSE REQUIREMENTS & GRADING:

Class Attendance & Participation: 30%
Blog Entries & Comments: 20%
Research Project Idea: 5%
Research Project Proposal/Draft: 10%
Final Project Presentation: 15%
Final Research Project: 20%

Class Attendance and Participation: 30%

You are expected to have thoroughly and thoughtfully read the assigned texts, viewed the assigned videos, and to have prepared yourself to contribute meaningfully to the class discussions. For some people, that preparation requires taking copious notes on the assigned readings; for others, it entails supplementing the assigned readings with explanatory texts found in survey textbooks or in online sources; and for others still, it involves reading the texts, ruminating on them afterwards, then discussing those readings with classmates before the class meeting. Whatever method best suits you, I hope you will arrive at class with copies of the assigned reading, ready and willing to make yourself a valued contributor to the discussion, and eager to share your own relevant media experiences and interests. Your participation will be evaluated in terms of both quantity and quality.

As this is a seminar, regular attendance is essential. You will be permitted two excused absences (you must notify me of your inability to attend before class, via email). Any subsequent absences and any un-excused absences will adversely affect your grade.

Blog Entries & Comments: 20% (+ up to 5% extra credit)

You will be required to make weekly blog entries commenting on the readings for the week. You will be required to create an account on WordPress (if you do not already have one), and send me an email with your LoginID and the EMAIL ADDRESS used to create the account, so that you can be added as an author for the collective course blog. Everyone will be posting to a common blog page, and this will be readable by your classmates. When writing and making comments, you are expected to treat other students with the same respect and courtesy as you should in the classroom.

Discussion questions will be posted each week to help stimulate the writing process. You are also expected to read the posts of your classmates, and encouraged to comment on other people's posts each week. Posts will not be letter-graded; each will receive 2 points [on time], 1 point [late], or 0 points [not completed]. I will read them and occasionally comment on them. There will be 10 posts required through the semester, thus 20 points, constituting 20% of your grade.

Comments are strongly encouraged, and you can receive extra credit for each substantial comment (a paragraph or longer) that you make on someone else's post, up to the 5% maximum.

Blog posts will be due before the start of each class. They are time-stamped when you post them, and late posts will receive only half credit (1 point). There is no specific topic for each post, but each should express your reactions to and reflections on the readings for that week.


Research Project Idea: 5%

Research Project Idea Due: February 25
Length: 300-500 words (approx. 1 page)

Research Project Proposal/Draft: 10%

Research Project Proposal/Draft Due: April 8
Length: 500-2000 words (approx. 1-4 pages)

Final Project Presentation: 15%

Final Project Presentations: May 6
Oral Presentation, 15 minutes (PowerPoint optional), plus discussion

Final Research Project: 20%

Final Project Due: May 13
Length (media project description): 500-3000 words (approx. 1-10 pages) + Media Project
Length (research paper option): 3000-5000 words (approx. 10-18 pages)


There will be no final exam. Instead, a final research project is required. There are two options: the Research Paper Option and the Media Project Option.

The final project will be due after the last day of class and presentations. If that deadline will not work for you, you must make other arrangements at least one week in advance. We will set aside time in the last day(s) of class for presentations of final projects. These will not be graded but will offer an opportunity for feedback before you submit your final project.

Project topics can address any aspect of the topics and materials discussed in class. Projects should include materials beyond what is directly covered in class, as appropriate for your topic. In other words, they should require research. The blog will provide many ideas for projects, as will class discussion. You will be asked to submit a short description of your Project Idea early in the semester, and will receive feedback on it.

Later in the semester you will write a more formal Proposal/Draft for your project, based on feedback and further research. Project proposals should state the research question, problem, or phenomenon that will be the focus of your research. They should also state your thesis or position on the issue and outline the argument you will use to support your position. This applies to both papers and media projects. You should also indicate the sources and materials you will consult and utilize in making your argument and producing your final project. For the Media Project Option, you should state as clearly as possible what you intend to deliver for the final draft (e.g., video length, style, format, content; website; set of infographics, etc.).

Final Project Presentations will occur on the last days of class. Each should be a short 5-10 minute summary of your research paper or project, allowing 5-10 minutes for discussion. Group projects can be presented collectively.

Research Paper Option
This will take the form of a 3000-5000 word (Times New Roman, 12pt font, double spaced) term paper. You should draw upon sources from the course readings as well as beyond the course readings. You should cite your sources properly.

Media Project Option
Media Projects can take the form of film and video pieces, audio documentaries, websites, interactive media, performance pieces, infographics, a social media campaign strategy, or other ideas. In addition to the actual media product, you will need to submit your Idea, Proposal, and a Final short written piece explaining your project, its motivations, methods and what you did to realize it.

Group Project Option
Those pursuing the Media Project Option have the further option of participating in a group research project. For the students pursuing this option, the process will be much the same, with the Idea being an individual statement of what you plan to contribute to the group project, and the Proposal and Final projects being collective efforts to realize the research project. In addition, each person choosing this option must submit a 1-page self-assessment of their participation in the group, due at the same time as the Final project.

Papers and written ideas and proposals should be submitted to me in electronic form by email (WordPerfect, MS Word, PDF, HTML, and plain TXT are all fine). All assignments are due at the start of class on the day they are due. Late final papers will not be accepted, as I must turn in grades shortly thereafter.

Generative AI Policy

You are expected to do your own writing for this class. While you may use generative AI creatively in your final project, you must carefully describe its use and your own original contributions to your final project as part of your proposal and final paper. You may also use generative AI to correct and improve your grammar and use of language, but the ideas and arguments of your texts should be yours. Your weekly blog posts should be your own writing and ideas. Any and all use of generative AI must be disclosed in the assignment when you turn it in. Violation of this policy will be treated as plagiarism.

Making Up Missed Classes

If you are ill and cannot attend class in person, let me know and I will provide a Zoom link for you. If you are unable to attend the Zoom meeting for any reason, you can make up the missed class by watching the Zoom recording of the class discussion and writing an additional blog post reacting and contributing to that discussion. Make-ups should be completed promptly after a missed class. You are allowed two such missed classes without any penalty to your grade. After that, you will receive only partial participation credit for completing the make-up posts.

READINGS

All readings will be available electronically, via the web, in PDF, MS Word, HTML, or similar format. You are welcome and encouraged to buy any of the books used.

Tuesday, January 27, 9:30am-11am ET.
Democratizing AI: Conceptualizing US-India Collaboration
Stimson Center
Exploring the role of open-source AI development and deployment in the context of the U.S.-India relationship
The U.S.-India joint statement issued after Prime Minister Modi’s first visit to Washington in President Trump’s second term set forth an ambitious bilateral agenda on artificial intelligence — to collaborate on “innovations in AI models and building AI applications for solving societal challenges while addressing the protections and controls necessary to protect these technologies.” The Stimson Center’s Strategic Foresight Hub and South Asia programs join in discussion to conceptualize how open-source AI development—in the United States, India, and collaboratively—can support both countries’ efforts to build a robust domestic AI ecosystem. (On-line, free registration required).

Week 1: January 26
Course Introduction

Student Introductions

How to create a WordPress Account, and make a Blog Entry

Watch in Class (segment): NOVA, A.I. Revolution, PBS, March 27, 2024, 54 min.

Read Before Class: Fergus McIntosh, "What's a Fact, Anyway?" The New Yorker, January 11, 2025.

Week 2: February 2
CLASS MEETS ONLINE. You will receive a Google Calendar invitation with the Zoom link.
A.I. & Non-Consensual Deepfakes

Required:

Kate Conger, Dylan Freedman and Stuart A. Thompson, "Musk’s Chatbot Flooded X With Millions of Sexualized Images in Days, New Estimates Show", New York Times, January 22, 2026.

Adam Satariano, "Elon Musk’s X Faces European Inquiry Over Sexualized A.I. Images", New York Times, January 26, 2026.

Matteo Wong, "A Tipping Point in Online Child Abuse", The Atlantic, January 15, 2026.

Ross Higgins, Connor Plunkett, George Katz, Kolina Koltai and Katherine de Tolly, "Faking It: Deepfake Porn Site’s Link to Tech Companies," Bellingcat, January 28, 2025.

Watch: Sophie Compton and Reuben Hamlyn, "Another Body: My AI Nightmare," WeeWatch Documentaries, 75 minutes.

Recommended:

Listen: Charlie Warzel, "The Problem Is So Much Bigger Than Grok: The internet was built to objectify women," The Atlantic, Galaxy Brain Podcast, 36 minutes.

Tuesday, February 3, 5:30pm-7pm ET.
The Extraction Economy with Tim Wu and Julia Angwin
The Forum at Columbia University, 601 W. 125th St., New York, NY 10027
Room/Area: Foyer
How did a small set of technology platforms rise to command our attention, our data, and our economy? What has this concentration of power cost us in terms of innovation, prosperity, and democratic possibility?
In his latest book, The Age of Extraction, Columbia Law School professor and former White House official Tim Wu contends that today’s dominant firms have mastered an extractive business model that pulls value upward — from users, workers, and entire markets — while eroding political freedoms and narrowing the space for shared prosperity.
Join us for "The Extraction Economy: Platforms, Power, and the Fight for Prosperity" on Tuesday, February 3, from 5:30 to 7:00pm at The Forum at Columbia University. This dynamic in-person conversation between Professor Wu and Julia Angwin, award-winning investigative journalist and founder of Proof News, will dig into how platform power has transformed whole sectors of the economy, how emerging AI systems may accelerate inequality, and what bold legal, institutional, and civic interventions are needed to build a more democratic digital future.
This event is hosted by Columbia World Projects, with support from the John S. and James L. Knight Foundation, and co-sponsored by All Tech Is Human. The discussion will be followed by an audience Q&A. This event is free and open to the public with registration required.
(In-person, free registration required).

Week 3: February 9
What is AI?

Required:

Watch: Mustafa Suleyman, "What Is an AI Anyway?", TED Talk, April 22, 2024, 22 min.

Watch: Mirella Lapata, What is generative AI and how does it work?, The Turing Lectures, October 12, 2023, 46 min.

Watch: Meredith Whittaker, "What is AI? Part 1" AI Now, July 19, 2023, 22 min.

Watch: Lucy Suchman, "What is AI? Part 2" AI Now, July 19, 2023, 33 min.

Watch: Jon Stewart, Jon Stewart On The False Promises of AI, Daily Show, April 1, 2024, 15 min.

Recommended:

Watch: Sasha Luccioni, AI Is Dangerous, but Not for the Reasons You Think, TED Talk, November 6, 2023, 10 min.

Watch: Lilly Irani, "The Labor that Makes AI 'Magic'," AI Now, July 7, 2016, 7 min.

Wednesday, February 11, 4pm ET.
Co-Opting AI: Kids
University of Virginia
Co-Opting AI: Kids will explore how growing up in the age of AI is reshaping children’s experiences and consider questions around agency, creativity, participation, and digital rights. (On-line, free registration required).

Week 4: Presidents Day & Movie: February 16
NO CLASS MEETING. We will watch a film and discuss it in class or on the blog.

Thursday, February 19, 12pm ET.
Neurotechnologies, AI, and the Right to Mental Privacy: Can the Law Save us from Losing our Minds?
The Washington Foreign Law Society, in collaboration with the Stimson Center and the Carr-Ryan Center for Human Rights
What if AI could power devices to read our thoughts? It's no longer science fiction. Neurotechnologies, devices capable of recording, decoding, or altering brain activity, can already treat certain serious brain diseases with implantable devices and can already decode thought into text at 85 words a minute with 95 percent accuracy. Beyond medical settings, consumer neurotechnologies such as brain-training kits for meditation and sleep already provide unprecedented access to our highly sensitive and revealing neural data. These advances presage the development of technologies with the potential to surveil our minds, as recent experiments in China, Australia, and elsewhere demonstrate. Using AI to decode brain scans, extract information, and even alter our consciousness presents us with a new world of ethical challenges, from State brain surveillance of dissidents to personal augmentation of mental capacity. Can law and policy protect our neurorights to mental integrity, agency and privacy? (On-line, free registration required).

Week 5: February 23
Generative AI, Dark Info & Hallucinations

Required:

Graham Fraser (2024) "Apple urged to axe AI feature after false headline," BBC, December 19, 2024.

Watch: IBM Technology, Why Large Language Models Hallucinate, YouTube, April 20, 2023, 10 min.

Shomit Ghose, "Why Hallucinations Matter: Misinformation, Brand Safety and Cybersecurity in the Age of Generative AI," UC Berkeley Sutardja Center for Entrepreneurship & Technology, May 2, 2024.

Nicola Jones, "AI hallucinations can’t be stopped — but these techniques can limit their damage," Nature, January 21, 2025

Howard Taylor, "An Invisible Threat: How AI Hallucinations Threaten The Software Supply Chain," Forbes, February 3, 2025

Rebecca Sohn (2022) "AI Drug Discovery Systems Might Be Repurposed to Make Chemical Weapons, Researchers Warn," Scientific American, April 21, 2022.

Jonathan L. Zittrain (2024) "The Words That Stop ChatGPT in Its Tracks," The Atlantic, December 17, 2024.

Watch: KARE 11, Testing the limits of ChatGPT and discovering a dark side, YouTube, February 15, 2023, 11 min.

Ben Fritz, "Why Do AI Chatbots Have Such a Hard Time Admitting ‘I Don’t Know’?," Wall Street Journal, February 11, 2025

Recommended:

Sam Schechner, "DeepSeek Offers Bioweapon, Self-Harm Information: Testing shows the Chinese app is more likely than other AIs to give instructions to do dangerous things," Wall Street Journal, February 10, 2025

"Reinforcement Learning From Human Feedback," Wikipedia.

Watch: IBM Technology, "Reinforcement Learning from Human Feedback (RLHF) Explained," YouTube, August 7, 2024, 11 min.

Watch: IBM Technology, "Tuning Your AI Model to Reduce Hallucinations," YouTube, February 7, 2024, 9 min.

"Bias baked in: How Big Tech sets its own AI standards," Corporate Europe Observatory, January 1, 2025.

Oscar Oviedo-Trespalacios, Amy E Peden, Thomas Cole-Hunter, Arianna Costantini, Milad Haghani, J.E. Rod, Sage Kelly, Helma Torkamaan, Amina Tariq, James David Albert Newton, Timothy Gallagher, Steffen Steinert, Ashleigh J. Filtness, Genserik Reniers (2023). "The risks of using ChatGPT to obtain common safety-related information and advice," Safety Science, Volume 167, November 2023.

Free Online Course: Center for an Informed Public, "Modern-Day Oracles or Bullshit Machines? AI course," University of Washington, February 5, 2025, 18 Lessons.

Week 6: March 2
Project Ideas Due
CLASS MEETS ONLINE. You will receive a Google Calendar invitation with the Zoom link.
A.I. and Deception: Deepfakes & Fraud

Required:

Charles Bethea (2024). "The Terrifying A.I. Scam That Uses Your Loved One's Voice," New Yorker, March 7, 2024.

Tais Fernanda Blauth, Oskar Josef Gstrein, and Andrej Zwitter (2022) "Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI," IEEE Access, Volume 10, 2022, pp. 77110-77122.

Judd Legum (2025) "AI costs American renters over $3.6 billion annually, according to new report," Popular Information, January 6, 2025.

Watch: Diep Nep, "This is not Morgan Freeman - A Deepfake Singularity," Instagram, July 20, 2021, 1 min.

"Detect DeepFakes: How to counteract misinformation created by AI," MIT Media Lab, January, 2025.

Daniel Immerwahr (2023). "What the Doomsayers Get Wrong About Deepfakes," New Yorker, November 13, 2023.

Recommended:

Watch: Mhairi Aitken, "What are the risks of generative AI?" The Turing Lectures, November 9, 2023, 48 min.

Dan Milmo, "Russia targets Paris Olympics with deepfake Tom Cruise video," The Guardian, June 3, 2024.

Week 7: March 9
CLASS MEETS ONLINE. You will receive a Google Calendar invitation with the Zoom link.
Influence & Persuasion Part I

Required:

John Lanchester, "You Are the Product," London Review of Books, August 17, 2017.

Watch: Tristan Harris, "How a handful of tech companies control billions of minds every day," TED, July 28, 2017, 17 min.

Shoshana Zuboff, "The Big Other: Surveillance Capitalism and the Prospects of an Information Civilization," Journal of Information Technology, 30(1), 2015, pp. 75-89.

Donnarumma, Marco. "AI Art is Soft Propaganda for the Global North." Hyperallergic, 24 Oct. 2022.

Walsh, Dylan. "The Disinformation Machine: How Susceptible Are We to AI Propaganda?" HAI, Stanford University, 1 May 2024.

Goldstein, Josh A., et al. "How persuasive is AI-generated propaganda?" PNAS Nexus, 3, 2024, pp. 1-7.

Zuboff, Shoshana. "You Are Now Remotely Controlled." The New York Times, 24 Jan. 2020.

Woolley, Samuel. "To Overcome AI-Enabled Propaganda, Support Communities Already Fighting It." Centre for International Governance Innovation, 17 Oct. 2024.

Watch: Jeff Orlowski, The Social Dilemma, Netflix, 2020, 94 min.

Recommended:

Watch: VPRO Documentary, "Shoshana Zuboff on surveillance capitalism," YouTube, December 20, 2019, 50 min.

Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Public Affairs, 2018.

Week of Monday, March 16
SPRING BREAK: NO CLASS

Week 8: March 23
AI Deception

Required:

Heather Roff, "AI Deception: When Your Artificial Intelligence Learns to Lie: We need to understand the kinds of deception an AI agent may learn on its own before we can start proposing technological defenses," IEEE Spectrum, February 24, 2020.

Michelle Starr, "AI Has Already Become a Master of Lies And Deception, Scientists Warn," Science Alert, May 11, 2024.

Peter S. Park, Simon Goldstein, Aidan O'Gara, Michael Chen, Dan Hendrycks, "AI Deception: A Survey of Examples, Risks, and Potential Solutions," Patterns, Volume 5, Issue 5, May 10, 2024, 100988.

Steven Umbrello and Simone Natale (2024) "Reframing Deception for Human-Centered AI," International Journal of Social Robotics, 2024, 16:2223–2241.

Stefan Sarkadi, Peidong Mei, and Edmond Awad (2023) "Should My Agent Lie for Me? A Study on Attitudes of US-based Participants Towards Deceptive AI in Selected Future-of-work Scenarios," AAMAS 2023, May 29–June 2, 2023, London, United Kingdom.

Recommended:

Watch: DW Documentary, "Neuromarketing: How brands are getting your brain to buy more stuff," YouTube, June 18, 2022, 12 min.

Edward Bernays, Propaganda, Horace Liveright Inc., 1928, pp. 1-61 and 135-153.

Watch: Terry Wu, TED Talk, "Neuromarketing: The new science of consumer decisions," YouTube, June 6, 2019, 17 min.

Watch: Patrick Renvoise, Ted Talk, "Is There a Buy Button Inside the Brain," YouTube, May 20, 2013, 18 min.

Week 9: March 30
AI, Emotional & Social Attachment & Vulnerability

Required:

Anna Tong (2023) "What happens when your AI chatbot stops loving you back?" Reuters, March 21, 2023.

Tom Singleton, Tom Gerken & Liv McMahon (2023) "How a chatbot encouraged a man who wanted to kill the Queen," BBC News, October 6, 2023.

Malathi Nayak (2025) "Teen’s Suicide Turns Mother Against Google, AI Chatbot Startup," Bloomberg, March 18, 2025.

Eileen Guo (2025) "An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it," MIT Technology Review, February 6, 2025.

Turkle, Sherry (2003). Technology and Human Vulnerability: A Conversation with MIT's Sherry Turkle. Harvard Business Review, 81(9), 43-50.

Ryan Calo and Daniella DiPaola (2023) "Socio-Digital Vulnerability," WeRobot 2023, Boston University, September 29, 2023.

Watch: The Philosopher, "Who Do We Become When We Talk to Machines?: Sherry Turkle in Conversation with Audrey Borowski," YouTube, June 14, 2024, 60 min.

Recommended:

Watch: Goal 17 Media, "MIT’s Sherry Turkle: Vulnerability Is A Superpower," YouTube, November 16, 2021, 47 min.

Kevin Roose (2023). "A Conversation With Bing’s Chatbot Left Me Deeply Unsettled," New York Times Opinion, February 16, 2023.

Victor Tangermann (2023). "Bing AI Responds After Trying to Break Up Writer's Marriage," Futurism, February 16, 2023.

Listen: Marti Deliema (2023) "Financial Vulnerability and Social Isolation," Easy Prey Podcast, May 10, 2023, 32 min.

Week 10: April 6
What is Manipulation & Coercion?

Required:

Susser, Daniel, Beate Roessler, and Helen Nissenbaum (2019) "Technology, autonomy, and manipulation," Internet Policy Review, Vol. 8, No. 2, pp. 1-11.

Scott Anderson (2023) "Coercion," Stanford Encyclopedia of Philosophy, Edward N. Zalta & Uri Nodelman (eds.), (Spring 2023 Edition).

Christian Tarsney (2024) "Deception and manipulation in generative AI," Philosophical Studies, November 3, 2024, 23 pages.

Micah Carroll, Alan Chan, Henry Ashton, David Krueger (2023) "Characterizing Manipulation from AI Systems," Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, October 30, 2023, 13 pages.

Marguerite DeLiema, Yiting Li, Gary Mottola (2022) "Correlates of responding to and becoming victimized by fraud: Examining risk factors by scam type," International Journal of Consumer Studies, November 11, 2022, 18 pages.

Recommended:

Nikola Banovic, Zhuoran Yang, Aditya Ramesh, and Alice Liu (2023) "Being Trustworthy is Not Enough: How Untrustworthy Artificial Intelligence (AI) Can Deceive the End-Users and Gain Their Trust," Proceedings of the ACM on Human-Computer Interaction, 7, CSCW1, Article 27 (April, 2023), 17 pages.

Watch: Sandra Matz, "Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior," YouTube, Columbia Business School, July 8, 2024, 56 min.

Sandra Matz, Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior, Harvard Business Review Press, 2025, Chapters 8-10, pp. 155-195.

Week 11: April 13
Project Proposals/Drafts Due
Democracy & Election Manipulation

Required:

Zuboff, Shoshana (2021). "The coup we are not talking about," New York Times, January 29, 2021.

Kamya Yadav, Samantha Lai (2024). "What Does Information Integrity Mean for Democracies?" Lawfare, Friday, March 22, 2024.

Nathan Sanders and Bruce Schneier (2021) "Machine Learning Featurizations for AI Hacking of Political Systems," arXiv, 11 pgs.

Andreas Jungherr, Adrian Rauchfleisch, and Alexander Wuttke (2024) "Deceptive uses of Artificial Intelligence in elections strengthen support for AI ban," arXiv, 59 pgs.

Cade Metz and Tiffany Hsu (2024). "An A.I. Researcher Takes On Election Deepfakes," New York Times, April 2, 2024.

Watch: The National, "Can you spot the deepfake? How AI is threatening elections," CBC News, Jan 17, 2024, 7 min.

Watch: UVA's Digital Technology for Democracy Lab, "Co-Opting AI: Campaigning," YouTube, February 4, 2025, 75 min.

Watch: Karim Amer and Jehane Noujaim, "The Great Hack," Netflix, 2019, 114 minutes.

Recommended:

Hunt Allcott and Matthew Gentzkow, "Social Media and Fake News in the 2016 Election," Journal of Economic Perspectives, 31(2), Spring 2017, pp. 211-236.

Jane Mayer (2018) "How Russia Helped Swing the Election for Trump," New Yorker, September 24, 2018.

Pauwels, Eleonore and Sarah W. Deton (2020) "Hybrid Emerging Threats and Information Warfare: The Story of the Cyber-AI Deception Machine," from 21st Century Prometheus, M. Martinelli and R. Trapp (eds.), Springer, 2020, pp. 107-124.

Watch: PBS Frontline, "United States of Conspiracy," July 28, 2020, 54 min.

Week 12: April 20
AI & Mental Health

Required:

Chloe Xiang (2023) "Startup Uses AI Chatbot to Provide Mental Health Counseling and Then Realizes It ‘Feels Weird’," Vice, January 10, 2023.

Simon Coghlan, Kobi Leins, Susie Sheldrick, Marc Cheong, Piers Gooding and Simon D’Alfonso (2023) "To chat or bot to chat: Ethical issues with using chatbots in mental health," Digital Health, Vol 9, 1-11, October 2023, 11 pages.

Isobel Butorac and Adrian Carter (2021) "The Coercive Potential of Digital Mental Health," The American Journal of Bioethics, 21:7, 2021, pp. 28-30.

Watch: 60 Minutes, "AI-powered mental health chatbots developed as a therapy support tool," YouTube, April 24, 2024, 13 min.

Recommended:

Arthur C. Brooks (2025) "A Defense Against Gaslighting Sociopaths," The Atlantic, April 10, 2025.

Amelia Fiske, Peter Henningsen, and Alena Buyx (2019) "Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology and Psychotherapy," Journal of Medical Internet Research, Volume 21, Issue 5, pp. 1-12.

Week 13: April 27
Data Privacy

Required:

Carissa Veliz, Privacy is Power: Why and How You Should Take Back Control of Your Data, Melville House, 2021, Intro, Chapters 3, 4, 5 & 6.

Solove, Daniel J. and Woodrow Hartzog (2024) "Kafka in the Age of AI and the Futility of Privacy as Control," Boston University Law Review, GWU Law School Public Law Research Paper No. 2024-31, Boston Univ. School of Law Research Paper No. 4685553, January 5, 2024, 22 pages.

Kyle Chayka (2024) "How to Opt Out of A.I. Online," The New Yorker, October 2, 2024.

Watch: UVA's Digital Technology for Democracy Lab, "Co-Opting AI: Privacy," YouTube, March 26, 2025, 75 min.

Recommended:

Meredith Whittaker (2025) "The State of Personal Online Security and Confidentiality," SXSW Live, March 7, 2025, 60 min.

Payal Dhar (2023) "Protecting AI Models from Data Poisoning," IEEE Spectrum, March 24, 2023.

Finn Brunton & Helen Nissenbaum (2015) Obfuscation: A User's Guide for Privacy and Protest, MIT Press.

Nicole Nguyen (2025) "Go Delete Yourself From the Internet. Seriously, Here’s How," Wall Street Journal, April 20, 2025.

Watch: Terry Gilliam, Brazil (The Director's Cut), 1985, 132 min.

Watch: John Oliver, Sports Betting, Last Week Tonight, March 17, 2025, 32 min.

Week 14: May 4
AI Governance & Regulation

Required:

Maurice E. Stucke and Jathan Sadowski (2021) "I'm a Luddite. You Should Be One Too," The Conversation, August 9, 2021.

John Cassidy (2025) "How to Survive the A.I. Revolution," The New Yorker, August 21, 2025.

Tressie McMillan Cottom (2025) "The Tech Fantasy That Powers A.I. Is Running on Fumes," New York Times Opinion, March 29, 2025.

Peter Asaro (2019). "What is an 'AI Arms Race' Anyway?," I/S: A Journal of Law for the Information Society, Vol. 15, No. 1-2 (Spring 2019), pp. 45-64.

Ryan Nabil (2024) "Global AI Governance and the United Nations," Yale Journal of International Affairs, February 2, 2024.

Markus Anderljung, Anton Korinek (2024) "Frontier AI Regulation: Safeguards Amid Rapid Progress," Lawfare, Thursday, January 4, 2024.

Watch: Deutsche Welle, EU lawmakers approve world's first legal framework on Artificial Intelligence, DW Documentary, March 13, 2024, 8 min.

Watch: Jon Stewart, Lina Khan – FTC Chair on Amazon Antitrust Lawsuit & AI Oversight, Daily Show, April 1, 2024, 21 min.

Recommended:

Watch: UNIDIR, "Global Conference on AI, Security and Ethics 2025 - Day 2," Panel on Trust, YouTube, March 28, 2025, 100 min.

Lewin Schmidt (2021) "Mapping global AI governance: a nascent regime in a fragmented landscape," AI and Ethics, August 17, 2021, Volume 2, pp. 303–314.

"Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," The White House, October 30, 2023.

Explore: G7 Comprehensive Policy Framework, "Hiroshima AI Process," October, 2023.

Week 15: May 11
Presentation of Final Projects

May 13
Final Projects Due by Midnight ET, Wednesday, May 13.