Lesson Plan
Ethics Expedition Guide
Students will analyze real-world AI ethical dilemmas, collaborate to propose responsible solutions, and reflect on balancing innovation with societal values.
This lesson develops students’ moral reasoning and digital citizenship, preparing them to navigate an AI-driven world with responsibility and empathy.
Audience
12th Grade Students
Time
55 minutes
Approach
Scenario analysis and collaborative discussions spark critical ethical thinking.
Materials
- Projector or Smartboard
- Whiteboard and Markers
- Sticky Notes
- Dilemma Roundtables Prompts
- AI Ethics Case Studies Handout
Prep
Review and Setup
10 minutes
- Print enough copies of the AI Ethics Case Studies Handout for each group
- Familiarize yourself with the scenarios in the Dilemma Roundtables Prompts
- Arrange desks or tables into small discussion clusters
- Queue up any relevant slides or projected overview for the intro
Step 1
Introduction & Hook
10 minutes
- Project a brief AI ethics scenario (e.g., bias in hiring algorithms)
- Ask students to share quick reactions: What feels fair or unfair?
- Introduce lesson goals: analyze dilemmas, discuss solutions, reflect on digital citizenship
Step 2
Case Study Analysis
15 minutes
- Distribute AI Ethics Case Studies Handout
- In groups of 3–4, students read one case study
- Guiding questions:
- What ethical conflict arises?
- Who are the stakeholders?
- What values are at stake?
- Each group records key points on sticky notes
Step 3
Dilemma Roundtables
20 minutes
- Hand out Dilemma Roundtables Prompts to each group
- Rotate prompts every 5 minutes, encouraging groups to:
- Debate potential solutions
- Weigh pros and cons
- Identify unintended consequences
- Groups jot down consensus recommendations on chart paper or board space
Step 4
Whole-Class Debrief
10 minutes
- Invite each group to share one insight or proposed solution
- Highlight common themes and contrasting viewpoints
- Facilitate reflection:
- How might these ethical frameworks apply in future careers?
- What responsibilities do developers and users bear?
- Close with takeaway: Balancing innovation with human values strengthens digital citizenship
Discussion
Dilemma Roundtables Prompts
Purpose: In small groups, you’ll grapple with one AI ethics scenario at a time. Use the guiding questions to structure your discussion, propose a responsible solution, and identify any unintended consequences. After 5 minutes, rotate to the next prompt.
Instructions:
- Start with the scenario assigned to your group and spend 5 minutes on it.
- Discuss the questions below and record key points on your chart paper or board space.
- After 5 minutes, rotate prompts so each group tackles every scenario.
Scenario 1: Bias in Hiring Algorithms
A tech company uses an AI tool to screen job applicants. Later, it discovers the tool rejects qualified candidates from certain demographic groups because its training data reflected biased historical hiring practices.
- What is the core ethical dilemma?
- Who are the stakeholders (applicants, company, society, etc.)?
- Which values (fairness, equality, efficiency) are in conflict?
- Propose a policy or technical fix. What are its pros and cons?
- What unintended consequences should the company watch for?
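Optional teacher demo: if you want a concrete illustration to project, here is a minimal sketch in Python showing how a screening "model" that simply imitates past hiring decisions reproduces whatever bias those decisions contained. All records, group labels, and function names are invented for this example; it is a toy, not a real hiring system.

```python
# Optional teacher demo -- all records below are hypothetical.
# A naive screening "model" that imitates past hiring decisions will
# reproduce whatever bias those decisions contained.

# Historical records: (group, qualified, hired). In this invented history,
# qualified Group B applicants were usually rejected.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True),
    ("A", False, False), ("B", False, False),
]

def past_hire_rate(group):
    """Fraction of qualified applicants from `group` hired in the past."""
    outcomes = [hired for g, qualified, hired in history if g == group and qualified]
    return sum(outcomes) / len(outcomes)

def screen(group):
    """Toy model: advance a qualified applicant only if applicants
    like them were usually hired before."""
    return past_hire_rate(group) > 0.5

for group in ("A", "B"):
    print(f"Group {group}: past hire rate {past_hire_rate(group):.0%}, "
          f"qualified applicant advanced? {screen(group)}")
```

Running it shows the model advancing every qualified Group A applicant while screening out equally qualified Group B applicants, which can seed the questions above.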
Scenario 2: Facial Recognition & Privacy
A city deploys facial recognition cameras in public spaces to reduce crime. Civil liberties groups argue it invades privacy, misidentifies individuals, and lacks oversight.
- What ethical tensions arise between safety and privacy?
- Who benefits, who is at risk?
- What safeguards or policies could balance both concerns?
- How might marginalized communities be affected differently?
- How would you monitor or audit the system?
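As one possible answer to the audit question above, here is a minimal sketch of a disparity audit: count misidentification rates per demographic group and look for large gaps. The log entries are invented for illustration; a real audit would need far more data and careful statistics.

```python
# Minimal disparity-audit sketch -- the log entries are invented.
# Each entry records a group label and whether the system identified
# the person correctly.
from collections import defaultdict

match_log = [
    ("group_1", True), ("group_1", True), ("group_1", True),
    ("group_1", False), ("group_2", False), ("group_2", False),
    ("group_2", True), ("group_2", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, correct in match_log:
    counts[group][1] += 1
    if not correct:
        counts[group][0] += 1

for group, (errors, total) in sorted(counts.items()):
    print(f"{group}: {errors}/{total} misidentified ({errors / total:.0%})")
```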
Scenario 3: Autonomous Vehicles & Moral Decisions
A self-driving car must choose between swerving into a barrier (injuring its passenger) and staying on course (hitting pedestrians).
- What moral framework would guide the car’s decision? (e.g., utilitarianism, rights-based)
- Who “owns” responsibility for that decision?
- Should passengers be informed or have a choice beforehand?
- How could manufacturers and regulators work together to set standards?
- Identify any slippery-slope risks with these systems.
Scenario 4: AI Chatbots & Misinformation
An AI-powered assistant provides health or legal advice. It makes an error that leads to harm because it relied on faulty or biased data.
- What duties do developers and platforms have to ensure accuracy?
- How should liability be assigned when users act on AI advice?
- What verification or disclaimer mechanisms could reduce risk?
- Could open-source or third-party audits help? Why or why not?
- How might this scenario change if the AI were behind a paywall vs. free to all?
After rotations:
- Review each group’s recommendations and note recurring themes.
- Prepare to share one insight or proposed solution in the whole-class debrief.
Reading
Case Studies in AI Ethics
Below are three concise, real-world cases illustrating different ethical dilemmas posed by AI systems. Read your group's assigned case, then discuss the guiding questions together and record your responses on sticky notes.
Case Study 1: Biased Credit Scoring
A major financial institution deploys an AI-driven credit scoring model to evaluate loan applicants. After rollout, data analysts discover that the model systematically assigns lower scores to applicants from certain ZIP codes and minority communities—reflecting historical economic disparities in the training data.
- What is the core ethical issue in this scenario?
- Who are the stakeholders affected by the AI’s decisions?
- Which values (e.g., fairness, transparency, accuracy) are in conflict?
- Propose one technical or policy solution to mitigate bias. What are its benefits and limitations?
- What unintended consequences might arise from your proposed fix?
Case Study 2: Predictive Policing and Community Trust
A city police department adopts a predictive policing AI tool that analyzes crime data to forecast high-risk locations and times. Community advocates later report that patrols are disproportionately concentrated in low-income neighborhoods, leading to increased stops and tensions.
- What ethical tensions emerge between public safety and civil rights?
- Who benefits from the system, and who bears the risks?
- Suggest one audit or oversight mechanism to ensure accountability. How would it work?
- How might the algorithm’s data inputs reinforce or challenge existing social biases?
- What long-term impacts could predictive policing have on community–police relationships?
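Optional teacher demo: to make the data-feedback question concrete, here is a small simulation sketch (all numbers and neighborhood names are invented). Patrols are sent wherever the records show the most incidents, and the patrols themselves generate new records, so a small initial gap in the data widens even if the true underlying rates are equal.

```python
# Optional feedback-loop demo -- all numbers and names are hypothetical.
# The tool sends patrols to whichever area has the most recorded incidents,
# and extra patrols log extra incidents, so the initial gap keeps growing.
recorded = {"northside": 10, "southside": 12}  # historical incident records

for week in range(1, 6):
    hotspot = max(recorded, key=recorded.get)  # area the tool flags as high-risk
    recorded[hotspot] += 5                     # added patrols log 5 new incidents
    print(f"week {week}: patrols sent to {hotspot}, records now {recorded}")
```

After five weeks, the records show southside as nearly four times "riskier" than northside, even though nothing about actual crime rates was ever measured.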
Case Study 3: Deepfakes and Media Trust
A news platform integrates an AI deepfake generator to produce lifelike video reenactments of historical events. Soon after launch, malicious actors use the same technology to create politically charged fake speeches that circulate on social media, eroding public trust.
- What responsibilities do developers and platforms have when releasing powerful generative AI?
- How should liability be assigned if deepfakes cause real-world harm?
- Propose two verification or authentication strategies to help audiences distinguish real from fake.
- In what ways could misinformation impact democratic processes or social stability?
- How might regulation or public policy address the challenges of deepfake technology?
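As one example for the verification question above, here is a minimal sketch of a fingerprint-based authentication strategy: a publisher posts a cryptographic hash of the genuine video, and anyone can recompute the hash of a copy to detect tampering. The workflow and byte strings below are invented for illustration, and note the limits: this proves a file matches the published original, not that the original itself was truthful.

```python
# Minimal content-fingerprint sketch -- the byte strings stand in for
# real video files and are invented for illustration.
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """SHA-256 digest of a video file's raw bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

genuine = b"frames of the original speech"   # stand-in for the real video
altered = b"frames of the doctored speech"   # stand-in for a deepfake edit

published_digest = fingerprint(genuine)      # the publisher posts this value

print(fingerprint(genuine) == published_digest)  # True  -> matches original
print(fingerprint(altered) == published_digest)  # False -> file was altered
```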
When you finish, compare your group's recommendations with those of other teams and prepare to share one key insight during the class debrief.