lenny

Ethics Unlocked

Lesson Plan

Responsible AI Use Intro

Students will examine real-world AI misuse case studies, discuss ethical dilemmas in small groups, apply a five-step decision-making framework, and reflect on responsible AI practices.

This lesson builds critical digital and AI literacy by equipping 12th graders with tools to identify ethical issues, make informed decisions, and foster accountability in AI use.

Audience

12th Grade Small Group

Time

50 minutes

Approach

Case studies, five-step decision-making framework, small-group discussion

Prep

Review Materials

10 minutes

Step 1

Warm-Up Case Scenario

5 minutes

  • Present a brief AI misuse scenario (e.g., facial recognition misidentification leading to wrongful arrest).
  • Ask students to jot down 1–2 ethical issues they notice.
  • Invite 2–3 volunteers to share quick observations.

Step 2

Mini-Lecture on AI Ethics

10 minutes

  • Use the AI Ethics in Action Slide Deck to define key principles: fairness, accountability, transparency, privacy.
  • Highlight real-world examples of misuse and best practices.
  • Pause for student questions after each principle.

Step 3

Guided Group Discussion

15 minutes

  • Divide students into groups of 3–4.
  • Provide each group a new AI case scenario.
  • Instruct groups to identify conflicting values, stakeholders, and potential harms.
  • Assign one reporter per group to summarize findings.

Step 4

Ethical Decision-Making Framework Activity

10 minutes

  • Introduce a five-step framework: (1) Identify the issue, (2) Gather facts, (3) Evaluate options, (4) Choose action, (5) Reflect.
  • Distribute the Case Study Ethics Evaluation Worksheet.
  • Groups apply the framework to their scenario and complete the worksheet.

Step 5

Cool-Down Reflection

10 minutes

  • Reconvene as a whole class.
  • Each group reporter shares one key insight from the framework activity.
  • Lead a brief discussion: how can these ethical principles guide students’ own use of AI tools?
  • Summarize main takeaways and encourage ongoing responsible AI habits.

Slide Deck

AI Ethics in Action

Welcome to “AI Ethics in Action”

Explore key principles, real-world case studies, and best practices for responsible AI use.

Welcome students and introduce the session. Explain that they’ll learn key AI ethics principles, see real-world misuse examples, and discuss best practices.

Learning Objectives

By the end of this session, you will be able to:

  • Define four foundational AI ethics principles: fairness, accountability, transparency, privacy
  • Identify ethical issues in real-world AI scenarios
  • Analyze case studies of AI misuse
  • Propose best-practice strategies for responsible AI development and use

Review the learning objectives aloud. Emphasize why each objective matters for students’ understanding of AI in society.

Key Principle: Fairness

Fairness means:

  • Avoiding bias or discrimination in AI decisions
  • Ensuring equitable outcomes across demographic groups
  • Auditing datasets and models for imbalances

Define fairness: ensuring AI treats all groups equitably. Invite students to share examples of unfair AI outcomes they’ve heard about.

Key Principle: Accountability

Accountability entails:

  • Clear ownership of AI system outcomes
  • Mechanisms for redress when harm occurs
  • Documented development processes and governance

Explain accountability: assigning responsibility for AI decisions. Ask: Who is responsible when AI makes a harmful mistake?

Key Principle: Transparency

Transparency involves:

  • Explaining how AI models make decisions
  • Providing clear documentation and user-facing explanations
  • Enabling stakeholders to inspect system logic

Discuss transparency: making AI processes understandable. Show examples of explainable vs. opaque AI systems.

Key Principle: Privacy

Privacy requires:

  • Minimizing data collection to what’s necessary
  • Protecting personal information from misuse or exposure
  • Applying data-protection and consent frameworks

Cover privacy: protecting personal data used by AI. Highlight legal frameworks (e.g., GDPR) and privacy-by-design.

Case Study: Fairness Misuse

Scenario: A company’s AI hiring tool favors candidates from certain universities.

Issues:

  • Bias against underrepresented schools
  • Reinforcing existing inequalities
  • Lack of diverse training data

Present the hiring algorithm case. Ask students to spot fairness issues and potential impacts on applicants.

Case Study: Accountability Lapse

Scenario: An autonomous car causes a pedestrian accident.

Issues:

  • Unclear responsibility among engineers, manufacturers, or operators
  • No defined process for investigating AI errors
  • Victim’s inability to seek redress

Describe the autonomous vehicle incident. Prompt discussion on who is accountable: manufacturer, programmer, or user.

Case Study: Transparency Issue

Scenario: A bank’s AI credit-scoring model denies loans without explanation.

Issues:

  • Applicants cannot understand or challenge decisions
  • Hidden factors influencing outcomes
  • Potential regulatory non-compliance

Share the credit-scoring example. Encourage students to consider transparency needs in financial services.

Case Study: Privacy Breach

Scenario: A smart assistant records and shares private conversations.

Issues:

  • Inadequate data encryption and access controls
  • Consent not clearly obtained or documented
  • Sensitive personal data exposed

Explain the voice assistant data breach. Ask students how privacy-by-design might have prevented it.

Best-Practice Tips

Implement these strategies:

  • Fairness: Use diverse, representative datasets and bias tests
  • Accountability: Establish clear governance and logging
  • Transparency: Provide user-friendly explanations and documentation
  • Privacy: Apply data minimization, encryption, and consent protocols

Summarize best practices across principles. Encourage groups to note which they would prioritize in future projects.

Discussion & Reflection

Reflect and discuss:

  • Which principle feels most challenging to implement, and why?
  • How would you address an ethical dilemma in your own AI project?
  • What steps can you take now to foster responsible AI in everyday life?

Pose reflective questions and invite group discussion. Close by linking back to their worksheet activity.

Worksheet

Case Study Ethics Evaluation Worksheet

Group Members: ____________________________ Scenario Title: ____________________________

Directions

Use the five-step ethical decision-making framework to analyze your assigned AI case scenario. Provide thoughtful answers and discuss them with your group.


Step 1: Identify the Ethical Issue

Briefly describe the core ethical dilemma or conflict in your case scenario. What principle(s) are at stake?



Step 2: Gather Facts

List the key facts, data points, and stakeholders involved. Consider potential harms and benefits to each stakeholder.






Step 3: Evaluate Options

Describe at least two possible courses of action. For each option, list pros and cons, referencing relevant ethical principles (fairness, accountability, transparency, privacy).











Step 4: Choose an Action

Select the option you recommend. Explain your reasoning, and specify which AI ethics principles guided your choice.






Step 5: Reflect

  1. How might this decision impact each stakeholder?



  2. What lessons about responsible AI use did you learn from this exercise?



  3. How will you apply this ethical framework in your own AI projects or daily use of AI tools?





Thank you for your thoughtful analysis! Use your findings to guide responsible AI decisions in real-world contexts.
