AI Risks & Concerns

Understanding and mitigating AI risks in education

AI is powerful, but it's not perfect. Understanding the risks helps you use it responsibly and maintain the trust of your community.

Hallucinations

The problem: AI can confidently state things that are completely false. It doesn't "know" when it's wrong.

Why it happens

AI generates text that is statistically likely to follow from your prompt, not text that is necessarily true. It has no concept of truth, only of patterns.

Education examples

  • Citing non-existent research studies
  • Attributing quotes to the wrong person
  • Inventing statistics that sound plausible
  • Creating fictional legal precedents

Mitigation

  • Always verify facts, citations, and statistics independently
  • Use AI for drafting and ideation, not as a source of truth
  • Ask AI to include sources, then check that those sources exist (see the sketch after this list)
  • Be especially careful with numbers and dates
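
If an AI-supplied citation includes a DOI, one quick automated check is to look it up in a public registry before trusting it. The Python sketch below queries the Crossref API, a free scholarly metadata service; the sample DOI is a made-up placeholder, and a "not found" result only means the reference needs manual verification, since some legitimate DOIs are registered elsewhere.

    import requests

    def doi_exists(doi: str) -> bool:
        """Return True if Crossref has a record for this DOI."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # Spot-check a DOI pulled from an AI-generated citation.
    # The value below is a hypothetical placeholder, not a real reference.
    candidate = "10.1234/placeholder-doi"
    print("Found in Crossref" if doi_exists(candidate)
          else "Not found: verify this citation by hand")

Even when a DOI resolves, confirm that the source actually says what the AI claims it says.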

Bias

The problem: AI reflects the biases present in its training data, including historical inequities and societal stereotypes.

Manifestations

  • Stereotyping in generated content
  • Uneven quality of responses about different groups
  • Reinforcing historical inequities
  • Lack of diverse perspectives

Education implications

  • IEP language that inadvertently stereotypes
  • Curriculum suggestions that lack cultural diversity
  • Assessment questions with embedded bias
  • Communications that don't resonate with all families

Mitigation

  • Review AI outputs critically for bias
  • Provide diverse examples in your prompts
  • Include explicit instructions about inclusive language (see the example after this list)
  • Have diverse reviewers check important content
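
One concrete way to apply the last two points is to bake the guidance directly into every prompt. The Python sketch below is a hypothetical template; the exact wording is an illustration to adapt to your district's style guide, not a vetted standard.

    # Hypothetical inclusive-language instructions prepended to every task.
    SYSTEM_INSTRUCTIONS = (
        "Use person-first, inclusive language. "
        "Avoid assumptions about family structure, income, or ability. "
        "When giving examples, draw on a diverse range of names and cultures."
    )

    def build_prompt(task: str) -> str:
        """Combine the standing instructions with a specific drafting task."""
        return f"{SYSTEM_INSTRUCTIONS}\n\nTask: {task}"

    print(build_prompt("Draft a newsletter blurb about family reading night."))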

Privacy & Data Security

The problem: Information you share with AI tools may be stored, used for model training, or later exposed.

Concerns

  • Student personally identifiable information (PII)
  • FERPA compliance
  • Staff personnel information
  • Sensitive district communications

Best practices

  • Never input student PII into general AI tools
  • Use enterprise versions with data protection agreements
  • Anonymize data before analysis (a redaction sketch follows this list)
  • Understand your vendor's data retention policies
  • Follow your district's acceptable use policy
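
Before pasting any excerpt into a general AI tool, a first-pass redaction script can strip the most obvious identifiers. The Python sketch below is illustrative only: the student-ID format is an assumption, regular expressions cannot catch names or context clues, and it is no substitute for a vetted de-identification tool and human review.

    import re

    # Illustrative patterns; the student-ID format is assumed, and real
    # documents will need patterns tuned to your district's data.
    PATTERNS = {
        "student_id": re.compile(r"\b\d{6,9}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace matched identifiers with labeled placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()}]", text)
        return text

    sample = "Reach Jamie Doe (ID 20481234) at jdoe@example.org or 555-123-4567."
    print(redact(sample))
    # Names like "Jamie Doe" survive regex redaction; remove those by hand.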

Over-reliance

The problem: AI can make us lazy thinkers if we stop engaging critically with its outputs.

Warning signs

  • Accepting first drafts without review
  • Stopping your own research process
  • Deferring to AI on judgment calls
  • Losing skills through disuse

Maintaining balance

  • Use AI as a starting point, not an endpoint
  • Continue developing your own expertise
  • Question AI recommendations
  • Maintain human decision-making authority

Academic Integrity

The problem: The line between AI assistance and AI replacement is unclear.

Considerations

  • When does AI assistance cross into AI doing the work?
  • How should policies address AI use?
  • What are the learning implications?
  • How do we prepare students for an AI-assisted world?

A framework

Instead of banning AI, consider:

  1. Define acceptable use clearly
  2. Focus on process, not just product
  3. Teach effective AI collaboration
  4. Assess understanding, not just output

The Balanced Approach

Awareness of risk doesn't mean avoidance. It means thoughtful adoption with appropriate safeguards.

For every AI use case, ask:

  1. What could go wrong?
  2. What's the worst-case impact?
  3. What verification steps are needed?
  4. Who needs to review the output?
  5. What's our fallback if AI fails?

The goal is confident, responsible use, not fearful avoidance or reckless adoption.