AI Risk and Ethics Analysis

Eric Loomis Case Study

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk assessment algorithm that played a significant role in this case.

Context: In 2013, Loomis was arrested for driving a car used in a shooting. After pleading guilty to eluding police, he received a six-year prison sentence, influenced by the COMPAS assessment.

At the sentencing hearing, the judge stated:

You’re identified, through the COMPAS assessment, as an individual who is at high risk to the community. In terms of weighing the various factors, I’m ruling out probation because of the seriousness of the crime and because your history, your history on supervision, and the risk assessment tools that have been utilized, suggest that you’re extremely high risk to re-offend.

Key Question: Is it okay for the judge to rely on AI this much?

Ulrich Beck’s Analysis: Risk Society

Historical View of Catastrophes

  • Attributed to bad luck
  • Seen as divine acts
  • Considered beyond human control

Modern Risk Society Characteristics

  • Risks are perceived as controllable
  • Humanity takes responsibility for potential negative consequences
  • This increased sense of control paradoxically undermines societal institutions

Evolution of Risk Perception

| Modernity 1 | Modernity 2 |
| --- | --- |
| Localized risks | Risks are more evenly distributed |
| Wealthy could avoid risks by moving away | Risks are harder to escape and global in nature |
| Example: Living near a factory was risky, but rich people could relocate | Examples: Ozone depletion and global warming affect everyone regardless of wealth |

The EU AI Act

A comprehensive regulatory framework by the European Union for AI technologies.

Risk Classification System

By Risk Level

| Risk Level | Definition | Key Examples | Requirements |
| --- | --- | --- | --- |
| Unacceptable | Prohibited AI systems that pose fundamental rights risks | Social scoring systems; manipulative AI; workplace emotion inference; most real-time biometric ID | Complete prohibition |
| High-Risk | Systems with significant potential impact requiring strict oversight | Critical infrastructure; education systems; employment screening; law enforcement | Comprehensive compliance framework |
| Limited Risk | Systems requiring basic transparency | Chatbots; deepfakes | Transparency obligations |
| Minimal Risk | Low-impact systems | Basic AI tools; simple automation | No specific requirements |
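Purely as an illustration (not the Act's legal definitions), the four-tier scheme can be sketched as a lookup from use case to risk tier and obligation. The use-case labels and the mapping below are simplified, hypothetical examples:

```python
# Illustrative sketch of the EU AI Act's four-tier risk scheme.
# Use-case labels and tier assignments are simplified examples,
# not the Act's legal definitions.

RISK_TIERS = {
    "social_scoring": "unacceptable",             # prohibited outright
    "workplace_emotion_inference": "unacceptable",
    "employment_screening": "high",               # strict oversight
    "critical_infrastructure": "high",
    "chatbot": "limited",                         # transparency only
    "spam_filter": "minimal",                     # no specific requirements
}

OBLIGATIONS = {
    "unacceptable": "complete prohibition",
    "high": "comprehensive compliance framework",
    "limited": "transparency obligations",
    "minimal": "no specific requirements",
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (risk tier, obligation) for a known use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return tier, OBLIGATIONS.get(tier, "case-by-case assessment")

print(classify("chatbot"))  # ('limited', 'transparency obligations')
```

In practice classification under the Act depends on context of deployment, not a fixed label per product; the lookup only captures the tier-to-obligation structure.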

AI Categories

General Purpose AI (GPAI)

  • Definition: Systems showing significant generality and competence across various tasks
  • Subcategories:
    • Standard GPAI
    • Systemic Risk GPAI (training compute > 10^25 FLOPs)
    • Open Source GPAI
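The systemic-risk presumption turns on a single numeric threshold: cumulative training compute above 10^25 FLOPs. A toy check (the helper function is hypothetical, not from the Act or any library):

```python
# Hypothetical helper: flags a general-purpose AI model as presumptively
# posing "systemic risk" under the EU AI Act's training-compute threshold.
SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training compute, in FLOPs

def is_systemic_risk_gpai(training_flops: float) -> bool:
    """True when training compute exceeds the 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(is_systemic_risk_gpai(5e25))  # True
print(is_systemic_risk_gpai(1e24))  # False
```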

Specialized AI

  • Definition: Systems designed for specific use cases
  • Examples:
    • Biometric identification systems
    • Educational assessment systems
    • Employment screening tools

Philosophical Perspective: Laplace’s Demon

A thought experiment exploring determinism and scientific prediction. Laplace’s Demon is a theoretical entity possessing complete knowledge of every particle in the universe.

Key Capabilities

  1. See the entire future unfold like a movie
  2. Trace backwards through time to know everything that ever happened
  3. Calculate everything using physics laws

Implications

  • Universe as deterministic
  • Free will potentially illusory
  • Future as fixed as the past

Prohibited AI Practices

Manipulation and Exploitation

  • Subliminal and Manipulative Techniques: Systems that deploy techniques beyond a person’s consciousness or use purposefully manipulative/deceptive methods are prohibited.
  • Vulnerability Exploitation: Systems cannot exploit vulnerabilities related to age, disability, or socio-economic situations.

Social Scoring and Assessment

  • Social Behavior Classification: Systems that evaluate or classify people based on social behavior or personality characteristics are prohibited when they lead to:
    1. Unfavorable treatment in contexts unrelated to the original data collection
    2. Unjustified or disproportionate treatment based on social behavior

Crime Prediction and Risk Assessment

  • Prohibited: AI systems that predict criminal behavior based solely on personality profiling
  • Exception: Systems may support human assessment when based on objective, verifiable facts directly linked to criminal activity

Biometric Systems and Privacy

  • Database Creation: Prohibited to create/expand facial recognition databases through untargeted scraping of internet/CCTV footage
  • Emotion Recognition: Banned in workplace and educational settings (except for medical/safety purposes)
  • Biometric Categorization: Systems cannot categorize individuals based on sensitive characteristics like:
    • Race
    • Political opinions
    • Trade union membership
    • Religious/philosophical beliefs
    • Sexual orientation/sex life

Law Enforcement Restrictions

  • Real-time Biometric Identification: Only allowed in public spaces for:
    1. Searching for specific victims (abduction, trafficking, exploitation)
    2. Preventing imminent threats to life/safety or terrorist attacks
    3. Locating suspects of serious crimes (punishable by 4+ years)