Based on ISO/IEC TR 24027:2021
Section 1: Fundamental Concepts of Bias and Fairness
Key Definitions
- Bias: A systematic difference in the treatment of certain objects, people, or groups compared to others. Important note: bias isn’t inherently negative.
- Fairness: Treatment, behavior, or outcome that respects established facts, beliefs, and norms without unjust discrimination.
Understanding Bias Types
- Positive Bias
- Example: Providing extra support to disadvantaged regions
- Can be intentional and beneficial
- Neutral Bias
- Example: Self-driving car recognizing mailboxes more accurately than garbage bins
- Not inherently problematic
- Context Dependency
- Age discrimination case study:
- Unfair: Rejecting qualified job candidates based on youth
- Fair: Age restrictions for alcohol purchases
- Cultural norm: Young people giving up seats to the elderly on public transport
Section 2: Hidden Bias and Real-World Applications
CV Screening Case Study
- Initial Approach
- Remove gender from classification features
- Focus on work experience, employment duration, performance consistency
- Hidden Problems Discovered
- Seasonal/temporary work patterns
- Forced entrepreneurship
- Childbirth-related career gaps
- Result: These features act as proxies for gender, producing unintended discrimination against women (see the sketch below)
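A minimal sketch of the proxy effect, using made-up data and hypothetical feature names (career_gap_months, years_experience) and assuming NumPy and scikit-learn are available: gender is removed from the inputs, yet the model still disadvantages women because a correlated feature stands in for it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical data: gender is NOT a model input, but career-gap length
# correlates with it (e.g., childbirth-related gaps).
gender = rng.integers(0, 2, n)                   # 1 = female (held out of X)
career_gap_months = rng.poisson(3 + 9 * gender)  # proxy feature
years_experience = rng.normal(8, 3, n)

# Historical hiring labels that penalised career gaps.
hired = (years_experience - 0.4 * career_gap_months + rng.normal(0, 1, n)) > 4

X = np.column_stack([years_experience, career_gap_months])  # no gender column
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

print(f"Predicted hire rate, men:   {pred[gender == 0].mean():.2f}")
print(f"Predicted hire rate, women: {pred[gender == 1].mean():.2f}")
# The gap persists because career_gap_months acts as a proxy for gender.
```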
Important Insight
- Fairness issues can exist even without bias between groups; for example, a system that rejects every candidate treats all groups identically yet still produces an unfair outcome.
Section 3: Fairness Metrics and Classification
Classification Examples
- Simple Classification (Iris Dataset); see the sketch after this list
- Clear ground truth available
- Objective verification possible
- Complex Classification (e.g., medical school admissions, firefighter selection)
- Multiple assessment criteria
- Ethical considerations in defining success
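For the simple case above, a minimal sketch (assuming scikit-learn is available) of what "clear ground truth" means in practice: the Iris labels are unambiguous, so a model's quality can be verified objectively with a plain accuracy score.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Unambiguous ground-truth labels: accuracy is an objective check.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("Accuracy vs. ground truth:", accuracy_score(y_test, model.predict(X_test)))
```

For the complex cases, no such single ground-truth label exists; success itself must be defined, which is where the ethical considerations enter.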
Confusion Matrix Analysis
- Groups typically divided into:
- Privileged (1)
- Non-privileged (0)
Key Metrics:
- Equal Opportunity: TPR₀ ≈ TPR₁ (equal true positive rates across groups)
- Equalized Odds: TPR₀ ≈ TPR₁ and FPR₀ ≈ FPR₁ (equal true positive and false positive rates across groups); see the sketch below
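A minimal sketch of computing these per-group rates, assuming NumPy and using hypothetical arrays y_true, y_pred, and a binary group indicator (1 = privileged, 0 = non-privileged):

```python
import numpy as np

def group_rates(y_true, y_pred, mask):
    """Return (TPR, FPR) computed on the rows where mask is True."""
    yt, yp = y_true[mask], y_pred[mask]
    tp = np.sum((yt == 1) & (yp == 1))
    fn = np.sum((yt == 1) & (yp == 0))
    fp = np.sum((yt == 0) & (yp == 1))
    tn = np.sum((yt == 0) & (yp == 0))
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return tpr, fpr

# Hypothetical labels and predictions; group = 1 marks the privileged group.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])

tpr1, fpr1 = group_rates(y_true, y_pred, group == 1)  # privileged
tpr0, fpr0 = group_rates(y_true, y_pred, group == 0)  # non-privileged

print(f"Equal opportunity gap |TPR1 - TPR0| = {abs(tpr1 - tpr0):.2f}")
print(f"Equalized odds gaps: TPR {abs(tpr1 - tpr0):.2f}, FPR {abs(fpr1 - fpr0):.2f}")
```

Note that a system rejecting every candidate has TPR₀ = TPR₁ = 0 and FPR₀ = FPR₁ = 0, so it satisfies both metrics while still being unfair, which is exactly the insight from Section 2: group-parity metrics alone do not guarantee fairness.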
Section 4: Types of Data Distortions
Selection Bias
- Sampling Bias: Non-random sampling from the population
- Coverage Bias: Incomplete coverage of the population (illustrated in the sketch after this list)
- Non-response Bias: Systematic refusal to participate by particular groups
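A minimal sketch (made-up population, assuming NumPy) of how coverage bias distorts an estimate: an online-only survey misses the part of the population without internet access and overstates the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up population: the value of interest correlates with internet access.
has_internet = rng.random(100_000) < 0.7
value = np.where(has_internet, rng.normal(60, 10, 100_000), rng.normal(40, 10, 100_000))

# Random sample: an unbiased estimate of the population mean.
random_sample = rng.choice(value, size=1_000, replace=False)

# Coverage-biased sample: an online survey only reaches people with internet access.
biased_sample = rng.choice(value[has_internet], size=1_000, replace=False)

print(f"Population mean:    {value.mean():.1f}")
print(f"Random sample mean: {random_sample.mean():.1f}")
print(f"Online-only mean:   {biased_sample.mean():.1f}")  # systematically too high
```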
Cognitive Biases in AI
- Confirmation Bias
- People perceive confirming evidence more strongly than disconfirming evidence
- Creates filter bubbles in content recommendation (see the sketch after this list)
- Group-based Biases
- In-group Favoritism: preferring members of one's own group
- Out-group Homogeneity: perceiving members of other groups as more alike than they are
- Automation-related Biases
- Automation Bias: over-reliance on automated systems
- Selective Adherence: accepting automated results only when they confirm prior expectations
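A toy simulation (not from the source; assuming NumPy, with made-up topics and update rule) of the filter-bubble feedback loop: if the recommender only exploits the user's current top interest and each recommendation reinforces that interest, exposure collapses onto a single topic.

```python
import numpy as np

n_topics = 5
interest = np.full(n_topics, 1.0 / n_topics)  # user starts with uniform interests

# Toy feedback loop: always recommend the currently strongest interest,
# and let each recommendation reinforce that same interest.
for step in range(50):
    recommended = int(np.argmax(interest))  # exploit, never explore
    interest[recommended] += 0.1            # engagement reinforces the topic
    interest /= interest.sum()

print("Final interest distribution:", interest.round(2))
# One topic dominates: confirming content was shown and consumed repeatedly.
```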
Section 5: AI Transparency
Transparency Levels (IEEE P7001)
- For Users (Levels 0-5)
- Level 1: Basic system information
- Level 3: Immediate explanations
- Level 5: Continuous behavior explanation
- For Public/Bystanders (Levels 0-5)
- Level 1: System identification
- Level 3: Purpose and contact information
- Level 5: Data governance
- For Validation/Certification (Levels 0-5)
- Level 1: System specifications
- Level 3: High-level design documentation
- Level 5: Full source code and training data
Explainability Approaches
- Inherently Transparent Solutions
- Rule-based systems
- Formal methods
- Logic-based reasoning
- Black-box Solutions
- Post-hoc explanations
- Saliency maps
- Surrogate models such as LIME (see the sketch below)
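A minimal sketch of the local-surrogate idea behind LIME, not the lime library's API: perturb the instance to explain, query the black box, weight samples by proximity, and fit a simple interpretable model locally. The black-box model, data, and weighting kernel here are made up for illustration, assuming NumPy and scikit-learn are available.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black-box model trained on made-up data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Instance whose prediction we want to explain.
x0 = X[0]

# 1. Perturb the instance and query the black box.
perturbed = x0 + rng.normal(scale=0.5, size=(200, 4))
probs = black_box.predict_proba(perturbed)[:, 1]

# 2. Weight perturbations by proximity to the original instance.
weights = np.exp(-np.linalg.norm(perturbed - x0, axis=1) ** 2)

# 3. Fit a simple, interpretable surrogate locally.
surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)
print("Local feature importances:", surrogate.coef_.round(2))
```

The surrogate's coefficients explain only the neighbourhood of x0; they are not a global description of the black box, which is the trade-off post-hoc explanations accept.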