AI Ethics and Values Notes – Class 11 AI(843) | Quick Revision Booster
Here are comprehensive, well-structured, and curriculum-aligned study notes on AI Ethics and Values for Class 11 AI. Each topic is explained in a simple, student-friendly manner, keeping the latest CBSE syllabus and board exam requirements in mind.
Ethics In Artificial Intelligence
AI Ethics means the set of rules and principles that guide how Artificial Intelligence (AI) should be created and used. It ensures that AI systems are built and used in a way that is fair, clear, responsible, and safe for people. The main goal of AI ethics is to make sure AI respects human values and is used in a proper and trustworthy manner.
The Five Pillars of AI Ethics
- Explainability
  - Refers to how clearly an AI system can explain its decisions and predictions.
  - Helps users understand how and why an AI model gives a specific output.
  - Builds trust, accountability, and ethical use of AI systems.
  - Ensures AI is transparent and understandable to stakeholders.
- Fairness
  - Focuses on removing bias and discrimination from AI systems.
  - Ensures equal treatment in decision-making models.
  - Works to eliminate bias based on sensitive factors like:
    - Race and ethnicity
    - Gender
    - Sexual orientation
    - Disability
    - Socioeconomic status
  - Aims for equal and unbiased outcomes.
- Robustness
  - Refers to the ability of AI systems to give accurate and reliable results consistently.
  - Ensures AI works properly under different conditions and over time.
  - Focuses on:
    - Stability of algorithms
    - Reproducible results
    - Consistent performance across datasets and environments
  - Requires testing, validation, and quality assurance.
- Transparency
  - Means openness about how AI systems are designed and how they work.
  - Includes clear information about:
    - Data used
    - Algorithms applied
    - Decision-making process
  - Helps users and stakeholders understand AI systems.
  - Supports accountability, evaluation, and informed decision-making.
- Privacy
  - Refers to an individual’s right to control their personal data.
  - Protects people from unwanted access or intrusion.
  - Includes safeguarding:
    - Personal communications
    - Activities
    - Personal information
  - Ensures autonomy, dignity, and personal freedom.
Bias
- Bias means having a preference or tendency toward something or someone over others without considering all relevant information fairly.
- It can lead to unfair treatment or decisions based on:
  - Personal beliefs
  - Past experiences
  - Stereotypes
- In AI, bias means unfair or inaccurate decisions made by systems due to flawed data or built-in assumptions.
- This can result in unfair outcomes for certain groups of people.
Bias Awareness
- Bias awareness means understanding that AI systems may have unfair preferences.
- It helps in recognizing that AI may sometimes make unfair decisions due to training or design.
Sources of Bias in AI
Training Data Bias
- AI systems learn from training data, so biased data leads to biased outputs.
- Bias can occur due to over-representation or under-representation of groups in datasets.
- Example issues:
  - Facial recognition data mostly containing white people can cause errors for people of color.
  - Police data from predominantly Black geographic areas can create racial bias.
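Over- and under-representation in training data can be checked before a model is ever trained. The sketch below is a minimal illustration, not a standard tool: the function name, the `skin_tone` attribute, and the sample data are all hypothetical.

```python
from collections import Counter

def representation_report(samples, group_key):
    """Return the fraction of the dataset that each group makes up.

    `samples` is a list of dicts and `group_key` is the demographic
    attribute to check -- both names are illustrative assumptions.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical face-dataset labels: one group is heavily over-represented,
# which is exactly the situation described above for facial recognition.
data = [{"skin_tone": "light"}] * 80 + [{"skin_tone": "dark"}] * 20
print(representation_report(data, "skin_tone"))  # {'light': 0.8, 'dark': 0.2}
```

A strongly skewed report like this is a signal to collect more data for the under-represented group before training.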
Algorithmic Bias
- Happens when flawed training data leads to incorrect or unfair algorithm outputs.
- Can also occur due to programming mistakes or unfair weighting of factors.
- Example:
  - Using factors like income or vocabulary may unintentionally discriminate based on race or gender.
Cognitive Bias
- Comes from human thinking, judgment, and personal experiences.
- Humans may unintentionally introduce their own biases while designing AI systems.
- Example:
  - Preferring data only from Americans instead of using global datasets.
Examples of AI Bias in Real Life
- Healthcare
  - Underrepresented data of women or minority groups affects AI predictions.
  - Computer-aided diagnosis (CAD) systems show lower accuracy for Black patients than for White patients.
- Online Advertising
  - AI ad systems can reinforce gender bias in job roles.
  - Google ads showed high-paying jobs more often to men than to women.
- Image Generation
  - AI tools like Midjourney show gender and age bias in generated images.
  - Older people in professional roles are often shown as men only, reinforcing workplace gender bias.
Mitigating Bias in AI Systems
- AI bias can increase unfairness and discrimination in society.
- Example: Biased hiring systems may unfairly disadvantage certain groups, leading to systemic discrimination.
- Bias in AI reduces trust in technology, making people less willing to use it.
- Addressing bias is necessary for ethical and responsible use of AI systems.
Strategies for Mitigating Bias
- Using Diverse Data
  - Train AI using varied and inclusive datasets.
  - Helps AI learn from different viewpoints and reduces bias.
- Detecting Bias
  - Identify and measure bias before AI systems are used.
  - Check how AI behaves for different groups.
  - Use tools to test whether decisions are fair.
- Fair Algorithms
  - Use algorithms designed to ensure fair decision-making.
  - Include fairness as a key factor in AI model design.
- Being Transparent
  - AI systems should clearly explain how decisions are made.
  - Transparency helps users identify and correct bias.
- Inclusive Teams
  - Build AI with teams from diverse backgrounds.
  - Different perspectives help identify hidden biases.
  - Ensures AI systems are fair for everyone.
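"Detecting bias" can be made concrete with a simple fairness check: compare how often an AI system selects candidates from each group. The sketch below is an illustrative assumption, not a real hiring system; the function names, the groups "A" and "B", and the decision data are all hypothetical, and the 0.8 threshold is the commonly cited "four-fifths" rule of thumb.

```python
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns the
    fraction of each group's candidates who were selected."""
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if picked else 0)
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.
    1.0 means parity; values below 0.8 are commonly flagged as unfair."""
    return min(rates.values()) / max(rates.values())

# Hypothetical shortlisting decisions: (group, was_shortlisted)
decisions = [("A", True)] * 6 + [("A", False)] * 4 + \
            [("B", True)] * 3 + [("B", False)] * 7
rates = selection_rates(decisions)   # {'A': 0.6, 'B': 0.3}
print(disparate_impact(rates))       # 0.5 -> fails the 0.8 rule of thumb
```

Running such a check before deployment is one practical way to "identify and measure bias before AI systems are used," as the strategy list above recommends.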
Developing AI Policies
- Developing AI policies is important to ensure AI is used responsibly, safely, and ethically. It also helps in promoting innovation and public trust.
- AI rules should be based on human values, such as:
  - Treating people fairly
  - Respecting human rights
  - Being honest about how AI works
  - Ensuring safety
  - Taking responsibility if something goes wrong
- There should be clear rules and standards for using AI, including:
  - Protecting personal information
  - Preventing bias in AI systems
  - Ensuring safety of AI applications
  - Allowing people to question AI decisions
- AI policy-making should involve different stakeholders, such as:
  - Government representatives
  - Business leaders
  - Scientists
  - Community groups
  - General public
- This ensures that everyone’s views are considered, since AI affects all.
- Before deploying AI systems, it is important to:
  - Identify possible risks or problems
  - Analyze what could go wrong
  - Prepare solutions or safety measures in advance
Components of AI Policies
- IBM AI Ethics Board focuses on ethical AI development with principles like fairness, transparency, accountability, and bias reduction. It also promotes collaboration and awareness of AI ethics.
- Microsoft Responsible AI includes principles such as fairness, reliability, privacy, and inclusivity, along with tools for bias detection and fairness checking.
- Google AI Principles emphasize fairness, safety, privacy, and accountability, and focus on building AI based on human values and societal well-being with transparency and continuous improvement.
- European Union AI Ethics Guidelines focus on trustworthy AI with principles like human autonomy, prevention of harm, fairness, and accountability, along with transparency, explainability, and human oversight.
Moral Machine Game
- An ethical dilemma is a situation where there is no clear right or wrong choice, and every option has both positive and negative outcomes.
- In AI, ethical dilemmas arise when there is a conflict between moral values during the design, development, or use of AI systems.
- These dilemmas affect individuals, society, and even the environment due to the wide impact of AI.
- The Moral Machine is an online platform developed by MIT to explore AI ethical dilemmas.
- It presents users with hypothetical scenarios involving autonomous vehicles making difficult decisions.
- Users must choose between options like:
  - Protecting passengers vs pedestrians
  - Obeying traffic rules vs avoiding harm
  - Considering factors like age, gender, or social status
- The game raises questions like what decision should be made and why, helping users think about moral trade-offs.
- Although scenarios are fictional, they reflect real-world ethical challenges in AI systems.
- The Moral Machine helps in discussion, awareness, and understanding of AI ethics in real-life situations.
Survival of the Best Fit Game
- Survival of the Best Fit is an educational game focused on hiring bias in AI.
- It explains how misuse of AI can cause machines to learn and repeat human biases.
- The game shows how biased AI systems can increase inequality in job selection processes.
- It helps students understand how AI is used in recruitment and hiring decisions.
- The activity demonstrates how certain candidates may be unfairly selected or rejected due to biased algorithms.
- It builds awareness of how data and decision-making systems affect fairness in hiring.
- The game is used as a practical learning activity to understand bias in real-world AI applications.