
Class 10 CH 1 Solution

The document discusses various case studies highlighting the ethical challenges and biases associated with AI applications in sectors like finance, recruitment, healthcare, and surveillance. It emphasizes the importance of fairness, transparency, and accountability in AI systems, as well as the need for diverse data and human oversight to mitigate bias. Additionally, it explores the ethical dilemmas posed by AI in decision-making scenarios, particularly in life-and-death situations, and advocates for the integration of bioethical principles to ensure responsible AI use.


CH-1: REVISITING AI PROJECT CYCLE & ETHICAL FRAMEWORKS FOR AI

Case Study-1: AI in Automated Loan Approvals - Bias in Financial AI


A leading bank introduced an AI-powered loan approval system to speed up loan processing
and make fair, data-driven decisions. The AI was trained on historical data, analyzing credit
scores, income, and repayment history to determine whether applicants were eligible for loans.
However, after implementation, reports showed that the system was rejecting more applications from
certain racial and lower-income communities, even when they had similar financial qualifications to
approved applicants.

Why Did This Happen?

1. Biased Data: The AI model learned from historical loan approval records, which had past biases
due to discriminatory lending practices. As a result, the AI inherited these biases and unfairly rejected
qualified applicants.

2. Lack of Fair Testing: The bank did not test the AI model on diverse applicant groups, leading to
discriminatory outcomes. Since the AI was trained mostly on data from higher-income applicants, it
favored similar profiles.

3. Over-Reliance on AI: Loan officers started relying entirely on AI decisions instead of reviewing
cases manually, reducing human oversight in the decision-making process.

The Consequences

 Unfair Loan Denials: Qualified applicants from certain backgrounds were denied loans, affecting
their ability to buy homes, start businesses, or pay for education.
 Legal & Ethical Issues: The biased AI system raised concerns about fairness and
discrimination, leading to complaints and potential legal actions against the bank.
 Loss of Trust: Customers lost confidence in the bank's system, affecting its reputation and
financial success.

1. What caused the AI loan approval system to make biased decisions, and how did historical
data play a role?
Biased Data: The AI model learned from historical loan approval records, which had past biases
due to discriminatory lending practices. As a result, the AI inherited these biases and unfairly rejected
qualified applicants.

2. Do you think AI should be used for critical decisions like loan approvals if it can be biased?
Why or why not?
Yes, but with strong safeguards.
 Why: AI can process applications quickly, spot patterns in large datasets, and apply
consistent rules (reducing random human errors).
 Why not: (Refer Consequences for explanation)
o Unfair Loan Denials
o Legal & Ethical Issues
o Loss of Trust



3. If you were designing this AI system, what steps would you take to ensure fair loan approvals
for all applicants?
 Bias Mitigation Techniques
 Explainability and Transparency (Refer Pg No. 23 from book)

Case Study-2: A multinational company implemented an AI-driven recruitment system to shortlist
candidates efficiently. However, after a year of implementation, reports revealed that significantly
fewer female candidates were being selected compared to male applicants, even when they had
similar qualifications. Further investigation showed that the AI system had learned from historical
hiring data, which was biased toward male candidates. This raises concerns about discrimination,
fairness, and bias in AI decision-making.
Q: What ethical concerns does this case highlight? How can bias be reduced in AI hiring systems?
Ethical concerns:
 Discrimination
 Bias in AI decision-making
 Lack of fairness
 Lack of transparency
 Lack of accountability and governance

Ways to reduce bias in AI hiring:


 Use diverse and representative training datasets.
 Apply bias detection and correction techniques during model training.
 Involve human oversight in final decisions.
 Regularly audit AI systems for fairness (a simple audit sketch follows this list).
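As an illustration of the auditing point above, the Python sketch below compares shortlisting rates across groups and flags a large gap. The applicant records, column names, and the four-fifths (80%) threshold are illustrative assumptions, not taken from the chapter.

```python
# Hypothetical shortlisting audit: compare selection rates across groups.
# The records, column names, and the four-fifths (80%) rule are illustrative assumptions.
from collections import defaultdict

def selection_rates(records, group_key="gender", outcome_key="shortlisted"):
    """Return the fraction of shortlisted candidates per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(row[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

applicants = [
    {"gender": "female", "shortlisted": 1},
    {"gender": "female", "shortlisted": 0},
    {"gender": "female", "shortlisted": 0},
    {"gender": "male", "shortlisted": 1},
    {"gender": "male", "shortlisted": 1},
    {"gender": "male", "shortlisted": 0},
]

rates = selection_rates(applicants)
print(rates)                          # {'female': 0.33..., 'male': 0.66...}
print(disparate_impact_ratio(rates))  # 0.5, below the 0.8 threshold, so flag for human review
```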

Case Study-3: A government installed AI-powered surveillance cameras in public areas to monitor
suspicious activities and prevent crimes. These AI systems track individuals' movements and
analyze behaviors to detect potential threats. However, privacy activists argue that constant
surveillance violates personal freedoms, raises concerns about misuse of personal data, and could
lead to mass surveillance abuse. The debate highlights the challenge of balancing security with
individual privacy rights.
Q: Discuss the ethical dilemma between public safety and privacy in AI surveillance.
Ethical dilemma:
 Public safety –
o The use of AI surveillance systems can discourage crime.
o Identify threats quickly.
o Protect citizens.
 Privacy rights –
o Constant monitoring can violate personal freedoms.
o Collect sensitive data without consent.
o Risk misuse or abuse by authorities.

Case Study-4: A hospital introduced an AI-powered diagnostic tool to help doctors identify early
signs of cancer. Initially, the system showed promising results, but later studies revealed that the AI
misdiagnosed certain racial groups more frequently than others. This happened because the AI was
trained on data that primarily represented one demographic group, leading to biased medical
assessments. This raises serious concerns about fairness, inclusivity, and reliability in AI-driven
healthcare solutions.
Q: What steps should be taken to ensure fairness in AI-driven healthcare tools?
The four principles of bioethics can help create an ethical AI system that ensures fairness and
accuracy in healthcare systems.
1. Respect for autonomy
2. Do no harm (Non-maleficence)
3. Ensure maximum benefits for all (Beneficence)
4. Give justice (Fairness and equality) (Refer Pg No. 26 from book)

Case Study-5: A social media platform uses AI algorithms to recommend content to users based on
their past interactions. However, studies show that the platform prioritizes sensational or misleading
content because it drives more engagement and keeps users online for longer. This has resulted in
the spread of misinformation, political bias, and social polarization. The ethical dilemma here is how
AI can be designed to balance user engagement while ensuring responsible and accurate content
recommendations.
Q: How can AI-driven platforms balance user engagement and content moderation?
Balancing user engagement and content moderation:
• AI should promote honest and fair decision-making.
• AI developers and users should act with integrity and responsibility.
• AI must serve humanity rather than simply maximizing profits.

Case Study-6: AI in Recruitment


A technology company has introduced an AI-based hiring system to screen job applications
efficiently. The AI model, trained on historical hiring data, quickly shortlists candidates. However,
after a few months, it was discovered that the system disproportionately rejects female candidates
and those from certain ethnic backgrounds. This raises concerns about fairness and bias in AI-based
decision-making.
Q1. Identify the ethical issues present in the company's AI-based recruitment system.
Ethical issues:
 Bias and discrimination against women and certain ethnic groups.
 Violation of equal opportunity in employment.
 Lack of transparency in how decisions are made.
Q2. How can bias in AI models be detected and mitigated?
Detecting and mitigating bias:
 Conduct bias audits using diverse test datasets.
 Apply fairness metrics (e.g., demographic parity, equal opportunity); a short calculation sketch follows this list.
 Retrain models with balanced and representative data.
 Use human oversight to review AI decisions.
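The two fairness metrics named above can be computed directly from a model's outputs. The Python sketch below uses invented labels, predictions, and group codes ("A"/"B") purely for illustration; none of these values come from the case study.

```python
# Illustrative computation of the two fairness metrics named above.
# y_true = whether the candidate was actually qualified, y_pred = AI shortlisting
# decision, group = candidate group ("A" or "B"). All values are made up.

def demographic_parity_gap(y_pred, group):
    """Difference in shortlisting rate between groups A and B."""
    def rate(g):
        picks = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(picks) / len(picks)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rate (recall on qualified candidates) between groups."""
    def tpr(g):
        hits = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(hits) / len(hits)
    return abs(tpr("A") - tpr("B"))

y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(y_pred, group))         # 0.25: group B is shortlisted more often
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.33: qualified group A candidates are missed more
```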
Q3. Which ethical frameworks should be implemented to ensure fairness in AI-driven hiring?
Ethical frameworks:
 Fairness, Accountability, and Transparency (FAT) principles.
 Equal Employment Opportunity (EEO) guidelines.
 Human-in-the-loop decision-making framework.

Case Study-7: AI in Healthcare Diagnosis


A hospital has implemented an AI-driven system to diagnose diseases based on patient medical
records. While the system has improved efficiency, doctors have observed that it makes inaccurate
diagnoses for patients from underrepresented backgrounds due to limited training data. This has led
to delayed treatments and incorrect prescriptions.
Q1. What are the potential risks of using biased AI models in healthcare?
Potential risks:
 Misdiagnosis leading to delayed or harmful treatments.
 Unequal quality of care for underrepresented groups.
 Loss of trust in healthcare systems.
 Legal and ethical liabilities for patient harm.
Q2. How does bioethics play a role in ensuring AI is used responsibly in medical applications?
Role of bioethics:
 Justice: Ensure equal treatment regardless of background.
 Beneficence: Promote patient well-being through accurate diagnosis.
 Non-maleficence: Avoid harm caused by biased AI decisions.
 Autonomy: Support informed patient consent when using AI tools.
Q3. Suggest ways in which the hospital can improve the fairness and accuracy of the AI system.
Improvements:
 Expand training datasets to include diverse and representative patient records.
 Conduct bias testing and continuous monitoring of AI performance (a monitoring sketch follows this list).
 Use human-AI collaboration—doctors verify AI recommendations.
 Regularly update and retrain the AI with new, inclusive data.
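One simple way to carry out the bias testing and continuous monitoring suggested above is to track diagnostic accuracy separately for each patient group. The Python sketch below uses a made-up evaluation set; the group labels and field names are assumptions for illustration only.

```python
# Hypothetical subgroup monitoring for a diagnostic model: accuracy per patient group.
# The evaluation records and group labels are made up for illustration only.

def accuracy_by_group(records):
    """Return {group: fraction of correct diagnoses} for each patient group."""
    stats = {}
    for r in records:
        total, correct = stats.get(r["group"], (0, 0))
        stats[r["group"]] = (total + 1, correct + int(r["predicted"] == r["actual"]))
    return {g: correct / total for g, (total, correct) in stats.items()}

evaluation = [
    {"group": "X", "actual": 1, "predicted": 1},
    {"group": "X", "actual": 0, "predicted": 0},
    {"group": "X", "actual": 1, "predicted": 1},
    {"group": "Y", "actual": 1, "predicted": 0},
    {"group": "Y", "actual": 0, "predicted": 0},
]

print(accuracy_by_group(evaluation))  # {'X': 1.0, 'Y': 0.5}: a large gap signals biased performance
```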

Case Study-8: AI-Powered Surveillance


A city has deployed AI-powered facial recognition cameras to enhance public security. While the
system helps identify criminals, citizens have raised concerns about privacy violations, wrongful
identifications, and potential misuse by authorities. Some reports indicate that the AI misidentifies
individuals from certain racial backgrounds more frequently.
Q1. Discuss the ethical concerns associated with AI-powered surveillance.
 Privacy Violation – Constant monitoring can track individuals without consent.
 Bias and Discrimination – Higher error rates for certain racial or ethnic groups.
 Wrongful Identification – Misidentification can lead to false arrests or harassment.
 Lack of Transparency – Citizens may not know how and where the system is used.
 Potential Misuse – Authorities could use it for political targeting or mass profiling.
Q2. How does the principle of autonomy relate to the use of AI in public spaces?
 Autonomy means individuals should have control over their personal information and freedom
of movement without unnecessary surveillance.
 Excessive AI monitoring can undermine personal freedom and limit people’s ability to act
without fear of being tracked.
Q3. Which ethical frameworks can help balance security needs with individual privacy rights?
 Utilitarianism – Weighing public safety benefits against potential harms to individuals.
 Rights-Based Ethics – Prioritizing fundamental rights such as privacy and freedom of
expression.
 Justice and Fairness – Ensuring the system is unbiased, treats all groups equally, and applies
safeguards consistently.

Case Study-9: AI and Fake News Detection


A social media company uses AI to detect and remove fake news articles. However, critics argue
that the AI system sometimes flags genuine news sources as fake while allowing misleading content
to remain. There are concerns about transparency, accountability, and potential censorship of free
speech.
Q1. How can AI be improved to differentiate between fake and real news accurately?
 Use verified datasets: Train AI with high-quality, fact-checked, and unbiased news data.
 Multisource verification: Cross-reference information from multiple trusted sources.



 Human-AI collaboration: Combine AI analysis with human fact-checkers for better accuracy.
 Regular updates: Continuously update the system to adapt to new types of misinformation and
language styles.
 Detect source credibility: Analyze the reliability and history of the source before flagging
content.
Q2. What ethical challenges arise when AI is used for content moderation?
 Bias and Discrimination – AI may reflect or amplify biases in training data, unfairly targeting
certain groups or viewpoints.
 Transparency Issues – Users may not know why content was flagged or removed.
 Accountability – It’s often unclear who is responsible for mistakes: the AI developers or the
platform.
 Free Speech Concerns – Over-filtering can suppress legitimate opinions and journalism.
 Inaccuracy – Risk of false positives (flagging true content as fake) and false negatives
(missing actual fake news).

Q3. How can the principles of justice and fairness be applied in designing AI-powered fact-checking
systems?
 Equal Treatment – Ensure the system applies the same rules to all users and sources,
regardless of identity or ideology.
 Bias Mitigation – Use diverse, balanced datasets and regularly audit for unfair outcomes.
 Right to Appeal – Provide a clear process for users to contest moderation decisions.
 Transparency – Clearly explain fact-checking criteria and AI decision-making logic.
 Continuous Improvement – Update models with verified corrections to avoid repeating
mistakes.

Case Study-10: AI in Autonomous Vehicles


A self-driving car company has launched its latest model, which relies entirely on AI for navigation
and decision-making. In an accident scenario where the AI must choose between hitting a pedestrian
or swerving into a barrier, ethical dilemmas arise regarding which life to prioritise.
Q1. What ethical challenges do autonomous vehicles pose in decision-making during accidents?
 Life-and-death decisions: AI must choose between harming passengers, pedestrians, or
others in unavoidable accidents.
 Value judgment dilemma: Deciding whose life to prioritize (e.g., child vs. adult, one person vs.
many).
 Lack of human empathy: AI lacks moral intuition and human context for decisions.
 Accountability issues: Uncertainty about who is responsible—the car manufacturer,
programmer, or AI itself.
 Bias and fairness: Risk of algorithmic bias affecting decisions based on age, gender, or
appearance.
 Public trust: Ethical failures may reduce user trust in autonomous vehicles.
Q2. How can utilitarian and deontological ethics be applied in programming self-driving cars?
Utilitarian Ethics (Consequences-Based):
 Goal: Minimize overall harm and maximize lives saved.
 Example: The car swerves into a barrier, sacrificing the passenger to save multiple
pedestrians.
 Challenge: Can seem cold and impersonal, raising questions about sacrificing some for many.
Deontological Ethics (Duty-Based):
 Goal: Follow moral rules, such as “do not intentionally harm others.”



 Example: The car avoids intentionally harming pedestrians, even if it risks higher overall
casualties.
 Challenge: May lead to more harm overall, but respects moral duties and rights.
Q3. What measures can be implemented to ensure AI in self-driving cars aligns with
ethical principles?
 Ethical programming standards: Develop globally accepted ethical guidelines for AI behavior
in vehicles.
 Transparency: Make AI decision-making processes explainable and understandable to the
public.
 Accountability frameworks: Clearly define who is responsible in the event of ethical failures or
accidents.
 Public involvement: Include community and expert input in ethical decision-making models.
 Bias testing and auditing: Regularly test algorithms to detect and remove bias.
 Legal regulations: Governments must enforce laws that ensure ethical compliance in
autonomous systems.

Case Study 11: AI used in Predicting Chronic Disease Risk Based on Lifestyle
A Health Tech company created an AI system that helps predict a person's risk for chronic diseases
like Type 2 diabetes, obesity, and heart disease. The system used information about people's
lifestyles, such as diet, exercise, sleep, and smoking, along with medical history and genetic data.
The goal was to help people identify health risks early and make changes to prevent these diseases.
But the system caused some problems that raised ethical concerns such as data privacy, bias and
discrimination, and lack of transparency. How can the Health Tech company overcome these problems
using Bioethics?
1. Data Privacy (Autonomy)
 Obtain informed consent before collecting personal and genetic data.
 Allow users to access, modify, or delete their data.
 Use data encryption and anonymization to protect user identity (a small pseudonymization sketch follows this answer).
2. Bias and Discrimination (Justice)
 Use diverse and representative datasets to reduce bias.
 Regularly audit algorithms for racial, gender, or socioeconomic bias.
 Ensure equal access to AI tools across all communities.
3. Lack of Transparency (Autonomy & Beneficence)
 Use explainable AI (XAI) to make predictions understandable.
 Provide clear, simple explanations of how the system works and its limitations.
 Be transparent about data usage and model decisions.
4. Preventing Harm (Non-Maleficence)
 Test system thoroughly to avoid false predictions and misdiagnosis.
 Provide balanced, non-alarming results to users.
 Avoid use of AI outputs for discriminatory purposes (e.g., insurance decisions).
5. Promoting Good (Beneficence)
 Ensure the AI promotes health awareness and prevention.
 Continuously improve accuracy and adapt based on user feedback.
 Offer personalized recommendations that genuinely benefit health.
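To make the encryption/anonymization point in item 1 concrete, here is a minimal Python sketch of pseudonymizing a lifestyle record before storage. The field names and salt value are illustrative assumptions, not part of the case study, and salted hashing is pseudonymization rather than full anonymization.

```python
# Hypothetical pseudonymization step applied before storing a lifestyle record.
# Field names and the salt are illustrative; a real system would manage the salt securely.
# Note: salted hashing is pseudonymization, not full anonymization.
import hashlib

def pseudonymize(record, salt="replace-with-a-secret-salt"):
    """Replace the direct identifier with a salted hash and drop contact details."""
    token = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:16]
    cleaned = {k: v for k, v in record.items() if k not in ("name", "phone")}
    cleaned["user_token"] = token
    return cleaned

user = {"name": "A. Kumar", "phone": "9999999999", "sleep_hours": 6, "smoker": False}
print(pseudonymize(user))  # {'sleep_hours': 6, 'smoker': False, 'user_token': '...'}
```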

Case Study 12: In a corporate setting, a multinational company is facing scrutiny over its
environmental practices, particularly regarding the disposal of industrial waste. The company has



historically prioritized profit maximization and cost-cutting measures, leading to practices that result
in environmental harm and negative impacts on local communities. As public awareness and
concern about environmental sustainability grow, stakeholders, including investors, customers, and
advocacy groups, are calling for the company to adopt more responsible and sustainable business
practices.
Q: Drawing from the case presented, analyze the ethical considerations surrounding the company's
environmental practices through the lens of value-based frameworks in ethics. (CBSE)
1. Environmental Ethics (Ecocentrism)
 Nature has intrinsic value, not just utility for human gain.
 Harmful waste disposal violates the moral duty to protect ecosystems.
 Ethical action: Shift toward sustainable production and eco-friendly waste management.
2. Virtue Ethics
 Focuses on the moral character of the company.
 Irresponsible environmental practices show greed, negligence, and lack of integrity.
 Ethical action: Cultivate virtues like responsibility, sustainability, and care for future
generations.
3. Ethics of Care
 Emphasizes relationships and responsibility toward affected communities.
 Ignoring community harm violates the duty of care and empathy.
 Ethical action: Engage in dialogue with local communities and mitigate negative impacts.
4. Stakeholder Theory (Value-Based Business Ethics)
 Ethical businesses must balance the interests of all stakeholders, not just shareholders.
 Environmental harm affects customers, employees, communities, and future generations.
 Ethical action: Adopt transparent, inclusive, and sustainable business practices.
5. Common Good Approach
 Prioritizes actions that support the well-being of society as a whole.
 Environmental damage undermines public health and shared natural resources.
 Ethical action: Invest in green technologies and corporate social responsibility (CSR).

Case Study 13: In a rural farming community, a group of small-scale farmers is faced with a dilemma
regarding the use of pesticides on their crops. The farmers have traditionally relied on chemical
pesticides to control pests and maximize crop yields. However, concerns have been raised about
the potential environmental and health impacts of pesticide use, including soil contamination, water
pollution, and adverse effects on human health. Additionally, neighboring communities and
environmental advocacy groups have expressed opposition to the widespread use of pesticides,
citing ecological damage and risks to biodiversity.
Q: Using the case provided, examine the ethical considerations surrounding pesticide use in the
agricultural sector, applying ethical frameworks to analyze the competing interests and values at
stake. (CBSE)
1. Utilitarianism (Greatest Good for the Greatest Number)
 Pros: Pesticides increase crop yield, reduce food insecurity, and support farmer income.
 Cons: Long-term harm to human health, environment, and biodiversity outweigh short-term
benefits.
 Ethical Action: Shift to safer pest control methods to maximize overall well-being.
2. Deontological Ethics (Duty-Based/Right-Based)
 Farmers have a duty to avoid causing harm to others and the environment.
 Pesticide use that pollutes water or harms nearby communities violates ethical duties.
 Ethical Action: Adopt practices that uphold moral responsibility, even if less profitable.
3. Environmental Ethics
 Ecocentrism: Ecosystems have intrinsic value beyond human use.
 Pesticide use disrupts soil health, pollinators, and biodiversity.
 Ethical Action: Use eco-friendly alternatives like organic farming.
4. Ethics of Care
 Emphasizes relationships and community well-being.
 Pesticides can harm the health of farmers’ families and neighbors.
 Ethical Action: Prioritize farming methods that show care for people and the environment.
5. Justice and Fairness
 Nearby communities bear the unfair burden of environmental pollution.
 Poorer farmers may lack access to safer alternatives.
 Ethical Action: Ensure fair access to sustainable farming support and protect vulnerable groups.



A. Multiple Choice Questions

1. A company is developing an Al chatbot for customer service. Before they start, they need to define
the scope of their project. What is the primary purpose of this step?
a) To collect customer feedback
b) To understand the goal and challenges of the chatbot
c) To train the chatbot using large datasets
d) To deploy the chatbot for customer interactions
2. A self-driving car needs to decide whether to stop suddenly, potentially causing an accident behind
it, or to keep moving and risk hitting a pedestrian. Which ethical framework best applies here?
a) Virtue-Based Ethics b) Utility-Based Ethics
c) Rights Based Ethics d) Sector-Based Ethics
3. In an AI-powered hiring system, some candidates with excellent skills are being rejected because
of biased past hiring data. This reflects an issue in which aspect of AI ethics?
a) Transparency b) Bias Mitigation
c) Security d) Data Privacy
4. A school is using Al to predict students' performance based on their previous test scores. What Al
domain is being used here?
a) Computer Vision b) Natural Language Processing
c) Statistical Data d) Supervised Learning
5. Which of the following is a key function of Natural Language Processing (NLP)?
a) Translating text into numerical data
b) Enabling interaction between humans and computers using natural language
c) Identifying objects in images d) Processing only structured data
6. An AI-based email system automatically sorts emails into Primary, Social, and Promotions folders. This is
an application of which AI domain?
a) Computer Vision b) Machine Learning
c) Natural Language Processing d) Robotics
7. Which of the following best describes bioethics?
a) Ensuring that Al in finance follows legal policies
b) Protecting individual rights in Al decision-making
c) Ethical guidelines for Al applications in healthcare
d) Preventing Al from replacing human workers
8. A drone equipped with Al is being used to monitor agricultural fields, identify pest infestations, and
detect irrigation issues. This is an example of which Al domain?
a) Computer Vision b) Statistical data
c) NLP d) Robotics
9. Which ethical framework focuses on ensuring Al decisions prioritize fairness and equality?
a) Rights-Based Ethics b) Virtue-Based Ethics
c) Utility-Based Ethics d) None
10. Why do Al models require regular evaluation?
a) To ensure they do not make biased or incorrect decisions
b) To allow Al to modify itself without human intervention



c) To stop Al from learning new patterns d) To eliminate the need for human oversight
11. What is the major challenge in Al-based surveillance systems?
a) Lack of cameras b) Privacy concerns and ethical use of data
c) Al's inability to process images d) Inability to track moving objects
12. A financial institution is using Al to detect fraudulent transactions. This Al system needs to be
transparent and accountable. Why is transparency important here?
a) So that customers can understand how Al makes decisions
b) To keep Al decisions secret
c) To ensure Al replaces human workers in banking
d) To improve the accuracy of transactions
13. A translation Al incorrectly interprets medical terms in a prescription, leading to confusion for a
non-English speaking patient. This highlights the importance of which Al ethical concern?
a) Bias Mitigation b) Explainability and Transparency
c) Data Security d) Al Model Training
14. Which of the following ethical frameworks ensures Al prioritizes actions based on positive
outcomes for the majority?
a) Rights-Based Ethics b) Sector-Based Ethics
c) Utility-Based Ethics(Utilitarianism) d) Bioethics
15. An Al-based social media platform is found to promote misleading information because it
generates higher user engagement. Which ethical principle is being violated here?
a) Transparency b) Virtue Ethics
c) Fairness d) Data Security
16. Which of the following is the first step in the Al Project Cycle?
a. Model Training b. Problem Scoping
c. Data Acquisition d. Deployment
17. Which ethical principle emphasises 'Do No Harm' in AI?
a. Beneficence b. Non-maleficence
c. Justice d. Autonomy
18. Which ethical framework focuses on maximising overall happiness and minimising suffering?
a. Virtue Ethics b. Utilitarianism
c. Deontology d. Bioethics
19. Which Al domain is responsible for spam filtering in emails?
a. Statistical Data b. Natural Language Processing
c. Computer Vision d. Data Acquisition
20. Which ethical principle ensures that Al decisions are fair and equal for all?
a. Justice b. Autonomy
c. Beneficence d. Non-maleficence
21. What is the primary purpose of Data Acquisition in an Al project?
a. Testing the Al model b. Gathering data for Al training
c. Making ethical decisions d. Deploying the Al model
22. Which Al domain is used for analysing images and videos?
a. Statistical Data b. Natural Language Processing
c. Computer Vision d. Reinforcement Learning



23. What is the main goal of Natural Language Processing (NLP)?
a. Processing numerical data b. Understanding human language
c. Identifying objects in images d. Making ethical decisions
24. Which of the following is NOT a step in the Al Project Cycle?
a. Model Selection b. Data Exploration
c. Ethical Decision Making d. Deployment
25. Which Al technology is commonly used for self-driving cars?
a. Natural Language Processing b. Reinforcement Learning
c. Computer Vision d. Supervised Learning
26. What is the purpose of Model Training in the Al Project Cycle?
a. Gathering data b. Testing Al models with training
c. Visualising datasets d. Deploying the Al system
27. Which type of ethical framework is specific to a particular industry like healthcare or finance?
a. Value-Based Framework b. Sector-Specific Framework
c. Rights-Based Framework d. Utility-Based Framework
28. What is the main goal of deploying an Al model?
a. To discard unwanted data b. To test Al algorithms
c. To implement Al in real-world applications d. To collect new datasets
29. Which of the following best describes 'Bias in Al'?
a. Al models making fair decisions b. Al being completely transparent
c. Al producing unfair or discriminatory results due to biased data
d. Al ensuring all decisions are correct
30. What is the main challenge with biased Al models?
a. They always perform better b. They require less data for training
c. They can reinforce existing societal inequalities
d. They make ethical decision-making easier
31. What is the primary purpose of AI project Cycle?
a. To provide a set of random steps of AI development.
b. To systematically plan, develop and deploy AI solutions
c. To focus only on the final implementation of AI systems
d. To perform data analysis without a structured plan
32. What is the main goal of the data acquisition stage in Al project?
a. To collect raw data for analysis and reference
b. To visualise data using statistical method
c. To test the Al model d. To deploy the model into production
33. What is the primary focus during the modelling phase of the Al project cycle?
a. Collecting additional data
b. Testing the final model in real-world scenarios
c. Deploying the model for practical use
d. Selecting and evaluating the best algorithm to build the model



34. Which of the following is NOT an application of computer vision?
a. Categorising photos in a smartphone b. Identifying faces in CCTV footage
c. Playing music based on mood d. Enabling self-driving cars
35. What is one of the primary uses of drones equipped with computer vision?
a. Capturing high-resolution videos for entertainment
b. Performing aerial inspections and monitoring large areas
c. Delivering packages to remote locations d. Controlling air traffic
36. Which of the following is NOT a real-time application of NLP?
a. Image recognition b. Chatbots
c. Plagiarism checker d. Sentiment analysis
37. What is one of the Sustainable Development Goals (SDGs) addressed by ethical Al frameworks?
a. Enhancing global trade efficiency b. Ensuring Al is trustworthy and safe for societal use
c. Eliminating all human labour d. Increasing Al production
38. Which ethical framework emphasises good character traits such as kindness and compassion?
a. Value-based ethical framework b. Virtue-based ethical framework
c. Rights-based ethical framework d. Utility-based ethical framework
39. What is the main goal of Computer Vision projects?
a. Translating audio data into visual descriptions
b. Converting digital data into analogue signals
c. Teaching machines to understand textual information
d. Converting digital visual data into computer-readable language
40. What do frameworks provide in the context of problem-solving?
a. Random solutions b. Step-by-step guidance
c. Legal advice d. Ethical justifications
41. Which of the following is not a typical stage in the Al Project Cycle?
a. Modelling b. Data exploration
c. Deployment d. Data generation
42. What is the main focus of the first stage of Al project cycle?
a. Data acquisition and exploration b. Problem scoping and developing a vision
c. Model evaluation d. Deployment of the AI system
43. Which W in 4W canvas explores the reason behind solving the problem?
a. What b. Why
c. Where d. Who
44. Which fields does NLP combine to achieve its goals?
a. Physics and computer science b. Linguistics and mathematics
c. Linguistics and computer science d. Psychology and computer science
45. How does Google Translate utilise NLP?
a. By analysing sentence structure and context to translate test
b. By categorising languages into specific types
c. By predicting the next word in a sentence
d. By checking for grammatical errors in documents



46. How do ethical frameworks ensure fairness in Al systems?
a. By prioritising economic status over other factors
b. By eliminating biases in training data and ensuring equal treatment
c. By removing complex algorithms from Al systems
d. By avoiding data collection

47. Which ethical framework is based on respecting and upholding individual rights?
a. Rights-based ethical framework b. Utility-based ethical framework
c. Sector-based ethical framework d. Virtue-based ethical framework

48. What does beneficence in bioethics involve?


a. Preventing individuals from making their own decisions
b. Ensuring healthcare resources are distributed equally
c. Avoiding harm at all costs
d. Promoting the welfare and well-being of others

49. What is the primary domain of application for Bioethics?


a. Agriculture b. Healthcare and life sciences
c. Information technology d. Environmental conservation

50. What is the purpose of defining the problem statement during the Problem Scoping stage in an
Al project cycle?
a. To collect data b. To understand the aim and objective of the project
c. To train the model d. To process data



B. Assertion & Reasoning
Answer the questions by selecting the appropriate option given below:
a) Both Assertion and Reasoning are true, and Reasoning correctly explains the Assertion.
b) Assertion is true, but Reasoning is false.
c) Both Assertion and Reasoning are true, but Reasoning is not the correct explanation of the
Assertion.
d) Assertion is false, but Reasoning is true.
1. (A): The AI Project Cycle ensures systematic AI development.
(R): The cycle consists of steps like data acquisition, model building, and evaluation. (A)
2. (A): The AI Project Cycle ensures structured and organized AI development.
(R): AI models should be built randomly to generate unpredictable outcomes. (B)
3. (A): AI in surveillance helps detect security threats.
(R): AI-powered surveillance should balance security with privacy concerns. (C)
4. Assertion: Bioethics ensures ethical AI applications in healthcare.
Reasoning: AI in healthcare should always prioritize accuracy over patient well-being. (B)
5. (A): AI in hiring should be free from bias.
(R): Bias in AI cannot be removed and should be accepted as part of automation. (B)
6. (A): The AI Project Cycle helps in systematically developing AI solutions. (A)
(R): It provides a structured framework that includes steps like problem scoping, data acquisition,
and model training.
7. (A): Computer Vision is used in self-driving cars for navigation. (B)
(R): Computer Vision enables AI to process and understand human speech.
8. (A): Ethical frameworks are essential in AI to ensure fairness and prevent bias. (A)
(R): AI systems can make unfair decisions if they are trained on biased data.
9. (A): Natural Language Processing (NLP) is used only for text-based applications.
(R): NLP allows AI to process and understand both spoken and written human language. (D)
10. (A): Bias in AI models can lead to discrimination in decision-making. (A)
(R): AI models learn from data, and if the data contains biases, the model may replicate them.
11. (A): Bioethics provides a framework to ensure decisions in healthcare are made hastily and respect
individual rights. (D)
(R): The primary goal of bioethics is to ensure that decisions in healthcare protect human dignity.
12. (A): Data Exploration involves analysing large datasets to uncover patterns, trends, and
relationships. (C)
(R): Visualisation tools such as charts, graphs, and plots are used to make the analysis of complex
data easier.
13. (A): Processing and analysing visual data is the final step in Computer Vision systems.
(R): After analysing visual data, machines can make decisions or take actions based on their
understanding. (C)
14. (A): Virtue-based ethical frameworks focus on building good character traits, such as kindness
and empathy, in decision-making.



(R): Virtue-based ethical frameworks prioritise maximising overall good and minimising harm through
ethical behaviour. (C)
15. (A): Ethics provide guidance in distinguishing right from wrong.
(R): Ethics consist of a set of values and morals that aid individuals in making moral judgments and
decisions. (A)
16. (A): Value-based frameworks in ethics provide guidance by focusing on fundamental ethical
principles and values.
(R): These frameworks reflect different moral philosophies guiding ethical reasoning and are
concerned with assessing the moral worth of actions. (A)

C. Fill in the blanks


1. The first step in the AI Project Cycle is problem scoping.
2. NLP is a branch of AI that enables computers to understand and process human language.
3. AI systems used in hiring should ensure fairness and transparency to prevent biased selection.
4. Bioethics is primarily applied in the healthcare industry.
5. Ethical frameworks are divided into Sector-Based and Value-Based frameworks.
6. Once the model is confirmed to meet the desired goals, it is ready for deployment, where it will be
integrated into the production environment.
7. CV is used in agriculture for tasks like monitoring crops, detecting pests, and estimating yields.
8. Predictive models, which are built from historical data patterns, can be used for forecasting,
healthcare, finance, and more.
9. NLP is focused on enabling machines to understand, analyse, and interact with humans through
natural language.
10. Machines and robots powered by AI can replace human workers, which could lead to
unemployment.
11. The principle of autonomy emphasises respecting an individual's right to make decisions about
their own body and life.
12. Data exploration is a crucial step that involves analysing large volumes of data to uncover
meaningful patterns, trends, and relationships.
13. Self-driving cars utilize CV to recognise objects such as lamp posts, pedestrian crossings, and
stop signs.
14. NLP helps machines understand and interact with humans through text and speech
15. One of the applications of statistical data is the recommendation system, which recommends
products, movies, or music to users.
16. The principle of non-maleficence is summarised as "do no harm" and emphasises avoiding harm
to individuals or communities.
17. Bioethics combines ideas from medicine, law, and philosophy to ensure healthcare decisions
are fair, respectful and protect everyone's rights.
18. Data acquisition stage involves gathering raw data, which is essential for referencing or
performing analyses that will guide the project.
19. Problem Scoping is a crucial step where the focus is on thoroughly understanding the problem,
considering the various factors that influence it, and determining an appropriate solution using AI
technology.
20. Chatbots are software applications that use NLP to communicate with humans using text or
speech.
21. In Computer Vision, AI systems are designed to interpret and analyse visual data, such as images
or videos.



D. State T for True or F for False statements
1. Ethical AI frameworks are only applicable in healthcare. F
2. The Al Project Cycle is a linear process without iteration. F
3. Natural Language Processing is used in email spam detection. T
4. Rights-Based Ethics ensures that Al decisions prioritize human dignity. T
5. Computer Vision only works with numerical data. F
6. The more data we have, the more reliable the analysis will be, leading to more accurate
predictions. T
7. Ethical frameworks are important to ensure AI development aligns with the goals of human
development. T
8. The types of data that can be used by NLP are text and video. F
9. Smartphones use NLP to categorise the photos on your phone under the different categories. F
10. Data science is implemented in e-commerce websites like Amazon and Flipkart. T
11. Ethical frameworks help minimise bias and ensure fair treatment for all. T
12. AI systems are capable of inheriting biases from training data, leading to discrimination or
unfair treatment. T
13. AI often relies on lots of personal data, maintaining its privacy and protection. F
14. With the emergence of Al and automation, there will be technology-driven societal changes. T
15. In retail stores, smart checkout systems use NLP to recognise products and streamline the
payment process by automating transactions. F

E. Match the following:


1. NLP a. Computer Vision 2
2. Face lock b. Healthcare 4
3. Weather Forecasting c. ChatGPT 1
4. Bioethics d. Problem scoping 5
5. 4W Canvas e. Statistical Data 3



CBSE PREVIOUS YEAR QUESTIONS
1. What is the purpose of the Evaluation stage of the AI project cycle? Discuss. CBSE 2022

2. Identify the incorrect statements from the following. CBSE 2023


(i) AI models can be broadly categorised into four domains.
(ii) Data sciences is one of the domain of AI model.
(iii) Price comparison websites are examples of data science.
(iv) The information extracted through data science can be used to make decision about it.
a. Only (iv) b. (iii) and (iv) c. Only (i) d. (ii) and (iii)

3. Explain any one example of AI bias. CBSE 2023

4. What is the significance of the AI project cycle? Also explain in detail how Data Acquisition is
different from Data Exploration. CBSE 2023

5. In the AI project cycle, which of the following represents the correct order of steps? CBSE 2024
a. Data Exploration, Problem Scoping, Modelling, Evaluation, Data Acquisition
b. Problem Scoping, Data Acquisition, Data Exploration, Modelling, Evaluation
c. Modelling, Data Acquisition, Evaluation, Problem Scoping, Data Exploration
d. Data Acquisition, Data Exploration, Problem Scoping, Modelling, Evaluation

6. What is the primary need for evaluating an AI model's performance in the AI Model Development
process? CBSE 2024
a. To increase the complexity of the model
b. To visualise the data
c. To assess how well the chosen model will work in future
d. To reduce the amount of data used for training

7. When a machine possesses the ability to mimic human traits, i.e., make decisions, predict the
future, learn, and improve on its own, it is said to have: CBSE 2024
a. Computational Skills b. Learning Capability
c. Artificial Intelligence d. Cognitive Processing

8. Give any four examples of applications of AI that we see around us. CBSE 2024

9. Assertion (A): When a machine is able to mimic human traits, it is said to be artificially intelligent.
Reason (R): A fully automatic washing machine is artificially intelligent. (B) CBSE 2025

10. Platforms such as Spotify, Facebook, Instagram, Amazon, Netflix, etc. show recommendations
on the basis of what you like. Which is the technology behind this? CBSE 2025
a. Human Intelligence b. Platform intelligence
c. Artificial Intelligence d. Application Intelligence

11. Whenever we want an AI project to be able to predict an output, we need to: CBSE 2025
a. first test it using the data b. first train it using the data
c. Both (a) and (b) d. Neither (a) nor (b)

12. Define the following with respect to AI Project Cycle: CBSE 2025
a. Data exploration b. Data feature

