
Innovative AI Applications for Computer Science Final Year Projects:
Addressing Real-World Challenges and Advancing Responsible AI
Executive Summary
The rapid evolution of Artificial Intelligence (AI) presents a fertile ground for impactful innovation,
particularly in addressing complex real-world problems. This report provides unique, AI-driven
project ideas for Bachelor of Computer Science final year students, focusing on applications that
are either novel, less implemented, or offer significant improvements over existing solutions.
The proposed concepts align with leading AI research trends for 2024-2025, emphasizing
Agentic AI, AI for Social Good (spanning healthcare, education, environmental sustainability,
and smart cities), and Green AI. Each idea is crafted to solve a tangible problem, demonstrating
product viability and technical feasibility within an academic context. A core emphasis is placed
on integrating ethical considerations, data privacy, human-in-the-loop design, and sustainability,
recognizing AI's growing socio-technical nature.

1. Introduction: The Evolving Landscape of Applied AI


Artificial Intelligence is undergoing a profound and rapid transformation, reshaping not only its
technical methodologies and topics but also the broader research community and working
environments. This dynamic evolution is driven by AI's increasingly pervasive integration into
daily life, underscoring its role as a socio-technical field of study.

1.1. AI's Role in Addressing Real-World Challenges

AI's capabilities are expanding at an unprecedented pace, leading to its embedding across a
multitude of industries and applications. Its transformative potential is evident in horizontal
applications such as content creation, workflow automation, and data analysis, as well as in
vertical domains like software engineering, sales, marketing, finance, fraud detection, and risk
management. This widespread adoption highlights AI's fundamental utility in adapting to diverse
data and problem structures, enabling innovation across various sectors.
Beyond commercial applications, themes such as AI ethics and safety, AI for social good, and
sustainable AI have become central to major AI conferences, reflecting a growing recognition of
AI's societal impact. This shift necessitates collaboration with experts from other disciplines,
including psychologists, sociologists, philosophers, and economists, to navigate the complex
implications of AI integration. The development of AI is no longer solely a technical endeavor; its
design and deployment must inherently consider human factors, ethical implications, and
broader societal consequences from the very outset. This expands the traditional understanding
of "technical feasibility" to encompass responsible deployment and user acceptance.
Specific domains where AI is already making a tangible difference include healthcare, where it
aids in early disease detection, diagnostic accuracy, and treatment planning, while also
alleviating administrative burdens. In education, AI facilitates personalized learning and adaptive
technologies. Environmental monitoring benefits from AI's capacity to analyze vast datasets,
and smart cities leverage AI for improved infrastructure, resource optimization, and more livable
environments. The ability of AI to generalize and adapt its core capabilities—such as pattern
recognition, prediction, and automation—to disparate data sources and problem structures
allows it to bridge traditional industry boundaries and foster the creation of entirely new product
categories.

1.2. The Imperative for Novelty in AI Applications

The demand for unique and innovative AI solutions is a direct consequence of the field's rapid
advancement. The sheer volume of AI research publications is increasing exponentially, and the
speed of innovation is such that immediate release of papers, even without peer review, has
become widely accepted. This dynamic environment creates an intense competitive landscape
where being first to market with a novel idea or a significant improvement is highly valued,
reflecting both academic and industry pressures.
For a final year project, this environment implies a need to select an idea that offers a clear and
defensible contribution. It encourages students to think beyond incremental enhancements and
to identify genuine gaps or opportunities for transformative applications, even if on a smaller
scale. Emerging technology trends for 2025 underscore this focus, highlighting "innovation
around Generative AI (GenAI) and AI agents". New categories like "agentic AI and
application-specific semiconductors" have recently been added to the list of top tech trends.
This indicates that while general AI capabilities continue to advance, a significant portion of
innovation lies in "applied AI" and "application-specific" solutions. Consequently, the novelty
often stems not from inventing entirely new algorithms, but from creatively applying existing or
emerging AI techniques to solve specific, previously unaddressed real-world problems or to
radically enhance current solutions within a particular domain. Students are thus encouraged to
identify specific pain points where current AI solutions are either absent, insufficient, or could be
fundamentally improved by leveraging recent AI advancements like agentic AI or specialized
generative models.

2. Current AI Trends and Emerging Paradigms (2024-2025)


The landscape of AI is continually reshaped by new research directions and technological
breakthroughs. Several key paradigms are currently driving innovation, offering rich
opportunities for novel applications.

2.1. Agentic AI and Multi-Agent Systems

The concept of AI agents, systems capable of autonomous action and sophisticated reasoning,
is transitioning from theoretical exploration to practical implementation. While AI reasoning and
agentic AI have been studied for decades, their scope has significantly expanded in light of
current AI capabilities and limitations. There is a critical and growing need for verifiable
reasoning in AI systems, particularly for autonomously operating AI agents, where correctness
and depth of reasoning are paramount.
Research interest in multi-agent reinforcement learning (MARL) in dynamic environments is surging, with
notable successes observed in complex tasks such as playing Go, video games, robotics, and
autonomous driving. These systems demonstrate the potential for AI entities to interact, learn,
and coordinate within intricate environments. The progression of agentic AI suggests a
fundamental shift from AI merely serving as a tool to AI functioning as a proactive partner.
Earlier AI applications often performed specific tasks upon command. Now, agentic AI systems
are designed to independently initiate actions, adapt to changing circumstances, and even
collaborate seamlessly with human counterparts. This implies a higher degree of autonomy and
a capacity for proactive problem-solving, especially in scenarios where human cognitive load is
substantial. For instance, agentic AI is being explored to reduce burnout in healthcare by
automating repetitive tasks like analyzing test results, cross-checking patient records for
prescription accuracy, and managing appointment scheduling. Similarly, in education, agentic AI
can address learning gaps by providing 24/7 personalized tutoring, grading assistance, and
troubleshooting support. Autonomous systems, including both physical robots and digital
agents, are moving beyond pilot projects into practical applications, learning, adapting, and
collaborating, with visions of them acting as virtual coworkers. JPMorgan's 2025 technology
report underscores this focus, highlighting innovation around AI agents, including their role in
agentic software development and security capabilities.
Despite impressive advancements in reasoning capabilities demonstrated by large pre-trained
systems like Large Language Models (LLMs), a significant challenge remains: guaranteeing the
correctness and depth of their reasoning, especially for autonomously operating agents. This
fundamental limitation of current large models presents a substantial opportunity for further
development. Projects could explore hybrid AI systems that combine the powerful pattern
recognition and generative abilities of LLMs with more traditional, verifiable formal reasoning
techniques. Integrating symbolic AI or constraint solvers to validate or refine LLM outputs in
critical decision-making contexts within agentic systems could provide the necessary
guarantees for reliable autonomous operation.
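The validation loop described above can be sketched in a few lines: an LLM drafts a plan, and a deterministic checker either approves it or returns violations for repair. The plan format, scheduling rules, and function names below are illustrative assumptions, not any specific system's API.

```python
# Sketch: validating an LLM-proposed plan with a deterministic rule checker
# before an agent acts on it. Plan format and rules are illustrative.

def propose_plan_llm():
    """Stand-in for an LLM call that drafts an appointment schedule."""
    return [
        {"patient": "A", "slot": 9},
        {"patient": "B", "slot": 9},   # conflict: same slot
        {"patient": "C", "slot": 11},
    ]

def validate(plan, open_hours=range(8, 17)):
    """Deterministic checks the LLM output must pass; returns violations."""
    violations = []
    seen = set()
    for item in plan:
        if item["slot"] not in open_hours:
            violations.append(f"{item['patient']}: slot outside opening hours")
        if item["slot"] in seen:
            violations.append(f"{item['patient']}: double-booked slot {item['slot']}")
        seen.add(item["slot"])
    return violations

plan = propose_plan_llm()
errors = validate(plan)
# In a full agentic system, any violations would be fed back to the LLM
# for repair before the plan is executed.
```

The key design point is that the generative component never acts directly: its output only takes effect after passing checks whose correctness can be argued independently of the model.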

2.2. Generative AI and its Broader Applications

Generative AI (GenAI) continues to be a primary driver of innovation, extending its influence
beyond content creation to impact various operational and strategic facets across industries. In
2024, the excitement surrounding GenAI persisted, demonstrating its transformative potential
across diverse sectors. It has become embedded in horizontal applications, including content
creation, workflow automation, and data analysis, and is increasingly integrated into vertical
applications for professionals in software engineering, sales, marketing, finance, fraud
detection, and risk management.
Emerging trends within GenAI include the ongoing debate between proprietary and open
models, the development of domain-specific models (e.g., for coding, image, and video
creation), and the increasing use of smaller models to enable broader deployment options with
optimized performance, particularly for mobile and edge devices. Retrieval Augmented
Generation (RAG) is also highlighted as a crucial technique for integrating proprietary data into
model responses, enhancing the relevance and accuracy of generated content.
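A minimal RAG loop can be illustrated as follows; production systems retrieve with learned vector embeddings, so the word-overlap scoring and prompt template here are simplifying assumptions.

```python
# Minimal Retrieval Augmented Generation (RAG) sketch: rank a small
# proprietary document store against the query, then build an augmented
# prompt for the generative model.
import re

DOCS = [
    "Refund requests must be filed within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
    "Enterprise plans include a dedicated account manager.",
]

def tokens(text):
    """Lowercased alphanumeric tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=1):
    """Return the k documents sharing the most tokens with the query."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How many days do I have to request a refund?", DOCS)
```

Swapping the overlap score for embedding similarity, and the document list for a vector database, yields the architecture described above without changing the control flow.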
However, the rapid development and deployment of powerful generative AI models come with
significant environmental consequences. The computational power required to train GenAI
models, which often comprise billions of parameters, demands staggering amounts of electricity,
leading to increased carbon dioxide emissions and pressure on electrical grids. This energy
demand continues during deployment and fine-tuning, as millions of users interact with these
models daily. Data centers, which house the necessary computing infrastructure, are major
contributors to this electricity consumption, with their global consumption projected to nearly
double by 2026. Furthermore, a substantial amount of water is required to cool the hardware in
these data centers, which can strain municipal water supplies and disrupt local ecosystems. The
surging demand for high-performance computing hardware, such as GPUs, also contributes
indirect environmental impacts through their manufacturing and transport processes.
The simultaneous growth of vast, general-purpose model training infrastructure in power-hungry
data centers and accelerating innovation "at the edge" with lower-power technology embedded
in various devices indicates a strategic tension. While large models offer broad capabilities,
smaller, specialized models provide optimized performance, broader deployment, and
potentially lower resource consumption. This suggests that projects could focus on developing
or fine-tuning smaller, domain-specific generative models for niche applications where efficiency
and localized deployment are critical, rather than attempting to build or use massive
general-purpose models. Such an approach directly addresses feasibility for a bachelor's project
and aligns with "Green AI" principles. The extensive environmental impact of GenAI, explicitly
identified as an "unsustainable path", underscores a critical dilemma. Any GenAI project should
ideally incorporate "Green AI" principles or, at minimum, acknowledge and discuss its
environmental footprint. Projects could even specifically focus on developing methods or tools to
measure, optimize, or mitigate the environmental impact of generative models, contributing to a
more sustainable AI future.

2.3. AI for Social Good: Key Domains

Leveraging AI for societal benefit has become a significant area of focus, extending beyond
purely commercial applications. AI ethics and safety, alongside AI for social good, are central
themes in major AI conferences. Researchers, such as those at Duke University, are actively
using AI tools to address critical societal problems including healthcare, criminal justice, fake
news detection, equitable allocation of public resources, environmental sustainability, and
energy management. The AI for Social Good (AI4SG) movement specifically aims to harness AI
and Machine Learning (ML) to achieve the United Nations Sustainable Development Goals
(SDGs).
For many AI4SG applications, conditions such as interpretability, transparency, morality, and
fairness are considered essential. This highlights that simply applying AI to a social problem is
insufficient; the manner in which it is applied, and the ethical safeguards integrated into its
design, are equally critical. However, a critique has emerged regarding AI4SG, suggesting it can
sometimes manifest as "technosolutionism" driven by large technology companies. This
perspective argues that such initiatives might inadvertently depoliticize datafication and obscure
underlying power relations, potentially serving corporate interests rather than genuinely
addressing societal needs.
The explicit emphasis on interpretability, transparency, morality, and fairness as "essential" for
AI4SG, coupled with the critique of "technosolutionism", underscores that ethical
considerations are not merely an add-on but an intrinsic requirement for impactful AI for social
good. Projects in this domain must integrate ethical considerations and fairness metrics into
their design and evaluation from the outset. This could involve developing explainable AI (XAI)
components, implementing bias detection and mitigation strategies, or establishing mechanisms
for community oversight to ensure the technology genuinely serves the public good and avoids
unintended harm. Furthermore, AI research is increasingly tied to collaboration with experts
from diverse disciplines, including psychologists, sociologists, philosophers, and economists.
This interdisciplinary requirement indicates that solving complex social problems with AI
demands more than just computer science expertise. Students should consider projects that
inherently require an understanding of the domain problem from a non-technical perspective,
perhaps through user research with domain experts, incorporating social science theories into
the AI design, or focusing on human-AI interaction in sensitive contexts.
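As one concrete example of a fairness metric that a project could build in from the outset, the sketch below computes the demographic parity difference: the gap in positive-outcome rates between groups. The data and metric choice are illustrative assumptions; real audits combine several complementary metrics.

```python
# Sketch of one common fairness check, demographic parity difference.
# A value of 0 means all groups receive positive outcomes at equal rates.

def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(outcomes_by_group):
    """Max minus min positive-decision rate across groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approved) for two demographic groups.
decisions = {
    "group_a": [1, 1, 1, 0],   # 75% approved
    "group_b": [1, 0, 0, 0],   # 25% approved
}
gap = demographic_parity_diff(decisions)   # 0.5: a large disparity
```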

3. Identifying Unaddressed Problems and Innovation Opportunities


The current landscape of AI applications, while vast, still contains significant gaps and
challenges that present unique opportunities for innovation. By systematically analyzing these
limitations, specific areas for impactful final year projects can be identified.
Table 2: Key Gaps and Opportunities in AI Application Domains

Application Domain       | Identified Gap/Challenge                                                                          | Opportunity for AI Innovation
-------------------------|---------------------------------------------------------------------------------------------------|------------------------------
Healthcare               | Skill erosion in clinicians due to AI over-reliance                                                 | AI for skill maintenance and continuous professional development
Healthcare               | Low AI adoption despite workforce shortages; regulatory complexity, data issues, lack of trust      | Agentic AI for administrative burden reduction; explainable AI for trust; solutions for low-resource settings
Education                | Digital literacy divide, lack of teacher AI literacy, limited access for underserved communities    | AI literacy platforms; AI as "co-intelligence" for teachers; accessible AI for diverse learners
Education                | Over-reliance on AI, diminished critical thinking, academic dishonesty                              | AI tools that encourage critical thinking and active learning, not shortcuts
Environmental Monitoring | High cost and low granularity of traditional methods; shortage of AI experts in the sector          | Hyper-local citizen science platforms; decentralized environmental intelligence
Environmental Monitoring | Environmental footprint of AI itself (energy, water, hardware)                                      | Green AI techniques for energy-efficient models and inference
Smart Cities             | "Sim-to-real" gap; lack of transparency and accountability; trust deficit                           | Human-in-the-loop AI with explainability and uncertainty quantification
Smart Cities             | Exacerbating social inequalities; data privacy; digital divide; lack of standardization             | Ethical Digital Twins for social impact assessment; inclusive citizen engagement platforms
Cross-cutting            | Algorithmic bias, ableist assumptions, lack of diverse representation in AI development             | "Ethics by Design"; inclusive AI development with lived experiences; bias mitigation frameworks
Cross-cutting            | Data privacy and security concerns in sensitive applications                                        | Robust data governance; anonymization techniques; user consent mechanisms
3.1. Gaps in Healthcare AI: Beyond Diagnostic Assistance

While AI has demonstrated immense promise in healthcare, particularly in improving early
disease detection, diagnostic accuracy, and treatment planning, a critical and often overlooked
challenge is emerging: the potential for skill degradation among healthcare professionals due to
over-reliance on AI systems. A recent study revealed a concerning trend where doctors' ability
to detect tumors declined by approximately 20% when AI support was removed, even among
highly experienced clinicians. This suggests an unconscious reliance on AI cues, potentially
leading to reduced motivation, focus, and responsibility in independent decision-making. This
presents a paradox: AI improves immediate performance but risks long-term human proficiency,
creating a human-factors and training challenge. Projects in this area should focus on AI
systems specifically designed to prevent skill degradation, perhaps through adaptive training
modules, "challenge" modes, or AI that provides "scaffolding" rather than direct answers,
thereby actively encouraging human engagement and critical thinking. This moves beyond
simple diagnostic tools to AI functioning as a perpetual professional development coach.
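One way such a "challenge mode" could be prototyped is sketched below: the clinician commits to an independent reading before the AI cue is (sometimes) revealed, and independent agreement is logged so skill drift can be surfaced over time. All names and the reveal policy are illustrative assumptions; no real clinical system is implied.

```python
# Sketch of a "challenge mode" diagnostic aid that keeps the clinician's
# independent judgment exercised instead of always showing the AI cue.
import random

class ChallengeModeAssistant:
    def __init__(self, reveal_probability=0.7, seed=None):
        # On some cases the AI cue is withheld entirely, so the clinician
        # must still read the case unaided.
        self.reveal_probability = reveal_probability
        self.rng = random.Random(seed)
        self.log = []

    def review_case(self, clinician_answer, ai_answer):
        """Clinician answers first; return the AI cue only if revealed."""
        revealed = self.rng.random() < self.reveal_probability
        self.log.append({
            "agreed": clinician_answer == ai_answer,
            "ai_revealed": revealed,
        })
        return ai_answer if revealed else None

    def independent_agreement_rate(self):
        """Proportion of cases where the unaided reading matched the AI."""
        return sum(e["agreed"] for e in self.log) / len(self.log)
```

A declining agreement rate on withheld cases would be an early signal of exactly the skill erosion described above, turning the assistant into a monitoring tool as well as a diagnostic one.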
Despite its potential, healthcare has been "below average" in its adoption of AI compared to
other industries, even with 4.5 billion people lacking access to essential services and an
anticipated shortage of 11 million health workers by 2030. This suggests that current AI
solutions are not fully addressing the root causes of these adoption gaps, which include
regulatory complexities, a lack of widespread evidence for use (e.g., in radiology), insufficient
incentives for adoption, fragmented and inconsistent data, and significant privacy concerns.
Furthermore, standard Large Language Models (LLMs) often fail to provide sufficiently relevant
or evidence-based medical answers to clinicians. Projects could explore AI solutions inherently
designed for low-resource settings, address data fragmentation through novel integration
methods, or build trust by focusing on explainability and human oversight. Agentic AI could also
play a crucial role in automating administrative burdens, thereby freeing up human professionals
and directly addressing burnout and workforce shortages.

3.2. Bridging Educational Divides with AI

AI offers significant benefits in education, including personalized learning, adaptive curricula,
intelligent tutoring systems, smart content development, automated grading, and student
support chatbots. Agentic AI, in particular, can provide 24/7 responsive support, personalized
tutoring plans, and assistance with grading and troubleshooting. However, significant learning
gaps persist, with 21% of public schools having multiple teaching vacancies, only 2% of
C-average students receiving high-quality tutoring, and 44% of students falling behind in at least
one subject. A considerable "digital divide" exists, where "AI-Excluded" populations in low- and
middle-income countries lack basic digital resources, connectivity, and the necessary digital and
AI literacy to effectively utilize AI-enhanced learning. This reality risks exacerbating existing
educational inequalities.
Challenges in AI integration include potential over-reliance on AI, which can diminish critical
thinking skills, data privacy risks, and issues of academic dishonesty. AI may also lack the
nuanced understanding, creativity, and empathy inherent in human cognition. Furthermore,
algorithmic bias and ableist assumptions in AI systems can perpetuate harmful stereotypes and
limit access for people with disabilities. The research highlights that without "capable and
digitally literate teachers" and an "enabling digital ecosystem" (including affordable internet,
devices, and infrastructure), AI-enhanced learning remains a "distant concept" for many. This
indicates that merely providing AI tools is insufficient; the foundational digital literacy required to
use them effectively is often missing. Projects could therefore focus on developing AI-powered
platforms specifically designed to teach AI literacy and digital skills to underserved communities
and educators. Such platforms would involve adaptive learning for digital skills, simplified
interfaces, and content that explains AI concepts accessibly, directly addressing this
foundational gap.
There is also a tension between AI's potential to automate educational tasks and the risk of it
replacing human interaction or stifling critical thinking. The ideal scenario is for AI to function as
a "co-intelligence" for teachers, boosting their productivity without undermining their crucial
human roles. Projects should prioritize AI applications that augment, rather than replace, human
educators and student cognitive processes. This could involve AI tools that provide data-driven
insights to teachers for tailored interventions, or AI tutors that encourage problem-solving and
critical inquiry instead of simply providing answers, ensuring students genuinely "learn, not
shortcut".

3.3. Advancing Environmental Sustainability through AI

AI offers powerful tools for environmental sustainability, capable of enhancing environmental
monitoring (through satellite imagery and sensors), optimizing energy consumption (via smart
grids and building management systems), advancing sustainable agriculture (through precision
farming), facilitating efficient waste management, supporting sustainable urban development,
predicting climate change impacts, and promoting a circular economy.
However, the rapid growth of generative AI presents a critical challenge: its own significant
environmental footprint. The computational power required for training, deploying, and
fine-tuning large generative AI models demands staggering amounts of electricity, leading to
increased CO2 emissions and pressure on electrical grids. Data centers' electricity consumption
is projected to nearly double by 2026. Furthermore, substantial water is needed to cool the
hardware, straining municipal supplies, and the manufacturing of high-performance hardware
(GPUs) carries indirect environmental impacts. This creates a clear contradiction: AI is a
powerful tool for environmental sustainability, yet its own growth is environmentally
unsustainable. This critical dilemma must be resolved for AI to truly be a net positive for the
environment. Projects should not only focus on applying AI to environmental problems but also
on making AI itself more sustainable. This means incorporating "Green AI" principles into the
project design, such as optimizing model size, training efficiency, or exploring edge computing
for lower power consumption. A project could even focus solely on developing Green AI
techniques or tools.
Challenges in traditional environmental monitoring include high costs, time-intensive
procedures, and a lack of specialized AI experts in the environmental sector, as well as issues
related to data access, control, and privacy. These limitations often result in a narrow focus and
insufficient granularity of environmental data. AI's capacity to process vast amounts of data from
diverse sources, including IoT sensors, suggests an opportunity for more distributed and
accessible monitoring. Projects could explore citizen science platforms or low-cost IoT
deployments combined with AI for hyper-local environmental monitoring, democratizing data
collection and empowering communities. This approach would address the expert shortage and
data access challenges by leveraging collective intelligence and distributed sensing. "Green AI"
aims to mitigate these impacts by developing environmentally sustainable AI models and
practices through energy-efficient hardware (e.g., Edge AI, low-power GPUs/TPUs), algorithmic
optimizations (e.g., lightweight architectures, model pruning, low-precision computation, transfer
learning), and leveraging renewable energy sources for data centers. Tools like Carbontracker
are also emerging to estimate the energy consumption of AI models.
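The kind of estimate such tools produce can be approximated with a back-of-envelope calculation; the GPU power draw, utilization, and grid carbon intensity below are illustrative assumptions, not measurements.

```python
# Back-of-envelope estimate of training energy and CO2 emissions, in the
# spirit of tools like Carbontracker. All default parameters are rough
# illustrative figures; real tools measure hardware draw directly.

def training_footprint(gpu_count, hours, watts_per_gpu=300,
                       utilization=0.8, kg_co2_per_kwh=0.4):
    """Return (energy in kWh, emissions in kg CO2) for a training run."""
    kwh = gpu_count * watts_per_gpu * utilization * hours / 1000
    return kwh, kwh * kg_co2_per_kwh

# e.g. fine-tuning a small model on 4 GPUs for 24 hours
energy_kwh, co2_kg = training_footprint(4, 24)
```

Even this crude model makes the Green AI trade-offs concrete: halving training time or moving to a lower-carbon grid region shows up immediately in the estimate.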

3.4. Enhancing Smart City Functionality and Inclusivity

Smart cities leverage AI for enhanced efficiency in areas like traffic flow management, energy
optimization, waste management, infrastructure, and urban planning through digital twins.
Singapore and Barcelona are notable examples, utilizing AI for real-time traffic management,
public transport optimization, and comprehensive urban simulations.
However, significant hurdles exist in real-world deployment. A key challenge is the "sim-to-real"
gap, where AI systems perform well in simulations but struggle with the complexities of
real-world cities, such as broken sensors or unpredictable traffic patterns. A major concern is
the lack of transparency in AI decisions; city officials express skepticism about automated
systems if they cannot understand why a decision was made or who is responsible if something
goes wrong. This points to a fundamental "trust deficit." If citizens and administrators do not
understand or trust AI decisions, widespread adoption will be limited, regardless of technical
prowess. Projects should prioritize "human-in-the-loop" AI systems for smart cities, where AI
provides suggestions but human experts retain final decision-making authority. This also
necessitates developing explainable AI (XAI) components and uncertainty quantification to build
confidence and ensure responsible deployment in high-stakes urban environments.
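A minimal human-in-the-loop gate of this kind might look like the following sketch: the model's action is applied automatically only above a confidence threshold, and is otherwise escalated to a human operator together with an explanation. The threshold and record format are assumptions for illustration.

```python
# Sketch of a human-in-the-loop gate for a smart-city decision: automate
# only high-confidence actions; escalate uncertain cases with an
# explanation so the human retains final authority.

def route_decision(action, confidence, explanation, threshold=0.9):
    if confidence >= threshold:
        return {"status": "auto", "action": action}
    return {
        "status": "escalated",        # a human makes the final call
        "suggested_action": action,
        "confidence": confidence,
        "explanation": explanation,   # XAI output shown to the operator
    }

# High confidence: applied automatically.
r1 = route_decision("extend_green_phase", 0.97, "queue length above p95")
# Uncertain: deferred to a traffic engineer.
r2 = route_decision("close_lane", 0.62, "sensor disagreement on flow rate")
```

The escalation record is where explainability and uncertainty quantification earn their keep: the operator sees not just the suggestion but why it was made and how sure the model is.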
Other challenges include high initial implementation costs, the need for significant infrastructure
upgrades, and the complexity of system integration. Data privacy, ethical dilemmas,
cybersecurity threats, the digital divide, fragmented technologies, and a lack of standardization
further impede progress. Language bias and unequal access to technology also risk
exacerbating existing inequalities. While current smart city initiatives often focus on efficiency,
the concerns about exacerbating social inequalities, data privacy, and the digital divide highlight
a need for a broader, human-centric vision, as exemplified by Barcelona's "15-minute city"
concept. Projects could explore AI applications that explicitly aim for social equity and inclusivity
in urban planning, rather than solely optimization. This might involve developing AI models that
simulate the social impact of urban changes, identify underserved areas for resource allocation,
or create accessible citizen engagement platforms that bridge language and digital literacy
barriers.

3.5. Addressing AI's Own Environmental Footprint (Green AI)

The environmental impact of AI development and deployment is a critical, cross-cutting concern.


The massive energy and water demands associated with training and deploying large
generative AI models are widely recognized as unsustainable. This recognition places a direct
responsibility on AI developers and researchers to consider not only what AI can do, but what it
should do sustainably.
Green AI is an emerging paradigm that aims to develop environmentally sustainable AI models
and practices while maintaining high performance and efficiency. It focuses on reducing the
carbon footprint of machine learning through several key strategies:
●​ Energy-efficient hardware: This includes the use of low-power GPUs and TPUs, as well
as embracing Edge AI and decentralized computing to reduce reliance on
energy-intensive cloud-based infrastructure by performing computations locally.
●​ Algorithmic optimizations: Techniques such as developing lightweight architectures,
knowledge distillation, model pruning, low-precision computation, and leveraging transfer
learning from pre-trained models can significantly reduce computational complexity and
energy consumption during training and deployment.
●​ Renewable energy integration: Shifting data centers that power AI operations to run on
green energy sources like wind and solar power is a critical step.
●​ Energy-measurement tooling: Tools are also emerging to calculate and predict the energy
consumption of AI algorithms, such as Carbontracker, which helps estimate the footprint of
deep learning models.
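Magnitude pruning, one of the algorithmic optimizations listed above, can be illustrated in a few lines; frameworks apply the same idea per layer to weight tensors, so this pure-Python version is only a sketch.

```python
# Sketch of magnitude pruning: zero out the fraction of weights with the
# smallest absolute values, so sparse kernels can skip them at inference
# time and reduce compute (and therefore energy) per prediction.

def magnitude_prune(weights, sparsity=0.5):
    """Return weights with the smallest-|w| fraction set to zero."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold at the n_prune-th smallest magnitude (ties may over-prune).
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, -0.01, 0.7, 0.02], sparsity=0.5)
# The three smallest-magnitude weights are now exact zeros.
```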
Every AI project, regardless of its primary domain, should ideally consider its computational
efficiency and potential environmental impact. A dedicated project in this area could involve
developing novel algorithms for energy-efficient training or inference, creating tools for carbon
footprint estimation of AI models, or exploring federated learning approaches to reduce
centralized compute needs. This proactive approach to sustainability adds significant value to
any AI solution.
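A single round of federated averaging (FedAvg) can be sketched as follows; the one-step "local training" is a toy stand-in for real on-device optimization, but the communication pattern is the one that reduces centralized compute.

```python
# Sketch of federated averaging (FedAvg): clients train locally and only
# model weights travel to the server; raw data never leaves the device.

def local_step(weights, data, lr=0.1):
    """Toy local update: nudge each weight toward the client's data mean."""
    mean = sum(data) / len(data)
    return [w + lr * (mean - w) for w in weights]

def fedavg(global_weights, client_datasets):
    """One communication round: average the clients' updated models."""
    updates = [local_step(global_weights, d) for d in client_datasets]
    n = len(updates)
    return [sum(u[i] for u in updates) / n
            for i in range(len(global_weights))]

clients = [[1.0, 1.0], [3.0, 3.0]]      # data stays on each client
new_weights = fedavg([0.0, 0.0], clients)
```

Beyond the privacy benefit, the server only ever aggregates small weight vectors, which is what makes the approach attractive from a Green AI standpoint.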

3.6. Ethical and Accessibility Considerations as Innovation Drivers

Ethical challenges and accessibility gaps in AI are not simply problems to be solved; they
represent rich areas for innovative project development that can lead to more robust,
trustworthy, and inclusive AI systems. Ethical AI development and safety are central themes in
major AI conferences. Key ethical considerations include privacy, confidentiality, informed
consent, bias, fairness, transparency, accountability, autonomy, human agency, and safety.
Non-transparent reasoning from AI systems can undermine user confidence and trust.
Bias in AI is a complex issue that extends beyond mere technical fixes, requiring a broader
assessment involving business leaders and cross-functional teams. Algorithmic bias and ableist
assumptions can perpetuate harmful stereotypes and limit access for people with disabilities,
often stemming from biased datasets and a lack of diverse perspectives in the AI development
process. AI systems frequently assume a "one-size-fits-all" model, overlooking the diverse
needs of users. Privacy concerns are particularly significant in sensitive applications such as
healthcare, smart cities, and mental health.
To mitigate these issues, an "ethics by design" approach is critical. This means integrating
ethical principles from the initial problem formulation through data collection, model
development, and deployment. This approach builds trust and ensures the AI system serves its
intended purpose equitably and responsibly. Mitigation strategies include designing with
empathy, using user personas and journey mapping, and, crucially, involving people with lived
experiences (e.g., disabled individuals, minority communities) in the design and deployment
process. Proactively addressing bias through thorough testing and ensuring diverse
representation in training data is also essential. Examples of existing accessibility tools
leveraging AI include applications for visual impairment (e.g., Be My Eyes), live captions and
transcripts for hearing impairment, AI assistants for digital support, and simplified language
translation.
The current shortcomings of AI for people with disabilities stem from a lack of diverse
representation in development and data. This highlights that designing for accessibility from the
start can lead to more robust and universally beneficial AI systems. Projects could focus on
developing AI tools that are inherently inclusive, perhaps by incorporating diverse data, building
adaptive interfaces for various needs, or creating AI systems that actively learn from and adapt
to individual user preferences and abilities, rather than assuming a "norm." This aligns with the
principle of designing with people with disabilities, not just for them. Furthermore, human
oversight and decision-making are crucial, especially in sensitive contexts like mental health,
where AI insights should remain advisory rather than directive.
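The thorough bias testing recommended above can begin with simple group-level metrics. The sketch below computes the demographic parity gap of a binary classifier's outputs; the group labels and predictions are synthetic, invented purely for illustration:

```python
# Minimal demographic-parity check for a binary classifier's outputs.
# Group labels and predictions below are synthetic illustrations.

def selection_rates(predictions, groups):
    """Positive-prediction rate per group."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups).values()
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" is selected at 0.75, group "b" at 0.25 → gap of 0.5.
print(demographic_parity_gap(preds, groups))
```

A non-zero gap is not automatically unfair, but a large one is a signal to involve the cross-functional review the text calls for.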

4. Proposed Unique AI Project Ideas for Computer Science


The following project ideas are designed to address identified gaps and leverage emerging AI
paradigms, offering unique opportunities for a final year Bachelor of Computer Science project.
Table 1: Overview of Proposed AI Project Ideas
Project 1: Adaptive AI Co-Pilot for Clinical Skill Maintenance
●​ Primary Domain: Healthcare
●​ Problem Addressed: AI-induced skill erosion in medical professionals
●​ Core AI Techniques: Reinforcement Learning, NLP, Computer Vision, Multimodal AI agent
●​ Key Innovation/Uniqueness: AI as an active skill preservation and enhancement agent
●​ Potential Impact: Safeguards human expertise, improves long-term clinician competency

Project 2: AI-Powered "Digital Literacy Bridge"
●​ Primary Domain: Education
●​ Problem Addressed: Digital/AI literacy divide in underserved communities
●​ Core AI Techniques: NLP, Adaptive Learning, Reinforcement Learning, Explainable AI
●​ Key Innovation/Uniqueness: Teaches AI literacy responsibly in resource-constrained environments
●​ Potential Impact: Reduces educational inequality, fosters informed AI engagement

Project 3: Hyper-Local Environmental Monitoring & Citizen Engagement (Green AI)
●​ Primary Domain: Environment
●​ Problem Addressed: Cost/granularity issues in monitoring, AI's environmental footprint
●​ Core AI Techniques: Time-series Analysis, Computer Vision, Predictive Modeling, Anomaly Detection, Green AI Optimization
●​ Key Innovation/Uniqueness: Citizen-driven monitoring with inherent sustainability
●​ Potential Impact: Empowers communities with real-time data, demonstrates sustainable AI

Project 4: AI-Driven "Ethical Digital Twin" for Urban Planning
●​ Primary Domain: Smart Cities
●​ Problem Addressed: Social inequality, bias, trust deficit in urban AI
●​ Core AI Techniques: Multi-Agent Systems, Formal Reasoning/Symbolic AI, Predictive Modeling, Explainable AI
●​ Key Innovation/Uniqueness: Explicit ethical reasoning and social impact simulation in urban planning
●​ Potential Impact: Leads to equitable urban development, enhances public trust
4.1. Idea 1: Adaptive AI Co-Pilot for Clinical Skill Maintenance and Upskilling

4.1.1. Problem Statement and Current Limitations:

The growing integration of AI in healthcare, while enhancing diagnostic efficiency and detection
rates, presents a significant risk of "skill erosion" among healthcare professionals. Research
indicates that doctors' independent diagnostic abilities can decline by as much as 20% when AI
support is removed, even for highly experienced clinicians, suggesting an unconscious
over-reliance on AI cues. This issue is particularly critical for trainees who might become overly
dependent on AI before fully mastering fundamental diagnostic and observational skills. Current
AI applications predominantly focus on providing diagnostic assistance or automating
administrative tasks, rather than actively preserving or enhancing human cognitive and
observational skills.

4.1.2. Proposed AI-Driven Solution (Core AI Techniques):

The proposed solution involves developing an agentic AI system designed to function as a dynamic, adaptive "co-pilot" for clinicians within simulated diagnostic or procedural
environments. This AI system would refrain from providing direct answers or diagnoses. Instead,
it would observe the clinician's decision-making process in real-time, identify patterns indicative
of potential over-reliance on AI outputs, and generate personalized prompts or "nudges" to
encourage critical thinking, independent observation, and active recall of medical knowledge.
The core AI techniques for this system would include:
●​ Reinforcement Learning: To enable the AI to learn and optimize its feedback strategies
based on the clinician's performance and skill retention outcomes. The system would
adapt its prompts to maximize skill improvement, not just task completion.
●​ Natural Language Processing (NLP): For creating interactive prompts, facilitating
debriefing sessions, and understanding clinician queries or responses.
●​ Computer Vision: To analyze simulated medical images (e.g., X-rays, MRI scans,
colonoscopy images) and track the clinician's visual focus and diagnostic process within
the simulation.
●​ Multimodal AI: Potentially to integrate various data streams (visual, verbal, procedural) to
gain a holistic understanding of the clinician's actions in the simulated environment.
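The reward design this implies, rewarding the clinician's unaided competence rather than task completion alone, can be sketched in a few lines. The weights and the "unaided accuracy" and "reliance" signals below are illustrative assumptions, not a validated formulation:

```python
# Sketch of a reward signal that values skill retention over task
# completion. Weights and input signals are illustrative assumptions.

def copilot_reward(task_completed: bool,
                   unaided_accuracy: float,
                   reliance_events: int,
                   w_task: float = 0.2,
                   w_skill: float = 0.7,
                   w_reliance: float = 0.1) -> float:
    """Reward for the co-pilot agent after one simulated case.

    unaided_accuracy: clinician's accuracy with AI hints withheld (0..1).
    reliance_events: times the clinician deferred to an AI cue without
                     independent verification.
    """
    return round(w_task * float(task_completed)
                 + w_skill * unaided_accuracy
                 - w_reliance * reliance_events, 3)

# A completed case with strong independent performance scores higher
# than a completed case driven by over-reliance on AI cues.
print(copilot_reward(True, 0.9, 0))  # 0.83
print(copilot_reward(True, 0.4, 3))  # 0.18
```

The key design choice is that `w_skill` dominates `w_task`, so the agent's nudging policy is optimized toward durable skill, not throughput.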

4.1.3. Uniqueness and Innovation (Why it's novel/improved):

This project innovates by fundamentally reorienting the role of AI in medical training and
continuous professional development. Instead of AI serving as a primary diagnostic tool, it
becomes an active skill preservation and enhancement agent. Unlike existing AI training
modules that might offer static learning paths or pre-programmed simulations, this "co-pilot"
dynamically adapts to a clinician's real-time performance, specifically targeting and mitigating
the identified risk of AI-induced skill degradation. This approach embodies the "balanced
approach—where AI supports rather than replaces human judgment", ensuring that human
expertise remains central and resilient alongside technological advancements.

4.1.4. Potential Impact and Product Viability:

The impact of such a system would be significant, directly addressing a critical emerging
challenge in healthcare by safeguarding and enhancing human expertise, which is paramount
for patient safety and the delivery of high-quality care. It can substantially improve the long-term
competency of medical professionals, particularly during their formative training years. As a
product, it holds high viability for medical schools, teaching hospitals, and organizations focused
on continuous professional development. It offers a unique value proposition by ensuring a
resilient, skilled human workforce that can effectively collaborate with AI, rather than being
diminished by it.

4.1.5. Technical Feasibility and Key Challenges for a Bachelor's Project:

Feasibility: The project can be realistically scoped by focusing on a specific medical domain,
such as interpreting X-rays for bone fractures or analyzing colonoscopy images for
pre-cancerous growths. Developing a simplified simulated environment for clinician interaction is
achievable. Leveraging existing open-source medical image datasets and readily available
reinforcement learning frameworks would be crucial for project implementation.
Challenges: Key challenges include acquiring or generating sufficiently realistic and diverse
simulated medical data, designing effective reinforcement learning reward functions that
incentivize genuine skill improvement (as opposed to merely task completion), ensuring the AI's
prompts are genuinely helpful and non-distracting, and rigorously evaluating the system's
long-term impact on human skill retention. Ethical considerations around data privacy (even with
simulated data) and the psychological impact of AI-generated feedback on clinicians would
require careful consideration and design.

4.2. Idea 2: AI-Powered "Digital Literacy Bridge" for Underserved Communities

4.2.1. Problem Statement and Current Limitations:

A significant "digital divide" persists globally, where underserved communities often lack access
to basic digital resources, reliable connectivity, and, crucially, the foundational digital and AI
literacy required to effectively benefit from AI-enhanced learning opportunities. This disparity
exacerbates existing educational inequalities and prevents the effective integration of AI in
classrooms and communities where its potential impact is most needed. Current AI education
tools frequently assume a baseline of digital literacy and access, or they inadvertently
perpetuate biases by failing to account for diverse learning needs, cultural backgrounds, and
socio-economic realities.

4.2.2. Proposed AI-Driven Solution (Core AI Techniques):

The proposed solution involves developing an accessible, multimodal agentic AI platform specifically designed to teach foundational digital and AI literacy. The platform would prioritize
intuitive voice input and output, alongside simplified graphical user interfaces, to cater to users
with varying levels of digital familiarity. The AI would dynamically adapt to the user's existing
knowledge level, providing context-sensitive explanations of core AI concepts (e.g., how AI
learns, basic principles of data privacy, understanding algorithmic bias) through interactive
dialogues and practical, guided exercises. To ensure broader deployment in low-resource
settings, the system would utilize smaller, domain-specific AI models optimized for mobile or
edge devices, minimizing computational requirements and energy consumption.
The core AI techniques for this system would include:
●​ Natural Language Processing (NLP): For enabling a conversational interface,
simplifying complex explanations, and understanding user queries in natural language.
●​ Adaptive Learning Algorithms: To tailor the content delivery, pacing, and complexity of
lessons to each individual user's progress and learning style.
●​ Reinforcement Learning: Potentially to guide users through optimal skill acquisition
pathways, adapting based on their engagement and comprehension.
●​ Explainable AI (XAI): A crucial component for transparently demonstrating AI concepts,
helping users understand how AI works and its limitations, thereby building trust and
promoting responsible use.
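A minimal sketch of the adaptive pacing such a platform might use is shown below, tracking a learner's mastery with an exponential moving average of recent correctness. The thresholds, step size, and difficulty labels are illustrative assumptions:

```python
# Minimal adaptive-pacing sketch: adjust lesson difficulty from a running
# estimate of learner mastery. Thresholds and step sizes are assumptions.

class AdaptivePacer:
    def __init__(self, mastery: float = 0.5, lr: float = 0.3):
        self.mastery = mastery   # estimated probability of a correct answer
        self.lr = lr             # how quickly the estimate tracks new evidence

    def update(self, correct: bool) -> None:
        # Exponential moving average of recent correctness.
        self.mastery += self.lr * (float(correct) - self.mastery)

    def next_difficulty(self) -> str:
        if self.mastery > 0.8:
            return "advance"     # introduce a harder concept
        if self.mastery < 0.4:
            return "review"      # revisit prerequisites, simpler examples
        return "practice"        # stay at the current level

pacer = AdaptivePacer()
for answer in [True, True, True, True]:
    pacer.update(answer)
print(pacer.next_difficulty())  # mastery has risen above 0.8 → "advance"
```

In a full system the scalar mastery estimate would be replaced by a per-concept model, but the control loop (observe, update estimate, select content) stays the same.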

4.2.3. Uniqueness and Innovation (Why it's novel/improved):

This project is unique because its primary objective is to build AI literacy itself within
marginalized populations, directly addressing the needs of the "AI-Excluded". It transcends the
conventional use of AI to teach traditional subjects; instead, it focuses on teaching about AI
responsibly. The emphasis on multimodal accessibility (e.g., voice-first interaction) and
optimization for resource-constrained environments (e.g., mobile, edge devices) significantly
distinguishes it from mainstream AI educational tools. This approach aligns with the principle of
"guiding, not just giving", and proactively addresses potential biases by educating users on AI's
limitations and ethical considerations.

4.2.4. Potential Impact and Product Viability:

The potential impact of this platform is substantial: it would empower individuals and
communities by equipping them with essential digital and AI skills for the evolving digital age.
This directly contributes to reducing educational inequality, fostering informed engagement with
AI technologies, and promoting ethical technology use across society. As a product, it
possesses high viability for non-governmental organizations (NGOs), community learning
centers, and educational initiatives operating in developing regions or underserved domestic
areas. It fills a critical gap in foundational digital and AI education, providing a scalable solution
for widespread impact.

4.2.5. Technical Feasibility and Key Challenges for a Bachelor's Project:

Feasibility: The project can be realistically scoped by focusing on a specific set of core AI
literacy concepts (e.g., "What is machine learning?", "How does AI use data?", "What is AI
bias?") and targeting a defined demographic for initial testing. Leveraging existing open-source
NLP models and adaptive learning frameworks is feasible for implementation.
Challenges: Key challenges include designing a truly intuitive and accessible multimodal
interface that caters to diverse user needs, ensuring the AI's explanations are clear, accurate,
and culturally sensitive, and developing robust assessment methods for digital literacy
acquisition. Furthermore, gathering diverse and representative data for model training is crucial
to avoid perpetuating existing biases. Ethical considerations around data collection from
vulnerable populations and ensuring their informed consent are paramount throughout the
project lifecycle.

4.3. Idea 3: Hyper-Local Environmental Monitoring and Citizen Engagement Platform (Green AI Focus)

4.3.1. Problem Statement and Current Limitations:

Traditional environmental monitoring methods are often characterized by high costs, time-consuming procedures, and a lack of sufficient spatial and temporal granularity. This
results in a scarcity of real-time, hyper-local data, which is crucial for effective urban
sustainability planning and rapid response to localized environmental issues. Compounding this,
the environmental impact of AI itself, particularly the energy and water demands of large
generative models, is a growing and unsustainable concern.

4.3.2. Proposed AI-Driven Solution (Core AI Techniques):

The proposed solution is a mobile-first, citizen-science enabled AI application designed for hyper-local environmental monitoring. This platform would integrate data from low-cost,
distributed Internet of Things (IoT) sensors (e.g., measuring air quality, noise levels, water
quality, and micro-weather patterns) with publicly available satellite imagery and other relevant
environmental datasets. The AI would analyze this real-time, hyper-local data to identify
anomalies, predict localized environmental risks (e.g., pollution hotspots, urban heat islands),
and provide actionable insights and alerts directly to citizens and local authorities.
Crucially, the AI models and their inference processes within this platform would be designed
following "Green AI" principles, emphasizing lightweight architectures, optimized training
methodologies, and edge computing to minimize energy consumption and overall environmental
footprint.
The core AI techniques for this system would include:
●​ Time-series Analysis: For processing and interpreting continuous data streams from IoT
sensors to detect trends and anomalies.
●​ Computer Vision: For analyzing satellite imagery and other visual data to identify
environmental changes like deforestation or urban heat islands.
●​ Predictive Modeling: For forecasting localized environmental changes and potential
risks, such as air quality degradation or extreme weather events.
●​ Anomaly Detection: To identify unusual patterns in environmental data that may indicate
pollution incidents or other hazards.
●​ Green AI Optimization: Implementing techniques like model pruning, quantization, and
efficient hardware utilization to ensure the AI system itself is energy-efficient.
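As one concrete example of the anomaly detection component, a rolling z-score over a single sensor stream can flag localized pollution spikes. The window size, threshold, and readings below are illustrative assumptions:

```python
# Rolling z-score anomaly detector for a single sensor stream.
# Window size, threshold, and readings are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the preceding `window` readings."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                flagged.append(i)
        history.append(x)
    return flagged

# Simulated PM2.5 readings with one pollution spike at index 7.
pm25 = [12.0, 13.1, 12.4, 12.8, 13.0, 12.6, 12.9, 55.0, 13.2, 12.7]
print(detect_anomalies(pm25))  # → [7]
```

This detector runs comfortably on an edge device, which is exactly the Green AI point: per-reading inference here is a handful of arithmetic operations, not a cloud round-trip.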

4.3.3. Uniqueness and Innovation (Why it's novel/improved):

This project is unique in its dual focus: it combines hyper-local, citizen-driven environmental
monitoring with a foundational commitment to "Green AI" principles. It is not merely an "AI for sustainability" application, but an AI application that is itself designed to be sustainable, addressing the "AI for Good vs. AI's Footprint" dilemma. This approach directly tackles the data
access and cost challenges of traditional monitoring by decentralizing data collection and
empowering citizens with actionable environmental intelligence. It also provides a tangible
demonstration of how AI can be developed and deployed responsibly with a minimal ecological
footprint.

4.3.4. Potential Impact and Product Viability:

The impact of this platform would be significant, empowering communities with real-time,
localized environmental data, enabling more proactive local environmental management, and
fostering greater environmental awareness and engagement among citizens. It serves as a
practical demonstration of how AI can be developed and deployed responsibly with minimal
ecological footprint. As a product, it possesses high viability for smart city initiatives,
environmental agencies, community advocacy groups, and educational institutions focused on
sustainability. It offers a scalable model for widespread environmental data collection and
analysis, complementing broader monitoring efforts.

4.3.5. Technical Feasibility and Key Challenges for a Bachelor's Project:

Feasibility: The project can be realistically scoped by focusing on a specific environmental parameter (e.g., air quality or noise pollution) and a limited geographical area for data collection.
Integration with a few low-cost IoT sensors and publicly available APIs for satellite data or
weather information is achievable. The Green AI aspect could involve comparing the energy
consumption of different model architectures or inference strategies (e.g., on edge devices vs.
cloud).
Challenges: Key challenges include ensuring data quality and reliability from diverse,
potentially low-cost sensors, effectively handling missing or noisy data, developing robust
predictive models for localized phenomena, and creating intuitive interfaces that effectively
communicate complex environmental insights to a general audience. Implementing and
accurately measuring Green AI optimizations to demonstrate energy efficiency would be a core
technical challenge requiring careful methodology.

4.4. Idea 4: AI-Driven "Ethical Digital Twin" for Urban Planning

4.4.1. Problem Statement and Current Limitations:

Smart city initiatives frequently prioritize efficiency gains in areas such as traffic flow and energy
optimization but often inadvertently overlook or even exacerbate social inequalities, data privacy
concerns, and ethical dilemmas. Existing "Digital Twins," such as Barcelona's virtual replica,
primarily simulate physical urban systems and infrastructure changes. However, they typically
lack explicit ethical reasoning capabilities or comprehensive social impact assessment
functionalities. This leads to a "trust deficit" among citizens and administrators due to a lack of
transparency and accountability in smart city AI decisions. City officials, for instance, express a
desire to understand why an AI makes a particular decision and who is responsible if something
goes wrong.

4.4.2. Proposed AI-Driven Solution (Core AI Techniques):

The proposed solution involves extending the concept of an urban "Digital Twin" by integrating
an "Ethical Layer" powered by AI. This AI would simulate not only physical and infrastructure
changes (e.g., new building developments, public transport routes) but also their potential social
and ethical impacts on various demographic groups within the city. For example, it could assess
the risk of gentrification from new developments, evaluate equitable access to public services,
or analyze the privacy implications of new surveillance technologies. The system would
leverage formal reasoning techniques and multi-agent simulations to identify potential ethical
conflicts or biases inherent in proposed urban interventions. The "Ethical Digital Twin" would
provide urban planners with "what-if" scenarios, complete with ethical impact assessments and
explainable insights (XAI), supporting a "human-in-the-loop" decision-making process where AI
offers suggestions but human experts retain final authority.
The core AI techniques for this system would include:
●​ Multi-Agent Systems: To simulate the diverse behaviors, interactions, and needs of
different citizen groups within the urban environment, allowing for the observation of
emergent social dynamics.
●​ Formal Reasoning/Symbolic AI: For encoding and applying ethical rules, detecting
logical inconsistencies, and identifying potential conflicts or biases in proposed urban
policies or designs.
●​ Predictive Modeling: For forecasting social impacts, such as changes in accessibility,
affordability, or community cohesion resulting from urban interventions.
●​ Explainable AI (XAI): To provide transparent and understandable explanations for the
AI's ethical assessments and predictions, building trust and facilitating informed human
decision-making.
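A toy version of the accessibility assessment described above can be sketched as a simple agent-based before/after comparison. All agents, travel times, and the equity threshold below are invented for illustration only:

```python
# Toy equity simulation: compare access to a service before/after a
# proposed transit change, per income group. All numbers are invented.
from statistics import mean

# Each agent: (income_group, travel_minutes_before, travel_minutes_after)
agents = [
    ("low",  55, 50), ("low",  60, 58), ("low",  50, 47),
    ("high", 20, 12), ("high", 25, 15), ("high", 22, 14),
]

def group_improvement(agents):
    """Mean travel-time reduction per income group."""
    out = {}
    for grp in {a[0] for a in agents}:
        rows = [a for a in agents if a[0] == grp]
        out[grp] = mean(before - after for _, before, after in rows)
    return out

gains = group_improvement(agents)
# A simple "ethical layer" rule: flag interventions whose benefits skew
# heavily toward the already-advantaged group.
inequitable = gains["high"] > 2 * gains["low"]
print(gains, "flag:", inequitable)
```

Even this toy shows the pattern the "Ethical Digital Twin" scales up: simulate an intervention, aggregate outcomes per demographic group, and apply explicit, inspectable equity rules whose verdicts an XAI layer can explain.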

4.4.3. Uniqueness and Innovation (Why it's novel/improved):

This project is unique in its explicit integration of ethical reasoning and social equity simulation
into a digital twin for urban planning. It represents a significant advancement beyond purely
technical or efficiency-driven smart city models by prioritizing human well-being and fairness. By
providing an "ethical lens" for urban development, it directly addresses the persistent challenges
of bias, privacy, and social inequality in smart cities. This approach fosters the creation of more
inclusive, equitable, and trustworthy urban environments, moving beyond optimization to truly
human-centered urban design.

4.4.4. Potential Impact and Product Viability:

The impact of this "Ethical Digital Twin" would be profound, leading to more equitable, inclusive,
and socially responsible urban development. It provides a powerful tool for policymakers and
urban planners, enabling them to foresee and mitigate negative societal impacts before
implementing changes, thereby enhancing public trust in smart city initiatives. As a product, it
possesses high viability for municipal governments, urban planning departments, architectural
firms, and academic research institutions focused on responsible urban development and social
impact assessment.

4.4.5. Technical Feasibility and Key Challenges for a Bachelor's Project:

Feasibility: The project can be realistically scoped by focusing on a specific urban planning
challenge (e.g., assessing the impact of a new public transport line on accessibility for different
income groups) within a simplified representation of a city district. Leveraging open-source
Geographic Information System (GIS) data and multi-agent simulation libraries is feasible. The
ethical reasoning component could start with a limited, well-defined set of ethical rules or
principles.
Challenges: Key challenges include defining and quantifying abstract concepts such as "ethical
impact" and "social equity" metrics in a measurable way, integrating diverse data types
(demographic, economic, spatial) from various sources, and developing robust multi-agent
simulations that accurately reflect real-world social dynamics. Creating intuitive XAI explanations
for complex ethical trade-offs and ensuring the ethical layer itself is unbiased and culturally
sensitive would be paramount.

5. Cross-Cutting Considerations for Project Development


Beyond the specific domain applications, several overarching considerations are critical for the
successful and responsible development of any AI project. These cross-cutting themes are not
mere afterthoughts but fundamental pillars that enhance the robustness, trustworthiness, and
societal value of AI solutions.

5.1. Ethical AI Development and Bias Mitigation

Ethical considerations are not optional add-ons but are fundamental to the responsible and
effective deployment of AI, particularly in sensitive domains. AI ethics and safety are central
themes in major AI conferences. For AI applications aimed at social good, interpretability,
transparency, morality, and fairness are considered essential conditions. The presence of bias in
AI is not solely a technical issue; it requires a broader, systemic assessment involving various
stakeholders, including business leaders and cross-functional teams. Algorithmic bias and
ableist assumptions can perpetuate harmful stereotypes and limit access for certain
populations, often stemming from biased datasets and a lack of diverse perspectives during
development. Key ethical considerations encompass privacy, confidentiality, informed consent,
bias, fairness, transparency, accountability, autonomy, human agency, and safety.
Non-transparent reasoning from AI systems can significantly undermine user confidence.
For any AI project, especially those with societal impact, an "ethics by design" approach is
critical. This means integrating ethical considerations from the initial problem formulation
through data collection, model development, and deployment. This approach is vital for building
trust and ensuring the AI system serves its intended purpose equitably and responsibly.
Mitigation strategies include designing with empathy, utilizing user personas and journey
mapping to understand diverse user perspectives. Crucially, this involves actively including
people with lived experiences (e.g., disabled individuals, minority communities) in the design
and deployment process to ensure inclusivity. Proactively addressing bias through thorough
testing and ensuring diverse representation in training data is also a key strategy.

5.2. Data Privacy and Security in AI Applications

AI systems are inherently data-driven, making robust data privacy and security paramount,
especially when dealing with sensitive personal or environmental information. Significant privacy
concerns exist around the use of AI in healthcare, which has led to reluctance in sharing
medical data. In smart cities, extensive data collection raises considerable concerns about data
privacy and security, with technologies like facial recognition risking individual privacy and
freedom rights, particularly when data is misused or misinterpreted. For mental health
applications that require continuous data collection, clear guidelines are needed for data
storage, processing, and sharing to maintain user trust. Safeguarding Indigenous data
sovereignty is also an important consideration in areas like traditional medicine.
To address these concerns, implementing robust cybersecurity measures, data encryption, and
anonymization techniques is essential. Adhering to established data protection laws, such as
the GDPR in Europe, and educating users on how their data is collected, stored, and used are
crucial steps in building trust and ensuring responsible data handling. Data privacy and security
are non-negotiable for AI projects that handle personal or sensitive information. Students must
consider comprehensive data governance frameworks, explicit consent mechanisms, and
effective anonymization strategies from the outset of their project planning. This not only
protects users but also establishes the necessary foundation of trust for real-world adoption and
sustained impact.
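As a concrete starting point for the anonymization strategies discussed above, direct identifiers can be replaced with salted, irreversible tokens before analysis. This is a minimal sketch, not a complete privacy guarantee: quasi-identifiers (age, postcode, visit dates) can still enable re-identification and need separate treatment:

```python
# Salted pseudonymization of direct identifiers before analysis.
# Minimal sketch only: quasi-identifiers still require separate handling.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # keep secret; rotate per dataset release

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted, irreversible token."""
    digest = hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()
    return digest[:16]

record = {"patient_id": "MRN-004217", "age_band": "40-49", "reading": 7.2}
record["patient_id"] = pseudonymize(record["patient_id"])
# The same identifier maps to the same token within a release, so records
# remain linkable for analysis without exposing the original ID.
print(record)
```

Keeping the salt out of the released dataset is what prevents a simple dictionary attack on guessable identifiers.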

5.3. Human-in-the-Loop Design Principles

The most effective AI systems are those designed to augment, rather than replace, human
capabilities, fostering collaboration and ensuring human oversight in critical decisions. A
balanced approach, where "AI supports rather than replaces human judgment," is considered
essential for sustaining high-quality patient care and professional expertise. In smart city traffic
management, for instance, officials express skepticism about fully automated systems if they
lack transparency, wanting to understand "why the machine made a decision" and "who is
responsible" if errors occur. Consequently, a "human-in-the-loop" approach, where AI suggests
decisions but human experts retain the final say, is emphasized as crucial for building trust and
accountability. Similarly, in mental health, human providers bring irreplaceable qualities such as
empathy and cultural understanding; therefore, AI should assist them rather than replace their
role. AI-generated insights in such sensitive contexts must remain advisory, not directive. The
overarching goal is to strike a balance in human involvement, utilizing uncertainty indicators to
inform users when to trust the system's recommendations and when to override them based on
their expert judgment. Human-in-the-loop (HITL) design is critical for developing trustworthy and
responsible AI systems, particularly in high-stakes domains. Projects should actively incorporate
mechanisms for human oversight, intervention, and feedback. This ensures that AI functions as
a powerful assistant, allowing humans to focus on tasks requiring empathy, creativity, and
nuanced judgment, while also maintaining clear lines of accountability.

5.4. Scalability and Sustainability of AI Solutions

As AI adoption accelerates, the practical challenges of deploying and maintaining AI systems at scale, particularly concerning their environmental footprint, become increasingly prominent. The
surging demand for compute-intensive workloads, driven by generative AI, robotics, and
immersive environments, places new demands on global infrastructure. This leads to data
center power constraints, physical network vulnerabilities, and escalating compute demands.
The environmental impact of generative AI, including its massive electricity and water
consumption, as well as the impacts from hardware manufacturing, is explicitly described as an
"unsustainable path".
To address these challenges, "Green AI" offers a set of mitigation strategies. These include
developing energy-efficient hardware (e.g., Edge AI, decentralized computing, low-power
GPUs), implementing algorithmic optimizations (e.g., lightweight architectures, knowledge
distillation, low-precision computation, transfer learning), and actively shifting data centers to
renewable energy sources. The concurrent trend of "scale and specialization growing
simultaneously" suggests a strategic balance between vast general-purpose models and a
growing range of domain-specific AI tools that can operate almost anywhere, including at the
edge. A successful final year project should not only demonstrate technical innovation but also
consider its practical implications for scalability and sustainability. This involves careful thought
about the computational resources required, the potential energy footprint, and how the solution
could be deployed efficiently in a real-world setting. Incorporating Green AI principles is a
forward-thinking approach that adds significant value and addresses a critical global challenge.
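One of the algorithmic optimizations listed above, magnitude-based pruning, can be illustrated in a few lines. A real project would use a framework's pruning utilities (e.g., in PyTorch or TensorFlow); this sketch only shows the core idea on a plain weight matrix, with an invented example:

```python
# Magnitude-based weight pruning: zero out the smallest fraction of
# weights by absolute value. Illustrative sketch on plain Python lists.

def prune_by_magnitude(weights, sparsity=0.5):
    """Return weights with the smallest-|w| fraction set to zero.

    Note: ties at the threshold value are all pruned, so the achieved
    sparsity can slightly exceed the requested fraction.
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

W = [[0.9, -0.05, 0.3],
     [-0.02, 0.7, -0.1]]
print(prune_by_magnitude(W, 0.5))  # [[0.9, 0.0, 0.3], [0.0, 0.7, 0.0]]
```

Combined with sparse storage or hardware support, pruning like this reduces both model size and inference energy, which is exactly the trade-off a Green AI evaluation would measure.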

6. Conclusion and Recommendations for Project Selection


6.1. Summary of Opportunities

The current landscape of Artificial Intelligence offers vast and exciting opportunities for
innovation, particularly for Bachelor of Computer Science students seeking impactful final year
projects. The analysis presented in this report highlights significant potential in developing AI
solutions that address critical societal challenges across healthcare, education, environmental
sustainability, and smart cities. Furthermore, there is a compelling need and opportunity to
contribute to the development of more responsible and sustainable AI systems themselves,
aligning with the principles of Green AI. The emergence of agentic AI and specialized
generative models provides powerful new tools to create truly novel applications or to
significantly improve upon existing ones, pushing the boundaries of what AI can achieve.

6.2. Guidance for Project Implementation

For students embarking on their final year projects, the following recommendations are provided
to maximize impact, ensure feasibility, and align with the evolving demands of the AI field:
●​ Start with a Clear Problem: Select a project idea that genuinely resonates, and articulate a well-defined problem statement that emphasizes its real-world significance. A clear understanding of the problem space will guide the entire development process.
●​ Scope Realistically: For a Bachelor's project, it is crucial to narrow the scope to a
manageable yet impactful sub-problem. The focus should be on demonstrating core AI
principles and the unique aspect of the proposed solution, rather than attempting to build
a full-scale commercial product.
●​ Prioritize Data Availability: Recognize that data quality and quantity are critical for AI
model performance. Early in the project planning, identify or plan for the generation of
suitable datasets. This proactive approach can prevent significant roadblocks later in
development.
●​ Integrate Ethical Considerations: Embed ethical AI development, bias mitigation, and
data privacy considerations into every stage of the project. This demonstrates a holistic
understanding of responsible AI and ensures the solution is fair, transparent, and
trustworthy.
●​ Embrace Human-in-the-Loop Design: Design AI systems that augment human
capabilities and include clear mechanisms for human oversight and collaboration. This
approach ensures that AI acts as a powerful assistant, allowing humans to focus on tasks
requiring empathy, creativity, and nuanced judgment, while also maintaining
accountability.
●​ Consider Sustainability: Actively think about the computational efficiency and
environmental footprint of the chosen AI solution. Where possible, align the project with
Green AI principles, such as optimizing model size, training efficiency, or exploring edge
computing for lower power consumption. This forward-thinking approach adds significant
value.
●​ Seek Interdisciplinary Input: Consult with domain experts (e.g., healthcare
professionals, educators, urban planners, environmental scientists) to gain deeper
insights into the problem space. This collaboration ensures the proposed solution is
practical, impactful, and addresses genuine needs.
●​ Document Uniqueness: Clearly articulate why the chosen idea is unique, less
implemented, or offers significant improvements over existing solutions. This requires a
thorough understanding of the current state of the art and careful justification of the
project's novel contribution.

Works cited

1. Future of AI Research - AAAI, https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-FINAL.pdf
2. Emerging Technology Trends - J.P. Morgan, https://www.jpmorgan.com/content/dam/jpmorgan/documents/technology/jpmc-emerging-technology-trends-report.pdf
3. AI for Social Good | Department of Computer Science, https://cs.duke.edu/research/ai-social-good-data
4. Full article: AI for social good and the corporate capture of global development, https://www.tandfonline.com/doi/full/10.1080/02681102.2023.2299351
5. New study warns! Routine AI use may affect doctors' tumor diagnostic skills by 20%, https://timesofindia.indiatimes.com/life-style/health-fitness/health-news/new-study-warns-routine-ai-use-may-affect-doctors-tumor-diagnostic-skills-by-20/articleshow/123280972.cms
6. 7 ways AI is transforming healthcare - The World Economic Forum, https://www.weforum.org/stories/2025/08/ai-transforming-global-health/
7. 5 Real-World Problems Agentic AI Solves Today | FullStack Blog, https://www.fullstack.com/labs/resources/blog/5-real-world-problems-agentic-ai-is-solving-today
8. How artificial intelligence in education is transforming classrooms - Learning Sciences, https://learningsciences.smu.edu/blog/artificial-intelligence-in-education
9. How Can AI Be Used in Sustainability? | NC State MEM, https://mem.grad.ncsu.edu/2025/04/22/how-can-ai-be-used-in-sustainability/
10. Smart Cities, Green Futures: How Artificial Intelligence is Powering Urban Sustainability - Earth Day, https://www.earthday.org/smart-cities-green-futures-how-ai-is-powering-urban-sustainability/
11. McKinsey technology trends outlook 2025, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-top-trends-in-tech
12. AAAI-25 New Faculty Highlights Program, https://aaai.org/conference/aaai/aaai-25/new-faculty-highlights-program/
13. Explained: Generative AI's environmental impact | MIT News, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
14. Green AI: Strategies for Reducing the Carbon Footprint of Machine Learning, https://www.researchgate.net/publication/389099897_Green_AI_Strategies_for_Reducing_the_Carbon_Footprint_of_Machine_Learning
15. Green Machine Learning, https://www.esann.org/sites/default/files/proceedings/2023/ES2023-3.pdf
16. AI in Healthcare Upskilling: How Artificial Intelligence is Shaping Workforce Training, https://shccares.com/blog/workforce-solutions/healthcare-upskilling-with-ai/
17. Why AI in Healthcare Has Failed in 2024 - Oatmeal Health, https://oatmealhealth.com/why-has-ai-failed-so-far-in-healthcare-despite-billions-of-investment/
18. AI in education: A privilege for the few or an opportunity for all? - World Bank Blogs, https://blogs.worldbank.org/en/latinamerica/inteligencia-artificial-ia-privilegio-para-pocos-oportunidad-para-todos
19. AI Can Close the Learning Gap in Underserved Classrooms. But We Have to Guide, Not Just Give - Sam Whitaker, Director of Social Impact at StudyFetch - The TechEd Podcast, https://techedpodcast.com/whitaker/
20. The Impact of Artificial Intelligence (AI) on Students' Academic Development - MDPI, https://www.mdpi.com/2227-7102/15/3/343
21. Accessible AI Requires Involving and Collaborating with People with Disabilities, https://www.everylearnereverywhere.org/blog/accessible-ai-requires-involving-and-collaborating-with-people-with-disabilities/
22. How AI is Closing Education Gaps and Transforming Learning Worldwide, https://educationrecoded.org/how-ai-is-closing-education-gaps-and-transforming-learning-worldwide/
23. (PDF) Artificial Intelligence in Environmental Monitoring: Advancements, Challenges, and Future Directions - ResearchGate, https://www.researchgate.net/publication/385059797_Artificial_Intelligence_in_Environmental_Monitoring_Advancements_Challenges_and_Future_Directions
24. Smart Cities Dive: Smart Cities News, https://www.smartcitiesdive.com/
25. Smarter AI-powered tools for real-world urban decisions | ASU News, https://news.asu.edu/20250814-sun-devil-community-smarter-aipowered-tools-realworld-urban-decisions
26. Revolutionizing Urban Mobility: A Systematic Review of AI, IoT, and Predictive Analytics in Adaptive Traffic Control Systems for Road Networks - MDPI, https://www.mdpi.com/2079-9292/14/4/719
27. Challenges and opportunities for developing a smart city based on artificial intelligence - International Scientific Hub, https://www.iscihub.com/index.php/AIIM/article/download/35/77
28. Envisioning an AI-Enhanced Mental Health Ecosystem - arXiv, https://arxiv.org/html/2503.14883v1
29. Ethical Considerations in Artificial Intelligence Interventions for Mental Health and Well-Being: Ensuring Responsible Implementation and Impact - MDPI, https://www.mdpi.com/2076-0760/13/7/381
30. Mitigating Bias in Artificial Intelligence - Berkeley Haas, https://haas.berkeley.edu/wp-content/uploads/UCB_Playbook_R10_V2_spreads2.pdf
31. Accessibility, Bias Mitigation, and AI in UX Research - Userlytics, https://www.userlytics.com/resources/podcasts/accessibility-bias-mitigation-and-ai-in-ux-research/
32. AI and inclusion: Opportunities, challenges and action | by David Scurr | CAST Writers, https://medium.com/we-are-cast/ai-and-inclusion-opportunities-challenges-and-action-003f58ff8aa3
