
Enhancing recruitment through AI with an LLM-powered

resume analyzer for smarter and more efficient hiring


Project report in partial fulfillment of the requirement for the award of the
degree of Bachelor of Technology
in
CSE (IoT, CS, BT)
Submitted By

Arkaprava Roy (Section A) 12021002029096

Aakash Nath (Section A) 12021002029097

Akash Dey (Section A) 12021002029098

Sumon Santara (Section B) 12021002029095

Promit Kumar Bodak (Section A) 12021002029094

Suman Mondal (Section A) 12021002029085

Sneha Chakraborty (Section A) 12021002029190

Rounak Chatterjee (Section A) 12021002029102

Kingshuk Bhowmick (Section B) 12021002029088

Tanmoy Daw (Section A) 12021002029035

Under the guidance of


Prof. (Dr.) Siddhartha Roy
Department of

CSE (IoT, CS, BT)

University of Engineering & Management, Kolkata


University Area, Plot No. III – B/5, New Town, Action Area – III, Kolkata – 700160
CERTIFICATE

This is to certify that the project titled “Enhancing recruitment through AI with an
LLM-powered resume analyzer for smarter and more efficient hiring” submitted by
Arkaprava Roy (Section A) 12021002029096, Aakash Nath (Section A)
12021002029097, Akash Dey (Section A) 12021002029098, Sumon Santara
( Section B ) 12021002029095, Promit Kumar Bodak (Section A) 12021002029094,
Suman Mondal (Section A) 12021002029085, Sneha Chakraborty (Section A)
12021002029190, Rounak Chatterjee (Section A) 12021002029102, Kingshuk
Bhowmick (Section B) 12021002029088, Tanmoy Daw (Section A)
12021002029035, students of University of Engineering & Management, Kolkata, in
partial fulfillment of the requirement for the degree of Bachelor of Technology in
Computer Science and Engineering (Internet of Things, Cybersecurity, Blockchain
Technology), is a bonafide work carried out by them under the supervision and
guidance of Prof. (Dr.) Siddhartha Roy during the 8th semester of the academic
session 2021-25. The content of this report has not been submitted to any other
university or institute. I am glad to state that the work is entirely original and that
its performance has been found quite satisfactory.

Signature of Supervisor Signature of Head of the Department


Date: Date:
ACKNOWLEDGEMENT

We would like to take this opportunity to thank everyone whose cooperation and
encouragement throughout the course of this project has been invaluable to us.
We are sincerely grateful to our guide Prof. (Dr.) Siddhartha Roy and to the Head of
the Department, Prof. (Dr.) Sandip Mondal, CSE (IoT, CS, BT), University of
Engineering & Management, Kolkata, for their wisdom, guidance, and inspiration,
which helped us carry this project through to where it stands now.

Last but not least, we would like to extend our warm regards to our families and
peers, who have kept supporting us and always had faith in our work.

Arkaprava Roy

Aakash Nath

Akash Dey

Kingshuk Bhowmick

Promit Kumar Bodak

Rounak Chatterjee

Sumon Santara

Suman Mondal

Sneha Chakraborty

Tanmoy Daw
TABLE OF CONTENTS

ABSTRACT Page No. 1

1. INTRODUCTION Page No. 2

2. LITERATURE SURVEY

3. RESEARCH GAP Page No. 4

4. PROBLEM STATEMENT Page No. 5

5. PROPOSED SOLUTION Page No. 6

6. SYSTEM DESIGN Page No. 7

7. EXPERIMENTAL (SETUP, CODE, FLOWCHART) Page No. 10

8. RESULT AND DISCUSSION Page No. 26

9. FUTURE SCOPE Page No. 28

10. CONCLUSION Page No. 29

11. BIBLIOGRAPHY Page No. 30


ABSTRACT

In the fast-paced environment of today's employment market, it is crucial to find
the right individual efficiently. This project takes a significant step in that
direction by introducing a Large Language Model (LLM) powered resume analyzer
for smarter and more efficient hiring, built on the newest models and machine
learning techniques. Not only does it bring better analysis and matching of
resumes with job postings, it also automates much of the recruitment process,
making the entire process more efficient and transparent.

This project aims to automate the hiring process with the help of LLM-powered
resume matching. It screens candidates' resumes, classifies candidates, and ranks
applications by how well they match the job description. We have also developed a
feature to help recruiters shortlist candidates at scale, saving time and reducing
the effort of managing candidates.

To ensure equity, we use a pre-trained model to minimize bias and maximize
inclusivity in the recruitment process. We also use explainable Large Language
Models to provide clear explanations of how resumes are scored, so that recruiters
and job seekers alike understand why a rating was given. Advanced data protection
secures sensitive information, the system can sift through enormous collections of
resume data with ease, and it integrates seamlessly with existing ATS software.

The system goes through every single resume, creates a custom profile for each
candidate, and assigns a score after matching the resume against the job
description, making evaluation more accurate, effective, and fair. This report gives
a detailed overview of the project's development, methodologies, and impact on
modern recruitment, and offers insights for future technological innovation in
automated hiring solutions.
INTRODUCTION

When a recruiter hires candidates in large numbers, the process becomes very time
consuming and takes a lot of effort. The recruitment process changes with time,
and businesses want quicker and more accurate solutions. Traditional screening is
slow, prone to human error, and misses talent. That is why we have created a
Resume Analyzer based on Large Language Models and Machine Learning to
automate the recruitment process.

This smart Applicant Tracking System (ATS) refines candidate shortlisting. It
interprets the relevance of experience and skills rather than relying on keyword
matching, and it accurately ranks and filters candidates to select the best fit for
each job. To increase accuracy, we have used Google's Gemini 2.5 Flash Large
Language Model, which efficiently scans job titles and descriptions, creates an
individual profile for each candidate, and gives each candidate a score out of 1000.

The website also creates individual profiles for candidates and recruiters. A
candidate can apply for as many jobs as they want, and everything is reflected in
the candidate's profile. Similarly, the recruiter can manage job postings and view
candidates' profiles and resumes.

This report briefs the development, concept, and market impact of HireLine.
Resume automation, large-scale hiring decisions, and candidate ranking together
increase recruitment efficiency, making HireLine a crucial tool in the current job
market for recruiters as well as job seekers.

Page 1
LITERATURE SURVEY

The development of resume parsing utilizes Natural Language Processing (NLP) and
Machine Learning to streamline the recruitment process, making it far more
efficient and accurate than traditional hiring. Collobert et al. initiated a single deep
model for NLP tasks that is now a building block of resume parsing models,
facilitating context-based text processing using deep neural networks. The model
supports a range of NLP tasks, including part-of-speech tagging, entity recognition,
and sentiment analysis, all of which are essential for extracting structured data
from unstructured resume text [1].

Researchers have created new segmentation algorithms that split resumes into
meaningful text blocks. These algorithms recognize important sections such as
work experience, education, skills, and certifications, allowing information to be
properly categorized and organized. By segmenting resumes into smaller pieces,
parsers minimize errors and improve accuracy, making them more effective in bulk
recruitment processes [2].

New learning techniques have also contributed to refining resume parsing models.
Prompt-based approaches allow Natural Language Processing systems to make
predictions from a small amount of training data, making them adaptable to new
resume formats with minimal error. This is particularly useful in recruitment, where
resumes vary in structure and style. Prompt-based learning lets these models infer
relationships between job postings and resume content, enhancing candidate-job
matching accuracy [3].

Older analyzers increase the time and effort required for resume parsing.
Conventional rule-based systems tend to fail with vague language and diverse
resume formats, which degrades candidate evaluation. Contemporary models
instead combine NLP with context awareness, taking advantage of deep learning
and contextual embeddings to understand resume content. These models can
differentiate between jobs with similar titles, connect hierarchical relationships
between work experiences, and extract key information from natural-language
descriptions. This capacity for free-text resume processing enhances candidate-job
fit [4].

Page 2
The Python library Pyresparser has become a precious asset in applicant tracking
systems (ATS). It uses tokenization, named entity recognition, and vector
embeddings to extract information from resumes with great efficiency, accurately
extracting names, job titles, company names, skills, and educational information.
With such libraries integrated into ATS platforms, recruiters can automate resume
screening, which saves manual effort and makes initial candidate selection more
efficient [5].

Text mining and similarity scoring algorithms are also commonly used to shortlist
candidates against job descriptions. These algorithms apply cosine similarity over
Term Frequency-Inverse Document Frequency (TF-IDF) vectors or word embeddings
to match resume content against job descriptions. The higher the score, the
greater the similarity between a candidate's background and the job description.
This automated ranking enables recruiters to prioritize top candidates efficiently
and minimizes the risk of overlooking qualified applicants in bulk hiring
scenarios [6].
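As a rough illustration of the TF-IDF and cosine-similarity matching described above, the following plain-JavaScript sketch ranks resumes against a job description. The tokenizer and the tiny corpus are simplifying assumptions for exposition, not the implementation used by any of the surveyed systems.

```javascript
// Minimal TF-IDF + cosine-similarity sketch for resume/job matching.
// Plain JavaScript, no external libraries; tokenization is simplified.
const tokenize = (text) => text.toLowerCase().match(/[a-z]+/g) || [];

// Build a TF-IDF vector (as a Map of term -> weight) for one document.
const tfidfVector = (doc, corpus) => {
  const tokens = tokenize(doc);
  const vec = new Map();
  for (const t of tokens) vec.set(t, (vec.get(t) || 0) + 1);
  for (const [term, tf] of vec) {
    const df = corpus.filter((d) => tokenize(d).includes(term)).length;
    const idf = Math.log((1 + corpus.length) / (1 + df)) + 1; // smoothed IDF
    vec.set(term, (tf / tokens.length) * idf);
  }
  return vec;
};

// Cosine similarity between two sparse vectors stored as Maps.
const cosine = (a, b) => {
  let dot = 0, na = 0, nb = 0;
  for (const [t, w] of a) { dot += w * (b.get(t) || 0); na += w * w; }
  for (const [, w] of b) nb += w * w;
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
};

// Rank resumes against a job description: higher score = better match.
const rankResumes = (jobDescription, resumes) => {
  const corpus = [jobDescription, ...resumes];
  const jdVec = tfidfVector(jobDescription, corpus);
  return resumes
    .map((r) => ({ resume: r, score: cosine(jdVec, tfidfVector(r, corpus)) }))
    .sort((x, y) => y.score - x.score);
};
```

A resume sharing many terms with the job description scores near the top of the ranking, while an unrelated one scores near zero, which is exactly the prioritization behavior described in [6].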

Large Language Models have transformed resume parsing, increasing data
extraction accuracy and interpretation. In contrast to conventional Machine
Learning models, which depend heavily on pre-defined rules and structured data,
Large Language Models are capable of parsing resumes in a more human-like
fashion. These models can also handle multilingual resumes, making them useful in
international hiring scenarios [7].

ATS applications such as Mettle and Hirepro offer expansive data sets that enable
data-backed hiring decisions. These applications integrate resume parsing
capabilities and organize candidate information efficiently. Even so, inconsistent
formats, graphical elements, and varied layouts slow down automated processing;
resolving such inconsistencies and adapting to templates automatically remain
open challenges for these models [8].

Resume parsers in ATS systems have enabled structured data extraction, reducing
hiring time and improving overall recruitment efficiency. Despite these benefits,
parsing remains limited by variations in format, difficult sentence structures, and
domain-specific terminology. Future advances, such as more powerful deep
learning architectures and fine-tuned LLMs, will be instrumental in breaking these
limitations and realizing seamless automation in recruitment processes [9].

Page 3
RESEARCH GAP

Despite impressive developments in resume parsing using NLP and machine
learning, some outstanding issues are yet to be solved. Current models, including
deep neural networks and segmentation approaches, have enhanced the extraction
of structured data from resumes but are still challenged by non-standard resume
formats, graphical content, and domain-specific vocabularies. These variations
prevent the full automation of resume screening and necessitate further
enhancements in AI-based document standardization and template fitting [1][8].

Few-shot learning and prompt-based methods have increased flexibility across
various resume structures with little retraining, but the accuracy of candidate-job
matching is still constrained by differences in writing styles and implicit skill
detection. Large language models (LLMs) such as GPT-based systems have
increased context comprehension but need fine-tuning for industry-specific
recruitment requirements and bias reduction [3][7].

Furthermore, although ATS platforms utilize text mining and similarity scoring
methods (e.g., TF-IDF, cosine similarity), they still struggle to rank candidates
properly when resumes use ambiguous wording. In addition, current AI models tend
not to consider soft skills, career trajectory, and cultural alignment, all crucial
considerations in end-to-end hiring decisions [6][9].

Our project fills these gaps through the use of an LLM-enabled ATS (via
Gemini-2.0-Flash) to improve resume parsing and ranking. In contrast to traditional
parsers, our method combines deep-learning-driven contextual comprehension for
improved processing of unstructured data, multilingual resumes, and implicit skill
identification. Future directions include bias reduction, adaptive learning, and
enhanced semantic analysis to further improve automated hiring efficiency.

Page 4
PROBLEM STATEMENT

The Applicant Tracking System aims to simplify the hiring process, from job
postings to application reception. It organizes recruitment workflows and ensures a
systematic, efficient way of hiring that promotes effective coordination.

The Users:

The candidate signs up on the portal, searches job vacancies, and applies for
positions. The recruiter screens the applications and shortlists candidates
according to the information submitted.

Job Posting Flow:

The recruiter screens the applications, shortlists candidates according to the
information submitted, and fills in the form. Shortlisted candidates' applications
are shown on both the employer's and the recruiter's dashboards for final
inspection.

Application Flow:

The candidate signs up on the portal, then searches job vacancies and applies for
positions. The candidate uploads a resume and fills out the necessary check form
before submission.

This website provides a smooth hiring process and enhances efficiency and
transparency, with coordination throughout all phases of recruiting.

Page 5
PROPOSED SOLUTION

When a recruiter hires candidates in large numbers, the process becomes very time
consuming and takes a lot of effort. The recruitment process changes with time,
and businesses want quicker and more accurate solutions. Traditional screening is
slow, prone to human error, and misses talent. That is why we have created a
Resume Analyzer based on Large Language Models (LLMs) and Machine Learning to
automate the recruitment process. This smart Applicant Tracking System (ATS)
refines candidate shortlisting. It interprets the relevance of experience and skills
rather than relying on keyword matching, and it accurately ranks and filters
candidates to select the best fit for each job. To increase accuracy, we have used
Google's Gemini 2.5 Flash Large Language Model, which efficiently scans job titles
and descriptions, creates an individual profile for each candidate, and gives each
candidate a score out of 1000.

The website also creates individual profiles for candidates and recruiters. A
candidate can apply for as many jobs as they want, and everything is reflected in
the candidate's profile. Similarly, the recruiter can manage job postings and view
candidates' profiles and resumes. This report briefs the development, concept, and
market impact of HireLine. Resume automation, large-scale hiring decisions, and
candidate ranking together increase recruitment efficiency, making HireLine a
crucial tool in the current job market for recruiters as well as job seekers.

Security and compliance are the foundation of the system. The Resume Analyzer
employs a hybrid database to handle different resume types efficiently and
provides strong APIs for integration with existing ATS platforms. MongoDB, a
NoSQL database, is employed to store job listings and user information. This
streamlines the recruitment process and ensures a secure, highly efficient hiring
procedure.

Page 6
SYSTEM DESIGN

Developing the Applicant Tracking System (ATS) involves designing the structure
and components that make it function effectively and scale easily. The system
comprises a user-friendly interface, a backend unit, and an AI resume analyzer.

The frontend is developed as a website where recruiters, employers, and applicants
interact. It provides job shortlisting, application forms, resume upload, and
recruiter dashboards.

The backend is responsible for storing information, processing, and managing data. It
also has a database to monitor candidate information, job listings, and updates to
application status. The resume parser and ranking algorithms employ Natural Language
Processing (NLP) and Machine Learning (ML).

The AI model, the Gemini-2.0-Flash LLM, is used to scan resumes for vital details
like skills, experience, and qualifications in order to match candidates with jobs.
The system also has security features to protect user data and follow hiring
regulations.

The ATS therefore facilitates fast, automatic, and unbiased hiring, enhancing the
effectiveness of both hiring and decision-making.

Data Flow Diagram of HireLine - The Resume Parsing System :

Figure (1) shows that when a candidate submits their resume for a job application,
the resume is first stored in the database. The same resume is then processed by
the Large Language Model (LLM) to calculate its score. The recruiter shortlists
candidates based on the criteria for the job, while the coordinator manages both
employers and job posts.

Fig(1):Data Flow Diagram(Level:1) of HireLine

Page 7
Activity Diagram of HireLine- The Resume Parsing System :

Figure (2) shows that the process starts with a candidate submitting their
application and uploading their resume. The AI checks the resume's validity; if it is
invalid, the application is rejected. Valid resumes are processed using a Large
Language Model (LLM) to generate a resume score. Based on these scores,
candidates are shortlisted. The recruiter then verifies the shortlisted candidates,
leading to the final selection process. This ensures that only qualified candidates
advance through the selection stages.

Fig(2): Activity Diagram of HireLine
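The shortlisting step in the activity diagram can be sketched as a small filter-and-sort routine. This is an illustrative sketch only; the field names (resumeValid, score) and the cutoff are hypothetical, not HireLine's actual schema.

```javascript
// Hypothetical sketch of the activity-diagram shortlisting step:
// drop applications whose resume failed the AI validity check, gate on
// the LLM score (out of 1000) against the job's cutoff, rank the rest.
const shortlist = (applications, cutoff) => {
  const valid = applications.filter((a) => a.resumeValid); // AI validity check
  const scored = valid.filter((a) => a.score >= cutoff);   // LLM score gate
  return scored.sort((x, y) => y.score - x.score);         // best match first
};
```

The recruiter's manual verification of the shortlisted candidates then happens on the result of this routine.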

Page 8
Flowchart of the uploading jobs and hiring process in HireLine :

Figure (3) shows that the job application process starts with account creation and
job posting review. Candidates upload resumes and create a profile with their
information, which recruiters review. If the recruiter's criteria are met, candidates
are shortlisted; otherwise they are rejected. Shortlisted applications are sent to
employers for final review. Selected candidates receive offers, while the others are
rejected. This process ensures that only qualified and verified candidates reach the
final stages.

Fig(3):Flow Chart of HireLine

Page 9
EXPERIMENTAL SETUP

The Experimental setup for the Applicant Tracking System (ATS) involves the latest AI
innovations, such as the integration of Large Language Models (LLMs) like Gemini-
2.0-Flash, to enhance resume parsing, job matching, and recruitment processes. The
system is engineered to be highly scalable and efficient, with smooth operations at every
phase of the hiring process. A cloud-based relational database is set up to keep job
postings, candidate information, resumes, recruiter comments, and application statuses
secure and intact. A web-based user interface is developed to offer easy navigation to
employers, recruiters, and candidates, automating interactions in the system.

In order to boost the precision of resume analysis, Gemini-2.0-Flash is utilized to
pull out essential candidate details such as skills, education, and experience, all
while maintaining contextual comprehension of varying resume structures. Machine
learning algorithms are trained on various datasets to sharpen job-candidate
matching, while Natural Language Processing (NLP) ensures semantic
comprehension of job profiles and candidate applications. The hiring process is
fully modeled: employers post vacancies that are inspected and approved by
coordinators before going live, applicants apply by submitting resumes and
completing the required screening documents, and the system then automatically
sifts through, ranks, and highlights the candidates who best match the
requirements. Recruiters inspect and shortlist applicants efficiently using a
dedicated dashboard.

To make it robust, the system is subjected to intense testing, including stress testing to
assess its capacity to process large numbers of applications, accuracy testing to
determine the precision of AI-based resume parsing and ranking, and security testing to
safeguard candidate data against threats. Recruiters' and applicants' feedback is obtained
to further refine the system's efficacy and precision. By combining AI-driven
automation with LLM-based resume analysis, the ATS minimizes human effort,
enhances the accuracy of hiring, and provides a fair and streamlined recruitment
process, and thus, it is a trustworthy solution for the needs of current hiring.

Page 10
SETUP

Home Page:

The homepage [Figure 4] belongs to HireLine, a hiring and job application tracking
website. The design is modern and easy to use, with a background that is easy to
read. The logo and company name are displayed at the top left to assist with
branding. The top menu lets people easily access key areas like Home, All Jobs,
and Dashboard, making navigation simple.

One of the standout features is the title "Connecting TALENT with Opportunity",
which succinctly captures the platform's function of connecting talent to
opportunity. Beneath the slogan is a short explanation of how the platform works,
highlighting job postings, application tracking, and recruitment management. The
"Let's Explore" button encourages visitors to learn more about the platform.

Fig(4): Home page Of HireLine

Page 11
Login Page:

The image [Figure 5] captures the login page of the HireLine platform, an
employment and applicant tracking system. The page has a gradient background
and a login form in the middle where users enter their email and password. A
"New user Register here" link assists new users in registering. The top navigation
menu gives users access to Home, All Jobs, and Dashboard, while the footer
contains Jobs, Login, Signup, and Post Job links. This login system is an important
part of authenticating users and securing the platform.

Fig(5): Login Page of HireLine

Company Setup Page:


This page [Figure 6] is the Company Setup interface of the HireLine platform,
where employers input essential details about their organization such as name,
description, website, location, and logo. It allows companies to create or update
their profile for visibility within the recruitment system.

Fig(6): Company Setup page of HireLine

Page 12
Register Page:

The picture [Figure 7] shows the registration page of HireLine, a job recruitment
website. The form contains fields for full name, email, password, gender, address,
and user type (Candidate or Employer). A Register button lets new users sign up,
and a login link is provided for registered users. The page has a modern design
with a gradient background, making it professional and interactive. The top
navigation bar provides access to Home, All Jobs, and Dashboard, and the footer
provides links to key features such as Jobs, Login, Signup, and Post Job.

Fig(7): Register Page of HireLine

Page 13
Jobs Page:

HireLine features a job listing interface that displays six job postings in a clean
card layout. Each job card shows key details like company name, job title,
description, salary, available positions, and relevant tags such as Engineering
Trainee or Full-Time. Users can click Details for more information or Save for Later
to bookmark jobs. The header includes navigation options (Home, Browse, Jobs)
and a user profile icon. The design features well-organized sections for easy
understanding, letting job seekers browse listings easily while maintaining a
visually appealing and user-friendly experience.

Fig(8):Job Page of HireLine

Dashboard Page:

The image shows the HireLine registered-company page, where companies update
their profile and can later add jobs. On this page, TCS and LTIMindtree are used as
examples.

Fig(9):Dashboard Page of Hireline

Page 14
Job Posting Page:

This image from the HireLine application shows companies uploading their jobs to
the portal. They can also set numerical criteria (e.g., educational qualifications,
CGPA, language proficiency) and qualitative factors (e.g., problem-solving ability,
leadership skills, technical expertise). Recruiters can also assign weights to these
criteria, ensuring evaluations align with company priorities, as described later in
the Code Snippets section.

Fig(10):Job Posting Page of HireLine
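The weighted-criteria idea described above can be sketched as a weighted average scaled to the score out of 1000. The criteria names, ratings, and weights below are illustrative examples, not HireLine's actual configuration.

```javascript
// Illustrative weighted-criteria scoring: each criterion has a rating in
// [0, 1] and a recruiter-assigned weight; the weighted average is scaled
// to a score out of 1000, matching the scoring scale used in this report.
const weightedScore = (ratings, weights) => {
  let total = 0, weightSum = 0;
  for (const [criterion, weight] of Object.entries(weights)) {
    total += (ratings[criterion] || 0) * weight; // missing rating counts as 0
    weightSum += weight;
  }
  return weightSum ? Math.round((total / weightSum) * 1000) : 0;
};
```

Because the weights normalize out, a recruiter can emphasize, say, technical expertise over CGPA simply by assigning it a larger weight, which is the alignment with company priorities mentioned above.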

Students or Job Seeker Profile Page:


HireLine makes life easier for job seekers by letting them update their profile in
real time. Users can view their details such as name, email, phone number, and
skills, and can also update their bio. They can update their resume in an instant,
whether by drag-and-drop or by clicking to upload the file.

Fig(11):Profile page of HireLine

Page 15
CODE SNIPPETS

Fig(12): Database

Understanding the Code:

1. import mongoose from 'mongoose';

Think of it this way: if you need special tooling to work with MongoDB, this
statement is akin to saying, "Go get the 'mongoose' toolkit, which enables us to
talk to MongoDB." Mongoose is a widely used library that gives us a simpler way of
talking to MongoDB from JavaScript.

2. // Connect to MongoDb

This is a comment. Comments are lines that the computer does not run; they exist
to help people understand what the code is doing.

Page 16
3. const connectDB = async () => { ... };

This line creates a function named connectDB. A function is a group of steps that
the computer can run. The async keyword means the function may take some time
to complete (since connecting to a database takes a while).

4. try { ... } catch (error) { ... }

This is a "try-catch" block. It is like saying, "Try doing this, and if it fails, catch
the error and do this instead." This lets the program handle problems gracefully.

5. await mongoose.connect(process.env.MONGO_URI);

This is the core of the connection.

mongoose.connect() is the method that actually connects to the MongoDB
database.

process.env.MONGO_URI is how we retrieve the database URL. Imagine it like a
web URL or a secret key that tells the program where the database resides.
Placing it in process.env is one way of keeping sensitive data (such as database
passwords) separate from the rest of the code.

await is used because it takes time to connect to the database. It makes the
program wait for the connection to be established before moving further.

6. console.log('MongoDB Connected.');

Once the connection is established, this line prints the message "MongoDB
Connected." to the console (a window that displays text). This informs the
programmer that the connection was successful.

7. console.error('Failed to connect to MongoDB', error.message);

If there is an error with the connection, this line prints an error message to the
console, along with information about the error (error.message).

Page 17
8. process.exit(1);

If there is a mistake in the connection, this line halts the program. The 1 indicates
that the program halted due to an error.

9. export default connectDB;

This line exposes the connectDB function to other sections of the program. It is
like saying, "If any section of the program wishes to access the database, use this
function."
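Putting the pieces walked through above back together gives the following sketch of the connectDB pattern. The real code calls mongoose.connect() and process.exit(1); here a hypothetical stand-in connect() is used and a boolean is returned instead of exiting, so the snippet runs on its own without a database or the mongoose package.

```javascript
// Stand-in for mongoose.connect(), so this sketch runs without MongoDB.
// It only mimics the interface: resolve on a URI, throw when it is missing.
const connect = async (uri) => {
  if (!uri) throw new Error('connection string is missing');
  return { uri }; // placeholder for a real connection object
};

// Same try/catch + env-variable pattern as the code explained above.
const connectDB = async () => {
  try {
    // Real code: await mongoose.connect(process.env.MONGO_URI);
    await connect(process.env.MONGO_URI);
    console.log('MongoDB Connected.');
    return true;
  } catch (error) {
    console.error('Failed to connect to MongoDB', error.message);
    return false; // the real code calls process.exit(1) here instead
  }
};
```

The shape of the function (async, try-catch around a single awaited connect call, success and failure logging) is the part that carries over directly to the real Mongoose code.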

Fig(13):Application model

Page 18
Understanding the Code:

1. import mongoose from 'mongoose';

Just like the above code, this line is importing the Mongoose library, which is
utilized for MongoDB interaction.

2. const applicationSchema = new mongoose.Schema({ ... });

This line creates a new "schema" with Mongoose.

Think of a schema as a template or a blueprint. It specifies the structure of the
data that is to be kept in the database.

The curly brackets { ... } hold the field definitions (or columns) to be included in
the "Application" data.

3. job: { ... }

This creates a field named "job" in the "Application" schema.

type: mongoose.Schema.Types.ObjectId,

This means the "job" field will store a special type of ID called an "ObjectId,"
which MongoDB uses to reference documents (rows) in other collections (tables).

ref: "Job",

This means the "job" field will point at documents in another collection named
"Job". This establishes an association between the "Application" and "Job"
collections.

required: true

This means the "job" field must be filled in when creating a new "Application."
Page 19
4. applicant: {. }

Just like "job," it defines a class named "applicant."

type: mongoose.Schema.Types.ObjectId

This specifies that the "applicant" field would be an ObjectId too.

ref: "User,

This implies that the "applicant" field will point to documents in a "User" collection,
creating a relationship between "Application" and "User."

required: true

This field is also mandatory.

5. status: { ... }

This creates a field called "status."

type: String,

This specifies that the "status" field will contain a string of characters.

enum: ["pending", "accepted", "rejected"],

This defines a set of permitted values for the "status" field. It is either just "pending,"
"accepted," or "rejected." This maintains data consistency.
6. default: "pending",

This makes "pending" the default value of "status" when no other value is specified while creating a new "Application".

7. timestamps: true,

This introduces two special fields into the schema: createdAt and updatedAt.
createdAt automatically logs the date and time of document creation.

updatedAt automatically records the date and time the document was last saved.

This is also useful for monitoring when data was added or changed.

8. const Application = mongoose.model("Application", applicationSchema);

This line creates a "model" named "Application" from our applicationSchema created
above. Think of a model as a way of accessing the database via the schema. It
provides ways to insert, read, update, and delete "Application" documents.
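Since the figure itself is not reproduced here, the validation rules described in points 3 to 7 can be illustrated with a plain JavaScript helper. This is only a behavioural sketch; in the actual code Mongoose enforces required, enum, default, and timestamps automatically, and makeApplication is a hypothetical name introduced for illustration:

```javascript
// Behavioural sketch of the Application schema's rules (illustrative only;
// Mongoose enforces these automatically in the real model).
const ALLOWED_STATUSES = ['pending', 'accepted', 'rejected']; // the enum

function makeApplication({ job, applicant, status = 'pending' }) { // the default
  if (!job || !applicant) {
    throw new Error('job and applicant are required'); // required: true
  }
  if (!ALLOWED_STATUSES.includes(status)) {
    throw new Error('status must be pending, accepted or rejected');
  }
  const now = new Date();
  // timestamps: true adds these two fields automatically in Mongoose
  return { job, applicant, status, createdAt: now, updatedAt: now };
}
```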

9. import mongoose from 'mongoose';

We used Mongoose, a very popular library that makes it easy to work with MongoDB from JavaScript.

10. const connectDB = async () => { ... }

This declares a function called connectDB. A function is similar to a list of instructions that the computer can execute. The async keyword indicates that the function may take some time to finish (a database connection can easily take a second).

11. try { ... } catch (error) { ... }

This is a "try-catch" block. It is a way of saying, "Try to do this, and if something goes wrong, catch the error and do this instead." This allows the program to handle problems gracefully.

12. await mongoose.connect(process.env.MONGO_URI);

This is the core of the connection.

mongoose.connect() is the function that actually establishes the connection to the MongoDB database.

process.env.MONGO_URI is how the program obtains the database URL, a connection string that tells it where the database is. Keeping it in process.env is a common way of keeping sensitive information (like database passwords) out of the source code.

await is used because connecting to a database may take some time; it makes the program wait until the connection is established.
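As a minimal illustration of the process.env pattern (MONGO_URI is the variable name used in the report; the value below is a placeholder assigned in code purely so the example is self-contained, whereas a real deployment would set it in the shell or a .env file):

```javascript
// Normally set outside the program (shell, .env file, deployment config);
// assigned here only for demonstration.
process.env.MONGO_URI = 'mongodb://localhost:27017/jobportal';

// The connection code then reads it instead of hard-coding the URL:
const uri = process.env.MONGO_URI;
console.log(uri.startsWith('mongodb://')); // prints true
```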

13. console.log('MongoDB Connected.');

If the connection succeeds, this line prints "MongoDB Connected." to the console (a simple text output window), informing the programmer that the connection was successful.

14. console.error('Could not connect to MongoDB', error.message);

If there is an error in connection, this line prints an error message to the console, along
with error information (error.message).

15. process.exit(1);

If the connection fails, this line halts the program; the exit code 1 indicates that the program stopped because of an error.

16. export default connectDB;

This statement makes the connectDB function accessible throughout the program. It is
as if saying, "Anyone who wants to connect to the database may use this function."

Fig(13): Score calculation based on resume

Understanding the Code:

1. Job Description

${description}

This indicates that the job description text will be inserted here. The ${...} syntax is JavaScript template-literal interpolation, marking a variable or placeholder.

2. Skill Requirements

${requirements}

Similar to the job description, this indicates that the required skills for the job will be
inserted here.

3. Numerical Parameters (Objective Evaluation)

${numericalParametersText}

This section will contain a list of numerical parameters to evaluate from the resume.
Examples might be years of experience, number of projects, or GPA.

4. Intellectual Parameters (Subjective Evaluation)

${intellectualParametersText}

This section will contain a list of more subjective parameters to evaluate, such as
"technical depth of projects" or "leadership quality."

5. Task

LLM analyzes the given resume text and assigns a score out of 10 for each parameter
based on relevance, completeness, and quality. Ensure the scoring follows a fair, unbiased
methodology based on industry standards.

The core instruction tells the system to analyze the resume and then score each parameter, both numerical and intellectual, out of 10.

6. Expected JSON Response Format:

The output of the analysis must be a JSON object (JSON stands for JavaScript Object Notation).

7. { "numerical_scores": { ... }, "intellectual_scores": { ... }, "final_weighted_score":
(total_weighted_score_out_of_1000) }

This is the overall structure of the JSON output:

numerical_scores: the object containing the scores for the numerical parameters.

intellectual_scores: the object containing the scores for the intellectual parameters.

final_weighted_score: the final score, out of 1000, computed as a weighted average of the individual scores.

8. "numerical_scores": { ${numParamArr.map(param => `"${param}": ${score_out_of_100}`).join(",\n")} }

This is the part of the JSON structure that generates the numerical-score entries dynamically.

${numParamArr.map(param => ...)}: this loops through the array numParamArr, which contains the names of the numerical parameters.

`"${param}": ${score_out_of_100}`: for every stored parameter, a key-value pair is created where the key is the parameter name and the value is its score out of 100.

.join(",\n"): the key-value pairs are joined with commas and newlines to produce a well-formed JSON string for parsing and analysis.

9. "intellectual_scores": { ${intelParamArr.map(param => `"${param}": ${score_out_of_100}`).join(",\n")} }

This is similar to the numerical part: it generates the intellectual-score entries from an array called intelParamArr, which contains the names of the intellectual parameters.

10. The JSON.parse function:

JSON.parse converts the JSON text returned by the LLM into a JavaScript object. Requiring the response to be valid JSON is what makes the scores easy to parse and process programmatically.
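The prompt construction and response handling described in points 1 to 10 can be sketched as follows. Variable names (description, requirements, numParamArr, intelParamArr) follow the walkthrough, but the exact prompt wording and the sample scores are assumptions:

```javascript
// Hedged sketch of the scoring prompt and the JSON.parse step.
const description = 'Backend developer for a job portal';
const requirements = 'Node.js, MongoDB, REST APIs';
const numParamArr = ['years_of_experience', 'project_count'];
const intelParamArr = ['technical_depth', 'leadership'];

// Template-literal interpolation fills in the placeholders; the
// map/join calls expand each parameter into a JSON key-value slot.
const prompt = `Job Description: ${description}
Skill Requirements: ${requirements}
Score each parameter out of 100 and reply with JSON only:
{
  "numerical_scores": { ${numParamArr.map(p => `"${p}": <score>`).join(',\n    ')} },
  "intellectual_scores": { ${intelParamArr.map(p => `"${p}": <score>`).join(',\n    ')} },
  "final_weighted_score": <total out of 1000>
}`;

// A mock LLM reply; the real reply would come from the Gemini API.
const reply = JSON.stringify({
  numerical_scores: { years_of_experience: 70, project_count: 55 },
  intellectual_scores: { technical_depth: 80, leadership: 60 },
  final_weighted_score: 662,
});

// JSON.parse turns the model's text back into a JavaScript object.
const scores = JSON.parse(reply);
```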

RESULT AND DISCUSSION

The result analysis of the AI-based resume parsing and ranking system is done by comparing it with keyword-based screening and NLP-based resume parsers. The analysis covers four important metrics:

Accuracy (%) – It measures how accurately the system selects appropriate candidates.

False Positives (%) – It shows the percentage of wrongly shortlisted candidates.

Processing Time (ms) – It is the time required to process one resume.

Bias Reduction (%) – Assesses the extent to which the system reduces hiring biases.
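As a small illustration of how the first two metrics would be computed (the counts below are hypothetical and not the project's actual results):

```javascript
// Hypothetical screening run: 200 resumes, 170 classified correctly,
// 18 unsuitable candidates wrongly shortlisted.
const total = 200;
const correct = 170;
const wronglyShortlisted = 18;

const accuracy = (correct * 100) / total;                     // 85 (%)
const falsePositiveRate = (wronglyShortlisted * 100) / total; // 9 (%)
```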

Analysis of the Results:

Rule-based parsing is limited in accuracy and struggles with complex resume formats.
NLP-based approaches improve performance by recognizing job-related terms but still
have constraints in handling variations in resume structures.

Machine learning models provide better adaptability and accuracy by training on diverse
datasets.

BERT-based deep learning models excel in contextual understanding but require significant computational resources.

The LLM used in this project is significantly faster than the others at resume parsing while achieving the highest accuracy in the shortest processing time, so many resumes can be processed quickly.

Comparison with Existing Systems

This section provides a comparative analysis between the proposed system and traditional recruitment methods:

Analysis of the Results:

AI-powered ATS improves hiring efficiency by automating resume parsing and candidate ranking.

LLMs like Gemini-2.0-Flash enhance accuracy by understanding context, skills, and experience.

Reduces bias in hiring by focusing on qualifications rather than manual judgments.

Supports multiple languages, making global recruitment seamless.

Shortens hiring cycles by quickly filtering and ranking candidates.

Ensures data security and compliance with hiring regulations.

Scalable and adaptable to evolving job market trends.

FUTURE SCOPE

The future potential of this Applicant Tracking System (ATS) lies in incorporating increasingly sophisticated AI-powered technologies to further improve the efficiency, accuracy, and fairness of hiring. Using the Gemini-2.0-Flash LLM, the ATS is already much smarter than traditional NLP-based systems: it is more responsive and better at analysing complex job requirements and matching applicants against detailed specifications. Such models can review resumes contextually and pinpoint skills that are not explicitly stated but are implied by work experience and previous positions. AI-driven video interview analysis is another potential future development. With Natural Language Processing (NLP) and facial recognition capabilities, the platform could assess candidates' speech patterns, level of confidence, and general communication ability. Such automated interview scoring can benefit recruiters by offering important insights, mitigating bias, and enhancing the accuracy of selection.

One of the major features is multilingual support. Because the model is already pre-trained on many languages, if candidates upload resumes in their native language the LLM can extract, parse, and analyse them efficiently.

The tracking system can also make job matching adaptive: it can learn from the job descriptions uploaded by companies and from the resumes uploaded by candidates, and the AI model can use that information to reduce bias while scoring candidates' resumes.

Finally, the integration of high-end recruitment analytics and AI-powered dashboards can enable organizations to monitor hiring trends, diversity statistics, and hiring effectiveness, supporting data-based decision-making for efficient workforce planning. The ATS of the future will automate recruiting, making it smarter, more inclusive, and highly streamlined.

CONCLUSION

The Applicant Tracking System developed in this project is a quantum leap in streamlining recruitment using up-to-date technologies such as Natural Language Processing (NLP), Machine Learning (ML), and Large Language Models (LLMs), including Gemini-2.0-Flash. The system overcomes the usual pitfalls of the traditional recruitment process, such as posting jobs, screening resumes, and filtering candidates, because this work is all handled by automation. By leveraging AI, it minimizes human bias, eliminates manual work, and provides an impartial assessment based on agreed criteria.

This applicant tracking system's ability to handle a large number of applications while
maintaining remarkable accuracy in parsing resumes and matching them to job
descriptions is one of its most notable features. Key information such as education,
experience, and skills are extracted by the AI-powered resume analyzers and compared
to the job specifications. This procedure is further improved by Gemini-2.0-Flash,
which evaluates resumes in a manner that closely resembles human comprehension,
identifying industry-specific language and contextual subtleties. As a result, recruiters
receive excellent candidate recommendations.

The project also gives scalability and user experience top importance with a simple
website that enables interaction among recruiters, companies, coordinators, and
candidates. Its capacity to fit various hiring situations—whether for small groups or
big corporations—shows the platform's adaptability across various organizational
needs. Strong safeguards that protect candidate data and comply with data privacy regulations ensure that privacy and information integrity remain top priorities.

Following thorough testing, including security checks, accuracy tests, and stress tests, the applicant tracking system (ATS) has proven to be a reliable tool for modern hiring. It raises the quality of new hires, reduces hiring time, and enhances the overall experience for companies and applicants. By ranking resumes and sorting candidates automatically, it frees recruiters to focus on strategic decision-making rather than administrative duties, enabling them to make better choices.

BIBLIOGRAPHY

[1]. Salakar, E., Rai, J., Salian, A., Shah, Y. and Wadmare, J., 2023, December.
Resume Screening Using Large Language Models. In 2023 6th International
Conference on Advances in Science and Technology (ICAST) (pp. 494-499). IEEE.

[2]. Patel, S., Patel, J., Shah, D., Goel, P. and Patel, B., 2024, November. A RAG
based Personal Placement Assistant System using Large Language Models for
Customized Interview Preparation. In 2024 5th International Conference on Data
Intelligence and Cognitive Informatics (ICDICI) (pp. 1468-1475). IEEE.

[3]. Prasad, B.L., Srividya, K., Kumar, K.N., Chandra, L.K., Dil, N.S.S.K. and Krishna,
G.V., 2023, October. An Advanced Real-Time Job Recommendation System and
Resume Analyser. In 2023 International Conference on Self Sustainable Artificial
Intelligence Systems (ICSSAS) (pp. 1039-1045). IEEE.

[4]. Gulati, V., Gupta, I., Firdous, F. and Narwal, R., 2024. Resume Analyzer Using
Natural Language Processing (NLP).

[5]. Bhatt, A., Uniyal, A., Jyala, D., Mittal, S., Tiwari, P. and Singh, D., 2024, March.
Resume Analyzer based on MapReduce and Machine Learning. In 2024 IEEE
International Conference on Interdisciplinary Approaches in Technology and
Management for Social Innovation (IATMSI) (Vol. 2, pp. 1-5). IEEE.

[6]. Prashanth, V.J., Gopinath, S., Udith, S. and Kavitha, C.R., 2024, July. Resume
Analyzer and Skill Enhancement Recommender System. In 2024 Asia Pacific
Conference on Innovation in Technology (APCIT) (pp. 1-6). IEEE.

[7]. Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K. and
Kuksa, P., 2011. Natural language processing (almost) from scratch.

[8]. Zu, S. and Wang, X., 2019. Resume information extraction with a novel text block segmentation algorithm. Int J Nat Lang Comput, 8(2019), pp. 29-48.

[9]. Kaygin, E., 2023. Comparative Analysis of ML (Machine Learning) and LLM (Large Language Models) in Resume Parsing: A Paradigm Shift in Talent Acquisition.
