Project Report 8th Sem
CSE (IOT,CS,BT)
This is to certify that the project titled “Enhancing recruitment through AI with an
LLM-powered resume analyzer for smarter and more efficient hiring” submitted by
Arkaprava Roy (Section A) 12021002029096, Aakash Nath (Section A)
12021002029097, Akash Dey (Section A) 12021002029098, Sumon Santara
(Section B) 12021002029095, Promit Kumar Bodak (Section A) 12021002029094,
Suman Mondal (Section A) 12021002029085, Sneha Chakraborty (Section A)
12021002029190, Rounak Chatterjee (Section A) 12021002029102, Kingshuk
Bhowmick (Section B) 12021002029088, and Tanmoy Daw (Section A)
12021002029035, students of the University of Engineering & Management, Kolkata, in
partial fulfillment of the requirements for the degree of Bachelor of Computer Science and
Engineering (Internet of Things, Cybersecurity, Blockchain Technology), is a bona fide
work carried out by them under the supervision and guidance of Prof. (Dr.) Siddhartha
Roy during the 8th semester of the academic session 2021-25. The content of this report has
not been submitted to any other university or institute. I am glad to state that the
work is entirely original and its performance has been found to be quite satisfactory.
We would like to take this opportunity to thank everyone whose cooperation and
encouragement throughout the course of this project has been invaluable to us.
We are sincerely grateful to our guide Prof. (Dr.) Siddhartha Roy and to the Head of the
Department, Prof. (Dr.) Sandip Mondal of CSE (IoT, CS, BT), University of
Engineering & Management, Kolkata, for their wisdom, guidance, and inspiration, which
helped us carry this project through and take it to where it stands now.
Last but not least, we would like to extend our warm regards to our families
and peers who kept supporting us and always had faith in our work.
Arkaprava Roy
Aakash Nath
Akash Dey
Kingshuk Bhowmick
Rounak Chatterjee
Sumon Santara
Suman Mondal
Sneha Chakraborty
Tanmoy Daw
TABLE OF CONTENTS
ABSTRACT
1. INTRODUCTION
2. LITERATURE SURVEY
3. RESEARCH GAP
4. PROBLEM STATEMENT
5. PROPOSED SOLUTION
6. SYSTEM DESIGN
7. EXPERIMENTAL SETUP
8. SETUP
9. CODE SNIPPETS
10. RESULT AND DISCUSSION
11. FUTURE SCOPE
12. CONCLUSION
BIBLIOGRAPHY

ABSTRACT
This project aims to automate the hiring process with the help of LLM-powered
resume matching. It screens candidates' resumes, classifies candidates, and ranks
applications by how well they match the job description. We have developed a
feature that helps recruiters shortlist candidates at scale, saving time and
reducing the hassle of managing applicants.
The system goes through every single resume, creates a custom profile for each
candidate, and assigns a score after matching the resume against the job description.
The result is more accurate, effective, and fair screening. This report gives a detailed
overview of the project's development, methodologies, and impact on modern
recruitment, and offers insights for future technological innovation in automated
hiring solutions.
INTRODUCTION
This smart Applicant Tracking System (ATS) refines candidate shortlisting. It
interprets the relevance of experience and skills rather than relying on keyword
matching, and it accurately ranks and filters candidates to select the best fit for the
respective job. To increase accuracy, we have used Google's Gemini 2.5 Flash
Large Language Model, which efficiently scans job titles and descriptions. It
creates an individual profile for each candidate and gives each of them a score
out of 1000.
The website is also capable of creating individual profiles for candidates and
recruiters. A candidate can apply for as many jobs as they want, and everything
is reflected in the candidate's profile. Similarly, the recruiter can manage job
postings and view candidates' profiles and resumes.
This report briefs the development, concept, and market impact of HireLine.
Resume automation, large-scale hiring decisions, and candidate ranking together
increase recruitment efficiency, making it a crucial tool in the current job market
for recruiters as well as job seekers.
LITERATURE SURVEY
Researchers have developed new segmentation algorithms that split resumes into
meaningful text blocks. These algorithms recognize important sections such as work
experience, education, skills, and certifications, allowing information to be properly
categorized and organized. By segmenting resumes into smaller pieces, parsers can
minimize errors and improve accuracy, which makes them more efficient in bulk
recruitment processes [2].
New learning techniques have also contributed to refining resume parsing models, in
particular prompt-based approaches. These allow Natural Language Processing systems
to make predictions from a small amount of training data, making them adaptable to new
resume formats with minimal error. This is particularly useful in recruitment, where
resumes vary in structure and style. Prompt-based learning also allows these models to
infer relationships between job postings and resume content, enhancing candidate-job
matching accuracy [3].
Older analyzers also increase the time and effort required for resume parsing, whereas
newer systems combine Natural Language Processing with context awareness.
Conventional rule-based systems tend to fail with vague language and diverse resume
formats, resulting in inconsistent candidate evaluation. Contemporary models instead
take advantage of deep learning and contextual embeddings to understand resume
content. These models can differentiate between roles with similar job titles, connect
hierarchical relationships between work experiences, and extract key information from
natural-language descriptions. This capacity for free-text resume processing enhances
candidate-job fit [4].
The Python library Pyresparser is now a valuable asset in applicant tracking systems (ATS).
It uses tokenization, named entity recognition, and vector embeddings to extract
information from resumes with great efficiency. Pyresparser can accurately extract
names, job titles, company names, skills, and educational information. With the
integration of such libraries into ATS platforms, recruiters can automate resume
screening, saving manual effort and making the initial candidate selection more efficient
[5].
Text mining and similarity scoring algorithms are also commonly used to shortlist
candidates by how well they match a job description. These algorithms apply cosine
similarity, Term Frequency-Inverse Document Frequency (TF-IDF), and word
embeddings to match resume content against job descriptions. The higher the score, the
greater the similarity between a candidate's background and the job description. This
automated ranking process enables recruiters to prioritize top candidates efficiently and
minimizes the risk of overlooking qualified applicants in bulk hiring scenarios [6].
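As a rough illustration of this idea (a minimal sketch, not code from the cited systems), a resume and a job description can each be turned into a term-frequency vector and compared with cosine similarity; production systems typically add TF-IDF weighting over a corpus and word embeddings for semantic matching:

// Build a simple term-frequency vector from raw text.
const termFreq = (text) => {
  const counts = {};
  for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    counts[word] = (counts[word] ?? 0) + 1;
  }
  return counts;
};

// Cosine similarity between two term-frequency vectors (0 = no overlap, 1 = identical).
const cosineSimilarity = (a, b) => {
  const words = new Set([...Object.keys(a), ...Object.keys(b)]);
  let dot = 0, normA = 0, normB = 0;
  for (const w of words) {
    const x = a[w] ?? 0, y = b[w] ?? 0;
    dot += x * y;
    normA += x * x;
    normB += y * y;
  }
  return dot / (Math.sqrt(normA * normB) || 1);
};

// Example comparison: the closer the score is to 1, the better the match.
const resumeText = "Experienced Node.js developer with MongoDB and REST API skills";
const jobDescription = "Backend developer role requiring Node.js and MongoDB experience";
console.log(cosineSimilarity(termFreq(resumeText), termFreq(jobDescription)));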
Large language models have transformed resume parsing by increasing data-extraction
accuracy and improving interpretation. Unlike conventional machine learning models,
which depend heavily on pre-defined rules and structured data, Large Language Models
are capable of parsing resumes in a more human-like fashion. These models can also
handle multilingual resumes, which makes them useful in international hiring scenarios [7].
ATS applications such as Mettle and Hirepro offer expansive data sets that enable data-
backed hiring decisions. These applications integrate resume-parsing capabilities and
organize candidate information in an efficient manner. However, inconsistent formats,
graphical elements, and varied layouts still slow down automated processing, and
resolving such inconsistencies through document standardization and automated
template adaptation remains an open problem for these models [8].
The use of resume parsers in ATS systems has resulted in structured data extraction,
reducing hiring time and improving overall recruitment efficiency. Despite these benefits,
resume parsing continues to be limited by variations in format, difficult sentence
structures, and domain-specific terminology. Future advances, such as more powerful
deep learning architectures and fine-tuned LLMs, will be instrumental in overcoming
these limitations and realizing seamless automation in recruitment processes [9].
RESEARCH GAP
Despite notable developments in resume parsing using NLP and machine learning,
several issues are yet to be solved. Current models, including deep neural
networks and segmentation approaches, have enhanced the extraction of structured data
from resumes but are still challenged by non-standard resume formats, graphical
content, and domain-specific vocabularies. These variations prevent the full automation
of resume screening and necessitate more enhancements in AI-based document
standardization and template fitting [1][8].
Furthermore, although ATS platforms utilize text mining and similarity scoring methods
(e.g., TF-IDF, cosine similarity), they also struggle to properly rank candidates when
resumes have ambiguous wording. Further, current AI models tend not to consider soft
skills, career path, and cultural alignment—crucial considerations in end-to-end hiring
decisions [6][9].
Our project fills these gaps through the use of LLM-enabled ATS (via Gemini-2.0-
Flash) to improve resume parsing and ranking. In contrast to traditional parsers, our
method combines deep-learning-driven contextual comprehension for improved
processing of unstructured data, multilingual resumes, and implicit skill identification.
Future directions will include reduction of bias, adaptive learning, and enhanced
semantic analysis to further improve automated hiring efficiency.
PROBLEM STATEMENT
The Applicant Tracking System aims to simplify the hiring process, from job posting to
application reception. It organizes recruitment workflows and ensures a systematic
and efficient way of hiring, promoting effective coordination.
The Users:
Candidates, recruiters/employers, and a coordinator interact with the portal. Candidates
search and apply for jobs, recruiters and employers post jobs and review applications,
and the coordinator manages employers and job posts.
Application Flow:
The candidate signs up on the portal, searches job vacancies, and applies for positions.
The candidate uploads a resume and fills out the necessary form before submission.
Shortlisted candidates' applications are shown on both the employer's and the
recruiter's dashboards for final inspection.
This website provides a smooth hiring process and enhances efficiency, transparency,
and coordination throughout all phases of recruiting.
PROPOSED SOLUTION
When a recruiter hires candidates in large numbers, the process becomes very time-consuming
and requires a lot of effort. The recruitment process changes with time, and businesses
want quicker and more accurate solutions. Traditional screening processes are slow, prone to
human error, and miss talent. That is why we have created a Resume Analyzer based on
Large Language Models (LLMs) and Machine Learning to automate the recruitment process.
This smart Applicant Tracking System (ATS) refines candidate shortlisting: it interprets the
relevance of experience and skills rather than relying on keyword matching, and it accurately
ranks and filters candidates to select the best fit for the respective job. To increase
accuracy, we have used Google's Gemini 2.5 Flash Large Language Model, which
efficiently scans job titles and descriptions, creates an individual profile for each
candidate, and gives each of them a score out of 1000.
The website is also capable of creating individual profiles for candidates and the
recruiter. A candidate can apply for as many jobs as they want, and everything is
reflected in the candidate's profile. Similarly, the recruiter can manage job postings
and view candidates' profiles and resumes. This report briefs the development,
concept, and market impact of HireLine. Resume automation, large-scale hiring decisions,
and candidate ranking together increase recruitment efficiency, making the system a
crucial tool in the current job market for recruiters as well as job seekers.
Security and compliance are the foundation of the system. The Resume Analyzer employs a
hybrid database approach to handle different resume types efficiently and provides robust
APIs for integration with existing ATS platforms. MongoDB, a NoSQL database, is
employed to store job listings and user information. This streamlines the recruitment
process while ensuring a secure and highly efficient hiring procedure.
SYSTEM DESIGN
Developing the Applicant Tracking System (ATS) involves defining the structure and
components needed to make it function effectively and scale easily. The system
comprises a user-friendly interface, a backend unit, and an AI resume analyzer.
The frontend is developed as a website where recruiters, employers, and applicants
interact. It includes job shortlisting, application forms, resume upload, and recruiter
dashboards.
The backend is responsible for storing, processing, and managing data. It also has a
database to track candidate information, job listings, and updates to application status.
The resume parser and ranking algorithms employ Natural Language Processing (NLP)
and Machine Learning (ML).
The AI model, Gemini-2.0-Flash, is used to scan resumes for vital details such as skills,
experience, and qualifications in order to match candidates with the job. The system
also has security features to protect user data and comply with hiring regulations.
The ATS therefore facilitates fast, automatic, and unbiased hiring and enhances the
effectiveness of hiring decisions.
Figure(1) shows that when a candidate submits their resume for a job application, the
resume is first stored in the database. The same resume is then processed by the Large
Language Model (LLM) to calculate its score. The recruiter shortlists candidates
based on the criteria for the job, and the coordinator manages both employers and job posts.
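A minimal sketch of this submission flow on the backend is given below. It assumes an Express backend, the Mongoose Application model described later in the Code Snippets section, an authentication middleware that sets req.user, a numeric score field on the application, and a hypothetical scoreResume helper that wraps the LLM call; none of these names are taken from the actual codebase.

import express from "express";
import Application from "../models/application.js";       // illustrative path
import { scoreResume } from "../services/scoreResume.js";  // hypothetical LLM helper

const router = express.Router();

router.post("/jobs/:jobId/apply", async (req, res) => {
  try {
    // 1. Store the application (with references to the job and the applicant).
    const application = await Application.create({
      job: req.params.jobId,
      applicant: req.user._id,
      status: "pending",
    });

    // 2. Ask the LLM to score the uploaded resume against the job description.
    const { final_weighted_score } = await scoreResume(req.body.resumeText, req.params.jobId);

    // 3. Persist the score so recruiters can rank and shortlist candidates.
    //    (Assumes the schema also defines a numeric "score" field.)
    await Application.findByIdAndUpdate(application._id, { score: final_weighted_score });

    res.status(201).json({ applicationId: application._id, score: final_weighted_score });
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
});

export default router;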
Activity Diagram of HireLine - The Resume Parsing System:
Figure(2) shows that the process starts with a candidate submitting their application and
uploading their resume. The AI checks the resume's validity; if it is invalid, the
application is rejected. Valid resumes are processed using a Large Language Model
(LLM) to generate a resume score. Based on these scores, candidates are shortlisted.
The recruiter then verifies the shortlisted candidates, leading to the final selection
process. This ensures that only qualified candidates advance through the selection stages.
Flowchart of the job-uploading and hiring process in HireLine:
Figure(3) shows that the job application process starts with account creation and job
posting review. Candidates upload resumes and create a profile with their information,
which recruiters review. If the recruiter's criteria are met, candidates are shortlisted;
otherwise, they are rejected. Shortlisted applications are sent to employers for final
review. Selected candidates receive offers while others are rejected. This process ensures
that only qualified and verified candidates reach the final stages.
EXPERIMENTAL SETUP
The Experimental setup for the Applicant Tracking System (ATS) involves the latest AI
innovations, such as the integration of Large Language Models (LLMs) like Gemini-
2.0-Flash, to enhance resume parsing, job matching, and recruitment processes. The
system is engineered to be highly scalable and efficient, with smooth operations at every
phase of the hiring process. A cloud-hosted NoSQL database (MongoDB) is set up to keep job
postings, candidate information, resumes, recruiter comments, and application statuses
secure and intact. A web-based user interface is developed to offer easy navigation to
employers, recruiters, and candidates, automating interactions in the system.
To make it robust, the system is subjected to intense testing, including stress testing to
assess its capacity to process large numbers of applications, accuracy testing to
determine the precision of AI-based resume parsing and ranking, and security testing to
safeguard candidate data against threats. Recruiters' and applicants' feedback is obtained
to further refine the system's efficacy and precision. By combining AI-driven
automation with LLM-based resume analysis, the ATS minimizes human effort,
enhances the accuracy of hiring, and provides a fair and streamlined recruitment
process, and thus, it is a trustworthy solution for the needs of current hiring.
SETUP
Home Page:
Login Page:
The image [Figure 5] shows the login page of the HireLine platform, an
employment and applicant tracking system. The page has a gradient
background and a login form in the middle where users enter their email and
password. A "New user? Register here" link assists new users in registering. The
top navigation menu gives users access to Home, All Jobs, and Dashboard,
while the footer contains Jobs, Login, Signup, and Post Job links. This login
system is an important part of authenticating users and securing the platform.
Register Page:
Jobs Page:
HireLine features a job listing interface that displays six job postings in a clean card
layout. Each job card displays key details such as company name, job title, description,
salary, available positions, and relevant tags such as Engineering Trainee or Full-Time.
Users can click Details for more information or Save for Later to bookmark jobs. The
header includes navigation options (Home, Browse, Jobs) and a user profile icon. The
design features well-organized sections for easy understanding, so job seekers can
browse job listings easily while enjoying a visually appealing and user-friendly
experience.
Dashboard Page:
The image shows the HireLine registered-company page, where companies update
their profile and can add jobs later on. On this page we used TCS and LTIMindtree
as examples.
Job Posting Page:
This image from the HireLine application shows companies uploading their jobs to
the portal. They can also set numerical criteria (e.g., educational qualifications,
CGPA, language proficiency) and qualitative factors (e.g., problem-solving ability,
leadership skills, technical expertise). Recruiters can additionally assign weights to
these criteria, ensuring evaluations align with company priorities; a worked example
is sketched below, and the implementation is described later in the Code Snippets section.
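As a rough illustration only (the parameter names and weights here are made up, not taken from the application), recruiter-assigned weights can turn per-parameter scores out of 10 into a final score out of 1000 like this:

// Each score is out of 10; weights are chosen by the recruiter and sum to 1.
const weightedScore = (scores, weights) => {
  const total = Object.keys(scores)
    .reduce((sum, key) => sum + scores[key] * (weights[key] ?? 0), 0);
  return Math.round((total / 10) * 1000); // scale the 0-10 weighted average to 0-1000
};

console.log(weightedScore(
  { cgpa: 8, problemSolving: 6, leadership: 7 },
  { cgpa: 0.5, problemSolving: 0.3, leadership: 0.2 }
)); // (8*0.5 + 6*0.3 + 7*0.2) / 10 * 1000 = 720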
CODE SNIPPETS
Fig(12): Database
1. Importing Mongoose
The first line brings in the Mongoose library. Needing special tooling to work with
MongoDB is like saying, "Go get the 'mongoose' toolkit, which enables us to talk to
MongoDB." Mongoose is a widely used library that gives us a simpler way of talking
to MongoDB from JavaScript.
2. // Connect to MongoDB
This is a comment. Comments are pieces of code that are not run by the
computer; they exist to help people understand what the code is doing.
3. const connectDB = async () => { ... };
This line creates a function named connectDB. A function is a group of steps that the
computer can run. The async keyword means that the function may take some time to
complete (since connecting to a database takes a while).
4. try { ... } catch (error) { ... }
This is a "try-catch" block. It says, "Try doing this, and if it fails, catch the error and
do this instead." It lets the program handle problems gracefully.
5. await mongoose.connect(process.env.MONGO_URI);
await is used because it takes time to connect to the database. It makes the program
wait for the connection to be established before moving on.
6. console.log('MongoDB Connected.');
If the connection succeeds, this line prints "MongoDB Connected." to the console so
the developer knows the connection worked. If there is an error with the connection,
the catch block instead prints an error message to the console, along with information
about the error (error.message).
7. process.exit(1);
If there is a problem with the connection, this line halts the program. The 1 indicates
that the program stopped due to an error.
8. export default connectDB;
This line exposes the connectDB function to other parts of the program. It is like
saying, "If any part of the program wishes to connect to the database, use this function."
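Putting the pieces described above together, the connection code in Fig(12) most likely resembles the following sketch (reconstructed from the explanation; the exact error-message text is assumed):

import mongoose from 'mongoose';

// Connect to MongoDB
const connectDB = async () => {
  try {
    // Wait for the connection to be established before continuing
    await mongoose.connect(process.env.MONGO_URI);
    console.log('MongoDB Connected.');
  } catch (error) {
    // Report what went wrong, then stop the program with a non-zero exit code
    console.error('MongoDB connection error:', error.message);
    process.exit(1);
  }
};

export default connectDB;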
Fig(13):Application model
Understanding the Code:
1. Importing Mongoose
Just like the previous snippet, the first line imports the Mongoose library, which is
used for MongoDB interaction.
2. const applicationSchema = new mongoose.Schema({ ... })
The curly brackets { ... } hold the field definitions (or columns) to be stored in each
"Application" document.
3. job: { ... }
type: mongoose.Schema.Types.ObjectId,
This means the "job" field stores a special type of ID called an "ObjectId," which
MongoDB uses to reference documents (rows) in other collections (tables).
ref: "Job",
This means the "job" field points at documents in another collection named "Job".
This establishes an association between the "Application" and "Job" collections.
required: true
This means the "job" field must be filled in when creating a new "Application".
4. applicant: { ... }
type: mongoose.Schema.Types.ObjectId,
ref: "User",
This means the "applicant" field points to documents in the "User" collection,
creating a relationship between "Application" and "User".
required: true
As with "job", this field must be provided.
5. status: { ... }
type: String,
This requires that the "status" field contain a string of characters.
enum: ["pending", "accepted", "rejected"]
This defines the set of permitted values for the "status" field: it can only be "pending",
"accepted", or "rejected". This maintains data consistency.
6. default: "pending"
This makes the default "status" "pending" if no other value is specified when creating
a new "Application".
7. timestamps: true
This introduces two special fields into the schema: createdAt and updatedAt.
createdAt automatically logs the date and time of document creation, and updatedAt
automatically records the date and time the document was last saved. This is useful
for monitoring when data was added or changed.
8. const Application = mongoose.model("Application", applicationSchema);
This line creates a "model" named "Application" from the applicationSchema defined
above. Think of a model as the way of accessing the database through the schema: it
provides methods to insert, read, update, and delete "Application" documents.
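Collecting the fields described above, the Application model in Fig(13) most likely resembles the following sketch (reconstructed from the explanation; only the final export line is an added assumption):

import mongoose from "mongoose";

const applicationSchema = new mongoose.Schema(
  {
    job: {
      type: mongoose.Schema.Types.ObjectId,
      ref: "Job",            // references a document in the Job collection
      required: true,
    },
    applicant: {
      type: mongoose.Schema.Types.ObjectId,
      ref: "User",           // references a document in the User collection
      required: true,
    },
    status: {
      type: String,
      enum: ["pending", "accepted", "rejected"], // only these values are allowed
      default: "pending",
    },
  },
  { timestamps: true }       // adds createdAt and updatedAt automatically
);

const Application = mongoose.model("Application", applicationSchema);
export default Application;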
The remaining explanation revisits the database-connection code already covered under
Fig(12): a try-catch block calls await mongoose.connect(process.env.MONGO_URI),
prints "MongoDB Connected." when the connection succeeds, and on failure prints the
error details (error.message) and stops the program with process.exit(1), before export
default connectDB makes the function available to the rest of the program. One point
worth noting is that process.env.MONGO_URI is how the database URL is obtained:
keeping it in an environment variable is a common way of keeping sensitive information
(such as database passwords) out of the source code.
Fig(13): Score calculation based on the resume
Understanding the Code:
1. Job Description
${description}
This indicates that the job description text will be inserted here. The ${...} syntax is a
JavaScript template-literal placeholder.
2. Skill Requirements
${requirements}
Similar to the job description, this indicates that the required skills for the job will be
inserted here.
3. Numerical Parameters
${numericalParametersText}
This section contains the list of numerical parameters to evaluate from the resume.
Examples might be years of experience, number of projects, or GPA.
4. Intellectual Parameters
${intellectualParametersText}
This section contains the list of more subjective parameters to evaluate, such as
"technical depth of projects" or "leadership quality."
5. Task
"Analyze the given resume text and assign a score out of 10 for each parameter based
on relevance, completeness, and quality. Ensure the scoring follows a fair, unbiased
methodology based on industry standards."
This core instruction tells the LLM to analyze the resume and, after analyzing it, score
each parameter (both numerical and intellectual) out of 10.
6. Output format
The output of the analysis must be a JSON object (JSON stands for JavaScript Object
Notation).
7. { "numerical_scores": { ... }, "intellectual_scores": { ... }, "final_weighted_score":
(total_weighted_score_out_of_1000) }
final_weighted_score: Final Score that is out of 1000 will be the average weighted
score.
$(numParamArr.map(param => ...)): This creates a loop that goes through the array
called numParamArr, containing the names of numerical parameters.
.join(",\n"): The key value -pairs are merged with commas and .newlines to create a
perfect JSON string for parsing and analysing.
It is similar to the numerical part where it generates intellectual scores using an array
called intelParamArr, containing the names of intellectual parameters.
It is a function that instructs the output to be in valid JSON format. The reason for this is
because it will be easier to parse and process by the LLM.
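To make the flow concrete, here is a minimal sketch of how such a prompt could be assembled and sent to Gemini using the @google/generative-ai Node.js client. The helper names, environment variable, and exact prompt wording are illustrative assumptions, not the project's actual code:

import { GoogleGenerativeAI } from "@google/generative-ai";

// Build the prompt from the job posting, the recruiter's parameters, and the resume text.
const buildPrompt = (description, requirements, numParamArr, intelParamArr, resumeText) => `
Job Description:
${description}

Skill Requirements:
${requirements}

Numerical Parameters:
${numParamArr.map(param => `"${param}": (score out of 10)`).join(",\n")}

Intellectual Parameters:
${intelParamArr.map(param => `"${param}": (score out of 10)`).join(",\n")}

Task: Analyze the resume below and score each parameter out of 10 based on relevance,
completeness, and quality. Respond only with a JSON object of the form:
{ "numerical_scores": { ... }, "intellectual_scores": { ... }, "final_weighted_score": <number out of 1000> }

Resume:
${resumeText}
`;

const scoreResume = async (description, requirements, numParamArr, intelParamArr, resumeText) => {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);      // assumed env variable
  const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });
  const prompt = buildPrompt(description, requirements, numParamArr, intelParamArr, resumeText);
  const result = await model.generateContent(prompt);
  return JSON.parse(result.response.text()); // assumes the model returns plain JSON with no extra text
};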
RESULT AND DISCUSSION
The result analysis of the AI-based resume parsing and ranking system is done by
comparing it with keyword-based screening and NLP-based resume parsers. The
analysis considers several important metrics, including:
Accuracy (%) – measures how accurately the system selects appropriate candidates.
Bias Reduction (%) – assesses the extent to which the system reduces hiring biases.
Rule-based parsing is limited in accuracy and struggles with complex resume formats.
NLP-based approaches improve performance by recognizing job-related terms but still
have constraints in handling variations in resume structures.
Machine learning models provide better adaptability and accuracy by training on diverse
datasets.
The LLM we used in this case is significantly faster than the others at resume parsing
while achieving the highest accuracy in the least processing time, so multiple resumes
can be processed quickly.
Comparison with Existing Systems
This provides a comparative analysis between the proposed system and traditional
recruitment methods:
FUTURE SCOPE
One of the major planned features is multilingual support. The model is already pre-trained,
so if candidates upload resumes in their native language, the LLM can extract, parse,
and analyse them efficiently.
The applicant tracking system can also become adaptive in its job matching. It can
learn from the job descriptions uploaded by companies and the resumes uploaded by
candidates; training the AI model on that information will help reduce biases when
scoring candidates' resumes.
CONCLUSION
This applicant tracking system's ability to handle a large number of applications while
maintaining remarkable accuracy in parsing resumes and matching them to job
descriptions is one of its most notable features. Key information such as education,
experience, and skills are extracted by the AI-powered resume analyzers and compared
to the job specifications. This procedure is further improved by Gemini-2.0-Flash,
which evaluates resumes in a manner that closely resembles human comprehension,
identifying industry-specific language and contextual subtleties. As a result, recruiters
receive excellent candidate recommendations.
The project also gives scalability and user experience top importance with a simple
website that enables interaction among recruiters, companies, coordinators, and
candidates. Its capacity to fit various hiring situations, whether for small teams or
big corporations, shows the platform's adaptability across varied organizational
needs. Strong protections that guard candidate data and follow data privacy regulations
help keep privacy and information integrity as top concerns.
The applicant tracking system (ATS) has proven to be a reliable tool for modern
hiring following thorough testing, including security checks, accuracy tests, and stress
tests. It raises the quality of new hires, reduces hiring time, and enhances the overall
experience for companies and applicants. By ranking resumes and sorting candidates
automatically, the system lets recruiters focus on strategic decision-making rather than
getting mired in administrative duties, enabling them to make better choices.
BIBLIOGRAPHY
[1]. Salakar, E., Rai, J., Salian, A., Shah, Y. and Wadmare, J., 2023, December.
Resume Screening Using Large Language Models. In 2023 6th International
Conference on Advances in Science and Technology (ICAST) (pp. 494-499). IEEE.
[2]. Patel, S., Patel, J., Shah, D., Goel, P. and Patel, B., 2024, November. A RAG
based Personal Placement Assistant System using Large Language Models for
Customized Interview Preparation. In 2024 5th International Conference on Data
Intelligence and Cognitive Informatics (ICDICI) (pp. 1468-1475). IEEE.
[3]. Prasad, B.L., Srividya, K., Kumar, K.N., Chandra, L.K., Dil, N.S.S.K. and Krishna,
G.V., 2023, October. An Advanced Real-Time Job Recommendation System and
Resume Analyser. In 2023 International Conference on Self Sustainable Artificial
Intelligence Systems (ICSSAS) (pp. 1039-1045). IEEE.
[4]. Gulati, V., Gupta, I., Firdous, F. and Narwal, R., 2024. Resume Analyzer Using
Natural Language Processing (NLP).
[5]. Bhatt, A., Uniyal, A., Jyala, D., Mittal, S., Tiwari, P. and Singh, D., 2024, March.
Resume Analyzer based on MapReduce and Machine Learning. In 2024 IEEE
International Conference on Interdisciplinary Approaches in Technology and
Management for Social Innovation (IATMSI) (Vol. 2, pp. 1-5). IEEE.
[6]. Prashanth, V.J., Gopinath, S., Udith, S. and Kavitha, C.R., 2024, July. Resume
Analyzer and Skill Enhancement Recommender System. In 2024 Asia Pacific
Conference on Innovation in Technology (APCIT) (pp. 1-6). IEEE.
[7]. Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K. and
Kuksa, P., 2011. Natural language processing (almost) from scratch.
[8]. Zu, S. and Wang, X., 2019. Resume information extraction with a novel
text block segmentation algorithm. Int J Nat Lang Comput, 8(2019), pp. 29-48.