
What is Machine Learning?

Machine learning (ML) is a type of Artificial Intelligence (AI) that allows computers to learn and make decisions without being explicitly programmed. It involves feeding data into algorithms that can then identify patterns and make predictions on new data. Machine learning is used in a wide variety of applications, including image and speech recognition, natural language processing, and recommender systems.
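To make "feeding data into an algorithm that then predicts on new data" concrete, here is a minimal sketch in pure Python: it fits a straight line to past observations by least squares and predicts an unseen input. The hours/scores numbers are hypothetical.

```python
# Minimal illustration of "learning" a pattern from data, then predicting
# on new data: a straight line fit by least squares (no libraries needed).

def fit_line(xs, ys):
    """Return slope and intercept minimizing the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data: hours studied vs. exam score (made-up numbers).
hours = [1, 2, 3, 4, 5]
scores = [52, 57, 61, 68, 72]

slope, intercept = fit_line(hours, scores)
predicted = slope * 6 + intercept  # predict for 6 hours, unseen during "training"
print(round(predicted, 1))
```

The algorithm was never told a rule for exam scores; it extracted the trend from examples, which is the core idea the paragraph above describes.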
Why do we need Machine Learning?
Machine learning can learn from data and solve or predict outcomes for complex problems that cannot be handled with traditional programming. It enables better decision making and solves complex business problems in optimized time. Machine learning has applications in various fields, such as healthcare, finance, education, sports, and more.
Let's explore some reasons why machine learning has become essential in every field:

1. Solving Complex Business Problems:

Problems like image recognition, natural language processing, and disease diagnosis are too complex to tackle with traditional programming. Machine learning can handle such problems by learning from examples and making predictions, rather than following rigid rules.

2. Handling Large Volumes of Data:

The expansion of the Internet and its user base is producing massive amounts of data. Machine learning can process this data effectively, analyzing it and predicting useful insights.
 For example, ML can analyze millions of everyday transactions to detect fraudulent activity in real time.
 Social platforms like Facebook and Instagram use ML to analyze billions of posts, likes, and shares to predict the next recommendation in your feed.

3. Automate Repetitive Tasks:

With machine learning, we can automate time-consuming and repetitive tasks with better accuracy.
 Gmail uses ML to filter out spam emails and keep your inbox clean and spam free. Handling this with traditional programming or manually would only make the system error-prone.
 Customer support chatbots can use ML to solve frequently occurring problems like checking order status or resetting passwords.
 Large organizations can use ML to process large amounts of data (like invoices) to extract historical and current key insights.

4. Personalized User Experience:

Social media, OTT, and e-commerce platforms all use machine learning to recommend a better feed based on user preferences and interests.
 Netflix recommends movies and TV shows based on what you've watched.
 E-commerce platforms suggest products you are likely to buy.

5. Self Improvement in Performance:

ML models are able to improve themselves as they receive more data, such as user behavior and feedback. For example,
 Voice Assistants (Siri, Alexa, Google Assistant) – Voice assistants
continuously improve as they process millions of voice inputs. They adapt
to user preferences, understand regional accents better, and handle
ambiguous queries more effectively.
 Search Engines (Google, Bing) – Search engines analyze user behavior
to refine their ranking algorithms.
 Self-driving Cars – Self-driving cars use data from millions of miles
driven (both in simulations and real-world scenarios) to enhance their
decision-making.

Evolution of Machine Learning
Imagine a world where machines learn like humans, constantly evolving and improving. This
isn’t a scene from a sci-fi movie—it’s the reality of machine learning. This technology has come
a long way since its inception, and today, we’re taking you on a fascinating journey through the
milestones of machine learning, from 1805 to the present.
The Humble Beginnings: Linear Regression (1805-1809) It all started with Linear Regression,
developed independently by Adrien-Marie Legendre and Carl Friedrich Gauss. This technique,
based on the method of least squares, was a stepping stone in predictive modeling, allowing us to
forecast future trends from past data. It laid the groundwork for what was to become a revolution
in data analysis.
The Neural Network Precursor: Perceptron (1957) Fast forward to 1957, and we witness the
birth of the Perceptron by Frank Rosenblatt. This simple yet powerful model, simulating a
neuron for binary classification tasks, was labeled as the precursor to neural networks. It marked
the beginning of machines mimicking human brain functions.

The Art of Decision Making: Reinforcement Learning (1959) Richard Bellman’s invention of
Reinforcement Learning in 1959 introduced a new era of decision-making algorithms. By
teaching agents to make decisions based on rewards and penalties, this method laid the
foundation for developing autonomous systems and robotics.

Redefining Classification: Support Vector Machines (1964) The introduction of Support Vector Machines by Vladimir Vapnik and Alexey Chervonenkis in 1964 was a game-changer in classification tasks. Excelling in handwriting recognition and face detection, these machines demonstrated the potential of ML in practical applications.

Training the Neural Networks: Backpropagation (1986) The popularization of backpropagation for neural networks in 1986, thanks to David Rumelhart, Geoffrey Hinton, and Ronald Williams, marked a significant advancement in training complex neural networks. This method optimized the learning process, making it possible to develop more sophisticated AI models.

Combining Weakness for Strength: Boosting Algorithms (1995) In 1995, Yoav Freund and
Robert Schapire introduced AdaBoost, an algorithm that improved prediction accuracy by
combining multiple weak learning models. This concept showed that strength could indeed be
found in numbers, or in this case, algorithms.

The Ensemble Approach: Random Forests (1995) Tin Kam Ho’s introduction of Random
Forests in 1995 brought a robust approach to classification and regression. By creating
ensembles of decision tree-like models, these forests demonstrated improved accuracy and
stability in predictions.

Sequencing Success: RNN and LSTM (1997) The development of RNN (Recurrent Neural
Networks) and LSTM (Long Short-Term Memory) networks, particularly by Sepp Hochreiter
and Jürgen Schmidhuber for LSTM, revolutionized sequential data processing. This was a
milestone in natural language processing and speech recognition, enabling machines to
understand and generate human-like language.

Visionary Machines: Deep Convolutional Neural Networks (2012) In 2012, Alex Krizhevsky,
Ilya Sutskever, and Geoffrey Hinton introduced Deep Convolutional Neural Networks. These
networks revolutionized image recognition, enabling machines to identify and classify images
with incredible accuracy, mimicking the human visual system.

The Creative AI: Generative Adversarial Networks (2014) Ian Goodfellow’s invention of
Generative Adversarial Networks in 2014 opened up new horizons in AI creativity. These
networks became groundbreaking in generating realistic images and videos, blurring the line
between AI-generated and real-life content.
Transforming Language Processing: Transformer Networks (2017) The introduction of
Transformer Networks by Ashish Vaswani and his team in 2017 marked a new era in natural
language processing. These networks, efficient in context-aware processing, became the
cornerstone for modern NLP tasks, leading to advanced models like BERT and GPT series.

From linear regression to transformer networks, the evolution of machine learning has been
nothing short of extraordinary. Each breakthrough has built upon the last, pushing the boundaries
of what’s possible with artificial intelligence. As we look to the future, one thing is certain:
machine learning will continue to evolve, transforming our world in ways we can only begin to
imagine. Stay tuned for the next chapter in this incredible journey.

Machine Learning Paradigms

Machine learning is commonly separated into three main learning paradigms: supervised learning, unsupervised learning, and reinforcement learning. These paradigms differ in the tasks they can solve and in how the data is presented to the computer.

Supervised and Unsupervised learning

Supervised Learning (SL)

Supervised learning involves labeled datasets, where each data observation is paired with a corresponding class label. Algorithms in supervised learning aim to build a mathematical function that maps input features to desired output values based on these labeled examples. Common applications include classification and regression.

Stages in Supervised Learning

Understanding Supervised Learning pictorially
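The idea of learning a mapping from labeled examples can be sketched in a few lines of Python. This is a toy nearest-centroid classifier on made-up 2-D points with hypothetical "cat"/"dog" labels, not a production method:

```python
# Tiny supervised-learning sketch: learn from labeled examples, then
# classify new points by the nearest class centroid. Data is illustrative.

def train_centroids(points, labels):
    """Compute the mean (centroid) of the points in each class."""
    sums, counts = {}, {}
    for (x, y), label in zip(points, labels):
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the point to the class with the nearest centroid."""
    return min(centroids,
               key=lambda lbl: (point[0] - centroids[lbl][0]) ** 2 +
                               (point[1] - centroids[lbl][1]) ** 2)

points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]
centroids = train_centroids(points, labels)
print(predict(centroids, (2, 2)))   # lands near the "cat" cluster
print(predict(centroids, (7, 8)))   # lands near the "dog" cluster
```

The labels drive the training stage, which is exactly what distinguishes this paradigm from the unsupervised case below.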

Unsupervised Learning
In unsupervised learning, algorithms work with unlabeled
data to identify patterns and relationships. These methods
uncover commonalities within the data without predefined
categories. Techniques such as clustering and association
rules fall under unsupervised learning.

Stages in Unsupervised Learning

Understanding Unsupervised Learning pictorially
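As a sketch of clustering without labels, here is a bare-bones 1-D k-means written from scratch on illustrative numbers; real projects would use a library implementation:

```python
# Unsupervised-learning sketch: group unlabeled numbers into k clusters
# with a minimal 1-D k-means. No labels are given; structure is discovered.

def kmeans_1d(values, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each value to its nearest center.
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

values = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centers, clusters = kmeans_1d(values, centers=[0.0, 5.0])
print(sorted(round(c, 1) for c in centers))  # two group centers emerge
```

The algorithm is never told that two groups exist or where they are; it uncovers that structure from the data alone.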


Semi-supervised Learning
Semi-supervised learning strikes a balance by combining a small amount of labeled data with a larger pool of unlabeled data. This approach leverages the benefits of both supervised and unsupervised learning paradigms, making it a cost-effective and efficient method for training models when labeled data is limited.

Understanding Semi-supervised Learning pictorially

Self-supervised Learning (SSL)

In scenarios where obtaining high-quality labeled data is challenging, self-supervised learning emerges as a solution. In this paradigm, models are pre-trained using unlabeled data, and data labels are generated automatically during subsequent iterations. SSL transforms unsupervised ML problems into supervised ones, enhancing learning efficiency. This paradigm is particularly relevant with the rise of large language models.

Reinforcement Learning
Reinforcement learning focuses on enabling intelligent agents to learn tasks through trial-and-error interactions with dynamic environments. Without the need for labeled datasets, agents make decisions to maximize a reward function. This autonomous exploration and learning approach is crucial for tasks where explicit programming is challenging.
Action-Reward feedback loop: an agent takes actions in an environment, which is interpreted into a
reward and a representation of the state, which are fed back into the agent.

Action-Reward Feedback Loop:

Reinforcement learning operates on an action-reward feedback loop, where agents take actions, receive rewards, and interpret the environment's state. This iterative process allows the agent to autonomously learn optimal actions to maximize positive feedback.
Action-Reward Feedback Loop
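The feedback loop described above can be sketched with tabular Q-learning on a toy environment: a 4-cell corridor where the agent is rewarded only for reaching the rightmost cell. The environment, reward values, and hyperparameters are illustrative choices, not part of the original text:

```python
# Toy action-reward feedback loop: the agent acts, the environment returns
# a new state and a reward, and the agent updates its value estimates.
import random

N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]          # move left, move right
ALPHA, GAMMA = 0.5, 0.9     # learning rate, discount factor

def step(state, action):
    """Environment: return (next_state, reward) for an action."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(200):                      # episodes of trial and error
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < 0.2:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = 0.0 if nxt == GOAL else max(Q[(nxt, a)] for a in ACTIONS)
        # Update the estimate of this action's long-term value.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy should move right (+1) in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

No labeled answers are ever provided; the reward signal alone shapes the policy, which is the essence of the action-reward loop.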

Understanding these ML paradigms provides valuable insights into the diverse approaches used to address different types of problems. Each paradigm comes with its strengths and applications, contributing to the versatility of machine learning in various domains.

Differences between Supervised, Unsupervised & Reinforcement Learning

 Supervised Learning: Relies on labeled data. Each data point has a pre-defined output or label (e.g., classifying emails as spam or not spam). The model learns the mapping between the input data and the desired output.
 Unsupervised Learning: Deals with unlabeled data. The goal is to identify patterns or structures within the data itself (e.g., grouping customers with similar purchase history). No pre-defined output is provided.
 Reinforcement Learning: Doesn't use labeled data. The agent interacts with the environment and receives feedback in the form of rewards (positive, negative, or neutral). The agent learns through trial and error to maximize future rewards.
Learning Process

 Supervised Learning: The model is like a student directly taught by a teacher (training data) what the correct output should be for a given input.
 Unsupervised Learning: The model is like an explorer trying to find patterns and relationships within uncharted territory (data) with minimal guidance.
 Reinforcement Learning: The model resembles an athlete learning through trial and error in a competition (environment). It receives feedback (rewards) but needs to figure out the best strategy on its own.


Goal

 Supervised Learning: Aims to learn a function that maps inputs to desired outputs accurately.
 Unsupervised Learning: Focuses on uncovering hidden structures or patterns within the data.
 Reinforcement Learning: The objective is to learn a policy or strategy that maximizes long-term rewards within an environment.

In supervised learning, the model is trained with a training dataset that has a correct answer key. Decisions are made on the initial input, since it contains all the data required to train the machine. The decisions are independent of each other, so each decision is represented by a label.


Rote learning
In machine learning, rote learning is a simple learning pattern where a
machine stores new information and compares it to its history of inputs and
outputs. This technique is used to save time when storing computed values.

Here are some examples of rote learning in machine learning:

 Face recognition
An AI system can extract features from an image, such as the distance between the eyes, and search for a match in a database of stored features.
 Checkers-playing program
A checkers-playing program can use rote learning to store the board positions it evaluates in its look-ahead search.
Rote learning can be effective for certain types of learning, but it has been
criticized for promoting surface-level understanding and limiting critical
thinking.

Rote learning is an education method that involves repeating a piece of information many times
to embed it in a person's memory.

Ex: phonics in reading, the periodic table in chemistry, multiplication tables in mathematics,
anatomy in medicine, cases or statutes in law, basic formulae in any science, etc.

Learning by Induction:
Inductive learning is a fundamental machine learning technique that involves
using specific examples to make general predictions or generalizations. It's
also known as inductive reasoning or inductive inference.
Here are some key aspects of inductive learning:
 Process
Inductive learning involves identifying common features in a set of examples,
and then using those features to create a model or hypothesis that can
predict or classify new instances.
 Algorithms
Inductive learning algorithms search for relationships and structures in data,
allowing machines to classify new instances or make predictions based on
the learned patterns.

 Rules
Inductive learning algorithms generate classification rules in the format of "If
this, then that". These rules determine the state of an entity at each iteration
step.
 Inductive bias
Inductive learning is closely related to the concept of inductive bias. For
example, k-Nearest Neighbors (k-NN) has an inductive bias that assumes
similar data points are close to each other in feature space.
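Since the text names k-NN and its inductive bias, here is a from-scratch 1-nearest-neighbour classifier on illustrative data. Its bias is precisely the assumption above: points close in feature space share a label.

```python
# Inductive-learning sketch: 1-nearest-neighbour classification.
# The generalization to new points rests entirely on the inductive bias
# that nearby points in feature space have the same label.
import math

def nearest_neighbor(train, query):
    """train: list of ((x, y), label); return the label of the closest point."""
    (_, label) = min(train, key=lambda item: math.dist(item[0], query))
    return label

train = [((0, 0), "low"), ((0, 1), "low"), ((5, 5), "high"), ((6, 5), "high")]
print(nearest_neighbor(train, (1, 0)))   # closest to (0, 0)
print(nearest_neighbor(train, (5, 6)))   # closest to (5, 5)
```

From four specific examples, the function induces a general rule that classifies any point in the plane, which is the inductive step the section describes.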
Reinforcement Learning:
Reinforcement learning is a part of machine learning in which agents are self-trained on reward and punishment mechanisms. It focuses on enabling intelligent agents to learn tasks through trial-and-error interactions with dynamic environments. Without the need for labeled datasets, agents make decisions to maximize a reward function. The reward acts as a signal for positive and negative behaviors.
Basic Diagram of Reinforcement Learning

Reinforcement learning is a type of machine learning in which agents take actions in an environment aimed at maximizing their cumulative rewards.

Terminologies used in Reinforcement Learning

 Agent – the sole decision-maker and learner. "Agent", "reinforcement learning agent", and "learning agent" all mean the same thing.

 Environment – the physical world in which the agent learns and decides which actions to perform.

 Action Space – the list of actions an agent can perform.

 Action – an agent's single choice (move left, pick up an object) in the environment.

 State – the current situation of the agent in the environment.

 Reward – for each action the agent selects, the environment gives a reward. It is usually a scalar value and is nothing but feedback from the environment.

 Reward Function – a predefined function within the RL framework that determines how rewards are assigned based on the state of the environment and the agent's actions.

 Policy – the strategy (decision-making) the agent prepares to map situations to actions.

 Value Function – the value of a state is the reward achievable starting from that state while the policy is executed.

 Model – not every RL agent uses a model of its environment. When present, the model maps state-action pairs to probability distributions over next states.

Characteristics of Reinforcement Learning

 No supervision, only a real-valued reward signal

 Decision making is sequential

 Time plays a major role in reinforcement problems

 Feedback isn't prompt but delayed

 The data the agent receives next is determined by its own actions

Types of Reinforcement Learning

There are two types:

1. Positive Reinforcement

Positive reinforcement occurs when an event, triggered by a specific behavior, increases the strength and frequency of that behavior. It has a positive impact on behavior.

Advantages

 Maximizes the performance of an action

 Sustains change for a longer period

Disadvantage

 Excess reinforcement can lead to an overload of states, which would diminish the results.

2. Negative Reinforcement

Negative reinforcement is the strengthening of a behavior by removing or avoiding a negative condition; the agent learns to prevent that condition from occurring in the future.

Advantages

 Maximizes behavior

 Provides a decent-to-minimum standard of performance

Disadvantage

 It only does just enough to meet the minimum standard of behavior


Applications of reinforcement learning

 Robotics for industrial automation

 Text summarization engines, dialogue agents (text, speech), gameplay

 Autonomous self-driving cars

 Machine learning and data processing

 Training systems that issue custom instructions and materials according to the requirements of students

 AI toolkits, manufacturing, automotive, healthcare, and bots

 Aircraft control and robot motion control

 Building artificial intelligence for computer games

Conclusion

Reinforcement learning guides us in determining actions that maximize long-term rewards.

TYPES OF DATA

Machine learning models use four primary types of data:

 Numerical data: A cornerstone of machine learning, numerical data is represented by numbers and can be further classified as discrete or continuous.
 Categorical data: Represents characteristics, such as a hockey player's team, position, or hometown. Categorical data can take numerical values, but these numbers don't have a mathematical meaning.
 Time series data: A sequence of observations collected at regular intervals over time, such as daily stock prices or sensor readings.
 Text data: Unstructured data in the form of words and documents, such as emails, reviews, or social media posts.

Data Types in ML
Data types are a way of classification that specifies which type of value a variable can store and what type of mathematical, relational, or logical operations can be applied to the variable without causing an error. In machine learning, it is very important to know the appropriate data types of the independent and dependent variables.

Different Types of Data Types

The data type is broadly classified into:

1. Quantitative
2. Qualitative

Different Data Types

1. Quantitative Data Type:

This type of data consists of numerical values: anything which is measured by numbers.

E.g., profit, quantity sold, height, weight, temperature, etc.

This is again of two types:

A.) Discrete Data Type:

Numeric data which has discrete values or whole numbers. If expressed in decimal format, this type of variable value has no proper meaning. Its values can be counted.

E.g., the number of cars you have, the number of marbles in a container, students in a class, etc.

Fig: Discrete Data Types

B.) Continuous Data Type:

Numerical measures which can take any value within a certain range. If expressed in decimal format, this type of variable value has true meaning. Its values cannot be counted, only measured, and can be infinite.

E.g., height, weight, time, area, distance, measurement of rainfall, etc.

Fig: Continuous Data Types

2. Qualitative Data Type:

These are data types that cannot be expressed in numbers. They describe categories or groups and are hence known as categorical data types.

This can be divided into:

A. Structured Data:

This type of data is either numbers or words. It can take numerical values, but mathematical operations cannot be performed on them. This type of data is expressed in tabular format.

E.g., Sunny=1, Cloudy=2, Windy=3, or binary-form data like 0 or 1, Good or Bad, etc.

Fig: Structured Data


B. Unstructured Data:

This type of data does not have a proper format and is therefore known as unstructured data. It comprises textual data, sounds, images, videos, etc.

Fig: Unstructured Data

Besides this, there are also other types, referred to as data type preliminaries or data measures:

1. Nominal
2. Ordinal
3. Interval
4. Ratio

These can also be referred to as different scales of measurement.

I. Nominal Data Type:

This is used to express names or labels which are not ordered or measurable.

E.g., male or female (gender), race, country, etc.

Fig: Gender (Female, Male), an example of the nominal data type

II. Ordinal Data Type:

This is also a categorical data type like nominal data but has some natural ordering associated with it.

E.g., Likert rating scale, shirt sizes, ranks, grades, etc.

Fig: Rating (Good, Average, Poor), an example of the ordinal data type

III. Interval Data Type:

This is numeric data which has a proper order and equal intervals between values, but no absolute zero: here zero does not mean a complete absence of the quantity, it still carries some value. Differences between values are meaningful. This is a local scale.

E.g., temperature measured in degrees Celsius, time, SAT score, credit score, pH, etc.

Fig: Temperature, an example of the interval data type

IV. Ratio Data Type:

This quantitative data type is the same as the interval data type but has an absolute zero. Here zero means complete absence, and the scale starts from zero. This is the global scale.

E.g., temperature in Kelvin, height, weight, etc.

Fig: Weight, an example of the ratio data type
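The distinction between these scales matters when encoding data for a model. A small sketch with illustrative categories: nominal labels get arbitrary codes whose order carries no meaning, while ordinal labels get codes that preserve their natural order.

```python
# Encoding sketch for nominal vs. ordinal data. The category lists are
# illustrative examples, matching those given in the text above.

nominal = ["male", "female", "female", "male"]
# Arbitrary integer codes; the order of the codes carries no meaning.
nominal_codes = {cat: i for i, cat in enumerate(sorted(set(nominal)))}

ordinal_order = ["poor", "average", "good"]   # the natural ordering
ordinal = ["good", "poor", "average"]
# Codes chosen so that a larger code always means a better rating.
ordinal_codes = [ordinal_order.index(v) for v in ordinal]

print(nominal_codes)
print(ordinal_codes)
```

Treating nominal codes as ordered numbers (e.g., averaging "country" codes) is exactly the mistake the "no mathematical meaning" warning above guards against.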
Matching
The term "matching" is used in several contexts in machine learning, including data matching, exact matching, and local feature matching:
 Data matching
The process of identifying which records from different data sources
correspond to the same real-world entity. Machine learning models can learn
the relationship between data and what is considered a match in a specific
instance.
 Exact matching
A stricter version of accuracy where all classes or labels must match exactly
for the sample to be correctly classified.
 Local feature matching
A technique that has been explored in recent years with the introduction of
deep learning models. However, challenges remain in improving the
accuracy and robustness of matching due to factors like lighting and
viewpoint variations.
 Probabilistic matching
A data matching technique that uses statistical methods to determine the
probability that two records represent the same entity.
 Supervised learning
A subfield of machine learning that trains algorithms to make predictions or
decisions based on labeled training data
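A minimal probabilistic-matching sketch using the standard library's `difflib`: two records are scored by string similarity and declared a match above a threshold. The names and the 0.8 cutoff are arbitrary illustrative choices.

```python
# Data-matching sketch: decide whether two records refer to the same
# real-world entity by scoring name similarity (stdlib difflib).
from difflib import SequenceMatcher

def similarity(a, b):
    """Similarity ratio in [0, 1] between two strings, case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_match(rec_a, rec_b, threshold=0.8):
    """Probabilistic match: similar enough names count as the same entity."""
    return similarity(rec_a["name"], rec_b["name"]) >= threshold

a = {"name": "Jonathan Smith"}
b = {"name": "Jonathon Smith"}   # likely the same person, spelled differently
c = {"name": "Maria Gonzalez"}

print(is_match(a, b))  # True
print(is_match(a, c))  # False
```

Exact matching would reject the first pair outright; the probabilistic score is what lets a matcher tolerate typos and spelling variants.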

The 7 Stages of Machine Learning are:

1. Problem Definition

2. Data Collection

3. Data Preparation
4. Data Visualization

5. ML Modeling

6. Feature Engineering

7. Model Deployment
These 7 stages are the key steps in our framework. We have
categorized them additionally into groups to get a better
understanding of the larger picture.

The stages are grouped into 3 phases:


1. Business Value

2. Proof of Concept (POC)

3. Production

Phase 1 — Business Value

It is absolutely crucial to adopt a business mindset when thinking about a problem that should be solved with Machine Learning — defining customer benefits and creating business impact is the top priority. Domain expertise and knowledge are also essential, as the true power of data can only be harnessed if the domain is well known and understood.

Phase 2 — Proof of Concept (POC)

Proof of Concept (POC) is the most comprehensive part of our framework. From Data Collection to Feature Engineering, 5 stages of our ML framework are included here. The core of any POC is to test an idea in terms of its feasibility and value to the business. Questions around performance and evaluation metrics are also answered in this phase. Only a strong POC that delivers business value and is feasible allows one to put the ML model into production.

Phase 3 — Production

In the third phase, one is taking the ML model and scaling it.
The goal is to integrate Machine Learning into a business
process solving a problem with a superior solution compared
to, for example, traditional programming. The process of
taking a trained ML model and making its predictions
available to users or other systems is known as model
deployment. Lastly, it is also essential to iterate on the ML
model over time to improve it.

7 Stages of Machine Learning

1. Problem Definition
The first stage in the DDS Machine Learning Framework is
to define and understand the problem that someone is going
to solve. Start by analyzing the goals and the why behind a
particular problem statement. Understand the power of data
and how one can use it to make a change and drive results.
And asking the right questions is always a great start.

A few possible questions:

 What is the business?

 Why does the problem need to be solved?

 Is a traditional solution available to solve the problem?

 If probabilistic in nature, does the available data allow us to model it?

 What is a measurable business goal?

2. Data Collection
Once the goal is clearly defined, one has to start getting the data that is needed from the various available data sources.

There are many different ways to collect data for Machine Learning, for example focus groups, interviews, surveys, and internal usage & user data. Public data can be another source and is usually free; sources include research and trade associations, banks, publicly-traded corporations, and others. If data isn't publicly available, one could also use web scraping to get it (however, there are some legal restrictions).

At this stage, some of the questions worth considering are:

 What data do I need for my project?

 Where is that data available?

 How can I obtain it?

 What is the most efficient way to store and access all of it?

3. Data Preparation
The third stage is the most time-consuming and labor-intensive. Data Preparation can take up to 70% and sometimes even 90% of the overall project time. But what is the purpose of this stage?

The type and quality of data used in a Machine Learning model affects the output considerably. In Data Preparation one explores, pre-processes, conditions, and transforms data prior to modeling and analysis. It is absolutely essential to understand the data, learn about it, and become familiar with it before moving on to the next stage.

Some of the steps involved in this stage are:

 Data Filtering

 Data Validation & Cleansing

 Data Formatting

 Data Aggregation & Reconciliation
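The steps above can be sketched on a toy record set: filter out-of-range rows, validate and cleanse missing values, and format fields consistently. The field names and plausibility rules are illustrative assumptions.

```python
# Data Preparation sketch: filtering, validation & cleansing, and
# formatting applied to hypothetical raw records.

raw = [
    {"name": " Alice ", "age": "34"},
    {"name": "bob",     "age": ""},      # missing age -> dropped in validation
    {"name": "Carol",   "age": "190"},   # implausible age -> filtered out
    {"name": "dave",    "age": "28"},
]

def prepare(rows):
    cleaned = []
    for row in rows:
        age_text = row["age"].strip()
        if not age_text.isdigit():            # validation & cleansing
            continue
        age = int(age_text)
        if not 0 < age < 120:                 # filtering on a plausible range
            continue
        cleaned.append({"name": row["name"].strip().title(),  # formatting
                        "age": age})
    return cleaned

print(prepare(raw))
```

Even this tiny example drops half the rows, which hints at why this stage dominates project time on real datasets.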

4. Data Visualization

Data Visualization is used to perform Exploratory Data Analysis (EDA). When one is dealing with large volumes of data, building graphs is the best way to explore and communicate findings. Visualization is an incredibly helpful tool to identify patterns and trends in data, which leads to clearer understanding and reveals important insights. Data Visualization also enables faster decision making through graphical illustration.

Here are some common ways of visualization:


 Area Chart

 Bar Chart

 Box-and-whisker Plots

 Bubble Cloud

 Dot Distribution Map

 Heat Map

 Histogram

 Network Diagram

 Word Cloud

5. ML Modeling

Finally, this is where 'the magic happens'. Machine Learning is finding patterns in data, and one can perform either supervised or unsupervised learning. ML tasks include regression, classification, forecasting, and clustering.
In this stage of the process one has to apply mathematical, computer science,
and business knowledge to train a Machine Learning algorithm that will make
predictions based on the provided data. It is a crucial step that will determine
the quality and accuracy of future predictions in new situations. Additionally,
ML algorithms help to identify key features with high predictive value.
6. Feature Engineering

Machine Learning algorithms learn recurring patterns from data. Carefully engineered features are a robust representation of those patterns.
Feature Engineering is a process to achieve a set of features by performing mathematical, statistical, and heuristic procedures. It is a collection of methods for identifying an optimal set of inputs to the Machine Learning algorithm. Feature Engineering is extremely important because well-engineered features make learning possible with simple models.
Following are the characteristics of good features:
 Represents data in an unambiguous way
 Captures linear and non-linear relationships among data points
 Capable of capturing the precise meaning of input data
 Captures contextual details
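A small sketch of deriving such features from raw fields. The transaction records and the engineered features (log of a skewed amount, time-of-day flag) are illustrative choices, not prescriptions:

```python
# Feature Engineering sketch: turn raw fields into inputs a simple model
# can use, via mathematical (log) and heuristic (night flag) procedures.
import math
from datetime import datetime

raw = [
    {"amount": 120.0, "timestamp": "2024-03-15T23:40:00"},
    {"amount": 18.5,  "timestamp": "2024-03-16T10:05:00"},
]

def engineer(record):
    """Turn a raw transaction into model-ready features."""
    ts = datetime.fromisoformat(record["timestamp"])
    return {
        "amount": record["amount"],
        "log_amount": math.log(record["amount"]),   # compress a skewed scale
        "hour": ts.hour,                             # extract time of day
        "is_night": ts.hour >= 22 or ts.hour < 6,    # binary contextual flag
    }

features = [engineer(r) for r in raw]
print(features[0]["is_night"], features[1]["is_night"])
```

The raw timestamp string is nearly useless to a linear model, while the derived `is_night` flag is immediately learnable — the sense in which good features make learning possible with simple models.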
7. Model Deployment
The last stage is about putting a Machine Learning model into a production environment to make data-driven decisions in a more automated way. Robustness, compatibility, and scalability are important factors that should be tested and evaluated before deploying a model. There are various deployment options, such as Platform as a Service (PaaS) or Infrastructure as a Service (IaaS). For containerized applications, one can use a container orchestration platform such as Kubernetes to rapidly scale the number of containers as demand shifts.
Another important part of the last stage is iteration and interpretation. It is critical to constantly optimize the model and pressure test the results. In the end, Machine Learning has to provide value to the business and make a positive impact. Therefore, monitoring the model in production is key.
Conclusion
This was an overview about ‘The 7 Stages of Machine Learning’ — a
framework that helps to structure the typical process of a ML project.
