BHARATHIDASAN ENGINEERING COLLEGE
(Approved by AICTE, New Delhi and Affiliated to Anna University)
NATTRAMPALLI – 635 854.
NAME :
REGISTER NUMBER :
SUBJECT CODE/ NAME : CS3491 – ARTIFICIAL INTELLIGENCE
AND MACHINE LEARNING
LABORATORY
YEAR / SEMESTER : II / IV
MAY – JUNE - 2025
DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING
BHARATHIDASAN ENGINEERING COLLEGE
(Approved by AICTE, New Delhi and Affiliated to Anna University)
NATTRAMPALLI – 635 854.
Department of Computer Science & Engineering
BONAFIDE CERTIFICATE
This is to certify that Mr. / Ms. is
studying in the Department of Computer Science and Engineering during the
academic year 2025 (May - June), Semester IV. This record is submitted by
the above student for the University Practical Examination.
Staff In-Charge Head of the Department
Register Number:
Submitted for the practical examination held on …………………. at
Bharathidasan Engineering College, Nattrampalli.
Internal Examiner External Examiner
TABLE OF CONTENTS
Ex. No.   Date   Name of the Experiment   Page No   Signature
Ex. No. 1    IMPLEMENTING BREADTH-FIRST SEARCH (BFS) AND DEPTH-FIRST SEARCH (DFS)
Date:
Aim:
Algorithm:
Program: BFS
from collections import defaultdict

class Graph:
    def __init__(self):
        # adjacency list of the graph
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def BFS(self, s):
        # mark all vertices as not visited
        visited = [False] * (max(self.graph) + 1)
        queue = []
        queue.append(s)
        visited[s] = True
        while queue:
            # dequeue a vertex and print it
            s = queue.pop(0)
            print(s, end=" ")
            # enqueue all unvisited adjacent vertices
            for i in self.graph[s]:
                if visited[i] == False:
                    queue.append(i)
                    visited[i] = True

if __name__ == "__main__":
    g = Graph()
    g.addEdge(0, 1)
    g.addEdge(0, 2)
    g.addEdge(1, 2)
    g.addEdge(2, 0)
    g.addEdge(2, 3)
    g.addEdge(3, 3)
    print("Following is the BFS traversal starting from vertex 2")
    g.BFS(2)
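Note: the queue above uses list.pop(0), which is O(n) per dequeue. A minimal sketch of the same traversal using collections.deque (an illustrative variant, not part of the original program) is shown below.

# Hedged variant (assumed, not part of the original record): BFS with a deque,
# whose popleft() is O(1) compared with list.pop(0) above.
from collections import deque

def bfs_deque(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

# e.g. bfs_deque(g.graph, 2) -> [2, 0, 3, 1] for the graph built above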
Program: DFS
from collections import defaultdict

class Graph:
    def __init__(self):
        # adjacency list of the graph
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def DFSUtil(self, v, visited):
        # mark the current node as visited and print it
        visited.add(v)
        print(v, end=' ')
        # recur for all unvisited adjacent vertices
        for neighbour in self.graph[v]:
            if neighbour not in visited:
                self.DFSUtil(neighbour, visited)

    def DFS(self, v):
        visited = set()
        self.DFSUtil(v, visited)

if __name__ == "__main__":
    g = Graph()
    g.addEdge(0, 1)
    g.addEdge(0, 2)
    g.addEdge(1, 2)
    g.addEdge(2, 0)
    g.addEdge(2, 3)
    g.addEdge(3, 3)
    print("Following is Depth First Traversal (starting from vertex 2)")
    # Function call
    g.DFS(2)
Output:
BFS
Following is the BFS traversal starting from vertex 2
2 0 3 1
DFS
Following is Depth First Traversal (starting from vertex 2)
2 0 1 3
Result:
Ex. No. 2    IMPLEMENTING INFORMED SEARCH ALGORITHMS LIKE A* AND MEMORY-BOUNDED A*
Date:
Aim:
Algorithm:
Program:
from queue import PriorityQueue

v = 14
graph = [[] for i in range(v)]

# Best-first search driven by a priority queue of (cost, node) pairs
def best_first_search(actual_Src, target, n):
    visited = [False] * n
    pq = PriorityQueue()
    pq.put((0, actual_Src))
    visited[actual_Src] = True
    while pq.empty() == False:
        u = pq.get()[1]
        print(u, end=" ")
        if u == target:
            break
        for v, c in graph[u]:
            if visited[v] == False:
                visited[v] = True
                pq.put((c, v))
    print()

def addedge(x, y, cost):
    graph[x].append((y, cost))
    graph[y].append((x, cost))

if __name__ == "__main__":
    addedge(0, 1, 3)
    addedge(0, 2, 6)
    addedge(0, 3, 5)
    addedge(1, 4, 9)
    addedge(1, 5, 8)
    addedge(2, 6, 12)
    addedge(2, 7, 14)
    addedge(3, 8, 7)
    addedge(8, 9, 5)
    addedge(8, 10, 6)
    addedge(9, 11, 1)
    addedge(9, 12, 10)
    addedge(9, 13, 2)
    source = 0
    target = 9
    best_first_search(source, target, v)
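Note: the routine above orders the queue by edge cost alone (greedy best-first style). A minimal A* sketch that orders nodes by f(n) = g(n) + h(n) is given below; it is an illustrative addition, not part of the original program, and the heuristic h defaults to zero because no node coordinates are defined for this graph.

# Hedged A* sketch (assumed, not part of the original record).
import heapq

def a_star(graph, source, target, h=lambda n: 0):   # h=0 makes this behave like Dijkstra
    open_heap = [(h(source), 0, source, [source])]  # entries are (f, g, node, path)
    best_g = {source: 0}
    while open_heap:
        f, g, u, path = heapq.heappop(open_heap)
        if u == target:
            return path, g
        for nbr, cost in graph[u]:
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None, float("inf")

# Example (reusing the adjacency list built by addedge() above):
# print(a_star(graph, 0, 9))   # expected path [0, 3, 8, 9] with cost 17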
Memory Bounded A*
import heapq
import math

class PriorityQueue:
    """Priority queue implementation using heapq"""
    def __init__(self):
        self.elements = []

    def is_empty(self):
        return len(self.elements) == 0

    def put(self, item, priority):
        heapq.heappush(self.elements, (priority, item))

    def get(self):
        return heapq.heappop(self.elements)[1]

class Node:
    """Node class for representing the search tree"""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

    def __lt__(self, other):
        return self.path_cost + heuristic(self.state) < other.path_cost + heuristic(other.state)

    def __eq__(self, other):
        return self.state == other.state

def heuristic(state):
    """Heuristic function for estimating the cost to reach the goal state"""
    # Example heuristic function - Euclidean distance to the goal
    goal_state = (0, 0)  # Replace with actual goal state
    return math.sqrt((state[0] - goal_state[0])**2 + (state[1] - goal_state[1])**2)

def memory_bounded_a_star_search(start_state, max_memory):
    """Memory-bounded A* search algorithm"""
    frontier = PriorityQueue()
    frontier.put(Node(start_state), 0)
    explored = set()
    memory = {start_state: 0}
    while not frontier.is_empty():
        node = frontier.get()
        if node.state not in explored:
            explored.add(node.state)
            if is_goal_state(node.state):
                return get_solution_path(node)
            for child_state, action, step_cost in get_successor_states(node.state):
                child_node = Node(child_state, node, action, node.path_cost + step_cost)
                child_node_f = child_node.path_cost + heuristic(child_state)
                if child_state not in memory or child_node_f < memory[child_state]:
                    frontier.put(child_node, child_node_f)
                    memory[child_state] = child_node_f
                    # Drop the lowest-valued entries while the memory bound is exceeded
                    while memory_usage(memory) > max_memory:
                        state_to_remove = min(memory, key=memory.get)
                        del memory[state_to_remove]
    return None

def get_successor_states(state):
    """Function for generating successor states"""
    # Replace with actual successor state generation logic
    return []

def is_goal_state(state):
    """Function for checking if a state is the goal state"""
    # Replace with actual goal state checking logic
    return False

def get_solution_path(node):
    """Function for retrieving the solution path"""
    path = []
    while node is not None:
        path.append((node.state, node.action))
        node = node.parent
    path.reverse()
    return path

def memory_usage(memory):
    """Function for estimating the memory usage of a dictionary"""
    return sum(memory.values())
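Note: the skeleton above leaves get_successor_states and is_goal_state as stubs, so it returns None as written. A hedged usage sketch (assumed, not part of the original record) showing how these stubs might be filled in for a small 2-D grid and how the search is then invoked:

# Illustrative replacements for the stub functions above (assumed grid world of 5x5
# cells, unit step cost, goal (0, 0) matching the heuristic's goal_state).
def get_successor_states(state):
    x, y = state
    moves = [("up", (x, y + 1)), ("down", (x, y - 1)),
             ("left", (x - 1, y)), ("right", (x + 1, y))]
    # Each successor is (child_state, action, step_cost); stay inside the grid.
    return [(s, a, 1) for a, s in moves if 0 <= s[0] <= 4 and 0 <= s[1] <= 4]

def is_goal_state(state):
    return state == (0, 0)

path = memory_bounded_a_star_search((3, 4), max_memory=100)
print(path)   # expected: list of (state, action) pairs from (3, 4) to (0, 0)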
Output:
A*
0 1 3 2 8 9
Memory Bounded A*
Result:
Ex. No. 3
IMPLEMENTING NAÏVE BAYES
Date:
Aim:
Algorithm:
Program:
import math
import random
import csv

# Encode the class labels as integer indices
def encode_class(mydata):
    classes = []
    for i in range(len(mydata)):
        if mydata[i][-1] not in classes:
            classes.append(mydata[i][-1])
    for i in range(len(classes)):
        for j in range(len(mydata)):
            if mydata[j][-1] == classes[i]:
                mydata[j][-1] = i
    return mydata

# Split the dataset into training and test sets according to the given ratio
def splitting(mydata, ratio):
    train_num = int(len(mydata) * ratio)
    train = []
    test = list(mydata)
    while len(train) < train_num:
        index = random.randrange(len(test))
        train.append(test.pop(index))
    return train, test

# Group the rows of the dataset by class value
def groupUnderClass(mydata):
    dict = {}
    for i in range(len(mydata)):
        if (mydata[i][-1] not in dict):
            dict[mydata[i][-1]] = []
        dict[mydata[i][-1]].append(mydata[i])
    return dict

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def std_dev(numbers):
    avg = mean(numbers)
    variance = sum([pow(x - avg, 2) for x in numbers]) / float(len(numbers) - 1)
    return math.sqrt(variance)

def MeanAndStdDev(mydata):
    info = [(mean(attribute), std_dev(attribute)) for attribute in zip(*mydata)]
    del info[-1]
    return info

def MeanAndStdDevForClass(mydata):
    info = {}
    dict = groupUnderClass(mydata)
    for classValue, instances in dict.items():
        info[classValue] = MeanAndStdDev(instances)
    return info

def calculateGaussianProbability(x, mean, stdev):
    expo = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * expo

def calculateClassProbabilities(info, test):
    probabilities = {}
    for classValue, classSummaries in info.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, std_dev = classSummaries[i]
            x = test[i]
            probabilities[classValue] *= calculateGaussianProbability(x, mean, std_dev)
    return probabilities

def predict(info, test):
    probabilities = calculateClassProbabilities(info, test)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

def getPredictions(info, test):
    predictions = []
    for i in range(len(test)):
        result = predict(info, test[i])
        predictions.append(result)
    return predictions

def accuracy_rate(test, predictions):
    correct = 0
    for i in range(len(test)):
        if test[i][-1] == predictions[i]:
            correct += 1
    return (correct / float(len(test))) * 100.0

# driver code
filename = r'C:\Users\exam2\Desktop\[Link]'   # path to the dataset CSV (as given in the original record)
split_ratio = 0.7
with open(filename, 'r') as csvfile:
    lines = csv.reader(csvfile)
    mydata = list(lines)
for i in range(len(mydata)):
    mydata[i] = [float(x) for x in mydata[i]]
train_data, test_data = splitting(mydata, split_ratio)
print('Total number of examples are: ', len(mydata))
print('Out of these, training examples are: ', len(train_data))
print("Test examples are: ", len(test_data))
info = MeanAndStdDevForClass(train_data)
predictions = getPredictions(info, test_data)
accuracy = accuracy_rate(test_data, predictions)
print("Accuracy of your model is: ", accuracy)
Output:
Total number of examples are: 200
Out of these, training examples are: 140
Test examples are: 60
Accuracy of your model is: 71.2376788
Result:
Ex. No. 4
IMPLEMENTING BAYESIAN NETWORKS
Date:
Aim:
Algorithm:
Program:
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Define a small dataset
data = pd.DataFrame(data={'age': [23, 50, 30, 40, 35, 52, 47],
                          'sex': [1, 0, 1, 1, 0, 1, 0],
                          'cp': [3, 2, 1, 1, 2, 3, 1],
                          'trestbps': [145, 130, 120, 140, 160, 150, 130],
                          'chol': [233, 250, 204, 236, 354, 192, 294],
                          'fbs': [1, 0, 0, 0, 0, 1, 0],
                          'restecg': [0, 1, 0, 1, 1, 1, 0],
                          'thalach': [150, 187, 172, 178, 163, 148, 153],
                          'exang': [0, 0, 0, 0, 1, 0, 0],
                          'oldpeak': [2.3, 3.5, 1.4, 0.8, 0.6, 0.4, 1.3],
                          'slp': [0, 0, 2, 2, 2, 1, 1],
                          'caa': [0, 0, 0, 0, 0, 0, 0],
                          'thall': [1, 2, 2, 2, 2, 1, 2],
                          'output': [1, 1, 1, 1, 1, 1, 1]})
# Define the structure of the model
model = BayesianNetwork([('age', 'trestbps'), ('sex', 'chol'), ('cp', 'fbs'), ('restecg', 'thalach'),
                         ('exang', 'oldpeak'), ('slp', 'caa'), ('thall', 'output')])
# Fit the data to the model
model.fit(data, estimator=MaximumLikelihoodEstimator)
# Perform inference on the model
infer = VariableElimination(model)
# Query the 'output' variable given some evidence
q = infer.query(variables=['output'], evidence={'age': 50, 'sex': 1})
# Print the query result
print(q)
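A hedged follow-up (assumed, not part of the original program): verify that the fitted CPDs are consistent, inspect them, and query 'output' under different evidence.

print(model.check_model())        # True if all CPDs are valid and consistent
for cpd in model.get_cpds():
    print(cpd)                    # inspect each learned CPD table
q2 = infer.query(variables=['output'], evidence={'thall': 2})
print(q2)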
Output:
+-----------+---------------+
| output | phi(output) |
+===========+===============+
| output(1) | 1.0000 |
+-----------+---------------+
Result:
Ex. No. 5
BUILDING THE REGRESSION MODEL
Date:
Aim:
Algorithm:
Program:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Create a simple dataset (you can replace this with your own data)
x = np.array([5, 7, 8, 7, 2, 17, 2, 9, 4, 11, 12, 9, 6])              # Age of cars
y = np.array([99, 86, 87, 88, 111, 86, 103, 87, 94, 78, 77, 85, 86])  # Speed of cars
# Calculate mean of x and y
x_mean = np.mean(x)
y_mean = np.mean(y)
# Calculate slope (m) and intercept (c) for the regression line
numerator = np.sum((x - x_mean) * (y - y_mean))
denominator = np.sum((x - x_mean) ** 2)
slope = numerator / denominator
intercept = y_mean - slope * x_mean
# Create the regression line function
def predict_speed(age):
    return slope * age + intercept
# Make predictions for the entire dataset
predicted_speeds = predict_speed(x)
# Calculate R-squared value
total_variance = np.sum((y - y_mean) ** 2)
residual_variance = np.sum((y - predicted_speeds) ** 2)
r_squared = 1 - (residual_variance / total_variance)
print(f"Slope (m): {slope:.2f}")
print(f"Intercept (c): {intercept:.2f}")
print(f"R-squared value: {r_squared:.2f}")
# Visualize the regression line
plt.scatter(x, y, label="Actual data")
plt.plot(x, predicted_speeds, color="red", label="Regression line")
plt.xlabel("Age of Cars")
plt.ylabel("Speed (mph)")
plt.title("Simple Linear Regression from Scratch")
plt.legend()
plt.show()
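A quick cross-check (an illustrative addition, not part of the original program): numpy's polyfit should return approximately the same slope and intercept for a degree-1 fit.

m, c = np.polyfit(x, y, 1)   # least-squares line fit of degree 1
print(f"np.polyfit slope: {m:.2f}, intercept: {c:.2f}")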
Output:
Slope (m): -1.75
Intercept (c): 103.11
R-squared value: 0.58
Result:
Ex. No. 6
BUILDING DECISION TREES
Date:
Aim:
Algorithm:
Program:
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Load your dataset
# data = pd.read_csv('path_to_your_data.csv')
# For demonstration, let's create a simple dataset
data = pd.DataFrame({
    'Feature1': [1, 2, 3, 4, 5],
    'Feature2': [5, 4, 3, 2, 1],
    'Target': [1.2, 2.1, 3.5, 4.8, 5.6]
})
# Split the data into features and target
X = data.drop('Target', axis=1)
y = data['Target']
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Initialize the Decision Tree Regressor
decision_tree = DecisionTreeRegressor(random_state=42)
# Fit the model on the training data
decision_tree.fit(X_train, y_train)
# Predict on the test data
y_pred_tree = decision_tree.predict(X_test)
# Calculate the Mean Squared Error for the Decision Tree
mse_tree = mean_squared_error(y_test, y_pred_tree)
print(f'Decision Tree Mean Squared Error: {mse_tree:.4f}')
# Initialize the Random Forest Regressor
random_forest = RandomForestRegressor(random_state=42)
# Fit the model on the training data
random_forest.fit(X_train, y_train)
# Predict on the test data
y_pred_forest = random_forest.predict(X_test)
# Calculate the Mean Squared Error for the Random Forest
mse_forest = mean_squared_error(y_test, y_pred_forest)
print(f'Random Forest Mean Squared Error: {mse_forest:.4f}')
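With only five rows, the test split holds a single sample, so the MSE above is very noisy. A hedged sketch (an illustrative addition, not part of the original program) using cross-validation gives a slightly more stable picture even on this toy data.

from sklearn.model_selection import cross_val_score
# 5-fold cross-validated MSE for the decision tree on the same toy data
scores = cross_val_score(DecisionTreeRegressor(random_state=42), X, y,
                         cv=5, scoring='neg_mean_squared_error')
print('Decision Tree cross-validated MSE:', -scores.mean())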
Output:
Decision Tree Mean Squared Error: 0.8100
Random Forest Mean Squared Error: 0.1475
Result:
Ex. No. 7
BUILDING SVM MODEL
Date:
Aim:
Algorithm:
Program:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Load the Fish dataset
dataset_url = "[Link]906_491820_Fish.csv"   # dataset URL as given in the original record
fish = pd.read_csv(dataset_url)
# Split the data into training and testing sets
X = fish.drop(columns=['Species'])  # Features
y = fish['Species']  # Target variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Train an SVM model with a linear kernel
svm_linear = SVC(kernel='linear')
svm_linear.fit(X_train, y_train)
# Predict the species labels for the test set
y_pred = svm_linear.predict(X_test)
# Evaluate the model's accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f'Linear SVM accuracy: {accuracy:.2f}')
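SVMs are sensitive to feature scale, so a common refinement is to standardize the features before fitting; the sketch below is an illustrative variant (not part of the original program) using a scikit-learn pipeline on the same split.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hedged variant: scale features, then fit the same linear SVM
scaled_svm = make_pipeline(StandardScaler(), SVC(kernel='linear'))
scaled_svm.fit(X_train, y_train)
print(f'Scaled linear SVM accuracy: {accuracy_score(y_test, scaled_svm.predict(X_test)):.2f}')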
Output:
Linear SVM accuracy: 0.96
Result:
Ex. No. 8
IMPLEMENT ENSEMBLING TECHNIQUES
Date:
Aim:
Algorithm:
Program:
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score

# 1. Load the dataset and split it into training and testing sets
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# 2. Choose the base models to be included in the ensemble
log_clf = LogisticRegression(random_state=42)
svm_clf = SVC(probability=True, random_state=42)
dt_clf = DecisionTreeClassifier(random_state=42)
# 3. Train each base model on the training set
for clf in (log_clf, svm_clf, dt_clf):
    clf.fit(X_train, y_train)
# 4. Combine the predictions of the base models using the chosen ensembling technique
voting_clf = VotingClassifier(estimators=[('lr', log_clf), ('svc', svm_clf), ('dt', dt_clf)],
                              voting='soft')
voting_clf.fit(X_train, y_train)
# 5. Evaluate the performance of the ensemble model on the testing set
y_pred = voting_clf.predict(X_test)
print(f'Ensemble model accuracy: {accuracy_score(y_test, y_pred)}')
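A hedged follow-up (an illustrative addition, not part of the original program): printing each base model's accuracy makes the gain from the voting ensemble, if any, visible.

# Compare the already-fitted base models against the ensemble on the same test set
for name, clf in [('Logistic Regression', log_clf), ('SVM', svm_clf),
                  ('Decision Tree', dt_clf)]:
    print(f'{name} accuracy: {accuracy_score(y_test, clf.predict(X_test))}')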
Output:
Ensemble model accuracy: 0.895
Result:
Ex. No. 9
IMPLEMENT CLUSTERING ALGORITHMS
Date:
Aim:
Algorithm:
Program:
import warnings
warnings.filterwarnings('ignore', category=UserWarning, module='sklearn')
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
import matplotlib.pyplot as plt

# Generate a random dataset with 100 samples and 4 clusters
X, y = make_blobs(n_samples=100, centers=4, random_state=42)
# Create a K-Means clustering object with 4 clusters
kmeans = KMeans(n_clusters=4, random_state=42)
# Fit the K-Means model to the dataset
kmeans.fit(X)
# Create a scatter plot of the data colored by K-Means cluster assignment
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_)
plt.title("K-Means Clustering")
plt.show()
# Create a Hierarchical clustering object with 4 clusters
hierarchical = AgglomerativeClustering(n_clusters=4)
# Fit the Hierarchical model to the dataset
hierarchical.fit(X)
# Create a scatter plot of the data colored by Hierarchical cluster assignment
plt.scatter(X[:, 0], X[:, 1], c=hierarchical.labels_)
plt.title("Hierarchical Clustering")
plt.show()
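A hedged follow-up (an illustrative addition, not part of the original program): silhouette scores give a quick numeric comparison of the two clusterings, with values closer to 1 indicating tighter, better-separated clusters.

from sklearn.metrics import silhouette_score
print("K-Means silhouette score:", silhouette_score(X, kmeans.labels_))
print("Hierarchical silhouette score:", silhouette_score(X, hierarchical.labels_))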
Output:
Result:
Ex. No. 10
IMPLEMENT THE EXPECTATION-MAXIMIZATION (EM)
Date:
Aim:
Algorithm:
Program:
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination
from pgmpy.factors.discrete import TabularCPD
import numpy as np
import pandas as pd

# Define the structure of the Bayesian network
model = BayesianNetwork([('C', 'S'), ('D', 'S')])
# Define the conditional probability distributions (CPDs)
cpd_c = TabularCPD('C', 2, [[0.5], [0.5]])
cpd_d = TabularCPD('D', 2, [[0.5], [0.5]])
cpd_s = TabularCPD('S', 2, [[0.8, 0.6, 0.6, 0.2], [0.2, 0.4, 0.4, 0.8]],
                   evidence=['C', 'D'], evidence_card=[2, 2])
# Add the CPDs to the model
model.add_cpds(cpd_c, cpd_d, cpd_s)
# Generate some data
data = np.random.randint(low=0, high=2, size=(5000, 3))
# Convert the numpy ndarray to a pandas DataFrame and add column names
data = pd.DataFrame(data, columns=['C', 'D', 'S'])
# Create a Maximum Likelihood Estimator
mle = MaximumLikelihoodEstimator(model, data)
# Estimate the CPDs for all variables in the model
model_estimated = mle.get_parameters()
# Create a Variable Elimination object to perform inference
infer = VariableElimination(model)
# Perform inference on some observed evidence
query = infer.query(['S'], evidence={'C': 1})
print(query)
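The program above estimates parameters with maximum likelihood. As a hedged sketch (assumed, not part of the original program), recent pgmpy versions also provide an ExpectationMaximization estimator, the EM counterpart to MaximumLikelihoodEstimator, which can additionally handle latent variables.

from pgmpy.estimators import ExpectationMaximization

# Estimate the CPDs of the same model from the same data using EM
em = ExpectationMaximization(model, data)
cpds_em = em.get_parameters()      # list of CPDs estimated via EM
for cpd in cpds_em:
    print(cpd)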
Output:
+------+----------+
| S | phi(S) |
+======+==========+
| S(0) | 0.4000 |
+------+----------+
| S(1) | 0.6000 |
+------+----------+
Result:
Ex. No. 11
BUILD SIMPLE NN MODELS
Date:
Aim:
Algorithm:
Program:
import tensorflow as tf
from tensorflow import keras

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Normalize the input data
x_train = x_train / 2.0
x_test = x_test / 2.0
# Define the model architecture
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(130, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('Test accuracy:', test_acc)
Output:
Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 755us/step - accuracy: 0.8363 - loss: 3.8895 - val_accuracy: 0.9264 - val_loss: 0.3152
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 713us/step - accuracy: 0.9335 - loss: 0.2749 - val_accuracy: 0.9382 - val_loss: 0.2474
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 710us/step - accuracy: 0.9450 - loss: 0.2148 - val_accuracy: 0.9473 - val_loss: 0.2126
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 698us/step - accuracy: 0.9523 - loss: 0.1770 - val_accuracy: 0.9513 - val_loss: 0.2109
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 715us/step - accuracy: 0.9597 - loss: 0.1538 - val_accuracy: 0.9445 - val_loss: 0.2121
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 711us/step - accuracy: 0.9630 - loss: 0.1382 - val_accuracy: 0.9457 - val_loss: 0.2214
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 706us/step - accuracy: 0.9676 - loss: 0.1239 - val_accuracy: 0.9584 - val_loss: 0.1854
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 710us/step - accuracy: 0.9657 - loss: 0.1300 - val_accuracy: 0.9588 - val_loss: 0.1870
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 699us/step - accuracy: 0.9700 - loss: 0.1132 - val_accuracy: 0.9524 - val_loss: 0.2495
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 705us/step - accuracy: 0.9701 - loss: 0.1165 - val_accuracy: 0.9530 - val_loss: 0.2668
313/313 - 0s - 458us/step - accuracy: 0.9530 - loss: 0.2668
Test accuracy: 0.953000009059906
Result:
Ex. No. 12
BUILD DEEP LEARNING NN MODELS
Date:
Aim:
Algorithm:
Program:
import tensorflow as tf
from tensorflow import keras

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Normalize the input data
x_train = x_train / 255.0
x_test = x_test / 255.0
# Define the model architecture
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10)
])
# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('Test accuracy:', test_acc)
Output:
Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 2s 741us/step - accuracy: 0.8578 - loss: 0.4806 - val_accuracy: 0.9591 - val_loss: 0.1417
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 697us/step - accuracy: 0.9549 - loss: 0.1550 - val_accuracy: 0.9656 - val_loss: 0.1127
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 698us/step - accuracy: 0.9670 - loss: 0.1083 - val_accuracy: 0.9728 - val_loss: 0.0878
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 699us/step - accuracy: 0.9726 - loss: 0.0911 - val_accuracy: 0.9758 - val_loss: 0.0792
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 690us/step - accuracy: 0.9757 - loss: 0.0746 - val_accuracy: 0.9767 - val_loss: 0.0750
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 695us/step - accuracy: 0.9798 - loss: 0.0636 - val_accuracy: 0.9786 - val_loss: 0.0694
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 719us/step - accuracy: 0.9832 - loss: 0.0542 - val_accuracy: 0.9803 - val_loss: 0.0651
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 691us/step - accuracy: 0.9840 - loss: 0.0510 - val_accuracy: 0.9780 - val_loss: 0.0738
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 697us/step - accuracy: 0.9858 - loss: 0.0451 - val_accuracy: 0.9791 - val_loss: 0.0726
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 1s 695us/step - accuracy: 0.9858 - loss: 0.0432 - val_accuracy: 0.9800 - val_loss: 0.0706
313/313 - 0s - 350us/step - accuracy: 0.9800 - loss: 0.0706
Test accuracy: 0.9800000190734863
Result: