AI Lab Manual - New

The document is a practical record certificate for students at Bangalore University, detailing various programming tasks to be completed. It includes tasks such as solving the Water Jug Problem, implementing A* search, and performing image processing with OpenCV, among others. Additionally, it covers machine learning tasks using Keras and Naive Bayes classifiers, as well as face recognition using Python.

CERTIFICATE

This is to certify that Smt/Sri _______________________________________


Register Number : ________________________________________________________
Class ________________________ Semester ________ have successfully completed
the _____________________________ Practical Record as prescribed by Bangalore
University for the academic year ____________________

Teacher Incharge Signature of HOD

Internal Examiner External Examiner

SEAL
INDEX
Sno Program Page No Date

1  Write a program to solve the Water Jug Problem using Breadth First Search (BFS).
2  Write a program to find the optimum path from Source to Destination using the A* search technique.
3  Write a program to solve the 4-Queens Problem.
4  Write a program to implement Minimax search for 2-player games.
5  Using the OpenCV Python library, capture an image and perform the following image processing operations: a) Image resizing b) Blurring of image c) Grayscaling of image d) Scaling and rotation e) Edge detection f) Segmentation using thresholding g) Background subtraction h) Morphological operations.
6  Write a program with two menu options 1) Capture Image and 2) Recognise Image. The program should capture pictures of five students and save them, then identify/recognise the student and display the student name.
7  Using Keras/any standard dataset, use the Decision Tree classifier to classify the dataset.
8  Using Keras/any standard dataset, use the Naïve Bayes classifier to classify the dataset.
9  Using Keras/any standard dataset, implement the K-Means clustering algorithm.
10 Using Python NLTK, perform the following Natural Language Processing (NLP) tasks for any textual content: a) Tokenizing b) Filtering stop words c) Stemming d) Part-of-speech tagging e) Chunking f) Named Entity Recognition (NER).
11 Write a program that uses neural networks for image classification using the Keras Iris dataset.
1. Write a program to solve the Water Jug Problem using Breadth First Search (BFS).

from collections import defaultdict

visited = defaultdict(lambda: False)

# Jug capacities J1 and J2, and the target amount L (in litres)
J1, J2, L = 0, 0, 0

def Water_Jug_problem(X, Y):
    global J1, J2, L

    # Goal: one jug holds exactly L litres and the other is empty
    if (X == L and Y == 0) or (Y == L and X == 0):
        print("(", X, ", ", Y, ")", sep="")
        return True

    if visited[(X, Y)] == False:
        print("(", X, ", ", Y, ")", sep="")
        visited[(X, Y)] = True

        # Try every move: empty a jug, fill a jug, or pour one jug into the other
        return (Water_Jug_problem(0, Y) or
                Water_Jug_problem(X, 0) or
                Water_Jug_problem(J1, Y) or
                Water_Jug_problem(X, J2) or
                Water_Jug_problem(X + min(Y, (J1 - X)), Y - min(Y, (J1 - X))) or
                Water_Jug_problem(X - min(X, (J2 - Y)), Y + min(X, (J2 - Y))))
    else:
        return False

# Main code
J1 = 2
J2 = 5
L = 3
print("Path is as follows:")
Water_Jug_problem(0, 0)
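
Note that, despite the heading, the recursion above expands states depth-first. A queue-based search visits states level by level and therefore finds a shortest sequence of moves; the sketch below is a minimal illustration (the function name and the path-reconstruction step are additions, not part of the manual):

from collections import deque

def water_jug_bfs(cap1, cap2, target):
    # Breadth-first search over (jug1, jug2) states; returns a shortest path
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == target or y == target:
            path, state = [], (x, y)
            while state is not None:          # walk parents back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        pour12 = min(x, cap2 - y)             # amount pourable from jug1 into jug2
        pour21 = min(y, cap1 - x)             # amount pourable from jug2 into jug1
        for nxt in [(cap1, y), (x, cap2), (0, y), (x, 0),
                    (x - pour12, y + pour12), (x + pour21, y - pour21)]:
            if nxt not in parent:             # state not visited yet
                parent[nxt] = (x, y)
                queue.append(nxt)
    return None

print(water_jug_bfs(2, 5, 3))

For the 2- and 5-litre jugs with a 3-litre target this prints [(0, 0), (0, 5), (2, 3)].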
2. Write a program to find the optimum path from Source to Destination
using A* search technique.

from collections import defaultdict

class Graph:

    def __init__(self, vertices):
        self.V = vertices
        self.graph = defaultdict(list)

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def printAllPathsUtil(self, u, d, visited, path):
        visited[u] = True
        path.append(u)

        if u == d:
            print(path)
        else:
            for i in self.graph[u]:
                if visited[i] == False:
                    self.printAllPathsUtil(i, d, visited, path)

        path.pop()
        visited[u] = False

    def printAllPaths(self, s, d):
        visited = [False] * self.V
        path = []
        self.printAllPathsUtil(s, d, visited, path)

g = Graph(4)
g.addEdge(0, 1)
g.addEdge(0, 2)
g.addEdge(0, 3)
g.addEdge(2, 0)
g.addEdge(2, 1)
g.addEdge(1, 3)
s = 2; d = 3
print("Following are all different paths from %d to %d:" % (s, d))
g.printAllPaths(s, d)
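
Despite the heading, the class above enumerates every simple path by depth-first search and uses no heuristic. A* additionally orders expansion by f(n) = g(n) + h(n), the cost so far plus a heuristic estimate of the remaining cost. A minimal sketch on a small weighted graph (the graph, edge costs, and heuristic table h below are illustrative values, not from the manual):

import heapq

def a_star(graph, h, start, goal):
    # A* search: expand the node with the smallest f(n) = g(n) + h(n)
    open_heap = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float('inf')):   # found a cheaper route
                best_g[nbr] = ng
                heapq.heappush(open_heap, (ng + h[nbr], ng, nbr, path + [nbr]))
    return None, float('inf')

# Example weighted graph and admissible heuristic (illustrative values)
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)]}
h = {'S': 3, 'A': 2, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))

With an admissible heuristic (h never overestimates the remaining cost), the first time the goal is popped its path is optimal; here it prints (['S', 'A', 'B', 'G'], 4).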
3. Write a program to solve the 4-Queens Problem

def is_safe(board, row, col, n):
    # Check the row on the left side
    for i in range(col):
        if board[row][i] == 1:
            return False

    # Check upper diagonal on the left side
    for i, j in zip(range(row, -1, -1), range(col, -1, -1)):
        if board[i][j] == 1:
            return False

    # Check lower diagonal on the left side
    for i, j in zip(range(row, n, 1), range(col, -1, -1)):
        if board[i][j] == 1:
            return False

    return True

def print_solution(board):
    for row in board:
        print(" ".join(map(str, row)))

def solve_nqueens_util(board, col, n):
    if col == n:
        print_solution(board)
        return True

    res = False
    for i in range(n):
        if is_safe(board, i, col, n):
            board[i][col] = 1
            res = solve_nqueens_util(board, col + 1, n) or res
            board[i][col] = 0  # backtrack: remove the queen to search for other solutions

    return res

def solve_nqueens(n):
    board = [[0 for _ in range(n)] for _ in range(n)]
    if not solve_nqueens_util(board, 0, n):
        print("Solution does not exist.")

# Solve the 4-Queens problem
solve_nqueens(4)
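
For n = 4 this prints the two distinct solutions one after the other: queens in rows 2, 4, 1, 3 of columns 1 to 4, and the mirror image with queens in rows 3, 1, 4, 2.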
4. Write a program to implement Minimax search for 2 Player games

import math

def minimax(curDepth, nodeIndex, maxTurn, scores, targetDepth):
    # Leaf node reached: return its score
    if curDepth == targetDepth:
        return scores[nodeIndex]

    if maxTurn:
        return max(minimax(curDepth + 1, nodeIndex * 2, False, scores, targetDepth),
                   minimax(curDepth + 1, nodeIndex * 2 + 1, False, scores, targetDepth))
    else:
        return min(minimax(curDepth + 1, nodeIndex * 2, True, scores, targetDepth),
                   minimax(curDepth + 1, nodeIndex * 2 + 1, True, scores, targetDepth))

scores = [3, 5, 2, 9, 12, 5, 23, 23]

# Depth of the complete binary game tree over the leaf scores
treeDepth = int(math.log2(len(scores)))

print("The optimal value is : ", end="")
print(minimax(0, 0, True, scores, treeDepth))
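
With the sample scores, the maximiser's level above the leaves picks max(3, 5) = 5, max(2, 9) = 9, max(12, 5) = 12 and max(23, 23) = 23; the minimiser keeps min(5, 9) = 5 and min(12, 23) = 12; the root maximiser then returns max(5, 12) = 12, so the program prints "The optimal value is : 12".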
5. Using OpenCV python library capture an image and perform the
following image processing operations: a) Image Resizing b) Blurring of
Image c) Grayscaling of image d) Scaling and rotation e) Edge Detection
f) Segmentation using thresholding g) Background subtraction h)
Morphological operations

import cv2
import numpy as np

# Read an image from disk (the file name is a placeholder)
img = cv2.imread('image.jpg')

# Resizing the image
resized_img = cv2.resize(img, (300, 300))

# Blurring the image
ksize = (10, 10)
image = cv2.blur(img, ksize)

# Grayscaling the image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Scaling and rotation
scaled_img = cv2.resize(img, (400, 400))
rotated_img = cv2.rotate(img, cv2.ROTATE_180)

# Edge detection
edges = cv2.Canny(img, 100, 200)

# Segmentation using thresholding
ret, thresh = cv2.threshold(gray_img, 127, 255, cv2.THRESH_BINARY)

# Morphological operations
kernel = np.ones((5, 5), np.uint8)
eroded_img = cv2.erode(img, kernel, iterations=1)
dilated_img = cv2.dilate(img, kernel, iterations=1)

# Display the images
cv2.imshow('Original Image', img)
cv2.imshow('Resized Image', resized_img)
cv2.imshow('Blurred Image', image)
cv2.imshow('Grayscaled Image', gray_img)
cv2.imshow('Scaled Image', scaled_img)
cv2.imshow('Rotated Image', rotated_img)
cv2.imshow('Edges', edges)
cv2.imshow('Thresholded Image', thresh)
cv2.imshow('Eroded Image', eroded_img)
cv2.imshow('Dilated Image', dilated_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
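
The listing above reads the image from disk rather than capturing one, and item g) background subtraction is not shown. A minimal sketch of both, assuming a webcam at index 0 and using OpenCV's MOG2 subtractor (the file name and frame count are illustrative):

import cv2

# Capture a single frame from the default webcam
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    cv2.imwrite('captured.jpg', frame)  # illustrative file name

# g) Background subtraction over a stream of frames using MOG2
subtractor = cv2.createBackgroundSubtractorMOG2()
for _ in range(100):                    # process 100 frames (illustrative count)
    ret, frame = cap.read()
    if not ret:
        break
    fg_mask = subtractor.apply(frame)   # foreground mask: pixels that changed
    cv2.imshow('Foreground Mask', fg_mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()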

6. Write a program with two menu options 1) Capture Image and 2) Recognise Image. This program should capture pictures of five students and save them. The program should identify/recognise the student and display the student name.

import face_recognition
import cv2
import numpy as np

video_capture = cv2.VideoCapture(0)

# Load a sample picture of each student and learn how to recognize it
# (the file names below are placeholders for the five saved images)
student1_image = face_recognition.load_image_file("student1.jpg")
student1_face_encoding = face_recognition.face_encodings(student1_image)[0]

student2_image = face_recognition.load_image_file("student2.jpg")
student2_face_encoding = face_recognition.face_encodings(student2_image)[0]

student3_image = face_recognition.load_image_file("student3.jpg")
student3_face_encoding = face_recognition.face_encodings(student3_image)[0]

student4_image = face_recognition.load_image_file("student4.jpg")
student4_face_encoding = face_recognition.face_encodings(student4_image)[0]

student5_image = face_recognition.load_image_file("student5.jpg")
student5_face_encoding = face_recognition.face_encodings(student5_image)[0]

# Create arrays of known face encodings and their names
known_face_encodings = [
    student1_face_encoding,
    student2_face_encoding,
    student3_face_encoding,
    student4_face_encoding,
    student5_face_encoding
]
known_face_names = [
    "Thejaswi",
    "saitejasri",
    "sushmitha",
    "Nitin",
    "Simhadri"
]

# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()

    # Only process every other frame of video to save time
    if process_this_frame:
        # Resize frame of video to 1/4 size for faster face recognition processing
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

        # Convert the image from BGR color (which OpenCV uses) to RGB color
        # (which face_recognition uses)
        rgb_small_frame = small_frame[:, :, ::-1]

        # Find all the faces and face encodings in the current frame of video
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # Use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]

            face_names.append(name)

    process_this_frame = not process_this_frame

    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale back up face locations since the frame we detected in was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4

        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

        # Draw a label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    # Display the resulting image
    cv2.imshow('Video', frame)

    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
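
The listing above implements only option 2 (recognition). A minimal sketch of option 1, capturing and saving the five students' pictures under the placeholder file names used above (the menu prompt and key binding are assumptions, not from the manual):

import cv2

def capture_images(num_students=5):
    # Option 1: capture one picture per student from the webcam and save it
    cap = cv2.VideoCapture(0)
    for i in range(1, num_students + 1):
        print(f"Student {i}: press 'c' in the video window to capture")
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            cv2.imshow('Capture', frame)
            if cv2.waitKey(1) & 0xFF == ord('c'):
                cv2.imwrite(f"student{i}.jpg", frame)  # matches the placeholder names above
                break
    cap.release()
    cv2.destroyAllWindows()

choice = input("1) Capture Image  2) Recognise Image: ")
if choice == "1":
    capture_images()
# choice "2" would run the recognition loop shown above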

7. Using Keras/any standard dataset write the programs for the following Machine learning tasks:
7. Use the Decision Tree classifier to classify the dataset.
8. Use the Naïve Bayes classifier to classify the dataset.
9. Implement the K-Means clustering algorithm.

import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd
from google.colab import files

# Importing the dataset (upload user_data.csv when running in Colab)
uploaded = files.upload()
data_set = pd.read_csv('user_data.csv')

# Extracting independent and dependent variables
x = data_set.iloc[:, [2, 3]].values
y = data_set.iloc[:, 4].values

# Splitting the dataset into training and test sets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

# Feature scaling
from sklearn.preprocessing import StandardScaler
st_x = StandardScaler()
x_train = st_x.fit_transform(x_train)
x_test = st_x.transform(x_test)

# Fitting the Decision Tree classifier to the training set
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion='entropy', random_state=0)
classifier.fit(x_train, y_train)

# Predicting the test set result
y_pred = classifier.predict(x_test)

# Creating the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
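
To inspect the result, the confusion matrix can be printed together with an accuracy score (a small addition not in the manual; accuracy_score comes from scikit-learn):

from sklearn.metrics import accuracy_score

print(cm)  # rows are actual classes, columns are predicted classes
print("Accuracy:", accuracy_score(y_test, y_pred))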

8. Use the Naïve Bayes classifier to classify the dataset.


import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('user_data.csv')
x = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
print(dataset)

# Splitting the dataset into the training set and test set
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

# Feature scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

# Fitting Naive Bayes to the training set
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(x_train, y_train)

# Predicting the test set result
y_pred = classifier.predict(x_test)

# Creating the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
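
As with the Decision Tree program, the confusion matrix cm can be printed and an accuracy computed with sklearn.metrics.accuracy_score.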

9. Implement K-Means clustering Algorithm.

# Importing libraries
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd

# Importing the dataset
dataset = pd.read_csv('mall_customers_data.csv')
x = dataset.iloc[:, [3, 4]].values

# Finding the optimal number of clusters using the elbow method
from sklearn.cluster import KMeans
wcss_list = []  # list for the values of WCSS

# Using a for loop for k = 1 to 10
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=42)
    kmeans.fit(x)
    wcss_list.append(kmeans.inertia_)
mtp.plot(range(1, 11), wcss_list)
mtp.title('The Elbow Method Graph')
mtp.xlabel('Number of clusters (k)')
mtp.ylabel('wcss_list')
mtp.show()

# Training the K-means model on the dataset
kmeans = KMeans(n_clusters=5, init='k-means++', random_state=42)
y_predict = kmeans.fit_predict(x)

# Visualising the clusters
mtp.scatter(x[y_predict == 0, 0], x[y_predict == 0, 1], s=100, c='blue', label='Cluster 1')
mtp.scatter(x[y_predict == 1, 0], x[y_predict == 1, 1], s=100, c='green', label='Cluster 2')
mtp.scatter(x[y_predict == 2, 0], x[y_predict == 2, 1], s=100, c='red', label='Cluster 3')
mtp.scatter(x[y_predict == 3, 0], x[y_predict == 3, 1], s=100, c='cyan', label='Cluster 4')
mtp.scatter(x[y_predict == 4, 0], x[y_predict == 4, 1], s=100, c='magenta', label='Cluster 5')
mtp.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=300, c='yellow', label='Centroid')
mtp.title('Clusters of customers')
mtp.xlabel('Annual Income (k$)')
mtp.ylabel('Spending Score (1-100)')
mtp.legend()
mtp.show()
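
The "elbow" is the value of k at which the WCSS curve stops falling steeply; on the mall customers data this typically occurs around k = 5, which is why five clusters are trained above.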
10. Using Python NLTK, perform the following Natural Language
Processing (NLP) tasks for any textual content. a) Tokenizing b)
Filtering Stop Words c) Stemming d) Part of Speech tagging e)
Chunking f) Named Entity Recognition (NER)

import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk import pos_tag, ne_chunk

# Add a local NLTK data directory to the search path and fetch the resources
nltk.data.path.append("/NLTK_DATA_DIR")
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')

# Sample text
text = ("Natural Language Processing is a subfield of artificial intelligence that "
        "focuses on the interaction between computers and humans using natural language.")

# a) Tokenizing
words = word_tokenize(text)
sentences = sent_tokenize(text)

print("Tokenized Words:", words)


print("Tokenized Sentences:", sentences)

# b) Filtering Stop Words


stop_words = set(stopwords.words("english"))
filtered_words = [word for word in words if word.lower() not in stop_words]

print("Filtered Words:", filtered_words)

# c) Stemming
porter_stemmer = PorterStemmer()
stemmed_words = [porter_stemmer.stem(word) for word in words]

print("Stemmed Words:", stemmed_words)

# d) Part of Speech Tagging


pos_tags = pos_tag(words)

print("Part of Speech Tags:", pos_tags)


# e) Chunking
grammar = r"""
NP: {<DT>?<JJ>*<NN>} # Chunk Noun Phrases
PP: {<IN><NP>} # Chunk Prepositional Phrases
VP: {<VB.*><NP|PP>*} # Chunk Verb Phrases
"""
chunk_parser = nltk.RegexpParser(grammar)
chunked_sentence = chunk_parser.parse(pos_tags)

print("Chunked Sentence:", chunked_sentence)

# f) Named Entity Recognition (NER)


ner_result = ne_chunk(pos_tags)

print("Named Entity Recognition (NER):", ner_result)

11. Write a program that uses Neural networks for image classification
using Keras Iris dataset

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

# Keras does not ship an Iris dataset, so Iris is loaded from scikit-learn
# and split into training and test sets (a substitution to make the listing runnable)
iris = load_iris()
x_train, x_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.25, random_state=0)

# One-hot encoding the target labels
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# Creating a sequential model
model = Sequential()

# Adding the first fully connected layer with 10 neurons
# (input_dim=4 for the four Iris features)
model.add(Dense(10, input_dim=4, activation='relu'))

# Adding the second fully connected layer with 5 neurons
model.add(Dense(5, activation='relu'))

# Adding the output layer with 3 neurons (one per Iris class)
model.add(Dense(3, activation='softmax'))

# Compiling the model using the Adam optimizer and categorical cross-entropy loss
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Training the model
model.fit(x_train, y_train, epochs=100)

# Evaluating the model
model.evaluate(x_test, y_test)
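
model.evaluate returns the test loss together with the metrics named at compile time; capturing and printing them makes the result explicit (a small addition not in the manual):

loss, accuracy = model.evaluate(x_test, y_test)
print("Test loss:", loss)
print("Test accuracy:", accuracy)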
