Lab Programs

1. Write a program to solve the Water Jug Problem using Breadth First Search (BFS).
from collections import deque

def bfs_water_jug(jug1_max, jug2_max, target):


visited, queue = set([(0, 0)]), deque([(0, 0)])

while queue:
        jug1, jug2 = queue.popleft()
if jug1 == target or jug2 == target:
return True

next_states = [
(jug1_max, jug2), # Fill Jug 1
(jug1, jug2_max), # Fill Jug 2
(0, jug2), # Empty Jug 1
(jug1, 0), # Empty Jug 2
            (jug1 - min(jug1, jug2_max - jug2), jug2 + min(jug1, jug2_max - jug2)),  # Pour Jug 1 into Jug 2
            (jug1 + min(jug2, jug1_max - jug1), jug2 - min(jug2, jug1_max - jug1))   # Pour Jug 2 into Jug 1
]

for state in next_states:


if state not in visited:
                    visited.add(state)
                    queue.append(state)

return False

# Example usage:
jug1_capacity, jug2_capacity, target_amount = 5, 10, 2
result = bfs_water_jug(jug1_capacity, jug2_capacity, target_amount)
print(f"{'Possible' if result else 'Not possible'} to measure {target_amount} liters using the given jugs.")
output:
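For reference, here is a minimal optional variant (not part of the original listing) in which the same BFS also records each state's predecessors, so the actual sequence of jug states can be printed rather than just a yes/no answer:

from collections import deque

def bfs_water_jug_path(jug1_max, jug2_max, target):
    # Each queue entry is the full list of states visited on the way to the current state
    start = (0, 0)
    visited, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        jug1, jug2 = path[-1]
        if jug1 == target or jug2 == target:
            return path
        pour1to2 = min(jug1, jug2_max - jug2)
        pour2to1 = min(jug2, jug1_max - jug1)
        for state in [(jug1_max, jug2), (jug1, jug2_max), (0, jug2), (jug1, 0),
                      (jug1 - pour1to2, jug2 + pour1to2), (jug1 + pour2to1, jug2 - pour2to1)]:
            if state not in visited:
                visited.add(state)
                queue.append(path + [state])
    return None

# Example usage: with 4 L and 3 L jugs, a target of 2 L prints the list of (jug1, jug2) states
print(bfs_water_jug_path(4, 3, 2))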

2. Write a program to find the optimum path from Source to Destination using the A* search technique.
import heapq

class Node:
def __init__(self, position, parent=None, g=0, h=0):
        self.position, self.parent = position, parent
self.g, self.h = g, h
self.f = g + h
def __lt__(self, other): return self.f < other.f

def heuristic(a, b): return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar_search(grid, start, end):


start_node = Node(start, None, 0, heuristic(start, end))
open_list, closed_list = [start_node], set()

while open_list:
        current = heapq.heappop(open_list)
        if current.position == end:
            path = []
            while current: path.append(current.position); current = current.parent
return path[::-1]

        closed_list.add(current.position)
for dx, dy in [(0, -1), (0, 1), (-1, 0), (1, 0)]:
            pos = (current.position[0] + dx, current.position[1] + dy)
if (0 <= pos[0] < len(grid) and 0 <= pos[1] < len(grid[0]) and
grid[pos[0]][pos[1]] == 0 and pos not in closed_list):
new_node = Node(pos, current, current.g + 1, heuristic(pos, end))
                if not any(n.position == new_node.position and n.f <= new_node.f for n in open_list):
                    heapq.heappush(open_list, new_node)
return None

# Example usage
grid = [
[0, 1, 0, 0, 0, 0],
[0, 1, 0, 1, 1, 0],
[0, 0, 0, 0, 1, 0],
[0, 1, 1, 0, 1, 0],
[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0],
]

start, end = (0, 0), (5, 5)


path = astar_search(grid, start, end)
print("Path found:", path) if path else print("No path found.")

output:

3. Write a program to solve the 4-Queens Problem.


def is_safe(board, row, col):
return all(board[row][i] == 0 for i in range(col)) and \
all(board[i][j] == 0 for i, j in zip(range(row, -1, -1), range(col, -1, -1))) and \
all(board[i][j] == 0 for i, j in zip(range(row, len(board)), range(col, -1, -1)))

def solve_n_queens_util(board, col):


if col >= len(board): return True
for i in range(len(board)):
if is_safe(board, i, col):
board[i][col] = 1
if solve_n_queens_util(board, col + 1): return True
board[i][col] = 0
return False

def solve_n_queens(n):
board = [[0] * n for _ in range(n)]
if solve_n_queens_util(board, 0): print_board(board)
else: print("Solution does not exist")

def print_board(board):
print("\n".join(" ".join("Q" if col else "." for col in row) for row in board) + "\n")

# Solve the 4-Queens problem


solve_n_queens(4)

output:

4. Write a program to implement Minimax search for 2 Player games.


import math

def print_board(board):
print("\n".join(" | ".join(row) for row in board) + "\n" + "-" * 9)

def check_winner(board, player):


return any(all(s == player for s in row) for row in board) or \
any(all(board[i][j] == player for i in range(3)) for j in range(3)) or \
all(board[i][i] == player for i in range(3)) or \
all(board[i][2 - i] == player for i in range(3))

def minimax(board, is_max):


if check_winner(board, 'X'): return 1
if check_winner(board, 'O'): return -1
if all(cell in ['X', 'O'] for row in board for cell in row): return 0
    best_score = -math.inf if is_max else math.inf
for i in range(3):
for j in range(3):
if board[i][j] == ' ':
board[i][j] = 'X' if is_max else 'O'
score = minimax(board, not is_max)
board[i][j] = ' '
best_score = max(best_score, score) if is_max else min(best_score, score)
return best_score

def best_move(board):
    best_score, move = -math.inf, None
for i in range(3):
for j in range(3):
if board[i][j] == ' ':
board[i][j] = 'X'
score = minimax(board, False)
board[i][j] = ' '
if score > best_score:
best_score, move = score, (i, j)
return move

def play_game():
board = [[' ']*3 for _ in range(3)]
print("Initial Board:")
print_board(board)

while True:
move = best_move(board)
if move:
board[move[0]][move[1]] = 'X'
print("\nAI (X) makes a move:")
print_board(board)
if check_winner(board, 'X'):
print("X wins!")
break
if all(cell in ['X', 'O'] for row in board for cell in row):
print("It's a draw!")
break

try:
row, col = map(int, input("Enter your move (row col): ").split())
if board[row][col] == ' ':
board[row][col] = 'O'
print("\nYou (O) make a move:")
print_board(board)
if check_winner(board, 'O'):
print("O wins!")
break
else:
print("Cell occupied! Try again.")
except (ValueError, IndexError):
print("Invalid input! Enter row and column as 0, 1, or 2.")

# Run the game


play_game()
output:

5. Using the OpenCV Python library, capture an image and perform the following
image processing operations:
a) Image Resizing
b) Blurring of Image
c) Gray scaling of image
d) Scaling and rotation
e) Edge Detection
f) Segmentation using thresholding
g) Background subtraction
h) Morphological operations
import cv2
import numpy as np
import matplotlib.pyplot as plt

def display_image(title, image, cmap=None):


    plt.figure(figsize=(6, 6))
    plt.title(title)
    plt.axis('off')
    plt.imshow(image if cmap else cv2.cvtColor(image, cv2.COLOR_BGR2RGB), cmap=cmap)
    plt.show()

# Load image
image_path = 'image.jpg'  # placeholder: replace with the path to your captured image
image = cv2.imread(image_path)
if image is None:
print("Failed to load image.")
else:
    transformations = {
        "Resized Image": cv2.resize(image, (300, 300)),
        "Blurred Image": cv2.GaussianBlur(image, (15, 15), 0),
        "Gray Scale Image": cv2.cvtColor(image, cv2.COLOR_BGR2GRAY),
        "Scaled Image": cv2.resize(image, None, fx=0.5, fy=0.5),
        "Rotated Image": cv2.warpAffine(image, cv2.getRotationMatrix2D((image.shape[1] // 2, image.shape[0] // 2), 45, 1.0), (image.shape[1], image.shape[0])),
        "Edge Detection": cv2.Canny(image, 100, 200),
        "Thresholded Image": cv2.threshold(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 128, 255, cv2.THRESH_BINARY)[1],
        # Note: MOG2 is designed for video streams; on a single still image it largely marks everything as foreground
        "Background Subtraction": cv2.createBackgroundSubtractorMOG2().apply(image)
}

# Morphological operations on thresholded image


    kernel = np.ones((5, 5), np.uint8)
    threshold_image = transformations["Thresholded Image"]
    transformations.update({
        "Dilated Image": cv2.dilate(threshold_image, kernel, iterations=1),
        "Eroded Image": cv2.erode(threshold_image, kernel, iterations=1)
})

# Display all transformations


    for title, img in transformations.items():
        display_image(title, img, cmap='gray' if "Gray" in title or "Edge" in title or "Threshold" in title else None)

output:
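Because MOG2 background subtraction is really intended for a sequence of frames rather than a single still image, an optional sketch (not part of the original listing, and assuming a webcam at index 0) that applies it to a live feed might look like this:

import cv2

# Optional sketch: background subtraction over a live webcam feed (press 'q' to quit)
cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    mask = subtractor.apply(frame)  # foreground mask learned across successive frames
    cv2.imshow("Foreground Mask", mask)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()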
6. Write a program with two menu options 1) Capture Image and 2) Recognize
Image. This program should capture pictures of five students and save them.
The program should identify/recognize the student and display the student's
name.
import cv2
import numpy as np
import os
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array, load_img
from tensorflow.keras.models import Model
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
import pickle
import time # For delays if needed

# Initialize the MobileNetV2 model for feature extraction


base_model = MobileNetV2(weights="imagenet", include_top=False,
input_shape=(224, 224, 3))
model = Model(inputs=base_model.input, outputs=base_model.output)

# Directory for storing captured student images


image_dir = "student_images"
if not os.path.exists(image_dir):
    os.makedirs(image_dir)

# List of students
students_list = ["Aaa", "Bbb", "Ccc", "Ddd", "Eee"]

# Function to capture and save images of students


def capture_images(students_list, images_per_student=1):
for student in students_list:
for i in range(images_per_student):
            cap = cv2.VideoCapture(0)
            if not cap.isOpened():
print("Could not open webcam.")
return
print(f"Capturing image for {student}...")

            ret, frame = cap.read()


if not ret:
print("Failed to capture image.")
break
# Save captured image
            save_path = os.path.join(image_dir, f"{student}_{i + 1}.jpg")
            cv2.imwrite(save_path, frame)
print(f"Saved {student}'s image at {save_path}")
# Add a small delay
            time.sleep(2)
            cap.release()

# Function to extract features using MobileNetV2


def extract_features(img_path):
img = load_img(img_path, target_size=(224, 224))
img_array = img_to_array(img)
img_array = np.expand_dims(img_array, axis=0)
img_array = preprocess_input(img_array)
    features = model.predict(img_array)
    return features.flatten()

# Function to train and save the SVM model


def train_recognizer(students_list):
features = []
labels = []
for student in students_list:
        student_images = [os.path.join(image_dir, f) for f in os.listdir(image_dir) if f.startswith(student)]
        for img_path in student_images:
            features.append(extract_features(img_path))
            labels.append(student)
le = LabelEncoder()
labels_enc = le.fit_transform(labels)
clf = SVC(kernel="linear", probability=True)
    clf.fit(features, labels_enc)
# Save the model and label encoder
with open("student_recognizer.pkl", "wb") as f:
        pickle.dump((clf, le), f)
print("Model trained and saved successfully.")
# Function to recognize a student from an input image
def recognize_student():
# Load the trained SVM model and label encoder
with open("student_recognizer.pkl", "rb") as f:
        clf, le = pickle.load(f)
# Capture an image for recognition
    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
print("Could not open webcam.")
return
print("Capturing image for recognition...")
    ret, frame = cap.read()
if not ret:
print("Failed to capture image.")
return
# Save the captured frame temporarily
    temp_path = "temp.jpg"  # placeholder name for the temporary capture
    cv2.imwrite(temp_path, frame)
    print(f"Image for recognition saved temporarily as {temp_path}")
    cap.release()
# Extract features and predict
features = extract_features(temp_path)
    os.remove(temp_path)  # Clean up temp file after extraction

# Predict the student’s identity


    prediction = clf.predict([features])[0]
student_name = le.inverse_transform([prediction])[0]
print(f"Recognized Student: {student_name}")

# Menu options
def menu():
print("1. Capture Images of Students")
print("2. Recognize Student")
print("3. Exit")
return int(input("Choose an option: "))

# Main program
def main():
while True:
choice = menu()
if choice == 1:
capture_images(students_list)
train_recognizer(students_list)
elif choice == 2:
recognize_student()
elif choice == 3:
print("Exiting the program.")
break
else:
print("Invalid option. Please try again.")

if __name__ == "__main__":
main()
output:

Using Keras or any standard dataset, write programs for the following Machine
Learning tasks:
7. Use the Decision Tree classifier to classify the dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

# Load and preprocess the dataset


df = pd.get_dummies(pd.read_csv('Cleaned_Students_Performance.csv'), drop_first=True)
X, y = df.iloc[:, :-1], df.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train and evaluate the Decision Tree Classifier


clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, y_pred)}\n\nClassification Report:\n{classification_report(y_test, y_pred)}")

output:
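Optionally (not part of the original listing), the fitted tree can be visualised with scikit-learn's plot_tree, which assumes the `clf` object trained above:

import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

# Visualise the fitted decision tree; max_depth limits the drawing for readability
plt.figure(figsize=(12, 6))
plot_tree(clf, filled=True, max_depth=2)
plt.show()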

8. Use the Naïve Bayes classifier to classify the dataset.


import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, classification_report

# Load, preprocess, and split the dataset


df = pd.get_dummies(pd.read_csv('Cleaned_Students_Performance.csv'), drop_first=True)
X, y = df.iloc[:, :-1], df.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train and evaluate the Naïve Bayes Classifier


nb_classifier = GaussianNB().fit(X_train, y_train)
y_pred = nb_classifier.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, y_pred)}\n\nClassification Report:\n{classification_report(y_test, y_pred)}")

output:
9. Implement the K-Means clustering algorithm.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

data, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=42)

# Optional: Scale the data for better clustering performance


scaler = StandardScaler()
data = scaler.fit_transform(data)

# Initialize and fit K-Means clustering


k = 4 # Number of clusters
kmeans = KMeans(n_clusters=k, random_state=42)
kmeans.fit(data)

# Retrieve cluster labels and centroids


labels = kmeans.labels_
centroids = kmeans.cluster_centers_

# Plot the clusters and centroids


plt.scatter(data[:, 0], data[:, 1], c=labels, s=40, cmap='viridis')
plt.scatter(centroids[:, 0], centroids[:, 1], c='red', s=200, alpha=0.75, marker='X')
plt.title("K-Means Clustering")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()

output:
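As an optional aside (not part of the original listing), an elbow plot of the within-cluster sum of squares can be used to justify the choice k = 4, which matches the four centres used in make_blobs; the sketch below assumes the `data` array from the program above:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Fit K-Means for a range of k values and record the inertia (within-cluster sum of squares)
inertias = []
for k in range(1, 10):
    inertias.append(KMeans(n_clusters=k, random_state=42, n_init=10).fit(data).inertia_)

# The "elbow" in this curve suggests a reasonable number of clusters
plt.plot(range(1, 10), inertias, marker='o')
plt.xlabel("Number of clusters k")
plt.ylabel("Inertia")
plt.title("Elbow Method")
plt.show()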

10. Using Python NLTK, perform the following Natural Language Processing (NLP)
tasks for any textual content.
a) Tokenizing
b) Filtering Stop Words
c) Stemming
d) Part of Speech tagging
e) Chunking
f) Named Entity Recognition (NER)
import nltk
nltk.download('punkt_tab')

import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk import pos_tag, ne_chunk

# Sample text
text = "The quick brown fox jumps over the lazy dog. Natural Language Processing is fun!"

# Tokenization, Stop Word Removal, Stemming, POS Tagging, and NER


nltk.download('punkt')
nltk.download('stopwords')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')
stop_words, stemmer = set(stopwords.words('english')), PorterStemmer()
tokens = word_tokenize(text)
filtered_tokens = [stemmer.stem(word) for word in tokens if word.lower() not in stop_words]
pos_tags = pos_tag(filtered_tokens)
chunked_text = ne_chunk(pos_tags)

# Outputs
print("Tokens:", tokens)
print("Filtered tokens:", filtered_tokens)
print("Part-of-Speech tags:", pos_tags)
print("Named Entities:", [(chunk.label(), ' '.join(c[0] for c in chunk)) for chunk in chunked_text if hasattr(chunk, 'label')])

output:
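The listing above covers chunking (task e) only implicitly through ne_chunk. A minimal optional sketch (not part of the original listing) of explicit noun-phrase chunking with nltk.RegexpParser, assuming the same kind of sample text, could look like this:

import nltk
from nltk import pos_tag, word_tokenize

nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

text = "The quick brown fox jumps over the lazy dog."
tagged = pos_tag(word_tokenize(text))

# Grammar: a noun phrase (NP) is an optional determiner, any adjectives, then a noun
grammar = "NP: {<DT>?<JJ>*<NN.*>}"
chunker = nltk.RegexpParser(grammar)
tree = chunker.parse(tagged)

# Print the noun-phrase chunks found by the grammar
for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
    print(" ".join(word for word, tag in subtree.leaves()))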
11. Write a program that uses neural networks for classification of the Iris
dataset using Keras.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Assuming X_train, y_train, X_test, y_test are already loaded from the notebook
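# A minimal sketch of that loading step (an assumption, not part of the original
# notebook): the Iris data can be taken from scikit-learn and split as follows.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)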

# Create a simple neural network model


model = Sequential([
Dense(8, activation='relu', input_dim=4),
Dense(3, activation='softmax')
])

# Compile the model


model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model


model.fit(X_train, y_train, epochs=100, batch_size=10)

# Evaluate the model


test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy:', test_acc)

# Make predictions
predictions = model.predict(X_test)
predicted_classes = [tf.argmax(pred).numpy() for pred in predictions]
print('Predicted classes:', predicted_classes)
output:
