Python Basic Constructs
1. Programming Constructs in Python
These are the fundamental building blocks of a Python program:
a. Variables and Data Types
Python supports different data types:
o Integers (int)
o Floating-point numbers (float)
o Strings (str)
o Boolean (bool)
o Lists (list)
o Tuples (tuple)
o Sets (set)
o Dictionaries (dict)
Example:
x = 10 # Integer
y = 3.14 # Float
name = "John" # String
is_valid = True # Boolean
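The collection types listed above (list, tuple, set, dict) can be sketched with their literal syntax:

```python
# Literal syntax for Python's built-in collection types
numbers = [1, 2, 3]              # list: ordered, mutable
point = (4, 5)                   # tuple: ordered, immutable
unique = {1, 2, 2, 3}            # set: unordered, no duplicates
ages = {"Alice": 20, "Bob": 22}  # dict: key-value pairs

print(type(numbers).__name__)  # list
print(len(unique))             # 3 (the duplicate 2 is dropped)
print(ages["Bob"])             # 22
```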
b. Operators
Arithmetic Operators: +, -, *, /, //, %, **
Comparison Operators: ==, !=, >, <, >=, <=
Logical Operators: and, or, not
Assignment Operators: =, +=, -=, *=, /=
Bitwise Operators: &, |, ^, <<, >>
Example:
a = 5
b = 2
print(a + b)   # 7
print(a ** b)  # 25 (Exponentiation)
print(a > b and b < 10)  # True
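The bitwise operators listed above are not covered by that example; a short illustrative sketch using the same values:

```python
a = 5  # binary 0b101
b = 2  # binary 0b010

print(a & b)   # 0  (AND: no bit is set in both)
print(a | b)   # 7  (OR: 0b111)
print(a ^ b)   # 7  (XOR: 0b111)
print(a << 1)  # 10 (left shift: multiply by 2)
print(a >> 1)  # 2  (right shift: floor-divide by 2)
```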
2. Control Structures in Python
Control structures determine the flow of execution in a Python
program.
a. Conditional Statements
Used to make decisions.
if-elif-else Statement
x = 10
if x > 0:
    print("Positive number")
elif x == 0:
    print("Zero")
else:
    print("Negative number")
b. Looping Structures
Loops help in executing a block of code multiple times.
for Loop: Used to iterate over a sequence (list, tuple, string, range).
for i in range(5):  # 0 to 4
    print(i)
while Loop: Executes as long as the condition is True.
x = 5
while x > 0:
    print(x)
    x -= 1  # Decrement x
Loop Control Statements
break: Exits the loop
continue: Skips the current iteration
pass: Placeholder that does nothing
Example:
for i in range(5):
    if i == 3:
        break  # Stops loop when i == 3
    print(i)

for i in range(5):
    if i == 2:
        continue  # Skips when i == 2
    print(i)
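pass, by contrast, changes nothing at all about control flow; a minimal sketch:

```python
for i in range(3):
    if i == 1:
        pass  # Placeholder that does nothing; the loop continues normally
    print(i)  # Prints 0, 1, 2
```

pass is mainly useful as a stand-in body for a loop, function, or class you have not written yet.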
c. Function Definition and Invocation
Functions allow code reuse.
Defining a Function
def greet(name):
    return f"Hello, {name}!"

print(greet("Alice"))
Lambda Function (Anonymous Function)
square = lambda x: x * x
print(square(4)) # 16
d. Exception Handling
Used to handle errors during execution.
try:
    x = 10 / 0  # Division by zero error
except ZeroDivisionError:
    print("Cannot divide by zero")
finally:
    print("Execution completed")
3. Data Structures and Iteration
a. Lists
numbers = [1, 2, 3, 4]
numbers.append(5)  # Add an element to the end of the list
print(numbers)     # [1, 2, 3, 4, 5]
b. Dictionaries
student = {"name": "Alice", "age": 20}
print(student["name"])
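A few other common dictionary operations, sketched with the same student example:

```python
student = {"name": "Alice", "age": 20}

student["grade"] = "A"  # Add a new key-value pair
student["age"] = 21     # Update an existing value

for key, value in student.items():  # Iterate over key-value pairs
    print(key, "->", value)

print(student.get("email", "N/A"))  # Safe lookup with a default: N/A
```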
c. List Comprehensions
squares = [x**2 for x in range(5)]
print(squares) # [0, 1, 4, 9, 16]
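Comprehensions can also filter with an if condition, and the same syntax extends to dictionaries; a short sketch:

```python
# Filtering inside a comprehension
evens = [x for x in range(10) if x % 2 == 0]
print(evens)  # [0, 2, 4, 6, 8]

# The same idea works for dictionaries
square_map = {x: x**2 for x in range(4)}
print(square_map)  # {0: 0, 1: 1, 2: 4, 3: 9}
```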
4. Object-Oriented Programming (OOP)
Python supports Classes and Objects.
Class and Object Example
class Car:
    def __init__(self, brand, model):
        self.brand = brand
        self.model = model

    def display(self):
        print(f"Car: {self.brand} {self.model}")

my_car = Car("Toyota", "Corolla")
my_car.display()  # Car: Toyota Corolla
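Inheritance lets one class reuse and extend another. A sketch that extends the Car example with a hypothetical ElectricCar subclass (the subclass and its battery_kwh attribute are made up for illustration):

```python
class Car:
    def __init__(self, brand, model):
        self.brand = brand
        self.model = model

    def display(self):
        print(f"Car: {self.brand} {self.model}")

class ElectricCar(Car):  # Inherits from Car
    def __init__(self, brand, model, battery_kwh):
        super().__init__(brand, model)  # Reuse the parent constructor
        self.battery_kwh = battery_kwh

    def display(self):  # Override the parent method
        print(f"Electric Car: {self.brand} {self.model} ({self.battery_kwh} kWh)")

my_ev = ElectricCar("Tesla", "Model 3", 60)
my_ev.display()  # Electric Car: Tesla Model 3 (60 kWh)
```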
LABORATORY ACTIVITIES
Required Libraries for All Activities
pip install nltk spacy scikit-learn transformers torch textblob
Breakdown of Libraries & Their Uses
Library        Purpose
nltk           Tokenization, stopword removal, stemming, lemmatization
spacy          Part-of-Speech (POS) tagging, Named Entity Recognition (NER)
scikit-learn   Machine learning models (Naïve Bayes for text classification)
transformers   AI-powered chatbot using DialoGPT
torch          PyTorch backend for deep learning (used by transformers)
textblob       Sentiment analysis
Download nltk resources (for Tokenization, Stopwords, WordNet)
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
Download spaCy model (for POS tagging & NER)
python -m spacy download en_core_web_sm
Week 1: Introduction to NLP & Text Processing
Topics: Tokenization, Stopword Removal, Stemming, Lemmatization
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

text = "Natural Language Processing is amazing! AI is the future."

# Tokenization
tokens = word_tokenize(text)

# Stopword Removal (also drops punctuation tokens so the output matches below)
stop_words = set(stopwords.words('english'))
filtered_tokens = [word for word in tokens
                   if word.isalpha() and word.lower() not in stop_words]

# Stemming & Lemmatization
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
stemmed_words = [stemmer.stem(word) for word in filtered_tokens]
lemmatized_words = [lemmatizer.lemmatize(word) for word in filtered_tokens]
print("Tokens:", tokens)
print("Filtered Tokens:", filtered_tokens)
print("Stemmed Words:", stemmed_words)
print("Lemmatized Words:", lemmatized_words)
Expected Output:
Tokens: ['Natural', 'Language', 'Processing', 'is', 'amazing', '!', 'AI', 'is', 'the',
'future', '.']
Filtered Tokens: ['Natural', 'Language', 'Processing', 'amazing', 'AI', 'future']
Stemmed Words: ['natur', 'languag', 'process', 'amaz', 'ai', 'futur']
Lemmatized Words: ['Natural', 'Language', 'Processing', 'amazing', 'AI',
'future']
Week 2: Text Preprocessing & Regular Expressions
Topics: Text Cleaning, Regular Expressions
import re
import re

text = "Contact me at example@email.com or +123-456-7890. Visit https://example.com"

# Removing emails, URLs, phone numbers, and special characters
clean_text = re.sub(r'\S+@\S+', '', text)                    # Remove emails
clean_text = re.sub(r'https?://\S+', '', clean_text)         # Remove URLs
clean_text = re.sub(r'\+?\d[\d -]{8,12}\d', '', clean_text)  # Remove phone numbers
clean_text = re.sub(r'[^a-zA-Z\s]', '', clean_text)          # Remove special characters

print(clean_text)
Expected Output (the stray period is stripped by the special-character step):
Contact me at  or  Visit
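Regular expressions can extract matches as well as remove them. A short sketch using re.findall (the sample addresses are made up):

```python
import re

text = "Contact me at alice@example.com or bob@example.org today."

# re.findall returns every non-overlapping match as a list
emails = re.findall(r'\S+@\S+\.\w+', text)
print(emails)  # ['alice@example.com', 'bob@example.org']
```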
Week 3: POS Tagging & Named Entity Recognition (NER)
Topics: Part-of-Speech Tagging, Named Entity Recognition (NER)
import spacy

nlp = spacy.load("en_core_web_sm")
text = "Barack Obama was the 44th President of the United States."
doc = nlp(text)

# POS Tagging
for token in doc:
    print(f"{token.text} -> {token.pos_}")

# Named Entity Recognition (NER)
for ent in doc.ents:
    print(f"{ent.text} -> {ent.label_}")
Expected Output:
Barack -> PROPN
Obama -> PROPN
was -> AUX
the -> DET
44th -> ADJ
President -> NOUN
of -> ADP
the -> DET
United -> PROPN
States -> PROPN
. -> PUNCT
Barack Obama -> PERSON
44th -> ORDINAL
United States -> GPE
Week 4: Text Similarity & Word Embeddings
Topics: Cosine Similarity, Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
text1 = "Artificial Intelligence is the future"
text2 = "AI will change the world"
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([text1, text2])
similarity_score = cosine_similarity(vectors[0], vectors[1])
print("Cosine Similarity:", similarity_score[0][0])
Expected Output:
Cosine Similarity: approximately 0.112 (the score is low because the two sentences share only the word "the")
Week 5: Sentiment Analysis
Topics: Rule-based and ML-based Sentiment Analysis
from textblob import TextBlob

text = "I love this product! It's amazing."
sentiment = TextBlob(text).sentiment
print("Polarity:", sentiment.polarity)
print("Subjectivity:", sentiment.subjectivity)
Expected Output (approximate; exact values depend on the TextBlob version):
Polarity: 0.85
Subjectivity: 0.75
Week 6: Text Classification Using Machine Learning
Topics: Spam Detection using Naïve Bayes
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
texts = ["Buy now and get 50% off", "Meeting at 5 PM"]
labels = [1, 0] # 1 = Spam, 0 = Not Spam
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
classifier = MultinomialNB()
classifier.fit(X, labels)

# Test on a message that shares words with the spam example; a message with
# no overlap with the training vocabulary cannot be classified reliably
test_text = ["Buy now: limited time offer!"]
X_test = vectorizer.transform(test_text)
print("Prediction:", classifier.predict(X_test))
Expected Output:
Prediction: [1] # (Spam)
Week 10: Chatbot Development (AI-powered)
Topics: Transformer-based Chatbot
from transformers import pipeline
chatbot = pipeline("text-generation", model="microsoft/DialoGPT-medium")
response = chatbot("Hello, how are you?", max_length=50)
print(response[0]['generated_text'])
Expected Output (illustrative; generated text varies from run to run):
Hello! I'm doing great. How can I assist you today?
Week 12: Final NLP Project - AI Chatbot
Goal: Build a chatbot with AI capabilities
Requirements:
Use a pre-trained model (e.g., GPT, DialoGPT)
Store chat history
Implement user intent classification
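The intent-classification requirement can be sketched with scikit-learn, which is already among the course's required libraries (the intent labels and training phrases below are made up for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: a few example phrases per intent
train_phrases = [
    "hello there", "hi, how are you",         # greeting
    "tell me a joke", "say something funny",  # joke
    "goodbye", "see you later",               # farewell
]
train_intents = ["greeting", "greeting", "joke", "joke", "farewell", "farewell"]

# TF-IDF features + logistic regression in one pipeline
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(train_phrases, train_intents)

print(intent_model.predict(["hi there, hello"])[0])  # greeting
```

In the chatbot, the predicted intent could route greetings and farewells to canned replies and send everything else to the DialoGPT model.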
Final NLP Project – AI Chatbot with Memory
Features:
AI-based chatbot using DialoGPT
Memory-based conversation (Maintains context)
Dynamic user inputs & responses
Handles multiple conversation turns
Install Required Libraries
pip install transformers torch
Full Python Code for AI Chatbot
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load pre-trained DialoGPT model
model_name = "microsoft/DialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Initialize chat history
chat_history = []

def chatbot_response(user_input):
    global chat_history
    # Tokenize user input, ending with the end-of-sequence token
    input_ids = tokenizer.encode(user_input + tokenizer.eos_token,
                                 return_tensors="pt")
    # Append new input to chat history
    chat_history.append(input_ids)
    # Concatenate chat history into one input sequence
    bot_input_ids = torch.cat(chat_history, dim=-1)
    # Generate response
    output = model.generate(bot_input_ids, max_length=1000,
                            pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens
    bot_response = tokenizer.decode(output[:, bot_input_ids.shape[-1]:][0],
                                    skip_special_tokens=True)
    # Append bot response to history
    chat_history.append(tokenizer.encode(bot_response + tokenizer.eos_token,
                                         return_tensors="pt"))
    return bot_response

# Interactive chat loop
print("🤖 AI Chatbot: Hello! How can I assist you today?")
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit", "bye"]:
        print("🤖 AI Chatbot: Goodbye! Have a great day! 😊")
        break
    response = chatbot_response(user_input)
    print(f"🤖 AI Chatbot: {response}")
Expected Output (an illustrative transcript; actual model responses will vary):
AI Chatbot: Hello! How can I assist you today?
You: Hi, how are you?
AI Chatbot: I'm just a bot, but I'm doing great! How about you?
You: What can you do?
AI Chatbot: I can chat with you, answer questions, and make conversations
interesting!
You: Who is the president of the USA?
AI Chatbot: As of my last update, the President of the USA is Joe Biden.
You: Tell me a joke.
AI Chatbot: Sure! Why did the scarecrow win an award? Because he was
outstanding in his field! 😆
You: Bye
AI Chatbot: Goodbye! Have a great day! 😊
Key Features:
✔ Maintains context – remembers previous messages
✔ Engaging conversation – answers in a human-like way
✔ Dynamic responses – adapts to different questions
How to Use:
1. Run the script
2. Type your message
3. Chatbot responds intelligently
4. Type "exit" to end the chat