Competitive Programming is a mental sport in which you solve a given problem in code under specified constraints.
What is Competitive Programming?
"Programming... Competitive Programming... it teaches you how to think."
If you are a programmer, you have probably felt the deeper meaning of this line, famously quoted by Steve Jobs, and you may have experienced that even after shutting down your computer you keep thinking about the code you have written for your project. Once you enter programming, you don't just learn how to code; you also learn the "art of thinking": breaking your code into smaller chunks and then using your logic-based creativity to solve a problem from different angles. Programming is fun, programming is an exercise for your brain, programming is a mental sport, and when this sport is held on the internet with sport programmers as contestants, it is called Competitive Programming.
If you are new to competitive programming, you will need a structured learning path that covers the important tips and tricks as well as the necessary mathematics. Below are some steps, approaches, and tips to prepare yourself for competitive programming.
Keep in mind that you need to be proficient in the following:
● Any programming language syntax (choose any, but C/C++/Java are highly recommended).
● Time and space complexity algorithm analysis.
● Ability to think about a Brute Force Solution.
● Good practice of all Data Structures like Array, List, Stack, Queue,
Tree, Graph.
Bit Manipulation for Competitive Programming
Bit manipulation is a technique in competitive programming that
involves the manipulation of individual bits in binary representations
of numbers. It is a valuable technique in competitive programming
because it allows you to solve problems efficiently, often reducing
time complexity and memory usage.
Bitwise Operators:
Bitwise operators are used to perform operations on individual bits in the binary representations of numbers. Some common bitwise operators used in competitive programming are:
● Bitwise AND (&): It is a bitwise operator that takes two numbers as
operands and performs logical AND on corresponding bits of two
numbers. When both bits in the compared position are 1, the bit in
the resulting binary representation is 1, otherwise, the result is 0.
● Bitwise OR (|): This bitwise operator takes two numbers as
operands and performs a logical OR operation on their
corresponding bits. When at least one of the bits in the compared
position is 1, the bit in the resulting binary representation is 1,
otherwise, the result is 0.
● Bitwise XOR (^): The bitwise XOR operator also takes two numbers
as operands and performs an exclusive OR operation on their
corresponding bits. When exactly one of the bits in the compared
position is 1, the bit in the resulting binary representation is 1,
otherwise, the result is 0.
● Bitwise NOT (~): The bitwise NOT is a unary operator that operates on a
single number and flips (inverts) all its bits. It changes 0s to 1s and
1s to 0s, effectively creating the one's complement of the input
number.
● Left Shift (<<): The left shift operator takes two operands, the
number to be shifted and the number of places to move it to the left.
It shifts the bits of the first operand to the left by the number of places
specified in the second operand. This is equivalent to multiplying the
number by 2 raised to the power of the shift count. For example: 5 <<
2 = 20; the binary representation of 5 (0101) is shifted left by 2
positions, resulting in 20 (10100) in decimal.
● Right Shift (>>): The right shift operator also takes two operands,
the number to be shifted and the number of places to move it to the
right. It shifts the bits of the first operand to the right by the number of
places specified in the second operand. This is equivalent to dividing
a number by 2 raised to the power of the shift count (integer
division). For example: 20 >> 2 = 5, where the binary representation
of 20 (10100) is shifted right by 2 positions, resulting in 5 (00101) in
decimal.
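As a quick illustration of these operators (the variable names and values below are chosen purely for demonstration), the following snippet prints the results discussed above:

#include <iostream>
using namespace std;

int main() {
    int a = 5, b = 3;            // 0101 and 0011 in binary
    cout << (a & b) << "\n";     // AND  -> 1  (0001)
    cout << (a | b) << "\n";     // OR   -> 7  (0111)
    cout << (a ^ b) << "\n";     // XOR  -> 6  (0110)
    cout << (~a) << "\n";        // NOT  -> -6 (two's complement of 0101)
    cout << (5 << 2) << "\n";    // left shift  -> 20
    cout << (20 >> 2) << "\n";   // right shift -> 5
    return 0;
}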
Useful Bitwise Tricks for Competitive Programming:
1. Set a bit of number:
This can be done by left-shifting the value 1 by 'pos' positions (1<<
pos) and performing a bitwise OR operation with number n. This
operation effectively turns on the bit at the specified position.
// n = number
// pos = the position at which we want to set the bit
void set(int &n, int pos) {
    n |= (1 << pos);
}
2. Unset a Bit of Number:
This can be done by left-shifting the value 1 by pos positions (1 << pos), applying the bitwise NOT
operator ‘~’ to invert this mask, and then performing a bitwise AND with the
number n, which clears the bit at the desired position of n.

#include <iostream>

// Unset (clear) a bit at position pos in number n
void unset(int &n, int pos) {
    n &= ~(1 << pos);
}

int main() {
    int n = 15;   // 1111 in binary
    int pos = 1;
    unset(n, pos);                 // changes n to 13, which is 1101 in binary
    std::cout << n << std::endl;   // output: 13
    return 0;
}
3. Flip a Bit of Number:
Use the bitwise XOR (^) operator to toggle (flip) the bit at the given position. If the bit is 0, it becomes 1,
and if it's 1, it becomes 0.
// Flip (toggle) a bit at position pos in number n
void flip(int &n, int pos) {
    n ^= (1 << pos);
}
4. Checking if Bit at nth Position is Set or Unset:
This can be done by performing a bitwise AND operation with a mask having only that bit set. If the
result is non-zero, the bit is set; otherwise, it's unset.
// Check if the bit at position pos in number n is set (1) or unset (0)
bool isBitSet(int n, int pos) {
    return ((n & (1 << pos)) != 0);
}
5. Check Whether n is a Power of Two:
A power of two is a number with only one bit set in its binary representation, while the number just
before it has that bit unset and all the lower bits set. Consequently, for a power of two, the bitwise AND
of the number and its predecessor is always 0.
// Check if n is a power of two (note: this expression also evaluates to true for n = 0)
bool isPowerOfTwo(int n) {
    return ((n & (n - 1)) == 0);
}
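A small driver tying these helpers together (it assumes the functions above are defined in the same file; the values are chosen only for illustration):

#include <iostream>

int main() {
    int n = 10;            // 1010 in binary
    set(n, 0);             // 1011 -> 11
    std::cout << n << "\n";
    unset(n, 1);           // 1001 -> 9
    std::cout << n << "\n";
    flip(n, 3);            // 0001 -> 1
    std::cout << n << "\n";
    std::cout << isBitSet(9, 0) << "\n";     // 1 (bit 0 of 1001 is set)
    std::cout << isPowerOfTwo(16) << "\n";   // 1
    return 0;
}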
Prefix Sum and Bit Manipulation Technique:
Suppose you are given an array a of n numbers and q queries, each of the form (l, r). The task is to compute the Bitwise AND of the numbers from index l to r, i.e., (a[l] & a[l+1] & ... & a[r-1] & a[r]).
A simple approach is, for each query, to traverse from index l to r and compute the Bitwise AND. With this we can answer each query in O(n) time in the worst case.
To answer each query in constant time, the prefix sum technique is useful.
1. How to compute Bitwise AND for a range using Prefix Sum
Technique:
● Storing Bit Information: To start, we want to determine whether a
specific bit (let's call it the "j-th bit") in the binary representation of a
number at a given index (let's call it "i") is set (1) or unset (0). We
accomplish this by creating a 2D array called "temp," with
dimensions "n x 32" (assuming 32-bit integers), where "n" is the
number of elements in our array. Each cell "temp[i][j]" stores this
information for the i-th number's j-th bit.
● Computing Prefix Sums: Next, we calculate prefix sums for each
bit position (from 0 to 31, assuming 32-bit integers) in our "temp"
array. This "prefix sum" array, let's call it "psum," keeps track of the
count of numbers up to a certain index that have their j-th bit set.
● Determining the Bitwise AND for a Range: Now, let's focus on
finding the Bitwise AND of numbers within a specific range, say from
index "l" to "r." To determine whether the j-th bit of the result should
be set to 1, we compare the number of elements with the j-th bit set
in the range [l, r]. This can be done using the prefix sum
array psum: psum[i][j] denotes the number of elements up to index i
that have their j-th bit set, so
psum[r][j] - psum[l-1][j] gives the number of indices from l to r
whose j-th bit is set.
● Setting the Result Bit: If the count of numbers with the j-th bit set in
the range [l, r] is equal to the range size (r - l + 1), it means that all
numbers in that range have their j-th bit set. In this case, we set the
j-th bit of the result to 1. Otherwise, if not all numbers in the range
have the j-th bit set, we set it to 0.
Below is the code for above approach:
#include <iostream>
#include <vector>
using namespace std;

vector<vector<int>> prefixsumBit(vector<int>& nums) {
    int n = nums.size();

    // Step 1: Store bit information in 'temp'
    vector<vector<int>> temp(n + 1, vector<int>(32, 0));
    for (int i = 1; i <= n; ++i) {
        int num = nums[i - 1];
        for (int j = 0; j < 32; ++j) {
            // Check if the j-th bit of nums[i - 1] is set
            if (((1 << j) & num) != 0) {
                temp[i][j] = 1;
            }
        }
    }

    // Step 2: Compute prefix sums
    vector<vector<int>> psum(n + 1, vector<int>(32, 0));
    for (int j = 0; j < 32; ++j) {
        for (int i = 1; i <= n; ++i) {
            // Calculate prefix sum for each bit
            psum[i][j] = psum[i - 1][j] + temp[i][j];
        }
    }
    return psum;
}

int rangeBitwiseAND(vector<vector<int>>& psum, int l, int r) {
    int result = 0;
    for (int j = 0; j < 32; ++j) {
        // Count of elements with j-th bit set in the range [l, r]
        int count = psum[r][j] - psum[l - 1][j];

        // If all elements in the range have the j-th bit set, add it to the result
        if (count == r - l + 1) {
            result = result + (1 << j);
        }
    }
    return result;
}

// Driver code
int main() {
    // Input array
    vector<int> nums = { 13, 11, 2, 3, 6 };

    // Range (1-based indexing)
    int l = 2, r = 4;

    // 2D prefix sum
    vector<vector<int>> psum = prefixsumBit(nums);

    cout << "Bitwise AND of range [2,4] is: " << rangeBitwiseAND(psum, l, r);
    return 0;
}
Output
Bitwise AND of range [2,4] is: 2
Note: When you increase the range for Bitwise AND, the result will never
increase; it will either stay the same or decrease. This monotonic property is useful:
we can binary search on the range endpoint, for example to determine the largest
range whose Bitwise AND is greater than or equal to a given number.
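As a rough sketch of that idea (assuming the prefixsumBit and rangeBitwiseAND functions above, and a hypothetical threshold X), a binary search over the right endpoint could look like this:

// Largest r in [l, n] such that AND of a[l..r] >= X (1-based indices),
// or l - 1 if even the single element a[l] is below X.
int largestRangeWithANDAtLeast(vector<vector<int>>& psum, int n, int l, int X) {
    int lo = l, hi = n, ans = l - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (rangeBitwiseAND(psum, l, mid) >= X) {
            ans = mid;        // AND is still large enough, try a longer range
            lo = mid + 1;
        } else {
            hi = mid - 1;     // AND dropped below X, shrink the range
        }
    }
    return ans;
}

This works because AND over [l, r] is non-increasing as r grows.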
2. Determining the Bitwise OR for a Range:
Bitwise OR can be computed in a similar way. We build the temp and psum arrays exactly as before:
● To determine whether the j-th bit of the result should be set to 1, we
compare the number of elements with the j-th bit set in the range [l,
r].
● Using the prefix sum array psum, we get the count of numbers with
the j-th bit set in the range [l, r] from psum[r][j] - psum[l-1][j].
● If the count of numbers with the j-th bit set in the range [l, r] is
greater than 0, it means at least one number in that range has the
j-th bit set. In this case, we set the j-th bit of the result to 1.
Otherwise, if no numbers in the range have the j-th bit set, we set it
to 0.
Below is the code for above approach:
#include <iostream>
#include <vector>
using namespace std;

vector<vector<int>> prefixsumBit(vector<int>& nums) {
    int n = nums.size();

    // Step 1: Store bit information in 'temp'
    vector<vector<int>> temp(n + 1, vector<int>(32, 0));
    for (int i = 1; i <= n; ++i) {
        int num = nums[i - 1];
        for (int j = 0; j < 32; ++j) {
            // Check if the j-th bit of nums[i - 1] is set
            if ((1 << j) & num) {
                temp[i][j] = 1;
            }
        }
    }

    // Step 2: Compute prefix sums
    vector<vector<int>> psum(n + 1, vector<int>(32, 0));
    for (int j = 0; j < 32; ++j) {
        for (int i = 1; i <= n; ++i) {
            // Calculate prefix sum for each bit
            psum[i][j] = psum[i - 1][j] + temp[i][j];
        }
    }
    return psum;
}

int rangeBitwiseOR(vector<vector<int>>& psum, int l, int r) {
    int result = 0;
    for (int j = 0; j < 32; ++j) {
        // Count of elements with j-th bit set in the range [l, r]
        int count = psum[r][j] - psum[l - 1][j];

        // If at least one element in the range has the j-th bit set,
        // add it to the result
        if (count > 0) {
            result += (1 << j);
        }
    }
    return result;
}

// Driver code
int main() {
    // Input array
    vector<int> nums = {13, 11, 2, 3, 6};

    // Range (1-based indexing)
    int l = 2, r = 4;

    // 2D prefix sum
    vector<vector<int>> psum = prefixsumBit(nums);

    cout << "Bitwise OR of range [2,4] is: " << rangeBitwiseOR(psum, l, r) << endl;
    return 0;
}
Output
Bitwise OR of range [2,4] is: 11
Note: When you increase the range for Bitwise OR, the result will never
decrease; it will either stay the same or increase. Again, this monotonic
property is useful: we can binary search on the range endpoint, for example to
determine the smallest range whose Bitwise OR is greater than or equal to a
given number.
3. Determining the Bitwise XOR for a Range:
Bitwise XOR for a range can be computed in a similar way:
● To determine whether the j-th bit of the result should be set to 1, we
compare the number of elements with the j-th bit set in the range [l,
r].
● Using the prefix sum array psum, we get the count of numbers with
the j-th bit set in the range [l, r] from psum[r][j] - psum[l-1][j].
● If the count of numbers with the j-th bit set in the range [l, r] is odd, it
means that the j-th bit of the result should be set to 1. If the count is
even, the j-th bit of the result should be set to 0.
Below is the implementation of the above approach:
#include <bits/stdc++.h>
using namespace std;

vector<vector<int>> prefixsumBit(vector<int>& nums) {
    int n = nums.size();

    // Step 1: Store bit information in 'temp'
    vector<vector<int>> temp(n + 1, vector<int>(32, 0));
    for (int i = 1; i <= n; ++i) {
        int num = nums[i - 1];
        for (int j = 0; j < 32; ++j) {
            // Check if the j-th bit of nums[i - 1] is set
            if (((1 << j) & num) != 0) {
                temp[i][j] = 1;
            }
        }
    }

    // Step 2: Compute prefix sums
    vector<vector<int>> psum(n + 1, vector<int>(32, 0));
    for (int j = 0; j < 32; ++j) {
        for (int i = 1; i <= n; ++i) {
            // Calculate prefix sum for each bit
            psum[i][j] = psum[i - 1][j] + temp[i][j];
        }
    }
    return psum;
}

int rangeBitwiseXOR(vector<vector<int>>& psum, int l, int r) {
    int result = 0;
    for (int j = 0; j < 32; ++j) {
        // Count of elements with j-th bit set in the range [l, r]
        int count = psum[r][j] - psum[l - 1][j];

        // If the count is odd, the j-th bit of the XOR is set
        if (count % 2 == 1) {
            result = result + (1 << j);
        }
    }
    return result;
}

// Driver code
int main() {
    // Input array
    vector<int> nums = { 13, 11, 2, 3, 6 };

    // Range (1-based indexing)
    int l = 2, r = 4;

    // 2D prefix sum
    vector<vector<int>> psum = prefixsumBit(nums);

    cout << "Bitwise XOR of range [2,4] is: " << rangeBitwiseXOR(psum, l, r);
    return 0;
}
Output
Bitwise XOR of range [2,4] is: 10
How to solve Bit Manipulation Problems?
In most of the problems involving bit manipulation it is better to work bit by bit i.e., break down the
problem into individual bits. Focus on solving the problem for a single bit position before moving on to
the next.
Let's consider a few examples:
Example 1: Given an integer array arr. The task is to find the size of largest subset such that bitwise AND
of all the elements of the subset is greater than 0.
Solution:
Bitwise AND Insight: To start, notice that for a subset's bitwise AND to be greater than zero, there must
be a bit position where all the elements in the subset have that bit set to 1.
Bit by Bit Exploration: We approach this problem bit by bit, examining each of the 32 possible bit
positions in the numbers.
Counting Ones: For each bit position, we count how many elements in the array have that bit set to 1.
Finding the Maximum: Our answer is the largest count of elements that have their bit set for a particular
bit position.
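A minimal sketch of this counting approach (assuming the values fit in 32-bit integers; the function name is purely illustrative):

#include <bits/stdc++.h>
using namespace std;

// Size of the largest subset whose bitwise AND is greater than 0
int largestANDSubset(vector<int>& arr) {
    int best = 0;
    for (int j = 0; j < 32; ++j) {
        int cnt = 0;
        // Count how many elements have the j-th bit set
        for (int x : arr)
            if ((x >> j) & 1)
                cnt++;
        best = max(best, cnt);
    }
    return best;
}

int main() {
    vector<int> arr = {13, 7, 8, 2, 3};
    cout << largestANDSubset(arr);   // 3 (e.g. 13, 7, 3 all have bit 0 set)
    return 0;
}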
Example 2: Given an integer array arr of size n. A graph is formed using these elements. There exists an
edge between index i and index j if i!=j and a[i] AND a[j]>0. The task is to determine whether there exists
a cycle in the graph.
Solution:
Bitwise Analysis: We begin by analyzing each bit position in the binary representation of the numbers
and for each bit determine how many elements have that bit set.
Cycle Detection: For a specific bit position,
If more than two elements in the array have that bit set, those elements are pairwise connected, so a
cycle must exist in the graph.
Otherwise, at most 2 numbers have that particular bit set. It follows that each bit can
contribute at most 1 edge.
Graph Constraints: Importantly, the entire graph won't have more than 32 edges because each number
in the array is represented using 32 bits.
Cycle Detection Algorithm: To ascertain the presence of a cycle in the graph, a straightforward Depth-First
Search (DFS) algorithm can be used.
Introduction to Divide and Conquer Algorithm
Divide and Conquer Algorithm is a problem-solving technique used to solve
problems by dividing the main problem into subproblems, solving them
individually and then merging them to find solution to the original problem.
Divide and Conquer is mainly useful when we divide a problem into
independent subproblems. If we have overlapping subproblems, then we
use Dynamic Programming.
Working of Divide and Conquer Algorithm
Divide and Conquer Algorithm can be divided into three
steps: Divide, Conquer and Merge.
Merge Sort, which is used for sorting, is the classic example of this three-step
process and is used below to illustrate each step.
1. Divide:
● Break down the original problem into smaller subproblems.
● Each subproblem should represent a part of the overall problem.
● The goal is to divide the problem until no further division is possible.
In Merge Sort, we divide the input array in two halves. Please note that the
divide step of Merge Sort is simple, but in Quick Sort, the divide step is critical.
In Quick Sort, we partition the array around a pivot.
2. Conquer:
● Solve each of the smaller subproblems individually.
● If a subproblem is small enough (often referred to as the “base
case”), we solve it directly without further recursion.
● The goal is to find solutions for these subproblems independently.
In Merge Sort, the conquer step is to sort the two halves individually.
3. Merge:
● Combine the sub-problems to get the final solution of the whole
problem.
● Once the smaller subproblems are solved, we recursively combine
their solutions to get the solution of larger problem.
● The goal is to formulate a solution for the original problem by
merging the results from the subproblems.
In Merge Sort, the merge step is to merge two sorted halves to create one
sorted array. Please note that the merge step of Merge Sort is critical, but in
Quick Sort, the merge step does not do anything, as both parts become sorted
in place and the left part has all elements smaller than (or equal to) the right part.
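To make the three steps concrete, here is a minimal, standard merge sort sketch (a textbook version, shown only for illustration):

#include <bits/stdc++.h>
using namespace std;

// Merge: combine two sorted halves arr[l..m] and arr[m+1..r]
void merge(vector<int>& arr, int l, int m, int r) {
    vector<int> tmp;
    int i = l, j = m + 1;
    while (i <= m && j <= r)
        tmp.push_back(arr[i] <= arr[j] ? arr[i++] : arr[j++]);
    while (i <= m) tmp.push_back(arr[i++]);
    while (j <= r) tmp.push_back(arr[j++]);
    for (int k = l; k <= r; ++k)
        arr[k] = tmp[k - l];
}

// Divide: split the range in half; Conquer: sort each half recursively
void mergeSort(vector<int>& arr, int l, int r) {
    if (l >= r) return;              // base case: 0 or 1 element
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);
    merge(arr, l, m, r);             // Merge: combine the two sorted halves
}

int main() {
    vector<int> a = {5, 2, 9, 1, 6};
    mergeSort(a, 0, (int)a.size() - 1);
    for (int x : a) cout << x << " ";   // 1 2 5 6 9
    return 0;
}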
Characteristics of Divide and Conquer Algorithm
Divide and Conquer Algorithm involves breaking down a problem into smaller,
more manageable parts, solving each part individually, and then combining
the solutions to solve the original problem. The characteristics of Divide and
Conquer Algorithm are:
● Dividing the Problem: The first step is to break the problem into
smaller, more manageable subproblems. This division can be done
recursively until the subproblems become simple enough to solve
directly.
● Independence of Subproblems: Each subproblem should be
independent of the others, meaning that solving one subproblem
does not depend on the solution of another. This allows for parallel
processing or concurrent execution of subproblems, which can lead
to efficiency gains.
● Conquering Each Subproblem: Once divided, the subproblems are
solved individually. This may involve applying the same divide and
conquer approach recursively until the subproblems become simple
enough to solve directly, or it may involve applying a different
algorithm or technique.
● Combining Solutions: After solving the subproblems, their solutions
are combined to obtain the solution to the original problem. This
combination step should be relatively efficient and straightforward, as
the solutions to the subproblems should be designed to fit together
seamlessly.
Advantages of Divide and Conquer Algorithm
● Solving difficult problems: Divide and conquer technique is a tool
for solving difficult problems conceptually. e.g. Tower of Hanoi
puzzle. It requires a way of breaking the problem into sub-problems,
and solving all of them as an individual cases and then combining
sub- problems to the original problem.
● Algorithm efficiency: The divide-and-conquer algorithm often helps
in the discovery of efficient algorithms. It is the key to algorithms like
Quick Sort and Merge Sort, and fast Fourier transforms.
● Parallelism: Normally Divide and Conquer algorithms are used in
multi-processor machines having shared-memory systems where the
communication of data between processors does not need to be
planned in advance, because distinct sub-problems can be executed
on different processors.
● Memory access: These algorithms naturally make efficient use of
memory caches, since the subproblems become small enough to be
solved within the cache without repeatedly going to the slower main
memory. Divide-and-conquer algorithms that use the cache efficiently
without depending on its parameters are called cache-oblivious.
Disadvantages of Divide and Conquer Algorithm
● Overhead: The process of dividing the problem into subproblems
and then combining the solutions can require additional time and
resources. This overhead can be significant for problems that are
already relatively small or that have a simple solution.
● Complexity: Dividing a problem into smaller subproblems can
increase the complexity of the overall solution. This is particularly
true when the subproblems are interdependent and must be solved
in a specific order.
● Difficulty of implementation: Some problems are difficult to divide
into smaller subproblems or require a complex algorithm to do so. In
these cases, it can be challenging to implement a divide and conquer
solution.
● Memory limitations: When working with large data sets, the
memory requirements for storing the intermediate results of the
subproblems can become a limiting factor.
Short Notes on Two Pointer and Sliding
Window
Basics of Two Pointer
The two-pointer technique uses two indices that move towards each other or in the same direction to
process data efficiently.
It is commonly used when:
● Data is sorted or the problem has sequential properties.
● We need to find pairs/triplets or process subarrays without restarting from scratch.
It works in O(n) for many problems that would otherwise require O(n²).
Common patterns:
● Opposite Direction: Pointers at start and end, moving toward each other (e.g., 2-Sum, Container with Most Water).
● Same Direction: Both pointers move forward, where one lags behind the other to form a range (e.g., Remove Duplicates from Sorted Array).
Two Pointer Algorithm – O(n) Time, O(1) Space
Example: 2 - Sum in Sorted Array
You are given an integer array arr[] sorted in non-decreasing order, and an integer target. Find two
elements in the array whose sum equals target.
If such a pair exists, return their indices in increasing order.
If no such pair exists, return [-1, -1].
Approach - Using Two Pointers - O(n) Time and O(1) Space
We can maintain two pointers, left = 0 and right = n - 1, and calculate their sum S = arr[left] + arr[right].
If S = target, then return left and right.
If S < target, then we need to increase sum S, so we will increment left = left + 1.
If S > target, then we need to decrease sum S, so we will decrement right = right - 1.
If at any point left >= right, then no pair with sum = target is found.
Algorithm:
Initialize left = 0 and right = n-1.
While left < right:
● If arr[left] + arr[right] == target, return the pair.
● If sum is smaller, move left++.
● If sum is larger, move right--.
Repeat until pointers meet.
// Returns indices of the two elements whose sum equals target,
// or {-1, -1} if no such pair exists
vector<int> twoSumSorted(vector<int>& arr, int target) {
    int left = 0, right = arr.size() - 1;
    while (left < right) {
        int sum = arr[left] + arr[right];
        if (sum == target)
            return {left, right};
        else if (sum < target)
            left++;
        else
            right--;
    }
    return {-1, -1};
}
Example: Merge Two Sorted Arrays (No Extra Space)
Given two sorted arrays a[] and b[] of size n and m respectively, merge both the arrays and rearrange the
elements such that the smallest n elements are in a[] and the remaining m elements are in b[]. All
elements in a[] and b[] should be in sorted order.
Approach - Using Swap and Sort
We swap the rightmost element of a[] with the leftmost element of b[], then the second rightmost
element of a[] with the second leftmost element of b[], and so on. This process continues until the
selected element from a[] becomes larger than the selected element from b[]. At this point, the
condition fails automatically and the process stops. Finally, sort both arrays to maintain the order.
Algorithm:
● Start with i at the last index of a[] and j at the first index of b[].
● While i >= 0 and j < m, swap a[i] and b[j] whenever a[i] > b[j], then move i left and j right.
● Sort both arrays at the end.
void mergeArrays(vector<int>& arr1, vector<int>& arr2) {
    int n = arr1.size(), m = arr2.size();
    int i = n - 1, j = 0;

    // Swap elements if needed
    while (i >= 0 && j < m) {
        if (arr1[i] > arr2[j])
            swap(arr1[i], arr2[j]);
        i--;
        j++;
    }

    // Sort both arrays
    sort(arr1.begin(), arr1.end());
    sort(arr2.begin(), arr2.end());
}
Classical Problems on Two Pointer:
● Check if a string is Palindrome
● Reverse an array
● Dutch National Flag (DNF) Algorithm
● 2-Sum (sorted array / count all distinct pairs / closest to target)
● Check subsequence of a string
● Move zeros to end
● 3-Sum / Count distinct triplets / Closest to target
● Count possible triangles
● 4-Sum
● Trapping Rainwater Problem
Basics of Sliding Window:
Sliding Window is a technique for problems involving contiguous subarrays or substrings.
Instead of recalculating the result from scratch for each window, we:
● Add the incoming element,
● Remove the outgoing element,
● Update our answer in O(1) time per shift.
Types:
● Fixed Window: the window size is fixed at k (e.g., Maximum sum in a size-k subarray).
● Variable Window: the window expands/contracts to meet a condition (e.g., Longest Substring Without Repeating Characters).
Sliding Window Algorithm – O(n) Time:
Example: Maximum Sum in K Size Subarray
Consider an array arr[] = [5, 2, -1, 0, 3] with k = 3 and n = 5.
In the initial phase we calculate the first window sum, starting from index 0. At this
stage the window sum is 6 (5 + 2 + (-1)). We set maximum_sum to the current window sum, i.e. 6.
Now, we slide our window by a unit index. Therefore, now it discards 5 from
the window and adds 0 to the window. Hence, we will get our new window
sum by subtracting 5 and then adding 0 to it. So, our window sum now
becomes 1. Now, we will compare this window sum with the maximum_sum.
As it is smaller, we won't change the maximum_sum.
Similarly, now once again we slide our window by a unit index and obtain the
new window sum to be 2. Again we check if this current window sum is
greater than the maximum_sum till now. Once, again it is smaller so we don't
change the maximum_sum.
Therefore, for the above array our maximum_sum is 6.
Algorithm:
● Compute sum of first k elements.
● Slide the window: subtract outgoing element, add incoming
element.
● Track maximum sum.
int maxSubarraySum(vector<int>& arr, int k) {
    int n = arr.size();
    if (n < k) return -1;

    // compute sum of first window
    int windowSum = 0;
    for (int i = 0; i < k; i++) {
        windowSum += arr[i];
    }
    int maxSum = windowSum;

    // slide the window
    for (int i = k; i < n; i++) {
        windowSum += arr[i] - arr[i - k];
        maxSum = max(maxSum, windowSum);
    }
    return maxSum;
}
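The variable-size window mentioned above works similarly, except the left edge moves only when a condition is violated. Here is a minimal sketch of the standard technique for Longest Substring Without Repeating Characters (the function name is illustrative):

#include <bits/stdc++.h>
using namespace std;

// Length of the longest substring of s with no repeated characters
int longestUniqueSubstring(const string& s) {
    vector<int> count(256, 0);
    int left = 0, best = 0;
    for (int right = 0; right < (int)s.size(); right++) {
        count[(unsigned char)s[right]]++;               // expand window with s[right]
        while (count[(unsigned char)s[right]] > 1) {    // shrink until no duplicate remains
            count[(unsigned char)s[left]]--;
            left++;
        }
        best = max(best, right - left + 1);
    }
    return best;
}

int main() {
    cout << longestUniqueSubstring("abcabcbb");   // 3 ("abc")
    return 0;
}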
Classical Problems on Sliding Window:
● Maximum sum in k size subarray
● XOR of every k size subarray
● Number of distinct elements in window size k
● Longest subarray with at most two distinct integers
● Count subarrays with sum = X (positive a[i])
● Maximum consecutive ones after at most k flips
● Count subarrays with k odd numbers
● Count subarrays with at most k distinct elements
● Minimum removals to make target sum
● Smallest window containing all characters of another string
● Count substrings with exactly k distinct characters
Hashing Techniques (Chaining, Open Addressing).
Open Addressing Collision Handling
technique in Hashing
Open Addressing is a method for handling collisions. In Open Addressing, all
elements are stored in the hash table itself. So at any point, the size of the
table must be greater than or equal to the total number of keys (Note that we
can increase table size by copying old data if needed). This approach is also
known as closed hashing. This entire procedure is based upon probing. We
will understand the types of probing ahead:
Insert(k): Keep probing until an empty slot is found. Once an
empty slot is found, insert k.
Search(k): Keep probing until the slot's key becomes
equal to k or an empty slot is reached.
Delete(k): Delete operation is interesting. If we simply delete a
key, then the search may fail. So slots of deleted keys are
marked specially as "deleted".
The insert can insert an item in a deleted slot, but the search
doesn't stop at a deleted slot.
Different ways of Open Addressing:
1. Linear Probing:
In linear probing, the hash table is searched sequentially, starting from the
original hash location. If the location we get is already occupied, we check
the next location.
The probing sequence is: if slot hash(x) % S is full, try (hash(x) + 1) % S,
then (hash(x) + 2) % S, and so on, until an empty slot is found.
Example: Let us consider a simple hash function as “key mod 5” and a
sequence of keys that are to be inserted are 50, 70, 76, 85, 93.
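A minimal linear-probing sketch for this example (fixed table size, no deletion or resizing handled; the names are purely illustrative):

#include <bits/stdc++.h>
using namespace std;

const int SIZE = 5;
const int EMPTY = -1;

// Insert key into the table using linear probing (assumes the table is not full)
void insertKey(vector<int>& table, int key) {
    int idx = key % SIZE;
    while (table[idx] != EMPTY)      // probe the next slot on collision
        idx = (idx + 1) % SIZE;
    table[idx] = key;
}

int main() {
    vector<int> table(SIZE, EMPTY);
    for (int key : {50, 70, 76, 85, 93})
        insertKey(table, key);
    for (int i = 0; i < SIZE; i++)
        cout << i << " -> " << table[i] << "\n";   // 0->50, 1->70, 2->76, 3->85, 4->93
    return 0;
}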
2. Quadratic Probing
In quadratic probing, the interval between probes grows quadratically with
each attempt, which helps reduce the primary clustering that linear probing
suffers from. In this method, we look for the i²-th slot away from the original
hash location in the i-th iteration. We always start from the original hash
location; if that location is occupied, we check the other slots in this
quadratic sequence.
let hash(x) be the slot index computed using hash function.
If slot hash(x) % S is full, then we try (hash(x) + 1*1) % S
If (hash(x) + 1*1) % S is also full, then we try (hash(x) + 2*2) % S
If (hash(x) + 2*2) % S is also full, then we try (hash(x) + 3*3) % S
Example: Let us consider table size S = 7, hash function Hash(x) = x % 7
and collision resolution strategy f(i) = i². Insert 22, 30, and 50.
3. Double Hashing
The intervals that lie between probes are computed by another
hash function. Double hashing is a technique that reduces
clustering in an optimized way. In this technique, the increments
for the probing sequence are computed by using another hash
function. We use another hash function hash2(x) and look for the
i*hash2(x) slot in the ith rotation.
let hash(x) be the slot index computed using hash function.
If slot hash(x) % S is full, then we try (hash(x) + 1*hash2(x)) % S
If (hash(x) + 1*hash2(x)) % S is also full, then we try (hash(x) +
2*hash2(x)) % S
If (hash(x) + 2*hash2(x)) % S is also full, then we try (hash(x) +
3*hash2(x)) % S
Example: Insert the keys 27, 43, 692, 72 into the Hash Table of size 7. where
first hash-function is h1(k) = k mod 7 and second hash-function is h2(k) = 1 +
(k mod 5)
Comparison of the above three:
Open addressing is a collision handling technique used in hashing
where, when a collision occurs (i.e., when two or more keys map
to the same slot), the algorithm looks for another empty slot in
the hash table to store the collided key.
In linear probing, the algorithm simply looks for the next
available slot in the hash table and places the collided key there.
If that slot is also occupied, the algorithm continues searching for
the next available slot until an empty slot is found. This process is
repeated until all collided keys have been stored. Linear probing
has the best cache performance but suffers from clustering.
Another advantage of linear probing is that it is easy to compute.
In quadratic probing, the algorithm searches for slots in a more
spaced-out manner. When a collision occurs, the algorithm looks
for the next slot using an equation that involves the original hash
value and a quadratic function. If that slot is also occupied, the
algorithm increments the value of the quadratic function and
tries again. This process is repeated until an empty slot is found.
Quadratic probing lies between the two in terms of cache
performance and clustering.
In double hashing, the algorithm uses a second hash function to
determine the next slot to check when a collision occurs. The
algorithm calculates a hash value using the original hash
function, then uses the second hash function to calculate an
offset. The algorithm then checks the slot that is the sum of the
original hash value and the offset. If that slot is occupied, the
algorithm increments the offset and tries again. This process is
repeated until an empty slot is found. Double hashing has poor
cache performance but no clustering. Double hashing requires
more computation time as two hash functions need to be
computed.
The choice of collision handling technique can have a significant
impact on the performance of a hash table. Linear probing is
simple and fast, but it can lead to clustering (i.e., a situation
where keys are stored in long contiguous runs) and can degrade
performance. Quadratic probing is more spaced out, but it can
also lead to clustering and can result in a situation where some
slots are never checked. Double hashing is more complex, but it
can lead to more even distribution of keys and can provide better
performance in some cases.
Separate Chaining Collision Handling
Technique in Hashing
Separate Chaining is a collision handling technique and one of the most
popular and commonly used ways to handle collisions. In this section, we
discuss how separate chaining works, along with its advantages and
disadvantages.
There are mainly two methods to handle collision:
● Separate Chaining
● Open Addressing
Separate Chaining:
The idea behind separate chaining is to make each cell of the hash table
point to a linked list of records, called a chain.
A linked list (or a dynamic-sized array) is used to implement this
technique: when multiple elements are hashed to the same slot index,
these elements are inserted into a singly-linked list, which is known as a
chain.
Here, all those elements that hash into the same slot index are inserted into a
linked list. Now, we can use a key K to search in the linked list by just linearly
traversing. If the key of any entry is equal to K then it means that we
have found our entry. If we have reached the end of the linked list and yet we
haven't found our entry then it means that the entry does not exist. Hence, the
conclusion is that in separate chaining, if two different elements have the same
hash value then we store both the elements in the same linked list one after
the other.
Example: Let us consider a simple hash function as "key mod 5" and a
sequence of keys as 12, 22, 15, 25
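A minimal chaining sketch for this example (using std::list for the chains; the table size and names are illustrative):

#include <bits/stdc++.h>
using namespace std;

const int SIZE = 5;

// Insert key into its chain
void insertKey(vector<list<int>>& table, int key) {
    table[key % SIZE].push_back(key);
}

// Search a key by linearly traversing its chain
bool searchKey(vector<list<int>>& table, int key) {
    for (int x : table[key % SIZE])
        if (x == key) return true;
    return false;
}

int main() {
    vector<list<int>> table(SIZE);
    for (int key : {12, 22, 15, 25})
        insertKey(table, key);          // 12 and 22 collide in slot 2; 15 and 25 in slot 0
    cout << searchKey(table, 22) << " " << searchKey(table, 99);   // 1 0
    return 0;
}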
Advantages:
● Simple to implement.
● Hash table never fills up, we can always add more elements to the
chain.
● Less sensitive to the hash function or load factors.
● It is mostly used when it is unknown how many and how frequently
keys may be inserted or deleted.
Disadvantages:
● The cache performance of chaining is not good as keys are stored
using a linked list. Open addressing provides better cache
performance as everything is stored in the same table.
● Wastage of Space (Some Parts of the hash table are never used)
● If the chain becomes long, then search time can become O(n) in the
worst case
● Uses extra space for links
Separate Chaining vs Open Addressing:
1. Chaining is simpler to implement; open addressing requires more computation.
2. In chaining, the hash table never fills up, as we can always add more elements to a chain; in open addressing, the table may become full.
3. Chaining is less sensitive to the hash function or load factor; open addressing requires extra care to avoid clustering and high load factors.
4. Chaining is mostly used when it is unknown how many and how frequently keys may be inserted or deleted; open addressing is used when the frequency and number of keys are known.
5. The cache performance of chaining is not good, as keys are stored in linked lists; open addressing provides better cache performance since everything is stored in the same table.
6. Chaining wastes space (some parts of the hash table are never used); in open addressing, a slot can be used even if no input maps to it.
7. Chaining uses extra space for links; there are no links in open addressing.
Dynamic Programming (DP) Introduction
Dynamic Programming is a commonly used algorithmic technique used to
optimize recursive solutions when the same subproblems are called again.
● The core idea behind DP is to store solutions to subproblems so that
each is solved only once.
● To solve DP problems, we first write a recursive solution in a way
that there are overlapping subproblems in the recursion tree (the
recursive function is called with the same parameters multiple
times)
● To make sure that a recursive value is computed only once (to
improve time taken by algorithm), we store results of the recursive
calls.
● There are two ways to store the results, one is top down (or
memoization) and other is bottom up (or tabulation).
When to Use Dynamic Programming (DP)?
Dynamic programming is used for solving problems that have the
following characteristics:
1. Optimal Substructure:
The property Optimal substructure means that we use the optimal results of
subproblems to achieve the optimal result of the bigger problem.
Example:
Consider the problem of finding the minimum cost path in a
weighted graph from a source node to a destination node. We
can break this problem down into smaller subproblems:
Find the minimum cost path from the source node to each
intermediate node.
Find the minimum cost path from each intermediate node to the
destination node.
The solution to the larger problem (finding the minimum cost
path from the source node to the destination node) can be
constructed from the solutions to these smaller subproblems.
Dynamic Programming
Dynamic Programming (DP) is an algorithmic technique for solving complex problems
by breaking them down into simpler, overlapping subproblems and storing the solutions
to these subproblems to avoid redundant computations. It is primarily used for
optimization problems where one seeks to find the minimum or maximum solution.
Key Properties of Problems Solvable by DP:
● Overlapping Subproblems:
The problem can be divided into smaller subproblems, and these subproblems
are repeatedly encountered during the recursive solution process. DP stores the
solutions to these subproblems to avoid recomputing them.
● Optimal Substructure:
The optimal solution to the overall problem can be constructed from the optimal
solutions of its subproblems. This is often referred to as the "Principle of
Optimality."
Approaches to Dynamic Programming:
● Memoization (Top-Down DP):
● This approach involves writing a recursive solution and then storing the
results of subproblems in a data structure (e.g., an array or hash map) as
they are computed.
● Before computing a subproblem, the stored results are checked. If the
solution already exists, it is retrieved; otherwise, it is computed and stored.
This is essentially optimizing a recursive solution by adding a cache.
● Tabulation (Bottom-Up DP):
● This approach involves iteratively solving subproblems in a specific order,
starting from the smallest subproblems and building up to the larger ones.
● A table (e.g., an array) is typically used to store the solutions to
subproblems, and these solutions are used to compute solutions for larger
subproblems.
● This approach is often more efficient in terms of space and can sometimes
be easier to reason about as it avoids recursion overhead.
Steps to Solve a DP Problem:
● Identify if DP is applicable: Check for overlapping subproblems and optimal
substructure.
● Define the state: Determine what information needs to be stored to represent a
subproblem's solution. This often involves defining a DP array or table.
● Formulate the recurrence relation: Express the solution to a larger subproblem
in terms of solutions to smaller subproblems.
● Determine the base cases: Identify the simplest subproblems whose solutions
are known directly.
● Choose an approach: Decide between memoization (top-down) or tabulation
(bottom-up) and implement the solution accordingly.
Examples of Problems Solved by DP:
● Fibonacci Sequence
● Longest Common Subsequence (LCS)
● Knapsack Problem (0/1 Knapsack, Unbounded Knapsack)
● Matrix Chain Multiplication
● Shortest Path problems (e.g., Bellman-Ford, Floyd-Warshall)
● Edit Distance
Dynamic Programming is a fundamental technique in algorithm design, crucial for
solving a wide range of optimization and combinatorial problems efficiently.
Tabulation Vs Memoization
Memoization is a top-down, recursive approach that uses a cache to store subproblem solutions as they
are encountered, while tabulation is a bottom-up, iterative approach that fills a table with solutions to
subproblems starting from the base cases.
Memoization is ideal for sparse state spaces where not all subproblems are needed, but it can
be susceptible to stack overflow;
tabulation is better when all subproblems must be solved and avoids recursion overhead.
Memoization (Top-Down)
● Definition: A recursive approach that solves the main problem by breaking it into
subproblems and storing the results of each subproblem in a cache (like a
dictionary or array) to avoid re-computation.
● Approach: Starts from the main problem and uses recursion to solve it, storing
results along the way.
● Order of Execution: Subproblems are solved as they are encountered during the
recursive calls.
● When to use:
o When the problem has many overlapping subproblems, but not all of them
are necessarily needed.
o When the recursive structure is more intuitive to write.
o When the input size is not extremely large to avoid stack overflow.
● Pros:
o Often more intuitive and easier to implement from a brute-force recursive
solution.
o Can be faster if many subproblems are never needed.
● Cons:
o Can lead to stack overflow errors in languages with a limited recursion
depth for very deep recursion.
o Has the overhead of recursive function calls.
Tabulation (Bottom-Up)
● Definition: An iterative approach where solutions to the subproblems are
computed and stored in a table (usually an array) in a specific order, starting from
the smallest subproblems and working up to the final solution.
● Approach: Iterative; you build the solution from the base cases up to the final
answer.
● Order of Execution: Subproblems are solved in a predefined, iterative order, from
smallest to largest.
● When to use:
o When you need to solve all subproblems.
o When you want to avoid the overhead of recursion and potential stack
overflow issues.
o For problems that involve matrices or grids, like finding the number of
unique paths.
● Pros:
o Avoids recursion overhead and stack overflow risk.
o Can be more space-efficient by sometimes "forgetting" previous rows of
the table that are no longer needed.
o Time complexity is often easier to analyze.
● Cons:
o Can be less intuitive to implement if the iteration order is not obvious.
o It always solves all subproblems, even if some are not needed for the final
solution.
Feature comparison:
● Approach: Memoization (top-down) is recursive; tabulation (bottom-up) is iterative.
● Order of execution: Memoization solves subproblems as they are needed; tabulation solves them in a predefined order.
● State space: Memoization fills only the necessary states; tabulation fills all states in the table.
● Overhead: Memoization incurs recursion overhead; tabulation has no recursion overhead.
● Risk: Memoization carries a stack overflow risk for deep recursion; tabulation has no stack overflow risk.
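To make the contrast concrete, here is a minimal sketch of both styles on the Fibonacci numbers (a standard example; the function names are illustrative):

#include <bits/stdc++.h>
using namespace std;

// Memoization (top-down): recurse, but cache each result the first time it is computed
long long fibMemo(int n, vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

// Tabulation (bottom-up): fill the table from the base cases upward
long long fibTab(int n) {
    vector<long long> dp(n + 1, 0);
    if (n >= 1) dp[1] = 1;
    for (int i = 2; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}

int main() {
    int n = 40;
    vector<long long> memo(n + 1, -1);
    cout << fibMemo(n, memo) << " " << fibTab(n);   // 102334155 102334155
    return 0;
}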
What is Memoization?
Memoization is a powerful optimization technique that can drastically improve
the performance of certain algorithms. It helps by storing the
results of expensive function calls and reusing them when the same inputs
occur again. This avoids redundant calculations, making your code more
efficient.
The term "Memoization" comes from the Latin word "memorandum" (to
remember), which is commonly shortened to "memo" in American English,
and which means "to transform the results of a function into something to
remember".
In computing, memoization is used to speed up computer programs by
eliminating the repetitive computation of results, and by avoiding repeated
calls to functions that process the same input.
What is Memoization?
Memoization is an optimization technique primarily used to
enhance the performance of algorithms by storing the results of
expensive function calls and reusing them when the same
inputs occur again. The term comes from "memorandum",
which refers to a note intended to help with memory.
Memoization is particularly effective in scenarios involving
repeated computations, like recursive algorithms, where the
same calculations may be performed multiple times.
Why is Memoization used?
Memoization is a specific form of caching that is used in
dynamic programming. The purpose of caching is to improve
the performance of our programs and keep data accessible that
can be used later. It basically stores the previously calculated
result of the subproblem and reuses the stored result for the
same subproblem. This removes the extra effort to calculate
again and again for the same problem.
Where to use Memoization?
Memoization is useful in situations where previously calculated
results can be reused. It is particularly effective in recursive
problems, especially those involving overlapping subproblems,
where the same calculations are repeated multiple times.
How Memoization technique is used in Dynamic Programming?
Dynamic programming helps to efficiently solve problems that
have overlapping subproblems and optimal substructure
properties. The idea behind dynamic programming is to break
the problem into smaller sub-problems and save the result for
future use, thus eliminating the need to compute the result
repeatedly.
There are two approaches to formulate a dynamic programming
solution:
Top-Down Approach: This approach follows the memoization
technique. It consists of recursion and caching. In computation,
recursion represents the process of calling functions repeatedly,
whereas cache refers to the process of storing intermediate
results.
Bottom-Up Approach: This approach uses the tabulation
technique to implement the dynamic programming solution. It
addresses the same problems as before, but without recursion.
In this approach, iteration replaces recursion. Hence, there is no
stack overflow error or overhead of recursive procedures.
How Memoization is different from Tabulation?
● State transition relation: easy to think of for memoization; can be difficult to think of for tabulation.
● Code: memoized code is easy and less complicated; tabulated code gets complicated when a lot of conditions are required.
● Speed: memoization is slower due to many recursive calls and return statements; tabulation is fast, as we directly access previous states from the table.
● Subproblem solving: if some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of solving only those subproblems that are definitely required; if all subproblems must be solved at least once, a bottom-up tabulated algorithm usually outperforms a top-down memoized algorithm by a constant factor.
● Table entries: in the memoized version, not all entries of the lookup table are necessarily filled; the table is filled on demand. In the tabulated version, starting from the first entry, all entries are filled one by one.
Next, we look at classical problems (Fibonacci, Knapsack, LCS, LIS) solved using dynamic programming (DP).
Dynamic programming is applicable to problems with overlapping subproblems and optimal
substructure properties.
Fibonacci Sequence
The Fibonacci Sequence is a series of numbers starting with 0 and 1, where
each succeeding number is the sum of the two preceding numbers. The
sequence goes on infinitely. So, the sequence begins as:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …
History of the Fibonacci Sequence
The Fibonacci sequence is named after Leonardo of Pisa, who is
more commonly known as Fibonacci. He was an Italian
mathematician born around 1170 and died around 1250.
Fibonacci introduced the sequence to Western mathematics in
his book "Liber Abaci" (The Book of Calculation), published in
1202. In "Liber Abaci", Fibonacci posed a problem involving the
growth of a population of rabbits. The problem was stated as
follows:
Suppose a pair of rabbits is placed in an enclosed area. How
many pairs of rabbits will be produced in one year if every
month each pair produces a new pair that becomes productive
from the second month on?
This problem led to the formation of the Fibonacci sequence: 0,
1, 1, 2, 3, 5, 8, 13, 21, 34,...
Fibonacci Sequence Formula
The Fibonacci formula is used to find the nth term of the
sequence when its first and second terms are given.
The n-th term of the Fibonacci Sequence is represented as F(n). It
is given by the following recursive formula:
F(n) = F(n-1) + F(n-2), for n > 1
where the first term is F(0) = 0 and the second term is F(1) = 1.
Using this formula, we can easily find the various terms of the
Fibonacci Sequence. Suppose we have to find the 3rd term of
this sequence; we then require the 2nd and the 1st terms
according to the given formula, and the 3rd term is calculated as
F(3) = F(2) + F(1) = 1 + 1 = 2
Thus, the third term in the Fibonacci Sequence is 2, and
similarly, the next terms of the sequence can also be found as
F(4) = F(3) + F(2) = 2 + 1 = 3
F(5) = F(4) + F(3) = 3 + 2 = 5
and so on.
Below are the first 10 Fibonacci numbers in the sequence:
F(0) = 0, F(1) = 1, F(2) = 1, F(3) = 2, F(4) = 3, F(5) = 5, F(6) = 8, F(7) = 13, F(8) = 21, F(9) = 34
The Fibonacci Sequence has infinitely many terms.
By closely observing the list, we can see that F(n) = F(n-1) + F(n-2)
for every n > 1.
Note: The Fibonacci Sequence can start in two ways:
0 and 1: This is the most common convention, where the
sequence begins as 0, 1, 1, 2, 3, 5, 8, 13, …
1 and 1: In some contexts, the sequence starts with 1, 1, 2, 3, 5,
8, 13, …
Fibonacci Sequence in Nature
Many natural patterns follow a spiral structure that aligns with
Fibonacci numbers:
Sunflowers: The number of spirals in the center of a sunflower
often corresponds to Fibonacci numbers.
Pinecones: The scales of a pinecone form spiral patterns that
match Fibonacci numbers.
Shells (Nautilus, Snails, etc.): Their growth follows a logarithmic
spiral, which is closely related to the Fibonacci sequence.
Galaxies: Many spiral galaxies, such as the Milky Way, follow
Fibonacci-like spirals.
These spirals are examples of logarithmic spirals, which
maintain the same shape as they expand.
Properties of the Fibonacci Sequence
Important properties of the Fibonacci Sequence are:
We can calculate Fibonacci numbers directly using Binet's
formula:
F(n) = (Φ^n - (1 - Φ)^n) / √5
Using this formula, we can easily calculate the n-th term of the
Fibonacci sequence; for example, for the fourth term,
F(4) = (Φ^4 - (1 - Φ)^4)/√5 = (1.618034^4 - (1 - 1.618034)^4)/√5 = 3
For larger terms, the ratio of two consecutive terms of the
Fibonacci Sequence converges to the Golden Ratio.
Multiplying a term of the Fibonacci Sequence by the Golden
Ratio gives (approximately) the next term of the sequence.
For example, F(7) = 13, and 13 × 1.618 ≈ 21, which matches
F(8) = F(7) + F(6) = 13 + 8 = 21
Thus, F(8) in the Fibonacci Sequence is 21.
We can also extend the Fibonacci Sequence to negative
indices as
F(-n) = (-1)^(n+1) F(n)
For example, F(-2) = (-1)^3 F(2) = -1.
Golden Ratio and Fibonacci Sequence
The golden ratio (Φ) is a special mathematical constant
approximately equal to 1.618. It is often represented by the
Greek letter phi (Φ) and is also known as the golden number,
golden proportion, or the divine proportion.
Formula:
Φ ≈ F(n) / F(n-1) for large n
As you divide two consecutive terms in the Fibonacci sequence, the resulting
ratio approaches the golden ratio. The ratio gets closer to 1.6180339887 as
the Fibonacci numbers increase.
(A plot of the ratio F(n+1)/F(n) against n shows the ratio converging to Φ as n grows,
visually illustrating how consecutive Fibonacci numbers approach this constant as the
sequence progresses.)
Knapsack Problem
Given n items where each item has some weight and profit associated with it
and also given a bag with capacity W, [i.e., the bag can hold at most W
weight in it]. The task is to put the items into the bag such that the sum of
profits associated with them is the maximum possible.
Note: The constraint here is we can either put an item completely into the
bag or cannot put it at all [It is not possible to put a part of an item into the
bag].
Input: W = 4, profit[] = [1, 2, 3], weight[] = [4, 5, 1]
Output: 3
Explanation: There are two items which have weight less than or equal to 4.
If we select the item with weight 4, the possible profit is 1. And if we select
the item with weight 1, the possible profit is 3. So the maximum possible
profit is 3. Note that we cannot put both the items with weight 4 and 1
together as the capacity of the bag is 4.
Input: W = 3, profit[] = [1, 2, 3], weight[] = [4, 5, 6]
Output: 0
[Naive Approach] Using Recursion O(2^n) Time and O(n) Space
A simple solution is to consider all subsets of items and calculate the total
weight and value of all subsets. Consider only the subsets whose total
weight is smaller than or equal to W. From all such subsets, pick the subset with
maximum value.
Optimal Substructure: To consider all subsets of items, there can be two
cases for every item.
Case 1: The item is included in the optimal subset.
Case 2: The item is not included in the optimal set.
Follow the below steps to solve the problem:
The maximum value obtained from 'n' items is the max of the following two
values.
● Case 1 (pick the nth item): Value of the nth item + maximum value
obtained by remaining (n-1) items and remaining weight i.e.
(W-weight of the nth item).
● Case 2 (don't pick the nth item): Maximum value obtained by (n-1)
items and W weight.
● If the weight of the 'nth' item is greater than 'W', then the nth item
cannot be included and Case 2 is the only possibility.
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum value that
// can be put in a knapsack of capacity W
int knapsackRec(int W, vector<int> &val, vector<int> &wt, int n) {

    // Base Case
    if (n == 0 || W == 0)
        return 0;

    int pick = 0;

    // Pick nth item if it does not exceed the capacity of knapsack
    if (wt[n - 1] <= W)
        pick = val[n - 1] + knapsackRec(W - wt[n - 1], val, wt, n - 1);

    // Don't pick the nth item
    int notPick = knapsackRec(W, val, wt, n - 1);

    return max(pick, notPick);
}

int knapsack(int W, vector<int> &val, vector<int> &wt) {
    int n = val.size();
    return knapsackRec(W, val, wt, n);
}

int main() {
    vector<int> val = {1, 2, 3};
    vector<int> wt = {4, 5, 1};
    int W = 4;
    cout << knapsack(W, val, wt) << endl;
    return 0;
}
Output
3
[Better Approach 1] Using Top-Down DP (Memoization)- O(n x W) Time and
Space
Note: The above recursive function computes the same subproblems
again and again. For example, in the recursion tree for the inputs above,
the state K(1, 1) is evaluated twice.
As there are repetitions of the same subproblem again and again we can
implement the following idea to solve the problem.
When we solve a subproblem for the first time, we store its result in a
2-D array indexed by the state (n, w). If we come across the
same state (n, w) again, instead of recalculating it we can directly return
its result stored in the table in constant time.
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum value that
// can be put in a knapsack of capacity W
int knapsackRec(int W, vector<int> &val, vector<int> &wt, int n,
                vector<vector<int>> &memo) {

    // Base Case
    if (n == 0 || W == 0)
        return 0;

    // Check if we have previously calculated the same subproblem
    if (memo[n][W] != -1)
        return memo[n][W];

    int pick = 0;

    // Pick nth item if it does not exceed the capacity of knapsack
    if (wt[n - 1] <= W)
        pick = val[n - 1] + knapsackRec(W - wt[n - 1], val, wt, n - 1, memo);

    // Don't pick the nth item
    int notPick = knapsackRec(W, val, wt, n - 1, memo);

    // Store the result in memo[n][W] and return it
    return memo[n][W] = max(pick, notPick);
}

int knapsack(int W, vector<int> &val, vector<int> &wt) {
    int n = val.size();

    // Memoization table to store the results
    vector<vector<int>> memo(n + 1, vector<int>(W + 1, -1));
    return knapsackRec(W, val, wt, n, memo);
}

int main() {
    vector<int> val = {1, 2, 3};
    vector<int> wt = {4, 5, 1};
    int W = 4;
    cout << knapsack(W, val, wt) << endl;
    return 0;
}
Output
3
[Better Approach 2] Using Bottom-Up DP (Tabulation) - O(n x W)
Time and Space
There are two parameters that change in the recursive solution, going from
0 to n and from 0 to W respectively. So we create a 2D dp[][] array of size
(n+1) x (W+1), such that dp[i][j] stores the maximum value we can get
using the first i items when the knapsack capacity is j.
● We first fill the known entries where i is 0 or j is 0 (no items or zero
capacity gives value 0).
● Then we fill the remaining entries using the recursive formula.
For each item i and knapsack capacity j, we decide whether to pick the item or
not.
● If we don't pick the item: dp[i][j] remains the same as for the previous
item, that is dp[i - 1][j].
● If we pick the item (only possible when wt[i - 1] <= j): dp[i][j] becomes
val[i - 1] + dp[i - 1][j - wt[i - 1]].
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum value that
// can be put in a knapsack of capacity W
int knapsack(int W, vector<int> &val, vector<int> &wt) {
    int n = wt.size();
    vector<vector<int>> dp(n + 1, vector<int>(W + 1));

    // Build table dp[][] in bottom-up manner
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= W; j++) {

            // If there is no item or the knapsack's capacity is 0
            if (i == 0 || j == 0)
                dp[i][j] = 0;
            else {
                int pick = 0;

                // Pick ith item if it does not exceed the capacity of knapsack
                if (wt[i - 1] <= j)
                    pick = val[i - 1] + dp[i - 1][j - wt[i - 1]];

                // Don't pick the ith item
                int notPick = dp[i - 1][j];

                dp[i][j] = max(pick, notPick);
            }
        }
    }
    return dp[n][W];
}

int main() {
    vector<int> val = {1, 2, 3};
    vector<int> wt = {4, 5, 1};
    int W = 4;

    cout << knapsack(W, val, wt) << endl;
    return 0;
}
Output
3
[Expected Approach] Using Bottom-Up DP (Space-Optimized) - O(n x W)
Time and O(W) Space
To compute the current row of the dp[] array we only need the values of the
previous row. Moreover, if we traverse the capacities from right to left, the
update can be done in place using a single 1-D array.
#include <bits/stdc++.h>
using namespace std;

// Function to find the maximum profit
int knapsack(int W, vector<int> &val, vector<int> &wt) {
    int n = wt.size();

    // Initializing dp vector
    vector<int> dp(W + 1, 0);

    // Taking first i elements
    for (int i = 1; i <= n; i++) {

        // Starting from back, so that we also have data of
        // previous computation of i-1 items
        for (int j = W; j >= wt[i - 1]; j--) {
            dp[j] = max(dp[j], dp[j - wt[i - 1]] + val[i - 1]);
        }
    }
    return dp[W];
}

int main() {
    vector<int> val = {1, 2, 3};
    vector<int> wt = {4, 5, 1};
    int W = 4;

    cout << knapsack(W, val, wt) << endl;
    return 0;
}
Output
3
Let us understand LCS with an example.
If
S1 = {B, C, D, A, A, C, D}
S2 = {A, C, D, B, A, C}
A subsequence is a sequence derived by deleting some or no elements from
the original sequence without changing the order of the remaining elements. A
common subsequence means a subsequence that appears in both sequences
in the same relative order.
Here, common subsequences are {B, C}, {C, D, A, C}, {D, A, C}, {A, A, C}, {A,
C}, {C, D}, ...
Among these subsequences, {C, D, A, C} is the longest common subsequence.
We are going to find this longest common subsequence using dynamic
programming.
Before proceeding further, if you are not already familiar with dynamic
programming, please go through the basics of dynamic programming first.
Longest Common Subsequence (LCS)
Given two strings, s1 and s2, the task is to find the length of the Longest
Common Subsequence. If there is no common subsequence, return 0. A
subsequence is a string generated from the original string by deleting 0 or
more characters, without changing the relative order of the remaining
characters.
For example, subsequences of "ABC" are "", "A", "B", "C", "AB", "AC", "BC"
and "ABC". In general, a string of length n has 2n subsequences.
Examples:
Input: s1 = "ABC", s2 = "ACD"
Output: 2
Explanation: The longest subsequence which is present in both strings is
"AC".
Input: s1 = "AGGTAB", s2 = "GXTXAYB"
Output: 4
Explanation: The longest common subsequence is "GTAB".
Input: s1 = "ABC", s2 = "CBA"
Output: 1
Explanation: There are three longest common subsequences of length 1, "A",
"B" and "C".
[Naive Approach] Using Recursion - O(2^min(m, n)) Time and O(min(m, n))
Space
The idea is to compare the last characters of s1 and s2. While comparing the
strings s1 and s2 two cases arise:
Match : Make the recursion call for the remaining strings (strings of lengths
m-1 and n-1) and add 1 to result.
Do not Match : Make two recursive calls. First for lengths m-1 and n, and
second for m and n-1. Take the maximum of two results.
Base case : If any of the strings become empty, we return 0.
For example, consider the input strings s1 = "ABX" and s2 = "ACX".
LCS("ABX", "ACX") = 1 + LCS("AB", "AC") [Last Characters Match]
LCS("AB", "AC") = max( LCS("A", "AC") , LCS("AB", "A") ) [Last Characters
Do Not Match]
LCS("A", "AC") = max( LCS("", "AC") , LCS("A", "A") )
= max(0, 1 + LCS("", "")) = 1
LCS("AB", "A") = max( LCS("A", "A") , LCS("AB", "") )
= max( 1 + LCS("", "", 0)) = 1
So overall result is 1 + 1 = 2
// A Naive recursive implementation of LCS problem
#include <bits/stdc++.h>
using namespace std;

// Returns length of LCS for s1[0..m-1], s2[0..n-1]
int lcsRec(string &s1, string &s2, int m, int n) {

    // Base case: If either string is empty, the length of LCS is 0
    if (m == 0 || n == 0)
        return 0;

    // If the last characters of both substrings match,
    // include this character in LCS and recur for remaining substrings
    if (s1[m - 1] == s2[n - 1])
        return 1 + lcsRec(s1, s2, m - 1, n - 1);

    // If the last characters do not match, recur for two cases:
    // 1. Exclude the last character of s1
    // 2. Exclude the last character of s2
    // Take the maximum of these two recursive calls
    return max(lcsRec(s1, s2, m, n - 1), lcsRec(s1, s2, m - 1, n));
}

int lcs(string &s1, string &s2) {
    int m = s1.size(), n = s2.size();
    return lcsRec(s1, s2, m, n);
}

int main() {
    string s1 = "AGGTAB";
    string s2 = "GXTXAYB";

    cout << lcs(s1, s2) << endl;
    return 0;
}
Output
4
[Better Approach] Using Memoization (Top Down DP) - O(m * n) Time and
O(m * n) Space
To optimize the recursive solution, we use a 2D memoization table of size
(m + 1) x (n + 1), initialized to -1, to track computed values. Before making
recursive calls, we check this table to avoid redundant computation of
overlapping subproblems. This prevents repeated calculations and improves
efficiency.
// C++ implementation of Top-Down DP
// of LCS problem
#include <bits/stdc++.h>
using namespace std;

// Returns length of LCS for s1[0..m-1], s2[0..n-1]
int lcsRec(string &s1, string &s2, int m, int n, vector<vector<int>> &memo) {

    // Base Case
    if (m == 0 || n == 0)
        return 0;

    // Already exists in the memo table
    if (memo[m][n] != -1)
        return memo[m][n];

    // Match
    if (s1[m - 1] == s2[n - 1])
        return memo[m][n] = 1 + lcsRec(s1, s2, m - 1, n - 1, memo);

    // Do not match
    return memo[m][n] = max(lcsRec(s1, s2, m, n - 1, memo),
                            lcsRec(s1, s2, m - 1, n, memo));
}

int lcs(string &s1, string &s2) {
    int m = s1.length();
    int n = s2.length();
    vector<vector<int>> memo(m + 1, vector<int>(n + 1, -1));
    return lcsRec(s1, s2, m, n, memo);
}

int main() {
    string s1 = "AGGTAB";
    string s2 = "GXTXAYB";

    cout << lcs(s1, s2) << endl;
    return 0;
}
Output
4
[Expected Approach 1] Using Bottom-Up DP (Tabulation) - O(m * n) Time and
O(m * n) Space
There are two parameters that change in the recursive solution and these
parameters go from 0 to m and 0 to n. So we create a 2D dp array of size
(m+1) x (n+1).
We first fill the known entries when m is 0 or n is 0.
Then we fill the remaining entries using the recursive formula.
For example, with s1 = "AXTY" and s2 = "AYZX", the dp table is filled row by row using the rules above.
#include <iostream>
#include <string>
#include <vector>
using namespace std;

// Returns length of LCS for s1[0..m-1], s2[0..n-1]
int lcs(string &s1, string &s2) {
    int m = s1.size();
    int n = s2.size();

    // Initializing a matrix of size (m+1)*(n+1)
    vector<vector<int>> dp(m + 1, vector<int>(n + 1, 0));

    // Building dp[m+1][n+1] in bottom-up fashion
    for (int i = 1; i <= m; ++i) {
        for (int j = 1; j <= n; ++j) {
            if (s1[i - 1] == s2[j - 1])
                dp[i][j] = dp[i - 1][j - 1] + 1;
            else
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1]);
        }
    }

    // dp[m][n] contains length of LCS for s1[0..m-1]
    // and s2[0..n-1]
    return dp[m][n];
}

int main() {
    string s1 = "AGGTAB";
    string s2 = "GXTXAYB";

    cout << lcs(s1, s2) << endl;
    return 0;
}
Output
4
[Expected Approach 2] Using Bottom-Up DP (Space-Optimized) - O(m * n) Time and O(n) Space
One important observation about the above implementation is that, in each
iteration of the outer loop, we only need values from the previous row (plus
the entries of the current row computed so far). So there is no need to store
all rows of the DP matrix; we can keep just two rows at a time and alternate
between them, as sketched below. This can be optimized further to use only
one array.
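As an illustration, here is a minimal sketch of the two-row version of this idea; the function name lcsSpaceOptimized and the driver code in main are our own choices for the example, not part of the implementations above.

#include <bits/stdc++.h>
using namespace std;

// Space-optimized LCS: keeps only two rows of the DP table.
int lcsSpaceOptimized(string &s1, string &s2) {
    int m = s1.size(), n = s2.size();

    // prev holds dp[i-1][*], curr holds dp[i][*]
    vector<int> prev(n + 1, 0), curr(n + 1, 0);

    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (s1[i - 1] == s2[j - 1])
                curr[j] = prev[j - 1] + 1;
            else
                curr[j] = max(prev[j], curr[j - 1]);
        }
        // The current row becomes the previous row for the next iteration
        swap(prev, curr);
    }

    // After the final swap, prev holds the last computed row
    return prev[n];
}

int main() {
    string s1 = "AGGTAB";
    string s2 = "GXTXAYB";
    cout << lcsSpaceOptimized(s1, s2) << endl; // expected output: 4
    return 0;
}

Using swap avoids copying a whole row in every iteration. The single-array variant mentioned above works the same way, except it must remember the diagonal value dp[i - 1][j - 1] in a temporary variable before overwriting it.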
Longest Increasing Subsequence (LIS)
Given an array arr[] of size n, the task is to find the length of the Longest
Increasing Subsequence (LIS) i.e., the longest possible subsequence in which
the elements of the subsequence are sorted in increasing order.
Examples:
Input: arr[] = [3, 10, 2, 1, 20]
Output: 3
Explanation: The longest increasing subsequence is [3, 10, 20].
Input: arr[] = [30, 20, 10]
Output: 1
Explanation: The longest increasing subsequences are [30], [20] and [10], each of length 1.
Input: arr[] = [2, 2, 2]
Output: 1
Explanation: Only strictly increasing subsequences are considered, so equal elements cannot extend a subsequence.
Input: arr[] = [10, 20, 35, 80]
Output: 4
Explanation: The whole array is already sorted in increasing order, so the LIS is the entire array.