Important Concepts and Formulas
Relation
Let A and B be two sets. Then a relation R from A to B is a subset of A × B.
R is a relation from A to B ⇔ R ⊆ A × B.
Inverse relation
Let A, B be two sets and let R be a relation from a set A to a set B. Then the inverse of R, denoted by R–1,
is a relation from B to A and is defined by R–1 = {(b, a) : (a, b) ∈ R}.
Types of Relations
Void relation : Let A be a set. Then ∅ ⊆ A × A and so it is a relation on A. This relation is called the
void or empty relation on A.
Identity relation : Let A be a set. Then the relation IA = {(a, a) : a ∈ A} on A is called the identity
relation on A.
Reflexive relation : A relation R on a set A is said to be reflexive iff (a, a) ∈ R for all a ∈ A.
A relation R on a set A is not reflexive if there exists an element a ∈ A such that (a, a) ∉ R.
Symmetric relation : A relation R on a set A is said to be a symmetric relation iff (a, b) ∈ R ⇒ (b, a)
∈ R for all a, b ∈ A, i.e. aRb ⇒ bRa for all a, b ∈ A.
A relation R on a set A is not a symmetric relation if there are at least two elements a, b ∈ A such that
(a, b) ∈ R but (b, a) ∉ R.
Transitive relation : A relation R on A is said to be a transitive relation iff (a, b) ∈ R and (b, c) ∈ R
⇒ (a, c) ∈ R for all a, b, c ∈ A, i.e. aRb and bRc ⇒ aRc for all a, b, c ∈ A.
Antisymmetric relation : A relation R on a set A is said to be an antisymmetric relation iff (a, b) ∈ R and
(b, a) ∈ R ⇒ a = b for all a, b ∈ A.
Equivalence relation : A relation R on a set A is said to be an equivalence relation on A iff
It is reflexive i.e. (a, a) ∈ R for all a ∈ A.
K STEPHEN RAJ (PGT MATHS)
It is symmetric i.e. (a, b) ∈ R ⇒ (b, a) ∈ R for all a, b ∈ A.
It is transitive i.e. (a, b) ∈ R and (b, c) ∈ R ⇒ (a, c) ∈ R for all a, b, c ∈ A.
Congruence modulo m
Let m be an arbitrary but fixed integer. Two integers a and b are said to be congruent modulo m if a – b
is divisible by m, and we write a ≡ b (mod m). Thus, a ≡ b (mod m) ⇔ a – b is divisible by m.
The union of two equivalence relations on a set is not necessarily an equivalence relation on the set.
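The defining conditions for an equivalence relation, and the union counterexample above, can be checked mechanically on small finite sets. A minimal Python sketch (the set A and the relations R1, R2 are illustrative, not from the text):

```python
def is_reflexive(A, R):
    # (a, a) must be in R for every a in A
    return all((a, a) in R for a in A)

def is_symmetric(R):
    # (a, b) in R must imply (b, a) in R
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    # (a, b) in R and (b, c) in R must imply (a, c) in R
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

def is_equivalence(A, R):
    return is_reflexive(A, R) and is_symmetric(R) and is_transitive(R)

A = {0, 1, 2}
# Two equivalence relations on A: one relates 0 and 1, the other relates 1 and 2.
R1 = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}
R2 = {(0, 0), (1, 1), (2, 2), (1, 2), (2, 1)}
assert is_equivalence(A, R1) and is_equivalence(A, R2)

# Their union contains (0, 1) and (1, 2) but not (0, 2),
# so it fails transitivity and is not an equivalence relation.
U = R1 | R2
print(is_transitive(U))
print(is_equivalence(A, U))
```

Both printed values are False, confirming that the union of two equivalence relations need not be one.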
Functions
Let A and B be two non-empty sets. Then a function 'f ' from set A to set B is a rule or method or
correspondence which associates elements of set A to elements of set B such that
(i) All elements of set A are associated to elements in set B.
(ii) An element of set A is associated to a unique element in set B.
A function ‘f ’ from a set A to a set B associates each element of set A to a unique element of set B.
If an element a ∈ A is associated to an element b ∈ B, then b is called ‘the f-image of a’ or ‘the image of a
under f ’ or 'the value of the function f at a'. Also, a is called the preimage of b under the function f.
We write it as : b = f (a).
Equal functions
Two functions f and g are said to be equal iff
(i) The domain of f = domain of g
(ii) The codomain of f = the codomain of g, and
(iii) f (x) = g(x) for every x belonging to their common domain.
If two functions f and g are equal, then we write f = g.
Types of Functions
(i) One-one function (injection)
A function f : A → B is said to be a one-one function or an injection if different elements of A have
different images in B. Thus, f : A → B is one-one ⇔ a ≠ b ⇒ f (a) ≠ f (b) for all a, b ∈ A ⇔ f (a) = f (b)
⇒ a = b for all a, b ∈ A.
If f : R → R is an injective map, then the graph of y = f (x) is either a strictly increasing curve or a
strictly decreasing curve. Consequently, dy/dx > 0 for all x or dy/dx < 0 for all x.
Number of one-one functions from A to B = nPm, if n ≥ m; and 0, if n < m,
where m = n(Domain) and n = n(Codomain).
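The nPm count can be cross-checked by brute force for small sets. A Python sketch (math.perm requires Python 3.8+; the sizes chosen are illustrative):

```python
from itertools import product
from math import perm  # perm(n, m) = nPm, available in Python 3.8+

def count_injections_brute(m, n):
    # Count maps f: {0..m-1} -> {0..n-1} that are one-one, by enumeration.
    return sum(len(set(f)) == m            # all m images distinct
               for f in product(range(n), repeat=m))

m, n = 3, 5   # |Domain| = 3, |Codomain| = 5
print(perm(n, m))                      # 5P3 = 60
print(count_injections_brute(m, n))    # 60, matching the formula
print(count_injections_brute(5, 3))    # 0, since here n < m
```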
Onto function (surjection) : A function f : A → B is an onto function if every element of B is the
f-image of some element of A. If there is some y ∈ B for which x, given by x = g(y), is not in A, then f is not onto.
Number of onto functions : If A and B are two sets having m and n elements respectively such that
1 ≤ n ≤ m, then the number of onto functions from A to B is
Σ (r = 1 to n) (–1)^(n – r) · nCr · r^m
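The inclusion-exclusion formula above can be verified against a brute-force count. A Python sketch with illustrative set sizes:

```python
from itertools import product
from math import comb

def count_surjections_formula(m, n):
    # sum over r = 1..n of (-1)^(n-r) * nCr * r^m
    return sum((-1) ** (n - r) * comb(n, r) * r ** m for r in range(1, n + 1))

def count_surjections_brute(m, n):
    # Enumerate all n^m maps f: {0..m-1} -> {0..n-1}; keep those hitting every value.
    codomain = set(range(n))
    return sum(set(f) == codomain for f in product(range(n), repeat=m))

print(count_surjections_formula(4, 2))  # 14
print(count_surjections_brute(4, 2))    # 14  (= 2^4 - 2, both constant maps excluded)
```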
Number of bijections : If A and B are finite sets and f : A B is a bijection, then A and B have the
same number of elements. If A has n elements, then the number of bijections from A to B is the total
number of arrangements of n items taken all at a time i.e. n!
(v) Into function
A function f : A → B is an into function if there exists an element in B having no preimage in A. In other
words, f : A → B is an into function if it is not an onto function.
Composition of functions
Let A, B and C be three non-void sets and let f : A → B, g : B → C be two functions. For each x ∈ A there
exists a unique element g( f (x)) ∈ C. This defines a function from A to C, called the composition of f and g,
denoted by gof : A → C and given by (gof)(x) = g( f (x)) for all x ∈ A.
The composition of functions is not commutative, i.e. fog ≠ gof in general.
The composition of functions is associative i.e. if f, g, h are three functions such that (fog)oh and
fo(goh) exist, then (fog)oh = fo(goh).
The composition of two bijections is a bijection i.e. if f and g are two bijections, then gof is also a
bijection.
Let f : A → B. Then foIA = IB of = f, i.e. the composition of any function with the identity function is the
function itself.
Inverse of an element
Let A and B be two sets and let f : A → B be a mapping. If a ∈ A is associated to b ∈ B under the function
f, then b is called the f-image of a and we write it as b = f (a). Also, a is called an inverse of b and we
write it as a = f –1(b).
Inverse of a function
If f : A → B is a bijection, we can define a new function from B to A which associates each element y ∈ B
to its preimage f –1(y) ∈ A.
Algorithm to find the inverse of a bijection
Let f : A → B be a bijection. To find the inverse of f we proceed as follows :
Step I : Put f (x) = y, where y ∈ B and x ∈ A.
Step II : Solve f (x) = y to obtain x in terms of y.
Step III : In the relation obtained in step II, replace x by f –1(y) to obtain the inverse of f.
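The three steps can be walked through for a concrete bijection. A Python sketch using the hypothetical example f(x) = 3x + 2 (chosen for illustration, not from the text):

```python
# Step I:   put y = 3x + 2.
# Step II:  solve for x:   x = (y - 2) / 3.
# Step III: rename:        f_inv(y) = (y - 2) / 3.

def f(x):
    return 3 * x + 2

def f_inv(y):
    return (y - 2) / 3

# f_inv undoes f on either side, as a bijection's inverse must:
for x in [-2.0, 0.0, 1.5, 7.0]:
    assert abs(f_inv(f(x)) - x) < 1e-12
    assert abs(f(f_inv(x)) - x) < 1e-12

print(f_inv(11))   # 3.0, since f(3) = 11
```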
Binary Operation
Let S be a non-void set. A function f from S × S to S is called a binary operation on S, i.e. f : S × S → S
is a binary operation on set S.
Generally binary operations are represented by symbols such as *, ⊕ etc. instead of letters f, g etc.
Addition on the set N of all natural numbers is a binary operation.
Subtraction is a binary operation on each of the sets Z, Q, R and C. But it is not a binary operation on N.
Division is not a binary operation on any of the sets N, Z, Q, R and C. However, it is a binary
operation on the set of all non-zero rational (real or complex) numbers.
Addition and multiplication are commutative binary operations on Z, but subtraction is not a
commutative binary operation on Z.
(v) Inverse of an element
Let * be a binary operation on a set S and let e be the identity element in S for the binary operation *.
An element a′ ∈ S is said to be an inverse of a ∈ S if a * a′ = e = a′ * a.
Addition on N has no identity element and accordingly N has no invertible element.
Multiplication on N has 1 as the identity element and no element other than 1 is invertible.
Let S be a finite set containing n elements. Then the total number of binary operations on S is n^(n²).
Let S be a finite set containing n elements. Then the total number of commutative binary operations
on S is n^(n(n + 1)/2).
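Both counting formulas can be confirmed by enumerating every possible operation table for a tiny set. A Python sketch (feasible only for very small n, since there are n^(n²) tables):

```python
from itertools import product

def count_binary_ops_brute(S):
    # A binary operation is any function S x S -> S: one table per choice
    # of output for each ordered pair, giving n^(n^2) tables in all.
    pairs = [(a, b) for a in S for b in S]
    total = commutative = 0
    for values in product(S, repeat=len(pairs)):
        op = dict(zip(pairs, values))
        total += 1
        if all(op[(a, b)] == op[(b, a)] for a in S for b in S):
            commutative += 1
    return total, commutative

n = 2
total, comm = count_binary_ops_brute(list(range(n)))
print(total)  # n**(n*n)          = 16 for n = 2
print(comm)   # n**(n*(n+1)//2)   = 8  for n = 2
```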
CHAPTER – 2: INVERSE TRIGONOMETRIC FUNCTIONS
The inverse of the sine function is defined as sin–1x = θ ⇔ sin θ = x, where θ ∈ [–π/2, π/2] and
x ∈ [–1, 1].
Thus, sin–1x has infinitely many values for a given x ∈ [–1, 1].
There is one value among these values which lies in the interval [–π/2, π/2]. This value is called the
principal value.
tan–1(tan θ) = θ and tan(tan–1 x) = x, provided that –∞ < x < ∞ and –π/2 < θ < π/2.
cot–1(cot θ) = θ and cot(cot–1 x) = x, provided that –∞ < x < ∞ and 0 < θ < π.
sin–1 x = cosec–1(1/x) or cosec–1 x = sin–1(1/x)
cos–1 x = sec–1(1/x) or sec–1 x = cos–1(1/x)
tan–1 x = cot–1(1/x) or cot–1 x = tan–1(1/x)
For 0 < x < 1 :
sin–1 x = cos–1 √(1 – x²) = tan–1(x/√(1 – x²)) = cot–1(√(1 – x²)/x) = sec–1(1/√(1 – x²)) = cosec–1(1/x)
cos–1 x = sin–1 √(1 – x²) = tan–1(√(1 – x²)/x) = cot–1(x/√(1 – x²)) = sec–1(1/x) = cosec–1(1/√(1 – x²))
tan–1 x = sin–1(x/√(1 + x²)) = cos–1(1/√(1 + x²)) = cot–1(1/x) = sec–1 √(1 + x²) = cosec–1(√(1 + x²)/x)
sin–1 x + cos–1 x = π/2, where –1 ≤ x ≤ 1
tan–1 x + cot–1 x = π/2, where –∞ < x < ∞
sec–1 x + cosec–1 x = π/2, where x ≤ –1 or x ≥ 1
tan–1 x + tan–1 y = tan–1((x + y)/(1 – xy)), if xy < 1
tan–1 x + tan–1 y = π + tan–1((x + y)/(1 – xy)), if xy > 1 and x, y > 0
tan–1 x – tan–1 y = tan–1((x – y)/(1 + xy)), if xy > –1
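The first of these identities is easy to spot-check numerically. A Python sketch with illustrative values satisfying xy < 1:

```python
import math

def tan_inv_sum(x, y):
    # Right-hand side of the identity, valid in the case xy < 1.
    return math.atan((x + y) / (1 - x * y))

for x, y in [(0.2, 0.3), (0.5, -0.7), (-0.1, 0.9)]:
    assert x * y < 1
    lhs = math.atan(x) + math.atan(y)
    # Left and right sides agree to floating-point precision.
    assert abs(lhs - tan_inv_sum(x, y)) < 1e-12

print("identity verified for the xy < 1 cases tested")
```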
sin–1 x + sin–1 y = sin–1(x√(1 – y²) + y√(1 – x²))
sin–1 x – sin–1 y = sin–1(x√(1 – y²) – y√(1 – x²))
cos–1 x + cos–1 y = cos–1(xy – √(1 – x²) √(1 – y²))
cos–1 x – cos–1 y = cos–1(xy + √(1 – x²) √(1 – y²))
sin–1(–x) = –sin–1 x, cos–1(–x) = π – cos–1 x
tan–1(–x) = –tan–1 x, cot–1(–x) = π – cot–1 x
2 tan–1 x = tan–1(2x/(1 – x²)) = sin–1(2x/(1 + x²)) = cos–1((1 – x²)/(1 + x²))
3 tan–1 x = tan–1((3x – x³)/(1 – 3x²))
CHAPTER – 3: MATRICES
Matrix
A matrix is an ordered rectangular array of numbers or functions. The numbers or functions are called the
elements or the entries of the matrix. We denote matrices by capital letters.
Order of a matrix
A matrix having m rows and n columns is called a matrix of order m × n or simply m × n matrix
(read as an m by n matrix).
In general, an m × n matrix has the following rectangular array:
A = [ a11 a12 ..... a1n ; a21 a22 ..... a2n ; ..... ; am1 am2 ..... amn ]
or A = [aij]m × n, where 1 ≤ i ≤ m, 1 ≤ j ≤ n and i, j ∈ N.
Thus the ith row consists of the elements ai1, ai2, ai3, ..., ain, while the jth column consists of the
elements a1j, a2j, a3j, ..., amj.
In general, aij is an element lying in the ith row and jth column. We can also call it the (i, j)th
element of A. The number of elements in an m × n matrix is equal to mn.
We can also represent any point (x, y) in a plane by a matrix (column or row) as [ x ; y ] or [ x y ].
Types of Matrices
A diagonal matrix is said to be a scalar matrix if its diagonal elements are equal, that is, a square
matrix B = [bij]n × n is said to be a scalar matrix if bij = 0, when i ≠ j
bij = k, when i = j, for some constant k.
We denote the identity matrix of order n by In. When order is clear from the context, we simply write
it as I.
Observe that a scalar matrix is an identity matrix when k = 1. But every identity matrix is clearly a
scalar matrix.
Equality of matrices
Two matrices A = [aij] and B = [bij] are said to be equal if
(i) they are of the same order
(ii) each element of A is equal to the corresponding element of B, that is aij = bij for all i and j.
Operations on Matrices
Addition of matrices
The sum of two matrices is a matrix obtained by adding the corresponding elements of the given
matrices. The two matrices must be of the same order for the sum to be defined.
Multiplication of a matrix by a scalar
If A = [aij]m × n is a matrix and k is a scalar, then kA is another matrix which is obtained by
multiplying each element of A by the scalar k.
In other words, kA = k [aij]m × n = [k (aij)]m × n, that is, (i, j)th element of kA is kaij for all possible
values of i and j.
Negative of a matrix The negative of a matrix is denoted by –A. We define –A = (– 1) A.
Difference of matrices If A = [aij], B = [bij] are two matrices of the same order, say m × n, then
difference A – B is defined as a matrix D = [dij], where dij = aij – bij, for all values of i and j. In other
words, D = A – B = A + (–1) B, that is sum of the matrix A and the matrix – B.
Properties of matrix addition
(i) Commutative Law If A = [aij], B = [bij] are matrices of the same order, say m × n, then A + B =
B + A.
(ii) Associative Law For any three matrices A = [aij], B = [bij], C = [cij] of the same order, say m × n,
(A + B) + C = A + (B + C).
(iii) Existence of additive identity Let A = [aij] be an m × n matrix and O be an m × n zero matrix,
then A + O = O + A = A. In other words, O is the additive identity for matrix addition.
(iv) The existence of additive inverse Let A = [aij]m × n be any matrix, then we have another matrix
as – A = [– aij]m × n such that A + (– A) = (– A) + A= O. So – A is the additive inverse of A or
negative of A.
Properties of scalar multiplication of a matrix
If A = [aij] and B = [bij] be two matrices of the same order, say m × n, and k and l are scalars, then
(i) k(A +B) = k A + kB, (ii) (k + l)A = k A + l A
Multiplication of matrices
The product of two matrices A and B is defined if the number of columns of A is equal to the
number of rows of B. Let A = [aij] be an m × n matrix and B = [bjk] be an n × p matrix. Then the
product of the matrices A and B is the matrix C of order m × p.
To get the (i, k)th element cik of the matrix C, we take the ith row of A and the kth column of B, multiply
them elementwise and take the sum of all these products. In other words, if A = [aij]m × n and
B = [bjk]n × p, then the ith row of A is [ ai1 ai2 ... ain ] and the kth column of B is [ b1k ; b2k ; ... ; bnk ]. Then
cik = ai1b1k + ai2b2k + ai3b3k + ... + ainbnk
The matrix C = [cik]m × p is the product of A and B.
If AB is defined, then BA need not be defined. For example, if A is a 2 × 3 matrix and B is a 3 × 3 matrix, then AB is defined but BA is not
defined, because B has 3 columns while A has only 2 (and not 3) rows. If A, B are, respectively, m × n,
k × l matrices, then both AB and BA are defined if and only if n = k and l = m. In particular, if both
A and B are square matrices of the same order, then both AB and BA are defined.
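The row-by-column rule cik = Σj aij bjk translates directly into code. A Python sketch with an illustrative 2 × 3 by 3 × 2 product:

```python
def matmul(A, B):
    # C[i][k] = sum over j of A[i][j] * B[j][k];
    # defined only when the number of columns of A equals the rows of B.
    m, n, p = len(A), len(A[0]), len(B[0])
    if n != len(B):
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
print(matmul(A, B))      # [[58, 64], [139, 154]], a 2 x 2 matrix
```

Note that here matmul(B, A) would be 3 × 3, not even the same order as AB, which illustrates non-commutativity.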
Non-commutativity of multiplication of matrices
Now, we shall see by an example that even if AB and BA are both defined, it is not necessary that
AB = BA.
Zero matrix as the product of two non zero matrices
We know that, for real numbers a, b if ab = 0, then either a = 0 or b = 0.
If the product of two matrices is a zero matrix, it is not necessary that one of the matrices is a zero
matrix.
Properties of multiplication of matrices
The multiplication of matrices possesses the following properties:
The associative law For any three matrices A, B and C, we have (AB) C = A (BC), whenever both
sides of the equality are defined.
The distributive law For any three matrices A, B and C:
A (B+C) = AB + AC
(A+B) C = AC + BC, whenever both sides of equality are defined.
The existence of multiplicative identity For every square matrix A, there exists an identity matrix I of
the same order such that IA = AI = A.
Transpose of a Matrix
If A = [aij] is an m × n matrix, then the matrix obtained by interchanging the rows and columns of A
is called the transpose of A. Transpose of the matrix A is denoted by A′ or (AT). In other words, if A
= [aij]m × n, then A′ = [aji]n × m.
Properties of transpose of the matrices
For any matrices A and B of suitable orders, we have
(i) (A′)′ = A, (ii) (kA)′ = kA′ (where k is any constant)
(iii) (A + B)′ = A′ + B′ (iv) (A B)′ = B′ A′
Symmetric and Skew Symmetric Matrices
A square matrix A = [aij] is said to be symmetric if A′ = A, that is, [aij] = [aji] for all possible values
of i and j.
A square matrix A = [aij] is said to be skew symmetric matrix if A′ = – A, that is aji = – aij for all
possible values of i and j. Now, if we put i = j, we have aii = – aii. Therefore 2aii = 0 or aii = 0 for all
i’s.
This means that all the diagonal elements of a skew symmetric matrix are zero.
Theorem 1 For any square matrix A with real number entries, A + A′ is a symmetric matrix and A –
A′ is a skew symmetric matrix.
Theorem 2 Any square matrix can be expressed as the sum of a symmetric and a skew symmetric
matrix.
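Theorem 2's decomposition A = (A + A′)/2 + (A – A′)/2 can be verified directly. A Python sketch with an illustrative 3 × 3 matrix:

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def add(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def scale(k, A):
    return [[k * x for x in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
At = transpose(A)
P = scale(0.5, add(A, scale(-1, scale(-1, At))))   # symmetric part (A + A')/2
P = scale(0.5, add(A, At))
Q = scale(0.5, add(A, scale(-1, At)))              # skew symmetric part (A - A')/2

assert P == transpose(P)                 # P' = P  (Theorem 1, first half)
assert Q == scale(-1, transpose(Q))      # Q' = -Q (Theorem 1, second half)
assert add(P, Q) == [[float(x) for x in row] for row in A]   # A = P + Q (Theorem 2)
print("A decomposed as symmetric + skew symmetric")
```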
Elementary Operation (Transformation) of a Matrix
There are six operations (transformations) on a matrix, three of which are due to rows and three due
to columns. These are known as elementary operations or transformations.
(i) The interchange of any two rows or two columns. Symbolically the interchange of ith and jth
rows is denoted by Ri ↔ Rj and interchange of ith and jth column is denoted by Ci ↔ Cj.
(ii) The multiplication of the elements of any row or column by a non zero number. Symbolically, the
multiplication of each element of the ith row by k, where k ≠ 0 is denoted by Ri → k Ri.
The corresponding column operation is denoted by Ci → kCi
(iii) The addition to the elements of any row or column of the corresponding elements of any other row
or column multiplied by any non-zero number.
Symbolically, the addition to the elements of ith row, the corresponding elements of jth row
multiplied by k is denoted by Ri → Ri + kRj.
The corresponding column operation is denoted by Ci → Ci + kCj.
Invertible Matrices
If A is a square matrix of order m, and if there exists another square matrix B of the same order m,
such that AB = BA = I, then B is called the inverse matrix of A and it is denoted by A–1. In that case
A is said to be invertible.
A rectangular matrix does not possess inverse matrix, since for products BA and AB to be defined
and to be equal, it is necessary that matrices A and B should be square matrices of the same order.
If B is the inverse of A, then A is also the inverse of B.
Theorem 3 (Uniqueness of inverse) Inverse of a square matrix, if it exists, is unique.
Theorem 4 If A and B are invertible matrices of the same order, then (AB)–1 = B–1A–1.
Inverse of a matrix by elementary operations
If A is a matrix such that A–1 exists, then to find A–1 using elementary row operations, write A = IA
and apply a sequence of row operations on A = IA till we get I = BA. The matrix B will be the
inverse of A. Similarly, if we wish to find A–1 using column operations, then write A = AI and
apply a sequence of column operations on A = AI till we get I = AB.
In case, after applying one or more elementary row (column) operations on A = IA (A = AI), if we
obtain all zeros in one or more rows of the matrix A on L.H.S., then A–1 does not exist.
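The row-operation procedure is, in effect, Gauss-Jordan elimination on the augmented block [A | I]. A Python sketch (the pivoting tolerance 1e-12 is an implementation choice, not from the text):

```python
def inverse_by_row_ops(A):
    # Row reduce [A | I] until the left block becomes I; the right block is then A^(-1).
    n = len(A)
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Pick a row with a usable (non-zero) pivot in this column.
        pivot = next((r for r in range(col, n) if abs(M[r][col]) > 1e-12), None)
        if pivot is None:
            raise ValueError("a zero row appeared: A is singular, A^(-1) does not exist")
        M[col], M[pivot] = M[pivot], M[col]          # R_col <-> R_pivot
        p = M[col][col]
        M[col] = [x / p for x in M[col]]             # R_col -> (1/p) R_col
        for r in range(n):
            if r != col and M[r][col]:
                k = M[r][col]
                M[r] = [x - k * y for x, y in zip(M[r], M[col])]  # R_r -> R_r - k R_col
    return [row[n:] for row in M]

A = [[2, 1],
     [5, 3]]
print(inverse_by_row_ops(A))   # [[3.0, -1.0], [-5.0, 2.0]]
```

One can check that multiplying A by the result gives the identity matrix.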
CHAPTER – 4: DETERMINANTS
Determinant
If A = [ a b ; c d ], then the determinant of A is written as |A| = | a b ; c d | = det (A) or Δ
(i) For matrix A, |A| is read as determinant of A and not modulus of A.
(ii) Only square matrices have determinants.
Determinant of a matrix of order one
Let A = [a] be a matrix of order 1. Then the determinant of A is defined to be equal to a.
Determinant of a matrix of order two
Let A = [ a11 a12 ; a21 a22 ] be a matrix of order 2 × 2. Then the determinant of A is defined as:
det (A) = |A| = Δ = | a11 a12 ; a21 a22 | = a11a22 – a21a12
Determinant of a matrix of order 3 × 3
Determinant of a matrix of order three can be determined by expressing it in terms of second order
JK
determinants. This is known as expansion of a determinant along a row (or a column). There are six
RA
ways of expanding a determinant of order 3 corresponding to each of three rows (R1, R2 and R3) and
EN
three columns (C1, C2 and C3) giving the same value as shown below.
H
EP
Expansion along first Row (R1)
Step 1
Multiply the first element a11 of R1 by (–1)^(1 + 1) [(–1)^(sum of suffixes in a11)] and with the second order
determinant obtained by deleting the elements of the first row (R1) and first column (C1) of | A |, as a11
lies in R1 and C1,
i.e., (–1)^(1 + 1) a11 | a22 a23 ; a32 a33 |
Step 2
Multiply the 2nd element a12 of R1 by (–1)^(1 + 2) [(–1)^(sum of suffixes in a12)] and the second order determinant
obtained by deleting the elements of the first row (R1) and 2nd column (C2)
of | A |, as a12 lies in R1 and C2,
i.e., (–1)^(1 + 2) a12 | a21 a23 ; a31 a33 |
Step 3
Multiply the third element a13 of R1 by (–1)^(1 + 3) [(–1)^(sum of suffixes in a13)] and the second order determinant
obtained by deleting the elements of the first row (R1) and third column (C3) of | A |, as a13 lies in R1 and C3,
i.e., (–1)^(1 + 3) a13 | a21 a22 ; a31 a32 |
Step 4
Now the expansion of determinant of A, that is, | A | written as sum of all three terms obtained in
steps 1, 2 and 3 above is given by
| A | = (–1)^(1 + 1) a11 | a22 a23 ; a32 a33 | + (–1)^(1 + 2) a12 | a21 a23 ; a31 a33 | + (–1)^(1 + 3) a13 | a21 a22 ; a31 a32 |
or |A| = a11 (a22a33 – a32 a23) – a12 (a21a33 – a31a23) + a13 (a21a32 – a31a22)
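The first-row expansion can be coded exactly as written. A Python sketch with an illustrative matrix:

```python
def det2(m):
    # Second order determinant | a b ; c d | = ad - bc
    (a, b), (c, d) = m
    return a * d - b * c

def det3_first_row(A):
    # Expansion along R1: signs (-1)^(1+j), minors from deleting row 1 and column j.
    minors = []
    for j in range(3):
        rows = [[A[i][k] for k in range(3) if k != j] for i in (1, 2)]
        minors.append(det2(rows))
    return A[0][0] * minors[0] - A[0][1] * minors[1] + A[0][2] * minors[2]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3_first_row(A))   # -3
# matches 1*(5*10 - 8*6) - 2*(4*10 - 7*6) + 3*(4*8 - 7*5) = 2 + 4 - 9 = -3
```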
Expansion along second row (R2)
| A | = | a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 |
Expanding along R2, we get
| A | = (–1)^(2 + 1) a21 | a12 a13 ; a32 a33 | + (–1)^(2 + 2) a22 | a11 a13 ; a31 a33 | + (–1)^(2 + 3) a23 | a11 a12 ; a31 a32 |
Expansion along the first Column (C1)
| A | = | a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 |
Expanding along C1, we get
| A | = (–1)^(1 + 1) a11 | a22 a23 ; a32 a33 | + (–1)^(2 + 1) a21 | a12 a13 ; a32 a33 | + (–1)^(3 + 1) a31 | a12 a13 ; a22 a23 |
For easier calculations, we shall expand the determinant along that row or column which contains the
maximum number of zeros.
Property 1
The value of the determinant remains unchanged if its rows and columns are interchanged.
If Ri = ith row and Ci = ith column, then for the interchange of rows and columns, we will
symbolically write Ci ↔ Ri.
Property 2
If any two rows (or columns) of a determinant are interchanged, then sign of determinant changes.
We can denote the interchange of rows by Ri ↔ Rj and interchange of columns by Ci ↔ Cj.
Property 3
If any two rows (or columns) of a determinant are identical (all corresponding elements are same),
then value of determinant is zero.
Property 4
If each element of a row (or a column) of a determinant is multiplied by a constant k, then its value
gets multiplied by k.
o By this property, we can take out any common factor from any one row or any one
column of a given determinant.
o If corresponding elements of any two rows (or columns) of a determinant are
proportional (in the same ratio), then its value is zero.
Property 5
If some or all elements of a row or column of a determinant are expressed as sum of two (or more)
terms, then the determinant can be expressed as sum of two (or more) determinants.
Property 6
If, to each element of any row or column of a determinant, the equimultiples of corresponding
elements of other row (or column) are added, then value of determinant remains the same, i.e., the
value of determinant remain same if we apply the operation Ri → Ri + kRj or Ci → Ci + k Cj.
If Δ1 is the determinant obtained by applying Ri → kRi or Ci → kCi to the determinant Δ, then Δ1
= kΔ.
If more than one operation like Ri → Ri + kRj is done in one step, care should be taken to see that
a row that is affected in one operation should not be used in another operation. A similar remark
applies to column operations.
Area of triangle
Area of a triangle whose vertices are (x1, y1), (x2, y2) and (x3, y3) is given by the expression
Δ = (1/2) | x1 y1 1 ; x2 y2 1 ; x3 y3 1 | …………………… (1)
Since area is a positive quantity, we always take the absolute value of the determinant in (1).
If area is given, use both positive and negative values of the determinant for calculation.
The area of the triangle formed by three collinear points is zero.
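Expression (1) with the absolute value gives a direct area routine, and a zero result flags collinear points. A Python sketch (the 3 × 3 determinant is expanded along its first row):

```python
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # | x1 y1 1 ; x2 y2 1 ; x3 y3 1 | expanded along the first row:
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(det) / 2          # area is taken as a positive quantity

print(triangle_area((0, 0), (4, 0), (0, 3)))   # 6.0 (a 3-4-5 right triangle)
print(triangle_area((0, 0), (1, 1), (2, 2)))   # 0.0, since the points are collinear
```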
Minors and Cofactors
Minor of an element aij of a determinant is the determinant obtained by deleting the ith row and jth
column in which the element aij lies. Minor of an element aij is denoted by Mij.
Minor of an element of a determinant of order n(n ≥ 2) is a determinant of order n – 1.
Cofactor of an element aij, denoted by Aij is defined by Aij = (–1)i + j Mij , where Mij is minor of aij.
If elements of a row (or column) are multiplied with cofactors of any other row (or column), then
their sum is zero.
Adjoint and Inverse of a Matrix
The adjoint of a square matrix A = [aij]n × n is defined as the transpose of the matrix [Aij]n × n, where
Aij is the cofactor of the element aij. Adjoint of the matrix A is denoted by adj A.
For a square matrix of order 2, given by A = [ a11 a12 ; a21 a22 ],
the adj A can be obtained by interchanging a11 and a22 and by changing the signs of a12 and a21,
i.e., adj A = [ a22 –a12 ; –a21 a11 ]
Theorem 1 If A is any given square matrix of order n, then A(adj A) = (adj A) A = |A| I, where I is
the identity matrix of order n
A square matrix A is said to be singular if |A| = 0.
A square matrix A is said to be non-singular if |A| ≠ 0
Theorem 2 If A and B are nonsingular matrices of the same order, then AB and BA are also
nonsingular matrices of the same order.
Theorem 3 The determinant of the product of matrices is equal to product of their respective
determinants, that is, |AB| = |A| |B|, where A and B are square matrices of the same order
Theorem 4 A square matrix A is invertible if and only if A is a nonsingular matrix. In that case,
A–1 = (1/|A|) adj A
Applications of Determinants and Matrices
Application of determinants and matrices for solving the system of linear equations in two or three
variables and for checking the consistency of the system of linear equations:
Consistent system A system of equations is said to be consistent if its solution (one or more) exists.
Inconsistent system A system of equations is said to be inconsistent if its solution does not exist.
Solution of system of linear equations using inverse of a matrix
Consider the system of equations
a1 x + b1 y + c1 z = d1
a2 x + b2 y + c2 z = d2
a3 x + b3 y + c3 z = d3
Let A = [ a1 b1 c1 ; a2 b2 c2 ; a3 b3 c3 ], X = [ x ; y ; z ] and B = [ d1 ; d2 ; d3 ]
Then the given system of equations can be written as AX = B, i.e.
[ a1 b1 c1 ; a2 b2 c2 ; a3 b3 c3 ] [ x ; y ; z ] = [ d1 ; d2 ; d3 ]
Case I
If A is a nonsingular matrix, then its inverse exists. Now AX = B
or A–1 (AX) = A–1 B (premultiplying by A–1)
or (A–1A) X = A–1 B (by associative property)
or I X = A–1 B
or X = A–1 B
This matrix equation provides unique solution for the given system of equations as inverse of a
matrix is unique. This method of solving system of equations is known as Matrix Method.
Case II
If A is a singular matrix, then |A| = 0. In this case, we calculate (adj A) B.
If (adj A) B ≠ O, (O being zero matrix), then solution does not exist and the system of equations is
called inconsistent.
If (adj A) B = O, then the system may be either consistent or inconsistent according as the system has
either infinitely many solutions or no solution.
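Case I's Matrix Method, X = A–1B with A–1 = (adj A)/|A|, can be sketched for a 3 × 3 system. The system solved below is illustrative, not from the text:

```python
def det2_minor(A, i, j):
    # 2x2 minor M_ij: delete row i and column j, then take the determinant.
    m = [[v for c, v in enumerate(r) if c != j]
         for k, r in enumerate(A) if k != i]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(A):
    # Expansion of |A| along the first row.
    return sum((-1) ** j * A[0][j] * det2_minor(A, 0, j) for j in range(3))

def solve_3x3(A, B):
    # X = A^(-1) B with A^(-1) = (adj A) / |A|; valid only when |A| != 0.
    dA = det3(A)
    if dA == 0:
        raise ValueError("A is singular; examine (adj A) B to classify the system")
    cof = [[(-1) ** (i + j) * det2_minor(A, i, j) for j in range(3)]
           for i in range(3)]
    adj = [[cof[j][i] for j in range(3)] for i in range(3)]  # transpose of cofactor matrix
    return [sum(adj[i][j] * B[j] for j in range(3)) / dA for i in range(3)]

# Illustrative system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27
A = [[1, 1, 1],
     [0, 2, 5],
     [2, 5, -1]]
B = [6, -4, 27]
print(solve_3x3(A, B))   # [5.0, 3.0, -2.0]
```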
CHAPTER – 10: VECTOR ALGEBRA
Vector
If we restrict the line l to the line segment AB, then a magnitude is prescribed on the line l with one of the two
directions, so that we obtain a directed line segment. Thus, a directed line segment has magnitude as
well as direction.
A quantity that has magnitude as well as direction
is called a vector.
A directed line segment is a vector, denoted as AB or simply as a, and read as ‘vector AB’ or
‘vector a’.
The point A from where the vector AB starts is called its initial point, and the point B where it ends
is called its terminal point. The distance between initial and terminal points of a vector is called the
magnitude (or length) of the vector, denoted as | AB | or | a | . The arrow indicates the direction of the
vector.
The vector OP having O and P as its initial and terminal points, respectively, is called the position
vector of the point P with respect to O. Using the distance formula, the magnitude of vector OP is given
by | OP | = √(x² + y² + z²)
EN
H
EP
ST
The position vectors of points A, B, C, etc., with respect to the origin O are denoted by
a, b and c , etc., respectively
Direction Cosines
Consider the position vector OP (or r) of a point P(x, y, z). The angles α, β, γ made
by the vector r with the positive directions of the x, y and z-axes respectively are called its direction
angles. The cosine values of these angles, i.e., cos α, cos β and cos γ, are called direction cosines of
the vector r, and usually denoted by l, m and n, respectively.
The triangle OAP is right-angled, and in it we have cos α = x/r (r stands for | r |). Similarly, from the
right-angled triangles OBP and OCP, we may write cos β = y/r and cos γ = z/r. Thus, the coordinates
of the point P may also be expressed as (lr, mr, nr). The numbers lr, mr and nr, proportional to the
direction cosines, are called direction ratios of vector r, and denoted as a, b and c, respectively.
l² + m² + n² = 1 but a² + b² + c² ≠ 1, in general.
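The direction cosines and the identity l² + m² + n² = 1 can be computed directly. A Python sketch with illustrative direction ratios (1, 2, 2):

```python
import math

def direction_cosines(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)   # |r| = sqrt(x^2 + y^2 + z^2)
    if r == 0:
        raise ValueError("the zero vector has no direction cosines")
    return x / r, y / r, z / r             # l = cos(alpha), m = cos(beta), n = cos(gamma)

l, m, n = direction_cosines(1, 2, 2)       # direction ratios a, b, c = 1, 2, 2
print((l, m, n))
# l^2 + m^2 + n^2 = 1, while a^2 + b^2 + c^2 = 9 here, not 1
assert abs(l * l + m * m + n * n - 1) < 1e-12
```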
Types of Vectors
Zero Vector A vector whose initial and terminal points coincide is called a zero vector (or null
vector), and denoted as 0. A zero vector cannot be assigned a definite direction as it has zero
magnitude; alternatively, it may be regarded as having any direction. The vectors AA, BB represent
the zero vector.
Unit Vector A vector whose magnitude is unity (i.e., 1 unit) is called a unit vector. The unit vector
in the direction of a given vector a is denoted by â.
Coinitial Vectors Two or more vectors having the same initial point are called coinitial vectors.
Collinear Vectors Two or more vectors are said to be collinear if they are parallel to the same line,
irrespective of their magnitudes and directions.
Equal Vectors Two vectors a and b are said to be equal, if they have the same magnitude and
direction regardless of the positions of their initial points, and written as a b .
Negative of a Vector A vector whose magnitude is the same as that of a given vector (say, AB),
but whose direction is opposite to it, is called the negative of the given vector.
For example, vector BA is the negative of the vector AB, and written as BA = – AB.
The vectors defined above are such that any of them may be subject to its parallel displacement
without changing its magnitude and direction. Such vectors are called free vectors.
Addition of Vectors
Triangle law of vector addition
If two vectors a and b are represented (in magnitude and direction) by two sides of a triangle
taken in order, then their sum (resultant) is represented by the third side c (= a + b) taken in the
opposite order.
Subtraction of Vectors : To subtract b from a , reverse the direction of b and add to a .
Geometrical Representation of Addition and Subtraction :
Parallelogram law of vector addition
If we have two vectors a and b represented by the two adjacent sides of a parallelogram in
magnitude and direction, then their sum a + b is represented in magnitude and direction by the
diagonal of the parallelogram through their common point. This is known as the parallelogram law
of vector addition.
Property 1
For any two vectors a and b, a + b = b + a (Commutative property)
Property 2
For any three vectors a, b and c, (a + b) + c = a + (b + c) (Associative property)
Property 3
For any vector a, we have a + 0 = 0 + a = a. Here, the zero vector 0 is called the additive identity
for the vector addition.
Property 4
For any vector a, we have a + (– a) = (– a) + a = 0.
Here, the vector – a is called the additive inverse for the vector addition.
Multiplication of a Vector by a Scalar
Let a be a given vector and λ a scalar. Then the product of the vector a by the scalar λ, denoted as
λ a , is called the multiplication of vector a by the scalar λ. Note that, λ a is also a vector, collinear
to the vector a . The vector λ a has the direction same (or opposite) to that of vector a according as
the value of λ is positive (or negative). Also, the magnitude of vector λ a is |λ| times the magnitude
of the vector a , i.e., | λ a | = | λ | | a |
Unit vector in the direction of vector a is given by â = (1/| a |)·a
Properties of Multiplication of Vectors by a Scalar
For vectors a , b and scalars m, n, we have
(i) m(– a) = (– m) a = – (m a)
(ii) (– m)(– a) = m a
(iii) m(n a ) = (mn) a = n(m a )
(iv) (m + n) a = m a + n a
(v) m( a + b ) = m a + m b .
If P1(x1, y1, z1) and P2(x2, y2, z2) are any two points, then the vector joining P1 and P2 is the vector
P1P2.
The magnitude of vector P1P2 is given by √((x2 – x1)² + (y2 – y1)² + (z2 – z1)²)
The position vector of the midpoint of AB is (a + b)/2.
Position vector of any point C on AB can always be taken as c = λa + μb, where λ + μ = 1.
n·OA + m·OB = (n + m)·OC, where C is a point on AB dividing it in the ratio m : n.
External Division : Let A and B be two points with position vectors a and b respectively and let C
be a point dividing AB externally in the ratio m : n. Then the position vector of C is given by
OC = (mb – na)/(m – n)
Two vectors a and b are collinear if and only if there exists a nonzero scalar λ such that b = λa. If
the vectors a and b are given in the component form, i.e. a = a1i + a2j + a3k and
b = b1i + b2j + b3k, then the two vectors are collinear if and only if
b1i + b2j + b3k = λ(a1i + a2j + a3k) ⇔ b1 = λa1, b2 = λa2, b3 = λa3 ⇔ b1/a1 = b2/a2 = b3/a3
If a = a1i + a2j + a3k, then a1, a2, a3 are also called direction ratios of a.
In case it is given that l, m, n are direction cosines of a vector, then li + mj + nk =
(cos α)i + (cos β)j + (cos γ)k is the unit vector in the direction of that vector, where α, β and γ are
the angles which the vector makes with the x, y and z axes respectively.
Observations
a.b is a real number.
Let a and b be two nonzero vectors. Then a·b = 0 if and only if a and b are perpendicular to each
other, i.e.
a·b = 0 ⇔ a ⊥ b
If θ = 0, then a·b = | a || b |. In particular, a·a = | a |², as θ in this case is 0.
If θ = π, then a·b = – | a || b |. In particular, a·(– a) = – | a |², as θ in this case is π.
In view of the Observations 2 and 3, for mutually perpendicular unit vectors i, j and k , we have
i·i = j·j = k·k = 1
i·j = j·k = k·i = 0
The angle between two nonzero vectors a and b is given by
cos θ = (a·b)/(| a || b |), or θ = cos–1((a·b)/(| a || b |))
The scalar product is commutative, i.e. a·b = b·a
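The angle formula cos θ = a·b/(|a||b|) is straightforward to implement. A Python sketch with illustrative vectors:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def angle(a, b):
    # theta = cos^(-1)( a.b / (|a||b|) ), for non-zero vectors a and b
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

a, b = (1, 0, 0), (1, 1, 0)
print(math.degrees(angle(a, b)))        # 45 degrees, up to rounding

assert dot((1, 2, 3), (3, 0, -1)) == 0  # a.b = 0, so these vectors are perpendicular
assert dot(a, b) == dot(b, a)           # commutativity of the scalar product
```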
Property 1 (Distributivity of scalar product over addition) Let a, b and c be any three vectors.
Then a·(b + c) = a·b + a·c
Property 2 Let a and b be any two vectors, and λ be any scalar. Then (λa)·b = λ(a·b) = a·(λb)
Observations
a × b is a vector.
Let a and b be two nonzero vectors. Then a × b = 0 if and only if a and b are parallel (or collinear) to each other, i.e.,
a × b = 0 ⇔ a ∥ b
In particular, a × a = 0 and a × (–a) = 0, since in the first situation θ = 0 and in the second one θ = π, making the value of sin θ 0.
If θ = π/2, then |a × b| = |a||b|
In view of Observations 2 and 3, for mutually perpendicular unit vectors i, j and k, we have
i × i = j × j = k × k = 0
i × j = k, j × k = i, k × i = j
In terms of vector product, the angle between two vectors a and b may be given as
sin θ = |a × b|/(|a||b|)
The vector product is not commutative, as a × b = –(b × a)
In view of the above observations, we have j × i = –k, k × j = –i, i × k = –j
If a and b represent the adjacent sides of a triangle, then its area is given by ½|a × b|.
If a and b represent the adjacent sides of a parallelogram, then its area is given by |a × b|.
The area of a parallelogram with diagonals a and b is ½|a × b|.
(a × b)/|a × b| is a unit vector perpendicular to the plane of a and b.
(b × a)/|b × a| is also a unit vector perpendicular to the plane of a and b.
Property 3 (Distributivity of vector product over addition): If a, b and c are any three vectors and λ is a scalar, then
(i) a × (b + c) = a × b + a × c
(ii) λ(a × b) = (λa) × b = a × (λb)
Let a and b be two vectors given in component form as a = a1 i + a2 j + a3 k and b = b1 i + b2 j + b3 k, respectively. Then their cross product may be given by
        | i   j   k  |
a × b = | a1  a2  a3 |
        | b1  b2  b3 |
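The determinant expansion above can be coded directly; the same block checks the area formulas for the parallelogram and the triangle on a and b (the helper name is hypothetical):

```python
import math

def cross(a, b):
    """Cross product via the determinant |i j k; a1 a2 a3; b1 b2 b3|."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2, a3 * b1 - a1 * b3, a1 * b2 - a2 * b1)

a, b = (2, 0, 0), (0, 3, 0)
c = cross(a, b)
mag = math.sqrt(sum(x * x for x in c))
print(c)          # (0, 0, 6): along k, since theta = pi/2 and |a||b| = 6
print(mag)        # 6.0 = area of the parallelogram with sides a, b
print(0.5 * mag)  # 3.0 = area of the triangle with sides a, b
```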
CHAPTER – 12: LINEAR PROGRAMMING
A half-plane in the xy-plane is called a closed half-plane if the points on the line separating the half-
plane are also included in the half-plane.
The graph of a linear inequation involving the sign ‘≤’ or ‘≥’ is a closed half-plane.
A half-plane in the xy-plane is called an open half-plane if the points on the line separating the half-
plane are not included in the half-plane.
The graph of a linear inequation involving the sign ‘<’ or ‘>’ is an open half-plane.
Two or more linear inequations are said to constitute a system of linear inequations.
The solution set of a system of linear inequations is defined as the intersection of solution sets of
linear inequations in the system.
A linear inequation is also called a linear constraint, as it restricts the freedom of choice of the values of x and y.
LINEAR PROGRAMMING
A Linear Programming Problem is one that is concerned with finding the optimal value (maximum
or minimum value) of a linear function (called objective function) of several variables (say x and y),
subject to the conditions that the variables are non-negative and satisfy a set of linear inequalities
(called linear constraints).
The term linear implies that all the mathematical relations used in the problem are linear relations
while the term programming refers to the method of determining a particular programme or plan of
action.
Objective function Linear function Z = ax + by, where a, b are constants, which has to be
maximised or minimized is called a linear objective function. Variables x and y are called decision
variables.
Constraints The linear inequalities or equations or restrictions on the variables of a linear
programming problem are called constraints. The conditions x ≥ 0, y ≥ 0 are called non-negative
restrictions.
Optimisation problem A problem which seeks to maximise or minimise a linear function (say of
two variables x and y) subject to certain constraints as determined by a set of linear inequalities is
called an optimisation problem. Linear programming problems are a special type of optimisation problem.
GRAPHICAL METHOD OF SOLVING LINEAR PROGRAMMING PROBLEMS
Feasible region The common region determined by all the constraints including non-negative
constraints x, y ≥ 0 of a linear programming problem is called the feasible region (or solution
region) for the problem. The region other than feasible region is called an infeasible region.
Feasible solutions Points within and on the boundary of the feasible region represent feasible
solutions of the constraints.
Any point outside the feasible region is called an infeasible solution.
Optimal (feasible) solution: Any point in the feasible region that gives the optimal value (maximum
or minimum) of the objective function is called an optimal solution.
Theorem 1 Let R be the feasible region (convex polygon) for a linear programming problem and let
Z = ax + by be the objective function. When Z has an optimal value (maximum or minimum), where
the variables x and y are subject to constraints described by linear inequalities, this optimal value
must occur at a corner point (vertex) of the feasible region.
A corner point of a feasible region is a point in the region which is the intersection of two boundary
lines.
Theorem 2 Let R be the feasible region for a linear programming problem, and let Z = ax + by be
the objective function. If R is bounded, then the objective function Z has both a maximum and a
minimum value on R and each of these occurs at a corner point (vertex) of R.
A feasible region of a system of linear inequalities is said to be bounded if it can be enclosed within a circle. Otherwise, it is called unbounded. Unbounded means that the feasible region does extend indefinitely in any direction.
If R is unbounded, then a maximum or a minimum value of the objective function may not exist.
However, if it exists, it must occur at a corner point of R. (By Theorem 1).
The method of solving a linear programming problem is referred to as the Corner Point Method. The method comprises the following steps:
1. Find the feasible region of the linear programming problem and determine its corner points (vertices)
either by inspection or by solving the two equations of the lines intersecting at that point.
2. Evaluate the objective function Z = ax + by at each corner point. Let M and m, respectively, denote the largest and smallest of the values of Z at these points.
3. (i) When the feasible region is bounded, M and m are the maximum and minimum values of Z.
(ii) In case, the feasible region is unbounded, we have:
4. (a) M is the maximum value of Z, if the open half plane determined by ax + by > M has no point in
common with the feasible region. Otherwise, Z has no maximum value.
(b) Similarly, m is the minimum value of Z, if the open half plane determined by ax + by < m has no
point in common with the feasible region. Otherwise, Z has no minimum value.
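The Corner Point Method above can be sketched for a small, hypothetical bounded LPP — the problem data and helper name are illustrative, not from the source:

```python
def corner_point_method(corners, a, b):
    """Evaluate Z = ax + by at each corner point (step 2) and return
    (M, argmax, m, argmin). Finding the corner points themselves (step 1)
    is assumed done already, by inspection or by solving line pairs."""
    values = {pt: a * pt[0] + b * pt[1] for pt in corners}
    big = max(values, key=values.get)
    small = min(values, key=values.get)
    return values[big], big, values[small], small

# Hypothetical bounded LPP: maximise Z = 3x + 2y subject to
# x + y <= 4, x <= 3, x >= 0, y >= 0. Corner points found by inspection:
corners = [(0, 0), (3, 0), (3, 1), (0, 4)]
M, at_max, m, at_min = corner_point_method(corners, 3, 2)
print(M, at_max)  # 11 (3, 1): maximum of Z on the bounded region
print(m, at_min)  # 0 (0, 0): minimum of Z on the bounded region
```

Because the region here is bounded, Theorem 2 guarantees M and m are the maximum and minimum of Z; for an unbounded region the extra half-plane check in step 4 would be needed.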
DIFFERENT TYPES OF LINEAR PROGRAMMING PROBLEMS
Working Rule
(i) Identify the unknown variables in the given Linear programming problems. Denote them by
x and y.
(ii) Formulate the objective function in terms of x and y. Also, observe whether it is to be maximised or minimised.
(iii) Write the linear constraints in the form of linear inequations formed by the given conditions.
(iv) Consider the linear equations of their corresponding linear inequations.
(v) Draw the graph of each linear equation.
(vi) Check the solution region of each linear inequation by testing points, and then shade the common region of all the linear inequations.
(vii) Determine the corner points of the feasible region.
(viii) Evaluate the value of the objective function at each corner point obtained in the above step.
(ix) If the feasible region is unbounded, the value obtained may or may not be the minimum or maximum value of the objective function. In that case, form a linear inequation by equating the objective function to the value obtained (using < for a minimum or > for a maximum), draw its graph, and check whether the resulting open half-plane has any point in common with the feasible region.
CHAPTER – 13: PROBABILITY
Trial and Elementary Events : Let a random experiment be repeated under identical conditions.
Then the experiment is called a trial and the possible outcomes of the experiment are known as
elementary events or cases.
Elementary events are also known as indecomposable events.
Decomposable Events/Compound Events : Events obtained by combining together two or more
elementary events are known as the compound events or decomposable events.
Exhaustive Number of Cases : The total number of possible outcomes of a random experiment in a
trial is known as the exhaustive number of cases.
The total number of elementary events of a random experiment is called the exhaustive number of
cases.
Mutually Exclusive Events : Events are said to be mutually exclusive or incompatible if the occurrence of any one of them prevents the occurrence of all the others, i.e., if no two or more of them can occur simultaneously in the same trial.
Equally Likely Events : Events are said to be equally likely if there is no reason for any one of them to occur in preference to the others.
The number of cases favourable to an event in a trial is the total number of elementary events such that the occurrence of any one of them ensures the happening of the event.
Independent Events : Events are said to be independent if the happening (or non-happening) of one
event is not affected by the happening (or non-happening) of others.
Classical Definition of Probability of An Event : If there are n elementary events associated with a
random experiment and m of them are favourable to an event A, then the probability of happening of
A is denoted by P(A) and is defined as the ratio m/n, i.e.
P(A) = m/n
0 ≤ P(A) ≤ 1.
Ā denotes the non-happening of A.
P(Ā) = 1 – P(A), i.e. P(A) + P(Ā) = 1
If P(A) = 1, then A is called a certain event, and A is called an impossible event if P(A) = 0.
The odds in favour of the occurrence of the event A are defined by m : (n – m), i.e., P(A) : P(Ā), and the odds against the occurrence of A are defined by (n – m) : m, i.e., P(Ā) : P(A).
Sample Space : The set of all possible outcomes of a random experiment is called the sample space
associated with it and it is generally denoted by S.
If E1 , E2 , ..., En are the possible outcomes of a random experiment, then S = {E1 , E2 , ..., En }. Each
element of S is called a sample point.
Event : A subset of the sample space associated with a random experiment is called an event.
Elementary Events : Single element subsets of the sample space associated with a random
experiment are known as the elementary events or indecomposable events.
Compound Events : Those subsets of the sample space S associated to an experiment which are
disjoint union of single element subsets of the sample space S are known as the compound or
decomposable events.
Impossible and Certain Event : Let S be the sample space associated with a random experiment. Then ∅ and S, being subsets of S, are events. The event ∅ is called an impossible event and the event S is known as a certain event.
Occurrence or Happening of An Event : Let S be the sample space associated with a random experiment and let A be an event. If w is an outcome of a trial such that w ∈ A, then we say that the event A has occurred. If w ∉ A, we say that the event A has not occurred.
Algebra of Events
Mutually Exclusive Events : Let S be the sample space associated with a random experiment and let A1 and A2 be two events. Then A1 and A2 are mutually exclusive events if A1 ∩ A2 = ∅.
Mutually Exclusive and Exhaustive System of Events : Let S be the sample space associated with
a random experiment. Let A1 , A2 , ..., An be subsets of S such that
(i) Ai ∩ Aj = ∅ for i ≠ j, and (ii) A1 ∪ A2 ∪ ... ∪ An = S.
If E1, E2, ..., En are elementary events associated with a random experiment, then
(i) Ei ∩ Ej = ∅ for i ≠ j and (ii) E1 ∪ E2 ∪ ... ∪ En = S.
Favourable Events : Let S be the sample space associated with a random experiment and let A ⊆ S. Then the elementary events belonging to A are known as the events favourable to A.
Probability of An Event : Let S be the sample space associated with a random experiment, and let A be a subset of S representing an event. Then the probability of the event A is defined as
P(A) = (Number of elements in A)/(Number of elements in S) = n(A)/n(S)
P(∅) = 0, P(S) = 1.
Addition theorem for two events : If A and B are two events associated with a random experiment, then P(A ∪ B) = P(A) + P(B) – P(A ∩ B).
If A and B are mutually exclusive events, then P(A ∩ B) = 0; therefore P(A ∪ B) = P(A) + P(B).
This is the addition theorem for mutually exclusive events.
Addition theorem for three events : If A, B, C are three events associated with a random
experiment then,
P(A ∪ B ∪ C) = P(A) + P(B) + P(C) – P(A ∩ B) – P(B ∩ C) – P(A ∩ C) + P(A ∩ B ∩ C).
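The addition theorem can be verified on a concrete sample space — here a hypothetical single throw of a fair die, with probabilities kept exact via fractions:

```python
from fractions import Fraction

S = set(range(1, 7))     # sample space of one die throw
A = {2, 4, 6}            # event: an even number
B = {3, 6}               # event: a multiple of 3

def P(E):
    """Classical probability: favourable cases / exhaustive cases."""
    return Fraction(len(E), len(S))

lhs = P(A | B)                       # P(A U B) directly
rhs = P(A) + P(B) - P(A & B)         # addition theorem
print(lhs, rhs)  # 2/3 2/3
```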
Let A and B be two events associated with a random experiment. Then
(i) P(Ā ∩ B) = P(B) – P(A ∩ B) (ii) P(A ∩ B̄) = P(A) – P(A ∩ B)
P(Ā ∩ B) is known as the probability of occurrence of B only.
P(A ∩ B̄) is known as the probability of occurrence of A only.
If B ⊆ A, then (i) P(A ∩ B̄) = P(A) – P(B) (ii) P(B) ≤ P(A).
Conditional Probability
Multiplication Theorem : If A and B are two events, then
P(A ∩ B) = P(A) P(B/A), if P(A) ≠ 0
P(A ∩ B) = P(B) P(A/B), if P(B) ≠ 0.
P(B/A) = P(A ∩ B)/P(A) and P(A/B) = P(A ∩ B)/P(B)
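The multiplication theorem and the conditional-probability formulas can be checked on the same kind of hypothetical die-throw example:

```python
from fractions import Fraction

S = set(range(1, 7))
A = {2, 4, 6}    # event: an even number
B = {4, 5, 6}    # event: a number greater than 3

def P(E):
    return Fraction(len(E), len(S))

# Conditional probability: P(B/A) = P(A ∩ B) / P(A)
P_B_given_A = P(A & B) / P(A)
print(P_B_given_A)                      # 2/3
# Multiplication theorem: P(A ∩ B) = P(A) P(B/A)
print(P(A) * P_B_given_A == P(A & B))   # True
```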
Extension of multiplication theorem : If A1, A2 ,..., An are n events related to a random experiment,
then
P(A1 ∩ A2 ∩ A3 ∩ ... ∩ An) = P(A1) P(A2/A1) P(A3/A1 ∩ A2) ... P(An/A1 ∩ A2 ∩ ... ∩ An–1)
where P(Ai / A1 ∩ A2 ∩ ... ∩ Ai–1) represents the conditional probability of the event Ai, given that the events A1, A2, ..., Ai–1 have already happened.
Independent Events : Events are said to be independent if the occurrence or non-occurrence of one does not affect the probability of the occurrence or non-occurrence of the other.
If A and B are two independent events associated with a random experiment, then P(A/B) = P(A) and P(B/A) = P(B), and vice versa.
If A and B are independent events associated with a random experiment, then P(A ∩ B) = P(A) P(B), i.e., the probability of simultaneous occurrence of two independent events is equal to the product of their probabilities.
If A1, A2, ..., An are independent events associated with a random experiment, then P(A1 ∩ A2 ∩ A3 ∩ ... ∩ An) = P(A1) P(A2) ... P(An).
If A1, A2, ..., An are n independent events associated with a random experiment, then P(Ā1 ∩ Ā2 ∩ Ā3 ∩ ... ∩ Ān) = P(Ā1) P(Ā2) ... P(Ān).
Events A1, A2, ..., An are independent or mutually independent if the probability of the simultaneous occurrence of (any) finite number of them is equal to the product of their separate probabilities, while these events are pairwise independent if P(Ai ∩ Aj) = P(Ai) P(Aj) for all i ≠ j.
The Law of Total Probability : Let S be the sample space and let E1, E2 , ..., En be n mutually
exclusive and exhaustive events associated with a random experiment.
If A is any event which occurs with E1 or E2 or ... or En, then
P(A) = P(E1) P(A/E1) + P(E2) P(A/E2) + ... + P(En) P(A/En)
Bayes' Rule : Let S be the sample space and let E1, E2, ..., En be n mutually exclusive and exhaustive events associated with a random experiment. If A is any event which occurs with E1 or E2 or ... or En, then
P(Ei/A) = P(Ei) P(A/Ei) / [P(E1) P(A/E1) + P(E2) P(A/E2) + ... + P(En) P(A/En)], i = 1, 2, ..., n
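The law of total probability and Bayes' rule can be illustrated with a hypothetical two-hypothesis setup (the numbers are illustrative, not from the source):

```python
from fractions import Fraction

# Hypothetical two-urn experiment: E1, E2 are equally likely hypotheses
# (which urn was chosen); A is the event "a red ball is drawn".
priors = {"E1": Fraction(1, 2), "E2": Fraction(1, 2)}          # P(Ei)
likelihood = {"E1": Fraction(3, 5), "E2": Fraction(1, 5)}      # P(A/Ei)

# Law of total probability: P(A) = sum over i of P(Ei) P(A/Ei)
P_A = sum(priors[e] * likelihood[e] for e in priors)
print(P_A)  # 2/5

# Bayes' rule: posterior P(Ei/A) = P(Ei) P(A/Ei) / P(A)
posterior = {e: priors[e] * likelihood[e] / P_A for e in priors}
print(posterior["E1"])  # 3/4
```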
The events E1, E2, ..., En are usually referred to as 'hypotheses', and the probabilities P(E1), P(E2), ..., P(En) are known as the 'a priori' probabilities, as they exist before we obtain any information from the experiment.
The probabilities P(A/Ei), i = 1, 2, ..., n, are called the likelihood probabilities, as they tell us how likely the event A under consideration is to occur, given each hypothesis Ei.
The probabilities P(Ei/A), i = 1, 2, ..., n, are called the posterior probabilities, as they are determined after the results of the experiment are known.
Random Variable : A random variable is a real valued function having domain as the sample space
associated with a given random experiment.
A random variable associated with a given random experiment associates every event to a unique
real number.
Probability Distribution : If a random variable X takes values x1, x2, ..., xn with respective probabilities p1, p2, ..., pn, then
X : x1 x2 ... xn
P(X) : p1 p2 ... pn
is called the probability distribution of X.
Binomial Distribution : A random variable X which takes values 0, 1, 2, ..., n is said to follow the binomial distribution if its probability distribution function is given by
P(X = r) = nCr p^r q^(n–r), r = 0, 1, 2, ..., n
where p, q > 0 such that p + q = 1.
If n trials constitute an experiment and the experiment is repeated N times, then the frequencies of 0,
1, 2, ..., n successes are given by
N×P(X = 0), N×P (X=1), N×P (X=2), ..., N×P (X=n).
If X is a random variable with probability distribution
X : x1 x2 ... xn
P(X) : p1 p2 ... pn
then the mean or expectation of X is defined as
X̄ = E(X) = Σ pi xi (sum over i = 1 to n)
and the variance of X is defined as
Var(X) = Σ pi (xi – E(X))² = Σ pi xi² – [E(X)]²
or Var(X) = E(X²) – [E(X)]²
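The mean and variance formulas can be checked on a concrete distribution — here X is the number shown on a hypothetical fair die, with exact fractions:

```python
from fractions import Fraction

# Hypothetical distribution: X = number shown on a fair die.
xs = [1, 2, 3, 4, 5, 6]
ps = [Fraction(1, 6)] * 6

mean = sum(p * x for p, x in zip(ps, xs))        # E(X) = sum of pi*xi
e_x2 = sum(p * x * x for p, x in zip(ps, xs))    # E(X^2) = sum of pi*xi^2
var = e_x2 - mean ** 2                           # Var(X) = E(X^2) - [E(X)]^2
print(mean)  # 7/2
print(var)   # 35/12
```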
The mean of the binomial variate X ~ B(n, p) is np.
The variance of the binomial variate X ~ B(n, p) is npq, where p + q = 1.
Maximum Value of P(X = r) for given Values of n and p for a Binomial Variate X
If (n + 1)p is an integer, say m, then P(X = r) = nCr p^r (1 – p)^(n–r) is maximum when r = m or r = m – 1.
If (n + 1)p is not an integer, then P(X = r) is maximum when r = [(n + 1)p], the integral part of (n + 1)p.
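The rule for the most probable value of a binomial variate can be verified by brute force over all r (a minimal sketch; the pmf helper is hypothetical):

```python
from math import comb

def binom_pmf(n, p, r):
    """P(X = r) = nCr p^r (1-p)^(n-r) for X ~ B(n, p)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# (n + 1)p not an integer: n = 10, p = 0.3 -> (n+1)p = 3.3, so mode = 3
n, p = 10, 0.3
mode = max(range(n + 1), key=lambda r: binom_pmf(n, p, r))
print(mode, int((n + 1) * p))  # 3 3

# (n + 1)p an integer: n = 9, p = 0.5 -> (n+1)p = 5; r = 5 and r = 4 tie
print(binom_pmf(9, 0.5, 4) == binom_pmf(9, 0.5, 5))  # True
```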