
Code No: 05411101                                                            Set No. 1

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD


IV B.Tech I-Sem I Mid-Term Examinations, Aug/Sept. 2009
IMAGE PROCESSING AND PATTERN RECOGNITION
Objective Exam
Name: ______________________________          Hall Ticket No.          A

Answer All Questions. All Questions Carry Equal Marks.    Time: 20 Min.    Marks: 20.
I.  Choose the correct alternative:

1.  Character recognition utilizes the following concept:        [   ]
    a. Heuristic      b. Linguistic      c. Syntactic      d. Membership roster

2.  The conditional average risk/loss function r in a Bayes Classifier is given by:        [   ]
    a. Lij p(wi/x)      b. Lij p(wi/x)      c. Lij p(x/wi)      d. Lji p(wi/x)

3.  For a function f, the negative of the gradient points in the direction of:        [   ]
    a. Max rate of decrease of f      b. Min rate of decrease of f
    c. Max rate of increase of f      d. Min rate of increase of f

4.  The minimum number of decision functions for M pattern classes using case 2 is:        [   ]
    a. M      b. M+1      c. M(M-1)      d. M(M-1)/2

5.  The LMSE algorithm converges when:        [   ]
    a. w > 0      b. X > 0      c. Xw > 0      d. Xw > 0

6.  Which of the following does not constitute a stage in the design of an automatic
    pattern recognition model of a system:        [   ]
    a. Observation      b. Sensing      c. Feature selection      d. Preprocessing

7.  In a two-pattern-class case, a linear decision function d(x) will classify a sample x
    into a pattern class when:        [   ]
    a. d(x) > 0 or d(x) < 0      b. d(x) < 0 or d(x) >= 0      c. d(x) = 0      d. d(x) is not used

8.  The classifier that minimizes the total expected loss is called:        [   ]
    a. Optimal classifier      b. Likelihood classifier      c. Bayes Classifier      d. Best classifier

9.  In the LMSE algorithm, X+ is called:        [   ]
    a. Least mean      b. Squared error      c. Generalized Inverse      d. Gradient

10. In the minimum distance algorithm, the decision function d(x) assigns x to wi if:        [   ]
    a. di(x) = dj(x)      b. di(x) < dj(x)      c. di(x) > dj(x)      d. di(x) cannot help

Cont...2

Code No: 05411101                              :2:                              Set No. 1

II.  Fill in the blanks:

11. The recognition function rij(x) represents a __________________.

12. When a priori probabilities are not available, ____________ criterion offers an alternative approach.

13. Minimum distance classification is also known as _________________ matching.

14. The least mean square error algorithm is also known as the ____________________ algorithm.

15. A ___________ is a category determined by some given common attributes.

16. Test of separability is an important feature in the ____________________ algorithm.

17. The reward-punishment concept is also known as the ________________ algorithm.

18. Pattern classification by distance functions will yield satisfactory results only when
    patterns have ________________ properties.

19. |e| represents the _______________ of the vector e.

20. In the gradient descent algorithm, w is incremented in the _________________ gradient of J(w,x).

-oOo-

Code No: 05411101                                                            Set No. 2

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD


IV B.Tech I-Sem I Mid-Term Examinations, Aug/Sept. 2009
IMAGE PROCESSING AND PATTERN RECOGNITION
Objective Exam
Name: ______________________________          Hall Ticket No.          A

Answer All Questions. All Questions Carry Equal Marks.    Time: 20 Min.    Marks: 20.
I.  Choose the correct alternative:

1.  The minimum number of decision functions for M pattern classes using case 2 is:        [   ]
    a. M      b. M+1      c. M(M-1)      d. M(M-1)/2

2.  The LMSE algorithm converges when:        [   ]
    a. w > 0      b. X > 0      c. Xw > 0      d. Xw > 0

3.  Which of the following does not constitute a stage in the design of an automatic
    pattern recognition model of a system:        [   ]
    a. Observation      b. Sensing      c. Feature selection      d. Preprocessing

4.  In a two-pattern-class case, a linear decision function d(x) will classify a sample x
    into a pattern class when:        [   ]
    a. d(x) > 0 or d(x) < 0      b. d(x) < 0 or d(x) >= 0      c. d(x) = 0      d. d(x) is not used

5.  The classifier that minimizes the total expected loss is called:        [   ]
    a. Optimal classifier      b. Likelihood classifier      c. Bayes Classifier      d. Best classifier

6.  In the LMSE algorithm, X+ is called:        [   ]
    a. Least mean      b. Squared error      c. Generalized Inverse      d. Gradient

7.  In the minimum distance algorithm, the decision function d(x) assigns x to wi if:        [   ]
    a. di(x) = dj(x)      b. di(x) < dj(x)      c. di(x) > dj(x)      d. di(x) cannot help

8.  Character recognition utilizes the following concept:        [   ]
    a. Heuristic      b. Linguistic      c. Syntactic      d. Membership roster

9.  The conditional average risk/loss function r in a Bayes Classifier is given by:        [   ]
    a. Lij p(wi/x)      b. Lij p(wi/x)      c. Lij p(x/wi)      d. Lji p(wi/x)

10. For a function f, the negative of the gradient points in the direction of:        [   ]
    a. Max rate of decrease of f      b. Min rate of decrease of f
    c. Max rate of increase of f      d. Min rate of increase of f

Cont...2

Code No: 05411101                              :2:                              Set No. 2

II.  Fill in the blanks:

11. The least mean square error algorithm is also known as the ____________________ algorithm.

12. A ___________ is a category determined by some given common attributes.

13. Test of separability is an important feature in the ____________________ algorithm.

14. The reward-punishment concept is also known as the ________________ algorithm.

15. Pattern classification by distance functions will yield satisfactory results only when
    patterns have ________________ properties.

16. |e| represents the _______________ of the vector e.

17. In the gradient descent algorithm, w is incremented in the _________________ gradient of J(w,x).

18. The recognition function rij(x) represents a __________________.

19. When a priori probabilities are not available, ____________ criterion offers an alternative approach.

20. Minimum distance classification is also known as _________________ matching.

-oOo-

Code No: 05411101                                                            Set No. 3

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD


IV B.Tech I-Sem I Mid-Term Examinations, Aug/Sept. 2009
IMAGE PROCESSING AND PATTERN RECOGNITION
Objective Exam
Name: ______________________________          Hall Ticket No.          A

Answer All Questions. All Questions Carry Equal Marks.    Time: 20 Min.    Marks: 20.
I.  Choose the correct alternative:

1.  Which of the following does not constitute a stage in the design of an automatic
    pattern recognition model of a system:        [   ]
    a. Observation      b. Sensing      c. Feature selection      d. Preprocessing

2.  In a two-pattern-class case, a linear decision function d(x) will classify a sample x
    into a pattern class when:        [   ]
    a. d(x) > 0 or d(x) < 0      b. d(x) < 0 or d(x) >= 0      c. d(x) = 0      d. d(x) is not used

3.  The classifier that minimizes the total expected loss is called:        [   ]
    a. Optimal classifier      b. Likelihood classifier      c. Bayes Classifier      d. Best classifier

4.  In the LMSE algorithm, X+ is called:        [   ]
    a. Least mean      b. Squared error      c. Generalized Inverse      d. Gradient

5.  In the minimum distance algorithm, the decision function d(x) assigns x to wi if:        [   ]
    a. di(x) = dj(x)      b. di(x) < dj(x)      c. di(x) > dj(x)      d. di(x) cannot help

6.  Character recognition utilizes the following concept:        [   ]
    a. Heuristic      b. Linguistic      c. Syntactic      d. Membership roster

7.  The conditional average risk/loss function r in a Bayes Classifier is given by:        [   ]
    a. Lij p(wi/x)      b. Lij p(wi/x)      c. Lij p(x/wi)      d. Lji p(wi/x)

8.  For a function f, the negative of the gradient points in the direction of:        [   ]
    a. Max rate of decrease of f      b. Min rate of decrease of f
    c. Max rate of increase of f      d. Min rate of increase of f

9.  The minimum number of decision functions for M pattern classes using case 2 is:        [   ]
    a. M      b. M+1      c. M(M-1)      d. M(M-1)/2

10. The LMSE algorithm converges when:        [   ]
    a. w > 0      b. X > 0      c. Xw > 0      d. Xw > 0

Cont...2

Code No: 05411101                              :2:                              Set No. 3

II.  Fill in the blanks:

11. Test of separability is an important feature in the ____________________ algorithm.

12. The reward-punishment concept is also known as the ________________ algorithm.

13. Pattern classification by distance functions will yield satisfactory results only when
    patterns have ________________ properties.

14. |e| represents the _______________ of the vector e.

15. In the gradient descent algorithm, w is incremented in the _________________ gradient of J(w,x).

16. The recognition function rij(x) represents a __________________.

17. When a priori probabilities are not available, ____________ criterion offers an alternative approach.

18. Minimum distance classification is also known as _________________ matching.

19. The least mean square error algorithm is also known as the ____________________ algorithm.

20. A ___________ is a category determined by some given common attributes.

-oOo-

Code No: 05411101                                                            Set No. 4

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD


IV B.Tech I-Sem I Mid-Term Examinations, Aug/Sept. 2009
IMAGE PROCESSING AND PATTERN RECOGNITION
Objective Exam
Name: ______________________________          Hall Ticket No.          A

Answer All Questions. All Questions Carry Equal Marks.    Time: 20 Min.    Marks: 20.
I.  Choose the correct alternative:

1.  The classifier that minimizes the total expected loss is called:        [   ]
    a. Optimal classifier      b. Likelihood classifier      c. Bayes Classifier      d. Best classifier

2.  In the LMSE algorithm, X+ is called:        [   ]
    a. Least mean      b. Squared error      c. Generalized Inverse      d. Gradient

3.  In the minimum distance algorithm, the decision function d(x) assigns x to wi if:        [   ]
    a. di(x) = dj(x)      b. di(x) < dj(x)      c. di(x) > dj(x)      d. di(x) cannot help

4.  Character recognition utilizes the following concept:        [   ]
    a. Heuristic      b. Linguistic      c. Syntactic      d. Membership roster

5.  The conditional average risk/loss function r in a Bayes Classifier is given by:        [   ]
    a. Lij p(wi/x)      b. Lij p(wi/x)      c. Lij p(x/wi)      d. Lji p(wi/x)

6.  For a function f, the negative of the gradient points in the direction of:        [   ]
    a. Max rate of decrease of f      b. Min rate of decrease of f
    c. Max rate of increase of f      d. Min rate of increase of f

7.  The minimum number of decision functions for M pattern classes using case 2 is:        [   ]
    a. M      b. M+1      c. M(M-1)      d. M(M-1)/2

8.  The LMSE algorithm converges when:        [   ]
    a. w > 0      b. X > 0      c. Xw > 0      d. Xw > 0

9.  Which of the following does not constitute a stage in the design of an automatic
    pattern recognition model of a system:        [   ]
    a. Observation      b. Sensing      c. Feature selection      d. Preprocessing

10. In a two-pattern-class case, a linear decision function d(x) will classify a sample x
    into a pattern class when:        [   ]
    a. d(x) > 0 or d(x) < 0      b. d(x) < 0 or d(x) >= 0      c. d(x) = 0      d. d(x) is not used

Cont...2

Code No: 05411101                              :2:                              Set No. 4

II.  Fill in the blanks:

11. Pattern classification by distance functions will yield satisfactory results only when
    patterns have ________________ properties.

12. |e| represents the _______________ of the vector e.

13. In the gradient descent algorithm, w is incremented in the _________________ gradient of J(w,x).

14. The recognition function rij(x) represents a __________________.

15. When a priori probabilities are not available, ____________ criterion offers an alternative approach.

16. Minimum distance classification is also known as _________________ matching.

17. The least mean square error algorithm is also known as the ____________________ algorithm.

18. A ___________ is a category determined by some given common attributes.

19. Test of separability is an important feature in the ____________________ algorithm.

20. The reward-punishment concept is also known as the ________________ algorithm.

-oOo-
