This document contains an assignment with multiple questions related to computer architecture and parallel processing. It asks the student to answer questions about Flynn's classification of computer architecture, operating system classification, interconnection networks, partitioning and parallelism conditions, CISC and RISC architectures, pipelining principles, SIMD variants, vector versus stream architectures, VLIW architecture advantages over CISC and RISC, and other topics such as memory hierarchies and multiprocessor communication. The student must select the most appropriate answer from multiple-choice responses.
Assignment Parallel Processing
AMITY SCHOOL OF DISTANCE LEARNING
Post Box No. 503, Sector-44, Noida – 201303

Computer Architecture & Parallel Processing (MCA)
Assignment A (Marks: 10)
Answer all questions.

1. Explain Flynn's classification of computer architecture using a neat block diagram.
2. Write about the classification of operating systems.
3. Discuss different types of interconnection networks.
4. Explain the conditions for partitioning and parallelism, with examples.
5. What are the characteristics of CISC and RISC architectures?

Computer Architecture & Parallel Processing (MCA)
Assignment B (Marks: 10)
Answer all questions.

1. What is a pipeline computer? Explain the principles of pipelining.
2. Discuss SIMD architecture in detail, with its variants.
3. What is a vector processor? Compare vector and stream architectures.
4. Read the case study given below and answer the questions given at the end.

Case Study

The key to higher performance in microprocessors for a broad range of applications is the ability to exploit fine-grain, instruction-level parallelism. Some methods for exploiting fine-grain parallelism include:

1. Pipelining
2. Multiple processors
3. Superscalar implementation
4. Specifying multiple independent operations per instruction

Pipelining is now universally implemented in high-performance processors, and little more can be gained by improving the implementation of a single pipeline. Using multiple processors improves performance for only a restricted set of applications. Superscalar implementation can improve performance for all types of applications: superscalar means the ability to fetch, issue to execution units, and complete more than one instruction at a time. Superscalar implementations are required when architectural compatibility must be preserved, and they will be used for entrenched architectures with legacy software, such as the x86 architecture that dominates the desktop computer market. Specifying multiple operations per instruction creates a very long instruction word architecture, or VLIW.
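The superscalar-versus-VLIW distinction in the case study can be made concrete with a small sketch. The following toy Python scheduler is entirely hypothetical (the register names, the two-slot issue width, the RAW-only dependence check, and the greedy packing are illustrative assumptions, not anything from the case study): it packs a sequential instruction stream into explicit VLIW-style bundles at "compile time", so the "hardware" merely executes one bundle per cycle instead of discovering independent operations itself.

```python
# Toy illustration: four register-add operations, each (dest, src1, src2).
# op3 reads r1 and r4, which op1 and op2 write, so it cannot share a
# bundle with them.
stream = [("r1", "r2", "r3"), ("r4", "r5", "r6"),
          ("r7", "r1", "r4"),   # depends on r1 and r4
          ("r8", "r2", "r5")]

def depends(op, earlier):
    """True if op reads a register that an earlier op writes.
    Simplified: only read-after-write dependences are modeled."""
    return earlier[0] in (op[1], op[2])

def schedule_bundles(ops, width=2):
    """Greedily pack an in-order stream into VLIW-style bundles of
    at most `width` independent operations each."""
    bundles, current = [], []
    for op in ops:
        # Start a new bundle when the current one is full or op
        # depends on something already placed in it.
        if len(current) == width or any(depends(op, e) for e in current):
            bundles.append(current)
            current = []
        current.append(op)
    if current:
        bundles.append(current)
    return bundles

print(schedule_bundles(stream))   # two 2-wide bundles instead of 4 cycles
```

Here the four sequential adds collapse into two two-wide bundles chosen before execution. A superscalar core would have to find the same independent pairs at run time with dependence-checking hardware, which is exactly the complexity the case study says VLIW moves into the compiler.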
A VLIW implementation has capabilities very similar to those of a superscalar processor (issuing and completing more than one operation at a time) with one important exception: the VLIW hardware is not responsible for discovering opportunities to execute multiple operations concurrently. For the VLIW implementation, the long instruction word already encodes the concurrent operations. This explicit encoding leads to dramatically reduced hardware complexity compared to a high-degree superscalar implementation of a RISC or CISC. The big advantage of VLIW, then, is that a highly concurrent (parallel) implementation is much simpler and cheaper to build than equivalently concurrent RISC or CISC chips. VLIW is a simpler way to build a superscalar microprocessor.

Questions
1. Why do we need VLIW architecture?
2. Compare VLIW with CISC and RISC.
3. Discuss the software (instead of hardware) implementation advantages of VLIW.

Computer Architecture & Parallel Processing (MCA)
Assignment C (Marks: 10)
Answer all questions. Tick (√) the most appropriate answer.

1. Multi-computers --
a) Have a distributed address space accessible by local processors
b) Simultaneous access to shared variables can produce inconsistent results
c) Require message filtering for more than one computer
d) Share a common memory

2. Multi-processors --
a) Share a common memory
b) Are systems that contain multiple processors on a single machine
c) Consist of a number of processors accessing other processors
d) Are a multiprocessor implementation for non-embedded systems

3. Multivector is --
a) A manufacturer of process control
b) An element of a vector space V
c) Unique High-Expression (HEx™) technology platforms
d) Pair-end reads: faster, easier

4. SIMD computers are --
a) Computers consisting of limited identical processors
b) A modern supercomputer is almost always a cluster of MIMD machines
c) Single instruction with multiple data
d) General instruction in computer
5. "Lines are defined as those lines which are coplanar and do not intersect" is the condition of --
a) Partitioning
b) Parallelism
c) Scheduling
d) Multiprocessing

6. VLSI stands for --
a) Very Large Scale Integration
b) Variable length serial mask
c) Virtual limit of sub-interface
d) Very last stack instruction

7. Which parallel algorithm is used for a multiprocessor? --
a) SIMD
b) VLSI
c) APST
d) NPL

8. The IEEE standard backplane bus specification is for --
a) Multilevel architectures
b) Multiprocessor architectures
c) Multipath architectures
d) Multiprogramming architectures

9. Hierarchical memory system technology uses --
a) Cache memory
b) Memory sticks
c) HDD
d) Virtual memory

10. An arbitration protocol governs the --
a) Interrupt
b) I/O
c) H/W
d) Parity

11. In which of the following is the order of program execution explicitly stated in user programs?
a) Program flow mechanism
b) Control flow mechanism
c) Data flow mechanism
d) Reduction flow mechanism

12. Shared memory, a program counter and a control sequencer are features of --
a) Data flow
b) Program flow
c) Control flow mechanism
d) Reduction flow mechanism

13. In ___, instruction address(es) effectively replace the program counter of a control flow machine.
a) Dataflow architecture
b) Demand-driven mechanisms
c) Data reduction mechanism
d) Reduction mechanism

14. APT is --
a) Advanced processor technology
b) Advertise poster trend
c) Addition part of tech
d) Actual planning of tech

15. Addressing modes on the x86 are used in --
a) ISA
b) APT
c) VLSI
d) ISM

16. The crossbar switch was most popular from --
a) 1950 to 1970
b) 1970 to 2000
c) 1980 to 1990
d) 1990 to 2000

17. A memory shared by many processors to communicate among themselves is termed --
a) Multiport memory
b) Multiprocessor memory
c) Multilevel memory
d) Multidevice memory

18. A switching system for accessing memory modules in a multiprocessor is called --
a) Combining n/w
b) Combining processors
c) Combining devices
d) Combining cables
19. What does the following diagram show? (diagram not reproduced in this copy)
a) Process hierarchy
b) Memory hierarchy
c) Accessing of memory
d) CPU connections

20. ___ is the first supercomputer produced by India.
a) PARAM
b) Intel 5000
c) Super India
d) None of these

21. SIMD stands for --
a) Single Instruction Multiple Data stream
b) Synchronous Instruction Multiple Data stream
c) Single Interface Multiple Data stream
d) Single Instruction Multiple Data signal

22. SISD stands for --
a) Single Instruction Several Data stream
b) Single Instruction Single Data stream
c) Single Instruction Several Document stream
d) None of these

23. MIMD stands for --
a) Multiple Instruction Multiple Data stream
b) Multiple Instruction Meta Data stream
c) Multiple Instruction Modular Data stream
d) None of these

24. MISD stands for --
a) Multiple Instruction Single Data stream
b) More Instruction Single Data stream
c) Multiple Instruction Simple Data stream
d) None of these

25. Which one is true about MISD?
a) It is not a practically existing model
b) It is a practically existing model
c) Meta instruction single data
d) All of the above are true

26. SISD is an example of --
a) A distributed parallel processor system
b) A sequential system
c) A multiprocessing system
d) None of these

27. VLIW stands for --
a) Variable length instruction wall
b) Very large instruction word
c) Very long instruction word
d) None of these

28. RISC stands for --
a) Reduced instruction set computers
b) Rich instruction serial computers
c) Real instruction set computers
d) None of these

29. SIMD has the following components --
a) PE
b) CU
c) ALU
d) All of the above

30. CISC stands for --
a) Complex instruction set computers
b) Complete instruction set computers
c) Core instruction set computers
d) None of these

31. The ALU and control unit of most microcomputers are combined and manufactured on a single silicon chip. What is it called?
a) Monochip
b) Microprocessor
c) ALU
d) Control unit
32. Which of the following registers is used to keep track of the address of the memory location where the next instruction is located?
a) Memory Address Register
b) Memory Data Register
c) Instruction Register
d) Program Register
33. A complete microcomputer system consists of --
a) Microprocessor
b) Memory
c) Peripheral equipment
d) All of the above
34. The CPU performs the operations of --
a) Data transfer
b) Logic operations
c) Arithmetic operations
d) All of the above
35. The pipelining strategy is called implementing --
a) Instruction execution
b) Instruction prefetch
c) Instruction decoding
d) Instruction manipulation

36. What is the function of the control unit in a CPU?
a) To transfer data to primary storage
b) To store program instructions
c) To perform logic operations
d) To decode program instructions

37. A pipeline implements --
a) Fetch instruction
b) Decode instruction
c) Fetch operand
d) Calculate operand
e) Execute instruction
f) All of the above

38. Memory access in RISC architecture is limited to instructions like --
a) CALL and RET
b) PUSH and POP
c) STA and LDA
d) MOV and JMP

39. The most common addressing techniques employed by a CPU are --
a) Immediate
b) Direct
c) Indirect
d) Register
e) All of the above

40. A shared-memory SIMD model is ___ than a distributed-memory model.
a) More complex
b) Less complex
c) Equally complex
d) Can't say