Parallel Computing Important Questions and Answers Set - 2
18. Synchronous communications
A. It requires some type of "handshaking" between tasks that are sharing
data. This can be explicitly structured in code by the programmer, or it may
happen at a lower level unknown to the programmer
B. It involves data sharing between more than two tasks, which are often
specified as being members in a common group, or collective
C. It involves two tasks, with one task acting as the sender/producer of
data and the other acting as the receiver/consumer
D. It allows tasks to transfer data independently from one another
Ans- A. It requires some type of "handshaking" between tasks that are
sharing data. This can be explicitly structured in code by the
programmer, or it may happen at a lower level unknown to the
programmer
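For illustration, a minimal MPI sketch of synchronous point-to-point communication: MPI_Ssend does not complete until the matching MPI_Recv on the other task has started, which is the "handshaking" described above (the ranks, tag, and value used here are arbitrary).

/* Synchronous (handshaking) send/receive between two MPI tasks.
 * Compile with an MPI compiler wrapper such as mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Blocks until rank 1 has begun its matching receive: explicit handshake. */
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocks until the message from rank 0 has arrived. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}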
19. Collective communication
A. It involves data sharing between more than two tasks, which are often
specified as being members in a common group, or collective
B. It involves two tasks, with one task acting as the sender/producer of
data and the other acting as the receiver/consumer
C. It allows tasks to transfer data independently from one another
D. None of these
Ans- A. It involves data sharing between more than two tasks, which are
often specified as being members in a common group, or collective
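For illustration, a minimal MPI sketch of a collective operation: every task in the communicator (the common group) calls the same MPI_Bcast, and afterwards all of them hold the root task's data (the root rank and value used here are arbitrary).

/* Collective communication: all tasks in MPI_COMM_WORLD participate. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 99;                  /* the root task owns the data initially */

    /* Every rank calls MPI_Bcast; afterwards every rank holds the root's value. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}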
20. Point-to-point communication refers to
A. It involves data sharing between more than two tasks, which are often
specified as being members in a common group, or collective
B. It involves two tasks, with one task acting as the sender/producer of
data and the other acting as the receiver/consumer
C. It allows tasks to transfer data independently from one another
D. None of these
Ans- B. It involves two tasks, with one task acting as the
sender/producer of data and the other acting as the receiver/consumer
21. Uniform Memory Access (UMA) refers to
A. Here all processors have equal access and access times to memory
B. Here if one processor updates a location in shared memory, all the other
processors know about the update
C. Here one SMP can directly access memory of another SMP and not all
processors have equal access time
D. None of these
Ans- A. Here all processors have equal access and access times to
memory
22. Asynchronous communications
A. It involves data sharing between more than two tasks, which are often
specified as being members in a common group, or collective
B. It involves two tasks, with one task acting as the sender/producer of
data and the other acting as the receiver/consumer
C. It allows tasks to transfer data independently from one another
D. None of these
Ans- C. It allows tasks to transfer data independently from one another
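For illustration, a minimal MPI sketch of asynchronous communication: MPI_Isend and MPI_Irecv return immediately, so each task can keep computing independently and complete the transfer later with MPI_Wait (the ranks, tag, and value used here are arbitrary).

/* Asynchronous (non-blocking) communication between two MPI tasks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, send_val, recv_val;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        send_val = 7;
        MPI_Isend(&send_val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... rank 0 can do unrelated computation here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete the transfer later */
    } else if (rank == 1) {
        MPI_Irecv(&recv_val, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... rank 1 also keeps working while the message is in flight ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", recv_val);
    }

    MPI_Finalize();
    return 0;
}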
23. Granularity is
A. In parallel computing, it is a qualitative measure of the ratio of computation
to communication
B. Here relatively small amounts of computational work are done between
communication events
C. Relatively large amounts of computational work are done between
communication/synchronization events
D. None of these
Ans- A. In parallel computing, it is a qualitative measure of the ratio of
computation to communication
24. Coarse-grain Parallelism
A. In parallel computing, it is a qualitative measure of the ratio of computation
to communication
B. Here relatively small amounts of computational work are done between
communication events
C. Relatively large amounts of computational work are done between
communication/synchronization events
D. None of these
Ans- C. Relatively large amounts of computational work are done
between communication/synchronization events
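For illustration, a minimal MPI sketch of coarse-grain parallelism: each task performs a large block of local computation and communicates only once, via a single MPI_Reduce, so the ratio of computation to communication (the granularity) is high (the series being summed here is arbitrary).

/* Coarse-grain parallelism: lots of local work, one communication event. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    long n = 10000000, i;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Large block of independent computation per task (coarse grain). */
    for (i = rank; i < n; i += size)
        local += 1.0 / (double)(i + 1);

    /* Single communication/synchronization event at the end. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f\n", total);

    MPI_Finalize();
    return 0;
}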
25. Cache Coherent UMA (CC-UMA) is
A. Here all processors have equal access and access times to memory
B. Here if one processor updates a location in shared memory, all the other
processors know about the update
C. Here one SMP can directly access memory of another SMP and not all
processors have equal access time
D. None of these
Ans- B. Here if one processor updates a location in shared memory, all
the other processors know about the update
26. Non-Uniform Memory Access (NUMA) is
A. Here all processors have equal access and access times to memory
B. Here if one processor updates a location in shared memory, all the other
processors know about the update
C. Here one SMP can directly access memory of another SMP and not all
processors have equal access time
D. None of these
Ans- C. Here one SMP can directly access memory of another SMP and
not all processors have equal access time
27. It distinguishes multi-processor computer architectures according to how
they can be classified along the two independent dimensions of Instruction
and Data. Each of these dimensions can have only one of two possible
states: Single or Multiple
A. Single Program Multiple Data (SPMD)
B. Flynn’s taxonomy
C. Von Neumann Architecture
D. None of these
Ans- B. Flynn’s taxonomy
28. In the threads model of parallel programming
A. A single process can have multiple, concurrent execution paths
B. A single process can have a single concurrent execution path
C. Multiple processes can have a single concurrent execution path
D. None of these
Ans- A. A single process can have multiple, concurrent execution paths
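For illustration, a minimal POSIX-threads sketch of the threads model: one process creates several concurrent execution paths that share its address space (the thread count and the work done by each thread are arbitrary).

/* Threads model: one process, multiple concurrent execution paths.
 * Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

void *worker(void *arg) {
    long id = (long)arg;               /* each thread follows its own path */
    printf("thread %ld running inside the same process\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    long t;

    for (t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);
    for (t = 0; t < NTHREADS; t++)
        pthread_join(threads[t], NULL);   /* wait for all execution paths */

    return 0;
}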
29. These applications typically have multiple executable object files
(programs). While the application is being run in parallel, each task can be
executing the same or a different program as the other tasks. All tasks may
use different data
A. Single Program Multiple Data (SPMD)
B. Multiple Program Multiple Data (MPMD)
C. Von Neumann Architecture
D. None of these
Ans- B. Multiple Program Multiple Data (MPMD)
30. Here a single program is executed by all tasks simultaneously. At any
moment in time, tasks can be executing the same or different instructions
within the same program. These programs usually have the necessary logic
programmed into them to allow different tasks to branch or conditionally
execute only those parts of the program they are designed to execute
A. Single Program Multiple Data (SPMD)
B. Multiple Program Multiple Data (MPMD)
C. Von Neumann Architecture
D. None of these
Ans- A. Single Program Multiple Data (SPMD)
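For illustration, a minimal MPI sketch of the SPMD model: every task runs this one program, and each task branches on its rank to execute only the part it is responsible for (the division of roles shown here is arbitrary).

/* SPMD: a single program, with rank-based branching per task. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        printf("rank 0: executing the coordinator's part of the program\n");
    } else {
        printf("rank %d: executing a worker's part of the program\n", rank);
    }

    MPI_Finalize();
    return 0;
}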
31. These computers use the stored-program concept. Memory is used to
store both program instructions and data, and the central processing unit
(CPU) gets instructions and/or data from memory. The CPU decodes the
instructions and then sequentially performs them
A. Single Program Multiple Data (SPMD)
B. Flynn’s taxonomy
C. Von Neumann Architecture
D. None of these
Ans- C. Von Neumann Architecture
32. Load balancing is
A. It involves only those tasks executing a communication operation
B. It exists between program statements when the order of statement
execution affects the results of the program
C. It refers to the practice of distributing work among tasks so that all tasks
are kept busy all of the time. It can be considered as the minimization of task
idle time
D. None of these
Ans- C. It refers to the practice of distributing work among tasks so that
all tasks are kept busy all of the time. It can be considered as the
minimization of task idle time
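For illustration, a minimal OpenMP sketch of load balancing: schedule(dynamic) hands out loop iterations to whichever thread becomes idle, so all threads stay busy even though iterations take unequal time (the loop body and chunk size used here are arbitrary).

/* Load balancing via dynamic scheduling. Compile with -fopenmp. */
#include <omp.h>
#include <stdio.h>

int main(void) {
    double sum = 0.0;
    int i;

    #pragma omp parallel for schedule(dynamic, 100) reduction(+:sum)
    for (i = 1; i <= 100000; i++) {
        /* Later iterations do more work, so static chunks would leave
         * some threads idle; dynamic scheduling keeps them all busy. */
        for (int j = 0; j < i / 100; j++)
            sum += 1.0 / (double)i;
    }

    printf("sum = %f\n", sum);
    return 0;
}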
33. Synchronous communication operations refer to
A. It involves only those tasks executing a communication operation
B. It exists between program statements when the order of statement
execution affects the results of the program
C. It refers to the practice of distributing work among tasks so that all tasks
are kept busy all of the time. It can be considered as the minimization of task
idle time
D. None of these
Ans- A. It involves only those tasks executing a communication operation
34. Data dependence is
A. It involves only those tasks executing a communication operation
B. It exists between program statements when the order of statement
execution affects the results of the program
C. It refers to the practice of distributing work among tasks so that all tasks
are kept busy all of the time. It can be considered as the minimization of task
idle time
D. None of these
Ans- B. It exists between program statements when the order of
statement execution affects the results of the program.
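For illustration, a minimal C sketch of a data dependence: each iteration reads the value written by the previous one, so the order of statement execution affects the result and the loop cannot safely be parallelized as written (the array contents used here are arbitrary).

/* Loop-carried (flow) data dependence: a[i] depends on a[i-1]. */
#include <stdio.h>

int main(void) {
    double a[8] = {1, 0, 0, 0, 0, 0, 0, 0};
    int i;

    for (i = 1; i < 8; i++)
        a[i] = a[i - 1] * 2.0;   /* depends on the previous iteration's result */

    for (i = 0; i < 8; i++)
        printf("%g ", a[i]);
    printf("\n");
    return 0;
}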