Digital Communication Test Question Set 3

1)   In DPSK, the technique used to encode bits is

a. AMI
b. Differential code
c. Unipolar RZ format
d. Manchester format
ANSWER: Differential code


2)   Synchronization of signals is done using

a. Pilot clock
b. Extracting timing information from the received signal
c. Transmitter and receiver connected to master timing source
d. All of the above
ANSWER: All of the above


3)   In coherent detection of signals,

a. Local carrier is generated
b. A carrier with the same frequency and phase as the transmitted carrier is generated
c. The carrier is in synchronization with modulated carrier
d. All of the above
ANSWER: All of the above


4)   Impulse noise is caused due to

a. Switching transients
b. Lightning strikes
c. Power line load switching
d. All of the above
ANSWER: All of the above


5)   Probability density function defines

a. Amplitudes of random noise
b. Density of signal
c. Probability of error
d. All of the above
ANSWER: Amplitudes of random noise


6)   Timing jitter is

a. Change in amplitude
b. Change in frequency
c. Deviation in location of the pulses
d. All of the above
ANSWER: Deviation in location of the pulses


7)   ISI may be removed by using

a. Differential coding
b. Manchester coding
c. Polar NRZ
d. None of the above
ANSWER: Differential coding


8)   Overhead bits are

a. Framing and synchronizing bits
b. Data due to noise
c. Encoded bits
d. None of the above
ANSWER: Framing and synchronizing bits


9)   The expected information contained in a message is called

a. Entropy
b. Efficiency
c. Coded signal
d. None of the above
ANSWER: Entropy


10)   The information I contained in a message with probability of occurrence P is given by (k is a constant)

a. I = k log₂(1/P)
b. I = k log₂P
c. I = k log₂(1/2P)
d. I = k log₂(1/P²)
ANSWER: I = k log₂(1/P)
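
As a quick numeric check of this formula, a minimal Python sketch (taking k = 1 so that information is measured in bits; the function name is illustrative):

```python
import math

def information_bits(p):
    """Information content I = log2(1/p) in bits, i.e. k = 1."""
    return math.log2(1 / p)

# A message with probability of occurrence 1/8 carries 3 bits of information.
print(information_bits(1 / 8))  # 3.0
```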


11)   A memoryless source refers to

a. No previous information
b. No message storage
c. Emitted message is independent of previous message
d. None of the above
ANSWER: Emitted message is independent of previous message


12)   Entropy is

a. Average information per message
b. Information in a signal
c. Amplitude of signal
d. All of the above
ANSWER: Average information per message
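
The definition above can be sketched in Python (illustrative function name; the probabilities are assumed to sum to 1):

```python
import math

def entropy(probs):
    """Average information per message: H = sum of p * log2(1/p), in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# A fair binary source has the maximum entropy of 1 bit per message.
print(entropy([0.5, 0.5]))        # 1.0
# Four equally likely messages carry 2 bits on average.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```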


13)   The relation between entropy and mutual information is

a. I(X;Y) = H(X) - H(X/Y)
b. I(X;Y) = H(X/Y) - H(Y/X)
c. I(X;Y) = H(X) - H(Y)
d. I(X;Y) = H(Y) - H(X)
ANSWER: I(X;Y) = H(X) - H(X/Y)
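
This identity can be checked numerically. A minimal Python sketch using a small hypothetical joint distribution (the numbers are made up for illustration) and the equivalent form I(X;Y) = H(X) + H(Y) - H(X,Y), which follows from H(X/Y) = H(X,Y) - H(Y):

```python
import math

def H(probs):
    """Entropy in bits of a probability list."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Hypothetical joint distribution p(x, y) for binary X, Y (rows: x, cols: y).
joint = [[0.4, 0.1],
         [0.1, 0.4]]

px = [sum(row) for row in joint]        # marginal p(x)
py = [sum(col) for col in zip(*joint)]  # marginal p(y)

HXY = H([p for row in joint for p in row])  # joint entropy H(X, Y)
I = H(px) + H(py) - HXY                      # I(X;Y) = H(X) - H(X/Y)
print(round(I, 4))  # 0.2781, and always non-negative
```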


14)   The mutual information

a. Is symmetric
b. Always non negative
c. Both a and b are correct
d. None of the above
ANSWER: Both a and b are correct


15)   Information rate is defined as

a. Information per unit time
b. Average number of bits of information per second
c. rH
d. All of the above
ANSWER: All of the above


16)   The information rate R, for given average information H = 2.0, of an analog signal band-limited to B Hz is

a. 8 B bits/sec
b. 4 B bits/sec
c. 2 B bits/sec
d. 16 B bits/sec
ANSWER: 4 B bits/sec
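
The arithmetic behind this answer: sampling at the Nyquist rate gives r = 2B messages/sec, so R = rH = 2B × 2.0 = 4B bits/sec. A minimal sketch (the bandwidth value is an arbitrary example):

```python
def information_rate(bandwidth_hz, avg_info_bits):
    """R = r * H, with Nyquist sampling rate r = 2B samples/sec."""
    r = 2 * bandwidth_hz
    return r * avg_info_bits

B = 1000  # example bandwidth in Hz (assumed value)
print(information_rate(B, 2.0))  # 4000.0, i.e. 4B bits/sec
```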


17)   The code rate r, with k information bits and n total bits, is defined as

a. r = k/n
b. k = n/r
c. r = k * n
d. n = r * k
ANSWER: r = k/n


18)   The technique that may be used to increase average information per bit is

a. Shannon-Fano algorithm
b. ASK
c. FSK
d. Digital modulation techniques
ANSWER: Shannon-Fano algorithm


19)   For a binary symmetric channel, the random bits are given as

a. Logic 1 given by probability P and logic 0 by (1-P)
b. Logic 1 given by probability 1-P and logic 0 by P
c. Logic 1 given by probability P² and logic 0 by 1-P
d. Logic 1 given by probability P and logic 0 by (1-P)²
ANSWER: Logic 1 given by probability P and logic 0 by (1-P)


20)   The channel capacity according to Shannon's equation is

a. Maximum error free communication
b. Defined for optimum system
c. Information transmitted
d. All of the above
ANSWER: All of the above


21)   For M equally likely messages, M>>1, if the rate of information R > C, the probability of error is

a. Arbitrarily small
b. Close to unity
c. Not predictable
d. Unknown
ANSWER: Close to unity


22)   For M equally likely messages, M>>1, if the rate of information R ≤ C, the probability of error is

a. Arbitrarily small
b. Close to unity
c. Not predictable
d. Unknown
ANSWER: Arbitrarily small


23)   The negative statement for Shannon's theorem states that

a. If R > C, the error probability increases towards Unity
b. If R < C, the error probability is very small
c. Both a & b
d. None of the above
ANSWER: If R > C, the error probability increases towards Unity


24)   According to the Shannon-Hartley theorem,

a. The channel capacity becomes infinite with infinite bandwidth
b. The channel capacity does not become infinite with infinite bandwidth
c. There is a tradeoff between bandwidth and signal to noise ratio
d. Both b and c are correct
ANSWER: Both b and c are correct


25)   The capacity of a binary symmetric channel, where H(P) is the binary entropy function, is

a. 1 - H(P)
b. H(P) - 1
c. 1 - H(P)²
d. H(P)² - 1
ANSWER: 1 - H(P)
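
A small Python sketch of this result (illustrative function names):

```python
import math

def binary_entropy(p):
    """Binary entropy function H(P) = -P log2 P - (1-P) log2 (1-P)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1 - binary_entropy(p)

print(bsc_capacity(0.0))  # 1.0 (noiseless channel)
print(bsc_capacity(0.5))  # 0.0 (output tells nothing about the input)
```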


26)   The channel capacity is

a. The maximum information transmitted by one symbol over the channel
b. Information contained in a signal
c. The amplitude of the modulated signal
d. All of the above
ANSWER: The maximum information transmitted by one symbol over the channel


27)   For M equally likely messages, the average amount of information H is

a. H = log₁₀M
b. H = log₂M
c. H = log₁₀M²
d. H = 2 log₁₀M
ANSWER: H = log₂M


28)   The capacity of Gaussian channel is

a. C = 2B log₂(1 + S/N) bits/s
b. C = B² log₂(1 + S/N) bits/s
c. C = B log₂(1 + S/N) bits/s
d. C = B log₂(1 + S/N)² bits/s
ANSWER: C = B log₂(1 + S/N) bits/s
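
A quick check of the Shannon-Hartley formula in Python (the function name and example figures are illustrative):

```python
import math

def gaussian_channel_capacity(bandwidth_hz, snr):
    """Shannon-Hartley capacity: C = B log2(1 + S/N) bits/sec.
    snr is the linear signal-to-noise power ratio, not dB."""
    return bandwidth_hz * math.log2(1 + snr)

# A 3 kHz channel at 30 dB SNR (S/N = 1000) supports roughly 30 kbits/sec.
print(round(gaussian_channel_capacity(3000, 1000)))  # 29902
```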


29)   The probability density function of a Markov process is

a. p(x₁,x₂,x₃,…,xₙ) = p(x₁)p(x₂/x₁)p(x₃/x₂)…p(xₙ/xₙ₋₁)
b. p(x₁,x₂,x₃,…,xₙ) = p(x₁)p(x₁/x₂)p(x₂/x₃)…p(xₙ₋₁/xₙ)
c. p(x₁,x₂,x₃,…,xₙ) = p(x₁)p(x₂)p(x₃)…p(xₙ)
d. p(x₁,x₂,x₃,…,xₙ) = p(x₁)p(x₂·x₁)p(x₃·x₂)…p(xₙ·xₙ₋₁)
ANSWER: p(x₁,x₂,x₃,…,xₙ) = p(x₁)p(x₂/x₁)p(x₃/x₂)…p(xₙ/xₙ₋₁)


30)   Orthogonality of two codes means

a. The integrated product of two different code words is zero
b. The integrated product of two different code words is one
c. The integrated product of two same code words is zero
d. None of the above
ANSWER: The integrated product of two different code words is zero
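
This property can be illustrated with Walsh codes, a classic orthogonal code set (a minimal sketch; the code words are chosen for illustration):

```python
# Three length-4 Walsh code words in +1/-1 form.
w1 = [1, 1, 1, 1]
w2 = [1, -1, 1, -1]
w3 = [1, 1, -1, -1]

def inner(a, b):
    """Integrated (inner) product of two code words."""
    return sum(x * y for x, y in zip(a, b))

# Every pair of *different* code words integrates to zero.
print(inner(w1, w2), inner(w1, w3), inner(w2, w3))  # 0 0 0
# A code word with itself integrates to its energy, not zero.
print(inner(w1, w1))  # 4
```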


31)   The Golay code (23,12) has code words of length 23 and may correct

a. 2 errors
b. 3 errors
c. 5 errors
d. 8 errors
ANSWER: 3 errors


32)   The minimum distance of the unextended Golay code is

a. 8
b. 9
c. 7
d. 6
ANSWER: 7


33)   The prefix code is also known as

a. Instantaneous code
b. Block code
c. Convolutional code
d. Parity bit
ANSWER: Instantaneous code


34)   Run Length Encoding is used for

a. Reducing the repeated string of characters
b. Bit error correction
c. Correction of error in multiple bits
d. All of the above
ANSWER: Reducing the repeated string of characters
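
A minimal run-length encoding sketch in Python (the function name is illustrative):

```python
from itertools import groupby

def rle_encode(s):
    """Run-length encoding: collapse each run of repeated characters
    into a (character, run-length) pair."""
    return [(ch, len(list(run))) for ch, run in groupby(s)]

print(rle_encode("AAAABBBCCD"))  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```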


35)   For Hamming distance dmin and D errors, the condition for the received word to be detected as invalid is

a. D ≤ dmin + 1
b. D ≤ dmin - 1
c. D ≤ 1 - dmin
d. D ≤ dmin
ANSWER: D ≤ dmin - 1


36)   For Hamming distance dmin and t errors in the received word, the condition to be able to correct the errors is

a. 2t + 1 ≤ dmin
b. 2t + 2 ≤ dmin
c. 2t + 1 ≤ 2dmin
d. Both a and b
ANSWER: Both a and b
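
The detection and correction bounds from the last two questions can be sketched together in Python (the pair of code words is illustrative, not a real code's full codebook):

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length code words differ."""
    return sum(x != y for x, y in zip(a, b))

d_min = hamming_distance("0000000", "1110000")  # 3 for this example pair

# Detection: up to D <= d_min - 1 errors still yield an invalid word.
# Correction: up to t errors are correctable when 2t + 1 <= d_min.
detectable = d_min - 1
correctable = (d_min - 1) // 2
print(detectable, correctable)  # 2 1
```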


37)   Parity check bit coding is used for

a. Error correction
b. Error detection
c. Error correction and detection
d. None of the above
ANSWER: Error detection
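
A minimal even-parity sketch in Python, showing that a single bit error is detected but not located or corrected (function names are illustrative):

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(word):
    """Detects any odd number of bit errors; cannot say which bit is wrong."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1])
print(check_even_parity(word))  # True: no error
word[2] ^= 1                    # flip one bit in transit
print(check_even_parity(word))  # False: single error detected
```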


38)   Parity bit coding may not be used for

a. Detecting an error in more than a single bit
b. Determining which bit is in error
c. Both a & b
d. None of the above
ANSWER: Both a & b


39)   For a (7, 4) block code, 7 is the total number of bits and 4 is the number of

a. Information bits
b. Redundant bits
c. Total bits- information bits
d. None of the above
ANSWER: Information bits


40)   The interleaving process permits correction of a burst of B bits, with l consecutive code bits and t correctable errors, when

a. B ≤ 2tl
b. B ≥ tl
c. B ≤ tl/2
d. B ≤ tl
ANSWER: B ≤ tl


41)   The code in convolutional coding is generated using

a. EX-OR logic
b. AND logic
c. OR logic
d. None of the above
ANSWER: EX-OR logic
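
A minimal sketch of a rate-1/2 convolutional encoder built from EX-OR (mod-2) sums; the generator taps g1 = 111 and g2 = 101 are a common textbook choice, assumed here for illustration:

```python
def conv_encode(bits, K=3):
    """Rate-1/2 convolutional encoder with constraint length K = 3.
    Each input bit produces two output bits via EX-OR of shift-register taps."""
    state = [0] * (K - 1)  # shift register, initially all zeros
    out = []
    for b in bits:
        reg = [b] + state
        out.append(reg[0] ^ reg[1] ^ reg[2])  # generator g1 = 111
        out.append(reg[0] ^ reg[2])           # generator g2 = 101
        state = reg[:-1]                      # shift the register
    return out

print(conv_encode([1, 0, 1]))  # [1, 1, 1, 0, 0, 0]
```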


42)   For decoding a convolutional code using a code tree, the path is chosen to

a. Diverge upward when a bit is 0 and diverge downward when the bit is 1
b. Diverge downward when a bit is 0 and diverge upward when the bit is 1
c. Diverge left when a bit is 0 and diverge right when the bit is 1
d. Diverge right when a bit is 0 and diverge left when the bit is 1
ANSWER: Diverge upward when a bit is 0 and diverge downward when the bit is 1


43)   For a linear code,

a. The sum of any two code words is also a code word
b. The all-zero code word is a code word
c. The minimum Hamming distance between two code words equals the minimum weight of any non-zero code word
d. All of the above
ANSWER: All of the above


44)   The graphical representation of a linear block code is known as a

a. Pi graph
b. Matrix
c. Tanner graph
d. None of the above
ANSWER: Tanner graph