Ten or twenty years ago, when I used to work at Stevens, I ran into a circuit that used a couple of ADCs (Analog to Digital Converters). I was flummoxed when I started looking at it. ADCs are used to read the voltage of a signal, and often that voltage is fluctuating at some frequency, so you want to take readings often enough that you get a picture that reflects what is actually happening. If the signal varies periodically with a frequency of, say, 60 Hz, like house current, and you take readings at that same frequency, your readings will all show the same voltage. It might be any value from zero to the maximum, depending on where in the cycle the readings happen to land. To get an accurate picture you need to take readings at some multiple of the signal frequency: two, three, or more times faster than the signal.
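To make that concrete, here is a small Python sketch (not from the original circuit, just an illustration): it reads a 60 Hz sine wave once per cycle and then ten times per cycle. The once-per-cycle readings all come out identical, while the faster readings actually trace out the waveform.

```python
import math

SIGNAL_HZ = 60.0   # the 60 Hz house-current example
AMPLITUDE = 1.0
PHASE = 0.9        # arbitrary starting phase, just so the stuck value isn't zero

def voltage(t):
    """Instantaneous voltage of the 60 Hz signal at time t (seconds)."""
    return AMPLITUDE * math.sin(2 * math.pi * SIGNAL_HZ * t + PHASE)

def take_readings(sample_rate_hz, count=8):
    """Take `count` readings spaced 1/sample_rate_hz seconds apart."""
    return [round(voltage(n / sample_rate_hz), 3) for n in range(count)]

# Sampling at exactly the signal frequency: every reading lands at the
# same point in the cycle, so the readings never change.
print("sampled at 60 Hz: ", take_readings(60.0))

# Sampling ten times faster: the readings follow the actual waveform.
print("sampled at 600 Hz:", take_readings(600.0))
```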
When I looked at the circuit and the ADC chips, I discovered that the readings are transferred from the ADC to the CPU as serial data, that is, one bit at a time. Further, the two ADCs were daisy-chained together, so the eight bits of data from one ADC were pumped out of it and into the other, and then out of the second and into the CPU. That means it took at least sixteen ticks of the serial clock to transfer the readings from the two ADCs to the CPU, so the serial data clock has to run 16 to 20 times faster than the rate at which you want to take readings. This sounded nuts to me. Why would you want to handicap your measurement circuit with a 20x speed reduction?
Then I did the math and realized that the serial data clock was running at something like 30 MHz, while the signal we were trying to measure was only around 1 MHz.
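Written out, the back-of-the-envelope math looks like this (a sketch in Python; the 30 MHz clock and 1 MHz signal are the rough figures above, and the two 8-bit readouts come from the daisy-chain description):

```python
SERIAL_CLOCK_HZ = 30e6   # rough serial clock figure from memory
BITS_PER_ADC = 8         # eight bits of data per ADC, as described above
NUM_ADCS = 2             # daisy-chained, so both readings share one shift
SIGNAL_HZ = 1e6          # the roughly 1 MHz signal being measured

# One full readout of both ADCs takes 16 ticks of the serial clock,
# so that is the ceiling on how often the CPU gets fresh readings.
ticks_per_readout = BITS_PER_ADC * NUM_ADCS
readings_per_second = SERIAL_CLOCK_HZ / ticks_per_readout

print(f"ticks per readout:         {ticks_per_readout}")
print(f"readings per second:       {readings_per_second:,.0f}")
print(f"readings per signal cycle: {readings_per_second / SIGNAL_HZ:.1f}")
```

With those rough numbers the CPU still gets close to two readings per cycle of the signal, so the daisy-chained serial link was not the 20x handicap it first looked like.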
This memory just popped into my head last night. Since it made such an impression on me I thought I would write it down.
Here are the details: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem