
Application Note
CS5531/32/33/34 FREQUENTLY ASKED QUESTIONS
INTRODUCTION
The CS5531/32/33/34 are 16 and 24-bit ADCs that include an ultra low-noise amplifier, a 2 or 4-channel multiplexer, and various conversion and calibration options. This application note is intended to provide a resource to help users understand how to best use the features of these ADCs. The “Getting Started” section outlines the order in which certain things should be done in software to ensure that the converter functions correctly. The “Questions and Answers” section discusses many of the common questions that arise when using these ADCs for the first time.
GETTING STARTED
Initialize the ADC’s Serial Port
The CS5531/32/33/34 do not have a reset pin. A reset is performed in software by re-synchronizing the serial port and doing a software reset. Re-synchronizing the serial port ensures that the device is expecting a valid command. It does not initiate a reset of the ADC, and all of the register settings of the device are retained.
A serial port re-synchronization is performed by sending 15 (or more) bytes of 0xFF (hexadecimal) to the converter, followed by a single byte of 0xFE. Note that anytime a command or any other information is to be sent to or read from the ADC’s serial port, the CS pin must be low.
A software reset is performed by writing a “1” to the RS bit (Bit 29) in the Configuration Register. When a reset is complete, the RV bit (Bit 28) in the Configuration Register will be set to a “1” by the ADC. Any other bits in the Configuration Register that need to be changed must be done with a separate write to the register after the software reset is performed.
Perform a Software Reset
After re-synchronizing the ADC’s serial port, a software reset should be performed on the device. A reset will set all of the internal registers to their default values, as detailed in the datasheet.
Set Up the Configuration Register
After a software reset has been performed, the Configuration Register can be written to configure the general operation parameters of the device. This step can be omitted if the system is using the default register value. Particular attention must be paid to the setting of the VRS bit (Bit 25). The VRS bit should be set to “1” if the voltage on the VREF+ and VREF- pins is 2.5 V or less. If the voltage on the VREF+ and VREF- pins is greater than 2.5 V, the VRS bit should be set to “0”.
Set Up the Channel Setup Registers
The Channel Setup Registers determine how the part should operate when given a conversion or calibration command. If the system is using the device with its default settings, the Channel Setup Registers need not be written. Whether the Channel Setup Registers are written or not, they should be configured for the desired operation of the device before performing any calibrations or conversions.
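A minimal C sketch of this start-up sequence (serial port re-synchronization, software reset, and Configuration Register write) is shown below. The SPI helpers and chip-select control are assumed to be provided by the host system, the register command bytes are placeholders that should be confirmed against the command register description in the datasheet, and the reset handshake shown (set RS, clear it with a second write, then poll RV) should likewise be verified there.

#include <stdint.h>

/* Hypothetical SPI helpers assumed to be provided by the host system. */
extern void spi_cs_low(void);            /* drive the ADC's CS pin low  */
extern void spi_cs_high(void);           /* drive the ADC's CS pin high */
extern void spi_write_byte(uint8_t b);   /* shift one byte out on SDI   */
extern uint8_t spi_read_byte(void);      /* shift one byte in from SDO  */

/* Placeholder command bytes: confirm against the command register
 * description in the CS5531/32/33/34 datasheet before use.           */
#define CMD_CONFIG_WRITE  0x03u
#define CMD_CONFIG_READ   0x0Bu

/* Configuration Register bits referenced in this application note. */
#define CFG_RS   (1UL << 29)   /* software reset (RS)                 */
#define CFG_RV   (1UL << 28)   /* reset valid (RV), set by the ADC    */
#define CFG_VRS  (1UL << 25)   /* set to 1 when VREF is 2.5 V or less */

static void write_config(uint32_t cfg)
{
    spi_write_byte(CMD_CONFIG_WRITE);
    for (int shift = 24; shift >= 0; shift -= 8)
        spi_write_byte((uint8_t)(cfg >> shift));
}

static uint32_t read_config(void)
{
    uint32_t cfg = 0;
    spi_write_byte(CMD_CONFIG_READ);
    for (int i = 0; i < 4; i++)
        cfg = (cfg << 8) | spi_read_byte();
    return cfg;
}

void cs553x_startup(void)
{
    spi_cs_low();                   /* CS must be low for all serial traffic */

    /* Re-synchronize the serial port: 15 (or more) bytes of 0xFF, then 0xFE. */
    for (int i = 0; i < 15; i++)
        spi_write_byte(0xFF);
    spi_write_byte(0xFE);

    /* Software reset: set RS, clear it with a second write, then wait
     * for the RV bit to indicate that the reset has completed.          */
    write_config(CFG_RS);
    write_config(0);
    while ((read_config() & CFG_RV) == 0)
        ;

    /* Write the desired operating configuration in a separate write.
     * Here VRS = 1 is chosen, assuming a reference of 2.5 V or less.    */
    write_config(CFG_VRS);

    spi_cs_high();
}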
Calibrate the ADC
The CS5531/32/33/34 can be calibrated using the on-chip calibration features for more accuracy. The parts do not need to be calibrated to function, and in some systems a calibration step may not be necessary. Any offset or gain errors in the ADC itself and the front-end analog circuitry will remain if the device is left uncalibrated.
If the built-in calibration functions of the device are to be used, the calibrations should be performed before any conversions take place. Calibrations are performed by sending the appropriate calibration command to the converter’s serial port, and waiting until the SDO line falls low, which indicates that the calibration has completed. New commands should not be sent to the converter until the calibration cycle is complete. More detail about performing calibrations can be found later in this document and in the datasheet.
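As an illustration, a calibration might be driven from the host as sketched below. The calibration command byte is passed in by the caller and must be taken from the command register table in the datasheet; sdo_is_low() is a hypothetical helper that samples the level of the SDO line.

#include <stdint.h>
#include <stdbool.h>

extern void spi_write_byte(uint8_t b);   /* assumed SPI helper                 */
extern bool sdo_is_low(void);            /* hypothetical: samples the SDO line */

/* cal_command is the offset or gain calibration command byte for the
 * desired channel, taken from the datasheet's command register table. */
void cs553x_calibrate(uint8_t cal_command)
{
    /* CS is assumed to already be low. */
    spi_write_byte(cal_command);

    /* Wait for SDO to fall, which signals that the calibration is done.
     * No other commands should be sent while the calibration is running. */
    while (!sdo_is_low())
        ;
}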
Perform Conversions
Conversions can be performed by sending the appropriate command to the converter, waiting for SDO to fall, and then clocking the data from the serial port. New commands should not be sent to the converter during a conversion cycle. The various conversion modes and options are discussed in more detail later in this document and in the datasheet.
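A single conversion can be handled the same way, as in the sketch below. The conversion command byte again comes from the datasheet, and the sketch simply clocks out one 32-bit word; the exact number of SCLKs required (including any used to clear the SDO flag in single conversion mode) and the layout of the data and status bits are defined in the datasheet's serial port description.

#include <stdint.h>
#include <stdbool.h>

extern void spi_write_byte(uint8_t b);   /* assumed SPI helpers                */
extern uint8_t spi_read_byte(void);
extern bool sdo_is_low(void);            /* hypothetical: samples the SDO line */

/* conv_command selects the channel and conversion mode; see the
 * datasheet's command register table for the actual byte values. */
uint32_t cs553x_single_conversion(uint8_t conv_command)
{
    uint32_t word = 0;

    spi_write_byte(conv_command);        /* start the conversion          */

    while (!sdo_is_low())                /* SDO falls when data are ready */
        ;

    /* Clock the conversion word out of the serial port. The SCLK count
     * and the data/status bit layout are defined in the datasheet.      */
    for (int i = 0; i < 4; i++)
        word = (word << 8) | spi_read_byte();

    return word;
}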
QUESTIONS AND ANSWERS
How is the input voltage span of the converter calculated?
The positive full-scale input voltage (VFS) is determined by Equation 1.

VFS = ((VREF+) - (VREF-)) / (G × A) × (1 / RG)

Equation 1. Full-Scale Input Voltage

In Equation 1, (VREF+) - (VREF-) is the difference between the voltage levels on the VREF+ and VREF- pins of the converter. The variable G in the equation represents the setting of the programmable-gain instrumentation amplifier (PGIA) inside the part. The variable A in the equation is dependent on the setting of the VRS bit in the Configuration Register (Bit 25). When this bit is set to ‘0’, A = 2, and when the bit is set to ‘1’, A = 1. RG is the decimal value of the digital gain register, which is discussed in a later section. For the purposes of this section, the value of RG is 1.0.
The input voltage span in unipolar mode will be from 0 V to the positive full-scale input voltage computed using Equation 1. In bipolar mode, the input voltage span is twice as large, since the input range goes from negative full-scale (-VFS) to positive full-scale (VFS). So for unipolar mode, the input voltage span is VFS, and in bipolar mode, it is 2 * VFS.
Example: Using a 5 V voltage reference, with the VRS bit set to 0 in the 32X bipolar gain range, we see that (VREF+) - (VREF-) = 5 V, G = 32, and A = 2. Using Equation 1, VFS = (5 V)/(32 * 2) = 78.125 mV. Since we are using bipolar mode, the input voltage span becomes 2 * VFS = 156.25 mV, or ±78.125 mV.
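The arithmetic above is easy to fold into a small helper, as in the illustrative sketch below (the function and parameter names are not from the datasheet).

#include <stdio.h>

/* Full-scale input voltage per Equation 1.
 * vref_diff : (VREF+) - (VREF-) in volts
 * gain      : PGIA setting G (1, 2, 4, 8, 16, 32 or 64)
 * vrs       : VRS bit setting (A = 2 when VRS = 0, A = 1 when VRS = 1)
 * rg        : decimal value of the digital gain register (1.0 here)   */
static double vfs_volts(double vref_diff, int gain, int vrs, double rg)
{
    int a = (vrs == 0) ? 2 : 1;
    return vref_diff / (gain * a) * (1.0 / rg);
}

int main(void)
{
    /* Example from the text: 5 V reference, VRS = 0, 32X bipolar range. */
    double vfs = vfs_volts(5.0, 32, 0, 1.0);

    printf("VFS            = %.6f V\n", vfs);        /* 0.078125 V   */
    printf("Unipolar span  = %.6f V\n", vfs);        /* 0 V to +VFS  */
    printf("Bipolar span   = %.6f V\n", 2.0 * vfs);  /* -VFS to +VFS */
    return 0;
}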
How are the digital output codes mapped to the analog input voltage of the converters?
The output codes from the converter are mapped as either straight binary or two’s complement binary values, depending on whether the part is in unipolar or bipolar mode. The part measures voltage on the analog inputs as the differential between the AIN+ and AIN- pins (AIN+ - AIN-). The smallest amount of voltage change on the analog inputs which will cause a change in the output code from the converter is known as an “LSB” (Least Significant Bit), because it is the LSB of the converter’s output word that is affected by this voltage change. The size of one LSB can be calculated with Equation 2.
VLSB = VSPAN / 2^N

Equation 2. LSB Size

In Equation 2, “VSPAN” is the full input voltage range as determined by the voltage reference, PGIA setting, and gain register value. “N” is the number of bits in the output word (16 for the CS5531/33 and 24 for the CS5532/34).
Table 1: Output Coding for 16-bit CS5531/33 and 24-bit CS5532/34. Where two codes are listed for one input voltage, that voltage is the transition point between the two codes.

CS5531/33 16-Bit Output Coding

  Unipolar Input Voltage   Offset Binary     Bipolar Input Voltage   Two's Complement
  >(VFS-1.5 LSB)           FFFF              >(VFS-1.5 LSB)          7FFF
  VFS-1.5 LSB              FFFF / FFFE       VFS-1.5 LSB             7FFF / 7FFE
  VFS/2-0.5 LSB            8000 / 7FFF       -0.5 LSB                0000 / FFFF
  +0.5 LSB                 0001 / 0000       -VFS+0.5 LSB            8001 / 8000
  <(+0.5 LSB)              0000              <(-VFS+0.5 LSB)         8000

CS5532/34 24-Bit Output Coding

  Unipolar Input Voltage   Offset Binary     Bipolar Input Voltage   Two's Complement
  >(VFS-1.5 LSB)           FFFFFF            >(VFS-1.5 LSB)          7FFFFF
  VFS-1.5 LSB              FFFFFF / FFFFFE   VFS-1.5 LSB             7FFFFF / 7FFFFE
  VFS/2-0.5 LSB            800000 / 7FFFFF   -0.5 LSB                000000 / FFFFFF
  +0.5 LSB                 000001 / 000000   -VFS+0.5 LSB            800001 / 800000
  <(+0.5 LSB)              000000            <(-VFS+0.5 LSB)         800000
Example: Using the CS5532 in the 64X unipolar range with a 2.5 V reference and the gain register set to 1.0, VSPAN is nominally 39.0625 mV, and N is 24. The size of one LSB is then equal to 39.0625 mV / 2^24, or approximately 2.328 nV.
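Worked in code, with names chosen purely for illustration, the LSB calculation looks like this:

#include <stdio.h>

/* LSB size per Equation 2: VLSB = VSPAN / 2^N. */
static double lsb_volts(double vspan, int nbits)
{
    return vspan / (double)(1UL << nbits);
}

int main(void)
{
    /* CS5532, 64X unipolar range, 2.5 V reference, gain register = 1.0:
     * VSPAN = 2.5 V / 64 = 39.0625 mV, N = 24.                          */
    double lsb = lsb_volts(0.0390625, 24);
    printf("LSB = %.4g V\n", lsb);   /* approximately 2.328e-9 V (2.328 nV) */
    return 0;
}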
The output coding for both the 16-bit and 24-bit parts depends on whether the device is used in unipolar or bipolar mode, as shown in Table 1. In unipolar mode, when the differential input voltage is zero Volts ±1/2 LSB, the output code from the converter will be zero. When the differential input voltage exceeds +1/2 LSB, the converter will output binary code values related to the magnitude of the input voltage (if the differential input voltage is equal to 434 LSBs, then the output of the converter will be 434 decimal). When the input voltage is within 1/2 LSB of the maximum input level, the codes from the converter will max out at all 1’s (hexadecimal FFFF for the CS5531/33 and hexadecimal FFFFFF for the CS5532/34). If the differential input voltage is negative (AIN+ is less than AIN-), then the output code from the converter will be equal to zero, and the overflow flag will be set. If the differential input voltage exceeds the maximum input level, then the code from the converter will be equal to all 1’s, and the overflow flag will be set.
In bipolar mode, half of the available codes are used for positive inputs, and the other half are used for negative inputs. The input voltage is represented by a two’s complement number. When the differential input voltage is equal to 0 V ±1/2 LSB, the output code from the converter will equal zero. As in unipolar mode, when the differential voltage exceeds +1/2 LSB, the converter will output binary values related to the magnitude of the voltage input. When the input voltage is within 1/2 LSB of the maximum input level, however, the code from the converter will be a single 0 followed by all 1’s (hexadecimal 7FFF for the CS5531/33 and hexadecimal 7FFFFF for the CS5532/34). For negative differential inputs, the MSB of the output word will be set to 1. When the differential input voltage is within 1/2 LSB of the full-scale negative input voltage, the code from the converter will be a single 1 followed by all 0’s (hexadecimal 8000 for the CS5531/33 and hexadecimal 800000 for the CS5532/34). As the negative differential voltage gets closer to zero, the output codes will count upwards until the input voltage is between -1 1/2 and -1/2 LSB, when the output code will be all 1’s (hexadecimal FFFF for the CS5531/33 and hexadecimal FFFFFF for the CS5532/34).
To calculate the expected decimal output code that you would receive from the ADC for a given input voltage, divide the given input voltage by the size of one LSB. For a 5 mV input signal when the LSB size is 4 nV, the expected output code (decimal) from the converter would be 1,250,000.
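The sketch below illustrates this mapping in both directions: input voltage to expected output code, and raw output code back to a voltage, for unipolar (offset binary) and bipolar (two's complement) modes. The helper names are illustrative, and the sketch ignores the out-of-range clamping and flag behavior described above.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Expected output code for a given input voltage (no clamping). */
static long voltage_to_code(double vin, double lsb)
{
    return lround(vin / lsb);        /* e.g. 5 mV / 4 nV = 1,250,000 */
}

/* Convert a raw code back to volts. nbits is 16 or 24; in bipolar mode
 * the code is interpreted as two's complement, in unipolar mode as
 * straight (offset) binary.                                           */
static double code_to_voltage(uint32_t code, int nbits, int bipolar, double lsb)
{
    long value = (long)code;

    if (bipolar && (code & (1UL << (nbits - 1))))   /* MSB set: negative */
        value -= (long)(1UL << nbits);              /* sign-extend       */

    return value * lsb;
}

int main(void)
{
    double lsb = 4e-9;   /* 4 nV LSB, as in the example above */

    printf("code for 5 mV        : %ld\n", voltage_to_code(5e-3, lsb));
    printf("0x800000 in bipolar  : %g V\n", code_to_voltage(0x800000, 24, 1, lsb));
    printf("0xFFFFFF in unipolar : %g V\n", code_to_voltage(0xFFFFFF, 24, 0, lsb));
    return 0;
}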
What is the relationship of the VREF input voltage and the VRS bit to the analog inputs of the converter?
The voltage present on the VREF+ and VREF- inputs has a direct relationship to the input voltage span of the converter. The differential voltage between the VREF inputs ((VREF+) - (VREF-)) scales the span of the analog input proportionally. If the VREF voltage changes by 5%, the analog input span will also change by 5%. The VREF input voltage does not limit the absolute magnitude of the voltages on the analog inputs, but only sets the slope of the transfer function (codes output vs. voltage input) of the converter. The analog input voltages are only limited with respect to the supply voltages (VA+ and VA-) on the part. See the “Common-mode + signal on AIN+ or AIN-” discussion in this document for more details on these limitations.
The VRS bit in the Configuration Register also has a direct effect on the analog input span of the converter. When the differential voltage on the VREF pins is between 1 V and 2.5 V, the VRS bit should be set to ‘1’. When this voltage is greater than 2.5 V, the VRS bit should be set to ‘0’. When set to ‘0’, a different capacitor is used to sample the VREF voltage, and the input span of the converter is halved. The proper setting of this bit is crucial to the optimal operation of the converter. If this bit is set incorrectly, the converter will not meet the data sheet noise specifications.
The purpose of the VRS bit is to optimize the performance for two different types of systems. In some systems, a precision 2.5 V reference is used to get absolute accuracy of voltage measurement. Other systems use a 5 V source to provide both the reference voltage and an excitation voltage for a ratiometric bridge sensor. The performance of the system can be enhanced by selecting the appropriate reference range.
In a system that is performing ratiometric measurements, using a 5 V reference is usually the best option. Ratiometric bridge sensors typically have a very low output voltage range that scales directly with the excitation voltage to the sensor. Because the converter’s input span can be the same with either a 2.5 V reference or a 5 V reference, and the voltage output from the ratiometric sensor will be twice as large with a 5 V excitation, the system can achieve higher signal-to-noise performance when the sensor excitation and the voltage reference are at 5 V.
For systems in which absolute voltage accuracy is a concern, using a 2.5 V reference has some advantages. There are a wide variety of precision 2.5 V reference sources available which can be powered from the same 5 V source as the ADC. However, most precision 5 V references require more than 5 V on their power supplies, and a second supply would be needed to provide the operating voltage to a voltage reference. Since the same input ranges are available with either reference voltage, a 2.5 V reference provides a more cost- and space-effective solution. Additionally, for systems where the 1X gain range is used, a 2.5 V reference voltage gives the user the option of using the self gain calibration function of the ADC, where a 5 V reference does not.
What are the noise contributions from the amplifier and the modulator?
The amplifier used in the 2X-64X gain ranges of the part has typical input-referred noise of 6 nV/√Hz for the -BS versions, and 12 nV/√Hz for the -AS versions. The modulator has typical noise of 70 nV/√Hz for the -BS versions, and 110 nV/√Hz for the -AS versions at word rates of 120 samples/s and less. At word rates higher than 120 samples/s, the modulator noise begins to rise, and is difficult to model with an equation. The CS5531/32/33/34 datasheet lists the typical RMS noise values for all combinations of gain range and word rate.
In the 32X and 64X gain ranges, the amplifier noise dominates, and the modulator noise is not very significant. As the gain setting decreases, the amplifier noise becomes less significant, and the modulator becomes the dominant noise source in the 1X and 2X gain ranges. The noise density from the amplifier and the modulator for word rates of 120 samples/s and lower can be calculated using Equation 3.
Noise Density = √((NA × G)^2 + (NM)^2) / G

Equation 3. Noise Density
In Equation 3, G refers to the gain setting of the PGIA. NA refers to the amplifier noise, and NM refers to the modulator noise. By using the noise numbers at the beginning of this section, a noise density number can be found for any gain range setting. The typical RMS noise for a given word rate can be estimated by multiplying the noise density at the desired gain range by the square root of the filter’s corner frequency for that word rate. This estimate does not include the noise that is outside the filter bandwidth, but it can give a rough idea of what the typical noise would be for those settings. The true RMS noise number will be slightly higher, as indicated by the RMS noise tables in the datasheet.
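The estimate can be sketched as follows, using the typical -BS grade numbers quoted above (6 nV/√Hz amplifier, 70 nV/√Hz modulator). The filter corner frequency for a given word rate must be taken from the datasheet; the value used here is purely a placeholder.

#include <math.h>
#include <stdio.h>

/* Input-referred noise density per Equation 3, in nV/sqrt(Hz).
 * na = amplifier noise density, nm = modulator noise density, g = PGIA gain. */
static double noise_density(double na, double nm, double g)
{
    return sqrt((na * g) * (na * g) + nm * nm) / g;
}

int main(void)
{
    double na = 6.0;    /* nV/sqrt(Hz), -BS grade amplifier  */
    double nm = 70.0;   /* nV/sqrt(Hz), -BS grade modulator  */
    double g  = 8.0;    /* example PGIA setting              */

    double density = noise_density(na, nm, g);

    /* Rough RMS estimate: density times the square root of the digital
     * filter's corner frequency for the chosen word rate. The corner
     * frequency below is a placeholder, not a datasheet value.          */
    double corner_hz = 15.0;
    double rms_nv = density * sqrt(corner_hz);

    printf("noise density ~ %.2f nV/rtHz, est. RMS ~ %.1f nV\n", density, rms_nv);
    return 0;
}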
The apparent noise numbers seen at the output of the converter will be affected by the setting of the internal gain register of the part. The typical RMS noise numbers calculated in this section and shown in the datasheet’s RMS noise tables correspond to the noise seen at the converter’s output using a gain register setting of approximately 1.0.
What factors affect the input current on the analog inputs?
In the 1X gain range, the inputs are buffered with a rough-fine charge scheme. With this input structure, the modulator sampling capacitor is charged in two phases. During the first (rough) phase, the capacitor is charged to approximately the correct value using the 1X buffer amplifier, and the necessary current is provided by the buffer output to the sampling capacitor. During the second (fine) phase, the capacitor is connected directly to the input, and the necessary current to charge the capacitor to the final value comes from the AIN+ and AIN- lines. The size of the sampling capacitor, the offset voltage of the buffer amplifier, and the frequency at which the front-end switches are operating can be multiplied together (C × V × F) to calculate the input current. The buffer amplifier’s offset voltage and the modulator sampling capacitor size are a function of the silicon manufacturing process, and cannot be changed. The frequency at which the switches are operating is determined directly by the master clock for the part, and is the only variable that users can modify which will have an effect on the input current in this mode. The input current specified in the datasheet assumes a 4.9152 MHz master clock.
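Purely as an illustration of the C × V × F relationship, with entirely hypothetical values (the sampling capacitance, buffer offset, and internal switching rate are not user-visible parameters):

#include <stdio.h>

int main(void)
{
    /* Hypothetical values for illustration only; the real capacitance and
     * buffer offset are set by the process and are not user parameters.   */
    double c_sample = 10e-12;       /* sampling capacitance, farads          */
    double v_offset = 5e-3;         /* buffer amplifier offset, volts        */
    double f_switch = 4.9152e6 / 8; /* switching rate, assumed here to be a
                                     * fixed fraction of the master clock    */

    /* Input current scales as C x V x F, so it scales directly with MCLK. */
    double i_in = c_sample * v_offset * f_switch;

    printf("estimated input current ~ %.1f nA\n", i_in * 1e9);
    return 0;
}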
In the 2X-64X gain ranges, the input current is due to small differences in the silicon that makes up the chopping switches on the front end of the amplifier. The difference between these switches produces a small charge injection current on the analog inputs. The frequency at which the switches are operating is derived directly from the master clock of the part, and the input current will change as the master clock frequency changes. Higher master clock frequencies will produce higher input currents. Likewise, changes in the VA+ and VA- supply voltages will change the amount of charge injection that is produced by the switches, and higher supply voltages will produce more current on the inputs. The input current specified in the datasheet assumes a 4.9152 MHz master clock and 5 V between the VA+ and VA- supplies.